deployment

How I upgrade Drupal 8 Sites with exported config and Composer

tl;dr: See the video below for a run-through of my process upgrading Drupal core on the Drupal Example for Kubernetes codebase, a real-world open source Drupal 8 site.


Over the years, as Drupal has evolved, the upgrade process has become a bit more involved. As with most web applications, Drupal's increasing complexity extends to deployment, and whether you run Drupal on a VPS, on bare metal, in Docker containers, or in a Kubernetes cluster, you should formalize an update process so that upgrades are as close to non-events as possible.
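For reference, here is a minimal sketch of that kind of Composer-and-exported-config upgrade, assuming a Composer-managed Drupal 8 codebase with Drush installed (the commands are generic, not necessarily the exact ones from the video):

```bash
# Update Drupal core and its dependencies via Composer.
composer update drupal/core --with-dependencies

# Apply any pending database updates.
vendor/bin/drush updatedb -y

# Re-export config so changes made by update hooks are captured in code.
vendor/bin/drush config:export -y

# Commit the updated lockfile and exported config for deployment.
git add composer.lock config/
git commit -m "Update Drupal core"
```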

Mounting a Kubernetes Secret as a single file inside a Pod

Recently I needed to mount an SSH private key (used by one app to connect to another) into a running Pod. To do it securely, we put the SSH key into a Kubernetes Secret, then mounted that Secret as a file inside the Pod spec for a Deployment.

I wanted to document the process here because (a) I know I'm going to have to do it again and this will save me a few minutes' research, and (b) it's very slightly unintuitive (at least to me).

First I defined a secret in a namespace:
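The original manifest isn't reproduced here, but the general shape is something like the following sketch; the namespace, Secret name, key filename, and mount path are all placeholders:

```bash
# Create a Secret holding the private key (names and paths are placeholders).
kubectl create secret generic ssh-key-secret \
  --namespace my-namespace \
  --from-file=id_rsa=./id_rsa

# In the Deployment's Pod spec, the Secret is then mounted as a single file
# using a volume plus a subPath on the volumeMount, roughly:
#
#   volumes:
#     - name: ssh-key
#       secret:
#         secretName: ssh-key-secret
#         defaultMode: 0600
#   containers:
#     - name: my-app
#       volumeMounts:
#         - name: ssh-key
#           mountPath: /var/run/secrets/ssh/id_rsa
#           subPath: id_rsa
```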

Updating a Kubernetes Deployment and waiting for it to roll out in a shell script

For some Kubernetes cluster operations (e.g. deploying an update to a small microservice or app), I need a quick and dirty way to:

  1. Build and push a Docker image to a private registry.
  2. Update a Kubernetes Deployment to use this new image version.
  3. Wait for the Deployment rollout to complete.
  4. Run some post-rollout operations (e.g. clear caches, run an update, etc.).

There are a thousand and one ways to do all this, many of them more formal, but sometimes you just need a shell script you can run from your CI server to do it all. And it's neither hard nor complex to do it this way:
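Here's a rough sketch of such a script (the registry, image, Deployment, and namespace names are placeholders, and the post-rollout command is just an example):

```bash
#!/bin/bash
set -e

IMAGE="registry.example.com/my-app:${BUILD_TAG}"

# 1. Build and push the image to the private registry.
docker build -t "$IMAGE" .
docker push "$IMAGE"

# 2. Update the Deployment to use the new image.
kubectl -n my-namespace set image deployment/my-app my-app="$IMAGE"

# 3. Wait for the rollout to complete (non-zero exit if it fails or times out).
kubectl -n my-namespace rollout status deployment/my-app --timeout=300s

# 4. Post-rollout operations, e.g. clear caches in one of the new Pods.
POD=$(kubectl -n my-namespace get pods -l app=my-app \
  -o jsonpath='{.items[0].metadata.name}')
kubectl -n my-namespace exec "$POD" -- ./bin/clear-caches
```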

Ansible for DevOps - available now!

Ansible is a simple, but powerful, server and configuration management tool. Ansible for DevOps is a book I wrote to teach you to use Ansible effectively, whether you manage one server—or thousands.

Ansible for DevOps cover - Book by Jeff Geerling

I've spent a lot of time working with Ansible and Drupal over the past couple of years, culminating in projects like Drupal VM (a VM for local Drupal development) and the Raspberry Pi Dramble (a cluster of Raspberry Pi computers running Drupal 8, powering http://www.pidramble.com/). I've also given multiple presentations on Ansible and Drupal, including a session at DrupalCon Austin, a session at MidCamp earlier this year, and a BoF at DrupalCon LA.

Ansible for Drupal infrastructure and deployments - DrupalCon LA 2015 BoF

We had a great discussion about how different companies and individuals are using Ansible for Drupal infrastructure management and deployments at DrupalCon LA, and I wanted to post some slides from my (short) intro to Ansible presentation here, as well as a few notes from the presentation.

The slides are below:

And video/audio from the BoF:

Notes from the BoF

I first gave an overview of the basics of Ansible, demonstrating some ad-hoc commands on my Raspberry Pi Dramble (a cluster of six Raspberry Pi 2 computers running Drupal 8), then we dove headfirst into a great conversation about Ansible and Drupal.
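For anyone who hasn't used them, ad-hoc commands look roughly like this; the group name and inventory below are placeholders, not the actual Dramble setup:

```bash
# Check connectivity to every host in the group.
ansible webservers -i inventory -m ping

# Run an arbitrary command on every host.
ansible webservers -i inventory -a "uptime"

# Use a module (with privilege escalation) to install a package everywhere.
ansible webservers -i inventory -b -m apt -a "name=git state=present"
```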

Ansible + Drupal + Raspberry Pi Dramble - Presentation at MidCamp 2015

Earlier today, I gave a presentation on Ansible and Drupal 8 at MidCamp in Chicago. In the presentation, I introduced Ansible, then deployed and updated a Drupal 8 site on a cluster of 6 Raspberry Pi computers, nicknamed the Dramble.

Video from the presentation is below (sadly, slides/voice only—you can't see the actual cluster of Raspberry Pis... for that, come see me in person sometime!):

My slides from the presentation are also embedded below.

DevOps for Humans - Ansible presentation at DrupalCon Austin

I'm still recovering from an intense week of Drupal here in Austin, TX. I kicked things off by walking around the downtown area, then taking the intensive Acquia Drupal Developer Certification exam. Once the conference started, I attended a few sessions, met a few awesome Drupalists, and learned a lot. On the last day of the 'Con (the last session, in fact), I presented DevOps for Humans: Ansible for Drupal Deployment Victory!

I think the presentation went well, and I heard some great questions at the end which really contributed to the discussion of Ansible and Drupal deployments in general. It was a great way to finish up the official DrupalCon sessions, though it meant I was revising slides for the hundredth time during the rest of the week, instead of relaxing and enjoying DrupalCon!

Before I post a video and slides from the session, I wanted to highlight some resources for anyone who attended (or didn't attend) DrupalCon Austin:

Below is the video and slides from the DevOps for Humans presentation. Please let me know what you think!

Simple Git-based multi-server deployments

Ansible is used to manage most of Midwestern Mac's infrastructure and deployments, and while it's extremely easy to use, there are a couple of situations where a project just needs a little code updated across two or more servers, from a central Git repository or from one master application server.

Every Git repository includes a hooks folder containing sample hook scripts, such as post-commit.sample and pre-rebase.sample. If you add a shell script with one of those names (minus the .sample extension) to the folder, Git will run it when the corresponding action occurs (e.g. Git runs a post-commit script after every commit).
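As a concrete (hypothetical) example, a post-receive hook on a central bare repository can push an update out to each application server; the hostnames, paths, and branch name below are placeholders:

```bash
#!/bin/bash
# hooks/post-receive on a central bare repository: after a push, update the
# checked-out copy on each application server over SSH.

SERVERS="app1.example.com app2.example.com"
DEPLOY_PATH="/var/www/my-app"
BRANCH="master"

for server in $SERVERS; do
  echo "Deploying ${BRANCH} to ${server}..."
  ssh "deploy@${server}" \
    "cd ${DEPLOY_PATH} && git fetch origin && git reset --hard origin/${BRANCH}"
done
```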

CI: Deployments and Static Code Analysis with Drupal/PHP

CI: Deployments and Code Quality

tl;dr: Get the Vagrant profile for Drupal/PHP Continuous Integration Server from GitHub, and create a new VM (see the README on the GitHub project page). You now have a full-fledged Jenkins/Phing/SonarQube server for PHP/Drupal CI.

In this post, I'm going to explain how Jenkins, Phing and SonarQube can help you with your Drupal (or any PHP-based project) deployments and code quality, and walk you through installing and configuring them to work with your codebase. Bear with me... it's a long post!

Code Deployment

If you manage more than one environment (say, a development server, a testing/staging server, and a live production server), you've probably had to deal with the frustration of deploying code changes to these servers.

In the old days, people used FTP and manually copied files from environment to environment. Then FTP clients became smarter, and allowed somewhat-intelligent file synchronization. Then, when version control software became the norm, people would use CVS, SVN, or more recently Git, to push or check out code to different servers.

All the aforementioned deployment methods involved a lot of manual labor, usually with an FTP client or an SSH session. Modern server management tools like Ansible can help with more complicated environments, but wouldn't everything be much simpler if there were an easy way to deploy code to specific environments, especially if those deployments could be automated to run on a schedule or whenever someone commits to a particular branch?

Jenkins Logo

Enter Jenkins. Jenkins is your deployment assistant on steroids. Jenkins works with a wide variety of tools, programming languages and systems, and allows the automation (or radical simplification) of tasks surrounding code changes and deployments.

In my particular case, I use a dedicated Jenkins server to monitor a specific repository, and when there are commits to a development branch, Jenkins checks out that branch from Git, runs some PHP code analysis tools on the codebase using Phing, archives the code and other assets in a .tar.gz file, then deploys the code to a development server and runs some drush commands to complete the deployment.
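As a rough illustration (not my actual job configuration), the Jenkins "Execute shell" build step for such a job might look something like this; the Phing target, paths, hostnames, and Drush alias are placeholders:

```bash
#!/bin/bash
set -e

# Jenkins has already checked out the development branch into $WORKSPACE.
cd "$WORKSPACE"

# Run the PHP analysis tools defined in the project's Phing build file.
phing analyze

# Archive the codebase and other assets for deployment.
tar -czf "my-site-${BUILD_NUMBER}.tar.gz" --exclude='.git' .

# Copy the archive to the development server and unpack it.
scp "my-site-${BUILD_NUMBER}.tar.gz" deploy@dev.example.com:/tmp/
ssh deploy@dev.example.com \
  "tar -xzf /tmp/my-site-${BUILD_NUMBER}.tar.gz -C /var/www/dev-site"

# Finish the deployment with drush (database updates and a cache clear).
drush @dev updatedb -y
drush @dev cache-clear all
```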

Static Code Analysis / Code Review

If you're a solo developer, and you're the only one planning on ever touching the code you write, you can use whatever coding standards you want—spacing, variable naming, file structure, class layout, etc. don't really matter.

But if you ever plan on sharing your code with others (as a contributed theme or module), or if you need to work on a shared codebase, or if there's ever a possibility you will pass on your code to a new developer, it's a good idea to follow coding standards and write good code that doesn't contain too many WTFs/min.

DevOps, Server Deployment and Configuration Management

For the past few years, as the number of servers I manage has increased from a few to many, and the services I operate have required more flexibility in terms of adding and removing similarly-configured servers for different purposes, I've been testing different deployment and configuration management tools.

Many developers who are also sysadmins have progressed much the same way as I have, beginning by building everything by hand without documenting the process, then documenting the build with text files, and ultimately scripting builds with bash scripts. However, none of these techniques allow fast provisioning, continuous configuration management, or the flexibility required to make constantly-evolving applications adapt to the requirements of the day.

In recent years, 'DevOps' (better integration of development and operations) has become a hot buzzword and mantra of companies espousing agile development methodologies.

A Very Brief (and woefully inadequate) Philosophy of DevOps

DevOps - fire meme
Source: DevOps.com

Servers, like instances of applications, should be managed via version-controlled configuration, and should be disposable (a common war cry: Trash Your Servers and Burn Your Code). If a server blows up, or if another few application servers are needed, they should be able to be provisioned or decommissioned in minutes, not hours (much less days or weeks!), automatically and without human intervention.