How I upgrade Drupal 8 sites with exported config and Composer

tl;dr: See the video below for a run-through of my process upgrading Drupal core on the real-world open source Drupal 8 site codebase Drupal Example for Kubernetes.

Over the years, as Drupal has evolved, the upgrade process has become a bit more involved; as with most web applications, Drupal's increasing complexity extends to deployment, and whether you end up running Drupal on a VPS, a bare metal server, in Docker containers, or in a Kubernetes cluster, you should formalize an update process to make sure upgrades are as close to non-events as possible.

Gone are the days (at least for most sites) when you could just download a 'tarball' (.tar.gz) from Drupal.org, expand it, then upload it via SFTP to a server and run Drupal's update.php. That workflow (and even a workflow like the old drush up) might still work for some sites, but it is fragile and prone to causing issues, whether you notice them or not. Plus, if you're using Drush for this, drush up is no longer even supported in modern versions of Drush!

So without further ado, here is the process I've settled on for all the Drupal 8 sites I currently manage (note that I've converted all my non-Composer Drupal codebases to Composer at this point); a condensed command summary follows the list:

  1. Make sure your local codebase is up to date with what's currently in production (e.g. git pull origin master, or upstream, or whatever git remote has your current production code).
  2. Reinstall your local site in your local environment so it is completely reset (e.g. blt setup or drush site-install --existing-config). I use a local environment like Drupal VM or a Docker Compose setup, so I can usually just log in and run one command to reinstall Drupal from scratch.
  3. Make sure the local site is running well. Consider running behat and/or phpunit tests (if you have any) to confirm everything's working.
  4. Run composer update (or composer update [specific packages]).
  5. On your local site, run database updates (e.g. drush updb -y or go to /update.php). This is important because the next step—exporting config—can cause problems if you're dealing with an outdated schema.
  6. Make sure the local site is still running well after updates complete. Run behat and/or phpunit tests again (if you have any).
  7. If everything passed muster, export your configuration (e.g. drush cex -y if using core configuration management, drush csex -y if using Config Split).
  8. (Optional but recommended for completeness) Reinstall the local site again, and run any tests again, to confirm the fresh install with the new config works perfectly.
  9. If everything looks good, it's time to commit all the changes to composer.lock and any other changed config files, and push it up to master!
  10. Run your normal deployment process to deploy the code to production.
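
Condensed into terminal commands, a full update cycle looks roughly like this (this assumes Drush 9+, core configuration management with a config sync directory at config/, and no BLT; swap in blt setup or the Config Split commands where appropriate):

    git pull origin master
    drush site-install --existing-config -y    # reset the local site from exported config
    # (run your behat/phpunit tests here, if you have any)
    composer update                            # or: composer update drupal/core --with-dependencies
    drush updb -y                              # apply database updates
    # (run tests again)
    drush cex -y                               # export any changed configuration
    git add composer.lock config/
    git commit -m "Update Drupal core and contrib modules"
    git push origin master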

All done!

That last step ("Run your normal deployment process") might be a little painful too, and I conveniently don't discuss it in this post. Don't worry, I'm working on a few future blog posts on that very topic!

For now, I'd encourage you to look into how Acquia BLT builds shippable 'build artifacts', as that's by far the most reliable way to ship your code to production if you care about stability! Note that for a few of my sites, I use a more simplistic "pull from master, run composer install, and run drush updb -y" workflow for deployments. But that's for my smaller sites where I don't need any extra process and a few minutes' outage won't hurt!
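
On those sites the whole deploy is just a handful of commands run on the server; a minimal sketch (the project path is made up, and the Composer flags and final cache rebuild are my own additions, not part of the bare-bones workflow above):

    cd /var/www/mysite                                # assumed project root on the server
    git pull origin master
    composer install --no-dev --optimize-autoloader   # install locked dependencies
    drush updb -y                                     # run any pending database updates
    drush cr                                          # rebuild caches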

Comments

Thanks for sharing, Jeff! Actually, would you mind sharing the actual commands from the terminal, for completeness? For example, if you use the official Drupal Composer structure (https://github.com/drupal-composer/drupal-project), just running composer update probably won't work ... The documentation says you should run this:

composer update drupal/core webflo/drupal-core-require-dev symfony/* --with-dependencies

Anyway, I am very much looking forward to your next blog post about deployment. Again, don't hold back, and feel free to share the raw terminal history :-)

I am also very interested in your simplistic approach, so also feel free to post all the commands for that one.

Have you considered checking "everything" into git (Drupal core, modules, etc.) and moving a complete, deployable codebase from staging to production that way?

More to come on all of the above :)

But yes, composer update works okay with drupal-composer/drupal-project codebases—however, it will update everything. If you just want to update Drupal core, then the docs are correct. In my case, I typically lock versions in composer.json for anything like Drupal modules that could break things if upgraded unintentionally, so I'm a bit safer running a bare composer update.
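
For example (the module name and versions here are just placeholders), pinning an exact version in composer.json means a bare composer update can't move that package until the constraint is deliberately changed:

    composer require drupal/pathauto:1.6.0   # writes an exact constraint to composer.json
    composer update                          # safe: pathauto stays at 1.6.0
    composer require drupal/pathauto:1.8.0   # later, explicitly bump the pin to update it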

Jeff - for config split, I thought plain old CEX works just like the command you suggested for the latest version of that module. Am I missing something?

Always appreciate your insights, BTW. Thanks!

To the other poster-- 'cex' no longer works in Drush 9, which I found out after wasting painful days troubleshooting my first config_split project. Just use csex and csim to keep your sanity. Works flawlessly.

And I have an even easier deployment to prod-- git pull -> git merge origin/qa -> drush csim -> drush cr
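
Spelled out on the production checkout (assuming qa is the branch that gets tested and promoted), that chain is roughly:

    git pull                # fetch from origin and update the production branch
    git merge origin/qa     # merge in the tested qa branch
    drush csim -y           # import config via Config Split
    drush cr                # rebuild caches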

I know the prevailing opinion among the elites of the Drupal community is that a git-based workflow with Composer is outdated, but that's throwing the baby out with the bathwater imo.

After some initial issues getting git to ignore any dev-snapshot nested subrepos created by Composer, it's been working perfectly. Yes, my repos are larger than they need to be, but I don't pay by the byte, so what's the big deal? I have several smallish sites I maintain, and I don't need all the added complexities and break points created by adding automated dev ops to the mix.
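
For context, Composer checks out dev-snapshot packages as nested git clones, and one common fix (a sketch, not necessarily what was done here; paths assume a typical drupal-composer/drupal-project layout) is to strip the inner .git directories so the parent repo tracks those files directly:

    # remove nested .git directories left behind by Composer dev snapshots
    find vendor web/modules/contrib web/themes/contrib -type d -name .git -prune -exec rm -rf {} +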

Git is proven, decades-solid technology-- there's no reason to abandon it because it's not the cool new thing.

Definitely not abandoning git, but rather using the idea of a 'deployment artifact'—basically, Git stores the source code for everything, but not all the dependencies and optimized generated code. You run something (e.g. blt artifact:build, docker build ., etc.) to generate a directory, Docker image, or whatever... and that directory, Docker image, etc. has everything for production (and nothing else).

Then you either copy that artifact to production directly, or drop it into a production repo to track it separate from the source code, or push it to a registry (e.g. for Docker images).
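
As a concrete, hypothetical example of the Docker flavor of that idea (the registry, image name, and tag are made up):

    docker build -t registry.example.com/mysite:1.4.2 .   # bake code + vendor deps into an image
    docker push registry.example.com/mysite:1.4.2          # publish the known-good artifact
    docker pull registry.example.com/mysite:1.4.2           # production pulls and runs that exact tag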

This way you always have:

  1. Full application source code
  2. A repository/registry containing every known good production version of the full application

It's definitely not the best workflow for smaller projects which don't need all the nice little things the above workflow provides—especially because at a minimum it requires two git repos or two branches in one repo, or one git repo and an artifact repo (like a private Docker registry), and that's just not realistic for everyone everywhere.

But it does mean I have not had any project under my supervision have one failed deployment in the past 3 years 🤷‍♂️

"But it does mean I have not had any project under my supervision have one failed deployment in the past 3 years"

Well that's very cool-- and good to know. If I feel the need to move past the minimal workflow I have now, I'll definitely consider the Geerling method :-)

If minimal is working, then minimal is good!

The art of doing things well seems to involve finding just as much process/tooling as is required to go from nervous to confident in making changes. Sometimes that means no or few tests and very little extra stuff. Sometimes that means a lot of process, a QA team, etc.

OMG couldn't agree more! I see so much wasted effort and churn implementing things that are so far over the top of what's actually necessary that it boggles the mind, lol.

"I don't need all the added complexities and break points created by adding automated dev ops to the mix"

This is exactly what I am looking for -- the fewer moving parts and steps, once the workflow is set up, the better: a Git/Composer-based workflow with the least configuration and steps, checking "everything" (Drupal core, modules, libraries, etc.) into git. So I really hope you follow along and comment on Jeff's next blog post on this subject, and share your experience and tips. Drupal really needs this to stay alive.

This is great Jeff. Thanks for putting it together. Any chance you'd like to weigh in on your thoughts about BLT and the advantages or disadvantages of using that tool? Is the complexity and overhead worth it in your mind? Should everyone be using BLT? Does it make this kind of update easier?

Good post... Might be worth chucking "drush entup" in there right after "composer update", to run any entity schema updates that may have come in with the project updates you've just received via Composer. These changes don't appear to happen very often, but when they do, not running "entup" can result in bugs and unexpected behaviour.
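
In a workflow like the one above, and assuming your Drupal and Drush versions still support entity updates (newer Drupal core releases dropped automatic entity updates), that would slot in something like this:

    composer update
    drush updb -y      # run update hooks
    drush entup -y     # apply any pending entity schema updates
    drush cex -y       # then export config as usual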

Hi Jeff,

Every time I search for something, I end up reading one of your excellent articles, so mainly I wanted to say 'Thank You'. You seem to have *so* much quality output, I'm beginning to think that there is more than one Jeff???

Off topic:

I noticed '~/Dropbox ... ' in the video. I read your post about using Dropbox to sync. Just wondered if you've noticed any shortcomings with this approach since 2010!

I appreciate you'll be busy. Perhaps you could get one of the other Jeffs to reply?

Kind Regards - Tebb, UK

The only thing I don’t like about Dropbox is that you can’t pre-ignore certain directories like vendor or node_modules. You can only “selectively sync” them after they exist, and after Dropbox spends like 30 minutes syncing thousands of node files the first time.