scalability

Drupal startup time and opcache - faster scaling for PHP in containerized environments

Lately I've been spending a lot of time working with Drupal in Kubernetes and other containerized environments, and one problem that keeps bothering me is that when autoscaling Drupal, it always takes at least a few seconds to get a new Drupal instance running. I'm not talking about installing Drupal, configuring the database, or building caches; none of that. I'm talking about a Drupal site that's already operational, and scaling it by adding an additional Drupal instance or container.

One of the principles of the 12 Factor App is:

IX. Disposability

Maximize robustness with fast startup and graceful shutdown.

Disposability is important because it enables easy, fast code deployments and autoscaling, as well as high availability. It also forces you to make your code stateless and efficient, so it starts up fast even with a cold cache. Read more about the disposability factor on the 12factor site.
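
To make that a little more concrete, here's a minimal sketch (not taken from the post itself) of the kind of opcache settings you might bake into a containerized Drupal image so every new instance behaves predictably; something like this would go in a conf.d file inside the image, and the values are illustrative starting points rather than recommendations from the article:

    ; Illustrative opcache settings for an immutable Drupal container image;
    ; these values are starting points, not tuned recommendations.
    opcache.enable=1
    opcache.memory_consumption=256
    opcache.max_accelerated_files=20000
    ; Code baked into the image never changes at runtime, so skip the
    ; per-request timestamp checks on every PHP file.
    opcache.validate_timestamps=0

Even with settings like these, a brand-new container still starts with an empty opcache and pays the compile cost on its first requests, which is part of why those "few seconds to usefulness" are so hard to avoid, and why warming an instance before it receives traffic helps.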

Getting the best performance out of Amazon EFS

tl;dr: EFS is NFS. Networked file systems have inherent tradeoffs compared to local filesystem access, and EFS doesn't change that. Don't expect the moon; benchmark and monitor it, and you'll do fine.

On a recent project, I needed a shared network file system that was available to all servers and able to scale horizontally to anywhere between 1 and 100 servers. It needed low-latency file access, and it had to handle small file writes and file locks synchronously with as little delay as possible.

Amazon EFS, which uses NFS v4.1, ticks all of those boxes (at least, to a certain extent), and if you're already building infrastructure inside AWS, EFS is a very cost-effective way to manage a scalable NFS filesystem. I'm not going to go too deep into the technical details of EFS or NFS v4.1, but I would like to highlight some of the painful lessons my team has learned implementing EFS for a fairly hefty CMS-based project.
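
Much of that tuning starts at mount time. As an illustration, here's a mount using the NFS v4.1 options AWS documents for EFS; the filesystem ID, region, and mount point below are placeholders, not values from this project:

    # Illustrative EFS mount using the NFS v4.1 options AWS documents;
    # the filesystem ID, region, and mount point are placeholders.
    sudo mount -t nfs4 \
      -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,noresvport \
      fs-12345678.efs.us-east-1.amazonaws.com:/ /mnt/efs

Sane mount options only get you so far, though; the latency-sensitive patterns mentioned above (small synchronous writes, file locks, lots of stat calls) are where NFS hurts most, so benchmark those paths specifically.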

Moving Server Check.in functionality to Node.js increased per-server capacity by 100x

Just posted a new blog post to the Server Check.in blog: Moving functionality to Node.js increased per-server capacity by 100x. Here's a snippet from the post:

One feature that we just finished deploying is a small Node.js application that runs in tandem with Drupal to allow for an incredibly large number of servers and websites to be checked in a fraction of the time that we were checking them using only PHP, cron, and Drupal's Queue API.

If you need to do some potentially slow tasks very often, and they're either network or IO-bound, consider moving those tasks away from Drupal/PHP to a Node.js app. Your server and your overloaded queue will thank you!

Read more.

tl;dr: Node.js is awesome for running through a large number of network- or IO-bound tasks that would otherwise become burdensome at scale if handled through Drupal's Queue API.

Using apachebench (ab) with Drupal 7 to load test a site with authenticated users

apachebench is an excellent performance and load-testing tool for any website, and Drupal-based sites are no exception. A lot of Drupal sites, though, need to be measured not only under heavy anonymous traffic load (users who aren't logged in), but also under heavy authenticated-user load.

Drupal.org has some good tips for ab testing, but the details for using ab's '-C' option (notice the capital C... C is for Cookie) are lacking. Basically, if you pass the -C option with a valid session ID/cookie, Drupal will send ab the page as if ab were authenticated.
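
As a purely hypothetical example (Drupal 7's session cookie is typically named SESS followed by a hash, but the cookie name, session ID, URL, and request counts below are placeholders you'd swap for your own):

    # Hypothetical values: grab a real session cookie from your browser or the
    # sessions table, and point ab at a page authenticated users actually hit.
    ab -n 300 -c 10 -C "SESS1234567890abcdef=AbCdEfGhIjKlMnOpQrStUvWxYz123456" http://example.com/node/1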

Instead of constantly digging into the database to look up session IDs and such nonsense, I use a simple script (heavily revised from a 2008-era 2bits script that worked with Drupal 5) that gives you the proper ab commands for stress-testing your Drupal site under authenticated user load. Simply copy the attached script (source pasted below) to your site's docroot, and run it from the command line as follows: