performance

Git 2.20.1 is super slow on macOS Mojave on my work Mac

Update: I just upgraded my personal Mac to Git 2.20.1, and am experiencing none of the slowdown I had on my work Mac. So something else is afoot. Maybe some of the 'spyware-ish' software that's installed on the work Mac is making calls like lstat() super slow? Looks like I might be profiling some things on that machine anyway :)

I regularly use Homebrew to switch to more recent versions of CLI utilities and other packages I use in my day-to-day software and infrastructure development. In the past, it was necessary to use Homebrew to get a much newer version of Git than was available at the time on macOS. But as Apple's evolved macOS, they've done a pretty good job of keeping the system versions relatively up-to-date, and unless you need bleeding edge features, the version of Git that's installed on macOS Mojave (2.17.x) is probably adequate for now.

But back to Homebrew—recently I ran brew upgrade to upgrade a bunch of packages, and it happened to upgrade Git to 2.20.1.

Make composer operations with Drupal way faster and easier on RAM

tl;dr: Run composer require zaporylie/composer-drupal-optimizations:^1.0 in your Drupal codebase to halve Composer's RAM usage and make operations like require and update 3-4x faster.
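If you want to try it, it really is just that one command, run from the root of your Drupal codebase. A quick sketch (the composer show line is just one way to confirm the plugin made it in):

    # Run from the directory that contains your project's composer.json:
    composer require zaporylie/composer-drupal-optimizations:^1.0

    # Confirm the plugin is now part of the project:
    composer show zaporylie/composer-drupal-optimizations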

A few weeks ago, I noticed Drupal VM's PHP 5.6 automated test suite started failing on the step that runs composer require drupal/drush. (PSA: PHP 5.6 is officially dead. Don't use it anymore. If you're still using it, upgrade to a supported version ASAP!). This was the error message I was getting from Travis CI:

PHP Fatal error:  Allowed memory size of 2147483648 bytes exhausted (tried to allocate 32 bytes) in phar:///usr/bin/composer/src/Composer/DependencyResolver/RuleWatchNode.php on line 40

I ran the test suite locally and didn't have the same issue (locally I have PHP's CLI memory limit set to -1, so it never runs out of RAM unless I do insane-crazy things).
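If you hit the same error and can't (or don't want to) raise the limit globally, you can lift it for a single run. A sketch, assuming Composer lives at /usr/local/bin/composer and re-using the require from the failing step (COMPOSER_MEMORY_LIMIT needs Composer 1.8 or newer):

    # Override PHP's memory limit for one Composer invocation:
    php -d memory_limit=-1 /usr/local/bin/composer require drupal/drush

    # Or let Composer (1.8+) lift the limit itself via an environment variable:
    COMPOSER_MEMORY_LIMIT=-1 composer require drupal/drush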

Analyzing a MySQL slow query log with pt-query-digest

There are times when you may notice your MySQL or MariaDB database server getting very slow. It's usually a stressful time, because it means your site or application is slowing down right along with the underlying database. And when you dig in, you notice that logs are filling up—and in MySQL's case, the slow query log is often a canary in a coal mine, indicating potential performance issues (or highlighting active ones).

But—assuming you have the slow query log enabled—have you ever grabbed a copy of the log and dug into it? It can be extremely daunting. It's literally a list of query metrics (a timestamp, how long the query took, how long it held locks), followed by the raw slow query itself. How do you know which query takes the longest time? And is there one sort-of slow query that is actually the worst, just because it's being run hundreds of times per minute?
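That's where Percona's pt-query-digest comes in: it parses the slow query log and ranks queries by total aggregate time, so a 'fast' query that runs hundreds of times per minute floats right to the top. A minimal sketch, assuming a Debian/Ubuntu server with the slow log at /var/log/mysql/mysql-slow.log:

    # Install Percona Toolkit (package name may vary by distro):
    sudo apt-get install -y percona-toolkit

    # Digest the slow query log and save the ranked report:
    pt-query-digest /var/log/mysql/mysql-slow.log > slow-query-digest.txt
    less slow-query-digest.txt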

Drupal startup time and opcache - faster scaling for PHP in containerized environments

Lately I've been spending a lot of time working with Drupal in Kubernetes and other containerized environments; one problem that's bothered me is the fact that when autoscaling Drupal, it always takes at least a few seconds to get a new Drupal instance running. Not installing Drupal, configuring the database, building caches; none of that. I'm just talking about having a Drupal site that's already operational, and scaling by adding an additional Drupal instance or container.

One of the principles of the 12 Factor App is:

IX. Disposability

Maximize robustness with fast startup and graceful shutdown.

Disposability is important because it enables things like easy, fast code deployments, easy, fast autoscaling, and high availability. It also forces you to make your code stateless and efficient, so it starts up fast even with a cold cache. Read more about the disposability factor on the 12factor site.

The ASUS Tinker Board is a compelling upgrade from a Raspberry Pi 3 B+

I've had a long history playing around with Raspberry Pis and other Single Board Computers (SBCs); from building a cluster of Raspberry Pis to run Drupal, to building a distributed home temperature monitoring system with Raspberry Pis, I've spent a good deal of time testing the limits of an SBC, and also finding ways to use their strengths to my advantage.

ASUS Tinker Board SBC

Raspberry Pi microSD card performance comparison - 2018

Raspberry Pi microSD cards compared: NOOBS, Samsung, Kingston, Toshiba, Sony, and SanDisk

Back in 2015, I wrote a popular post comparing the performance of a number of microSD cards when used with the Raspberry Pi. In the intervening three years, the marketplace hasn't changed a ton, but there have been two new revisions to the Raspberry Pi (the model 3 B and just-released model 3 B+). In that article, I stated:

One of the highest-impact upgrades you can perform to increase Raspberry Pi performance is to buy the fastest possible microSD card—especially for applications where you need to do a lot of random reads and writes.
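For the curious, here's a rough sketch of the kind of quick sequential tests you can run on the Pi itself (the device path assumes the onboard microSD slot; a real comparison should also cover random I/O with a tool like fio or iozone):

    # Sequential read speed of the microSD card (install hdparm first if needed):
    sudo apt-get install -y hdparm
    sudo hdparm -t /dev/mmcblk0

    # Rough sequential write speed (writes and removes a ~400MB test file):
    dd if=/dev/zero of=~/testfile bs=8k count=50k conv=fsync
    rm ~/testfile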

Getting the best performance out of Amazon EFS

tl;dr: EFS is NFS. Networked file systems have inherent tradeoffs over local filesystem access—EFS doesn't change that. Don't expect the moon, benchmark and monitor it, and you'll do fine.

On a recent project, I needed to have a shared network file system that was available to all servers, and able to scale horizontally to anywhere between 1 and 100 servers. It needed low-latency file access, and also needed to be able to handle small file writes and file locks synchronously with as little latency as possible.

Amazon EFS, which uses NFS v4.1, ticks all of those boxes (at least, to an extent), and if you're already building infrastructure inside AWS, EFS is a very cost-effective way to manage a scalable NFS filesystem. I'm not going to go too deep into the technical details of EFS or NFS v4.1, but I would like to highlight some of the painful lessons my team has learned implementing EFS for a fairly hefty CMS-based project.
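To make the 'EFS is NFS' point concrete, mounting an EFS filesystem from an EC2 instance is just a standard NFS v4.1 mount—something like the sketch below, where the filesystem ID and region are placeholders and the mount options mirror AWS's recommendations:

    # NFS client is required (nfs-common on Debian/Ubuntu, nfs-utils on RHEL/CentOS):
    sudo apt-get install -y nfs-common
    sudo mkdir -p /mnt/efs

    # Mount the EFS filesystem over NFS v4.1 (replace fs-12345678 and the region):
    sudo mount -t nfs4 -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2 \
      fs-12345678.efs.us-east-1.amazonaws.com:/ /mnt/efs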

Slow Ansible playbook? Check ansible.cfg!

Today, while running a particularly large Ansible playbook for about the 15th time in a row, I started wondering why it seemed to run quite a bit slower than most of my other playbooks, even though I was managing a server in the same datacenter as most of my other infrastructure.

I have had pipelining = True in my system /etc/ansible/ansible.cfg for ages, and initially wondered why the individual tasks were so delayed—even when doing something like running three lineinfile tasks on one config file. The only major difference in this slow playbook's configuration was that I had a local ansible.cfg file in the playbook, to override my global roles_path (I wanted the specific role versions for this playbook to be managed and stored local to the playbook).
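(Spoiler for the impatient: when Ansible finds a playbook-local ansible.cfg, it uses that file instead of /etc/ansible/ansible.cfg—settings aren't merged—so the local file needs its own pipelining setting. A sketch of what the local ansible.cfg looks like after the fix; paths are illustrative:)

    # ansible.cfg (local to the playbook) — used *instead of* the global file,
    # so anything you rely on globally has to be repeated here.
    [defaults]
    roles_path = ./roles

    [ssh_connection]
    pipelining = True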

So, my curiosity led me to a more thorough reading of Ansible's configuration documentation, specifically a section talking about Ansible configuration file precedence:

Setting up a Pi Hole for whole-home ad/tracker blocking

Pi Hole - Admin DNS query request dashboard page in Safari

Pi Hole is a nifty open source project that allows you to offload the task of blocking advertisements and annoying (and often malicious) trackers to a Raspberry Pi. The installation is deceptively simple (a curl | bash affair), but I wanted to document how I set up mine headless (just plugging the Pi into power and the network).
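For reference, the curl | bash in question is Pi Hole's documented installer (if piping curl into bash makes you nervous, download the script and read it first):

    # Pi-hole's official installer one-liner:
    curl -sSL https://install.pi-hole.net | bash

    # Or, to review the script before running it:
    curl -sSL https://install.pi-hole.net -o basic-install.sh
    less basic-install.sh
    sudo bash basic-install.sh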

Set up Raspbian Lite

I bought a Raspberry Pi model 2 B along with the official Raspberry Pi Foundation case. Then I bought a Samsung Evo+ 32GB microSD card (which comes with a full-size SD card adapter), and did the following steps on my MacBook Pro to set up the Pi's OS:
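Roughly, those steps look like the sketch below—double-check the disk identifier with diskutil list before running dd against anything, and substitute whatever Raspbian Lite image you downloaded for the filename shown here:

    # Identify the microSD card (assume it shows up as /dev/disk2 — verify yours!):
    diskutil list

    # Unmount the card and write the Raspbian Lite image to it:
    diskutil unmountDisk /dev/disk2
    sudo dd if=raspbian-stretch-lite.img of=/dev/rdisk2 bs=1m
    sudo sync

    # Once the boot volume re-mounts (unplug and replug the card if needed),
    # enable SSH for headless access, then eject:
    touch /Volumes/boot/ssh
    diskutil eject /dev/disk2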

Profiling Drupal 8 Sites in Drupal VM with XHProf and Tideways

XHProf, a PHP extension originally created and maintained by Facebook, has for many years been the de-facto standard for profiling Drupal's PHP code and performance issues. Unfortunately, as Facebook has matured and shifted resources, maintenance of the XHProf extension tailed off around the PHP 7.0 era, and now that we're hitting PHP 7.1, even some sparsely-maintained forks are difficult (if not impossible) to get running with newer versions of PHP.

Enter Tideways.

Tideways has essentially taken over the XHProf extension, updating it for modern PHP versions and re-branding it as 'Tideways' instead of 'XHProf'. This has created a little confusion, since Tideways also offers a branded, proprietary service for aggregating and displaying profiling information through Tideways.io. But you can use Tideways completely independently of Tideways.io, as a drop-in replacement for XHProf. And you can even browse profiling results using the same old XHProf UI!
