performance

UASP makes Raspberry Pi 4 disk IO 50% faster

You can view a video related to this blog post here: Does UASP make the Raspberry Pi faster?

A couple of weeks ago, I did some testing with my Raspberry Pi 4 and external USB SSD drives. I found that a USB 3.0 SSD was ten times faster than the fastest microSD card I tested.

In the comments on the video associated with that post, Brad Manske mentioned something I hadn't even thought about. He noticed that I had linked to an Inateck USB 3.0 SATA case that didn't have UASP.

What's UASP, you might ask?
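
As a quick aside: on a Linux box (including a Pi), one way to tell whether an enclosure is actually being driven over UASP is to check which kernel driver is bound to it. This is a generic check, not specific to any particular enclosure; lsusb comes from the usbutils package, which may need to be installed first.

# Show the USB device tree along with the bound drivers.
# UASP-capable enclosures show 'Driver=uas'; plain Bulk-Only
# Transport enclosures show 'Driver=usb-storage'.
lsusb -t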

I'm booting my Raspberry Pi 4 from a USB SSD

Recently, the Raspberry Pi Foundation announced a USB boot beta for the Raspberry Pi 4. For a very long time, my top complaint with the Raspberry Pi has been its limited I/O speed (especially for the main boot volume). And on older Pis, where external disk speed was capped by a USB 2.0 bus shared with the network adapter (which limited bandwidth even further), even USB booting didn't make things amazing.

But the Pi 4 not only separates the network adapter from the USB bus, it also has USB 3.0, which is theoretically up to 10x faster than USB 2.0. So when the USB boot beta was announced, I wanted to put it through its paces. And after testing it a bit, I decided to use the Pi 4 as my full-time workstation for a day, to see whether it could cope and where it fell short. I'll be posting a video and blog post with more detail on that experience very soon.
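
If you want to check where your own Pi 4's bootloader stands, this is roughly how the EEPROM firmware is verified and updated. It assumes Raspberry Pi OS with the rpi-eeprom package installed, and the 'beta' channel detail reflects how the USB boot firmware was being distributed during the beta, so treat it as a sketch rather than exact instructions.

# Show the currently installed bootloader EEPROM version vs. the latest available:
sudo rpi-eeprom-update

# (During the USB boot beta, the new bootloader lived in the 'beta' channel,
# selected via FIRMWARE_RELEASE_STATUS in /etc/default/rpi-eeprom-update.)
# Stage the newest bootloader for the selected channel and reboot to apply it:
sudo rpi-eeprom-update -a
sudo reboot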

Update: I now have a video that goes along with this blog post:

Revisiting Docker for Mac's performance with NFS volumes

tl;dr: Docker's default bind mount performance for projects requiring lots of I/O on macOS is abysmal. It's acceptable (but still very slow) if you use the cached or delegated option. But it's actually fairly performant using the barely-documented NFS option!

Ever since Docker for Mac was released, shared volume performance has been a major pain point. It was painfully slow, and it wasn't until around 2017 that the community finally got a cached mode offering a 20-30x speedup for common disk access patterns. Since then, the File system performance improvements issue has been a common place to gripe about the lack of improvements to the underlying osxfs filesystem.

Docker has actually supported NFS volumes since around 2016, albeit with barely any documentation (see Docker local volume driver-specific options).
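
For a taste of what that looks like, here's a minimal sketch of an NFS-backed volume using the local driver. The hostname, export path, and mount options below are just examples; on macOS you would also need the directory exported in /etc/exports and the built-in NFS server (nfsd) running.

# Create a named volume backed by an NFS export on the host (paths are examples):
docker volume create --driver local \
  --opt type=nfs \
  --opt o=addr=host.docker.internal,rw,nolock,hard,nointr,nfsvers=3 \
  --opt device=:/Users/myuser/Sites/myproject \
  myproject-nfs

# Mount it into a container like any other named volume:
docker run --rm -it -v myproject-nfs:/var/www/html php:7.3-apache bash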

Limiting disk iops on a larger Munin server using rrdcached

I've long used Munin for basic resource monitoring on a huge variety of servers. It's simple, reliable, easy to configure, and besides the fact that it uses Perl for plugins, there's not much against it!

Last week, I got a notice from my 'low end box' VPS provider that my Munin server, which aggregates data from about 50 other servers, had high IOPS and would be shut down if I didn't get it back under an allowed threshold. Most low end VPSes run things like static HTML websites, so disk I/O is very low on average. I checked my Munin instance, and sure enough, it was constantly churning through around 50 IOPS. On a low end server, that can cause high iowait for other tenants, so I can understand why hosting providers don't want applications on their shared servers doing a lot of constant disk I/O.

Using iotop, I could see that the munin-update processes were spending a lot of time writing to disk, and Munin's own diskstats_iops plugin showed the same.
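
The general idea behind rrdcached is to batch RRD updates in memory (with a journal for crash safety) and flush them to disk on a longer interval, instead of letting every munin-update run hit the RRD files directly. As a rough sketch, assuming a Debian-style layout; the socket path, journal directory, and intervals here are illustrative, not the exact values I settled on:

# /etc/default/rrdcached -- run rrdcached with a journal and relaxed flush timing:
OPTS="-s munin -l unix:/var/run/rrdcached.sock -j /var/lib/rrdcached/journal -F -w 1800 -z 1800 -f 3600"

# /etc/munin/munin.conf -- point Munin at the rrdcached socket:
rrdcached_socket /var/run/rrdcached.sock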

Raspberry Pi microSD follow-up, SD Association fools me twice?

 ____________________________________________
/ Fool me once, shame on you. Fool me twice, \
\ prepare to die. (Klingon Proverb)          /
 --------------------------------------------
        \   ^__^
         \  (oo)\_______
            (__)\       )\/\
                ||----w |
                ||     ||


The fallout from this year's microSD card performance comparison has turned into quite a rabbit hole. First, I found that the new 'A1' and 'A2' classifications were supposed to offer better performance than the cards without an Application Performance class rating that I had been testing. Then I found that A2-rated cards offer no better performance for the Raspberry Pi; in fact, for 4K random reads and writes, they didn't even perform half as well as they were supposed to on any hardware in my possession.
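
For anyone who wants to reproduce that kind of measurement, the A1/A2 spec is all about small random I/O, and a tool like fio can approximate it. This is just one way to run a 4K random read/write test (the file path, size, runtime, and queue depth are arbitrary), not the exact benchmark methodology from my comparisons:

# 4K random read test against a scratch file on the card:
fio --name=rand-read --filename=/home/pi/fio-test --size=256M \
    --ioengine=libaio --direct=1 --bs=4k --rw=randread \
    --iodepth=1 --numjobs=1 --runtime=60 --time_based --group_reporting

# Same again for 4K random writes:
fio --name=rand-write --filename=/home/pi/fio-test --size=256M \
    --ioengine=libaio --direct=1 --bs=4k --rw=randwrite \
    --iodepth=1 --numjobs=1 --runtime=60 --time_based --group_reporting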

A2-class microSD cards offer no better performance for the Raspberry Pi

Update: See follow-up post about A1 vs A2 performance, Raspberry Pi microSD follow-up, SD Association fools me twice?.

After I published my 2019 Raspberry Pi microSD card performance comparison, I received a lot of feedback about newer 'A2' Application Performance class microSD cards, and how they could produce even better performance for a Raspberry Pi.

A2 Performance Class SanDisk and Lexar microSD cards next to older Samsung and SanDisk cards
None of these cards are fakes; grainy halftone printing is visible because I shot this with a macro lens.

The Raspberry Pi 4 needs a fan, here's why and how you can add one

The Raspberry Pi Foundation's Pi 4 announcement blog post touted the Pi 4 as providing a "PC-like level of performance for most users". The Foundation even offers a Raspberry Pi 4 Desktop Kit.

The desktop kit includes the official Raspberry Pi 4 case, which is an enclosed plastic box with nothing in the way of ventilation.

I have been using Pis for various projects since their introduction in 2012, and for many models, including the tiny Pi Zero and various A+ revisions, you didn't even need a fan or heatsink to avoid CPU throttling. And thermal images or point measurements using an IR thermometer usually showed the SoC putting out the most heat. As long as there was at least a little space for natural convection (that is, with no fan), you could do almost anything with a Pi and not have to worry about heat.
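
If you want to see whether your own Pi is throttling, the firmware exposes both the SoC temperature and a throttling bitmask through vcgencmd. These commands ship with Raspberry Pi OS, and the bit meanings noted below are the commonly documented ones:

# Current SoC temperature:
vcgencmd measure_temp

# Throttling status bitmask: 0x0 means no throttling or under-voltage;
# bit 0x4 = currently throttled, bit 0x40000 = throttling has occurred since boot:
vcgencmd get_throttled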

Raspberry Pi microSD card performance comparison - 2019

Note: I also posted a separate review of some A2 'Application Performance' class cards, see this post: A2-class microSD cards offer no better performance for the Raspberry Pi.

Raspberry Pi NOOBS SD card adapter with a number of Samsung and other microSD cards

As a part-time tinkerer and full-time developer, I have been fascinated by single board computers (SBCs) since the first Raspberry Pi was introduced in 2012. I have owned and used every generation of Raspberry Pi, in addition to most of the popular competitors. You can search my site for tons of articles on these experiences.

Git 2.20.1 is super slow on macOS Mojave on my work Mac

Update: I just upgraded my personal Mac to 2.20.1, and am experiencing none of the slowdown I had on my work Mac. So something else is afoot. Maybe some of the 'spyware-ish' software that's installed on the work Mac is making calls like lstat() super slow? Looks like I might be profiling some things on that machine anyway :)

I regularly use Homebrew to switch to more recent versions of CLI utilities and other packages I use in my day-to-day software and infrastructure development. In the past, it was necessary to use Homebrew to get a much newer version of Git than was available at the time on macOS. But as Apple's evolved macOS, they've done a pretty good job of keeping the system versions relatively up-to-date, and unless you need bleeding edge features, the version of Git that's installed on macOS Mojave (2.17.x) is probably adequate for now.

But back to Homebrew—recently I ran brew upgrade to upgrade a bunch of packages, and it happened to upgrade Git to 2.20.1.
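
To figure out where the time is going on a machine like that, Git's built-in performance tracing is a reasonable first step, and macOS's fs_usage can show which filesystem calls are slow. These are generic diagnostics, not the specific steps I ended up using:

# Print timing information for each Git operation to stderr:
GIT_TRACE_PERFORMANCE=1 git status

# Watch filesystem calls (lstat and friends) made by git processes on macOS:
sudo fs_usage -w -f filesys git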

Make composer operations with Drupal way faster and easier on RAM

tl;dr: Run composer require zaporylie/composer-drupal-optimizations:^1.0 in your Drupal codebase to halve Composer's RAM usage and make operations like require and update 3-4x faster.

A few weeks ago, I noticed Drupal VM's PHP 5.6 automated test suite started failing on the step that runs composer require drupal/drush. (PSA: PHP 5.6 is officially dead. Don't use it anymore. If you're still using it, upgrade to a supported version ASAP!). This was the error message I was getting from Travis CI:

PHP Fatal error:  Allowed memory size of 2147483648 bytes exhausted (tried to allocate 32 bytes) in phar:///usr/bin/composer/src/Composer/DependencyResolver/RuleWatchNode.php on line 40

I ran the test suite locally, and didn't have the same issue (locally I have PHP's CLI memory limit set to -1, so it never runs out of RAM unless I do something truly crazy).
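
If you just need to get past that fatal error on a constrained machine, Composer's memory limit can be raised for a single run. The command below mirrors the one from the failing test step, and either approach is a stopgap rather than a fix for the underlying dependency-resolution cost:

# Let Composer use as much memory as it needs for one invocation:
COMPOSER_MEMORY_LIMIT=-1 composer require drupal/drush

# Or raise PHP's CLI memory limit just for this run:
php -d memory_limit=-1 $(which composer) require drupal/drush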