Recent Blog Posts

Revisiting Docker for Mac's performance with NFS volumes

tl;dr: On macOS, Docker's default bind mount performance for projects requiring lots of I/O is abysmal. It's acceptable (but still very slow) if you use the cached or delegated option. But it's actually fairly performant using the barely-documented NFS option!

Ever since Docker for Mac was released, shared volume performance has been a major pain point. It was painfully slow, and it took until around 2017 for the community to get a cached mode that offered a 20-30x speedup for common disk access patterns. Since then, the File system performance improvements issue has been a common place to gripe about the lack of improvements to the underlying osxfs filesystem.

Docker has had (albeit barely documented) support for NFS volumes since around 2016 (see Docker local volume driver-specific options).
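
As a rough sketch of what that looks like in a docker-compose.yml (the service name, host path, and mount options here are illustrative placeholders, not taken from the post):

# Define an NFS-backed named volume using the local volume driver.
volumes:
  nfsmount:
    driver: local
    driver_opts:
      type: nfs
      # Mount options; host.docker.internal resolves to the macOS host.
      o: addr=host.docker.internal,rw,nolock,hard,nointr,nfsvers=3
      # Exported directory on the host (the leading colon is required).
      device: ":/Users/myuser/projects/myproject"

services:
  app:
    image: nginx:latest
    volumes:
      # Mount the NFS volume into the container.
      - nfsmount:/var/www/html

Note that the host directory also has to be exported by macOS's built-in NFS server (via /etc/exports) for the mount to succeed.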

The 2020 Drupal Local Development Survey

DrupalCon Minneapolis is two months away, and that means it's time for the 2020 Drupal Local Development Survey.

Chart: local Drupal development environment usage results from 2019's survey.

If you do any Drupal development work, no matter how much or how little, we would love to hear from you. This survey is not attached to any Drupal organization; it is simply a community survey to help highlight the tools most widely used by Drupalists for their projects.

Take the 2020 Drupal Local Development Survey

Enabling a stale issue bot on my GitHub repositories

For the past few years, the number of issues and PRs across all my GitHub repositories has gone from a steady stream to an ongoing deluge. There are currently over 1,500 open issues across my 194 GitHub repositories, and there's no way I can keep up with all of them.

Initially, I went through each issue in each project's issue queue on a monthly basis (mind you, this was, and still is, done on nights and weekends in my spare time). That slipped to a quarterly task... and now it only happens for higher-profile projects, once or twice a year.

(Image: Probot head, from the GitHub Probot project.)
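
For reference, this is a minimal sketch of the kind of configuration the Probot stale app reads from a repository's .github/stale.yml file; the day counts, labels, and comment text below are illustrative, not the exact values I use:

# Number of days of inactivity before an issue is marked stale.
daysUntilStale: 90
# Number of additional days of inactivity before a stale issue is closed.
daysUntilClose: 30
# Issues with these labels will never be marked stale.
exemptLabels:
  - planned
  - bug
# Label applied when an issue goes stale.
staleLabel: stale
# Comment posted when an issue is marked stale.
markComment: >
  This issue has been marked 'stale' due to lack of recent activity. It will
  be closed if no further activity occurs.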

Ansible best practices: using project-local collections and roles

Note for Tower/AWX users: Currently, Tower requires role and collection requirements to be split out into different files; see Tower: Ansible Galaxy Support. Hopefully Tower will soon be able to support the requirements layout I outline in this post!

Since collections will be a major new part of every Ansible user's experience in the coming months, I thought I'd write a little about what I consider an Ansible best practice: always using project-relative collection and role paths, so you can have multiple independent Ansible projects that each track their own dependencies according to the project's needs.

Early on in my Ansible usage, I used a global roles path and installed every role I used (whether private or from Ansible Galaxy) into that path; I rarely had a playbook- or project-specific role, or a different playbook-local version of a role.
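
As a hedged sketch of the project-local layout (the role and collection names below are placeholders), an ansible.cfg at the project root points Ansible at paths inside the project, and a single requirements.yml lists both roles and collections:

# ansible.cfg (in the project directory)
[defaults]
# Install and resolve roles and collections inside this project only.
roles_path = ./roles
collections_paths = ./collections

# requirements.yml
---
roles:
  - name: example_namespace.example_role
    version: 1.0.0

collections:
  - name: example_namespace.example_collection
    version: 1.0.0

With recent Ansible versions, ansible-galaxy install -r requirements.yml can install both kinds of dependencies from that one file; older versions may require running the role and collection installs separately.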

Automatically building and publishing Ansible Galaxy Collections

I maintain a large number of Ansible Galaxy roles, and publish hundreds of new releases every year. If the process weren't fully automated, there would be no way I could keep up with it. For Galaxy roles, the process of tagging and publishing a new release is very simple, because Ansible Galaxy ties the role strongly to GitHub's release system. All that's needed is a webhook in your .travis.yml file (if using Travis CI):

notifications:
  webhooks: https://galaxy.ansible.com/api/v1/notifications/

For collections, Ansible Galaxy actually hosts an artifact—a .tar.gz file containing the collection contents. This offers some benefits that I won't get into here, but also a challenge: someone has to build and upload that artifact... and that takes more than one or two lines added to a .travis.yml file.
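
At its core, that build-and-upload step boils down to two ansible-galaxy commands (the artifact filename and token variable below are placeholders), which is what any automation ultimately has to run:

# Build the collection artifact (a .tar.gz) from the collection directory.
ansible-galaxy collection build

# Upload the built artifact to Ansible Galaxy using an API token.
ansible-galaxy collection publish ./my_namespace-my_collection-1.0.0.tar.gz --token "$GALAXY_API_TOKEN"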

Until recently, I had been publishing collection releases manually. The process went something like:

Everyone might be a cluster-admin in your Kubernetes cluster

Quite often, when I dive into someone's Kubernetes cluster to debug a problem, I realize whatever pod I'm running has way too many permissions. Often, my pod has the cluster-admin role applied to it through its default ServiceAccount.

Sometimes this role was added because someone wanted to make their CI/CD tool (e.g. Jenkins) manage Kubernetes resources in the cluster, and it was easier to apply cluster-admin to a default service account than to set all the individual RBAC privileges correctly. Other times, it was because someone found a new shiny tool and blindly installed it.

One such example I remember seeing recently is the spekt8 project; in its installation instructions, it tells you to apply an RBAC manifest:

kubectl apply -f https://raw.githubusercontent.com/spekt8/spekt8/master/fabric8-rbac.yaml

What the installation guide doesn't tell you is that this manifest grants cluster-admin privileges to every single Pod in the default namespace!
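
To make the problem concrete, the binding in question looks roughly like this (reconstructed for illustration, not copied verbatim from the manifest): it attaches the cluster-admin ClusterRole to the default ServiceAccount, which every Pod in the default namespace runs as unless it specifies another ServiceAccount:

# Grants full cluster-admin rights to the "default" ServiceAccount.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: fabric8-rbac
subjects:
  # Any Pod in "default" that doesn't set its own ServiceAccount uses this one.
  - kind: ServiceAccount
    name: default
    namespace: default
roleRef:
  # cluster-admin allows any action on any resource, cluster-wide.
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io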
