infrastructure

Running 'php artisan schedule:run' for Laravel in Kubernetes CronJobs

I am working on integrating a few Laravel PHP applications into a new Kubernetes architecture, and every now and then we hit a little snag. For example, the app developers noticed that when their cron job ran (php artisan schedule:run), the MySQL container in the cluster would drop an error message like:

2019-03-27T16:20:05.965157Z 1497 [Note] Aborted connection 1497 to db: 'database' user: 'myuser' host: '10.0.76.130' (Got an error reading communication packets)

In Kubernetes, I had the Laravel app CronJob set up like so:
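Roughly, a CronJob for the Laravel scheduler takes a shape like the following sketch (the image, namespace, and secret names below are placeholders rather than the real app's values):

```yaml
# Run the Laravel scheduler once a minute. Image, namespace, and secret names
# here are placeholder assumptions, not the actual app's manifest.
apiVersion: batch/v1beta1   # batch/v1 on newer clusters
kind: CronJob
metadata:
  name: laravel-scheduler
  namespace: laravel
spec:
  schedule: "* * * * *"
  concurrencyPolicy: Forbid
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: Never
          containers:
            - name: artisan-schedule-run
              image: example/laravel-app:latest
              command: ["php", "artisan", "schedule:run"]
              envFrom:
                - secretRef:
                    name: laravel-env
```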

Analyzing a MySQL slow query log with pt-query-digest

There are times when you may notice your MySQL or MariaDB database server getting very slow. It's usually a stressful time, since a slow database means your site or application is getting slow right along with it. And when you dig in, you notice the logs filling up—and in MySQL's case, the slow query log is often the canary in the coal mine, pointing at potential (or already active) performance issues.

But—assuming you have the slow query log enabled—have you ever grabbed a copy of the log and dug into it? It can be extremely daunting. It's literally a list of query metrics (a timestamp, how long the query took, how long it locked the table) followed by the raw slow query itself, over and over. How do you know which query takes the longest time? And is there one sort-of-slow query that is actually the worst offender, just because it's being run hundreds of times per minute?
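That's exactly the question pt-query-digest (from Percona Toolkit) is built to answer: feed it the slow query log and it groups every query into a fingerprint and ranks them by total impact. Assuming the log lives at the example path below (check your server's configuration for the real one):

```sh
# Summarize the slow query log into a ranked digest of the worst query patterns.
# The log path is an example; check slow_query_log_file in your MySQL config.
pt-query-digest /var/log/mysql/mysql-slow.log > slow-digest.txt
```

The "Profile" table near the top of the report ranks query fingerprints by total response time, so a mediocre query that runs hundreds of times per minute floats to the top right alongside the occasional monster.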

Fixing '503 Service Unavailable' and 'Endpoints not available' for Traefik Ingress in Kubernetes

In a Kubernetes cluster I'm building, I was quite puzzled when setting up Ingress for one of my applications—in this case, Jenkins.

I had created a Deployment for Jenkins (in the jenkins namespace), and an associated Service, which exposed port 80 on a ClusterIP. Then I added an Ingress resource which directed the URL jenkins.example.com at the jenkins Service on port 80.
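For reference, those two resources generally look like the following (the labels, Jenkins target port, and the Traefik ingress class annotation are assumptions for illustration, not the exact manifests from this cluster):

```yaml
# Service exposing the Jenkins pods inside the cluster, plus an Ingress routing
# jenkins.example.com to it. Labels and the target port are assumed values.
apiVersion: v1
kind: Service
metadata:
  name: jenkins
  namespace: jenkins
spec:
  type: ClusterIP
  selector:
    app: jenkins          # must match the Deployment's pod labels
  ports:
    - port: 80
      targetPort: 8080    # Jenkins itself listens on 8080 by default
---
apiVersion: extensions/v1beta1   # newer clusters use networking.k8s.io/v1 and its backend syntax
kind: Ingress
metadata:
  name: jenkins
  namespace: jenkins
  annotations:
    kubernetes.io/ingress.class: traefik
spec:
  rules:
    - host: jenkins.example.com
      http:
        paths:
          - backend:
              serviceName: jenkins
              servicePort: 80
```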

Inspecting both the Service and Ingress resource with kubectl get svc -n jenkins and kubectl get ingress -n jenkins, respectively, showed everything seemed to be configured correctly:

Kubernetes' Complexity

Over the past month, I started rebuilding the Raspberry Pi Dramble project using Kubernetes instead of installing and configuring the LEMP stack directly on nodes via Ansible (track GitHub issues here). Along the way, I've hit tons of minor issues with the installation, and I wanted to document some of the things I think turn people away from Kubernetes early in the learning process. Kubernetes is definitely not the answer to all application hosting problems, but it is a great fit for some, and it would be a shame for someone who could really benefit from Kubernetes to be stumped and turn to some other solution that costs more in time, money, or maintenance!

Raspberry Pi Dramble cluster running Kubernetes with Green LEDs

Hosted Apache Solr's Revamped Docker-based Architecture

I started Hosted Apache Solr almost 10 years ago, in late 2008, so I could more easily host Apache Solr search indexes for my Drupal websites. I realized I could host search indexes for other Drupal websites, too, if I added some basic account management features and a PayPal subscription plan—so I built a small subscription management service on top of my Midwestern Mac website (then running Drupal 6) and started selling a few Solr subscriptions.

Back then, the latest and greatest Solr version was 1.4, and now-popular automation tools like Chef and Ansible didn't even exist. So when a customer signed up for a new subscription, the pipeline for building and managing the customer's search index went like this:

Original Hosted Apache Solr architecture, circa 2009.

dockrun oneshot — quick local environments for testing infrastructure

Since I work with a ton of different Linux distros and environments day to day, I have a lot of tooling set up that's mostly OS-agnostic. But I kept finding myself in need of a quick, barebones CentOS 7 VM to play around in or troubleshoot an issue. Or I needed to run Ubuntu 16.04 and Ubuntu 14.04 side by side and run the same command in each, checking for differences. Or I needed to bring up Fedora. Or Debian.

I used to use my Vagrant boxes for VirtualBox to boot a full VM, then vagrant ssh in. But that took at least 15-20 seconds—assuming I already had the box downloaded on my computer!
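Containers get that down to a second or two. At its core, the idea is nothing fancier than a throwaway docker run that drops you into a shell and cleans up after itself—which is roughly the pattern dockrun oneshot wraps up into one short command:

```sh
# Throwaway CentOS 7 shell; --rm removes the container as soon as you exit.
docker run --rm -it centos:7 /bin/bash

# Two Ubuntu releases side by side (run each in its own terminal) to compare
# the same command on 16.04 and 14.04.
docker run --rm -it ubuntu:16.04 /bin/bash
docker run --rm -it ubuntu:14.04 /bin/bash
```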

Reintroducing the sanity of CM to container management

Recently, Ansible introduced Ansible Container, a tool that builds and orchestrates Docker containers.

While tools that build and orchestrate Docker containers are a dime a dozen these days (seriously... Kubernetes, Mesos, Rancher, Fleet, Swarm, Deis, Kontena, Flynn, Serf, Clocker, Paz, Docker 1.12+ built-in, not to mention dozens of PaaSes), many are built in the weirdly-isolated world of "I only manage containers, and don't manage other infrastructure tasks."

The cool thing about using Ansible to do your container builds and orchestration is that Ansible can also do your networking configuration. And your infrastructure provisioning. And your legacy infrastructure configuration. And on top of that, Ansible is, IMO, the best-in-class configuration management tool—easy for developers and sysadmins to learn and use effectively, and as efficient/terse as (but much more powerful than) shell scripts.
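As a tiny illustration of that point—this is plain Ansible using the docker_container module, not Ansible Container itself, and the host group, package, and image are arbitrary examples—one play can manage host-level configuration and a running container side by side:

```yaml
# One playbook, two layers: host configuration and a container, managed with
# the same tool and inventory. All names here are arbitrary examples.
- hosts: webservers
  become: true
  tasks:
    - name: Configure the host itself (e.g. make sure a package is present).
      package:
        name: ntp
        state: present

    - name: Run the application container on the same host.
      docker_container:
        name: myapp
        image: nginx:stable
        state: started
        published_ports:
          - "8080:80"
```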

From Ansible Container's own README:

Drupal VM's latest update adds Redis, PHP-FPM support to Apache

tl;dr: Drupal VM 2.2.0 'Wormhole' was released today, and it adds even more features for local dev!

Over the past few months, I've been working towards a more reliable release cadence for Drupal VM, and I've targeted one or two large features, a number of small improvements, and as many bugfixes as I have time to review. The community surrounding Drupal VM's development has been amazing; in the past few months I've noticed:

Launching my first Drupal 8 website — in my basement!

I've been working with Drupal 8 for a long time, keeping Honeypot and some other modules up to date, and doing some dry-runs of migrating a few smaller sites from Drupal 7 to Drupal 8, just to hone my D8 familiarity.

Raspberry Pi Dramble Drupal 8 Website

I finally launched a 'for real' Drupal 8 site, which is currently running on Drupal 8 HEAD—on a cluster of Raspberry Pi 2 computers in my basement! You can view the site at http://www.pidramble.com/, and I've already started posting some articles about running Drupal 8 on the servers, how I built the cluster, some of the limitations of at-home webhosting, etc.

Some of the things I've already learned from building and running this cluster for the past few days:

Simple GlusterFS Setup with Ansible

The following is an excerpt from Chapter 8 of Ansible for DevOps, a book on Ansible by Jeff Geerling.

Modern infrastructure often involves some amount of horizontal scaling; instead of having one giant server, with one storage volume, one database, one application instance, etc., most apps use two, four, ten, or dozens of servers.

GlusterFS Architecture Diagram

Many applications can be scaled horizontally with ease, but what happens when you need resources—files, application code, or other transient data—to be shared across all the servers? And how do you make that data scale out with your infrastructure in a fast but reliable way? There are many different approaches to synchronizing or distributing files across servers:
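One of those approaches—and the one this excerpt builds toward—is GlusterFS, and with Ansible the setup stays short. As a minimal sketch (assuming two nodes in a gluster inventory group, the geerlingguy.glusterfs role from Ansible Galaxy, and brick/mount paths picked purely for illustration):

```yaml
# Build and mount a two-node replicated Gluster volume. Hostnames, paths, and
# the volume name are illustrative assumptions, not the book's full example.
- hosts: gluster
  become: true
  vars:
    gluster_brick_dir: /srv/gluster/brick
    gluster_mount_dir: /mnt/gluster
  roles:
    - geerlingguy.glusterfs   # installs and starts the GlusterFS packages and daemon
  tasks:
    - name: Ensure the brick directory exists on each node.
      file:
        path: "{{ gluster_brick_dir }}"
        state: directory

    - name: Create and start a replicated volume across the nodes (run on one host).
      gluster_volume:
        state: started
        name: shared
        bricks: "{{ gluster_brick_dir }}"
        replicas: 2
        cluster: "{{ groups.gluster | join(',') }}"
      run_once: true

    - name: Mount the Gluster volume on every node.
      mount:
        path: "{{ gluster_mount_dir }}"
        src: "{{ inventory_hostname }}:/shared"
        fstype: glusterfs
        state: mounted
```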