Running Drupal in Kubernetes with Docker in production

Update: Since posting this, there have been some interesting new developments in this area, for example:

Since 2014, I've been working on various projects which containerized Drupal in a production environment. There have always been a few growing pains, and there will be for some time, as there are so few places actually running Docker or containers in production (at least in a 'cloud native' way, without tons of volume mounts). That's changing, though; slowly at first, but now much more rapidly.

Running 'php artisan schedule:run' for Laravel in Kubernetes CronJobs

I am working on integrating a few Laravel PHP applications into a new Kubernetes architecture, and every now and then we hit a little snag. For example, the app developers noticed that when their cron job ran (php artisan schedule:run), the MySQL container in the cluster would drop an error message like:

2019-03-27T16:20:05.965157Z 1497 [Note] Aborted connection 1497 to db: 'database' user: 'myuser' host: '10.0.76.130' (Got an error reading communication packets)

In Kubernetes, I had the Laravel app CronJob set up like so:
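
(Here's a sketch with a placeholder image name rather than my exact manifest; the important parts are the every-minute schedule and the schedule:run command.)

apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: laravel-scheduler
spec:
  # Laravel's scheduler expects to be invoked every minute;
  # it decides internally which tasks are actually due.
  schedule: "* * * * *"
  concurrencyPolicy: Forbid
  jobTemplate:
    spec:
      template:
        spec:
          containers:
            - name: laravel-scheduler
              image: registry.example.com/laravel-app:latest
              command: ["php", "artisan", "schedule:run"]
          restartPolicy: Never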

Monitoring Kubernetes cluster utilization and capacity (the poor man's way)

If you're running Kubernetes clusters at scale, it pays to have good monitoring in place. The tools I typically use in production, like Prometheus and Alertmanager, are extremely useful for monitoring critical metrics, like "Is my cluster almost out of CPU or memory?"

But I also have a number of smaller clusters, and some of them, like my Raspberry Pi Dramble, have very little in the way of resources available for hosting monitoring internally. I still want to be able to ask, at any given moment, "How much CPU or RAM is available inside the cluster? Can I fit more Pods in it?"

So without further ado, I'm now using the following script, which is slightly adapted from a script found in the Kubernetes issue Need simple kubectl command to see cluster resource usage:
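
(The following is a lightly simplified sketch of that script, not the exact version; it just loops over every node and prints the "Allocated resources" section from kubectl describe.)

#!/bin/bash
# Show allocated CPU/memory requests and limits for each node in the cluster.
for node in $(kubectl get nodes --no-headers -o custom-columns=NAME:.metadata.name); do
  echo "Node: ${node}"
  kubectl describe node "${node}" | grep -A 5 'Allocated resources'
  echo
done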

Usage is pretty easy: just make sure your kubeconfig is set up so kubectl commands work against the cluster, then run:
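
(Assuming you saved the script as cluster-usage.sh; the filename is arbitrary.)

chmod +x cluster-usage.sh
./cluster-usage.sh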

Expanding K8s PVs in EKS on AWS

If that post title isn't a mouthful...

I'm excited to be moving a few EKS clusters into real-world production use after a few months of preparation. Besides my Raspberry Pi Dramble project (which is pretty low-key), these are the only production-grade Kubernetes clusters I've dealt with—and I've learned a lot. Enough that I'm working on a new book.

Anyways, back to the main topic: As of Kubernetes 1.11, you can auto-expand PVs from most cloud providers, AWS included. And since EKS now runs Kubernetes 1.11.x, you can have your EBS PVs automatically expand by just increasing the PVC claim size in spec.resources.requests.storage to a larger size (e.g. 10Gi to 20Gi).
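
For illustration, the resize is a one-line change in the PVC definition (a sketch with placeholder names, assuming the default gp2 StorageClass):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: drupal-files
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: gp2
  resources:
    requests:
      storage: 20Gi  # bumped from 10Gi; Kubernetes expands the EBS volume to match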

For this to work, though, you need to check a few things:

Make sure you have the proper setting on your StorageClass

The StorageClass you're using must have the allowVolumeExpansion setting enabled, e.g.:
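
(Something like the following; the gp2 name, provisioner, and parameters here assume EKS's default EBS StorageClass.)

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: gp2
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
allowVolumeExpansion: true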

Mounting a Kubernetes Secret as a single file inside a Pod

Recently I needed to mount an SSH private key (used by one app to connect to another) into a running Pod. To keep it secure, we put the SSH key into a Kubernetes Secret, then mounted the Secret as a file inside the Pod spec for a Deployment.

I wanted to document the process here because (a) I know I'm going to have to do it again and this will save me a few minutes' research, and (b) it's very slightly unintuitive (at least to me).

First I defined a secret in a namespace:
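
(A sketch with placeholder names and namespace; the actual key material is omitted.)

apiVersion: v1
kind: Secret
metadata:
  name: ssh-key
  namespace: my-app
type: Opaque
data:
  id_rsa: [base64-encoded private key]

The slightly unintuitive part is the mount: to get a single file rather than a whole directory shadowing everything at the mount path, you combine a Secret volume with a subPath on the volumeMount. The relevant portion of the Deployment's Pod spec looks something like:

spec:
  containers:
    - name: app
      image: registry.example.com/my-app:latest
      volumeMounts:
        - name: ssh-key
          mountPath: /home/app/.ssh/id_rsa  # mounts as a single file, not a directory
          subPath: id_rsa
          readOnly: true
  volumes:
    - name: ssh-key
      secret:
        secretName: ssh-key
        defaultMode: 0600  # SSH clients refuse keys with loose permissions

(One caveat: files mounted via subPath don't get updated automatically if the Secret changes later.)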

Updating a Kubernetes Deployment and waiting for it to roll out in a shell script

For some Kubernetes cluster operations (e.g. deploying an update to a small microservice or app), I need a quick and dirty way to:

  1. Build and push a Docker image to a private registry.
  2. Update a Kubernetes Deployment to use this new image version.
  3. Wait for the Deployment rollout to complete.
  4. Run some post-rollout operations (e.g. clear caches, run an update, etc.).

There are a thousand and one ways to do all this, many of them more formal than what follows, but sometimes you just need a shell script you can run from your CI server to do it all. And it's neither hard nor complex to do it this way:
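
(A sketch with placeholder registry, namespace, and app names; BUILD_NUMBER assumes a CI-provided variable like the one Jenkins exposes.)

#!/bin/bash
set -e

IMAGE="registry.example.com/my-app:${BUILD_NUMBER}"

# 1. Build and push the Docker image to the private registry.
docker build -t "${IMAGE}" .
docker push "${IMAGE}"

# 2. Update the Deployment to use the new image version.
kubectl set image deployment/my-app my-app="${IMAGE}" -n my-app

# 3. Wait for the rollout to complete (exits nonzero if it fails).
kubectl rollout status deployment/my-app -n my-app

# 4. Run post-rollout operations inside one of the new Pods.
POD=$(kubectl get pods -n my-app -l app=my-app -o jsonpath='{.items[0].metadata.name}')
kubectl exec -n my-app "${POD}" -- ./scripts/post-deploy.sh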

Deploying an Acquia BLT Drupal 8 site to Kubernetes

Wait... what? If you're reading the title of this post, and are familiar with Acquia BLT, you might be wondering:

  • Why are you using Acquia BLT with a project that's not running in Acquia Cloud?
  • You can deploy a project built with Acquia BLT to Kubernetes?
  • Don't you, like, have to use Docker instead of Drupal VM? And aren't you [Jeff Geerling] the maintainer of Drupal VM?

Well, the answers are pretty simple:

Deploying a React single-page web app to Kubernetes

React seems to have taken the front-end development community by storm, and is extremely popular for web UIs.

Its development model is a breath of fresh air compared to many other tools: you just clone your app, and as long as you have Node.js installed in your environment, you start developing by running (with npm or yarn or whatever today's most popular package manager is):

yarn install
yarn serve

And then you have a local development server running your code, which updates in real time when you change code.

But when it comes time to deploy a real-world React app to non-local environments, things can get a little... weird.

For most modern projects I work on, there are usually multiple environments:

Running Drupal Cron Jobs in Kubernetes

There are a number of things you have to do to make Drupal a first-class citizen inside a Kubernetes cluster, like adding a shared filesystem (e.g. a PV/PVC backed by a networked file share) for the files directory (which can contain generated files like image derivatives, generated PHP, and Twig template caches), and setting up containers to use environment variables for connection details (instead of hard-coding things in settings.php).

But another thing which you should do for better performance and traceability is run Drupal cron via an external process. Drupal's cron is essential to many site operations, like cleaning up old files, cleaning out certain system tables (flood, history, logs, etc.), running queued jobs, etc. And if your site is especially reliant on timely cron runs, you probably also use something like Ultimate Cron to manage the cron jobs more efficiently (it makes Drupal cron work much like the extensive job scheduler in a more complicated system like Magento).
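
As an illustration, the external process can be as simple as a Kubernetes CronJob that runs Drush against the site (a sketch; the image and schedule are placeholders, and the Drush path assumes a Composer-based project):

apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: drupal-cron
spec:
  schedule: "*/15 * * * *"
  concurrencyPolicy: Forbid
  jobTemplate:
    spec:
      template:
        spec:
          containers:
            - name: drupal-cron
              image: registry.example.com/drupal:latest
              command: ["/var/www/html/vendor/bin/drush", "cron"]
          restartPolicy: Never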

Decoding Kubernetes Ingress auth Secrets

Update: In the comments, Matt T suggested the following one-liner, if you have jq installed (a handy utility if there ever was one!):

kubectl get secret my-secret -o json | jq '.data | map_values(@base64d)'

I figured it would be handy to have a quick reference for this, since I'll probably forget certain secrets many, many times in the future (I'm like that, I guess):

I have a Kubernetes Secret used for Traefik ingress basic HTTP authentication (using annotation ingress.kubernetes.io/auth-secret), and as an admin with kubectl access, I want to see (or potentially modify) its structure.

Let's say the Secret is in namespace testing, and is named test-credentials. To get the value of the basic auth credentials I do:

kubectl get secret test-credentials -n testing -o yaml

This spits out the Kubernetes object definition, including a field like:

data:
  auth: [redacted base64-encoded string]

So then I copy out that string and decode it:
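
(For example, using the base64 utility; the decode flag is spelled --decode or -d with GNU coreutils, and -D on older macOS versions.)

echo '[redacted base64-encoded string]' | base64 --decode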
