kubernetes

Expanding K8s PVs in EKS on AWS

If that post title isn't a mouthful...

I'm excited to be moving a few EKS clusters into real-world production use after a few months of preparation. Besides my Raspberry Pi Dramble project (which is pretty low-key), these are the only production-grade Kubernetes clusters I've dealt with—and I've learned a lot. Enough that I'm working on a new book.

Anyway, back to the main topic: As of Kubernetes 1.11, you can auto-expand PVs from most cloud providers, AWS included. And since EKS now runs Kubernetes 1.11.x, you can have your EBS PVs expand automatically just by increasing the PVC's spec.resources.requests.storage to a larger size (e.g. from 10Gi to 20Gi).
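
For example, assuming a PVC named my-app-data in the my-app namespace (both names are placeholders), the resize is just a patch to the claim:

kubectl patch pvc my-app-data -n my-app \
  -p '{"spec":{"resources":{"requests":{"storage":"20Gi"}}}}'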

For this to work, though, you need to verify a few things:

Make sure you have the proper setting on your StorageClass

You need to make sure the StorageClass you're using has the allowVolumeExpansion setting enabled, e.g.:
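
For example, a gp2 StorageClass for EBS might look like this (a minimal sketch; the provisioner and parameters should match whatever your cluster already uses):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gp2
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
  fsType: ext4
reclaimPolicy: Delete
allowVolumeExpansion: true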

Mounting a Kubernetes Secret as a single file inside a Pod

Recently I needed to mount an SSH private key (used by one app to connect to another) into a running Pod. To do it securely, we put the SSH key into a Kubernetes Secret, then mounted the Secret as a file inside the Pod spec for a Deployment.

I wanted to document the process here because (a) I know I'm going to have to do it again and this will save me a few minutes' research, and (b) it's very slightly unintuitive (at least to me).

First I defined a secret in a namespace:
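
Here's a rough sketch of the shape of it (all names and paths below are placeholders, not the ones from my project): the Secret itself, then the relevant excerpt of the Deployment's Pod spec, using subPath so only the single key file shows up at the mount point.

apiVersion: v1
kind: Secret
metadata:
  name: ssh-key
  namespace: my-namespace
type: Opaque
stringData:
  id_rsa: |
    -----BEGIN OPENSSH PRIVATE KEY-----
    (key contents here)
    -----END OPENSSH PRIVATE KEY-----
---
# Excerpt from the Deployment's Pod template (spec.template.spec):
containers:
  - name: app
    # ...
    volumeMounts:
      - name: ssh-key
        mountPath: /var/www/.ssh/id_rsa
        subPath: id_rsa
        readOnly: true
volumes:
  - name: ssh-key
    secret:
      secretName: ssh-key
      defaultMode: 0600

(Using subPath mounts just that one key as a regular file instead of a whole directory of symlinks; the trade-off is that the mounted file won't update automatically if the Secret changes.)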

Updating a Kubernetes Deployment and waiting for it to roll out in a shell script

For some Kubernetes cluster operations (e.g. deploying an update to a small microservice or app), I need a quick and dirty way to:

  1. Build and push a Docker image to a private registry.
  2. Update a Kubernetes Deployment to use this new image version.
  3. Wait for the Deployment rollout to complete.
  4. Run some post-rollout operations (e.g. clear caches, run an update, etc.).

There are a thousand and one ways to do all this, many of them more formal, but sometimes you just need a shell script you can run from your CI server. It's neither hard nor complex to do it this way:
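
Something along these lines (a sketch; the registry, image, Deployment, container, and namespace names are all placeholders you'd swap for your own):

#!/bin/bash
# Build, push, deploy, wait, then run post-rollout tasks. Sketch only.
set -euo pipefail

TAG="${1:-latest}"
IMAGE="registry.example.com/myapp:${TAG}"

# 1. Build and push the Docker image.
docker build -t "$IMAGE" .
docker push "$IMAGE"

# 2. Update the Deployment to use the new image.
kubectl set image deployment/myapp myapp="$IMAGE" -n production

# 3. Wait for the rollout to complete (exits non-zero if it fails).
kubectl rollout status deployment/myapp -n production --timeout=300s

# 4. Post-rollout operations, e.g. clear caches inside one of the new pods.
POD=$(kubectl get pods -n production -l app=myapp \
  -o jsonpath='{.items[0].metadata.name}')
kubectl exec -n production "$POD" -- bin/clear-caches.sh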

Deploying an Acquia BLT Drupal 8 site to Kubernetes

BLT to Kubernetes

Wait... what? If you're reading the title of this post, and are familiar with Acquia BLT, you might be wondering:

  • Why are you using Acquia BLT with a project that's not running in Acquia Cloud?
  • You can deploy a project built with Acquia BLT to Kubernetes?
  • Don't you, like, have to use Docker instead of Drupal VM? And aren't you [Jeff Geerling] the maintainer of Drupal VM?

Well, the answers are pretty simple:

Deploying a React single-page web app to Kubernetes

React seems to have taken the front-end development community by storm, and is extremely popular for web UIs.

Its development model is a breath of fresh air compared to many other tools: you just clone the app, and as long as you have Node.js installed in your environment, you start developing by running (with npm, yarn, or whatever today's most popular package manager is):

yarn install
yarn serve

And then you have a local development server running your code, which updates in real time when you change code.

But when it comes time to deploy a real-world React app to non-local environments, things can get a little... weird.

For most modern projects I work on, there are usually multiple environments:

Running Drupal Cron Jobs in Kubernetes

There are a number of things you have to do to make Drupal a first-class citizen inside a Kubernetes cluster, like adding a shared filesystem (e.g. PV/PVC over networked file share) for the files directory (which can contain generated files like image derivatives, generated PHP, and twig template caches), and setting up containers to use environment variables for connection details (instead of hard-coding things in settings.php).
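
For the environment-variable part, the database connection in settings.php ends up looking something like this (the variable names here are just an example):

$databases['default']['default'] = [
  'database' => getenv('DRUPAL_DATABASE_NAME'),
  'username' => getenv('DRUPAL_DATABASE_USERNAME'),
  'password' => getenv('DRUPAL_DATABASE_PASSWORD'),
  'host' => getenv('DRUPAL_DATABASE_HOST'),
  'port' => getenv('DRUPAL_DATABASE_PORT') ?: '3306',
  'driver' => 'mysql',
  'prefix' => '',
];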

But another thing which you should do for better performance and traceability is run Drupal cron via an external process. Drupal's cron is essential to many site operations, like cleaning up old files, cleaning out certain system tables (flood, history, logs, etc.), running queued jobs, etc. And if your site is especially reliant on timely cron runs, you probably also use something like Ultimate Cron to manage the cron jobs more efficiently (it makes Drupal cron work much like the extensive job scheduler in a more complicated system like Magento).
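
One straightforward way to run cron externally in Kubernetes is a CronJob that invokes Drush's cron command. A minimal sketch (the image name, namespace, and drush path are assumptions; older clusters use apiVersion batch/v1beta1):

apiVersion: batch/v1
kind: CronJob
metadata:
  name: drupal-cron
  namespace: drupal
spec:
  schedule: "*/15 * * * *"
  concurrencyPolicy: Forbid
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: Never
          containers:
            - name: drupal-cron
              image: registry.example.com/my-drupal:latest
              workingDir: /var/www/html
              command: ["vendor/bin/drush", "cron"]

In practice this pod would also need the same environment variables and shared files volume as the main Drupal containers, or cron runs won't see the same site.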

Decoding Kubernetes Ingress auth Secrets

Update: In the comments, the following one-liner is suggested by Matt T if you have jq installed (a handy utility if there ever was one!):

kubectl get secret my-secret -o json | jq '.data | map_values(@base64d)'

I figured it would be handy to have a quick reference for this, since I'll probably forget certain secrets many, many times in the future (I'm like that, I guess):

I have a Kubernetes Secret used for Traefik ingress basic HTTP authentication (using annotation ingress.kubernetes.io/auth-secret), and as an admin with kubectl access, I want to see (or potentially modify) its structure.

Let's say the Secret is in namespace testing, and is named test-credentials. To get the value of the basic auth credentials I do:

kubectl get secret test-credentials -n testing -o yaml

This spits out the Kubernetes object definition, including a field like:

data:
  auth: [redacted base64-encoded string]

So then I copy out that string and decode it:
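
That's just base64, so decoding looks like this (paste the copied string in place of the placeholder):

echo 'PASTE_THE_COPIED_STRING_HERE' | base64 --decode

(For a basic auth Secret like this one, what comes back is an htpasswd-style user:hashed-password line, not the plaintext password itself.)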

Install kubectl in your Docker image, the easy way

Most of the time, when I install software on my Docker images, I add a rather hairy RUN command which does something like:

  1. Install some dependencies for key management.
  2. Add a GPG key for a new software repository.
  3. Install software from that new software repository.
  4. Clean up apt/yum/dnf caches to save a little space.
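
For kubectl on a Debian-based image, that hairy RUN command looks roughly like the sketch below, using the current upstream Kubernetes apt repository (the repo URL and version pin have changed over the years, so check the Kubernetes docs for the current form):

FROM debian:bookworm-slim

RUN apt-get update \
  && apt-get install -y --no-install-recommends ca-certificates curl gnupg \
  && mkdir -p /etc/apt/keyrings \
  && curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.30/deb/Release.key \
     | gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg \
  && echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.30/deb/ /' \
     > /etc/apt/sources.list.d/kubernetes.list \
  && apt-get update \
  && apt-get install -y --no-install-recommends kubectl \
  && rm -rf /var/lib/apt/lists/*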

This is all well and good, and it's the recommended way to install kubectl in most situations, but it's not without its drawbacks:

Drupal startup time and opcache - faster scaling for PHP in containerized environments

Lately I've been spending a lot of time working with Drupal in Kubernetes and other containerized environments. One problem that's bothered me is that when autoscaling Drupal, it always takes at least a few seconds to get a new Drupal instance running. I don't mean installing Drupal, configuring the database, or building caches; none of that. I'm just talking about a Drupal site that's already operational, and scaling by adding an additional Drupal instance or container.

One of the principles of the 12 Factor App is:

IX. Disposability

Maximize robustness with fast startup and graceful shutdown.

Disposability is important because it enables things like easy, fast code deployments, easy, fast autoscaling, and high availability. It also forces you to make your code stateless and efficient, so it starts up fast even with a cold cache. Read more about the disposability factor on the 12factor site.

Using BLT with Config Split outside Acquia Cloud or Pantheon Hosting

I am currently building a Drupal 8 application that runs outside Acquia Cloud, and I noticed there are a few 'magic' settings I'm used to having on Acquia Cloud which don't work outside an Acquia or Pantheon environment; most notably, the automatic Configuration Split environment selection (for environments like local, dev, and prod) doesn't work in a custom hosting environment.

You basically have to reset the settings BLT provides and tell Drupal which config split should be active based on your own logic. In my case, the site has only local, ci, and prod environments. To override the settings defined in BLT's included config.settings.php file, I created my own at docroot/sites/settings/config.settings.php with the following contents:
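
The gist of it is something like this (a sketch, not the exact file; the ENVIRONMENT variable name and the config split machine names are assumptions that should match your own splits):

<?php

/**
 * @file
 * Custom Config Split selection based on an environment variable.
 */

// Disable every split first, then enable only the one matching the current
// environment (ENVIRONMENT is expected to be 'local', 'ci', or 'prod').
$split_environments = ['local', 'ci', 'prod'];
$current_env = getenv('ENVIRONMENT') ?: 'local';

foreach ($split_environments as $split_env) {
  $config["config_split.config_split.{$split_env}"]['status'] = FALSE;
}
$config["config_split.config_split.{$current_env}"]['status'] = TRUE;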