kubernetes

Deploying a React single-page web app to Kubernetes

React seems to have taken the front-end development community by storm, and is extremely popular for web UIs.

Its development model is a breath of fresh air compared to many other tools: you just clone the app, and as long as you have Node.js installed in your environment, you start developing by running (with npm, yarn, or whatever today's most popular package manager happens to be):

yarn install
yarn serve

And then you have a local development server running your code, which updates in real time when you change code.

But when it comes time to deploy a real-world React app to non-local environments, things can get a little... weird.
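The local dev server isn't meant for non-local environments; the usual move is to build the static assets and serve them from a plain web server, often via a multi-stage Docker build. Here's a minimal sketch (the image tags, build script, and output directory are assumptions that vary by project):

# Build stage: install dependencies and compile the static bundle.
# (Assumes package.json defines a 'build' script.)
FROM node:20 AS build
WORKDIR /app
COPY package.json yarn.lock ./
RUN yarn install --frozen-lockfile
COPY . .
RUN yarn build

# Serve stage: copy the compiled assets into a small nginx image.
# (Create React App outputs to build/; Vite uses dist/.)
FROM nginx:alpine
COPY --from=build /app/build /usr/share/nginx/html

A build like this bakes one set of assets into one image, which is exactly where environment-specific configuration starts to bite.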

For most modern projects I work on, there are usually multiple environments:

Running Drupal Cron Jobs in Kubernetes

There are a number of things you have to do to make Drupal a first-class citizen inside a Kubernetes cluster, like adding a shared filesystem (e.g. PV/PVC over a networked file share) for the files directory (which can contain generated files like image derivatives, generated PHP, and Twig template caches), and setting up containers to use environment variables for connection details (instead of hard-coding things in settings.php).

But another thing you should do for better performance and traceability is run Drupal cron via an external process. Drupal's cron is essential to many site operations, like cleaning up old files, cleaning out certain system tables (flood, history, logs, etc.), and running queued jobs. And if your site is especially reliant on timely cron runs, you probably also use something like Ultimate Cron to manage the cron jobs more efficiently (it makes Drupal cron work much like the extensive job scheduler in a more complicated system like Magento).
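In Kubernetes, the natural external trigger is a CronJob object that runs drush cron against the site on a schedule. A minimal sketch, assuming a drupal namespace and an image with Drush on its PATH (the names, image, and schedule are placeholders):

apiVersion: batch/v1
kind: CronJob
metadata:
  name: drupal-cron
  namespace: drupal
spec:
  schedule: "*/15 * * * *"
  # Don't pile up overlapping cron runs if one is slow.
  concurrencyPolicy: Forbid
  jobTemplate:
    spec:
      template:
        spec:
          containers:
            - name: drush-cron
              image: registry.example.com/drupal:latest
              command: ["drush", "cron"]
          restartPolicy: OnFailure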

Decoding Kubernetes Ingress auth Secrets

I figured it would be handy to have a quick reference for this, since I'll probably need to decode secrets like this many, many times in the future (I'm like that, I guess):

I have a Kubernetes Secret used for Traefik ingress basic HTTP authentication (using annotation ingress.kubernetes.io/auth-secret), and as an admin with kubectl access, I want to see (or potentially modify) its structure.

Let's say the Secret is in namespace testing, and is named test-credentials. To get the value of the basic auth credentials I do:

kubectl get secret test-credentials -n testing -o yaml

This spits out the Kubernetes object definition, including a field like:

data:
  auth: [redacted base64-encoded string]

So then I copy out that string and decode it:
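The decode itself is plain base64; with jsonpath, kubectl can even pull and decode the field in one step:

kubectl get secret test-credentials -n testing -o jsonpath='{.data.auth}' | base64 --decode

The decoded value is an htpasswd-style user:hash line; to modify it, generate a new line with htpasswd, base64-encode it, and drop it back into the data.auth field.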

Install kubectl in your Docker image, the easy way

Most of the time, when I install software on my Docker images, I add a rather hairy RUN command which does something like:

  1. Install some dependencies for key management.
  2. Add a GPG key for a new software repository.
  3. Install software from that new software repository.
  4. Clean up apt/yum/dnf caches to save a little space.
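Spelled out for kubectl on a Debian-based image, that pattern looks something like the following sketch (the pkgs.k8s.io repository path and pinned minor version are assumptions; they drift over time, so check kubernetes.io for current values):

# 1. Install dependencies for key management.
RUN apt-get update && apt-get install -y apt-transport-https ca-certificates curl gnupg \
  # 2. Add the signing key and repository definition for the Kubernetes packages.
  && mkdir -p /etc/apt/keyrings \
  && curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.30/deb/Release.key \
     | gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg \
  && echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.30/deb/ /' \
     > /etc/apt/sources.list.d/kubernetes.list \
  # 3. Install kubectl from that repository.
  && apt-get update && apt-get install -y kubectl \
  # 4. Clean up apt caches to save a little space.
  && rm -rf /var/lib/apt/lists/*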

This is all well and good, and it's the most commonly recommended way to install kubectl, but it's not without its drawbacks:

Drupal startup time and opcache - faster scaling for PHP in containerized environments

Lately I've been spending a lot of time working with Drupal in Kubernetes and other containerized environments; one problem that's bothered me is the fact that when autoscaling Drupal, it always takes at least a few seconds to get a new Drupal instance running. Not installing Drupal, configuring the database, or building caches; none of that. I'm just talking about having a Drupal site that's already operational, and scaling up by adding an additional Drupal instance or container.

One of the principles of the 12 Factor App is:

IX. Disposability

Maximize robustness with fast startup and graceful shutdown.

Disposability is important because it enables easy, fast code deployments and autoscaling, and high availability. It also forces you to make your code stateless and efficient, so it starts up fast even with a cold cache. Read more about the disposability factor on the 12factor site.
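For PHP apps, one concrete culprit is a cold opcache: the first request to a new container has to compile every PHP file from scratch, which for a Drupal codebase means thousands of files. A sketch of php.ini opcache settings that at least keep that a one-time cost per container (the values are illustrative starting points, not tuned numbers):

; Give opcache enough room that Drupal's codebase fits entirely.
opcache.enable=1
opcache.memory_consumption=256
opcache.max_accelerated_files=20000
; In immutable containers, skip re-checking files for changes on every request.
opcache.validate_timestamps=0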

Using BLT with Config Split outside Acquia Cloud or Pantheon Hosting

I am currently building a Drupal 8 application which runs outside Acquia Cloud, and I noticed there are a few 'magic' settings I'm used to from working on Acquia Cloud which don't work if you aren't inside an Acquia or Pantheon environment; most notably, the automatic Configuration Split settings choice (for environments like local, dev, and prod) doesn't work if you're in a custom hosting environment.

You basically have to reset the settings BLT provides, and tell Drupal which config split should be active based on your own logic. In my case, I have a site which only has local, ci, and prod environments. To override the settings defined in BLT's included config.settings.php file, I created my own config.settings.php in my site at docroot/sites/settings/config.settings.php, which picks the active split based on the environment.
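A sketch of what that file can look like, assuming the environment name arrives in an ENVIRONMENT variable (adjust the detection to however your hosting exposes it):

<?php

// Determine the current environment; default to local.
$split = getenv('ENVIRONMENT') ?: 'local';

// Disable every split, then enable only the one for this environment.
foreach (['local', 'ci', 'prod'] as $name) {
  $config["config_split.config_split.$name"]['status'] = FALSE;
}
$config["config_split.config_split.$split"]['status'] = TRUE;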

Fixing '503 Service Unavailable' and 'Endpoints not available' for Traefik Ingress in Kubernetes

In a Kubernetes cluster I'm building, I was quite puzzled when setting up Ingress for one of my applications—in this case, Jenkins.

I had created a Deployment for Jenkins (in the jenkins namespace), and an associated Service, which exposed port 80 on a ClusterIP. Then I added an Ingress resource which directed the URL jenkins.example.com at the jenkins Service on port 80.
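For reference, that Service and Ingress pair looks roughly like this (a sketch using the current networking.k8s.io/v1 Ingress API; the pod selector and container port are assumptions):

apiVersion: v1
kind: Service
metadata:
  name: jenkins
  namespace: jenkins
spec:
  type: ClusterIP
  # This selector must match the Deployment's pod labels, or the
  # Service ends up with no endpoints.
  selector:
    app: jenkins
  ports:
    - port: 80
      targetPort: 8080
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: jenkins
  namespace: jenkins
spec:
  rules:
    - host: jenkins.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: jenkins
                port:
                  number: 80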

Inspecting both the Service and Ingress resource with kubectl get svc -n jenkins and kubectl get ingress -n jenkins, respectively, showed everything seemed to be configured correctly:

AnsibleFest 2018 is a Wrap! Slides from my presentation and notes

AnsibleFest 2018 is in the books, and it was a great conference! I was able to attend the 'Contributors Summit' in Austin on Monday and remotely on Thursday, and I learned quite a bit! I also presented Make your Ansible playbooks maintainable, flexible, and scalable on both days of the conference. Slides from that session are available below, but you'll have to wait for the actual video to be uploaded to see the fun little gimmick I added for the live presentation 🙃.

Kubernetes' Complexity

Over the past month, I started rebuilding the Raspberry Pi Dramble project using Kubernetes instead of installing and configuring the LEMP stack directly on nodes via Ansible (track GitHub issues here). Along the way, I've hit tons of minor issues with the installation, and I wanted to document some of the things I think turn people away from Kubernetes early in the learning process. Kubernetes is definitely not the answer to all application hosting problems, but it is a great fit for some, and it would be a shame for someone who could really benefit from Kubernetes to be stumped and turn to some other solution that costs more in time, money, or maintenance!

Raspberry Pi Dramble cluster running Kubernetes with Green LEDs

Reintroducing the sanity of CM to container management

Recently, Ansible introduced Ansible Container, a tool that builds and orchestrates Docker containers.

While tools that build and orchestrate Docker containers are a dime a dozen these days (seriously... Kubernetes, Mesos, Rancher, Fleet, Swarm, Deis, Kontena, Flynn, Serf, Clocker, Paz, Docker 1.12+ built-in, not to mention dozens of PaaSes), many are built in the weirdly-isolated world of "I only manage containers, and don't manage other infrastructure tasks."

The cool thing about using Ansible to do your container builds and orchestration is that Ansible can also do your networking configuration. And your infrastructure provisioning. And your legacy infrastructure configuration. And on top of that, Ansible is, IMO, the best-in-class configuration management tool—easy for developers and sysadmins to learn and use effectively, and as efficient/terse as (but much more powerful than) shell scripts.

From Ansible Container's own README:
