
Raspberry Pi Cluster Episode 1 - Introduction to Clusters

I will be posting a few videos discussing cluster computing with the Raspberry Pi in the next few weeks, and I'll publish each video + transcript to my blog so you can follow along even if you don't enjoy sitting through a video :)


This is a Raspberry Pi Compute Module.


And this is a stack of 7 Raspberry Pi Compute Modules.

Everyone might be a cluster-admin in your Kubernetes cluster

Quite often, when I dive into someone's Kubernetes cluster to debug a problem, I realize whatever pod I'm running has way too many permissions. Often, my pod has the cluster-admin role applied to it through its default ServiceAccount.

Sometimes this role was added because someone wanted to make their CI/CD tool (e.g. Jenkins) manage Kubernetes resources in the cluster, and it was easier to apply cluster-admin to a default service account than to set all the individual RBAC privileges correctly. Other times, it was because someone found a new shiny tool and blindly installed it.
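
For comparison, here's roughly what the more-effort route looks like: a namespaced Role plus RoleBinding that lets a CI ServiceAccount manage Deployments in a single namespace, instead of handing it the whole cluster. All the names here are placeholders, not from any particular setup:

# A scoped Role: manage Deployments in the 'apps' namespace only.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: ci-deployer
  namespace: apps
rules:
  - apiGroups: ["apps"]
    resources: ["deployments"]
    verbs: ["get", "list", "create", "update", "patch"]
---
# Bind it to a hypothetical Jenkins ServiceAccount living in a 'ci' namespace.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: ci-deployer
  namespace: apps
subjects:
  - kind: ServiceAccount
    name: jenkins
    namespace: ci
roleRef:
  kind: Role
  name: ci-deployer
  apiGroup: rbac.authorization.k8s.io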

One such example I remember seeing recently is the spekt8 project; in its installation instructions, it tells you to apply an RBAC manifest:

kubectl apply -f https://raw.githubusercontent.com/spekt8/spekt8/master/fabric8-rbac.yaml

What the installation guide doesn't tell you is that this manifest grants cluster-admin privileges to every single Pod in the default namespace!
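
For reference, the binding in that manifest is shaped something like this (a reconstruction of its effect, not the verbatim file):

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: fabric8-rbac
subjects:
  - kind: ServiceAccount
    name: default        # the ServiceAccount every Pod in 'default' uses unless overridden
    namespace: default
roleRef:
  kind: ClusterRole
  name: cluster-admin    # unrestricted access to the entire cluster
  apiGroup: rbac.authorization.k8s.io

Any Pod running as that default ServiceAccount now carries full cluster-admin credentials.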

The Kubernetes Collection for Ansible

October 2020 Update: This post still contains relevant information, with one update: the community.kubernetes collection is moving to kubernetes.core. Otherwise everything's the same; it's just a name change.
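
In practice the rename is a one-line change wherever you reference the collection; a quick sketch (the namespace and module name below are just illustrating the rename):

# requirements.yml
collections:
  - name: kubernetes.core    # formerly community.kubernetes

# In playbooks, only the FQCN prefix changes:
- name: Ensure a namespace exists
  kubernetes.core.k8s:       # was community.kubernetes.k8s
    api_version: v1
    kind: Namespace
    name: example
    state: present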


The Ansible community has long been a victim of its own success. Since I got started with Ansible in 2013, the growth in the number of Ansible modules and plugins has been astronomical. That's what happens when you build a very simple but powerful tool—easy enough for anyone to extend into any automation use case.

Debugging networking issues with multi-node Kubernetes on VirtualBox

Since this is the third time I've burned more than a few hours on this particular problem, I thought I'd finally write up a blog post. Hopefully I'll find this post in the future, the fourth time I run into the problem.

What problem is that? Well, when I build a new Kubernetes cluster with multiple nodes in VirtualBox (usually orchestrated with Vagrant and Ansible, using my geerlingguy.kubernetes role), I get everything running. kubectl works fine, all pods (including CoreDNS, Flannel or Calico, kube-apiserver, the scheduler) report Running, and everything in the cluster seems right. But there are lots of strange networking issues.

Sometimes internal DNS queries work; most of the time they don't. I can't ping other pods by their IP address. Some of the debugging I do includes checks like these:
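
To sketch the first couple of checks I reach for (the busybox image tag and pod name here are just placeholders, nothing canonical):

# Does cluster DNS resolve from inside a pod?
kubectl run -it --rm dnstest --image=busybox:1.28 --restart=Never -- nslookup kubernetes.default

# What IPs did the pods actually get, and on which nodes?
kubectl get pods --all-namespaces -o wide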

Everything I know about Kubernetes I learned from a cluster of Raspberry Pis

I realized I haven't posted about my DrupalCon Seattle 2019 session titled Everything I know about Kubernetes I learned from a cluster of Raspberry Pis, so I thought I'd remedy that. First, here's a video of the recorded session:


The original Pi Dramble 6-node cluster, running the LAMP stack.

Run Ansible Tower or AWX in Kubernetes or OpenShift with the Tower Operator

Note: The Tower Operator this post references is currently in early alpha status and has no official support from Red Hat. If you're planning to use Tower in production and have a Red Hat Ansible Automation subscription, you should use one of the official Tower installation methods. Someday the operator may become a supported install method, but it is not right now.

I have been building a variety of Kubernetes Operators using the Operator SDK. Operators make managing applications in Kubernetes (and OpenShift/OCP) clusters very easy, because you can capture the entire application lifecycle in the Operator's logic.
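
To give a feel for what that looks like as a user: you describe the application as a custom resource and let the operator handle the rest. This is a hypothetical sketch; the real resource kind, API group, and spec fields may differ from the actual Tower Operator:

apiVersion: tower.ansible.com/v1alpha1   # illustrative group/version
kind: Tower
metadata:
  name: example-tower
  namespace: ansible-tower
spec:
  tower_hostname: tower.example.com      # hypothetical spec fields
  tower_admin_user: admin

Apply that with kubectl, and the operator's reconciliation loop takes care of creating and maintaining the underlying Deployments, Services, and so on.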


Mcrouter Operator - demonstration of K8s Operator SDK usage with Ansible

It wouldn't surprise me if you've never heard of Mcrouter. Described by Facebook as "a memcached protocol router for scaling memcached deployments", it's not the kind of software that everyone needs.

There are many scenarios where a key-value cache is necessary, and in probably 90% of them, running a single Redis or Memcached instance would adequately serve the application's needs. There are more exotic use cases, though, where you need better horizontal scaling and consistency.
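
Mcrouter itself is configured with a small JSON document describing pools of memcached servers and a route through them. A minimal sketch (hostnames are placeholders, and the launch flag spelling varies between releases, so check mcrouter --help for your build):

{
  "pools": {
    "A": {
      "servers": ["memcached-1:11211", "memcached-2:11211"]
    }
  },
  "route": "PoolRoute|A"
}

mcrouter -p 5000 --config file:/etc/mcrouter/config.json

Your application keeps speaking plain memcached protocol to port 5000, and mcrouter handles distributing keys across the pool.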

Trying out CRC (Code Ready Containers) to run OpenShift 4.x locally

I've been working a bit with Red Hat lately, and one of the products that has intrigued me is their OpenShift Kubernetes platform; it's kind of like Kubernetes, but made to be more palatable and UI-driven... at least that's my initial take after taking it for a spin with both Minishift (which works with OpenShift 3.x) and CRC (which works with OpenShift 4.x).

Because it took me a bit of time to figure out a few details in testing things with OpenShift 4.1 and CRC, I thought I'd write up a blog post detailing my learning process. It might help someone else who wants to get things going locally!

CRC System Requirements

First things first, you need a decent workstation to run OpenShift 4. The minimum requirements are 4 vCPUs, 8 GB RAM, and 35 GB disk space. And even with that, I constantly saw HyperKit (the VM backend CRC uses) consuming 100-200% CPU and 12+ GB of RAM (sheesh!).
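
If you want to give the VM more headroom than the minimums, CRC takes flags for that. Flag names here are from the releases I tried; double-check crc start --help on yours:

crc setup                            # one-time host configuration
crc start --cpus 6 --memory 16384    # 6 vCPUs, 16 GiB RAM (value is in MiB)
eval $(crc oc-env)                   # put the bundled oc client on your PATH
crc console --credentials            # print the cluster login credentials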

A Drupal Operator for Kubernetes with the Ansible Operator SDK

Kubernetes is taking over the world of infrastructure management, at least for larger-scale operations, and best practices have started to solidify. One of those best practices is the cultivation of Custom Resource Definitions (CRDs) to describe your applications in a Kubernetes-native way, and Operators to manage the Custom Resources running on your Kubernetes clusters.
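
If CRDs are new to you, the idea is that you teach the API server a new resource type, then manage instances of it like any built-in object. A bare-bones sketch with made-up names (the real Drupal Operator's group and kind may differ):

apiVersion: apiextensions.k8s.io/v1beta1   # the CRD API of this era
kind: CustomResourceDefinition
metadata:
  name: drupals.example.com
spec:
  group: example.com
  versions:
    - name: v1alpha1
      served: true
      storage: true
  scope: Namespaced
  names:
    plural: drupals
    singular: drupal
    kind: Drupal

Once that's applied, kubectl get drupals works just like kubectl get pods.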

In the Drupal community, Kubernetes uptake has been somewhat slow, but is on the rise. Just like with Docker adoption for local development, the tooling and documentation have been slowly percolating. For example, Tess Flynn from TEN7 has been boldly going where no one has gone before (oops, wrong sci-fi series!) using the Force to promote Drupal usage in a Kubernetes environment.

Moving on, aka 'New job, 2019 edition'

Since 2014, I've been working for Acquia, doing some fun work with a great team in Professional Services. I started out managing some huge Drupal site builds for Acquia clients, and ended up devoting all my time for the past couple years to some major infrastructure projects, diving deeper into operations work, Ansible, AWS, Docker, and Kubernetes in production.

In that same time period, I began work on my second book, Ansible for Kubernetes, but have not had the dedicated time to get too deep into writing—especially now that I have three young kids. When I started writing Ansible for DevOps, I had one newborn!