kubernetes

Raspberry Pi Cluster Episode 6 - Turing Pi Review

A few months ago, in the 'before times', I noticed this post on Hacker News mentioning the Turing Pi, a 'Plug & Play Raspberry Pi Cluster' that sits on your desk.

It caught my attention because I've been running my own old-fashioned 'Raspberry Pi Dramble' cluster since 2015.

Raspberry Pi Dramble Cluster with Sticker - 2019 PoE Edition

So today, I'm wrapping up my Raspberry Pi Cluster series with my thoughts about the Turing Pi that I used to build a 7-node Kubernetes cluster.

Video version of this post

This blog post has a companion video embedded below:

Raspberry Pi Cluster Episode 4 - Minecraft, Pi-hole, Grafana and More!

This is the fourth video in a series discussing cluster computing with the Raspberry Pi, and I'm posting the video + transcript to my blog so you can follow along even if you don't enjoy sitting through a video :)

In the last episode, I showed you how to install Kubernetes on the Turing Pi cluster, running on seven Raspberry Pi Compute Modules.

In this episode, I'm going to show you some of the things you can do with the cluster.

10,000 Kubernetes Pods for 10,000 Subscribers

It started with a tweet, how did it end up like this?

I've had a YouTube channel since 2006—back when YouTube was a plucky upstart battling against Google Video (not Google Videos) and Vimeo. I started livestreaming a couple of months ago on a whim, and since then I've gained more subscribers than I had gained from 2006 to 2020!

So it seems fitting that I find some nerdy way to celebrate. After all, if Colin Furze can celebrate his milestones with ridiculous fireworks displays, I can do ... something?

Raspberry Pi Cluster Episode 2 - Setting up the Cluster

This post is based on one of the videos in my series on Raspberry Pi Clustering, and I'm posting the video + transcript to my blog so you can follow along even if you don't enjoy sitting through a video :)


In the first episode, I talked about how and why I build Raspberry Pi clusters.

I mentioned my Raspberry Pi Dramble cluster, and how it's evolved over the past five years.

Raspberry Pi Cluster Episode 1 - Introduction to Clusters

Over the next few weeks I'll be posting a few videos discussing cluster computing with the Raspberry Pi, and I'll post each video + transcript to my blog so you can follow along even if you don't enjoy sitting through a video :)


This is a Raspberry Pi Compute Module.

7 Raspberry Pi Compute Modules in a stack

And this is a stack of 7 Raspberry Pi Compute Modules.

Everyone might be a cluster-admin in your Kubernetes cluster

Quite often, when I dive into someone's Kubernetes cluster to debug a problem, I realize whatever pod I'm running has way too many permissions. Often, my pod has the cluster-admin role applied to it through its default ServiceAccount.

Sometimes this role was added because someone wanted their CI/CD tool (e.g. Jenkins) to manage Kubernetes resources in the cluster, and it was easier to apply cluster-admin to a default service account than to set all the individual RBAC privileges correctly. Other times, it was because someone found a shiny new tool and blindly installed it.
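For comparison, a narrowly scoped alternative doesn't take much more effort. This is only a sketch—the role name, namespace, verbs, and ServiceAccount here are hypothetical examples, not a one-size-fits-all policy—but it shows the general shape of granting a CI tool only what it needs:

# Create a Role limited to managing Deployments and Pods in one namespace.
kubectl create role ci-deployer \
  --verb=get,list,watch,create,update,patch,delete \
  --resource=deployments,pods \
  --namespace=ci

# Bind that Role to the ServiceAccount the CI tool actually runs under.
kubectl create rolebinding ci-deployer-binding \
  --role=ci-deployer \
  --serviceaccount=ci:jenkins \
  --namespace=ci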

One such example I remember seeing recently is the spekt8 project; in its installation instructions, it tells you to apply an RBAC manifest:

kubectl apply -f https://raw.githubusercontent.com/spekt8/spekt8/master/fabric8-rbac.yaml

What the installation guide doesn't tell you is that this manifest grants cluster-admin privileges to every single Pod in the default namespace!
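Based on that description, the manifest is roughly equivalent to binding the cluster-admin ClusterRole to the default ServiceAccount in the default namespace—something like the first command below (the binding name is just an illustration). The kubectl auth can-i commands then show how to audit what a ServiceAccount is actually allowed to do after applying a manifest like that:

# Roughly what the manifest does: bind cluster-admin to the default ServiceAccount.
kubectl create clusterrolebinding spekt8-example \
  --clusterrole=cluster-admin \
  --serviceaccount=default:default

# Audit what the default ServiceAccount can now do (spoiler: everything).
kubectl auth can-i --list --as=system:serviceaccount:default:default
kubectl auth can-i '*' '*' --as=system:serviceaccount:default:default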

The Kubernetes Collection for Ansible

October 2020 Update: This post still contains relevant information, with one change: the community.kubernetes collection is moving to kubernetes.core. Otherwise everything's the same; only the name is changing.
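If you want to get ahead of the rename, the switch is mostly mechanical: install the new collection and update module references in your playbooks. A minimal sketch—the playbooks/ path is just an example from my own layout:

# Install the renamed collection from Ansible Galaxy.
ansible-galaxy collection install kubernetes.core

# Find references to the old namespace so you can update them,
# e.g. community.kubernetes.k8s becomes kubernetes.core.k8s.
grep -r 'community.kubernetes' playbooks/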

Opera-bull with Ansible bull looking on

The Ansible community has long been a victim of its own success. Since I got started with Ansible in 2013, the growth in the number of Ansible modules and plugins has been astronomical. That's what happens when you build a very simple but powerful tool—easy enough for anyone to extend into any automation use case.

Debugging networking issues with multi-node Kubernetes on VirtualBox

Since this is the third time I've burned more than a few hours on this particular problem, I thought I'd finally write up a blog post. Hopefully I'll find this post the fourth time I run into the problem.

What problem is that? Well, when I build a new Kubernetes cluster with multiple nodes in VirtualBox (usually orchestrated with Vagrant and Ansible, using my geerlingguy.kubernetes role), I get everything running. kubectl works fine, all pods (including CoreDNS, Flannel or Calico, kube-apiserver, the scheduler) report Running, and everything in the cluster seems right. But there are lots of strange networking issues.

Sometimes internal DNS queries work; most of the time they don't. I can't ping other pods by their IP address. Some of the debugging I do includes checks like the ones below.
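These are representative commands rather than an exact recipe—the test pod names, the busybox image, and the example pod IP are arbitrary, and the exact steps vary by cluster:

# Check whether in-cluster DNS resolves at all, from a throwaway pod.
kubectl run dnstest --image=busybox:1.28 --restart=Never --rm -it -- \
  nslookup kubernetes.default

# List pod IPs and the nodes they landed on.
kubectl get pods --all-namespaces -o wide

# Try to reach another pod's IP directly (substitute a real pod IP).
kubectl run pingtest --image=busybox:1.28 --restart=Never --rm -it -- \
  ping -c 3 10.244.1.2

# Look at CoreDNS logs for errors (kubeadm clusters label it k8s-app=kube-dns).
kubectl logs -n kube-system -l k8s-app=kube-dns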