
Install Python 3.9 on Raspberry Pi OS or Debian 10 (for Ansible or other uses)

I've started getting a lot of bug reports on my repos to the effect of "Ansible won't install on my Raspberry Pi anymore". Accompanying them is an error message like one of the following:

$ python3 -m pip install ansible
...
No matching distribution found for ansible-core<2.13,>=2.12.0 (from ansible)

# Alternatively:
ERROR: No matching distribution found for ansible-core<2.13,>=2.12.0

The problem is that ansible-core 2.12 has a new hard requirement for Python 3.8 or newer on the machine where Ansible is installed. And ansible-core 2.12 is included in Ansible 5.0.0, which was recently released. Raspberry Pi OS, which was based on Debian 10 ("Buster") until recently, includes Python 3.7, which is too old to satisfy Ansible's installation requirements.
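
You can confirm that's the issue by checking the system Python version; on a Debian 10-based install it reports the 3.7 series (the exact patch release below is just an example):

$ python3 --version
Python 3.7.3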

There was recently a packaging fix that prevents Ansible 5.x from being installed on these older systems, but who wants to be stuck on old, unsupported Ansible versions?

There are three options:
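
Installing a newer Python yourself is one common route. Here's a rough sketch of building 3.9 from source; the dependency packages and the exact 3.9.x release are illustrative:

$ sudo apt install -y build-essential libssl-dev zlib1g-dev \
    libbz2-dev libreadline-dev libsqlite3-dev libffi-dev
$ wget https://www.python.org/ftp/python/3.9.9/Python-3.9.9.tgz
$ tar xzf Python-3.9.9.tgz && cd Python-3.9.9
$ ./configure --enable-optimizations
$ make -j4
# altinstall installs python3.9 alongside the system python3 instead of replacing it.
$ sudo make altinstall
$ python3.9 -m pip install ansible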

Trying out CRC (CodeReady Containers) to run OpenShift 4.x locally

I've been working a bit with Red Hat lately, and one of the products that has intrigued me is their OpenShift Kubernetes platform; it's Kubernetes under the hood, but made to be more palatable and UI-driven... at least that's my initial take after taking it for a spin using both Minishift (which works with OpenShift 3.x) and CRC (which works with OpenShift 4.x).

Because it took me a bit of time to figure out a few details in testing things with OpenShift 4.1 and CRC, I thought I'd write up a blog post detailing my learning process. It might help someone else who wants to get things going locally!

CRC System Requirements

First things first, you need a decent workstation to run OpenShift 4. The minimum requirements are 4 vCPUs, 8 GB of RAM, and 35 GB of disk space. And even with that, I constantly saw HyperKit (the VM backend CRC uses on macOS) consuming 100-200% CPU and 12+ GB of RAM (sheesh!).
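
If you want to give the cluster VM more headroom than the defaults, CRC lets you set its resource allocation when you start it. A rough sketch follows; the exact flags vary between CRC releases, and the pull secret path is a placeholder:

$ crc setup
$ crc start --cpus 6 --memory 12288 --pull-secret-file ~/Downloads/pull-secret.txt
# Once the cluster is up, grab the generated credentials and the oc client environment.
$ crc console --credentials
$ eval $(crc oc-env)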

Discovering whether an Ansible component is 'core' or 'community'

As you get deeper into your journey using Ansible, you might start filing issues on GitHub, chatting in #ansible on Freenode IRC, or otherwise interacting more with the Ansible community. Because the Ansible community has grown tremendously over the years, and because Ansible has been subsumed by Red Hat (which offers various support plans for Ansible), there's been a greater distinction between the parts of Ansible that are 'core' (i.e. maintained by the Ansible Engineering Team) and those that are not.

When everything works, and when security and compliance requirements are fairly relaxed, you might never care about the support status of Ansible components (modules, plugins, filters, Galaxy content). But if something goes wrong, or if there are security or compliance concerns, it's important to be able to figure out what's core, what's 'certified' by Red Hat, and what's not.
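
Since Ansible 2.10, the collection a module ships in is a good first clue, and you can check that from the command line. A quick sketch (the module names here are just examples):

# Anything in the ansible.builtin collection ships with ansible-core itself.
$ ansible-doc ansible.builtin.apt

# Modules from other collections, like community.general, are community-maintained.
$ ansible-doc community.general.htpasswd

# List every collection installed alongside your copy of Ansible.
$ ansible-galaxy collection list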

Updating all your servers with Ansible

From time to time, there's a security patch or other update that's critical to apply ASAP to all your servers. If you use Ansible to automate infrastructure work, then updates are painless—even across dozens, hundreds, or thousands of instances! I've written about this a little bit in the past, in relation to protecting against the shellshock vulnerability, but that was specific to one package.

I have an inventory script that pulls together all the servers I manage for personal projects (including the server running this website) and organizes them by OS, so I can run a command against a particular OS group with ansible [os] command. That enables me to run commands like:
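
Here's a sketch of what those look like; the group names are placeholders for whatever OS-based groups the inventory defines:

# Upgrade all packages on Debian-family hosts, refreshing the apt cache first.
$ ansible debian -b -m apt -a "upgrade=dist update_cache=yes"

# The equivalent for RHEL-family hosts, using the dnf module.
$ ansible redhat -b -m dnf -a "name=* state=latest"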