
Getting colorized output from Molecule and Ansible on GitHub Actions for CI

For many new Ansible-based projects, I build my tests in Molecule, so I can easily run them locally or in CI. I've also started using GitHub Actions for many of my new Ansible projects, because it's so easy to set up and integrate with GitHub repositories.

I'm going to talk about this strategy in my next Ansible 101 live stream, covering Testing Ansible playbooks with Molecule and GitHub Actions CI, but I also wanted to highlight one thing that helps me when reviewing playbook and Molecule output: color.

By default, in an interactive terminal session, Ansible colorizes its output: failures are red, ok results are green, and changed results are yellow. Warnings get a magenta color, which flags them well so you can fix them as soon as possible (that's one core principle I advocate to make your playbooks maintainable and scalable).
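
In CI there's no interactive TTY, so Ansible and Molecule disable color by default. Here's a minimal sketch of forcing it back on in a GitHub Actions workflow (these are standard environment variables, not anything specific to my projects):

env:
  ANSIBLE_FORCE_COLOR: '1'  # make Ansible colorize output even without a TTY
  PY_COLORS: '1'            # same for Python tools like Molecule and pytest

Set at the top level of the workflow file, these variables apply to every job and step.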

Install Drupal Coder and PHP CodeSniffer to your Drupal project to lint PHP code

The official Coder Sniffer install guide on Drupal.org recommends installing Coder and the Drupal code sniffs globally, using the command:

composer global require drupal/coder

I don't particularly like doing that, because I try to encapsulate all of a project's requirements within the project itself. I'm often working on numerous projects, some on different versions of PHP or Drupal, and installing dependencies globally can cause breakage.

So instead, I've done the following (see this issue) for my new JeffGeerling.com Drupal 8 site codebase (in case you haven't seen, I'm live-streaming the entire migration process!).
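
The gist of the per-project setup is something like this (a sketch; it assumes Composer's default vendor directory, and the exact steps are in the linked issue):

composer require --dev drupal/coder
./vendor/bin/phpcs --config-set installed_paths vendor/drupal/coder/coder_sniffer
./vendor/bin/phpcs --standard=Drupal,DrupalPractice web/modules/custom  # path is illustrative

This way Coder is pinned in the project's composer.json, so everyone working on the codebase lints with the same version.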

Automatically building and publishing Ansible Galaxy Collections

I maintain a large number of Ansible Galaxy roles, and publish hundreds of new releases every year. If the process weren't fully automated, there would be no way I could keep up. For Galaxy roles, tagging and publishing a new release is very simple, because Ansible Galaxy ties roles tightly to GitHub's release system. All that's needed is a webhook in your .travis.yml file (if using Travis CI):

notifications:
  webhooks: https://galaxy.ansible.com/api/v1/notifications/

For collections, Ansible Galaxy actually hosts an artifact—a .tar.gz file containing the collection contents. This offers some benefits that I won't get into here, but also a challenge: someone has to build and upload that artifact... and that takes more than one or two lines added to a .travis.yml file.

Until recently, I had been publishing collection releases manually, building the artifact locally and uploading it to Galaxy by hand.
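
At its core, that manual process boils down to the ansible-galaxy CLI's build and publish subcommands, roughly like this (the artifact name and API key are placeholders):

ansible-galaxy collection build
ansible-galaxy collection publish ./geerlingguy-k8s-1.0.0.tar.gz --api-key="$GALAXY_API_KEY"

...plus the human parts: bumping the version in galaxy.yml, tagging a release, and remembering to do it all in the right order.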

Running a GitHub Actions workflow on a schedule and other events

One thing that was not obvious when I was setting up GitHub Actions on the Ansible Kubernetes Collection repository was how to have a 'CI' workflow run both on pull requests and on a schedule. I like to have scheduled runs for most of my projects, so I can see if something starts failing because an underlying dependency changes and breaks my tests.

The documentation for on.schedule only shows a workflow running on a schedule. For example:

on:
  schedule:
    # * is a special character in YAML so you have to quote this string
    - cron: '*/15 * * * *'

Separately, there's documentation for triggering a workflow on events like a 'push' or a 'pull_request'.
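
It turns out you can simply combine the triggers under one on: key. Here's a minimal sketch of a workflow that runs on pushes, pull requests, and a weekly schedule (the branch name and cron expression are illustrative):

on:
  push:
    branches:
      - master
  pull_request:
  schedule:
    - cron: '0 6 * * 0'  # every Sunday at 06:00 UTC

With that in place, the same 'CI' workflow covers both code review and the scheduled canary runs.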

Testing your Ansible roles with Molecule

After the announcement on September 26 that Ansible would be adopting Molecule and ansible-lint as official 'Ansible by Red Hat' projects, I started moving more of my public Ansible projects over to Molecule-based tests, instead of the homegrown Docker-based Ansible testing rig I'd been using for a few years.

Molecule sticker in front of an AnsibleFest 2018 sticker

There was also a bit of motivation from readers of Ansible for DevOps, many of whom have asked for a new section on Molecule specifically!

In this blog post, I'll walk you through how to use Molecule, and how I converted all my existing roles (which were using a different testing system) to use Molecule and Ansible Lint-based tests.
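
To give a taste of where we'll end up, here's a minimal molecule.yml sketch for testing a role in a Docker container (the platform image is just an example):

dependency:
  name: galaxy
driver:
  name: docker
platforms:
  - name: instance
    image: geerlingguy/docker-ubuntu1804-ansible:latest
provisioner:
  name: ansible

With that in place, molecule test lints the role, brings up the container, runs the role's playbook, verifies idempotence, and destroys the instance.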

Idempotently adding an SSH key for a host to known_hosts file with bash

I noticed on one of the CI servers I'm running that the .ssh/known_hosts file had ballooned up to over 1,000,000 lines!

Looking into the root cause (I tailed the file until I could track down a few jobs that ran every minute), I found the following line in a setup script:

ssh-keyscan -t rsa github.com >> /var/lib/jenkins/.ssh/known_hosts

"This can't be good!" I told myself, and I decided to add a condition to make it idempotent (that is, able to be run once or one million times but only affecting change the first time it's run—basically, a way to change something only if the change is required):

if ! grep -q "^github.com" /var/lib/jenkins/.ssh/known_hosts; then
  ssh-keyscan -t rsa github.com >> /var/lib/jenkins/.ssh/known_hosts
fi

Now the host key for github.com is only scanned the first time that script runs, and it is only stored in known_hosts once... instead of millions of times!

Properly deploying updates to or shutting down Jenkins

One of my most popular Ansible roles is the geerlingguy.jenkins role, and for good reason—Jenkins is pretty much the premier open source CI tool, and has been used for many years by Ops and Dev teams all over the place.

As Jenkins (or any other CI tool) is adopted more fully for automating all aspects of infrastructure work, you begin to realize how important the Jenkins server(s) become to your daily operations. Then you realize you need CI for your CI, and version control and deployment processes for things like Jenkins updates, job updates, etc. The geerlingguy.jenkins role helps a lot with the main component, automating the Jenkins install and configuration, and on top of that you can add a task that copies a config.xml file for each job definition into your $JENKINS_HOME, to ensure every job and every configuration is in code...
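
A sketch of what that kind of task can look like (the jenkins_home and jenkins_jobs variables here are illustrative, not part of the role):

- name: Ensure job definitions are present in Jenkins home.
  copy:
    src: "jobs/{{ item }}/config.xml"
    dest: "{{ jenkins_home }}/jobs/{{ item }}/config.xml"
    owner: jenkins
    group: jenkins
    mode: '0644'
  with_items: "{{ jenkins_jobs }}"
  notify: restart jenkins  # assumes a 'restart jenkins' handler is defined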

CI for Ansible playbooks which require Ansible Vault-protected variables

I use Ansible Vault to securely store a project's secrets (API keys, default passwords, private keys, etc.) in the git repository for many of my infrastructure projects. I also like to make sure I cover everything possible in automated tests/CI, usually using either Jenkins or Travis CI.

But this presents a conundrum: if some of your variables are encrypted with an Ansible Vault secret/passphrase, and that secret should itself be stored securely... how can you avoid storing it in your CI system, where you might not be able to guarantee its security?

The method I usually use for this case is including the Vault-encrypted vars at playbook runtime, using include_vars.
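
One way this can look in a playbook (the ci_test variable and file names are illustrative):

pre_tasks:
  - name: Include Vault-encrypted vars (for normal runs).
    include_vars: vars/vault.yml
    when: not (ci_test | default(false))

  - name: Include unencrypted dummy vars (for CI runs).
    include_vars: vars/ci-vault.yml
    when: ci_test | default(false)

In CI, you run the playbook with -e "ci_test=true" and commit a ci-vault.yml full of harmless placeholder values, so the real Vault passphrase never has to leave your team.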

Get started using Ansible AWX (Open Source Tower version) in one minute

Since yesterday's announcement that Ansible had released the code behind Ansible Tower, AWX, under an open source license, I've been working on an AWX Ansible role, a demo AWX Vagrant VM, and an AWX Ansible Container project.

As part of that last project, I've published two public Docker Hub images, awx_web and awx_task, which can be used with a docker-compose.yml file to build AWX locally in about as much time as it takes to download the Docker images.
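
A heavily truncated sketch of that kind of docker-compose.yml (the Docker Hub namespace and port mapping are assumptions here, and a real AWX setup also needs PostgreSQL, RabbitMQ, and memcached services alongside these two):

version: '3'
services:
  web:
    image: geerlingguy/awx_web:latest   # assumed namespace for the awx_web image
    ports:
      - "80:8052"                       # AWX's web service listens on 8052
  task:
    image: geerlingguy/awx_task:latest  # assumed namespace for the awx_task image
    hostname: awx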

Automating Your Automation with Ansible Tower

The following is an excerpt from Chapter 11 of Ansible for DevOps, a book on Ansible by Jeff Geerling. The example highlights the effectiveness of Ansible Tower for automating infrastructure operations, especially in a team environment.

Throughout this book, all the examples use Ansible's CLI to run playbooks and report back the results. For smaller teams, especially when everyone on the team is well-versed in Ansible and YAML syntax, and follows security best practices with playbooks and variables files, using the CLI can be a sustainable approach.

But for many organizations, there are needs that stretch basic CLI use too far: