continuous integration

Docker and systemd, getting rid of the dreaded 'Failed to connect to bus' error

The following error has been the bane of my existence for the past few months:

TASK [geerlingguy.containerd : Ensure containerd is started and enabled at boot.] ***
fatal: [instance]: FAILED! => {
  "changed": false,
  "cmd": "/bin/systemctl",
  "msg": "Failed to connect to bus: No such file or directory",
  "rc": 1,
  "stderr": "Failed to connect to bus: No such file or directory",
  "stderr_lines": [
    "Failed to connect to bus: No such file or directory"
  ],
  "stdout": "",
  "stdout_lines": []
}

I use Molecule with my Ansible roles and playbooks so their tests run in an identical environment both locally and in GitHub Actions. And many of my roles and playbooks need to verify that systemd services are configured and running correctly.

But Docker recently switched from cgroups v1 to cgroups v2, and that change started this 'Failed to connect to bus' business. systemd relied on some configuration that was easy enough to add in the past: you just had to run your containers with a couple of extra options.
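Under cgroups v1, those options amounted to mounting the host's cgroup filesystem into the container and running it privileged. The platform definition below is a minimal sketch of a Molecule scenario that also works under cgroups v2; the image name is illustrative, and the cgroupns_mode option requires a reasonably recent molecule-docker:

platforms:
  - name: instance
    # Illustrative image; any systemd-enabled test image works similarly.
    image: geerlingguy/docker-rockylinux8-ansible:latest
    pre_build_image: true
    # The classic cgroups v1 requirements: a cgroup mount and privileged mode.
    volumes:
      - /sys/fs/cgroup:/sys/fs/cgroup:rw
    privileged: true
    # The key addition for cgroups v2 hosts: share the host's cgroup namespace.
    cgroupns_mode: host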

Travis CI's new pricing plan threw a wrench in my open source works

I just spent the past 6 hours migrating some of my open source projects from Travis CI to GitHub Actions, and I thought I'd pause for a bit (12 hours into this project, probably 15-20 more to go) to jot down a few thoughts.

I am not one to look a gift horse in the mouth. For almost a decade, Travis CI made it possible for me to build—and maintain, for years—hundreds of open source projects.

I have built projects for Raspberry Pi, PHP, Python, Drupal, Ansible, Kubernetes, macOS, iOS, Android, Docker, Arduino, and more. And almost every single project I built got immediate integration with Travis CI.

Without that testing, and the ability to run tests on a schedule, I would have abandoned most of these projects. But with the testing, I'm able to keep up with build failures induced by bit rot over the years and review PRs more easily.

What went wrong with Travis CI?

From the outset, Travis CI was built to integrate with GitHub repositories and offer free open source CI. At one time it was showered with praise on Hacker News and elsewhere for its culture and ethos.

Running a GitHub Actions workflow on schedule and other events

One thing that was not obvious when I was setting up GitHub Actions on the Ansible Kubernetes Collection repository was how to have a 'CI' workflow run both on pull requests and on a schedule. I like to have scheduled runs for most of my projects, so I can see if something starts failing because an underlying dependency changes and breaks my tests.

The documentation for on.schedule only shows an example of a workflow running on a schedule:

on:
  schedule:
    # * is a special character in YAML so you have to quote this string
    - cron:  '*/15 * * * *'

Separately, there's documentation for triggering a workflow on events like a 'push' or a 'pull_request'.
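Combining the two is just a matter of listing every trigger under the same on block. A minimal sketch, with an illustrative branch name and cron expression:

on:
  pull_request:
  push:
    branches:
      - master
  schedule:
    # Illustrative: run every Wednesday at 06:00 UTC.
    - cron: '0 6 * * 3'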

Properly deploying updates to or shutting down Jenkins

One of my most popular Ansible roles is the geerlingguy.jenkins role, and for good reason: Jenkins is pretty much the premier open source CI tool, and has been used for many years by Ops and Dev teams all over the place.

As CI tools like Jenkins are adopted more fully for automating all aspects of infrastructure work, you begin to realize how important the Jenkins server(s) become to your daily operations. Then you realize you need CI for your CI, and version control and deployment processes for things like Jenkins updates, job updates, etc. The geerlingguy.jenkins role helps a lot with the main component, automating the Jenkins install and configuration, and on top of that you can add a task that copies a config.xml file for each job definition into your $JENKINS_HOME, to ensure every job and every configuration is in code...
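A minimal sketch of such a task, assuming job definitions are checked into the playbook's files/jobs/ directory and listed in a jenkins_jobs variable (both names are hypothetical, as is the restart jenkins handler):

# Hypothetical layout: files/jobs/<job-name>/config.xml in the playbook repo.
- name: Ensure each job's directory exists.
  ansible.builtin.file:
    path: "/var/lib/jenkins/jobs/{{ item }}"
    state: directory
    owner: jenkins
    group: jenkins
    mode: '0755'
  loop: "{{ jenkins_jobs }}"

- name: Copy each job's config.xml into place.
  ansible.builtin.copy:
    src: "jobs/{{ item }}/config.xml"
    dest: "/var/lib/jenkins/jobs/{{ item }}/config.xml"
    owner: jenkins
    group: jenkins
    mode: '0644'
  loop: "{{ jenkins_jobs }}"
  notify: restart jenkins  # hypothetical handler that restarts Jenkins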

Automating Your Automation with Ansible Tower

The following is an excerpt from Chapter 11 of Ansible for DevOps, a book on Ansible by Jeff Geerling. The example highlights the effectiveness of Ansible Tower for automating infrastructure operations, especially in a team environment.

Throughout this book, all the examples use Ansible's CLI to run playbooks and report back the results. For smaller teams, especially when everyone on the team is well-versed in how to use Ansible, YAML syntax, and follows security best practices with playbooks and variables files, using the CLI can be a sustainable approach.

But for many organizations, there are needs that stretch basic CLI use too far:

Testing Ansible Roles with Travis CI on GitHub

This post was originally written in 2014, using a technique that easily allows testing only on Ubuntu 12.04; since then, I've been adapting many of my roles (e.g. geerlingguy.apache) to use a Docker container-based testing approach, and I've written a new blog post that details the new technique: How I test Ansible configuration on 7 different OSes with Docker.

Since I'm now maintaining 37 roles on Ansible Galaxy, there's no way I can spend as much time reviewing every aspect of every role when doing maintenance, or when checking out pull requests to improve the roles. Automated testing using a continuous integration tool like Travis CI (which is free for public projects and integrates very well with GitHub) lets me run tests against my Ansible roles with every commit and be more assured nothing broke since the last commit.
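As a rough sketch of the idea, a minimal .travis.yml for an Ansible role can install Ansible and run a syntax check against a test playbook (the paths here are illustrative):

---
language: python
install:
  # Install the latest Ansible release from PyPI.
  - pip install ansible
script:
  # Basic sanity check: fail the build if the playbook's syntax is invalid.
  - ansible-playbook -i tests/inventory tests/test.yml --syntax-check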

CI: Deployments and Static Code Analysis with Drupal/PHP

tl;dr: Get the Vagrant profile for Drupal/PHP Continuous Integration Server from GitHub, and create a new VM (see the README on the GitHub project page). You now have a full-fledged Jenkins/Phing/SonarQube server for PHP/Drupal CI.

In this post, I'm going to explain how Jenkins, Phing and SonarQube can help you with your Drupal (or any PHP-based project) deployments and code quality, and walk you through installing and configuring them to work with your codebase. Bear with me... it's a long post!

Code Deployment

If you manage more than one environment (say, a development server, a testing/staging server, and a live production server), you've probably had to deal with the frustration of deploying changes to your code to these servers.

In the old days, people used FTP and manually copied files from environment to environment. Then FTP clients became smarter and allowed somewhat-intelligent file synchronization. Then, when version control software became the norm, people would use CVS, SVN, or, more recently, Git to push or check out code to different servers.

All the aforementioned deployment methods involve a lot of manual labor, usually in an FTP client or an SSH session. Modern server management tools like Ansible can help with more complicated environments, but wouldn't everything be much simpler if there were an easy way to deploy code to specific environments, and if those deployments could be automated to run on a schedule or whenever someone commits to a particular branch?

Enter Jenkins. Jenkins is your deployment assistant on steroids. Jenkins works with a wide variety of tools, programming languages and systems, and allows the automation (or radical simplification) of tasks surrounding code changes and deployments.

In my particular case, I use a dedicated Jenkins server to monitor a specific repository, and when there are commits to a development branch, Jenkins checks out that branch from Git, runs some PHP code analysis tools on the codebase using Phing, archives the code and other assets in a .tar.gz file, then deploys the code to a development server and runs some drush commands to complete the deployment.

Static Code Analysis / Code Review

If you're a solo developer, and you're the only one planning on ever touching the code you write, you can use whatever coding standards you want—spacing, variable naming, file structure, class layout, etc. don't really matter.

But if you ever plan on sharing your code with others (as a contributed theme or module), or if you need to work on a shared codebase, or if there's ever a possibility you will pass on your code to a new developer, it's a good idea to follow coding standards and write good code that doesn't contain too many WTFs/min.