
Patching or using a forked version of an Ansible Galaxy role

I maintain a lot of Ansible Galaxy roles. I probably have a problem, but I won't admit it, so I'll just keep adding more roles :)

One thing I see quite often is someone submitting a simple Pull Request for one of my roles on GitHub, then checking in here and there asking if I have had a chance to merge it yet. I'm guessing people who end up doing this might not know about one of the best features of Ansible Galaxy (and more generally, open source!): you can fork the role and maintain your changes in the fork, and it's pretty easy to do.

I just had to do it for one project I'm working on. I am using the rvm_io.ruby role to install specific versions of Ruby on some servers. But there seems to have been a breaking change to the upstream packages RVM uses, summarized in this GitHub issue. I found a pretty simple fix (removing one array item from a variable), and submitted this PR.
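The easiest way to use a fork while a PR waits is to point your project's requirements file at it. Here's a minimal sketch of the pattern; the GitHub URL and branch are placeholders, not my actual fork:

    # requirements.yml — install the role from a fork instead of Galaxy.
    - src: https://github.com/yourname/ansible-role-ruby
      name: rvm_io.ruby
      version: master

Running ansible-galaxy install -r requirements.yml --force then installs the forked copy anywhere the playbook expects the rvm_io.ruby role.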

Get started using Ansible AWX (Open Source Tower version) in one minute

Since yesterday's announcement that Ansible had released the code behind Ansible Tower, AWX, under an open source license, I've been working on an AWX Ansible role, a demo AWX Vagrant VM, and an AWX Ansible Container project.

As part of that last project, I have published two public Docker Hub images, awx_web and awx_task, which can be used with a docker-compose.yml file to build AWX locally in about as much time as it takes to download the Docker images:
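Here's the shape of that setup, as a trimmed sketch. The image namespace, tag, and port mapping are assumptions on my part, and the full project also wires up supporting services (database, queue, cache) omitted here:

    # docker-compose.yml — trimmed sketch, not the full project file.
    version: '3'
    services:
      web:
        image: geerlingguy/awx_web:latest
        ports:
          - "80:8052"
      task:
        image: geerlingguy/awx_task:latest

From there, docker-compose up -d pulls the images and brings up AWX locally.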

Ansible open sources Ansible Tower with AWX

Ever since Red Hat acquired Ansible, I, like many others, have wondered whether (or when) Ansible Tower would be open sourced. Ansible Tower is one of the nicest automation tools I've used... but since I haven't been on a project with the budget to support the Tower licensing fees, I have only used it for testing small-scale projects.

I wrote a guide, Automating your Automation with Ansible Tower, which is available both on the web and as Chapter 11 of Ansible for DevOps. In that guide, I wrote:

For smaller teams, especially when everyone on the team is well-versed in how to use Ansible, YAML syntax, and follows security best practices with playbooks and variables files, using the CLI can be a sustainable approach... Ansible Tower provides a great mechanism for team-based Ansible usage.

Quick way to check if you're in AWS in an Ansible playbook

For many of my AWS-specific Ansible playbooks, I need certain operations (e.g. installing the AWS Inspector agent, or special information lookups) to run when the playbook runs inside AWS, but to be skipped when it runs on a local test VM or in my CI environment.

In the past, I would set up a global playbook variable like aws_environment: False, and set it manually to True when running the playbook against live AWS EC2 instances. But managing vars like aws_environment can get tiresome because if you forget to set it to the correct value, a playbook run can fail.

So instead, I'm now using the existence of AWS' internal instance metadata URL as a check for whether the playbook is being run inside AWS:
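Here's the gist of that check, as a minimal sketch (the task and variable names are my own):

    # The link-local metadata address only answers inside AWS, so a quick
    # HTTP request tells us where we are. failed_when: false keeps the
    # timeout outside AWS from failing the play.
    - name: Check if the EC2 metadata URL is reachable.
      uri:
        url: http://169.254.169.254/latest/meta-data
        timeout: 2
      register: aws_uri_check
      failed_when: false

    - name: Set aws_environment based on the response.
      set_fact:
        aws_environment: "{{ aws_uri_check.status == 200 }}"

Now aws_environment is set automatically on every run, with no manual variable juggling.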

Self-signed certificates via Ansible for local testing with Nginx

Most of my servers are using TLS certificates to encrypt all traffic over HTTPS. Since Let's Encrypt (and certbot) have taken the world of hosting HTTPS sites by storm (free is awesome!), I've been trying to make sure all my servers use the best settings possible to ensure private connections stay private. This often means setting up things like HSTS, which can make local / non-production test environments harder to manage.

Consider the following:
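You want to exercise the same Nginx TLS configuration locally that you run in production, but a real certificate isn't available for a private test domain. A self-signed certificate fills the gap, and a single Ansible task is enough to create one; this is a minimal sketch, with a placeholder CN and file paths that a real role would turn into variables:

    - name: Generate a self-signed certificate for local testing.
      command: >
        openssl req -x509 -nodes -days 365 -newkey rsa:2048 -sha256
        -subj "/CN=local.test"
        -keyout /etc/ssl/private/local.test.key
        -out /etc/ssl/certs/local.test.crt
      args:
        creates: /etc/ssl/certs/local.test.crt

The creates argument makes the task idempotent, so repeat runs won't regenerate the certificate.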

Slow Ansible playbook? Check ansible.cfg!

Today, while running a particularly large Ansible playbook for about the 15th time in a row, I started wondering why it ran quite a bit slower than most of my other playbooks, even though the server I was managing was in the same datacenter as most of my other infrastructure.

I have had pipelining = True in my system /etc/ansible/ansible.cfg for ages, and initially wondered why the individual tasks were so delayed—even when doing something like running three lineinfile tasks on one config file. The only major difference in this slow playbook's configuration was that I had a local ansible.cfg file in the playbook, to override my global roles_path (I wanted the specific role versions for this playbook to be managed and stored local to the playbook).

So, my curiosity led me to a more thorough reading of Ansible's configuration documentation, specifically a section talking about Ansible configuration file precedence:
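The short version: Ansible uses the first config file it finds (the ANSIBLE_CONFIG environment variable, then ansible.cfg in the current directory, then ~/.ansible.cfg, then /etc/ansible/ansible.cfg) and does not merge settings across files. So my playbook-local ansible.cfg was silently discarding the global pipelining setting. Re-declaring it locally fixed the slowdown; here's a sketch of the fixed file (the roles_path value is just an example):

    # ansible.cfg in the playbook directory. Settings are not merged with
    # the global file, so pipelining must be re-declared here.
    [defaults]
    roles_path = ./roles

    [ssh_connection]
    pipelining = True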

Add a path to the global $PATH with Ansible

When building certain roles or playbooks, I often need to add a new directory to the global system-wide $PATH so automation tools, users, or scripts will be able to find other scripts or binaries they need to run. Just today I did this yet again for my geerlingguy.ruby Ansible role, and I thought I should document how I do it here—mostly so I have an easy searchable reference for it the next time I have to do this and want a copy-paste example!

    # Scripts in /etc/profile.d are sourced by login shells, so appending
    # to $PATH here makes the new directory available system-wide.
    - name: Add another bin dir to system-wide $PATH.
      copy:
        dest: /etc/profile.d/custom-path.sh
        content: 'PATH=$PATH:{{ my_custom_path_var }}'

In this case, my_custom_path_var would refer to the path you want added to the system-wide $PATH. Now, on next login, you should see your custom path when you echo $PATH!

Updating all your servers with Ansible

From time to time, there's a security patch or other update that's critical to apply ASAP to all your servers. If you use Ansible to automate infrastructure work, then updates are painless—even across dozens, hundreds, or thousands of instances! I've written about this a little bit in the past, in relation to protecting against the shellshock vulnerability, but that was specific to one package.

I have an inventory script that pulls together all the servers I manage for personal projects (including the server running this website) and organizes them by OS, so I can target hosts with ansible [os] [command]. That enables me to run commands like:
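For instance, these two ad-hoc commands upgrade every package on my Debian-based and RHEL-based hosts, respectively. This is a sketch: the group names (ubuntu, centos) come from my own inventory, and you may prefer different module flags:

    # Upgrade all packages on Debian/Ubuntu hosts:
    ansible ubuntu -m apt -a "upgrade=dist update_cache=yes" -b

    # Upgrade all packages on RHEL/CentOS hosts:
    ansible centos -m yum -a "name=* state=latest" -b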

Mount an AWS EFS filesystem on an EC2 instance with Ansible

If you run your infrastructure inside Amazon's cloud (AWS), and you need to mount a shared filesystem on multiple servers (e.g. for Drupal's shared files folder, or Magento's media folder), Amazon Elastic File System (EFS) is a reliable and inexpensive solution. EFS is basically a 'hosted NFS mount' that can scale as your directory grows, and mounts are free—so, unlike many other shared filesystem solutions, there are no per-server or per-mount fees; all you pay for is the storage space (bandwidth is even free, since it's all internal to AWS!).

I needed to automate the mounting of an EFS volume in an Amazon EC2 instance so I could perform some operations on the shared volume, and Ansible makes managing things really simple. In the below playbook—which easily works with any popular distribution (just change the nfs_package to suit your needs)—an EFS volume is mounted on an EC2 instance:
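Here's a condensed sketch of that playbook; the EFS DNS name, mount path, and package name are placeholders for your own environment:

    ---
    - hosts: all
      become: true

      vars:
        nfs_package: nfs-common  # Use nfs-utils on RHEL/CentOS.
        efs_mount_dir: /mnt/efs
        efs_file_system_dns: fs-12345678.efs.us-east-1.amazonaws.com

      tasks:
        - name: Ensure NFS client software is installed.
          package:
            name: "{{ nfs_package }}"
            state: present

        - name: Ensure the mount directory exists.
          file:
            path: "{{ efs_mount_dir }}"
            state: directory

        - name: Mount the EFS filesystem (and persist it in fstab).
          mount:
            name: "{{ efs_mount_dir }}"
            src: "{{ efs_file_system_dns }}:/"
            fstype: nfs4
            opts: nfsvers=4.1
            state: mounted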

Ansible playbook to upgrade all Ubuntu 12.04 LTS hosts to 14.04 (or 16.04, 18.04, 20.04, etc.)

Generally speaking, I'm against performing major OS upgrades on my Linux servers; there are often little things that get broken, or configurations gone awry, when you attempt an upgrade... and part of the point of automation (or striving towards a 12-factor app) is that you don't 'upgrade'—you destroy and rebuild with a newer version.

But, there are still cases where you have legacy servers running one little task that you haven't yet automated entirely, or that have data on them that is not yet stored in a way where you can tear down the server and build a new replacement. In these cases, assuming you've already done a canary upgrade on a similar but disposable server (to make sure there are no major gotchas), it may be the lesser of two evils to use something like Ubuntu's do-release-upgrade.
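Driving that from Ansible is straightforward; here's a rough sketch, assuming a host group name of my own invention, and that you've taken backups or snapshots first:

    ---
    - hosts: legacy-ubuntu
      become: true

      tasks:
        - name: Bring all current packages up to date first.
          apt:
            upgrade: dist
            update_cache: true

        - name: Run the release upgrade non-interactively.
          command: do-release-upgrade -f DistUpgradeViewNonInteractive

        - name: Reboot into the new release.
          reboot:

After the reboot, it's worth verifying services and re-running your regular configuration playbooks before trusting the upgraded host.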