
Properly deploying updates to or shutting down Jenkins

One of my most popular Ansible roles is the geerlingguy.jenkins role, and for good reason—Jenkins is pretty much the premier open source CI tool, and has been used for many years by Ops and Dev teams all over the place.

As Jenkins (or any other CI tool) is adopted more fully for automating all aspects of infrastructure work, you begin to realize how important the Jenkins server itself becomes to your daily operations. And then you realize you need CI for your CI. And you need version control and deployment processes for things like Jenkins updates, job updates, etc. The geerlingguy.jenkins role helps a lot with the main component of automating Jenkins installation and configuration, and then you can add on top of that a task that copies a config.xml file with each job definition into your $JENKINS_HOME, to ensure every job and every configuration is in code...
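As a hedged sketch of that last step (the job name, local file layout, and jenkins_home variable are placeholders, not part of the role), such a task can be as simple as copying each job's config.xml into place:

- name: Copy a job definition into the Jenkins home directory.
  copy:
    # 'my-job' and the local files/ layout are hypothetical examples.
    src: files/jobs/my-job/config.xml
    dest: "{{ jenkins_home }}/jobs/my-job/config.xml"
    owner: jenkins
    group: jenkins
    mode: '0644'
  # Jenkins picks up the new job after a configuration reload or restart.

In practice you would likely loop over a list of job definitions rather than writing one task per job.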

Get started using Ansible AWX (Open Source Tower version) in one minute

Since yesterday's announcement that Ansible had released the code behind Ansible Tower, AWX, under an open source license, I've been working on an AWX Ansible role, a demo AWX Vagrant VM, and an AWX Ansible Container project.

As part of that last project, I have published two public Docker Hub images, awx_web and awx_task, which can be used with a docker-compose.yml file to build AWX locally in about as much time as it takes to download the Docker images:
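The compose file itself lives in the linked project; purely as a sketch of its shape (the image references, port mapping, and the note about supporting services are my assumptions, not the project's exact file), it pairs the two images along these lines:

---
# Hypothetical docker-compose.yml sketch. A complete AWX setup also needs
# supporting services (database, queue, cache) that are omitted here.
version: '2'
services:
  web:
    image: awx_web            # the published awx_web image
    ports:
      - "80:8052"             # assumed host:container port mapping
  task:
    image: awx_task           # the published awx_task image
    depends_on:
      - web

With a file like that in place, docker-compose up -d pulls the images and starts the containers.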

Self-signed certificates via Ansible for local testing with Nginx

Most of my servers are using TLS certificates to encrypt all traffic over HTTPS. Since Let's Encrypt (and certbot) have taken the world of hosting HTTPS sites by storm (free is awesome!), I've been trying to make sure all my servers use the best settings possible to ensure private connections stay private. This often means setting up things like HSTS, which can make local / non-production test environments harder to manage.

Consider the following:
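As a rough illustration of the approach (this is not the post's playbook; the paths and common name are placeholders, and it assumes openssl is installed), a single Ansible task can generate a self-signed certificate for a local Nginx vhost:

- name: Generate a self-signed certificate for local testing.
  command: >
    openssl req -x509 -nodes -days 365 -newkey rsa:2048
    -subj "/CN=local.test"
    -keyout /etc/ssl/private/local.test.key
    -out /etc/ssl/certs/local.test.crt
  args:
    creates: /etc/ssl/certs/local.test.crt

The creates argument keeps the task idempotent, so the certificate is only generated on the first run.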

Updating all your servers with Ansible

From time to time, there's a security patch or other update that's critical to apply ASAP to all your servers. If you use Ansible to automate infrastructure work, then updates are painless—even across dozens, hundreds, or thousands of instances! I've written about this a little bit in the past, in relation to protecting against the Shellshock vulnerability, but that was specific to one package.

I have an inventory script that pulls together all the servers I manage for personal projects (including the server running this website) and organizes them by OS, so I can run ad-hoc commands against every server in a given OS group at once.
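The post lists the exact one-liners; as a hedged stand-in in playbook form (the group names 'centos' and 'debian' are assumptions based on that OS grouping), a fleet-wide upgrade might look like this:

---
- hosts: centos
  become: yes
  tasks:
    - name: Upgrade all packages with yum.
      yum:
        name: '*'
        state: latest

- hosts: debian
  become: yes
  tasks:
    - name: Upgrade all packages with apt.
      apt:
        upgrade: dist
        update_cache: yes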

Mount an AWS EFS filesystem on an EC2 instance with Ansible

If you run your infrastructure inside Amazon's cloud (AWS), and you need to mount a shared filesystem on multiple servers (e.g. for Drupal's shared files folder, or Magento's media folder), Amazon Elastic File System (EFS) is a reliable and inexpensive solution. EFS is basically a 'hosted NFS mount' that can scale as your directory grows, and mounts are free—so, unlike many other shared filesystem solutions, there are no per-server or per-mount fees; all you pay for is the storage space (bandwidth is even free, since it's all internal to AWS!).

I needed to automate the mounting of an EFS volume in an Amazon EC2 instance so I could perform some operations on the shared volume, and Ansible makes managing things really simple. In the below playbook—which easily works with any popular distribution (just change the nfs_package to suit your needs)—an EFS volume is mounted on an EC2 instance:
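The full playbook is in the post; as a hedged reconstruction of the idea (variable names and the EFS DNS name below are placeholders), it comes down to installing an NFS client, creating a mount point, and mounting the EFS endpoint over NFSv4:

---
- hosts: aws
  become: yes

  vars:
    nfs_package: nfs-utils      # use nfs-common on Debian/Ubuntu
    efs_mount_dir: /mnt/efs
    # Placeholder: substitute your EFS filesystem's DNS name.
    efs_dns_name: fs-XXXXXXXX.efs.us-east-1.amazonaws.com

  tasks:
    - name: Ensure the NFS client is installed.
      package:
        name: "{{ nfs_package }}"
        state: present

    - name: Ensure the mount point exists.
      file:
        path: "{{ efs_mount_dir }}"
        state: directory
        mode: '0755'

    - name: Mount the EFS filesystem over NFSv4.
      mount:
        path: "{{ efs_mount_dir }}"
        src: "{{ efs_dns_name }}:/"
        fstype: nfs4
        opts: nfsvers=4.1
        state: mounted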

dockrun oneshot — quick local environments for testing infrastructure

Since I work with a ton of different Linux distros and environments in my day-to-day work, I have a lot of tooling set up that's mostly OS-agnostic. I kept finding myself in need of a quick, barebones CentOS 7 VM to play around in or troubleshoot an issue. Or I needed to run Ubuntu 16.04 and Ubuntu 14.04 side by side and run the same command in each, checking for differences. Or I needed to bring up Fedora. Or Debian.

I used to use my Vagrant boxes for VirtualBox to boot a full VM, then vagrant ssh in. But that took at least 15-20 seconds—assuming I already had the box downloaded on my computer!

How I test Ansible configuration on 7 different OSes with Docker

The following post is an excerpt from Chapter 11 of my book Ansible for DevOps. The example used is an Ansible role that installs Java—since the role is supposed to work across CentOS 6 and 7, Fedora 24, Ubuntu 12.04, 14.04, and 16.04, and Debian 8, I use Docker to run an end-to-end functional test on each of those Linux distributions. See an example test run in Travis CI, and the Travis file that describes the build.
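The linked Travis file has the real configuration; as a rough sketch of the pattern (the distro identifiers and the test script name are assumptions), the build defines one matrix entry per distribution and runs the same Docker-based test for each:

---
# Hypothetical .travis.yml sketch: one build per distribution, each
# running the same Docker-based functional test against that image.
sudo: required

services:
  - docker

env:
  - distro=centos7
  - distro=ubuntu1604
  - distro=debian8
  # ...one entry for each of the seven supported distributions.

script:
  # tests/run-tests.sh stands in for a script that starts a container
  # for the given distro and runs the role's playbook inside it.
  - bash tests/run-tests.sh "${distro}"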

Note: I do the same thing currently (as of 2019), but now I'm using Molecule to tie everything together; see Testing your Ansible roles with Molecule.

Automating Your Automation with Ansible Tower

The following is an excerpt from Chapter 11 of Ansible for DevOps, a book on Ansible by Jeff Geerling. The example highlights the effectiveness of Ansible Tower for automating infrastructure operations, especially in a team environment.

Throughout this book, all the examples use Ansible's CLI to run playbooks and report back the results. For smaller teams, especially when everyone on the team is well-versed in using Ansible and YAML syntax and follows security best practices with playbooks and variables files, using the CLI can be a sustainable approach.

But for many organizations, there are needs that stretch basic CLI use too far:

Highly-Available Infrastructure Provisioning and Configuration with Ansible

The following is an excerpt from Chapter 8 of Ansible for DevOps, a book on Ansible by Jeff Geerling. The example highlights Ansible's simplicity and flexibility by provisioning and configuring a highly available web application infrastructure on a local Vagrant-managed cloud, DigitalOcean droplets, and Amazon Web Services EC2 instances, with one set of Ansible playbooks.

tl;dr Check out the code on GitHub, and buy the book to learn more about Ansible!

Secure your servers from Shellshock Bash vulnerability using Ansible

Now that all Server Check.in infrastructure is managed by Ansible (some servers are running CentOS, others are running Ubuntu), it's very simple to update all the servers to protect against vulnerabilities like Heartbleed or today's new Shellshock bash vulnerability.

For CentOS (or Red Hat)

$ ansible [inventory_group] -m yum -a "name=bash state=latest" [-u remote_username] [-s] [-K]

For Debian (or Ubuntu)

$ ansible [inventory_group] -m apt -a "update_cache=yes name=bash state=latest" [-u remote_username] [-s] [-K]

If you have a different method of patch management, or you need to apply the fixes manually, then this method won't apply—but for most infrastructure running standard system-provided packages, the above commands will get the fixes in place with minimal effort.
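If you'd rather capture the fix in a playbook than run it ad hoc, a minimal sketch along these lines (the host targeting and privilege escalation settings are assumptions) does the same thing for both OS families:

---
- hosts: all
  become: yes

  tasks:
    - name: Update Bash on RHEL/CentOS hosts.
      yum:
        name: bash
        state: latest
      when: ansible_os_family == 'RedHat'

    - name: Update Bash on Debian/Ubuntu hosts.
      apt:
        name: bash
        state: latest
        update_cache: yes
      when: ansible_os_family == 'Debian'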

A little further explanation: