This post was originally written in 2014, using a technique that only easily allows testing on Ubuntu 12.04; since then, I've been adapting many of my roles (e.g. geerlingguy.apache) to use a Docker container-based testing approach, and I've written a new blog post that details the new technique: How I test Ansible configuration on 7 different OSes with Docker.
Since I'm now maintaining 37 roles on Ansible Galaxy, there's no way I can spend as much time reviewing every aspect of every role when doing maintenance, or checking out pull requests to improve the roles. Automated testing using a continuous integration tool like Travis CI (which is free for public projects and integrated very well with GitHub) allows me to run tests against my Ansible roles with every commit and be more assured nothing broke since the last commit.
Plus, I love the small endorphin kick induced by seeing the green "build passing" icon on my project's home page.
There are four main things I make sure I test when building and maintaining an Ansible role:
- The role's syntax (are all the .yml files formatted correctly?).
- Whether the role will run through all the included tasks without failing.
- The role's idempotence (if run again, the role should not make any changes!).
- The role's success (does the role do what it should be doing?).
Ultimately, the most important aspect is #4, because what's the point of a role if it doesn't do what you want it to do (e.g. start a web server, configure a database, deploy an app, etc.)?
Since you're going to need a simple Ansible playbook and inventory file to test your role, you can create both inside a new 'tests' directory in your Ansible role:
```
# Directory structure:
ansible-role-django/
  tests/
    test.yml
    inventory
```

Inside the `inventory` file, add:

```
localhost
```

We just want to tell Ansible to run commands on the local machine (we'll use the `--connection=local` option when running the test playbook).

Inside `test.yml`, add:

```yaml
---
- hosts: localhost
  remote_user: root
  roles:
    - ansible-role-django
```

Substitute your own role name for `ansible-role-django`. This is a typical Ansible playbook, and we tell Ansible to run the tasks on `localhost`, with the `root` user (otherwise, you could run tasks with the `travis` user if you want, and use `sudo` on certain tasks). You can add `vars_files`, etc. if you want, but we'll keep things simple, because for many smaller roles, the role is pre-packaged with sane defaults and all the other info it needs to run.
The next step is to add a `.travis.yml` file to your role so Travis CI will pick it up and use it for testing. Add that file to the root level of your role, and add the following to kick things off:

```yaml
---
language: python
python: "2.7"

before_install:
  # Make sure everything's up to date.
  - sudo apt-get update -qq

install:
  # Install Ansible.
  - pip install ansible

  # Add ansible.cfg to pick up roles path.
  - "printf '[defaults]\nroles_path = ../' > ansible.cfg"

script:
  # We'll add some commands to test the role here.
```
The only surprising part here is the `printf` line in the `install` section; I've added that line to create a quick-and-dirty `ansible.cfg` configuration file that Ansible will use to set the `roles_path` one directory up from the current working directory. That way, we can include roles like `github-role-project-name`, or if we use `ansible-galaxy` to download dependencies (as another command in the `install` section), we can just use `- galaxy-role-name-here` to include that role in our test playbook.
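The `printf` output is easy to verify locally; it produces a minimal two-line `ansible.cfg`:

```shell
# Write the same ansible.cfg the Travis 'install' step creates, then show it.
printf '[defaults]\nroles_path = ../' > ansible.cfg
cat ansible.cfg
# Output:
# [defaults]
# roles_path = ../
```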
Now that we have the basic structure, it's time to start adding the commands to test our role.
This is the easiest test; `ansible-playbook` has a built-in option that will check a playbook's syntax (including all the included files and roles), and return `0` if there are no problems, or an error code and some output if there were any syntax issues.

```
ansible-playbook -i tests/inventory tests/test.yml --syntax-check
```
Add this as a command in the `script` section of `.travis.yml`:

```yaml
# Check the role/playbook's syntax.
- "ansible-playbook -i tests/inventory tests/test.yml --syntax-check"
```
If there are any syntax errors, Travis will fail the build and output the errors in the log.
The next aspect to check is whether the role runs correctly or fails on its first run.
```yaml
# Run the role/playbook with ansible-playbook.
- "ansible-playbook -i tests/inventory tests/test.yml --connection=local --sudo"
```
This is a basic `ansible-playbook` command, which runs the playbook `test.yml` against the local host, using `--sudo`, and with the `inventory` file we added to the role's `tests` directory. Ansible returns a non-zero exit code if the playbook run fails, so Travis will know whether the command succeeded or failed.
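The exit-code convention Travis relies on is easy to see in any shell (a minimal illustration using `true` and `false` as stand-ins for a passing and a failing playbook run):

```shell
# Travis marks the build failed as soon as any 'script' command exits non-zero.
# $? holds the exit code of the most recently completed command.
true
echo "exit code: $?"              # prints: exit code: 0
false || echo "exit code: $?"     # prints: exit code: 1
```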
Another important test is the idempotence test—does the role change anything if it runs a second time? It should not, since all tasks you perform via Ansible should be idempotent (ensuring a static/unchanging configuration on subsequent runs with the same settings).
```yaml
# Run the role/playbook again, checking to make sure it's idempotent.
- >
  ansible-playbook -i tests/inventory tests/test.yml --connection=local --sudo
  | grep -q 'changed=0.*failed=0'
  && (echo 'Idempotence test: pass' && exit 0)
  || (echo 'Idempotence test: fail' && exit 1)
```
This command runs the exact same command as before, but pipes the results through `grep`, which checks to make sure 'changed' and 'failed' both report `0`. If there were no changes or failures, the idempotence test passes (and Travis sees the `0` exit and is happy), but if there were any changes or failures, the test fails (and Travis sees the `1` exit and reports a build failure).
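You can see how the `grep` check behaves by feeding it recap lines like the ones `ansible-playbook` prints at the end of a run (the recap text below is a hypothetical example, not real playbook output):

```shell
# A recap from an idempotent second run (nothing changed, nothing failed):
echo 'localhost : ok=5 changed=0 unreachable=0 failed=0' \
  | grep -q 'changed=0.*failed=0' && echo 'Idempotence test: pass'

# A recap from a run that changed something; grep finds no match,
# so the || branch reports a failure instead:
echo 'localhost : ok=5 changed=2 unreachable=0 failed=0' \
  | grep -q 'changed=0.*failed=0' || echo 'Idempotence test: fail'
```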
The last thing I check is whether the role actually did what it was supposed to do. If it configured a web server, is the server responding on port 80 or 443 without any errors? If it configured a command line application, does that command line application work when invoked, and do the things it's supposed to do?
```yaml
# Request a page via the web server, to make sure it's running and responds.
- "curl -f http://localhost/"
```

In this example, I'm testing a web server by loading 'localhost'; with the `-f` (`--fail`) option, curl will exit with a `0` status (and dump the output of the web server's response) if the server responds with a `200 OK` status, or will exit with a non-zero status if the server responds with an error status (like `500`) or is unavailable. (Without `-f`, curl exits `0` even when the server returns an HTTP error.)
Taking this a step further, you could even run a deployed application or service's own automated tests after Ansible is finished with the deployment, thus testing your infrastructure and application in one go. But we're getting ahead of ourselves here... that's a topic for a future post :)
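Putting the pieces together, the whole `script` section of `.travis.yml` ends up looking something like this (a sketch assembling the commands above; the curl check only applies if your role sets up a web server):

```yaml
script:
  # Check the role/playbook's syntax.
  - "ansible-playbook -i tests/inventory tests/test.yml --syntax-check"

  # Run the role/playbook with ansible-playbook.
  - "ansible-playbook -i tests/inventory tests/test.yml --connection=local --sudo"

  # Run the role/playbook again, checking to make sure it's idempotent.
  - >
    ansible-playbook -i tests/inventory tests/test.yml --connection=local --sudo
    | grep -q 'changed=0.*failed=0'
    && (echo 'Idempotence test: pass' && exit 0)
    || (echo 'Idempotence test: fail' && exit 1)

  # Request a page via the web server, to make sure it's running and responds.
  - "curl http://localhost/"
```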
There are a few things you need to know about Travis CI, especially if you're testing Ansible, which will rely heavily on the VM environment inside which it is running:
- Ubuntu 12.04: As of this writing, the only OS available via Travis CI is Ubuntu 12.04. Most of my roles work with Ubuntu/Debian/RedHat/CentOS, so it's not an issue for me... but if your roles strictly target a non-Debian-flavored distro, you probably won't get much mileage out of Travis.
- Preinstalled packages: Travis CI comes with a bunch of services installed out of the box, like MySQL, Elasticsearch, Ruby, etc. In the `before_install` section, you may need to run some `apt-get remove --purge [package]` commands and/or other cleanup commands to make sure the VM is fresh for your Ansible role's run.
- Networking/Disk/Memory: Travis CI continuously shifts the VM specs you're using, so don't assume you'll have a particular amount of RAM, disk space, or network capacity. You can add commands like `free -m`, etc. in the `before_install` section if you need to figure out the resources available in your VM.
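For example, if your role installs and configures its own MySQL server, a `before_install` cleanup might look like this (a sketch; the package names are illustrative and should match whatever your role manages):

```yaml
before_install:
  # Purge the preinstalled MySQL so the role configures it from scratch.
  - sudo apt-get remove --purge -y mysql-server mysql-common
  - sudo apt-get autoremove -y
```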
See much more information on the Travis CI Build Environment page.
I have integrated this style of testing into many of the roles I've submitted to Ansible Galaxy; here are a few example roles that use Travis CI integration in the way I've outlined in this blog post:
These are some of the things I do to make my roles as thoroughly-tested as I can using some free resources; there are some other ways to go even deeper, and I hope to have more to share soon!
Also check out the Testing Strategies section of Ansible's documentation; it has some good information about how and what you should test in your Ansible roles and playbooks.
This post has been adapted from one of the chapters in my book Ansible for DevOps, which is available for sale on LeanPub.