automation

It's not me, Google, it's you - from GA to Fathom

tl;dr: I'm now using Fathom for my personal website analytics; it's easy to self-host and maintain, it's better for privacy, and it can lead to better site performance.

In the mid-2000s, right after it became available, I started using Google Analytics for almost every website I built (whether it was mine or someone else's). It quickly became (and remains) the de facto standard for website usage analytics and user tracking.

Google Analytics UI

Before that, you basically had web page visit counters (some with slightly more advanced features, à la W3Counter and StatCounter), and on the high end you had Urchin Web Analytics (which is what Google acquired and turned into a 'cloud' product, naming it Google Analytics and tying it deeply into the Google AdWords ecosystem).

Updating a Kubernetes Deployment and waiting for it to roll out in a shell script

For some Kubernetes cluster operations (e.g. deploying an update to a small microservice or app), I need a quick and dirty way to:

  1. Build and push a Docker image to a private registry.
  2. Update a Kubernetes Deployment to use this new image version.
  3. Wait for the Deployment rollout to complete.
  4. Run some post-rollout operations (e.g. clear caches, run an update, etc.).

There are a thousand and one ways to do all this, many of them more formal, but sometimes you just need a shell script you can run from your CI server to do it all. And it's not too hard to do it that way:
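Here's a minimal sketch of such a script; the registry, image name, deployment name, and namespace are all placeholders for your own values:

#!/bin/bash
# Fail fast if any step errors out.
set -e

TAG="$1"
IMAGE="registry.example.com/myapp:${TAG}"

# 1. Build and push the new image to the private registry.
docker build -t "$IMAGE" .
docker push "$IMAGE"

# 2. Point the Deployment's container at the new image version.
kubectl set image deployment/myapp myapp="$IMAGE" --namespace=myapp

# 3. Block until the rollout completes (exits non-zero on failure).
kubectl rollout status deployment/myapp --namespace=myapp

# 4. Run any post-rollout operations (clear caches, run updates, etc.) here.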

Fixing Jenkins CLI 'ERROR: anonymous is missing the Overall/Read permission'

For the past decade or so, I've been working to automate as much of a Jenkins server build process as possible. There are a few 'hacky' bits to doing so, like managing some Jenkins XML files (or if you really want to go crazy, storing your entire $JENKINS_HOME somewhere in a source control repository!).

One of the most annoying things about automating Jenkins is using the jenkins-cli.jar file to interact with Jenkins on the CLI. It doesn't come with any automated solution for authenticating with Jenkins, and is meant for running either on the same server where Jenkins is running, or really anywhere that has SSH access. I generally don't like putting any Jenkins bits (including the CLI tool) on servers outside the actual Jenkins instance itself, so I've traditionally used the --username and --password method of authenticating with jenkins-cli.

However, it seems those CLI flags were deprecated and removed at some point in the past few months (maybe around 2.130 or so?), and now I get the following error when running CLI commands that way:
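ERROR: anonymous is missing the Overall/Read permission

The fix that works with newer Jenkins versions is the -auth option, which takes a username and an API token instead; something like this (the server URL and list-jobs command here are just examples):

$ java -jar jenkins-cli.jar -s http://localhost:8080 -auth admin:API_TOKEN list-jobs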

Kubernetes' Complexity

Over the past month, I started rebuilding the Raspberry Pi Dramble project using Kubernetes instead of installing and configuring the LEMP stack directly on nodes via Ansible (track GitHub issues here). Along the way, I've hit tons of minor issues with the installation, and I wanted to document some of the things I think turn people away from Kubernetes early in the learning process. Kubernetes is definitely not the answer to all application hosting problems, but it is a great fit for some, and it would be a shame for someone who could really benefit from Kubernetes to be stumped and turn to some other solution that costs more in time, money, or maintenance!

Raspberry Pi Dramble cluster running Kubernetes with green LEDs

Reboot and wait for reboot to complete in Ansible playbook

September 2018 Update: Ansible 2.7 (to be released around October 2018) will include a new reboot module, which makes reboots a heck of a lot simpler (whether managing Windows, Mac, or Linux!):

- name: Reboot the server and wait for it to come back up.
  reboot:

That's it! Much easier than the older technique I used in Ansible < 2.7!

One pattern I often need to implement in my Ansible playbooks is "configure-reboot-configure", where you change some setting that requires a reboot to take effect, and you have to wait for the reboot to take place before continuing on with the rest of the playbook run.

For example, on my Raspberry Pi Dramble project, before installing Docker and Kubernetes, I need to make sure the Raspberry Pi's /boot/cmdline.txt file contains a couple of cgroup options so Kubernetes runs correctly. But after adding these options, I also have to reboot the Pi.
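Pre-2.7, the technique looked something like this sketch (the cmdline.txt edit itself is omitted, and the delay and timeout values are arbitrary):

- name: Reboot the server.
  # async/poll lets the task return before the SSH connection drops.
  shell: "sleep 2 && reboot"
  async: 1
  poll: 0

- name: Wait for the server to come back up.
  wait_for_connection:
    delay: 10
    timeout: 300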

Properly deploying updates to or shutting down Jenkins

One of my most popular Ansible roles is the geerlingguy.jenkins role, and for good reason—Jenkins is pretty much the premier open source CI tool, and has been used for many years by Ops and Dev teams all over the place.

As Jenkins (or another CI tool) is adopted more fully for automating all aspects of infrastructure work, you begin to realize how important the Jenkins server(s) become to your daily operations. And then you realize you need CI for your CI, and version control and deployment processes for things like Jenkins updates, job updates, etc. The geerlingguy.jenkins role helps a lot with the main component—automating the Jenkins installation and configuration—and on top of that you can add a task that copies config.xml files with each job definition into your $JENKINS_HOME, to ensure every job and every configuration is in code...
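Such a task might look something like this sketch (the job names and the jenkins_home path are placeholders):

- name: Ensure job directories exist.
  file:
    path: "{{ jenkins_home }}/jobs/{{ item }}"
    state: directory
  loop:
    - deploy-my-app
    - nightly-backup

- name: Copy job definitions into place.
  copy:
    src: "jobs/{{ item }}/config.xml"
    dest: "{{ jenkins_home }}/jobs/{{ item }}/config.xml"
  loop:
    - deploy-my-app
    - nightly-backup

Note that Jenkins has to reload its configuration (or restart) before it picks up newly copied job definitions.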

Get started using Ansible AWX (Open Source Tower version) in one minute

Since yesterday's announcement that Ansible had released AWX, the code behind Ansible Tower, under an open source license, I've been working on an AWX Ansible role, a demo AWX Vagrant VM, and an AWX Ansible Container project.

As part of that last project, I have published two public Docker Hub images, awx_web and awx_task, which can be used with a docker-compose.yml file to build AWX locally in about as much time as it takes to download the Docker images:
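As a rough sketch, the compose file looks something like this (the two image names come from that project; the port mapping and the supporting services are assumptions based on AWX's general architecture, so check the awx-container project for the real file):

version: '3'
services:
  web:
    image: geerlingguy/awx_web:latest
    ports:
      - "80:8052"  # the AWX web process listens on port 8052 in the container
  task:
    image: geerlingguy/awx_task:latest
  # A complete AWX install also needs postgres, rabbitmq, and memcached
  # services, wired up via environment variables.

With docker-compose up -d, the whole stack comes up in about as long as it takes the images to download.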

Self-signed certificates via Ansible for local testing with Nginx

Most of my servers are using TLS certificates to encrypt all traffic over HTTPS. Since Let's Encrypt (and certbot) have taken the world of hosting HTTPS sites by storm (free is awesome!), I've been trying to make sure all my servers use the best settings possible to ensure private connections stay private. This often means setting up things like HSTS, which can make local / non-production test environments harder to manage.

Consider the following:
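You want a local or staging environment to mirror production's HTTPS (and HSTS) behavior, but Let's Encrypt can't easily issue a certificate for a hostname that only resolves locally. A self-signed certificate fills the gap, and Ansible can generate one idempotently; here's a sketch (the domain and file paths are placeholders):

- name: Generate a self-signed certificate for local testing.
  command: >
    openssl req -x509 -nodes -days 365 -newkey rsa:2048
    -subj "/CN=local.example.test"
    -keyout /etc/ssl/private/local.example.test.key
    -out /etc/ssl/certs/local.example.test.crt
  args:
    creates: /etc/ssl/certs/local.example.test.crt

Point Nginx's ssl_certificate and ssl_certificate_key directives at those two files, and local HTTPS testing works without touching production certificates.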

Updating all your servers with Ansible

From time to time, there's a security patch or other update that's critical to apply ASAP to all your servers. If you use Ansible to automate infrastructure work, then updates are painless—even across dozens, hundreds, or thousands of instances! I've written about this a little bit in the past, in relation to protecting against the shellshock vulnerability, but that was specific to one package.

I have an inventory script that pulls together all the servers I manage for personal projects (including the server running this website) and organizes them by OS, so I can run ad-hoc commands against an entire OS group at once.
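For example (the group names and module arguments here are illustrative):

# Upgrade all packages on the Debian/Ubuntu hosts.
ansible debian -m apt -a "update_cache=yes upgrade=dist" -b

# Upgrade all packages on the RHEL/CentOS hosts.
ansible centos -m yum -a "name=* state=latest" -b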

Mount an AWS EFS filesystem on an EC2 instance with Ansible

If you run your infrastructure inside Amazon's cloud (AWS), and you need to mount a shared filesystem on multiple servers (e.g. for Drupal's shared files folder, or Magento's media folder), Amazon Elastic File System (EFS) is a reliable and inexpensive solution. EFS is basically a 'hosted NFS mount' that can scale as your directory grows, and mounts are free—so, unlike many other shared filesystem solutions, there are no per-server or per-mount fees; all you pay for is the storage space (bandwidth is even free, since it's all internal to AWS!).

I needed to automate the mounting of an EFS volume in an Amazon EC2 instance so I could perform some operations on the shared volume, and Ansible makes managing things really simple. In the below playbook—which easily works with any popular distribution (just change the nfs_package to suit your needs)—an EFS volume is mounted on an EC2 instance:
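Here's a sketch of that playbook; the filesystem ID, region, mount point, and nfs_package value are placeholders for your own environment:

---
- hosts: efs_client
  become: true

  vars:
    nfs_package: nfs-common  # use nfs-utils on RHEL/CentOS
    efs_filesystem_id: fs-12345678
    aws_region: us-east-1
    efs_mount_point: /mnt/efs

  tasks:
    - name: Ensure the NFS client is installed.
      package:
        name: "{{ nfs_package }}"
        state: present

    - name: Ensure the mount point exists.
      file:
        path: "{{ efs_mount_point }}"
        state: directory

    - name: Mount the EFS filesystem (and persist it in /etc/fstab).
      mount:
        path: "{{ efs_mount_point }}"
        src: "{{ efs_filesystem_id }}.efs.{{ aws_region }}.amazonaws.com:/"
        fstype: nfs4
        opts: nfsvers=4.1
        state: mounted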
