Security

Be careful, Docker might be exposing ports to the world

Recently, I noticed logs for one of my web services had strange entries that looked like a bot trying to perform scripted attacks on an application endpoint. I was surprised, because all the endpoints that were exposed over the public Internet were protected by some form of authentication, or were locked down to specific IP addresses—or so I thought.

I had re-architected the service using Docker in the past year, and in the process of doing so, I changed the way the application ran—instead of having one server per process, I ran a group of processes on one server, and routed traffic to them using DNS names (one per process) and Nginx to proxy the traffic.

In this new setup, I built a custom firewall using iptables rules (since I had to account for a number of legacy services that I have yet to route through Docker—someday it will all be in Kubernetes), installed Docker, and set up a Docker Compose file (one per server) that ran all the processes in containers, exposed on ports like 1234, 1235, etc.

The Docker Compose port declaration for each service looked like this:
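The exact values aren't important; the shape was the standard Compose ports mapping, something like this (service name and ports are placeholders):

services:
  myapp:
    image: example/myapp:latest
    ports:
      # A bare HOST:CONTAINER mapping publishes the port on 0.0.0.0, and
      # Docker wires it up with its own NAT/FORWARD rules, so traffic to the
      # published port never hits the usual INPUT-chain iptables rules on
      # the host.
      - "1234:8080"

Binding the published port to localhost instead (e.g. "127.0.0.1:1234:8080") keeps it reachable only by the Nginx proxy running on the same host.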

Everyone might be a cluster-admin in your Kubernetes cluster

Quite often, when I dive into someone's Kubernetes cluster to debug a problem, I realize whatever pod I'm running has way too many permissions. More often than not, my pod has the cluster-admin role bound to it through its default ServiceAccount.
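A quick way to confirm this is kubectl's auth can-i subcommand, impersonating the ServiceAccount in question (the default ServiceAccount in the default namespace is used here as an example):

kubectl auth can-i '*' '*' --as=system:serviceaccount:default:default

If the answer is "yes", anything running under that ServiceAccount can do anything to the cluster.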

Sometimes this role was added because someone wanted to make their CI/CD tool (e.g. Jenkins) manage Kubernetes resources in the cluster, and it was easier to apply cluster-admin to a default service account than to set up all the individual RBAC privileges correctly. Other times, it was because someone found a shiny new tool and blindly installed it.

One such example I remember seeing recently is the spekt8 project; in its installation instructions, it tells you to apply an RBAC manifest:

kubectl apply -f https://raw.githubusercontent.com/spekt8/spekt8/master/fabric8-rbac.yaml

What the installation guide doesn't tell you is that this manifest grants cluster-admin privileges to every single Pod in the default namespace!
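The heart of that manifest is a ClusterRoleBinding roughly like the following, which binds the cluster-admin ClusterRole to the default namespace's default ServiceAccount:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: fabric8-rbac
subjects:
  # Every Pod running under the default ServiceAccount in the default
  # namespace inherits this binding.
  - kind: ServiceAccount
    name: default
    namespace: default
roleRef:
  # cluster-admin means unrestricted access to every resource, cluster-wide.
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io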

Getting AWS STS Session Tokens for MFA with AWS CLI and kubectl for EKS automatically

I've been working on some projects which require MFA for all access, including CLI access and things like using kubectl with Amazon EKS. One super-annoying aspect of requiring MFA for CLI operations is that every day or so, you have to refresh your STS session token—and for that token to work, you also have to update an AWS profile's Access Key ID and Secret Access Key.

I had a little bash function that let me input a token code from my MFA device and spat out the values to put into my .aws/credentials file, but it was still tedious to copy and paste three values every single morning.

So I wrote a neat little executable Ansible playbook which does everything for me:
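The gist is a localhost play that prompts for the token code, calls aws sts get-session-token, and writes the returned credentials into an AWS profile with ini_file; a minimal sketch (the MFA device ARN, profile name, and paths are placeholders):

#!/usr/bin/env ansible-playbook
---
- hosts: localhost
  connection: local
  gather_facts: false

  vars_prompt:
    - name: token_code
      prompt: "MFA token code"
      private: false

  tasks:
    - name: Request a temporary session token from STS.
      # --serial-number is the ARN of your MFA device (placeholder here).
      command: >
        aws sts get-session-token
        --serial-number arn:aws:iam::123456789012:mfa/my-user
        --token-code {{ token_code }}
        --output json
      register: sts_result
      changed_when: false

    - name: Write the temporary credentials into an 'mfa' profile.
      ini_file:
        path: ~/.aws/credentials
        section: mfa
        option: "{{ item.option }}"
        value: "{{ (sts_result.stdout | from_json).Credentials[item.key] }}"
        mode: '0600'
      loop:
        - { option: aws_access_key_id, key: AccessKeyId }
        - { option: aws_secret_access_key, key: SecretAccessKey }
        - { option: aws_session_token, key: SessionToken }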

To use it, you can download the contents of that file to /usr/local/bin/aws-sts-token, make the file executable (chmod +x /usr/local/bin/aws-sts-token), and run the command:

Fixing Safari's 'can't establish a secure connection' when updating a self-signed certificate

I do a lot of local development, and since almost everything web-related is supposed to use SSL these days, and since I like to make local match production as closely as possible, I generate a lot of self-signed certificates with OpenSSL (usually via Ansible's openssl_* modules).
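The one-off equivalent, outside of Ansible, is a single OpenSSL command (the domain name and lifetime here are just examples):

openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -subj "/CN=local.test" \
  -keyout local.test.key -out local.test.crt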

This presents a problem, though, since I use Safari. Every time I rebuild an environment using my automation, and generate a new certificate for a domain that's protected with HSTS, I end up getting this fun error page:

Safari Can't Open the Page – Safari can't open the page because Safari can't establish a secure connection to the server 'servername'.

CI for Ansible playbooks which require Ansible Vault protected variables

For many of my infrastructure projects, I use Ansible Vault to securely store the project's secrets (API keys, default passwords, private keys, etc.) in the Git repository. I also like to make sure I cover everything possible in automated tests/CI, usually using either Jenkins or Travis CI.

But this presents a conundrum: if some of your variables are encrypted with an Ansible Vault secret/passphrase, and that secret should itself be stored securely... how can you avoid storing it in your CI system, where you might not be able to guarantee its security?

The method I usually use for this case is including the Vault-encrypted vars at playbook runtime, using include_vars:
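A minimal sketch of the idea (the file names and the vault_vars_file override variable are illustrative):

- hosts: all

  pre_tasks:
    - name: Include secret vars (Vault-encrypted normally, placeholders in CI).
      include_vars: "{{ vault_vars_file | default('vars/vault.yml') }}"

In CI, the playbook can then be run with something like ansible-playbook main.yml -e vault_vars_file=tests/placeholder-vars.yml, so the test run loads harmless dummy values and never needs the Vault password, while normal runs still load and decrypt vars/vault.yml.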

Self-signed certificates via Ansible for local testing with Nginx

Most of my servers are using TLS certificates to encrypt all traffic over HTTPS. Since Let's Encrypt (and certbot) have taken the world of hosting HTTPS sites by storm (free is awesome!), I've been trying to make sure all my servers use the best settings possible to ensure private connections stay private. This often means setting up things like HSTS, which can make local / non-production test environments harder to manage.

Consider the following:
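As one illustration (the paths and the local domain are placeholders), a few tasks with Ansible's openssl_* modules can generate the key and self-signed certificate for a local Nginx vhost:

- name: Generate a private key for the local vhost.
  openssl_privatekey:
    path: /etc/ssl/private/local.test.pem

- name: Generate a certificate signing request.
  openssl_csr:
    path: /etc/ssl/private/local.test.csr
    privatekey_path: /etc/ssl/private/local.test.pem
    common_name: local.test

- name: Generate a self-signed certificate from the CSR.
  openssl_certificate:
    path: /etc/ssl/certs/local.test.crt
    csr_path: /etc/ssl/private/local.test.csr
    privatekey_path: /etc/ssl/private/local.test.pem
    provider: selfsigned

Nginx's ssl_certificate and ssl_certificate_key directives then point at those two files, and the local vhost serves HTTPS just like production, minus a browser-trusted certificate.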

Cloning private GitHub repositories with Ansible on a remote server through SSH

One of Ansible's strengths is that its 'agentless' architecture uses SSH to control remote servers. And one classic problem in remote Git administration is authentication: if you're cloning a private Git repository, how can you do so while also protecting your own private SSH key (by not copying it to the remote server)?

As an example, here's a task that clones a private repository to a particular folder:

- name: Clone a private repository into /opt.
  git:
    repo: git@github.com:geerlingguy/private-repo.git
    version: master
    dest: /opt/private-repo
    accept_hostkey: yes
  # ssh-agent doesn't allow key to pass through remote sudo commands.
  become: no

If you run this task, you'll probably end up with something like:
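That 'something' is usually an SSH authentication failure (a 'Permission denied (publickey)' from GitHub), because the remote server has no deploy key of its own and, by default, no access to your local ssh-agent either. The usual remedy is SSH agent forwarding; a minimal ansible.cfg sketch (assuming your key is already loaded locally with ssh-add):

[ssh_connection]
# Forward the local ssh-agent to the managed host for the duration of the
# connection, so git can authenticate to GitHub without a private key ever
# being copied to the server.
ssh_args = -o ForwardAgent=yes

Note the become: no on the task above: even with forwarding enabled, the agent socket belongs to the login user, so the clone has to run without sudo (or sudo has to be configured to preserve SSH_AUTH_SOCK).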

How to securely erase free space on a hard drive (Mac)

From time to time, I need to clean off the contents of a hard drive on one of my Macs—most often prior to selling the Mac or giving it to someone else. Instead of just formatting the drive, installing macOS, and handing it off, I want to make sure all the contents I had stored on it are irrecoverably erased (I sometimes work on projects under NDA, and I also like having some semblance of privacy in general).

Disk Utility used to expose this functionality in the UI, which made this a very simple operation. But it seems to have gone missing in recent macOS versions. Luckily, it's still available on the command line (via Terminal.app):

diskutil secureErase freespace 0 "/Volumes/Macintosh HD"

This command writes a single pass of zeroes over all the free space on the 'Macintosh HD' volume (it doesn't touch existing files). You can see a list of the volumes mounted on your Mac with ls /Volumes. There are a few other erase levels available (instead of 0) if you run man diskutil and scroll down to the secureErase section. I most commonly use:

Fastest way to reset a host key when rebuilding servers on the same IP or hostname frequently

I build and rebuild servers quite often, and when I want to jump into a server to check a config setting (when I'm not using Ansible, that is...), I need to log in via SSH. It's best practice to let SSH verify the host key every time you connect, to make sure you're not the victim of a man-in-the-middle attack or some other funny business.

However, any time you rebuild a server from a fresh image or OS install, it gets a new host key, and that results in the following message the next time you try to log in:
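That message is OpenSSH's 'WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED!' warning, and the connection is refused until the stale entry is removed from known_hosts. The quickest reset is ssh-keygen's -R option (the hostname below is a placeholder):

ssh-keygen -R myserver.example.com

This deletes the old key for that hostname (or IP address) from ~/.ssh/known_hosts, so the next connection simply prompts you to accept the rebuilt server's new host key.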