aws

AWS S3 Glacier Deep Archive - Difficulty deleting files with accents

A few days ago, the billing alert on my personal AWS account fired and sent me an email saying I'd already exceeded my comfort threshold, in just the second week of the month!

AWS billing alert email

Since I had just rearranged my entire backup plan to change the structure of my archives, both locally and in my S3 Glacier Deep Archive mirror on AWS, I suspected something hadn't been moved or deleted in my backup S3 bucket.

And I was right.

But I wanted to write this up for two reasons:

Retrieving individual files from S3 Glacier Deep Archive using AWS CLI

I still haven't blogged about my overall backup strategy (though I've mentioned it a few times in the past on my YouTube channel). In short, I keep two local copies of any important data, and most of the non-video data is also stored in my Dropbox folder, so I get two local copies and one cloud backup for 'free'.

Then I also back up everything (including video content) from my NAS to an Amazon S3 Glacier Deep Archive-backed bucket at least once a week (sometimes more frequently, when I am working on a big project and manually kick off a mid-week backup).

10,000 Kubernetes Pods for 10,000 Subscribers

It started with a tweet, how did it end up like this?

I've had a YouTube channel since 2006—back when YouTube was a plucky upstart battling against Google Video (not Google Videos) and Vimeo. I started livestreaming a couple of months ago on a whim, and since then I've gained more subscribers than I had gained from 2006 to 2020!

So it seems fitting that I find some nerdy way to celebrate. After all, if Colin Furze can celebrate his milestones with ridiculous fireworks displays, I can do ... something?

Expanding K8s PVs in EKS on AWS

If that post title isn't a mouthful...

I'm excited to be moving a few EKS clusters into real-world production use after a few months of preparation. Besides my Raspberry Pi Dramble project (which is pretty low-key), these are the only production-grade Kubernetes clusters I've dealt with—and I've learned a lot. Enough that I'm working on a new book.

Anyways, back to the main topic: As of Kubernetes 1.11, you can auto-expand PVs from most cloud providers, AWS included. And since EKS now runs Kubernetes 1.11.x, you can have your EBS PVs automatically expand just by increasing the PVC's requested size (spec.resources.requests.storage) to a larger value (e.g. from 10Gi to 20Gi).
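For example, if a claim was originally created at 10Gi, the edit is just the storage request; the claim name and storage class here are illustrative:

# Illustrative PersistentVolumeClaim: to grow the underlying EBS volume,
# change the requested storage (here from 10Gi to 20Gi) and re-apply.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-app-data        # example name
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: gp2    # must be a class with allowVolumeExpansion: true
  resources:
    requests:
      storage: 20Gi        # was 10Gi; raising this triggers the expansion

Depending on your Kubernetes version, you may also need to restart the pod using the claim before the filesystem itself is resized.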

For this to work, though, you need to check a few things:

Make sure you have the proper setting on your StorageClass

You need to make sure the StorageClass you're using has the allowVolumeExpansion setting enabled.
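A minimal example might look like this (the class name and parameters are illustrative; the important line is allowVolumeExpansion: true):

# Example StorageClass for EBS gp2 volumes with expansion allowed.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gp2
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
  fsType: ext4
allowVolumeExpansion: true
reclaimPolicy: Delete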

Getting AWS STS Session Tokens for MFA with AWS CLI and kubectl for EKS automatically

I've been working on some projects which require MFA for all access, including CLI access and things like using kubectl with Amazon EKS. One super-annoying aspect of requiring MFA for CLI operations is that every day or so you have to refresh your STS session token, and for that token to work you also have to update an AWS profile's Access Key ID and Secret Access Key.

I had a little bash function that would allow me to input a token code from my MFA device and it would spit out the values to put into my .aws/credentials file, but it was still tiring copying and pasting three values every single morning.

So I wrote a neat little executable Ansible playbook which does everything for me.
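The full playbook is embedded in the original post; as a rough sketch of the idea (the MFA device ARN, profile names, and interactive prompt below are my own illustrative assumptions), it might look something like this:

#!/usr/bin/env ansible-playbook
# Sketch: refresh temporary STS credentials and write them to an AWS profile.
# The MFA serial, profile names, and prompting approach are assumptions to
# adjust for your own account.
---
- hosts: localhost
  connection: local
  gather_facts: false

  vars_prompt:
    - name: token_code
      prompt: "MFA token code"
      private: false

  vars:
    mfa_serial: "arn:aws:iam::123456789012:mfa/my-user"  # your MFA device ARN
    source_profile: default   # profile holding your long-lived access keys
    target_profile: mfa       # profile kubectl and other tools should use

  tasks:
    - name: Request a temporary session token from STS.
      command: >
        aws sts get-session-token
        --serial-number {{ mfa_serial }}
        --token-code {{ token_code }}
        --profile {{ source_profile }}
        --output json
      register: sts_result

    - name: Write the temporary credentials into ~/.aws/credentials.
      ini_file:
        path: ~/.aws/credentials
        section: "{{ target_profile }}"
        option: "{{ item.option }}"
        value: "{{ (sts_result.stdout | from_json).Credentials[item.key] }}"
        mode: '0600'
      no_log: true
      loop:
        - { option: aws_access_key_id, key: AccessKeyId }
        - { option: aws_secret_access_key, key: SecretAccessKey }
        - { option: aws_session_token, key: SessionToken }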

To use it, you can download the contents of that file to /usr/local/bin/aws-sts-token, make the file executable (chmod +x /usr/local/bin/aws-sts-token), and run the command:
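With a playbook like the sketch above, that command is just aws-sts-token: it prompts for the current code from your MFA device and rewrites the temporary credentials in ~/.aws/credentials, so any tool configured to use that profile picks up the fresh session.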

Getting the best performance out of Amazon EFS

tl;dr: EFS is NFS. Networked file systems have inherent tradeoffs over local filesystem access—EFS doesn't change that. Don't expect the moon, benchmark and monitor it, and you'll do fine.

On a recent project, I needed to have a shared network file system that was available to all servers, and able to scale horizontally to anywhere between 1 and 100 servers. It needed low-latency file access, and also needed to be able to handle small file writes and file locks synchronously with as little latency as possible.

Amazon EFS, which uses NFS v4.1, checks all of those boxes (at least, to a certain extent), and if you're already building infrastructure inside AWS, EFS is a very cost-effective way to manage a scalable NFS filesystem. I'm not going to go too much into the technical details of EFS or NFS v4.1, but I would like to highlight some of the painful lessons my team has learned implementing EFS for a fairly hefty CMS-based project.

Quick way to check if you're in AWS in an Ansible playbook

For many of my AWS-specific Ansible playbooks, I need some operations (e.g. installing the AWS Inspector agent, or special information lookups) to run when the playbook is executed inside AWS, but not when it's run on a local test VM or in my CI environment.

In the past, I would set up a global playbook variable like aws_environment: False, and set it manually to True when running the playbook against live AWS EC2 instances. But managing vars like aws_environment can get tiresome because if you forget to set it to the correct value, a playbook run can fail.

So instead, I'm now using the existence of AWS' internal instance metadata URL as a check for whether the playbook is being run inside AWS:
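The check itself only takes a couple of tasks, hitting the EC2 instance metadata endpoint at 169.254.169.254 (which won't respond on a local VM or a typical CI runner); the variable names here are illustrative:

# Register whether the EC2 instance metadata endpoint responds, then set a
# fact that later tasks can use in when: conditions.
- name: Check whether the EC2 instance metadata endpoint is reachable.
  uri:
    url: http://169.254.169.254/latest/meta-data
    timeout: 2
  register: aws_metadata_check
  failed_when: false

- name: Record whether this host is running inside AWS.
  set_fact:
    aws_environment: "{{ aws_metadata_check.status | default(-1) == 200 }}"

Any AWS-only task can then be guarded with when: aws_environment, and local or CI runs skip it automatically.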

Mount an AWS EFS filesystem on an EC2 instance with Ansible

If you run your infrastructure inside Amazon's cloud (AWS), and you need to mount a shared filesystem on multiple servers (e.g. for Drupal's shared files folder, or Magento's media folder), Amazon Elastic File System (EFS) is a reliable and inexpensive solution. EFS is basically a 'hosted NFS mount' that can scale as your directory grows, and mounts are free—so, unlike many other shared filesystem solutions, there are no per-server or per-mount fees; all you pay for is the storage space (bandwidth is even free, since it's all internal to AWS!).

I needed to automate the mounting of an EFS volume in an Amazon EC2 instance so I could perform some operations on the shared volume, and Ansible makes managing things really simple. In the below playbook—which easily works with any popular distribution (just change the nfs_package to suit your needs)—an EFS volume is mounted on an EC2 instance:
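Here's a sketch of the idea; the filesystem ID, region, mount path, and host group are placeholders, and nfs_package should match your distribution (nfs-utils on RHEL/CentOS, nfs-common on Debian/Ubuntu):

---
# Sketch: mount an EFS filesystem on an EC2 instance over NFS v4.1.
- hosts: ec2_instances
  become: true

  vars:
    nfs_package: nfs-utils            # nfs-common on Debian/Ubuntu
    efs_file_system_id: fs-12345678   # placeholder filesystem ID
    efs_region: us-east-1
    efs_mount_dir: /mnt/efs

  tasks:
    - name: Ensure the NFS client is installed.
      package:
        name: "{{ nfs_package }}"
        state: present

    - name: Ensure the mount directory exists.
      file:
        path: "{{ efs_mount_dir }}"
        state: directory
        mode: '0755'

    - name: Mount the EFS filesystem (and persist it in /etc/fstab).
      mount:
        path: "{{ efs_mount_dir }}"
        src: "{{ efs_file_system_id }}.efs.{{ efs_region }}.amazonaws.com:/"
        fstype: nfs4
        opts: nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2
        state: mounted

Note that the EFS DNS name only resolves from inside the VPC, and the instance's security group needs to be able to reach the EFS mount target on port 2049.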

Adding strings to an array in Ansible

From time to time, I need to dynamically build a list of strings (or a list of other things) using Ansible's set_fact module.

Since set_fact is a module like any other, you can use a with_items loop to iterate over an existing list and pull a value out of each item to append to another list.

For example, today I needed to retrieve a list of all the AWS EC2 security groups in a region, then loop through them, building a list of all the security group names. Here's the playbook I used:
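A sketch of that approach (the region and variable names are my own; on older Ansible releases the module was called ec2_group_facts rather than ec2_group_info):

---
# Sketch: collect every EC2 security group name in a region into a list.
- hosts: localhost
  connection: local
  gather_facts: false

  vars:
    aws_region: us-east-1
    security_group_names: []

  tasks:
    - name: Retrieve all security groups in the region.
      ec2_group_info:
        region: "{{ aws_region }}"
      register: sg_result

    - name: Add each security group's name to the list.
      set_fact:
        security_group_names: "{{ security_group_names + [item.group_name] }}"
      with_items: "{{ sg_result.security_groups }}"

    - name: Show the assembled list of names.
      debug:
        var: security_group_names

Each set_fact iteration replaces security_group_names with a copy that has one more name appended, which is the same pattern you can use to build up any list.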