
Self-published Ansible book – 87k copies, $300k revenue, 41 revisions

I just published the 41st revision of my self-published book Ansible for DevOps, which has sold 87,234 copies as of this writing across LeanPub, Amazon (Kindle and paperback), and iBooks.

Many times that number of eBooks have been downloaded, as I've never DMCA'ed the sites that re-host the book illegally. I just... provide new and better versions. People who download the illegal copies know they can come to me for the best reading experience. Plus, I provide free updates forever for anyone who's purchased the book or gotten it free on LeanPub.

My self-published book earned $300,000+ in revenue over the past 9 years, and still earns enough every month to pay my health insurance bill (sans deductible)—which has soared to beyond $2,000/month! (Living with a pre-existing condition in the USA is... bad.)

Streaming services lost the plot

Do you remember when Netflix first started their movie streaming service, back in 2007?

2007 era Netflix home page courtesy of the Wayback Machine

Physical media was still the preferred way to watch movies at home. Aside from sports and some TV shows that were cable-exclusive for a time, most people would run by Blockbuster and pick up a movie.

Netflix started with mail-order DVDs, then switched to streaming. The absence of ads (which were rampant on cable channels) and the convenience of not having to drive to a physical store (Blockbuster et al.) made Netflix a no-brainer, especially considering the depth of its initial library.

Retrieving individual files from S3 Glacier Deep Archive using AWS CLI

I still haven't blogged about my overall backup strategy (though I've mentioned it a few times on my YouTube channel). In short: I keep two local copies of any important data, and most of the non-video data also lives in my Dropbox folder, so I get two local copies and one cloud backup for 'free'.

Then I also back up everything (including video content) from my NAS to an Amazon S3 Glacier Deep Archive-backed bucket at least once a week (sometimes more frequently, when I am working on a big project and manually kick off a mid-week backup).
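Getting a single file back out is a two-step process: request a restore from the Deep Archive tier, wait for it to complete (up to 48 hours for the cheap Bulk tier), then download the object normally. Here's a minimal sketch of the two aws s3api calls involved, expressed as Ansible tasks; the bucket and key names are hypothetical:

```yaml
# Hypothetical bucket and key. Deep Archive offers Standard (~12 hour)
# and Bulk (~48 hour) retrieval tiers.
- name: Request a Bulk restore of one archived object.
  ansible.builtin.command: >
    aws s3api restore-object
    --bucket my-nas-backups
    --key videos/big-project/raw-footage.mov
    --restore-request '{"Days": 7, "GlacierJobParameters": {"Tier": "Bulk"}}'

# head-object reports restore progress in its Restore field, e.g.
# ongoing-request="true" while the restore is still running.
- name: Check whether the restored copy is available yet.
  ansible.builtin.command: >
    aws s3api head-object
    --bucket my-nas-backups
    --key videos/big-project/raw-footage.mov
  register: restore_status
  changed_when: false
```

Once head-object shows the restore has finished, a plain aws s3 cp pulls the file down like any other S3 object.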

Self-publishing and the 2nd edition of Ansible for DevOps

Five years, 834 commits, and 24 major revisions later, I've just published the 2nd edition of Ansible for DevOps, a book which has now sold over 60,000 copies and spawned a popular free Ansible 101 video series on YouTube.

Ansible for DevOps, 2nd Edition - Cover

Making good on my promise of free ebook updates, forever, I've published a new revision of the book at least once a quarter since the first revision (version 0.42) went up on LeanPub in 2014. The second edition begins the 2.x series of book revisions.

The book covers the basics of managing Linux servers, then dives deeper into continuous integration, application deployments, container image management, and even Kubernetes cluster management with Ansible.

Getting the best performance out of Amazon EFS

tl;dr: EFS is NFS. Networked file systems have inherent tradeoffs over local filesystem access, and EFS doesn't change that. Don't expect the moon; benchmark and monitor it, and you'll do fine.

On a recent project, I needed a shared network file system that was available to all servers and able to scale horizontally to anywhere between 1 and 100 servers. It needed low-latency file access, and it had to handle small file writes and file locks synchronously with as little delay as possible.

Amazon EFS, which uses NFS v4.1, ticks all of those boxes (at least, to a certain extent), and if you're already building infrastructure inside AWS, EFS is a very cost-effective way to run a scalable NFS filesystem. I won't go too deep into the technical details of EFS or NFS v4.1, but I would like to highlight some of the painful lessons my team learned implementing EFS for a fairly hefty CMS-based project.
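As one concrete example of the kind of tuning involved: NFS mount options matter a lot for performance. Here's a minimal sketch of mounting EFS over NFS v4.1 with the options AWS recommends, expressed as an Ansible task; the filesystem ID and region are hypothetical:

```yaml
# Hypothetical filesystem ID and region. The options (1 MiB rsize/wsize,
# hard mounts, noresvport) follow AWS's recommended EFS mount settings.
- name: Mount the EFS filesystem over NFS v4.1.
  ansible.posix.mount:
    path: /mnt/efs
    src: fs-12345678.efs.us-east-1.amazonaws.com:/
    fstype: nfs4
    opts: nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,noresvport
    state: mounted
```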

Quick way to check if you're in AWS in an Ansible playbook

For many of my AWS-specific Ansible playbooks, some operations (e.g. installing the AWS Inspector agent, or special information lookups) should only run when the playbook is executed inside AWS, and should be skipped on a local test VM or in my CI environment.

In the past, I would set up a global playbook variable like aws_environment: False, and set it manually to True when running the playbook against live AWS EC2 instances. But managing vars like aws_environment gets tiresome: if you forget to set the correct value, a playbook run can fail.

So instead, I now use the existence of AWS's internal instance metadata URL as a check for whether the playbook is being run inside AWS.
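A minimal sketch of what that check looks like in a playbook (task and variable names are illustrative):

```yaml
- name: Check if the instance metadata URL is reachable (only true inside AWS).
  ansible.builtin.uri:
    url: http://169.254.169.254/latest/meta-data
    timeout: 2
  register: aws_uri_check
  failed_when: false

- name: Set aws_environment based on the check result.
  ansible.builtin.set_fact:
    aws_environment: "{{ aws_uri_check.status == 200 }}"
```

The short timeout keeps playbook runs fast outside AWS, where the 169.254.169.254 metadata address isn't reachable and the check simply falls through to a non-200 status.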

Self-Publish, don't write for a Publisher

I'm not a writer; I'm a software developer who communicates well. As a developer and software architect, I spend a lot of time evaluating solutions. There are often multiple good options, but I try to pick the best among them.

When I chose to write a book two years ago, I evaluated whether to self-publish or seek out a publisher. I spent a lot of time evaluating my options, and chose the self-publishing route.

Because I'm asked about this a lot, I decided to summarize my reasons in a blog post, both to posit why self-publishing is almost always the right option for a beginning author, and to challenge publishers to convince me I'm wrong.

Sending emails to multiple recipients with Amazon SES

After reading through a ton of Amazon SES documentation pages and forum topics about this issue, I finally found this post about the string list format, which helped me send an email to multiple recipients with Amazon SES's SendEmail API.

No matter how I tried to get this working, I kept receiving errors like InvalidParameter for the sender, Unexpected list element termination for the error code, etc.

Normally, when sending email, you can pass either a single address or multiple comma-separated addresses in one string, and you'll be fine.
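SES's SendEmail API, on the other hand, expects each recipient as a separate element in a string list, and rejects a single comma-separated string with errors like the ones above. A sketch of the working form, using the AWS CLI wrapped in an Ansible task, with hypothetical addresses:

```yaml
# Each --to value is a separate list element; cramming
# "first@example.com, second@example.com" into one string is what
# triggers the InvalidParameter-style errors.
- name: Send one message to multiple recipients through SES.
  ansible.builtin.command: >
    aws ses send-email
    --from "sender@example.com"
    --to "first@example.com" "second@example.com"
    --subject "Test message"
    --text "Testing delivery to multiple recipients."
```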