testing

Testing iperf through an SSH tunnel

I recently had a server with some bandwidth limitations (tested using scp and rsync -P), and I was wondering whether the problem was the data being transferred or the server's link speed.

The simplest way to debug and verify TCP performance is to install iperf3 and run an iperf speed test between the server and my computer.

On the server, you run iperf3 -s, and on the client (my computer, in this case), you run iperf3 -c [server ip].

But iperf3 requires port 5201 (by default) to be open on the server, and in many cases—especially if the server is inside a restricted environment and only accessible through SSH (e.g. through a bastion or limited to SSH connectivity only)—you won't be able to get that port accessible.

So in my case, I wanted to run iperf through an SSH tunnel. This isn't ideal, because you're testing the TCP performance through an encrypted connection. But in this case both the server and my computer are extremely new/fast, so I'm not too worried about the overhead lost to the connection encryption, and my main goal was to get a performance baseline.
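One way to set that up is with SSH local port forwarding, so the iperf3 client talks to a local port that SSH carries to the server. A sketch, assuming iperf3's default port 5201 (the user and hostname are placeholders):

# On the server (over a normal SSH session): start iperf3 in server mode.
iperf3 -s

# On my computer: forward local port 5201 to port 5201 on the server.
# -N keeps the tunnel open without running a remote command.
ssh -N -L 5201:localhost:5201 user@server

# On my computer, in another terminal: run the client against the local
# end of the tunnel.
iperf3 -c localhost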

How many AMD RX 7900 XTX's are defective?

I'm working on a project—a very dumb project, mind you—and I was trying to acquire the two current-gen flagship GPUs: an Nvidia RTX 4090, and an AMD Radeon RX 7900 XTX.

In some weird stroke of luck (it has been difficult to find either in stock), I was able to get one of each this week.

AMD RX 7900 XTX Reference Sapphire edition versus Nvidia RTX 4090 Gigabyte boxes

(lol at the size difference...)

Exorbitant price gouging aside, Nvidia retains the GPU performance crown this generation, as the 4090 blows past any competing card so far. But AMD's 7900 XTX was poised to be the best value in terms of price, performance, and efficiency (at least compared to any Nvidia offering).

Travis CI's new pricing plan threw a wrench in my open source works

I just spent the past 6 hours migrating some of my open source projects from Travis CI to GitHub Actions, and I thought I'd pause for a bit (12 hours into this project, probably 15-20 more to go) to jot down a few thoughts.

I am not one to look a gift horse in the mouth. For almost a decade, Travis CI made it possible for me to build—and maintain, for years—hundreds of open source projects.

I have built projects for Raspberry Pi, PHP, Python, Drupal, Ansible, Kubernetes, macOS, iOS, Android, Docker, Arduino, and more. And almost every single project I built got immediate integration with Travis CI.

Without that testing, and the ability to run tests on a schedule, I would have abandoned most of these projects. But with the testing, I'm able to keep up with build failures induced by bit rot over the years and review PRs more easily.

What went wrong with Travis CI?

From the outset, Travis CI was built to integrate with GitHub repositories and offer free open source CI. At one time it was showered with praise on Hacker News and elsewhere for its culture and ethos.

Getting colorized output from Molecule and Ansible on GitHub Actions for CI

For many of my new Ansible-based projects, I build my tests in Molecule, so I can easily run them locally or in CI. I've also started using GitHub Actions for these projects, because it's so easy to get started and it integrates well with GitHub repositories.

I'm actually going to talk about this strategy in my next Ansible 101 live stream, covering 'Testing Ansible playbooks with Molecule and GitHub Actions CI', but I also wanted to highlight one thing that helps me when reviewing or observing playbook and molecule output, and that's color.

By default, in an interactive terminal session, Ansible colorizes its output: failures get red, ok tasks get green, and changes get yellow. Warnings get magenta, which flags them well so you can go and fix them as soon as possible (keeping playbooks free of warnings is one core principle I advocate to make them maintainable and scalable).
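In CI there's no interactive TTY, so that color is normally lost. One way to keep it is to force color via environment variables in the workflow; a minimal sketch of a GitHub Actions job, with an illustrative layout (the two environment variables are the important part):

name: CI
on: [push, pull_request]

jobs:
  test:
    runs-on: ubuntu-latest
    env:
      ANSIBLE_FORCE_COLOR: 'true'  # Ansible colorizes output even without a TTY.
      PY_COLORS: '1'               # Molecule and other Python tooling do the same.
    steps:
      - uses: actions/checkout@v2
      # (Steps to install Ansible and Molecule are omitted for brevity.)
      - name: Run Molecule tests.
        run: molecule test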

Running a GitHub Actions workflow on schedule and other events

One thing that was not obvious when I was setting up GitHub Actions on the Ansible Kubernetes Collection repository was how to have a 'CI' workflow run both on pull requests and on a schedule. I like to have scheduled runs for most of my projects, so I can see if something starts failing because an underlying dependency changes and breaks my tests.

The documentation for on.schedule just has an example of a workflow running on a schedule:

on:
  schedule:
    # * is a special character in YAML so you have to quote this string
    - cron:  '*/15 * * * *'

Separately, there's documentation for triggering a workflow on events like a 'push' or a 'pull_request', but what I wanted was a single workflow that runs on both pull requests and a schedule.
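The two trigger types can simply be listed under the same on: block. A minimal sketch (the branch name and cron schedule here are assumptions, not necessarily what a given project should use):

on:
  push:
    branches:
      - master
  pull_request:
    branches:
      - master
  schedule:
    # Run weekly even if nothing changed, to catch breakage caused by
    # updates in underlying dependencies.
    - cron: '0 6 * * 1'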

Molecule fails on converge and says test instance was already 'created' and 'prepared'

I hit this problem every once in a while: I run molecule test or molecule converge (in this case, for a Kubernetes Operator I was building with Ansible), and Molecule says the instance is already created/prepared (even though it is not), then fails on the 'Gathering Facts' portion of the converge step.
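One way to get back to a clean slate is to wipe Molecule's cached instance state and start over. A sketch, assuming Molecule's default cache location under ~/.cache/molecule (the project directory name below is a placeholder):

# Ask Molecule to tear down whatever it thinks exists.
molecule destroy

# If converge still fails, remove the cached state for the project by hand.
# 'my-project' is a placeholder for the directory Molecule created.
rm -rf ~/.cache/molecule/my-project

# Then converge again from scratch.
molecule converge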

How to add integration tests to an Ansible collection with Molecule

Note: Ansible Collections are currently in tech preview. The details of this blog post may be outdated by the time you read this, though I will try to keep things updated if possible.

Ansible 2.8 and 2.9 introduced a new type of Ansible content, a 'Collection'. Collections are still in tech preview state, so things are prone to change.

Ansible Collections must be in a very specific path, like {...}/ansible_collections/{namespace}/{collection}/

You have to make sure your collection is in that specific path: a parent directory named ansible_collections, then a directory for the namespace, and finally a directory for the collection itself. I opened an issue in the Ansible issue queue asking if ansible-test can allow running tests in an arbitrary collection directory, and for Molecule itself, there's more of a 'meta' issue, Molecule and Ansible Collections.
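For example, one way to satisfy that layout for local testing (the namespace and collection names below are placeholders):

# Build the required directory structure; 'acme' and 'mycollection' are
# placeholder namespace and collection names.
mkdir -p ~/ansible_collections/acme/mycollection

# Put (or clone) the collection's source inside that directory, then run
# Molecule from within it.
cd ~/ansible_collections/acme/mycollection
molecule test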

How to add integration tests to an Ansible Collection with ansible-test

Note: Ansible Collections are currently in tech preview. The details of this blog post may be outdated by the time you read this, though I will try to keep things updated if possible.

Ansible 2.8 and 2.9 introduced a new type of Ansible content, a 'Collection'. Collections are still in tech preview state, so things are prone to change, but one thing that the Ansible team has been working on is improving ansible-test to be able to test modules, plugins, and roles in Collections (previously it was only used for testing Ansible core).

ansible-test currently requires your Collection be in a very specific path, either:

Cleaning up after adding files in Drupal Behat tests

I've been going kind of crazy covering a particular Drupal site I'm building in Behat tests—testing every bit of core functionality on the site. In this particular case, a feature I'm testing allows users to upload arbitrary files to an SFTP server, then Drupal shows those filenames in a streamlined UI.

I needed to be able to test the user action of "I'm a user, I upload a file to this directory, then I see the file listed in a certain place on the site."

These files are not managed by Drupal (e.g. they're not file field uploads), but if they were, I'd invest some time in resolving this issue in the drupalextension project: "When I attach the file" and Drupal temporary files.

Since they are just random files dropped on the filesystem, I needed to:

Testing the 'Add user' and 'Edit account' forms in Drupal 8 with Behat

On a recent project, I needed to add some behavioral tests to cover the functionality of the Password Policy module. I seem to be a sucker for pain, because I often choose to test the things there seems to be no documentation on, like the partially-JavaScript-powered password fields on the user account forms.

In this case, I was presented with two challenges:

  • I needed to run one scenario where a user edits his/her own password, and must follow the site's configured password policy.
  • I needed to run another scenario where an admin creates a new user account, and must follow the site's configured password policy for the created user's password.

So I came up with the following scenarios: