ssh

My own magic-wormhole relay, for zippier transfers

If you've ever had to transfer a file from one computer to another over the Internet with minimal fuss, there are a few options. You could use scp or rsync if you have SSH access. You could use Firefox Send, or Dropbox, or iCloud Drive, or Google Drive, and upload from one computer and download on the other.

But what if you just want to zap a file from point A to point B? Or what if—like me—you want to see how fast you can get an individual file from one place to another over the public Internet?

rsync 40 MB/second
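With magic-wormhole, the transfer itself is one command on each end, and pointing both ends at a self-hosted relay is just a couple of extra flags. Here's a minimal sketch (the relay hostname, ports, and filename below are all placeholders):

# On the sender: offer the file, using my own rendezvous and transit relay.
wormhole --relay-url ws://relay.example.com:4000/v1 \
  --transit-helper tcp:relay.example.com:4001 send archive.tar.gz

# On the receiver: run the matching command with the same relay flags,
# then type in the short code the sender's command printed.
wormhole --relay-url ws://relay.example.com:4000/v1 \
  --transit-helper tcp:relay.example.com:4001 receive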

Testing iperf through an SSH tunnel

I recently had a server with some bandwidth limitations (tested using scp and rsync -P), and I wondered whether the problem was the data being transferred or the server's link speed.

The simplest way to debug and verify TCP performance is to install iperf3 and run an iperf speed test between the server and my computer.

On the server, you run iperf3 -s; on my computer, I run iperf3 -c [server IP].

But iperf3 requires port 5201 (by default) to be open on the server, and in many cases—especially if the server is inside a restricted environment and only reachable through SSH (e.g. via a bastion host)—you won't be able to open that port.

So in my case, I wanted to run iperf through an SSH tunnel. This isn't ideal, because you're testing the TCP performance through an encrypted connection. But in this case both the server and my computer are extremely new/fast, so I'm not too worried about the overhead lost to the connection encryption, and my main goal was to get a performance baseline.
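The tunnel itself is a standard SSH local port forward; a minimal sketch (user@server is a placeholder) looks like this:

# Terminal 1: start the iperf3 server on the remote machine.
ssh user@server iperf3 -s

# Terminal 2: forward local port 5201 through SSH to the server's iperf3.
ssh -N -L 5201:localhost:5201 user@server

# Terminal 3: run the client against the local end of the tunnel.
iperf3 -c 127.0.0.1 -p 5201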

Using LibreELEC like a pro—management via SSH

For a recent project, I needed to install LibreELEC/Kodi on a Raspberry Pi Compute Module 4 with built-in eMMC storage.

Because it's inconvenient to keep swapping the Pi between the embedded display I was using it in and the carrier board I prefer for flashing Pis and working with their filesystems, I wanted to manage my LibreELEC install over SSH.

It seems like whatever documentation the LibreELEC Wiki used to have for remote SSH access is missing, and all I could find were references to enabling SSH during a GUI setup wizard. If you didn't see that during initial setup, the easiest way is to add ssh to the end of the line in the system's cmdline.txt file, then reboot.

So I pulled the Pi, used usbboot to mount the FAT32 boot volume on my Mac, opened cmdline.txt, and added ssh. Then I popped the Pi back into the embedded display and started it up.
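For reference, cmdline.txt is a single line of boot arguments; the exact values vary by install (the ones below are illustrative), and ssh at the end is the important addition:

boot=/dev/mmcblk0p1 disk=/dev/mmcblk0p2 quiet ssh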

Sure enough, I could now SSH in:

Using an Ansible playbook with an SSH bastion / jump host

I've set this up a number of times, but I just realized I've never documented it on my blog, so I thought I'd finally do that.

I have a set of servers that are running on a private network. That network is connected to the Internet through a single reverse proxy / 'bastion' host.

But I still want to be able to manage the servers on the private network behind the bastion from outside.

Method 1 - Inventory vars

The first way to do it with Ansible is to describe how to connect through the proxy server in Ansible's inventory. This is helpful for a project that might be run from various workstations or servers without the same SSH configuration (the configuration is stored alongside the playbook, in the inventory).

In my Ansible project, I had an inventory file like the following:
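A minimal version of the pattern looks like the following (the hosts, user, and bastion address here are hypothetical); ansible_ssh_common_args is what tells Ansible to hop through the bastion:

[private_servers]
10.0.100.1
10.0.100.2

[private_servers:vars]
ansible_user=deploy
# Reach each private server by proxying the connection through the bastion.
ansible_ssh_common_args='-o ProxyCommand="ssh -W %h:%p -q deploy@bastion.example.com"'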

SSH and HTTP to a Raspberry Pi behind CG-NAT

For a project I'm working on, I'll have a Raspberry Pi sitting behind a 4G LTE modem:

Raspberry Pi 4 with 4G LTE modem and antenna on desk

This modem is on AT&T's network, but regardless of the provider, unless you're willing to pay hundreds or thousands of dollars a month for a SIM with a public IP address, the Internet connection will be running behind CG-NAT.

What this means is there's no publicly routable address for the Pi—you can't access it from the public Internet, since it's only visible inside the cell network's private network.

There are a few different ways people have traditionally dealt with accessing devices running through CG-NAT connections:

  1. Using a VPN
  2. Using a one-off tool like ngrok
  3. Using reverse tunnels, often via SSH

And after weighing the pros and cons, I decided to go with option 3, since—for my needs—I want two ports open back to the Raspberry Pi: one for SSH and one for HTTP.
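From the Pi's side, that reverse tunnel is a single SSH invocation to a publicly reachable server (the hostname and forwarded port numbers here are placeholders):

# On the Pi: expose local SSH (22) and HTTP (80) as ports 2222 and 8080
# on the public server, without running a remote command (-N).
ssh -N -R 2222:localhost:22 -R 8080:localhost:80 tunnel@public-server.example.com

(To reach the forwarded ports from machines other than the public server itself, sshd's GatewayPorts option has to be enabled on that server.)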

Setting up a Pi for remote Internet connection monitoring

So... recently I acquired a Starlink 'Dishy', and I'm going to be installing it at a rural location near where I live. Since it's a bit of a drive to get there, I wanted to set up a Raspberry Pi to monitor the Starlink connection quality over time.

Internet monitoring dashboard in Grafana

I know the Starlink app has its own monitoring, but I like to have my own fancy independent monitoring in place too.

The wrinkle with a Starlink-based Internet connection, though, is that SpaceX uses Carrier-Grade NAT (CG-NAT) on their network, so there's no publicly routable IPv4 address I could reach the Pi at, nor does SpaceX yet have IPv6 set up on their network.

So to make remote access possible, I would have to find a way to have the Pi reach out to one of my servers with a persistent connection, then I could 'tunnel' through that server from other locations to reach the Pi.
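One common way to keep a connection like that alive unattended (a sketch of the general approach, not necessarily what this setup ends up using) is autossh, which re-establishes the SSH session whenever it drops; the hostname and ports below are placeholders:

# On the Pi: maintain a persistent reverse tunnel to a server I control,
# exposing the Pi's SSH port as 2222 on that server. -M 0 disables
# autossh's monitor port and relies on the ServerAlive options instead.
autossh -M 0 -f -N \
  -o "ServerAliveInterval 30" -o "ServerAliveCountMax 3" \
  -R 2222:localhost:22 tunnel@my-server.example.com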

Setting up a Mac mini from MacStadium for headless CI

I recently got an offer from MacStadium to use one of their dedicated Mac minis to perform CI and testing tasks for my Mac-based open source projects (for example, my Mac Dev Ansible Playbook, which I use to configure my own Macs).

Apple logo on glowy laptop background

So I thought I'd document a little bit in this blog post about how I configured the Mac mini for more secure remote administration, since Macs tend to be a little more 'open' out of the box than comparable Linux machines that I'm used to working with.

Securing SSH

First of all, I used ssh-copy-id to add my SSH key to the default administrator account on the Mac mini that was created for me:
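That looks something like the following (the account name and address are placeholders for the real ones MacStadium provided):

# Copy my default public SSH key into the remote account's authorized_keys.
ssh-copy-id administrator@macmini.example.com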

Idempotently adding an SSH key for a host to known_hosts file with bash

I noticed on one of the CI servers I'm running that the .ssh/known_hosts file had ballooned up to over 1,000,000 lines!

Looking into the root cause (I tailed the file until I could track down a few jobs that ran every minute), I found the following line in a setup script:

ssh-keyscan -t rsa github.com >> /var/lib/jenkins/.ssh/known_hosts

"This can't be good!" I told myself, and I decided to add a condition to make it idempotent (that is, able to be run once or one million times but only affecting change the first time it's run—basically, a way to change something only if the change is required):

if ! grep -q "^github.com" /var/lib/jenkins/.ssh/known_hosts; then
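  # Scan and append github.com's host key only when it's not already present.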
  ssh-keyscan -t rsa github.com >> /var/lib/jenkins/.ssh/known_hosts
fi

Now the host key for github.com is only scanned the first time the script runs, and it's stored in known_hosts just once... instead of millions of times!

Git gives 'ERROR: Repository not found.' when URL is correct and SSH key is used

I had a fun problem that made me spin my wheels for an hour or so today. In the morning, I had been cloning a remote repository a number of times without issue while debugging a Jenkins build job that runs a git clone + Docker image build and push operation.

Suddenly, when I was doing some final testing, I started to get the following:

git clone git@github.com:geerlingguy/my-project.git
Cloning into 'my-project'...
ERROR: Repository not found.
fatal: Could not read from remote repository.

Please make sure you have the correct access rights
and the repository exists.
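Two quick checks are useful when this happens: list the identities the SSH agent is actually holding, and ask GitHub's SSH endpoint which identity it sees:

# Show the keys currently loaded in the SSH agent.
ssh-add -l

# Test SSH authentication to GitHub; the reply names the authenticated
# user (or, for a deploy key, the repository it's attached to).
ssh -T git@github.com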

I know that I had the repository's SSH key loaded (via eval "$(ssh-agent -s)" && ssh-add ~/.ssh/deploy-key), and if I unloaded the key, I would instead get:

Cloning private GitHub repositories with Ansible on a remote server through SSH

One of Ansible's strengths is the fact that its 'agentless' architecture uses SSH for control of remote servers. And one classic problem in remote Git administration is authentication; if you're cloning a private Git repository that requires authentication, how can you do this while also protecting your own private SSH key (by not copying it to the remote server)?
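One common answer (a sketch of the general pattern, not necessarily the exact configuration this post settles on) is SSH agent forwarding: your private key stays on your workstation, and the remote server borrows your local ssh-agent for the duration of the connection. In Ansible, that can be enabled project-wide in ansible.cfg:

# ansible.cfg
[ssh_connection]
# Ansible's default options, plus agent forwarding. (Overriding ssh_args
# replaces the defaults, so they're repeated here.)
ssh_args = -C -o ControlMaster=auto -o ControlPersist=60s -o ForwardAgent=yes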

As an example, here's a task that clones a private repository to a particular folder:

- name: Clone a private repository into /opt.
  git:
    repo: git@github.com:geerlingguy/private-repo.git
    version: master
    dest: /opt/private-repo
    accept_hostkey: yes
  # ssh-agent doesn't allow key to pass through remote sudo commands.
  become: no

If you run this task, you'll probably end up with something like: