proxy

Using an Ansible playbook with an SSH bastion / jump host

I've set this up a number of times, but I just realized I've never documented it on my blog, so I thought I'd finally do that.

I have a set of servers that are running on a private network. That network is connected to the Internet through a single reverse proxy / 'bastion' host.

But I still want to be able to manage the servers on the private network behind the bastion from outside.

Method 1 - Inventory vars

The first way to do it with Ansible is to describe how to connect through the proxy server in Ansible's inventory. This is helpful for a project that might be run from various workstations or servers without the same SSH configuration (the configuration is stored alongside the playbook, in the inventory).

In my Ansible project, I had an inventory file like the following:
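
A minimal sketch of what that inventory looks like (the hostnames and the deploy user below are placeholders, not the real values): the private hosts get an ansible_ssh_common_args variable that tells SSH to hop through the bastion with ProxyCommand.

[private_servers]
app1.internal
app2.internal

[private_servers:vars]
ansible_user=deploy
ansible_ssh_common_args='-o ProxyCommand="ssh -W %h:%p -q deploy@bastion.example.com"'

With that in place, ansible and ansible-playbook connect to the private hosts transparently; no extra flags are needed on the command line.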

Nginx serving up the wrong site content for a Drupal multisite install with https

I had a 'fun' and puzzling scenario present itself recently as I finished moving more of my Drupal multisite installations over to HTTPS using Let's Encrypt certificates. I've been running this website—along with six other Drupal 7 sites—on an Nginx installation for years. A few of the multisite installs use bare domains (e.g. jeffgeerling.com instead of www.jeffgeerling.com), and because of that, I have some HTTP redirects in Nginx to make sure people always end up on the canonical domain (e.g. example.com instead of www.example.com).

My Nginx configuration is spread across multiple .conf files, e.g.:
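
Roughly like this, with one redirect block and one real server block per site (a simplified sketch; example.com stands in for the actual domains, and the SSL and PHP directives are trimmed):

# www.example.com.conf: redirect the www domain to the bare domain.
server {
    listen 80;
    listen 443 ssl;
    server_name www.example.com;
    # ssl_certificate / ssl_certificate_key directives omitted here.
    return 301 https://example.com$request_uri;
}

# example.com.conf: the canonical site.
server {
    listen 443 ssl;
    server_name example.com;
    root /var/www/example.com/web;
    # ssl_certificate, fastcgi_pass/PHP-FPM, and Drupal-specific config omitted.
}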

Stripping the 'Vary: Host' header from an Apache response using Varnish

A colleague of mine found out that many static resource requests that should have been cached upstream by a CDN were not being cached, and the reason was an extra Vary HTTP header being sent with the response—in this case, Host.

It was hard to reproduce the issue, but in the end we found out it was related to Apache bug #58231. Basically, since we used some RewriteConds that evaluated the HTTP_HOST value before a RewriteRule, we ran into a bug where Apache would dump a Vary: Host header into the response. When that header was set, it effectively bypassed Varnish's cache, as well as our upstream CDN... and since it applied to all image, CSS, JS, XML, etc. requests, we saw a lot of unexpected volume hitting the backend Apache servers.
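
For reference, the sort of rewrite that triggers the bug looks something like this (a sketch, not our actual rules; any RewriteCond that inspects %{HTTP_HOST} is enough):

# Redirect www to the bare domain; the HTTP_HOST condition is what causes
# affected Apache versions to append 'Vary: Host' to the response.
RewriteCond %{HTTP_HOST} ^www\.(.+)$ [NC]
RewriteRule ^ https://%1%{REQUEST_URI} [R=301,L]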

To fix the issue, at least until the upstream bug is fixed in Debian, we decided to strip Host from the Vary header inside our Varnish default.vcl. Inside the vcl_backend_response, we added:
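
Something along these lines (a sketch of the approach in Varnish 4 VCL, not the exact rules we shipped):

# Strip 'Host' out of the Vary header so the response stays cacheable.
if (beresp.http.Vary ~ "(?i)Host") {
    set beresp.http.Vary = regsub(beresp.http.Vary, "(?i),? *Host", "");
    # Tidy up any leading comma left behind, and drop the header if it's now empty.
    set beresp.http.Vary = regsub(beresp.http.Vary, "^[, ]+", "");
    if (beresp.http.Vary == "") {
        unset beresp.http.Vary;
    }
}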

Reverse-proxying a SOAP API accessed via PHP's SoapClient

I'm documenting this here, just because it's something I imagine I might have to do again someday... and when I do, I want to save myself hours of pain and misdirection.

A client had an old SOAP web service that used IP address whitelisting to authenticate/allow requests. The new PHP infrastructure was built using Docker containers and auto-scaling AWS instances. That created a problem: a request could come from any one of millions of different IP addresses, since auto-scaled instances draw from AWS's pool of addresses spread across a wide array of IP ranges.

The client couldn't change their API provider (at least not in any reasonable time frame), we didn't want to throw away the ability to auto-scale, and we also didn't want to build some sort of 'Elastic IP reservation system' just to draw from a pool of known/reserved IP addresses. So we had to find a way to get all of our backend SOAP API requests to come from one IP address.

The solution? Reverse-proxy all requests to the backend SOAP API.
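
Concretely, that means running one small proxy host with a single fixed, whitelisted IP address, and pointing every app container at it instead of at the vendor. A minimal Nginx sketch of the idea (hostnames are placeholders):

# soap-proxy.conf (sketch): pass everything through to the vendor's SOAP
# endpoint, so the vendor only ever sees this host's whitelisted IP.
server {
    listen 80;
    server_name soap-proxy.internal;

    location / {
        proxy_pass https://soap.vendor-api.example.com;
        proxy_set_header Host soap.vendor-api.example.com;
    }
}

On the PHP side, SoapClient's 'location' option can then point at the proxy host rather than at the vendor's real endpoint URL.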

Apache, fastcgi, proxy_fcgi, and empty POST bodies with chunked transfer

I've been working on building a reproducible configuration for Drupal Photo Gallery, a project born out of this year's Acquia Build Hackathon.

We originally built the site on an Acquia Cloud CD environment, which uses a pretty traditional LAMP stack. We didn't encounter any difficulty using AWS Lambda to post image data back to the Drupal site via Drupal's RESTful Web Services API.

The POST request is built in Node.js using:
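
The exact Lambda code isn't reproduced here, but the shape of the request is roughly this (a sketch using Node's built-in https module; the hostname, path, and payload are placeholders). Writing the body without a Content-Length header is what makes Node send the request with Transfer-Encoding: chunked, which is the behavior the rest of the post is about:

const https = require('https');

const body = JSON.stringify({ title: [{ value: 'My image node' }] });

const req = https.request({
  hostname: 'photos.example.com',
  path: '/node?_format=json',
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    // No Content-Length set, so Node falls back to chunked transfer encoding.
  },
}, (res) => {
  console.log('Drupal responded with status', res.statusCode);
});

req.write(body);
req.end();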

Connect to IRC via Adium when connected through an LTE hotspot

When I'm on the go, I like to use my iPhone 5s as a hotspot, as I get 10-20 Mbps up and down (much better than any public WiFi I've used), and it's a more secure connection than a public, unsecured hotspot.

However, when I open Adium, I'm greeted with:

Notice -- You need to identify via SASL to use this server

To fix this, I open a SOCKS tunnel on port 6667 of my Mac through one of my remote servers using SSH, then tell Adium to use that tunnel as a SOCKS5 proxy. If you need to do this, you can do the following:

  1. Set up a SOCKS tunnel on local port 6667 through a remote server ('example.com') to which you have SSH access. In Terminal, enter: ssh -D 6667 username@example.com
  2. In Adium, go to the IRC connection settings, and under Proxy, check the 'Connect using proxy' checkbox, choose 'SOCKS5' for Type, enter 'localhost' for Server, and '6667' for Port (see screenshot below).

Adium SOCKS5 proxy settings for IRC tunnel on port 6667

Use a Raspberry Pi running Raspbian OS behind a proxy server

I've been working on figuring out some interesting ways to use my revision A Raspberry Pi, and one of the things I'm doing with it requires it to work correctly behind a corporate proxy server. If you're in a similar situation, and need your Pi to work with a proxy server, it's simple to get set up:

You need to edit the ~/.profile file (where ~ is your home folder, e.g. /home/jeffgeerling), adding the following lines to the bottom of the file:

# Proxy server (example: http://username:password@example.com:8080). User/pass optional.
export http_proxy=http://[user]:[pass]@[proxy_server_address]:[port]

# Proxy exclusions (don't use the proxy server for these hostnames and IP addresses).
export no_proxy=localhost,127.0.0.0/8

If you'd also like the proxy to apply when running sudo commands and when using your Pi as the root user, you need to add the same configuration to /root/.profile (this would be helpful if you need to use sudo apt-get to install or update software packages).
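
For example, you could append the same lines to root's profile like so (a sketch; substitute the real proxy values from above):

# Append the same proxy exports to /root/.profile.
sudo sh -c 'cat >> /root/.profile' << 'EOF'
export http_proxy=http://[user]:[pass]@[proxy_server_address]:[port]
export no_proxy=localhost,127.0.0.0/8
EOF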

Git through an NTLM Proxy (Corporate Firewall) for drupal.org

Borrowing from answers in this Stack Overflow question, here's how you can get through a corporate (Microsoft) NTLM proxy to clone git repositories from drupal.org:

cd into your Drupal contrib directory (or wherever you want to put the repository).

$ export http_proxy="http://username:password@proxy:port/"

$ git clone http://git.drupal.org/project/[projectname]

Basically, you're first setting an environment variable to tell your shell to use an HTTP proxy, with your username/password combo. This variable will be used when making connections to git.drupal.org (and other services, like GitHub). You can also set this in your ~/.profile, .bashrc, or .bash_profile so it will be saved for future Terminal sessions.

Share a Proxied Network Connection via WiFi to your iPad/iPhone/iPod

For the past six weeks that I've had my iPad, I've fought with my office network, because it uses a Microsoft NTLM-authenticated proxy server, which wreaks havoc on the iPhone OS's ability to use the Internet effectively (especially for third-party apps).

After reading through countless forum support requests from people asking the same questions, I've finally found a (mostly) workable solution for this problem—at least for most apps and browsing on the iPad.

Doubling the Proxy

Since the iPhone OS seems to have a pretty hard time dealing with proxy authentication (most apps act as if there's no Internet connection at all, even though Safari will work through the proxy), I used a solution I often use on my Macs at work: doubling up the proxy.

Basically, you can use an application like Authoxy on the Mac to route all of the Mac's web traffic through a small local proxy, which Authoxy then massages so it works with your company's proxy server.

Gzip/mod_deflate not Working? Check your Proxy Server

Recently, I was troubleshooting performance issues on a few different websites, and was stymied by the fact that YSlow repeatedly reported an F for "Compress components with gzip," even though online tools like GIDNetwork's Gzip test reported successful gzipping of text components on the site.

(Screenshot: YSlow results, not very happy.)

After scratching my head for a while, I finally figured out the problem, hinted at by a comment on a Stack Overflow question. Our work's proxy server was blocking the 'Accept-Encoding' HTTP header that is sent along with every file request; this prevented gzipped transfer of any file, so YSlow gave an F.
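
An easy way to check for this (a sketch; swap in a real URL) is to request a text asset with curl and see whether a Content-Encoding: gzip header comes back:

$ curl -s -o /dev/null -D - -H 'Accept-Encoding: gzip' http://www.example.com/css/style.css | grep -i 'content-encoding'

If the proxy strips the Accept-Encoding request header on its way out, the response comes back without Content-Encoding: gzip.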

I set up a secure tunnel (using SSH) from my computer directly to the web server, then reloaded the page in Firefox and re-ran YSlow.
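
The tunnel itself is nothing fancy; a SOCKS tunnel along these lines (hostname and port are placeholders) is enough to bypass the office proxy for testing:

# Open a SOCKS proxy on local port 8080, tunneling through the web server.
$ ssh -D 8080 username@webserver.example.com

# Then point Firefox's SOCKS proxy setting at localhost:8080 and re-run YSlow.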