
Route local emails to another email address using Postfix on Linux

When I set up new servers, I like to make sure that any system messages, like cron failure notices or server issues, as well as any mail addressed to johndoe@example.com (where 'example.com' is the server's hostname, meaning mail to that domain is routed through the server itself rather than to an external MX server, unless Postfix/sendmail is configured otherwise), are sent to my own email address.

It's relatively straightforward to route email addressed to internal users (like webmaster, root, etc.) to an external email address; you simply need to edit the /etc/aliases file, adding a rule like the one below, then run the command sudo newaliases:

webmaster: root

# Person who should get root's mail
root: youremail@example.com
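
Once the alias is in place, a quick way to confirm it's working is to send a test message to root and check that it arrives at the external address. This assumes a command-line mail client like mailx is installed:

# Rebuild the aliases database, then send a test message to the root alias.
sudo newaliases
echo "Alias test from $(hostname)" | mail -s "Alias test" root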

Diagnosing Disk I/O issues: swapping, high IO wait, congestion

On one small LEMP VPS I manage, I noticed Munin graphs showing anywhere between 5-50 MB/second of disk I/O. Since the VM has an SSD instead of a traditional spinning hard drive, performance wasn't too bad, but all that disk I/O definitely slowed things down.

I wanted to figure out the source of all the disk I/O, so I used the following techniques to narrow down the culprit (spoiler: it was MySQL, which was using some swap space because it was tuned to use a little too much memory).

iotop

First up was iotop, a handy top-like utility for monitoring disk IO in real-time. Install it via yum or apt, then run it with the command sudo iotop -ao to see an aggregated summary of disk IO over the course of the utility's run. I let it sit for a few minutes, then checked back in to find:
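
For reference, the install and invocation look something like this (the package is named iotop under both yum and apt):

# RHEL/CentOS
sudo yum install iotop
# Debian/Ubuntu
sudo apt-get install iotop

# -a shows accumulated I/O since iotop started; -o hides processes doing no I/O.
sudo iotop -ao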

A brief history of SSH and remote access

This post is an excerpt from Chapter 11: Server Security and Ansible, in Ansible for DevOps.

In the beginning, computers were the size of large conference rooms. A punch card reader would merrily accept pieces of paper that instructed the computer to do something, and then a printer would etch the results into another piece of paper. Thousands of mechanical parts worked harmoniously (when they did work) to compute relatively simple commands.

As time progressed, computers became somewhat smaller, and interactive terminals became more user-friendly, but they were still wired directly into the computer being used. Mainframes came to the fore in the 1960s, originally used via typewriter and teletype interfaces, then via keyboards and small text displays. As networked computing became more mainstream in the 1970s and 1980s, remote terminal access was used to interact with the large central computers.

Quick logrotate example for Apache logs and some gotchas

On one server, I have a custom directory where all the Apache (httpd) error and access logs are written, one set per virtualhost, and I noticed the folder had grown to multiple gigabytes in size (found using du -h --max-depth=1). In this situation, there's a handy utility on pretty much every Linux/UNIX system called logrotate that is made to help ensure log files don't grow too large. It periodically copies and optionally compresses the log files and deletes old logs, on a daily, monthly, or other schedule.

For this server, to quickly fix the problem of growing-too-large log files, I added a file 'httpd-custom' at /etc/logrotate.d/httpd-custom, with the following contents:

/home/user/log/httpd/*log
/home/user/log/httpd/*err
{
    rotate 5
    size 25M
    missingok
    notifempty
    sharedscripts
    compress
    postrotate
        /sbin/service httpd reload > /dev/null 2>/dev/null || true
    endscript
}
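
Before relying on the new configuration, it's worth doing a quick test run; assuming the file was saved at /etc/logrotate.d/httpd-custom as described above:

# Dry run: show what logrotate would do without actually rotating anything.
sudo logrotate -d /etc/logrotate.d/httpd-custom

# Force an immediate rotation to confirm the postrotate reload works.
sudo logrotate -f /etc/logrotate.d/httpd-custom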

Use a Raspberry Pi running Raspbian OS behind a proxy server

I've been working on figuring out some interesting ways to use my revision A Raspberry Pi, and one of the things I'm doing with it requires it to work correctly behind a corporate proxy server. If you're in a similar situation, and need your Pi to work with a proxy server, it's simple to get set up:

You need to edit the ~/.profile file (where ~ is your home folder, e.g. /home/jeffgeerling), adding the following lines to the bottom of the file:

# Proxy server (example: http://username:password@10.0.0.1:8080). User/pass optional.
export http_proxy=http://[user]:[pass]@[proxy_server_address]:[port]

# Proxy exclusions (don't use the proxy server for these hostnames and IP addresses).
export no_proxy=localhost,127.0.0.0/8

If you'd also like the proxy to apply when running sudo commands and when using your Pi as the root user, you need to add the same configuration to /root/.profile (this would be helpful if you need to use sudo apt-get to install or update software packages).
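
To pick up the new settings in the current session and confirm the proxy is being used, something like the following should work (assuming curl is installed; it honors the http_proxy variable):

# Reload the profile, check the variable, and make a test request through the proxy.
source ~/.profile
echo $http_proxy
curl -I http://www.raspberrypi.org/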

Make sure your Linux servers' date and time are correct and synchronized

Nowadays, most people assume that all modern computers and operating systems have network time synchronization set up properly and switched on by default. However, this is not the case with many Linux servers—especially if you didn't install Linux and configure it yourself (as would be the case with most cloud-based OS images like those used to generate new servers on Linode).

After setting up a new server on Linode or some other Linux VPS or dedicated server provider, you should always do the following to make sure the server's timezone and date and time synchronization are configured and working correctly:
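
The exact commands vary by distribution, but as a rough sketch on a Debian/Ubuntu-style server, the checks look something like this:

# Check the current date, time, and timezone.
date

# Reconfigure the timezone if it's wrong (Debian/Ubuntu).
sudo dpkg-reconfigure tzdata

# Install and start an NTP daemon so the clock stays synchronized.
sudo apt-get install ntp
sudo service ntp start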

Simple iptables rules for a typical LAMP server

I've seen a ton of iptables configurations on the Internet, and none of them really got to the heart of what I need to do for the majority of my LAMP-based web servers (hosted on Linode, HostGator, Hot Drupal, and elsewhere). For these servers, I just need a really simple set of rules that restricts all incoming traffic except for web (port 80/443 for http/https traffic), ssh (usually port 22), smtp (port 25), and icmp ping requests.

The script below does just that. Save it as 'firewall.bash', chmod u+x it to make it executable, and run it with $ sudo /path/to/firewall.bash, then test your server (access websites, log on to it from another Terminal session, ping it, etc.) to make sure everything still works:
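
A minimal sketch of such a script, based on the rules described above (not necessarily the exact original script; adjust ports to match your setup):

#!/bin/bash
# Simple iptables firewall for a LAMP server: allow loopback, established
# connections, web, SSH, SMTP, and ping; drop all other incoming traffic.

# Flush any existing rules.
iptables -F

# Allow loopback traffic and already-established connections.
iptables -A INPUT -i lo -j ACCEPT
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT

# Allow http/https, ssh, and smtp.
iptables -A INPUT -p tcp --dport 80 -j ACCEPT
iptables -A INPUT -p tcp --dport 443 -j ACCEPT
iptables -A INPUT -p tcp --dport 22 -j ACCEPT
iptables -A INPUT -p tcp --dport 25 -j ACCEPT

# Allow icmp ping requests.
iptables -A INPUT -p icmp --icmp-type echo-request -j ACCEPT

# Default policies: drop other incoming traffic, allow outgoing.
iptables -P INPUT DROP
iptables -P FORWARD DROP
iptables -P OUTPUT ACCEPT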

Problems copying a huge Aperture library from one drive to another

I've often had trouble copying files with Mac OS X's Finder. From back in the Mac OS X Public Beta days (when the OS was newly built on NeXT's foundations), hard drive to hard drive copies, network copies, and backups have often had strange quirks, and one of the strangest I've yet found happened yesterday when I tried copying a ~170GB Aperture library from one external USB drive to another.

I tried copying the library three times, and each time the copy would get to about 24GB, the hard drive (from which the library was being copied) would make a loud CLICK, then unmount and remount, stopping the library file copy. This particular drive had never had trouble in the past, and the fact that it kept doing the CLICK-and-die thing at 24GB suggested that a file problem or a Finder bug was causing the issue.

Arrow and Command Keys Not working in Ubuntu 10.04 for non-root Account

For some time, I was having trouble getting the arrow keys to function correctly in my terminal sessions when logging into one of my remote Linode servers running Ubuntu 10.04. Whenever I pressed an arrow key, instead of moving the cursor or going up and down the command history, I would get a string of gibberish like ^[[A^[[B^[[D^[[C. Not very helpful!

So, after some searching, I found that the cause of this is an incorrect shell being set in the passwd file. To fix the problem, simply edit the /etc/passwd file and change the final string (after the last :) to /bin/bash (it is set to /bin/sh if you create a user via the command line/useradd):

$ sudo nano /etc/passwd

Change this:
:x:1000:1000::/home//:/bin/sh

to this:
:x:1000:1000::/home//:/bin/bash

...and then save the file, log out, and log back in. Problem solved!
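
As an alternative to hand-editing /etc/passwd, the chsh utility makes the same change; the username below is just a placeholder:

# Set bash as the login shell for the given user.
sudo chsh -s /bin/bash username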