
HTGWA: Partition, format, and mount a large disk in Linux with parted

This is a simple guide, part of a series I'll call 'How-To Guide Without Ads'. In it, I'm going to document how I partition, format, and mount a large disk (2TB+) in Linux with parted.

Note that newer fdisk versions may work better with giant drives... but since I'm now used to parted I'm sticking with it for the foreseeable future.

List all available drives

$ sudo parted -l
...
Error: /dev/sda: unrecognised disk label
Model: ATA Samsung SSD 870 (scsi)                                         
Disk /dev/sda: 8002GB
Sector size (logical/physical): 512B/512B
Partition Table: unknown
Disk Flags:

Good. I just plugged in that SSD, and it's brand new, so it doesn't have a partition table, a label, or anything else on it. It's the one I want to operate on, and it's located at /dev/sda. I could also find that info with lsblk.
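If you prefer lsblk, something like the following shows the same overview (using /dev/sda from the example above; adjust the device name for your system):

$ lsblk
$ lsblk -o NAME,SIZE,FSTYPE,MOUNTPOINT /dev/sda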

Limiting disk iops on a larger Munin server using rrdcached

I've long used Munin for basic resource monitoring on a huge variety of servers. It's simple, reliable, easy to configure, and besides the fact that it uses Perl for plugins, there's not much against it!

Last week, I got a notice from my 'low end box' VPS provider that my Munin server (which aggregates data from about 50 other servers) had high IOPS, and would be shut down if I didn't get it back under the allowed threshold. Most low end VPSes run things like static HTML websites, so disk I/O is very low on average. I checked my Munin instance, and sure enough, it was constantly churning through around 50 IOPS. On a low end server, that can cause high iowait for other tenants of the same host, so I can understand why hosting providers don't want applications on their shared servers doing a lot of constant disk I/O.

Using iotop, I could see the munin-update processes were spending a lot of time writing to disk, and Munin's own diskstats_iops plugin showed the same thing.
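To see that kind of per-process disk activity yourself, an iotop invocation along these lines works well (-o shows only processes doing I/O, -P groups by process rather than thread, -a accumulates totals):

$ sudo iotop -o -P -a

The fix the post title refers to is, roughly, to batch RRD writes through rrdcached instead of letting munin-update hit the disk on every update. A minimal sketch, with the paths, timings, and socket group below being assumptions to adapt to your distro:

# Run rrdcached with a long write delay, journaling updates in between
$ rrdcached -s munin -m 0660 -l unix:/var/run/rrdcached.sock \
    -b /var/lib/munin -B -j /var/lib/rrdcached/journal \
    -w 1800 -z 1800 -F

# Then point Munin at the socket in /etc/munin/munin.conf:
rrdcached_socket /var/run/rrdcached.sock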

Mount a Raspberry Pi SD card on a Mac (read-only) with osxfuse and ext4fuse

So you're telling me I can read files from a Raspberry Pi microSD card?

For my Raspberry Pi Time-Lapse App, I find myself having to either copy hundreds (or thousands!) of 3+ MB image files, or a 1-2 GB video file from a Raspberry Pi Zero W to my Mac.

Copying over the WiFi network works, but it's extremely slow (usually topping out around 5 Mbps... which means it could take a couple hours to copy). So I decided to finally try to mount the Raspberry Pi's drive directly on my MacBook Pro (running macOS Sierra 10.12). This is normally a bit tricky, because the Raspberry Pi uses the Linux ext4 filesystem—which is not compatible with either macOS or Windows!
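The general approach looks something like this (the Homebrew package names are from when this was written, and /dev/disk2s2 is just an example; check diskutil list for your card's actual identifier):

# Install osxfuse and ext4fuse via Homebrew (osxfuse was a cask at the time)
$ brew cask install osxfuse
$ brew install ext4fuse

# Find the SD card's ext4 partition, then mount it read-only
$ diskutil list
$ sudo mkdir -p /Volumes/rpi
$ sudo ext4fuse /dev/disk2s2 /Volumes/rpi -o allow_other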

Resizing a VirtualBox Disk Image (.vmdk) on a Mac

Every now and then, a project I'm managing through Vagrant (using either a box I built myself using Packer, or one of the many freely available Vagrant Boxes) needs more than the 8-12 GB that's configured for the disk image by default. Often, you can find ways around increasing the disk image size (like proxying file storage, mounting a shared folder, etc.), but sometimes it's just easier to expand the disk image.

Unfortunately, VBoxManage's modifyhd --resize option (modifymedium --resize in newer VirtualBox releases) doesn't work with .vmdk disk images (the default format used with Vagrant boxes in VirtualBox). Luckily, you can easily clone the image to a .vdi image (which can be resized), then either use that image or convert it back to a .vmdk image. Either way, you can expand your virtual disk image to whatever size you want (up to the available free space on your physical drive, of course!).

Here's how:

1 - Convert and resize the disk image

First, vagrant halt (or otherwise shut down) your VM, then run the following in Terminal or on the command line:
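Something like the commands below does the conversion and resize; the .vmdk filename and the 20480 MB target size are just examples, and older VirtualBox releases use clonehd/modifyhd in place of clonemedium/modifymedium:

# Clone the .vmdk to a resizable .vdi image
$ VBoxManage clonemedium disk box-disk1.vmdk box-disk1.vdi --format VDI

# Resize the new .vdi (size is given in MB, so 20480 = 20 GB)
$ VBoxManage modifymedium disk box-disk1.vdi --resize 20480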

Quick logrotate example for Apache logs and some gotchas

On one server, I have a custom directory where all the Apache (httpd) error and access logs are written, one set per virtualhost. I noticed the folder had grown to multiple gigabytes in size (found using du -h --max-depth=1). Luckily, there's a handy utility on pretty much every Linux/UNIX system called logrotate that is made to help ensure log files don't grow too large. It periodically copies and optionally compresses the log files and deletes old logs, daily, monthly, or on other schedules.

For this server, to quickly fix the problem of growing-too-large log files, I added a file 'httpd-custom' at /etc/logrotate.d/httpd-custom, with the following contents:

/home/user/log/httpd/*log
/home/user/log/httpd/*err
{
    rotate 5
    size 25M
    missingok
    notifempty
    sharedscripts
    compress
    postrotate
        /sbin/service httpd reload > /dev/null 2>/dev/null || true
    endscript
}
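Before relying on a new rule like this, it's worth a dry run; logrotate's debug and force flags make that easy (the path matches the file created above):

# Dry run: show what logrotate would do, without touching any files
$ sudo logrotate -d /etc/logrotate.d/httpd-custom

# Force an immediate rotation to make sure the postrotate script works
$ sudo logrotate -f /etc/logrotate.d/httpd-custom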

Some notes:

2013 VPS Benchmarks - Linode, Digital Ocean, Hot Drupal

Every year or two, I like to get a good overview of different hosting providers' VPS performance, and from time to time, I move certain websites and services to a new host based on my results.

In the past, I've stuck with Linode for many services that weren't intense on disk operations (their end-to-end UX and raw server performance are great!), and Hot Drupal for some sites that required high-performance IO (since Hot Drupal's VPSes use SSDs and are very fast). This year, though, after Digital Ocean jumped into the VPS hosting scene, I decided to give them a look.

Before going further, I thought I'd give a few quick benchmarks from each of the providers; these are all on middle-range plans (1 or 2GB RAM), and with the exception of Linode, the disks are all SSD, so should be super fast:

Disk Performance

[Chart: Disk performance comparison across the three providers]
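For reference, a quick-and-dirty sequential write test along these lines is a common way to compare VPS disks (the file name and the roughly 1 GB size are arbitrary; conv=fdatasync makes dd flush to disk before reporting a speed):

$ dd if=/dev/zero of=ddtest bs=64k count=16k conv=fdatasync
$ rm ddtest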