
HTGWA: How to completely erase a hard drive in Linux

This is a simple guide, part of a series I'll call 'How-To Guide Without Ads'. In it, I'll show you how I completely initialize a hard drive so I can re-use it somewhere else (like Ceph) that doesn't like drives with partition information!

First, a warning: this blog post does not show how to zero a hard drive, or secure erase. That's a slightly different process.

But as someone with way too many storage devices (from testing, mostly), I often find myself trying to use a spare drive somewhere that expects a brand-new drive, only to have the process fail because the drive still has a partition table or valid boot files left over from an SBC or something.

I wanted to document the easiest way in Linux to completely reset a hard drive—at least from Linux's perspective.

The impetus was when I was trying to get some hard drives added to a Ceph OSD, and the process that tried to add them failed with the error RuntimeError: Device /dev/sda has partitions.
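
For reference, clearing the old partition signatures is usually all it takes to make a drive look 'new' again to a tool like Ceph. Here's a minimal sketch of the kind of commands involved (not necessarily the exact steps from the full guide), assuming the offending drive is /dev/sda and holds nothing you want to keep:

# Destructive! Double-check the device name with lsblk first.
$ sudo wipefs --all /dev/sda

# Or zap both the GPT and MBR data structures with sgdisk (from the gdisk package):
$ sudo sgdisk --zap-all /dev/sda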

First look: ASUSTOR's new 12-bay all-M.2 NVMe SSD NAS

Last year, after I started a search for a good out-of-the-box all-flash-storage setup for a video editing NAS, I floated the idea of an all-M.2 NVMe NAS to ASUSTOR. I am not the first person with the idea, nor is ASUSTOR the first prebuilt NAS company to build one (that honor goes to QNAP, with their TBS-453DX).

But I do think the concept can be executed to suit different needs—like in my case, video editing over a 10 Gbps network with minimal latency for at least one concurrent user with multiple 4K streams and sometimes complex edits, without lower-bitrate transcoded media (e.g. ProRes RAW).

ASUSTOR Flashstor 12 Pro - front and top

Using PiBenchmarks.com for SBC disk performance testing

For many years, I've maintained some scripts to do basic disk benchmarking for SBCs, to test 1M and 4K sequential and random access speeds, since those are the two most relevant tests for the Linux workloads I run on my Pis.

I've been using this script for years, and it uses fio and iozone to get the metrics I need.

And from time to time, I would test a number of microSD cards on the Pi, or run tests on NVMe SSDs on the Pi, Rock 5 model B, or other SBCs. But my results were usually geared towards a single blog post or a video project.

In 2021 James Chambers set up PiBenchmarks to move to a more community-driven testing dataset.

You can run the following command on your SBC to test the boot storage and upload results directly to PiBenchmarks.com:
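
# The storage benchmark script lives in the TheRemote/PiBenchmarks repo on GitHub;
# this is the usual one-liner (verify the current command at pibenchmarks.com
# before piping anything to bash):
$ curl https://raw.githubusercontent.com/TheRemote/PiBenchmarks/master/Storage.sh | sudo bash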

Building a fast all-SSD NAS (on a budget)

All SSD Edit NAS build - completed

I edit videos non-stop nowadays. In a former life, I had a 2 TB backup volume that stored my entire digital life—all my photos, family video clips, and every bit of code and text I'd ever written.

Video is a different beast, entirely.

Every minute of 4K ProRes LT footage (a very lightweight format compared to RAW) takes up about 3 GB. A typical video I produce has 30-60 minutes of raw footage, which brings the total project size up to around 100-200 GB.

HTGWA: Use bcache for SSD caching on a Raspberry Pi

This is a simple guide, part of a series I'll call 'How-To Guide Without Ads'. In it, I'm going to document how I set up bcache on a Raspberry Pi, so I could use an SSD as a cache in front of a RAID array.

Getting bcache

bcache is sometimes used on Linux to run an SSD as an efficient cache in front of one or more slower hard drives—typically in a storage array.

In my case, I have three SATA hard drives: /dev/sda, /dev/sdb, and /dev/sdc. And I have one NVMe SSD: /dev/nvme0n1.

I created a RAID5 array with mdadm across the three hard drives, which gave me the RAID device /dev/md0.
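
For context, the mdadm invocation for that step looks something like this (a sketch rather than my exact command; creating the array wipes whatever is on those three drives):

$ sudo mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sda /dev/sdb /dev/sdc

# Watch the initial sync progress:
$ cat /proc/mdstat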

I then installed bcache-tools:

$ sudo apt-get install bcache-tools

And used make-bcache to create the backing and cache devices:
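
# A sketch assuming the devices above (my exact flags may have differed):
# -C marks the cache device (the NVMe SSD), -B the backing device (the RAID array);
# specifying both in one command attaches them automatically.
$ sudo make-bcache -C /dev/nvme0n1 -B /dev/md0

# The cached device then shows up as /dev/bcache0, ready to format and mount
# (the mount point here is just an example):
$ sudo mkfs.ext4 /dev/bcache0
$ sudo mount /dev/bcache0 /mnt/storage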

Kubesail's PiBox mini 2 - 16 TB of SSD storage on a Pi

Kubesail Raspberry PiBox mini 2 front side exposed

Many months ago, when I was first testing different SATA cards on the Raspberry Pi Compute Module 4, I started hearing from GitHub user PastuDan about his experiences testing a few different SATA interface chips on the CM4.

As it turns out, he was working on the design for the PiBox mini 2, a small two-drive NAS unit powered by a Compute Module 4 with 2 native SATA ports (providing data and power), 1 Gbps Ethernet, HDMI, USB 2, and a front-panel LCD for information display.

The Hardware

The PiBox mini 2 is powered by the Compute Module 4 on this interesting carrier board:

PiBox mini carrier board with Raspberry Pi Compute Module 4

Raspberry Pi OS now has SATA support built-in

After months of testing various SATA cards on the Raspberry Pi Compute Module 4, the default Raspberry Pi OS kernel now includes SATA support out of the box.

SATA card and Samsung SSD with Raspberry Pi Compute Module 4 IO Board

In the past, if you wanted to use SATA hard drives or SSDs and get native SATA speeds, and be able to RAID them together for redundancy or performance, you'd have to recompile the Linux kernel with SATA and AHCI support.

Sure, you could always use hard drives and SSDs with SATA-to-USB adapters, but you sacrifice 10-20% of the performance, and can't RAID them together, at least not without some hacks.
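
If you're not sure whether a drive is attached over native SATA or through a USB adapter, a quick generic sanity check (nothing Pi-specific here) is to look at the transport column in lsblk and at the kernel log:

# TRAN shows 'sata' for native SATA links and 'usb' for adapter-attached drives:
$ lsblk -o NAME,TRAN,SIZE,MODEL

# The AHCI driver logs detected SATA links at boot:
$ sudo dmesg | grep -i ahci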

There's a video version of this post: SATA support is now built into Raspberry Pi OS!

Trying KIOXIA CM6 and PM6 Enterprise SSDs on a Raspberry Pi

Late last year, an engineer at Broadcom sent me some hardware and offered some help getting Broadcom's MegaRAID card working on the Raspberry Pi. It took some time, but eventually we were able to get the card and a demonstrator 'UBM' backplane working on the Pi, and it culminated in my post about Hardware RAID on the Pi and a livestream where I got 16 hard drives working on a Pi.

The one thing I couldn't test in those earlier videos was the backplane and storage card's 'Tri-mode' support, allowing PCI Express NVMe drives—like KIOXIA's CM6—to work in the same slot as the SATA and SAS drives I was used to testing.

So after some conversation with reps at KIOXIA, I was able to get a PM6 and three CM6 drives on loan to test them:

KIOXIA CM6 and PM6 SSD with Raspberry Pi Compute Module 4

The Raspberry Pi can boot off NVMe SSDs now

When the Compute Module 4 was released (see my CM4 review here), I asked the Pi Foundation engineers when we might be able to boot off NVMe storage, since it was trivially easy to use with the exposed PCIe x1 lane on the CM4 IO Board.

The initial response in October 2020 was "we'll see". Luckily, after more people started asking about it, beta support was added for direct NVMe boot just a couple weeks ago.
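
At the time, trying it out meant running a beta bootloader EEPROM and putting NVMe ahead of the SD card in the boot order. Roughly, the tweak looks like this (a sketch from memory; check the official Raspberry Pi bootloader documentation for the current values):

# Open the bootloader config in an editor:
$ sudo rpi-eeprom-config --edit

# Then put NVMe (6) first in BOOT_ORDER (digits are tried right to left), e.g.:
BOOT_ORDER=0xf416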

MirkoPC with SN750 WD_BLACK NVMe SSD and Raspberry Pi Compute Module 4

Building the World's Tiniest NVMe RAID Array

Just posting to the blog for reference; I recently posted a video on YouTube in which I built (what I believe to be) the world's tiniest NVMe SSD RAID array, using the Raspberry Pi Compute Module 4 and three diminutive WD SN520 NVMe drives (M.2 2230 size, making each about the size of a quarter):


I ran some benchmarks in RAID 5 and RAID 0, as well as one drive by itself, and found one surprising thing: the Pi's overall I/O bandwidth is already saturated by just one drive, so putting NVMe disks in RAID doesn't really help with performance the way it does with slower spinning hard drives.
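
For anyone wanting to run a similar test, here's a hedged sketch of a sequential-read benchmark with fio (the exact job parameters I used in the video may differ; /dev/md0 is assumed to be the RAID device):

# --readonly ensures fio never writes to the array:
$ sudo fio --name=seq-read --filename=/dev/md0 --readonly --rw=read --bs=1M \
    --direct=1 --ioengine=libaio --iodepth=16 --runtime=30 --time_based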