storage

Building a tiny 6-drive M.2 NAS with the Rock 5 model B

As promised in my video comparing SilverTip Lab's DIY Pocket NAS to the ASUSTOR Flashstor 12 Pro, this blog post outlines how I built a 6-drive M.2 NAS with the Rock 5 model B.

The Rockchip RK3588 SoC on the Rock 5 packs an 8-core CPU (4x A76, 4x A55, in a 'big.LITTLE' configuration). This SoC powers a PCIe Gen 3 x4 M.2 slot on the back, which is used in this tiny 6-drive design to make a compact, but fast, all-flash NAS:

6-bay Rock 5 NAS

First look: ASUSTOR's new 12-bay all-M.2 NVMe SSD NAS

Last year, after I started a search for a good out-of-the-box all-flash storage setup for a video editing NAS, I floated the idea of an all-M.2 NVMe NAS to ASUSTOR. I am not the first person with the idea, nor is ASUSTOR the first prebuilt NAS company to build one (that honor goes to QNAP, with their TBS-453DX).

But I do think the concept can be executed to suit different needs—like in my case, video editing over a 10 Gbps network with minimal latency, for at least one concurrent user working with multiple 4K streams and sometimes complex edits, using native camera media (e.g. ProRes RAW) rather than lower-bitrate transcoded proxies.

ASUSTOR Flashstor 12 Pro - front and top

Answering Questions about the PetaPi

A few weeks ago, I posted a video about the Petabyte Pi Project—an experiment to see if a single Raspberry Pi Compute Module 4 could directly address sixty 20TB hard drives, totaling 1.2 Petabytes.

Petabyte of Seagate Exos Hard Drives

And in that video, it did, but with a caveat: RAID was unstable. For some reason, after writing 2 or 3 GB of data at a time, one of the HBAs I was using would flake out and reset itself, due to PCI Express bus errors.

The Petabyte Pi Project

I haven't had time to write up the details yet, but I wanted to share a project that's been many months in the making: The Petabyte Pi Project on YouTube.

I'm still doing follow-up testing based on feedback from Broadcom storage engineers, and will put out a much more in-depth blog post later, but the gist is:

Can a single Raspberry Pi cosplay as an 'enterprise' storage server, directly addressing 1 PB of storage?

Now... caveats abound here. What does 'enterprise' mean? And what does 'directly addressing' mean? Those things are all answered in the video linked above.

But to give a tl;dr: The Pi does not perform swimmingly. But... I did get a single array of 60 hard drives—20TB Exos HDDs to be exact—working in a 45Drives Storinator XL60 chassis, controlled only through a single Raspberry Pi Compute Module 4. Of course I had to rip out the Xeon guts and replace them with said Pi:

Hardware RAID on the Raspberry Pi CM4

A few months ago, I posted a video titled Enterprise SAS RAID on the Raspberry Pi... but I never actually showed a SAS drive in it. And soon after, I posted another video, The Fastest SATA RAID on a Raspberry Pi.

Broadcom MegaRAID SAS storage controller HBA with HP 10K drives and Raspberry Pi Compute Module 4

Well now I have actual enterprise SAS drives running on a hardware RAID controller on a Raspberry Pi, and it's faster than the 'fastest' SATA RAID array I set up in that other video.

A Broadcom engineer named Josh watched my earlier videos and realized the ancient LSI card I was testing was unlikely to ever work with the ARM processor in the Pi, so he sent two pieces of kit my way:

Building the fastest Raspberry Pi NAS, with SATA RAID

Since the day I received a pre-production Raspberry Pi Compute Module 4 and IO Board, I've been testing a variety of PCI Express cards with the Pi, and documenting everything I've learned.

The first card I tested after completing my initial review was the IO Crest 4-port SATA card pictured with my homegrown Pi NAS setup below:

Raspberry Pi Compute Module 4 with IOCrest 4-port SATA card and four Kingston SSDs

But testing took a long time, as I wanted to get a feel for how the Raspberry Pi handled a variety of storage situations, including single hard drives and SSDs, as well as RAID arrays built with mdadm.

I also wanted to measure thermal performance and energy efficiency, since the end goal is to build a compact Raspberry Pi-based NAS that is competitive with any other budget NAS on the market.

Expanding K8s PVs in EKS on AWS

If that post title isn't a mouthful...

I'm excited to be moving a few EKS clusters into real-world production use after a few months of preparation. Besides my Raspberry Pi Dramble project (which is pretty low-key), these are the only production-grade Kubernetes clusters I've dealt with—and I've learned a lot. Enough that I'm working on a new book.

Anyways, back to the main topic: As of Kubernetes 1.11, you can auto-expand PVs from most cloud providers, AWS included. And since EKS now runs Kubernetes 1.11.x, you can have your EBS PVs automatically expand just by increasing the PVC's requested size in spec.resources.requests.storage (e.g. from 10Gi to 20Gi).
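As a minimal sketch (the claim name and storage class below are placeholders, not taken from the original post), the change is just a matter of editing the claim's requested size and re-applying it:

```yaml
# Hypothetical PersistentVolumeClaim; bumping spec.resources.requests.storage
# from 10Gi to 20Gi and re-applying is what triggers the EBS volume expansion.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: gp2
  resources:
    requests:
      storage: 20Gi  # was 10Gi
```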

For this to work, though, you need to verify a few things:

Make sure you have the proper setting on your StorageClass

You need to make sure the StorageClass you're using has the allowVolumeExpansion setting enabled.
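A typical AWS EBS gp2 StorageClass with that option set might look something like the following (the metadata and parameters here are illustrative):

```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: gp2
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
  fsType: ext4
reclaimPolicy: Delete
# Without this flag, edits to a PVC's requested size will be rejected.
allowVolumeExpansion: true
```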

Simple GlusterFS Setup with Ansible

The following is an excerpt from Chapter 8 of Ansible for DevOps, a book on Ansible by Jeff Geerling.

Modern infrastructure often involves some amount of horizontal scaling; instead of having one giant server, with one storage volume, one database, one application instance, etc., most apps use two, four, ten, or dozens of servers.

GlusterFS Architecture Diagram

Many applications can be scaled horizontally with ease, but what happens when you need shared resources, like files, application code, or other transient data, to be shared on all the servers? And how do you have this data scale out with your infrastructure, in a fast but reliable way? There are many different approaches to synchronizing or distributing files across servers:
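The excerpt goes on to compare those approaches before settling on GlusterFS. As a rough sketch of what the Ansible side can look like (this assumes the geerlingguy.glusterfs role and the classic gluster_volume module; the group name, volume name, and brick path are placeholders), a minimal replicated-volume play might be:

```yaml
---
- hosts: gluster
  become: true

  vars:
    gluster_brick_dir: /srv/gluster/brick

  roles:
    # Role assumed to install and start the GlusterFS server packages.
    - geerlingguy.glusterfs

  tasks:
    - name: Ensure the Gluster brick directory exists.
      file:
        path: "{{ gluster_brick_dir }}"
        state: directory
        mode: '0775'

    - name: Create a replicated Gluster volume across the group.
      gluster_volume:
        state: present
        name: shared
        bricks: "{{ gluster_brick_dir }}"
        replicas: 2
        cluster: "{{ groups.gluster | join(',') }}"
        host: "{{ inventory_hostname }}"
        force: true
      run_once: true
```

Running the volume-creation task with run_once: true keeps every host in the group from racing to create the same volume.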