Uptime Lab's CM4 Blade adds NVMe, TPM 2.0 to Raspberry Pi

A few weeks ago, I received two early copies of Uptime.Lab's CM4 Blade.

Uptime Lab's Raspberry Pi CM4 Blade Computer with NVMe SSD

The Blade is built for the Raspberry Pi Compute Module 4, which has the same processor as the Pi 4 and Pi 400, but without any of the built-in IO ports. You plug the CM4 into the Blade, then the Blade breaks out the connections to add some interesting features.

A 1U rackmount enclosure is in the works, and 16 of these boards¹ would deliver:

  • 64 ARM CPU cores
  • up to 128 GB of RAM
  • 16 TB+ of NVMe SSD storage

That's assuming you can find 8 GB Compute Modules—they've been out of stock since launch almost a year ago, and even smaller models are hard to come by. More realistically, with 4 GB models, you could cram in 64 GB of total RAM.

Having the capacity spread across multiple nodes means you'd need something like Kubernetes to coordinate workloads, but this is an interesting alternative to a single big ARM server like an Ampere Altra—if your workloads parallelize well and you need as many 64-bit ARM cores as possible.
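If you go that route, a lightweight distribution like K3s keeps the per-node overhead low. Here's a minimal sketch of bootstrapping a cluster across the blades—the IP address and token below are placeholders, not anything specific to the Blade:

# On the first blade (becomes the control plane):
curl -sfL https://get.k3s.io | sh -

# Grab the join token from the first blade:
sudo cat /var/lib/rancher/k3s/server/node-token

# On each remaining blade (substitute your own server IP and token):
curl -sfL https://get.k3s.io | K3S_URL=https://192.168.1.10:6443 K3S_TOKEN=<node-token> sh -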

Each blade has the following features:

  • M.2 M-key slot for NVMe SSDs
  • TPM 2.0 module
  • Integrated PoE+ with Gigabit Ethernet
  • A PWM fan header
  • UART debug header and partial GPIO header for RTC or Zymkey 4i
  • 1x HDMI, USB 2.0, and USB-C for eMMC flashing
  • microSD card slot for Lite CM4 modules
  • GPIO-controlled ID LED and SSD activity LED

I have a full video walkthrough of the board on my YouTube channel.

I tested the onboard NVMe drive and was able to get up to 415 MB/sec sequential reads, which is right at the practical limit of the Pi's single PCIe Gen 2.0 lane (500 MB/sec theoretical, before protocol overhead).
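If you want to run a similar test yourself, a sequential read benchmark with fio is a quick way to do it—a rough sketch, assuming the SSD enumerates as /dev/nvme0n1 (check with lsblk first):

# Sequential 1 MB reads for 30 seconds, direct I/O; --readonly means no data is written
sudo fio --name=seqread --filename=/dev/nvme0n1 --readonly --rw=read --bs=1M --iodepth=8 --ioengine=libaio --direct=1 --runtime=30 --time_based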

I was also able to communicate with the TPM 2.0 chip after enabling it in /boot/config.txt with the following configuration line:

# Enable TPM module
dtoverlay=tpm-slb9670

I used Infineon's Embedded Linux TPM Toolbox 2 with the command sudo ./eltt2 -g to confirm I could communicate with the module, and it returned valid data.
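If you'd rather use standard tooling than Infineon's utility, the tpm2-tools package can do a similar sanity check—for example, pulling random bytes from the chip's hardware RNG (a quick sketch; the flag syntax here is for tpm2-tools 4.x and later, and older packaged versions differ):

sudo apt install tpm2-tools
# Ask the TPM's hardware RNG for 16 random bytes, printed as hex
sudo tpm2_getrandom --hex 16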

Infineon TPM 2.0 embedded chip

But don't get your hopes up for secure boot on the Pi—that would require support in the Pi's bootloader, and right now there is none. Since the bootloader is closed source, I wouldn't count on TPM support being added anytime soon, especially since this is the first Pi device I've seen with a TPM 2.0 chip! You can still use it for other secure computing features and cryptographic key storage, though.
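For example, you can seal a small secret (say, a disk passphrase) into the TPM with tpm2-tools—a minimal sketch with made-up file names, using tpm2-tools 4.x-style syntax and no PCR policy, so it only demonstrates the mechanics:

# Create a primary key in the owner hierarchy
sudo tpm2_createprimary -C owner -c primary.ctx
# Seal the contents of secret.txt under that key
sudo tpm2_create -C primary.ctx -i secret.txt -u seal.pub -r seal.priv
# Later: load the sealed object and unseal the secret
sudo tpm2_load -C primary.ctx -u seal.pub -r seal.priv -c seal.ctx
sudo tpm2_unseal -c seal.ctx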

The board gets fairly warm, likely due to the overhead of the PoE+ power conversion (which consumes 6-7W on its own!), so active cooling is probably a good idea—the plan is to have an official 1U rackmount case with integrated Noctua 40mm fans at the rear, plugged into the Blade's 4-pin fan header.

Ivan (@Merocle, who posts to @Uptime.Lab) already has a short 10" rack case design, which holds 8 boards, and is working on the full 19" version:

Uptime Lab's blade servers in 1U 10 inch mini desktop rack

His plan is to launch a Kickstarter for the board, but until then, the best way to follow along is to subscribe to the Uptime.Lab mailing list for a Kickstarter launch notification, and follow @Uptime.Lab on Instagram.


¹ Ivan stated the final 1U rack enclosure could support up to 20 or even 22 blades!

Comments

That is amazing. I think it would really be helpful to have a USB 2.0 port or two on the front (for hubs or extender cables), as well as a GPIO pin set on the front version of the module, to connect things like a Home Assistant gateway, SDR adapters, air quality monitors, etc.

You can get secure boot if you really want it by using another distribution. openSUSE Tumbleweed installs on the RPi 4 with secure boot as an option; it just chain-loads the GRUB bootloader to achieve this.

You'll also get the full kernel this way and not have to build missing modules.

A few people have been asking about the rack stand used in that last picture—according to Ivan, it's this 254mm (10") Mini Rack Stand.

There are some others like it, but it's harder to find any from US retailers... some keywords to search for are SOHO mini rack or 10" mini rack, but good luck finding them!

I like the blade design, but a backplane with an integrated power supply and a switch would have been nice. The power supply would reduce costs (no PoE needed) and the switch would take care of the cables. You would need only one uplink cable, and the Pi would get Ethernet from the backplane. The icing on the cake would be a front-accessible SD card.

No kidding, the PoE power to each board is fairly wasteful. A fixed backplane with power distribution would be simpler and cheaper. Some people would want to use their own switch, so you could have versions with and without the Ethernet switch function.

PoE does provide one benefit for this sort of application: managed control of power (reboots). A fixed power supply in the blade chassis would need to provide that to be useful in many situations.

So that might work really well for those instances where you're going to be clustering them. For those of us who want a PoE-powered Pi 4 that has a TPM built in and NVMe support for kiosks or digital signage, this would be huge (and having a TPM 2.0 on it would allow those people to use unattended Windows Autopilot setup, so if a system has issues, we can nix it from Intune and have it set itself up on its own).

In light of the recent TPM hack, using the chip for BitLocker-style disk encryption isn't such a hot idea, as it's vulnerable in this configuration.

That being said, Jeff, I'm keenly waiting to see this one show up, like I am with Wiretrustee's NAS board. There's some potential with a blade carrier like this, thanks to the PCIe M.2 M-key slot. (I'm seeing NVMe, like you talked about in the video... I'm also seeing an Edge TPU in its future for some configurations...)

There are in-kernel crypto methods that trade encrypted blobs with the TPM to prevent that. It slows down access, but depending on your use case, it is completely tolerable. BitLocker was interacting in the clear, likely because it interacts with the TPM too frequently for that to be effective. If it is only part of a single cert chain validation or something like Diffie-Hellman, where the interaction rate is low, the encrypted-exchange overhead is tolerable.
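For the curious, that's roughly what the kernel's "trusted keys" interface does: user space only ever sees a TPM-wrapped blob. A rough sketch with keyutils—the key name and size are arbitrary, and this assumes a kernel built with trusted-key and TPM support:

# Create a 32-byte random key, generated and sealed inside the TPM
# (the command prints the key ID)
keyctl add trusted kmk "new 32" @u
# User space only ever sees the encrypted blob:
keyctl print <key-id> > kmk.blob
# Reload it later by handing the blob back to the TPM:
keyctl add trusted kmk "load $(cat kmk.blob)" @u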

I shipped a commercial RPi-based module with an Infineon TPM 2.0 chip in spring of 2019. You cannot get full secure boot, as you say, but you can sign all the modules that load after the core loader. Plus, we stored other key data to validate access to our back-end servers. The TPM gave us secure certificate validation that would have been difficult by other means.

That's great to hear! Is the product you shipped something commercially available, or was that more of a one-off for a particular need? Would be cool to read more about it.

For people commenting on the recent TPM / BitLocker hack:

Link with https stripped off...

gnupg.org/blog/20210315-using-tpm-with-gnupg-2.3.html

If you don't know who James Bottomley is, you don't know TPM.

I have no idea what you are talking about but it sounds interesting...

Is that '6-7W' for the PoE hardware alone accurate? If so, it could use some serious optimization. I've got a 4GB RPi4 using the RPF PoE Hat, with a USB3-SATA adapter and Samsung SATA SSD attached, and the whole thing draws 5.4W at idle.

Yes; this board uses PoE+, and I was using it with a PoE+ switch, which does use a little more power... I didn't test with a regular PoE switch, though—I'm not sure if that would have less overhead.

For as long as those blades are, I wouldn't mind seeing a version that ditched the M.2 slot and added a second CM4 spot.

This looks awesome, and it gives me a thought about server boards. Let's say I'm trying to use the Raspberry Pi to host a personal site—is there a possibility to have two of these Pis run it in tandem mode? I.e., just like Tandem computers, where one is the main system and the other runs as the fault-tolerance site? To make it more interesting, is there a chance we could have this set up with disaster recovery, where a second set of Pis runs at another location, acting as the DR site in case something goes wrong?

Unfortunately, my knowledge isn't very extensive, and I'm drawing a blank as to whether this configuration comes out of the box for Unix/Raspbian. Let me know!
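For the local half of that, the usual building block is a floating IP managed by keepalived (VRRP)—a minimal sketch with made-up addresses, assuming both Pis sit on the same LAN; cross-site DR is normally handled separately, e.g. with DNS failover:

# /etc/keepalived/keepalived.conf on the primary Pi
vrrp_instance web {
    state MASTER
    interface eth0
    virtual_router_id 51
    priority 100          # the standby Pi uses state BACKUP and a lower priority
    advert_int 1
    virtual_ipaddress {
        192.168.1.100/24  # the floating IP your site is served from
    }
}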

In a classic blade center, like the Dell M1000e, there are two redundant control units to manage the chassis itself. Is there any plan for that—like web-based administration to control the power units, fans, switches, and so on?