kernel

How to customize the dtb (device tree binary) on the Raspberry Pi

Every so often, when you're debugging weird hardware issues on SBCs like the Raspberry Pi, it's useful to get way down into the guts of how the Pi represents its hardware to Linux.

And the Linux kernel uses a structure called the Device Tree to do it. On the Pi 5 (and other Pis), the compiled device trees are stored as .dtb files inside the /boot/firmware directory, and there's one for every major Raspberry Pi hardware model. (Overlays, the .dtbo files in /boot/firmware/overlays, layer tweaks on top of those base trees.)

I've had to modify the dtb files in the past to increase the PCIe BAR space for early GPU testing on the Compute Module 4. And recently I've had to mess with how the PCIe address space is set up for testing certain devices on the Raspberry Pi 5.
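If you want to try the same kind of tweak, the general flow is: decompile the .dtb to device tree source, edit it, and compile it back. Here's a rough sketch assuming a Pi 5 on Raspberry Pi OS Bookworm (the .dtb filename is an assumption; check /boot/firmware for your model's file, and keep a backup):

    # Back up the stock device tree first.
    sudo cp /boot/firmware/bcm2712-rpi-5-b.dtb ~/bcm2712-rpi-5-b.dtb.bak

    # Decompile the binary into human-readable device tree source.
    dtc -I dtb -O dts -o ~/pi5.dts /boot/firmware/bcm2712-rpi-5-b.dtb

    # ...edit ~/pi5.dts (e.g. the PCIe 'ranges' properties)...

    # Recompile and drop it back in place, then reboot.
    dtc -I dts -O dtb -o ~/pi5-custom.dtb ~/pi5.dts
    sudo cp ~/pi5-custom.dtb /boot/firmware/bcm2712-rpi-5-b.dtb

dtc may print a pile of warnings on the round trip; they're generally harmless, but that's what the backup is for.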

Network interface routing priority on a Raspberry Pi

(Pictured: the 52Pi Raspberry Pi Compute Module 4 Router Board.)

As I start using Raspberry Pis for more and more network routing activities—especially as the Compute Module 4 routers based on Debian, OpenWRT, and VyOS have started appearing—I've been struggling with one particular problem: how can I set routing priorities for network interfaces?

Now, this is a bit of a loaded question. You could dive right into routing tables and start adding and deleting routes from the kernel. You could mess with subnets, modify firewalls, and futz with iptables.

But in my case, my need was simple: I wanted to test the speed of a specific interface, either from one computer to another, or over the Internet (e.g. via speedtest-cli).

The problem is, even if you try binding an application to a specific IP address (each network interface has its own), the Linux kernel will still send the traffic over whatever route it deems best.
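One way to bias that choice is route metrics: the kernel prefers the default route with the lowest metric, so you can nudge it toward the interface you're testing. A quick sketch, where eth1 and 192.168.1.1 are placeholders for the interface under test and its gateway:

    # Show the current default routes and their metrics (lower wins).
    ip route show default

    # Prefer eth1 by giving its default route a lower metric.
    sudo ip route replace default via 192.168.1.1 dev eth1 metric 100

    # Double-check which interface the kernel would now use for Internet traffic.
    ip route get 1.1.1.1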

Setting 9000 MTU (Jumbo Frames) on Raspberry Pi OS

Raspberry Pi OS isn't really built to be a server OS; the main goals are stability and support for educational content. But that doesn't mean people like me don't use and abuse it to do just about anything.

In my case, I've been doing a lot of network testing lately—first with an Intel I340-T4 PCIe interface for 4.15 Gbps of networking, and more recently (yesterday, in fact!) with a Rosewill 2.5 GbE PCIe NIC.

And since the Pi's BCM2711 SoC is somewhat limited, it can't seem to pump through more than a few Gbps of bandwidth without hitting IRQ limits and queueing up packets.
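Jumbo frames help with exactly that: fewer, bigger packets means less per-packet (and per-interrupt) overhead. The setting itself is a one-liner, sketched here assuming the NIC is eth1 and everything else on that network segment also supports a 9000 MTU:

    # Check the current MTU.
    ip link show eth1

    # Bump it to 9000 for this session (resets on reboot).
    sudo ip link set dev eth1 mtu 9000

    # Verify a full-size frame makes it through without fragmenting:
    # 9000 bytes minus 28 bytes of IP + ICMP headers = 8972.
    # (192.168.1.2 is a placeholder for the machine on the other end.)
    ping -M do -s 8972 192.168.1.2

To make it stick across reboots, you'd set the MTU in whatever manages the interface (dhcpcd, NetworkManager, or systemd-networkd, depending on the setup).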

In the case of the 2.5G NIC, I was seeing it max out around 1.92 Gbps, and I just wouldn't accept that (at least not for a raw benchmark). Running atop, I noticed that during testing, interrupt handling would max out at 99% on one CPU core, and it seems like it may be impossible to distribute those interrupts across all four cores on the BCM2711.
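If you want to see that for yourself, /proc/interrupts shows the per-core counts, and /proc/irq/<N>/smp_affinity is the usual knob for pinning an IRQ to specific cores; just note that, as described above, the BCM2711 may not honor the change for the PCIe interrupt. Rough sketch (the IRQ number 42 is a placeholder you'd read out of /proc/interrupts):

    # Watch per-core interrupt counts; look for the NIC/PCIe line
    # piling up in a single CPU column.
    watch -n 1 'cat /proc/interrupts'

    # Try pinning that IRQ to CPUs 2 and 3 (hex bitmask 0xc).
    # Replace 42 with the real IRQ number; on the BCM2711 this
    # write may simply not take effect.
    echo c | sudo tee /proc/irq/42/smp_affinity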