My name is Robert Mustacchi. I’ve spent a while working on operating systems, having misadventures with hardware, and a few things in between. You can find a collection of blog entries below, most of which are technical in nature.
On the side, I’ve been wrapping up some improvements to the classic Unix stdio libraries in illumos. stdio contains the classic functions like printf() and friends. While working on that support I got to reacquaint myself with some of the joys of the stdio ABI and its history going back to 7th Edition Unix. With that in mind, let’s dive into the ABI, its history, and some mistakes not to repeat. While this is written from the perspective of the C programming language, aspects of it apply to many other languages.
2019-12-02 Joining Oxide
Early in my professional career I had two different opportunities at Sun Microsystems. The first saw me as a summer intern up in Burlington, Massachusetts, doing hardware verification on what would become the UltraSPARC T3 (though at the time it was just known as KT), working on the part of the I/O chip responsible for PCIe, MSI, and MSI-X support. Little did I know that would be the start of a long period of time straddling the hardware and software boundary.
From there, I found myself spending a summer instead with the Fishworks Team, which affirmed a belief of mine: by better understanding hardware we could write better software, and vice versa. At Joyent, after that, I spent a lot of time on everything from network virtualization to hardware support, fighting with side-channel vulnerabilities, and the never-ending slog of bugs that were usually either psychotic or reproducible.
When it comes to debugging, most often it makes sense to start at a high level in the system and work your way down, isolating problems and getting more specific along the way. However, sometimes you find that you’re spending all your time on CPU and the OS is doing its best to get out of the way. To make it easier to understand what the CPU is doing, many CPU vendors have gone through and added support for what they call performance monitoring counters.
These counters cover lots of different things on the CPU. For example:
The number of instructions executed
Information related to the TLBs (translation lookaside buffers) and caches
Information about branches
Information about floating point units
There is a true wealth of information here. Of course, making it easily accessible to software can be a bit more of a challenge. First we’ll look briefly at how these work from a software perspective and then we’ll talk about what we’ve done to make them a bit easier. This article is focused on x86 Intel and AMD CPUs. If you’re using ARM, RISC-V, SPARC, MIPS, or other CPUs, then while the principles may be the same, the actual implementation is quite different.
2019-09-27 USB Topology
USB devices have been a mainstay of extending x86 systems for some time now. At Joyent, we used USB keys to contain our own version of iPXE to boot. As part of discussions around RFD 77 Hardware-backed per-zone crypto tokens with Alex Wilson, we talked about knowing and restricting which USB devices were trusted based on whether they were plugged into an internal or external USB port.
While this wasn’t the first time that this idea had come up, by the time I started working on ideas for improving data center management, having better USB topology ended up on the list of problems I wanted to solve in RFD 89 Project Tiresias. Though at that point, how it was going to work was still a bit of an unknown.
The rest of this blog entry will focus on giving a bit of background on how USB works, some of the building blocks used for topology, examples of how we use the topology information, and then how to flesh it out for a new system.
One of the stories that has stuck with me over the years came from a support case that a former colleague, Ryan Nelson, had point on. At Joyent, we had third parties running our cloud orchestration software in their own data centers with hardware that they had acquired and assembled themselves. In this particular episode, Ryan was diagnosing a case where a customer was complaining about the fact that networking wasn’t working for them. The operating system saw the link as down, but the customer insisted it was plugged into the switch and that a transceiver was plugged in. Eventually, Ryan asked them to take a picture of the back of the server, which is where the NIC (Network Interface Card) would be visible. It turned out that the transceiver looked like it had been run over by a truck and had been jammed in — it didn’t matter what NIC it was plugged into, it was never going to work.
As part of a broader push on datacenter management, I was thinking about this story and some questions that had often come up in the field regarding why the NIC said the link was down. These were:
Was there actually a transceiver plugged into the NIC?
If so, did the NIC actually support using this transceiver?
Now, the second question is a bit of a funny one. The NIC obviously knows whether or not it can use what’s plugged in, but almost every time, the system doesn’t actually make it easy to find out. A lot of NIC drivers will emit a message that goes to a system log when the transceiver is plugged in or the NIC driver first attaches, but if you’re not looking for that message or just don’t happen to be on the system’s console when that happens, suddenly you’re out of luck. You might also ask why there are transceivers that aren’t supported by a NIC, but that’s a real can of worms.
Anyways, with that all in mind, I set out on a bit of a journey and put together some more concrete proposals for what to do here in terms of RFD 89: Project Tiresias. We’ll spend the rest of this entry going into a bit of background on transceivers and then discuss how we go from knowing whether or not they’re plugged in to actually determining who made them and where they are in the system.
2019-09-06 A Tale of Two LEDs
It was the brightest of LEDs, it was the darkest of LEDs, it was the age of data links, it was the age of AHCI enclosure services, …
Today, I’d like to talk about two aspects of a project that I worked on a little while back under the aegis of RFD 89 Project Tiresias. This project covered improving the infrastructure for how we gathered and used information about various components in the system. So, let’s talk about LEDs for a moment.
LEDs are strewn across systems and show up on disks, networking cards, or just to tell us the system is powered on. In many cases, we rely on the blinking lights of a NIC, switch, hard drive, or another component to see that data is flowing. The activity LED is a mainstay of many devices. However, there’s another reason that we want to be able to control the LEDs: for identification purposes. If you have a rack of servers and you’re trying to make sure you pull the right networking cable, it can be helpful to be able to turn an LED on, off, or blink it with a regular pattern. So without further ado, let’s talk about how we control LEDs for network interface cards (NICs or data links) and a class of SATA hard drives.
2019-08-14 CPU and PCH Temperature Sensors in illumos
A while back, I did a bit of work that I’ve been meaning to come back to and write about. The first part of that is about making it easier to see the temperatures that different parts of the system are operating at. In particular, I wanted to make sure that I could understand the temperature of the following different things:
Intel CPU Cores
Intel CPU Sockets
Intel Platform Controller Hubs (PCHs)
While on some servers this data is available via IPMI, that doesn’t help you if you’re running a desktop or a laptop. Also, if the OS can know about this as a first class item, why bother going through IPMI to get at it? This is especially true as IPMI sometimes summarizes all of the different readings into a single one.
I previously maintained my blog on dtrace.org. You can find even older entries there.