Four exercises that touch basic multithreaded and lock-free programming concepts.
- Implement a program that uses two threads to increment a global counter to 10,000, each thread incrementing 5,000 times. But make it buggy, so that interleaving problems leave the final count below 10,000.
- Fix the above with atomics.
- Implement a variant of the program: instead of simply incrementing the counter, make it wrap every 16 increments (as if incrementing through the indices of a 16-element array). Have two threads each attempt to increment the counter (16 * 5000) times; the counter should end back at index zero. Implement it in a buggy, naive way that often leaves the counter nonzero, even if atomics are used.
- Fix the above using a CAS loop (see the sketch after this list).
- (Bonus question for the above: Why isn’t std::atomic::compare_exchange_strong a good fit here?)
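Here's a minimal sketch of the last two exercises, assuming C++11's `<atomic>` and `<thread>`. The buggy variant (left in as a comment) performs the increment and the wrap as two separate atomic steps, which lets the other thread slip in between them:

```cpp
#include <atomic>
#include <cstdio>
#include <thread>

std::atomic<int> counter{0};

void worker() {
    for (int i = 0; i < 16 * 5000; ++i) {
        // Buggy version: each operation is atomic, but the pair is not.
        // Another thread can interleave between them, so wraps get lost:
        //   int v = counter.fetch_add(1);
        //   if (v + 1 == 16) counter.store(0);

        // CAS-loop fix: read the counter, compute the wrapped successor,
        // and publish it only if nobody changed the counter in between.
        int cur = counter.load();
        while (!counter.compare_exchange_weak(cur, (cur + 1) % 16)) {
            // On failure, cur is refreshed with the current value; retry.
        }
    }
}

int main() {
    std::thread t1(worker), t2(worker);
    t1.join();
    t2.join();
    std::printf("final index: %d\n", counter.load()); // expect 0
}
```

As for the bonus question: since the CAS already sits in a retry loop, compare_exchange_weak is the natural fit. It's allowed to fail spuriously, but that's harmless here, and on LL/SC architectures like ARM it can compile to cheaper code than the strong variant, whose no-spurious-failure guarantee buys nothing inside a loop.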
Some rough lab notes on these topics to record the current state of my knowledge. I’m not an expert, so there may be inaccuracies.
- On Linux, libpthread mutexes are implemented using the underlying futex syscall
- They are basically a userspace spinlock, backed by the kernel for wait/wake operations only when absolutely necessary (i.e. when there's contention). In the common case of an uncontended lock acquire, there is no context switch, which improves performance (a sketch follows these notes)
- The userspace spinlock portion uses atomics, as spinlocks must, specifically because the compare-and-set has to be atomic
- Jeff Preshing (see below) writes that each OS/platform has an analogous concept to this kind of “lightweight” mutex — Windows and macOS have them too
- Before futex(2), other syscalls were used for blocking. One option might have been the semaphore API, but commit 56c910668cff9131a365b98e9a91e636aace337a in glibc predates futex, and it seems they actually used signals. (pthread_mutex_lock -> __pthread_lock (which still has spinlock elements, despite predating futex) -> suspend() -> __pthread_suspend -> __pthread_wait_for_restart_signal -> sigsuspend)
- A primary advantage of futex over previous implementations is that futexes only require kernel resources when there’s contention
- Like atomics, mutex implementations include memory barriers (maybe even implicitly, via the atomics they use) to prevent loads/stores from inappropriately crossing the lock/unlock boundary due to compiler and/or hardware instruction reordering optimizations
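To make the "userspace spinlock, kernel only under contention" idea concrete, here's a minimal Linux-only sketch in the style of Ulrich Drepper's "Futexes Are Tricky" paper. This is a simplification for illustration, not production code; it assumes the raw futex(2) syscall via glibc's syscall wrapper:

```cpp
#include <atomic>
#include <linux/futex.h>
#include <sys/syscall.h>
#include <unistd.h>

// Lock state: 0 = unlocked, 1 = locked, 2 = locked with possible waiters.
class FutexLock {
    std::atomic<int> state{0};

    long futex(int op, int val) {
        return syscall(SYS_futex, &state, op, val, nullptr, nullptr, 0);
    }

public:
    void lock() {
        int c = 0;
        // Fast path: an uncontended acquire is one CAS in userspace.
        // No syscall, no context switch.
        if (state.compare_exchange_strong(c, 1)) return;
        // Slow path: mark the lock contended, then sleep in the kernel.
        if (c != 2) c = state.exchange(2);
        while (c != 0) {
            // FUTEX_WAIT returns immediately unless state still equals 2,
            // so we can't sleep through a wakeup that already happened.
            futex(FUTEX_WAIT, 2);
            c = state.exchange(2);
        }
    }

    void unlock() {
        // Fast path: if nobody was waiting, no syscall here either.
        if (state.exchange(0) == 2)
            futex(FUTEX_WAKE, 1); // wake one waiter
    }
};
```

The two fast paths are the whole point: the kernel only gets involved when a thread actually has to sleep or be woken.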
It’s really all about memory. But to start at the beginning, the rough stack looks like this:
- Userspace application
- Kernel driver
- Hardware device
I find it easier to think about this from the middle out. On Linux, the kernel exposes hardware devices as files backed by the /dev virtual filesystem. Userspace can do normal syscalls like mmap on them, as well as the less typical ioctl (for more arbitrary, device-specific functionality).

The files are created by kernel drivers: modules of kernel code whose sole purpose is to interface with and abstract hardware so it can be used by other parts of the operating system, or by userspace. Drivers are implemented using internal driver "frameworks" in the kernel, e.g. the I2C or SPI frameworks. When you interface with a file in /dev, you are directly triggering callback handlers in a driver, which execute in process context.
That's how userspace interfaces with the kernel. How do drivers interface with hardware? These days, mostly via memory-mapped I/O (MMIO). This is when device hardware "appears" at certain physical addresses and can be interfaced with via load and store instructions, using an "API" that the device defines. For example, you can read data from a sensor by simply reading from a physical address, or write data out to a device by writing to an address. The technical term for the hardware components these reads/writes interface with is "registers" (i.e. memory-mapped registers).
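As a rough illustration of what that looks like in code (all addresses, offsets, and bit meanings here are invented for the example), MMIO boils down to volatile loads and stores at fixed addresses:

```cpp
#include <cstdint>

// Hypothetical device layout -- real values come from the chip's datasheet.
constexpr std::uintptr_t UART_BASE   = 0x10000000; // where the device "appears"
constexpr std::uintptr_t UART_DATA   = 0x00;       // data register offset
constexpr std::uintptr_t UART_STATUS = 0x04;       // status register offset

// volatile tells the compiler every access is a real side effect that must
// not be elided or reordered -- each read/write actually touches the device.
inline volatile std::uint32_t* reg(std::uintptr_t offset) {
    return reinterpret_cast<volatile std::uint32_t*>(UART_BASE + offset);
}

void uart_put_char(char c) {
    // Poll a (hypothetical) "transmitter ready" bit in the status register.
    while ((*reg(UART_STATUS) & 0x1) == 0) {
    }
    // Writing the data register hands the byte to the hardware.
    *reg(UART_DATA) = static_cast<std::uint32_t>(c);
}
```

This style works as-is on bare metal; a userspace program on Linux would first need a virtual mapping to the device's physical pages (e.g. via mmap on a /dev file), and a kernel driver would use the readb/writeb-style accessors shown below.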
(Aside: Other than MMIO, the other main interface the kernel has with hardware is interrupts, for interrupt-driven I/O processing (as opposed to polling, which is what MMIO enables). I'm not very knowledgeable about this, so I won't get into it, other than to say drivers can register handlers for specific IRQ (interrupt request) numbers, which are invoked by the kernel's generic interrupt handling infrastructure.)
Using MMIO looks a lot like the embedded bare-metal programming you might do on a microcontroller like a PIC. At the lowest level, a kernel driver is really just embedded bare-metal programming.

Here's an example of a device driver for UART (serial port) hardware on ARM platforms: linux/drivers/tty/serial/amba-pl011.c. If you're debugging an ARM Linux system via a serial connection, this might be the driver being used to e.g. show the boot messages.
The lines like:
```c
cr = readb(uap->port.membase + UART010_CR);
```
are where the real magic happens.
This is simply doing a read from a memory address derived from some base address for the device, plus some offset of the specific register in question. In this case it’s reading some control information from a Control Register.
```c
#define UART010_CR 0x14 /* Control register. */
```
Device interfaces range from having just a few registers to many.
To go one step deeper down the rabbit hole: how do devices "end up" at certain physical addresses? How is this physical memory map interface implemented?
The device/physical-address mapping is implemented in digital logic outside the CPU, either on the System on Chip (SoC) for embedded systems, or on the motherboard for PCs. The CPU's physical interface includes the address, data, and control buses. Digital logic decodes bits of the address bus into signals that mutually exclusively enable the devices physically connected to the bus. The implementations of load/store instructions in the CPU set a read/write bit appropriately in the control bus, which lets devices know whether a read or a write is happening. The data bus is where data is transferred out of or into the CPU.
In practice, documentation for real implementations of these systems can be hard to find unless you're a customer of the SoC manufacturer, but some is out there for older chips. For example, here's a block diagram of the Tegra 2 SoC architecture, which shipped in products like the Motorola Atrix 4G, Motorola Droid X2, and Motorola Photon. Obviously it's much more complex than my description above. Other than the two CPU cores in the top left and the data bus towards the middle, I can't make sense of it. (link)
While not strictly a "System on Chip", a classic PIC microcontroller shares many characteristics with a SoC (CPU, memory, peripherals, all in one chip package), but is much more approachable.
We can see the single MIPS core connected to a variety of peripheral devices on the peripheral bus. There are even layers of peripheral bussing, with a "Peripheral Bridge" connected to a second peripheral bus for things like I2C and SPI.
Note to self:
- Remember: You are already enough just as you are, right here, right now. You don’t need to achieve or do anything.
- Remember: The only competition in life — if you must think of it that way — is to know yourself as fully as possible, and act with maximum authenticity towards that truth.
- Remember: All things considered, you have it good — many around the world would kill to switch places and inherit every single one of your problems.
This is a mindset I use to help with procrastination. It first came to me in my senior year of university, when I needed to do lab reports. At that point, I had been doing lab reports for 8 years — ever since the start of high school. And throughout that whole time, they were always excruciating.
But I realized that they were excruciating partly because I always waited until the days before the deadline to do them, which was about a week after the actual lab. By that point, the details of the lab were much fuzzier, making the lab report way harder.
It occurred to me that even though all I wanted to do after the lab was forget everything about it and push it off to the side, that exact moment — right after the lab — would be the easiest moment there would ever be to do the report. As more time passed, it would only get harder as I began to lose the context of the lab.
So I sucked it up and started going to the library immediately after the lab and simply doing the report right then. It worked very well, and I only wish I had started the habit years earlier.
I try to remember this lesson and apply it to my life now. If there are situations where I need to do something, and no additional information will arrive that will influence how the job gets done, I try to do it as quickly as possible to take advantage of the context fresh in my brain.
I was debugging a microcontroller recently and was surprised to see that I was limited to 4 breakpoints. It surprised me because I'm used to seeing that kind of limitation only with watchpoints, not breakpoints. (See my YouTube video for more on this.)
But it makes sense after all — code running on this microcontroller is different from code in a process in an OS's userspace, because it will probably be flashed into some kind of ROM (or EEPROM). Since the code is unwritable at runtime, the typical method of inserting software breakpoint instructions isn't possible. So the alternative is to use the microcontroller's support for hardware debugging, which occupies dedicated hardware resources (typically address comparators) and is thus finite.
Business models are an interesting topic that lives directly at the boundary between business and technology.
The business model describes an abstract plan for how the business is to function sustainably, and developing it is a full-time job in and of itself. Then it must be implemented in the product using technology.
For example, a typical SaaS business model involves subscription pricing at various intervals (monthly and yearly), with a discount given for the yearly plan in exchange for more money paid up front and a longer commitment. There may or may not be a limited-time free trial period, or alternatively a limited-time guaranteed refund. Furthermore, there may be different pricing tiers that unlock more advanced features.
Ultimately, this is all going to end up as code that models the various pricing tiers, plans, and timing deadlines, and enforces security (i.e. making sure the advanced features are only available to pro users). Stripe is a common service used to model arbitrary business models and execute payment processing. They provide client libraries offering data models for concepts like users and plans.
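As a toy illustration of that modeling (this is not Stripe's actual API; all the types here are invented), the server-side encoding might look something like:

```cpp
#include <string>

enum class Interval { Monthly, Yearly };
enum class Tier { Basic, Pro };

struct Plan {
    std::string id;
    Interval interval;
    Tier tier;
    int price_cents; // yearly price is typically discounted vs. 12x monthly
};

struct User {
    std::string id;
    Plan plan;
    bool trial_active; // limited-time free trial, if the model includes one
};

// The security enforcement mentioned above: advanced features must be
// gated server-side on the user's tier, not just hidden in the UI.
bool can_use_advanced_features(const User& user) {
    return user.plan.tier == Tier::Pro || user.trial_active;
}
```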
While subscription models have grown in popularity as software shifts to the web, they're not the only model. The older model of buying discrete, packaged software versions still exists, mostly used by vendors of desktop software. In this model, customers pay a larger sum for unlimited use of a specific major version, and can optionally pay again to upgrade to a newer major version when it's released. A variant of this exists where there's no upgrade fee (i.e. "lifetime free updates"). Another model, "rent to own", is like a subscription, except payments stop at a certain point, after which use is free and unlimited.
Whether the software runs on the client or server is another dimension to consider which may influence the business model — server side software is most commonly sold using subscriptions these days.
| Model | Desktop software examples | Web software examples |
|---|---|---|
| Subscription | Adobe | SaaS web apps |
| One-time payment per version | Dash, Ableton Live | |
| One-time payment, lifetime free updates | FL Studio, Tailwind CSS (technically more assets than software) | SaaS web app lifetime plans (e.g. Roam Research) |
| Rent to own | Audio plugins via Splice | |
The business model impacts technical strategy in a few ways.
- Branching and release management
- Compatibility of document artifacts
Subscriptions, “lifetime free updates”, and “rent to own” are simplest in terms of branching and release management. All users are expected to run the latest software (because upgrades are either free, or automatic in the case of server-side software), so development can largely happen on a single main branch from which releases are cut.
“One time payment per version” is more complex because potentially multiple major versions of the software need to be maintained in parallel. Ideally, bugfixes in version 2 would also be applied to v3 and v4 when appropriate, but no new feature development should make its way back to v2 and risk being included in binaries sent to users that only paid for v2. For this situation, long-term release branches make sense as they provide code isolation, though they require some mechanism to forward bugfixes, etc.
The second aspect is document artifact compatibility. Again in this scenario, subscriptions, “lifetime free updates”, and “rent to own” are simpler in that all users can be assumed to be running on the latest version — or at least can be told to upgrade at no cost.
“One time payment per version” is again more complex because multiple versions of the software exist in the wild, all producing artifacts of slightly different versions. If artifacts may be sent between users of different versions, it creates a mess of compatibility issues to deal with.
A major difference between x86 and ARM32 is that while x86 generally offers conditional execution only for branch instructions (the Jcc family, e.g. JNE), ARM32 offers conditional execution for many more instructions (e.g. ALU instructions, like ADDEQ: add if equal). The idea was that these could be used to avoid emitting traditional branching instruction sequences, which may suffer from pipeline stalls when a branch is mispredicted — instead, straight-line code with equivalent semantics can be emitted.
This was actually removed in ARM64. The official quote:
The A64 instruction set does not include the concept of predicated or conditional execution. Benchmarking shows that modern branch predictors work well enough that predicated execution of instructions does not offer sufficient benefit to justify its significant use of opcode space, and its implementation cost in advanced implementations.

(Source: https://www.element14.com/community/servlet/JiveServlet/previewBody/41836-102-1-229511/ARM.Reference_Manual.pdf)
Turns out the whole pipeline-stall problem is generally not a huge issue anymore, as branch prediction has gotten so good, while support for the feature still requires allocating valuable instruction bits to encode the conditions. Note that ARM64 still uses 32-bit instructions, so conserving bits is still useful.
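For a concrete sense of what compilers do today instead, consider this snippet; with optimizations enabled, mainstream compilers typically if-convert it into a conditional select (CSEL on ARM64, CMOV on x86-64) rather than a branch:

```cpp
// Typically compiles to branchless straight-line code: a compare followed
// by a conditional select, with no branch to mispredict.
int max_of(int a, int b) {
    return a > b ? a : b;
}
```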
What is very interesting is that Intel's recent APX extensions to x86-64 (whose primary purpose is to add more general-purpose registers) move closer to this conditional-instructions direction.
The performance features introduced so far will have limited impact in workloads that suffer from a large number of conditional branch mispredictions. As out-of-order CPUs continue to become deeper and wider, the cost of mispredictions increasingly dominates performance of such workloads. Branch predictor improvements can mitigate this to a limited extent only as data-dependent branches are fundamentally hard to predict.
To address this growing performance issue, we significantly expand the conditional instruction set of x86, which was first introduced with the Intel® Pentium® Pro in the form of CMOV/SET instructions. These instructions are used quite extensively by today’s compilers, but they are too limited for broader use of if-conversion (a compiler optimization that replaces branches with conditional instructions).
This includes support for conditional loads and stores, which is apparently tricky on modern out-of-order and superscalar architectures.
I’ve managed to make writing fun again and have been publishing a lot on my blog lately.
It’s due to three reasons:
1 – Having explicit content categories creates the freedom to post things that aren’t highly effort intensive deep dives.
By introducing explicit post categories (e.g. Micropost, Lab Notes, Essay, Deep Dive), I feel much more free to post without the expectation that every post needs to be a time-consuming, deeply researched technical post (which is the kind of post that gets popular).
I don’t have time for those lately, but that doesn’t mean I can’t write or post anything! Explicitly tagging something as a micropost or lab notes takes a lot of the pressure off, and makes me much more willing to write and publish.
2 – I stopped sharing on Twitter.
The curse of growing an audience is that posting to that audience has increasing weight as the audience grows, which creates stress and friction about posting. What if I post something that people don’t like and I lose a bunch of followers? What if I’m straight up wrong? What if I want to share something but don’t have time to fully polish it and people judge me? What if I post too often?
It’s also distracting to share on Twitter, and very difficult to not monitor notifications afterwards.
Furthermore, writing on Twitter was actually difficult in that it took extra energy to abide by the character limit, or otherwise fit things into threads. Writing freeform on my own blog is easier in this regard.
Not posting on Twitter removes a nontrivial amount of friction and stress that used to prevent me from sharing.
3 – Using WordPress (i.e. not a static site generator) removes a ton of friction.
The fact that I can click buttons in a web interface, and write and post as easily as sending a tweet, makes all the difference.
It’s so nice being able to change the website without coding. For instance, I just added a “Popular posts” feature in the sidebar in the last 5 minutes. Turns out Jetpack already has the feature included and I just had to enable it. I don’t even want to imagine what it would have taken to implement that by hand or with a static site generator.
Though not as fast as a static site, the blog loads fast enough, and I'm more than happy to take the performance hit.
It’s also awesome that I can quickly edit posts to fix typos, even from my phone with the WordPress app. I do this very often.
(bonus reason: It’s also due to the realization that posts don’t have to be long, can be written in one sitting, and don’t have to be absolutely perfect!)