That’s a personal goal of mine. I’m not sure why, but I think it comes from spending a long time in computer security and feeling unsatisfied with just breaking “low level” software – I want to be able to build it too.
But what does “pro” mean? What does “systems programmer” even mean?
Fluent in a systems programming language. Able to write idiomatic code.
Understanding of modern systems programming concepts – e.g. smart pointers.
Understanding of operating system and hardware/software interface
Competent with C/C++ build concepts – e.g. headers, source, compilation, linking
Able to engage professionally in open source
Competent with git – e.g. can rebase, rewrite history and prepare perfect patch series (no stray newlines, commits build on each other, etc.)
Understanding of multithreaded programming and concurrency. Able to write idiomatic multithreaded code.
Skilled with development tools – e.g. debugger
Competent with performance profiling
Understanding of how to model data with a static type system – e.g. static vs dynamic dispatch, polymorphism, OOP concepts, reference vs value semantics.
Understanding of I/O programming, esp. async I/O.
Understanding of different OS platforms – e.g. Linux, Windows, macOS
Experience with a variety of architectures
Antigoals:
Become highly skilled in C++ template meta-programming
Milestones
2014 – First professional Linux kernel experience (internship)
2016 – Landed a job working on a symbolic execution engine (Trail of Bits)
2019 – First professional C++ work (Trail of Bits / osquery)
2019 – Landed commit in lldb
2020 – Landed commit in Linux kernel
2021 – Landed a full time C++ job (Ableton)
2021 – Built significant git competence (Ableton)
2023 – Built significant Linux kernel & hardware/software interface knowledge at work (Ableton)
2023 – Built knowledge of data modeling with a static type system (Ableton)
While C++ is used here as an example, the concepts apply to any statically typed programming language that supports polymorphism.
For example, while Rust doesn’t have virtual functions and inheritance, its trait objects (dyn Trait, typically boxed) are conceptually equivalent, and Rust enums are conceptually equivalent to std::variant as a closed-set runtime polymorphism feature.
| | Virtual functions / inheritance | std::variant |
|---|---|---|
| Runtime polymorphism | Yes – dynamic dispatch via vtable | Yes – dynamic dispatch via internal union tag (discriminant) and compile-time generated function pointer table |
| Semantics | Reference – clients must operate using a pointer or reference | Value – clients use the value type |
| Open/closed? | Open – can add new types without recompiling (even via DLL); clients do not need to be adjusted | Closed – must explicitly specify the types in the variant; generally clients/dispatchers may need to be adjusted |
| Codegen | Client virtual call + virtual methods | Client function table dispatch based on union tag + a copy of the callable for each type in the dispatch. If doing generic dispatch (virtual function style), the functions are also needed in each struct. Inlining possible. |
| Class definition boilerplate | Class/pure virtual methods boilerplate | Almost none |
| Client callsite boilerplate | Almost none | std::visit() boilerplate can be onerous |
| Must handle all cases in dispatch? | No support – the best you can do is an error-prone chain of dynamic_cast<>. If you need this, virtual functions are not the best tool. | Yes, can support this |
Overall, virtual functions and std::variant are similar, though not completely interchangeable features. Both allow runtime polymorphism, however each has different strengths.
Virtual functions excel when the interface/concept for the objects is highly uniform and the focus is on code/methods; this allows callsites to be generic and avoid manual type checking of objects. Exposing even a const data member through the public virtual interface is awkward, since access must go through a virtual call.
std::variant excels when the alternative types are highly heterogeneous, containing different data members, and the focus is on data. The dispatch/matching allows one to safely and maintainably handle the different cases, and forces an update when a new alternative type is added. Accessing data members is much more ergonomic than with virtual functions, but the opposite is true for generic function dispatch across all alternative types, because std::visit() is not ergonomic.
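To make the comparison concrete, here is a minimal, hypothetical sketch (the Shape/Circle/Square names are invented for illustration) showing the same pair of types modeled first with virtual functions and then with std::variant:

```cpp
#include <iostream>
#include <memory>
#include <type_traits>
#include <variant>
#include <vector>

// Open set: new shapes can be added by inheriting, without touching clients.
struct Shape {
    virtual ~Shape() = default;
    virtual double area() const = 0;  // dynamic dispatch via vtable
};

struct Circle : Shape {
    double radius;
    explicit Circle(double r) : radius(r) {}
    double area() const override { return 3.14159 * radius * radius; }
};

struct Square : Shape {
    double side;
    explicit Square(double s) : side(s) {}
    double area() const override { return side * side; }
};

// Closed set: the list of alternatives is fixed in the variant definition.
struct CircleV { double radius; };
struct SquareV { double side; };
using ShapeV = std::variant<CircleV, SquareV>;

double area(const ShapeV& s) {
    // std::visit dispatches on the discriminant; adding a new alternative
    // forces this visitor to be updated (or it fails to compile).
    return std::visit([](const auto& sh) -> double {
        if constexpr (std::is_same_v<std::decay_t<decltype(sh)>, CircleV>)
            return 3.14159 * sh.radius * sh.radius;
        else
            return sh.side * sh.side;
    }, s);
}

int main() {
    // Reference semantics: clients hold pointers to the base class.
    std::vector<std::unique_ptr<Shape>> shapes;
    shapes.push_back(std::make_unique<Circle>(1.0));
    shapes.push_back(std::make_unique<Square>(2.0));
    for (const auto& s : shapes) std::cout << s->area() << '\n';

    // Value semantics: clients store and copy the variant directly.
    std::vector<ShapeV> shapesV{CircleV{1.0}, SquareV{2.0}};
    for (const auto& s : shapesV) std::cout << area(s) << '\n';
}
```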
Building on these low level primitives, one can build:
Component pattern (using virtual functions typically) (value semantics technique; moves away from static typing and towards runtime typing)
Type erasure pattern (also virtual functions internally) (value semantics wrapper over virtual functions)
Fun facts:
Rust also has exactly these, just with different names and syntax. The ergonomics and implementation are different, but the concepts are the same. Rust uses fat pointers instead of a normal pointer pointing to a vtable. Rust’s match syntax is more ergonomic for the variant-equivalent. Rust apparently uses fat pointers because they allow “attaching a vtable to an object whose memory layout you cannot control”, which is required due to Rust traits. (source)
Go uses type erasure internally, but offers this as a first class language feature.
Case study: Component pattern
The component pattern is a typical API layer alternative to classical virtual functions. With classical runtime polymorphism via virtual functions, the virtual functions and inheritance are directly exposed to the client – the client must use reference semantics and does direct invocation of virtual calls.
With the component pattern, virtual functions are removed from the API layer. Clients use value semantics and then “look up” a component for behavior that would have previously been inherited.
API classes, instead of inheriting, contain a container of Components, which are themselves runtime polymorphic objects of heterogeneous types. The components can classically use virtual functions for this, inheriting from some parent class. Then the API class contains a container of pointers to the parent class. API clients look up the component they are interested in via its type, and the API class implements a lookup method that iterates the components and identifies the right one using dynamic_cast or similar.
However, variants offer another way to implement this. Rather than having all components inherit from the superclass, they can be separate classes that are included in a variant. The API class then has a container of this variant type. In the lookup method, instead of using dynamic_cast, it uses std::holds_alternative, which is conceptually equivalent.
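A rough sketch of what both lookup styles might look like (Entity, Physics, and Render are made-up names, not from any particular codebase):

```cpp
#include <memory>
#include <variant>
#include <vector>

// --- Classical approach: components inherit from a common base. ---
struct Component { virtual ~Component() = default; };
struct Physics : Component { float mass = 1.0f; };
struct Render  : Component { int layer = 0; };

struct Entity {
    std::vector<std::unique_ptr<Component>> components;

    template <typename T>
    T* find() {
        for (auto& c : components)
            if (auto* p = dynamic_cast<T*>(c.get()))  // look up by type
                return p;
        return nullptr;
    }
};

// --- Variant approach: components are unrelated types in a closed set. ---
struct PhysicsV { float mass = 1.0f; };
struct RenderV  { int layer = 0; };
using AnyComponent = std::variant<PhysicsV, RenderV>;

struct EntityV {
    std::vector<AnyComponent> components;

    template <typename T>
    T* find() {
        for (auto& c : components)
            if (std::holds_alternative<T>(c))  // conceptual dynamic_cast
                return &std::get<T>(c);
        return nullptr;
    }
};

int main() {
    Entity e;
    e.components.push_back(std::make_unique<Physics>());
    if (auto* p = e.find<Physics>()) p->mass = 2.0f;

    EntityV ev;
    ev.components.push_back(PhysicsV{});
    if (auto* p = ev.find<PhysicsV>()) p->mass = 2.0f;
}
```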
This is a somewhat unusual application of runtime polymorphism, and neither implementation method stands out as strictly better. Since components do not really share a common interface (they would just inherit so they can be stored heterogeneously in a container), virtual functions do not offer a strong benefit. But since the component objects are never dispatched on (they are always explicitly looked up by type), the variant method also does not offer a strong benefit.
The main difference in this scenario is the core difference between virtual functions and variants: whether the set of “child” types is open or closed. With virtual functions being open, it offers the advantage that new components can be added by simply inheriting from the parent class and no existing code needs to be touched. Potentially new components could even be loaded dynamically and this would work.
With variants, when new components are added, the core definition of the component variant needs to be adjusted to include the new type. No dynamic loading is supported.
So it appears that virtual functions have a slight advantage here.
std::any is loosely similar to virtual functions or std::variant in that it implements type erasure, allowing a set of heterogeneous objects of different types to be referenced using a single type. Virtual functions and std::variant aren’t typically called “type erasure” as far as I’ve heard, but this is effectively what they do.
However that’s where the similarities end. std::any represents type erasure, but not any kind of object polymorphism. With std::any, there is no notion of a common interface that can be exercised across a variety of types. In fact, there is basically nothing you can do with a std::any but store it and copy it. In order to extract the internally stored object, it must be queried using its type (via std::any_cast()) which tends to defeat the purpose of polymorphism.
std::any is exclusively designed to replace instances where you might have previously used a void * in C code, offering improved type safety and possibly efficiency. The classic use case is implementing a library that allows clients to pass in some context object that will later be passed to callbacks supplied by the client.
For this use case, the library must be able to store and retrieve the user’s context object. That’s it. It will literally never interpret the object or access it in any other way. This is why std::any fits here.
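As an illustration of that context-object use case, here is a small hypothetical sketch (EventLoop and MyContext are invented names): the library stores the client’s context as a std::any and only the client ever casts it back.

```cpp
#include <any>
#include <functional>
#include <iostream>
#include <string>
#include <vector>

// Hypothetical library: stores the client's context as std::any and hands it
// back to the client's callback. The library never looks inside the object.
struct EventLoop {
    using Callback = std::function<void(std::any&)>;

    void subscribe(Callback cb, std::any context) {
        handlers_.push_back({std::move(cb), std::move(context)});
    }

    void fire() {
        for (auto& h : handlers_) h.cb(h.context);
    }

private:
    struct Handler { Callback cb; std::any context; };
    std::vector<Handler> handlers_;
};

struct MyContext { std::string name; };

int main() {
    EventLoop loop;
    loop.subscribe(
        [](std::any& ctx) {
            // Only the client knows the concrete type, so only the client casts.
            auto& my = std::any_cast<MyContext&>(ctx);
            std::cout << "callback for " << my.name << '\n';
        },
        MyContext{"example"});
    loop.fire();
}
```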
Another use case for std::any might be the component pattern in C++, where objects store a list of components, which are then explicitly queried for by client code. In this case, the framework also never deals directly with the components, but simply stores and exposes them to clients on request.
Why do I have such anxiety when my phone isn’t on me, which prompts me to periodically tap my pocket to check on it?
A few key properties of phones:
Will cause significant pain and expense, in both time and money if lost
Small
Carried around everywhere
Beyond keys and wallets, there are not many other objects that can inflict so much pain & expense on one’s life if lost, while simultaneously being small and carried everywhere, which increases the chance of losing them. This produces a natural reaction to check that the phone is there, and anxiety when it isn’t found. And furthermore, unlike keys and wallets, phones actively provide dopamine.
In the future, we will see an industry of AI coaches. Some of these will compete with and take business away from existing human coaches (e.g. a writing coach), but much of it will fill gaps that are simply unfilled right now. Think coaching for areas that could be helpful for people, but are too niche to go out and find a coach for. Human coaches will not totally go away, because the flood of AI into society will reinforce in people the desire to speak to “real humans”. We see this today with customer support and how people simply want to get on the phone with a “real person” who can help them.
Just like how smartphones eventually became ubiquitous, personal AI assistants living on your phone or $BIG_TECH account will become the norm. Signing up for a Google account will initialize an AI assistant that will get to know you as you start to use GMail, GCal, etc. When you buy an Android phone, it will be there as soon as you log in.
Apps will come with AI assistants built into them similar to how we have chatbots in the lower right hand corner of websites.
Things will really start to get interesting once AIs can spend money for you. Think a monthly budget you give to your AI for buying groceries, household supplies, and more. Maybe it asks you before it triggers a buy, maybe it doesn’t.
The nice part about working for big companies is that you can be a part of massive product launches that make real waves and get major press coverage… and you didn’t even have to be there for the decades of work that led to it. You can join at the end for the “fun parts”.
This week we launched Push 3 at Ableton. I was only barely involved, but I was still able to participate in the excitement of releasing our new product to an audience that has been hungrily waiting for it for years.
This is not something you get when working for yourself, or for most startups. Or at least not without 100x the effort.
It’s also not a guarantee at large companies, but if you choose your company and team right, it’s a significant positive that counterbalances many of the negatives of a large company.
After calling fork(), a parent process gets its entire address space write protected to facilitate COW. This causes page faults.
This makes fork() unsafe to call from anywhere in a process with realtime deadlines – including non-realtime threads! Usually non-RT threads can do what they want, but this is an interesting exception.
On modern glibc, system() doesn’t use fork(), it uses posix_spawn(). But is posix_spawn() safe from a non RT thread?
posix_spawn() doesn’t COW – the parent/child literally share memory – so the page fault issue doesn’t apply. However the parent is suspended to prevent races between the child and parent. This seems RT unsafe…
However, only the caller thread of the parent is suspended, meaning the RT threads are not suspended and continue running with no page faults.
So it is safe to use system() or posix_spawn() from a non-RT thread.
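A minimal sketch of that conclusion, assuming Linux and modern glibc: spawning a child with posix_spawn() from an ordinary (non-RT) thread. The /bin/true command is just a placeholder.

```cpp
#include <spawn.h>
#include <sys/wait.h>
#include <cstdio>
#include <thread>

extern char** environ;

// Launch a child process from a non-realtime worker thread.
// Only this calling thread is suspended while the child is set up;
// realtime threads in the process keep running and take no COW page faults.
void run_command() {
    pid_t pid;
    char* const argv[] = {const_cast<char*>("/bin/true"), nullptr};
    int err = posix_spawn(&pid, "/bin/true", nullptr, nullptr, argv, environ);
    if (err != 0) {
        std::fprintf(stderr, "posix_spawn failed: %d\n", err);
        return;
    }
    int status = 0;
    waitpid(pid, &status, 0);  // reap the child
}

int main() {
    std::thread worker(run_command);  // a non-RT thread
    worker.join();
}
```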
There are a few main reasons why we get busier as we get older:
Adulting
As you age, you increasingly lose free time to dealing with “adulting” type tasks: taxes, paying bills, taking your car to the shop, researching insurance alternatives.
Relationships
As we age, we accumulate relationships. And while they have numerous benefits and make life worth living, they don’t come for free. They require time and energy to maintain – and at the end of the day, can become tasks on our todo lists. Even something as innocuous as an old friend reaching out to send a text or schedule a call can, at times, feel like a burdensome task to accomplish.
When you’re a child or teenager, the only people you know are your family and your friends (your first generation of friends). Since you barely know anyone, you don’t really have to keep in touch with anyone. Thus, more free time.
Hobbies
As we age, we accumulate interests, hobbies, and pursuits. These also don’t come for free.
As an adult, you begin to explore the world – reading books, picking up rock climbing, learning to paint, planning & taking trips once or twice a year. Your old interests don’t exactly go away, and there are always worlds of new interests to discover. Part of you feels like you should maintain or get back to some of those old interests you cherished so much. Another part is excited to get into scuba diving.
When you’re a child or teenager, you might have just one or two pursuits that occupy your time outside school. Not having all those accumulated hobbies from your past = more free time.
Aging
Aging implies that your body will start to perform worse and more slowly, likely even breaking in various ways. You’ll spend more and more time going to doctor’s appointments, having surgeries, and tending to medical conditions. It will take more effort to maintain your body through fitness. This all takes time.
Like many problems in life, the frustration at your seemingly shrinking time as you age can be helped by setting expectations properly. Instead of feeling cheated as your allowance of time shrinks year by year, expect it. Expect that by all logic, given the adulting to do, relationships to maintain, pursuits to keep up with, and the natural course of aging, you should have no free time at all – which gives you more reason to celebrate and appreciate the rare free moment when it comes along.
I’m trying something new. I added WordPress tags for higher level categories of content that cut across the typical tags:
Deep Dive: Longer, more detailed posts that require significant research. Very expensive to produce.
Tech: Normal technical blog post. More polished than Lab Notes, less researched than Deep Dive.
Lab Notes: Rough notes, typically technical, usually bullet points about some topic.
Essay: Nonfiction writing, usually nontechnical.
Micropost: Tiny, short thought.
I hope that by categorizing things this way and acknowledging that this blog is a collection of different art forms (that appear similar because they’re all writing), I’ll be more comfortable publishing publicly. For a while I was scared to publish because I don’t have as much time for deep dives as before, which is what people seemed to really like, but acknowledging these different categories and specifically calling out a shorter, rougher, less polished piece as such takes the pressure off of publishing it.
There are many instances where your application has some expensive work to do that would not be good to execute in the hot path (e.g. while responding to an HTTP request).
The typical solution is to enqueue a task in a queue and have a worker process it “offline”
In Python, Celery is a popular library for this. It uses backends for the actual queue. Popular backends are RabbitMQ and Redis.
RQ is another Python library for this; it supports Redis only.
RabbitMQ is designed to be a queue – it’s in the name.
Redis is an in-memory key-value store/database (or “data structure store”). It includes a number of primitives that might be used to implement a queue. It’s basically an in-memory hashmap. Keys are strings in a flat namespace. Values are one of a set of supported fundamental data types. Everything is serialized as it’s accessed over IPC.
List – (implemented in RQ) you can use commands like LPUSH and RPOP.
Pub/Sub – Unsure about this. (Possibly implemented in Celery?)
Stream – Advanced, but apparently not implemented in either Celery or RQ.
Celery has sleek Pythonic syntactic sugar for specifying a “Task” and then calling it from the client (web app). It completely abstracts the queue. It returns a future to the client (AsyncResult) – the interface is conceptually similar to std::async in C++.
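For reference, the std::async interface that comparison alludes to looks roughly like this (expensive_work is a made-up placeholder, not a Celery API):

```cpp
#include <future>
#include <iostream>

// Hypothetical expensive task that we don't want in the hot path.
int expensive_work(int x) { return x * x; }

int main() {
    // Like calling a Celery task: kick off the work and get a handle back.
    std::future<int> result = std::async(std::launch::async, expensive_work, 7);

    // ... do other things ...

    // Like AsyncResult.get(): block until the result is ready.
    std::cout << result.get() << '\n';
}
```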
RQ is less opinionated: any callable can be passed into its .enqueue() function, along with the arguments to call it with. This has the advantage that the expensive code does not need to have Celery as a dependency (to decorate it). That said, the Celery dependency is not a real downside, since you can always keep things separate by making thin Celery wrappers around otherwise dependency-free Python code – but it is an additional layer.
Heroku supports this – you just need to add a new process to your Procfile for the Celery/RQ worker process. Both Celery and RQ provide the worker’s main entry point.
Redis has other uses beyond being a queue: it can be a simple cache that your application accesses on the same server before hitting the real database server. It can also be used to implement a distributed lock (sounds fancy, but is basically just a single entry in Redis that clients can check to see if a “lock” is taken – of course it’s more complicated than just that). Redis also supports transactions, in a way similar to transactional memory on CPUs. In general there are direct parallels from local parallel programming to nearly everything in this distributed-systems world. That said, there are unique elements – for example, the Redis distributed lock includes concepts like a timeout and a nonce in case the client that acquired the lock crashes or disappears forever, which is generally not something you’d see in a mutex implementation. Another difference is that even though accessing Redis is shared mutable state, clients probably don’t need some other out-of-band mutex, because Redis likely implements atomicity for individual commands. That’s different from local systems: even if the shared, mutable data type you’re writing to/reading from locally is atomic (like an int/word), you should still use a mutex/atomic locally due to instruction reordering (mutexes and atomics insert barriers).
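To illustrate that last point about local shared state, a minimal C++ sketch: even though the counter is a machine-word-sized value, it is wrapped in std::atomic so that increments are not lost to reordering or caching.

```cpp
#include <atomic>
#include <iostream>
#include <thread>

std::atomic<int> counter{0};  // a plain `int` here would be a data race:
                              // the compiler/CPU may reorder or cache accesses

void worker() {
    for (int i = 0; i < 100000; ++i)
        counter.fetch_add(1);  // default seq_cst ordering inserts barriers
}

int main() {
    std::thread a(worker), b(worker);
    a.join();
    b.join();
    std::cout << counter.load() << '\n';  // always 200000 with std::atomic
}
```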
It’s been three years since I launched https://timestamps.me, and a little less than three years since I stopped working on it.
Since March 25th, 2020, here are the stats:
4465 uploads
$437 revenue earned
This works out to about 4 uploads per day, which for me, is a great success.
Rough recap:
Feb 2020: I was taking my gap year and wanted to code again. I had the idea to work on a little tool for exporting locators from Ableton Live sets. I thought it would be fun to just quickly make it into a web app, and ship it. I ended up hyperfocusing on the problem space and making it super high accuracy (handling tempo automation). I also made it work for FL Studio which was difficult but a fun challenge.
March: I launched the web app (timestamps.fm at the time) and started trying to get users. It was extremely difficult. I posted on subreddits for DJing, Ableton, FL Studio, and Rekordbox.
Learnings and mistakes:
I wasted a ton of time porting to Google Cloud in an attempt to make the site run for free. It was an utter failure and I ended up porting back to Heroku.
I spent a ton of time DM’ing music producers who were performing at “e-fests”, which were popular at the time (Covid-19 was in full swing). This was doomed to failure – none of them would find value in this niche product.
Ironically, the customer and user that got the most use out of it reached out to me, not the other way around. The CEO of a company that makes a high volume of DJ mixes for hotels and restaurants DM’d me on LinkedIn asking to set up a call. He found me via SEO/Google search and was able to find me on LinkedIn because I had been bold enough to put myself as “CEO, Timestamps.fm” on my LinkedIn profile. Lesson: Be bold!
If I really wanted to do this in a time and capital efficient way, I should have put in way more customer research before building this whole product (including super advanced features like hyper accurate tempo automation support). I didn’t care about this though because I was on my gap year, and was first and foremost doing it because it was a fun programming challenge.
I wasted a ton of time hand coding HTML and modifying a free HTML theme I found online. I eventually rewrote the whole site in Bootstrap which took a bunch of time. The breaking point was when I was trying to make a pricing page with different subscription options. It was going terribly with my hand-hacking of the HTML page, and Bootstrap included great looking UI components for this already. Learning Bootstrap was probably a good investment.
If I were to do it again: I would use no-code WAY more. I would try to avoid hand-coding any HTML if at all possible, and just do the minimal amount of code to have an API server running for doing the actual processing.
I put up a donation button. In 3 years, I’ve had 3 donors, for a total of $25 in donations.
I eventually learned that the majority of DJs don’t care about time-accurate records for their DJ mixes. A small subset of them do – those that operate in the radio DJ world, where they have licensing or reporting requirements. One DJ said they were required to submit timestamps so a radio show could show the track title on their web UI, or something like that. But I was later surprised to see the site continuing to get traffic. Clearly there are some people out there who care. I haven’t bothered to figure out who they are or why. The site continues to be free, with no accounts necessary.
Smart things that I did right:
I had a lot of requests to support other DJ software like Traktor. I ignored them, which was a great move – it would have taken a lot more time and wouldn’t have moved the needle on the project.
I negotiated a good rate initially for the commercial customer’s subscription – $40 a month! I then made a questionable move and lowered it significantly to $15 or so per month when I made the site free. The deal was that I would make the site free, but the customer would pay to help me break even on it. I later lowered it even more for them when I switched to timestamps.me (~$20), which is a much cheaper domain than timestamps.fm ($80). Moving domains was a good call – the old domain was a risky liability: if the customer ever left, I would have been stuck with an expensive vanity domain for no real reason. I wasn’t going to become the next “last.fm” anytime soon.
SEO is the main driver, and continues to be to this day. I dominate the results for “ableton timestamps” etc. Posting on reddit and the Ableton forum were good calls.
I experimented with different monetization strategies. Pay per use was an interesting experiment and I made a small amount of money.
If I were to actually try to start a business again, I would do a lot of things differently:
Be much more deliberate about picking the market and the kind of customer to serve
If you want to make money, make something that helps people who already make (and spend) money make even more money (the fact that they already make and spend money is powerful and important)
Pick a product idea that isn’t totally novel, so that it’s not so hard to sell or introduce it. It’s great to be able to say “I’m like X, but different because of A and B, and specifically designed for C”