Author Archives: Mark

About Mark

Ten years into my journey towards becoming a pro systems programmer, sharing what I learn along the way. Also on Twitter: @offlinemark.

If you're reading this, I'd love to meet you. Please email mark@offlinemark.com and introduce yourself!

How to be happy

Note to self:

  1. Remember: You are already enough just as you are, right here, right now. You don’t need to achieve or do anything.
  2. Remember: The only competition in life, if you must think of it that way, is to know yourself as fully as possible, and act with maximum authenticity towards that truth.
  3. Remember: All things considered, you have it good. Many around the world would kill to switch places and inherit every single one of your problems.

It will never be easier than right now

This is a mindset I use to help with procrastination. It first came to me in my senior year of university when I needed to do lab reports. At that point, I had been doing lab reports for 8 years, ever since the start of high school. And throughout that whole time, they were always excruciating.

But I realized that they were excruciating partly because I always waited until the days before the deadline to do them, which was about a week after the actual lab. By that point, the details of the lab were much fuzzier, making the lab report way harder.

It occurred to me that even though all I wanted to do after the lab was forget everything about it and push it off to the side, that exact moment, right after the lab, would be the easiest moment to ever do the report. As more time passed, it would strictly get harder as I began to lose the context of the lab.

So I sucked it up and started going to the library right after each lab to simply do the report right then. It worked very well, and I only wished I had started the habit years earlier.

I try to remember this lesson and apply it to my life now. If I need to do something, and no additional information will arrive that would influence how the job gets done, I try to do it as quickly as possible to take advantage of the context that is fresh in my brain.

TIL: Debugging microcontrollers may require hardware breakpoints

I was debugging a microcontroller recently and found that I was limited to 4 breakpoints. That surprised me because I’m used to seeing that kind of limitation only with watchpoints, not breakpoints. (See my YouTube video for more on this.)

But it makes sense after all. Code running on this microcontroller is different from code running in a process in an OS’s userspace, because the microcontroller’s code will probably be flashed into some kind of ROM (Read Only Memory) or EEPROM. Since the code is unwritable at a hardware level, the typical method of inserting software breakpoint instructions isn’t possible. So the alternative is to use the processor’s support for hardware debugging, which occupies hardware resources and is thus finite.
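For contrast, here’s a minimal sketch of why software breakpoints depend on writable code (x86-64 Linux assumed; a real debugger would patch the tracee via ptrace rather than calling mprotect on its own process, and hardened systems may refuse writable+executable pages):

```cpp
// Sketch: a software breakpoint is just an overwritten instruction byte
// (0xCC = int3 on x86). This requires the code to be writable, which is
// exactly what flash/ROM on a microcontroller rules out.
#include <cstdint>
#include <cstdio>
#include <sys/mman.h>
#include <unistd.h>

void target() { std::puts("target ran"); }

int main() {
    auto* code = reinterpret_cast<unsigned char*>(&target);
    auto* page = reinterpret_cast<void*>(
        reinterpret_cast<uintptr_t>(code) & ~uintptr_t(getpagesize() - 1));

    // Make our own code page writable so the patch below is possible.
    mprotect(page, getpagesize(), PROT_READ | PROT_WRITE | PROT_EXEC);

    unsigned char saved = code[0];
    code[0] = 0xCC;   // plant the breakpoint; calling target() now would raise SIGTRAP
    code[0] = saved;  // a debugger restores this byte when resuming past the breakpoint
    target();         // runs normally again
}
```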

WIP: Business models can have significant technical impact

Business models are an interesting topic that lives directly at the boundary between business and technology.

The business model describes an abstract plan for how the business is to function sustainably, and developing one is a full-time job in and of itself. Then, it must be implemented in the product using technology.

For example, a typical SaaS business model involves subscription pricing at various intervals (monthly & yearly), with a discount given for the yearly plan in exchange for more money paid up front and a longer commitment. There may or may not be a limited-time free trial period, or alternatively a limited-time guaranteed refund. Furthermore, there may be different pricing tiers that unlock more advanced features.

Ultimately, this is all going to end up as code that models the various pricing tiers, plans, and timing deadlines, and enforces security (i.e. making sure the advanced features are only available to pro users). Stripe is a common service used to model arbitrary business models and handle payment processing. They provide client libraries offering data models for concepts like users and plans.
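As a rough illustration, here’s a hypothetical sketch of the kind of data model this implies (not Stripe’s actual object model; a real SaaS backend would more likely live in a web stack than C++, but the shape is the same):

```cpp
// Hypothetical sketch: the data model a subscription business model imposes
// on application code: plans, billing intervals, trial deadlines, and
// feature gating.
#include <cassert>
#include <chrono>

enum class Plan { Free, Pro, Enterprise };
enum class BillingInterval { Monthly, Yearly };

struct Subscription {
    Plan plan = Plan::Free;
    BillingInterval interval = BillingInterval::Monthly;
    std::chrono::system_clock::time_point trial_ends_at{};  // free-trial deadline, if any
};

// The enforcement side: advanced features are only reachable on paid tiers.
bool can_use_advanced_feature(const Subscription& sub) {
    return sub.plan == Plan::Pro || sub.plan == Plan::Enterprise;
}

int main() {
    Subscription sub{Plan::Pro, BillingInterval::Yearly, {}};
    assert(can_use_advanced_feature(sub));
}
```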

While subscription models have grown in popularity as software shifts to the web, they’re not the only model. The older model of buying discrete, packaged software versions still exists, mostly used by vendors of desktop software. In this model, customers pay a larger sum for unlimited use of a specific major version of the software. Then they can optionally pay again to upgrade to a newer major version when it’s released. A variant of this exists where there’s no upgrade fee (i.e. “lifetime free updates”). Another model, called “rent to own”, is like a subscription, except that payments stop at a certain point, after which use is free and unlimited.

Whether the software runs on the client or the server is another dimension to consider which may influence the business model: server-side software is most commonly sold using subscriptions these days.

Model                                   | Client                                                          | Server
Subscription                            | Adobe                                                           | SaaS web apps
One time payment per version            | Dash, Ableton Live                                              |
One time payment, lifetime free updates | FL Studio, Tailwind CSS (technically more assets than software) | SaaS web app lifetime plans (e.g. Roam Research)
Rent to own                             | Audio plugins via Splice                                        |

The business model impacts technical strategy in a few ways.

  1. Branching and release management
  2. Compatibility of document artifacts

Subscriptions, “lifetime free updates”, and “rent to own” are simplest in terms of branching and release management. All users are expected to run the latest software (because upgrading is either free, or automatic in the case of server-side software), so development can largely happen on a single main branch from which releases are cut.

“One time payment per version” is more complex because potentially multiple major versions of the software need to be maintained in parallel. Ideally, bugfixes made in Version 2 would also be applied to V3 and V4 when appropriate, but no new feature development should make its way back to V2 and risk being included in binaries sent to users who only paid for V2. For this situation, long-term release branches make sense as they provide code isolation, though they require some mechanism for forwarding bugfixes, etc.

The second aspect is document artifact compatibility. Again, subscriptions, “lifetime free updates”, and “rent to own” are simpler in that all users can be assumed to be running the latest version, or at least can be told to upgrade at no cost.

“One time payment per version” is again more complex because multiple versions of the software exist in the wild, all producing artifacts of slightly different versions. If artifacts may be sent between users of different versions, it creates a mess of compatibility issues to deal with.
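A hypothetical sketch of what that version handling tends to look like (illustrative only, assuming a simple integer format version embedded in each document):

```cpp
// Hypothetical sketch: the compatibility check a "one time payment per
// version" product needs once documents travel between users running
// different app versions.
#include <cstdio>

struct DocumentHeader {
    int format_version;  // bumped whenever the on-disk format changes
};

enum class LoadResult { Ok, NeedsMigration, TooNew };

LoadResult check_compatibility(DocumentHeader doc, int app_format_version) {
    if (doc.format_version == app_format_version) return LoadResult::Ok;
    if (doc.format_version < app_format_version) return LoadResult::NeedsMigration;  // migrate older docs forward
    return LoadResult::TooNew;  // produced by a newer version this binary can't read
}

int main() {
    // e.g. a V2 install opening a document saved by a V4 install
    if (check_compatibility({4}, 2) == LoadResult::TooNew)
        std::puts("Please upgrade to open this document.");
}
```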

TIL: ARM64 doesn’t include conditional instructions

A major difference between x86 and ARM32 is that while x86 generally only offers conditional execution of branch instructions (e.g. BNE), ARM32 offers conditional execution for many more instructions (e.g. ALU instructions, like ADDEQ: add if equal). The idea was that these can be used to avoid emitting traditional branching instruction sequences, which may suffer from pipeline stalls when a branch is mispredicted; instead, straight-line code with equivalent semantics can be emitted.

This was actually removed in ARM64. The official quote:

The A64 instruction set does not include the concept of predicated or conditional execution. Benchmarking shows that modern branch predictors work well enough that predicated execution of instructions does not offer sufficient benefit to justify its significant use of opcode space, and its implementation cost in advanced implementations.

https://stackoverflow.com/a/22169950/1790085

Turns out that the whole pipeline stall problem is generally not a huge issue anymore as branch prediction has gotten so good, while support for the feature still requires allocating valuable instruction bits to encode the conditions. Note that ARM64 still uses 32-bit instructions, so conserving bits is still useful.
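To make the tradeoff concrete, here’s a rough sketch of how the same C-level code maps to each instruction set (the assembly in the comments is approximate; actual compiler output varies with compiler and flags):

```cpp
// A branchless max: the classic case where ARM32 predication shines.
int max_val(int a, int b) {
    // ARM32 (predicated, no branch needed):
    //     CMP   r0, r1
    //     MOVLT r0, r1      ; executes only if a < b
    //
    // ARM64 (no general predication, but conditional *select* remains):
    //     CMP  w0, w1
    //     CSEL w0, w0, w1, GT
    return (a > b) ? a : b;
}

int main() { return max_val(2, 3) == 3 ? 0 : 1; }
```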

What is very interesting is that Intel’s recent APX extensions to x86-64 (whose primary purpose is to add more general-purpose registers) move closer to this conditional-instructions direction:

The performance features introduced so far will have limited impact in workloads that suffer from a large number of conditional branch mispredictions. As out-of-order CPUs continue to become deeper and wider, the cost of mispredictions increasingly dominates performance of such workloads. Branch predictor improvements can mitigate this to a limited extent only as data-dependent branches are fundamentally hard to predict.

To address this growing performance issue, we significantly expand the conditional instruction set of x86, which was first introduced with the Intel® Pentium® Pro in the form of CMOV/SET instructions. These instructions are used quite extensively by today’s compilers, but they are too limited for broader use of if-conversion (a compiler optimization that replaces branches with conditional instructions).

…including support for conditional loads and stores, which is apparently tricky with modern out-of-order and superscalar architectures.


https://stackoverflow.com/questions/22168992/why-are-conditionally-executed-instructions-not-present-in-later-arm-instruction

https://www.intel.com/content/www/us/en/developer/articles/technical/advanced-performance-extensions-apx.html

What cured my writer’s/publisher’s block

I’ve managed to make writing fun again and have been publishing a lot on my blog lately.

It’s due to three reasons:

1 – Having explicit content categories creates the freedom to post things that aren’t highly effort intensive deep dives.

By introducing explicit post categories (i.e. Micropost, Lab Notes, Essay, Deep Dive, etc) I feel much more free to post without the expectation that every post needs to be a time consuming, deeply researched technical post (which are the kind of posts that are popular).

I don’t have time for those lately, but that doesn’t mean I can’t write or post anything! Explicitly tagging something as a micropost or lab notes takes a lot of the pressure off, and makes me much more willing to write and publish.

2 – I stopped sharing on Twitter.

The curse of growing an audience is that posting to that audience has increasing weight as the audience grows, which creates stress and friction about posting. What if I post something that people don’t like and I lose a bunch of followers? What if I’m straight up wrong? What if I want to share something but don’t have time to fully polish it and people judge me? What if I post too often?

It’s also distracting to share on Twitter, and very difficult to not monitor notifications afterwards.

Furthermore, writing on Twitter was actually difficult in that it took extra energy to abide by the character limits, or otherwise fit things into threads. Writing freeform on my own blog is easier in this regard.

Not posting on Twitter removes a nontrivial amount of friction and stress that used to prevent me from sharing.

3 – Using WordPress (i.e. not a static site generator) removes a ton of friction.

The fact that I can click buttons in a web interface and write and post as easily as sending a tweet makes all the difference.

It’s so nice being able to change the website without coding. For instance, I just added a “Popular posts” feature in the sidebar in the last 5 minutes. Turns out Jetpack already has the feature included and I just had to enable it. I don’t even want to imagine what it would have taken to implement that by hand or with a static site generator.

Though not as fast as a static site, the blog loads fast enough, and I’m more than happy to take the performance hit.

It’s also awesome that I can quickly edit posts to fix typos, even from my phone with the WordPress app. I do this very often.

(bonus reason: It’s also due to the realization that posts don’t have to be long, can be written in one sitting, and don’t have to be absolutely perfect!)

How to not feel dumb around “smart” programmers

When going to programming meetups, conferences, or starting a new job, it can be extremely easy to feel dumb around other engineers there. Sometimes these engineers truly are geniuses, but many times they’re not as “smart” as your immediate imposter syndrome is leading you to believe.

My advice, especially for beginners, is to remember that these people are probably talking about a domain they are extremely familiar with, and have probably spent 1000x more time on than you have. You’ve probably stumbled onto a topic that they are specifically knowledgeable about, which gives the impression that they’re ultra smart (and you’re ultra dumb for not knowing anything about it).

The key is to remember that this impressive depth of knowledge is likely limited in breadth. If you were talking about your domain, you might know a lot more than they do.

It’s also easy to feel dumb when watching a conference talk. My trick here is to remember that the reason this person can speak so confidently about such a technical topic is that, again, they’ve spent 1000x more time on it than you have. They’ve had their face pressed directly against this problem for a long time, which is why they know all this and can talk about it.


This comes from my experience:

  • Entering the programming world (feeling dumb at hacker club in college)
  • Entering the infosec world (feeling dumb at infosec conferences)
  • Going to conferences not directly in my expertise (e.g. the LLVM Dev Mtg)
  • Entering the audio programming world (feeling dumb at audio conferences)
  • Feeling dumb in many, many conference talks

Distance between application programming and standard library programming

This is a rough, potentially half-baked thought:

This might be an interesting metric to compare programming languages: the distance between application programming and standard library programming.

In general, standard library programming is more difficult than average application programming for any programming language. Or at the very least “different” in some way — language features used, programming style required, etc. That difference might vary depending on the language.

Mainly, I’m thinking about C++, where standard library programming requires expertise in C++ template metaprogramming (TMP) (e.g. to implement std::variant), which is effectively an entirely different programming language existing in the C++ type system. While many application developers may also be competent in TMP, there are many who aren’t (including myself). It’s possible to be a very productive C++ programmer, and STL user, without being an expert at implementing generic libraries using TMP.
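To give a flavor of what that metaprogramming looks like, here’s an illustrative sketch (compiled as C++17; not the actual libstdc++/libc++ code) of the kind of compile-time machinery a std::variant implementation needs, such as finding the index of a type in a parameter pack:

```cpp
// Illustrative TMP sketch: compute the index of a type within a parameter
// pack at compile time. std::variant implementations rely on machinery in
// this spirit (this is not the real standard library code).
#include <cstddef>
#include <type_traits>

template <typename T, typename... Ts>
struct index_of;  // primary template left undefined: a missing type fails to compile

template <typename T, typename... Rest>
struct index_of<T, T, Rest...> : std::integral_constant<std::size_t, 0> {};

template <typename T, typename First, typename... Rest>
struct index_of<T, First, Rest...>
    : std::integral_constant<std::size_t, 1 + index_of<T, Rest...>::value> {};

static_assert(index_of<float, int, float, char>::value == 1);

int main() {}
```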

Given this, my impression is that C++ has a relatively large distance between application programming and stdlib programming.

Python of course also has some distance here. I’m not qualified to speak on it, but I can imagine it also involves more obscure language features that do not occur often in normal app development. I would guess that this distance is smaller than C++’s.

Lastly, C is an interesting language to consider because it offers such a spartan feature set that there aren’t particularly many more language features available to be used. (I’m probably wrong here and there are obscure things I’m not aware of, including symbol versioning for compatibility.) But in general, my assumption would be that C has a fairly limited distance, given its restrained feature set.

Ultimately, this metric is probably impossible to quantify and may have limited value, but it’s something I find intriguing to think about anyway.

“What could go wrong” considered harmful

The retort “What could go wrong” is one of my big pet peeves.

It’s often used in response to a failure of a complex system or operation. Sometimes the system had clearly poor design, making the retort warranted. But more often these comments reek of hindsight bias and carry an arrogance, as if the speaker could have easily avoided the failure if they were the one in charge.

It’s possible to construct that kind of retort for almost anything if you try hard enough:

  • “Flying massive metal tubes around in the sky filled with hundreds of people, what could go wrong?”
  • “Cementing metal wires into the mouths of children, what could go wrong?”
  • “Shooting lasers into people’s eyes, what could go wrong?”

But if you did so, you’d actually be wrong a lot, because there are many complex systems that function correctly for most users, most of the time. They function because many people have poured blood, sweat, and human ingenuity into them to make them reliable. And it’s often not intuitive that they can work.

Even if sometimes people are truly negligent and deserve it, I don’t find the phrase to be a net benefit to the culture. I consider it harmful because it ups the consequences of failure (a necessity for innovation) in exchange for cheap virtue signaling from bystanders who often have no experience in the domain.

So rather than assuming incompetence, let’s all be a bit more charitable. The world is complex and less intuitive than it looks.

Core devs are not necessarily product experts

A common misconception I long held is that core devs of a product must be the top product experts.

The reality is that, as a core dev:

  • there isn’t enough time to be both a dev and a user
  • knowledge of the implementation taints you and can prevent you from seeing the product clearly
  • it’s extremely difficult to focus deeply on the details and also view the product from a high level, holistically

Yes, you will have the absolute expertise on certain product behaviors or possibilities. But almost certainly never for the whole product at once; just the part you’ve been working on lately, where the knowledge is most fresh.

This is why it’s so important to surround yourself with “power users”: those who are untainted and unburdened by the implementation, and can use their full mental power to absorb and innovate on the product simply as a product.

These are often the people that find the most interesting uses and abuses of systems, that the core devs weren’t even aware of.

This can happen for any kind of product, including and especially programming languages. Many of the interesting programming “patterns” are created not by the developers of the language, but by “power users”.[citation needed]