Dimensions of job happiness

I reflected on the qualities of a job that matter to me and here’s what I could come up with:

  • Technical intensity
  • Personal interest in the domain
  • “Sexiness” of the domain / Can laypeople understand it? (Or better yet, find it intriguing?)
  • Deploy at “scale” (leverage/power) (>= 1000 users)
  • Brand recognition & enthusiasm

These build on top of some base requirements: working with nice, smart people whom I enjoy being around, and being a good fit with the company’s overall direction.

Software engineers must learn to write

There is an immense amount of written communication involved in a programming job:

  • Commit messages
  • Bug reports & debugging discussions
  • Design documents
  • User documentation
  • Project documentation, READMEs, internal dev documentation
  • Project announcements
  • Comments & in-code documentation
  • Making complex situations simple for stakeholders
  • Handling support tickets
  • Mailing list discussions

Software engineers must learn to write well in the same way that they must learn to stay organized – they avoid doing so to their own detriment.

How I create so much content

Between my two personal projects, I create a lot of content.

offlinemark:

  • Blog – Writing
  • Twitter – Shorter writing, writing meant to be seen more widely
  • Youtube – Educational content, screencasts
  • Podcast – Stories & lessons from my life, more personal

comfort (my music production project):

  • Music
  • Youtube – Educational content, screencasts on music production
  • Twitter
  • Instagram
  • Podcast

I also have a travel blog that I don’t share publicly.

Some of these channels are more active than others, but I have three principles that help me do it all.

1 – Harvest, don’t generate

I try to harvest content from what I experience & learn in my life, rather than sit down, brainstorm, and generate content from nothing.

2 – Respect inspiration

I’m significantly more productive when I’m inspired, so I try to treat moments of inspiration with great respect and get to work when they come. Sometimes this means being awake at 5:18am writing (as I am now).

Rick Rubin talks about this exact concept in “The Creative Act”.

I try to avoid queueing things at all costs; if possible I just sit down and do it now.

3 – Have buckets for everything

Different kinds of ideas naturally have a medium in which they are best expressed. Insights or essays are best expressed in writing. Educational walkthroughs are best done via screencast. Stories are best told via audio.

I try to have “buckets” (media channels) ready to receive ideas in their best form the moment they strike. This helps you harvest as much of your creative potential as possible.

For a long time, I left content on the table by not having a podcast for offlinemark. I regularly had stories I wanted to share but no great place for them, so they went unshared. Now that I have the podcast, I notice fairly regularly that I have a thought suited to that medium, and I can capture it.


Bonus: 4 – Default to public, and iterate

Don’t get tied up polishing content before you publish it. Default to publishing whatever you have, and remember: you can always polish, expand, or re-release content later. Defaulting to public maximizes the potential for your work to be discovered.


Bonus (Dec 2023):

5 – Never let an idea slip

If you’re going to create a lot of content, step one is to not needlessly throw away good ideas when you have them. Ensure you have some system or tools in place for quickly capturing ideas wherever you might be. I heavily rely on “Hey Siri, remind me…” on my iPhone, which lets me quickly record notes to process later. I use OmniFocus as my todo app, which integrates with this. OmniFocus and most other todo apps also have a “Quick Add” global keyboard shortcut, which is useful if you’re already at your computer.

6 – Not all content needs to be long

Not all blog posts or content need to be long and arduous to write. In fact, it’s better if they’re not.

7 – Minimize friction

My current blogging setup with WordPress feels very friction-optimized – I just browse to my blog, click new post, write, and hit publish. No command lines. No writing in a separate app, then copying the post over. In-place construction, if you will.

8 – One-shot it

Get in the habit of “one-shotting” content – forcing yourself to “finish” it in some way within the same session of work. It’s incredibly tempting to leave a piece half-finished and say that you’ll come back later. But that rarely happens, and adding things to todo lists/queues adds weight to your life that doesn’t feel good. Plus, forcing yourself to finish is a creative muscle in and of itself that can be exercised and improved. I’ve noticed improvement in both my music making and my writing.

I started a podcast

You can find it here: https://podcasters.spotify.com/pod/show/offlinemark/

It’s an experimental home for content that favors the audio medium – mostly non-technical stories & lessons from my life. I will have audio versions of some of the blog posts here.

I was thinking about the growing number of publishing channels I now have and what belongs where. Here’s what I have so far:

  • Twitter: More polished posts that I feel comfortable directly sharing with a larger audience.
  • Blog: Home base for everything.
  • Youtube: Technical topics where screencasting is most natural.
  • Podcast: Stories, life & career lessons, more intimate or personal topics.

Resolving git conflicts perfectly every time (using diff3)

Reminder: You should unconditionally be using the diff3 merge style config with git. It’s strictly superior to the default config and provides critical context for resolving conflicts.
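Enabling it is a one-line, one-time setting (documented at the git-scm link below):

git config --global merge.conflictStyle diff3

(Newer versions of git also offer zdiff3, a variant that additionally deduplicates lines common to both sides of the conflict.)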

Instead of simply showing the state of the original code and then the incoming conflicting change, it also shows the code before either change was made (the common ancestor). This lets you see the changes both sides were attempting, and mechanically reason about how to merge them.

The mechanical process is (credit to Mark Zadel for showing me):

  • Begin with the common ancestor code (the middle section).
  • Identify the difference between it and the original code (the top section).
  • Apply that difference to the incoming change (the bottom section), and keep the result.

The opposite direction (identify the difference between the middle and the incoming change; apply that difference to the original code and keep the result) also works. You can choose whichever is simpler.

Example:

Here’s a diff that happened on master.

 int main()
 {
     int x = 41;
-    return x + 1;
+    return x + 2;
 }

Here’s a diff that happened in parallel on a development branch.


 int main() 
 {
     int x = 41;
-    return x + 1;
+    int ret = x + 1;
+    return ret;
 }

Here’s the merge conflict from e.g. rebasing the branch onto master.

int main()
{
    int x = 41;
<<<<<<< HEAD
    return x + 2;
||||||| parent of 4cfa6e2 (Add intermediate variable)
    return x + 1;
=======
    int ret = x + 1;
    return ret;
>>>>>>> 4cfa6e2 (Add intermediate variable)
}

On the first side, we change the core computation. On the second side, we extract a variable.

One way to resolve the conflict is to take the change between the middle and the top (x + 1 -> x + 2) and apply it to the bottom.

That produces the correct conflict resolution:

int main()
{
    int x = 41;
    int ret = x + 2;
    return ret;
}

The other direction (extracting a variable out of the top’s x + 2 code) produces the same end result.

Links:

https://git-scm.com/docs/merge-config#Documentation/merge-config.txt-mergeconflictStyle

https://blog.nilbus.com/take-the-pain-out-of-git-conflict-resolution-use-diff3/

WIP: What’s the deal with memory ordering? (seq_cst, acquire, release, etc)

(This is a high-level summary of my current knowledge, primarily to help me crystallize it. It comes entirely from Jeff Preshing’s blog (see end of post) and YouTube talk. This is not intended to be a comprehensive overview; for that, please see the aforementioned materials. I am very much a non-expert on this topic; please treat everything with skepticism.)

When programming with atomics, how are you supposed to know which of the ~four memory orderings to use? For example, the main ones (C++ terminology) are:

  • memory_order_seq_cst
  • memory_order_acquire
  • memory_order_release
  • memory_order_relaxed
  • (and a few other niche ones: acq_rel, consume)

First, as Jeff Preshing states, there is a distinction between “sequentially consistent” atomics and “low level” atomics. He describes it as two libraries for atomics masquerading as a single one within the C++ standard library.

The first, “sequentially consistent” atomics, can be considered a higher level way of using atomics. You can safely use seq_cst everywhere; you get simpler semantics and a higher likelihood of correctness, just at the expense of performance. As an optimization, you can then port the code to the second form, “low level” atomics. This is where you must choose the explicit memory orderings.
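Concretely (a minimal sketch; the counter and function names are my own), the “high level” library is just the default: C++ atomic operations are seq_cst unless you pass an explicit ordering:

#include <atomic>

std::atomic<int> counter{0};

void increment() {
    counter.fetch_add(1);  // no ordering argument: defaults to memory_order_seq_cst
    counter.fetch_add(1, std::memory_order_seq_cst);  // fully equivalent, spelled out
}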

But why do sequentially consistent atomics come with a performance hit?

The performance hit comes from cross-core communication. The sequentially consistent memory model offers a very strong guarantee to the programmer: in addition to the ordering of the atomic operations themselves being consistent across cores (which is always the case), the ordering of non-atomic operations relative to the atomic ones is also guaranteed to be preserved (i.e. no reordering). This matters because programming with atomics often involves “guard” (atomic) variables that regulate access to “normal” (non-atomic) data transferred between threads. Providing the guarantee requires extra effort from the CPU’s memory subsystem in the form of cross-core communication, as the cores effectively need to synchronize their caches.

When one moves to “low level” atomics, the strict constraints required of the memory subsystem are relaxed: not all orderings of non-atomic accesses relative to atomic accesses must be maintained, so less cross-core coordination is required. This can be exploited for higher performance in scenarios where the strict ordering constraint is not needed in one or both directions (i.e. non-atomic memory accesses are allowed to move before or after the atomic access).

Exercise: Would one expect a performance improvement from porting code from sequentially consistent atomics to low level atomics, if the code runs on a single-core system?

The whole point of low level atomics is to optimize performance by relaxing constraints and reducing cross-core communication, so no: on a single-core system there is no cross-core communication, and thus nothing substantial to optimize.

(I am not 100% sure of this answer. This is the current state of my knowledge and I would appreciate being corrected or affirmed either way!)

So how does one choose between all those memory orderings?

With my non-expert understanding, I believe there are some simple rules that make the decision much easier than it might seem.

First off: Decide whether you’re using sequentially consistent or low level atomics. If the former, you use seq_cst everywhere (this is even the default with C++ if you don’t specify anything).

If you want to optimize to use low level atomics, then for most cases, you then only have three choices: acquire, release, and relaxed. (seq_cst is no longer an option; acq_rel is more niche; consume is actively discouraged). Then:

  • If you’re deciding for a load operation, you only choose between acquire and relaxed. Loads are never release.
  • And vice versa: if you’re deciding for a store operation, you only choose between release and relaxed. Stores are never acquire.

This narrows it down to two choices. To determine whether it’s acquire/release or relaxed, determine whether the load/store has a synchronizes-with relation to a corresponding store/load. If there is one, you want acquire/release. Otherwise, choose relaxed.
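To illustrate (a minimal sketch of the classic message-passing pattern; the names payload, ready, producer, and consumer are my own), the release store and the acquire load form exactly that synchronizes-with relation, which is what makes the non-atomic payload safe to read:

#include <atomic>

int payload = 0;                 // “normal” non-atomic data handed between threads
std::atomic<bool> ready{false};  // atomic “guard” variable

void producer() {
    payload = 42;  // plain, non-atomic write
    // Release: the payload write cannot be reordered after this store
    ready.store(true, std::memory_order_release);
}

void consumer() {
    // Acquire: synchronizes-with the release store once it observes true
    while (!ready.load(std::memory_order_acquire)) {
        // spin
    }
    int value = payload;  // guaranteed to see 42
    (void)value;
}

By contrast, if ready were just a standalone event counter with no associated data to publish, there would be no synchronizes-with relation to preserve, and relaxed would suffice on both sides.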

See Jeff Preshing’s materials (linked below) for a fuller answer to this:

Links:

https://www.youtube.com/watch?v=X1T3IQ4N-3g

KPTI: The virtual memory 101 fact that’s no longer true

(This is not news; just something I was surprised to learn recently.)

The classic virtual memory design for an operating system maps the kernel in the address space of every process. This improves context switch performance; switching into the kernel then requires no expensive page table reset. The kernel can run using the same page tables userspace was running with.

Typically, the kernel is mapped into the upper section of virtual memory. For example, on 32-bit Linux, the kernel is mapped into the top gigabyte. Concretely, the page table entries mapping those kernel pages are set with the supervisor bit on, meaning only privileged code (running in Ring 0 on x86) can access those pages. This is what prevents userspace from accessing kernel memory; the MMU is therefore responsible for enforcing the security boundary.

In the world of CPU side-channel vulnerabilities, this MMU-enforced security boundary is no longer reliable. Specifically, the Meltdown vulnerability allows userspace to read arbitrary memory, anywhere in the virtual address space, regardless of whether the supervisor bit is set. It does this using cache-based timing side channels that exist due to speculative execution of memory accesses.

This means that it’s no longer safe to map the kernel into the address space of userspace processes, and indeed that’s no longer done. The general name for this mitigation is “Kernel Page Table Isolation” (KPTI). As of “modern” kernels (since 5.15 for aarch64 Linux, I believe), it’s on by default (see CONFIG_UNMAP_KERNEL_AT_EL0). Context switches now must reset the page tables to a separate set private to the kernel.
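As an aside, on Linux kernels new enough to expose the sysfs vulnerabilities interface (4.15+, I believe), you can check whether the mitigation is active on your machine:

cat /sys/devices/system/cpu/vulnerabilities/meltdown
# prints e.g. "Mitigation: PTI" when page table isolation is enabled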

The LWN article on KAISER (the name of the patch set that became KPTI) quantifies the performance cost:

“KAISER will affect performance for anything that does system calls or interrupts: everything. Just the new instructions (CR3 manipulation) add a few hundred cycles to a syscall or interrupt. Most workloads that we have run show single-digit regressions. 5% is a good round number for what is typical. The worst we have seen is a roughly 30% regression on a loopback networking test that did a ton of syscalls and context switches.”

https://lwn.net/Articles/738975/

The lesson here? Even the most seemingly fundamental knowledge about how computers work is subject to change. Don’t assume things are still as you learned them, and exercise caution and humility when discussing details of systems whose development you haven’t actively kept up with.

Links:

https://wiki.osdev.org/Paging

https://en.wikipedia.org/wiki/Kernel_page-table_isolation

https://lwn.net/Articles/738975/

You don’t need to load code into RAM to execute it

This will be a basic fact to some, but you don’t need to load code into RAM to execute it. You can execute code straight from ROM.

In fact, this is how most computer systems boot up. After the CPU finishes initializing, it starts executing at a specific physical address which is generally mapped to some kind of Boot ROM.

(On x86, this first instruction is located at physical address 0xFFFFFFF0, which is interestingly almost completely at the top of memory. The code there then needs to contain a jump to the rest of the actual boot code. Source: Intel 64 and IA-32 Architectures Software Developer’s Manual, Vol 3A, Section 9.1.4. I believe ARM systems are different, and the start address can vary.)

The Boot ROM – as the name suggests – is not RAM. It’s ROM: a totally separate device on the memory bus offering nonvolatile storage. It’s mapped into physical memory by the mesh of digital logic that implements the physical memory mapping. (More: https://offlinemark.com/2023/08/09/how-the-hardware-software-interface-works/)

The CPU is generally not aware of what specific device is on the other end of the memory bus, servicing reads and writes. During instruction fetch, it simply issues reads to the memory bus, receives instruction data, then executes it. The data can transparently come from RAM, ROM, or potentially even some other device, provided it is fast enough.

The reason this was unintuitive to me is that until recently, I’ve only ever done “normal” programming, where programs are loaded from disk into memory before running them. This is the domain of probably 99% of programmers. It’s not even limited to userspace application programmers; even kernel developers have their code loaded into RAM before it’s run. It’s usually only developers of very early stage bootloaders and microcontroller firmware who need to be aware of the CPU running code from locations other than RAM.

Links:

Wikipedia: Booting (https://en.wikipedia.org/wiki/Booting)

SerenityOS Day 1: Debugging the piano app

I love spelunking into unknown codebases with nothing but find and grep. It’s one of the most valuable skills one can develop as a programmer, imo, and in this video you can see how I approach it.

This video focuses on debugging GUI event handling. At first the bug seemed related to the app’s waveform selection, but I then realized it was a more general issue with the SerenityOS GUI UX – selecting a dropdown entry retains focus, and requires an explicit escape key press to release it.

Ultimately I made progress by accident: hitting the keyboard while the selection was still active revealed that behavior to me (which I hadn’t noticed before).

You can see my general debugging flow:

  • Get things building
  • Figure out how to run the app from the command line (to see stdout)
  • Figure out how to print to stdout
  • Use debug prints to understand the GUI event handling

Overall I’m quite impressed with SerenityOS. I only realized after looking into the code exactly how much code they had written and how fully featured the system is. Well done to the team.

Do you need to learn how to implement a red-black tree?