Category Archives: Software Development

Tip: When you’re learning a new programming language, look up prominent open source projects and copy their style

Aside from the core language, there are many conventions and small details to learn: naming (of variables, classes, files), file structure, and the literal formatting of code.

These are things few blogs talk about because they’re highly opinionated. Nevertheless, when you’re learning, you’ll benefit from having at least some reference for them.

Find a few “professional” open source projects and browse to see what various interpretations of “professional style” are. Then pick one you like most.

Be careful of picking projects that are too old – they might keep an older style for consistency with legacy code, even though they might ideally wish to modernize it.

And ideally pick projects whose contributors are experienced engineers who work on it full-time. Since they “live in” the codebase, they’re less likely to tolerate sloppiness – or are at least more invested in cleaning it up.

The last idea is influenced by @awesomekling, who talks about similar things in his classic “My top advice for programmers looking to improve” car-talk video =]

Resolving git conflicts perfectly every time (using diff3)

Reminder: You should unconditionally be using the diff3 merge style config with git. It’s strictly superior to the default config and provides critical context for resolving conflicts.
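
Enabling it is a one-time config change (on newer versions of git, zdiff3 is a slightly more compact variant of the same idea):

git config --global merge.conflictStyle diff3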

Instead of simply showing the state of the original code and then the incoming conflicting change, it also shows the code before either change was made. This lets you see the changes both sides were attempting, and mechanically reason about how to merge them.

The mechanical process is (credit to Mark Zadel for showing me):

  • Begin with the common ancestor code (the middle section).
  • Identify the difference between it and the original code (the top section).
  • Apply that difference to the incoming change (the bottom section), and keep only the result.

The opposite direction (identify the difference between the middle and the incoming change, then apply that difference to the original code and keep the result) also works. Choose whichever is simpler.

Example:

Here’s a diff that happened on master.

 int main()
 {
     int x = 41;
-    return x + 1;
+    return x + 2;
 }

Here’s a diff that happened in parallel on a development branch.

 int main()
 {
     int x = 41;
-    return x + 1;
+    int ret = x + 1;
+    return ret;
 }

Here’s the merge conflict from e.g. rebasing the branch onto master.

int main()
{
    int x = 41;
<<<<<<< HEAD
    return x + 2;
||||||| parent of 4cfa6e2 (Add intermediate variable)
    return x + 1;
=======
    int ret = x + 1;
    return ret;
>>>>>>> 4cfa6e2 (Add intermediate variable)
}

On the first side, we change the core computation. On the second side, we extract a variable.

One way to resolve the conflict is to take the change between the middle and the top (x + 1 -> x + 2), then apply it to the bottom.

That produces the correct conflict resolution:

int main()
{
    int x = 41;
    int ret = x + 2;
    return ret;
}

Working in the other direction (extracting a variable out of the top’s x + 2 code) produces the same end result.

Links:

https://git-scm.com/docs/merge-config#Documentation/merge-config.txt-mergeconflictStyle

https://blog.nilbus.com/take-the-pain-out-of-git-conflict-resolution-use-diff3/

WIP: Business models can have significant technical impact

Business models are an interesting topic that lives directly at the boundary between business and technology.

The business model describes an abstract plan for how the business is to function sustainably, and developing it is a full-time job in and of itself. Then, it must be implemented in the product using technology.

For example, a typical SaaS business model involves subscription pricing at various intervals (monthly & yearly), with a discount given for the yearly plan in exchange for more money paid up front and longer commitment. There may or may not be a limited time free trial period, or alternatively a limited time guaranteed refund. Furthermore, there may be different pricing tiers that unlock more advanced features.

Ultimately, this is all going to end up as code that models the various pricing tiers, plans, and timing deadlines, and enforces access (i.e. making sure the advanced features are only available to pro users). Stripe is a common service used to model arbitrary business models and execute payment processing; it provides client libraries offering data models for concepts like users and plans.
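
As a rough illustration, here’s a minimal sketch of the kind of data model this implies. The types and fields are invented for illustration – real systems would delegate most of this to a provider like Stripe.

// Illustrative sketch only – names and fields are made up.
enum class Tier { Free, Pro };
enum class Interval { Monthly, Yearly };

struct Plan {
    Tier tier;
    Interval interval;
    int price_cents;   // yearly plans trade a discount for upfront commitment
    int trial_days;    // 0 means no free trial
};

// Entitlement enforcement lives in one place.
bool can_use_advanced_features(const Plan& plan) {
    return plan.tier == Tier::Pro;
}

int main() {
    Plan pro_yearly{Tier::Pro, Interval::Yearly, 9900, 14};
    return can_use_advanced_features(pro_yearly) ? 0 : 1;
}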

While subscription models have grown in popularity as software shifts to the web, they’re not the only model. The older model of buying discrete packaged software versions still exists, mostly used by vendors of desktop software. In this model, customers pay a larger sum for unlimited use of a specific major version of the software, and can optionally pay again to upgrade to a newer major version when it’s released. A variant of this exists where there’s no upgrade fee (i.e. “lifetime free updates”). Another model, called “rent to own”, is like a subscription except that payments stop at a certain point, after which use is free and unlimited.

Whether the software runs on the client or the server is another dimension to consider which may influence the business model – server-side software is most commonly sold using subscriptions these days.

Examples, by model and by where the software runs:

  • Subscription: Adobe (client); SaaS web apps (server)
  • One time payment per version: Dash, Ableton Live (client)
  • One time payment, lifetime free updates: FL Studio, Tailwind CSS (technically more assets than software) (client); SaaS web app lifetime plans (e.g. Roam Research) (server)
  • Rent to own: audio plugins via Splice (client)

The business model impacts technical strategy in a few ways.

  1. Branching and release management
  2. Compatibility of document artifacts

Subscriptions, “lifetime free updates”, and “rent to own” are simplest in terms of branching and release management. All users are expected to run the latest software (upgrades are either free, or automatic in the server-side case), so development can largely happen on a single main branch from which releases are cut.

“One time payment per version” is more complex because potentially multiple major versions of the software need to be maintained in parallel. Ideally bugfixes made in Version 2 would also be applied to V3 and V4 when appropriate, but no new feature development should make its way back to V2 and risk being included in binaries sent to users who only paid for V2. For this situation, long-term release branches make sense: they provide code isolation, though they require some mechanism for forwarding bugfixes.
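
A sketch of how forwarding a fix might look with git (branch names and the commit reference are illustrative):

git checkout release/v2          # long-term branch for the V2 product line
git commit -am "Fix exporter crash"
git checkout release/v3
git cherry-pick <fix-commit>     # forward the bugfix to V3
git checkout main
git cherry-pick <fix-commit>     # ...and to the next major version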

The second aspect is document artifact compatibility. Again, subscriptions, “lifetime free updates”, and “rent to own” are simpler in that all users can be assumed to be running the latest version – or at least can be told to upgrade at no cost.

“One time payment per version” is again more complex because multiple versions of the software exist in the wild, all producing artifacts of slightly different versions. If artifacts may be shared between users of different versions, this creates a mess of compatibility issues to deal with.

Core devs are not necessarily product experts

A common misconception I long held is that core devs of a product must be the top product experts.

The reality is that, as a core dev:

  • there isn’t enough time to be both a dev and a user
  • knowledge of the implementation taints you and can prevent you from seeing the product clearly
  • it’s extremely difficult to focus deeply on the details while also viewing the product from a high level, holistically

Yes, you will have the absolute expertise on certain product behaviors or possibilities. But almost certainly never for the whole product at once; just the part you’ve been working on lately, where the knowledge is most fresh.

This is why it’s so important to surround yourself with “power users” – those who are untainted and unburdened by the implementation, and can use their full mental power to absorb and innovate on the product simply as a product.

These are often the people who find the most interesting uses and abuses of systems, ones the core devs weren’t even aware of.

This can happen for any kind of product, including and especially programming languages. Many of the interesting programming “patterns” are created not by the developers of the language, but by “power users”.[citation needed]

Runtime polymorphism cheat sheet

While C++ is used here as an example, the concepts apply to any statically typed programming language that supports polymorphism.

For example, while Rust doesn’t have virtual functions and inheritance, its trait objects (dyn Trait, typically behind a Box or reference) are conceptually equivalent, and Rust enums are conceptually equivalent to std::variant as a closed-set runtime polymorphism feature.

Virtual functions/inheritance vs. std::variant:

  • Runtime polymorphism: yes for both. Virtual functions dispatch dynamically via a vtable; std::variant dispatches dynamically via an internal union tag (discriminant) and a compile-time generated function pointer table.
  • Semantics: virtual functions use reference semantics – clients must operate through a pointer or reference. std::variant uses value semantics – clients use the value type directly.
  • Open/closed: virtual functions are open – new types can be added without recompiling (even via DLL), and clients need no adjustment. std::variant is closed – the types must be explicitly specified in the variant, and clients/dispatchers generally need adjusting.
  • Codegen: virtual functions cost a virtual call at the client plus the virtual methods themselves. std::variant costs a function table dispatch on the union tag, plus a copy of the callable for each type in the dispatch; if doing generic dispatch (virtual function style), the functions are also needed in each struct. Inlining is possible.
  • Class definition boilerplate: virtual functions require class/pure-virtual-method boilerplate; std::variant requires almost none.
  • Client callsite boilerplate: virtual functions require almost none; std::variant’s std::visit() boilerplate can be onerous.
  • Must handle all cases in dispatch? Virtual functions have no support – the best you can do is an error-prone chain of dynamic_cast<>, and if you need this, virtual functions are not the best tool. std::variant can support this.

Overall, virtual functions and std::variant are similar, though not completely interchangeable features. Both allow runtime polymorphism, however each has different strengths.

Virtual functions excel when the interface/concept for the objects is highly uniform and the focus is on code/methods; this allows callsites to be generic and avoid manual type checking of objects. Exposing even a const data member through the public virtual interface is awkward, since access must go through a virtual call.

std::variant excels when the alternative types are highly heterogeneous, containing different data members, and the focus is on data. The dispatch/matching lets you handle the different cases safely and maintainably, and forces an update when a new alternative type is added. Accessing data members is much more ergonomic than with virtual functions, but the opposite is true for generic function dispatch across all alternative types, because std::visit() is not ergonomic.
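
Here’s a minimal sketch contrasting the two styles (the shape types are invented for illustration):

#include <iostream>
#include <memory>
#include <type_traits>
#include <variant>
#include <vector>

// Open set: virtual functions, reference semantics.
struct Shape {
    virtual ~Shape() = default;
    virtual double area() const = 0;
};
struct Circle : Shape {
    double r;
    explicit Circle(double r) : r(r) {}
    double area() const override { return 3.14159 * r * r; }
};
struct Square : Shape {
    double side;
    explicit Square(double side) : side(side) {}
    double area() const override { return side * side; }
};

// Closed set: std::variant, value semantics.
struct CircleV { double r; };
struct SquareV { double side; };
using ShapeV = std::variant<CircleV, SquareV>;

double area(const ShapeV& s) {
    // std::visit forces every alternative to be handled.
    return std::visit([](const auto& v) -> double {
        if constexpr (std::is_same_v<std::decay_t<decltype(v)>, CircleV>)
            return 3.14159 * v.r * v.r;
        else
            return v.side * v.side;
    }, s);
}

int main() {
    // The open set needs pointers for heterogeneous storage...
    std::vector<std::unique_ptr<Shape>> open;
    open.push_back(std::make_unique<Circle>(1.0));
    open.push_back(std::make_unique<Square>(2.0));
    for (const auto& s : open) std::cout << s->area() << '\n';

    // ...while plain values suffice for the closed set.
    std::vector<ShapeV> closed{CircleV{1.0}, SquareV{2.0}};
    for (const auto& s : closed) std::cout << area(s) << '\n';
}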

Building on these low level primitives, one can build:

  • Component pattern (typically using virtual functions) (a value semantics technique; moves away from static typing and towards runtime typing)
  • Type erasure pattern (also virtual functions internally) (a value semantics wrapper over virtual functions)

Fun facts:

  • Rust also has exactly these, just with different names and syntax. The ergonomics and implementation differ, but the concepts are the same. Rust’s match syntax is more ergonomic for the variant-equivalent. Instead of a normal pointer to an object carrying a vtable pointer, Rust uses fat pointers, apparently because they allow “attaching a vtable to an object whose memory layout you cannot control”, which Rust traits require. (source)
  • Go uses type erasure internally, but offers this as a first class language feature.

Case study: Component pattern

The component pattern is a typical API layer alternative to classical virtual functions. With classical runtime polymorphism via virtual functions, the virtual functions and inheritance are directly exposed to the client – the client must use reference semantics and directly invokes virtual calls.

With the component pattern, virtual functions are removed from the API layer. Clients use value semantics and then “look up” a component for behavior that would have previously been inherited.

API classes, instead of inheriting, contain a container of components, which are themselves runtime polymorphic objects of heterogeneous types. The components can classically use virtual functions for this, inheriting from some parent class; the API class then contains a container of pointers to that parent class. API clients look up the component they are interested in via its type, and the API class implements a lookup method that iterates the components and identifies the right one using dynamic_cast or similar.

However, variants offer another way to implement this. Rather than having all components inherit from a superclass, they can be separate classes included in a variant. The API class then has a container of this variant type. In the lookup method, instead of using dynamic_cast, it uses std::holds_alternative, which is conceptually equivalent.
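
A minimal sketch of the variant flavor (component types and names are invented for illustration):

#include <variant>
#include <vector>

struct Physics { double mass; };
struct Sprite  { int layer; };
using Component = std::variant<Physics, Sprite>;

struct Entity {
    std::vector<Component> components;

    // Look up a component by type; nullptr if absent.
    // (The virtual function flavor would iterate base-class pointers
    // and try dynamic_cast<T*> instead.)
    template <typename T>
    T* find() {
        for (auto& c : components)
            if (std::holds_alternative<T>(c))
                return &std::get<T>(c);
        return nullptr;
    }
};

int main() {
    Entity e;
    e.components.push_back(Physics{10.0});
    if (auto* p = e.find<Physics>()) {
        p->mass += 1.0;   // the client works with the concrete type directly
    }
}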

This is a somewhat unusual application of runtime polymorphism, and neither implementation method stands out as strictly better. Since the components don’t really share a common interface (they would inherit only so they can be stored heterogeneously in a container), virtual functions don’t offer a strong benefit. But since the component objects are never dispatched on (they are always explicitly looked up by type), the variant method doesn’t offer a strong benefit either.

The main difference in this scenario is the core difference between virtual functions and variants: whether the set of “child” types is open or closed. Because virtual functions are open, new components can be added by simply inheriting from the parent class, without touching any existing code. New components could potentially even be loaded dynamically, and this would work.

With variants, when new components are added, the core definition of the component variant needs to be adjusted to include the new type. No dynamic loading is supported.

So it appears that virtual functions have a slight advantage here.

See: https://gameprogrammingpatterns.com/component.html

Q: What about std::any?

std::any is loosely similar to virtual functions and std::variant in that it implements type erasure: it allows a set of heterogeneous objects of different types to be referenced using a single type. Virtual functions and std::variant aren’t typically called “type erasure” as far as I’ve heard, but this is effectively what they do.

However, that’s where the similarities end. std::any provides type erasure, but not any kind of object polymorphism. With std::any, there is no notion of a common interface that can be exercised across a variety of types. In fact, there is basically nothing you can do with a std::any except store it and copy it. To extract the internally stored object, it must be queried using its concrete type (via std::any_cast()), which tends to defeat the purpose of polymorphism.

std::any is exclusively designed to replace instances where you might previously have used a void* in C code, offering improved type safety and possibly efficiency. The classic use case is implementing a library that allows clients to pass in some context object that will later be passed to callbacks supplied by the client.

For this use case, the library only needs to store and retrieve the user’s context object. That’s it. It will literally never interpret the object or access it in any other way. This is why std::any fits here.
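
A minimal sketch of that use case (the “library” here is invented for illustration):

#include <any>
#include <functional>
#include <iostream>
#include <string>

// The library stores the client’s context without ever interpreting it.
struct Library {
    std::any user_context;
    std::function<void(std::any&)> callback;

    void trigger() { callback(user_context); }   // hands the context back verbatim
};

int main() {
    Library lib;
    lib.user_context = std::string("client state");
    lib.callback = [](std::any& ctx) {
        // Only the client knows the concrete type, so only the client casts.
        std::cout << std::any_cast<std::string&>(ctx) << '\n';
    };
    lib.trigger();
}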

Another use case for std::any might be the component pattern in C++, where objects store a list of components which client code then explicitly queries for. In this case, the framework also never deals directly with the components; it simply stores them and exposes them to clients on request.

More: https://devblogs.microsoft.com/cppblog/stdany-how-when-and-why

WIP: “Interesting” is in the eye of the beholder

I posit that many software projects have a small core of the “most interesting” work, surrounded by a much larger body of “support engineering”. The “support engineering” is in service of the “most interesting” core, making it usable and a good product.

For example:

  • Compilers: Core optimizations vs cli arg parsing
  • Kernels: Core context switching vs some module that prints out the config the kernel was built with
  • Audio software: Core engine, data model, or file format work vs UI work on the settings menu
  • ChatGPT: Core machine learning vs front end web dev to implement the chat web UI

But the funny thing is that “interesting” is in the eye of the beholder. For every person who thinks X is the “most interesting”, perhaps most technical, part of the project, there will be another person who is totally uninterested in X and delighted to let someone else handle it for them. This may well be because X is too technical, too in the weeds.

This generalizes to work and society as a whole – people gravitate towards work that suits their interests. The areas they don’t find interesting are hopefully filled by others who are naturally wired differently. This does happen in real life, though of course it plays out less cleanly.

Tip: Have two checkouts of the repo

It can be handy to have two checkouts of a repo: the primary one (A) for working in, and a “spare” (B).

This can be useful in a number of situations:

  • You’re in the middle of a rebase in A and want to quickly reference something in another branch. Instead of messing up your rebase state or going to GitHub, you just go to B.
  • You’re code reviewing an intense refactor of an API. It can be handy to quickly flip back and forth between the versions of the codebase before and after the API change to get a better sense of what changed. Sometimes the diff isn’t quite enough.
  • You’re code reviewing one branch and want to quickly code review another in a way that’s “immutable” to your work environment.
  • You want to quickly flip back and forth between builds of two different branches.
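
The simplest setup is two plain clones; git worktree is an alternative that shares one object store between the checkouts (paths and URL are illustrative):

git clone https://github.com/you/repo.git repo-a    # primary checkout (A)
git clone https://github.com/you/repo.git repo-b    # spare checkout (B)

# Alternatively, attach the spare to the primary’s object database:
cd repo-a
git worktree add ../repo-b some-branch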