In streams 97-100 or so, I started a basic implementation of code that manages the page tables of an address space. I figured this is fairly generic data structure code, so it might be useful to write unit tests for it.
In the end the experiment sort of succeeded, but I’ve abandoned the strong unit testing focus for the foreseeable future.
The theory that this is fairly generic data structure code turned out to be mostly correct, with some exceptions.
In production, the code needs to allocate physical memory frames for page tables, which it does via a frame allocator in the kernel. That allocator isn’t available in userspace unit tests, so it needs to be mocked out with a mock physical memory allocator.
There’s also address conversion. The VM code needs to convert between virtual and physical addresses because page tables store physical frame numbers, and we need to convert those back to virtual addresses in order to work with the data in the kernel, e.g. when walking the page tables.
These were a bit annoying to mock, but overall the approach worked and I could write unit tests for the map function.
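Roughly, the mock layer can look something like this (a minimal sketch with hypothetical names, not the actual stream code): back the “physical” frames with heap allocations, and make the phys-to-virt conversion the identity.

```c
#include <stdint.h>
#include <stdlib.h>

#define PAGE_SIZE       4096
#define MOCK_MAX_FRAMES 64

/* Backing store: "physical" frames are just heap pages in the test process. */
static void *mock_frames[MOCK_MAX_FRAMES];
static int mock_frame_count;

/* Stands in for the kernel's physical frame allocator. Returns a
   "physical address", which in the mock is simply the heap pointer. */
uintptr_t mock_frame_alloc(void) {
    void *frame = aligned_alloc(PAGE_SIZE, PAGE_SIZE);
    mock_frames[mock_frame_count++] = frame;
    return (uintptr_t)frame;
}

/* Stands in for the kernel's phys-to-virt conversion used when walking
   page tables. Identity in the mock, since the "physical" addresses are
   already valid pointers in the test process. */
void *mock_phys_to_virt(uintptr_t pa) {
    return (void *)pa;
}

/* Free everything between test cases. */
void mock_frames_reset(void) {
    for (int i = 0; i < mock_frame_count; i++)
        free(mock_frames[i]);
    mock_frame_count = 0;
}
```

With that in place, the page-table code under test just calls the mock allocator and converter instead of the kernel’s.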
But the reason I abandoned the approach for the kernel in general is that there are so many other things that will eventually need to be mocked: TLB flushing after page tables are modified, other kinds of hardware accesses, and that’s not even to mention timing and concurrency, which will play a major role.
A major difference between a kernel and a normal application is the extent to which the code interfaces with “external services”. It’s commonplace to mock out external services, but in app code there is usually a finite set of them. In the kernel, there are so many touchpoints with hardware and external services that the mock layer would grow huge.
This actively gets in the way of development and causes friction. That’s not to say it’s not worth it, but there are reasons it’s hard to unit test kernels, and why higher-level system and integration tests are preferred.
This is hands-down the largest, most technically sophisticated, most well-organized event I’ve ever been to.
16000 people, 4 days in Hamburg’s CCH convention center
While there is massive chaos from the nature of 16k people in one place, with so much to do, it is also the most organized, controlled, and structured form of chaos I’ve ever seen
These people obviously just love taking things to an extreme in terms of how organized, visualized, and well-functioning they can make things. They did it so well that at 39c3, infra teams ran out of problems to solve and started fixing the actual CCH facilities themselves (a broken accessibility ramp). This is also not just digital work; this is real, physical labor and handiwork.
The online documentation is incredible. Everything is very clearly written out either on the blog or wiki, or somewhere else. My only nit is that it can sometimes be hard to find info.
There is a mind-blowing amount of custom technical infrastructure created for this. Everything from managing and submitting events, to securely selling the actual open tickets (which sell out in seconds), to dashboards for the queue times for checking in (with comparisons to previous years), to dashboards for the CCH power consumption, network traffic, … you name it
Massive lines for check-in or coat check are well organized, often with volunteers clearly marking the end of the line, or at least some kind of laminated paper held by the last person in line and passed on as new people join.
Talks are all live-streamed AND often have live-translation in multiple languages by volunteers AND are all archived on https://media.ccc.de
Incredible focus on accessibility. They add custom blind-accessible braille bathroom maps in front of all bathrooms, just for the congress(?)
There is a locally running phone system (?) and even GSM (mobile phone network) running at the event
There is on-site security, autism support, accessibility support, etc.
There is complete automated infrastructure for volunteers: for posting jobs that need to be done, signing up for work, and tracking that it’s done. Plus a break room and food for volunteers (angels).
The queues are super-optimized for speed, e.g. check-in and baggage claim.
Tips
The food on site is somewhat expensive and comes in small portions. Buy a few sandwiches and items at Dammtor station on the way in to last you for the day.
There is apparently a c3print printing service on site if you need to print flyers (e.g. for a meetup)
If you are organizing a meetup last minute, a long table in the food/bar area is a decent option. If you go a bit early, you can often claim an entire table for the meetup. This is nice as it requires no reservation (the room reservations easily get booked out), and it allows for standing space around a table so people can mingle and laptops can be shown.
Early check-in to get your wristband is open from the afternoon of Day -1 until late. Go then to avoid a large queue.
If you want to go to the opening talk, go a bit early. It can fill up completely.
In general you need to be fast for things. With so many people, things are often booked quickly.
Consider not connecting to the Wifi, and using mobile data instead. Also consider using Lockdown mode on iOS devices, especially near the Hack Different assembly 😉
The line for retrieving coats/luggage gets very long on Day 4 towards the end of the day – reserve up to ~30 min for waiting in line.
Coat check and baggage claim can fill up.
Everyone leaves and the party ends on Day 4 – no need to book hotel lodging for that night, unless you especially want to stay.
The infodesk does not love it if you need to borrow a marker from them for your signs. They’re a bit more accommodating about the special tape needed to hang things up on the walls.
Time goes extremely fast during the congress due to all the chaos and things to do.
It can be lonely, since there are so many people there, many seemingly with their own friend groups.
Talks can sometimes be a waste of time (as with any conference). Prefer other ways of spending time unless you really want to see a talk, want to hang out with someone by seeing a talk together, or want to sit in a comfortable seat, take a break, and watch a talk.
It’s easy to get sick. Take immune supplements and consider masking.
Inspired by Naval Ravikant, when I learn life lessons, I try to compress them into a short phrase so I remember the lesson better. Here are 75 of my personal learnings:
—
Your lowest points might be your greatest opportunities in disguise.
All truly incredible outcomes start as “crazy” ideas.
If believing everything happens for a reason makes life better, believe it.
Only keep tense what absolutely must be. Relax everything else.
Before they call you visionary, they call you weird.
Everything useful in the world was created by someone who cared enough to push it into reality.
Just because all your friends do something, doesn’t mean you should.
Just because all your friends don’t do something, doesn’t mean you shouldn’t.
Mix your interests to find your area of world-class potential.
World-class expertise is more attainable than you think.
Zoom in unusually far and narrow on anything, and you’ll see things no one has seen before.
Good ideas aren’t enough – they need to look incredible.
It’s easier to get a good deal if you have cash in hand, exact change, arm extended.
Be able to distinguish investments that look like luxuries.
The true cost of things: (Price Paid – Price Sold For) / (# of Times Used).
Invest aggressively in tools used daily.
Money is the worst form of capital. Prefer health, relationships, knowledge, experience.
Half the battle of making great art is knowing the tools to use.
People will tell you the tools they use, if you ask nicely.
Investing aggressively in the right tools will save money in the long run.
When beginning an art form, try many styles, share, and see what works.
When you find what works, stop exploring. Create in that style until you get tired.
Repeat.
New hobbies can have defined, planned lifetimes.
But previous pursuits do remain part of your identity.
Everything you make builds toward your global body of work.
Your global body of work is a ball of dry kindling, waiting for a spark.
The bigger the ball of kindling, the bigger the flame.
The spark might come soon, in decades, or never.
Being public and meeting many people reduces the risk of the latter.
You don’t need to be a world expert to generate novelty.
Remixing is easier than synthesizing from scratch to generate novelty.
The paradox of art: creative decisions lead to different ends. There is no best end, but some are better than others.
Your life is a decades-long performance art project.
A master chef can answer not only the “right” way to make rice, but also: “What if we use half the water? Twice as much? Half the heat?” – because she’s tried.
Everything good in life comes from people.
Find a community where it’s normal to do the things you aspire to do.
Buy your way in if that’s the easiest way.
Cold email or DM people with gratitude and one hyper-specific question.
Don’t assume you’ll be ignored. Test it.
Lack of reply = Test to see how serious you are.
Don’t rely on your memory for following up. Have a system.
Don’t rely on your memory, in general. Have a system.
Mentorship begins the moment they reply.
Finding mentorship is about making yourself an attractive investment.
You’re not a nobody; you’re a rocket on the launch pad.
Show proof of work to de-risk yourself as a mentee.
Go out of your way to travel to where your mentors live.
Some seeds take years to sprout, but bear the most incredible fruit.
Buying something from a potential mentor is a way to get closer to them.
Being in need is a great way to start conversations with strangers.
You can invent small needs on a moment’s notice, anywhere.
For example, simply needing a recommendation.
Compliments are a great way to start conversations with strangers.
You can take actions that make it easier for strangers to start conversations with you, like wearing interesting clothes.
When surrounded by strangers, gravitate toward whoever shows you warmth.
Mingling is easier when you’re early to an event.
The transition from stranger to friend can happen in seconds.
The connection isn’t crystallized until you’ve followed up later online.
Reach out to everyone on the internet whose work you admire.
Move from email to text message to deepen relationships.
You’re not competing against the best – only those who show up.
Any great pursuit is a marathon. Learn the art of long-term consistency.
Genuine passion = endurance.
Copycats will have weak endurance.
You can often bypass bureaucracy by showing up in person, early.
Do things that terrify you.
Sometimes impossible decisions solve themselves with time.
Focus less on winning; focus more on not losing. (Warren Buffett)
Don’t be afraid to exploit your unfair advantages.
Have a personal agenda.
When no one has a strong opinion, that’s an opportunity to advance your agenda, if you wish.
“A healthy man wants a thousand things. A sick man wants one.”
The only competition is to know yourself as fully as possible, and act with maximum authenticity towards that truth.
Remember: Millions would switch lives with you in a heartbeat, and readily inherit every single one of your problems.
Every time I’ve leveled up my life, it’s been because of the people I surrounded myself with, who helped pull me in the direction I wanted to go.
I’ve done this four times in the worlds of:
Heavy metal music
Electronic music
Cybersecurity
Audio software
And I’m currently doing it to learn operating systems development.
By the time I was 16, I had released two heavy metal albums on the internet. A large reason this happened was that I surrounded myself online with a community of people who really cared about this.
In these communities, it was completely normal to be recording your own instrumental heavy metal music, and releasing it every 6-12 months.
Imagine a real-life party for this kind of person. You walk in the room, and if you’re not personally making and releasing your own instrumental heavy metal music online, you’re going to be a bit of the odd one out.
You’re going to do one of two things. Either you’ll leave the room, because it’s not the room for you… Or, if you choose to keep hanging out with these people, you’ll probably start making some music.
Working at Ableton has probably been the best example of this in my life. It was one of the hardest rooms to get into, but the learning on the other side has been incredible.
I’ve been able to work with masters of the craft, who have been doing this for 20+ years. And because I’m on the same team as them, they’re incentivized to pull me up to the level I need to be at to work alongside them.
The point is: You need to find alignment between:
the things you care about, your passions, what you want
the spaces, rooms, and people you’re surrounding yourself with
and the natural direction those rooms are going to pull you in.
My YouTube channel recently crossed 10,000 subscribers, and I’ve done this by exploiting an intersection of three of my unique strengths:
Systems programming
Not being camera shy
Discipline & Consistency
I’m not world class in any of these by themselves, but the combination is rarer and helps me stand out.
I’m definitely not the best programmer in the world.
I’m also definitely not the most charismatic person in the world. But the bar is pretty low for programmers, especially in my niche of systems programming. I’m a lot less camera shy than most programmers I know.
I’m also not the most consistent person, but I’ve been able to sustain a pace of one livestream per week for about two years.
The end result is that I don’t really have competitors. 95% of the people with my technical skill set have no interest in making content or putting themselves out there online. The remaining 5% either don’t quite have the skill set, or don’t quite have the consistency and burn out.
—
Everyone has unfair advantages relative to the other players in the field.
Maybe you have a natural inclination for [thing]?
Maybe you’re young and beautiful?
Maybe you’re experienced and wise?
Maybe you have a lot of energy?
Maybe you’re calm and comforting?
Maybe you have a nice voice?
Maybe you’re really tall or strong?
Maybe you’re a man in a female-dominated field?
Maybe you’re a woman in a male-dominated field?
Maybe you’re not shy?
Maybe you can hyper-focus so intensely?
Maybe you find talking to people effortless?
Maybe you have a lot of time?
Maybe you have a lot of money?
Maybe you’re resourceful under constraints?
Exploiting your unfair advantages is nothing to feel guilty about, once you realize that everyone has them.
Doing things in the world is hard enough as it is. You can choose to attempt it without exploiting your strengths, but just know you’re playing on extra-hard mode.
Here are my rough lab notes from what I learned during weeks 69-73 of streaming, where I did my “boot tour” and ported JOS to a variety of boot methods.
JOS originally used a custom i386 BIOS bootloader. This is a classic approach for osdev: up to 512 bytes of hand-written 16-bit real-mode assembly packed into the first sector of a disk.
I wanted to move away from this, though; I had the sense that using a real third-party bootloader was the more professional way to go about this.
This requires integrating the OS with the Multiboot standard. Grub is actually designed to be a reference implementation of a generic boot protocol called Multiboot. The goal is to allow different implementations of bootloaders and operating systems to transparently interoperate with each other, as opposed to the OS-specific bootloaders that were common at the time of its development.
(Turns out Multiboot never really took off. Linux and the BSDs already had bootloaders and boot protocols and never ported to Multiboot; Grub supports them by implementing their specific boot protocols in addition to Multiboot. I’m not sure any mainstream OS natively uses Multiboot; it’s probably mostly hobby OS projects.)
This integration looks like:
Adding a Multiboot header
Optionally making use of an info struct pointer in EBX
The Multiboot header is interesting. Multiboot was designed to be binary-format agnostic. While there is native ELF support, OSes need to advertise that they are Multiboot compatible by including magic bytes in the first few KB of their binary, along with some other metadata (e.g. about the architecture). A Multiboot-conforming bootloader will scan for this header. Exotic binary formats can include basic metadata about the load address they need and have a basic form of loading done for them (probably just memcpying the entire OS binary into place; the OS might need to continue loading itself from there if it has non-contiguous segments).
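In C, advertising compatibility can be as small as one static struct. Here’s a sketch of a minimal Multiboot v1 header; the .multiboot section name is just a convention I’m assuming, and the linker script must place that section near the start of the binary:

```c
#include <stdint.h>

#define MB1_MAGIC 0x1BADB002
#define MB1_FLAGS 0x0  /* no optional features requested */

/* The bootloader scans the start of the kernel image for the magic
   value, so this must land in an early, 4-byte-aligned location. */
struct multiboot_header {
    uint32_t magic;
    uint32_t flags;
    uint32_t checksum;  /* magic + flags + checksum must equal 0 */
};

__attribute__((section(".multiboot"), aligned(4), used))
static const struct multiboot_header mb_header = {
    .magic    = MB1_MAGIC,
    .flags    = MB1_FLAGS,
    .checksum = (uint32_t)-(MB1_MAGIC + MB1_FLAGS),
};
```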
Then, for Multiboot v1, the OS receives a pointer to an info struct in EBX. This contains useful information provided by the bootloader (CLI args, memory maps, etc.), which is the second major reason to use a third-party bootloader.
There are two versions of the Multiboot standard. V1 is largely considered obsolete and deprecated because this method of passing a struct wasn’t extensible in a backward-compatible way. An OS coded against a newer version of the struct (which might have grown) could crash if loaded by an older bootloader that only provides a smaller struct, because it might dereference struct offsets beyond the bounds of what it was given.
So the Multiboot v2 standard was developed to fix this. Instead of passing a struct, it uses a TLV (type-length-value) format where the OS receives an array of tagged values, and can interpret only those whose tags it’s aware of.
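A sketch of what walking the v2 tags can look like (the tag type values 0, 1, and 6 come from the Multiboot2 spec; the function and struct names here are mine):

```c
#include <stdint.h>

/* Each Multiboot2 boot-info tag starts with this 8-byte header. */
struct mb2_tag {
    uint32_t type;
    uint32_t size;  /* includes this header */
};

/* The boot info (whose physical address arrives in EBX) starts with
   total_size and a reserved word, then a sequence of tags terminated
   by a tag of type 0. */
void parse_mb2_info(void *info) {
    uint8_t *p = (uint8_t *)info + 8;  /* skip total_size + reserved */
    for (;;) {
        struct mb2_tag *tag = (struct mb2_tag *)p;
        if (tag->type == 0)  /* end tag */
            break;
        switch (tag->type) {
        case 1: /* boot command line */ break;
        case 6: /* memory map */        break;
        default:
            /* Unknown tag: skip it. This is what makes the format
               extensible in a backward-compatible way. */
            break;
        }
        p += (tag->size + 7) & ~7u;  /* tags are 8-byte aligned */
    }
}
```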
The build process is also a bit nicer with Grub compared to a custom bootloader. Instead of creating a “disk image” by concatenating a 512-byte assembly block and my kernel, with Grub you can use an actual filesystem.
You simply create a directory with a specific directory structure, then use grub-mkrescue to convert that into an .iso file with some type of CD-ROM filesystem format (internally it uses xorriso). You can then pass the .iso to QEMU with -cdrom instead of -drive as I was doing previously.
Limine
Limine is a newer, modern bootloader aimed at hobby OS developers. I tried it out because it’s very popular, which I now think is well deserved. In addition to implementing essentially every boot protocol, it includes its own native boot protocol with some advanced features like automatic SMP setup, which is otherwise fairly involved.
It uses a similar build process to grub-mkrescue: creating a special directory structure and running xorriso to produce an iso.
I integrated against Limine, but kept my OS on Multiboot2, since Limine’s native protocol only supported 64-bit.
BIOS vs UEFI
Everything I’ve mentioned so far has been in the context of legacy BIOS booting.
Even though I ported away from a custom bootloader to these fancy third-party ones, I’m still using them in BIOS mode. I don’t know exactly what’s in these .iso files, but that means they must populate the first 512 bytes of the media with their own version of the 16-bit real-mode assembly, and bootstrap from there.
But BIOS is basically obsolete — the modern way to boot a PC is UEFI.
The nice thing about integrating against a mature third-party bootloader is that it abstracts the low-level boot interface for you. So all you need to do is target Grub or Limine, and then you can (nearly) seamlessly boot from either BIOS or UEFI.
It was fairly easy to get this working with Limine, because Limine provides prebuilt UEFI binaries (BOOTIA32.EFI) and has good documentation.
The one tricky thing is that QEMU doesn’t come with UEFI firmware by default, unlike with BIOS (where SeaBIOS is included). So you need to get a copy of OVMF to pass to QEMU for UEFI boot. (Conveniently, there are prebuilt OVMF binaries available from the Limine author.)
I failed at getting UEFI booting with Grub to work on my macOS-based dev setup, because I couldn’t easily find a prebuilt Grub BOOTIA32.EFI. There is a package on apt, but I didn’t have a Linux machine readily available to investigate whether I could extract the file from it.
Even though UEFI is the more modern standard, I’m proceeding with BIOS simply to avoid dealing with the whole OVMF thing.
Comparison table

| Bootloader | Pros | Cons |
| --- | --- | --- |
| Custom | No external dependency. | More finicky custom code to support, more surface area for bugs. Doable to get the basics working, but nontrivial effort required to reimplement the more advanced features of Grub/Limine (a boot protocol, CLI args, memory map, etc.). No UEFI support. |
| Grub | Well tested, industrial strength. Available prebuilt from Homebrew. Simple build process: a single i386-grub-mkrescue invocation creates the iso. | Difficult to get working in UEFI mode on Mac (difficult to find a prebuilt BOOTIA32.EFI). |
| Limine | Good documentation. Easy to get working for both BIOS and UEFI. Supports Multiboot/Multiboot2; a near drop-in replacement for Grub. Can opt into the custom boot protocol with advanced features (SMP bringup). | Not used industrially, mostly for hobby osdev. Not packaged in Homebrew; requires building the driver tool from source (but this is trivial). |
libclang_rt and libgcc are compiler runtime support library implementations, which the compiler occasionally emits calls into instead of directly inlining the codegen. Usually this is for software implementations of math operations (e.g. 64-bit division/mod on 32-bit targets). Generally you’ll need a runtime support library for all but the most trivial projects.
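For example, on an i386 target, a plain 64-bit division compiles to a library call rather than inline code:

```c
#include <stdint.h>

/* i386 has no instruction for 64-bit division, so instead of inlining
   the long-division code the compiler emits a runtime library call. */
uint64_t div64(uint64_t a, uint64_t b) {
    return a / b;  /* becomes a call to __udivdi3 when targeting i386 */
}
```

If no runtime support library is linked in, this shows up as an undefined reference to `__udivdi3` at link time.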
cc-runtime is a utility library for hobby OS developers. It is a standalone version of libclang_rt, which can be vendored into an OS build.
The main advantage for me is that it lets me use a prebuilt clang from Homebrew. The problem with the prebuilt clang from Homebrew is that it doesn’t come with a libclang_rt compiled for i386 (which makes sense; why would it, given I’m on an ARM64 Mac).
(This is unlike the prebuilt i386-elf-gcc in Homebrew, which does come with a prebuilt libgcc for i386.)
Since it doesn’t come with libclang_rt for i386, my options are:

| Option | Assessment |
| --- | --- |
| Keep using libgcc from i386-elf-gcc in Homebrew | Undesirable: the goal is to depend on only one toolchain, and here I’d depend on both clang and gcc. |
| Build clang and libclang_rt from source | Undesirable: it’s convenient to avoid building the toolchain from source if possible. |
| Vendor a prebuilt libclang_rt binary | Undesirable: vendoring binaries should be a last resort. |
| Use cc-runtime | Best: no vendored binaries, no gcc dependency, no building the toolchain from source. |
However, cc-runtime does have a gotcha. If you’re not careful, you’ll balloon your binary size.
This is because the packed branch of cc-runtime (which is the default and easiest to integrate) packs all the libclang_rt source files into a single C file, which produces a single .o file. So the final .a library has a single object file in it.
This is in contrast to libgcc.a (or a typical compilation of libclang_rt) where the .a library probably contains multiple .o files — one for each .c file.
By default, linkers only pull in the .o files from a .a library that are actually needed. But since cc-runtime is a single .o file, the whole thing gets included! This means the binary will potentially include many libclang_rt functions that are unused.
In my case, the size of one of my binaries went from 36k (libgcc) to 56k (cc-runtime, naive).
To work around this, you can either use the trunk branch of cc-runtime (which doesn’t pack everything into one .c file); this is ~30 .c files and slightly more annoying to integrate into the build system.
Or, you can use some compiler/linker flags to make the linker’s optimization more granular, so it works at the function level instead of the object-file level.
Those are:
Compiler flags: -ffunction-sections -fdata-sections
Linker flag: --gc-sections
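To see why this helps, consider a hypothetical two-function file:

```c
/* demo.c: one referenced and one unreferenced function. Without
   -ffunction-sections both live in a single .text section, which the
   linker can only keep or drop as a whole. With the flag, each gets
   its own section (.text.used, .text.unused), and --gc-sections can
   discard the unreferenced one:
       cc -ffunction-sections -fdata-sections -c demo.c
       ld --gc-sections ...        (unused() gets dropped)
*/
int used(int x)   { return x + 1; }
int unused(int x) { return x - 1; }
```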
With this, my binary size was reduced to 47k. So there is still a nontrivial size increase, but the situation is slightly improved.
Ultimately, my preferred solution is the first: to use the trunk branch. The build integration is really not that bad, and the advantage is you don’t need to remember to use the special linker flag, which you’d otherwise need to ensure is in any link command for any binary that links against cc-runtime.
That said, those compiler/linker flags are probably a good idea to use anyway, so the best solution might be to do both.
At work, every so often the product teams take a break from normal work and do a “hack sprint”, working on creative, innovative ideas that aren’t necessarily relevant to the hot topics in the main work streams.
This time, many of the designers used AI tools to generate code and build prototypes. Normally, they would have required a developer to collaborate with.
In the end, there were simply more hacks completed than there otherwise would have been. So in this local scope, AI didn’t put devs “out of a job” in the hack sprint just because designers no longer needed them.
Instead it just allowed the same fixed pool of people to make more things happen, pulling more ideas into reality, from the infinitely deep idea pool, than before.
The “infinitely deep idea pool” is my preferred mental model here.
There are people on one end, the pool on the other, and the people can pull ideas out of the pool into reality at a fixed rate.
Here, productivity is defined as “ideas pulled, per person, per second”.
Improvements to tech increase that “idea pull” rate.
People become redundant when technology improves productivity and the goal is just to maintain the status quo: a smaller number of people with higher productivity can then pull the same number of ideas as the previously larger, less productive team.
But often, the goal is not to just maintain the status quo. It’s way too tempting to try to outdo it, and push beyond. We want to pull more ideas out of the pool, which is always possible because the idea pools are infinitely deep.
And if that’s true, then no one becomes redundant — the team could use as much help as it can get to pull more ideas out. (People * Productivity = Ideas Pulled Per Second) This is the phenomenon I observed in the hack sprint.
But that’s an if. Some organizations might be fine to maintain the status quo, or only grow it a small amount, relative to the productivity increase. Then real redundancy is created.
But that’s only in the local scope of that organization. In the global scope, the idea pool from which we all draw is infinite; there will always be ideas for someone to pull.
This metaphor can help explain why technological advancements haven’t yielded the relaxation and leisure promised by past futurists. In order to really benefit like this, you need to hold your needs constant (maintain the status quo) while productivity improves. And that’s very difficult to do.