Category Archives: Favorite

Life is a business; life is a game; life is art

Here are three ways to view your life:

Life as a business

Life is a business, and you are the Founder & CEO.

You have goals, resources, and agency. The idea is to build the strongest business and life for yourself. You do this by making good decisions and generating profit.

The savviest founders look for holes in the fabric of society, the market, their industry — and exploit them. They seek trends that allow them to be ahead of the curve.

Life as a game

Life is a game, and you are a player.

You are initialized with random parameters that inform your strengths, weaknesses, and initial environment. There’s an endless world in front of you to explore and play in.

The idea is to do well at the game — build points, power, connections. You encounter other players, transact with them, exchange moves, determine if they are trustworthy or hostile.

You learn the rules over time and understand what game you are even playing. You discover that within the greater context of “the game”, there are many sub-games you can play.

Randomness and luck are all built into the game.

Life as an art piece

Life is an art project, and you are the artist.

You have a blank canvas in front of you and an infinite number of creative decisions to make. The idea is to find a beautiful & satisfying creative endpoint for your piece.

There are many creative paths to take, leading to different ends. There is no best end, but some ends are better than others.

Like any other artist, you must apply techniques to manage the decision space. You apply constraints — sometimes artificially, sometimes destructively — to move forward.

The difference between computer science and computer engineering

I first wondered about this question as a confused 17-year-old considering options for university — this blog post is for that kid and people like him. It’s my opinion on this after working professionally in both fields and earning a degree in one of them (CE).

I think tracing each back to its roots is a good way to gain an intuition for the difference. Computer Science is ultimately derived from Mathematics, while Computer Engineering derives from Electrical Engineering.

The boundary is blurry, but CS is classically more concerned with topics like algorithms & runtime (“Big O”), data structures, and the theory of computation (“What kinds of problems can we solve with computers, what kinds can’t we?”). You’ll do many more formal proofs in CS.

CE concerns itself more with how digital computing systems work in the real world, starting from foundations in electronics (i.e. transistors), to digital logic, building up towards embedded systems, the hardware-software interface, and computer architecture. The latter is where things get blurry and bridge to the “lower levels” of CS.

In practice, most practitioners of either have some awareness of the other, although I suspect (without evidence) that CE’s tend to know more about CS than vice-versa, on average. However, I also expect that CE’s tend to write “worse” code than CS’s, on average — “writing good code” and “knowing CS” do not necessarily correlate. 1

So which should one choose?

I can only offer this scattered set of anecdotes and advice:

  • I knew many more CE majors that switched to CS than the other way around. (Mostly people that just wanted to become “programmers” and realized that circuits & electronics are not for them. I’m not sure if they enjoyed the theory courses more, though.)
  • I have heard and personally agree that it’s easier to go “up” the stack than “down”.
  • Don’t give any experience building computers or doing IT support too much weight in the decision — those topics are neither CE nor CS; they fall into the largely separate field of “IT”. You might be surprised how possible it is to study either CE or CS, work in these areas professionally, and never learn how to physically build a computer. That said, building a computer falls closer to CE than CS — in CE you will learn how to design (some of, at a very high level) the Lego pieces you are putting together.
  • Personally I’ve always been foremost curious “how computers actually work” and CE has served me well.

Lastly, what about “software engineering?”

Software engineering is an even newer field, derived from Computer Science, but it describes the job of many (if not most) people that study either CS or CE. My view of it is that a degree in it focuses on the most practical elements of CS, focusing less (or not at all) on the theory. I don’t expect much of CE is covered in it, except for maybe a cursory overview of digital logic and computer architecture. But frankly, I don’t know any software engineering majors and am not qualified to really speak on this.

How did you choose between them?

Not rigorously. My decision was mostly based on the presence of other “engineers” in my life and a friend that studied CE. These were not good rationales for the decision, but I lucked out and had a good outcome anyway. I think I would have been fine either way, and would have naturally gravitated up or down the stack to the point I’m at now. Being in CE did allow me to take some very cool classes in college, like a very modern, practical compilers course using LLVM (unlike the more theory-based compilers course taught in the CS school), which had a direct impact on my ability to get a great job afterward, and a fun embedded systems class.

Sometimes, you just need to be willing

Sometimes, you just need to be willing. No special skills or particular competence necessary — simply just being willing to do something others can but won’t do.

I learned and have profited from this realization at work in the past year. A certain set of important tasks on my team are somewhat hands-on in nature and not compatible with remote work. Our colleague that usually does these left the team. I stepped up and filled this gap.

The work is not particularly difficult — my teammates are smarter than me and could surely do it also. But I’m the only one that was willing and able to sacrifice remote work and commute every day. So I’ve been doing it.

Although it’s not the hardest, the work is nonetheless very important — critical, even. So I’ve been getting a lot of credit, brownie points, and even a raise from doing it.

Not because I’m smart — just willing.

How the hardware/software interface works

It’s really all about memory. But to start at the beginning, the rough stack looks like this:

  • Userspace application
  • Kernel driver
  • Hardware device

I find it easier to think about this from the middle out. On Linux, the kernel exposes hardware devices as files backed by the /dev virtual filesystem. Userspace can do normal syscalls like open, read, write, and mmap on them, as well as the less typical ioctl (for more arbitrary, device-specific functionality).1
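For a concrete sense of what that looks like from userspace, here’s a minimal sketch — the device path /dev/mydevice and the MYDEV_RESET request code are made up for illustration; a real device defines its own ioctl numbers:

#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>

#define MYDEV_RESET 0x1234                    /* hypothetical device-specific ioctl request */

int main(void)
{
    int fd = open("/dev/mydevice", O_RDWR);   /* hypothetical device file */
    if (fd < 0) {
        perror("open");
        return 1;
    }

    char buf[64];
    ssize_t n = read(fd, buf, sizeof(buf));   /* ordinary read syscall */
    printf("read %zd bytes from the device\n", n);

    ioctl(fd, MYDEV_RESET, 0);                /* device-specific command */

    close(fd);
    return 0;
}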

The files are created by kernel drivers, which are modules of kernel code whose sole purpose is to interface with and abstract hardware so it can be used by other parts of the operating system, or userspace. They are implemented using internal driver “frameworks” in the kernel, e.g. the I2C or SPI frameworks. When you interface with a file in /dev, you are directly triggering callback handlers in a driver, which execute in the process context.
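To make the callback idea concrete, here’s a sketch of roughly the smallest possible character driver, using the kernel’s misc device framework (the names hello_dev and hello_read are made up; a real hardware driver would typically register through a bus framework like I2C, SPI, or platform instead):

#include <linux/fs.h>
#include <linux/miscdevice.h>
#include <linux/module.h>

static ssize_t hello_read(struct file *file, char __user *buf,
                          size_t len, loff_t *off)
{
    /* Runs in the context of whichever process called read() on the file. */
    const char msg[] = "hello from the kernel\n";

    return simple_read_from_buffer(buf, len, off, msg, sizeof(msg) - 1);
}

static const struct file_operations hello_fops = {
    .owner = THIS_MODULE,
    .read  = hello_read,
};

static struct miscdevice hello_dev = {
    .minor = MISC_DYNAMIC_MINOR,
    .name  = "hello_dev",                   /* appears as /dev/hello_dev */
    .fops  = &hello_fops,
};

module_misc_device(hello_dev);              /* register on load, unregister on unload */
MODULE_LICENSE("GPL");

Reading /dev/hello_dev (e.g. with cat) invokes hello_read directly.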

That’s how userspace interfaces with the kernel. How do drivers interface with hardware? These days, mostly via memory mapped I/O (MMIO)2. This is when device hardware “appears” at certain physical addresses, and can be interfaced with via load and store instructions using an “API” that the device defines. For example, you can read data from a sensor by simply reading a physical address, or write data out to a device by writing to an address. The technical term for the hardware component these reads/writes interface with is “registers” (i.e. memory mapped registers).
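In kernel code this usually goes through a small set of MMIO accessor helpers. A rough sketch, with a made-up base address and register offsets (real values come from the SoC datasheet or device tree):

#include <linux/errno.h>
#include <linux/io.h>

#define SENSOR_PHYS_BASE  0x40010000        /* hypothetical device base address */
#define SENSOR_REG_DATA   0x00              /* hypothetical data register offset */
#define SENSOR_REG_CTRL   0x04              /* hypothetical control register offset */

static void __iomem *base;

static int sensor_setup(void)
{
    /* Map the device's physical register block into kernel virtual memory. */
    base = ioremap(SENSOR_PHYS_BASE, 0x1000);
    if (!base)
        return -ENOMEM;

    writel(0x1, base + SENSOR_REG_CTRL);    /* e.g. set an enable bit */
    return 0;
}

static u32 sensor_read_sample(void)
{
    return readl(base + SENSOR_REG_DATA);   /* a load from the data register */
}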

(Aside: Other than MMIO, the other main interface the kernel has with hardware is interrupts, for interrupt-driven I/O processing (as opposed to polling, which is what MMIO enables). I’m not very knowledgeable about this, so I won’t get into it other than to say drivers can register handlers for specific IRQ (interrupt request) numbers, which will be invoked by the kernel’s generic interrupt handling infrastructure.)
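The registration itself is a one-liner. A hedged sketch, with a made-up IRQ number (in a real driver it usually comes from the device tree or the PCI core):

#include <linux/interrupt.h>

#define SENSOR_IRQ 42                       /* hypothetical interrupt line */

static irqreturn_t sensor_irq_handler(int irq, void *dev_id)
{
    /* Acknowledge the device and kick off any deferred work here. */
    return IRQ_HANDLED;
}

static int sensor_register_irq(void *dev)
{
    /* Ask the generic IRQ layer to call our handler when this line fires. */
    return request_irq(SENSOR_IRQ, sensor_irq_handler, 0, "sensor", dev);
}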

Using MMIO looks a lot like the embedded bare metal programming you might do on a microcontroller like a PIC or Arduino (AVR). At the lowest level, a kernel driver is really just embedded bare metal programming.
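For comparison, here’s the bare metal flavor of the same idea on an AVR (assuming an Arduino Uno’s ATmega328P): DDRB and PORTB are just memory-mapped registers at fixed addresses, poked directly with loads and stores.

#define F_CPU 16000000UL
#include <avr/io.h>
#include <util/delay.h>

int main(void)
{
    DDRB |= (1 << DDB5);                    /* configure PB5 (the on-board LED) as an output */

    for (;;) {
        PORTB ^= (1 << PORTB5);             /* toggle the pin by writing the data register */
        _delay_ms(500);
    }
}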

Here’s an example of a device driver for UART (serial port) hardware for ARM platforms: linux/drivers/tty/serial/amba-pl011.c. If you’re debugging an ARM Linux system via a serial connection, this might be the driver being used to e.g. show the boot messages.

The lines like:

cr = readb(uap->port.membase + UART010_CR);

are where the real magic happens.

This is simply doing a read from a memory address derived from some base address for the device, plus some offset of the specific register in question. In this case it’s reading some control information from a Control Register.

#define UART010_CR		0x14	/* Control register. */

linux/include/linux/amba/serial.h#L28

Device interfaces may range from just a few registers to many.

To go one step deeper down the rabbit hole, how do devices “end up” at certain physical addresses? How is this physical memory map interface implemented?3

The device/physical address mapping is implemented in digital logic outside the CPU, either on the System on Chip (SoC) (for embedded systems), or on the motherboard (PCs)4. The CPU’s physical interface includes the address, data, and control buses. Digital logic converts bits of the address bus into signals that mutually exclusively enable devices that are physically connected to the bus. The implementations of load/store instructions in the CPU set a read/write bit appropriately on the control bus, which lets devices know whether a read or a write is happening. The data bus is where data is transferred out from or into the CPU.
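The decode itself is hardware, not software, but conceptually it behaves something like this toy C model (all address ranges are made up for illustration):

#include <stdint.h>
#include <stdio.h>

enum device { DEV_RAM, DEV_UART, DEV_TIMER, DEV_NONE };

/* Mutually exclusive "chip selects" derived from the high bits of the address bus. */
static enum device decode(uint32_t addr)
{
    if (addr < 0x40000000)                          return DEV_RAM;
    if (addr >= 0x40010000 && addr < 0x40011000)    return DEV_UART;
    if (addr >= 0x40020000 && addr < 0x40021000)    return DEV_TIMER;
    return DEV_NONE;
}

int main(void)
{
    /* A store to 0x40010014 would assert the UART's select line, put offset 0x14
     * on the address lines, and set the control bus's write bit. */
    printf("0x40010014 selects device %d\n", decode(0x40010014));
    return 0;
}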

In practice, documentation for real implementations of these systems can be hard to find, unless you’re a customer of the SoC manufacturer. But there are some out there for older chips, e.g.

Here’s a block diagram for the Tegra 2 SoC architecture, which shipped in products like the Motorola Atrix 4G, Motorola Droid X2, and Motorola Photon. Obviously it’s much more complex than my description above. Other than the two CPU cores in the top left, and the data bus towards the middle, I can’t make sense of it. (link)

While not strictly a “System on Chip”, a classic PIC microcontroller shares many characteristics with an SoC (CPU, memory, peripherals, all in one chip package), but is much more approachable.

We can see the single MIPS core connected to a variety of peripheral devices on the peripheral bus. There are even layers of peripheral bussing, with a “Peripheral Bridge” connected to a second peripheral bus for things like I2C and SPI.

How to be happy

Note to self:

  1. Remember: You are already enough just as you are, right here, right now. You don’t need to achieve or do anything. 1
  2. Remember: The only competition in life — if you must think of it that way — is to know yourself as fully as possible, and act with maximum authenticity towards that truth.
  3. Remember: All things considered, you have it good — many around the world would kill to switch places and inherit every single one of your problems.

Why we get busier as we get older

There are a few main reasons why we get busier as we get older:

Adulting

As you age, you increasingly lose free time to dealing with “adulting” types of tasks: taxes, paying bills, taking your car to the shop, researching insurance alternatives, and so on.

Relationships

As we age, we accumulate relationships. And while they have numerous benefits and make life worth living, they don’t come for free. They require time and energy to maintain — and at the end of the day, they become tasks on our todo lists. Even something as innocuous as an old friend reaching out to send a text or schedule a call can, at times, feel like a burdensome task to accomplish.

When you’re a child or teenager, the only people you know are your family and your friends (your first generation of friends). Since you barely know anyone, you don’t really have to keep in touch with anyone. Thus, more free time.

Hobbies

As we age, we accumulate interests, hobbies, and pursuits. These also don’t come for free.

As an adult, you begin to explore the world — reading books, picking up rock climbing, learning to paint, planning & taking trips once or twice a year. Your old interests don’t exactly go away, and there are always worlds of new interests to discover. Part of you feels like you should maintain or get back to some of those old interests you cherished so much. Another part is excited to get into scuba diving.

When you’re a child or teenager, you might have just one or two pursuits that occupy your time outside school. Without all the accumulated hobbies from your past, you have more free time.

Aging

Aging implies that your body will start to perform worse and more slowly, likely even breaking in some ways. You’ll spend more and more time going to doctor’s appointments, having surgeries, and tending to medical conditions. It will take more effort to maintain your body through fitness. This all takes time.


Like many problems in life, the frustration at your seemingly shrinking free time as you age can be helped by setting expectations properly. Instead of feeling cheated as your allowance of time shrinks year by year, expect it. Expect that by all logic, given the adulting to do, relationships to maintain, pursuits to keep up with, and the natural course of aging, you should have no free time at all — which gives you more reason to celebrate and appreciate the rare free moment when it comes along.

Linux Internals: How /proc/self/mem writes to unwritable memory

Introduction

An obscure quirk of the /proc/*/mem pseudofile is its “punch through” semantics. Writes performed through this file will succeed even if the destination virtual memory is marked unwritable. In fact, this behavior is intentional and actively used by projects such as the Julia JIT compiler and rr debugger.
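A minimal sketch of that behavior (error handling omitted; assumes a 64-bit Linux system with /proc mounted):

#define _GNU_SOURCE
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    /* An anonymous page mapped without PROT_WRITE. */
    char *page = mmap(NULL, 4096, PROT_READ,
                      MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

    /* A direct store (*page = 'x') would segfault. Writing through
     * /proc/self/mem at the page's virtual address succeeds anyway. */
    int fd = open("/proc/self/mem", O_RDWR);
    pwrite(fd, "hello", 5, (off_t)(uintptr_t)page);

    printf("%.5s\n", page);                 /* prints "hello" */
    close(fd);
    return 0;
}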

This behavior raises some questions: Is privileged code subject to virtual memory permissions? In general, to what degree can the hardware inhibit kernel memory access?

By exploring these questions1, this article will shed light on the nuanced relationship between an operating system and the hardware it runs on. We’ll examine the constraints the CPU can impose on the kernel, and how the kernel can bypass these constraints.

Continue reading

Double fetches, scheduling algorithms, and onion rings

Most people thought I was crazy for doing this, but I spent the last few months of my gap year working as a short order cook at a family-owned fast-food restaurant. (More on this here.) I’m a programmer by trade, so I enjoyed thinking about the restaurant’s systems from a programmer’s point of view. Here are some thoughts about two such systems.

Continue reading

What they don’t tell you about demand paging in school

This post details my adventures with the Linux virtual memory subsystem, and my discovery of a creative way to taunt the OOM (out of memory) killer by accumulating memory in the kernel, rather than in userspace.

Keep reading and you’ll learn:

  • Internal details of the Linux kernel’s demand paging implementation
  • How to exploit virtual memory to implement highly efficient sparse data structures (a quick sketch follows this list)
  • What page tables are and how to calculate the memory overhead incurred by them
  • A cute way to get killed by the OOM killer while appearing to consume very little memory (great for parties)
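On the sparse data structure point, here’s a minimal sketch of the underlying trick (assuming default overcommit settings on a 64-bit Linux system):

#define _GNU_SOURCE
#include <stdio.h>
#include <sys/mman.h>

#define TABLE_SIZE (1UL << 40)              /* reserve 1 TiB of virtual address space */

int main(void)
{
    /* This succeeds even on a machine with a few GiB of RAM: no physical
     * memory is committed yet, only bookkeeping for the mapping. */
    char *table = mmap(NULL, TABLE_SIZE, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS | MAP_NORESERVE, -1, 0);
    if (table == MAP_FAILED) {
        perror("mmap");
        return 1;
    }

    /* Touch two entries nearly a terabyte apart: only two pages (plus their
     * page tables) get faulted in by the kernel's demand paging machinery. */
    table[0] = 1;
    table[TABLE_SIZE - 1] = 1;

    printf("sparse table ready\n");
    return 0;
}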

Note: Victor Michel wrote a great follow up to this post here.

Continue reading

How setjmp and longjmp work (2016)

Pretty recently I learned about setjmp() and longjmp(). They’re a neat pair of libc functions which allow you to save your program’s current execution context and resume it at an arbitrary point in the future (with some caveats1). If you’re wondering why this is particularly useful, to quote the manpage, one of their main use cases is “…for dealing with errors and interrupts encountered in a low-level subroutine of a program.” These functions can be used for more sophisticated error handling than simple error code return values.
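As a taste of what that looks like, here’s a minimal sketch of the error-handling pattern the manpage describes (the function name low_level_routine is made up):

#include <setjmp.h>
#include <stdio.h>

static jmp_buf env;

static void low_level_routine(void)
{
    /* Something goes wrong deep in the call stack... */
    longjmp(env, 42);                       /* resume execution at the setjmp call site */
}

int main(void)
{
    int err = setjmp(env);                  /* returns 0 on the initial call */
    if (err != 0) {
        printf("recovered from error %d\n", err);
        return 1;
    }
    low_level_routine();
    return 0;                               /* never reached */
}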

I was curious how these functions worked, so I decided to take a look at musl libc’s implementation for x86. First, I’ll explain their interfaces and show an example usage program. Next, since this post isn’t aimed at the assembly wizard, I’ll cover some basics of x86 and Linux calling convention to provide some required background knowledge. Lastly, I’ll walk through the source, line by line.

Continue reading