Monthly Archives: December 2023

The difference between computer science and computer engineering

I first wondered about this question as a confused 17-year-old considering options for university; this blog post is for that kid and people like him. It’s my opinion on the matter after working professionally in both fields and earning a degree in one of them (CE).

I think tracing each back to its roots is a good way to gain an intuition for the difference. Computer Science is ultimately derived from Mathematics, while Computer Engineering derives from Electrical Engineering.

The boundary is blurry, but CS is classically more concerned with topics like algorithms & runtime (“Big O”), data structures, and the theory of computation (“What kinds of problems can we solve with computers, what kinds can’t we?”). You’ll do many more formal proofs in CS.

CE concerns itself more with how digital computing systems work in the real world, starting from foundations in electronics (i.e. transistors), to digital logic, building up towards embedded systems, the hardware-software interface, and computer architecture. The latter is where things get blurry and bridge to the “lower levels” of CS.

In practice, most practitioners of either have some awareness of the other, although I suspect (without evidence) that CEs tend to know more about CS than vice versa, on average. However, I also expect that CEs tend to write “worse” code than CSs, on average: “writing good code” and “knowing CS” do not necessarily correlate. 1

So which should one choose?

I can only offer this scattered set of anecdotes and advice:

  • I knew many more CE majors who switched to CS than the other way around. (Mostly people who just wanted to become “programmers” and realized that circuits & electronics are not for them. I’m not sure if they enjoyed the theory courses more, though.)
  • I have heard and personally agree that it’s easier to go “up” the stack than “down”.
  • Don’t give any experience building computers or doing IT support too much weight in the decision. Those topics are neither CE nor CS; they fall into the largely separate field of “IT”. You might be surprised how possible it is to study either CE or CS, work in these areas professionally, and not know how to physically build a computer. That said, building a computer falls closer to CE than CS: in CE you will learn how to design (some of, at a very high level) the Lego pieces you are putting together.
  • Personally I’ve always been foremost curious “how computers actually work” and CE has served me well.

Lastly, what about “software engineering”?

Software engineering is an even newer field, derived from Computer Science, but it describes the actual job of many (if not most) people who study either CS or CE. My view is that a degree in it focuses on the most practical elements of CS, with less (or no) emphasis on the theory. I don’t expect much of CE is covered in it, except for maybe a cursory overview of digital logic and computer architecture. But frankly, I don’t know any software engineering majors and am not qualified to really speak on this.

How did you choose between them?

Not rigorously. My decision was mostly based on the presence of other “engineers” in my life and a friend who studied CE. These were not good rationales for the decision, but I lucked out and had a good outcome anyway. I think I would have been fine either way, and would have naturally gravitated up or down the stack to the point I’m at now. Being in CE did allow me to take some very cool classes in college, like a very modern, practical compilers course using LLVM (unlike the more theory-based compilers course taught in the CS school), which had a direct impact on my ability to get a great job afterward, and a fun embedded systems class.

Memory is an abstraction

I was learning about the different types of RAM recently (e.g. SRAM and DRAM) and it occurred to me that “computer memory” is just an abstraction. This is obvious once you think about it, but I think as a programmer it’s very easy not to realize it. The idea of memory as a linear array of bits is an abstraction created and implemented by an electrical device.

Most programmers when they think of memory are thinking of virtual memory, which is a completely different abstraction. While it’s also a linear array of bits, the abstraction is created by the operating system and lives at a higher level.

One level below, the abstraction that the operating system itself uses (“physical memory”) is the one I’m talking about, created by a set of electrical devices connected to the CPU with wires (the memory bus).

I’m projecting without any basis, but I presume the reason so few programmers think of memory as an abstraction is that the abstraction is so strong. Nearly all of the time, it “just works”: you write bits and read them out later.2 The abstraction can leak slightly in programming disciplines that require awareness of low-level details like the memory hierarchy and cache coherence (e.g. lock-free programming), but that is a leak in the abstraction of the memory hierarchy. The core abstraction of physical memory stays intact; for example, programmers never need to be aware of the internal refresh mechanism of DRAM.3

Of course, you can go infinitely far with this; it’s turtles all the way down.

Sometimes, you just need to be willing

Sometimes, you just need to be willing. No special skills or particular competence necessary; just being willing to do something others can but won’t do.

I learned and have profited from this realization at work in the past year. A certain set of important tasks on my team is somewhat hands-on in nature and not compatible with remote work. The colleague who usually did these tasks left the team. I stepped up and filled the gap.

The work is not particularly difficult; my teammates are smarter than me and could surely do it too. But I’m the only one who was willing and able to sacrifice remote work and commute every day. So I’ve been doing it.

Although it’s not the hardest work, it is nonetheless very important, critical even. So I’ve been getting a lot of credit, brownie points, and even a raise for doing it.

Not because I’m smart โ€” just willing.

Reflections on learning to write

For the last three years, I’ve invested in learning to write. The main reason: strong writing skill is a quality shared by nearly all of the greatest thinkers, leaders, and wise people of history.

How has it gone?

Really well. It’s paying off in many ways, and I only feel more inspired to keep doing it. I find that it gets more fun the more you do it: the more ideas you have, and the easier and faster it is to express them.

It’s paid off in my personal life with my offlinemark project. My writing (and general public creative efforts) has reached some unexpected places and put me in touch with some very interesting people. My most prized accomplishment is being cited in the Linux Kernel security mailing list.

It’s paid off in many ways at work. As a software engineer, I do a lot of writing at work, and I pride myself on having great written communication there. I write a lot of documentation, hold it to a high standard, and people notice. I get points for this because most engineers don’t like writing docs, but I’ve trained myself to enjoy it.

Beyond these, modern life generally involves a good amount of written communication. I’ve found that putting in these reps has simply made this part of life a bit easier.


P.S. Something else I’ve observed is that I find myself having more ideas and being more aware of them. Whether they’re good or insightful is a different matter, but I’d like to believe that over time they’ll get there.

Learning on your own pales compared to spending time with experts

This has become crystal clear to me at my current job. As much as I’ve had success teaching myself things about computers, nothing, nothing, beats spending time with experts and learning from them.

I had never truly experienced this before and was shocked at how much I was able to learn, in so little time, about C++, git, Linux, and more.

It can be a significant price to pay, but nothing beats literally becoming an employee at a company with experts in a field you care about and joining their team. This is ideal from a learning perspective because as a team member, the experts become invested in your growth and are incentivized to transfer knowledge to you. You just need to be ready to receive it.

Once you’ve tasted this, this truth is also bittersweet. You can only work at so many places. If you have many interests, you now have to live with the fact that for most areas, your learning will be stunted relative to what it could be.


My deep gratitude to the experts in my life: STK, DIR, CSK, MKL, RYB, MZA. Thank you.

Back to basics: FIFOs (Named Pipes) and simple IPC logging

FIFOs (“first-in-first-out”s, aka Named Pipes) are a somewhat obscure part of Unix-based operating systems, at least in my personal experience. I’ve been using Linux for ten years and have simply never needed to interact with them much.

But today, I actually thought of a use for them! If you’ve ever had a program with human-readable output on stdout and diagnostics on stderr, and you wanted to observe the two as separate streams of text without intermingled output, then FIFOs might be for you. More on that in a bit.


FIFOs 101

As background, FIFOs are one of the simpler IPC (Interprocess Communication) mechanisms. Like normal (unnamed) pipes, they are a unidirectional channel for data transmission. Unlike normal pipes, they can be used between processes that don’t have a parent-child relationship. This happens via the filesystem: FIFOs exist as filesystem entities that can be operated on with the typical file API (open, close, read, write), also similar to normal pipes.

The simplest experiment you can try with them is to send data between two processes running cat and echo.

First, create the FIFO:

$ mkfifo my_fifo

Then, start “listening” on the FIFO. It will hang waiting to receive something.

$ cat my_fifo

In a separate terminal, “send a message” to the FIFO:

$ echo hello > my_fifo

This should return immediately, and you should see a message come in on the listening side:

$ cat my_fifo
hello
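
The same exchange also works from Python using nothing but the ordinary file API. Here’s a rough sketch (the file name and the reader/writer split are just for illustration). The reader side:

import os

path = 'my_fifo'
if not os.path.exists(path):
    os.mkfifo(path)  # equivalent to running `mkfifo my_fifo`

# open() blocks until a writer opens the other end
with open(path, 'r') as fifo:
    print(fifo.read(), end='')  # returns once the writer closes its end

And the writer side, run from a second terminal:

with open('my_fifo', 'w') as fifo:
    fifo.write('hello\n')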

FIFO-based logging, v1

Today I was working on a long-running program that produces a lot of human-readable terminal output as it runs. There are some warning conditions in the code that I wanted to monitor, but not stop execution for.

I could print to stdout when the warning conditions occurred, but they’d get lost in the sea of terminal output that normally comes from the program. You can also imagine situations where a program is writing structured data to stdout, and you wouldn’t want stray log text to be interspersed with the output.

The typical solution is to write the logs to stderr, which separates the two data streams, but as an exercise, we can try writing the logs to a FIFO, which can then be monitored by a separate process in another terminal pane. This lets us run our program with its normal output, while also monitoring the warning logs when they come.

In Python, it would look like this (assuming the FIFO is created manually beforehand):

import time

# Assumes the FIFO already exists (e.g. created with `mkfifo logging_fifo`).
# This open() blocks until a reader opens the other end.
fifo = open('logging_fifo', 'w')

def log(msg):
    fifo.write(msg + '\n')
    fifo.flush()  # push the message to the reader immediately

i = 0
while True:
    print(i)  # normal program output on stdout
    i += 1

    if i % 5 == 0:
        log(f'reached a multiple of 5: {i}')

    time.sleep(.1)

In a separate pane, we can monitor for logs with a simple:

$ while true; do cat logging_fifo ; done

In all, it looks like this:

~/c/2/fifoplay ❯ python3 writer.py
0
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
$ while true; do cat logging_fifo ; done
reached a multiple of 5: 5
reached a multiple of 5: 10
reached a multiple of 5: 15
reached a multiple of 5: 20
reached a multiple of 5: 25

Note that the listener process must be running or else the writer will block!
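
If that blocking behavior is a problem, one workaround (a sketch of my own, not something the program above does) is to open the FIFO in non-blocking mode and simply skip logging while no listener is attached:

import errno
import os

def try_open_log_fifo(path):
    # With O_NONBLOCK, opening a FIFO for writing fails with ENXIO
    # instead of blocking when no process has it open for reading.
    try:
        fd = os.open(path, os.O_WRONLY | os.O_NONBLOCK)
    except OSError as e:
        if e.errno == errno.ENXIO:
            return None  # no listener yet; drop the message or retry later
        raise
    return os.fdopen(fd, 'w')

The writer can check for None before each log call and try again on the next one, so it never stalls waiting for a reader.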

FIFO-based logging, v2

An even more elegant and UNIX-y way to do this would be to still write to stderr in our writer application, but use shell redirection to send stderr to the FIFO.

import time
import sys

i = 0
while True:
    print(i)  # normal output on stdout
    i += 1

    if i % 5 == 0:
        # warnings go to stderr; the shell redirects stderr to the FIFO
        print(f'reached a multiple of 5: {i}', file=sys.stderr)

    time.sleep(.1)

Which you then invoke as:

$ python3 writer2.py 2> logging_fifo

This yields the same result; it’s just more semantically correct.

Why you feel stupider than ever, despite being smarter than ever

I’m 10 years into my tech career and yet I constantly feel so, so stupid.

It’s entirely self-originating, not because others are mean to me. (OK, maybe it’s partly also because I’m lucky to be around smart people.)

Why do I feel this way, despite mountains of evidence to the contrary (that I’m not stupid, and that I know a thing or two about what I’m talking about)? Is my self-esteem really that bad?

I think it’s also because I’ve been learning so much. The more I learn, the more I realize I don’t know. And I posit that the weight of a known unknown feels disproportionately heavy compared to the relief of converting it into a known known.

So, the equation looks like:

All Knowns = Known Knowns + Known Unknowns

Perception Of Intelligence = Known Knowns / All Knowns

(Smaller = Less Intelligence = More Stupid)

This makes sense to me. I think I accumulate known unknowns at 10x the rate of known knowns; it takes so much more energy to learn something than to merely learn of the existence of something.

WIP: Asking for help in the age of AI

Before AI, if you asked someone a question that you could have just as well Googled, you might have gotten a link to letmegooglethat.com.

Sometimes this is fair; question askers do have some responsibility to not ask ultra-common questions which are easily answerable on the internet. But if the answer is not so easily found, it’s valid to ask.

AI changes things because it quickly gives answers to nearly any question. So it vastly reduces the number of “valid” opportunities to ask. Almost everything gets reduced to something that could easily be “AI’d”. That’s especially true if community-specific AIs exist (e.g. trained on your codebase and team wiki at work). Follow this far enough and one could imagine a dystopian future where no knowledge transfer happens human-to-human anymore.

What this robotic understanding misses, and a reason I dislike being excessively strict about lmgtfy’ing people, is that asking questions is a great way to build relationships and community. Communities (especially online ones) need activity to survive and good questions are a great source of activity. Questions often also spark valuable conversations adjacent to the original topic, and those would be missed out on if everyone strictly asked only the AI everything that is answerable by the AI.

Ultimately, I view this change as another step down the existing path of information becoming more readily accessible (like the internet before it, and books before that), rather than something fundamentally different from it.

We’ll use it to improve efficiency in the same way using internet search improved efficiency, but I doubt we’ll fully stop asking each other questions. Beyond the practical reasons (interesting adjacent conversations), relationship building via human-to-human knowledge sharing is too innate to our humanity and we’ll notice its absence.

Everyone’s career sounds more impressive than yours

This is something I’ve observed ever since I started working. Everyone else’s job always sounds way more interesting, technical, or impactful than mine.

For example, when I worked at Trail of Bits, I spent years working on a research project for automatically finding crashes in computer programs using a technique called symbolic execution. Sounds hard! Oh, and the technique is designed for programs where you don’t have source code (sounds extra hard!).

In many ways the job was cool, hard, and technical, but what was really uninspiring was that (1) the tool wasn’t actually that effective and (2) we didn’t have any users (probably because of #1). As my colleague Yan once joked, it sometimes felt like we were working on a “Fisher-Price Symbolic Executor”. I often felt like an imposter “playing” software engineer/researcher.

When I visited my old colleagues at Drift, who were working on a web product with thousands of users, I felt inferior. It seemed like they were all working on such impactful and real-world problems. Maybe it wasn’t quite as “novel” as what I was doing, but it sounded pretty technical and not easy to me!

However, the way they introduced me to other colleagues who didn’t know me was “This is Mark, he’s moved on and now works on the real hard stuff.” I can only speculate that they felt the same thing that I was feeling. From their perspective, I was working on a super hard, technical, sexy research problem, unlike their boring, run-of-the-mill web app.

This has happened more times than I can count in all sorts of settings, especially conferences, where you’re meeting a lot of strangers who all seem to be smarter than you and do very impressive things at highly functional organizations. (Unlike stupid me, doing unimportant stuff at the dysfunctional dumpster fire of a company I work for!)

The point is that familiarity breeds contempt. Being so close to your work dulls the good parts and makes the bad parts stand out in sharp relief. Chances are, if you joined their team or organization, you wouldn’t find it as rosy as you picture it now.