Back to basics: FIFOs (Named Pipes) and simple IPC logging

FIFOs (“first-in, first-out”, aka named pipes) are a somewhat obscure part of Unix-based operating systems, at least in my personal experience. I’ve been using Linux for ten years and have simply never needed to interact with them much.

But today, I actually thought of a use for them! If you’ve ever had a program with human-readable output on stdout and diagnostics on stderr, and you wanted to observe the two as separate streams of text, without intermingled output, then FIFOs might be for you. More on that in a bit.

YouTube version of this blog post:

FIFOs 101

As background, FIFOs are one of the simpler IPC (interprocess communication) mechanisms. Like normal (unnamed) pipes, they are a unidirectional channel for data transmission. Unlike normal pipes, they can be used between processes that don’t have a parent-child relationship. This works via the filesystem: FIFOs exist as filesystem entities that can be operated on with the typical file API (open, close, read, write), again much like normal pipes.

The simplest experiment you can try with them is to send data between two processes running cat and echo.

First, create the FIFO:

$ mkfifo my_fifo

Then, start “listening” on the FIFO. It will hang waiting to receive something.

$ cat my_fifo

In a separate terminal, “send a message” to the FIFO:

$ echo hello > my_fifo

This should return immediately, and you should see a message come in on the listening side:

$ cat my_fifo
hello
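
The same experiment can be driven from Python through that ordinary file API. Here’s a minimal sketch (the fifo_demo.py name and the read/write argument handling are just for illustration): run python3 fifo_demo.py read in one terminal, then python3 fifo_demo.py write in another.

import os
import sys

PATH = 'my_fifo'

def read_side():
    # open() blocks until some process opens the FIFO for writing
    with open(PATH, 'r') as fifo:
        for line in fifo:
            print('received:', line.rstrip())

def write_side():
    # likewise, this blocks until a reader has the FIFO open
    with open(PATH, 'w') as fifo:
        fifo.write('hello\n')

if __name__ == '__main__':
    if not os.path.exists(PATH):
        os.mkfifo(PATH)  # same effect as running: mkfifo my_fifo
    if len(sys.argv) > 1 and sys.argv[1] == 'read':
        read_side()
    else:
        write_side()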

FIFO-based logging, v1

Today I was working on a long-running program that produces a lot of human-readable terminal output as it runs. There are some warning conditions in the code that I wanted to monitor, but not stop execution for.

I could print to stdout when the warning conditions occurred, but they’d get lost in the sea of terminal output that normally comes from the program. You can also imagine situations where a program is writing structured data to stdout, and you wouldn’t want stray log text interspersed with that output.

The typical solution is to write the logs to stderr, which separates the two data streams, but as an exercise we can instead write the logs to a FIFO, which a separate process in another terminal pane can then monitor. This lets us run our program with its normal output while also watching the warning logs as they arrive.

In Python, it would look like this (assuming the FIFO has already been created manually with mkfifo):

import time

# Note: opening a FIFO for writing blocks until a reader opens the other end.
fifo = open('logging_fifo', 'w')

def log(msg):
    fifo.write(msg + '\n')
    fifo.flush()  # flush so each log line is delivered immediately

i = 0
while True:
    print(i)
    i += 1

    if i % 5 == 0:
        log(f'reached a multiple of 5: {i}')

    time.sleep(.1)

In a separate pane, we can monitor for logs with a simple:

$ while true; do cat logging_fifo ; done

In all, it looks like this:

~/c/2/fifoplay ❯ python3 writer.py
0
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
$ while true; do cat logging_fifo ; done
reached a multiple of 5: 5
reached a multiple of 5: 10
reached a multiple of 5: 15
reached a multiple of 5: 20
reached a multiple of 5: 25
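
For completeness, the shell loop can be mirrored in Python. The sketch below (a hypothetical watch_logs.py, assuming the same logging_fifo name) also shows why the outer loop exists: each open() blocks until writer.py attaches, the inner loop ends when the writer closes its end of the FIFO, and the outer loop then reopens it.

# watch_logs.py -- Python equivalent of: while true; do cat logging_fifo ; done
while True:
    # blocks here until a writer opens the FIFO
    with open('logging_fifo', 'r') as fifo:
        for line in fifo:  # ends at EOF, i.e. when the writer closes its end
            print(line, end='')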

Note that the listener process must be running or else the writer will block!
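
If that blocking behavior is a problem, one workaround (a sketch I’m adding here, not part of the writer above) is to open the FIFO with O_NONBLOCK. POSIX makes such an open fail with ENXIO when no reader is attached, so the writer can detect the situation and fall back to dropping the logs or sending them elsewhere:

import errno
import os

def open_fifo_nonblocking(path):
    # Try to open the FIFO for writing without blocking. Returns a file
    # object, or None if no reader currently has the FIFO open (ENXIO).
    try:
        fd = os.open(path, os.O_WRONLY | os.O_NONBLOCK)
    except OSError as e:
        if e.errno == errno.ENXIO:
            return None
        raise
    return os.fdopen(fd, 'w')

fifo = open_fifo_nonblocking('logging_fifo')
if fifo is None:
    print('no listener attached; warnings will not be logged')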

FIFO-based logging, v2

An even more elegant and UNIX-y way to do this would be to keep writing to stderr in our writer application, but use shell redirection to send stderr to the FIFO.

import time
import sys

i = 0
while True:
    print(i)
    i += 1

    if i % 5 == 0:
        # warnings go to stderr; the shell decides where that stream ends up
        print(f'reached a multiple of 5: {i}', file=sys.stderr)

    time.sleep(.1)

Which you then invoke as:

$ python3 writer2.py 2> logging_fifo

This yields the same result, but is more semantically correct: the program itself just writes to stderr, and the decision to route that stream to a FIFO (or a file, or the terminal) is left entirely to the shell.

Any thoughts?