Chris Penner recently wrote a blog post titled Beating C with 80 lines of Haskell, where he showed how to optimise and parallelise a Haskell implementation of
wc to outperform the implementation bundled with macOS. This made a lot of people on the Internet angry, as they felt that
wc is already fast enough (it is), is often applied on streams whereas Chris Penner’s implementation requires an input file (probably), and that the optimised Haskell code is unreasonably complicated (arguably). However, I really enjoyed the explanation of how to parallelise a seemingly-sequential problem via a clever monoid.
Not long after, Matthew Maycock wrote Beating C with Dyalog APL, where he implemented
wc in APL. In a degenerate way, Futhark can be seen as the bastard offspring of Haskell and APL, so I obviously have to get involved as well. In particular, I want to show that it is straightforward to write highly efficient parallel code using the same high-level principles as we would in Haskell. The full source code is available on GitHub.
Now, to pre-appease angry Internet people, let me make things clear from the start: Futhark is a decent language for counting words, but a bad language for implementing
wc specifically. In particular, Futhark is a pure language, and not in the way that Haskell is “pure” but still seems to find room for highly sophisticated and efficient IO mechanisms. No, in Futhark, you cannot read a file or print to the screen, which makes it rather tricky to implement a program like
wc that is entirely about reading from a file and printing to the screen! What I will show is how to implement the core word-counting logic in Futhark (using the same approach as for Haskell), and how to call it from a wrapper program written in C that takes care of the IO.
The core of the word counting logic is going to be taken directly from Chris Penner’s Haskell implementation. I won’t go through it in detail, but highly recommend reading his original article. Like Haskell, Futhark is a functional language, so the translation is mostly straightforward.
To start out, we define two sum types that encode a kind of state machine for the word counting:
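A sketch of what those definitions might look like, reconstructed from the description (the names char_type and flux follow the surrounding text; the i32 word-count payload is an assumption):

```futhark
-- Whether a character is whitespace or not.
type char_type = #space | #not_space

-- A flux describes a chunk of text by its left edge, the number of
-- complete words it contains, and its right edge -- or is #unknown,
-- which serves as the neutral element of the monoid.
type flux = #flux char_type i32 char_type
          | #unknown
```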
We then define an associative function for combining two
flux values, along with a neutral element. This means that
flux is a monoid. In Haskell, this is done by implementing the
Monoid type class. Futhark does not have type classes, so we just write the definitions like any other:
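A sketch of the combining function and neutral element (the names flux_mappend and flux_mempty are illustrative, echoing Haskell's mappend and mempty):

```futhark
-- Associative combining function: when non-space characters meet at
-- the boundary between two chunks, their two partial words are really
-- one word, so we subtract one from the sum.
let flux_mappend (x: flux) (y: flux) : flux =
  match (x, y)
  case (#unknown, _) -> y
  case (_, #unknown) -> x
  case (#flux l n #not_space, #flux #not_space m r) -> #flux l (n + m - 1) r
  case (#flux l n _,          #flux _          m r) -> #flux l (n + m) r

-- The neutral element.
let flux_mempty : flux = #unknown
```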
Now we need a function for mapping characters to
flux values. Futhark is generally not a good language for string processing (in fact, it has neither a string nor a character type), so we are going to restrict ourselves to ASCII, and represent characters as bytes. First, a function for determining whether something is a whitespace character:
This function recognises newlines, tabs, and spaces as whitespace. We can now define the mapping from a single character to a flux value:
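A sketch of that mapping (the name flux_of_char is illustrative):

```futhark
-- A whitespace character contains no words and has space on both
-- edges; any other single character is one word with non-space edges.
let flux_of_char (c: u8) : flux =
  if is_space c
  then #flux #space 0 #space
  else #flux #not_space 1 #not_space
```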
Next we define the
counts type, which is a record tracking the number of characters, words, and lines we have seen so far.
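The record type itself is small — a sketch:

```futhark
-- Characters seen, word-counting state, and newlines seen.
type counts = { chars: i32, words: flux, lines: i32 }
```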
And we also define a combining function and neutral element for counts values:
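A sketch, assuming the flux combining function and neutral element from before are called flux_mappend and flux_mempty (illustrative names):

```futhark
-- Combine counts pointwise; the words field uses the flux monoid.
let counts_mappend (x: counts) (y: counts) : counts =
  { chars = x.chars + y.chars,
    words = flux_mappend x.words y.words,
    lines = x.lines + y.lines }

let counts_mempty : counts =
  { chars = 0, words = flux_mempty, lines = 0 }
```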
And given a single character on its own, its counts value is:
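A sketch, assuming the character-to-flux function is called flux_of_char (illustrative name):

```futhark
-- The counts contribution of one character: one character, its flux,
-- and one line if it is a newline (ASCII 10).
let count_char (c: u8) : counts =
  { chars = 1,
    words = flux_of_char c,
    lines = if c == 10 then 1 else 0 }
```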
Finally we can put together the pieces and create a function for counting the number of characters, words, and lines in a “string” (modelled as an array of bytes), with a map-reduce composition just as in Haskell:
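A sketch of the entry point, using the illustrative names assumed above (count_char, counts_mappend, counts_mempty), plus a small helper for extracting the word count from a flux value:

```futhark
-- Extract the word count from a flux value.
let num_words (f: flux) : i32 =
  match f
  case #unknown -> 0
  case #flux _ n _ -> n

-- Map every byte to its counts, reduce with the counts monoid, and
-- flatten the record into a triple of (lines, words, characters).
entry wc (s: []u8) : (i32, i32, i32) =
  s |> map count_char
    |> reduce counts_mappend counts_mempty
    |> (\c -> (c.lines, num_words c.words, c.chars))
```

Because counts_mappend is associative and counts_mempty is its neutral element, the reduce can be evaluated in any grouping — which is exactly what makes it parallelisable.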
Performance depends crucially on the compiler performing fusion to avoid constructing a large intermediate array as the result of the
map. Fortunately, the Futhark compiler is very good at fusion.
The wc function is defined with
entry rather than
let because we want it to be callable from the outside world. When we compile this program, only
entry functions will be visible in the generated API. The final lambda simply transforms the
counts record, which for technical reasons would be opaque to the outside world, into a simple triple of integers.
We can now compile libwc.fut into a C library containing (for now) ordinary sequential C code:
```
$ futhark c --library libwc.fut
```
This produces two files:
libwc.h and libwc.c, with the former defining the interface to the latter. Futhark’s C API is a bit verbose, but fundamentally simple. First, we initialise our generated library by creating a context:
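Along these lines (a sketch; error handling omitted):

```c
#include "libwc.h"

// Create a configuration object and then the context itself.  For the
// sequential C backend this is cheap; for the GPU backends this is
// where the expensive driver initialisation happens.
struct futhark_context_config *cfg = futhark_context_config_new();
struct futhark_context *ctx = futhark_context_new(cfg);
```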
libwc.h declares a function
futhark_entry_wc() that corresponds to our
wc function. It has the following type:
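Roughly the following, following Futhark's convention of listing output parameters before inputs (the exact parameter names are generated):

```c
// One out-parameter per component of the returned triple,
// then the input array.
int futhark_entry_wc(struct futhark_context *ctx,
                     int32_t *out0, int32_t *out1, int32_t *out2,
                     const struct futhark_u8_1d *in0);
```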
So, the argument cannot just be any old C array, it has to be a specific
futhark_u8_1d. There is a function
futhark_new_u8_1d() for creating these:
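Its signature looks roughly like this:

```c
// Construct a one-dimensional u8 array of dim0 elements by copying
// from the given host pointer.
struct futhark_u8_1d *futhark_new_u8_1d(struct futhark_context *ctx,
                                        const uint8_t *data, int64_t dim0);
```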
Creating such an array always involves a copy, because Futhark wants to manage its own memory. To avoid copying the input file contents more than once, we use
mmap() on the open file and pass the resulting pointer to Futhark. The entire procedure looks like this:
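A sketch of the POSIX side of that procedure, split out as a helper so the mapping logic is self-contained (error handling omitted for brevity; map_file is an illustrative name):

```c
#include <fcntl.h>
#include <stdint.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

// Map a whole file into memory read-only, returning a pointer to its
// contents and storing the file size in *size_out.
static uint8_t *map_file(const char *path, int64_t *size_out) {
  int fd = open(path, O_RDONLY);
  struct stat st;
  fstat(fd, &st);
  void *data = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
  close(fd);  // The mapping stays valid after closing the descriptor.
  *size_out = st.st_size;
  return data;
}

// The mapped bytes can then be handed straight to Futhark, which
// performs the single unavoidable copy itself:
//
//   int64_t size;
//   uint8_t *data = map_file(path, &size);
//   struct futhark_u8_1d *input = futhark_new_u8_1d(ctx, data, size);
```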
We can then call the Futhark entry point:
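A sketch, assuming input is the futhark_u8_1d array created above:

```c
// Outputs are written through the three pointers; the call may be
// asynchronous on GPU backends, so synchronise before reading them.
int32_t lines, words, chars;
futhark_entry_wc(ctx, &lines, &words, &chars, input);
futhark_context_sync(ctx);
```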
Print the result:
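A sketch, assuming path holds the file name argument:

```c
// Same output order as wc: lines, words, characters, file name.
printf("%d %d %d %s\n", lines, words, chars, path);
```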
Putting all of this (plus some boilerplate and cleanup) in
wc.c, we can compile with:
```
$ gcc wc.c libwc.c -o wc -O3 -lm
```
And that’s it! So, how fast is it? I’ll be testing on a 100MiB file
huge.txt that is merely
big.txt from the original post repeated several times. First, let us check out GNU
wc 8.22 (using the C locale so
wc can also assume ASCII):
Now for our Futhark wc:
Not bad! It actually runs faster than GNU
wc. I ran both programs a few times and took the fastest runtime for each. And note that this is without any parallelism or low-level optimisations! I am of course hugely biased, but I think the Futhark program is an easier read than the optimised Haskell program.
But really, Futhark is a parallel language, and generating sequential C code is not what it’s for. So how do we make this program run in parallel on my employers’ $1000 RTX 2080 Ti GPU? We simply recompile using
futhark opencl instead of futhark c:
```
$ futhark opencl --library libwc.fut
$ gcc wc.c libwc.c -o wc -O3 -lm -lOpenCL
```
Alright, let’s check out the performance:
Well, it’s better, but not really by that much. There are two possible reasons:
- Word counting is primarily IO-bound, and it is much too expensive to ferry the file contents all the way to the GPU over the (relatively) slow PCI Express bus just to do a relatively meagre amount of computation.
- GPU initialisation (hidden inside the
futhark_context_new() call) takes a nontrivial amount of time, as it may involve JIT compilation of GPU kernels and other bookkeeping operations by the GPU driver.
On this machine, for this problem, reason (2) is the more significant. If we augment
wc.c with a
-t option that makes it perform its own internal timing, excluding the context initialisation (but including copying the entire file to the GPU), we get this:
```
$ ./wc-opencl -t huge.txt
2055312 17531120 103818656 huge.txt
runtime: 0.070s
```
Much faster! Apparently context initialisation has a fixed cost of about 230 milliseconds on this machine. This is relatively fast - I have seen multiple seconds on other systems. This is the main reason why Futhark is a bad choice for
wc, or other kinds of very short-running processes - you really do not want to pay this startup cost unless it can be amortised by a significant amount of subsequent computation.
Is the Futhark code as efficiently written as possible? I think it’s close, but I know that the
char_type values will be stored as an entire byte each, despite only encoding a single bit of information. This does not matter on the CPU, but on the GPU, this storage comes out of the fairly sparse on-chip scratchpad memory. I have not measured the impact, but a more compact encoding might improve performance slightly. However, I generally believe that such representation-level optimisations are the job of the compiler.
In conclusion, I’m actually surprised that Futhark manages to out-compete GNU
wc at all - I would have thought that the overhead of copying the file to the GPU would offset the faster computation. Most likely, GNU
wc does not have any special optimisations for the case of word-counting large ASCII files, as it is already more than fast enough.
Since I still don’t believe Futhark is a good choice for implementing
wc, I think the main takeaway here is that data-parallelisation techniques developed for other languages (e.g. Haskell) can be transferred to Futhark with good results.