Book review: "C++ Concurrency in Action" by Anthony Williams



Anthony Williams is a member of the C++ standards committee and the author of Just::Thread, one of the earliest implementations of a standards-conforming C++11 threading library. This book is an overview of concurrent and parallel programming with the new C++11 threading features. It's both a tutorial and a reference, with a large chunk dedicated to a detailed, encyclopedic listing of all the C++11 threading-related objects and their methods (I'm not sure how useful this is in 2016, when all these references are already online, but it was certainly more relevant in early 2012, when the book was initially published).

The book is very comprehensive. It not only covers the C++11 threading and concurrency features thoroughly, but also discusses general parallelism topics like concurrent data structures (including lock-free variants), thread pools and work stealing. As such, it's not light reading, and it's definitely a book you go back to after finishing it to re-read some of the more complex topics.

On the critical side, the book's age already shows. I imagine the author didn't have access to fully conformant compilers when he was initially writing it, so many C++11 features go unused where they would help: range-based for loops, reasonable uses of auto, even writing the closing >> of nested templates without whitespace in between. Instead, there are occasional uses of Boost. All of this is forgivable given the book's publication date, but a bit unfortunate in a book specifically dealing with the C++11 standard.
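
To illustrate, here's a small sketch of my own (not from the book) of the C++11 style the book mostly avoids - range-based for loops, auto, and nested template brackets closed as >> without the old whitespace workaround:

#include <iostream>
#include <map>
#include <string>
#include <vector>

int main() {
  // C++03 would have required "std::vector<int> >" with a space here.
  std::map<std::string, std::vector<int>> m = {{"evens", {2, 4}},
                                               {"odds", {1, 3}}};
  // Range-based for and auto replace verbose iterator declarations.
  for (const auto& kv : m) {
    std::cout << kv.first << ":";
    for (int n : kv.second) {
      std::cout << " " << n;
    }
    std::cout << "\n";
  }
  return 0;
}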

Other random bits of criticism:

  • The analogies the author uses are weird and often unhelpful. The book is clearly aimed at seasoned programmers, so the dumbing down could have been dropped.
  • The quality of the diagrams is inconsistent - some are nice, others are ugly.
  • The explanation of memory ordering semantics wasn't amazing, IMHO. I realize it's a devilishly complex topic to explain, but I feel it's important to mention in case someone wants to get this book solely to understand memory ordering (for a taste of the topic, see the sketch right after this list).
  • The code samples live in a .zip file you can download, and they are sometimes slightly different from the listings in the book; I found several occasions where they don't compile. Unfortunately, emails sent to the author about these issues went unanswered.
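
For the curious, here's a minimal sketch of my own (not taken from the book) showing what memory ordering is about - a release/acquire pair used to publish data from one thread to another:

#include <atomic>
#include <cassert>
#include <thread>

std::atomic<bool> ready{false};
int data = 0;

void writer() {
  data = 42;                                    // plain write
  ready.store(true, std::memory_order_release); // publish it
}

void reader() {
  while (!ready.load(std::memory_order_acquire)) {
    // spin until the writer publishes
  }
  // The acquire load synchronizes with the release store, so the
  // write to data is guaranteed to be visible here.
  assert(data == 42);
}

int main() {
  std::thread t1(writer);
  std::thread t2(reader);
  t1.join();
  t2.join();
  return 0;
}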

Overall, I liked the book. It's not perfect, but it's the best we currently have for advanced concurrency and parallelism with modern C++. The book is hard to fully digest in a single reading, not least because you're unlikely to need everything it covers right away. I expect it to be useful in the future, when I need to refresh specific topics.


Book review: "Structured Parallel Programming" by M. McCool, J. Reinders, A. Robinson



The authors, all senior software engineers at Intel (an important factor I'll come back to later), attempt to come up with a "pattern language" for parallel programming, similar to what the Gang of Four's Design Patterns did for OOP. This is a daring attempt, and while the book certainly has some good things going for it, I think the end result is fairly mediocre.

First, what I liked about the book:

  • The introductory part (the first two chapters) is well written and paced just right to serve as a good introduction to the topic - not too wordy, not too terse.
  • The list of patterns is comprehensive and definitely provides a good starting point for a common language programmers can use to talk about parallel programming. Folks experienced with parallel programming throw around terms like "map", "scatter-gather" and "stencil" all the time; if you want to know what they're talking about, this book covers it well (for a taste of the simplest pattern, see the sketch after this list).
  • At least some of the examples are interesting and insightful. The first example (a seismic simulation) is particularly enlightening in its use of space-time tiling; this is the kind of topic I wish the book had spent more time on.
  • The formatting is good: diagrams are well executed and instructive, and the many code samples use consistent C++ style and are comprehensive.
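
To give a taste of the simplest of these patterns, here's a minimal hand-rolled sketch of "map" using plain C++11 threads (my own illustration; the book expresses its patterns with TBB, Cilk Plus and friends): apply a function independently to every element, with the elements split across a fixed number of worker threads.

#include <algorithm>
#include <cstddef>
#include <thread>
#include <vector>

template <typename T, typename F>
void parallel_map(std::vector<T>& data, F f, std::size_t num_threads = 4) {
  std::vector<std::thread> workers;
  // Split the data into contiguous chunks, one per worker.
  std::size_t chunk = (data.size() + num_threads - 1) / num_threads;
  for (std::size_t t = 0; t < num_threads; ++t) {
    std::size_t begin = t * chunk;
    std::size_t end = std::min(begin + chunk, data.size());
    if (begin >= end) {
      break;
    }
    workers.emplace_back([&data, f, begin, end] {
      for (std::size_t i = begin; i < end; ++i) {
        data[i] = f(data[i]);
      }
    });
  }
  for (auto& w : workers) {
    w.join();
  }
}

int main() {
  std::vector<int> v(1000, 1);
  parallel_map(v, [](int x) { return x * 2; }); // every element becomes 2
  return 0;
}

Since each element is processed independently, no synchronization beyond the final join is needed - which is exactly what makes "map" the easiest pattern to parallelize.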

Now, what I didn't like:

  • First and foremost, and this is a criticism that permeates this review: the book is thinly veiled marketing material for Intel. Except for one OpenCL example, the authors use only OpenMP, TBB, ArBB and Cilk Plus to demonstrate the patterns - mostly Intel-specific technologies, optimized primarily for Intel CPUs. If you care about hardware other than Intel CPUs, or are using a different programming model/library, you're out of luck.
  • The book takes a very narrow view of parallel hardware, which is IMHO unforgivable for something written in 2012. For many parallel workloads these days, GPUs offer a much better performance/cost alternative to CPUs. Naturally, you won't have Intel engineers admit this in their book. GPUs are mentioned only very briefly in the introduction, and then only to shame them for being less flexible than CPUs - despite the fact that many of the patterns presented in the book are perfectly suited to GPUs. The authors do mention Intel's MIC, of course. But MIC, four years after the book was written, still very much looks like a fringe technology, inferior to Nvidia's server-class GPUs for number crunching.
  • The book also takes a very narrow view of software. Some of the Intel technologies it presents are already defunct - ArBB, for example. Others, like Cilk Plus, are so esoteric and rarely used that Intel seems to be the only one not to realize it. TBB is probably the most reasonable of the technologies presented, since it's an open-source library. If the book had used plain threads first and then shown what TBB brings to the table, it would have been significantly more useful, IMHO.
  • The actual "meat" of the book is extremely short. After the introduction, less than 200 pages are spent listing the patterns, and maybe half of that is dedicated to discussing the idiosyncrasies of the specific Intel technologies the authors use to implement them.
  • Distributed computing is completely neglected - only shared-memory models are discussed. If you want to split a task across multiple machines that don't share memory, this book won't help you much (except for a brief mention of map-reduce towards the very end).

Overall, I won't say this is a bad book - there's certainly useful information in it, and it's well written. But it's also far from being a great book. Maybe if all your parallelism needs are confined to a single multi-core Intel CPU and you're happy to use one of the Intel technologies it covers, the book can serve you well. Another audience that can get more out of the book is relative beginners who have had only basic exposure to parallel programming - for them, the patterns are truly useful to know about.

I'll be happy to hear suggestions for great books on parallel programming. Given how important the topic is these days, it's surprising to me that I can't find a single book that's universally recommended.


Persistent history in Bash - redux



A couple of years ago I wrote about saving all the commands I ever ran in the terminal into a "persistent history" file, for later lookup. Since some people asked me whether this ended up being worthwhile, here's a short redux.

The TL;DR version: keeping persistent history has been one of the best productivity hacks I've ever put to use; I rely on it daily and would be much less productive without it.

Before doing this, the only way I had to remember which commands/flags are needed to run something was to write them down in all kinds of notes files, personal wikis and so on. That was cumbersome, disorganized and time-consuming. With the .persistent_history file automatically populated by Bash from any terminal I'm typing into, and kept in a Git repository for safekeeping, I have quick access to any command I ever ran. It's a life saver for someone who spends as much time in the terminal as I do. I warmly recommend it, or some equivalent approach, to anyone who uses Linux daily.

Interestingly, at the time of the original post I was worried that the file would eventually grow too long and have to be trimmed. That turned out to be a completely needless worry. After over two years of use at work, my .persistent_history is somewhat over 6 MB, with ~60000 lines [1]. Appending to it and searching within it take a negligible amount of time (15 milliseconds for a full search is the most I was able to measure). It doesn't even matter whether you have an SSD or a hard drive as your main storage device; since the file is continuously written to, it's almost certainly paged into memory most of the time anyway.

In the original post I also included a histogram of the 10 most commonly used commands on my home machine (used for hobby hacking), so it's interesting to revisit it. Here's the histogram for the past year:

git          : 1564
ls           : 861
gs           : 669
cd           : 546
vi           : 543
make         : 538
ll           : 388
pssc         : 379
PYTHONPATH=. : 337
python       : 286

As the original post foresaw, the impending switch from Mercurial to Git for my personal projects, along with spending much less time on CPython core development, has pushed hg to the fringes, and git is now certainly my most used command (gs is my alias for git status). python should rank higher than it appears, since the commands counted under PYTHONPATH=. are python invocations as well. The rest is a fairly expected bunch for a terminal hermit. pssc is one of the aliases I use for pss, which is why you don't see grep or find in the list.

I placed the Bash code enabling persistent history, along with the Python script I used to compute the command usage histogram shown above, on Github.


[1] In reality there were likely many more commands, but the script does some amount of de-duplication: it won't record a command if it's exactly the same as the last one written. For example, if you spend the whole day hacking in an editor and rerunning python foo.py every couple of minutes, the only commands recorded in the history will be opening the editor and a single instance of python foo.py.