Nonblocking concurrent data structures are an increasingly valuable tool for shared-memory parallel programming. By ensuring that no reachable state precludes forward progress by any given thread, nonblocking structures avoid performance anomalies in the presence of preemption, and can in principle tolerate thread failures. They can also outperform lock-based alternatives in several important cases, and are competitive in others. They are, however, quite difficult to write — due, among other things, to inherent data races, interactions with the language and hardware memory model, and the need for concurrent memory management.
This course will briefly survey background material on hardware primitives and memory models, together with formal notions of safety, liveness, and proof techniques. It will then explore nonblocking versions of important data structures, including stacks, queues, linked lists, hash tables, skip lists, and search trees. In the process, it will introduce appropriate memory management techniques. To the extent that time permits, it will also point to work on several more advanced topics, including condition synchronization (partial methods), combining and flat combining, universal constructions, nonblocking software transactional memory, and RCU.
Presentation slides and exercises