Transactional Memory
Larus, Kozyrakis
memory performance parallel programming languages concurrency multicore
@article{larus:cacm2008,
title="Transactional Memory",
author="James Larus and Christos Kozyrakis",
journal="Communications of the ACM",
volume="51",
number="7",
month="July",
year="2008",
pages="80--88"
}
Programming paradigms are shifting from sequential processing to parallel processing
- Rapid growth in sequential performance is coming to an end
- Moore's Law still holds, but limits created by power dissipation, diminishing returns on deeper pipelining and other complex architectural features, and other factors have curtailed the benefits of building ever-larger single processors
- Now, the law applies to putting more processors on each chip
- But that requires widespread parallelism to be really effective
Parallel programming as currently practiced is difficult, error prone, and generally fails to meet performance goals
- Low-level threading, locks, and semaphores are just too difficult to use correctly and well
Transactional memory borrows from database notion of a transaction
- Ensures atomicity and isolation, but generally not consistency or durability
- Transactions are blocks of code that, much like database transactions, are rolled back and retried if a conflict is detected, such as another transaction writing to the same objects or parts of objects (see the sketch below)
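As a concrete illustration, here is a minimal sketch using Haskell's Control.Concurrent.STM library; the bank-account example and the choice of library are mine, not the article's. atomically runs the block as one transaction, and the runtime discards and re-runs it if another transaction commits a conflicting write to the same TVars.

    import Control.Concurrent.STM

    -- Move money between two shared accounts as a single transaction.
    -- If another thread commits a conflicting write to either TVar
    -- while this runs, the STM runtime rolls the attempt back and retries.
    transfer :: TVar Int -> TVar Int -> Int -> STM ()
    transfer from to amount = do
      balance <- readTVar from
      writeTVar from (balance - amount)
      modifyTVar' to (+ amount)

    main :: IO ()
    main = do
      a <- newTVarIO 100
      b <- newTVarIO 0
      atomically (transfer a b 25)
      readTVarIO a >>= print   -- 75
      readTVarIO b >>= print   -- 25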
Two major approaches have developed:
- Software-based TM generally works at the object level
- Hardware-based TM generally works at the word level
TM is promising, but several major difficulties loom
- I/O is hard to manage, since it can't be rolled back in general, and restricting it to one thread at a time doesn't scale well
- TM doesn't work well for some common patterns, such as producers and consumers
- Though this can be mitigated somewhat with guard conditions that must hold for a transaction to begin (see the sketch after this list)
- Performance overhead is large, given the amount of state saving and conflict detection going on
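The guard-condition idea can be sketched with the same Haskell STM library (the buffer code below is my illustration, not from the article): retry aborts the transaction and blocks it until one of the TVars it read changes, so a consumer simply waits until its guard (a non-empty buffer) holds.

    import Control.Concurrent.STM

    -- A consumer transaction that only commits once its guard holds:
    -- if the buffer is empty, retry blocks until a producer updates it.
    takeItem :: TVar [Int] -> STM Int
    takeItem buf = do
      items <- readTVar buf
      case items of
        []       -> retry                       -- guard condition fails: wait
        (x : xs) -> writeTVar buf xs >> return x

    putItem :: TVar [Int] -> Int -> STM ()
    putItem buf x = modifyTVar' buf (++ [x])

    main :: IO ()
    main = do
      buf <- newTVarIO []
      atomically (putItem buf 42)
      item <- atomically (takeItem buf)
      print item   -- 42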