30-08-2017, 04:03 PM
In computing and engineering, transactional memory attempts to simplify concurrent programming by allowing a group of load and store instructions to execute atomically. It is a concurrency control mechanism, analogous to database transactions, for controlling access to shared memory in concurrent computing. Transactional memory systems provide a high-level abstraction as an alternative to low-level thread synchronization. This abstraction allows for coordination between concurrent reads and writes of shared data in parallel systems.
In concurrent programming, synchronization is required when parallel threads attempt to access a shared resource. Low-level thread synchronization constructs, such as locks, are pessimistic: they prohibit threads outside a critical section from making any changes. Acquiring and releasing locks often adds overhead on workloads that have little actual conflict between threads. Transactional memory instead provides optimistic concurrency control, allowing threads to run in parallel with minimal interference. The goal of transactional memory systems is to transparently support regions of code marked as transactions by enforcing atomicity, consistency, and isolation.
A transaction is a collection of operations that can execute and commit changes as long as no conflict occurs. When a conflict is detected, the transaction rolls back to its initial state (before any changes) and is re-executed until all conflicts are resolved. Before a successful commit, the outcome of any operation inside a transaction is purely speculative. In contrast to lock-based synchronization, where critical sections are serialized to prevent data corruption, transactions allow for additional parallelism as long as few of them attempt to modify the same shared resource. Because the programmer is not responsible for explicitly identifying locks or the order in which they are acquired, programs that use transactional memory cannot produce a deadlock.
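The execute, validate, commit, and retry cycle described above can be illustrated with a minimal software transactional memory sketch. This is not a real library: the `STM` class, its version-based conflict detection, and the `tx` dictionary are all simplifications invented here to make the mechanism concrete:

```python
import threading

class STM:
    """Minimal software transactional memory sketch (illustrative only)."""

    def __init__(self):
        self._commit_lock = threading.Lock()  # serializes validate+commit
        self._versions = {}                   # name -> committed version number
        self._values = {}                     # name -> committed value

    def read(self, tx, name):
        # Record the version seen on first read so commit can detect conflicts.
        tx['reads'].setdefault(name, self._versions.get(name, 0))
        # A transaction sees its own speculative writes first.
        if name in tx['writes']:
            return tx['writes'][name]
        return self._values.get(name)

    def write(self, tx, name, value):
        # Writes stay speculative in a per-transaction buffer until commit.
        tx['writes'][name] = value

    def run(self, fn):
        while True:  # re-execute until a conflict-free commit succeeds
            tx = {'reads': {}, 'writes': {}}
            fn(tx)
            with self._commit_lock:
                # Validate: every variable read must still be at the
                # version observed; otherwise another commit conflicted.
                if all(self._versions.get(n, 0) == v
                       for n, v in tx['reads'].items()):
                    for n, v in tx['writes'].items():
                        self._values[n] = v
                        self._versions[n] = self._versions.get(n, 0) + 1
                    return
            # Conflict detected: discard the speculative buffer and retry.

stm = STM()
stm.run(lambda tx: (stm.write(tx, 'a', 100), stm.write(tx, 'b', 0)))

def transfer(tx):
    # Both reads and both writes commit atomically, or not at all.
    a = stm.read(tx, 'a')
    b = stm.read(tx, 'b')
    stm.write(tx, 'a', a - 30)
    stm.write(tx, 'b', b + 30)

stm.run(transfer)
```

Note that no locks appear in `transfer` itself, and no lock-acquisition order has to be reasoned about, which is why this style cannot deadlock in the way misordered locks can.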