06-04-2013, 03:32 PM
SHARED MEMORY ARCHITECTURE
INTRODUCTION
In computer architecture, shared memory architecture (SMA) refers to a multiprocessing design in which several processors access a globally shared memory.
Shared memory architectures may use
• Uniform Memory Access (UMA): all the processors share the physical memory uniformly.
• Non-Uniform Memory Access (NUMA): memory access time depends on the memory location relative to a processor.
• Cache-only memory architecture (COMA): the local memories of the processors at each node are used as caches.
In an SMA system, processors communicate by reading and writing shared memory locations. The two key problems in scaling an SMA system are:
• performance degradation due to "contention" when several processors try to access the same memory location.
• lack of "coherence" when memory is cached and cached copies go out of sync with the original values as modifications take place.
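The contention problem above can be sketched with threads standing in for processors. This is a minimal, hypothetical Python example (the original text names no language or library): several threads update one shared location, and a lock serializes their access, which is exactly the cost that contention imposes.

```python
import threading

# Sketch of contention: four "processors" (threads) all update the
# same shared memory location. The lock serializes access so no
# increment is lost, at the price of the threads waiting on each other.
counter = 0
lock = threading.Lock()

def worker(n):
    global counter
    for _ in range(n):
        with lock:  # only one thread may touch the shared location at a time
            counter += 1

threads = [threading.Thread(target=worker, args=(10000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 40000: serialized access keeps the shared value consistent
```

Without the lock, the `counter += 1` read-modify-write sequences could interleave and lose updates, which is the same hazard that hardware cache-coherence protocols exist to prevent.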
In computer hardware, shared memory refers to a (typically large) block of random access memory (RAM) that can be accessed by several different central processing units (CPUs) in a multiple-processor computer system.
DISTRIBUTED MEMORY ARCHITECTURE
In computer science, distributed memory refers to a multiple-processor computer system in which each processor has its own private memory.
In a distributed memory system there is typically a processor, a memory, and some form of interconnection that allows programs on each processor to interact with each other.
The key issue in programming distributed memory systems is how to distribute the data over the memories.
The advantage of distributed memory is that it rules out race conditions on shared data, and that it forces the programmer to think about data distribution.