15-05-2012, 01:13 PM
An overview of parallel computing
INTRODUCTION
Parallel computing is the simultaneous use of multiple compute resources to solve a computational problem. Traditionally, by contrast, software has been written for serial computation: it runs on a single computer having a single Central Processing Unit (CPU), the problem is broken into a discrete series of instructions, and the instructions are executed one after another, so only one instruction may execute at any moment in time.
USE OF PARALLEL COMPUTING
Parallel computing can save both time and money: applying more compute resources shortens the time to complete a task, with potential cost savings. Many problems are so large and complex that it is impractical or impossible to solve them on a single computer, especially when computer memory is limited; for example, web search engines and databases may process millions of transactions per second. Parallel computing also provides concurrency: a single compute resource can only do one thing at a time, but multiple compute resources can do many things simultaneously.
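The idea of splitting one large task among several workers can be sketched as below. This is a hypothetical illustration, not an example from the original text: the sum-of-squares task, the chunk count, and all names are invented, and threads stand in for the general idea (CPU-bound Python code would normally use processes instead, because of the GIL).

```python
# A minimal sketch of dividing one task among concurrent workers.
# The task (sum of squares) and the four-way split are invented here.
from concurrent.futures import ThreadPoolExecutor

def sum_squares(bounds):
    """Sum i*i over the half-open range [lo, hi)."""
    lo, hi = bounds
    return sum(i * i for i in range(lo, hi))

n = 100_000
# Split one large range into four independent chunks.
chunks = [(i * n // 4, (i + 1) * n // 4) for i in range(4)]
with ThreadPoolExecutor(max_workers=4) as pool:
    parallel_total = sum(pool.map(sum_squares, chunks))
# parallel_total equals the serial result sum_squares((0, n))
```

Because the chunks are independent, the workers need no coordination beyond combining their partial sums at the end.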
CONCEPT AND TERMINOLOGY
John Von Neumann Architecture:
The architecture is named after John von Neumann, who first authored the general requirements for an electronic computer in his 1945 papers. Since then, virtually all computers have followed this basic design, differing from earlier computers, which were programmed through "hard wiring". The von Neumann architecture mainly consists of four components: 1) Memory 2) Control Unit 3) Arithmetic Logic Unit 4) Input/Output.
Memory: Read/write, random access memory is used to store both program instructions and data.
Control unit: Control unit fetches instructions/data from memory, decodes the instructions and then sequentially coordinates operations to accomplish the programmed task.
Arithmetic Logic Unit: The arithmetic logic unit performs basic arithmetic and logic operations.
Input/output: Input/output is the interface to the human operator.
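The fetch-decode-execute cycle described above can be sketched as a toy machine. This is an invented miniature, not a real instruction set: the three opcodes and the program are hypothetical, but instructions and data do share one memory, exactly as in the von Neumann design.

```python
# A toy von Neumann machine: instructions and data share one memory,
# and the control loop fetches, decodes, and executes sequentially.
# The three-opcode instruction set (LOAD/ADD/HALT) is invented here.

def run(memory):
    acc = 0          # accumulator, playing the role of the ALU's register
    pc = 0           # program counter, part of the control unit
    while True:
        op, arg = memory[pc]      # fetch the next instruction
        pc += 1
        if op == "LOAD":          # decode and execute
            acc = memory[arg]
        elif op == "ADD":
            acc += memory[arg]
        elif op == "HALT":
            return acc            # hand the result to "output"

program = [
    ("LOAD", 4),    # acc = memory[4]
    ("ADD", 5),     # acc += memory[5]
    ("HALT", None),
    None,           # unused cell
    2,              # data: memory[4]
    3,              # data: memory[5]
]
result = run(program)   # → 5
```

Note that only one instruction executes per loop iteration: this sequential cycle is precisely what parallel architectures later relax.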
Flynn's Classical Taxonomy:
One of the more widely used classifications, in use since 1966, is called Flynn's Taxonomy. Flynn's taxonomy distinguishes multi-processor computer architectures according to how they can be classified along the two independent dimensions of Instruction and Data. Each of these dimensions can have only one of two possible states: Single or Multiple.
SISD: Single Instruction, Single Data
SIMD: Single Instruction, Multiple Data
MISD: Multiple Instruction, Single Data
MIMD: Multiple Instruction, Multiple Data
Single Instruction, Single Data (SISD):
Single Instruction: Only one instruction stream is being acted on by the CPU during any one clock cycle.
Single Data: Only one data stream is being used as input during any one clock cycle. Execution is deterministic. This is the oldest and, even today, the most common type of computer. Examples: older generation mainframes, minicomputers and workstations; most modern-day PCs.
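SISD execution can be pictured as an ordinary sequential loop; the data values below are invented for illustration.

```python
# SISD in miniature: a single instruction stream works through a single
# data stream, one element per step; nothing overlaps in time.
data = [3, 1, 4, 1, 5]
total = 0
for x in data:       # one instruction sequence, one datum at a time
    total += x
# total == 14
```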
Single Instruction, Multiple Data (SIMD):
Single Instruction: All processing units execute the same instruction at any given clock cycle.
Multiple Data: Each processing unit can operate on a different data element. This model is best suited to specialized problems characterized by a high degree of regularity, such as graphics/image processing. Execution is synchronous (lockstep) and deterministic. Two varieties exist: Processor Arrays and Vector Pipelines.
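The SIMD idea can be sketched with NumPy's vectorized operations; this is an analogy rather than an example from the original text, and the pixel values are invented. A single operation is applied across many data elements at once, and on modern CPUs NumPy's inner loops are often executed with actual SIMD instructions where available.

```python
# SIMD in spirit: one instruction ("add 1") applied to multiple data
# elements in a single step, as in brightening every pixel of an image.
import numpy as np

pixels = np.array([10, 20, 30, 40])   # invented sample data
brightened = pixels + 1               # same instruction, multiple data
# brightened is array([11, 21, 31, 41])
```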
Multiple Instruction, Single Data (MISD):
Multiple Instructions: Each processing unit operates on the data independently via separate instruction streams.
Single Data: A single data stream is fed into multiple processing units.
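The MISD pattern is rare in practice; one motivating use sometimes cited is redundant computation on the same data for fault tolerance. The sketch below is hypothetical, with invented checks, and merely mimics the shape of the model: several independent instruction streams all examine a single datum.

```python
# MISD in miniature: multiple independent instruction streams (the
# checks) each process the same single data stream. The checks here
# are invented for illustration.
checks = [
    lambda x: x % 2 == 0,   # instruction stream 1: parity test
    lambda x: x > 0,        # instruction stream 2: sign test
    lambda x: x < 100,      # instruction stream 3: range test
]
datum = 42
results = [check(datum) for check in checks]   # all see the same data
# results == [True, True, True]
```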
Multiple Instructions, Multiple Data (MIMD):
Multiple Instructions: Every processor may be executing a different instruction stream.
Multiple Data: Every processor may be working with a different data stream. Execution can be synchronous or asynchronous, deterministic or non-deterministic.
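MIMD execution can be sketched with two workers running entirely different instruction streams on entirely different data; the two tasks below are invented for this illustration.

```python
# MIMD in miniature: each worker runs its own instruction stream on
# its own data stream, asynchronously. Both workloads are invented.
from concurrent.futures import ThreadPoolExecutor

def word_count(text):
    return len(text.split())

def digit_sum(n):
    return sum(int(d) for d in str(n))

with ThreadPoolExecutor() as pool:
    f1 = pool.submit(word_count, "parallel computing in brief")  # stream 1
    f2 = pool.submit(digit_sum, 1966)                            # stream 2
    results = (f1.result(), f2.result())
# results == (4, 22)
```

Nothing forces the two workers into lockstep, which is exactly the asynchronous, non-deterministic freedom that distinguishes MIMD from SIMD.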