02-10-2012, 03:26 PM
PARALLEL COMPUTING
PARALLEL.doc (Size: 88 KB / Downloads: 34)
ABSTRACT
This presentation covers some of the advanced concepts of parallel computing memory architectures and implementations of those architectures. Beginning with a brief overview of the concepts and terminology associated with parallel computing, the paper moves on to advanced topics such as hybrid shared-distributed memory. Designing and developing parallel programs has characteristically been a very manual process; this paper also explains how that process can be automated. The design of parallel programs is discussed using the domain decomposition and functional decomposition methods. These topics are followed by a discussion of a number of issues related to designing parallel programs. The last portion of the presentation examines how to parallelize several different types of serial programs.
INTRODUCTION
Traditionally, software has been written for serial computation, to be run on a single computer having a single Central Processing Unit (CPU): a problem is broken into a discrete series of instructions, the instructions are executed one after another, and only one instruction may execute at any moment in time. In the simplest sense, parallel computing is the simultaneous use of multiple compute resources to solve a computational problem. To be run using multiple CPUs, a problem is broken into discrete parts that can be solved concurrently; each part is further broken down into a series of instructions, and instructions from each part execute simultaneously on different CPUs.
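The decomposition described above can be sketched in a few lines of Python. This is a minimal illustration, not part of the original paper: the problem (summing a range of numbers) is broken into discrete parts, each part is handed to a separate worker, and the partial results are combined. Threads are used here for portability of the sketch; a run that actually uses multiple CPUs would use processes instead. All names below (partial_sum, concurrent_sum) are illustrative.

```python
from concurrent.futures import ThreadPoolExecutor

def partial_sum(bounds):
    """Solve one discrete part of the problem: sum a sub-range."""
    lo, hi = bounds
    return sum(range(lo, hi))

def concurrent_sum(n, workers=4):
    """Break the range 0..n into parts that can be solved concurrently."""
    step = n // workers
    chunks = [(i * step, (i + 1) * step) for i in range(workers)]
    chunks[-1] = (chunks[-1][0], n)  # last chunk absorbs any remainder
    with ThreadPoolExecutor(max_workers=workers) as pool:
        # Each chunk is an independent part; its instructions run
        # concurrently with the other chunks' instructions.
        return sum(pool.map(partial_sum, chunks))
```

The essential property is that the parts are independent: no chunk needs another chunk's result before the final combining step.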
NEED FOR PARALLEL COMPUTING
The primary reasons for using parallel computing are to save time (wall-clock time), to solve larger problems, and to provide concurrency (do multiple things at the same time). Other reasons include: Taking advantage of non-local resources - using available compute resources on a wide area network, or even the Internet, when local compute resources are scarce. Cost savings - using multiple "cheap" computing resources instead of paying for time on a supercomputer. Overcoming memory constraints - single computers have very finite memory resources; for large problems, using the memories of multiple computers may overcome this obstacle. Limits to serial computing - both physical and practical reasons pose significant constraints to simply building ever faster serial computers.
DESIGNING PARALLEL PROGRAMS
Manual vs. Automatic Parallelization
Designing and developing parallel programs has characteristically been a very manual process. The programmer is typically responsible for both identifying and actually implementing parallelism. Very often, manually developing parallel code is a time-consuming, complex, error-prone and iterative process. For a number of years now, various tools have been available to assist the programmer with converting serial programs into parallel programs. The most common type of tool used to automatically parallelize a serial program is a parallelizing compiler or pre-processor.
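As a hypothetical illustration (not from the paper) of the kind of loop such a tool targets, consider the function below. Each iteration writes a distinct output element and reads no other iteration's result, so the iterations are independent and an analyzing compiler or pre-processor could safely distribute them across CPUs.

```python
def scale(data, factor):
    """Scale every element of data by factor."""
    out = [0.0] * len(data)
    # Independent iterations: iteration i touches only data[i] and
    # out[i], so the loop can be split across CPUs without changing
    # the result.
    for i in range(len(data)):
        out[i] = data[i] * factor
    return out
```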
CONCLUSION
However, several important caveats apply to automatic parallelization: wrong results may be produced; performance may actually degrade; it is much less flexible than manual parallelization; it is limited to a subset of code (mostly loops); and it may not parallelize code at all if the analysis suggests there are inhibitors or the code is too complex.
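A short illustrative sketch of one such inhibitor, a loop-carried dependency, may make the last caveat concrete (the example is mine, not the paper's). Here iteration i reads the value written by iteration i-1, so the iterations cannot be reordered or run concurrently, and an automatic tool would decline to parallelize the loop.

```python
def prefix_sums(data):
    """Replace each element with the running total up to that element."""
    out = list(data)
    # Loop-carried dependency: out[i] needs out[i - 1], the result of
    # the previous iteration, so the iterations must run in order.
    for i in range(1, len(out)):
        out[i] += out[i - 1]
    return out
```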