03-10-2012, 05:32 PM
Parallel Computing
Parallel Computing.pdf (Size: 384.14 KB / Downloads: 30)
What is Parallel Computing?
I would like to start by offering some metaphors from real life. What do
we do when we are given a Herculean task that needs to be done fast?
We divide the task into pieces and employ more people to do the work.
What else would you do? Similarly, a large corporation
requires many employees for its proper functioning, and a large economy
requires many people at work in various sectors of business.
If a computer was human, then its central processing unit (CPU) would be its
brain. A CPU is a microprocessor -- a computing engine on a chip. While
modern microprocessors are small, they are also remarkably powerful,
executing millions of instructions per second. Even so, there are some
computational problems that are so complex that a powerful microprocessor
would require years to solve them.
Computer scientists use different approaches to address this problem. One
potential approach is to push for more powerful microprocessors. Usually this
means finding ways to fit more transistors on a microprocessor chip.
Computer engineers are already building microprocessors with transistors that
are only a few dozen nanometers wide. How small is a nanometer? It's
one-billionth of a meter. A red blood cell has a diameter of 2,500 nanometers --
the width of modern transistors is a fraction of that size.
Building more powerful microprocessors requires an intense and expensive
production process. Some computational problems take years to solve even
with the benefit of a more powerful microprocessor. Partly because of these
factors, computer scientists sometimes use a different approach: parallel
processing.
Frequency Scaling and Power Consumption
Frequency scaling was the dominant reason for improvements in
computer performance from the mid-1980s until 2004. The runtime of a
program is equal to the number of instructions multiplied by the average time
per instruction. Holding everything else constant, increasing the clock
frequency decreases the average time it takes to execute an instruction. An
increase in frequency thus decreases runtime for all compute-bound
programs.
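The runtime relation above can be sketched with a short calculation. The instruction count, average cycles per instruction, and clock frequencies below are illustrative numbers chosen for the example, not figures from the text.

```python
# Runtime model: runtime = instructions x average time per instruction,
# where time per instruction = cycles per instruction / clock frequency.
# All numbers below are made up for illustration.
instructions = 2_000_000_000      # total instructions executed by the program
cycles_per_instruction = 1.5      # average CPI
frequency_hz = 2.0e9              # 2 GHz clock

time_per_instruction = cycles_per_instruction / frequency_hz
runtime_s = instructions * time_per_instruction
print(runtime_s)                  # 1.5 seconds at 2 GHz

# Raising the clock to 3 GHz shortens runtime proportionally,
# as the text describes for compute-bound programs.
runtime_3ghz_s = instructions * cycles_per_instruction / 3.0e9
print(runtime_3ghz_s)             # 1.0 second
```

Because only the frequency changes between the two calculations, the runtime scales inversely with it, which is exactly why frequency scaling improved performance across the board for compute-bound programs.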
However, power consumption by a chip is given by the
equation P = C × V² × F, where P is power, C is the capacitance being switched
per clock cycle (proportional to the number of transistors whose inputs
change), V is voltage, and F is the processor frequency (cycles per
second). Increases in frequency therefore increase the amount of power used in a
processor. Rising processor power consumption ultimately led to Intel's
May 2004 cancellation of its Tejas and Jayhawk processors, which is generally
cited as the end of frequency scaling as the dominant computer architecture
paradigm.
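The power equation can be evaluated directly. The capacitance and voltage values below are illustrative assumptions, not figures for any particular chip.

```python
# Dynamic power model from the text: P = C * V**2 * F.
def dynamic_power(c_farads, v_volts, f_hz):
    """Power in watts for switched capacitance C, voltage V, frequency F."""
    return c_farads * v_volts ** 2 * f_hz

# Illustrative values: 1 nF of switched capacitance at 1.2 V.
p_2ghz = dynamic_power(1.0e-9, 1.2, 2.0e9)   # 2.88 W at 2 GHz
p_3ghz = dynamic_power(1.0e-9, 1.2, 3.0e9)   # 4.32 W at 3 GHz
```

Note that power grows linearly with F here; in practice, higher frequencies often also require higher voltages, and the V² term then makes power climb even faster, which is why frequency scaling hit a wall.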
Approaches To Parallel Computing
To understand parallel processing, we need to look at the four basic
programming models. Computer scientists define these models based on two
factors: the number of instruction streams and the number of data streams
the computer handles. Instruction streams are algorithms -- series of
steps designed to solve a particular problem. Data streams are
information pulled from computer memory and used as input values to the
algorithms. The processor plugs the values from the data stream into the
algorithms from the instruction stream, then initiates the operation to obtain
a result.
Flynn’s Taxonomy
Michael J. Flynn created one of the earliest classification systems for parallel
(and sequential) computers and programs, now known as Flynn's taxonomy.
Flynn classified programs and computers by whether they operate using
a single set or multiple sets of instructions, and whether those instructions
use a single set or multiple sets of data.
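The two-axis classification above yields four categories (SISD, SIMD, MISD, MIMD). The snippet below is a conceptual sketch of three of them in plain Python; the names and operations are illustrative, and real SIMD or MIMD hardware performs these patterns in parallel rather than sequentially as modeled here.

```python
# Conceptual sketch of Flynn's categories (illustrative, not real parallelism).
data = [1, 2, 3, 4]

# SISD: a single instruction stream processes a single data stream,
# one element at a time.
sisd = []
for x in data:
    sisd.append(x * 2)

# SIMD: one instruction is applied across many data elements;
# a vector processor would do this in a single operation.
simd = [x * 2 for x in data]

# MIMD: multiple instruction streams operate on multiple data streams;
# here, two different operations run over two different data chunks.
ops = [lambda x: x * 2, lambda x: x + 10]
chunks = [[1, 2], [3, 4]]
mimd = [[op(x) for x in chunk] for op, chunk in zip(ops, chunks)]

print(sisd)   # [2, 4, 6, 8]
print(simd)   # [2, 4, 6, 8]
print(mimd)   # [[2, 4], [13, 14]]
```

The fourth category, MISD (multiple instructions on a single data stream), is rarely built in practice and is omitted from the sketch.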