09-08-2014, 04:26 PM
DISTRIBUTED COMPUTING
Introduction
Computer architectures consisting of multiple interconnected processors are basically of two types:
1) Tightly coupled systems, in which the processors share a common memory
2) Loosely coupled systems, in which each processor has its own local memory and the processors communicate by passing messages
Definition
A distributed computing system is basically a collection of processors interconnected by a communication network, in which each processor has its own local memory and other peripherals, and communication between any two processors of the system takes place by message passing over the communication network.
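The definition above can be sketched in a few lines of Python. This is only an illustration: it uses Python's multiprocessing module as a stand-in for a real communication network, and the worker function and messages are invented for the example.

```python
# Two "processors", each with only its own local state; the sole way
# to share data between them is message passing over a connection.
from multiprocessing import Process, Pipe

def worker(conn):
    # The worker receives a request message and replies with a result.
    msg = conn.recv()          # wait for a message from the other side
    conn.send(msg.upper())     # reply with a transformed result
    conn.close()

if __name__ == "__main__":
    parent_conn, child_conn = Pipe()
    p = Process(target=worker, args=(child_conn,))
    p.start()
    parent_conn.send("hello")
    print(parent_conn.recv())  # -> HELLO
    p.join()
```

In a real distributed system the two endpoints would live on different machines and the pipe would be a network link, but the programming model, local memory plus explicit messages, is the same.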
Clustered vs distributed computing
Distributed computing can also be less compute-intensive, with no need for the speed of a supercomputer, and spread out over many different machines in many locales.
A cluster-based system is an example of distributed computing. A cluster can be many CPUs linked together in the same, single room. In fact, supercomputers are not always single monolithic machines anymore; many of the fastest supercomputers in the world, including Cray-built machines, are clusters.
Different algorithms
Centralized algorithms
The graph G is encoded as a string, and the string is given as input to a computer. The computer program finds a coloring of the graph, encodes the coloring as a string, and outputs the result.
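The centralized setting can be sketched as follows: one program holds the whole graph and assigns colours. Greedy colouring is one standard choice of algorithm, and the small example graph below is purely illustrative.

```python
# Centralized graph colouring: the entire graph is available as a
# single data structure, and one program computes the colouring.

def greedy_colouring(adjacency):
    """adjacency: dict mapping each node to a list of its neighbours."""
    colour = {}
    for node in adjacency:                              # fixed visiting order
        used = {colour[n] for n in adjacency[node] if n in colour}
        c = 0
        while c in used:                                # smallest free colour
            c += 1
        colour[node] = c
    return colour

# A triangle a-b-c with a pendant node d attached to c.
G = {"a": ["b", "c"], "b": ["a", "c"], "c": ["a", "b", "d"], "d": ["c"]}
print(greedy_colouring(G))  # -> {'a': 0, 'b': 1, 'c': 2, 'd': 0}
```

The program sees the whole graph at once, which is exactly what the distributed setting below forbids.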
Parallel algorithms
Again, the graph G is encoded as a string. However, multiple computers can access the same string in parallel. Each computer might focus on one part of the graph and produce a colouring for that part.
Distributed algorithms
The graph G is the structure of the computer network. There is one computer for each node of G and one communication link for each edge of G. Initially, each computer only knows about its immediate neighbours in the graph G; the computers must exchange messages with each other to discover more about the structure of G. Each computer must produce its own colour as output.
The main focus is on coordinating the operation of an arbitrary distributed system.
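The distributed setting above can be simulated in one process to make the contrast concrete. In the sketch below, which is an illustration rather than any specific published algorithm, each node knows only its own ID and its neighbours, and it learns neighbour colours solely through messages delivered in synchronous rounds; a node decides its colour only once every undecided neighbour has a smaller ID.

```python
# Simulated distributed colouring: no node ever sees the whole graph.
# Nodes exchange messages in synchronous rounds; each node's decision
# uses only its own ID and the colours it has heard from neighbours.

def distributed_colouring(adjacency):
    colour = {}                                  # colours decided so far
    known = {v: set() for v in adjacency}        # colours heard from neighbours
    while len(colour) < len(adjacency):
        # A node decides this round only if every undecided neighbour
        # has a smaller ID (so neighbours never decide simultaneously).
        deciders = [v for v in adjacency if v not in colour
                    and all(u in colour or u < v for u in adjacency[v])]
        messages = []
        for v in deciders:
            c = 0
            while c in known[v]:                 # smallest colour not heard
                c += 1
            colour[v] = c
            messages.extend((u, c) for u in adjacency[v])
        for u, c in messages:                    # deliver this round's messages
            known[u].add(c)
    return colour

G = {1: [2, 3], 2: [1, 3], 3: [1, 2, 4], 4: [3]}
print(distributed_colouring(G))
```

Because distinct IDs always give at least one locally maximal undecided node, some node decides in every round and the loop terminates. The price of the distributed view is visible here: information spreads only one hop per round, whereas the centralized program reads the whole graph directly.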
Conclusion
In distributed computing, identifying the exact requirements is the key step.
Is the intention to exchange data, is it necessary to distribute computation, or is a user model needed that runs remote programs? The answer to this question determines the solution that fits best. In general, distribution adds a tremendous amount of complexity to each program. Debugging remote calls is more complicated and difficult than debugging local calls, and concurrency issues (such as interdependencies between threads) abound. Furthermore, remote calls are substantially slower than local ones, typically by orders of magnitude, so optimization issues are much more pressing than in purely local programs.