Science Fair Project Encyclopedia
Parallel computing is the simultaneous execution of the same task (split up and specially adapted) on multiple processors in order to obtain faster results.
Parallel computing systems
The term parallel processor is sometimes used for a computer with more than one central processing unit, available for parallel processing. Systems with thousands of such processors are known as massively parallel.
There are many different kinds of parallel computers (or "parallel processors"). They are distinguished by the kind of interconnection between processors (known as "processing elements" or PEs) and between processors and memories. Flynn's taxonomy also classifies parallel (and serial) computers according to whether all processors execute the same instructions at the same time (single instruction/multiple data -- SIMD) or each processor executes different instructions (multiple instruction/multiple data -- MIMD). Parallel processor machines are also divided into symmetric and asymmetric multiprocessors, depending on whether all the processors are capable of running all the operating system code and of accessing, say, I/O devices, or whether some processors are more or less privileged.
While a system of n parallel processors is less efficient than one n-times-faster processor, the parallel system is often cheaper to build. For tasks which require very large amounts of computation, have time constraints on completion and especially for those which can be divided into n execution threads, parallel computation is an excellent solution. In fact, in recent years, most high performance computing systems, also known as supercomputers, have a parallel architecture.
Parallel computers are theoretically modeled as Parallel Random Access Machines (PRAMs). The PRAM model ignores the cost of interconnection between the constituent computing units, but is nevertheless very useful in providing upper bounds on the parallel solvability of many problems. In reality the interconnection plays a significant role.
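As an illustration of how the PRAM model assigns costs, the following sketch (an assumption for this article, not a standard library routine) simulates a PRAM-style parallel reduction: summing n numbers takes roughly log2(n) time steps, because every pairwise addition within a round is counted as happening simultaneously and interconnection costs are ignored.

```python
# Sketch: PRAM-style parallel reduction (sum), simulated sequentially.
# Each while-loop iteration corresponds to ONE PRAM time step, since in
# the model all pairwise additions of a round occur simultaneously and
# communication between processing elements is free.

def pram_sum(values):
    """Return (total, rounds): the sum and the number of PRAM time steps."""
    data = list(values)
    rounds = 0
    while len(data) > 1:
        # One PRAM round: processor i adds elements 2i and 2i+1 in parallel.
        paired = [data[i] + data[i + 1] for i in range(0, len(data) - 1, 2)]
        if len(data) % 2:            # an odd element carries to the next round
            paired.append(data[-1])
        data = paired
        rounds += 1
    return data[0], rounds

total, steps = pram_sum(range(8))    # 8 inputs reduce in 3 rounds
```

On a real machine the per-round communication between processing elements would add cost, which is exactly what the PRAM model abstracts away.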
It should not be imagined that successful parallel computing is a matter of obtaining the required hardware and connecting it suitably. In practice, linear speedup (i.e., speedup proportional to the number of processors) is very difficult to achieve. This is because many algorithms are essentially sequential in nature. They must be redesigned in order to make effective use of the parallel hardware. Further, programs which work correctly in a single CPU system may not do so in a parallel environment. This is because multiple copies of the same program may interfere with each other, for instance by accessing the same memory location at the same time. Therefore, careful programming is required in a parallel system.
Superlinear speedup, the effect of an N-processor machine completing a task more than N times faster than a machine with a single processor similar to those in the multiprocessor, has at times been a controversial issue (and has led to much benchmarketing). It can, however, be brought about by real effects: the multiprocessor machine has not just N times the processing power but also N times the cache and memory, flattening the cache-memory-disk hierarchy, and partitioning the problem may let each individual processor use its memory more efficiently, among other effects. Similar claims of boosted efficiency are sometimes made for replacing a large multiprocessor with a cluster of cheap computers, but again the actual results depend heavily on the problem at hand and on the ability to partition it in a way that is conducive to clustering.
The processors may either communicate in order to be able to cooperate in solving a problem or they may run completely independently, possibly under the control of another processor which distributes work to the others and collects results from them (a "processor farm"). The difficulty of cooperative problem solving is aptly demonstrated by the following dubious reasoning:
- If it takes one man one minute to dig a post-hole then sixty men can dig it in one second.
Amdahl's law states this more formally.
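Amdahl's law says that if only a fraction p of a task can be parallelized, the speedup on n processors is bounded by 1 / ((1 - p) + p / n). A small sketch (function name is illustrative) makes the post-hole example concrete:

```python
# Sketch of Amdahl's law: with a parallelizable fraction p of the work,
# the best possible speedup on n processors is 1 / ((1 - p) + p / n).
def amdahl_speedup(p, n):
    return 1.0 / ((1.0 - p) + p / n)

# If half of the digging is inherently sequential (p = 0.5), even sixty
# men speed the job up by less than a factor of two:
# amdahl_speedup(0.5, 60) ~= 1.97
```

Note that as n grows without bound the speedup approaches 1 / (1 - p), so the sequential fraction, not the processor count, sets the ceiling.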
Processors in a parallel computer may communicate with each other in a number of ways, including shared memory (either multiported or multiplexed), a crossbar, a shared bus, or an interconnect network in any of a myriad of topologies, including star, ring, tree, hypercube, fat hypercube (a hypercube with more than one processor at each node), and n-dimensional mesh. Parallel computers based on interconnect networks must employ some kind of routing to enable the passing of messages between nodes that are not directly connected. The communication medium used between the processors is likely to be hierarchical in large multiprocessor machines. Similarly, memory may be private to a processor, shared between a number of processors, or globally shared. The systolic array is an example of a multiprocessor with fixed-function nodes, local-only memory, and no message routing.
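For the hypercube topology mentioned above, routing is particularly simple. A common scheme (often called dimension-order or e-cube routing; the sketch below is an illustrative simulation, not any particular machine's firmware) numbers the nodes so that neighbours differ in exactly one bit, and a message reaches its destination by correcting one differing bit per hop:

```python
# Sketch: dimension-order routing on a hypercube interconnect.
# Nodes are numbered so that directly connected nodes differ in exactly
# one bit; a message hops across each differing dimension in turn, so
# the path length equals the Hamming distance between source and target.
def hypercube_route(src, dst, dims):
    """Return the list of nodes visited from src to dst in a dims-cube."""
    path = [src]
    node = src
    for d in range(dims):
        if (node ^ dst) & (1 << d):   # bit d still differs from the target
            node ^= 1 << d            # hop across dimension d
            path.append(node)
    return path

# Route from node 000 to node 101 in a 3-cube: two hops.
# hypercube_route(0b000, 0b101, 3) -> [0, 1, 5]
```

Since a d-dimensional hypercube has 2^d nodes, any message needs at most d hops, which is why the topology scales well to large machines.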
Approaches to parallel computers include:
- Computer cluster
- Parallel supercomputers
- Distributed computing
- NUMA vs. SMP vs. massively parallel computer systems
- Grid computing
A huge number of software systems have been designed for programming parallel computers, both at the operating system and the programming language level. These systems must provide mechanisms for partitioning the overall problem into separate tasks and allocating tasks to processors. Such mechanisms may provide either implicit parallelism, where the system (the compiler or some other program) partitions the problem and allocates tasks to processors automatically (such systems are also called automatically parallelizing compilers), or explicit parallelism, where the programmer must annotate the program to show how it is to be partitioned. Most current implementations of parallelizing compilers support only single-level parallelism, as opposed to multi-level parallelism (also called nested parallelism), which allows threads already running in parallel to spawn further parallelism. It is also usual to provide synchronisation primitives, such as semaphores and monitors, to allow processes to share resources without conflict.
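As a small illustration of such a synchronisation primitive (the worker function and bookkeeping names here are hypothetical), a counting semaphore initialised to 2 lets at most two of eight threads hold a shared resource at once; the rest block until a permit is released:

```python
# Sketch: explicit parallelism with a synchronisation primitive.
# A semaphore with 2 permits bounds how many threads may use a shared
# resource simultaneously; `peak` records the highest concurrency seen.
import threading

sem = threading.Semaphore(2)
active = 0
peak = 0
state_lock = threading.Lock()

def worker():
    global active, peak
    with sem:                      # acquire a permit; blocks if none are free
        with state_lock:
            active += 1
            peak = max(peak, active)
        # ... use the shared resource here ...
        with state_lock:
            active -= 1            # leaving the `with sem` block frees a permit

threads = [threading.Thread(target=worker) for _ in range(8)]
for t in threads: t.start()
for t in threads: t.join()
# `peak` never exceeds 2, no matter how the threads interleave.
```

A monitor plays a similar role but bundles the lock together with the data it protects, which is less error-prone than scattering explicit acquire/release pairs through the program.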
Load balancing attempts to keep all processors busy by moving tasks from heavily loaded processors to less loaded ones.
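A common greedy strategy for the static version of this problem (the sketch below is illustrative, not a specific scheduler) assigns each task, largest first, to whichever processor currently has the least work:

```python
# Sketch: greedy load balancing. Each task goes to the currently
# least-loaded processor; a heap keeps the lightest processor on top.
import heapq

def balance(task_costs, n_procs):
    """Return the per-processor load after greedy assignment."""
    heap = [(0, p) for p in range(n_procs)]        # (load, processor id)
    loads = [0] * n_procs
    for cost in sorted(task_costs, reverse=True):  # place largest tasks first
        load, p = heapq.heappop(heap)              # lightest processor
        loads[p] = load + cost
        heapq.heappush(heap, (loads[p], p))
    return loads

# Five tasks on two processors: both end up with a load of 10.
# balance([7, 5, 3, 3, 2], 2) -> per-processor loads summing to 20, max 10
```

Dynamic load balancing, where running tasks migrate between processors as imbalance appears, is harder because migration itself costs communication time.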
Well-known parallel software problem sets are:
Parallel programming models:
Topics in parallel computing
- Parallel programming
- Parallel algorithm
- Finding parallelism in problems and algorithms
- Cellular automaton
Computer science topics:
- Lazy evaluation vs strict evaluation
- Complexity class NC
- Communicating sequential processes
- Dataflow architecture
- Parallel graph reduction
- Parallel computer interconnects
- Parallel computer I/O
- Reliability problems in large systems
- Atari Transputer Workstation
- BBN Butterfly computers
- Beowulf cluster
- Blue Gene
- Deep Blue
- Fifth generation computer systems project
- ILLIAC III
- ILLIAC IV
- Meiko Computing Surface
Parallel computing to increase fault tolerance:
Companies (largely historical):
The contents of this article are licensed from www.wikipedia.org under the GNU Free Documentation License.