Introduction to Parallel Computing
Traditionally, software has been written for serial computation:
•To be run on a single computer having a single Central Processing Unit (CPU);
•A problem is broken into a discrete series of instructions;
•Instructions are executed one after another;
•Only one instruction may execute at any moment in time.
In the simplest sense, parallel computing is the simultaneous use of multiple compute resources to solve a computational problem:
•To be run using multiple CPUs;
•A problem is broken into discrete parts that can be solved concurrently;
•Each part is further broken down into a series of instructions;
•Instructions from each part execute simultaneously on different CPUs.
The compute resources can include:
•A single computer with multiple processors;
•An arbitrary number of computers connected by a network;
•A combination of both.
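The model described above can be sketched in a few lines of Python. This is an illustrative example, not part of the original text: the problem (summing a list of numbers) is broken into discrete parts, each part is handed to a separate worker process via a process pool, and the partial results are combined. The function names `solve_part` and `solve_parallel` are hypothetical names chosen for this sketch.

```python
# Illustrative sketch of the parallel model described above:
# a problem is broken into discrete parts, and each part is
# executed concurrently on a separate CPU via a process pool.
from concurrent.futures import ProcessPoolExecutor

def solve_part(part):
    """Solve one independent sub-problem: here, sum a slice of numbers."""
    return sum(part)

def solve_parallel(data, n_parts=4):
    # Break the problem into discrete parts that can be solved concurrently.
    size = max(1, len(data) // n_parts)
    parts = [data[i:i + size] for i in range(0, len(data), size)]
    # Each part executes simultaneously on a different CPU.
    with ProcessPoolExecutor() as pool:
        partial_results = list(pool.map(solve_part, parts))
    # Combine the partial results into the final answer.
    return sum(partial_results)

if __name__ == "__main__":
    print(solve_parallel(list(range(100))))  # prints 4950, same as sum(range(100))
```

Note that the parts must be independent of one another for this to work; if one part needed another part's result, the instructions could not execute simultaneously.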
Introduction to Parallel Computing in India
Although the performance of single processors has been steadily increasing over the years, the only way to build the next generation of teraflop-class supercomputers seems to be through parallel processing technology. Even with today's workstation-class high-performance processors exceeding 100 megaflops, thousands of processors are required to build a teraflop machine. Further, even the fastest special-purpose vector processors have peak performance of only a few gigaflops, so they too must be used in parallel to achieve teraflop levels of performance.
In 1987, India decided to launch a national initiative in supercomputing in the form of a time-bound mission to design, develop and deliver a supercomputer in the gigaflops range. The major motivation came from political delays in obtaining a CRAY XMP for weather forecasting. A decision was made to support the development of indigenous parallel processing technology. The Center for Development of Advanced Computing (C-DAC) was set up in August 1988 with a three-year budget of Rs. 375 million (approximately US$ 12 million). C-DAC's First Mission was directed to deliver a 1000 MFlops (1 GFlops) parallel supercomputer by 1991. Simultaneously, several other complementary projects were initiated to develop high-performance parallel computers at the National Aerospace Laboratory of the Council of Scientific and Industrial Research (CSIR), the Center for Development of Telematics (C-DOT), the Advanced Numerical Research & Analysis Group (ANURAG) of the Defense Research and Development Organization (DRDO), and the Bhabha Atomic Research Center (BARC). India's first generation of parallel computers was delivered starting from 1991.
We all know that silicon-based chips are reaching a physical limit in processing speed, constrained by the speed at which electrical signals propagate and by certain thermodynamic laws. A viable way to overcome this limitation is to connect multiple processors working in coordination with each other to solve grand challenge problems. Hence, high-performance computing requires the use of Massively Parallel Processing (MPP) systems containing thousands of powerful CPUs.
Processing multiple tasks simultaneously on multiple processors is called parallel processing. A parallel program consists of multiple active processes simultaneously solving a given problem. A given task is divided into multiple sub-tasks using a divide-and-conquer technique, and each sub-task is processed on a different CPU. Programming a multiprocessor system using this divide-and-conquer technique is called parallel processing.
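The divide-and-conquer pattern described above can be sketched as follows. This is an illustrative example rather than anything from the original text: the task (finding the largest value in a list) is divided into sub-tasks, each sub-task runs as a separate process, and the partial answers are combined. The names `largest` and `parallel_max` are hypothetical names chosen for this sketch.

```python
# A sketch of divide-and-conquer parallel processing:
# divide the task into sub-tasks, process each on a different CPU,
# then combine the partial results.
import multiprocessing as mp

def largest(chunk):
    """Sub-task: find the largest value in one chunk of the data."""
    return max(chunk)

def parallel_max(values, n_subtasks=4):
    # Divide: split the task into roughly equal sub-tasks.
    step = max(1, len(values) // n_subtasks)
    chunks = [values[i:i + step] for i in range(0, len(values), step)]
    # Conquer: each sub-task is processed as a separate process.
    with mp.Pool(processes=len(chunks)) as pool:
        partial = pool.map(largest, chunks)
    # Combine: merge the sub-task results into the final answer.
    return max(partial)

if __name__ == "__main__":
    print(parallel_max([7, 42, 3, 19, 88, 5, 61, 2]))  # prints 88
```

The same divide/conquer/combine structure applies whatever the sub-task is; only the per-chunk function and the combining step change from problem to problem.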
The development of parallel processing is influenced by many factors. The prominent among them include the following:
Computational requirements are ever increasing, in the areas of both scientific and business computing. The technical computing problems that require high-speed computational power are related to life sciences, aerospace, geographical...