Parallel Computing at a Glance
It is now clear that silicon-based processor chips are reaching their physical limits in processing speed, as they are constrained by the speed of electrical signals (ultimately the speed of light) and by certain laws of thermodynamics. A viable solution to overcome this limitation is to connect multiple processors working in coordination with one another to solve grand challenge problems. Hence, high performance computing requires the use of Massively Parallel Processing (MPP) systems containing thousands of powerful CPUs. A dominant representative computing system (hardware) built using the MPP approach is C-DAC's PARAM supercomputer.

By the end of this century, all high performance systems will be parallel computer systems, and high-end supercomputers will be MPP systems having thousands of interconnected processors. To perform well, these parallel systems require an operating system radically different from current ones. Most researchers in the field of operating systems (including the PARAS microkernel designers!) have found that these new operating systems will have to be much smaller than traditional ones to achieve the efficiency and flexibility needed. The solution appears to be a new kind of OS that is effectively a compromise between having no OS at all and having a large monolithic OS that does many things that are not needed. At the heart of this approach is a tiny operating system core called a microkernel. Dominant representative operating systems built using the microkernel approach are Mach and C-DAC's PARAS microkernel.

This chapter presents an overview of parallel computing in general and correlates those concepts to PARAM and PARAS, developed by the Centre for Development of Advanced Computing (C-DAC). It starts with a discussion of the need for parallel systems for High Performance Computing and Communication (HPCC). It also presents an overview of the PARAM family of supercomputers, with the PARAS operating environment for the respective representative systems.
Thus, it brings out the four important elements of computing: hardware architectures, system software, applications, and problem solving environments.
1.1 History of Parallel Computing
The history of parallel processing can be traced back to a tablet dated around 100 BC. The tablet had three calculating positions capable of operating simultaneously. From this we can infer that these multiple positions were aimed at providing either reliability or high-speed computation through parallelism. Just as we learned to fly not by constructing machines that flap their wings like birds, but by applying the aerodynamic principles demonstrated by nature, so we model parallel processing on examples found in nature. The feasibility of parallel processing is demonstrated by the neurons in the brain: the aggregate speed with which they carry out complex calculations is tremendously high, even though the response of an individual neuron is slow, on the order of milliseconds.
Eras of Computing
The two most prominent eras of computing are the sequential and parallel eras. In the past decade, parallel machines have become significant competitors to vector machines in the quest for high performance computing. A century-wide view of the development of the computing eras is shown in Figure 1.1. Each computing era starts with development in hardware architectures, followed by system software (particularly compilers and operating systems), then applications, and reaches its saturation point with growth in problem solving environments. Every element of computing undergoes three phases: R & D, commercialization, and commodity.
1.2 What is Parallel Processing?
Processing of multiple tasks simultaneously on multiple processors is called parallel processing. A parallel program consists of multiple active processes simultaneously solving a given problem. A...
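As an illustrative sketch (not from the original text), the idea of multiple active processes cooperating on one problem can be shown in Python using the standard multiprocessing module. The function names here (chunk_sum, parallel_sum) are our own for illustration: the data is divided among worker processes, each computes a partial sum, and the partial results are combined.

```python
# A minimal sketch of parallel processing, assuming a divide-and-combine
# style problem: several worker processes each sum one chunk of a list.
from multiprocessing import Pool

def chunk_sum(chunk):
    # Each active process computes its partial result independently.
    return sum(chunk)

def parallel_sum(data, workers=4):
    # Divide the data into one chunk per worker process.
    size = (len(data) + workers - 1) // workers
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    with Pool(processes=workers) as pool:
        # The partial sums from all processes are combined into the answer.
        return sum(pool.map(chunk_sum, chunks))

if __name__ == "__main__":
    data = list(range(1000))
    assert parallel_sum(data) == sum(data)
```

The same pattern generalizes to any associative combining operation; the essential point is that the active processes run simultaneously on multiple processors rather than one after another.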