CPU scheduling is the process by which an operating system decides which program gets to use the CPU next. Although there are many possible ways to decide this, the major modern operating systems all work in broadly similar ways. First, one must understand that while every computer task takes CPU time, some tasks need a great deal of it and some need only a little before turning to I/O devices or waiting for user input. When a process waits like that, it usually releases the CPU and lets other processes use it.
In most systems, each process has a priority number, although the numbers mean different things in different systems. In Unix, for example, priority 0 is the highest, and a larger priority number means a lower priority. When a process is started, it is assigned a priority number, and it waits until no higher-priority process needs the CPU before it gets its turn. Typically the CPU is given to a process, and after a "time slice" has elapsed, the scheduler re-evaluates the runnable processes to decide which one "deserves" the CPU next. Often the priority of a process is lowered as it consumes CPU time, so that a "greedy" process cannot make the system too slow for the other processes. It is all a balancing act, and the better it is done, the more responsive the computer appears to its user or users.
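The ideas above can be sketched in code. This is a minimal, hypothetical illustration of Unix-style priority numbers (lower number = higher priority) and of lowering a process's priority after it uses the CPU; the process names, dictionary fields, and the simple `+1` penalty are assumptions for illustration, not any real kernel's policy.

```python
def pick_next(processes):
    """Return the runnable process with the best (lowest) priority number."""
    runnable = [p for p in processes if p["state"] == "runnable"]
    return min(runnable, key=lambda p: p["priority"]) if runnable else None

def charge_cpu(process, penalty=1):
    """Lower a process's priority after it uses a time slice, so a
    CPU-greedy process cannot starve the others (bigger number = lower
    priority, as in Unix)."""
    process["priority"] += penalty

# Hypothetical process table; "daemon" is sleeping (e.g. waiting on I/O),
# so it is skipped even though its priority beats "backup".
procs = [
    {"name": "editor", "priority": 5, "state": "runnable"},
    {"name": "backup", "priority": 10, "state": "runnable"},
    {"name": "daemon", "priority": 7, "state": "sleeping"},
]
chosen = pick_next(procs)
print(chosen["name"])      # editor: lowest priority number among runnable
charge_cpu(chosen)
print(chosen["priority"])  # 6: one slice of CPU use cost it one level
```

After a few slices, `editor` would sink below `backup` and the two would alternate, which is exactly the balancing act described above.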
INTRODUCTION TO ROUND ROBIN
Round robin is one of the oldest, simplest, fairest, and most widely used scheduling algorithms, designed especially for time-sharing systems. A small unit of time, called the time slice or quantum, is defined, and all runnable processes are kept in a circular queue. The CPU scheduler goes around this queue, allocating the CPU to each process for an interval of one quantum; new processes are added to the tail of the queue. The scheduler picks the first process from the queue, sets a timer to interrupt after one quantum, and dispatches the process. If the process is still running when the quantum expires, the CPU is preempted and the process is moved to the tail of the queue. If the process finishes before the quantum expires, it releases the CPU voluntarily. In either case, the scheduler then assigns the CPU to the next process in the ready queue. Every time a process is granted the CPU, a context switch occurs, which adds overhead to the process's execution time.

More generally, a round robin is an arrangement for choosing all elements in a group equally in some rational order, usually from the top of a list to the bottom and then starting again at the top, and so on. A simple way to think of round robin is that it is about "taking turns." (Used as an adjective, round robin becomes "round-robin.") In computer operation, one method of having different processes take turns using the resources of the computer is to limit each process to a certain short time period, then suspend that process to give another process a turn, or "time slice." This is often described as round-robin process scheduling.

PROBLEM AND SOLUTION OF ROUND ROBIN
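The queue mechanics described above can be sketched as a short simulation. This is a minimal sketch, assuming all processes are ready at the start (no mid-run arrivals); the function name and the input format are illustrative choices, not a standard API.

```python
from collections import deque

def round_robin(burst_times, quantum):
    """Simulate round-robin scheduling.

    burst_times: {process_name: required CPU time}, in queue order.
    Returns the execution order as (name, time_run) pairs.
    """
    queue = deque(burst_times.items())
    schedule = []
    while queue:
        name, remaining = queue.popleft()   # first process in the queue
        run = min(quantum, remaining)       # run for at most one quantum
        schedule.append((name, run))
        if remaining - run > 0:             # quantum expired: preempt and
            queue.append((name, remaining - run))  # re-queue at the tail
        # else: the process finished early and released the CPU itself
    return schedule

print(round_robin({"A": 5, "B": 3, "C": 1}, quantum=2))
# [('A', 2), ('B', 2), ('C', 1), ('A', 2), ('B', 1), ('A', 1)]
```

Note how C, needing less than one quantum, finishes on its first turn, while A and B keep rejoining the tail until their bursts are exhausted.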
Explained and Solved: Round Robin (RR), from Operating System Concepts
Round Robin: This method is quite similar to FCFS, but the difference is that here the processor does not run a whole job (process) to completion at once. Instead, it completes one quantum's worth of work per turn and then moves on to the next process, and so on. When every job has had a turn, it starts again from the first job, works for one quantum on each, and proceeds. Now consider a CPU and a list in which the processes appear as follows:
Arrival    Process    Burst Time
0          1          3
1          2          2
2          3          1
Quantum = 2 seconds
Here, Arrival is the time at which the process joined the list, Process is the process number (used instead of a process name), and Burst Time is the amount of CPU time the process requires. The unit of time could be anything (nanosecond, second, minute, and so on); here, consider it to be seconds.
Now, for instance,...
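The table above can be worked through mechanically. This is a sketch, not a definitive implementation; it assumes the common textbook convention that a process arriving during (or exactly at the end of) a quantum enters the queue ahead of the process just preempted. The function name and input format are illustrative.

```python
from collections import deque

def rr_with_arrivals(jobs, quantum):
    """jobs: list of (pid, arrival, burst). Returns {pid: completion time}."""
    jobs = sorted(jobs, key=lambda j: j[1])
    queue, done, t, i = deque(), {}, 0, 0
    while i < len(jobs) or queue:
        if not queue:                       # CPU idle: jump to next arrival
            t = max(t, jobs[i][1])
        while i < len(jobs) and jobs[i][1] <= t:
            queue.append([jobs[i][0], jobs[i][2]]); i += 1
        pid, remaining = queue.popleft()
        run = min(quantum, remaining)
        t += run
        # Processes that arrived during this slice enter the queue first,
        # then the preempted process rejoins at the tail.
        while i < len(jobs) and jobs[i][1] <= t:
            queue.append([jobs[i][0], jobs[i][2]]); i += 1
        if remaining - run > 0:
            queue.append([pid, remaining - run])
        else:
            done[pid] = t
    return done

# The three processes from the table, quantum = 2:
print(rr_with_arrivals([(1, 0, 3), (2, 1, 2), (3, 2, 1)], quantum=2))
# P2 completes at t=4, P3 at t=5, P1 at t=6
```

Tracing it by hand: P1 runs [0, 2) and is preempted with 1 second left; by then P2 and P3 have arrived, so the queue is [P2, P3, P1]. P2 runs [2, 4) and finishes, P3 runs [4, 5) and finishes, and P1 finishes its last second in [5, 6).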