Intro
Most OSs today use very similar CPU time scheduling algorithms, all based
on the same basic ideas, but with OS-specific adaptations and extensions.
What follows is a rough description of those basic ideas.
It should be noted that this algorithm is not the best algorithm we can
imagine, but it is, as shown mathematically and by experience in the early
days of OS programming (the sixties and seventies), the closest we can get
to the 'best' algorithm. Perhaps some day, when computers are more
powerful, we might implement the ideal CPU time scheduler.
Another remark is that this algorithm is designed for general-purpose
computers. Special-purpose OSs or systems, and some real-time systems,
use very different algorithms.
I won't review all possible algorithms here, only those relevant to the
topic.
Shortest Job First
The SJF algorithm runs the processes that need the shortest CPU time first.
Mathematically, and in line with experience, this is the ideal scheduling
algorithm. I won't give details about the performance here. It all comes
down to overhead and response time, actually: the system will be fast when
the scheduler doesn't take much of the CPU time itself, and when
interactive processes react quickly (get a fast response). This system
would be very good. Would be... The overhead caused by the algorithm
itself is huge, however.
The scheduler would have to implement some way to time and predict the CPU
usage of processes, or the user would have to tell the scheduler how long a
job (a word that really comes from very early computer design, when batch
job OSs were used :-)) would take. So it is impossible to implement this
algorithm without hurting performance very badly.
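Just to make the idea concrete, here is a minimal sketch (in C, with
made-up names, and assuming we somehow know or can predict each job's CPU
time, which is exactly the problem) of what SJF selection would look like:

    #include <stddef.h>

    struct job {
        int id;
        int predicted_time;  /* how much CPU time we think the job still needs */
        int ready;           /* nonzero if the job wants the CPU */
    };

    /* Pick the ready job with the shortest predicted CPU time.
       Returns NULL if no job is ready. */
    struct job *sjf_pick(struct job *jobs, size_t n)
    {
        struct job *best = NULL;
        for (size_t i = 0; i < n; i++) {
            if (!jobs[i].ready)
                continue;
            if (best == NULL || jobs[i].predicted_time < best->predicted_time)
                best = &jobs[i];
        }
        return best;
    }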
Round-robin
I know I'm not following the historical line, but...
Round-robin scheduling is really the easiest way of scheduling. All
processes form a circular array and the scheduler gives control to each
process in turn. It is of course very easy to implement and causes almost
no overhead compared to all other algorithms. But response time is very
poor for the processes that need it most. Of course it is not the
algorithm we want, but it can still be useful. This algorithm is not the
best at all for general-purpose OSs, but it is useful for batch-processing
OSs, in which all jobs have the same priority, and in which response time
is of minor or no importance.
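A minimal sketch of the circular-array idea (the names and fields are just
mine, for illustration):

    #include <stddef.h>

    struct process {
        int id;
        int ready;   /* nonzero if the process can run */
    };

    /* Round-robin: starting just after the last scheduled process,
       walk the array circularly and return the next ready one. */
    struct process *rr_pick(struct process *procs, size_t n, size_t *last)
    {
        for (size_t i = 1; i <= n; i++) {
            size_t idx = (*last + i) % n;
            if (procs[idx].ready) {
                *last = idx;
                return &procs[idx];
            }
        }
        return NULL;   /* nothing is ready */
    }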
And this notion of priority leads us to the next way of scheduling.
Priority scheduling
Every process gets assigned a priority. The process with the highest
priority that requests CPU time always gets it. Interactive processes
could be assigned a high priority, so they get a fast response.
Processes that do heavy computing and the like get a lower priority.
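A minimal sketch of the selection rule (names are mine; here a larger
number means a higher priority, though as remarked later, real OSs differ
on that convention):

    #include <stddef.h>

    struct process {
        int id;
        int priority;   /* larger number = higher priority in this sketch */
        int ready;      /* nonzero if the process requests CPU time */
    };

    /* Pick the ready process with the highest priority. */
    struct process *prio_pick(struct process *procs, size_t n)
    {
        struct process *best = NULL;
        for (size_t i = 0; i < n; i++) {
            if (!procs[i].ready)
                continue;
            if (best == NULL || procs[i].priority > best->priority)
                best = &procs[i];
        }
        return best;
    }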
This algorithm is an approximation of SJF, because interactive processes
are short (e.g. they read a key that was entered and then block again) and
are given a fast response. Computing processes run longer and don't need
the fast response. But how do we know which processes are interactive?
And how do we regulate how much CPU time a process gets? The answer is:
Multiple Priority Queues
There we are. MPQ is roughly the algorithm used by most operating systems
today.
The scheduler keeps a list of process queues. Every queue gets a priority
assigned. Processes start in a priority queue given by the user or the
OS.
But first a remark: different OSs use different numbers to represent
priorities. In Unix, for example, the highest priority (which is chosen
first) is 0, and lower priorities have larger numbers. In other OSs this
is just inverted. Different OSs also use a different number of priority
levels, depending on the situations or purposes they are designed for.
Well, processes in queues with a higher priority get less CPU time, i.e. a
smaller 'time quantum', than processes in lower-priority queues. In this
way, interactive processes get less CPU time and computing processes more.
But how do we know which processes are interactive and which are not? There
is an easy solution: let processes move from queue to queue. When a
process blocks before its time quantum is spent, the scheduler will
increase its priority. So interactive processes, which normally just read
some input and then block again, will automatically be promoted to higher
priorities. Processes that use up their quantum get a lower priority, so
computing processes, which take all the CPU time they can get, will
automatically move to the lower priorities. Then, if there are multiple
processes in a queue that are ready to be executed, they are scheduled
round-robin. This is useful, because all processes in one queue are
equally important.
This system is the best approximation of SJF, and so it is the best we
can use. It has very little overhead and gives interactive processes a
fast response.
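To make the feedback idea concrete, here is a minimal sketch (names and
numbers are made up, not any particular OS's implementation) of the
promote/demote rule:

    #define NQUEUES 8   /* number of priority levels; the real number is OS-dependent */

    struct process {
        int id;
        int queue;      /* index of its current queue; 0 = highest priority */
    };

    /* Higher-priority (interactive) queues get a smaller time quantum,
       lower-priority (computing) queues a larger one. Numbers are made up. */
    static int quantum_for(int queue)
    {
        return 10 * (queue + 1);   /* e.g. 10 ms, 20 ms, ... */
    }

    /* Feedback rule applied when a process leaves the CPU:
       - it blocked before its quantum was spent -> promote it (higher priority),
       - it used up its whole quantum            -> demote it (lower priority).
       Processes inside one queue are then served round-robin. */
    static void requeue(struct process *p, int time_used, int blocked)
    {
        if (blocked && time_used < quantum_for(p->queue)) {
            if (p->queue > 0)
                p->queue--;   /* interactive behaviour: move up */
        } else if (p->queue < NQUEUES - 1) {
            p->queue++;       /* CPU-bound behaviour: move down */
        }
    }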
Some more enhancements are added to this algorithm:
real-time processes can get a fixed, high priority, so that they are
always chosen when they need to be. And drivers can get a fixed priority
in the higher regions, below the real-time processes and above the normal
processes.
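Just as an illustration of what such fixed bands could look like (the
ranges are completely made up; smaller = higher priority, Unix-style):

    /*   0 .. 15   real-time processes  (fixed priority, never moved)
        16 .. 31   drivers              (fixed priority)
        32 .. 63   normal processes     (moved up and down by the feedback rule) */
    enum { RT_BASE = 0, DRIVER_BASE = 16, NORMAL_BASE = 32, PRIO_LEVELS = 64 };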
Inheritance scheduling
This is the algorithm described in a paper from CMU. Processes can give
their CPU time to "child" processes and as such act as schedulers themselves.
In this way, multiple types of scheduling can be implemented: a scheduler for
real-time processes, a scheduler for interactive processes, one for batch
processing, etc. The "root" scheduler is the basic scheduler implemented by
the OS itself, which schedules both ordinary processes and scheduler
processes. While the individual scheduler processes may each provide a
different way of scheduling, the "root" scheduler still uses some form of MPQ.
Notes
- I hate the word 'algorithm'.
- When I use the word "process" in this particular document, and only in
this one, you can substitute it with "thread" or "program" or whatever.