OS kernels:

a little overview and comparison


Preface

There are different types of operating system models around. Some are used in existing commercial/freeware operating systems, and others are being invented at universities in development projects. All OS types have their strong sides and their weaknesses, making them suited for different types of hardware or purposes. Of course, computers have changed a lot, so kernels have changed too. Older OSs are still based on the low-performance hardware of the sixties and seventies, but they do deliver stability, while newer OSs need the power of modern computers and still have to prove themselves.

Talking about the weak and strong sides of OSs is difficult, because most OSs are targeted at a specific group of users or applications, or are used on a specific base of computers. There are OSs that claim to be general-purpose, but the first OS that can handle all tasks as efficiently as one would like has still to be written.

First, some words about the meaning of "kernel". Operating systems can be written so that most services are moved outside the OS core and implemented as processes. This OS core then becomes a lot smaller, and we call it a kernel. When this kernel only provides the basic services, such as basic memory management and multithreading, it is called a microkernel, or even a nanokernel for the super-small ones. To stress the difference with the Unix type of OS, the Unix-like core is called a monolithic kernel. A monolithic kernel provides full process management, device drivers, file systems, network access etc. I will use the word kernel here in the broad sense, meaning the part of the OS supervising the machine.

Most kernels offer two basic encapsulations of programs. The terms used to describe these differ between OSs, but I will use these:

  • processes : a process has a certain (protected) memory area of its own. It has a status that can be: running, waiting for some event, etc.
  • threads : a thread is part of a process; it encapsulates an execution flow. Each thread has its own set of registers (thus, its own virtual CPU), but all threads in a process share the same memory space. See the sketch after this list.
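
To make this concrete, here is a minimal C sketch using POSIX threads (assuming a Unix-like system with pthreads): the two threads each get their own registers and stack, but they increment the same global variable, because all threads in a process share one memory space.

    #include <pthread.h>
    #include <stdio.h>

    static int counter = 0;            /* lives in the process's single memory space */
    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

    static void *worker(void *arg) {
        /* Each thread runs on its own "virtual CPU" (registers, stack),
           but this global variable is shared between the threads. */
        for (int i = 0; i < 100000; i++) {
            pthread_mutex_lock(&lock);
            counter++;
            pthread_mutex_unlock(&lock);
        }
        return NULL;
    }

    int main(void) {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, worker, NULL);
        pthread_create(&t2, NULL, worker, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        printf("counter = %d\n", counter);   /* prints 200000 */
        return 0;
    }

Both threads end up having written to the same variable: one process, one memory space, two execution flows.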

This leads us to the second important feature in operating system kernels: memory spaces. Each process has its own protected (whatever this means in different OSs) memory space it runs in. It can share parts of this memory with other processes. Some OSs use a shared memory space for all processes; this, however, means less or no protection, like in DOS.
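
The contrast with threads can be shown with another small C sketch (again assuming a Unix-like system, here with fork() and mmap()): after fork() each process has its own copy of ordinary memory, and only a region explicitly mapped as shared is visible to both.

    #include <stdio.h>
    #include <sys/mman.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void) {
        /* A region both processes will share, mapped before fork(). */
        int *shared = mmap(NULL, sizeof(int), PROT_READ | PROT_WRITE,
                           MAP_SHARED | MAP_ANONYMOUS, -1, 0);
        int private_var = 0;       /* ordinary memory: each process gets its own copy */

        if (fork() == 0) {         /* child process */
            *shared = 42;          /* visible to the parent */
            private_var = 42;      /* only changes the child's copy */
            _exit(0);
        }
        wait(NULL);
        printf("shared = %d, private = %d\n", *shared, private_var);  /* 42, 0 */
        munmap(shared, sizeof(int));
        return 0;
    }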

When it comes to existing "modern" and wide-spread operating systems, only monolithic kernels and microkernels are used. There are, however, different types of microkernel architectures. Of course, lots of large mainframes in the world still run the good old VMS and other old operating systems. Unix, the OS with a monolithic kernel, was based on the ideas of the first successful OS: Multics.



The OS without kernel

Not all operating systems have a "kernel" which is protected from user programs and which manages the hardware and the user programs. Some operating systems, the early ones, just provided some interface to the hardware that programs could run on, but didn't protect themselves from these programs and didn't protect the programs from each other. Thus, the user could do with the machine what he wanted and access the hardware directly. The strong side of this "architecture" is the speed of the system, but it can of course only be used as a personal system for one user. It's not stable at all when running several programs at once (if that is possible at all).



OS with ring structure, Multics

Then people found out about "protection"... The OS was divided into several rings with different privileges. The parts of the OS that needed to access the hardware and that provided the basic metaphors of processes, memory and devices ran in ring 0, some system tasks ran in ring 1, etc. The normal user processes run in the ring with the lowest privileges. This means a process running in a certain ring cannot harm the processes in a ring with more privileges. Multics was the OS that brought these ideas to us, and it formed the base for all later operating systems up to now. This architecture of course offers a lot more stability and security than the earlier architectures, and it is able to provide multitasking and multi-user facilities. It can, however, only be implemented on a system if the hardware (mainly the CPU and the MMU) provides a kind of protection facility.



OS with a kernel

This architecture evolved into an OS design with two rings: one ring running in system mode, and one ring running in user mode. The kernel has full control of the hardware and provides abstractions for the processes running in user mode. A process running in user mode cannot access the hardware, and must use the abstractions provided by the kernel. It can call certain services of the kernel by making "system calls" or kernel calls. The kernel only offers the basic services; all others are provided by programs running in user mode. These system calls generally form the whole interface between user programs and the kernel.
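
A minimal C sketch of this boundary (the explicit syscall() form is Linux-specific): the program cannot drive the screen hardware itself, so it asks the kernel to write for it through a system call.

    #include <unistd.h>
    #include <sys/syscall.h>

    int main(void) {
        const char msg[] = "hello from user mode\n";
        /* The usual way: the libc wrapper write() traps into the kernel. */
        write(STDOUT_FILENO, msg, sizeof(msg) - 1);
        /* The same kernel call made explicitly by system-call number
           (Linux-specific; SYS_write comes from <sys/syscall.h>). */
        syscall(SYS_write, STDOUT_FILENO, msg, sizeof(msg) - 1);
        return 0;
    }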



Monolithic kernels

This type of kernel can easily be described as "the big soup". The older monolithic kernels were written as a mixture of everything the OS needed, without much organization. The monolithic kernel offers everything the OS needs: processes, memory management, multiprogramming, interprocess communication (IPC), device access, file systems, network protocols and whatever else the OS should implement.
Newer monolithic kernels have a modular design, which offers run-time adding and removal of services. The whole kernel runs in "kernel mode", a processor mode in which the software has full control over the machine. The processes running on top of the kernel run in "user mode", in which programs only have access to the kernel's services.
The monolithic kernels (mainly Unix) were built for systems that don't have to be turned on and off every day or hour, and for systems to which no devices are added during their lifetime and whose hardware is never changed. The manufacturer tailors the OS to the machine and then sells the machine with the OS installed once and for all.
This leads to a very stable system, but it is not very suited for PCs, where devices are added or removed and which reboot every day. It is a fact, however, that it's just these OSs that now even offer hot-swapping of devices etc.; e.g., the new Solaris can even hot-swap CPU boards! The main strong sides of monolithic systems are that they are extremely stable and that they achieve a very high speed.
Also, they have been in the world for a long time (Unix was developed in '69, first as a single-user OS for the minicomputer).



Microkernels

Microkernel designs put a lot of OS services in separate processes, which can be started or stopped at runtime. This makes the kernel a lot smaller and offers far greater flexibility.
File systems, device drivers, process management and even part of the memory management can be put in processes running on top of the microkernel. This architecture is actually a client-server (what a buzz-word :-) ) model: processes (clients) can call OS services by sending requests through IPC to server processes. E.g., a process that wants to read from a certain file sends a request to the file system process. The central processes that provide the process management, file system etc. are frequently called the servers. Microkernels are often also highly multithreaded, putting each different service in a different thread, offering greater speed and stability. The main difficulty for microkernels is then to make the IPC as fast as possible. This was a design problem in early microkernel designs, because IPC, while intended to be the power of the architecture, often proved to be the bottleneck. Now, however, microkernels do offer fast IPC. The sketch below illustrates the request/reply idea.
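
A small C sketch of this client-server idea, using POSIX message queues to stand in for the microkernel's IPC primitive (the queue name and the request format are invented for illustration; on Linux, link with -lrt):

    #include <fcntl.h>
    #include <mqueue.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/wait.h>
    #include <unistd.h>

    #define REQ_QUEUE "/fs_requests"     /* hypothetical server queue name */

    struct fs_request {                  /* invented request format */
        int  op;                         /* 0 = read */
        char path[64];
    };

    int main(void) {
        struct mq_attr attr = { .mq_maxmsg = 8,
                                .mq_msgsize = sizeof(struct fs_request) };
        mq_unlink(REQ_QUEUE);
        mqd_t q = mq_open(REQ_QUEUE, O_CREAT | O_RDWR, 0600, &attr);

        if (fork() == 0) {               /* client process */
            struct fs_request req = { .op = 0 };
            strcpy(req.path, "/etc/motd");
            /* The "system call" is just a message to the server. */
            mq_send(q, (const char *)&req, sizeof(req), 0);
            _exit(0);
        }

        /* "File system server" process: receive and handle one request. */
        struct fs_request req;
        mq_receive(q, (char *)&req, sizeof(req), NULL);
        printf("server: read request for %s\n", req.path);
        wait(NULL);
        mq_unlink(REQ_QUEUE);
        return 0;
    }

In a real microkernel the kernel itself carries the message from client to server; here the POSIX queue just plays that role.
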
When talking about microkernels, one must really make a clear distinction between the first and the second generation. The first-generation microkernels, like Mach, are fat and provide lots of services, or multiple ways to do the same thing. The second-generation microkernels follow the "pure" microkernel idea more closely: kernels with a very small footprint, only offering the abstractions really needed, with a clean and unambiguous ABI. Examples of the second generation are L4 and QNX.



The Exokernel

The Exokernel lets user programs override the standard code exported by the system and the kernel itself. This leads to very fast operation, because a programmer will know how to implement his specific algorithm as fast as possible. So, exokernels let normal users take over the functionality of the kernel. But the weakness is safety: how to restrict what user programs can do. This means another boom in kernel size when protection algorithms have to be implemented. Also, exokernels use a "soft" protection boundary, trying to force the programmer to use only safe languages.
This depends on the integrity of the programmers, and faulty programs can weigh heavily on the fault tolerance of the system.



Cache kernel

The cache kernel (developed at Stanford) is based on the idea of caching. It only caches threads, memory spaces, interprocess communication and kernels. This cache kernel executes in kernel mode. With each cached object, an application kernel is associated. This application kernel runs in user mode and provides the management of the address spaces and thread scheduling. So, every application kernel can do thread scheduling the way it wants (e.g. the Unix scheduling) and it can manage the memory how it wants. This is a very short description, but it makes clear that the Cache Kernel also offers the extensibility of exokernels. Cache kernels provide a "hard" protection boundary: they use the kernel-mode/user-mode convention.


In my personal opinion, however, well-designed microkernels can also offer the extensibility needed while retaining the stability. The big example is QNX. Most OS designers still say microkernels don't offer the performance one would want, but this is not true: a microkernel with a good design can perform as well as a monolithic OS or as the Exokernel.



Unix

Unix uses a monolithic design, implementing all services in the kernel. It offers great stability and speed, but it is technically outdated. There are of course now some enhancements to monolithic kernels that make them more modern.
Modules, pieces of code that can be plugged into the kernel, offer run-time extensibility like microkernels do, but these modules also run in kernel mode, so they must be secure and stable.
Linux is of course the hype of the moment, with its stability, its speed and its power, but from an architectural point of view it is outdated. (Not that the part of the OS running on top of the kernel is outdated; no, Linux's (and Unix's in general) programs offer the most high-tech in the world, but the idea of a monolithic kernel itself is outdated.)
Linux uses a modular design, which lets you slot drivers and other modules into the kernel at run-time, as the skeleton below shows. The downside is that all drivers, even those not needing access to the hardware, run in full kernel mode; they are not protected from each other, and the kernel is not protected from them.
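
A minimal sketch of such a module, following the standard Linux "hello world" module skeleton (build details depend on the kernel version): once loaded with insmod, this code runs in full kernel mode, with nothing protecting the rest of the kernel from it; rmmod unloads it again at run-time.

    /* hello.c - minimal Linux loadable kernel module skeleton */
    #include <linux/init.h>
    #include <linux/kernel.h>
    #include <linux/module.h>

    MODULE_LICENSE("GPL");

    static int __init hello_init(void)   /* called at insmod time */
    {
        printk(KERN_INFO "hello: now running in kernel mode\n");
        return 0;
    }

    static void __exit hello_exit(void)  /* called at rmmod time */
    {
        printk(KERN_INFO "hello: removed from the kernel\n");
    }

    module_init(hello_init);
    module_exit(hello_exit);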



Mach

Mach was the first microkernel architecture with a lot of impact; most modern operating systems are based on its ideas. It was developed at CMU. Mach offers everything discussed in the microkernel section. It is a microkernel of the first generation and is very large, providing more than one way to do something. Mach has very powerful memory management, but this enlarges the kernel. The Mach memory management was designed to handle very large and sparse memory spaces, providing memory-mapped files etc.
Mach is a message-passing microkernel based on the client-server ideas (well, most microkernels are, but Mach was really the big inventor).
So-called servers extend the functionality of the kernel (file systems etc.). These servers can run in a separate memory space, but also in the kernel memory space, offering greater speed for vital services (device drivers etc.). A Unix/POSIX server was thus implemented on Mach, so that all Unix programs can run on Mach without needing lots of porting work.
All abstractions provided by the kernel are represented as objects, so Mach is fully object-oriented (although written completely in C). Mach uses the idea of "ports" to implement the access to these objects. Processes can send messages to objects through these ports, but only if they have access to them. The messages are then sent through the port to the owner of the port. So, Mach is a message-passing microkernel. Message-passing is the way the IPC works between processes, but also how processes request services from servers and do system calls.
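
A sketch of the port idea in C, using the Mach API as it survives on Mach-derived systems such as macOS (the message layout here is invented for illustration; the port calls and mach_msg() are the real API): a task allocates a port and sends itself a message through it.

    #include <mach/mach.h>
    #include <stdio.h>

    typedef struct {                   /* invented message: header + one int */
        mach_msg_header_t header;
        int               payload;
    } simple_msg_t;

    typedef struct {                   /* receive side needs room for the trailer */
        mach_msg_header_t  header;
        int                payload;
        mach_msg_trailer_t trailer;
    } simple_msg_rcv_t;

    int main(void) {
        mach_port_t port;
        /* Ask the kernel for a port; this task holds the receive right. */
        if (mach_port_allocate(mach_task_self(), MACH_PORT_RIGHT_RECEIVE,
                               &port) != KERN_SUCCESS)
            return 1;

        simple_msg_t msg = {0};
        msg.header.msgh_bits        = MACH_MSGH_BITS(MACH_MSG_TYPE_MAKE_SEND, 0);
        msg.header.msgh_size        = sizeof(msg);
        msg.header.msgh_remote_port = port;           /* destination port */
        msg.header.msgh_local_port  = MACH_PORT_NULL; /* no reply wanted */
        msg.header.msgh_id          = 42;
        msg.payload                 = 1234;

        /* Send the message to the port, then receive it back from it. */
        mach_msg(&msg.header, MACH_SEND_MSG, sizeof(msg), 0,
                 MACH_PORT_NULL, MACH_MSG_TIMEOUT_NONE, MACH_PORT_NULL);

        simple_msg_rcv_t rcv;
        mach_msg(&rcv.header, MACH_RCV_MSG, 0, sizeof(rcv),
                 port, MACH_MSG_TIMEOUT_NONE, MACH_PORT_NULL);
        printf("got message id=%d payload=%d\n", rcv.header.msgh_id, rcv.payload);
        return 0;
    }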



Windows NT

Windows NT uses a HAL (hardware abstraction layer) that implements the hardware-dependent part of the OS. This HAL provides functions to access the hardware in a generic way, which allows for a hardware-independent kernel. The NT HAL can also emulate parts of the hardware if the actual hardware isn't available. A microkernel runs on top of this HAL, based on Mach ideas. Then, running on top of the microkernel, but also in supervisor mode, are the servers: the object manager, I/O manager, process manager, file system, GDI etc. The I/O manager contains the device drivers, which access the hardware through the HAL. All these servers, together with the microkernel, form the Windows NT executive.
The servers run, just like the kernel, in kernel mode. This leads to fast I/O and system services, but the downside is that all drivers have access to the whole system. What then is the difference with Linux, which offers as much extensibility with its modular design?
Well, the servers/drivers in the NT executive all run in separate memory spaces, so they can't interfere with each other and can't play with the kernel's memory.
Windows NT is object-oriented, just like Mach. All system services and abstractions are represented as objects, and programs must have the right access rights to use them. The Windows NT memory management is also much like that of Mach, using the same objects, just with other names :-).
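
A small Win32 C sketch of this object/handle model (the file path is just an example): opening a file asks the object manager for a handle, and the access rights requested (GENERIC_READ here) are checked before the handle is granted.

    #include <windows.h>
    #include <stdio.h>

    int main(void) {
        /* CreateFile returns a handle to a kernel file object; the object
           manager verifies our requested access rights first. */
        HANDLE h = CreateFileA("C:\\Windows\\win.ini", GENERIC_READ,
                               FILE_SHARE_READ, NULL, OPEN_EXISTING,
                               FILE_ATTRIBUTE_NORMAL, NULL);
        if (h == INVALID_HANDLE_VALUE) {
            printf("no access or not found (error %lu)\n", GetLastError());
            return 1;
        }
        char  buf[64];
        DWORD n = 0;
        ReadFile(h, buf, sizeof(buf), &n, NULL);  /* handled by the I/O manager */
        printf("read %lu bytes through the handle\n", n);
        CloseHandle(h);           /* drop our reference to the kernel object */
        return 0;
    }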



QNX

QNX uses a very small microkernel (only 32 KB!) that provides in-place execution, so it is suited for embedded systems too. It is very stable, and the overall code size of the system is very small. This small kernel and code size of course allows for very easy development of both kernel and programs. The microkernel only implements multithreading, interrupt handling, IPC and memory management. A process manager thread (32 KB) extends the kernel and also runs in kernel mode. All other drivers, servers and user programs run in user mode as normal user processes. The scheduling algorithm and overall design make it a real-time OS; when speed is your need... You can easily scale QNX from a coffee machine to a huge SMP server if you want. What is Microsoft talking about with "scalability"? QNX can easily recover from faulting drivers or system processes without even rebooting, and you can add and change even system DLLs (shared libraries) without rebooting. This is what we call real extensibility and flexibility. QNX has a really very clean design.
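
A sketch of QNX-style synchronous message passing, using the QNX Neutrino C API from <sys/neutrino.h> (the message strings are invented, and a server thread stands in for a separate server process): the client's MsgSend() blocks until the server has done MsgReceive() and MsgReply(), the send/receive/reply model at the heart of the QNX kernel.

    #include <sys/neutrino.h>
    #include <pthread.h>
    #include <stdio.h>

    static int chid;                       /* channel the server receives on */

    static void *server(void *arg) {
        char buf[64];
        /* Block until a client sends; rcvid identifies the blocked sender. */
        int rcvid = MsgReceive(chid, buf, sizeof(buf), NULL);
        printf("server got: %s\n", buf);
        MsgReply(rcvid, 0, "done", 5);     /* unblocks the client */
        return NULL;
    }

    int main(void) {
        chid = ChannelCreate(0);
        pthread_t t;
        pthread_create(&t, NULL, server, NULL);

        /* Client: attach a connection to the channel (node 0, pid 0 = self). */
        int coid = ConnectAttach(0, 0, chid, _NTO_SIDE_CHANNEL, 0);
        char reply[16];
        /* Send blocks until the server replies. */
        MsgSend(coid, "read /etc/motd", 15, reply, sizeof(reply));
        printf("client got reply: %s\n", reply);
        pthread_join(t, NULL);
        return 0;
    }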

Visit QNX



BeOS

BeOS also uses a Mach-like microkernel. It uses pervasive multithreading, which means it uses lots of threads, one for each possible job, so it has a threaded design throughout the system. This makes BeOS very fast and very useful for multimedia applications. It is ready for multiprocessing. BeOS also uses a 64-bit file system and direct graphics access, bypassing the graphical subsystem. BeOS has a simple API, and it has a POSIX interface too, with all the standard Unix utilities and a bash shell. It is said to boot in a few seconds, unlike other operating systems.

Visit Be



L4/Fiasco

L4 is a microkernel of the second generation; it is the successor to L3. Fiasco is a new L4-compatible kernel. The Fiasco kernel can be preempted at almost any time, which leads to short response times for high-priority threads, so Fiasco is suited for real-time systems.

Visit Fiasco



Minix

Minix is a Unix-like operating system developed for teaching OS design at universities. Well, actually, it is not like Unix at all: it uses a modular design, more like a microkernel. All system services are implemented as threads running on the kernel. The drivers also run as threads, in kernel mode, statically linked. So, its design is much more modern than Unix's, more like NT's structure, but with the file system and user interface running in user mode. And because it's really very small, it's easy to program and to teach with.



Notes

  1. This is not a very in-depth look at specific operating systems. Visit their web sites for more information.
  2. When I write "he" in this document, I mean he or she :-)
  3. If you think I made a fundamental or other error, please mail me and discuss it.

(C)1998 Pieter Dumon
This document can be distributed freely but not altered.
Suggestions are welcome.
Last updated: 26/11/1997
Pieter.Dumon@rug.ac.be
http://studwww.rug.ac.be/~pdumon/