Science Fair Project Encyclopedia
Kernel (computer science)
In computer science, the kernel is the fundamental part of an operating system. It is the piece of software responsible for providing secure access to the machine's hardware for the various programs that run on the computer. Since there are many programs, and access to the hardware is limited, the kernel also decides when, and for how long, a program may use a piece of hardware, in a technique called multiplexing. Accessing the hardware directly can also be very complex, so kernels usually implement a set of hardware abstractions. These abstractions hide the complexity and provide a clean, uniform interface to the underlying hardware, which makes life easier for application programmers.
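The multiplexing idea can be sketched in a few lines: a scheduler grants each program a bounded time slice on a shared resource in turn. This is a toy round-robin policy, not a real kernel API; all names here are illustrative.

```python
from collections import deque

def run_round_robin(programs, total_ticks, slice_len=2):
    """Multiplex one shared resource among programs, round-robin.

    programs: dict of name -> ticks of work remaining.
    Returns the order in which programs held the resource, one entry per tick.
    """
    ready = deque(name for name, work in programs.items() if work > 0)
    remaining = dict(programs)
    timeline = []
    tick = 0
    while ready and tick < total_ticks:
        name = ready.popleft()
        # Grant the resource for at most slice_len ticks.
        for _ in range(slice_len):
            if remaining[name] == 0 or tick >= total_ticks:
                break
            timeline.append(name)
            remaining[name] -= 1
            tick += 1
        if remaining[name] > 0:
            ready.append(name)  # preempted: rejoin the back of the queue

    return timeline

timeline = run_round_robin({"A": 3, "B": 2}, total_ticks=5)
print(timeline)  # A and B alternate in slices of two ticks
```

Each program sees the resource as if it were periodically its own; the kernel's scheduler is what creates that illusion.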
An operating system kernel is not strictly needed to run a computer. Programs can be directly loaded and executed on the "bare metal" machine, provided that the authors of those programs are willing to do without any hardware abstraction or operating system support. This was the normal operating method of many early computers, which were reset and reloaded between the running of different programs. Eventually, small ancillary programs such as program loaders and debuggers were typically left in-core between runs, or loaded from read-only memory. As these were developed, they formed the basis of what became early operating system kernels.
There are four broad categories of kernels:
- Monolithic kernels provide rich and powerful abstractions of the underlying hardware.
- Microkernels provide a small set of simple hardware abstractions and use applications called servers to provide more functionality.
- Hybrid (modified microkernels) are very much like pure microkernels, except that they include some additional code in kernelspace so that it runs more quickly.
- Exokernels provide no abstractions but allow the use of libraries to provide more functionality via direct or nearly direct access to hardware.
Monolithic kernels
The monolithic approach defines a high-level virtual interface over the hardware, with a set of primitives or system calls implementing operating system services such as process management, concurrency, and memory management in several modules that run in supervisor mode.
Even though every module servicing these operations is separate from the whole, the code integration is very tight and difficult to do correctly, and, since all the modules run in the same address space, a bug in one of them can bring down the whole system. However, when the implementation is complete and trustworthy, the tight internal integration of components allows the low-level features of the underlying system to be effectively exploited, making a good monolithic kernel highly efficient. Proponents of the monolithic kernel approach make the case that if code is not correct, it does not belong in a kernel, and if it is correct, there is little advantage to the microkernel approach.
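The system-call boundary of a monolithic kernel can be sketched as a single dispatch table: user code supplies a call number, and the kernel routes it to an in-kernel service routine running alongside all the others. The call numbers and routine names below are invented for illustration; real kernels enter supervisor mode through a trap instruction rather than a function call.

```python
# Hypothetical in-kernel service routines, all sharing one address space.
def sys_getpid(state):
    return state["pid"]

def sys_brk(state, new_end):
    state["heap_end"] = new_end  # memory management lives in the kernel too
    return new_end

SYSCALL_TABLE = {0: sys_getpid, 1: sys_brk}

def syscall(state, number, *args):
    """Entry point: the only gate between user code and kernel services."""
    handler = SYSCALL_TABLE.get(number)
    if handler is None:
        return -1  # akin to ENOSYS: unknown system call number
    return handler(state, *args)

state = {"pid": 42, "heap_end": 0x1000}
print(syscall(state, 0))          # 42
print(syscall(state, 1, 0x2000))  # 8192
print(syscall(state, 99))         # -1
```

Because every handler shares the same address space, a stray write in `sys_brk` could corrupt state used by `sys_getpid`, which is exactly the fragility described above.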
More modern monolithic kernels such as Linux and the FreeBSD kernel can load executable modules at runtime, allowing easy extension of the kernel's capabilities as required, while helping to keep the amount of code running in kernelspace to a minimum.
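Runtime module loading can be sketched as a registry that the kernel consults when it needs a driver: loading a module registers new functionality, unloading removes it, and the core kernel stays small. This is a toy model with invented names, not the interface of any real kernel.

```python
class Kernel:
    """Toy kernel core that can load and unload driver modules at runtime."""
    def __init__(self):
        self.drivers = {}  # device name -> handler function

    def load_module(self, name, handler):
        self.drivers[name] = handler

    def unload_module(self, name):
        self.drivers.pop(name, None)

    def read(self, device):
        handler = self.drivers.get(device)
        if handler is None:
            return None  # no driver loaded for this device
        return handler()

kernel = Kernel()
print(kernel.read("rng"))             # None: driver not loaded yet
kernel.load_module("rng", lambda: 4)  # load a (very deterministic) driver
print(kernel.read("rng"))             # 4
kernel.unload_module("rng")
print(kernel.read("rng"))             # None again
```

The crucial difference from a microkernel is that a loaded module still runs inside the kernel's address space; modularity here is a software-engineering convenience, not a protection boundary.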
Examples of monolithic kernels:
- Traditional Unix kernels, such as the BSD kernels
- The Linux kernel
- The MS-DOS and Microsoft Windows 9x kernels
Microkernels
The microkernel approach consists of defining a very simple abstraction over the hardware, with a set of primitives or system calls implementing minimal OS services such as thread management, address spaces, and interprocess communication.
The main objective is the separation of basic service implementations from the system's operation policy. For example, I/O locking could be implemented by a user-space server running on top of the microkernel. These user servers, which implement the higher-level parts of the system, are very modular and simplify the structure and design of the kernel. A server that fails does not bring the entire system down; such a module can be restarted independently of the rest.
However, part of the system state is lost with the failing server, and it is generally difficult for application software, or even other servers, to continue execution against a fresh copy. For example, if a (theoretical) server responsible for TCP/IP connections is restarted, applications could be told the connection was "lost" and reconnect through the new instance of the server. Other system objects, however, such as files, do not have these convenient semantics: they are supposed to be reliable, not to become unavailable at random, and to keep all the information previously written to them. So database techniques such as transactions, replication, and checkpointing need to be used between servers in order to preserve essential state across single-server restarts. Unless microkernelization is expected to bring nothing more than improved detection of stray pointers, the simplification inside the "kernel" is paid for with much increased complexity above it. This makes microkernelized systems less efficient and more complicated to write. Microkernelized systems have often used only one server to implement a given operating-system personality.
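Both halves of this argument, isolation and state loss, can be sketched together: a toy microkernel that only routes messages can restart a crashed server without taking the system down, but the fresh instance starts with empty state. The class names are invented for illustration.

```python
class CounterServer:
    """A user-space server reached only via messages; holds in-memory state."""
    def __init__(self):
        self.count = 0

    def handle(self, msg):
        if msg == "incr":
            self.count += 1
        return self.count

class Microkernel:
    """Toy microkernel: it only routes messages and restarts dead servers."""
    def __init__(self):
        self.servers = {"counter": CounterServer()}

    def send(self, server_name, msg):
        return self.servers[server_name].handle(msg)

    def restart(self, server_name):
        # A fresh instance: the system survives, but the server's state is lost.
        self.servers[server_name] = CounterServer()

mk = Microkernel()
mk.send("counter", "incr")
mk.send("counter", "incr")
print(mk.send("counter", "get"))  # 2
mk.restart("counter")             # simulate a crash and restart
print(mk.send("counter", "get"))  # 0: state did not survive the restart
```

A real system would need the checkpointing or replication techniques mentioned above to make the second call return 2 again.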
Examples of microkernels and OSs based on microkernels:
- Chorus microkernel
- LSE/OS (a nanokernel)
- KeyKOS (a nanokernel)
- The L4 microkernel family
- Mach, used in GNU Hurd and Mac OS X
- Spring operating system
Monolithic kernels vs. microkernels
Monolithic kernels are often preferred over microkernels because of the lower complexity of dealing with all system control code in one address space. For example, the Mac OS X kernel (XNU), while based on the Mach 3.0 microkernel, includes code from the BSD kernel in the same address space in order to cut down on the latency incurred by the traditional microkernel design.
In the early 1990s, monolithic kernels were considered obsolete. The design of Linux as a monolithic kernel rather than a microkernel was the topic of a famous flame war (or what then passed for flaming) between Linus Torvalds and Andrew Tanenbaum.  
There is merit in both sides of the arguments presented in the Tanenbaum/Torvalds debate.
Monolithic kernels tend to be easier to design correctly, and therefore may grow more quickly than a microkernel-based system. There are success stories in both camps. Microkernels are often used in embedded robotic or medical computers because most of the OS components reside in their own private, protected memory space. This is not possible with monolithic kernels, even with modern module-loading ones.
Although Mach is the best-known general-purpose microkernel, several other microkernels have been developed with more specific aims. L3 was created to demonstrate that microkernels are not necessarily slow. L4 is a successor to L3, and a popular implementation of it called Fiasco is able to run Linux next to other L4 processes in separate address spaces; screenshots on freshmeat.net show this feat. A newer version called Pistachio also has this capability.
QNX is an operating system that has been around since the early 1980s and has a very minimalistic microkernel design. This system has been far more successful than Mach in achieving the goals of the microkernel paradigm. It is used in situations where software is not allowed to fail, ranging from the robotic arms on the Space Shuttle to machines that grind glass, where a tiny mistake may cost hundreds of thousands of dollars.
Many believe that, since Mach basically failed to address the sum of the issues that microkernels were meant to solve, all microkernel technology is useless. Mach enthusiasts state that this is a closed-minded attitude which has become popular enough that many people simply accept it as truth.
Hybrid kernels (modified microkernels)
Hybrid kernels are essentially microkernels that have some "non-essential" code in kernelspace so that it runs more quickly than it would in userspace. This was a compromise struck early in the adoption of microkernel-based architectures, before it was shown that pure microkernels could indeed perform well. Many modern operating systems fall into this category, Microsoft Windows NT and its successors being the most popular examples.
Windows NT's microkernel is called the kernel, while higher-level services are implemented by the NT executive. The Win32 personality was originally implemented as a user-mode server, but in recent versions it has been moved into the supervisor address space. The various servers communicate through a cross-address-space mechanism called Local Procedure Call (LPC), which notably uses shared memory in order to optimize performance.
XNU, the Mac OS X kernel, is also a modified microkernel, owing to the inclusion of BSD kernel code in the Mach-based kernel. DragonFly BSD is the first non-Mach-based BSD OS to adopt a hybrid kernel architecture.
Other hybrid kernels are:
- The BeOS kernel
- The ReactOS kernel
Some people confuse the term "hybrid kernel" with monolithic kernels that can load modules after boot. This is not correct. "Hybrid" implies that the kernel in question shares architectural concepts or mechanisms with both monolithic and microkernel designs: specifically, message passing and the migration of "non-essential" code into userspace, while retaining some "non-essential" code in the kernel proper for performance reasons.
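The performance trade-off behind hybrid kernels can be sketched by contrasting the two call paths for the same service: a direct in-kernel function call versus a round trip through message queues to a user-space server. This is a toy single-threaded model; real systems pay for context switches and data copies, not Python queues.

```python
from queue import Queue

def window_service(request):
    """The service logic itself is identical under either placement."""
    return f"drew {request}"

# Path 1: hybrid placement -- a plain function call inside kernelspace.
def call_in_kernel(request):
    return window_service(request)

# Path 2: pure microkernel placement -- marshal a message to a server and wait.
def call_via_server(request):
    inbox, outbox = Queue(), Queue()
    inbox.put(request)                        # send: copy request to the server
    outbox.put(window_service(inbox.get()))   # server handles it in "userspace"
    return outbox.get()                       # receive: copy the reply back

print(call_in_kernel("button"))   # drew button
print(call_via_server("button"))  # drew button -- same result, more steps
```

Moving a hot service like the Win32 windowing code from path 2 to path 1 is exactly the compromise that hybrid kernels make.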
Exokernels
Exokernels, also known as vertically structured operating systems, are a new and rather radical approach to OS design.
The idea behind them is to let the developer make all the decisions about hardware performance. Exokernels are extremely small, since they deliberately limit their functionality to the protection and multiplexing of resources.
Classic kernel designs (both monolithic and microkernels) abstract the hardware, hiding resources under a hardware abstraction layer or behind device drivers. In these classic systems, if physical memory is allocated, one cannot be assured of its actual placement, for example.
The goal of an exokernel is to allow an application to request a specific piece of memory, a specific disk block etc., and merely ensure that the requested resource is free, and the application is allowed to access it.
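That job reduces to ownership tracking: let an application claim a specific physical resource, and check only that the resource is free and that the caller owns what it touches. The names below are hypothetical; real exokernels apply this idea to disk blocks, memory pages, and similar resources.

```python
class Exokernel:
    """Tracks who owns each physical resource; performs no abstraction at all."""
    def __init__(self, num_blocks):
        self.owner = [None] * num_blocks  # disk block -> owning application

    def request_block(self, app, block):
        """Grant a *specific* block if it is free; no hidden placement policy."""
        if self.owner[block] is not None:
            return False  # already taken: the application must pick another
        self.owner[block] = app
        return True

    def access_ok(self, app, block):
        return self.owner[block] == app

ek = Exokernel(num_blocks=8)
print(ek.request_block("dbms", 3))  # True: the app chose block 3 itself
print(ek.request_block("fs", 3))    # False: block 3 is no longer free
print(ek.access_ok("dbms", 3))      # True
print(ek.access_ok("fs", 3))        # False
```

Everything a classic kernel would decide for the application, such as which block to use or how to lay out a file, is left to the application or its library OS.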
Since an exokernel therefore only provides a very low-level interface to the hardware, lacking any of the higher-level functionalities of other operating systems, it is augmented by a "library operating system". Such a library OS interfaces to the exokernel below, and provides application writers with the familiar functionalities of a complete OS.
Some theoretical implications of an exokernel system are that it becomes possible to have several kinds of operating systems (Windows, Unix) running under a single exokernel, and that a developer may choose to override or increase functionality for performance reasons.
The concept of an exokernel has been around since at least 1995, but as of 2005 the design is still very much a research effort and is not used in any major commercial operating system. One concept operating system is Nemesis, written by the University of Cambridge, the University of Glasgow, Citrix Systems, and the Swedish Institute of Computer Science. MIT has also built several exokernel-based systems.
External links
- Operating System Kernels at Sourceforge
- MIT Exokernel Operating System
- kernel image
- The KeyKOS Nanokernel Architecture, a 1992 paper by Norman Hardy et al.
- An Overview of the NetWare Operating System, a 1994 paper by Drew Major, Greg Minshall, and Kyle Powell (primary architects behind the NetWare OS)
The contents of this article are licensed from www.wikipedia.org under the GNU Free Documentation License.