Understanding the Linux Kernel, Third Edition. Daniel P. Bovet and Marco Cesati. Publisher: O'Reilly.
In order to thoroughly understand what makes Linux tick and why it works so well on a wide variety of systems, you need to delve deep into the heart of the kernel.
The kernel handles all interactions between the CPU and the external world, and determines which programs will share processor time, in what order. It manages limited memory so well that hundreds of processes can share the system efficiently, and expertly organizes data transfers so that the CPU isn't kept waiting any longer than necessary for the relatively slow disks.
The third edition of Understanding the Linux Kernel takes you on a guided tour of the most significant data structures, algorithms, and programming tricks used in the kernel. Probing beyond superficial features, the authors offer valuable insights to people who want to know how things really work inside their machine. Important Intel-specific features are discussed.
Relevant segments of code are dissected line by line. But the book covers more than just the functioning of the code; it explains the theoretical underpinnings of why Linux does things the way it does.
To actually perform useful work, a process must be able to access the services provided by the kernel. This is implemented differently by each kernel, but most provide a C library or an API, which in turn invokes the related kernel functions.
If memory isolation is in use, it is impossible for a user process to call the kernel directly, because that would be a violation of the processor's access control rules. A few possibilities are:

- Using a software-simulated interrupt. This method is available on most hardware, and is therefore very common.
- Using a call gate. A call gate is a special address stored by the kernel in a list in kernel memory at a location known to the processor. When the processor detects a call to that address, it instead redirects to the target location without causing an access violation. This requires hardware support, but the hardware for it is quite common.
- Using a special system call instruction. This technique requires special hardware support, which common architectures (notably x86) may lack. System call instructions have been added to recent models of x86 processors, however, and some operating systems for PCs make use of them when available.
- Using a memory-based queue. An application that makes large numbers of requests but does not need to wait for the result of each may add details of requests to an area of memory that the kernel periodically scans to find requests.
Kernel design decisions

Issues of kernel support for protection

An important consideration in the design of a kernel is the support it provides for protection from faults (fault tolerance) and from malicious behaviours (security). These two aspects are usually not clearly distinguished, and the adoption of this distinction in the kernel design leads to the rejection of a hierarchical structure for protection. Protection mechanisms can be classified according to several criteria: according to the protection principles they satisfy (e.g., Denning); whether they are hardware supported or language based; whether they are more an open mechanism or a binding policy; and many more. Support for hierarchical protection domains is typically implemented using CPU modes.
Many kernels provide an implementation of "capabilities", i.e., objects provided to user code that allow limited access to an underlying object managed by the kernel. A common example occurs in file handling: a file is a representation of information stored on a permanent storage device. The kernel may be able to perform many different operations on it (e.g., read, write, delete or execute the file contents), but a user-level application may only be permitted some of them. A common implementation of this is for the kernel to provide an object to the application (typically called a "file handle") which the application may then invoke operations on, the validity of which the kernel checks at the time the operation is requested.
Such a system may be extended to cover all objects that the kernel manages, and indeed to objects provided by other user applications. An efficient and simple way to provide hardware support of capabilities is to delegate to the MMU the responsibility of checking access-rights for every memory access, a mechanism called capability-based addressing.
An alternative approach is to simulate capabilities using commonly supported hierarchical domains; in this approach, each protected object must reside in an address space that the application does not have access to; the kernel also maintains a list of capabilities in such memory. When an application needs to access an object protected by a capability, it performs a system call and the kernel then checks whether the application's capability grants it permission to perform the requested action, and if it is permitted performs the access for it either directly, or by delegating the request to another user-level process.
The performance cost of address space switching limits the practicality of this approach in systems with complex interactions between objects, but it is used in current operating systems for objects that are not accessed frequently or which are not expected to perform quickly. Kernel security mechanisms play a critical role in supporting security at higher levels. The lack of many critical security mechanisms in current mainstream operating systems impedes the implementation of adequate security policies at the application abstraction level.
The processor monitors the execution and stops a program that violates a rule (e.g., a user process that attempts to write to kernel memory). In systems that lack support for capabilities, processes are isolated from each other by using separate address spaces. An alternative approach is to use language-based protection. In a language-based protection system, the kernel will only allow code to execute that has been produced by a trusted language compiler. The language may then be designed such that it is impossible for the programmer to instruct it to do something that will violate a security requirement.
Switching between address spaces is a slow operation that causes a great deal of overhead, and a lot of optimization work is currently performed in order to prevent unnecessary switches in current operating systems. Switching is completely unnecessary in a language-based protection system, as all code can safely operate in the same address space.
Any protection scheme that can be expressed via a programming language can be implemented using this method. Changes to the protection scheme (e.g., from a hierarchical system to a capability-based one) do not require new hardware. Disadvantages include:

- Longer application start-up time. Applications must be verified when they are started to ensure they have been compiled by the correct compiler, or may need recompiling either from source code or from bytecode.
- Inflexible type systems. On traditional systems, applications frequently perform operations that are not type safe. Such operations cannot be permitted in a language-based protection system, which means that applications may need to be rewritten and may, in some cases, lose performance.

Examples of systems with language-based protection include JX and Microsoft's Singularity.
Process cooperation

Edsger Dijkstra proved that from a logical point of view, atomic lock and unlock operations operating on binary semaphores are sufficient primitives to express any functionality of process cooperation.
Depending on the complexity of the device, some devices can get surprisingly complex to program, and may use several different controllers. Because of this, providing a more abstract interface to manage the device is important. Frequently, applications will require access to these devices. The kernel must maintain the list of these devices by querying the system for them in some way. When an application requests an operation on a device (such as displaying a character), the kernel needs to send this request to the current active video driver.
The video driver, in turn, needs to carry out this request.

Kernel-wide design approaches

Naturally, the above listed tasks and features can be provided in many ways that differ from each other in design and implementation.
The principle of separation of mechanism and policy is the substantial difference between the philosophy of micro and monolithic kernels. For instance, a mechanism may provide for user log-in attempts to call an authorization server to determine whether access should be granted; a policy may be for the authorization server to request a password and check it against an encrypted password stored in a database.
Because the mechanism is generic, the policy could more easily be changed (e.g., by requiring the use of a security token). In a minimal microkernel just some very basic policies are included, and its mechanisms allow what is running on top of the kernel (the remaining part of the operating system and the other applications) to decide which policies to adopt (such as memory management, high-level process scheduling, file system management, etc.).
Per Brinch Hansen presented arguments in favour of separation of mechanism and policy. While monolithic kernels execute all of their code in the same address space (kernel space), microkernels try to run most of their services in user space, aiming to improve maintainability and modularity of the codebase. Designs that mix elements of both are called hybrid kernels. More exotic designs such as nanokernels and exokernels are available, but are seldom used for production systems. The Xen hypervisor, for example, is an exokernel.

Monolithic kernels

In a monolithic kernel, all OS services run along with the main kernel thread, thus also residing in the same memory area. This approach provides rich and powerful hardware access. Some developers, such as UNIX developer Ken Thompson, maintain that it is "easier to implement a monolithic kernel" than microkernels.
Monolithic kernels, which have traditionally been used by Unix-like operating systems, contain all the operating system core functions and the device drivers. This is the traditional design of UNIX systems. A monolithic kernel is one single program that contains all of the code necessary to perform every kernel-related task. Every part which is to be accessed by most programs and which cannot be put in a library is in the kernel space: device drivers, scheduler, memory handling, file systems, and network stacks. Many system calls are provided to applications, to allow them to access all those services. A monolithic kernel, while initially loaded with subsystems that may not be needed, can be tuned to a point where it is as fast as or faster than one that was specifically designed for the hardware, although it is more relevant in a general sense.
Modern monolithic kernels, such as those of Linux and FreeBSD , both of which fall into the category of Unix-like operating systems, feature the ability to load modules at runtime, thereby allowing easy extension of the kernel's capabilities as required, while helping to minimize the amount of code running in kernel space.
In the monolithic kernel, some advantages hinge on these points:

- Since there is less software involved, it is faster.
- As it is one single piece of software, it should be smaller both in source and compiled forms.
- Less code generally means fewer bugs, which can translate to fewer security problems.

Most work in the monolithic kernel is done via system calls. These are interfaces, usually kept in a tabular structure, that access some subsystem within the kernel such as disk operations. Essentially, calls are made within programs and a checked copy of the request is passed through the system call; hence the request does not have far to travel at all. The monolithic Linux kernel can be made extremely small not only because of its ability to dynamically load modules but also because of its ease of customization.
In fact, there are some versions that are small enough to fit together with a large number of utilities and other programs on a single floppy disk and still provide a fully functional operating system one of the most popular of which is muLinux.
This ability to miniaturize its kernel has also led to a rapid growth in the use of Linux in embedded systems. These types of kernels consist of the core functions of the operating system and the device drivers, with the ability to load modules at runtime. They provide rich and powerful abstractions of the underlying hardware. (Microkernels, by contrast, provide a small set of simple hardware abstractions and use applications called servers to provide more functionality.) The monolithic approach defines a high-level virtual interface over the hardware, with a set of system calls to implement operating system services such as process management, concurrency and memory management in several modules that run in supervisor mode.
This design has several flaws and limitations:

- Coding in kernel can be challenging, in part because one cannot use common libraries (like a full-featured libc), and because one needs to use a source-level debugger like gdb. Rebooting the computer is often required. This is not just a problem of convenience to the developers. When debugging is harder, it becomes more likely that code will be "buggier".
- Bugs in one part of the kernel have strong side effects; since every function in the kernel has all the privileges, a bug in one function can corrupt the data structures of another, totally unrelated part of the kernel, or of any running program.
- Kernels often become very large and difficult to maintain. Even if the modules servicing these operations are separate from the whole, the code integration is tight and difficult to do correctly.
- Since the modules run in the same address space, a bug can bring down the entire system.
- Monolithic kernels are not portable; therefore, they must be rewritten for each new architecture that the operating system is to be used on.

Microkernels

In the microkernel approach, the kernel itself only provides basic functionality that allows the execution of servers, separate programs that assume former kernel functions, such as device drivers, GUI servers, etc.
A microkernel that is designed for a specific platform or device is only ever going to have what it needs to operate. The microkernel approach consists of defining a simple abstraction over the hardware, with a set of primitives or system calls to implement minimal OS services such as memory management , multitasking , and inter-process communication. Other services, including those normally provided by the kernel, such as networking , are implemented in user-space programs, referred to as servers.
Microkernels are easier to maintain than monolithic kernels, but the large number of system calls and context switches might slow down the system, because they typically generate more overhead than plain function calls. Many critical parts now run in user space: the complete scheduler, memory handling, file systems, and network stacks. Microkernels were invented as a reaction to traditional "monolithic" kernel design, whereby all system functionality was put in one static program running in a special "system" mode of the processor.
In the microkernel, only the most fundamental tasks are performed, such as being able to access some (not necessarily all) of the hardware, manage memory, and coordinate message passing between the processes.
In the case of QNX and Hurd, user sessions can be entire snapshots of the system itself, or "views", as they are referred to. The very essence of the microkernel architecture illustrates some of its advantages:

- Maintenance is generally easier. Patches can be tested in a separate instance, and then swapped in to take over a production instance.
- Rapid development time: new software can be tested without having to reboot the kernel.
- More persistence in general: if one instance goes haywire, it is often possible to substitute it with an operational mirror.

Most microkernels use a message passing system of some sort to handle requests from one server to another. The message passing system generally operates on a port basis with the microkernel.
As an example, if a request for more memory is sent, a port is opened with the microkernel and the request sent through. Once within the microkernel, the steps are similar to system calls. The rationale was that it would bring modularity to the system architecture, which would entail a cleaner system, one easier to debug or dynamically modify, customizable to users' needs, and better performing.
Although microkernels are very small by themselves, in combination with all their required auxiliary code they are, in fact, often larger than monolithic kernels. Advocates of monolithic kernels also point out that the two-tiered structure of microkernel systems, in which most of the operating system does not interact directly with the hardware, creates a not-insignificant cost in terms of system efficiency.
These types of kernels normally provide only the minimal services, such as defining memory address spaces, inter-process communication (IPC), and process management. Other functions, such as running the hardware processes, are not handled directly by microkernels.
Proponents of microkernels point out that monolithic kernels have the disadvantage that an error in the kernel can cause the entire system to crash. With a microkernel, however, if a kernel process crashes, it is still possible to prevent a crash of the system as a whole by merely restarting the service that caused the error.
Other services provided by the kernel such as networking are implemented in user-space programs referred to as servers. Servers allow the operating system to be modified by simply starting and stopping programs. For a machine without networking support, for instance, the networking server is not started.
The task of moving in and out of the kernel to move data between the various applications and servers creates overhead which is detrimental to the efficiency of microkernels in comparison with monolithic kernels.