The Linux Kernel Primer (PDF)


Sunday, December 29, 2019

PDF | Learn Linux kernel programming, hands-on: a uniquely effective top-down approach. The Linux® Kernel Primer: A Top-Down Approach for x86 and PowerPC Architectures (Claudia Salzberg Rodriguez and co-authors) is the definitive guide to Linux kernel programming.


Language: English, Spanish, German
Genre: Health & Fitness
Published (Last): 31.07.2016
ePub File Size: 29.33 MB
PDF File Size: 19.70 MB
Distribution: Free* [*Registration Required]
Uploaded by: DONETTE

This book is for Linux enthusiasts who want to know how the Linux kernel works. It is not an internals manual; rather, it describes the principles behind the kernel and teaches many tips and techniques for debugging within it.

This openness caused a fairly rapid evolution of Linux; the rate of involvement, whether in development, testing, or documentation, is staggering.

If you plan on hacking the Linux kernel, it is a good idea to become familiar with the terms of this license so that you know what the legal fate of your contribution will be. There are two main camps around the conveyance of free and open-source software.

The Free Software Foundation and the open-source groups differ in ideology. The Free Software Foundation, which is the older of the two groups, holds the ideology that the word free should be applied to software in much the same way that the word free is applied to speech.

The open-source group views free and open-source software as a different methodology on par with proprietary software. With the rise in Linux's demand and popularity, the packaging of the kernel with these and other tools has become a significant and lucrative undertaking. Groups of people and corporations take on the mission of providing a particular distribution of Linux in keeping with a particular set of objectives. Without getting into too much detail, we review the major Linux distributions as of this writing.

New Linux distributions continue to be released. Most Linux distributions organize the tools and applications into groups of header and executable files. These groupings are called packages and are the major advantage of using a Linux distribution as opposed to downloading header files and compiling everything from source. Referring to the GPL, the license gives the freedom to charge for added value to the open-source software, such as these services provided in the code's redistribution.

Like other distributions, the majority of applications and tools come from GNU software and the Linux kernel. Debian has one of the better package-management systems, apt (the Advanced Packaging Tool).

Linux Kernel Primer, The: A Top-Down Approach for x86 and PowerPC Architectures

The major drawback of Debian is in the initial installation procedure, which seems to cause confusion among novice Linux users. Debian is not tied to a corporation and is developed by a community of volunteers. Red Hat Linux was the company's Linux distribution until recently, when it replaced its sole offering with two separate distributions: Red Hat Enterprise Linux and Fedora Core. Red Hat Enterprise Linux is aimed at business, government, or other industries that require a stable and supported Linux environment.

The Fedora Core is targeted to individual users and enthusiasts. The major difference between the two distributions is stability versus features. Fedora will have newer, less stable code included in the distribution than Red Hat Enterprise.

Red Hat appears to be the Linux enterprise version of choice in America. Mandriva Mandriva Linux[4] (formerly Mandrake Linux) originated as an easier-to-install version of Red Hat Linux, but has since diverged into a separate distribution that targets the individual Linux user. The major features of Mandriva Linux are easy system configuration and setup.

SUSE targets business, government, industry, and individual users. Gentoo Gentoo[6] is the new Linux distribution on the block, and it has been winning lots of accolades. The major difference with Gentoo Linux is that all the packages are compiled from source for the specific configuration of your machine.

This is done via the Gentoo portage system. Although a number of the recently described distributions work on PPC, their emphasis is on x86 versions of Linux. Other Distros Linux users can be passionate about their distribution of choice, and there are many out there. Slackware is a classic, MontaVista is great for embedded systems, and, of course, you can roll your own distribution.

This likely contains the most up-to-date information and, if not, links to further information on the Web. Kernel Release Information As with any software project, understanding the project's versioning scheme is a key element in your involvement as a contributor.

Prior to Linux kernel 2.6, stable and development kernels were kept in separate trees. The even-number releases (2.0, 2.2, 2.4) were the stable trees; the only code that was accepted into stable branches was code that would fix existing errors. Development would continue in the development tree, which was marked by odd numbers (2.1, 2.3, 2.5). Eventually, the development tree would be deemed complete enough to take most of it and release a new stable tree.

In mid-2004, a change occurred with the standard release cycle: Code that might normally go into a development tree is being included in the stable 2.6 tree. Specifically, "…the mainline kernel will be the fastest and most feature-rich kernel around, but not, necessarily, the most stable.


As this is a relatively new development, only time will tell whether the release cycle will change significantly in the long run. Businesses and corporations are increasingly purchasing PowerPC-based systems with the intention of running Linux on them.

The reason for the increase in purchases of PowerPC microprocessors is largely that they provide an extremely scalable architecture that addresses a wide range of needs.

Many of these processors are offered as systems-on-a-chip (SOCs), which encompass the processor along with built-in clocks, memory, busses, controllers, and peripherals. Although these three companies develop their chips independently, the chips share a common instruction set and are therefore compatible.

Linux is running on PowerPC-based game consoles, mainframes, and desktops around the world. With the growing popularity of Linux on this platform, we have undertaken to explore how Linux interfaces and makes use of PowerPC functionality. Numerous sites contain helpful information related to Linux on Power, and we refer to them as we progress through our explanations. We now look at general operating system concepts, basic Linux usability and features, and how they tie together.

This section overviews the concepts we cover in more detail in future chapters. If you are familiar with these concepts, you can skip this section and dive right into Chapter 2, "Exploration Toolkit." An operating system is in charge of managing the resources provided by your system's particular hardware components and of providing a base for application programs to be developed on and executed. If there were no operating system, each program would have to include drivers for all the hardware it was interested in using, which could prove prohibitive to application programmers.

The anatomy of an operating system depends on its type. Linux and most UNIX variants are monolithic systems. When we say a system is monolithic, we do not necessarily mean it is huge although, in most cases, this second interpretation might well apply.

Rather, we mean that it is composed of a single unit: a single object file.

The operating system structure is defined by a number of procedures that are compiled and linked together. How the procedures interrelate defines the internal structure of a monolithic system. In Linux, we have kernel space and user space as two distinct portions of the operating system.

User space does not access the kernel (and hence the hardware resources) directly, but by way of system calls: the outermost layer of procedures defined by the kernel. Kernel space is where the hardware-management functionality takes place. Within the kernel, the system call procedures call on other procedures that are not available to user space to manipulate finer-grain functionality.

Device drivers also provide well-defined interface functions for system call or kernel subsystem access, as Figure 1 ("Linux Architecture Perspective") illustrates. Linux also sports dynamically loadable device drivers, breaking one of the main drawbacks inherent in monolithic operating systems. Dynamically loadable device drivers allow the systems programmer to incorporate system code into the kernel without having to compile his code into the kernel image.

Doing so implies a lengthy wait (depending on your system capabilities) and a reboot, which greatly increases the time a systems programmer spends in developing his code. With dynamically loadable device drivers, the systems programmer can load and unload his device driver in real time without needing to recompile the entire kernel and bring down the system. Throughout this book, we explain these different "parts" of Linux. When possible, we follow a top-down approach, starting with an example application program and tracing its execution path down through system calls and subsystem functions.

This way, you can associate the more familiar user space functionality with the kernel components that support it. Kernel Organization Linux supports numerous architectures; this means that it can be run on many types of processors, which include alpha, arm, i386, ia64, ppc, ppc64, and s390x. The Linux source code is packaged to include support for all these architectures.

Most of the source code is written in C and is hardware independent. A portion of the code is heavily hardware dependent and is written in a mix of C and assembly for the particular architecture. The heavily machine-dependent portion is wrapped by a long list of system calls that serve as an interface. Overview of the Linux Kernel There are various components to the Linux kernel. Throughout this book, we use the word component and subsystem interchangeably to refer to these categorical and functional differentiators of the kernel functions.

In the following sections, we discuss some of those components and how they are implemented in the Linux kernel. We also cover some key features of the operating system that provide insight into how things are implemented in the kernel.

We break up the components into filesystem, processes, scheduler, and device drivers. Although this is not intended to be a comprehensive list, it provides a reference for the rest of this book.

User Interface Users communicate with the system by way of programs. A user first logs in to the system through a terminal or a virtual terminal. In Linux, a program called mingetty (for virtual terminals) or agetty (for serial terminals) monitors the inactive terminal, waiting for users to indicate that they want to log in. To do this, they enter their account name, and the getty program proceeds to call the login program, which prompts for a password, accesses a list of names and passwords for authentication, and allows them into the system if there is a match, or exits and terminates the process if there is no match.

The getty programs are all respawned once terminated, which means they restart if the process ever exits. Once authenticated in the system, users need a way to tell the system what they want to do. If the user is authenticated successfully, the login program executes a shell.

Although technically not part of the operating system, the shell is the primary user interface to the operating system.

A shell is a command interpreter and consists of a listening process. The listening process (one that blocks until the condition of receiving input is met) then interprets and executes the requests typed in by the user.

The shell is one of the programs found in the top layer of Figure 1. The shell displays a command prompt (which is generally configurable, depending on the shell) and waits for user input. A user can then interact with the system's devices and programs by entering commands using a syntax defined by the shell. The programs a user can call are executable files stored within the filesystem that the user can execute.

The execution of these requests is initiated by the shell spawning a child process. The child process might then make system call accesses. After the system call returns and the child process terminates, the shell can go back to listening for user requests. User Identification A user logs in with a unique account name and is associated with a unique user ID (UID). The kernel uses this UID to validate the user's permissions with respect to file accesses.

When a user logs in, he is granted access to his home directory, which is where he can create, modify, and destroy files. The superuser or root is a special user with unrestricted permissions; this user's UID is 0. When a user is created, he is automatically a member of a group whose name is identical to his username.

A user can also be manually added to other groups that have been defined by the system administrator. A file or a program (an executable file) is associated with permissions as they apply to users and groups. Any particular user can determine who is allowed to access his files and who is not. Files and Filesystems A filesystem provides a method for the storage and organization of data.

Linux supports the concept of the file as a device-independent sequence of bytes. By means of this abstraction, a user can access a file regardless of what device (for example, hard disk, tape drive, disk drive) stores it. Files are grouped inside a container called a directory. Because directories can be nested in each other (which means that a directory can contain another directory), the filesystem structure is that of a hierarchical tree.

The root of the tree is the top-most node under which all other directories and files are stored. A filesystem is stored in a hard-drive partition, or unit of storage. Directories, Files, and Pathnames Every file in a tree has a pathname that indicates its name and location; the pathname also reflects the directory to which the file belongs.

A pathname that takes the current working directory, or the directory the user is located in, as its root is called a relative pathname, because the file is named relative to the current working directory.

The concepts of absolute versus relative pathnames come into play because the kernel associates processes with the current working directory and with a root directory (see Figure 1, "Hierarchical File Structure"). The current working directory is the directory from which the process was called and is identified by "." (dot). As an aside, the parent directory is the directory that contains the working directory and is identified by ".." (dot dot). Recall that when a user logs in, she is "located" in her home directory.

The root is always its own parent. A filesystem is mounted with the mount system call and is unmounted with the umount system call.


A filesystem is mounted on a mount point, which is a directory used as the root access to the mounted filesystem. A directory mount point should be empty. Any files originally located in the directory used as a mount point are inaccessible after the filesystem is mounted and remain so until the filesystem is unmounted.

File Protection and Access Rights Files have access permissions to provide some degree of privacy and security. Access rights, or permissions, are stored as they apply to three distinct categories of users: the user himself, a designated group, and everyone else. The three types of users can be granted varying access rights as applied to the three types of access to a file: read, write, and execute. Consider a user, sophia, whose home directory carries the permissions drwxr-xr-x: she has granted everyone the ability to enter her home directory but not to edit it.

She herself has read, write, and execute permission. In sophia's home directory, she has a directory called sources, to which she has granted read, write, and execute permissions for herself and for members of the group called department, and no permissions for anyone else. Execute permission as applied to a file indicates that it can be run and is used only on executable files. File Modes In addition to access rights, a file has three additional modes: sticky, suid, and sgid.

Let's look at each mode more closely. Back in the day when disk accesses were slower than they are today, when memory was not as large, and when demand-based methodologies hadn't been conceived,[10] an executable file could have the sticky bit enabled and ensure that the kernel would keep it in memory despite its state of execution.

When applied to a program that was heavily used, this could increase performance by reducing the amount of time spent accessing the file's information from disk. We see more of this in detail in Chapter 4. When the sticky bit is enabled in a directory, it prevents the removal or renaming of files by users who have write permission in that directory, with the exception of root and the owner of the file.

When a user executes an executable file, the process is associated with the user who called it. If an executable has the suid bit set, the process inherits the UID of the file owner and thus access to its set of access rights. This introduces the concepts of the real user ID as opposed to the effective user ID. As we soon see when we look at processes in the "Processes" section, a process' real UID corresponds to that of the user that started the process.

The sgid bit acts just like the suid bit but as applied to the group. File Metadata File metadata is all the information about a file that does not include its content. For example, metadata includes the type of file, the size of the file, the UID of the file owner, the access rights, and so on.

As we soon see, some file types devices, pipes, and sockets contain no data, only metadata. All file metadata, with the exception of the filename, is stored in an inode or index node. An inode is a block of information, and every file has its own inode.

A file descriptor is an internal kernel data structure that manages the file data. File descriptors are obtained when a process accesses a file. Regular File A regular file is identified by a dash in the first character of the mode field (for example, -rw-rw-rw-). The kernel does not care what type of data is stored in a file and thus makes no distinctions between them. User programs, however, might care. Regular files have their data stored in zero or more data blocks.

A directory is a file that holds associations between filenames and the file inodes. A directory consists of a table of entries, each pertaining to a file that it contains. Block Devices A block device is identified by a "b" in the first character of the mode field (for example, brw-rw----). Data written to or read from a block device passes through the buffer cache. At certain intervals, the kernel looks at the data in the buffer cache that has been updated and synchronizes it with the disk.

This provides great increases in performance; however, a computer crash can result in loss of the buffered data if it had not yet been written to disk. Synchronization with the disk drive can be forced with a call to the sync, fsync, or fdatasync system calls, which take care of writing buffered data to disk.

A block device does not use any data blocks because it stores no data. Only an inode is required to hold its information. Character Devices A character device is identified by a "c" in the first character of the mode field (for example, crw-rw----). Character devices transfer data without passing through the buffer cache. Pseudo devices, or device drivers that do not represent hardware but instead perform some unrelated kernel-side function, can also be character devices.

These devices are also known as raw devices because there is no intermediary cache to hold the data. Similar to a block device, a character device does not use any data blocks because it stores no data. Link A link is identified by an "l" in the first character of the mode field (for example, lrwxrwxrwx). A link is a pointer to a file.

This type of file allows there to be multiple references to a particular file while only one copy of the file and its data actually exists in the filesystem. There are two types of links: hard link and symbolic, or soft, link. Both are created through a call to ln.

A hard link has limitations that are absent in the symbolic link. These include being limited to linking files within the same filesystem, being unable to link to directories, and being unable to link to non-existing files. Links reflect the permissions of the file to which they point. Named Pipes A pipe file is identified by a "p" in the first character of the mode field (for example, prw-rw----). A pipe is a file that facilitates communication between programs by acting as a data conduit; data is written into it by one program and read by another.

The pipe essentially buffers its input data from the first process. Named pipes are also known as FIFOs because they relay the information to the reading program in a first in, first out basis. Much like the device files, no data blocks are used by pipe files, only the inode. Sockets are special files that also facilitate communication between two processes. One difference between pipes and sockets is that sockets can facilitate communication between processes on different computers connected by a network.

Socket files are also not associated with any data blocks. Because this book does not cover networking, we do not go over the internals of sockets. Types of Filesystems Linux filesystems support an interface that allows various filesystem types to coexist. A filesystem type is determined by the way the block data is broken down and manipulated in the physical device and by the type of physical device. Some examples of types of filesystems include network mounted, such as NFS, and disk based, such as ext3, which is one of the Linux default filesystems.

File Control When a file is accessed in Linux, control passes through a number of stages. First, the program that wants to access the file makes a system call, such as open(), read(), or write().

Control then passes to the kernel that executes the system call. There is a high-level abstraction of a filesystem called the VFS (virtual filesystem), which determines what type of specific filesystem (for example, ext2, minix, or msdos) the file exists upon, and control is then passed to the appropriate filesystem driver.

The filesystem driver handles the management of the file upon a given logical device. A hard drive could have msdos and ext2 partitions. The filesystem driver knows how to interpret the data stored on the device and keeps track of all the metadata associated with a file. The filesystem driver then calls a lower-level device driver that handles the actual reading of the data off of the device.

This lower-level driver knows about blocks, sectors, and all the hardware information that is necessary to take a chunk of data and store it on the device. The lower-level driver passes the information up to the filesystem driver, which interprets and formats the raw data and passes the information to the VFS, which finally transfers the data back to the originating program.

Processes If we consider the operating system to be a framework that developers can build upon, we can consider processes to be the basic unit of activity undertaken and managed by this framework. More specifically, a process is a program that is in execution. A single program can be executed multiple times, so there might be more than one process associated with a particular program. The concept of processes became significant with the introduction of multiuser systems in the 1960s.

Consider a single-user operating system where the CPU executes only a single process. In this case, no other program can be executed until the currently running process is complete.

When multiple users are introduced (or if we want the ability to perform multiple tasks concurrently), we need to define a way to switch between the tasks. The process model makes the execution of multiple tasks possible by defining execution contexts. In Linux, each process operates as though it were the only process. The operating system then manages these contexts by assigning the processor to work on one or the other according to a predefined set of rules.

The scheduler defines and executes these rules. The scheduler tracks the length of time the process has run and switches it off to ensure that no one process hogs the CPU.

The execution context consists of all the parts associated with the program (such as its data and the memory address space it can access), its registers, its stack and stack pointer, and the program counter value. Except for the data and the memory addressing, the rest of the components of a process are transparent to the programmer. However, the operating system needs to manage the stack, stack pointer, program counter, and machine registers. In a multiprocess system, the operating system must also be responsible for the context switch between processes and the management of system resources that processes contend for.

Process Creation and Control A process is created from another process through the fork system call. When a process calls fork, we say that the process spawned a new process, or that it forked. The new process is considered the child process and the original process is considered the parent process. All processes have a parent, with the exception of the init process. All processes are spawned from the first process, init, which comes about during the bootstrapping phase.

This is discussed further in the next section. Process Tree When a child process is created, the parent process might want to know when it is finished. The wait system call is used to pause the parent process until its child has exited. A process can also replace itself with another process. This is done, for example, by the mingetty functions previously described. When a user requests access into the system, the mingetty function requests his username and then replaces itself with a process executing login to which it passes the username parameter.

This replacement is done with a call to one of the exec system calls. Every process is identified by a process ID, or PID, which is a non-negative integer. Process IDs are handed out in incrementing sequential order as processes are created. When the maximum PID value is hit, the values wrap and PIDs are handed out starting at the lowest available number greater than 1.

From the Back Cover: Learn Linux kernel programming, hands-on.

Product details: Paperback. Publisher: Prentice Hall (19 September). Language: English.

Most helpful customer reviews on Amazon: Verified Purchase. Very bad printing quality; the ink is watered down to an extreme, causing eye strain. The binding quality is also bad: within one day of receiving the book, the pages started coming loose. Very disappointing, since it is a good Linux reference.

I'd rather read an online PDF. In any operating system, kernel programming is reserved for a few experts in the field, largely due to the very specialised nature of what they want to do. Linux is no exception. This book stands far afield from a typical Linux sysadmin text.

The authors choose to explain for two chip sets, namely the Intel x86 and the PowerPC. The first is natural and obvious, given Intel's domination of the microprocessor market. As for a second choice, there are many contenders, like the Alpha or the ARM chips. I suspect this choice might have been influenced by one of the authors being at IBM. Arguably, a credible alternative is the chip set from AMD.

But the main thrust of the book is to offer a top down view of linux, drilling as close to the hardware as possible.

The code excerpts are in C. Very little kernel development seems to be done in other languages, and this book is no exception. For example, one code fragment explains the reading of the real-time clock on the x86. For some developers who might be used to dealing in pure software, these hardware interaction examples may be the most valuable portions of the text.

Another merit of the book is the presence of the two architectures that are described. If you are developing for another chip set, the text may still be quite useful: you get two case studies here of how Linux works. Contrasting these might give insight into what changes you need for your chip set. Overall, this is definitely an advanced book. Having a strong background in C coding is a plus. I've been less than happy with other kernel books before, so when this landed in my mailbox, I was prepared for disappointment.

I've been down this road before. Well, the subject is overwhelming, and it really can't be covered in one book. But you have to start somewhere, and this looks like the best place I've seen so far. Yes, of course you'll need and want other books, and you'll need to spend a lot of time experimenting on your own, but this is, as the title says, your primer. I really like the approach of trying to relate everything to user space programs and of writing example code and drivers to illustrate concepts.

The authors have also made an effort to point out and elucidate the things that confused them when they first started looking at the kernel.

Every chapter has at least a few review questions at the end, and lots of annotated source code. Four projects get you started with actual kernel programming.

There are, of course, omissions and lightly covered areas. Six hundred pages aren't enough to cover everything in depth, and there has to be at least some basic assumption of programming knowledge. But overall, this looks great and I'm looking forward to spending more time with it. This is an excellent primer on the intricacies of the Linux kernel. It includes programming information for assembly and C, some very useful kernel exploration tools, information on memory management, filesystem details, IO operations, boot loaders, memory initialization, building the kernel, and adding your code to the kernel.

The section on processes includes creation, lifespan, termination, schedulers, and wait queues, and is particularly good. Throughout the book you will find lots of examples and illustrations that clearly illustrate the concepts discussed.

It also describes in detail how the kernel source build system operates and how to add configuration options into the kernel build system, along with in-depth coverage of kernel synchronization and locking.