09-11-2012, 05:11 PM
CASE STUDY 1: Linux
In the previous chapters, we examined many operating system principles,
abstractions, algorithms, and techniques in general. Now it is time to look at some
concrete systems to see how these principles are applied in the real world. We
will begin with Linux, a popular variant of UNIX, which runs on a wide variety of
computers. It is one of the dominant operating systems on high-end workstations
and servers, but it is also used on systems ranging from cell phones to supercomputers.
It also illustrates many important design principles well.
Our discussion will start with the history and evolution of UNIX and Linux.
Then we will provide an overview of Linux, to give an idea of how it is used.
This overview will be of special value to readers familiar only with Windows,
since the latter hides virtually all the details of the system from its users. Although
graphical interfaces may be easy for beginners, they provide little flexibility and
no insight into how the system works.
Next we come to the heart of this chapter, an examination of processes,
memory management, I/O, the file system, and security in Linux. For each topic
we will first discuss the fundamental concepts, then the system calls, and finally
the implementation.
HISTORY OF UNIX AND LINUX
UNIX and Linux have a long and interesting history, so we will begin our
study there. What started out as the pet project of one young researcher (Ken
Thompson) has become a billion-dollar industry involving universities, multinational
corporations, governments, and international standardization bodies. In the
following pages we will tell how this story has unfolded.
UNICS
Back in the 1940s and 1950s, all computers were personal computers, at least
in the sense that the then-normal way to use a computer was to sign up for an hour
of time and take over the entire machine for that period. Of course, these
machines were physically immense, but only one person (the programmer) could
use them at any given time. When batch systems took over, in the 1960s, the programmer
submitted a job on punched cards by bringing it to the machine room.
When enough jobs had been assembled, the operator read them all in as a single
batch. It usually took an hour or more after submitting a job until the output was
returned. Under these circumstances, debugging was a time-consuming process,
because a single misplaced comma might result in wasting several hours of the
programmer’s time.
PDP-11 UNIX
Thompson’s work so impressed many of his colleagues at Bell Labs that he
was soon joined by Dennis Ritchie, and later by his entire department. Two major
developments occurred around this time. First, UNIX was moved from the
obsolete PDP-7 to the much more modern PDP-11/20 and then later to the PDP-
11/45 and PDP-11/70. The latter two machines dominated the minicomputer
world for much of the 1970s. The PDP-11/45 and PDP-11/70 were powerful
machines with large physical memories for their era (256 KB and 2 MB, respectively).
Also, they had memory protection hardware, making it possible to support
multiple users at the same time. However, they were both 16-bit machines,
which limited individual processes to 64 KB of instruction space and 64 KB of
data space, even though the machine might have far more physical memory.
The second development concerned the language in which UNIX was written.
By now it was becoming painfully obvious that having to rewrite the entire system
for each new machine was no fun at all, so Thompson decided to rewrite UNIX in
a high-level language of his own design, called B. B was a simplified form of
BCPL (which itself was a simplified form of CPL, which, like PL/I, never
worked). Due to weaknesses in B, primarily lack of structures, this attempt was
not successful. Ritchie then designed a successor to B, (naturally) called C, and
wrote an excellent compiler for it. Working together, Thompson and Ritchie
rewrote UNIX in C. C was the right language at the right time, and has dominated
system programming ever since.
Portable UNIX
Now that UNIX was written in C, moving it to a new machine, known as porting
it, was much easier than in the early days. A port requires first writing a C
compiler for the new machine. Then it requires writing device drivers for the new
machine’s I/O devices, such as monitors, printers, and disks. Although the driver
code is in C, it cannot be moved to another machine, compiled, and run there
because no two disks work the same way. Finally, a small amount of
machine-dependent code, such as the interrupt handlers and memory management
routines, must be rewritten, usually in assembly language.
The first port beyond the PDP-11 was to the Interdata 8/32 minicomputer.
This exercise revealed a large number of assumptions that UNIX implicitly made
about the machine it was running on, such as the unspoken supposition that
integers held 16 bits, pointers also held 16 bits (implying a maximum program
size of 64 KB), and that the machine had exactly three registers available for
holding important variables. None of these were true on the Interdata, so considerable
work was needed to clean UNIX up.