
Network Operating System Evolution

Introduction

Modern network devices are complex entities composed of both silicon and software. Thus, designing an efficient
hardware platform is not, by itself, sufficient to achieve an effective, cost-efficient and operationally tenable product.
The control plane plays a critical role in the development of features and in ensuring device usability.
Although progress in faster CPU boards and forwarding planes is readily visible, structural changes made in
software usually remain hidden. Vendor collateral may promise a long feature list in a carrier-class package,
but operational experiences can vary considerably.
Products that have been through several generations of software releases provide the best examples of the
difference made by the choice of OS. It is still not uncommon to find routers or switches that started life under
older, monolithic software and later migrated to more contemporary designs. The positive effect on stability and
operational efficiency is easy to notice and appreciate.

Origin and Evolution of Network Operating Systems

Contemporary network operating systems are mostly advanced and specialized branches of POSIX-compliant
software platforms and are rarely developed from scratch. The main reason for this situation is the high cost of
developing a world-class operating system all the way from concept to finished product. By adopting a general-purpose
OS architecture, network vendors can focus on routing-specific code, decrease time to market, and benefit
from years of technology and research that went into the design of the original (donor) products.
For example, consider Table 1, which lists some operating systems for routers and their respective origins (the
Generation column is explained in the following sections).

First-Generation OS: Monolithic Architecture

Typically, first-generation network operating systems for routers and switches were proprietary images running in
a flat memory space, often directly from flash memory or ROM. While supporting multiple processes for protocols,
packet handling and management, they operated using a cooperative multitasking model in which each process
would run to completion or until it voluntarily relinquished the CPU.
All first-generation network operating systems shared one trait: they avoided the overhead and risks of running
full-size commercial operating systems on embedded hardware. Memory management, protection and context switching
were either rudimentary or nonexistent, with the primary goals being a small footprint and speed of operation.
Nevertheless, first-generation network operating systems made networking commercially viable and were deployed
on a wide range of products. The downside was that these systems were plagued with a host of problems associated
with resource management and fault isolation; a single runaway process could easily consume the processor or
cause the entire system to fail. Such failures were not uncommon in the data networks controlled by older software
and could be triggered by software errors, rogue traffic and operator errors.
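
To make the cooperative model concrete, here is a minimal sketch (the task names are purely illustrative and not taken from any vendor's code). The "scheduler" simply calls each task in turn; nothing can preempt a task that refuses to return, which is exactly the failure mode described above.

[code]
/* Minimal sketch of a cooperative (run-to-completion) scheduler.
 * Task and function names are illustrative only. */
#include <stdio.h>

#define NUM_TASKS 3

typedef void (*task_fn)(void);

static void routing_task(void)    { puts("routing: update tables, then yield"); }
static void forwarding_task(void) { puts("forwarding: drain packet queue, then yield"); }
static void mgmt_task(void)       { puts("mgmt: poll the CLI, then yield"); }

/* A flat table of tasks sharing one memory space and one CPU. */
static task_fn tasks[NUM_TASKS] = { routing_task, forwarding_task, mgmt_task };

int main(void)
{
    /* Each task runs to completion before the next one gets the CPU.
     * If any task loops forever, every other task is starved. */
    for (int round = 0; round < 2; round++)
        for (int i = 0; i < NUM_TASKS; i++)
            tasks[i]();
    return 0;
}
[/code]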

Basic OS Design Considerations

Choosing the right foundation (prototype) for an operating system is very important, as it has significant implications
for the overall software design process and final product quality and serviceability. This is why OEM
vendors sometimes migrate from one prototype platform to another midway through the development process,
seeking a better fit. Generally, the most common transitions are from a proprietary to a commercial code base and
from a commercial code base to an open-source software foundation.
Regardless of the initial choice, as networking vendors develop their own code, they get further and further away
from the original port, not only in protocol-specific applications but also in the system area. Extensions such as
control plane redundancy, in-service software upgrades and multichassis operation require significant changes
on all levels of the original design. However, it is highly desirable to continue borrowing content from the donor
OS in areas that are not normally the primary focus of networking vendors, such as improvements in memory
management, scheduling, multicore and symmetric multiprocessing (SMP) support, and host hardware drivers. With
proper engineering discipline in place, the more active and peer-reviewed the donor OS is, the more quickly related
network products can benefit from new code and technology.

Functional Separation and Process Scheduling

Multiprocessing, functional separation and scheduling are fundamental for almost any software design, including
network software. Because CPU and memory are shared resources, all running threads and processes have to
access them in a serial and controlled fashion. Many design choices are available to achieve this goal, but the two
most important are the memory model and the scheduling discipline. The next section briefly explains the intricate
relation between memory, CPU cycles, system performance and stability.

Memory Model

The memory model defines whether processes (threads) run in a common memory space. If they do, the overhead
for switching the threads is minimal, and the code in different threads can share data via direct memory pointers.
The downside is that a runaway process can cause damage in memory that does not belong to it.
In a more complex memory model, threads can run in their own virtual machines, and the operating system switches
the context every time the next thread needs to run. Because of this context switching, direct communication
between threads is no longer possible and requires special interprocess communication (IPC) structures such as
pipes, files and shared memory pools.
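
As a rough sketch of this second model (plain POSIX C, with a hypothetical payload), the example below passes data between two processes through a pipe. Because each process has its own address space, a raw pointer would be meaningless across the boundary, so the data must travel through an IPC channel.

[code]
/* Sketch: two processes with separate address spaces exchanging data
 * through a pipe, one of the IPC mechanisms mentioned above. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void)
{
    int fds[2];
    if (pipe(fds) == -1) { perror("pipe"); return 1; }

    pid_t pid = fork();
    if (pid == 0) {                        /* child: the "producer" process */
        close(fds[0]);
        const char *msg = "route update";  /* illustrative payload */
        write(fds[1], msg, strlen(msg) + 1);
        close(fds[1]);
        _exit(0);
    }

    close(fds[1]);                         /* parent: the "consumer" process */
    char buf[64] = {0};
    read(fds[0], buf, sizeof(buf) - 1);
    printf("received via IPC: %s\n", buf);
    close(fds[0]);
    wait(NULL);
    return 0;
}
[/code]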

Virtual Memory/Preemptive Scheduling Programming Model

Virtual memory with preemptive scheduling is a great design choice for properly constructed functional blocks,
where interaction between different modules is limited and well defined. This technique is one of the main benefits
of the second-generation OS designs and underpins the stability and robustness of contemporary network operating
systems. However, it has its own drawbacks.
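
One of those benefits, per-process memory isolation, can be shown with a small sketch (assuming a POSIX system; the variable name is illustrative). After fork(), each process owns a private copy of the address space, so a stray write in one process cannot corrupt the other.

[code]
/* Sketch: per-process virtual memory after fork().
 * The child's modification never reaches the parent's copy. */
#include <stdio.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void)
{
    int route_count = 100;               /* illustrative state */

    pid_t pid = fork();
    if (pid == 0) {                      /* child process */
        route_count = 0;                 /* a "runaway" change stays local */
        printf("child : route_count = %d\n", route_count);
        _exit(0);
    }

    wait(NULL);                          /* parent waits for the child */
    printf("parent: route_count = %d\n", route_count);   /* still 100 */
    return 0;
}
[/code]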

Generic Kernel Design

Kernels normally do not provide any immediately perceived or revenue-generating functionality. Instead, they
perform housekeeping activities such as memory allocation and hardware management and other system-level
tasks. Kernel threads are likely the most frequently executed tasks in the entire system. Consequently, they have to be robust
and run with minimal impact on other processes.
In the past, kernel architecture largely defined the operating structure of the entire system with respect to memory
management and process scheduling. Hence, kernels were considered important differentiators among competing
designs.
Historically, the disputes between the proponents and opponents of lightweight versus complex kernel architectures
came to a practical end when most operating systems became functionally decoupled from their respective kernels.
Once software distributions became available with alternate kernel configurations, researchers and commercial
developers were free to experiment with different designs.
Network Operating Systems


Contents and copyrights

• Digital House appliances Forum, 2002
• Digital information is easy to copy
– Networks enable sharing of that information
• Digital copyright protection
– CSS (Content Scramble System)
– AES (Advanced Encryption Standard)
– CPPM (Content Protection for Prerecorded Media)
– CPRM (Content Protection for Recordable Media)
– DTCP (Digital Transmission Content Protection)
– HDCP (High-bandwidth Digital Content Protection)

What is an Operating system?

• Two kinds of software
– Application Software
• Word processors, database managers, compilers, web browsers
– System Software
• The operating system itself
• The bridge between the hardware and the users
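
A minimal illustration of that bridging role, using the POSIX write() call as one concrete example: the application never drives the display hardware itself; it asks the operating system to do so through a system call.

[code]
/* The application does not touch the terminal hardware directly;
 * the write() system call hands the request to the operating system. */
#include <string.h>
#include <unistd.h>

int main(void)
{
    const char *msg = "hello from user space\n";
    write(STDOUT_FILENO, msg, strlen(msg));   /* the kernel drives the device */
    return 0;
}
[/code]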

Resource management

• Memory management
• Device management
– Printer
– Hard drive
– Display
• Process management
• Processor management

History of UNIX

• Development of TSS (time-sharing systems): Multics
– TSS development: AT&T, GE, MIT
• 1976: Bell Labs UNIX Version 6
– Minicomputer
– DEC PDP-11 (16-bit, 256 KB)
– Small TSS
– Source code freely available
– Abstraction based on the file system

How Does a PC Boot Up?

• "Booting" comes from bootstrapping: pulling yourself up by your own bootstraps
• POST
– Power On Self Test
• When you power on the computer
• Clears the CPU registers
• Sets the CPU program counter to F000
• Reads the program stored at F000 from the BIOS
• This program checks the basic system components:
• Checks the system bus
• Checks its own memory
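
The sequence above can be mirrored in a toy C sketch (the steps follow the slide; the F000 value and the printed checks are a simulation, not real firmware behavior):

[code]
/* Toy simulation of the POST sequence described above.
 * Real firmware runs from ROM; this only walks through the listed steps. */
#include <stdio.h>

int main(void)
{
    unsigned int program_counter = 0;    /* "cleared" CPU register */

    /* Power on: point the CPU at the BIOS entry segment (per the slide). */
    program_counter = 0xF000;
    printf("program counter set to %04X\n", program_counter);

    /* The BIOS check program then tests the basic subsystems in order. */
    const char *checks[] = { "system bus", "BIOS's own memory" };
    for (unsigned int i = 0; i < sizeof(checks) / sizeof(checks[0]); i++)
        printf("POST check %u: %s ... OK\n", i + 1, checks[i]);

    return 0;
}
[/code]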

Post-installation configuration

• Pre-packaged software installation
• Password and user configuration
• Time zone configuration
• X Window configuration
• Startup service configuration