01-01-2013, 02:33 PM
Processor Management
Overview
This term paper explains how the Processor Manager manages and allocates a single central processing unit (CPU) to execute incoming jobs. The paper begins with an overview of single-user systems and a foundation of terms and concepts used in processor management. Multi-core technologies are then briefly reviewed, followed by a discussion and comparison of job scheduling and process scheduling. The role of the Process Scheduler is explained in more detail, including a presentation of job and process status concepts, the process control block (PCB) data structure, and queuing. Process scheduling policies are presented next, with an explanation of the advantages and disadvantages of preemptive versus non-preemptive process scheduling algorithms, along with the goals of process scheduling policies. Six common process scheduling algorithms are explained in detail, followed by four cases exemplifying how jobs move through the system. Finally, the paper discusses the role of internal interrupts and the tasks performed by the interrupt handler.
Process
A process is a program in execution: the name given to a program instance that has been loaded into memory and is managed by the operating system. A process's address space is generally organized into code, data (static/global), heap, and stack segments, and every executing process works with registers (the program counter and working registers) and a call stack (referenced through the stack pointer). A program by itself is not a process. A program is a passive entity, such as a file containing a list of instructions stored on disk (called an executable file), whereas a process is an active entity with a program counter specifying the next instruction to execute and a set of associated resources. A program becomes a process when its executable file is loaded into memory.
Process Scheduling
The objective of multiprogramming is to have some process running at all times, to maximize CPU utilization. The objective of time sharing is to switch the CPU among processes so frequently that users can interact with each program while it is running. To meet these objectives, the process scheduler selects an available process for execution on the CPU.
Scheduling Queues
As processes enter the system, they are placed in the job queue, which consists of all processes in the system. The processes that are residing in main memory, ready and waiting to execute, are kept on a list called the ready queue. This queue is generally stored as a linked list: a ready-queue header contains pointers to the first and final PCBs in the list, and each PCB includes a pointer field that points to the next PCB in the ready queue.
Process Termination
A process terminates when it finishes executing its final statement and asks the operating system to delete it by using the exit system call. The child process may return data (output) to the parent process. All the resources of the process, including physical and virtual memory, open files, and I/O buffers, are deallocated by the operating system.
A parent may terminate the execution of one of its children for a variety of reasons, such as:
• The child has exceeded its usage of some of the resources that it has been allocated.
• The task assigned to the child is no longer required.
• The parent is exiting, and the operating system does not allow a child to continue if its parent terminates. On such systems, if a process terminates (either normally or abnormally), then all its children must also be terminated. This phenomenon, referred to as cascading termination, is normally initiated by the operating system.