16-08-2012, 02:17 PM
Thread
Thread (computing) - Wikipedia, the free encyclopedia.pdf (Size: 239.85 KB / Downloads: 40)
How threads differ from processes
Threads differ from traditional multitasking operating system processes in that:
- processes are typically independent, while threads exist as subsets of a process
- processes carry considerably more state information than threads; multiple threads within a process share process state as well as memory and other resources
- processes have separate address spaces, whereas threads share their address space
- processes interact only through system-provided inter-process communication mechanisms
Context switching between threads in the same process is typically faster than context switching between
processes.
Multithreading
Multithreading as a widespread programming and execution model allows multiple threads to exist within the
context of a single process. These threads share the process' resources but are able to execute independently. The
threaded programming model provides developers with a useful abstraction of concurrent execution. However,
perhaps the most interesting application of the technology is when it is applied to a single process to enable
parallel execution on a multiprocessor system.
A multithreaded program can therefore run faster on computer systems that have multiple CPUs, CPUs with multiple cores, or across a cluster of machines, because its threads naturally lend themselves to truly concurrent execution. In such cases, the programmer must be careful to avoid race conditions and other non-intuitive behaviors. For data to be manipulated correctly, threads often need to rendezvous in time so that they process the data in the correct order. Threads may also require mutually exclusive operations (often implemented using semaphores) to prevent shared data from being modified simultaneously, or read while in the process of being modified. Careless use of such primitives can lead to deadlocks.
Thread and fiber issues
Concurrency and data structures
Threads in the same process share the same address space. This allows concurrently running code to couple tightly
and conveniently exchange data without the overhead or complexity of IPC. When shared between threads,
however, even simple data structures become prone to race hazards if they require more than one CPU instruction
to update: two threads may end up attempting to update the data structure at the same time and find it unexpectedly
changing underfoot. Bugs caused by race hazards can be very difficult to reproduce and isolate.
To prevent this, threading APIs offer synchronization primitives such as mutexes to lock data structures against
concurrent access. On uniprocessor systems, a thread running into a locked mutex must sleep and hence trigger a
context switch. On multi-processor systems, the thread may instead poll the mutex in a spinlock. Both of these may
sap performance and force processors in SMP systems to contend for the memory bus, especially if the granularity
of the locking is fine.
I/O and scheduling
User thread or fiber implementations are typically entirely in userspace. As a result, context switching between user
threads or fibers within the same process is extremely efficient because it does not require any interaction with the
kernel at all: a context switch can be performed by locally saving the CPU registers used by the currently executing
user thread or fiber and then loading the registers required by the user thread or fiber to be executed. Since
scheduling occurs in userspace, the scheduling policy can be more easily tailored to the requirements of the
program's workload.