History of Microprocessors

A microprocessor (sometimes abbreviated µP) is a digital electronic component with transistors on a single semiconductor integrated circuit (IC). One or more microprocessors typically serve as a central processing unit (CPU) in a computer system or handheld device.

Microprocessors made possible the advent of the microcomputer. Before this, electronic CPUs were typically built from bulky discrete switching devices (and later from small-scale integrated circuits) containing the equivalent of only a few transistors. By integrating the processor onto one, or a very few, large-scale integrated circuit packages (containing the equivalent of thousands or millions of discrete transistors), the cost of processing power was greatly reduced. Since the mid-1970s, the microprocessor has become the most prevalent implementation of the CPU, almost completely replacing all other forms.
The evolution of microprocessors has closely followed Moore's Law in its steady increase in performance over the years. This law suggests that the complexity of an integrated circuit, with respect to minimum component cost, doubles every 24 months, and the dictum has generally held true since the early 1970s. From their humble beginnings as the drivers for calculators, the continued increase in their power has led microprocessors to dominate every other form of computer; every system from the largest mainframe to the smallest handheld now uses a microprocessor at its core.
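
To make the 24-month doubling rate concrete, the short C sketch below projects a transistor count forward under that rule. The starting figure of 2,300 transistors and the 1971 start year (roughly the Intel 4004) are illustrative assumptions, not data from this report.

    #include <math.h>
    #include <stdio.h>

    int main(void) {
        const double start_count = 2300.0; /* assumed: approximate 4004 transistor count */
        const int start_year = 1971;       /* assumed: year of the 4004's introduction */

        /* Moore's Law as stated above: one doubling per 24 months,
         * i.e. count = start_count * 2^(months / 24). */
        for (int year = start_year; year <= 1991; year += 4) {
            double months = (year - start_year) * 12.0;
            double count = start_count * pow(2.0, months / 24.0);
            printf("%d: ~%.0f transistors\n", year, count);
        }
        return 0;
    }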

The Birth of the Microprocessor

Before we describe the birth of the microprocessor, we need to briefly introduce the integrated circuit that made the microprocessor possible. The transistor, invented in 1947, works by controlling the flow of electrons through a structure embedded in silicon. This structure is composed of nothing more than adjoining regions of silicon with different concentrations of impurities. These impurities are atoms of elements such as boron, phosphorus, and arsenic. By combining silicon with oxygen you get silicon dioxide, SiO2, an excellent insulator that allows you to separate regions of silicon. By evaporating (or sputtering) aluminum onto the surface of a silicon chip, you can create contacts and connectors. By putting all these elements together, several transistors can be combined to create a simple functional circuit rather than a single component. This is the integrated circuit (IC), whose invention is attributed to Jack Kilby of Texas Instruments and Robert Noyce of Fairchild. The first practical ICs were fabricated in 1959 at Fairchild and Texas Instruments, and Fairchild began their commercial manufacture in 1961 [Tredennick96].

Structure of the 4004

The 4004 was a 4-bit chip that used BCD arithmetic (i.e., it processed one BCD digit at a time). It had 16 general-purpose 4-bit registers, a 4-bit accumulator, and a four-level 12-bit pushdown address stack that held the program counter and three subroutine return addresses. Its logic included both a binary and a BCD ALU. It also featured a pin that could be tested by a jump-conditional instruction in order to poll external devices such as keyboards; this pin was replaced by a more general-purpose interrupt request input in later microprocessors.
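
As a rough illustration of the pushdown stack described above, the C sketch below models a four-level file of 12-bit registers in which one slot acts as the program counter and the other three hold subroutine return addresses. The jms/bbl names echo the 4004's call and return mnemonics, but the model itself is an assumption made for illustration, not code from this report.

    #include <stdint.h>
    #include <stdio.h>

    #define ADDR_MASK 0x0FFFu /* addresses are 12 bits wide */

    typedef struct {
        uint16_t slot[4]; /* four 12-bit registers; slot[sp] holds the current PC */
        int sp;
    } AddrStack;

    /* Call: the current slot keeps the return address, and the pointer
     * advances to a fresh slot loaded with the target. A fourth nested
     * call wraps around and overwrites the oldest return address. */
    static void jms(AddrStack *s, uint16_t target) {
        s->sp = (s->sp + 1) & 3;
        s->slot[s->sp] = target & ADDR_MASK;
    }

    /* Return: step the pointer back so the saved return address becomes
     * the program counter again. */
    static void bbl(AddrStack *s) {
        s->sp = (s->sp + 3) & 3;
    }

    int main(void) {
        AddrStack s = { .slot = { 0x100 }, .sp = 0 };
        jms(&s, 0x200);                        /* call a subroutine at 0x200 */
        jms(&s, 0x300);                        /* nested call */
        printf("PC = 0x%03X\n", s.slot[s.sp]); /* 0x300 */
        bbl(&s);
        bbl(&s);
        printf("PC = 0x%03X\n", s.slot[s.sp]); /* back at 0x100 */
        return 0;
    }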
The 4004 was followed, remarkably rapidly, by the 8-bit 8008 microprocessor. In fact, the 8008 was originally intended for a CRT application and was developed concurrently with the 4004. By using some of the production techniques developed for the 4004, Intel was able to manufacture the 8008 as early as March 1972.