17-03-2014, 11:03 AM
GRAPHICS PROCESSING UNIT
GRAPHICS PROCESSING.docx (Size: 330.37 KB / Downloads: 14)
INTRODUCTION
There are various applications that require a 3D world to be simulated as realistically as possible on a computer screen. These include 3D animations in games, movies and other real world simulations. It takes a lot of computing power to represent a 3D world due to the great amount of information that must be used to generate a realistic 3D world and the complex mathematical operations that must be used to project this 3D world onto a computer screen. In this situation, the processing time and bandwidth are at a premium due to large amounts of both computation and data.
The functional purpose of a GPU, then, is to provide separate, dedicated graphics resources (a graphics processor and its own memory) that relieve the main system resources, namely the central processing unit, main memory, and the system bus, of some of the graphics burden; these would otherwise become saturated with graphical operations and I/O requests. The abstract goal of a GPU is to represent a 3D world as realistically as possible, so GPUs are designed to provide additional computational power customized specifically for these 3D tasks.
A Graphics Processing Unit (GPU) is a microprocessor designed specifically for processing 3D graphics. It is built with integrated transform, lighting, triangle setup/clipping, and rendering engines, capable of handling millions of math-intensive operations per second. GPUs form the heart of modern graphics cards, relieving the central processing unit (CPU) of much of the graphics processing load. They allow products such as desktop PCs, portable computers, and game consoles to process real-time 3D graphics that only a few years ago were available only on high-end workstations.
History and Standards
The first graphics cards, introduced by IBM in August 1981, were monochrome cards designated Monochrome Display Adapters (MDAs). Displays using these cards were typically text-only, with green or white text on a black background. The Hercules Graphics Card (HGC) later added high-resolution monochrome graphics, while color for IBM-compatible computers arrived with the 4-color Color Graphics Adapter (CGA), followed by the 16-color Enhanced Graphics Adapter (EGA). During the same period, other computer manufacturers, such as Commodore, were introducing computers with built-in graphics adapters that could handle varying numbers of colors.
When IBM introduced the Video Graphics Array (VGA) in 1987, a new graphics standard came into being. A VGA display could show 256 colors (out of a possible 262,144-color palette) at 320x200, 16 colors at 640x480, and text at 720x400. Perhaps the most interesting difference between VGA and the preceding formats is that VGA was analog, whereas display signals had been digital up to that point. Going from digital to analog may seem like a step backward, but it allowed the signal to vary continuously, providing more possible combinations than the strict on/off nature of a digital signal.
Over the years, VGA gave way to Super Video Graphics Array (SVGA). SVGA cards were based on VGA, but each card manufacturer added resolutions and increased color depth in different ways. Eventually, the Video Electronics Standards Association (VESA) agreed on a standard implementation of SVGA that provided up to 16.8 million colors and 1280x1024 resolutions. Most graphics cards available today support Ultra Extended Graphics Array (UXGA). UXGA can support a palette of up to 16.8 million colors and resolutions up to 1600x1200 pixels.
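To make the resolution and color-depth figures above concrete, here is a quick back-of-the-envelope calculation (my own sketch, not from the original text) of how much frame-buffer memory a single UXGA frame needs at true color:

```python
# Frame-buffer size for one UXGA frame at true color.
# 16.8 million colors = 2**24, i.e. 24 bits = 3 bytes per pixel.
width, height = 1600, 1200
bytes_per_pixel = 3

frame_bytes = width * height * bytes_per_pixel
print(frame_bytes)                     # 5760000 bytes
print(round(frame_bytes / 2**20, 2))   # about 5.49 MB per frame
```

In practice cards store pixels as 32-bit values for alignment, and double-buffering plus a Z-buffer multiply this figure further, which is one reason dedicated graphics memory matters.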
AGP Memory Allocation
During AGP memory initialization, the operating system allocates AGP memory in main (physical) memory as 4 KB pages. These pages are usually discontiguous, but the graphics controller needs to see contiguous memory. A translation mechanism called the GART (Graphics Address Remapping Table) makes the discontiguous pages appear contiguous by remapping virtual addresses to physical addresses in main memory.
A block of contiguous address space, called the aperture, is allocated above the top of physical memory. The graphics card accesses the aperture as if it were main memory, and the GART remaps these virtual aperture addresses to physical addresses in main memory. Such virtual addresses are used to access main memory, the local frame buffer, and AGP memory.
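The GART described above is essentially a one-level page table. The sketch below (hypothetical page numbers; 4 KB pages as stated in the text) shows how contiguous aperture addresses can be remapped onto scattered physical pages:

```python
PAGE_SIZE = 4096  # AGP memory is allocated in 4 KB pages

# Hypothetical GART: aperture page index -> physical page number.
# The physical pages are deliberately discontiguous.
gart = [7, 2, 9, 4]

def aperture_to_physical(aperture_addr):
    """Translate a contiguous aperture address to a physical address."""
    page_index = aperture_addr // PAGE_SIZE   # which aperture page
    offset = aperture_addr % PAGE_SIZE        # offset within the page
    return gart[page_index] * PAGE_SIZE + offset

# Two addresses adjacent in the aperture land in unrelated physical pages:
print(hex(aperture_to_physical(0x0FFF)))  # 0x7fff (end of physical page 7)
print(hex(aperture_to_physical(0x1000)))  # 0x2000 (start of physical page 2)
```

The graphics controller only ever sees the contiguous aperture addresses; the scattering across physical memory is invisible to it.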
How is 3D acceleration done?
Creating a complete 3D scene involves several steps, carried out by different parts of the GPU, each of which is assigned a particular job. During 3D rendering, different types of data travel across the bus; the two most common are texture and geometry data. Geometry data is the "infrastructure" on which the rendered scene is built: polygons (usually triangles) represented by vertices, the end-points that define each polygon. Texture data provides much of the detail in a scene; textures can be used to simulate more complex geometry, add lighting, and give an object a simulated surface.
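As an illustration of the geometry data just described (the names and layout here are my own, not from the text), a triangle is simply three vertices, each a point in 3D space, often carrying a texture coordinate that says where it samples the texture image:

```python
from dataclasses import dataclass

@dataclass
class Vertex:
    # Position in 3D space
    x: float
    y: float
    z: float
    # Texture coordinate into the texture image
    u: float = 0.0
    v: float = 0.0

# One triangle: three vertices (end-points) define the polygon.
triangle = (
    Vertex(0.0, 0.0, 0.0, u=0.0, v=0.0),
    Vertex(1.0, 0.0, 0.0, u=1.0, v=0.0),
    Vertex(0.0, 1.0, 0.0, u=0.0, v=1.0),
)
```

A scene is then just a long list of such triangles, which is why geometry bandwidth across the bus adds up quickly.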
Many newer graphics chips have an accelerated Transform and Lighting (T&L) unit, which takes a 3D scene's geometry and transforms it between coordinate spaces. It also performs lighting calculations, again relieving the CPU of these math-intensive tasks.
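A minimal sketch of what a T&L unit computes (pure Python, illustrative only): a 4x4 matrix moves a vertex between coordinate spaces, and a diffuse (Lambertian) term gives a simple per-vertex lighting value:

```python
def transform(m, v):
    """Apply a 4x4 row-major matrix m to a 3D point v (w assumed to be 1)."""
    x, y, z = v
    return tuple(m[i][0]*x + m[i][1]*y + m[i][2]*z + m[i][3] for i in range(3))

def diffuse(normal, light_dir):
    """Lambertian lighting: intensity = max(0, N . L) for unit vectors."""
    dot = sum(n * l for n, l in zip(normal, light_dir))
    return max(0.0, dot)

# Translate a vertex by (1, 2, 3): one step of a world-space transform.
translate = [[1, 0, 0, 1],
             [0, 1, 0, 2],
             [0, 0, 1, 3],
             [0, 0, 0, 1]]
print(transform(translate, (0.0, 0.0, 0.0)))  # (1.0, 2.0, 3.0)

# A surface facing straight up, lit from directly above: full intensity.
print(diffuse((0, 0, 1), (0, 0, 1)))  # 1.0
```

A real T&L unit performs these multiply-accumulate operations in parallel hardware for millions of vertices per second, which is exactly the kind of repetitive math the CPU is glad to offload.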
Following the T&L unit on the chip is the triangle setup engine. It takes the scene's transformed geometry and prepares it for the next stages of rendering by converting it into a form the pixel engine can process. The pixel engine applies the assigned texture values to each pixel, giving each pixel the correct color value so that it appears to have surface texture rather than looking flat and smooth. After a pixel has been rendered, it is checked for visibility using its depth value, or Z value.
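The depth test just mentioned can be sketched as follows: a Z-buffer holds the nearest depth seen so far at each pixel, and a newly rendered pixel is kept only if it is closer than what is already there. (This is a simplified model of my own; it assumes a smaller Z means nearer to the viewer.)

```python
WIDTH, HEIGHT = 4, 4
INF = float("inf")

z_buffer = [[INF] * WIDTH for _ in range(HEIGHT)]          # nearest depth so far
color_buffer = [[(0, 0, 0)] * WIDTH for _ in range(HEIGHT)]

def write_pixel(x, y, z, color):
    """Keep the pixel only if it is in front of what is already there."""
    if z < z_buffer[y][x]:
        z_buffer[y][x] = z
        color_buffer[y][x] = color
        return True    # pixel was visible
    return False       # pixel was hidden behind earlier geometry

write_pixel(1, 1, 5.0, (255, 0, 0))            # red surface at depth 5
visible = write_pixel(1, 1, 9.0, (0, 255, 0))  # green surface behind it
print(visible, color_buffer[1][1])             # False (255, 0, 0)
```

Because the test happens per pixel, surfaces can be drawn in any order and hidden parts are discarded automatically, with no need to sort the geometry first.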