
An Intelligence Cache System


Introduction

Cache memory is a key mechanism for improving overall system performance. A cache exploits the locality inherent in the reference stream of a typical application. Two primary types of locality exist, and the degree to which each can be exploited depends on the program's input characteristics. Temporal locality relies on the higher probability that recently accessed data will be accessed again in the near future. Spatial locality refers to the tendency for adjacent or nearby locations to be referenced close together in time.
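As a minimal sketch of the two locality types, the following C fragment contrasts them; the array name, sizes, and loop counts are illustrative choices, not values from the design itself.

#include <stdio.h>

#define N 1024

int main(void)
{
    static int data[N];
    long sum = 0;

    /* Spatial locality: adjacent elements are touched one after another,
     * so one cache block fetch services several of the following accesses. */
    for (int i = 0; i < N; i++)
        sum += data[i];

    /* Temporal locality: the same few locations are reused repeatedly,
     * so they are likely still cache-resident when re-referenced. */
    for (int iter = 0; iter < 1000; iter++)
        sum += data[0] + data[1];

    printf("%ld\n", sum);
    return 0;
}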

Selective-Mode Intelligence Cache System

The motivation for the design, the architectural model of the SMI cache system, and its operation in the context of hardware prefetching are described below.

Motivation:

Common design objectives for a cache are to improve utilization of the temporal and spatial locality inherent in applications. However, no single cache organization exploits both temporal and spatial locality optimally, because their characteristics conflict: increasing the block size to capture more spatial locality reduces the number of cache blocks available for exploiting temporal locality. Conventional cache designs therefore compromise by selecting an adequate block size. The cache's lack of adaptability to reference patterns with different types of locality poses a further drawback. A fixed block size results in suboptimal overall access time and bandwidth utilization because it cannot match the spatial and temporal locality that varies across and within programs. A cache system that provides two caches with different configurations is an effective structure for overcoming these structural drawbacks.

SMI Cache System structure:

The direct mapped cache is the main cache; its organization is similar to that of a traditional direct mapped cache, but with a smaller block size. The spatial buffer is designed so that each entry is a collection of several banks, each of which is the size of a block in the direct mapped cache. The tag space of the buffer is a content addressable memory (CAM). Each small block in a bank includes a hit bit (H) to distinguish referenced blocks from unreferenced blocks. Each large block entry in the spatial buffer further has a prefetch bit (P), which directs the operation of the prefetch controller; the prefetch bits are used to dynamically generate prefetch operations.
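To make the two structures concrete, here is one possible C data layout for them. The block size, bank count, and entry counts are assumptions chosen for the sketch, not parameters fixed by the text.

#include <stdint.h>
#include <stdbool.h>

#define SMALL_BLOCK_BYTES 8      /* block size of the direct mapped cache   */
#define BANKS_PER_ENTRY   4      /* small blocks per spatial-buffer entry   */
#define DM_CACHE_BLOCKS   512
#define SB_ENTRIES        16

/* One block of the main (direct mapped) cache. */
struct dm_block {
    uint32_t tag;
    bool     valid;
    bool     dirty;
    uint8_t  data[SMALL_BLOCK_BYTES];
};

/* One large-block entry of the fully associative spatial buffer. In
 * hardware its tag would reside in CAM so all entries match in parallel. */
struct sb_entry {
    uint32_t tag;                      /* held in CAM in hardware          */
    bool     valid;
    bool     prefetch;                 /* P bit: drives prefetch controller */
    bool     hit[BANKS_PER_ENTRY];     /* H bit per small block (bank)     */
    uint8_t  banks[BANKS_PER_ENTRY][SMALL_BLOCK_BYTES];
};

struct smi_cache {
    struct dm_block main_cache[DM_CACHE_BLOCKS];
    struct sb_entry spatial_buffer[SB_ENTRIES];
};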

Case of Cache Hits:

On every memory access, a hit may occur in either the direct mapped cache or the spatial buffer. First, consider a hit in the direct mapped cache. If a read access hits in the direct mapped cache, the requested data item is transmitted to the CPU without delay. If a write access hits, the write operation is performed and the dirty bit for the block is set.
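The hit paths just described can be sketched in C, building on the struct definitions above. The address decomposition and helper name are placeholders invented for the example; the spatial-buffer lookup and miss handling would follow in the full design.

#include <stdbool.h>
#include <stdint.h>
#include <string.h>

enum access_type { ACCESS_READ, ACCESS_WRITE };

/* Returns true on a direct-mapped-cache hit, false so the caller can
 * go on to probe the spatial buffer. */
static bool dm_access(struct smi_cache *c, uint32_t addr,
                      enum access_type type, uint8_t *data)
{
    uint32_t index = (addr / SMALL_BLOCK_BYTES) % DM_CACHE_BLOCKS;
    uint32_t tag   = addr / (SMALL_BLOCK_BYTES * DM_CACHE_BLOCKS);
    struct dm_block *b = &c->main_cache[index];

    if (!b->valid || b->tag != tag)
        return false;                  /* miss: check the spatial buffer */

    if (type == ACCESS_READ) {
        /* Read hit: forward the block to the CPU without delay. */
        memcpy(data, b->data, SMALL_BLOCK_BYTES);
    } else {
        /* Write hit: update the block and set its dirty bit. */
        memcpy(b->data, data, SMALL_BLOCK_BYTES);
        b->dirty = true;
    }
    return true;
}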

Conclusion

To design a simple but high-performance cache system at low cost, a new caching mechanism is designed that exploits both types of locality effectively and adaptively: a direct mapped cache with a small block size for exploiting temporal locality, and a fully associative spatial buffer with a large block size for exploiting spatial locality. An intelligent hardware-based prefetching mechanism is used to maximize the effect of spatial locality.