30-06-2012, 12:27 PM
Architecture and Algorithm for a Cooperative Cache in Wireless P2P Networks
Abstract
Most research on ad hoc networks focuses on routing, and not much work has been done on data access. Caching is a common technique for improving the performance of data access: it can improve the efficiency of information access in a wireless network by reducing access latency and bandwidth consumption. Cooperative caching allows the sharing and coordination of cached data among multiple nodes. In this paper, we also discuss data pipelines, which can be used to reduce the end-to-end delay, and we address pipeline performance in the presence of MAC-layer interference. We present our design and implementation of cooperative caching in wireless P2P networks, including techniques for finding the best place to cache the data.
INTRODUCTION
Wireless P2P networks, such as ad hoc networks, mesh networks, and sensor networks, have received considerable attention due to their potential applications in civilian and military environments, such as disaster recovery efforts and group conferences. Most previous research on ad hoc networks focuses on the development of dynamic routing protocols that can efficiently find routes between two communicating nodes. Although routing is an important issue in ad hoc networks, other issues such as information (data) access are also very important. We use the following example to motivate our research.
BASIC COOPERATIVE CACHE SCHEMES
Cache Path, Cache Data, and Hybrid Cache Concepts
Fig. 2 illustrates the Cache Path concept. Suppose node N1 requests a data item di from N0. When N3 forwards di to N1, N3 learns that N1 now has a copy of the data. Later, if N2 requests di, N3 knows that the data source N0 is three hops away, whereas N1 is only one hop away. Thus, N3 forwards the request to N1 instead of N4. Many routing algorithms (such as AODV [20] and DSR [12]) provide the hop-count information between the source and destination. Caching the data path for each data item reduces bandwidth and power consumption because nodes can obtain the data using fewer hops. However, mapping data items to caching nodes increases routing overhead, and the following techniques are used to improve Cache Path's performance.
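The Cache Path forwarding decision above can be sketched as a simple hop-count comparison. This is a minimal illustration, not the paper's implementation: the function name and the `hops` table are assumptions, and in practice the hop counts would come from the routing protocol (e.g., AODV or DSR).

```python
# Hypothetical sketch of the Cache Path forwarding decision: forward a
# request to a node known to cache the item if it is closer than the
# original data source.

def choose_next_destination(data_source, cached_node, hops):
    """Return the node to which the request should be forwarded.

    data_source -- the original data server (e.g., "N0")
    cached_node -- a node known to hold a cached copy (e.g., "N1"), or None
    hops        -- dict mapping node id -> hop count from the current node
    """
    if cached_node is not None and hops[cached_node] < hops[data_source]:
        return cached_node          # the cached copy is closer
    return data_source              # fall back to the real data source

# Example from Fig. 2: from N3, N0 is three hops away and N1 one hop away,
# so N3 forwards the request for di to N1.
hops_from_n3 = {"N0": 3, "N1": 1}
print(choose_next_destination("N0", "N1", hops_from_n3))  # N1
```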
The Asymmetric Approach
Our asymmetric caching approach has three phases:
Phase 1: Forwarding the request message. After a request message is generated by the application, it is passed down to the cache layer. To send the request message to the next hop, the cache layer wraps the original request message with a new destination address, which is the next hop toward the data server (the real destination). Here, we assume that the cache layer can access the routing table to find the next hop toward the data server.
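The wrapping step in Phase 1 can be sketched as follows. This is a hedged illustration only: the field names (`dest`, `real_dest`) and the dictionary-based routing table are assumptions for clarity, not the paper's actual packet format.

```python
# Hypothetical sketch of Phase 1: the cache layer re-addresses the
# application's request to the next hop toward the data server, while
# preserving the real destination.

def wrap_request(request, data_server, routing_table):
    """Wrap a request with a per-hop destination address.

    request       -- dict holding at least the requested item id
    data_server   -- node id of the real destination
    routing_table -- dict mapping destination -> next hop
    """
    next_hop = routing_table[data_server]   # cache layer consults the routing table
    wrapped = dict(request)
    wrapped["dest"] = next_hop              # new (per-hop) destination address
    wrapped["real_dest"] = data_server      # the real destination is preserved
    return wrapped

pkt = wrap_request({"item": "d1"}, data_server="N0",
                   routing_table={"N0": "N3"})
print(pkt)  # {'item': 'd1', 'dest': 'N3', 'real_dest': 'N0'}
```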
Cache Placement Algorithms
Optimal cache placement: We first define a new term called aggregate delay, which includes the time to serve the current client request and the delay to serve future data requests coming from the same path. We can also assign different weights to these two parts of the delay based on the optimization objective, e.g., giving less weight to the future access delay if the current request has a strict delay requirement. For simplicity, we assign equal weights to the current and future access delays in this paper. Below, we formally define the optimal cache placement problem.
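The aggregate-delay metric described above amounts to a weighted sum of the two delay components. A minimal sketch, with the equal-weight default used in the paper; the delay values themselves would come from the network model and are simply inputs here:

```python
# Sketch of the aggregate-delay metric: a weighted sum of the delay to
# serve the current request and the expected delay of future requests
# from the same path. Equal weights correspond to the paper's choice.

def aggregate_delay(current_delay, future_delay, w_current=1.0, w_future=1.0):
    """Weighted sum of current-request delay and future-access delay."""
    return w_current * current_delay + w_future * future_delay

# Equal weights (as in the paper):
print(aggregate_delay(4.0, 6.0))                 # 10.0
# Strict latency requirement: de-emphasize future accesses.
print(aggregate_delay(4.0, 6.0, w_future=0.5))   # 7.0
```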
A greedy cache placement algorithm.
To find the optimal cache placement, the data server would need to compute the aggregate delay for every possible cache placement set. Since there are 2^n possible cache placement sets (where n is the length of the forwarding path), exhaustive search is too expensive. Therefore, we propose a greedy heuristic to efficiently compute the optimal cache placement. Let P*(k) be the optimal cache placement for a forwarding path when only the nearest k hops from the data server are considered as possible cache nodes.
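The idea of avoiding the 2^n subsets can be illustrated with a generic greedy sketch: walk the forwarding path hop by hop (nearest hops from the data server first, mirroring the P*(k) formulation) and keep a node as a cache point only if it lowers the aggregate delay. The cost function passed in is assumed for illustration; the paper's exact recurrence for P*(k) may differ.

```python
# Hedged sketch of a greedy cache-placement heuristic in the spirit of
# the text: instead of evaluating all 2^n placement sets, consider path
# nodes one at a time and keep each one only if it reduces the cost.

def greedy_placement(path, aggregate_delay_of):
    """path               -- nodes ordered from the data server outward
    aggregate_delay_of -- callable mapping a placement set to its delay
    """
    placement = set()
    best = aggregate_delay_of(placement)
    for node in path:                          # nearest hops first
        trial = aggregate_delay_of(placement | {node})
        if trial < best:                       # keep the node only if it helps
            placement.add(node)
            best = trial
    return placement

# Toy cost model: caching at N2 saves 3 units of delay; every cached
# copy adds 1 unit of caching overhead.
cost = lambda s: 10 - (3 if "N2" in s else 0) + len(s)
print(greedy_placement(["N1", "N2", "N3"], cost))  # {'N2'}
```

Only n + 1 cost evaluations are needed here, versus 2^n for exhaustive search; the trade-off is that a greedy pass cannot revisit earlier decisions.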
CONCLUSIONS
In this paper, we presented a design and implementation of cooperative caching in wireless P2P networks and proposed solutions to find the best place to cache the data. In our asymmetric approach, data request packets are transmitted to the cache layer on every node; however, the data reply packets are only transmitted to the cache layer on the intermediate nodes that need to cache the data. This solution not only reduces the overhead of copying data between user space and kernel space, but also allows data pipelining to reduce the end-to-end delay. We have developed a prototype to demonstrate the advantages of the asymmetric approach.