12-06-2012, 02:20 PM
Minimizing Retrieval Latency for Content Clouds
Abstract
Content cloud systems, e.g., CloudFront [1] and CloudBurst [2], in which content items are retrieved by end users from the edge nodes of the cloud, are becoming increasingly popular. The retrieval latency in content clouds depends on content availability in the edge nodes, which in turn depends on the caching policy at the edge nodes. In case of local content unavailability (i.e., a cache miss), edge nodes resort to source selection strategies to retrieve the content items either vertically from the central server or horizontally from other edge nodes. Consequently, managing latency in content clouds needs to take into account several interrelated issues: asymmetric bandwidth and caching capacity for both source types, as well as edge-node heterogeneity in terms of the caching policies and source selection strategies applied. In this paper, we study the problem of minimizing the retrieval latency considering both the caching and retrieval capacity of the edge nodes and the server simultaneously. We derive analytical models to evaluate the content retrieval latency under two source selection strategies, i.e., Random and Shortest-Queue, and three caching policies: selfish, collective, and a novel caching policy that we call the adaptive caching policy. Our analysis allows the quantification of the interrelated performance impacts of caching and retrieval capacity and the exploration of the corresponding design space. In particular, our simulation and analytical results show that the adaptive caching policy combined with Shortest-Queue selection scales well across various network configurations and adapts to load changes.
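The selfish and collective caching policies mentioned above can be illustrated with a minimal sketch. This is not the paper's exact formulation: the function names, the popularity-ranking heuristic, and the round-robin placement are illustrative assumptions. Selfish nodes cache their own most popular items; collective nodes coordinate so the edge network as a whole holds the globally most popular items without duplication.

```python
def selfish_cache(local_popularity, capacity):
    """Selfish policy: each node independently caches the items
    most popular with its own local users (illustrative sketch)."""
    ranked = sorted(local_popularity, key=local_popularity.get, reverse=True)
    return set(ranked[:capacity])

def collective_cache(popularity, nodes, capacity):
    """Collective policy: nodes jointly cache the globally most
    popular items, avoiding duplicates across the edge network
    (round-robin placement is an assumption, not the paper's method)."""
    ranked = sorted(popularity, key=popularity.get, reverse=True)
    caches = [set() for _ in range(nodes)]
    for i, item in enumerate(ranked[:nodes * capacity]):
        caches[i % nodes].add(item)  # spread distinct items over nodes
    return caches
```

An adaptive policy, as the abstract suggests, would blend these two extremes depending on the observed load.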
Existing System:
When a requested item is not available locally (i.e., a cache miss), edge nodes resort to source selection strategies to retrieve the content item either vertically from the central server or horizontally from other edge nodes. Managing latency in content clouds therefore needs to take into account several interrelated issues: asymmetric bandwidth and caching capacity for both source types, as well as edge-node heterogeneity in terms of the caching policies and source selection strategies applied. Existing approaches, however, treat caching and retrieval capacity in isolation rather than considering the edge nodes and the server simultaneously.
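The two source selection strategies studied, Random and Shortest-Queue, can be sketched as follows. This is a hedged illustration, not the paper's implementation; the source names and queue-length representation are assumptions.

```python
import random

def select_source(sources, strategy="shortest-queue"):
    """Choose a retrieval source (the central server or a peer edge node)
    on a cache miss. `sources` maps a source name to its current queue
    length. Random picks uniformly; Shortest-Queue picks the source with
    the fewest pending requests (ties broken by name for determinism)."""
    if strategy == "random":
        return random.choice(list(sources))
    return min(sources, key=lambda s: (sources[s], s))

# Illustrative state: the server holds every item but is more loaded.
queues = {"server": 5, "edge-1": 2, "edge-2": 7}
```

With this state, Shortest-Queue retrieves horizontally from `edge-1`, while Random may still pick the loaded server, which is why the abstract reports better scaling for Shortest-Queue.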
Proposed System:
In this system, we present the first analytical framework for characterizing (and minimizing) latency in hybrid content clouds that simultaneously considers the impact of different caching policies and source selection strategies. The system we consider is a general one that allows for heterogeneity in edge nodes with respect to local caching and source selection policies, as well as asymmetric bandwidth and caching capacity across the edge nodes and the central server. Specifically, we characterize the average retrieval latency as the weighted average of the server retrieval latency and the edge-network retrieval latency.
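The weighted-average characterization above can be written down directly. The parameter names below are illustrative assumptions, not the paper's notation: on a hit the item is served from the local cache, and on a miss a fraction of requests goes vertically to the server and the remainder horizontally to other edge nodes.

```python
def average_latency(hit_ratio, cache_latency,
                    server_latency, edge_latency, vertical_fraction):
    """Average retrieval latency as a weighted average of the server
    and edge-network retrieval latencies (illustrative sketch).

    hit_ratio         -- probability a request is served locally
    cache_latency     -- latency of a local cache hit
    server_latency    -- latency of vertical retrieval from the server
    edge_latency      -- latency of horizontal retrieval from peers
    vertical_fraction -- share of misses sent to the central server
    """
    miss_ratio = 1.0 - hit_ratio
    miss_latency = (vertical_fraction * server_latency
                    + (1.0 - vertical_fraction) * edge_latency)
    return hit_ratio * cache_latency + miss_ratio * miss_latency
```

For example, with a 50% hit ratio, negligible hit latency, and misses split evenly between a 10 ms server path and a 4 ms edge path, the average latency is 3.5 ms.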
Hardware Requirements:
• System : Pentium IV, 2.4 GHz
• Hard Disk : 40 GB
• Floppy Drive : 1.44 MB
• Monitor : 15" VGA colour
• Mouse : Logitech
• RAM : 256 MB
Software Requirements:
• Operating System : Windows XP Professional
• Front End : .NET Framework 3.5
• Coding Language : Visual C# .NET
• Back End : SQL Server 2005