24-06-2014, 10:18 AM
Minimizing File Download Time in Stochastic Peer-to-Peer Networks
Abstract
Peer-to-peer (P2P) file-sharing applications are becoming increasingly popular and account for more than 70% of the Internet's bandwidth usage. Measurement studies show that a file download session can last several hours, depending on the level of network congestion and on fluctuations in service capacity. We consider two major factors that have a significant impact on the average download time: the spatial heterogeneity of service capacities across different source peers, and the temporal fluctuation in the service capacity of a single source peer.
INTRODUCTION
PEER-TO-PEER (P2P) technology is heavily used for content distribution applications. The early model for content distribution is a centralized one, in which the service provider simply sets up a server and every user downloads files from it. In this type of network architecture (server-client), many users have to compete for limited resources, in terms of the bottleneck bandwidth or processing power of a single server. As a result, each user may receive very poor performance. From a single user's perspective, the duration of a download session, i.e., the download time for that individual user, is the most often used performance metric.
P2P technology tries to solve the issue of scalability by making the system distributed. Each computer (peer) in the network can act as both a server and a client at the same time. When a peer completes downloading some files from the network, it can become a server to service other peers in the network. It is obvious that, as time goes on, the service capacity of the entire network will increase due to the increase in the number of servicing peers. With this increasing service capacity, theoretical studies have shown that the average download time for each user in the network is much shorter than that of a centralized network architecture in ideal cases [2], [3]. In other words, users of a P2P network should enjoy much faster downloads.
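The growth in service capacity described above can be illustrated with a toy discrete-time simulation (my own sketch, not code from the paper; the file size, upload rate, and peer count are arbitrary illustrative values). Peers arrive one per time step, the available upload capacity is shared evenly among active downloaders, and in the P2P case every finished peer starts serving others:

```python
FILE_SIZE = 100.0      # work per peer, arbitrary units
UPLOAD_RATE = 10.0     # per-server upload capacity per time step
NUM_PEERS = 50

def average_download_time(p2p):
    remaining = {}     # peer id -> amount still to download
    finish = {}        # peer id -> download duration
    servers = 1        # one initial seed holds the file
    t = 0
    while len(finish) < NUM_PEERS:
        if t < NUM_PEERS:
            remaining[t] = FILE_SIZE        # one new peer arrives per step
        if remaining:
            # total upload capacity is split evenly among active downloaders
            rate = servers * UPLOAD_RATE / len(remaining)
            for i in list(remaining):
                remaining[i] -= rate
                if remaining[i] <= 0:
                    finish[i] = t - i       # duration since peer i arrived
                    del remaining[i]
                    if p2p:
                        servers += 1        # finished peer becomes a server
        t += 1
    return sum(finish.values()) / NUM_PEERS

print("centralized:", average_download_time(p2p=False))
print("p2p        :", average_download_time(p2p=True))
```

Under this model the P2P average download time comes out strictly smaller than the centralized one, matching the qualitative claim of the theoretical studies cited above.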
However, the measurement results in [4] show that a file download session in a P2P network is rather long and varies a lot from user to user. For instance, downloading a 100 MB file in a Gnutella network can take anywhere from several hours to a whole week. While theoretical studies provide performance bounds for ideal cases, there are many factors that make the real-world performance much worse than the theoretical prediction. Some of the major challenges facing a P2P network in the real world include peer selection [5]–[8] and data search and routing [9]–[13].
Due to the distributed nature of the P2P network, searching for and locating data of interest in the network has been an important issue in the literature. In reality, data searching time contributes only a very small portion of a download session, while most of the delay is caused by actually transferring the file from source peers, as shown in [14]. Thus, if we want to minimize the download time for each user, reducing the actual file transfer time would make a more noticeable difference. Most recent studies, however, have focused on reducing the total download duration, i.e., the time required for all users to finish their downloads. This total download time is a system-wide performance metric. On the other hand, there are very few results analyzing the performance of each individual user. As the measurement study in [4] shows, the per-user performance in a P2P network may be even worse than that of a centralized network architecture. Those results suggest that there is much room for improvement in the P2P system in terms of per-user performance, i.e., the file download time of each user.
However, there have been very few results on minimizing the download time for each user in a P2P network. In recent work [5], [6], the problem of minimizing the download time is formulated as an optimization problem by maximizing the aggregate service capacity over multiple simultaneously active links (parallel connections) under some global constraints. There are two major issues with this approach. One is that global information about the peers in the network is required, which is not practical in the real world. The other is that the analysis is based on averaged quantities, e.g., the average capacities of all possible source peers in the network. The approach of using the average service capacity to analyze the average download time has been a common practice.
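Why analysis based on averaged capacities can mislead is easy to demonstrate numerically (my own sketch, not code from the paper; the file size and the capacity levels are made-up values). For a fixed file size S and a randomly varying source capacity C, the true average download time E[S/C] exceeds the naive estimate S/E[C], since 1/x is convex (Jensen's inequality):

```python
import random

random.seed(1)
S = 100.0  # hypothetical file size (MB)
# Heterogeneous source peers: capacity drawn from three made-up levels (MB/s)
capacities = [random.choice([0.2, 1.0, 5.0]) for _ in range(100_000)]

naive = S / (sum(capacities) / len(capacities))            # S / E[C]
actual = sum(S / c for c in capacities) / len(capacities)  # E[S / C]

print(f"S / E[C] = {naive:.1f} s")    # estimate from average capacity
print(f"E[S / C] = {actual:.1f} s")   # true average download time: larger
```

With these three capacity levels, the true average download time is several times larger than the prediction based on average capacity alone, which is exactly the kind of gap that per-user analysis must account for.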
SCOPE OF THE PROJECT:
The system can be used effectively in outsourcing services (BPO) and in LAN-based networks. Data consisting of text, documents, and images is transmitted through the network; as the number of transmitted packets grows, so does the traffic. This traffic, i.e., the growing volume of packet information, should be analyzed and displayed graphically. It is a network-based project, and reducing network traffic improves transfer speed.
As an example, consider a network based on client-server communication. A client places a request, i.e., a client is a running application program on a local site that requests service from a running application program on a remote site. A server is a program that provides services to other programs.
EXISTING SYSTEM:
Content distribution is centralized: content is distributed from a central server to all clients requesting the document.
Clients send requests to the centralized server to download the file.
The server accepts each request and sends the file in response.
In most client-server setups, the server is a dedicated computer whose entire purpose is to distribute files.
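The centralized request/response flow above can be sketched as a minimal TCP exchange (my own illustration, not from the document; the payload, request string, and loopback address are made up). The server accepts a download request and streams back the file; the client reads until the transfer completes:

```python
import socket
import threading

PAYLOAD = b"x" * 1024           # stand-in for the shared file

def serve(listener):
    """Accept one client, read its request, and send the file back."""
    conn, _ = listener.accept()
    request = conn.recv(1024)    # client's download request
    if request.startswith(b"GET"):
        conn.sendall(PAYLOAD)    # server responds with the file
    conn.close()

server = socket.socket()
server.bind(("127.0.0.1", 0))   # ephemeral port on loopback
server.listen(1)
threading.Thread(target=serve, args=(server,), daemon=True).start()

client = socket.socket()
client.connect(server.getsockname())
client.sendall(b"GET file")     # the client's request
data = b""
while len(data) < len(PAYLOAD):  # read until the whole file has arrived
    chunk = client.recv(4096)
    if not chunk:
        break
    data += chunk
client.close()
print("downloaded", len(data), "bytes")
```

Every client repeats this exchange against the same server, which is precisely why the single server's upload bandwidth becomes the bottleneck as the number of clients grows.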