24-12-2012, 01:07 PM
Host-to-Host Congestion Control for TCP
INTRODUCTION
Most current Internet applications rely on the Transmission Control Protocol (TCP) [1] to deliver data reliably across the network. Although congestion control was not part of TCP's initial design, it is now the protocol's most essential element and defines TCP's performance characteristics. In this paper we present a survey of the congestion control proposals for TCP that preserve its fundamental host-to-host principle, meaning they do not rely on any kind of explicit signaling from the network. The proposed algorithms introduce a wide variety of techniques that allow senders to detect loss events, congestion state, and route changes, as well as measure the loss rate, the RTT, the RTT variation, bottleneck buffer sizes, and the congestion level, with different levels of reliability and precision.

The key feature of TCP is its ability to provide a reliable, bi-directional, virtual channel between any two hosts on the Internet. Since the protocol works over the IP network [3], which provides only best-effort delivery of packets across the network, the TCP standard [1] specifies a sliding-window-based flow control. This flow control has several mechanisms. First, the sender buffers all data before transmission, assigning a sequence number to each buffered byte. Contiguous blocks of the buffered data are packetized into TCP packets, each carrying the sequence number of its first data byte. Second, a portion (window) of the prepared packets is transmitted to the receiver using the IP protocol.
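The buffering and packetization steps above can be sketched in a few lines. This is an illustrative toy, not code from the paper or any TCP stack; the `Packet` and `Sender` names and the tiny 4-byte MSS are assumptions chosen for readability.

```python
from dataclasses import dataclass

MSS = 4  # bytes per packet; unrealistically small, for illustration only


@dataclass
class Packet:
    seq: int      # sequence number of the first data byte in the packet
    data: bytes


class Sender:
    def __init__(self, data: bytes, window: int):
        self.buffer = data    # all data is buffered before transmission
        self.window = window  # number of packets sent per window
        self.next_seq = 0     # sequence number of the next unsent byte

    def packetize_window(self):
        """Build the next window of packets from the buffered data."""
        packets = []
        for _ in range(self.window):
            if self.next_seq >= len(self.buffer):
                break
            chunk = self.buffer[self.next_seq:self.next_seq + MSS]
            packets.append(Packet(self.next_seq, chunk))
            self.next_seq += len(chunk)
        return packets


sender = Sender(b"hello world!", window=2)
first = sender.packetize_window()   # packets with seq 0 and seq 4
second = sender.packetize_window()  # one remaining packet with seq 8
```

Note how each packet carries only the sequence number of its first byte; the receiver can reconstruct byte positions for the whole payload from that single number, which is exactly why per-byte numbering in the send buffer suffices.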
Algorithms Used:
Slow Start and Congestion Avoidance algorithms. These provide two slightly different distributed host-to-host mechanisms that allow a TCP sender to detect available network resources and adjust the transmission rate of the TCP flow to the detected limits. Assuming the probability of random packet corruption during transmission is negligible (well below 1%), the sender can treat all detected packet losses as congestion indicators. In addition, the reception of any ACK packet is an indication that the network can accept and deliver at least one new packet. Thus the sender, reasonably sure it will not cause congestion, can send at least the amount of data that has just been acknowledged. This in-out packet balancing is called the packet conservation principle and is a core element of both Slow Start and Congestion Avoidance.
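A minimal sketch of how these two mechanisms adjust the congestion window (cwnd, counted in segments) can make the difference between them concrete. This is a simplification of the classic scheme, not the full standard algorithm: timers, fast retransmit, and the exact ssthresh initialization are omitted, and the function names are invented for this example.

```python
def on_ack(cwnd: float, ssthresh: float) -> float:
    """Grow cwnd on each new ACK."""
    if cwnd < ssthresh:
        return cwnd + 1.0          # Slow Start: doubles cwnd every RTT
    return cwnd + 1.0 / cwnd       # Congestion Avoidance: ~ +1 segment per RTT


def on_loss(cwnd: float):
    """Treat a detected loss as a congestion signal: halve the window."""
    ssthresh = max(cwnd / 2.0, 2.0)
    return ssthresh, ssthresh      # (new cwnd, new ssthresh), simplified


cwnd, ssthresh = 1.0, 8.0
for _ in range(3):                 # three ACKs arrive during Slow Start
    cwnd = on_ack(cwnd, ssthresh)  # cwnd grows 1 -> 2 -> 3 -> 4
cwnd, ssthresh = on_loss(8.0)      # after a loss at cwnd=8: both become 4
```

The per-ACK increments are what realize the packet conservation principle: the window only grows when an ACK proves the network just delivered a packet.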
Fast Recovery / FACK algorithms
FACK, unlike the Fast Recovery algorithm, has a direct means to calculate the number of outstanding data packets using information extracted from SACKs. Instead of the congestion window inflation technique, FACK maintains special state variables, among them: (1) H, the highest sequence number of all sent data packets (all data packets with sequence numbers less than H have been sent at least once); and (2) F, the forward-most sequence number of all acknowledged data packets (no data packets with sequence numbers above F have been delivered).
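Given H and F as defined above, the amount of outstanding data follows by simple arithmetic. The sketch below is a hedged illustration of that bookkeeping; the `retran_data` term (bytes retransmitted but not yet acknowledged) is an assumption about the remaining FACK state, and the function name is invented here.

```python
def outstanding_data(H: int, F: int, retran_data: int) -> int:
    """Estimate the number of bytes still in flight.

    H: highest sequence number sent so far (everything below H was sent).
    F: forward-most acknowledged sequence number (from SACK blocks).
    retran_data: retransmitted bytes not yet acknowledged (assumed state).
    """
    return (H - F) + retran_data


# Example: the sender has transmitted up to byte 10000, SACKs confirm
# delivery up to byte 7000, and 1000 retransmitted bytes are unconfirmed.
outstanding_data(10000, 7000, 1000)  # 4000 bytes outstanding
```

Because the estimate comes directly from SACK information, the sender needs no artificial congestion window inflation to keep its in-flight accounting correct during recovery.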
Feasibility Study:
Preliminary investigation examines project feasibility: the likelihood that the system will be useful to the organization. The main objective of the feasibility study is to test the technical, operational, and economical feasibility of adding new modules and debugging the old running system. All systems are feasible if they are given unlimited resources and infinite time. Three aspects are considered in the feasibility study portion of the preliminary investigation:
• Technical Feasibility
• Operational Feasibility
• Economical Feasibility
ORGANIZATION PROFILE
SYSTEM ANALYSIS
Existing system:
Existing TCP specification: The standard already requires receivers to report the sequence number of the last in-order delivered data packet each time a packet is received, even if packets arrive out of order. For example, in response to the data packet sequence 5, 6, 7, 10, 11, 12, the receiver will ACK the sequence 5, 6, 7, 7, 7, 7. In the idealized case, the absence of reordering guarantees that an out-of-order delivery occurs only if some packet has been lost. Thus, if the sender sees several ACKs carrying the same sequence number (duplicate ACKs), it can be sure that the network has failed to deliver some data and can act accordingly.
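The duplicate-ACK inference above can be sketched directly. This is an illustrative simplification (the function name is invented, and real TCP triggers fast retransmit on the third duplicate ACK, which is the threshold assumed here):

```python
DUPACK_THRESHOLD = 3  # common fast-retransmit trigger


def detect_loss(acks):
    """Scan a stream of cumulative ACK numbers.

    Returns the sequence number of the packet presumed lost,
    or None if no loss is inferred.
    """
    last_ack = None
    dup_count = 0
    for ack in acks:
        if ack == last_ack:
            dup_count += 1
            if dup_count >= DUPACK_THRESHOLD:
                return ack + 1  # first packet after the last in-order one
        else:
            last_ack, dup_count = ack, 0
    return None


# Receiver saw packets 5,6,7,10,11,12 and therefore ACKed 5,6,7,7,7,7:
detect_loss([5, 6, 7, 7, 7, 7])  # packet 8 is presumed lost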
Proposed system:
The proposed system surveys the host-to-host congestion control proposals that build a foundation for all currently known host-to-host algorithms. This foundation includes:
1) the basic principle of probing the available network resources;
2) loss-based and delay-based techniques to estimate the congestion state in the network;
3) techniques to detect packet losses quickly.
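To make item 2 concrete, here is a hedged sketch of a delay-based congestion estimate in the spirit of TCP Vegas, one of the surveyed host-to-host proposals: the sender compares the throughput it would expect on an empty path (using the minimum observed RTT) with the throughput it actually achieves. The function name and the 1/3-segment thresholds are illustrative assumptions, not the exact published parameters.

```python
def vegas_estimate(cwnd: float, base_rtt: float, current_rtt: float) -> str:
    """Classify congestion from the expected-vs-actual throughput gap."""
    expected = cwnd / base_rtt             # throughput if queues were empty
    actual = cwnd / current_rtt            # throughput actually achieved
    diff = (expected - actual) * base_rtt  # extra segments queued in network
    if diff < 1:
        return "increase"                  # path underused: grow the window
    if diff > 3:
        return "decrease"                  # queues building: shrink the window
    return "hold"


vegas_estimate(10, 0.100, 0.105)  # RTT barely inflated -> "increase"
vegas_estimate(10, 0.100, 0.200)  # RTT doubled by queuing -> "decrease"
```

The appeal of this style of estimator is that it reacts to growing queues before any packet is dropped, whereas purely loss-based schemes only learn about congestion after a loss.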
As described in the introduction, the TCP standard specifies a sliding-window-based flow control: the sender buffers all data before transmission, packetizes contiguous blocks of the buffered data, and transmits a window of the prepared packets to the receiver using the IP protocol. As soon as the sender receives delivery confirmation for at least one data packet, it transmits a new portion of packets.
Problem Formulation
Objective:
The Transmission Control Protocol (TCP) carries most Internet traffic, so the performance of the Internet depends to a great extent on how well TCP works. The performance characteristics of a particular version of TCP are defined by the congestion control algorithm it employs. This paper presents a survey of various congestion control proposals that preserve the original host-to-host idea of TCP: namely, that neither sender nor receiver relies on any explicit notification from the network.

The proposed solutions focus on a variety of problems, starting with the basic problem of eliminating the phenomenon of congestion collapse, and also include the problems of effectively using the available network resources in different types of environments (wired, wireless, high-speed, long-delay, etc.). In a shared, highly distributed, and heterogeneous environment such as the Internet, effective network use depends not only on how well a single TCP-based application can utilize the network capacity, but also on how well it cooperates with other applications transmitting data through the same network.

Our survey shows that over the last 20 years many host-to-host techniques have been developed that address these problems with different levels of reliability and precision. There have been enhancements allowing senders to detect packet losses and route changes quickly, and other techniques can estimate the loss rate, the bottleneck buffer size, and the level of congestion. The survey describes each congestion control alternative, its strengths, and its weaknesses. Additionally, techniques that are in common use or available for testing are described.