13-02-2013, 03:48 PM
Optical Buffering in Internet Routers
INTRODUCTION
Over the years, there has been much debate about whether it is possible, or even sensible, to build all-optical data paths for routers. On one hand, optics promise much higher capacities and potentially much lower power. On the other hand, most of the functions of a router are still beyond optical processing, including header parsing, address lookup, contention resolution and arbitration, and large optical buffers.
Alternative architectural approaches have been proposed to ease the task of building optical routers. For example, label swapping simplifies header processing and address lookup, and some implementations transmit headers more slowly than the data so that they can be processed electronically. Valiant load balancing (VLB) has been proposed to avoid packet-by-packet switching at routers, which eliminates the need for arbitration.
In this paper, we consider just one function of an optical router, optical packet buffering, and ask the question: Is it possible to build optical buffers for an Internet router?
Conventional wisdom says that it is not. Electronic Internet backbone routers today maintain millions of packet buffers in first-come–first-served queues. None of the many proposed schemes to build optical buffers comes close to replacing the huge buffers in an electronic router.
The basic premise of this paper is that because of two recent innovations, we are now much closer to being able to build optical buffers for a backbone router. First, as we show in Section III, there is growing evidence that backbone networks can be built from routers with very small buffers, perhaps only a few dozen packet buffers on each line in each router, if we are willing to sacrifice a small amount of throughput. Second, as we show in Section IV, it is now possible to build optical packet buffers that are capable of holding a few dozen packets in an integrated optoelectronic chip. We describe both innovations, show how they can be applied to build packet buffers for optical routers, and explain some of the shortcomings yet to be overcome.
Congestion:
Congestion occurs when packets for a switch output arrive faster than the speed of the outgoing line.
For example, packets might arrive continuously at two different inputs, all destined to the same output. If a switch output is constantly overloaded, its buffer will eventually overflow, no matter how large it is; it simply cannot transmit the packets as fast as they arrive. Short-term congestion is common due to the statistical arrival times of packets. Long-term congestion is usually controlled by an external mechanism, such as the end-to-end congestion avoidance mechanisms of TCP, the XON/XOFF mechanisms of Ethernet, or the end-host application. In practice, we have to decide how big to make the congestion buffers. The decision is based on the congestion control mechanism: if it responds quickly to reduce congestion, then the buffers can be small; otherwise, they have to be large. The congestion buffers are the largest buffers in a router, and so will be our main focus in this paper. A typical Internet router today holds millions of packet buffers for congestion.
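The overload scenario above can be sketched with a toy discrete-time simulation: two inputs feed the same output, the line drains one packet per slot, and once the finite buffer fills, every extra arrival is dropped regardless of buffer size. The buffer size, rates, and slot count here are illustrative assumptions, not values from the text.

```python
# Toy sketch (illustrative numbers): a finite FIFO output buffer
# overloaded by two inputs sending to the same output.
from collections import deque

BUFFER_SIZE = 8          # packets the output buffer can hold (assumed)
ARRIVALS_PER_SLOT = 2    # two inputs target the same output each slot
DEPARTURES_PER_SLOT = 1  # the outgoing line transmits one packet per slot

buffer = deque()
drops = 0
for slot in range(20):
    for _ in range(ARRIVALS_PER_SLOT):
        if len(buffer) < BUFFER_SIZE:
            buffer.append(slot)
        else:
            drops += 1           # overflow: the line cannot keep up
    if buffer:
        buffer.popleft()         # transmit one packet on the output line

print(f"packets still queued: {len(buffer)}, packets dropped: {drops}")
```

Because arrivals exceed departures by one packet per slot, the queue grows until it hits the cap and then drops steadily; making BUFFER_SIZE larger only delays the first drop, which is the point the paragraph makes about persistent overload.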
HOW BIG SHOULD THE BUFFERS BE?
The historical answer to this question is the well-known rule of thumb: Buffers should be at least as large as the delay-bandwidth product of the network to achieve full utilization, i.e., B = RTT × C, where RTT is the average round-trip time of flows and C is the data-rate of the bottleneck link. According to this rule, a 1-Gb buffer is required for a 10-Gb/s link with an average round-trip delay of 100 ms. To follow the rule, the buffer size has to grow linearly as the link speed increases.
Recently, Appenzeller et al. showed that with N concurrent flows on the link, the buffer size can be scaled down to B = RTT × C/√N without compromising the throughput. This means a significant reduction in the buffer size of backbone routers, because backbone links often carry tens of thousands of flows. With 10 000 flows on a link, the buffer size can be reduced by 99% without any change in performance (i.e., a 1-Gb buffer becomes a 10-Mb buffer). This result has been found to hold very broadly in real networks.
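The two sizing rules are a short arithmetic exercise; the sketch below plugs in the example numbers from the text (C = 10 Gb/s, RTT = 100 ms, N = 10 000 flows) to reproduce the 1-Gb versus 10-Mb comparison.

```python
# Buffer sizing: rule of thumb (B = RTT * C) vs. the Appenzeller et al.
# small-buffer rule (B = RTT * C / sqrt(N)), using the paper's example.
from math import sqrt

C = 10e9          # bottleneck link rate, bits per second
RTT = 0.100       # average round-trip time, seconds
N = 10_000        # concurrent flows on the link

rule_of_thumb = RTT * C            # delay-bandwidth product
small_buffer = RTT * C / sqrt(N)   # scaled down by sqrt(N)

print(f"rule of thumb : {rule_of_thumb / 1e9:.1f} Gb")  # 1.0 Gb
print(f"with sqrt(N)  : {small_buffer / 1e6:.1f} Mb")   # 10.0 Mb
```

With 10 000 flows, √N = 100, so the buffer shrinks by a factor of 100, i.e., the 99% reduction claimed in the text.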
Simulation Results
To validate the results of Section III-A and B, we perform simulations using the ns-2 simulator. We have enhanced ns-2 to include an accurate CIOQ (combined input-output queued) router model.
Fig. 5 shows the topology of the simulated network. Flows are generated at separate source nodes (TCP servers) using TCP Reno, go through individual access links, and are multiplexed onto faster backbone (core) links before reaching the input ports of the switch. Large buffers are used at the multiplexing nodes to prevent drops at these nodes. Core links run at 2.5 Gb/s, and the propagation delay between each server–client pair is uniformly picked from the interval 75–125 ms (with an average of 100 ms). All data packets are 1000 bytes.
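Applying the √N rule to this simulated setup gives a feel for how small the switch buffers can be in packets. The link rate (2.5 Gb/s), average RTT (100 ms), and packet size (1000 bytes) come from the description above; the flow counts N are assumptions for illustration, since the text does not fix a single value here.

```python
# Small-buffer rule applied to the simulated core link:
# B = RTT * C / sqrt(N), expressed in 1000-byte packets.
from math import sqrt

C = 2.5e9            # core link rate, bits per second
RTT = 0.100          # average round-trip time, seconds
PKT_BITS = 1000 * 8  # 1000-byte data packets

for N in (100, 1_000, 10_000):   # assumed flow counts, for illustration
    buf_bits = RTT * C / sqrt(N)
    print(f"N = {N:>6}: buffer of about {buf_bits / PKT_BITS:.0f} packets")
```

Even with only a few hundred flows, the required buffer drops from the 31 250-packet delay-bandwidth product to a few thousand packets, and it keeps shrinking as N grows, which is what makes the small optical buffers of Section IV plausible.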