28-09-2013, 02:48 PM
Optical Packet Buffers for Backbone Internet Routers
Abstract
If optical routers are to become a reality, we will need
several new optical technologies, one of which is the ability to build
sufficiently large optical buffers. Building optical buffers for routers is
daunting: today's electronic routers often hold millions of packets,
which is well beyond the capabilities of optical technology. In this
paper, we argue that two new results offer a solution. First, we
show that the size of buffers in backbone routers can be made
very small—just about 20 packets per linecard—at the expense of
a small loss in throughput. Second, we show that integrated delay
line optical buffers can store a few dozen packets on a photonic
chip. With the combination of these two results, we conclude that
future Internet routers could use optical buffers.
INTRODUCTION
Over the years, there has been much debate about
whether it is possible—or sensible—to build all-optical
datapaths for routers. On one hand, optics promises much
higher capacities and potentially much lower power. On the
other hand, most of the functions of a router are still beyond
optical processing, including header parsing, address lookup,
contention resolution and arbitration, and the buffering of large numbers of packets.
Alternative architectural approaches have been proposed to
ease the task of building optical routers. For example, label
swapping simplifies header processing and address lookup
[1]–[3], and some implementations transmit headers slower
than the data so they can be processed electronically [4], [5].
Valiant load balancing (VLB) has been proposed to avoid
packet-by-packet switching at routers, eliminating the need
for arbitration [6].
WHY DO ROUTERS HAVE BUFFERS?
There are three main reasons that routers have buffers.
1) Congestion: Congestion occurs when packets for a switch
output arrive faster than the speed of the outgoing line.
For example, packets might arrive continuously at two
different inputs, all destined to the same output. If a switch
output is constantly overloaded, its buffer will eventually
overflow, no matter how large it is; it simply cannot
transmit the packets as fast as they arrive. Short-term
congestion is common because packet arrivals are bursty and
statistical in nature. Long-term congestion is usually controlled
by an external mechanism, such as the end-to-end congestion
avoidance mechanisms of TCP, the XON/XOFF
mechanisms of Ethernet, or by the end-host application. In
practice, we have to decide how big to make the congestion
buffers. The decision is based on the congestion control
mechanism—if it responds quickly to reduce congestion,
then the buffers can be small; else, they have to be large.
The congestion buffers are the largest buffers in a router,
and so will be our main focus in this paper (see the sketch below).
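The following minimal Python sketch illustrates the distinction drawn above: an output loaded below its line rate sees only short-term queueing that a small buffer absorbs, while a persistently overloaded output overflows any finite buffer. The slotted-time model, arrival probabilities, and buffer size are illustrative assumptions, not parameters from the paper.

```python
# Minimal slotted-time sketch of the congestion behavior described above: two
# inputs send packets toward one output that can transmit one packet per slot.
# The model, arrival probabilities, and buffer size are illustrative
# assumptions, not parameters taken from the paper.

import random

def simulate(arrival_prob_per_input, buffer_pkts, slots=100_000, inputs=2, seed=1):
    """Return (fraction of packets dropped, peak queue depth) for one output queue."""
    random.seed(seed)
    queue = drops = arrivals = peak = 0
    for _ in range(slots):
        arrived = sum(random.random() < arrival_prob_per_input for _ in range(inputs))
        arrivals += arrived
        for _ in range(arrived):
            if queue < buffer_pkts:
                queue += 1
            else:
                drops += 1            # buffer overflow: packet is lost
        peak = max(peak, queue)
        if queue:
            queue -= 1                # output line sends one packet per slot
    return drops / max(arrivals, 1), peak

# Offered load 0.8x line rate: short-term bursts only; a 20-packet buffer suffices.
print(simulate(arrival_prob_per_input=0.4, buffer_pkts=20))
# Offered load 1.2x line rate: persistent overload; drops occur no matter the buffer size.
print(simulate(arrival_prob_per_input=0.6, buffer_pkts=20))
```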
Simulation Results
To validate the results of Sections III-A and III-B, we perform
simulations using the ns-2 simulator [12]. We have enhanced
ns-2 to include an accurate combined-input-output-queued (CIOQ) router model.
Fig. 5 shows the topology of the simulated network. Flows
are generated at separate source nodes (TCP servers) using TCP
Reno, go through individual access links, and are multiplexed
onto faster backbone (core) links before reaching the input ports
of the switch. Large buffers are used at the multiplexing nodes to
prevent drops at these nodes. Core links run at 2.5 Gb/s, and the
propagation delay between each server–client pair is uniformly
picked from the interval 75–125 ms (with an average of 100
ms). All data packets are 1000 bytes.
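To put these parameters in perspective, the back-of-the-envelope sketch below compares the classical bandwidth-delay-product sizing rule and the divide-by-sqrt(N) rule from earlier buffer-sizing work with the 20-packet tiny buffers studied here, using the core-link rate, average RTT, and packet size given above. The flow count N is an assumed example value.

```python
# Back-of-the-envelope comparison of buffer-sizing rules for the topology
# above (2.5 Gb/s core link, ~100 ms average RTT, 1000-byte packets). The
# divide-by-sqrt(N) rule comes from earlier buffer-sizing work, and the flow
# count N is an assumed example value; only the 20-packet figure is this
# paper's operating point.

CORE_RATE_BPS = 2.5e9    # bottleneck (core) link rate
AVG_RTT_S     = 0.100    # average round-trip propagation delay
PKT_BYTES     = 1000     # data packet size used in the simulations
N_FLOWS       = 1000     # assumed number of long-lived TCP flows (illustrative)

bdp_pkts = CORE_RATE_BPS * AVG_RTT_S / (8 * PKT_BYTES)   # classical RTT*C rule
sqrt_rule_pkts = bdp_pkts / (N_FLOWS ** 0.5)              # RTT*C / sqrt(N) rule
tiny_buffer_pkts = 20                                     # tiny-buffer operating point

print(f"Rule of thumb (RTT x C):          {bdp_pkts:,.0f} packets")
print(f"RTT x C / sqrt(N) with N={N_FLOWS}:   {sqrt_rule_pkts:,.0f} packets")
print(f"Tiny buffer (this paper):         {tiny_buffer_pkts} packets")
```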
Link Utilization Metric
In this paper, our metric for buffer sizing is link utilization.
This metric is operator-centric; if a congested link can keep operating
at 100% utilization, then it makes efficient use of the operator’s
congested resource. This is not necessarily ideal for an
individual end-user since the metric does not guarantee a short
flow completion time (i.e., a quick download). However, if the
buffer size is reduced, queueing delay drops and the round-trip
time falls with it, which could lead to higher per-flow throughput
for TCP flows. The effect of tiny buffers on user-centric performance
metrics, such as flow completion time, has been discussed in
[15] and [16].

Fig. 7. Link utilization versus bottleneck-link bandwidth. With a fixed
access-to-core bandwidth ratio (1%) and a fixed buffer size (20 packets),
increasing the bottleneck-link bandwidth does not change the utilization.
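The sketch below illustrates the round-trip-time argument just made, using the well-known Mathis et al. TCP throughput approximation rather than any formula from this paper; the loss probability and the queueing-delay figures for a full buffer are assumptions derived from the simulation parameters above.

```python
# Hedged illustration of the argument above: a smaller buffer means less
# queueing delay and hence a shorter RTT, which raises per-flow TCP throughput.
# Uses the well-known Mathis et al. approximation
#   throughput ~= 1.22 * MSS / (RTT * sqrt(p)),
# which is not a formula from this paper; the loss rate is an assumption.

from math import sqrt

def mathis_throughput_bps(mss_bytes, rtt_s, loss_rate):
    """Approximate long-lived TCP throughput in bits per second."""
    return 1.22 * mss_bytes * 8 / (rtt_s * sqrt(loss_rate))

MSS_BYTES   = 1000    # matches the 1000-byte packets in the simulations
LOSS_RATE   = 1e-3    # assumed steady-state loss probability
PROP_RTT_MS = 100.0   # average propagation RTT from the simulation setup

# Queueing delay when the buffer is full: ~0.064 ms for 20 packets at 2.5 Gb/s,
# ~100 ms for a bandwidth-delay-product-sized buffer at the same rate.
for label, q_ms in (("no queueing", 0.0),
                    ("full 20-packet buffer", 20 * 8000 / 2.5e9 * 1e3),
                    ("full BDP-sized buffer", 100.0)):
    rtt_s = (PROP_RTT_MS + q_ms) / 1000.0
    rate = mathis_throughput_bps(MSS_BYTES, rtt_s, LOSS_RATE)
    print(f"{label:>22}: RTT {rtt_s*1000:6.2f} ms -> {rate/1e6:5.2f} Mb/s per flow")
```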
Optical Buffering Approaches
Storage of optical data is accomplished by delaying the optical
signal, either by increasing the length of the signal's path
or by decreasing the speed of light. In both cases, the delay
must be dynamically controlled to offer variable storage times,
i.e., to have a choice in when to read the data. Delay paths
provide variable storage time by traversing a variable number
of short delay lines, arranged either as several concatenated
delays (a feedforward configuration) or as a single delay that
the signal loops through repeatedly (a feedback configuration).
Buffers that store optical data by slowing the speed of light do
so by controlling resonances, either in the material itself or in
the physical structure of the waveguide. An evaluation of the
requirements that optical memory must meet to be viable in an
optical router found integrated recirculating (feedback delay-line)
buffers to be the most promising approach [26].
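A quick calculation shows why pure delay-line storage is physically expensive, and hence why recirculating (feedback) structures that reuse one short delay are attractive. The line rate and waveguide group index below are assumed example values, not figures from this paper.

```python
# Rough estimate of the physical cost of delay-line storage: to hold a packet,
# the buffer must provide a path long enough to cover the packet's duration.
# The line rate and group index below are assumed example values.

PKT_BITS  = 8 * 1000   # 1000-byte packet
LINE_RATE = 40e9       # assumed line rate, b/s
C_VACUUM  = 3.0e8      # speed of light in vacuum, m/s
GROUP_IDX = 1.5        # approximate group index of a silica waveguide

packet_duration_s   = PKT_BITS / LINE_RATE        # time the packet occupies the line
delay_per_meter_s   = GROUP_IDX / C_VACUUM        # propagation delay per meter of waveguide
length_per_packet_m = packet_duration_s / delay_per_meter_s

print(f"packet duration:                {packet_duration_s * 1e9:.0f} ns")
print(f"waveguide length per packet:    {length_per_packet_m:.0f} m")
print(f"length for a 20-packet buffer:  {20 * length_per_packet_m:.0f} m")
```

A feedback (recirculating) loop amortizes this length by sending a packet around the same short delay many times, at the price of accumulating loss and amplifier noise on every pass; that accumulation is the limit Fig. 12 quantifies below.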
Future Work
This initial demonstration of optical buffering indicates
what is possible, but is primitive compared to what should
become available over the next few years. This work used silica
waveguides butt-coupled to InP gate matrix arrays. Park et al.
have demonstrated a similar structure using silicon waveguides
with integrated SOAs [29]. This approach is fully integrated
and eliminates the coupling loss between the chips, albeit at the
expense of higher propagation loss.

Fig. 12. Maximum number of circulations possible as a function of the loop
gain needed in order to maintain an OSNR of 20 dB. Curves are shown for a
range of common amplifier noise figures.
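As a rough companion to the trade-off plotted in Fig. 12, the sketch below estimates how many circulations a recirculating buffer could sustain before accumulated amplifier noise pushes the OSNR below 20 dB. It uses the standard cascaded-amplifier OSNR approximation (one amplifier pass per circulation, gain exactly offsetting loop loss, 0.1-nm reference bandwidth); the launch power and the noise-figure values are assumptions, and the resulting numbers are not taken from the paper.

```python
# Companion estimate for the trade-off plotted in Fig. 12: how many circulations
# a recirculating buffer can make before accumulated amplifier (ASE) noise pulls
# the OSNR below 20 dB. It uses the standard cascaded-amplifier approximation
#   OSNR_dB ~= 58 + P_launch_dBm - NF_dB - 10*log10(N)
# (one amplifier pass per circulation, gain exactly offsetting loop loss,
# 0.1-nm reference bandwidth). The launch power and noise-figure values are
# assumptions, not numbers from the paper.

from math import floor

def max_circulations(p_launch_dbm, nf_db, osnr_target_db=20.0):
    """Largest N with 58 + P - NF - 10*log10(N) >= OSNR target."""
    margin_db = 58.0 + p_launch_dbm - nf_db - osnr_target_db
    return 0 if margin_db < 0 else floor(10 ** (margin_db / 10.0))

P_LAUNCH_DBM = -10.0                  # assumed per-packet launch power into the loop
for nf_db in (4.0, 6.0, 8.0, 10.0):   # span of common amplifier noise figures
    n = max_circulations(P_LAUNCH_DBM, nf_db)
    print(f"noise figure {nf_db:4.1f} dB -> up to {n:,} circulations")
```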