25-06-2012, 01:01 PM
Open Issues in Buffer Sizing
Common operational practices
A major router vendor recommends 500 ms of buffering
Implication: buffer size increases proportionally to link capacity
Why 500ms?
Bandwidth Delay Product (BDP) rule:
Buffer size B = link capacity C × typical RTT T, i.e. B = C × T
What does “typical RTT” mean?
Measurement studies showed that RTTs vary from 1ms to 10sec!
How do different types of flows (TCP elephants vs mice) affect buffer requirement?
Poor performance is often due to buffer size:
Under-buffered switches: high loss rate and poor utilization
Over-buffered DSL modems: excessive queuing delay for interactive apps
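As a hypothetical worked example of the BDP rule above (the link speed is an assumed illustration value; the 500 ms RTT is the vendor guideline):

```python
# Hypothetical BDP-rule calculation: B = C x T.
# C below is an assumed illustration value, not from the slides.
C = 10e9              # link capacity: 10 Gb/s (assumed)
T = 0.5               # "typical" RTT: the 500 ms vendor guideline
B_bits = C * T        # buffer size in bits
B_gbytes = B_bits / 8 / 1e9
print(f"BDP buffer: {B_gbytes:.3f} GB")  # 5 Gb = 0.625 GB
```

Note how the required buffer scales linearly with C: a 100 Gb/s link under the same rule would need ten times the memory.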
Origin of BDP rule
Consider a single flow with RTT T
Window follows TCP’s saw-tooth behavior
Maximum window size = CT + B
At this point packet loss occurs
Window size after packet loss = (CT + B)/2
Key step: Even when window size is minimum, link should be fully utilized
(CT + B)/2 ≥ CT, which implies B ≥ CT
Known as the bandwidth delay product rule
Same result for N homogeneous TCP connections
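The saw-tooth argument above can be checked numerically; C and T below are assumed illustration values:

```python
# Minimal check of the BDP derivation: after a loss the window halves from
# CT + B to (CT + B)/2; the link stays fully utilized only if that minimum
# window still covers the pipe, i.e. (CT + B)/2 >= CT, i.e. B >= CT.
C = 1e9      # capacity: 1 Gb/s (assumed)
T = 0.1      # RTT: 100 ms (assumed)
CT = C * T   # bandwidth-delay product, bits

for B in (0.5 * CT, CT, 2 * CT):
    w_min = (CT + B) / 2              # window right after a loss
    fully_utilized = w_min >= CT      # does the pipe stay full?
    print(f"B = {B / CT:.1f} * CT -> fully utilized: {fully_utilized}")
```

With B = 0.5 × CT the post-loss window is 0.75 × CT and the link goes idle; B = CT is exactly the threshold, matching the rule.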
Saturable/congestible links
A link is saturable when the offered load is sufficient to fully utilize it, given a large enough buffer
A link may not be saturable at all times
Some links may never be saturable
E.g. advertised-window limitation, bottlenecks elsewhere, or size-limited (short) flows
Small buffers are sufficient for non-saturable links
Only needed to absorb short term traffic bursts
Stanford model applicable when N, the number of flows, is large
Backbone links are usually not saturable due to over-provisioning
Edge links are more likely to be saturable
But N may not be large for such links
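For context, the Stanford small-buffer result referred to above is B = CT/√N, which shrinks rapidly as the flow count N grows; the sketch below uses assumed parameter values to show why it helps on backbone links but offers little on edge links where N is small:

```python
import math

# Stanford small-buffer rule (B = CT / sqrt(N)) relative to the full BDP.
C = 10e9   # capacity: 10 Gb/s (assumed)
T = 0.25   # RTT: 250 ms (assumed)

for N in (1, 100, 10_000):
    b_stanford = C * T / math.sqrt(N)   # small-buffer rule
    fraction = b_stanford / (C * T)     # as a fraction of the BDP buffer
    print(f"N = {N:>6}: buffer = {fraction:.0%} of BDP")
```

With N = 1 the rule degenerates to the full BDP, which is exactly the edge-link caveat above.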
Closed-loop traffic
Per-flow throughput for large flows is slightly better with larger buffer
Majority of small flows see better throughput with smaller buffer
Similar to the persistent-flow case
No significant difference in per-flow loss rate
Reason: Loss rate decreases slowly with buffer size.