Title: Emulating AQM from End Hosts
1 Emulating AQM from End Hosts
- Presenters
- Syed Zaidi
- Ivor Rodrigues
2 Introduction
- Congestion control at the End Host
- Treating the Network as a Black Box
- Main indicator: Round Trip Time (RTT)
- Probabilistic Early Response TCP (PERT)
3 Motivation
- Implementing AQM at the router is not easy.
- Current techniques depend on packet loss to detect congestion.
- It is easier to modify the TCP stack at the end host.
- Can work with any AQM mechanism at the router.
4 Challenges
- RTT-based estimation has been characterized as inaccurate.
- Queuing delays are hard to measure when they are small compared to the RTT.
5 Accuracy of End-host Based Congestion Estimation
- Previous studies looked at the relation between an increase in RTT and packet loss for a single stream.
- Results:
- Losses are preceded by an increase in RTT in very few cases.
- Responding to a false prediction results in a severe loss in performance.
6 Accuracy of End-host Based Congestion Estimation
Transition 4 is a false negative and transition 5 is a false positive.
7 Accuracy of End-host Based Congestion Estimation
- Previous studies claim transition 5 happens more often than transition 2.
- A limitation of previous studies is that they look at the relation between higher RTT and packet loss for a single flow.
- Packet loss should be observed at the router, not for a single flow.
8 Accuracy of End-host Based Congestion Estimation
- ns-2 simulation
- Two routers connected by a 100 Mbps link, with end nodes on 500 Mbps links, and different combinations of long-term and short-term flows. The reference flows have an RTT of 60 ms, which corresponds to about 12,000 km.
9 Different Congestion Predictors
- Efficiency of packet loss prediction
- (Number of 2 transitions) / (2 transitions + 5 transitions)
- False positives
- (Number of 5 transitions) / (2 transitions + 5 transitions)
- False negatives
- (Number of 4 transitions) / (2 transitions + 4 transitions)
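As a quick illustration of these ratios, here is a minimal Python sketch (the function and argument names are mine, not from the paper) that computes them from counts of the observed transitions:

```python
def prediction_metrics(n2, n4, n5):
    """Compute the predictor metrics defined above.

    n2: number of transition-2 events (correct congestion predictions)
    n4: number of transition-4 events (false negatives)
    n5: number of transition-5 events (false positives)
    """
    efficiency = n2 / (n2 + n5)        # efficiency of packet loss prediction
    false_positives = n5 / (n2 + n5)   # fraction of predictions that were false
    false_negatives = n4 / (n2 + n4)   # fraction of losses that were not predicted
    return efficiency, false_positives, false_negatives
```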
10 Previous Work
- In 1989 the first paper was published proposing to enhance TCP with delay-based congestion avoidance.
- TRI-S: throughput is used to detect congestion instead of delay.
- DUAL: the current RTT is compared with the average of the minimum and maximum RTT.
- Vegas: the achieved throughput is compared to the expected throughput based on the minimum observed RTT.
- CIM: a moving average of a small number of RTT samples is compared with a moving average of a large number of RTT samples.
- CARD: Congestion Avoidance using Round-trip Delay.
11 Improving Congestion Prediction
Vegas, CARD, TRI-S, and DUAL obtain RTT samples once per RTT. The smoothed RTT is an exponentially weighted moving average of the RTT samples, as sketched below.
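As a minimal sketch (Python, with illustrative names, not taken from the paper), the smoothed RTT can be maintained as an exponentially weighted moving average; the 0.99 weight matches the srtt_0.99 signal used as the congestion predictor later in the deck:

```python
def update_srtt(srtt, rtt_sample, alpha=0.99):
    """Exponentially weighted moving average of RTT samples.

    alpha is the weight given to history; 0.99 corresponds to the
    srtt_0.99 predictor referred to on later slides.
    """
    if srtt is None:               # first sample initializes the estimate
        return rtt_sample
    return alpha * srtt + (1.0 - alpha) * rtt_sample
```

Feeding this filter a sample on every ACK, rather than once per RTT, is the more frequent sampling improvement described on the next slide.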
12 Improving Congestion Prediction
- We improve accuracy through more frequent sampling and history information.
- End-host congestion prediction is not perfect, so we need mechanisms to counter this inaccuracy.
13 Response to Congestion Prediction: How do we reduce the impact of false positives?
- Keep the amount of response small.
- Respond probabilistically.
14 Response to Congestion Prediction: How do we reduce the impact of false positives?
- Keep the amount of response small.
- Respond probabilistically.
- Not much loss in throughput.
- Maintains high link utilization.
- Buildup of the bottleneck queue may not be cleared out quickly.
- VEGAS
15 Response to Congestion Prediction: How do we reduce the impact of false positives?
- No loss of throughput.
- Maintains high link utilization.
- Buildup of the bottleneck queue may not be cleared out quickly.
- VEGAS
- This causes a tradeoff in the fairness properties of TCP in order to maintain high link utilization.
Vegas uses additive decrease for early congestion response.
16 Response to Congestion Prediction: How do we reduce the impact of false positives?
- No loss of throughput.
- Maintains high link utilization.
- Buildup of the bottleneck queue may not be cleared out quickly.
- VEGAS
- This causes a tradeoff in the fairness properties of TCP in order to maintain high link utilization.
- AI/AD for these transitions will result in compromising the fairness properties of the protocol.
Vegas uses additive decrease for early congestion response.
17 Response to Congestion Prediction: How do we reduce the impact of false positives?
- Compared to flows starting earlier, flows that start late may have a different idea of the minimum RTT on the path.
- This gives an unfair advantage to flows starting later, giving them a larger share of the bandwidth.
- No loss of throughput.
- Maintains high link utilization.
- Buildup of the bottleneck queue may not be cleared out quickly.
- VEGAS
RTT = Propagation Delay + Queuing Delay
18 Response to Congestion Prediction: How do we reduce the impact of false positives?
- Keep the amount of response small.
- Respond probabilistically.
- When the probability of false positives is high, the probability of response to an early congestion signal should be low.
High probability of false positives: low response! Low probability of false positives: high response!
19 Designing the Probabilistic Response: when do false positives occur?
- False positives occur when the queue length is smaller.
- False positives occur when the queue length is less than 50% of the total queue size.
srtt_0.99 is the congestion-prediction signal.
20 Designing the Probabilistic Response: what should the response function be?
- The response should be:
- Small for small queue sizes.
- Large for large queue sizes.
srtt_0.99 is the congestion-prediction signal.
21 Designing the Probabilistic Response: what should the response function be?
- Thus we emulate the probabilistic response function of RED.
- Thus:
- P - Probabilistic
- E - Early
- R - Response
- T - TCP
22 PERT
- Tmin (minimum threshold): P + 5 ms (= 5 ms)
- Tmax (maximum threshold): P + 10 ms (= 10 ms)
- pmax (maximum probability of response): 0.05
- P (propagation delay) ... 0?!
23 Probabilistic Response Curve used by PERT
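Since the curve itself appears only as a figure, below is a minimal Python sketch of a RED-like response probability built from the parameters on slide 22 (Tmin = 5 ms, Tmax = 10 ms, pmax = 0.05). The behavior above Tmax is not spelled out on these slides, so that part is an assumption:

```python
T_MIN = 0.005   # minimum threshold: 5 ms of estimated queuing delay (slide 22)
T_MAX = 0.010   # maximum threshold: 10 ms (slide 22)
P_MAX = 0.05    # maximum probability of early response (slide 22)

def early_response_probability(queuing_delay):
    """RED-like response probability as a function of estimated queuing delay."""
    if queuing_delay <= T_MIN:
        return 0.0                                   # no early response
    if queuing_delay <= T_MAX:
        # Linear ramp from 0 to P_MAX between the two thresholds, as in RED.
        return P_MAX * (queuing_delay - T_MIN) / (T_MAX - T_MIN)
    # Assumed behavior beyond T_MAX: keep ramping toward 1 ("gentle RED" style).
    return min(1.0, P_MAX + (1.0 - P_MAX) * (queuing_delay - T_MAX) / T_MAX)

# Example: srtt = 68 ms, minimum observed RTT (a proxy for P) = 60 ms,
# so the estimated queuing delay is 8 ms.
print(early_response_probability(0.068 - 0.060))   # about 0.03
```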
24 Is it necessary to have a 50% reduction in the congestion window in the case of early response?
- Router buffers are commonly set to the bandwidth-delay product of the link, since a TCP flow reduces its window by 50%.
- If B is the buffer size and f is the window reduction factor, the relationship between them is given by the relation sketched below.
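One common way to write this relation, assuming f is the fraction by which the congestion window is cut on a response and C * RTT is the bandwidth-delay product of the link (this form is a reconstruction, since the slide's equation did not survive extraction):

B \ge \frac{f}{1-f} \, C \cdot RTT

For f = 0.5 this gives B = C * RTT, the usual bandwidth-delay-product sizing; a smaller f requires a smaller buffer to keep the link fully utilized.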
Since the flows respond before the bottleneck queue is full, a large multiplicative decrease can result in lower link utilization, but reducing the amount of response makes it harder to empty the buffer, leading to unfairness.
25 Experimental Evaluation
- Impact of bottleneck link bandwidth.
- Setup: a single bottleneck with bottleneck bandwidth between 1 Mbps and 1 Gbps, RTT from 10 ms to 1 s. Simulations run for 400 s, with results measured over the stable period. Here the RTT is set to 60 ms.
26 Experimental Evaluation
- Impact of round-trip delays.
- The bottleneck link bandwidth is 150 Mbps and the number of flows is 50. The end-to-end delay is varied from 10 ms to 1 s.
27 Experimental Evaluation
- Impact of varying the number of long-term flows.
- Link bandwidth set to 500 Mbps, end-to-end delay set to 60 ms.
28
Bottleneck link b/w: 150 Mbps; end-to-end delay: 60 ms; long-term flows: 50; short-term flows varying from 10 to 1000.
Bottleneck link b/w: 150 Mbps; end-to-end delay: n x 12, 1 < n < 10; short-term flows: 100.
29 Multiple Bottlenecks
Bottleneck link bandwidth: 150 Mbps, delay: 5 ms; link capacity: 1 Gbps, delay: 5 ms.
Response to sudden changes in responsive traffic.
31 Modeling of PERT
- Notation: forward propagation delay; C = link capacity; q(t) = queue size at time t.
- Note: the queuing delay is perceived before R(t).
- Equations (2), (A), and (3), shown as figures on the slide, give the RTT and the window dynamics of PERT.
32 Modeling of PERT
- Note: PERT makes its decision at the end host, not at the router.
- Equations (4), (5), and (6), shown as figures, describe the queue dynamics, which depend on how the incoming rate y(t) compares with the link capacity C.
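The queue equations themselves did not survive as text; the standard fluid-model form they most likely take, given the definitions above (an assumption, not a transcription of the slide), is:

\frac{dq(t)}{dt} = y(t) - C \quad \text{for } q(t) > 0

That is, the bottleneck queue grows when the aggregate incoming rate y(t) exceeds the link capacity C and drains otherwise.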
33 Modeling of PERT
- Substituting from equation (A) yields equation (7), shown as a figure.
34 Simulations
Stability
35 Emulating PI
36 Discussion
- Impact of Reverse traffic
- Co-existence with Non-Proactive Flows
37 Conclusion
- Congestion prediction at the end host is more accurate than characterized by previous studies, but further research is required to improve the accuracy of end-host delay-based predictors.
- PERT emulates the behavior of AQM in its congestion response function.
- Its benefits are similar to ECN.
- Its link utilization is similar to router-based schemes.
- PERT is flexible, in the sense that other AQM schemes can be emulated.
38 Few of Our Observations
- The authors have put in a good deal of effort, but is it as simple and attractive if implemented on any kind of network in real time?
- What modifications now have to be made at the end host, such as additional hardware/software, and at what cost?
- Is it compatible with other versions of TCP?
- Will this implementation give less proactive or misbehaving connections an advantage, letting them exploit my readiness to lessen the job a router has to perform?
39 Questions