1
Active Queue Management
2
Fundamental problem: Queues and TCP
  • Queues
  • Queues absorb bursts of packets.
  • They are required for statistical multiplexing.
  • Queuing theory shows how much higher throughput is
    possible when queues are included.

For an M/M/1/K queue: if packets arrive at rate λ, the rate of lost
packets is λ · P(full queue), and the rate at which packets are
transmitted is λ · (1 - P(full queue)). The fraction of packets
successfully transmitted is therefore (1 - P(full queue)).
Conclusion: with longer queues we get higher good-put.
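To make this concrete, here is a minimal sketch (assuming the M/M/1/K model above, with illustrative arrival and service rates) that computes P(full queue) and the resulting good-put for a few queue capacities K:

```python
# Sketch: M/M/1/K blocking probability and good-put versus queue capacity K.
# The rates lam and mu below are illustrative, not from the slides.

def mm1k_blocking(lam: float, mu: float, K: int) -> float:
    """P(full queue) for an M/M/1/K queue with utilization rho = lam/mu."""
    rho = lam / mu
    if rho == 1.0:
        return 1.0 / (K + 1)
    return (1 - rho) * rho**K / (1 - rho**(K + 1))

lam, mu = 0.9, 1.0                     # 90% offered load
for K in (1, 5, 10, 20, 50):
    p_full = mm1k_blocking(lam, mu, K)
    goodput = lam * (1 - p_full)       # rate of packets actually transmitted
    print(f"K={K:3d}  P(full)={p_full:.4f}  good-put={goodput:.4f}")
```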
3
Fundamental problem: Queues and TCP
  • TCP
  • TCP fills the queue to determine the bandwidth.
  • The transmission time of a short file is
    dependent on the RTT, which depends on the queue
    occupancy.
  • Faster networks are achieved by smaller queues.
    (?)

TCP fills queues. Long queues cause delay. Small
queue capacity decreases good-put.
TCP uses the queues in a funny way.
Maybe the router can detect that TCP has reached
the link bit-rate and then drop packets to inform
TCP. This could give small queue occupancy but
still have big queues to absorb bursts.
4
Objectives of AQM
  • Maximize throughput
  • If the queue is empty, then the link is idle → a
    reduction in throughput.
  • Due to random variations in packet arrivals, the
    queue occupancy will vary.
  • If the mean queue occupancy is large, then it is
    less likely that the queue will ever wander to
    empty.
  • In the single bottleneck case, if the queue
    capacity is the same as the bandwidth-delay product
    of the bottleneck, then the queue will never empty.
  • Minimize queue size
  • Voice-over-IP requires a delay of no more than
    250 ms. On a 10 Mbps link with 1500 B packets, a
    queue of about 200 packets will cause about 250 ms
    of queuing delay.
  • Of course a few late packets are permissible, but
    a long string of late packets would degrade the
    quality.
  • The average Google web page is 40 KB ≈ 26 packets ≈
    6 RTTs (including SYN, but not including DNS).
  • 25 ms RTT → 150 ms transfer time ≈ 0.
  • 250 ms RTT → 1.5 s transfer time, not ≈ 0. (A quick
    check of these numbers follows this list.)
  • Objective: full throughput but with small queuing
    delay. Maximize the time during which the queue is
    not empty, with a bound on the queue occupancy or
    on the probability of the queue exceeding some
    occupancy.
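A quick back-of-the-envelope check of the numbers above (a sketch; the packet size, link rate, page size, and RTT count are taken from the slide, the rest is illustrative):

```python
# Sketch: verify the queuing-delay and transfer-time arithmetic above.

PKT_BYTES = 1500
LINK_BPS = 10e6                       # 10 Mbps bottleneck

queue_pkts = 200
queuing_delay = queue_pkts * PKT_BYTES * 8 / LINK_BPS
print(f"queuing delay for {queue_pkts} packets: {queuing_delay*1e3:.0f} ms")  # ~240 ms

page_bytes = 40_000                   # average Google page, per the slide
packets = -(-page_bytes // PKT_BYTES) # ceiling division -> ~27 packets
rtts = 6                              # slow start + SYN, per the slide
for rtt in (0.025, 0.250):
    print(f"RTT {rtt*1e3:.0f} ms -> transfer time ~{rtts*rtt*1e3:.0f} ms")
```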

5
Objectives of AQM
  • No bias against bursty traffic.
  • TCP sends packets in bursts.
  • The burst size is approximately the size of the
    cwnd.
  • Flows with large RTT will send a large number of
    packets in a burst.
  • Drop tail queues will tend to drop packets that
    arrive in bursts. Hence, these flows are treated
    unfairly.
  • AQM should attempt to solve this problem.

6
Global Synchronization
  • If several flows share the same bottleneck and
    have the same RTT, they will synchronize. This
    synchronization will give the optimal throughput.
  • If the RTTs are different, then weird
    synchronizations can happen where some flows get
    a huge amount of the bandwidth.
  • In simulation, such synchronization is very
    difficult to avoid.
  • If packets are dropped randomly, as most AQM
    schemes do, then there is no synchronization.
  • However, if there are short flows, then
    synchronization does not occur.
  • End-host random delay also can get rid of
    synchronization.
  • It is not known if synchronization occurs in real
    networks.
  • It appears to be more likely in high-speed
    networks.

7
Protect against non-conformant TCP flows
  • If a flow sends packets very fast, the queue will
    fill and all flows sharing the bottleneck will
    receive drops. TCP flows will decrease their
    bit-rate.
  • If a non-TCP flow sends too fast, the TCP flows
    will starve.
  • The router should detect such situations and drop
    the non-TCP flow's packets.

8
Stability
  • AQM/TCP is a closed-loop feedback system, hence
    stability is a concern.
  • If the system is unstable, then the queue
    occupancy and flow bit-rates might wildly
    oscillate.
  • On the other hand, as long as the queue never
    empties and never fills too far, stability
    doesn't matter.
  • A huge amount of research has focused on the
    stability issue, but only for the case of
    infinitely long flows.

9
Ease of use
  • Network operators should not need a Ph.D. to
    set up the AQM (routing is hard enough).
  • Complicated systems are difficult to understand
    and might bring unforeseen problems/weaknesses.
    The Internet is necessary for economic vitality.
  • Complicated protocols are likely to be
    implemented incorrectly. This might cause other
    unforeseen problems.
  • Complicated schemes are viewed as being
    non-robust and hence are not trusted by network
    operators.
  • Reliability takes precedence over performance.

10
AQM schemes
  • Drop tail - the first and the most widely used.
  • RED (random early discard/detection), Sally Floyd
    and Van Jacobson, 1993.

A smoothed/filtered queue occupancy is maintained; the kth packet
arrives at time τk.
[Figure: marking probability f(q) versus queue occupancy q for RED
and Gentle RED, with parameters minth, maxth, maxp, and qmax.]
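A minimal sketch of the marking curve in the figure (the EWMA update and the "gentle" extension to 2·maxth follow the usual RED/Gentle RED description; the parameter values in the example are illustrative, not from the slides):

```python
# Sketch: RED / Gentle RED marking probability as a function of the
# smoothed queue occupancy (the f(q) curves in the figure above).

def ewma_update(avg_q: float, q: float, w_q: float = 0.002) -> float:
    """Smoothed/filtered queue occupancy, updated on each packet arrival."""
    return (1.0 - w_q) * avg_q + w_q * q

def red_mark_prob(avg_q, minth, maxth, maxp, gentle=False):
    """f(q): 0 below minth, linear up to maxp at maxth; Gentle RED then
    rises linearly to 1 at 2*maxth instead of jumping straight to 1."""
    if avg_q < minth:
        return 0.0
    if avg_q < maxth:
        return maxp * (avg_q - minth) / (maxth - minth)
    if gentle and avg_q < 2 * maxth:
        return maxp + (1.0 - maxp) * (avg_q - maxth) / maxth
    return 1.0

# Illustrative parameters: minth=10, maxth=30, maxp=0.1
for q in (5, 15, 25, 35, 45, 65):
    print(q, red_mark_prob(q, 10, 30, 0.1), red_mark_prob(q, 10, 30, 0.1, gentle=True))
```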
11
Adaptive RED (ARED)
  • RED's parameters are difficult to adjust. They
    seem to depend on the number of flows and the RTT
    of the flows.
  • ARED dynamically adjusts maxp to account for
    random variations in the traffic.

m denotes the sample index and T the sample period, usually
0.5 seconds.
The ARED paper says that maxp is adjusted in an AIMD fashion, but
this is only true if maxp < 4%!
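A minimal sketch of ARED's periodic adjustment of maxp (the step sizes min(0.01, maxp/4) and 0.9 and the bounds on maxp are the values suggested in the ARED paper as I understand it; treat them as assumptions):

```python
# Sketch: ARED's periodic adjustment of RED's maxp, run once per sample
# period T (~0.5 s). target_lo/target_hi bound the desired average queue.

def ared_adjust_maxp(maxp, avg_q, target_lo, target_hi):
    """Adjust maxp toward keeping avg_q inside [target_lo, target_hi].
    Note: the increase min(0.01, maxp/4) is a fixed additive step only
    when maxp >= 0.04; for smaller maxp it is effectively multiplicative,
    which is the caveat about AIMD mentioned above."""
    if avg_q > target_hi and maxp <= 0.5:
        maxp += min(0.01, maxp / 4)   # "additive" increase (alpha)
    elif avg_q < target_lo and maxp >= 0.01:
        maxp *= 0.9                   # multiplicative decrease (beta)
    return maxp
```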
12
Linear Control-Theoretic PI Controller
TCP model: a nonlinear ODE; near an operating point, a linear model
can be found. The queue dynamics are modeled in the same way.
A proportional-integral (PI) controller is used, i.e. the transfer
function from queue occupancy to dropping probability contains a
proportional term and an integral term.
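A minimal sketch of how such a PI controller is commonly implemented in discrete time (the incremental update form is standard; the gains a and b and the target queue are illustrative assumptions, not values from the slides):

```python
# Sketch: discrete-time PI AQM — update the drop probability once per
# sample period from the queue-occupancy error.

class PIController:
    def __init__(self, q_ref: float, a: float = 1.822e-5, b: float = 1.816e-5):
        self.q_ref = q_ref        # target queue occupancy (packets)
        self.a, self.b = a, b     # gains chosen by the linear design
        self.p = 0.0              # current drop probability
        self.q_prev = 0.0

    def update(self, q: float) -> float:
        """Called once per sample period with the current queue length q."""
        self.p += self.a * (q - self.q_ref) - self.b * (self.q_prev - self.q_ref)
        self.p = min(max(self.p, 0.0), 1.0)
        self.q_prev = q
        return self.p
```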
13
REM (random exponential marking)
A variable r is maintained.
q* is the desired queue occupancy (possibly 0), λ(k) is the arrival
rate during the kth sample, C is the link capacity, and T is the
sample period.
The loss probability during the kth sample interval is an
exponentially increasing function of r.
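A minimal sketch of REM in the spirit of the original paper (the price-update form, the constants γ, α, and φ, and the variable names are assumptions based on that paper, not taken from the slides):

```python
# Sketch: REM — maintain a "price" r per link; mark/drop with probability
# 1 - phi**(-r), which grows exponentially in r (hence the name).

def rem_update_price(r, q, q_target, lam, C, T, gamma=0.001, alpha=0.1):
    """One sample-period update of the price r.
    lam: arrival rate during the sample; C: link capacity; T: sample period.
    The mismatch combines excess backlog and excess arrival rate."""
    mismatch = alpha * (q - q_target) + (lam - C) * T
    return max(0.0, r + gamma * mismatch)

def rem_mark_prob(r, phi=1.001):
    """Marking/dropping probability for the current price r."""
    return 1.0 - phi ** (-r)
```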
14
Adaptive Virtual Queue (AVQ)
  • The objective is to keep the queue empty.
  • Most AQM methods could be extended to the virtual
    queue case.

There are two queues: a virtual one and a real one. When a packet
arrives, it is put in the real queue and a token is put in the
virtual queue. Packets in the real queue are served at the rate of
the real link. Tokens in the virtual queue are served at the
virtual bit-rate (given below). If the virtual queue fills, then
the arriving packet is dropped from the real queue.
  • α - user parameter
  • γ - desired utilization
  • λ(t) - arrival rate at time t
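A minimal sketch of the AVQ arrival processing (the virtual-capacity update follows dC̃/dt = α(γC − λ) as in the AVQ paper; the discretization, rate estimate, and parameter values here are assumptions):

```python
# Sketch: Adaptive Virtual Queue — a token queue served at a virtual
# capacity C_virt that adapts toward the desired utilization gamma.

class AVQ:
    def __init__(self, C, gamma=0.98, alpha=0.15, vq_limit=100):
        self.C = C                  # real link capacity (packets/s)
        self.gamma = gamma          # desired utilization
        self.alpha = alpha          # user parameter (adaptation gain)
        self.C_virt = gamma * C     # virtual capacity
        self.vq = 0.0               # virtual queue occupancy (tokens)
        self.vq_limit = vq_limit

    def on_arrival(self, now, last_arrival):
        """Return True if the arriving packet should be dropped."""
        dt = now - last_arrival
        self.vq = max(0.0, self.vq - self.C_virt * dt)    # drain tokens
        lam = 1.0 / dt if dt > 0 else self.C               # crude rate estimate
        self.C_virt += self.alpha * (self.gamma * self.C - lam) * dt
        self.C_virt = min(max(self.C_virt, 0.0), self.C)
        if self.vq + 1 > self.vq_limit:
            return True                                     # virtual queue full -> drop
        self.vq += 1                                        # enqueue a token
        return False
```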

15
The Effects of Active Queue Management on Web Performance
(SIGCOMM 2003), Le, Aikat, Jeffay, Smith
  • Monitored the UNC network to determine realistic
    network traffic.
  • Applied this type of traffic to an experimental
    network (not simulations, well not really).
  • Conclusions
  • At 80% utilization, all AQM schemes performed the
    same.
  • At 90% utilization, if ECN was not used, then all
    AQM schemes yield the same performance.
  • At 90% utilization, if ECN is used, then PI and
    REM work the best;
  • ARED works the worst.
  • The experiment is a bit funny because all AQM
    methods tested were designed for long-lived flows
    and yet the test was for short-lived flows.

16
The Effects of Active Queue Management on Web
Performance
  • Experimental set-up
  • Traffic: from a large data-collection project, the
    file sizes and think times for HTTP traffic were
    determined.
  • Calibration: with the bottleneck set at 1 Gbps, the
    number of users that produces an average demand of
    80 Mbps, 90 Mbps, 98 Mbps, and 105 Mbps was found.
  • Experiment: with a 100 Mbps bottleneck, the number
    of users was set according to the demands found
    during calibration. PI, REM, and ARED were tested
    at each demand.

17
Results
  • Without ECN, drop-tail, PI, and REM all perform
    about the same (PI slightly better); ARED is a bit
    worse.
  • The highest utilization and smallest loss
    probability are obtained with drop tail and a large
    queue.

18
Results with ECN
19
At 105% demand, the difference between drop-tail and PI/REM is
larger.
20
Conclusions
  • ARED is an ad hoc scheme.
  • REM and PI are better thought out and seem to work
    better.
  • But without ECN these schemes don't bring any
    benefit. One possible reason is that for large
    drop probabilities TCP times out, and no scheme
    considers the effect of timeouts.
  • Another problem with PI and REM is that they were
    developed for long-lived flows, whereas the
    experiment, as well as the Internet, has a
    significant amount of traffic from short flows.