1
The War Between Mice and Elephants
  • By Liang Guo and Ibrahim Matta
  • In Proceedings of ICNP'2001: The 9th IEEE
    International Conference on Network Protocols,
    Riverside, CA, November 2001.
  • Presented by Eswin Anzueto

2
Outline
  • Introduction
  • Analyzing Short TCP Flow Performance
  • Architecture and Mechanism: RIO-PS
  • Simulations
  • Discussions
  • Conclusions and Future work

3
Introduction
  • Internet traffic: most (80%) of the traffic is
    actually carried by only a small number of
    connections (elephants), while the remaining,
    much larger number of connections are very small
    in size or lifetime (mice).
  • In a fair network environment, short connections
    should expect relatively faster service than long
    connections. However, sometimes we cannot observe
    such a nice property in the current Internet.

4
Introduction (cont)
  • Factors affecting the performance of mice:
  • TCP ramps up its transmission rate conservatively
    toward the maximum available bandwidth. The
    sending (congestion) window is therefore
    initialized at the minimum possible value,
    regardless of how much bandwidth is actually
    available in the network (see the sketch below).
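
  As a rough illustration (not taken from the paper), the sketch
  below counts the round trips a loss-free slow-start transfer needs
  when the window starts at one segment; the function name and the
  example flow sizes are ours.

    # Rough illustration (not from the paper): round trips needed to
    # send `n` segments when slow start begins at cwnd = 1 and doubles
    # each RTT, assuming no losses and no receive-window or bandwidth
    # limit.
    def slow_start_rounds(n: int) -> int:
        rounds, cwnd, sent = 0, 1, 0
        while sent < n:
            sent += cwnd      # one window's worth of segments per RTT
            cwnd *= 2         # window doubles every round trip
            rounds += 1
        return rounds

    for size in (3, 10, 100, 10000):
        print(size, "segments ->", slow_start_rounds(size), "RTTs")
    # A 10-packet mouse spends ~4 RTTs just ramping up, a cost an
    # elephant amortizes over thousands of packets.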

5
Introduction (cont)
  • For short connections, the congestion window is
    very small most of the time, so a packet loss
    almost always requires a retransmission timeout to
    detect (there are not enough outstanding packets
    to trigger the duplicate-ACK fast-retransmit
    mechanism).
  • The initial timeout (ITO) is very conservative,
    since no RTT samples are available yet. Short
    connection performance is therefore degraded by
    the large timeout period (a rough illustration
    follows below).
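
  A minimal sketch of this penalty, assuming the conservative
  3-second initial timeout recommended at the time and a hypothetical
  100 ms round-trip time; the numbers and names are illustrative, not
  taken from the paper.

    # Illustrative only (not the paper's model): compare the extra
    # latency a single lost packet costs a mouse (recovery via the
    # initial timeout) versus an elephant (recovery via three
    # duplicate ACKs, roughly one RTT).
    RTT = 0.1          # hypothetical round-trip time, seconds
    INITIAL_RTO = 3.0  # conservative initial timeout of that era

    def loss_recovery_delay(cwnd: int) -> float:
        """Extra delay to recover one lost segment, given the window."""
        if cwnd >= 4:
            # enough packets in flight to generate 3 duplicate ACKs:
            # fast retransmit fires after roughly one more round trip
            return RTT
        # too few packets in flight: wait out the retransmission timeout
        return INITIAL_RTO

    print("mouse (cwnd=2):     +%.1f s" % loss_recovery_delay(2))
    print("elephant (cwnd=20): +%.1f s" % loss_recovery_delay(20))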

6
Introduction (cont)
  • In this paper, we propose to give preferential
    treatment to short flows with help from an Active
    Queue Management policy inside the network.
  • We also rely on the proposed Diffserv-like
    architecture to classify flows into short and
    long at the edge of the network.

7
Related work
  • Crovella et al 2001 16 and Bansal et al 2001
    17 comment that size aware job scheduling helps
    enhance the response time of short jobs without
    hurting the performance of long jobs.
  • Seddigh et al 2 shows the negative impact of
    the initial timeout value on the short TCP flow
    latency and propose to reduce the default
    recommended value

8
Outline
  • Introduction
  • Analyzing Short TCP Flow Performance
  • Architecture and Mechanism: RIO-PS
  • Simulations
  • Discussions
  • Conclusions and Future work

9
Sensitivity Analysis for Short and Long TCP Flows
  • Transmission time of short TCP flows is not very
    sensitive to the loss rate when the loss rate is
    relatively small, but it increases drastically as
    the loss rate becomes larger.

10
Sensitivity Analysis of Transmission Time
  • For small-size TCP flows, increasing the loss
    probability can lead to increased variability,
    while for long TCP flows, a large loss rate
    reduces the variability of transmission times.

11
Factors Affecting Variability
  • When the loss rate is high, TCP congestion
    control is more likely to enter the exponential
    backoff phase, which can cause significantly high
    variability in the transmission time of each
    individual packet of a flow.
  • When the loss rate is low, TCP stays in either
    slow start or congestion avoidance. This dimension
    of variability is more pronounced for long flows.
  • Since the first source of variability acts on
    individual packets of a flow, the law of large
    numbers indicates that its impact is more
    significant on short flows (see the sketch below).
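
  A toy Monte-Carlo sketch of this effect (not the paper's analytical
  model): each packet's delivery time occasionally includes an
  exponentially backed-off timeout, and the coefficient of variation
  of the total transfer time comes out much larger for a short flow
  than for a long one. All parameter values are hypothetical.

    # Toy Monte-Carlo sketch: per-packet delivery time is one RTT
    # plus, on loss, an exponentially backed-off timeout.  Averaging
    # over many packets (long flows) smooths this out, so the C.O.V.
    # of total transfer time is much higher for short flows.
    import random, statistics

    RTT, RTO, LOSS = 0.1, 1.0, 0.05   # hypothetical values

    def packet_time() -> float:
        t, rto = RTT, RTO
        while random.random() < LOSS:   # each loss costs a timeout,
            t += rto                     # and the timeout doubles
            rto *= 2
        return t

    def cov_of_transfer(num_packets: int, trials: int = 1000) -> float:
        totals = [sum(packet_time() for _ in range(num_packets))
                  for _ in range(trials)]
        return statistics.pstdev(totals) / statistics.mean(totals)

    random.seed(1)
    print("C.O.V., 10-packet flow:   %.2f" % cov_of_transfer(10))
    print("C.O.V., 1000-packet flow: %.2f" % cov_of_transfer(1000))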

12
Sensitivity Analysis of Transmission Time
  • We thus conclude that reducing the loss
    probability is more critical for helping short TCP
    flows experience less variation in transmission
    (response) time.
  • Observe that the C.O.V. (coefficient of variation,
    the standard deviation divided by the mean) of
    transmission times is closely related to the
    fairness of the system: smaller values imply
    higher fairness.
  • Such interesting behavior motivates us to give
    preferential treatment to short TCP flows.

13
Preferential Treatment to Short TCP Flows
  • Assumption:
  • Giving preferential treatment to short TCP flows
    can significantly enhance their transmission
    time without degrading long-flow performance.
  • Simulation using the NS simulator:
  • 10 long (10000-packet) TCP-NewReno flows and 10
    short (100-packet) TCP-NewReno flows over a
    1.25 Mbps link (setup summarized below).
  • Queue management policy: Drop Tail, RED, or RIO
    with preference to short flows.
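
  For readability, the setup above can be summarized as the following
  plain-Python configuration; the field names are ours, and the
  original study used the ns simulator rather than this structure.

    # Summary of the experiment on this slide, written as a plain
    # Python configuration.  Names are ours, not the paper's.
    experiment = {
        "bottleneck_bandwidth_bps": 1_250_000,   # 1.25 Mbps link
        "long_flows":  {"count": 10, "size_packets": 10_000,
                        "tcp": "NewReno"},
        "short_flows": {"count": 10, "size_packets": 100,
                        "tcp": "NewReno"},
        # queue-management policies compared at the bottleneck
        "queue_policies": ["DropTail", "RED", "RIO-PS"],
    }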

14
Link Utilization under Drop Tail, RED and RIO-PS
15
Network Goodput Under Different Schemes
16
Preferential Treatment to Short TCP Flows (cont)
  • In fact, this preferential treatment might even
    enhance the transmission of long flows, since they
    operate in a more stable network environment
    (less disturbed by short flows) for longer
    periods.
  • In a congested network, reducing the packet drops
    experienced by short flows can significantly
    enhance their response time and the fairness among
    them.

17
Outline
  • Introduction
  • Analyzing Short TCP Flow Performance
  • Architecture and Mechanism: RIO-PS
  • Simulations
  • Discussions
  • Conclusions and Future work

18
Proposed Architecture
19
Edge Router
  • Determines whether a packet comes from a long or a
    short flow.
  • Accurate flow characterization can be very
    complicated; instead, we simply maintain a counter
    (Lt) that tracks how many packets have been
    observed so far for a flow. Once Lt exceeds a
    certain threshold, we consider the flow to be
    long.
  • Per-flow state information is maintained as soft
    state to detect the termination of a flow. The
    flow hash table is updated periodically, every Tu
    time units.
  • The edge router is configured with a target SLR
    (Short-to-Long Ratio). It then periodically (every
    Tc time units) performs AIAD control over the
    threshold to achieve the target SLR (see the
    sketch below).
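
  A minimal sketch of this edge-router logic, assuming per-flow soft
  state keyed by a flow identifier; the class name, AIAD step size,
  and timer values are our own placeholders, not the paper's
  implementation.

    # Minimal sketch of the edge-router classifier (our simplification).
    # Each flow gets a packet counter Lt kept as soft state; packets are
    # marked "short" until Lt crosses a threshold that AIAD control
    # adjusts toward a target short-to-long ratio (SLR).
    import time

    class EdgeClassifier:
        def __init__(self, target_slr=1.0, init_threshold=20,
                     aiad_step=1, tu=1.0, tc=5.0):
            self.threshold = init_threshold  # packets before "long"
            self.target_slr = target_slr     # desired short/long ratio
            self.aiad_step = aiad_step       # additive inc./dec. step
            self.tu, self.tc = tu, tc        # soft-state / control periods
            self.flows = {}                  # flow_id -> [count, last_seen]

        def classify(self, flow_id) -> str:
            """Count a packet of `flow_id`, return its class."""
            entry = self.flows.setdefault(flow_id, [0, time.monotonic()])
            entry[0] += 1
            entry[1] = time.monotonic()
            return "short" if entry[0] <= self.threshold else "long"

        def expire_flows(self, idle_timeout=30.0):
            """Soft-state cleanup, run every Tu time units."""
            now = time.monotonic()
            self.flows = {fid: e for fid, e in self.flows.items()
                          if now - e[1] < idle_timeout}

        def adjust_threshold(self):
            """AIAD control toward the target SLR, run every Tc units."""
            shorts = sum(1 for count, _ in self.flows.values()
                         if count <= self.threshold)
            longs = len(self.flows) - shorts
            slr = shorts / max(longs, 1)
            if slr < self.target_slr:
                self.threshold += self.aiad_step      # admit more "short"
            elif slr > self.target_slr:
                self.threshold = max(1, self.threshold - self.aiad_step)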

20
Core Router
  • Gives preferential treatment to packets of short
    flows.
  • The RIO (RED with In and Out) queueing policy is
    used because of its conformity to the Diffserv
    specification.
  • The probability of dropping a short-flow packet
    depends on the average backlog of short packets in
    the queue. For long-flow packets, on the contrary,
    the total average queue size is used to detect
    incipient congestion, so those flows have to give
    up some resources.
  • No packet reordering happens in the FIFO queue
    with RIO.
  • RIO inherits all features of RED (a sketch of the
    drop decision follows below).
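
  A minimal sketch of the RIO-PS drop decision using standard
  RED-style parameters; the thresholds and maximum drop probabilities
  are placeholders of our own, not the paper's settings.

    # Minimal sketch of the RIO-PS drop decision (our placeholders).
    # "Short" packets are dropped based on the average backlog of
    # short packets only; "long" packets are dropped based on the
    # total average queue, as in a two-class RIO queue.
    import random

    def red_drop_prob(avg_queue, min_th, max_th, max_p):
        """Classic RED drop probability from an averaged queue size."""
        if avg_queue < min_th:
            return 0.0
        if avg_queue >= max_th:
            return 1.0
        return max_p * (avg_queue - min_th) / (max_th - min_th)

    def rio_ps_should_drop(pkt_class, avg_short_queue, avg_total_queue):
        if pkt_class == "short":
            # short packets see only the short-packet backlog
            p = red_drop_prob(avg_short_queue, min_th=10, max_th=40,
                              max_p=0.02)
        else:
            # long packets see the total backlog and back off first
            p = red_drop_prob(avg_total_queue, min_th=5, max_th=30,
                              max_p=0.10)
        return random.random() < p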

21
RIO Queue with Preferential Treatment to Short Flows
22
Outline
  • Introduction
  • Analyzing Short TCP Flow Performance
  • Architecture and Mechanism: RIO-PS
  • Simulations
  • Discussions
  • Conclusions and Future work

23
Simulation
  (Simulation topology figure: edge routers and core router)
24
Simulation (cont)
25
Simulation (cont)
  • 4000 seconds of simulation time, 2000 seconds of
    warm-up time.
  • Average response time relative to RED.

26
Simulation (cont)
27
Simulation (cont)
28
Transmission Time of foreground traffic
29
Network goodput
30
Outline
  • Introduction
  • Analyzing Short TCP Flow Performance
  • Architecture and Mechanism: RIO-PS
  • Simulations
  • Discussions
  • Conclusions and Future work

31
Discussion: Comments on the Simulation Model
  • Our simulation uses a one-way traffic model.
  • All TCP connections have similar end-to-end
    propagation delays; this is not a common topology
    seen by Internet users.
  • When reverse traffic is also present, our proposed
    scheme should perform even better relative to the
    traditional policies.

32
Discussion: Queue Management Policy
  • RIO provides neither absolute aggregate
    (class-based) guarantees nor relative flow-based
    guarantees.
  • To attack these problems, one may resort to AQM
    policies such as the PI-controlled RED queue, or a
    service model such as the proposed Proportional
    Diffserv, for better control over the classified
    traffic and more predictable service.

33
Discussion: Deployment Issues
  • Our proposed scheme requires edge devices to be
    able to perform per-flow state maintenance and
    per-packet processing.
  • The scheme does not require the queue mechanisms
    to be implemented at each router.

34
Discussion: Flow Classification
  • We use a threshold-based classification method.
  • Such a method mistakenly classifies the first few
    packets of a long flow as if they came from a
    short flow. However, this mistake may help enhance
    performance and make the system fairer to all TCP
    connections.
  • The first few packets of a long flow are more
    vulnerable to packet losses and deserve to be
    treated with high preference.

35
Discussion: Controller Design
  • Our preliminary results indicate that the
    performance is not very sensitive to the target
    load ratio of active short to active long flows
    (the value of SLR at the edge).
  • The actual SLR depends on the values of Tc and
    Tu, which determine how often the classification
    threshold and active flow table are updated,
    respectively.

36
Discussion: Malicious Users
  • One concern regarding our proposed scheme may be
    that users are then encouraged to break long
    transmissions into small pieces so that they can
    enjoy faster service. However, we argue that such
    an approach may not be so attractive to users,
    given the large overhead of fragmentation and
    reassembly.

37
Outline
  • Introduction
  • Analyzing Short TCP Flow Performance
  • Architecture and Mechanism: RIO-PS
  • Simulations
  • Discussions
  • Conclusions

38
Conclusions
  • The performance of the majority of TCP flows (the
    short transfers, or mice) is improved in terms of
    response time and fairness.
  • The performance of the few elephants is also
    improved.
  • The overall goodput of the system is also improved,
    or at least stays almost the same.

39
Conclusions (cont)
  • The proposed architecture is flexible in that the
    functionality that defines this scheme can be
    largely tuned at the edge routers.

40
Thank you!
  • Thank you to the following people for the
    pictures used in this presentation:
  • Matt Hartling, Sumit Kumbhar
  • Iris Su
  • Preeti Phadnis