1
Understanding the Performance of TCP Pacing
  • Amit Aggarwal, Stefan Savage, Thomas Anderson
  • Department of Computer Science and Engineering
  • University of Washington

2
TCP Overview
  • TCP is a sliding-window-based protocol.
  • Ack-clocking: new data is released by returning acks.
  • Slow-start phase (W → 2W each RTT).
  • Congestion-avoidance phase (W → W + 1 each RTT); see the sketch below.
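As a rough illustration of the two growth regimes above, here is a minimal sketch (not from the slides) that models window growth at RTT granularity and ignores ack clocking within a round; the function name and ssthresh value are illustrative.

def cwnd_evolution(ssthresh, rtts, initial_cwnd=1):
    """Model TCP window growth at RTT granularity: slow start doubles
    cwnd each RTT until ssthresh, then congestion avoidance adds one
    segment per RTT."""
    cwnd, history = initial_cwnd, []
    for _ in range(rtts):
        history.append(cwnd)
        if cwnd < ssthresh:
            cwnd *= 2      # slow start: W -> 2W per RTT
        else:
            cwnd += 1      # congestion avoidance: W -> W + 1 per RTT
    return history

print(cwnd_evolution(ssthresh=16, rtts=10))
# [1, 2, 4, 8, 16, 17, 18, 19, 20, 21]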

TCP Burstiness
  • Slow Start
  • Losses
  • Ack compression
  • Multiplexing

3
Motivation
  • From queuing theory, we know that bursty traffic produces:
  • Higher queuing delays.
  • More packet losses.
  • Lower throughput.

[Figure: response time vs. load for best-case (evenly spaced), random, and worst-case (bursty) arrivals; as load approaches 1, response time rises toward the queue capacity.]
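To make the claim concrete, here is a minimal single-server queue sketch (an illustration under simplifying assumptions, not the paper's simulation): the same per-RTT load is offered either evenly spaced or as one back-to-back burst, and the mean queueing delay is compared.

def avg_queue_delay(arrival_times, service_time):
    """FIFO single server: mean time a packet waits before service."""
    free_at, total_wait = 0.0, 0.0
    for t in sorted(arrival_times):
        start = max(t, free_at)          # wait until the server is free
        total_wait += start - t
        free_at = start + service_time
    return total_wait / len(arrival_times)

rtt, pkts, service = 0.1, 80, 0.001      # 80 pkts per 100 ms RTT, 1 ms each (80% load)
paced  = [i * rtt / pkts for i in range(pkts)]   # evenly spaced over the RTT
bursty = [0.0] * pkts                            # all packets sent back-to-back
print(avg_queue_delay(paced, service))   # 0.0    -> no queue ever builds
print(avg_queue_delay(bursty, service))  # 0.0395 -> packet i waits i*1 ms; mean ~39.5 ms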
4
Contribution
  • Evaluate the impact of evenly pacing TCP packets
    across a round-trip time.

What to expect from pacing TCP packets?
  • Better for the flows themselves, since packets are less likely to be dropped when they are not clumped together.
  • Better for the network, since competing flows see less queuing delay and fewer burst losses.
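A paced sender spreads the current window evenly across the round-trip time instead of releasing packets in ack-driven clumps. A minimal sketch of such a transmit schedule (the function name and units are illustrative, not the authors' implementation):

def paced_send_times(cwnd, rtt, now=0.0):
    """Schedule cwnd packets evenly across one RTT:
    inter-packet gap = RTT / cwnd, i.e. a sending rate of cwnd / RTT."""
    gap = rtt / cwnd
    return [now + i * gap for i in range(cwnd)]

# e.g. cwnd = 8 packets, RTT = 100 ms -> one packet every 12.5 ms
print(paced_send_times(cwnd=8, rtt=0.1))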

5
Simulation Setup
[Figure: dumbbell topology — senders S1…Sn connect through 4x Mbps, 5 ms access links to a shared bottleneck link (bandwidth B = x Mbps, 40 ms delay, buffer of S packets), and on to receivers R1…Rn.]
  • Jain's fairness index: f = (Σ xi)² / (n · Σ xi²)
  • With heterogeneous RTTs, throughputs are RTT-weighted: f = (Σ xi·RTTi)² / (n · Σ (xi·RTTi)²)
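A small helper for the fairness metric above (a sketch; xi is flow i's throughput, and the optional RTT weighting follows the slide's second form):

def jain_fairness(throughputs, rtts=None):
    """Jain's fairness index f = (sum xi)^2 / (n * sum xi^2);
    if RTTs are given, each xi is weighted by its RTT as on the slide."""
    xs = list(throughputs) if rtts is None else \
         [x * r for x, r in zip(throughputs, rtts)]
    n = len(xs)
    return sum(xs) ** 2 / (n * sum(x * x for x in xs))

print(jain_fairness([10, 10, 10, 10]))   # 1.0   -> perfectly fair
print(jain_fairness([40, 1, 1, 1]))      # ~0.29 -> one flow dominates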

6
Experimental Results
  • A) Single Flow
  • Case S = 0.25·B·RTT
  • TCP Reno, due to its burstiness in slow start, incurs a loss when W = 0.5·B·RTT.
  • Paced TCP incurs its first loss only after it saturates the pipe, i.e. when W = 2·B·RTT.
  • As a result, TCP Reno takes more time in congestion avoidance to ramp up to B·RTT.
  • (Paced TCP achieves better throughput only at the beginning.)
  • Case S = B·RTT
  • (Both achieve similar throughput.)
  • The bursty behavior of TCP Reno is absorbed by the buffer, and it does not incur a loss until W = B·RTT.
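The loss points quoted above follow from back-of-the-envelope arithmetic. The sketch below assumes slow start doubles the window each RTT, that a bursty Reno slow start queues roughly half a window at the bottleneck (each ack releases two packets, so it sends at about twice the bottleneck rate), and that a paced window only queues whatever exceeds the pipe; these modeling assumptions are mine, not stated on the slide.

def first_loss_window(pipe_pkts, buf_pkts, paced):
    """Window (in packets) at which slow start first overflows the buffer.
    Bursty Reno: backlog ~ W/2 (sending at ~2x the bottleneck rate).
    Paced:       backlog ~ max(0, W - pipe), since packets are spread out."""
    w = 1
    while True:
        backlog = max(0, w - pipe_pkts) if paced else w / 2
        if backlog > buf_pkts:
            return w
        w *= 2                         # slow start doubles each RTT

pipe, buf = 100, 25                    # B*RTT = 100 pkts, S = 0.25*B*RTT
print(first_loss_window(pipe, buf, paced=False))  # 64  ~ 0.5 * B*RTT
print(first_loss_window(pipe, buf, paced=True))   # 128 ~ 2 * B*RTT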

7
  • B) Multiple Flows
  • 50 flows starting at the same time. All flows have the same RTT.
  • Case S = 0.25·B·RTT
  • (TCP Reno achieves better throughput at the beginning!)
  • (Paced TCP achieves better throughput in steady state!)
  • TCP Reno: flows send bursts of packets in clusters; some drop early and back off, allowing the others to ramp up.
  • Paced TCP: all flows first saturate the pipe. At that point everyone drops because of congestion and the mixing of flows, leaving the bottleneck under-utilized. (Synchronization effect)
  • In steady state, all packets are spread out and flows are mixed; as a result, there is randomness in which packets are dropped. During a given phase, some flows might take multiple losses while others escape without any. (De-synchronization effect)
  • Case S = B·D (the full bandwidth-delay product)
  • The de-synchronization effect of paced TCP persists.
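The contrast between synchronized and de-synchronized drops can be caricatured with a round-based model (my simplification, not the paper's simulation): all flows grow until the aggregate exceeds pipe plus buffer, and then either every flow backs off together or only a random subset does. The parameter values are illustrative.

import random

def mean_utilization(pipe, buf, flows, rounds, synchronized, seed=1):
    """Round-based caricature: each flow adds one packet per round; when
    the aggregate exceeds pipe + buffer, either all flows halve
    (synchronized drops) or each flow halves with probability 0.5
    (de-synchronized drops). Returns mean bottleneck utilization."""
    random.seed(seed)
    w = [pipe // flows] * flows            # start near the fair share
    used = 0.0
    for _ in range(rounds):
        total = sum(w)
        used += min(total, pipe) / pipe
        if total > pipe + buf:
            w = [max(1, x // 2) if (synchronized or random.random() < 0.5) else x
                 for x in w]
        else:
            w = [x + 1 for x in w]
    return used / rounds

# Synchronized backoff leaves the link under-utilized; randomized backoff does not.
print(mean_utilization(pipe=500, buf=125, flows=50, rounds=2000, synchronized=True))
print(mean_utilization(pipe=500, buf=125, flows=50, rounds=2000, synchronized=False))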

8
  • C) Multiple Flows - Variable RTT
  • 50 flows starting at the same time: 25 flows with RTT = 100 msec and 25 flows with RTT = 280 msec.
  • Case S = 0.25·B·RTT
  • (Paced TCP achieves better fairness without sacrificing throughput.)
  • TCP Reno: the higher burstiness that results from overlapping packet clusters from different flows becomes visible. Reno has a higher drop rate at the bottleneck link while achieving similar throughput.
  • Case S = B·D
  • TCP Reno: the higher drop rate persists.

9
  • D) Variable Length Flows
  • A constant-size flow is established between each of 20 senders and the corresponding 20 receivers. As a particular flow finishes, a new flow is established between the same nodes after an exponential think time with mean 1 sec.
  • Ideal latency: the latency of a flow that does slow start until it reaches its fair share of the bandwidth and then continues with a constant window (used only as a baseline for comparison).
  • Phase 1: no losses. The latency of paced TCP is slightly higher due to pacing.
  • Phase 2: with S = 0.25·B·RTT, TCP Reno experiences more losses in slow start and some flows time out; with S = B·D this effect disappears.
  • Phase 3: the synchronization effect of paced TCP is visible.
  • Phase 4: the synchronization effect disappears because flows are so large that new flows start infrequently.

10
  • E) Interaction of Paced and non-paced flows
  • A paced flow is very likely to experience loss
    as a result of one of its packets landing in a
    burst from a Reno flow.
  • Reno flows are less likely to be affected by
    bursts from other flows.
  • Result: TCP Reno flows have much better latency than paced flows when both are competing for bandwidth in a mixed-flow environment.
  • If new flows are continuously instantiated, the performance of paced TCP deteriorates even more: new flows in slow start cause the old paced flows to regularly drop packets, further diminishing the benefit of pacing.

11
Conclusion
  • Pacing improves fairness and reduces drop rates.
  • Pacing offers better performance with limited buffering.
  • In other cases, pacing leads to performance degradation because:
  • 1. Pacing delays the congestion signals to a point where the network is already over-subscribed.
  • 2. Due to the mixing of traffic, pacing synchronizes drops.