1
FAST TCP: design, implementation, and experiments
  • Cheng Jin

2
Acknowledgments
  • Caltech: Bunn, Choe, Doyle, Newman, Ravot, Singh, J. Wang
  • UCLA: Paganini, Z. Wang
  • CERN: Martin
  • SLAC: Cottrell
  • Internet2: Almes, Shalunov
  • Cisco: Aiken, Doraiswami, Yip
  • Level(3): Fernes
  • LANL: Wu

3
Outline
  • The problem
  • Our solution
  • Our prelim experiments

4
Difficulties at large window
  • Equilibrium problem
    • Packet level: AI too slow, MD too drastic.
    • Flow level: requires very small loss probability.
  • Dynamic problem
    • Packet level: must oscillate on a binary signal.
    • Flow level: unstable at large window.

5
Performance at large windows
(Figures: capacity 1Gbps, 180 ms RTT, 1 flow; C. Jin, D. Wei, S. Ravot, et al. (Caltech, Nov 02). Capacity 155Mbps, 622Mbps, 2.5Gbps, 5Gbps, and 10Gbps, 100 ms RTT, 100 flows; J. Wang (Caltech, June 02).)
6
Difficulties at large window
  • Equilibrium problem
    • Packet level: AI too slow, MD too drastic.
    • Flow level: requires very small loss probability.
  • Dynamic problem
    • Packet level: must oscillate on a binary signal.
    • Flow level: unstable at large window.

7
Problem: binary signal
(Figure: TCP window trace showing oscillation)
8
Solution: multibit signal
(Figure: FAST window trace, stabilized)
9
Problem: no target
  • Reno: AIMD(1, 0.5)

ACK: W ← W + 1/W      Loss: W ← W − 0.5W
  • HSTCP: AIMD(a(w), b(w))

ACK: W ← W + a(w)/W   Loss: W ← W − b(w)W
  • STCP: MIMD(1/100, 1/8)

ACK: W ← W + 0.01     Loss: W ← W − 0.125W
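The Reno and STCP update rules above can be sketched directly in code (HSTCP is omitted because its a(w) and b(w) come from a lookup table). A minimal illustration, not the authors' implementation:

```python
def reno_ack(w):
    # Reno additive increase: +1/W per ACK, roughly +1 packet per RTT
    return w + 1.0 / w

def reno_loss(w):
    # Reno multiplicative decrease: halve the window on loss
    return w - 0.5 * w

def stcp_ack(w):
    # Scalable TCP multiplicative increase: fixed +0.01 per ACK
    return w + 0.01

def stcp_loss(w):
    # Scalable TCP multiplicative decrease: drop 1/8 of the window
    return w - 0.125 * w
```

At W = 10000, Reno's per-ACK gain of 1/W = 0.0001 shows why AI is "too slow" at large windows, while a single loss still halving the window is "too drastic".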
10
Solution: estimate target
11
Outline
  • The problem
  • Our solution
  • Our prelim experiments

12
Optimization Model
  • Network bandwidth allocation as utility
    maximization
  • Optimization problem: maximize Σi Ui(xi) subject to Rx ≤ c
  • Primal-dual components
    • x(t+1) = F(q(t), x(t))   (Source)
    • p(t+1) = G(y(t), p(t))   (Link)
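The primal-dual iteration can be illustrated for one source on one link. Here F maximizes a log utility (giving the demand curve x = 1/q) and G is a gradient price update; the utility choice, capacity c, and step size gamma are illustrative assumptions, not from the slides:

```python
def F(q, x):
    # Source update: utility-maximizing rate for U(x) = log(x), i.e. x = 1/q
    return 1.0 / max(q, 1e-6)

def G(y, p, c=1.0, gamma=0.1):
    # Link update: raise the price when aggregate rate y exceeds capacity c
    return max(p + gamma * (y - c), 0.0)

p, x = 0.5, 0.5
for _ in range(2000):
    x = F(p, x)   # x(t+1) = F(q(t), x(t)); here q = p (single link)
    p = G(x, p)   # p(t+1) = G(y(t), p(t)); here y = x (single source)
# the source rate converges toward the link capacity c = 1.0
```

The fixed point is exactly the utility-maximizing allocation: price rises until demand matches capacity.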

13
Use of Queueing Delay in FAST
  • Each FAST TCP flow has a target number of packets
    to maintain in network buffers in equilibrium
  • Queueing delay allows FAST to estimate the number
    of packets currently buffered and estimate its
    distance from the target
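The buffered-packet estimate follows from Little's law: with sending rate cwnd/RTT and per-packet queueing wait RTT − baseRTT, the flow's backlog in the queues is cwnd · (RTT − baseRTT)/RTT. A sketch with invented numbers:

```python
def packets_in_buffer(cwnd, rtt, base_rtt):
    # Little's law: backlog = arrival rate (cwnd / rtt)
    #             * waiting time (rtt - base_rtt)
    queueing_delay = rtt - base_rtt
    return cwnd * queueing_delay / rtt

# e.g. cwnd = 1000 packets, RTT = 125 ms, baseRTT = 100 ms
# -> about 200 of the 1000 packets are sitting in router buffers
```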

14
FAST TCP
15
Window Control Algorithm
  • RTT: exponential moving average with weight
    min{1/8, 3/cwnd}
  • baseRTT: propagation latency, estimated as the minimum observed RTT
  • α determines fairness and convergence rate
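One published form of the FAST window update (in the Infocom 2004 paper this deck points to) drives the window toward baseRTT/RTT · w + α; a sketch, with γ = 0.5 as an assumed smoothing parameter:

```python
def fast_window(w, base_rtt, avg_rtt, alpha, gamma=0.5):
    # w <- min{ 2w, (1 - gamma) w + gamma (base_rtt/avg_rtt * w + alpha) }
    # In equilibrium base_rtt/avg_rtt * w + alpha == w, i.e. the flow
    # keeps alpha packets queued in the network.
    target = (base_rtt / avg_rtt) * w + alpha
    return min(2.0 * w, (1.0 - gamma) * w + gamma * target)

def smoothed_rtt(avg_rtt, sample, cwnd):
    # Exponential moving average with weight min{1/8, 3/cwnd}, per this slide
    k = min(1.0 / 8.0, 3.0 / cwnd)
    return (1.0 - k) * avg_rtt + k * sample
```

Unlike the binary loss signal, this update uses the multibit queueing-delay signal, so the step size shrinks as the window approaches its target.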

16
Packet Level
Reno   ACK: W ← W + 1/W      Loss: W ← W − 0.5W
HSTCP  ACK: W ← W + a(w)/W   Loss: W ← W − b(w)W
STCP   ACK: W ← W + 0.01     Loss: W ← W − 0.125W
17
Architecture
  • Each component
  • designed independently
  • upgraded asynchronously

Data Control
Window Control
Burstiness Control
Estimation
TCP Protocol Processing
18
Outline
  • The problem
  • Our solution
  • Our prelim experiments

19
Experiments
  • In-house dummynet testbed
  • PlanetLab Internet experiments
  • Internet2 backbone experiments
  • ns-2 simulations

20
What Have We Learnt?
  • FAST performs well under normal network
    conditions
  • Well-known scenarios exist where FAST doesn't
    perform well
  • Network behavior is important
  • Dynamic scenarios are important
  • Host implementation (Linux) also important

21
Linux Related Issues
  • Complicated state transitions
  • Linux TCP kernel documentation
  • Netdev implementation and NAPI
  • frequent delays between dev and TCP layers
  • Linux loss recovery
  • too many acks during fast recovery
  • high CPU overhead per SACK
  • very long recovery times
  • Scalable TCP and H-TCP offer enhancements

22
Dummynet Setup
  • Single bottleneck link, multiple path latencies
  • Iperf for memory-to-memory transfers
  • Intra-protocol testing
  • Dynamic network scenarios
  • Instrumentation on the sender and the router

23
Dynamic sharing: 3 flows
(Figure: throughput traces under dynamic sharing for FAST, Linux, HSTCP, and STCP; steady throughput)
24
Aggregate Throughput
(Figure: throughput CDF, near ideal at large windows)
  • Dummynet: capacity 800Mbps, delay 50-200ms, 1-14 flows, 29 experiments

25
Stability
(Figure: stable under diverse scenarios)
  • Dummynet: capacity 800Mbps, delay 50-200ms, 1-14 flows, 29 experiments

26
Fairness
(Figure: Reno and HSTCP have similar fairness)
  • Dummynet: capacity 800Mbps, delay 50-200ms, 1-14 flows, 29 experiments

27
(Figure: queue, loss, and throughput traces for FAST, Linux, HSTCP, and STCP)
28
FAST TCP vs. Buffer Size (Sanjay Hegde, David Wei)
29
Backward Queueing Delay I (Bartek Wydrowski)
  • Uses the timestamp option on both sender and receiver
  • Precision limited by the sender clock
  • Does not require clock synchronization,
    same-resolution clocks, or receiver modification

30
Backward Queueing Delay II (Bartek Wydrowski)
31
PlanetLab Internet Experiment (Jayaraman, Wydrowski)
Throughput vs. loss and delay: qualitatively similar results
FAST saw higher loss due to its large alpha value
32
Known Issues
  • Network latency estimation
    • route changes, dynamic sharing
    • does not upset stability
  • Small network buffer
    • behaves at least like TCP Reno
    • adapt α on a slow timescale, but how?
  • TCP-friendliness
    • friendly at least at small window
    • how to dynamically tune friendliness?
  • Reverse path congestion

33
http://netlab.caltech.edu/FAST
  • FAST TCP: motivation, architecture,
    algorithms, performance.
  • IEEE Infocom 2004
  • Code reorganization, ready for integration with
    web100.
  • β-release: summer 2004
  • Inquiries: fast-support@cs.caltech.edu

34
  • The End

35
Implementation Strategy

36
Brief History of FAST TCP
  • Congestion control as an optimization problem
  • Primal-dual framework to study TCP congestion
    control
  • Modeling existing TCP implementations
  • Theoretical analysis on FAST TCP
  • FAST TCP Implementation

37
TCP/AQM
(Diagram: link prices p_l(t) fed back to sources; source rates x_i(t) offered to links)
  • Congestion control has two components
  • TCP adjusts rate according to congestion
  • AQM feeds back congestion based on utilization
  • Distributed feedback system
  • equilibrium and stability properties determine
    system performance

38
Network Model
(Diagram: source rates x into the routing matrix, aggregate link rates y)
  • Components: TCP and AQM algorithms, and routing
    matrices
  • Each TCP source sees an aggregate price, q
  • Each link sees an aggregate incoming rate, y
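The aggregation on this slide is matrix multiplication with the routing matrix R, where R[l][i] = 1 if source i traverses link l: y = Rx and q = Rᵀp. A toy two-link, two-source sketch (topology and numbers invented for illustration):

```python
# R[l][i] = 1 if source i traverses link l (hypothetical 2-link, 2-source net)
R = [[1, 0],
     [1, 1]]
x = [3.0, 2.0]   # source rates
p = [0.1, 0.2]   # link prices (e.g. queueing delays)

# Each link sees the aggregate incoming rate: y = R x
y = [sum(R[l][i] * x[i] for i in range(len(x))) for l in range(len(R))]
# Each source sees the aggregate price along its path: q = R^T p
q = [sum(R[l][i] * p[l] for l in range(len(R))) for i in range(len(x))]
# y == [3.0, 5.0]; q[0] sums both link prices, q[1] only link 1's
```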

39
FAST and Other DCAs
  • FAST is one implementation within the more
    general primal-dual framework
  • Queueing delay is used as an explicit feedback
    from the network
  • FAST does not use queueing delay to predict or
    avoid packet losses
  • FAST may use other forms of price in the future
    when they become available