Title: MULTIMEDIA TRAFFIC MANAGEMENT ON TCP/IP OVER ATM-UBR
1. MULTIMEDIA TRAFFIC MANAGEMENT ON TCP/IP OVER ATM-UBR
2. OVERVIEW
- Introduction
- Problem Definition.
- Previous related work.
- Unique experimental design.
- Analysis of different TCP implementations, showing that these implementations do not utilize the available bandwidth efficiently.
- Based on our analysis, we propose a Dynamic Granularity Control algorithm for TCP.
- Conclusions.
3. INTRODUCTION
- Management of multimedia communications requires:
- Efficient resource management.
- Maximum utilization of the allocated bandwidth.
- Provision of QoS parameters.
4. Tools Selected for Multimedia Communications
Among the available tools for multimedia communications, ATM networks and the TCP/IP protocol were selected.
5. ATM (Asynchronous Transfer Mode)
- High-speed network technology.
- Features:
- Multi-service traffic categories: CBR (Constant Bit Rate), UBR (Unspecified Bit Rate).
- Promising traffic with Quality of Service (QoS).
- Academic network, easy to use.
6. TCP/IP
- The most widely used protocol on the Internet.
- It has a lot of research potential to meet network communication requirements.
- Source code and supporting material are easily available.
7. PROBLEM DEFINITION
[Diagrams: an ATM switch with a buffer size of 3K or 4K cells versus one with a buffer size of 1K or 2K cells.]
8. PROBLEM DEFINITION
- Multimedia communications suffer from three major problems:
- ATM switch buffer overflow.
- Loss of Protocol Data Units by the protocol being used.
- Fairness among multiple TCP connections.
9. My Research Problem
- This research deals with the above-mentioned problems, aiming at:
- Avoiding congestion in the ATM network.
- Efficient utilization of the allocated bandwidth.
- Fairness among multiple TCP connections.
10. Transmission Control Protocol
- Different implementations of TCP:
  - TCP Tahoe
  - TCP Reno
  - TCP NewReno
  - TCP SACK
- Congestion control algorithms of TCP:
  - Slow-Start
  - Congestion Avoidance
  - Fast Retransmit
  - Fast Recovery
11. Previous Related Work
- TCP/IP over ATM
- Jacobson (1988)
- TCP Tahoe added the Slow-Start, Congestion Avoidance, and Fast Retransmit algorithms to avoid loss of data.
- Jacobson (1990)
- TCP Reno modified the Fast Retransmit algorithm of TCP Tahoe and added the Fast Recovery algorithm.
- Gunningberg (1994)
- The large MTU size of ATM causes throughput deadlock.
12. Previous Related Work
- TCP/IP over ATM
- Romanow (1995)
- When cells of a large packet are lost at the ATM level, TCP throughput is heavily affected.
- This gave rise to cell discard strategies such as PPD (Partial Packet Discard) and EPD (Early Packet Discard); a sketch of this style of discard follows this list.
- The larger the MTU size, the smaller the TCP throughput.
- Hoe (1996)
- The slow-start algorithm ends up pumping too much data into the network.
- The Fast Retransmit algorithm may recover only one of the packet losses.
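PPD and EPD drop cells at packet rather than cell granularity. The C sketch below illustrates that idea under stated assumptions: the buffer size, EPD threshold, and per-VC flag are hypothetical values chosen only for illustration, not the switch's actual discard logic.

```c
#include <stdbool.h>
#include <stdio.h>

#define BUFFER_CELLS  1024   /* hypothetical UBR buffer size, in cells */
#define EPD_THRESHOLD  800   /* refuse whole new packets above this fill */

struct vc_state {
    bool discarding;         /* PPD: rest of the current packet is dropped */
};

static int buffer_fill;      /* cells currently queued at the output port */

/* Decide whether one arriving cell is queued or dropped.
 * `sop` marks the first cell of an AAL5 frame, `eom` the last one. */
static bool accept_cell(struct vc_state *vc, bool sop, bool eom)
{
    if (sop)
        /* EPD: once past the threshold, drop the new packet entirely */
        vc->discarding = (buffer_fill >= EPD_THRESHOLD);

    if (vc->discarding) {
        if (eom) {
            vc->discarding = false;           /* packet boundary reached */
            if (buffer_fill < BUFFER_CELLS) {
                buffer_fill++;                /* keep EOM so reassembly resyncs */
                return true;
            }
        }
        return false;
    }

    if (buffer_fill >= BUFFER_CELLS) {
        /* forced mid-packet loss: PPD discards the remaining cells too */
        vc->discarding = true;
        return false;
    }
    buffer_fill++;
    return true;
}

int main(void)
{
    struct vc_state vc = { .discarding = false };
    buffer_fill = 700;                         /* moderately loaded buffer */
    for (int pkt = 0; pkt < 2; pkt++) {
        int kept = 0;
        for (int cell = 0; cell < 192; cell++) /* ~9180-byte MTU in cells */
            kept += accept_cell(&vc, cell == 0, cell == 191);
        printf("packet %d: %d of 192 cells queued (fill %d)\n",
               pkt, kept, buffer_fill);
    }
    return 0;
}
```

With these illustrative numbers the first packet is queued whole, while the second arrives above the threshold and is dropped in its entirety, which is exactly why EPD preserves TCP throughput better than losing scattered cells from many packets.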
13. Previous Related Work
- TCP/IP over ATM
- Floyd (1996)
- The TCP Reno implementation was modified to recover multiple segment losses. The implementation is named TCP NewReno.
- Mathis (1996)
- The Fast Retransmit and Fast Recovery algorithms were modified using Selective Acknowledgment options. The new TCP version is known as TCP SACK; the sketch after this list shows the kind of information a SACK block carries.
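For readers unfamiliar with the option, the small sketch below shows the information a SACK block carries per RFC 2018 (left edge and right edge of a received range). The sequence numbers are invented for illustration; this is not code from this work.

```c
#include <stdint.h>
#include <stdio.h>

/* One SACK block from the TCP option (RFC 2018). */
struct sack_block {
    uint32_t left_edge;    /* first sequence number of the received block */
    uint32_t right_edge;   /* sequence number just past the block */
};

int main(void)
{
    /* Receiver holds bytes 3000-3999 and 5000-5999 but is still missing
     * 2000-2999 and 4000-4999 (illustrative numbers only). */
    struct sack_block blocks[] = { { 3000, 4000 }, { 5000, 6000 } };
    uint32_t cumulative_ack = 2000;   /* ACK is stuck at the first hole */

    for (unsigned i = 0; i < 2; i++)
        printf("SACK block %u: [%u, %u)\n",
               i, blocks[i].left_edge, blocks[i].right_edge);
    /* With this information the sender retransmits only the real gaps. */
    printf("retransmit ranges: [%u, %u) and [%u, %u)\n",
           cumulative_ack, blocks[0].left_edge,
           blocks[0].right_edge, blocks[1].left_edge);
    return 0;
}
```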
14. Problems of TCP/IP over ATM
- Summary of related research work:
- Segment losses badly affect the throughput of TCP over congested ATM networks.
- The Fast Retransmit and Fast Recovery algorithms of Reno TCP are unable to recover multiple segment losses in the same window of data.
- NewReno TCP and Linux TCP algorithms are supposed to recover these segment losses, but...
15. Previous Research on TCP/IP over ATM Is Related To
- Multiple UBR streams from different sources contending at the same output port of the ATM switch.
- A major part of the related research is based on simulation studies.
16. Unique Experimental Design
- The ATM network is congested due to:
- A CBR flow, which has absolute precedence, and the TCP flow on UBR sharing the same output port in the ATM switch.
- The cell buffer size in the ATM switch for UBR meets the minimum requirement.
17. Unique Experimental Design
[Testbed diagram: FreeBSD 3.2-R hosts (A, B, C) connected through a Fujitsu EA1550 ATM switch; CBR streams and TCP traffic over UBR share the same output port; measurements taken with Netperf and tcpdump. TCP throughput is analysed against four parameters: switch buffer size, MTU size, socket buffer size, and CBR pressure.]
18. My Research Contribution
- Throughput measurement and analysis of TCP over congested ATM under a variety of network parameters.
- Throughput evaluation and analysis of several TCP implementations.
- A new congestion control scheme for TCP, proposed to avoid congestion in the ATM network and to improve the throughput of TCP.
19. Performance Analysis of Linux TCP
20. Congestion Control Algorithms of Linux TCP
- Slow-Start algorithm
- Congestion Avoidance algorithm
- Fast Retransmit algorithm
- Fast Recovery algorithm
21. Slow-Start Algorithm
[Diagram: sender, ATM switch, and receiver; the 1st segment is acknowledged, the 2nd segment follows, and the window grows with each acknowledgment (ACK).]
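A toy C sketch of the window growth the diagram depicts: each ACK opens the window by one segment, so the window roughly doubles every round trip until it reaches ssthresh. The initial window of one segment and the ssthresh of 64 segments are assumptions chosen only for illustration.

```c
#include <stdio.h>

int main(void)
{
    unsigned cwnd = 1;             /* congestion window, in segments */
    const unsigned ssthresh = 64;  /* illustrative slow-start threshold */

    for (int rtt = 0; cwnd < ssthresh; rtt++) {
        printf("RTT %2d: cwnd = %2u segments\n", rtt, cwnd);
        /* every ACK adds one segment, so one round trip doubles the window */
        cwnd += cwnd;
    }
    printf("cwnd reached ssthresh (%u); congestion avoidance takes over\n",
           ssthresh);
    return 0;
}
```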
22. Congestion Avoidance Algorithm
[Diagram: sender, ATM switch, and receiver; the window grows by one segment per RTT, the 2nd segment being sent after one RTT and acknowledged (ACK).]
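A matching toy sketch of the linear growth shown above: past ssthresh, each ACK adds roughly MSS*MSS/cwnd bytes, which works out to about one extra segment per RTT. The starting window and segment size are illustrative assumptions, not values from the experiments.

```c
#include <stdio.h>

int main(void)
{
    const unsigned mss = 1460;        /* illustrative segment size, bytes */
    unsigned cwnd = 64 * mss;         /* window just past ssthresh, bytes */

    for (int rtt = 1; rtt <= 5; rtt++) {
        unsigned acks = cwnd / mss;   /* one ACK per segment in flight */
        for (unsigned i = 0; i < acks; i++)
            cwnd += mss * mss / cwnd; /* adds up to ~one MSS per round trip */
        printf("after RTT %d: cwnd = %u bytes (~%u segments)\n",
               rtt, cwnd, cwnd / mss);
    }
    return 0;
}
```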
23. Fast Retransmit and Fast Recovery Algorithms
[Diagram: sender, ATM switch, and receiver; a lost segment causes the receiver to return duplicate ACKs, which trigger fast retransmit while the window otherwise keeps growing by roughly one segment per RTT.]
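A simplified C sketch of the Reno-style reaction the diagram illustrates: the third duplicate ACK is where the missing segment would be retransmitted, ssthresh is set to half the window, further duplicates inflate the window, and the next new ACK deflates it back to ssthresh. The state handling and figures are illustrative only.

```c
#include <stdio.h>

struct tcp_cc {
    unsigned cwnd, ssthresh;   /* in segments */
    unsigned dupacks;
};

static void on_dup_ack(struct tcp_cc *cc)
{
    if (++cc->dupacks == 3) {
        /* fast retransmit point: the lost segment would be resent here,
         * without waiting for the retransmission timer */
        cc->ssthresh = cc->cwnd / 2;
        cc->cwnd = cc->ssthresh + 3;   /* fast recovery: inflate the window */
        printf("fast retransmit: ssthresh=%u cwnd=%u\n",
               cc->ssthresh, cc->cwnd);
    } else if (cc->dupacks > 3) {
        cc->cwnd++;                    /* each extra dup ACK means another
                                        * segment has left the network */
    }
}

static void on_new_ack(struct tcp_cc *cc)
{
    if (cc->dupacks >= 3)
        cc->cwnd = cc->ssthresh;       /* deflate; resume congestion avoidance */
    cc->dupacks = 0;
}

int main(void)
{
    struct tcp_cc cc = { .cwnd = 32, .ssthresh = 64, .dupacks = 0 };
    for (int i = 0; i < 5; i++)
        on_dup_ack(&cc);               /* a burst of duplicate ACKs */
    on_new_ack(&cc);
    printf("after recovery: cwnd=%u segments\n", cc.cwnd);
    return 0;
}
```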
24. Throughput Results
[Plot: effective throughput (Mbps, 0-100) versus UBR switch buffer size (Kbytes, 0-200), with a 100 Mbps CBR stream and a 64 Kbyte socket buffer; curves for Reno TCP with a 9180-byte MTU and Linux TCP with 9180-, 1500-, and 512-byte MTUs.]
25. Throughput Results
[Plot: TCP throughput over UBR (Mbps, 0-140) versus CBR stream pressure (Mbps, 0-140), with a 53 Kbyte switch buffer, 64 Kbyte socket buffer, and 9180-byte MTU; curves for Reno TCP and Linux TCP.]
26. Segments Acknowledged (No. of Bytes)
[Plot: packets acknowledged (Kbytes, 0-10000) versus time (sec, 0-10), with a 100 Mbps CBR stream, 9180-byte MTU, and 53 Kbyte buffer; Linux TCP achieves an effective throughput of 7.06 Mbps.]
27. Analysis of Linux TCP
- TCP throughput is less than 20% of the available bandwidth, varying between 14% and 16%.
- Retransmission timeouts are fewer than with Reno TCP.
- Linux TCP will perform poorly in connection-sensitive applications due to the expiry of its retransmission timer.
- The retransmission timer expires because of the time spent deciding what to send.
- Retransmission timeouts and FRR processes consumed more than 50% of the total time.
- If the MTU size is large, congestion occurs sooner.
28. Proposed Dynamic Granularity Control (DGC) Algorithm for TCP
- A more conservative version of Jacobson's congestion avoidance scheme is applied by reducing the MSS (see the sketch after this list).
- Step 1: Congestion Avoidance
- Decrease the MSS to 1460 bytes if the MSS > 1460 bytes.
- If the MSS = 1460 bytes, decrease the MSS to 512 bytes.
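A minimal sketch of Step 1 as stated on this slide. The function name and the way the congestion-avoidance event is detected are hypothetical; only the MSS steps (above 1460 bytes down to 1460, and 1460 down to 512) come from the slide.

```c
#include <stdio.h>

/* DGC Step 1: on entering congestion avoidance, shrink the sender's MSS
 * one granularity step instead of relying on window reduction alone. */
static unsigned dgc_step1_mss(unsigned mss)
{
    if (mss > 1460)
        return 1460;   /* e.g. the 9180-byte ATM MTU drops to an Ethernet-size MSS */
    if (mss == 1460)
        return 512;    /* finest granularity used in this work */
    return mss;        /* already at the smallest step */
}

int main(void)
{
    unsigned mss = 9140;  /* MSS for a 9180-byte MTU (40 bytes of TCP/IP headers) */
    printf("congestion avoidance entered: MSS %u -> %u\n",
           mss, dgc_step1_mss(mss));
    printf("next congestion event:        MSS 1460 -> %u\n",
           dgc_step1_mss(1460));
    return 0;
}
```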
29. Proposed DGC Algorithm for TCP
- The fast retransmit machine (FRM) consists of the following stages:
- Fast retransmission.
- Fast recovery.
- TCP re-ordering.
- Segment loss.
- Step 2: FRM
- Reduce the MSS to 512 bytes under FRM events, as sketched below.
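A matching sketch of Step 2, assuming a hypothetical event enumeration for the four FRM stages listed above. The only behaviour taken from the slide is that any FRM event cuts the MSS straight to 512 bytes.

```c
#include <stdio.h>

/* The four FRM stages named on this slide (illustrative enum). */
enum frm_event { FRM_FAST_RETRANSMIT, FRM_FAST_RECOVERY,
                 FRM_REORDERING, FRM_SEGMENT_LOSS };

/* DGC Step 2: every FRM event gets the same response, the finest
 * segment granularity of 512 bytes. */
static unsigned dgc_step2_mss(enum frm_event ev, unsigned mss)
{
    (void)ev;
    return mss > 512 ? 512 : mss;
}

int main(void)
{
    unsigned mss = 9140;   /* MSS before the loss event */
    mss = dgc_step2_mss(FRM_FAST_RETRANSMIT, mss);
    printf("after an FRM event the sender transmits %u-byte segments\n", mss);
    return 0;
}
```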
30. Implementation of the DGC Algorithm
- DGC is implemented on Linux kernel 2.4.0-test10 with the ATM-0.78 distribution.
- Sender-side implementation.
31. Results and Discussions
[Plot: effective throughput (Mbps, 0-120) versus switch buffer size (Kbytes, 0-200), with a 100 Mbps CBR stream and a 64 Kbyte window; curves for DGC TCP (9180-byte MTU), Linux TCP (9180- and 512-byte MTUs), and the available bandwidth.]
32. Results and Discussions
[Plot: throughput (Mbps, 0-160) versus CBR pressure (Mbps, 0-160), with a 53 Kbyte switch buffer and a 64 Kbyte window; curves for DGC TCP (9180-byte MTU), Linux TCP (9180- and 512-byte MTUs), and the available bandwidth.]
33. Segments Acked (No. of Bytes)
[Plot: number of segments acked (Kbytes, 0-60000) versus time (sec, 0-10), with a 53 Kbyte switch buffer and 9180-byte MTU; DGC TCP achieves an effective throughput of 41.71 Mbps versus 7.06 Mbps for Linux TCP.]
34. Two UBR Streams without Any External CBR Pressure
[Plot: effective throughput (Mbps, 0-100) versus switch buffer size (Kbytes, 0-200); curves for UBR Stream 1, UBR Stream 2, and a single UBR stream.]
35. Multiple Streams of Linux TCP under CBR Pressure
[Plot: effective throughput (Mbps, 0-100) versus switch buffer size (Kbytes, 0-200) under a 100 Mbps CBR stream; curves for Linux TCP UBR flows 1 and 2 and the maximum available bandwidth.]
36. Linux TCP and DGC TCP Streams
[Plot: effective throughput (Mbps, 0-100) versus switch buffer size (Kbytes, 0-200) under a 100 Mbps CBR stream; curves for DGC TCP UBR flow 1, Linux TCP UBR flow 2, and the maximum available bandwidth.]
37. DGC and Linux TCP under CBR Pressure
[Plot: throughput (Mbps, 0-140) versus CBR pressure (Mbps, 0-140); curves for DGC TCP UBR flow 1, Linux TCP UBR flow 2, and their total additive throughput.]
38. CONCLUSIONS
- The proposed TCP DGC algorithm used more than 98% of the available bandwidth.
- No retransmission timeout occurs, and hence the synchronization effect is minimized.
- Fairness is considerably better than with the other available flavors of TCP.
39. Final Concluding Remarks
- Analysis of TCP Reno:
- The slow-start algorithm pumps too much data into the network.
- If the MTU size is large, throughput is better.
- TCP throughput is less than 2% of the available bandwidth during heavy congestion in the network.
- Retransmission timeouts occur too frequently, producing TCP throughput deadlock.
- The Fast Retransmit and Fast Recovery algorithms are unable to recover multiple segment losses.
40. Final Concluding Remarks
- Analysis of Linux TCP:
- The throughput of Linux TCP is improved compared to TCP Reno, but is still less than 20% of the available bandwidth.
- If the MTU size is large, the network reaches a congested state sooner.
- More than 50% of the total time is consumed recovering a segment loss.
- Retransmission timeouts still occur; therefore Linux TCP will perform poorly in connection-sensitive applications.
41. Final Concluding Remarks
- Analysis of the proposed TCP DGC algorithm:
- Almost all of the available bandwidth is utilized.
- The idea is equally applicable to other communication protocols facing congestion problems in the network.
- DGC TCP may not be useful over the Internet in certain cases.
42. Future Directions
- HighSpeed TCP
- http://www.icir.org/floyd/hstcp.html
- FAST TCP
- http://netlab.caltech.edu/FAST/
- TCP Performance Tuning Page
- http://www.psc.edu/networking/projects/
43. Future Directions
- Performance analysis of multiple TCP connections, fairness, and buffer requirements over Gigabit networks.
- Multi-homing
- Multi-streaming
- SCTP (Stream Control Transmission Protocol)
- Performance analysis of TCP flavours over wireless ad-hoc networks.
44. Thank you very much.