1. Outline
- Motivation
- Simulation Study
- Scheduled OFS
- Experimental Results
- Discussion
2. Optical Flow Switching Motivation
- OFS reduces the amount of electronic processing by switching long sessions at the WDM layer
- Lower costs, reduced delays, increased switch capacity
- Provide specific QoS for advanced services
3. OFS Motivation (cont.)
[Figure: number of flows and total bytes vs. flow size (1 KB to 100 MB), each plot split into an electronic domain and an optical domain]
- The Internet displays a heavy-tailed distribution of connection sizes
- More efficient optics → more transactions in the optical domain (the red line moves left)
4. Optical Flow Switching Study
- Short-duration optical connections
- Access area
- Wide area
- Network architecture issues
- Connection setup
- Route/wavelength assignment
- Goal: efficient use of network resources, i.e. high throughput
- Previous work: probabilistic approaches
- Difficulty: high arrival rate leads to high blocking probability
- Problem: lack of timely network state information
- Our proposed solution: use of timing information in the network
- Schedule connections
- Gather timely network state information
- This demonstration
- Demonstrate flow switching
- Demonstrate viability of timing and scheduling connections
- Investigate key sources of overhead
- High efficiency
5. Connection Setup Investigation
- Key issue
- How to learn optical resource availability?
- Distribution problem
- Wavelength continuity problem makes it worse
- Previous work
- Addresses issues one at a time
- Assumes perfect network state information
- Will these results be useful for ONRAMP, WAN implementation?
- This work
- Assesses effects of distributed network state information
- Models some current proposals
- MP-lambda-S
- ASON
6. Methodology
- Design distributed approaches
- Combined routing, wavelength assignment
- Connection setup
- Baseline flow switching architecture
- Requested flows from user to user
- Durations on order of seconds
- All-optical
- Simulate approaches on WAN topology
- End-to-end latency (time of flight only)
- Approaches: Ideal, Tell-and-Go, Reverse Reservation
- Assess performance versus idealized approach
- Blocking probability
7. Ideal Approach Illustration
Assume a flow is requested from A → B
[Figure: four-node network (A, B, C, D) with λ-changers at each node and bidirectional multi-fiber links; a "tell" control packet precedes the optical flow. Network infrastructure; LLR routing, connection setup.]
8. Tell-and-Go Approach Illustration
Assume a flow is requested from A → B
[Figure: nodes A, B, C, D advertise available wavelengths (e.g. λ 1,2,3) via a link-state protocol; a single-wavelength "tell" packet precedes the optical flow. Connection setup.]
9. Reverse Reservation Approach Illustration
Assume a flow is requested from A → B
[Figure: route discovery with information packets from A toward B, then a reservation packet sent back along the route chosen by B; route and wavelength reserved on the reverse path. Sketched below.]
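The reverse-reservation exchange lends itself to a compact sketch: information packets record which wavelengths are free on each link toward B, B intersects the per-link sets along each candidate route (wavelength continuity), picks a route and wavelength, and sends a reservation packet back. A minimal Python sketch under those assumptions; the link names, availability sets, and tie-breaking policy are illustrative only:

    # Hedged sketch of the reverse-reservation idea: information packets record
    # per-link wavelength availability on the way A -> B; B intersects the sets
    # per candidate route (wavelength continuity), picks one, and reserves it on
    # the way back. Link names and availability below are illustrative only.

    def usable_wavelengths(route, free_on_link):
        """Wavelengths free on every link of the route (continuity constraint)."""
        sets = [free_on_link[link] for link in route]
        result = sets[0].copy()
        for s in sets[1:]:
            result &= s
        return result

    def choose_route_and_wavelength(candidate_routes, free_on_link):
        """Destination-side choice: first candidate route with a usable wavelength."""
        for route in candidate_routes:
            usable = usable_wavelengths(route, free_on_link)
            if usable:
                return route, min(usable)   # any policy works; min() for determinism
        return None, None                   # blocked: no continuous wavelength

    # Example state gathered by information packets (hypothetical):
    free_on_link = {
        ("A", "D"): {2, 3, 4},
        ("D", "B"): {1, 2, 3},
        ("A", "C"): {1, 2},
        ("C", "B"): {3, 4},
    }
    routes = [[("A", "D"), ("D", "B")], [("A", "C"), ("C", "B")]]
    print(choose_route_and_wavelength(routes, free_on_link))
    # -> ([('A', 'D'), ('D', 'B')], 2): a reservation packet then locks λ2 on A-D-B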
10. Simulation Description
- Results shown as blocking probability vs. traffic intensity
- Uniform, Poisson flow traffic per node (see the sketch after this list)
- Fixed WAN topology
- Parameters
- F: number of fibers/link
- L: number of channels/link
- K: number of routes considered for routing decisions
- U: update interval (seconds)
- μ: average service rate for flows (flows/second)
- λ: average arrival rate of flows (flows/second)
- ρ: traffic intensity, equal to λ/μ (not the utilization factor)
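A minimal sketch of the traffic model behind these results (illustrative only, not the actual simulator): Poisson flow arrivals at rate λ, exponential holding times at rate μ, a single link with F fibers of L channels each, and a request blocked whenever no channel is free; ρ = λ/μ.

    import random

    # Illustrative single-link traffic model (the real study uses a WAN topology):
    # Poisson flow arrivals at rate lam, exponential holding times with rate mu,
    # F fibers x L channels per link, blocking when no channel is free.
    def blocking_probability(lam, mu, F=1, L=16, n_flows=200_000, seed=0):
        rng = random.Random(seed)
        channels = F * L
        departures = []          # departure times of flows currently in service
        t, blocked = 0.0, 0
        for _ in range(n_flows):
            t += rng.expovariate(lam)                      # next Poisson arrival
            departures = [d for d in departures if d > t]  # release finished flows
            if len(departures) >= channels:
                blocked += 1                               # no free channel -> blocked
            else:
                departures.append(t + rng.expovariate(mu))
        return blocked / n_flows

    mu = 1.0                     # 1 s flows -> service rate of 1 flow/second
    for rho in (8, 12, 16):      # traffic intensity rho = lam / mu
        print(rho, blocking_probability(lam=rho * mu, mu=mu))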
11. Simulation Topology
12. Latency-free Control Network Results (1 sec flows)
[Plot: blocking probability vs. traffic intensity for RR and TG; F=1, L=16, K=10]
13. Control Network with Latency Results (1 sec flows)
[Plot: blocking probability vs. traffic intensity for TG and RR; U=0.1, F=1, L=16, K=10]
14. Interesting Phenomenon
- Why is TG performance better than RR?
- 1 sec flows and large ρ → small inter-arrival times
- Smaller than the round-trip time
- Thus, with high probability, successive flows will see the same state (at least locally)
- Increases the chance of collision
- An effect of distribution (latency)
- Why is Rand better than FF? (see the toy sketch below)
- This is exactly the opposite of what analytical papers claim
- Combination of reasons
- Nodes have imperfect information
- FF makes them compete for the same wavelengths (false advertisement)
- Not seen in the analysis because distribution was ignored
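The FF-vs-Rand observation can be illustrated with a toy sketch, assuming two sources acting on the same stale advertisement of free wavelengths: first-fit steers both to the lowest advertised wavelength, so they always contend, while random choice separates them most of the time. The scenario and numbers are hypothetical:

    import random

    # Toy illustration (assumed scenario, not the simulation itself): two nodes
    # act on the SAME stale advertisement of free wavelengths. First-fit (FF)
    # sends both to the lowest index, so they always contend; random choice
    # contends only when the independent picks happen to match.
    def collision_rate(policy, trials=100_000, seed=1):
        rng = random.Random(seed)
        advertised_free = [2, 3, 4]          # stale view shared by both nodes
        collisions = 0
        for _ in range(trials):
            if policy == "FF":
                a = b = min(advertised_free)              # both pick λ2
            else:  # "Rand"
                a = rng.choice(advertised_free)
                b = rng.choice(advertised_free)
            collisions += (a == b)
        return collisions / trials

    print("FF  :", collision_rate("FF"))    # 1.0  -> always contend
    print("Rand:", collision_rate("Rand"))  # ~1/3 -> contend only on matching picks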
15. Scheduled OFS in ONRAMP
- Inaccurate information hurts performance
- In this case: simply the speed of light
- Biggest problem: core network resources are wasted
- Our proposal: use timing information to schedule flows
- Deliver network information in time to make decisions
- Exchange flow-based information
- Maximize utilization of the core network
- Possibly a small delay for the user
- Issues
- Can timing be implemented cheaply and scaled?
- Can schedules be implemented?
- Must make use of current/future optical devices
- Low cost
- ONRAMP OFS
- Demonstration of scheduled OFS in an access-area network
- One example of an implementation
16. Scheduling in ONRAMP
[Diagram: ONRAMP ring with access nodes 1 and 2 and intermediate nodes; each node contains an IP router, a control plane, and an OXC with an OXC scheduler; IP flows are carried over Gigabit Ethernet (GE) links through the OXCs.]
17. ONRAMP Connection Setup
- Uses timeslotting and schedules for lightpaths
- X → λi busy on the output of node i in the corresponding slot (a representational sketch follows)
[Table: OXC schedule marking which wavelengths are busy in each timeslot]
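A minimal sketch of how such an OXC schedule might be represented (the structure and names are assumptions, not the ONRAMP implementation): a table of busy (node output, wavelength, slot) entries, queried for the earliest slot with one wavelength free along the whole path.

    # Illustrative OXC schedule (assumed representation, not the ONRAMP code):
    # busy[(node, wavelength, slot)] == True means that wavelength is taken on
    # the output of that node in that timeslot ("X" in the table on the slide).
    busy = {("A", 2, 0): True, ("D", 2, 0): True, ("D", 3, 1): True}

    def is_free(path_nodes, wavelength, slot):
        """Wavelength free on the output of every node along the path in this slot."""
        return not any(busy.get((n, wavelength, slot), False) for n in path_nodes)

    def schedule_flow(path_nodes, wavelengths, max_slots=10):
        """Earliest slot and wavelength in which the lightpath can be set up."""
        for slot in range(max_slots):
            for lam in wavelengths:
                if is_free(path_nodes, lam, slot):
                    for n in path_nodes:                 # commit the reservation
                        busy[(n, lam, slot)] = True
                    return slot, lam
        return None                                      # no slot within the horizon

    print(schedule_flow(["A", "D"], wavelengths=[2, 3]))  # -> (0, 3): slot 0, λ3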
18. Algorithm Timeline
[Timeline: slots 1-3 with a scheduling-overhead window at the start of each slot; overhead depends on timing uncertainty. A request arriving inside the overhead window cannot go in the next timeslot; one arriving before it can.]
- Overhead includes all timing uncertainty
- Efficiency of any scheduled algorithm is related to timing uncertainty and switching/electronic overheads
- Rough efficiency = Flow duration / (Flow duration + Overhead) (worked check below)
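As a worked check of that rough-efficiency expression (the 0.10 s overhead figure comes from the measurements later in these slides):

    # Rough efficiency = flow duration / (flow duration + overhead).
    def efficiency(flow_duration_s, overhead_s):
        return flow_duration_s / (flow_duration_s + overhead_s)

    # One-second flows with the ~0.10 s overhead measured later in the slides:
    print(f"{efficiency(1.0, 0.10):.1%}")   # 90.9%, i.e. roughly the 90% quoted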
19. Utilizing Link Capacity
- Sending GigE over a transparent optical channel
- Clock rate 1.244 GHz
- Rate-8/10 coding results in a raw bit rate of 995.2 Mb/s
- Payload capacity for UDP
- Send MTU-sized packets
- 9000 bytes
- Avoid fragmentation
- Headers
- Ethernet (26 bytes) + IP (20 bytes) + UDP (8 bytes) = 54 bytes
- Result: 8946 bytes of payload/packet
- Link payload limit
- 989.2288 Mb/s
- Rate-limited UDP
- Input a desired rate
- Timed sends of UDP packets achieve the desired rates (see the sketch below)
- Demonstrates transparency of the OFS channel
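The payload-limit arithmetic and the timed-send idea can be sketched as follows; only the 995.2 Mb/s raw rate, the 9000-byte MTU, and the 54 header bytes come from the slide, while the socket details, addresses, and pacing loop are illustrative:

    import socket, time

    # Payload limit: rate-8/10 coded raw bit rate scaled by the payload fraction
    # of a 9000-byte MTU frame (54 bytes of Ethernet+IP+UDP headers per packet).
    RAW_RATE_MBPS = 995.2
    MTU, HEADERS = 9000, 26 + 20 + 8          # 54 header bytes -> 8946-byte payload
    PAYLOAD = MTU - HEADERS
    print(RAW_RATE_MBPS * PAYLOAD / MTU)      # 989.2288 Mb/s link payload limit

    def send_rate_limited_udp(dest, rate_mbps, duration_s=1.0):
        """Timed sends of MTU-sized UDP datagrams at a requested payload rate.
        Illustrative pacing loop, not the demo's actual sender."""
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        payload = bytes(PAYLOAD)
        interval = PAYLOAD * 8 / (rate_mbps * 1e6)      # seconds between packets
        t_next, t_end = time.monotonic(), time.monotonic() + duration_s
        while time.monotonic() < t_end:
            sock.sendto(payload, dest)
            t_next += interval
            while time.monotonic() < t_next:            # busy-wait to hold the rate
                pass

    # Example (hypothetical receiver address):
    # send_rate_limited_udp(("10.0.0.2", 5001), rate_mbps=500)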
20. Experimental Setup
- OFS implemented in lab
- One second timeslots
- Timing overhead negligible
- Routing/wavelength selection
- All available wavelengths (currently 14)
- Both directions around ring
- Gigabit Ethernet link layer
- Flows achieve the theoretical maximum link rate of 989 Mb/s
- Rate-limited UDP
- Unidirectional flows
- No packet loss (100s of flows)
- Variable rate
- Demonstrates transparent use of the optical connection
21. OFS Performance
22. Current Performance Limitations
23. Current Performance Limitations (cont.)
- Current overhead is 0.10 seconds
- Efficiency for one-second flows is therefore roughly 90%
- Analysis of the overhead reveals a possible contribution from Gigabit Ethernet frame sync
- Still under investigation
- Switching overhead and timing uncertainty are negligible
- I.e., scheduling is viable and efficient
[Timeline: algorithm overhead from flow request to flow start; contributions from scheduling, the switching command, receiver/laser turn-on, and possibly GbE sync, with marks at roughly 10 ms, 100 ms, and 150 ms.]