1
Unicast Routing Tradeoffs
  • Selma Yilmaz
  • Committee Prof. Ibrahim Matta
  • Prof. Azer Bestavros
  • Prof. John Byers

2
What is Routing?
  • Process of finding a path from a source to a
    destination
  • Requirements of routing
  • Find optimal route
  • Scalable
  • space complexity: topology, resource availability, routing table
  • time complexity: route calculations, processing update packets
  • communication complexity: volume of updates, keeping up soft states
  • No black holes, loops, or oscillations
  • Types of routing
  • Static, Dynamic
  • Inter-domain, Intra-domain
  • Hop-by-hop, Source
  • Link State, Distance Vector, Path Vector
  • Unicast, Multicast

3
Conventional Routing
  • Static link metrics, like hop count
  • Shortest path routing
  • Destination-based only
  • Stable: metrics do not change often, only in case of topology changes
  • Connectionless
  • No service guarantees
  • Plus
  • Scales very well
  • Minus
  • Does not use resources in an efficient way

4
The FISH Problem
  • Sub-path R2-R3-R4 may get over-utilized
  • Sub-path R2-R6-R7-R4 may stay under-utilized
  • Find ways to make better utilization of resources by making use of alternate paths
  • In 30-80% of the cases there is an alternate path with significantly superior quality (i.e., loss rate, bandwidth, RTT) SavageSig99

5
[Taxonomy figure: unicast routing classified by routing approach (distance vector/path vector vs. link state), forwarding (source vs. hop-by-hop), adaptiveness (static vs. dynamic), and domain (intra vs. inter). Categories and representative work: best effort (McQuillanIEEETC80, KhannaSig89, GriffinSig99); QoS routing: BW constrained (ShaikhSig99, KarInfo00, KodialamInfo01, SuriSV01), BW-delay constrained (YangICNP01), multiple additive (NeveCC00, MieghemCC01), delay constrained least cost (SalamaInfo97, GuoMattaICDCS99); traffic aware: using ingress-egress pair information (YangICNP01, SuriSV01, KarInfo00, KodialamInfo01), using traffic matrix (SuriSV01).]
6
Outline
  • Best Effort
  • QoS Routing/Constraint-based Routing
  • Traffic Aware Routing
  • Ingress-Egress Pair
  • Traffic Matrix
  • On the Scalability-Performance Tradeoffs in MPLS
    and IP Routing YilmazSpie02
  • Future Directions

7
BEST EFFORT
8
Outline
  • Best Effort
  • Per-packet Dynamic Routing
  • Load Balancing Along Equal Length Paths
  • Stability
  • Inter-domain Routing
  • QoS Routing/Constraint-based Routing
  • Traffic Aware Routing
  • Ingress-Egress Pair
  • Traffic Matrix
  • On the Scalability-Performance Tradeoffs in MPLS
    and IP Routing YilmazSpie02
  • Future Directions

9
Per-packet Dynamic Routing
  • Promises
  • Avoiding congested links
  • Computationally simple
  • Distributed
  • Stateless
  • Scalable
  • Difficulties
  • Link states change at packet level
  • Impractical to generate link state updates at
    packet level
  • Larger link state update periods result in
    fluctuations in link state between successive
    updates

10
Per-Packet Dynamic Routing
  • ARPANET experience showed that
  • choosing the right metric is important
  • to find optimal paths
  • detect congestion, avoid congested areas
  • to avoid oscillations
  • variations of the link metric
  • if the goal is to minimize packet delay
  • 1st version: instantaneous queue size + constant
  • delay on links with different characteristics looks the same
  • value changes very rapidly
  • poor measure of expected delay on the link
  • sub-optimal and unstable paths

11
Per-Packet Dynamic Routing
  • McQuillanIEEETC80: average (queuing + propagation + transmission) delay
  • new metric is a good predictor of future loads on links
  • the opposite is true at high load
  • leads to oscillation
  • range of delay values is too broad
  • makes some links unattractive to all
  • no limit on variations reported in successive updates for a link
  • more update generation
  • KhannaSig89: to reduce oscillation, don't target only the best routes
  • use a hop-normalized metric
  • for the same type of links, the range of metric values is 3:1
  • for all types of links it is 7:1
  • static routing at low utilization
  • the value of a link metric can only change by 1/2 hop in successive updates
  • bounds the range of oscillations

12
Load Balancing Along Equal Length Paths
  • Remedy the load-balancing inability of static routing
  • distribute traffic equally along equal cost
    shortest paths
  • Ways to achieve this
  • Per-packet Round Robin
  • not advised PaxsonSig96
  • path characteristics may be different
  • TCP packets arrive at destination out of order
  • unnecessary re-transmissions, waste of bandwidth
  • Source/destination address-based hash (a small sketch follows below)
  • may not actually balance the load
  • Ex: OSPF-ECMP
  • Static routing if weights are administratively assigned
  • Ex: Cisco suggests 1/link capacity
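A minimal sketch of the address-hash splitting mentioned above; the field names and hash choice are illustrative, not how any particular router implements OSPF-ECMP:

```python
import hashlib

def ecmp_next_hop(src_ip: str, dst_ip: str, equal_cost_next_hops: list) -> str:
    """Pick one of the equal-cost next hops by hashing the flow's addresses.

    All packets of the same (src, dst) pair map to the same next hop, so TCP
    segments stay in order; but if many flows hash into the same bucket the
    load is not actually balanced, which is the drawback noted on the slide.
    """
    digest = hashlib.sha256(f"{src_ip}-{dst_ip}".encode()).digest()
    bucket = int.from_bytes(digest[:4], "big") % len(equal_cost_next_hops)
    return equal_cost_next_hops[bucket]

# Example: three equal-cost shortest paths toward the same destination prefix.
print(ecmp_next_hop("10.0.0.1", "192.168.1.7", ["R2", "R5", "R8"]))
```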

13
Load Balancing Along Equal Length Paths
  • Not always efficient
  • not aware of actual loads on the ECMPs
  • Need to dynamically assign optimal weights
  • But still..
  • Conventional IP routing (destination based
    only) is O(N) worse than OSPF-ECMP.
    LorenzDIMACS01

14
Stability
  • Stability: Do routes change often? How long does it take to reach consistent tables?
  • Why is it important?
  • routers spend too much time on
  • updating their routing tables
  • propagating changes
  • Static routing is stable
  • How to deal with it?
  • don't target only the best routes
  • Quantizing increases the number of target paths
  • Use normalized metrics KhannaSig89
  • deal with stale link state information
  • Update period < connection arrival and holding times
  • Use triggered updates
  • limit the amount of updates
  • per-flow routing instead of per-packet

15
Inter-domain Routing
  • What if source and destination are in different
    ASs?
  • based on min-hop AS path routing
  • there is no universally agreed metric among ASs
  • each AS may have its own criteria for path
    selection
  • BGP is the current inter-domain routing protocol
  • path vector, policy based
  • BGP allows each AS
  • to independently formulate its routing policies
  • to override distance metrics in favor of policy constraints
  • BGP is not a pure distance vector protocol
  • may diverge, resulting in persistent oscillation in the global Internet
  • not robust to failures

16
Inter-domain Routing
  • statically analyzing convergence of BGP (even for
    known policies and single destination) is
    impractical GriffinSig99
  • Inter-domain QoS routing is more difficult than
    intra-domain
  • no universally agreed metric among ASs
  • policies override metrics
  • scale of topology

17
QOS ROUTING/CONSTRAINT-BASED ROUTING
18
Outline
  • Best Effort
  • QoS Routing/Constraint-based Routing
  • What is QoS Routing?
  • Objectives of QoS Routing
  • Difficulties with QoS Routing
  • Ways to Deal with Increased Cost of QoS Routing
  • Closer Look at Some QoS Performance-Cost
    Tradeoffs
  • Effects of Topology
  • Effects of Traffic Characteristics
  • QoS Routing Problems
  • A Possible Classification of QoS Routing Problems
  • Multiple Constraints Routing Problem
  • Delay Constrained Least Cost Routing
  • Can we achieve QoS guarantees at lower cost?
  • Improving Long Term Utilization of Network
  • Traffic Aware Routing
  • Ingress-Egress Pair
  • Traffic Matrix
  • On the Scalability-Performance Tradeoffs in MPLS
    and IP Routing YilmazSpie02

19
What is QoS Routing?
  • QoS Routing Problem Definition
  • Given a network graph G(V,E)
  • a source s and a destination d
  • a set of QoS constraints C
  • possibly an optimization goal
  • Find the best feasible path from s to d, which
    satisfies C
  • Example
  • Delay at most 4 ⇒ s → k → l → t
  • Bandwidth at least 1.5 ⇒ s → i → l → d or s → k → l → d
  • Both ⇒ s → k → l → t
  • Components: 1. Maintain topology and link state information
  • 2. Distribution of link state information
  • 3. Route computation

[Figure: example topology; links labeled (bandwidth, delay)]
20
Objectives of QoS Routing
  • Provide guarantees for end-to-end performance
  • Can identify feasible paths better
  • dynamic routing, accurate information about
    available resources
  • aware of QoS requirements of the requests
  • Resource reservation
  • Perform admission control
  • Improve the long term utilization of the network
  • Perform admission control
  • Prefer social paths
  • Use network resources in such a way that
    probability of accepting future requests is
    increased
  • Use the network resources in an efficient way
  • Prefer shortest paths to limit resource
    consumption
  • Prefer least loaded path to load balance

21
Difficulties with QoS Routing
  • Performance depends on accuracy of the network
    state information
  • changes frequently
  • flooding costs
  • tradeoff between communication overhead and
    accuracy
  • More sophisticated route computations
  • to satisfy multiple constraints (some
    combinations are NP-hard)
  • tradeoff between simpler path computations and
    better quality paths
  • More frequent route computations
  • maybe on-demand
  • tradeoff between per-request computational cost
    and quality of path
  • Paths need to be pinned and maintained afterwards
  • per-flow state
  • signaling cost to set up, tear down, keep alive
  • routing table entries: larger tables, slower lookups

22
Ways to Deal with Increased Cost of QoS Routing
  • Reduce volume of updates by controlling
  • when/how often to send (a small sketch follows below)
  • periodic updates, threshold/class-based triggers, clamp-down timers
  • what to send
  • only the specific link, or all of the node's links
  • absolute or quantized value
  • Use heuristics for simpler route computations
  • Use pre-computation or path caching instead of on-demand
  • Increase granularity, like class-based
  • Use hierarchical routing
  • Use hybrid routing: restrict QoS routing to only a group of flows and use static routing for the rest
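A small sketch combining two of the knobs above, a relative-change threshold and a clamp-down timer, for one link's advertised bandwidth; the threshold and timer values are illustrative assumptions:

```python
import time

class LinkStateAdvertiser:
    """Decide when to flood a new advertisement of a link's available bandwidth."""

    def __init__(self, threshold=0.2, hold_down_s=5.0):
        self.threshold = threshold      # advertise only if the value moved by >20%
        self.hold_down_s = hold_down_s  # and never more often than this
        self.last_advertised = None
        self.last_time = float("-inf")

    def maybe_advertise(self, available_bw: float) -> bool:
        now = time.monotonic()
        if now - self.last_time < self.hold_down_s:
            return False                # clamp-down timer still running
        if self.last_advertised is not None:
            change = abs(available_bw - self.last_advertised) / max(self.last_advertised, 1e-9)
            if change < self.threshold:
                return False            # change too small to be worth flooding
        self.last_advertised = available_bw
        self.last_time = now
        return True

trigger = LinkStateAdvertiser()
print(trigger.maybe_advertise(80.0))   # True: the first value is always advertised
print(trigger.maybe_advertise(75.0))   # False: hold-down timer and small change
```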

23
Closer Look at Some QoS Performance-Cost
Tradeoffs
  • 1. How Often Paths are Computed?
  • On-demand, pre-computation, path caching
  • 2. Type of Computation
  • Hop-by-hop, source, crank-back
  • 3. Granularity
  • Per-flow, larger aggregates
  • 4. Absolute vs. quantized values
  • 5. Hierarchical vs. Flat
  • 6. Hybrid Routing: Static + Dynamic
  • 7. Link state, Distance Vector, Path Vector

24
Closer Look at Some QoS Performance-Cost
Tradeoffs
  • 1. How Often Paths are Computed?
  • On-demand per-request
  • plus
  • Exact QoS requirements and destination are known
    at path selection
  • Yields better routes using the most recent link
    metrics available
  • Reduces space complexity
  • minus
  • Increases computational complexity
  • Per-request processing overhead is high
  • If a large clamp-down timer is used, re-discovers the same paths
  • use pre-computation

25
Closer Look at Some QoS Performance-Cost
Tradeoffs
  • Pre-computation
  • periodic, or after receiving a certain number of updates
  • plus
  • Reduces per-request processing overhead
  • minus
  • Must compute all possible paths to all
    destinations for all possible QoS requirements
  • Must store pre-computed paths
  • Adds path selection cost: given the destination and bandwidth, find a suitable path from the QoS table
  • Some of the paths may never be used
  • Periodic pre-computation results in routing performance loss ApostolopoulosSig98
  • For sensitive triggers, may be more costly than on-demand

26
Closer Look at Some QoS Performance-Cost
Tradeoffs
  • Hybrid Path Caching ApostolopoulosICNP98
  • Remember k previously computed paths to a
    destination
  • Choose shortest feasible
  • plus
  • good balance between per-request processing
    overhead and quality of paths
  • ability to load balance/pack
  • the more entries, the better the results
  • bigger cache size
  • update paths instead of invalidating them
  • Small cache size for faster lookup and less storage
  • minus
  • cache needs to be maintained: entries must be updated, invalidated, etc.
  • for small update periods, cost can exceed on-demand
  • storage overhead at each node per destination
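A toy sketch of the path caching idea above; the entry layout (bottleneck bandwidth per cached path) and the eviction rule are assumptions for illustration, not the scheme of ApostolopoulosICNP98:

```python
class PathCache:
    """Keep up to k recently computed paths per destination and serve requests
    from the cache when a cached path is still feasible."""

    def __init__(self, k=4):
        self.k = k
        self.cache = {}                 # destination -> list of (path, bottleneck_bw)

    def lookup(self, dest, demand):
        feasible = [(p, bw) for p, bw in self.cache.get(dest, []) if bw >= demand]
        if not feasible:
            return None                 # cache miss: fall back to on-demand computation
        # "Choose shortest feasible": fewest hops first, widest as the tie-breaker.
        return min(feasible, key=lambda entry: (len(entry[0]) - 1, -entry[1]))[0]

    def insert(self, dest, path, bottleneck_bw):
        entries = self.cache.setdefault(dest, [])
        entries.append((path, bottleneck_bw))
        del entries[:-self.k]           # keep only the k most recent paths

cache = PathCache(k=2)
cache.insert("d", ("s", "a", "d"), 5)
cache.insert("d", ("s", "b", "d"), 9)
print(cache.lookup("d", demand=6))      # ('s', 'b', 'd')
```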

27
Closer Look at Some QoS Performance-Cost
Tradeoffs
  • 1. How Often Paths are Computed?
  • On-demand, pre-computation, path caching
  • 2. Type of Computation
  • Hop-by-hop, source, crank-back
  • 3. Granularity
  • Per-flow, larger aggregates
  • 4. Absolute vs. quantized values
  • 5. Hierarchical vs. Flat
  • 6. Hybrid Routing: Static + Dynamic
  • 7. Link state, Distance Vector, Path Vector

28
Closer Look at Some QoS Performance-Cost
Tradeoffs
  • 2. Type of Computation
  • Hop-by-hop
  • path computation is distributed
  • scalable
  • occasionally suffers from loops due to inconsistencies in the paths discovered by different routers
  • speeds up the establishment of a path by avoiding
    the route computation at the source
  • generally used for pre-computation

29
Closer Look at Some QoS Performance-Cost
Tradeoffs
  • Source routing
  • plus
  • guarantees loop free routing
  • enforces the route computed at the source
  • allows sophisticated and complex path selection
    algorithms with diverse resource requirements
  • generally used for on-demand
  • minus
  • has scalability problem
  • centralized computation
  • global state needs to be maintained at each node
  • the path is included in the header of the packet
  • each router has to process the packet

30
Closer Look at Some QoS Performance-Cost
Tradeoffs
  • Hybrid Crank-back (PNNI)
  • route using source routing
  • in case of failure during path establishment
    phase, use a local search at the point of failure
  • a way to handle inaccurate network state
    information
  • increases time to set up a path for an incoming
    flow

31
Closer Look at Some QoS Performance-Cost
Tradeoffs
  • 1. How Often Paths are Computed?
  • On-demand, pre-computation, path caching
  • 2. Type of Computation
  • Hop-by-hop, source, crank-back
  • 3. Granularity
  • Per-flow, larger aggregates
  • 4. Absolute vs. quantized values
  • 5. Hierarchical vs. Flat
  • 6. Hybrid Routing: Static + Dynamic
  • 7. Link state, Distance Vector, Path Vector

32
Closer Look at Some QoS Performance-Cost
Tradeoffs
  • 3. Granularity
  • Per-flow
  • plus
  • better load balance, better long term performance
  • better service guarantees
  • minus
  • expensive
  • path computation occurs more frequently
  • network state changes more frequently
  • requires smaller update trigger
  • state must be maintained for each flow
  • signaling cost path set-up, tear down, keep
    alive
  • routing table entry size, and lookup

33
Closer Look at Some QoS Performance-Cost
Tradeoffs
  • Flows can be larger aggregates of multiple flows,
    like class-based, or destination based
  • plus
  • cheaper: less state, smaller routing tables, faster lookup
  • more scalable
  • minus
  • demands with larger bandwidth requirements have a higher blocking probability MaICNP97, MattaInfo98, ShaikhICNP98
  • bandwidth fragmentation
  • may increase potential of instability
  • large volumes of traffic will be shifted from one
    place to the other
  • activates update triggers more often

34
Closer Look at Some QoS Performance-Cost
Tradeoffs
  • 1. How Often Paths are Computed?
  • On-demand, pre-computation, path caching
  • 2. Type of Computation
  • Hop-by-hop, source, crank-back
  • 3. Granularity
  • Per-flow, larger aggregates
  • 4. Absolute vs. quantized values
  • 5. Hierarchical vs. Flat
  • 6. Hybrid Routing: Static + Dynamic
  • 7. Link state, Distance Vector, Path Vector

35
Closer Look at Some QoS Performance-Cost
Tradeoffs
  • 4. Absolute vs. quantized values
  • Quantized
  • may reduce accuracy especially if only a few
    quantized values are being used
  • increases number of equal cost paths
  • by not targeting only a single best choice, increases stability
  • helps path selection avoid being stuck with a single bad choice
  • influences when the next update will take place
  • allows smaller routing table size

36
Closer Look at Some QoS Performance-Cost
Tradeoffs
  • 1. How Often Paths are Computed?
  • On-demand, pre-computation, path caching
  • 2. Type of Computation
  • Hop-by-hop, source, crank-back
  • 3. Granularity
  • Per-flow, larger aggregates
  • 4. Absolute vs. quantized values
  • 5. Hierarchical vs. Flat
  • 6. Hybrid Routing: Static + Dynamic
  • 7. Link state, Distance Vector, Path Vector

37
Closer Look at Some QoS Performance-Cost Tradeoffs
  • 5. Hierarchical vs. Flat
  • Flat
  • nodes have full knowledge of topology
  • does not scale
  • cost of communication, computation, and storage overhead is huge
  • Hierarchical: PNNI, OSPF (only two levels)
  • reduce the size of the network by aggregating topology
  • Each node knows
  • complete topology within its group
  • summarized information of the parent group
  • Link aggregation
  • A.1-A.2 is an aggregate of A.1.3-A.2.3 and A.1.1-A.2.2

[Figure: two-level hierarchy showing the parent group and border routers]
38
Closer Look at Some QoS Performance-Cost Tradeoffs
  • 5. Hierarchical vs. Flat
  • plus
  • can scale to large networks
  • amount of information that is stored is reduced
  • amount of information that is distributed is reduced
  • minus
  • tradeoff between amount of aggregation and accuracy
  • loss of detailed information
  • the state of a logical link is a combination of many lower-level links
  • as states are aggregated, imprecision is also aggregated
  • affects the quality of selected paths

39
Closer Look at Some QoS Performance-Cost
Tradeoffs
  • 1. How Often Paths are Computed?
  • On-demand, pre-computation, path caching
  • 2. Type of Computation
  • Hop-by-hop, source, crank-back
  • 3. Granularity
  • Per-flow, larger aggregates
  • 4. Absolute vs. quantized values
  • 5. Hierarchical vs. Flat
  • 6. Hybrid Routing: Static + Dynamic
  • 7. Link state, Distance Vector, Path Vector

40
Closer Look at Some QoS Performance-Cost Tradeoffs
  • 6. Hybrid Routing: Static + Dynamic
  • Classify flows according to their characteristics and dynamically/statically route some classes
  • Use dynamic per-flow routing only for long-lived flows
  • Route the class of short-lived flows statically ShaikhSig99
  • plus
  • increases stability
  • per-flow is more stable than per-packet
  • class-based is more stable than per-flow
  • link states change more slowly
  • number of flows that are dynamically routed is decreased
  • flow duration is increased
  • frequency of updates is reduced

41
Closer Look at Some QoS Performance-Cost Tradeoffs
  • 6. Hybrid Routing: Static + Dynamic
  • slower link state change and fewer flows to be dynamically routed reduce overheads
  • number of route computations
  • amount of per-flow state
  • number of signaling operations: pinning, keeping up soft state, tearing down
  • smaller forwarding tables
  • more robust to stale link state information
  • flow trigger value can be used to
  • balance the cost and performance tradeoff
  • balance stability and adaptiveness
  • minus
  • under accurate link state information,
    performance is worse than dynamic routing
  • short flows have to be on static paths

42
Closer Look at Some QoS Performance-Cost
Tradeoffs
  • 1. How Often Paths are Computed?
  • On-demand, pre-computation, path caching
  • 2. Type of Computation
  • Hop-by-hop, source, crank-back
  • 3. Granularity
  • Per-flow, larger aggregates
  • 4. Absolute vs. quantized values
  • 5. Hierarchical vs. Flat
  • 6. Hybrid Routing: Static + Dynamic
  • 7. Link state, Distance Vector, Path Vector

43
Closer Look at Some QoS Performance-Cost Tradeoffs
  • 7. Link state, Distance Vector, Path Vector
  • Link state: OSPF, IS-IS
  • each router knows the entire network topology
  • plus
  • allows more complex path selection algorithms
  • minus
  • must maintain topology database at each node
  • flood updates
  • does not scale well to large networks
  • use hierarchical topology aggregation
  • Distance Vector: RIP
  • each node keeps shortest path tree rooted at
    itself to destinations
  • plus
  • requires less memory
  • more scalable

44
Closer Look at Some QoS Performance-Cost Tradeoffs
  • minus
  • cannot support sophisticated path computations
  • may have convergence problems
  • does not allow computation of routes specific to the requirements of an individual flow
  • Path Vector: BGP
  • routing tables keep not only <destination, nexthop, cost> but also the corresponding path
  • plus
  • eliminates loops
  • minus
  • needs more storage to keep paths
  • the paths need to be processed

45
Effects of Topology on QoS
  • Overheads of computing routes and distributing
    link states
  • Specifies number of candidate paths between each
    pair
  • better chances of load balance
  • Larger hop count connections are harder to route
    under inaccurate link state information
    ShaikhICNP98
  • Pruning should be disabled for loosely connected
    topologies ShaikhICNP98 ApostolopoulosSig98
  • Specifies relative cost of single destination and
    all-destination path computation

46
Effects of Traffic Characteristics on QoS
  • Connection inter-arrival time and holding time
  • Frequency of link state updates
  • Large update periods relative to arrival rates and holding times lead to flapping
  • Longer-lived flows allow use of larger link state update periods ShaikhSig99
  • Exponential holding times give lower blocking probability than Pareto with the same mean for the same link state update periods
  • With increasing number of short-lived flows,
    inaccuracy increases
  • MaICNP97 ,ShaikhICNP98

47
Effects of Traffic Characteristics on QoS
  • Requested Bandwidth
  • High bandwidth flows have higher blocking
    probability
  • Low bandwidth flows can more easily use the
    available bandwidth
  • more robust to inaccuracies
  • High bandwidth flows cause large fluctuations in
    link state
  • needs more frequent update messages

48
Effects of Traffic Characteristics on QoS
  • Uniform vs Non-uniform Traffic
  • Under non-uniform load, QoSR performs better than
    static
  • Under uniform traffic, QoSR may perform worse
    than static
  • depending on the update period, staleness causes route flapping ShaikhICNP98
  • very accurate link state information causes excessive alternate routing
  • uses longer paths with extra resources
  • interferes with min-hop traffic competing for the same links MattaInfo98, ApostolopoulosSig98
  • Solution: trunk reservation

49
QoS Routing Problems
  • Link metrics
  • Additive: delay, cost
  • Multiplicative: probability of successful transmission
  • Concave: bandwidth, buffer space
  • Path metrics (a small sketch follows below)
  • Additive: sum of link metrics along the path
  • Concave: max (min) of link metrics along the path
  • Find a path that optimizes the path metric
  • for a concave metric: link optimization
  • for an additive/multiplicative metric: path optimization
  • Find a path whose path metric is above/below a specified value
  • for a concave metric: link constrained
  • for an additive metric: path constrained
  • ChenIEEEN98
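A tiny sketch of how the three kinds of path metrics above compose from per-link values; the tuple layout is just for illustration:

```python
from math import prod

def path_metrics(links):
    """links: per-link (delay, success_probability, bandwidth) tuples along one path."""
    delays, probs, bws = zip(*links)
    return {
        "delay": sum(delays),          # additive: sum of link metrics
        "success_prob": prod(probs),   # multiplicative: product of link metrics
        "bandwidth": min(bws),         # concave: the bottleneck (minimum) link
    }

# Example: a three-hop path.
print(path_metrics([(2, 0.99, 10), (5, 0.98, 4), (1, 0.995, 8)]))
# {'delay': 8, 'success_prob': 0.965..., 'bandwidth': 4}
```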

50
A Possible Classification of QoS Routing Problems
  • Basic routing problems: link optimization, link-constrained, path-optimization, path-constrained
  • Composite routing problems:
  • link-constrained link-optimization
  • link-constrained path-optimization
  • multiple link-constrained
  • link-constrained path-constrained
  • path-constrained link-optimization
  • path-constrained path-optimization (NP-complete)
  • multi-path-constrained (NP-complete)

51
Multiple Constraints Routing Problem
  • Problem Definition
  • Given
  • a network graph G(V,E)
  • a vector of m additive link metrics for each edge
  • a source node s
  • a destination node d
  • a vector of m positive constraints, L
  • and
  • path P = (s, k, ..., l, d) is specified by the vector (l1(P), ..., lm(P))
  • Find a path satisfying the constraints, i.e., li(P) ≤ Li for all i = 1, 2, ..., m
  • How to run a shortest path algorithm?

52
Multiple Constraints Routing Problem
  • Path length (assuming m = 2)
  • Linear: a·l1(P) + ß·l2(P)
  • Non-linear: max(l1(P)/L1, l2(P)/L2) NeveCC00, MieghemCC01
  • Linear Representation of Path Length
  • Problems
  • 1. May fail: the optimal path according to the new weight function may be infeasible
  • 2. How to choose the weights a and ß?

[Figure: the feasible region defined by the constraints in the (l1, l2) plane; the solution returned by Dijkstra under a linear length may lie outside it]
53
Multiple Constraints Routing Problem
  • Non-linear Representation of Path
  • Length
  • Problem
  • subpaths of shortest paths are not necessarily shortest paths
  • Dijkstra fails to find shortest paths

54
Multiple Constraints Routing Problem
  • Example
  • From source to u, Dijkstra will choose P2, since the path length of P2 is smaller:
  • Length of P2 = max(5/12, 5/12) = 5/12 < Length of P1 = max(10/12, 1/12) = 10/12
  • But the shortest path from source to destination goes through P1, since
  • max(11/12, 11/12) < max(6/12, 15/12) (a small sketch follows below)
  • Solution
  • Use a k-shortest-path algorithm
  • At each step of Dijkstra, store the k shortest paths
  • kmax = min( (L1·L2·...·Lm)/max(Li), ⌊e(N-2)!⌋ )
  • tunable tradeoff between accuracy and complexity
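A short sketch reproducing the example above with the non-linear length; the constraints (12, 12) and the weights (1, 10) of the remaining link are read off the example's arithmetic and are assumptions of this sketch:

```python
def nonlinear_length(weights, constraints):
    """max_i( l_i(P) / L_i ), the non-linear path length used above."""
    return max(w / L for w, L in zip(weights, constraints))

L = (12, 12)                           # constraints assumed from the example
P1_to_u, P2_to_u = (10, 1), (5, 5)     # metric vectors of the two sub-paths to node u
last_link = (1, 10)                    # remaining link u -> destination (assumed)

# The greedy (Dijkstra-like) step at u keeps only the "shorter" sub-path P2 ...
print(nonlinear_length(P1_to_u, L), nonlinear_length(P2_to_u, L))       # 0.83 > 0.42

# ... yet extending both shows that P1 leads to the shorter, feasible full path.
extend = lambda sub: tuple(a + b for a, b in zip(sub, last_link))
print(nonlinear_length(extend(P1_to_u), L))   # 11/12 = 0.92 (feasible)
print(nonlinear_length(extend(P2_to_u), L))   # 15/12 = 1.25 (violates a constraint)
```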

55
Delay Constrained Least Cost Routing
  • Problem Definition
  • Given
  • a network graph G(V,E)
  • a nonnegative cost C(e) for each edge e
  • a nonnegative delay D(e) for each edge e
  • a source node s
  • a destination node d
  • a positive delay constraint Δd
  • Find the path Pi ∈ P(s,d) that minimizes Cost(Pi) subject to Delay(Pi) ≤ Δd
  • where, for a path P = (v0, v1, ..., vn): cost of the path Cost(P) = Σe in P C(e), delay of the path Delay(P) = Σe in P D(e)

[Figure: candidate paths plotted by (path delay, path cost); the least delay path (LDP), least cost path (LCP), delay bound Δd, and the optimal DCLC path]
56
Delay Constrained Least Cost Routing
  • Example: Δd = 3, links labeled (delay/cost)

[Figure: example topology from source to destination]
Optimal DCLC path: delay = 3, cost = 5
Least Delay Path: delay = 2, cost = 6
Least Cost Path: delay = 4, cost = 4 (infeasible)
57
Delay Constrained Least Cost Routing
  • Convert the DCLC Routing Problem to DCC GuoMattaICDCS99, COMNET02
  • What will be the cost bound?
  • Path Length = Path Delay / (1 - Path Cost/Cost Bound) if the path is feasible
  • infinity if not feasible
  • which gives preference to low cost paths (a small sketch follows below)
  • Example
  • Δc = Cost(LDP) = 6; the LDP is A-G-E
  • Look at the paths with smaller cost
  • A-G-E: Cost = 6, Delay = 2
  • A-B-C-E: Cost = 6, Delay = 3
  • A-B-C-D-E: Cost = 4, Delay = 4, Path Length = infinity
  • A-G-F-E: Cost = 5, Delay = 3, Path Length = 3/(1 - 5/6) = 18
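A small sketch of the length function above, reproducing the example's numbers; the feasibility test (delay bound Δd and cost bound from the LDP) follows the slide's definition, and the path names come from the example:

```python
import math

def dcc_length(path_delay, path_cost, delay_bound, cost_bound):
    """Length = Delay / (1 - Cost/CostBound) for feasible paths, infinity otherwise.
    Paths violating the delay bound, or not improving on the cost bound, get an
    infinite length; among the rest, lower-cost paths get a smaller length."""
    if path_delay > delay_bound or path_cost >= cost_bound:
        return math.inf
    return path_delay / (1.0 - path_cost / cost_bound)

delay_bound, cost_bound = 3, 6          # delta_d = 3 and delta_c = Cost(LDP) = 6
for name, delay, cost in [("A-G-E", 2, 6), ("A-B-C-E", 3, 6),
                          ("A-B-C-D-E", 4, 4), ("A-G-F-E", 3, 5)]:
    print(name, dcc_length(delay, cost, delay_bound, cost_bound))
# A-G-F-E gets 3 / (1 - 5/6) = 18, the only finite length among these candidates.
```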

58
Issues
  • Non-linear path length requires a k-shortest-path algorithm
  • Added space complexity is O(kV)
  • With non-integer constraints, k is big: O(N!)
  • run time complexity, O(kVlog(kV) + k²mE), is no longer polynomial
  • if a smaller value is chosen for k, the algorithm is not exact
  • may miss the shortest path
  • Reduced search space GuoMattaICDCS99, COMNET02
  • eliminate infeasible links by assigning infinite weight
  • find a tighter cost bound by running the Blokh-Gutin algorithm
  • Better chance of finding the optimal
  • a smaller k may be good enough
  • run time complexity is increased because of the pre-processing step
  • Hop-by-hop computation of DCLC

59
Delay Constrained Least Cost Routing
  • Distributed solution (example: Δd = 3, links labeled delay/cost)
  • Each node keeps Cost and Delay Vectors
  • per destination: least cost (least delay) from the node to that destination, and the next hop
  • At each hop, choose either the LCP or the LDP direction
  • The active node chooses the LCP next hop if delay_so_far + link delay to it + least delay from it to the destination (from the delay vector) ≤ Δd
  • otherwise it chooses the LDP next hop (a small sketch follows below)
  • Better scalability and time complexity
  • Increased communication complexity
  • Still per-flow state
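A compact sketch of the per-hop rule above; the neighbor names and argument layout are illustrative, and the rule follows the slide's simplified statement rather than any specific published pseudocode:

```python
def choose_next_hop(delay_so_far, delay_bound,
                    lc_next_hop, lc_link_delay, least_delay_from_lc_next_hop,
                    ld_next_hop):
    """Prefer the least-cost direction as long as the delay accumulated so far,
    plus the link delay to that neighbor, plus the least possible delay from that
    neighbor to the destination (from the delay vector), still fits the bound;
    otherwise fall back to the least-delay direction."""
    if delay_so_far + lc_link_delay + least_delay_from_lc_next_hop <= delay_bound:
        return lc_next_hop
    return ld_next_hop

# Example with the slide's bound of 3: the cheap direction would already need
# 1 + 1 + 2 = 4 > 3 units of delay, so the node falls back to the least-delay hop.
print(choose_next_hop(1, 3, lc_next_hop="B", lc_link_delay=1,
                      least_delay_from_lc_next_hop=2, ld_next_hop="G"))   # 'G'
```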

60
Can we achieve QoS guarantees at lower cost?
  • QoS routing generally
  • asks for source routing, link state, on-demand
    mode,
  • requires per-flow state
  • NOT SCALABLE!
  • Is it possible to achieve a sufficient level of QoS guarantees with hop-by-hop destination-based-only routing (connectionless)?
  • HbHDBO QoS cannot guarantee finding exact results, where
  • routes are computed from intermediate nodes to destinations
  • only next hop information is kept
  • MieghemCC01
  • Simulations show that the quality of HbHDBO QoS routing is good
  • 90% of the time the exact solution is found
  • With active networking, HbHDBO QoS can be done
  • each packet carries the remaining constraints
  • each packet carries the path traversed so far, and the constraints

61
[Figure: worked example of hop-by-hop vs. source-routed path selection on a topology whose links carry three additive metrics, with constraints (14, 11, 22). Path lengths are the per-metric sums normalized by the constraints, and the overall length is their maximum. Node i chooses f as its next hop (normalized lengths 0.86, 0.91, 0.95); node e chooses b as its next hop (0.5, 0.73, 0.36). The resulting hop-by-hop path has normalized lengths 0.86, 1.09, 0.55 and violates the second constraint, while the shortest (source-computed) path has lengths 0.86, 0.91, 0.95 and satisfies all constraints.]
62

TAMCRA NeveCC00
SAMCRA MieghemCC01
DCUR SalamaInfo97
SSR+DCCR GuoICDCS99
Solves
Generic Multiple Additive
Generic Multiple Additive
Delay Constrained Least Cost
Delay Constrained Least Cost
Source
Hop-by-hop
Routing Strategy
Source
Hop-by-hop
Type
Link State or Distance Vector
Link State or Distance Vector
Link State
Link State
O(hVlog(hV) + h²mE) per-flow (h is the accuracy tuning index, m is the number of metrics)
O(hVlog(hV) + h²mE) per-packet (h is the accuracy tuning index, m is the number of metrics)
O(hVlog(hV) + h²mE) per-flow, plus BG (Blokh-Gutin) O(cElogV) (c is the number of iterations, h is the accuracy tuning index, m is the number of metrics)

O(1)
Time Complexity
O(k) to distribute link states if link state (k is the number of neighbors), O(V) if distance vector, O(V³)
O(k) to distribute link states (k is the number of neighbors)
O(k) to distribute link states if link state (k is the number of neighbors), O(V) if distance vector
O(k) to distribute link states (k is the number of neighbors)
Communication Complexity
O(E) for network state, O(hmN) for the h shortest paths (h is the accuracy tuning index)
O(E) for network state if link state, O(kV) for path state if distance vector (k is the number of neighbors), O(hmN) for the h shortest paths (m is the number of metrics)
O(E) for network state if link state, O(kV) for path state if distance vector (k is the number of neighbors), O(V) for the cost vector, O(V) for the delay vector
O(E) for network state, O(hmN) for the h shortest paths (h is the accuracy tuning index)
Space Complexity
Per-flow
none
Per-flow
Per-flow
Maintained State
none
none
Extra Information Used
none
none
63
Improving Long Term Utilization of Network
  • Choose paths that use network resources in an efficient way
  • increase revenue, decrease blocking probability
  • Use additional constraints
  • limit resource consumption: shortest paths
  • improves performance under heavy load MaICNP97
  • load balance: use alternate (wider) paths
  • improves performance under light load MaICNP97
  • increases blocking probability of high bandwidth requests MattaBestavrosInfo98
  • good for evenly distributed load
  • load packing: most-utilized paths (best fit)
  • approximates perfect fit, which is NP-hard
  • minimizes fragmentation
  • improves fairness for large bandwidth requests
  • bad for uniform traffic MattaBestavrosInfo98
  • extremely sensitive to link state inaccuracies

64
Improving Long Term Utilization of Network
  • load profiling MattaBestavrosInfo98: try to match the load profile and the bandwidth availability profile
  • more robust to routing inaccuracies, allows longer update periods

65
TRAFFIC AWARE ROUTING
66
Outline
  • Best Effort
  • QoS Routing/Constraint-based Routing
  • Traffic Aware Routing
  • Ingress-Egress Pair
  • Traffic Matrix
  • On the Scalability-Performance Tradeoffs in MPLS
    and IP Routing YilmazSpie02
  • Future Directions

67
What is Traffic Aware Routing?
  • Input: available resources and some knowledge about the traffic, like ingress-egress pairs or a traffic matrix
  • Goal: optimize/maximize network usage or service guarantees
  • Ingress-egress pairs: location of source-destination traffic
  • allows efficient use of network resources, maximizes revenue
  • Traffic matrix: demands between source-destination pairs
  • allows pre-computing a set of routes optimizing some performance metric
  • based on long-term averages, e.g. daily
  • needs to be re-optimized
  • static for a period of time
  • generally done off-line in a centralized manner
  • not designed to handle traffic fluctuations in real-time
  • routing tables converge slowly

68
Is achieving good long-term performance enough?
  • Traffic matrix is for long term traffic
    characteristics
  • Used to optimize long term performance
  • What about short term performance?
  • Short-term performance may be sub-optimal
    SridharanITC01
  • Traffic and link loads over shorter time scales fluctuate around the long term average values
  • finer granularity, shorter time scales ⇒ more variability
  • Compute a new set of optimal routes at shorter
    time scales to avoid overloading
  • not feasible, not stable
  • Use traffic aggregates
  • coarser granularity restricts load balancing
    ability
  • poorer long term performance

69
Ingress-egress Pairs
70
How to use this extra information to improve
performance?
  • Smarter usage of resources for QoSR
  • increase utilization and long term performance
  • on-demand, no knowledge of future requests, no
    splitting of flows
  • Bandwidth Constrained
  • Among the feasible paths, pick paths that interfere least with future requests KarInfo00
  • How to find minimum interference paths?
  • Maxflow: upper bound on the total amount of bandwidth that can be routed between an ingress-egress pair

71
How to use this extra information to improve
performance?
  • Can we use LP to maximize the minimum/sum of
    maxflow/s between other ingress-egress pairs?
  • Unsplitability restriction makes the problem
    NP-hard
  • Heuristic: MIRA KarInfo00
  • Assign weights that are an increasing function of criticality
  • Routing over a critical link decreases the maxflow value of some ingress-egress pairs
  • defer loading of critical links whenever possible
  • By Dijkstra, find the shortest path
  • Example: Maxflow = 23
  • Mincut separates (s, v1, v2, v4) from (v3, d)
  • Capacity of the mincut = 23
  • Critical links: (v1,v3), (v4,v3), (v4,d)


72
Traffic Matrix
73
How to use traffic matrix to improve performance?
  • Optimal solution: solve an LP formulation to optimize a metric like minimizing the maximum link utilization
  • allows unrestricted splitting
  • can only be simulated by directing packets along logical connections
  • costly, more than per-flow state is needed in case of splitting
  • not scalable
  • Calculate OSPF Weights
  • Assign weights in such a way that shortest
    path routing will prefer the underutilized links
  • plus
  • no per-flow state
  • simple shortest path routing
  • scalable

74
How to use traffic matrix to improve performance?
  • Unrestricted Case WangInfo01
  • 1. solve LP formulation to optimize a metric
  • 2. solve dual of LP to find link weights that
    with shortest path computation will reproduce
    optimal routes obtained in (1)
  • Minus: current OSPF only supports equal splitting
  • OSPF-ECMP
  • cannot be formulated as an LP
  • equal split along equal cost paths makes the
    problem NP-hard FortzInfo00
  • find heuristics to set weights
  • local search heuristic FortzInfo00
  • set weights as an exponential function of link load LorenzDIMACS01 (a small sketch follows below)
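A toy sketch of the second heuristic above: derive OSPF weights that grow with link load so shortest-path routing steers traffic away from hot links. The exponential form follows the slide, but the scaling constants and link names are illustrative, not the values from LorenzDIMACS01:

```python
import math

def ospf_weights_from_load(link_load, link_capacity, base=1, scale=10):
    """Return integer OSPF weights that grow exponentially with utilization,
    so heavily loaded links look 'long' to the shortest-path computation."""
    weights = {}
    for link, load in link_load.items():
        utilization = load / link_capacity[link]
        weights[link] = base + round(scale * (math.exp(utilization) - 1))
    return weights

capacity = {("R1", "R2"): 100, ("R1", "R3"): 100}
load = {("R1", "R2"): 90, ("R1", "R3"): 20}
print(ospf_weights_from_load(load, capacity))
# The hot R1-R2 link gets a much larger weight than the lightly loaded R1-R3 link.
```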

75
How to use traffic matrix to improve performance?
  • Use both traffic matrix and ingress-egress pair
    information
  • Offline (pre-processing) phase
  • Compute an optimal solution by using LP for
    traffic matrix
  • solve multi-commodity flow problem
  • no restriction on splitting
  • each profile is a commodity
  • result is pre-allocated capacities for each
    commodity
  • Online phase Use the result of offline phase as
    virtual capacities to route individual requests
  • Suri01

76
Can clever weight setting for OSPF-ECMP replace
per-flow routing?
  • Weight setting for OSPF-ECMP cannot replace
    MPLS as a TE tool.
  • OSPF-ECMP is O(N) worse than per-flow routing
    with respect to
  • maximum throughput and maximum utilization that
    can be achieved.
  • LorenzDIMACS01 FortzInfo00
  • What about the gap in practice?
  • may not be too bad for realistic topologies and traffic patterns
  • the worst case may never be observed in practice
  • what aspects of topology make the gap less significant?
  • what aspects of traffic patterns make the gap less significant?

77
Demands are 1 unit each
OSPF-ECMP: if only one link is used at all nodes, then maximum throughput = 1; if 2 links are used at some nodes, then maximum throughput = 2
Per-flow: maximum throughput = N
78
[Figure: N unit-demand sources u1, ..., uN reaching a single destination d through nodes v1, ..., vN over links with capacities C1, ..., CN-1]
OSPF-ECMP: if only one link is used at all nodes, then maximum utilization = N; if 2 links are used at some nodes, then maximum utilization = N/2
Per-flow: maximum utilization = 1
79
MDWCRA YangICNP01
MOCA KodialamInfo01


Solves
Bandwidth-Delay Constrained
Guaranteed Bandwidth


Source
Source
Source
Hop-by-hop
Routing Strategy
Type
Link State
Link State
Link State
Link State or Distance Vector

O(nVE²), n is the number of ingress-egress pairs
O(VE) for the on-line phase; offline phase O(pE²logV + pVlog(pV)), p is the number of ingress-egress pairs
O(1)
Time Complexity
O(k) to distribute link states k is number of
neighbors
O(k) to distribute link states k is number of
neighbors
O(k) to distribute link states if Link State k is
number of neighbors O(V) if Distance Vector
O(k) to distribute link states k is number of
neighbors
Communication Complexity
O(E) for network state, O(V²) for the ingress-egress pair matrix
O(E) for network state, O(V²) for the ingress-egress pair matrix, O(V²) for the traffic profile matrix
O(E) for network state if link state, O(kV) for path state if distance vector (k is the number of neighbors)
O(E) for network state, O(pV) to keep paths with different weights
Space Complexity
Per-flow
Per-flow
Per-class and per-flow
none
Maintained State
Ingress-egress pair matrix Traffic profile matrix
none
Extra Information Used
Ingress-egress pair matrix
Ingress-egress pair matrix
80
Outline
  • Best Effort
  • QoS Routing/Constraint-based Routing
  • Traffic Aware Routing
  • Ingress-Egress Pair
  • Traffic Matrix
  • On the Scalability-Performance Tradeoffs in MPLS
    and IP Routing YilmazSpie02
  • Future Directions

81
On the Scalability-Performance Tradeoffs in MPLS
and IP Routing YilmazSpie02
  • How far are MIRA and PBR from Optimal?
  • Does increased complexity mean better
    performance? Scalability?
  • Compare the performance of
  • Optimal, WSP, MIRA,
    PBR
  • Performance metrics
  • bandwidth acceptance ratio
  • utility
  • maximum utilization

82
On the Scalability-Performance Tradeoffs in MPLS
and IP Routing YilmazSpie02
  • Optimal per-packet routing: optimizes for bandwidth acceptance ratio
  • Solve a multicommodity flow problem at each flow arrival/departure, where
  • each flow is a commodity
  • xi(e): amount of commodity i routed through edge e, with appropriate constraints
  • Excess edges (with cost = infinity) are added to always have a feasible solution

83
On the Scalability-Performance Tradeoffs in MPLS
and IP Routing YilmazSpie02
  • An individual flow can be split, get different bandwidth values, and be assigned to different paths during its lifetime
  • Prefers shorter paths (less costly)
  • Does not load balance

Possible bandwidth assignment for a flow asking for b units of bandwidth; accepted amount = b - b4
84
On the Scalability-Performance Tradeoffs in MPLS
and IP Routing YilmazSpie02
  • Plus
  • Avoid congested links, better utilization and
    performance
  • Computationally simple
  • Does not use extra information like
    ingress-egress pairs or traffic matrix
  • Distributed
  • Stateless
  • Scalable
  • Minus
  • Cannot provide guaranteed service
  • Hard to achieve in practice
  • routing oscillation

85
On the Scalability-Performance Tradeoffs in MPLS
and IP Routing YilmazSpie02
  • Widest-Shortest Path
  • Choose a feasible min-hop path
  • Break ties by picking the widest (a small sketch follows below)
  • Improves performance by
  • limiting resource consumption
  • balancing load
  • MaICNP97
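A minimal sketch of the widest-shortest selection rule above; the candidate-path input format (a list of residual bandwidths per path) is an assumption of the sketch:

```python
def widest_shortest_path(paths, demand):
    """Pick the feasible path with the fewest hops; break ties by choosing the one
    with the largest bottleneck (residual) bandwidth."""
    feasible = {p: bws for p, bws in paths.items() if min(bws) >= demand}
    if not feasible:
        return None                                    # no path can carry the request
    return min(feasible, key=lambda p: (len(p) - 1, -min(feasible[p])))

candidates = {
    ("s", "a", "d"): [5, 3],           # 2 hops, bottleneck 3
    ("s", "b", "d"): [10, 8],          # 2 hops, bottleneck 8  <- widest of the shortest
    ("s", "a", "c", "d"): [9, 9, 9],   # 3 hops
}
print(widest_shortest_path(candidates, demand=2))      # ('s', 'b', 'd')
```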

86
On the Scalability-Performance Tradeoffs in MPLS
and IP Routing YilmazSpie02
  • MIRA KarInfo00
  • To route a demand (a, b, D) (a rough sketch follows below):
  • 1. For all ingress-egress pairs (s,d) ≠ (a,b)
  • compute maxflow values
  • compute critical links that belong to all possible mincuts
  • compute weights w(l) = Σ α_sd over all pairs (s,d) for which l is critical
  • 2. Eliminate all links with residual bandwidth < D
  • 3. Use Dijkstra to compute the shortest path from a to b
  • 4. Route the demand and update residual capacities
  • Some choices for α_sd
  • can be chosen to reflect the importance of pairs
  • if α_sd = 1, w(l) gives the number of ingress-egress pairs for which the link is critical
  • if α_sd = 1/(maxflow value for ingress-egress pair (s,d)), then critical links for ingress-egress pairs whose maxflow value is lower weigh heavier
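A rough sketch of the four steps above using networkx. For brevity the critical links are approximated by the arcs crossing one minimum cut per pair, whereas MIRA identifies the links belonging to all possible mincuts; the graph layout and capacities are assumptions of the sketch:

```python
import networkx as nx

def mira_route(G, pairs, a, b, demand, alpha=None):
    """G: DiGraph with a 'capacity' attribute (residual bandwidth) on each edge.
    pairs: ingress-egress (s, d) tuples. Routes `demand` units from a to b."""
    alpha = alpha or {p: 1.0 for p in pairs}
    weights = {e: 0.0 for e in G.edges}

    # 1. For every other ingress-egress pair, find (approximate) critical links
    #    and make them heavier in proportion to alpha_sd.
    for (s, d) in pairs:
        if (s, d) == (a, b):
            continue
        _, (side_s, side_d) = nx.minimum_cut(G, s, d, capacity="capacity")
        for u, v in G.edges:
            if u in side_s and v in side_d:            # arc crossing the minimum cut
                weights[(u, v)] += alpha[(s, d)]

    # 2. Drop links that cannot carry the demand, then 3. run Dijkstra on the weights.
    H = nx.DiGraph()
    H.add_edges_from((u, v, {"w": weights[(u, v)]})
                     for u, v, c in G.edges(data="capacity") if c >= demand)
    path = nx.shortest_path(H, a, b, weight="w")

    # 4. Route the demand and update the residual capacities.
    for u, v in zip(path, path[1:]):
        G[u][v]["capacity"] -= demand
    return path
```

Deferring the demand away from links that are critical for other pairs is what keeps their maxflow values, and hence the chance of accepting their future requests, high.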

87
On the Scalability-Performance Tradeoffs in MPLS
and IP Routing YilmazSpie02
  • Problems with MIRA
  • Computationally expensive
  • Time complexity: O(pVE²) per request
  • p = number of ingress-egress pairs
  • Ingress-egress pair matrix needs to be maintained
  • Per-flow state
  • No admission control
  • Should be able to reject a request, even if there is a feasible path, if accepting it will result in a high blocking probability for future demands

88
On the Scalability-Performance Tradeoffs in MPLS
and IP Routing YilmazSpie02
  • Profile-Based Routing Suri01
  • Traffic Profile (classID, si, di, Bi): aggregate expected traffic between ingress si and egress di for class classID
  • Each class is a separate commodity
  • Offline phase
  • find the optimal distribution of profiles
  • use the result as virtual capacity for each class
  • Online phase
  • route individual requests using the pre-allocated capacities (a small sketch follows below)
  • Offline phase serves as an admission control mechanism
  • may reject a request even if there is a feasible path, if the virtual capacity pre-allocated to its class is exhausted
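A tiny sketch of the online-phase admission behaviour described above; class names and virtual capacities are illustrative, and path selection within the class's pre-allocated capacity is omitted:

```python
class ProfileBasedAdmission:
    """Admission-control core of the online phase: a request is admitted only if its
    class still has enough of the virtual capacity computed offline, even when the
    physical network could fit the request somewhere else."""

    def __init__(self, virtual_capacity):
        self.remaining = dict(virtual_capacity)        # class_id -> remaining budget

    def admit(self, class_id, demand):
        if self.remaining.get(class_id, 0) < demand:
            return False                               # reject: class budget exhausted
        self.remaining[class_id] -= demand
        return True

ac = ProfileBasedAdmission({"gold": 10, "silver": 4})
print(ac.admit("silver", 3), ac.admit("silver", 3))    # True, then False
```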