Title: Broadcast Federation: Untangling the Internet Multicast Landscape
Yatin Chawathe
AT&T Research, Menlo Park
Joint work with Mukund Seshadri
Overview
- The problem
- No global multicast/broadcast solution
- Non-interoperable broadcast technologies
- The missing piece
- An internetworking architecture
- Our approach
- Overlay of peering gateways with explicit
peering agreements
The Problem
Too many non-interoperable broadcast protocols
How do clients in one network access content
being broadcast in another network?
No single solution is viable
- IP multicast
- No viable inter-domain protocol
- Address scarcity
- SSM
- Better semantics and business model
- But, restricted service model
- Overlay CDNs
- Easier to deploy, but less efficient and needs more infrastructure
An Interconnection Architecture
- Composition of diverse broadcast networks
- Equivalent of BGP in the unicast IP world
- Requirements
- Support a range of network- and application-layer protocols
- Scale up in size (# of sessions, # of clients)
- Support explicit service agreements
Our Approach: Broadcast Federation
Build an overlay network across Broadcast Networks (BNs), i.e., a Broadcast Federation
Service Model
- Federation session owned by single BN
- Convenient rendezvous point
- Distribution trees rooted at owner BN
- Independent of intra-network protocols
- URL-style session names
- bfed://owner_bn/native_session_name?pmtr=value
- e.g., bfed://multicast.att.net/224.4.4.4:4444
- Parameters provide session-specific information
- e.g., sources=multiple, metric=bandwidth
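A minimal sketch of parsing such a session name, written in C to match the prototype's implementation language; the struct and function names are hypothetical, not taken from the prototype:

/* Sketch: parse a Federation session name of the form
 * bfed://owner_bn/native_session_name?pmtr=value.
 * All names here are illustrative assumptions. */
#include <stdio.h>
#include <string.h>

struct bfed_session {
    char owner_bn[128];     /* e.g., "multicast.att.net" */
    char native_name[128];  /* e.g., "224.4.4.4:4444" */
    char params[128];       /* e.g., "sources=multiple&metric=bandwidth" */
};

static int bfed_parse(const char *url, struct bfed_session *s)
{
    const char *p = url, *slash, *q;
    if (strncmp(p, "bfed://", 7) != 0)
        return -1;                       /* not a Federation session name */
    p += 7;
    slash = strchr(p, '/');
    if (!slash)
        return -1;                       /* owner BN is mandatory */
    snprintf(s->owner_bn, sizeof s->owner_bn, "%.*s", (int)(slash - p), p);
    p = slash + 1;
    q = strchr(p, '?');                  /* optional session parameters */
    if (q) {
        snprintf(s->native_name, sizeof s->native_name, "%.*s", (int)(q - p), p);
        snprintf(s->params, sizeof s->params, "%s", q + 1);
    } else {
        snprintf(s->native_name, sizeof s->native_name, "%s", p);
        s->params[0] = '\0';
    }
    return 0;
}

Parsing bfed://multicast.att.net/224.4.4.4:4444 would then yield owner_bn "multicast.att.net" and native_name "224.4.4.4:4444".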
Protocol Layers
- I. Routing
- Propagate reachability information
- II. Tree-building
- Handle session JOINs and LEAVEs
- III. Data-forwarding
- Construct transport channels for data packets
- IV. NativeNet
- Customize lower layers for specific BN
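The JOIN/LEAVE, SROUTE, REDIRECT, and TRANSLATE messages described on the following slides map directly onto these layers; a hypothetical sketch of how a BG might tag them on the wire:

/* Sketch: control-message tags for the four protocol layers.
 * Message names come from the slides; this encoding is an assumption. */
enum bfed_msg_type {
    BFED_MSG_ROUTE_UPDATE,   /* I.   Routing: reachability propagation   */
    BFED_MSG_JOIN,           /* II.  Tree-building: subscribe to session */
    BFED_MSG_LEAVE,          /* II.  Tree-building: unsubscribe          */
    BFED_MSG_SROUTE_REQ,     /* II.  session-specific route discovery    */
    BFED_MSG_SROUTE_RESP,
    BFED_MSG_REDIRECT,       /* II.  point Mediator at a better BG       */
    BFED_MSG_TRANSLATE,      /* III. Data-forwarding: channel setup      */
    BFED_MSG_DATA            /* III. application payload                 */
};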
I. Routing Layer
- Session-agnostic
- Routing from BN to BN
- For finer-grained routes, maintain routes to a BN via all reachable BGs
- Content-aware routing
- Maintain multiple routing tables
- Real-time vs bulk-data
- Single-source vs multi-source
- Latency vs bandwidth
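One plausible realization, sketched under the assumption that each content class simply indexes a separate next-hop table and the cheapest entry in the class wins; the structures are illustrative, not the prototype's:

/* Sketch: content-aware routing as one next-hop table per traffic class. */
#include <string.h>

enum traffic_class {              /* classes named on this slide */
    CLASS_REALTIME_LATENCY,
    CLASS_REALTIME_BANDWIDTH,
    CLASS_BULK_DATA,
    NUM_CLASSES
};

struct route_entry {
    char dest_bn[64];             /* destination broadcast network */
    char next_hop_bg[64];         /* peering gateway to forward through */
    unsigned cost;                /* metric appropriate to the class */
};

struct routing_layer {
    struct route_entry table[NUM_CLASSES][256];
    int count[NUM_CLASSES];
};

/* Return the cheapest route toward dest_bn within a given class,
 * or NULL if the BN is unreachable for that kind of traffic. */
static const struct route_entry *
route_lookup(const struct routing_layer *rl, enum traffic_class c,
             const char *dest_bn)
{
    const struct route_entry *best = NULL;
    for (int i = 0; i < rl->count[c]; i++) {
        const struct route_entry *e = &rl->table[c][i];
        if (strcmp(e->dest_bn, dest_bn) == 0 && (!best || e->cost < best->cost))
            best = e;
    }
    return best;
}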
II. Tree-building Layer
- One tree per session
- Reverse shortest path, rooted at owner BN
- Single-source → uni-directional tree
- Multi-source → bi-directional tree
- Two components
- Mediator
- How does a client send a JOIN to its access BN?
- SROUTEs
- How does the access BN pick the best upstream node?
Mediator
- Abstract interface to clients
- Clients send JOINs to Mediator
- Mediator forwards them on
- Implemented in the BN's native fabric or integrated in BGs
- For example
- CDN: Mediator is part of the edge servers
- IP multicast network: a well-known multicast group
[Figure: a client's JOIN flows through the Mediator between BN1 and BN2]
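A sketch of the Mediator's forwarding step; lookup_default_bg() and bg_send() are hypothetical stand-ins for whatever discovery and transport the BN's native fabric provides:

/* Sketch: a Mediator relays a client JOIN toward the session's owner BN.
 * The helper functions are hypothetical, not part of the prototype. */
struct join_msg {
    char session[256];       /* full bfed:// session name */
    char client_addr[64];    /* where the client wants data delivered */
};

extern const char *lookup_default_bg(const char *owner_bn);
extern void bg_send(const char *bg, const struct join_msg *m);

static void mediator_on_join(const char *owner_bn, const struct join_msg *m)
{
    /* Forward to the default BG for the owner BN; that BG may answer
     * with a REDIRECT to a session-specific BG (next slide). */
    bg_send(lookup_default_bg(owner_bn), m);
}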
SROUTEs: Session-specific Routes
[Figure: BGs A, B, C, and D connect BN2 to BN1; JOIN messages, an SROUTE request/response exchange, and a REDIRECT flow between the Mediator and the BGs]
- All messages are soft-state
- Distribution tree automatically adapts to route changes
- Pros and cons
- SROUTEs stored only along distribution tree path
- Increased setup latency
- Client sends JOIN request to local Mediator
- Two possible routes for connecting from BN2 to BN1
- Mediator forwards JOIN request to default BG (D)
for owner BN
- If default BG has no session-specific route, then
- It sends SROUTE request toward BN1
- BN1 returns SROUTE response
- Contains session-specific costs local to owner BN
- Default BG (D) computes best session-specific route
- Sends REDIRECT response to Mediator
- Mediator sends JOIN request to session BG (C)
- JOIN request propagates up to BN1
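Because every message is soft-state, each BG periodically re-sends its JOIN upstream and expires downstream state that is not refreshed; a minimal sketch of that timer logic, with assumed constants and hypothetical send_join()/send_leave() helpers:

/* Sketch: soft-state maintenance for per-session tree state.
 * REFRESH_SECS and EXPIRE_SECS are assumed tuning constants. */
#include <time.h>

#define REFRESH_SECS 30      /* how often we re-send our JOIN upstream */
#define EXPIRE_SECS  90      /* downstream state dies without refresh  */

extern void send_join(const char *bg, const char *session);   /* hypothetical */
extern void send_leave(const char *bg, const char *session);  /* hypothetical */

struct tree_state {
    char session[256];
    char upstream_bg[64];    /* chosen via SROUTE/REDIRECT */
    time_t last_sent;        /* last JOIN we sent upstream */
    time_t last_heard;       /* last refresh from downstream */
};

static void soft_state_tick(struct tree_state *t, time_t now)
{
    if (now - t->last_heard > EXPIRE_SECS) {
        /* No downstream interest left: prune ourselves from the tree. */
        send_leave(t->upstream_bg, t->session);
        return;
    }
    if (now - t->last_sent >= REFRESH_SECS) {
        /* Re-JOIN; if routes have changed, the upstream BG is
         * re-selected, so the tree adapts automatically. */
        send_join(t->upstream_bg, t->session);
        t->last_sent = now;
    }
}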
III. Data-forwarding Layer
[Figure: a JOIN for bfed://BN1/CDN-URL propagates from the Mediator in BN2 (an IP multicast network) through an SSM network to the CDN in BN1, where it becomes a native JOIN for CDN-URL; TRANSLATE responses return local addresses hop-by-hop: udp://IP:port from the CDN, ssm://S,G:port across the SSM network, and multicast://G:port within BN2]
- Hop-by-hop TRANSLATE messages establish data path
- Map Federation session names into local network addresses
- Use unicast between external peers; use native broadcast within a BN
- Provides flexible data path allocation
- E.g., cluster-based BGs assign different backend
nodes for different sessions
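A sketch of the TRANSLATE step at one BG: map the Federation session name onto a locally allocated channel and report that channel's address to the downstream hop. allocate_channel mirrors the NativeNet call listed on the next slide; the surrounding glue is hypothetical:

/* Sketch: handling a TRANSLATE at a BG. allocate_channel() corresponds
 * to the NativeNet API; the other helpers are assumptions. */
struct channel {
    char local_addr[128];    /* e.g., "multicast://G:port" or "udp://IP:port" */
};

extern struct channel *allocate_channel(const char *session);
extern void reply_translate(const char *downstream_peer, const char *local_addr);

static void bg_on_translate(const char *session, const char *downstream_peer)
{
    /* Map the bfed:// session name onto this BN's native fabric:
     * native broadcast inside the BN, unicast toward external peers. */
    struct channel *ch = allocate_channel(session);
    /* Tell the downstream hop which local address carries the data. */
    reply_translate(downstream_peer, ch->local_addr);
}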
IV. NativeNet Layer
- Customization API for each BG
- allocate_channel
- subscribe/refresh/unsubscribe
- reclaim_channel
- get_sroutes
- send_data
- recv_join/recv_leave
- recv_data
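Given the Linux/C prototype described on the next slide, one natural packaging of this API is a per-BN dispatch table of function pointers; the function names come from this slide, but the signatures are assumptions:

/* Sketch: NativeNet customization API as a per-BN dispatch table.
 * Names are from the slide; signatures are illustrative guesses. */
struct nativenet_ops {
    /* data-path channel management */
    void *(*allocate_channel)(const char *session);
    int   (*subscribe)(void *chan);
    int   (*refresh)(void *chan);
    int   (*unsubscribe)(void *chan);
    void  (*reclaim_channel)(void *chan);
    /* session-specific routing costs local to this BN */
    int   (*get_sroutes)(const char *session, unsigned *costs, int max);
    /* data and control plumbing */
    int   (*send_data)(void *chan, const void *buf, int len);
    void  (*recv_join)(const char *session, const char *from);
    void  (*recv_leave)(const char *session, const char *from);
    void  (*recv_data)(void *chan, const void *buf, int len);
};
/* e.g., one table each for IP multicast, SSM, and the HTTP-based CDN. */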
Status
- We have a prototype implementation
- Linux/C user-level application
- NativeNet implementations for
- IP multicast, source-specific multicast, and HTTP-based CDN
- Each is 400-600 lines of code
- Preliminary results
- A single BG can handle the load on a 100 Mbps network; 4 BG nodes are sufficient for 1 Gbps
Conclusion
- Fragmented broadcasting landscape
- Many non-interoperable broadcast protocols
- Loosely-coupled Federation architecture
- Internetwork of diverse broadcast technologies
- Application-layer Broadcast Gateways
- Explicit peering agreements
- Overlay of unicast and broadcast connections
Open Questions
- Automated mediator discovery
- How do clients discover their access BN?
- Transport mismatch
- Multiple routing tables avoid problematic paths, e.g., real-time video via TCP-based BNs
- What if the only path has a transport mismatch?
- Complex routing queries
- E.g., combination of bandwidth and system load
Rate-unlimited Sessions
Rate-limited Sessions