1
Overview
  • Content Addressable Networks
  • Intentional Naming System
  • CAN vs INS

2
Resource Discovery: Desirable Features
  • Expressiveness
  • Responsiveness
  • Robustness
  • Easy Configuration
  • Example application: sending a job to a printer.

3
Gnutella Architecture
  • Advantages
  • No central file server.
  • Scalable: any number of hosts and any amount of
    content.
  • Problems
  • Flooding.
  • No guarantee that a file will be found even if it
    is in the network.
  • Every host receiving a query has to process it.

4
Motivation for CAN
  • Popularity of peer-to-peer networks.
  • Need for scalability: no central bottleneck.
  • Need to avoid flooding.
  • Separate the naming scheme from name resolution.
  • Counter-example: DNS, where .edu names are
    resolved by the .edu root name server.

5
Basic Design of CAN
  • Use of key-value pairs.
  • D-dimensional space divided amongst nodes.
  • Key hashed to a point P in D-dimensional space.
  • Retrieve the value for a key from the host owning
    point P.
  • Supported operations on key-value pairs:
  • Insertion.
  • Lookup.
  • Deletion.
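The key-to-point mapping can be sketched as follows. This is illustrative only: the hash function (SHA-256) and the [0, 4.0) coordinate bounds are assumptions, not fixed by the CAN design.

```python
import hashlib

def hash_to_point(key, d=2, side=4.0):
    """Hash a key to a point in a d-dimensional coordinate space.

    Sketch only: the choice of SHA-256 and the [0, side) bounds
    are assumptions for illustration.
    """
    digest = hashlib.sha256(key.encode()).digest()
    coords = []
    for i in range(d):
        # Use 4 bytes of the digest per dimension, scaled into [0, side).
        chunk = int.from_bytes(digest[4 * i:4 * i + 4], "big")
        coords.append(side * chunk / 2**32)
    return tuple(coords)
```

The node whose zone contains the returned point is responsible for storing the key-value pair.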

6
How this works.
[Figure: a 2-D coordinate space (0.0 to 4.0 on each axis) divided into zones owned by nodes 1-6; each node keeps a list of its zone neighbours (the example shows node 6's neighbour list).]
7
How this works.
[Figure: retrieve(K): hash(K) = (2.5, 1.5); the request is routed to the node owning zone (2-3, 1-2), which returns the stored value.]
8
Routing in CAN
  • Nodes have information about their logical
    neighbours.
  • Routing is greedy.
  • CAN messages carry the source and destination
    coordinates.
  • Average routing path length: (d/4)(n^(1/d)).
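The greedy rule can be sketched as below. Representing each node by its zone-centre coordinates is a simplifying assumption; a real CAN node forwards to the neighbour whose zone is closest to the destination point.

```python
import math

def greedy_next_hop(current, neighbours, dest):
    """Forward to the neighbour closest to the destination point.

    Sketch: nodes are modelled by zone-centre coordinates (an
    assumption for illustration). Returns None when no neighbour
    improves on the current node, i.e. the message has arrived.
    """
    if not neighbours:
        return None
    best = min(neighbours, key=lambda n: math.dist(n, dest))
    if math.dist(best, dest) < math.dist(current, dest):
        return best
    return None
```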

9
1st step: locate bootstrap node.
[Figure: the new node contacts a bootstrap node that is already part of the 2-D coordinate space (nodes 1-6).]
10
2nd step: select a point.
[Figure: the new node picks a random point p in the coordinate space.]
11
3rd step: send join request.
[Figure: the join request for point p is routed to the node whose zone contains p.]
12
4th step: split zone and inform neighbours.
[Figure: the occupant's zone is split in half; new node 7 takes over one half, and both nodes update their neighbour lists.]
13
CAN Construction
  • Join affects only O(d) nodes.
  • Bootstrap mechanism assumes availability of DNS.
  • Key-value pairs are also split.
  • Similar mechanism for departure.
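The zone split on join can be sketched as follows. Modelling a zone as a list of per-dimension intervals and splitting along the widest dimension are simplifying assumptions (CAN actually cycles through dimensions in a fixed order).

```python
def split_zone(zone):
    """Split a rectangular zone in half along its widest dimension.

    zone is a list of (lo, hi) intervals, one per dimension.
    Sketch: the widest-dimension rule is an assumption; CAN splits
    along dimensions in a fixed order.
    """
    dim = max(range(len(zone)), key=lambda i: zone[i][1] - zone[i][0])
    lo, hi = zone[dim]
    mid = (lo + hi) / 2
    left, right = list(zone), list(zone)
    left[dim] = (lo, mid)      # kept by the occupant node
    right[dim] = (mid, hi)     # handed to the joining node
    return left, right

def owns(zone, point):
    """True if the point falls inside the zone."""
    return all(lo <= x < hi for (lo, hi), x in zip(zone, point))
```

The key-value pairs whose points fall in the handed-over half move to the joining node.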

14
CAN Construction (contd)
  • Problem: logically close nodes may be physically
    far apart.
  • Key-value pairs are periodically refreshed.

15
Design Improvements
  • Reduce CAN path length
  • To correspond to IP path length.
  • Reduce CAN path latency
  • To correspond to IP path latency.
  • Improve robustness.
  • Employ load balancing mechanisms.

16
Multi-dimensional Coordinate Space
  • Increasing the number of dimensions reduces the
    routing path length.
  • Path length scales as O(d · n^(1/d)).
  • Improves routing fault tolerance.
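The effect of dimensionality on hop count can be checked numerically with the average path-length formula from the earlier slide:

```python
def avg_path_length(n, d):
    """Average CAN routing hops: (d/4) * n**(1/d),
    per the formula quoted on the routing slide."""
    return (d / 4) * n ** (1 / d)
```

For n = 10^6 nodes, this gives 500 hops at d = 2 but only about 32 at d = 4; past some point, the d/4 factor makes further dimensions counter-productive.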

17
Realities: Multiple Coordinate Spaces
  • With r realities, each node maintains r zones.
  • Replication of hash table.
  • Increased availability.
  • Consistency?

18
Multiple Coordinate Spaces
[Figure: the same nodes (1-6) in multiple realities of the coordinate space; each node owns a different zone in each reality.]
19
Other Improvements
  • RTT-weighted routing algorithms.
  • Table 1.
  • Topologically-sensitive construction.
  • Example follows.

20
1st step: ping the DNS landmarks.
[Figure: the new node measures ping times to landmarks DNS 1, DNS 2, and DNS 3; here ping time 2 > 1 > 3.]
21
Partition along X.
[Figure: the X axis is divided into bins by landmark ordering (labelled 1,2 > 3; 1,3 > 2; 2,3 > 1); the new node, with ping time 2 > 1 > 3, falls into the matching bin.]
22
Partition along Y.
[Figure: each X bin is further partitioned along the Y axis by the landmark ordering; the new node (ping time 2 > 1 > 3) joins the matching zone.]
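The binning idea behind these steps can be sketched briefly: nodes that measure the same landmark ordering end up in the same portion of the coordinate space (landmark names here are placeholders).

```python
def landmark_bin(rtts):
    """Order landmarks by increasing RTT.

    Sketch of the topologically-sensitive binning idea: nodes that
    compute the same ordering join the same region of the space.
    rtts maps a landmark name (placeholder) to its measured RTT.
    """
    return tuple(sorted(rtts, key=rtts.get))
```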
23
Topologically-sensitive construction.
24
Other Improvements (contd)
  • Overloading coordinate zones.
  • To me this looks like a combination of multiple
    realities and RTT-weighted routing.
  • Multiple hash functions.
  • Does not improve routing fault tolerance: single
    neighbour list.
  • Uniform partitioning.

25
Other Improvements (contd)
26
Self-configuration/healing
  • Organizes itself as nodes join and leave.
  • Use of heart-beats to keep track of peers.
  • In case of node failure:
  • Neighbours take over the zone.
  • Problem: existing key-value pairs are lost.
  • Rebuild the key-value information.
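The failure-detection step can be sketched as below; the timeout value is an assumed parameter, not one given by the paper.

```python
HEARTBEAT_TIMEOUT = 3.0  # seconds; an assumed value, not from the paper

def detect_failed_neighbours(last_seen, now):
    """Return neighbours whose heart-beats have stopped.

    last_seen maps a neighbour id to the time of its last heart-beat.
    A neighbour that detects a failure initiates takeover of the
    failed node's zone.
    """
    return [n for n, t in last_seen.items() if now - t > HEARTBEAT_TIMEOUT]
```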

27
Self-configuration/healing (contd)
  • Possibility of fragmentation.
  • One node maintaining multiple zones.
  • To solve this: background zone reassignment.

28
Robustness
  • Resilient in case of node failure/departure.
  • However, the key-value pairs need to be rebuilt.
  • Use of multiple realities.
  • Replicated content.
  • Reaching a coordinate point in any one reality
    is sufficient.
  • Consistency? Weak!

29
Robustness (contd)
  • Overloaded zones.
  • Multiple hash functions.

30
Scalable?
  • Self-reconfigurable with a large number of nodes?
  • Yes.
  • Path length? Possibly!
  • traceroute: max 30 hops at the IP level!
  • Num. hops < 32 with 4 realities (fig. 4).
  • Num. hops < 32 with 5 dimensions (fig. 5).

31
Scalable? (contd)
  • Latency stretch is constant for 4 dimensions with
    topologically-sensitive construction (fig. 8).

32
INS - Design Overview
  • Applications specify intention.
  • Ex.: the least-loaded printer.
  • Hierarchy of attribute-value (av) pairs.
  • Map intention to location.
  • Ex.: map to the address of a particular printer.

33
Name specifier
  • Used for queries.
  • Used for advertising services.
  • Example:
  • city = charlottesville
  • building = olsson
  • service = printer, laser, cs-bw1
  • Simple and quite expressive!

34
Example: printer
[Figure: attribute tree with root branching to city (charlottesville), building (olsson), and services (printer, water, projector, camera); printer branches to laser (cs-bw1) and ink (cs-ink1).]
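Name-specifier matching over such a tree can be sketched as below. The dict-of-tuples representation (attribute mapped to a value and its children) is an assumption for illustration, not the INS wire format.

```python
def matches(query, advert):
    """Recursively check that every attribute-value pair in the query
    appears in the advertised name.

    Names are modelled as dicts mapping attribute -> (value, children);
    this representation is an illustrative assumption, not INS's
    actual encoding.
    """
    for attr, (value, children) in query.items():
        if attr not in advert:
            return False
        adv_value, adv_children = advert[attr]
        if adv_value != value or not matches(children, adv_children):
            return False
    return True
```

A query for a laser printer matches an advertisement that additionally carries building and name attributes, since matching only requires the query's pairs to be present.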
35
Resolution and Routing
  • Early- and late-binding options.
  • Integrated resolution and routing (for late
    binding).
  • The resolution of a name yields a set of name
    records.
  • Services specify an application-controlled metric.

36
Early Binding.
[Figure: the client resolves the intentional name via the INRs to a network location, then sends data directly to the service.]
37
Late binding: Intentional Anycast.
[Figure: the client sends the intentional name together with the data; the INRs route it to the single best-matching service.]
38
Late binding: Intentional Multicast.
[Figure: the INRs deliver the message to all services matching the intentional name.]
39
INR Join.
[Figure: a new INR contacts the DSR (Domain Space Resolver) and pings all INRs in the returned list.]
40
Self-reconfiguration.
  • INR-to-INR ping times are used to readjust the
    topology.
  • Each INR designates the host with minimum ping
    time as its neighbour.
  • Thus the resolver overlay network organizes into a
    (minimum?) spanning tree.
  • Spawn new INR instances on inactive hosts.
  • INRs self-terminate under low load.
  • Exchange soft state to discover/update services.
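The neighbour-designation rule above amounts to a one-line selection; the sketch below assumes ping times are already measured.

```python
def pick_neighbour(ping_times):
    """Designate the INR with minimum ping time as neighbour
    (sketch of the overlay readjustment rule on this slide).

    ping_times maps an INR id (placeholder names) to its measured RTT.
    """
    return min(ping_times, key=ping_times.get)
```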

41
Scalability Issues: Problems
  • Join has problems:
  • Centralized bottleneck: the DSR.
  • Flooding: ping all INRs in the list.
  • Not a minimum spanning tree.
  • Lookups may not scale.
  • Soft-state message processing.
  • CPU bound.

42
Scalability Issues - Problems (contd)
43
Scalability Issues: Solutions
  • Spawn new instances on inactive INRs.
  • Partition into virtual spaces (explained later).
  • One overlay network for each virtual space.
  • Updates limited to a virtual space.
  • An additional av pair, vspace, is specified by
    services.

44
Virtual Space
vspace printer
vspace camera
vspace water
Each INR has a small cache of INRs in diff.
vspace.
On cache miss contact DSR.
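The cache-with-DSR-fallback behaviour can be sketched as below; the class, method names, and the dict model of the DSR are all illustrative assumptions.

```python
class INR:
    """Sketch of per-vspace lookup with a small cache and DSR fallback.

    The DSR is modelled as a dict mapping vspace -> list of INR
    addresses; every name here is an illustrative assumption.
    """
    def __init__(self, dsr):
        self.dsr = dsr
        self.cache = {}

    def resolve_vspace(self, vspace):
        if vspace not in self.cache:
            # Cache miss: fall back to the DSR and remember the answer.
            self.cache[vspace] = list(self.dsr[vspace])
        return self.cache[vspace]
```

Once cached, later lookups avoid the DSR entirely, which is exactly why stale entries (weak consistency) are possible.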
45
Virtual Space (contd)
46
DSR Revisited
  • Really a DNS-like service.
  • Functions:
  • Entry point for new INRs.
  • Cache of virtual space partitions.
  • Is this really scalable?
  • Robustness?
  • Single point of failure.

47
Robustness
  • Resolution via network of replicated resolvers.
  • Weak consistency
  • Makes replication possible without much overhead.
  • Is this sufficient?
  • The spanning tree used for resource
    discovery/update has a single point of failure.
  • DSR vulnerability.

48
Name-lookup performance
49
Name-lookup performance (contd)
  • Polynomial in the number of attributes in the
    name specifier.
  • From the graph: the max number of lookups decreases
    almost linearly with the number of names in the tree.
  • Google scans 2.5 billion pages.
  • Will this be able to support 2.5 billion services?

50
CAN vs. INS
  • Scalability bottlenecks:
  • CAN: the bootstrap mechanism.
  • INS: the DSR.
  • Consistency:
  • Both have weak consistency when replicating.
  • Self-configuration:
  • Mechanism: some kind of ping-based discovery.

51
CAN vs. INS (contd)
  • Routing?