P2P-SIP: Peer-to-peer Internet telephony using SIP
1
P2P-SIP: Peer-to-peer Internet telephony using SIP
  • Kundan Singh and Henning Schulzrinne
  • Columbia University, New York
  • May 2005
  • http://www.cs.columbia.edu/IRT/p2p-sip

2
Agenda
  • Introduction
  • What is P2P? What is SIP? Why P2P-SIP?
  • Architecture
  • SIP-using-P2P vs. P2P-over-SIP; components that can be P2P
  • Implementation
  • Choice of P2P (DHT); node join, leave; message routing
  • Conclusions and future work

Total 33 slides
3
What is P2P?
  • Share the resources of individual peers
  • CPU, disk, bandwidth, information, ...

4
P2P goals
  • Resource aggregation: CPU, disk, ...
  • Cost sharing/reduction
  • Improved scalability/reliability
  • Interoperability - heterogeneous peers
  • Increased autonomy at the network edge
  • Anonymity/privacy
  • Dynamic (join, leave), self organizing
  • Ad hoc communication and collaboration
  • Definition fuzzy
  • Both client and server? (true for a SIP proxy)
  • No need for a central server? (true for SIP-based media; SIP can be end-to-end, with proxy functions distributed among the end systems)

5
P2P file sharing
  • Napster
  • Centralized, sophisticated search
  • Gnutella
  • Flooding, TTL, unreachable nodes
  • FastTrack (KaZaA)
  • Heterogeneous peers
  • Freenet
  • Anonymity, caching, replication

6
P2P goals re-visited
  • If present => find it
  • Flooding is not scalable
  • Blind search is inefficient

P2P systems: structured vs. unstructured
  • Efficient searching
  • Proximity
  • Locality
  • Data availability
  • Decentralization
  • Scalability
  • Load balancing
  • Fault tolerance
  • Maintenance
  • Join/leave
  • Repair
  • Query time, number of messages, network usage,
    per node state

7
Distributed Hash Table (DHT)
  • Types of search
  • Central index (Napster)
  • Distributed index with flooding (Gnutella)
  • Distributed index with hashing (Chord)
  • Basic operations
  • find(key), insert(key, value), delete(key), ...
  • but no search() (see the sketch below)

Properties/types | Every peer has complete table | Chord | Every peer has one key/value
Search time or #messages | O(1) | O(log N) | O(N)
Join/leave #messages | O(N) | O((log N)²) | O(1)
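A minimal sketch of these operations (Python, not part of the original slides): keys are hashed onto the identifier circle and stored at their successor node, so exact-key find/insert/delete work, but there is no keyword search().

```python
import hashlib

M = 6                                            # 2**M identifiers, as in the Chord example
NODES = [1, 8, 14, 21, 32, 38, 42, 47, 54, 58]   # example ring
store = {}                                       # node id -> {key: value}

def key_of(name):
    """Hash an arbitrary name (file, user URI) onto the identifier circle."""
    return int(hashlib.sha1(name.encode()).hexdigest(), 16) % (2 ** M)

def successor(key):
    """A key is stored on the first node at or after it on the circle."""
    candidates = [n for n in sorted(NODES) if n >= key]
    return candidates[0] if candidates else min(NODES)   # wrap around

def insert(name, value):
    store.setdefault(successor(key_of(name)), {})[key_of(name)] = value

def find(name):
    return store.get(successor(key_of(name)), {}).get(key_of(name))

def delete(name):
    store.get(successor(key_of(name)), {}).pop(key_of(name), None)

insert("alice@columbia.edu", "128.59.19.194")
print(find("alice@columbia.edu"))   # exact-key lookup works
print(find("alice"))                # but there is no substring search() -> None
```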
8
Why P2P-SIP?
[Figure: conventional SIP. Alice's host (128.59.19.194) sends REGISTER alice@columbia.edu => 128.59.19.194 to the columbia.edu server; Bob's host sends INVITE alice@columbia.edu and receives Contact: 128.59.19.194.]
Client-server => maintenance, configuration, controlled infrastructure
9
How to combine SIP P2P?
  • P2P-over-SIP
  • Additionally, implement P2P using SIP messaging
  • SIP-using-P2P
  • Replace SIP location service by a P2P protocol

[Figure: SIP-using-P2P replaces the REGISTER/INVITE location lookup with INSERT/FIND on a P2P network; P2P-over-SIP carries REGISTER/INVITE over a P2P-SIP overlay. Either way the result is Alice's location, 128.59.19.194, and the caller sends INVITE sip:alice@128.59.19.194.]
10
SIP-using-P2P
  • Reuse optimized and well-defined external P2P
    network
  • Define P2P location service interface to be used
    in SIP
  • Extends to other signaling protocols

11
P2P-over-SIP
  • P2P algorithm over SIP without change in
    semantics
  • No dependence on external P2P network
  • Reuse and interoperate with existing components,
    e.g., voicemail
  • Built-in NAT/media relays
  • Message overhead

12
What else can be P2P?
  • Rendezvous/signaling
  • Configuration storage
  • Media storage
  • Identity assertion (?)
  • Gateway (?)
  • NAT/media relay (find best one)

13
What is our P2P-SIP?
  • Unlike server-based SIP architecture
  • Unlike proprietary Skype architecture
  • Robust and efficient lookup using DHT
  • Interoperability
  • DHT algorithm uses SIP communication
  • Hybrid architecture
  • Lookup in SIP + P2P
  • Unlike file-sharing applications
  • Data storage, caching, delay, reliability
  • Disadvantages
  • Lookup delay and security

14
Background DHT (Chord)
  • Identifier circle
  • Keys assigned to successor
  • Evenly distributed keys and nodes
  • Finger table: log(N) entries
  • ith finger points to the first node that succeeds n by at least 2^(i-1)
  • Stabilization for join/leave

Finger table of node 8 (key => node):
8+1 = 9   => 14
8+2 = 10  => 14
8+4 = 12  => 14
8+8 = 16  => 21
8+16 = 24 => 32
8+32 = 40 => 42
[Figure: identifier circle with nodes 1, 8, 14, 21, 32, 38, 42, 47, 54, 58; keys 10, 24, 30, 38 are stored at their successors.]
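The finger table above can be reproduced mechanically. A small sketch (Python, illustrative only) over the example ring in a 6-bit identifier space:

```python
M = 6
NODES = sorted([1, 8, 14, 21, 32, 38, 42, 47, 54, 58])

def successor(k):
    """First node at or after identifier k on the circle."""
    k %= 2 ** M
    for n in NODES:
        if n >= k:
            return n
    return NODES[0]                      # wrap around the circle

n = 8
for i in range(1, M + 1):
    start = (n + 2 ** (i - 1)) % 2 ** M  # i-th finger starts at n + 2^(i-1)
    print(f"8+{2**(i-1):2} = {start:2} -> node {successor(start)}")
# 8+ 1 =  9 -> node 14
# 8+ 2 = 10 -> node 14
# 8+ 4 = 12 -> node 14
# 8+ 8 = 16 -> node 21
# 8+16 = 24 -> node 32
# 8+32 = 40 -> node 42
```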
15
Background DHT (Chord)
  • Find: map key to node
  • Join, leave, or failure
  • Update the immediate neighbors
  • Successor and predecessor
  • Stabilize: eventually propagate the info (see the sketch below)
  • Reliability
  • log(N) successors; data replication
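The stabilize step can be summarized in a few lines. A compressed sketch (Python, following the published Chord pseudo-code rather than the sippeer implementation):

```python
class Node:
    def __init__(self, ident):
        self.id = ident
        self.successor = self
        self.predecessor = None

    def between(self, x, a, b):
        """True if identifier x lies on the circle strictly between a and b."""
        return (a < x < b) if a < b else (x > a or x < b)

    def stabilize(self):
        # Periodically ask the successor for its predecessor; adopt it if closer.
        x = self.successor.predecessor
        if x and self.between(x.id, self.id, self.successor.id):
            self.successor = x
        self.successor.notify(self)

    def notify(self, candidate):
        # A node claiming to be our predecessor; accept it if it is closer.
        if self.predecessor is None or \
           self.between(candidate.id, self.predecessor.id, self.id):
            self.predecessor = candidate
```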

16
Design Alternatives
[Figure: a DHT formed by a few dedicated servers vs. a DHT formed by all clients.]
Use DHT in a server farm, or use DHT for all clients (but some clients are resource-limited)
  • Use DHT among super-nodes
  • Hierarchy
  • Dynamically adapt

17
Architecture
[Figure: node architecture. On startup: multicast REGISTER to detect peers and NAT, then join the DHT. In operation: signup, find buddies, IM and call via REGISTER, INVITE, MESSAGE; on reset: find; on signout: leave and transfer registrations. The same stack supports both SIP-using-P2P and P2P-over-SIP.]
18
Naming and authentication
  • SIP URIs as node and user identifiers
  • Known node: sip:15@192.2.1.3
  • Unknown node: sip:17@example.com
  • User: sip:alice@columbia.edu
  • User name is chosen randomly by the system, by the user, or as the user's email
  • Email the randomly generated password
  • TTL, security

19
SIP messages
  • DHT (Chord) maintenance
  • Query the node at distance 2^k with node id 11
  • REGISTER
    To: <sip:11@example.invalid>
    From: <sip:7@128.59.15.56>
  • SIP/2.0 200 OK
    To: <sip:11@example.invalid>
    Contact: <sip:15@128.59.15.48>;predecessor=sip:10@128.59.15.55
  • Update my neighbor about me
  • REGISTER
    To: <sip:1@128.59.15.60>
    Contact: <sip:7@128.59.15.56>;predecessor=sip:1@128.59.15.60

[Figure: ring fragment with nodes 1, 7, 10, 15, 22; Find(11) returns node 15.]
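A sketch of how such a DHT query could be serialized as a SIP REGISTER, following the slide's convention (To carries the key being queried, From the querying node); the example.invalid domain comes from the slide, while the helper itself is hypothetical:

```python
def chord_find_as_register(queried_key, my_id, my_ip):
    # Illustrative only: encode Chord's find(queried_key) as a SIP REGISTER,
    # with To = key being looked up and From = querying node (per the slide).
    return (
        f"REGISTER sip:{queried_key}@example.invalid SIP/2.0\r\n"
        f"To: <sip:{queried_key}@example.invalid>\r\n"
        f"From: <sip:{my_id}@{my_ip}>\r\n"
        "\r\n"
    )

print(chord_find_as_register(11, 7, "128.59.15.56"))
# The 200 OK answer would carry the result, e.g.
# Contact: <sip:15@128.59.15.48>;predecessor=sip:10@128.59.15.55
```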
20
SIP messages
  • User registration
  • REGISTER
    To: sip:alice@columbia.edu
    Contact: sip:alice@128.59.19.194:8094
  • Call setup and instant messaging
  • INVITE sip:bob@example.com
    To: sip:bob@example.com
    From: sip:alice@columbia.edu

21
Node Startup
columbia.edu
  • SIP
  • REGISTER with SIP registrar
  • DHT
  • Discover peers: multicast REGISTER
  • SLP, bootstrap, host cache
  • Join DHT using node-key = Hash(ip)
  • Query its position in DHT
  • Update its neighbors
  • Stabilization: repeat periodically
  • User registers using user-key = Hash(alice@columbia.edu) (see the key-computation sketch below)

[Figure: on startup the node sends REGISTER alice@columbia.edu to the columbia.edu registrar, detects peers, and issues REGISTER alice => key 42 and REGISTER bob => key 12 to the responsible nodes on a ring with nodes 12, 14, 32, 42, 58.]
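Both identifier types come from hashing onto the Chord ring. A sketch of the key computation (Python; SHA-1 and the 32-bit ring width are assumptions, the deployed hash and ring size may differ):

```python
import hashlib

RING_BITS = 32     # illustrative ring size; the real deployment may differ

def chord_key(s, bits=RING_BITS):
    """Map a string (IP address or user id) onto the identifier circle."""
    digest = hashlib.sha1(s.encode()).hexdigest()
    return int(digest, 16) % (2 ** bits)

node_key = chord_key("128.59.19.194")          # node-key = Hash(ip)
user_key = chord_key("alice@columbia.edu")     # user-key = Hash(user id)
print(node_key, user_key)
```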
22
Node Leaves
  • Chord reliability
  • log(N) successors, replicate keys
  • Graceful leave (see the sketch below)
  • Un-REGISTER
  • Transfer registrations
  • Failure
  • Attached nodes detect it and re-REGISTER
  • New REGISTERs go to the new super-nodes
  • Super-nodes adjust the DHT accordingly

[Figure: the leaving node transfers the REGISTER binding for key 42 to its neighbor; nodes probe each other with OPTIONS and update the DHT.]
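A sketch of the graceful-leave step (Python; node.registrations, send_register and send_unregister are hypothetical helpers, not the sippeer API):

```python
def graceful_leave(node, successor, send_register, send_unregister):
    """Illustrative graceful leave: transfer every stored registration to the
    successor node, then un-REGISTER so neighbors repair their pointers."""
    for user_key, contacts in node.registrations.items():
        send_register(successor, user_key, contacts)   # re-home the binding
    node.registrations.clear()
    send_unregister(node)                              # REGISTER with Expires: 0
```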
23
Dialing Out (message routing)
  • Call, instant message, etc.
  • INVITE sip:hgs10@columbia.edu
  • MESSAGE sip:alice@yahoo.com
  • If existing buddy, use the cache first
  • If not found
  • SIP-based lookup (DNS NAPTR, SRV, ...)
  • P2P lookup
  • Use the DHT to locate the next hop: proxy or redirect (see the lookup-order sketch below)

[Figure: an INVITE for key 42 is routed over the DHT to the responsible node, which proxies the INVITE or returns 302 with the last-seen contact.]
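The lookup order above can be expressed compactly; a sketch with hypothetical dns_lookup/dht_lookup callbacks (Python, illustrative only):

```python
def locate(uri, buddy_cache, dns_lookup, dht_lookup):
    """Dial-out order from the slide: cached buddy location first, then
    conventional SIP resolution (DNS NAPTR/SRV), then the P2P lookup.
    dns_lookup and dht_lookup are hypothetical callbacks."""
    if uri in buddy_cache:          # existing buddy: last-seen address
        return buddy_cache[uri]
    contact = dns_lookup(uri)       # server-based SIP, if the domain has one
    if contact:
        return contact
    return dht_lookup(uri)          # otherwise route via the DHT (proxy or 302)
```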
24
Implementation
  • sippeer: C++, Unix (Linux), Chord
  • Nodes join and form the DHT
  • Node failure is detected and the DHT updated
  • Registrations are transferred on node shutdown

25
Adaptor for existing phones
  • Use P2P-SIP node as an outbound proxy
  • ICE for NAT/firewall traversal
  • STUN/TURN server in the node

26
Hybrid architecture
  • Cross register, or
  • Locate during call setup
  • DNS, or
  • P2P-SIP hierarchy

27
Offline messages
  • INVITE or MESSAGE fails
  • Responsible node stores the voicemail or instant message
  • Delivered using MWI or when the user comes online
  • Replicate the message at redundant nodes
  • Sequence number prevents duplicates
  • Security: how to avoid spies?
  • How to recover if all responsible nodes leave?

28
Conferencing (further study)
  • One member becomes mixer
  • Centralized conferencing
  • What if mixer leaves?
  • Fully distributed
  • Many to many signaling and media
  • Application level multicast
  • Small number of senders

29
Evaluation: scalability
  • #messages depends on
  • Keep-alive and finger-table refresh rate
  • Call arrival distribution
  • User registration refresh interval
  • Node join, leave, failure rates
  • M ≈ rf·(log N)² + c·log N + (k/t)·log N + λ·(log N)²/N (evaluated in the sketch below)
  • #nodes = f(capacity, rates)
  • CPU, memory, bandwidth
  • Verify by measurement and profiling
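Plugging illustrative numbers into the expression above (the formula as reconstructed here; the symbol names, base-2 logarithm and the example rates are assumptions, not measurements):

```python
from math import log2

def messages_per_node(N, rf, c, k, t, lam):
    """Per-node message rate from the reconstructed expression:
    finger refresh + call routing + registration refresh + join/leave repair.
    All rates are illustrative; units are messages per second."""
    return (rf * log2(N) ** 2          # keep-alive / finger-table refresh
            + c * log2(N)              # call arrivals, each routed in log N hops
            + (k / t) * log2(N)        # k registrations refreshed every t seconds
            + lam * log2(N) ** 2 / N)  # node join/leave/failure rate

# e.g. 10,000 supernodes, refresh every 30 s, 0.1 calls/s, 100 users per node
print(messages_per_node(N=10_000, rf=1/30, c=0.1, k=100, t=3600, lam=1.0))
```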

30
Evaluation: reliability and call setup latency
  • User availability depends on
  • Super-node failure distribution
  • Node keep-alive and finger refresh rate
  • User registration refresh rate
  • Replicate user registration
  • Measure effect of each
  • Call setup latency
  • Same as DHT lookup latency O(log(N))
  • Calls to known locations (buddies) are direct
  • DHT optimization can further reduce latency
  • User availability and retransmission timers
  • Measure effect of each

31
P2P vs. server-based SIP
  • Prediction
  • P2P for smaller, quick-setup scenarios
  • Server-based for corporate and carrier
  • Need federated system
  • multiple p2p systems, identified by DNS domain
    name
  • with gateway nodes

2000 requests/second 7 million registered users
32
Explosive growth (further study)
  • Cache replacement at super-nodes
  • Last seen many days ago
  • Cap on local disk usage (automatic)
  • Forcing a node to become super node
  • Graceful denial of service if overloaded
  • Switching between flooding, CAN, Chord, ...

33
More open issues (further study)
  • Security
  • Anonymity, encryption
  • Attack/DOS-resistant, SPAM-resistant
  • Malicious node
  • Protecting voicemails from storage nodes
  • Optimization
  • Locality, proximity, media routing
  • Deployment
  • SIP-P2P vs P2P-SIP, Intra-net, ISP servers
  • Motivation
  • Why should I run as super-node?

34
Comparison of P2P and server-based systems
 | server-based | P2P
scaling | with the number of servers | with user count, but limited by supernode count
efficiency | most efficient | DHT maintenance O((log N)²)
security | trust the server provider (binary trust) | trust most supernodes (probabilistic)
reliability | server redundancy; catastrophic failure possible | unreliable supernodes; catastrophic failure unlikely
35
Catastrophic failure
  • Server redundancy is well understood => can handle single-server failures
  • Catastrophic (system-wide) failure occurs when
    common element fails
  • Both server-based and P2P
  • all servers crash based on client stimulus (e.g.,
    common parser bug)
  • Traditional server-based system
  • servers share same facility, power, OS,
  • P2P system
  • less likely
  • share same OS?

36
Conclusions
  • P2P useful for VoIP
  • Scalable, reliable
  • No configuration
  • Not as fast as client/server
  • P2P-SIP
  • Basic operations easy
  • Implementation
  • sippeer: C++, Linux
  • Interoperates
  • Some potential issues
  • Security
  • Performance

http://www.cs.columbia.edu/IRT/p2p-sip
37
Backup slides
38
Napster
  • Centralized index
  • File names => active holders' machines
  • Sophisticated search
  • Easy to implement
  • Ensure correct search
  • Centralized index
  • Lawsuits
  • Denial of service
  • Can use server farms

[Figure: central index server S with peers P1-P5; a peer asks S "Where is 'quit playing games'?" and then retrieves the file from the holding peer over FTP.]
P3
39
Gnutella
  • Flooding
  • Overlay network
  • Decentralized
  • Robust
  • Not scalable.
  • Use TTL. Query can fail
  • Cannot ensure correctness

40
KaZaA (FastTrack)
  • Super-nodes
  • Election based on capacity (bandwidth, storage, CPU) and availability (connection time, public address)
  • Use heterogeneity of peers
  • Inherently non-scalable
  • If flooding is used

41
FreeNet
  • File is cached on reverse search path
  • Anonymity
  • Replication, cache
  • Similar keys on same node
  • Empirical log(N) lookup
  • TTL limits search
  • Only probabilistic guarantee
  • Transaction state
  • No remove( )
  • Use cache replacement

42
Distributed Hash Tables
  • Types of search
  • Central index (Napster)
  • Distributed index with flooding (Gnutella)
  • Distributed index with hashing (Chord)
  • Basic operations
  • find(key), insert(key, value), delete(key), no
    search()

Properties/types | Every peer has complete table | Every peer has one key/value
Search time or #messages | O(1) | O(N)
Join/leave #messages | O(N) | O(1)
43
CANContent Addressable Network
  • Each key maps to one point in the d-dimensional
    space
  • Each node responsible for all the keys in its
    zone.
  • Divide the space into zones.

[Figure: 2-d coordinate space (0.0-1.0 on each axis) divided into zones owned by nodes A-E.]
44
CAN
[Figure: node X locates the point (x,y) = (0.3, 0.1); node Z joins by splitting an existing zone.]
State: 2d neighbors per node; search: O(d·N^(1/d)) hops
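For intuition, comparing these CAN costs with Chord's log N for an example population (illustrative arithmetic only):

```python
from math import log2

# Order-of-magnitude comparison for N = 1,000,000 peers
N = 1_000_000
for d in (2, 4, 8):
    print(f"CAN d={d}: state 2d = {2*d:2}, search ~ d*N^(1/d) = {d * N**(1/d):7.0f}")
print(f"Chord : state log N = {log2(N):.0f}, search ~ log N = {log2(N):.0f}")
```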
45
Chord
  • Identifier circle
  • Keys assigned to successor
  • Evenly distributed keys and nodes

[Figure: identifier circle with nodes 1, 8, 14, 21, 32, 38, 42, 47, 54, 58; keys 10, 24, 30, 38 are stored at their successors.]
46
Chord
Finger table of node 8 (key => node):
8+1 = 9   => 14
8+2 = 10  => 14
8+4 = 12  => 14
8+8 = 16  => 21
8+16 = 24 => 32
8+32 = 40 => 42
[Figure: the same identifier circle as above.]
  • Finger table: log(N) entries
  • ith finger points to the first node that succeeds n by at least 2^(i-1)
  • Stabilization after join/leave

47
Tapestry
  • Node IDs with base B = 2^b
  • Route to numerically closest node to the given
    key
  • Routing table has O(B) columns. One per digit in
    node ID.
  • Similar to CIDR but suffix-based

[Figure: overlay of nodes 123, 135, 324, 364, 365, 427, 564, 763.]
Routing table of node 364 (one column per digit position; ? = any digit):
N2: 064 164 264 364 464 564 664
N1: ?04 ?14 ?24 ?34 ?44 ?54 ?64
N0: ??0 ??1 ??2 ??3 ??4 ??5 ??6
Example route toward 364: 4 => 64 => 364 (one more matching digit per hop)
48
Pastry
  • Prefix-based
  • Route to a node whose ID shares a prefix with the key at least one digit longer than this node's (see the routing sketch below)
  • Neighbor set, leaf set and routing table.

[Figure: Route(d46a1c) starting at node 65a1fc: 65a1fc => d13da3 => d4213f => d462ba => d46a1c; each hop shares a longer prefix with the key (other nodes shown: d467c4, d471f1).]
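A simplified prefix-routing step in the spirit of Pastry (Python, illustrative only; real Pastry also uses the leaf set and numeric closeness): each hop forwards to a known node sharing at least one more prefix digit with the key, which reproduces the route in the figure above.

```python
def shared_prefix_len(a, b):
    """Number of leading hex digits two identifiers have in common."""
    n = 0
    for x, y in zip(a, b):
        if x != y:
            break
        n += 1
    return n

def next_hop(node_id, key, known_nodes):
    """Forward to any known node whose id shares a longer prefix with the key."""
    here = shared_prefix_len(node_id, key)
    better = [n for n in known_nodes if shared_prefix_len(n, key) > here]
    return min(better, key=lambda n: shared_prefix_len(n, key)) if better else None

hops = ["65a1fc"]
nodes = ["d13da3", "d4213f", "d462ba", "d467c4", "d471f1", "d46a1c"]
while hops[-1] != "d46a1c":
    hops.append(next_hop(hops[-1], "d46a1c", nodes))
print(" -> ".join(hops))   # 65a1fc -> d13da3 -> d4213f -> d462ba -> d46a1c
```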
49
Other schemes
  • Distributed TRIE
  • Viceroy
  • Kademlia
  • SkipGraph
  • Symphony

50
Comparison
Property/scheme | Unstructured | CAN | Chord | Tapestry | Pastry | Viceroy
Routing | O(N) or no guarantee | d·N^(1/d) | log N | log_B N | log_B N | log N
State | constant | 2d | log N | log_B N | B·log_B N | log N
Join/leave | constant | 2d | (log N)² | log_B N | log_B N | log N
Reliability and fault resilience | data at multiple locations; retry on failure; finding popular content is efficient | multiple peers for each data item; retry on failure; multiple paths to the destination | replicate data on consecutive peers; retry on failure | replicate data on multiple peers; keep multiple paths to each peer | replicate data on multiple peers; keep multiple paths to each peer | routing load is evenly distributed among participating lookup servers
51
Server-based vs peer-to-peer
 | server-based | peer-to-peer
Reliability, failover latency | DNS-based; depends on client retry timeout, DB replication latency, registration refresh interval | DHT self-organization and periodic registration refresh; depends on client timeout, registration refresh interval
Scalability, number of users | depends on the number of servers in the two stages | depends on refresh rate, join/leave rate, uptime
Call setup latency | one or two steps | O(log N) steps
Security | TLS, digest authentication, S/MIME | additionally needs a reputation system and a way to work around spy nodes
Maintenance, configuration | administrator: DNS, database, middle-boxes | automatic; one-time bootstrap node addresses
PSTN interoperability | gateways, TRIP, ENUM | interact with server-based infrastructure, or co-locate a peer node with the gateway
52
Related work Skype From the KaZaA community
  • Host cache of some super nodes
  • Bootstrap IP addresses
  • Auto-detect NAT/firewall settings
  • STUN and TURN
  • Protocol among super nodes ??
  • Allows searching a user (e.g., kun)
  • History of known buddies
  • All communication is encrypted
  • Promote to super node
  • Based on availability, capacity
  • Conferencing

53
Reliability and scalability: two-stage architecture for CINEMA
DNS SRV configuration (first-stage proxies s1-s3 and backup ex; second-stage clusters a1/a2 and b1/b2):
example.com    _sip._udp  SRV 0 40 s1.example.com
                          SRV 0 40 s2.example.com
                          SRV 0 20 s3.example.com
                          SRV 1 0  ex.backup.com
a.example.com  _sip._udp  SRV 0 0  a1.example.com
                          SRV 1 0  a2.example.com
b.example.com  _sip._udp  SRV 0 0  b1.example.com
                          SRV 1 0  b2.example.com
Users a@example.com, ... are handled by cluster a; a call to sip:bob@example.com is proxied by a first-stage server (s1-s3) to sip:bob@b.example.com (b1 or b2).
Request rate = f(stateless, groups); bottleneck: CPU, memory, or bandwidth? Failover latency?
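A client picks among these SRV records by priority first, then weight; a selection sketch in the spirit of RFC 2782 (Python, illustrative only):

```python
import random

# The example.com first-stage records above: (priority, weight, target)
SRV = [(0, 40, "s1.example.com"), (0, 40, "s2.example.com"),
       (0, 20, "s3.example.com"), (1, 0, "ex.backup.com")]

def pick_server(records):
    """Lowest priority wins; within that priority, choose in proportion
    to the weights (simplified RFC 2782 selection)."""
    best = min(p for p, _, _ in records)
    candidates = [(w, host) for p, w, host in records if p == best]
    total = sum(w for w, _ in candidates) or 1
    r = random.uniform(0, total)
    for w, host in candidates:
        r -= w
        if r <= 0:
            return host
    return candidates[-1][1]

print(pick_server(SRV))      # usually s1 or s2; s3 for roughly 20% of calls
# If all priority-0 servers fail, the client retries with ex.backup.com.
```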
54
Related workP2P
  • P2P networks
  • Unstructured (KaZaA, Gnutella, ...)
  • Structured (DHT: Chord, CAN, ...)
  • Skype and related systems
  • Flooding-based: chat, Groove, Magi
  • P2P-SIP telephony
  • Proprietary: NimX, Peerio, ...
  • File sharing: SIPShare

55
Why we chose Chord?
  • Chord can be replaced by another
  • As long as it can map to SIP
  • High node join/leave rates
  • Provable probabilistic guarantees
  • Easy to implement
  • X proximity based routing
  • X security, malicious nodes

56
Related workJXTA vs Chord in P2P-SIP
  • JXTA
  • Protocol for communication (peers, groups, pipes,
    etc.)
  • Stems from unstructured P2P
  • P2P-SIP
  • Instead of SIP, JXTA can also be used
  • Separate search (JXTA) from signaling (SIP)

57
Find(user)
  • Option 2: with REGISTER
  • The user REGISTERs with the nodes responsible for its key
  • Refreshes periodically
  • Allows offline messages (?)
  • Option 1: no REGISTER
  • Node computes its key based on the user ID
  • Nodes join the overlay based on that ID
  • One node <=> one user

[Figure: Option 2, users REGISTER with the responsible nodes (alice => key 42, bob => key 12, sam => key 24) on a ring with nodes 12, 14, 24, 32, 42, 56, 58; Option 1, each node joins the ring at its user's key.]
58
P2P-SIP: security open issues (threats, solutions, issues)
  • More threats than in server-based systems
  • Privacy, confidentiality
  • Malicious node
  • Doesn't forward all calls, logs call history (spy), ...
  • Free riding, motivation to become a super-node
  • Existing solutions
  • Focus on file-sharing (non-real time)
  • Centralized components (bootstrap, CA)
  • Assume co-operating peers (works for a server-farm DHT)
  • Collusion
  • Hide the security algorithm (e.g., Yahoo, Skype)
  • Chord
  • Recommendations, design principles, ...

59
P2P so far
Kademlia protocol: eMule, MindGem, MLDonkey
MANOLITO/MP2P network: Blubster, Piolet, RockItNet
Napster network: Napigator, OpenNap, WinMX
Peercasting-type networks: PeerCast, IceShare, Freecast
WPNP network: WinMX
Other networks: Akamai, Alpine, ANts P2P, Ares Galaxy, Audiogalaxy network, Carracho, Chord, The Circle, Coral, Dexter, Diet-Agents, EarthStation 5 network, Evernet, FileTopia, GNUnet, Grapevine, Groove, Hotwire, iFolder, konspire2b, Madster/Aimster, MUTE, Napshare, OpenFT, Poisoned, P-Grid, IRC @find/XDCC, JXTA, Peersites, MojoNation, Mnet, Overnet network, Scour, Scribe, Skype, Solipsis, SongSpy network, Soulseek, SPIN, SpinXpress, SquidCam, Swarmcast, WASTE, Warez P2P, Winny
  • Gnutella network
  • Acquisitionx (Mac OS X)
  • BearShare
  • BetBug
  • Cabos
  • CocoGnut (RISC OS)
  • Gnucleus
  • Grokster
  • iMesh Light
  • gtk-gnutella (Unix)
  • LimeWire (Java)
  • MLDonkey
  • mlMac
  • Morpheus
  • Phex
  • Poisoned
  • Swapper
  • Shareaza
  • Applejuice network
  • Applejuice Client
  • BitTorrent network
  • ABC
  • Azureus
  • BitAnarch
  • BitComet
  • BitSpirit
  • BitTornado
  • BitTorrent
  • BitTorrent
  • BitTorrent.Net
  • G3 Torrent
  • mlMac
  • MLDonkey
  • QTorrent
  • SimpleBT
  • Shareaza
  • TomatoTorrent (Mac OS X)

eDonkey network: aMule (Linux), eDonkey client (no longer supported), eMule, LMule, MindGem, MLDonkey, mlMac, Shareaza, xMule, iMesh Light
ed2k (eDonkey 2000 protocol): eDonkey, eMule, xMule, aMule, Shareaza
FastTrack protocol: giFT, Grokster, iMesh, iMesh Light, Kazaa, Kazaa Lite, K++, Diet Kaza, CleanKazaa, Mammoth, MLDonkey, mlMac, Poisoned
Freenet network: Entropy, Freenet, Frost