Title: SONET/SDH
1SONET/SDH
- Yaakov (J) Stein
- Chief Scientist
- RAD Data Communications
2Course Outline
- Background (analog telephony, TDM, PDH)
- SONET/SDH history and motivation
- Architecture (path, line, section)
- Rates and frame structure
- Payloads and mappings
- Protection and rings
- VCAT and LCAS
- Handling packet data
3Background
4The PSTN circa 1900
(diagram: a pair of copper wires forms the local loop; manual routing at the local exchange / central office (CO))
- Analog voltage travels over copper wire end-to-end
- Voice signal arrives at destination severely attenuated and distorted
- Routing performed manually at exchange office(s)
- Routing is an expensive and lengthy operation
- Route is maintained for duration of call
5Telephony Multiplexing
- In 1900, 25% of telephony revenues went to copper mines
- standard was 18 gauge, long distance even heavier
- two wires per loop to combat cross-talk
- Needed a method to place multiple conversations on a single trunk
- 1918 Carrier system (FDM)
- 5 conversations on single trunk
- later extended to 12 (group)
- still later supergroups (60), master groups (600)
6The Digitalization of the PSTN
- Shannon (Bell Labs) proved that
- Digital communications
- is always better than
- Analog communications
- and the PSTN became digital
- Better means
- More efficient use of resources (e.g. more channels on trunks)
- Higher voice quality (less noise, less distortion)
- Added features
- After the invention of the transistor, in 1963 the T-carrier system (TDM) was introduced
- 1 byte per sample, 8000 samples per second
- T1: 24 conversations per trunk
- 2 groups per cable!
7and switching became easier too
(diagram: analog crossbar switch vs. processor-controlled Digital Cross-connect (DXC))
- Complexity increases rapidly with size
8Optimized Telephony Routing
- Circuit switching (route is maintained for duration of call)
- Route set-up is an expensive operation, just as it was for manual switching
- Today, complex least cost routing algorithms are used
- Call duration consists of set-up, voice and tear-down phases
9The PSTN circa 1960
(diagram: trunks carry circuits; the local loop is now the subscriber line; automatic routing through the universal telephone network)
- Analog voltages used throughout, but extensive Frequency Division Multiplexing
- Voice signal arrives at destination after amplification and filtering to 4 kHz
- Automatic routing
- Universal dial-tone
- Voltage and tone signaling
- Circuit switching (route is maintained for duration of call)
10The Present PSTN
(diagram: subscriber lines in the last mile connect to class 5 switches, which are interconnected via tandem switches in the PSTN core)
- Analog voltages and copper wire used only in the last mile
- but core designed to mimic original situation
- Voice signal filtered to 4 kHz at input to digital network
- Time Division Multiplexing of digital signals in the network
- Extensive use of fiber optic and wireless physical links
- T1/E1, PDH and SONET/SDH synchronous protocols
- Signaling can be channel/trunk associated or via separate network (SS7)
- Automatic routing
- Circuit switching (route is maintained for duration of call)
- Complex routing optimization algorithms (LP, Karmarkar, etc.)
11TDM timing
- Time Division Multiplexing relies on all channels (timeslots) having precisely the same timing (frequency and phase)
- In order to enforce this, the TDM device itself frequently performs the digitization
12if the inputs are already digital
- If the TDM switch does not digitize the analog signals
- then there can be a problem
- the clocks used to digitize do not have identical frequencies
- we get byte slips! (well, actually, we can get bit slips first)
- (exaggerated pictorial example)
- Numerical example (worked through in the sketch below)
- clock derived from 8000 Hz quartz crystal
- typical crystal accuracy ± 50 ppm
- So 2 crystals can differ by 100 ppm
- i.e. 0.8 samples / second
- So difference is 1 sample after 1 ¼ seconds
(diagram: component signals entering a TDM multiplexer)
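The slip arithmetic can be checked directly; here is a minimal Python sketch (values assumed from the example above, not from any standard):

```python
# Sketch of the numerical example: two free-running 8000 Hz sampling clocks
# whose crystals differ by the worst-case 100 ppm.
NOMINAL_HZ = 8000          # samples per second
OFFSET_PPM = 100           # two +/-50 ppm crystals at opposite extremes

drift = NOMINAL_HZ * OFFSET_PPM * 1e-6            # 0.8 samples per second
print(f"drift: {drift} samples/s")
print(f"one-sample (byte) slip every {1/drift} s")  # 1.25 s
```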
13The fix
- We must ensure that all the clocks have the same frequency
- Every telephony network has an accurate clock called a stratum 1 or Primary Reference Clock
- All other clocks are directly or indirectly locked to it (master/slave)
- A TDM receiving device can lock onto the source clock based on the incoming data (FLL, PLL)
- For this to work, we must ensure that the data has enough transitions
- (special line coding, scrambling bits, etc.)
14Comparing clocks
- A clock is said to be isochronous (iso=equal, chrono=time) if its ticks are equally spaced in time
- 2 clocks are said to be synchronous (syn=same, chrono=time) if they tick in time, i.e. have precisely the same frequency
- 2 clocks are said to be plesiochronous (plesio=near, chrono=time) if they are nominally of the same frequency
- but are not locked
15PDH principle
- If we want yet higher rates, we can mux together TDM signals (tributaries)
- We could demux the TDM timeslots and directly remux them, but that is too complex
- The TDM inputs are already digital, so we must
- insist that the mux provide clock to all tributaries
- (not always possible, may already be locked to a network)
- OR
- somehow transport each tributary with its own clock across a higher speed network with a different clock
- (without spoiling remote clock recovery)
16PDH hierarchies
level   CEPT (E)             N.A. (T)             Japan (J)
0       64 kbps              64 kbps              64 kbps
        x30                  x24                  x24
1       E1  2.048 Mbps       T1  1.544 Mbps       J1  1.544 Mbps
        x4                   x4                   x4
2       E2  8.448 Mbps       T2  6.312 Mbps       J2  6.312 Mbps
        x4                   x7                   x5
3       E3  34.368 Mbps      T3  44.736 Mbps      J3  32.064 Mbps
        x4                   x6                   x3
4       E4  139.264 Mbps     T4  274.176 Mbps     J4  97.728 Mbps
17Framing and overhead
- In addition to locking on to the bit-rate
- we need to recognize the frame structure
- We identify frames by adding a Frame Alignment Signal (FAS)
- The FAS is part of the frame overhead (which also includes "C-bits", OAM, etc.)
- Each layer in the PDH hierarchy adds its own overhead
- For example
- E1: 2 overhead bytes per 32 bytes = 6.25% overhead
- E2: 4 E1s = 8.192 Mbps out of 8.448 Mbps
- so there is an additional 0.256 Mbps, about 3%
- altogether 4 x 30 x 64 kbps = 7.680 Mbps of voice out of 8.448 Mbps
- or 9.09% overhead
- What happens next?
18PDH overhead
digital signal data rate (Mbps) voice channels overhead percentage
T1 1.544 24 0.52
T2 6.312 96 2.66
T3 44.736 672 3.86
T4 274.176 4032 5.88
E1 2.048 30 6.25
E2 8.448 120 9.09
E3 34.368 480 10.61
E4 139.264 1920 11.76
- Overhead always increases with data rate !
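A short sketch (values taken from the table above) recomputing the overhead percentages as (line rate - 64 kbps x channels) / line rate:

```python
signals = {   # name: (line rate in Mbps, voice channels)
    "T1": (1.544, 24), "T2": (6.312, 96), "T3": (44.736, 672), "T4": (274.176, 4032),
    "E1": (2.048, 30), "E2": (8.448, 120), "E3": (34.368, 480), "E4": (139.264, 1920),
}
for name, (rate, channels) in signals.items():
    payload = channels * 0.064                      # each voice channel is 64 kbps
    print(f"{name}: {100 * (rate - payload) / rate:.2f}% overhead")
```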
19OAM
- analog channels and 64 kbps digital channels
- do not have mechanisms to check signal validity and quality
- thus
- major faults could go undetected for long periods of time
- hard to characterize and localize faults when reported
- minor defects might go unnoticed indefinitely
- Solution is to add mechanisms based on overhead
- as PDH networks evolved, more and more overhead was dedicated to
- Operations, Administration and Maintenance (OAM) functions
- including
- monitoring for valid signal
- defect reporting
- alarm indication/inhibition (AIS)
20PDH Justification
- In addition to FAS, PDH overhead includes
- justification control (C-bits) and justification opportunity / stuffing (R-bits)
- Assume the tributary bitrate is B ± T
- Positive justification
- payload is expected at highest bitrate B+T
- if the tributary rate is actually at the maximum bitrate
- then all payload and R bits are filled
- if the tributary rate is lower than the maximum
- then sometimes there are not enough incoming bits
- so the R-bits are not filled and C-bits indicate this
- Negative justification
- payload is expected at lowest bitrate B-T
- if the tributary rate is actually the minimum bitrate
- then payload space suffices
- if the tributary rate is higher than the minimum
- then sometimes there are not enough positions to accommodate all the bits
- so R-bits in the overhead are used and the C-bits indicate this
- Positive/negative justification
- payload is expected at nominal bitrate B
21SONET/SDH motivation and history
22First step
- With the divestiture of the US Bell system a new need arose
- MCI and NYNEX couldn't directly interconnect optical trunks
- Interexchange Carrier Compatibility Forum requested T1 to solve the problem
- Needed a multivendor / multioperator fiber-optic communications standard
- Three main tasks
- Optical interfaces (wavelengths, power levels, etc.)
- proposal submitted to T1X1 (Aug 1984)
- T1.106 standard on single mode optical interfaces (1988)
- Operations (OAM) system
- proposal submitted to T1M1
- T1.119 standard
- Rates, formats, definition of network elements
- Bellcore (Yau-Chau Ching and Rodney Boehm) proposal (Feb 1985)
- proposed to T1X1
- term SONET was coined
- T1.105 standard (1988)
23PDH limitations
- Rate limitations
- Only copper interfaces defined
- Need to mux/demux hierarchy of levels (hard to pull out a single timeslot)
- Overhead percentage increases with rate
- At least three different systems (Europe, NA, Japan)
- E: 2.048, 8.448, 34.368, 139.264
- T: 1.544, 3.152, 6.312, 44.736, 91.053, 274.176
- J: 1.544, 3.152, 6.312, 32.064, 97.728, 397.2
- So a completely new mechanism was needed
24Idea behind SONET
- Synchronous Optical NETwork
- Designed for optical transport (high bitrate)
- Direct mapping of lower levels into higher ones
- Carry all PDH types in one universal hierarchy
- ITU version: Synchronous Digital Hierarchy
- different terminology but interoperable
- Overhead doesn't increase with rate
- OAM designed-in from beginning
25Standardization !
- The original Bellcore proposal
- hierarchy of signals, all multiples of a basic rate (50.688 Mbps)
- basic rate about 50 Mbps to carry a DS3 payload
- bit-oriented mux
- mechanisms to carry DS1, DS2, DS3
- Many other proposals were merged into a 1987 draft document (rate 49.920 Mbps)
- In summer of 1986 CCITT expressed interest in cooperation
- needed a rate of about 150 Mbps to carry E4
- wanted byte-oriented mux
- Initial compromise attempt
- byte mux
- US wanted 13 rows x 180 columns
- CEPT wanted 9 rows x 270 columns
- Compromise!
- US would use basic rate of 51.84 Mbps, 9 rows x 90 columns
- CEPT would use three times that rate - 155.52 Mbps, 9 rows x 270 columns
26SONET/SDH architecture
27Layers
- SONET was designed with definite layering concepts
- Physical layer: optical fiber (linear or ring)
- when fiber reach is exceeded, regenerators are used
- regenerators are not mere amplifiers
- regenerators use their own overhead
- fiber between regenerators is called a section (regenerator section)
- Line layer: link between SONET muxes (Add/Drop Multiplexers)
- input and output at this level are Virtual Tributaries (SONET) / Virtual Containers (SDH)
- actually 2 layers
- lower order VC (for low bitrate payloads)
- higher order VC (for high bitrate payloads)
- Path layer: end-to-end path of client data (tributaries)
- client data (payload) may be
- PDH
- ATM
- packet data
28SONET architecture
- SONET (SDH) has 3 layers
- path: end-to-end data connection, muxes tributary signals (SDH terminology: path)
- there are STS paths and Virtual Tributary (VT) paths
- line: protected multiplexed SONET payload (SDH terminology: multiplex section)
- section: physical link between adjacent elements (SDH terminology: regenerator section)
- Each layer has its own overhead to support the needed functionality
29STS, OC, etc.
- A SONET signal is called a Synchronous Transport Signal (STS)
- The basic STS is STS-1; all others (STS-N) are multiples of it
- The (optical) physical layer signal corresponding to an STS-N is an OC-N

SONET      Optical    rate
STS-1      OC-1       51.84M
STS-3      OC-3       155.52M    (x3)
STS-12     OC-12      622.080M   (x4)
STS-48     OC-48      2488.32M   (x4)
STS-192    OC-192     9953.28M   (x4)
30Rates and frame structure
31SONET / SDH frames
framing
- Synchronous Transport Signals are bit-signals (OCs are optical)
- Like all TDM signals, there are framing bits at the beginning of the frame
- However, it is convenient to draw SONET/SDH signals as rectangles
32SONET STS-1 frame
(diagram: 9 rows x 90 columns, framing bytes at the top left)
- Each STS-1 frame is 90 columns x 9 rows = 810 bytes
- There are 8000 STS-1 frames per second
- so each byte represents 64 kbps (each column is 576 kbps)
- Thus the basic STS-1 rate is 51.840 Mbps (see the sketch below)
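A small sketch (my own helper, not part of any standard) of the rate arithmetic: bytes per frame x 8 bits x 8000 frames/s gives the line rate.

```python
FRAMES_PER_SECOND = 8000

def rate_mbps(rows: int, columns: int) -> float:
    """Line rate of a rows x columns byte frame sent 8000 times per second."""
    return rows * columns * 8 * FRAMES_PER_SECOND / 1e6

print(rate_mbps(9, 90))                     # STS-1:  51.84 Mbps
print(rate_mbps(9, 270))                    # STM-1: 155.52 Mbps
print(9 * 8 * FRAMES_PER_SECOND / 1e3)      # one column = 576 kbps
print(8 * FRAMES_PER_SECOND / 1e3)          # one byte   =  64 kbps
```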
33SDH STM-1 frame
- Synchronous Transport Modules are the bit-signals for SDH
- Each STM-1 frame is 270 columns x 9 rows = 2430 bytes
- There are 8000 STM-1 frames per second
- Thus the basic STM-1 rate is 155.520 Mbps
- 3 times the STS-1 rate!
34SONET/SDH rates
SONET SDH columns rate
STS-1 90 51.84M
STS-3 STM-1 270 155.52M
STS-12 STM-4 1080 622.080M
STS-48 STM-16 4320 2488.32M
STS-192 STM-64 17280 9953.28M
- STS-N has 90N columns; STM-M corresponds to STS-N with N = 3M
- SDH rates increase by factors of 4 each time
- STS/STM signals can carry PDH tributaries, for example
- STS-1 can carry 1 T3 or 28 T1s or 1 E3 or 21 E1s
- STM-1 can carry 3 E3s or 63 E1s or 3 T3s or 84 T1s
35SONET/SDH tributaries
SONET      SDH       T1      T3     E1      E3     E4
STS-1      -         28      1      21      1      -
STS-3      STM-1     84      3      63      3      1
STS-12     STM-4     336     12     252     12     4
STS-48     STM-16    1344    48     1008    48     16
STS-192    STM-64    5376    192    4032    192    64
- E3 and T3 are carried as Higher Order Paths (HOPs)
- E1 and T1 are carried as Lower Order Paths (LOPs)
- (the numbers are for direct mapping)
36STS-1 frame structure
(diagram: 9-row x 90-column frame; the first 3 columns are the Transport Overhead (TOH) - 3 rows of section overhead plus 6 rows of line overhead - and the remaining 87 columns are the Synchronous Payload Envelope (SPE))
- Section overhead is 3 rows x 3 columns = 9 bytes = 576 kbps
- framing, performance monitoring, management
- Line overhead is 6 rows x 3 columns = 18 bytes = 1152 kbps
- protection switching, line maintenance, mux/concat, SPE pointer
- SPE is 9 rows x 87 columns = 783 bytes = 50.112 Mbps
- Similarly, STM-1 has 9 (different) columns of section+line overhead!
37STM-1 frame structure
(diagram: 9-row x 270-column frame; the first 9 columns are the Section Overhead (SOH): RSOH, AU pointers, MSOH)
- STM-1 has 9 (different) columns of transport overhead!
- RS overhead is 3 rows x 9 columns
- Pointer overhead is 1 row x 9 columns
- MS overhead is 5 rows x 9 columns
- SPE is 9 rows x 261 columns
38Even higher rates
- 3 STS-1s can form an STS-3
- 4 STM-1s (STS-3s) can form an STM-4 (STS-12)
- 4 STM-4s (STS-12s) can form an STM-16 (STS-48)
- etc. for STM-N (STS-3N)
- The procedure is byte-interleaving
39Byte-interleaving
(diagram: bytes from the component signals are interleaved one byte at a time)
40Scrambling
- SONET/SDH receivers recover clock based on the incoming signal
- Insufficient number of 0-1 transitions causes degradation of clock performance
- In order to guarantee sufficient transitions, SONET/SDH employ a scrambler
- All data except the first row of section overhead is scrambled
- Scrambler is a 7-bit frame-synchronous scrambler, x^7 + x^6 + 1 (sketched below)
- Scrambler is initialized with ones at the start of each frame
- A short scrambler is sufficient for voice data
- but NOT for data which may contain long stretches of zeros
- When sending data an additional payload scrambler is used
- modern standards use the 43-bit self-synchronizing scrambler x^43 + 1
- run continuously on ATM payload bytes (suspended for the 5 header bytes of each cell - the "cell tax")
- run continuously on HDLC payloads
41STS-1 Overhead
- The STS-1 overhead consists of
- 3 rows of section overhead
- frame sync (A1, A2)
- section trace (J0)
- error control (B1)
- section orderwire (E1)
- Embedded Operations Channel (Di)
- 6 rows of line overhead
- pointer and pointer action (Hi)
- error control (B2)
- Automatic Protection Switching signaling (Ki)
- Data Channel (Di)
- Synchronization Status Message (S1)
- Far End Block Error (M0)
- line orderwire (E2)
section overhead:   A1   A2   J0
                    B1   E1   F1
                    D1   D2   D3
line overhead:      H1   H2   H3
                    B2   K1   K2
                    D4   D5   D6
                    D7   D8   D9
                    D10  D11  D12
                    S1   M0   E2
42STM-1 Overhead
- m = media dependent (defined for SONET over radio)
- res = reserved for national use

RSOH:          A1   A1   A1   A2   A2   A2   J0   res  res
               B1   m    m    E1   m    F1   res  res
               D1   m    m    D2   m    D3
AU pointers:   (row 4)
MSOH:          B2   B2   B2   K1   K2
               D4   D5   D6
               D7   D8   D9
               D10  D11  D12
               S1   M1   E2
- together these 9 columns form the SOH
43A1, A2, J0 (section overhead)
- A1, A2 - framing bytes
- A1 = 11110110 (0xF6)
- A2 = 00101000 (0x28)
- SONET/SDH framing always uses equal numbers of A1 and A2 bytes
- J0 - regenerator section trace (in early SONET - a counter called C1)
- enables receiver to be sure that the section connection is still OK
- enables identifying individual STS/STMs after muxing
- J0 goes through a 16 byte sequence
- MSBs are J0 framing (1000...0)
- C bits are a CRC-7 of the previous 16-byte sequence
- S bits are 15 7-bit characters
- the section access point identifier
44B1, E1, F1, D1-3 (section overhead)
- B1: Byte Interleaved Parity-8 (BIP-8) byte (sketched below)
- even parity over the bits of the bytes of the previous frame, after scrambling
- only 1 BIP-8 for a multiplexed STS/STM
- E1: section orderwire
- 64 kbps voice link for technicians
- from regenerator to regenerator
- F1: 64 kbps link for user purposes
- D1 D2 D3: 192 kbps messaging channel
- used by section termination as Embedded Operations Channel (SONET)
- or Data Communications Channel (SDH)
45Pointers (line overhead)
- In SONET, pointers are considered part of the line overhead
- For STS-1, H1-H2 is the pointer, H3 is the pointer action byte
- H1-H2 indicates the offset (in bytes) from H3 to the SPE
- (i.e. if 0 then the J1 POH byte is immediately after H3 in the row)
- 4 MSBs are the New Data Flag, 10 LSBs are the actual offset value (0-782)
- When offset = 522 the STS-1 SPE is contained in a single STS-1 frame
- In all other cases the SPE straddles two frames
- When the offset is a multiple of 87, the SPE is rectangular
- To compensate for clock differences we have pointer justification
- With negative justification
- H3 carries the extra data
- With positive justification
- the byte after H3 is a stuffing byte
46SONET Justification
- If the tributary rate is above nominal, negative justification is needed
- When there are less than 8 more bits than expected in the buffer
- NDF is 0110
- offset unchanged
- When 8 extra bits accumulate
- NDF is set to 1001
- the extra byte is placed into H3
- offset is decremented by 1 (byte)
- If the tributary rate is below nominal, positive justification is needed
- When there are less than 8 fewer bits than expected in the buffer
- NDF is 0110
- offset unchanged
- When 8 bits are missing
- NDF is set to 1001
- the byte after H3 is a stuffing byte
- offset is incremented by 1 (byte), as in the toy sketch below
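A toy sketch of the decision logic described above (my own framing, not text from any standard): track the elastic-buffer fill in bits and emit one pointer action per accumulated byte.

```python
def pointer_action(bit_surplus: int, offset: int):
    """Return (action, new offset, new surplus) for the current frame."""
    if bit_surplus >= 8:        # SPE running fast: negative justification
        return "H3 carries data, offset - 1", (offset - 1) % 783, bit_surplus - 8
    if bit_surplus <= -8:       # SPE running slow: positive justification
        return "stuff byte after H3, offset + 1", (offset + 1) % 783, bit_surplus + 8
    return "no justification", offset, bit_surplus

offset, surplus = 522, 0
for frame in range(5):
    surplus += 3                # e.g. the source runs 3 bits per frame fast
    action, offset, surplus = pointer_action(surplus, offset)
    print(f"frame {frame}: {action}, offset = {offset}")
```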
47B2, K1, K2, D4-D12 (line overhead)
- B2: BIP-8 over the line overhead and envelope of the previous frame (before scrambling)
- N B2s for a muxed STS-N (3N for an STM-N)
- K1 and K2 are used for Automatic Protection Switching (see later)
- D4-D12 are a 576 kbps Data Communications Channel
- between multiplexers
- usually manufacturer specific OAM functions
48S1, M0, E2 (line overhead)
- S1: Synchronization Status Message
- indicates stratum level (unknown, stratum 1, ..., do not use)
- M0: Far End Block Error
- indicates number of BIP violations detected
- E2: line orderwire
- 64 kbps voice link for technicians
- from line mux to line mux
49Payloads and Mappings
50STS-1 HOP SPE structure
- We saw that the pointer in the line overhead points to the STS path overhead (POH)
- (after re-arranging) the POH is one column of 9 rows (9 bytes = 576 kbps)
51STS-1 HOP
- 1 column of the SPE is POH
- 2 more (fixed stuffing) columns are reserved
- We are left with
- 84 columns = 756 bytes = 48.384 Mbps for payload
- This is enough for an E3 (34.368M) or a T3 (44.736M)
52STS-1 Path overhead
- 1 column of overhead for path (576 Kbps)
- POH is responsible for
- path type identification
- path performance monitoring
- status (including of mapped payloads)
- virtual concatenation
- path protection
- trace
53J1, B3, C2 (path overhead)
C2 (hex) Payload type
00 unequipped
01 nonspecific
02 LOP (TUG)
04 E3/T3
12 E4
13 ATM
16 PoS RFC 1662
18 LAPS X.85
1A 10G Ethernet
1B GFP
CF PoS - RFC1619
- J1 path trace
- enables receiver to be sure
- that the path connection is still OK
- B3 BIP-8 even bit parity of bytes
- (without scrambling)
- of previous payload
- C2 path signal label
- identifies the payload type
- (examples in table)
54G1, F2, H4, F3, K3, N1 (path overhead)
- G1 path status
- conveys status and performance back to originator
- 4 MSBs are path FEBE, 1 bit RDI, 3 unused
- F2 and F3 user specific communications
- H4 used for LOP multiframe sync and VCAT (see later)
- K3 (4 MSBs): path APS
- N1 Tandem Connection Monitoring
- Messaging channel for tandem connections
55LOP
(diagram: the 87 SPE columns - column 1 is POH, columns 30 and 59 are fixed stuff, and the remaining 84 columns are interleaved into 7 VT groups numbered 1-7)
- To carry lower rate payloads, divide the 84 available columns
- into 7 x 12 interleaved columns, i.e. 7 Virtual Tributary (VT) Groups
- A VT group is 12 columns of 9 rows, i.e. 108 bytes or 6.912 Mbps
- A VT group is composed of VT(s)
- there are different types of VT in order to carry different types of payload
- all VTs in a VT group must be of the same type (no mixing)
- but different VT groups in the same SPE can have different VT types
- A VT can have 3, 4, 6 or 12 columns
56SONET/SDH VT/VC types
VT/STS    VC      columns   rate (Mbps)   payload
VT 1.5    VC-11   3         1.728         DS1 (1.544)      LOP, 4 per VT group
VT 2      VC-12   4         2.304         E1 (2.048)       LOP, 3 per VT group
VT 3      -       6         3.456         DS1C (3.152)     LOP, 2 per VT group
VT 6      VC-2    12        6.912         DS2 (6.312)      LOP, 1 per VT group
STS-1     VC-3    -         48.384        E3 (34.368)      HOP
STS-1     VC-3    -         48.384        DS3 (44.736)     HOP
STS-3c    VC-4    -         149.760       E4 (139.264)     HOP
- standard PDH rates map efficiently into SONET/SDH!
57LO Path overhead
- LOP OH is responsible for timing, PM, REI, etc.
- LO Path APS signaling is in the 4 MSBs of byte K4
(diagram: the 500 µs LO multiframe - the V1, V2, V3 and V4 pointer bytes appear in successive 125 µs frames, identified by the two LSBs of H4 (00, 01, 10, 11); each 125 µs frame holds 27 bytes for a VC-11 or 36 bytes for a VC-12, of which 25 / 34 are available for payload)
58Payload capacity
- VT1.5/VC-11 has 3 columns = 27 bytes = 1.728 Mbps
- but 2 bytes per frame are used for overhead (V1/V2/V3/V4 and V5/J2/N2/K4)
- so actually only 25 bytes = 1.6 Mbps are available
- Similarly
- VT2/VC-12 has 4 columns = 36 bytes = 2.304 Mbps
- but 2 bytes per frame are used for overhead
- so actually only 34 bytes = 2.176 Mbps are available
59LOP overhead
- V5 consists of
- BIP (2b)
- REI (1b)
- RFI (1b)
- Signal label (3b) (uneq, async, bit-sync, byte-sync, test, AIS)
- RDI (1b)
- J2 is path trace
- N2 is the network operator byte
- may be used for LOP tandem connection monitoring (LO-TCM)
- K4 is for LO VCAT and LO APS
60SDH Containers
- Tributary payloads are not placed directly into SDH
- Payloads are placed (adapted) into containers
- The containers are made into virtual containers (by adding POH)
- Next a pointer is added; the pointer + VC is a TU or AU
- A Tributary Unit adapts a lower order VC to a higher order VC
- An Administrative Unit adapts a higher order VC to SDH
- TUs and AUs are grouped together until they are big enough
- We finally get an Administrative Unit Group
- To the AUG we add SOH to make the STM frame
61Formally
- C-n, n = 11, 12, 2, 3, 4
- VC-n = POH + C-n
- TU-n = pointer + VC-n (n = 11, 12, 2, 3)
- AU-n = pointer + VC-n (n = 3, 4)
- TUG = N x TU-n
- AUG = N x AU-n
- STM-N = SOH + AUG(s)
62Multiplexing
- An AUG may contain a VC-4 with an E4
- or it may contain 3 AU-3s, each with a VC-3 carrying an E3
- In the latter case, the AU pointer area points into the AUG
- and inside the AUG are 3 pointers, one to each AU-3
63More multiplexing
- Similarly, we can hierarchically build complex structures
- Lower rate STMs can be combined into higher rate STMs
- AUGs can be combined into STMs
- AUs can be combined into AUGs
- TUGs can be combined into high order VCs
- Lower rate TUs can be combined into TUGs
- etc.
- But only certain combinations are allowed by the standards
64All SDH mappings
(the G.707 multiplexing structure, flattened from the diagram)
- E4 (139.264M) or ATM (149.760M) -> C-4 -> VC-4 -> AU-4 -> AUG
- E3 (34.368M), T3 (44.736M) or ATM (48.384M) -> C-3 -> VC-3
- VC-3 -> AU-3; 3 x AU-3 -> AUG (a single AU-3 can also form an STM-0)
- VC-3 -> TU-3 -> TUG-3; 3 x TUG-3 -> VC-4
- T2 (6.312M) or ATM (6.874M) -> C-2 -> VC-2 -> TU-2; 1 x TU-2 -> TUG-2
- E1 (2.048M) or ATM (2.144M) -> C-12 -> VC-12 -> TU-12; 3 x TU-12 -> TUG-2
- T1 (1.544M) or ATM (1.6M) -> C-11 -> VC-11 -> TU-11; 4 x TU-11 -> TUG-2
- 7 x TUG-2 -> TUG-3 or VC-3; N x AUG -> STM-N
65All SONET mappings
(the SONET mapping structure, flattened from the diagram)
- E4 (139.264M) or ATM (149.760M) -> STS-3c SPE -> STS-3c
- E3 (34.368M), T3 (44.736M) or ATM (48.384M) -> STS-1 SPE -> STS-1
- T2 (6.312M) or ATM (6.874M) -> VT6 SPE -> VT6; 1 x VT6 -> VTG
- E1 (2.048M) or ATM (2.144M) -> VT2 SPE -> VT2; 3 x VT2 -> VTG
- T1 (1.544M) or ATM (1.6M) -> VT1.5 SPE -> VT1.5; 4 x VT1.5 -> VTG
- 7 x VTG -> STS-1 SPE; N x STS-1/STS-3c -> STS-N (pointer processing at each mapping stage)
66Tributary mapping types
- When mapping tributaries into VCs, PDH-like bit-stuffing is used
- For E1 and T1 there are several options
- Asynchronous mapping (framing-agnostic)
- Bit synchronous mapping
- Byte synchronous mapping (time-slot aligned)
- E4 into VC-4 and E3/T3 into VC-3 are always asynchronous
- T1 into VC-11 may be any of the 3
- (in byte synchronous the framing bit is placed in the VC overhead)
- E1 into VC-12 may be asynchronous or byte synchronous
67WAN-PHY (10 GbE in STM-64)
10GBASE-W, 802.3-2005 Clause 50
- There is a special case where the bit-rates work out relatively well
- 10GbE 10GBASE-R (64B/66B coding) can be directly mapped
- into an STM-64 (with contiguous concatenation - see later) without need for GFP
- MAC creates a "stretched InterPacket Gap" to compensate for the rate being < 10G
- This is the fastest connection commonly used for Internet traffic
- Complication: SDH clock accuracy is ±4.6 ppm, 10GbE WAN PHY accuracy is ±20 ppm
- (diagram: the payload is 64 x (270 - 9) = 16704 columns, with a J1 POH column and 63 columns of fixed stuff)
68Protection and Rings
69What is protection ?
- SONET/SDH need to be highly reliable (five nines)
- Down-time should be minimal (less than 50 msec)
- So systems must repair themselves (no time for manual intervention)
- Upon detection of a failure (dLOS, dLOF, high BER)
- the network must reroute traffic (protection switching)
- from working channel to protection channel
- The Network Element that detects the failure (tail-end NE)
- initiates the protection switching
- The head-end NE must change forwarding or send duplicate traffic
- Protection switching is unidirectional
- Protection switching may be revertive (automatically revert to working channel)
(diagram: head-end NE and tail-end NE connected by a working channel and a protection channel)
70How does it work?
- Head-end and tail-end NEs have bridges (muxes)
- Head-end and tail-end NEs maintain a bidirectional signaling channel
- Signaling is contained in the K1 and K2 bytes of the protection channel
- K1: tail-end status and requests
- K2: head-end status
71Linear 1+1 protection
- Simplest form of protection
- Can be at OC-n level (different physical fibers)
- or at STM/VC level (called SubNetwork Connection Protection)
- or end-to-end path (called trail protection)
- Head-end bridge always sends data on both channels
- Tail-end chooses channel to use based on BER, dLOS, etc.
- No need for signaling
- If non-revertive
- there is no distinction between working and protection channels
- BW utilization is 50%
72Linear 1:1 protection
- Head-end bridge usually sends data on the working channel
- When the tail-end detects a failure it signals (using K1) to the head-end
- Head-end then starts sending data over the protection channel
- When not in use
- the protection channel can be used for (discounted) extra traffic
- (pre-emptible unprotected traffic)
- May be at any layer (only OC-n level protects against fiber cuts)
(diagram: the working channel carries the protected traffic; the protection channel carries extra traffic when not in use)
73Linear 1:N protection
- In order to save BW
- we allocate 1 protection channel for every N working channels
- N is limited to 14
- 4 bits in the K1 byte (from tail-end to head-end) identify the channel
- 0 = protection channel
- 1-14 = working channels
- 15 = extra traffic channel
74Two-fiber vs. four-fiber rings
- Ring based protection is popular in North America (100K rings)
- Full protection against physical fiber cuts
- Simpler and less expensive than mesh topologies
- Protection at line (multiplex section) or path layer
- Four-fiber rings
- fully redundant at OC level
- can support bidirectional routing at line layer
- Two-fiber rings
- 2 fibers in opposite directions
- support unidirectional routing at line layer
75Unidirectional vs. bidirectional
- Unidirectional routing
- working channel B-to-A travels in the same direction (e.g. clockwise) as A-to-B
- management simplicity: A-B and B-A can occupy the same timeslots
- Inefficient: waste in ring BW and excessive delay in one direction
- Bidirectional routing
- A-to-B and B-to-A are opposite in direction
- both using the shortest route
- spatial reuse: timeslots can be reused in other sections
76UPSR vs. BLSR (MS-SPRing)
UPSR: path switching, two-fiber, unidirectional
BLSR: line switching, two- or four-fiber, bidirectional
- Of all the possible combinations, only a few are in use
- Unidirectional Path Switched Rings (UPSR)
- protects tributaries
- extension of 1+1 to ring topology
- Bidirectional Line Switched Rings (BLSR, two-fiber and four-fiber versions)
- called Multiplex Section Shared Protection Ring (MS-SPRing) in SDH
- simultaneously protects all tributaries in the STM
- extension of 1:1 to ring topology
77UPSR
- Working channel is in one direction
- protection channel in the opposite direction
- All traffic is added in both directions
- decision as to which to use is made at the drop point (no signaling)
- Normally non-revertive, so effectively two diverse paths
- Good match for access networks
- 1 resilient access ring
- less expensive than a fiber pair per customer
- Inefficient for core networks
- no spatial reuse
- every signal in every span
- in both directions
- node needs to continuously monitor
- every tributary to be dropped
78BLSR
- Switch at line level - less monitoring
- When a failure is detected, the tail-end NE signals the head-end NE
- Works for unidirectional/bidirectional fiber cuts, and NE failures
- Two-fiber version
- half of OC-N capacity devoted to protection
- only half capacity available for traffic
- Four-fiber version
- fully redundant OC-N devoted to protection
- twice as many NEs as compared to two-fiber
- (diagram: example recovery from a unidirectional fiber cut)
79VCAT and LCAS
80Concatenation
- Payloads that don't fit into standard VT/VC sizes can be accommodated
- by concatenating several VTs / VCs
- For example, 10 Mbps doesn't fit into any VT or VC
- so w/o concatenation we need to put it into an STS-1 (48.384 Mbps)
- the remaining 38.384 Mbps cannot be used
- We would like to be able to divide the 10 Mbps among
- 7 VT1.5/VC-11s: 7 x 1.600 = 11.20 Mbps, or
- 5 VT2/VC-12s: 5 x 2.176 = 10.88 Mbps
81Concatenation (cont.)
- There are 2 ways to concatenate X VTs or VCs
- Contiguous Concatenation (G.707 11.1)
- HOP: STS-Nc (SONET) or VC-4-Nc (SDH)
- or LOP: 1-7 VC-2-Nc into a VC-3
- since it has to fit into the SONET/SDH payload
- only STS-Nc with N = 3*4^n, or VC-4-Nc with N = 4^n
- components transported together and in-phase
- requires support at intermediate network elements
- Virtual Concatenation (VCAT, G.707 11.2)
- HOP: STS-1-Xv or STS-Nc-Xv (SONET) or VC-3/4-Xv (SDH)
- or LOP: VT-1.5/2/3/6-Xv (SONET) or VC-11/12/2-Xv (SDH)
- HOP: X <= 256, LOP: X <= 64 (limitation due to bits in header)
- payload split over multiple STSs / STMs
- fragments may follow different routes
- requires support only at path terminations
- requires buffering and differential delay alignment
82Contiguous Concatenation STS-3c
STS-3 (diagram): 9 rows x 270 columns, 9 columns of section and line overhead, 3 columns of path overhead, 258 columns of SPE payload
- 258 columns x 0.576 Mbps = 148.608 Mbps
STS-3c (diagram): 9 rows x 270 columns, 9 columns of section and line overhead, 1 column of path overhead, 260 columns of SPE payload
- 260 columns x 0.576 Mbps = 149.760 Mbps
83STS-N vs. STS-Nc
- Although both have raw rates of 155.520 Mbps
- STS-3c has 2 more columns (1.152 Mbps) available
- More generally, an STS-Nc gains (N-1) columns
- e.g. STS-12c gains 11 columns = 6.336 Mbps vis-a-vis STS-12
- STS-48c gains 47 columns = 27.072 Mbps
- STS-192c gains 191 columns = 110.016 Mbps!
- However, an STS-Nc signal is not as easily separable
- when we want to add/drop component signals
84Virtual Concatenation
(diagram: bytes distributed round-robin over the members; the H4 byte carries the VCAT overhead)
- VCAT is an inverse multiplexing mechanism (round-robin)
- VCAT members may travel along different routes in the SONET/SDH network
- Intermediate network elements don't need to know about VCAT
- (unlike contiguous concatenation, which is handled by all intermediate nodes)
85SDH virtually concatenated VCs
VC         capacity (Mbps)              if all members in one VC-3    if all members in one VC-4
VC-11-Xv   1.600, 3.200, ..., 1.600X    X <= 28, C = 44.800           X <= 64, C = 102.400
VC-12-Xv   2.176, 4.352, ..., 2.176X    X <= 21, C = 45.696           X <= 63, C = 137.088
VC-2-Xv    6.784, 13.568, ..., 6.784X   X <= 7,  C = 47.488           X <= 21, C = 142.464
- So we have many permissible rates
- 1.600, 2.176, 3.200, 4.352, 4.800, 6.400, 6.528, 6.784, 8.000, ...
86SONET virtually concatenated VTs
VT         capacity (Mbps)              if all members in one STS-1   if all members in one STS-3c
VT1.5-Xv   1.600, 3.200, ..., 1.600X    X <= 28, C = 44.800           X <= 64, C = 102.400
VT2-Xv     2.176, 4.352, ..., 2.176X    X <= 21, C = 45.696           X <= 63, C = 137.088
VT3-Xv     3.328, 6.656, ..., 3.328X    X <= 14, C = 46.592           X <= 42, C = 139.776
VT6-Xv     6.784, 13.568, ..., 6.784X   X <= 7,  C = 47.488           X <= 21, C = 142.464
- So we have many permissible rates
- 1.600, 2.176, 3.200, 3.328, 4.352, 4.800, 6.400, 6.528, 6.656, 6.784, ...
87Efficiency comparison
rate (Mbps)   w/o VCAT               efficiency   with VCAT              efficiency
10            STS-1                  21%          VT2-5v / VC-12-5v      92%
100           STS-3c / VC-4          67%          STS-1-2v / VC-3-2v     100%
1000          STS-48c / VC-4-16c     42%          STS-3c-7v / VC-4-7v    95%
- Using VCAT increases efficiency to close to 100% (recomputed in the sketch below)!
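A rough sketch reproducing the comparison (member capacities taken from the earlier tables; efficiency is simply client rate over allocated payload capacity):

```python
def efficiency(client_mbps: float, member_mbps: float, members: int = 1) -> float:
    return 100 * client_mbps / (members * member_mbps)

print(f"10M in an STS-1     : {efficiency(10, 48.384):.0f}%")        # 21%
print(f"10M in VT2/VC-12-5v : {efficiency(10, 2.176, 5):.0f}%")      # 92%
print(f"100M in STS-3c/VC-4 : {efficiency(100, 149.760):.0f}%")      # 67%
print(f"1000M in VC-4-16c   : {efficiency(1000, 149.760, 16):.0f}%") # 42%
print(f"1000M in STS-3c-7v  : {efficiency(1000, 149.760, 7):.0f}%")  # 95%
```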
88PDH VCAT
- Recently ITU-T G.7043 expanded VCAT to E1, T1, E3, T3
- Enables bonding of up to 16 PDH signals to support higher rates
- Only bonding of like PDH signals is allowed (e.g. can't mix E1s and T1s)
- Multiframe is always per G.704/G.832 (e.g. T1 ESF = 24 frames, E1 = 16 frames)
- 1 byte per multiframe is VCAT overhead (SQ, MFI, MST, CRC)
- Supports LCAS (to be discussed next)
89PDH VCAT overhead octet
- There is one VCAT overhead octet per multiframe, so the net rate is
- T1: (24 x 24 - 1) = 575 data bytes per 3 ms multiframe = 191.666 kB/s
- E1: (16 x 31 - 1) = 495 data bytes per 2 ms multiframe = 247.5 kB/s
- T3 and E3 can also be used
- We will show the overhead octet format later
- (when using LCAS, the overhead octet is called VLI)
90Delay compensation
- 802.3ad Ethernet link aggregation cheats
- each identifiable flow is restricted to one link
- doesn't work for a single high-BW flow
- VCAT is completely general
- works even with a single flow
- VCG members may travel over completely separate paths
- so the VCAT mechanism must compensate for differential delay
- Requirement for over ½ second of compensation
- Must compensate to the bit level
- but since frames have a Frame Alignment Signal
- the VCAT mechanism only needs to identify individual frames
91VCAT buffering
- Since VCAT components may take different paths
- at egress the members
- are no longer in the proper temporal relationship
- The VCAT path termination function buffers members
- and outputs them in proper order (relying on POH sequencing)
- (up to 512 ms of differential delay can be tolerated)
- VCAT defines a multiframe to enable delay compensation
- length of multiframe determines the delay that can be accommodated
- The H4 byte in each member's POH contains
- SQ: sequence indicator (identifies the component) (number of bits limits X)
- MFI: multiframe indicator (multiframe sequencing to find differential delay)
92Multiframes and superframes
- Here is how we compensate for 512 ms of differential delay
- 512 ms corresponds to a superframe of 4096 TDM frames (4096 x 0.125 ms = 512 ms)
- For HOP SDH VCAT and PDH VCAT (H4 byte or PDH VCAT overhead)
- The basic multiframe is 16 frames
- So we need 256 multiframes in a superframe (256 x 16 = 4096)
- The MultiFrame Indicator is divided into two parts
- MFI1 (4 bits) appears once per frame
- and counts from 0 to 15 to sequence the multiframe
- MFI2 (8 bits) appears once per multiframe
- and counts from 0 to 255
- For LOP SDH (bit 2 of the K4 byte)
- a 32-bit frame is built and a 5-bit MFI is dedicated
- 32 multiframes of 16 ms give the needed 512 ms (see the sketch below)
93Link Capacity Adjustment Scheme
- LCAS is defined in G.7042 (also numbered Y.1305)
- LCAS extends VCAT by allowing dynamic BW changes
- LCAS is a protocol for dynamic adding/removing of VCAT members
- hitless BW modification
- similar to the Link Aggregation Control Protocol for Ethernet links
- LCAS is not a control plane or management protocol
- it doesn't allocate the members
- still need control protocols to perform the actual allocation
- LCAS is a handshake protocol
- it enables the path ends to negotiate the addition / deletion
- it guarantees that there will be no loss of data during the change
- it can determine that a proposed member is ill suited
- it allows automatic removal of a faulty member
94LCAS how does it work?
- LCAS is unidirectional (for symmetric BW, need to perform twice)
- LCAS functions can be initiated by source or sink
- LCAS assumes that all VCG members are error-free
- LCAS messages are CRC protected
- LCAS messages are sent in advance
- sink processes messages after differential delay compensation
- a message describes the link state at the time of the next message
- receiver can switch to the new configuration in time
- LCAS messages are in the upper nibble of
- the H4 byte for HOP SONET/SDH
- the K4 byte for LOP SONET/SDH
- the VCAT overhead octet for PDH VCAT and LCAS
- LCAS messages employ redundancy
- messages from source to sink are member specific
- messages from sink to source are replicated
95LCAS control messages
- LCAS adds fields to the basic VCAT ones
- Fields in messages from source to sink
- MFI: MultiFrame Indicator
- SQ: SeQuence indicator (member ID inside the VCAT group)
- CTRL: ConTRoL (IDLE, being ADDed, NORMal, End Of Sequence, Do Not Use)
- GID: Group IDentification (identifies the VCAT group)
- Fields in messages from sink to source (identical in all members)
- MST: Member STatus (1 bit for each VCG member)
- RS-Ack: ReSequence Acknowledgement
- Fields in both directions
- CRC: Cyclic Redundancy Code
- The precise format depends on the VCAT type (H4, K4, PDH)
- Note: for the H4 format SQ is 8 bits, so up to 256 VCG members
- for PDH, SQ is only 4 bits, so up to 16 VCG members
96H4 format
lower nibble (MFI1)   upper nibble of H4
0000                  MFI2 bits 1-4
0001                  MFI2 bits 5-8
0010                  CTRL
0011                  0 0 0 GID
0100                  reserved (0 0 0 0)
0101                  reserved (0 0 0 0)
0110                  CRC-8 bits 1-4
0111                  CRC-8 bits 5-8
1000                  MST bits
1001                  more MST bits
1010                  0 0 0 RS-ACK
1011                  reserved (0 0 0 0)
1100                  reserved (0 0 0 0)
1101                  reserved (0 0 0 0)
1110                  SQ bits 1-4
1111                  SQ bits 5-8
- the lower nibble of H4 carries MFI1, which counts through the 16-frame multiframe
97H4 format some comments
- CRC-8 (when using K4 it is CRC-3)
- covers the previous 14 frames (not synced to the multiframe)
- polynomial x^8 + x^2 + x + 1 (sketched below)
- MST
- each VCG member carries the status of all members
- so we need 256 bits of member status
- this is done by muxing MST bits
- there are 8 MST bits per multiframe
- and 32 multiframes in an MST multiframe
- no special sequencing, just MFI2 multiframe mod 32
- GID
- single bit identifier
- all members of a VCG share the same bit
- cycles through a 2^15 - 1 LFSR sequence
- different VCGs use different phase offsets of the sequence
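A generic bit-serial CRC-8 for the polynomial named above (x^8 + x^2 + x + 1, i.e. 0x07); how the covered LCAS nibbles are assembled is defined in G.7042 and not reproduced here, so treat this as an illustrative sketch:

```python
def crc8(bits, poly=0x07):
    """Bit-serial CRC-8 (initial value 0, no reflection) over an iterable of 0/1 bits."""
    crc = 0
    for bit in bits:
        feedback = ((crc >> 7) & 1) ^ bit
        crc = (crc << 1) & 0xFF
        if feedback:
            crc ^= poly
    return crc

covered = [1, 0, 1, 1, 0, 0, 1, 0] * 7       # 56 bits = 14 nibbles of covered fields
print(f"CRC-8 = {crc8(covered):#04x}")
```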
98LCAS adding a member (1)
- When more/less BW is needed, we need to add/remove VCAT members
- Adding/removing VCAT members first requires provisioning (management)
- LCAS handles member sequence number assignment
- LCAS ensures service is not disrupted
- Example: to add a 4th member to group 1
- Initial state
- Step 1: NMS provisions the new member
- source sends CTRL=IDLE for the new member
- sink sends MST=FAIL for the new member
99LCAS adding a member (2)
- Step 2: source sends CTRL=ADD and SQ
- sink sends MST=OK for the new member
- if it has been provisioned
- if it is receiving the new member OK
- if it is able to compensate for the delay
- otherwise it will send MST=FAIL
- and source reports this to the NMS
- Step 3: source sends CTRL=EOS for the new member
- new member starts to carry traffic
- sink sends RS-ACK
- Note 1: several new members may be added at once
- Note 2: removing a member is similar
- Source puts CTRL=IDLE for the member to be removed and stops using it
- All member sequence numbers must be adjusted
100LCAS service preservation
- To preserve service integrity, if the sink detects a failure of a VCAT member
- LCAS can temporarily remove the member (if the service can tolerate the BW reduction)
- Example: initial state
- Step 1: sink sends MST=FAIL for member 2
- source sends CTRL=DNU (special treatment if EOS)
- and ceases to use member 2
- Note: if the EOS member fails, renumber to ensure an EOS is active
- Step 2: sink sends MST=OK indicating the defect is cleared
- source returns CTRL to NORM
- and starts using the member again
- Note: if the NMS decides to permanently remove the member, proceed as in the previous slide
101Handling Packet Data
102Packet over SONET
- Currently defined in RFC 2615 (PPP over SONET), which obsoletes RFC 1619
- SONET/SDH can provide a point-to-point, byte-oriented
- full-duplex synchronous link
- PPP is ideal for data transport over such a link
- PoS uses PPP in HDLC framing to provide a byte-oriented interface
- to the SONET/SDH infrastructure
- POH signal label (C2)
- indicates PoS as C2 = 0x16 (C2 = 0xCF if no scrambler)
103PoS architecture
- PoS is based on PPP in HDLC framing
- Since SONET/SDH is byte oriented, byte stuffing is employed
- A special scrambler is used to protect SONET/SDH timing
- PoS operates on IP packets
- If IP is delivered over Ethernet
- the Ethernet is terminated (frame removed)
- Ethernet must be reconstituted at the far end
- requires routers at the edges of the SONET/SDH network
104PoS Details
- IP packet is encapsulated in PPP
- default MTU is 1500 bytes
- up to 64,000 bytes allowed if negotiated by PPP
- FCS is generated and appended
- PPP in HDLC framing with byte stuffing
- 43 bit scrambler is run over the SPE
- byte stream is placed octet-aligned in SPE
- (e.g. 149.760 Mbps of STM-1)
- HDLC frames may cross SPE boundaries
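A brief sketch of the octet-stuffing step (standard RFC 1662 behaviour, simplified - no ACCM handling) applied to a PPP frame before it is placed into the SPE:

```python
FLAG, ESC = 0x7E, 0x7D

def hdlc_stuff(frame: bytes) -> bytes:
    """Wrap a PPP frame (FCS already appended) in flags, escaping 0x7E and 0x7D."""
    out = bytearray([FLAG])
    for b in frame:
        if b in (FLAG, ESC):
            out += bytes([ESC, b ^ 0x20])
        else:
            out.append(b)
    out.append(FLAG)
    return bytes(out)

print(hdlc_stuff(bytes([0xFF, 0x03, 0x7E, 0x21, 0x7D])).hex())
# -> 7eff037d5e217d5d7e : 0x7E becomes 7D 5E, 0x7D becomes 7D 5D
```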
105PoS problems
- PoS is BW efficient
- but PoS has its disadvantages
- BW must be predetermined
- HDLC BW expansion and nondeterminacy
- BW allocation is tightly constrained by SONET/SDH capacities
- e.g. GbE requires a full OC-48 pipe
- PoS requires removing the Ethernet headers
- so we lose RPR, VLAN, 802.1p, multicasting, etc.
- PoS requires IP routers
106LAPS
- In 2001 ITU-T introduced protocols for transporting packets over SDH
- X.85: IP over SDH using LAPS
- X.86: Ethernet over LAPS
- Built on the series of ITU LAPx HDLC-based protocols
- Use ISO HDLC format
- Implement connectionless byte-oriented protocols over SDH
- X.85 is very close to (but not quite) IETF PoS
107GFP architecture
- A new approach, not based on HDLC
- Defined in ITU-T G.7041 (also numbered Y.1303)
- originally developed in T1X1 to fix ATM limitations
- (like ATM) uses HEC-protected frames instead of HDLC
- Client may be PDU-oriented (Ethernet MAC, IP)
- or block-oriented (GbE, Fibre Channel)
- GFP frames
- are octet aligned
- contain at most 65,535 bytes
- consist of a header + payload area
- Any idle time between GFP frames is filled with GFP idle frames
108GFP frame structure
- Every GFP frame has a 4-byte core header
- 2-byte Payload Length Indicator (PLI)
- PLI = 0, 1, 2, 3 are for control frames
- 2-byte core Header Error Control (cHEC)
- CRC-16, x^16 + x^12 + x^5 + 1
- the entire core header is XORed with B6AB31E0 (sketched below)
- Idle GFP frames
- have PLI = 0
- have no payload area
- Non-idle GFP frames
- have at least 4 bytes in the payload area
- the payload has its own header
- 2 payload modes: GFP-F and GFP-T
- optionally protect the payload with CRC-32
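A sketch of assembling the core header as described above (assumptions: the cHEC is a CRC-16 with generator x^16 + x^12 + x^5 + 1 computed over the two PLI bytes with a zero-initialized register, after which the whole 4-byte header is XORed with B6AB31E0):

```python
def crc16(data: bytes, poly: int = 0x1021, init: int = 0x0000) -> int:
    """Bit-serial CRC-16 for x^16 + x^12 + x^5 + 1 (0x1021), no reflection."""
    crc = init
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ poly if crc & 0x8000 else crc << 1) & 0xFFFF
    return crc

def gfp_core_header(payload_length: int) -> bytes:
    pli = payload_length.to_bytes(2, "big")          # Payload Length Indicator
    chec = crc16(pli).to_bytes(2, "big")             # core Header Error Control
    raw = int.from_bytes(pli + chec, "big")
    return (raw ^ 0xB6AB31E0).to_bytes(4, "big")     # core header XOR "scrambling"

print(gfp_core_header(1504).hex())   # e.g. a 1504-byte client PDU
print(gfp_core_header(0).hex())      # PLI = 0: a GFP idle frame
```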
109GFP payload header
- The GFP payload header has
- type (2B)
- type HEC (tHEC, CRC-16)
- extension header (0-60B)
- either null or linear extension (payload type muxing)
- extension HEC (eHEC, CRC-16)
- type consists of
- Payload Type Identifier (3b)
- PTI = 000 for client data
- PTI = 100 for client management (OAM: dLOS, dLOF)
- Payload FCS Indicator (1b)
- PFI = 1 means there is a payload FCS
- Extension Header ID (EXI, 4b)
- User Payload Identifier (8b)
- values for Ethernet, IP, PPP, FC, RPR, MPLS, etc.
(layout: type (2B) = PTI (3b) + PFI (1b) + EXI (4b) + UPI (8b), then tHEC (2B), extension header (0-60B), eHEC (2B))
110GFP modes
- GFP-F - frame mapped GFP
- Good for PDU-based protocols (Ethernet, IP, MPLS)
- or HDLC-based ones (PPP)
- Client PDU is placed in the GFP payload field
- GFP-T - transparent GFP
- Good for protocols that exploit physical layer capabilities
- In particular
- the 8B/10B line code
- used in Fibre Channel, GbE, FICON, ESCON, DVB, etc.
- Were we to use GFP-F we would lose the control information; GFP-T is transparent to these codes
- Also, GFP-T needn't wait for an entire PDU to be received (adding delay!)