Title: Deploying Exchange 2000, Part 3: Storage and Routing
Deploying Exchange 2000, Part 3: Storage and Routing
Paul Bowden, Program Manager, Exchange Server Product Unit, Microsoft Corporation
Session Theme
- Final part of the Exchange 2000 series
- Part 1: Directory
- Part 2: Co-existence and Upgrades
- Part 3: Storage and Routing
- Focus is on real deployments and ramifications, not feature sets
The Exchange 2000 Store
Terminology Buster: Store
From 2000 Feet
- Scalability
- Multiple databases per server
- Front end/back end
- Performance
- Native content store
- Functionality
- Web Store
- Integrated Content Indexing
- Granular permissions
Multiple Databases
- Why?
- To improve backup/recovery times
- To classify different types of data
- Just to clarify
- Not designed for one mailbox per database
- We do not run multiple STORE.EXE processes
- We can dynamically mount and dismount databases
- We can perform online restoration
Multiple MDB Scenarios
- Multiple companies on same server
- SLA conformance
- VIP mailboxes
- Large Public Folder applications
- Considerations
- Disk space
- Number of spindles
- Memory
Database Grouping
- Storage Groups
- Databases belong to a Storage Group
- Each Storage Group is essentially an ESE instance
- Each Storage Group has its own transaction log set
- Storage Groups are the fail-over unit when clustering
- Plan on multiples of (n-1) SGs per cluster
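To make the containment rules above concrete, here is a minimal Python sketch of the relationships. The class and property names are illustrative only, not an Exchange or ESE API: a Storage Group is one ESE instance with a single transaction log set, and the databases inside it share that log set.

```python
# Illustrative model only -- not an Exchange or ESE API. It mirrors the
# containment rules above: each Storage Group is one ESE instance with a
# single transaction log set, and its databases (MDBs) share that log set.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Database:
    name: str             # e.g. "Mailbox Store (Sales)"
    mounted: bool = True  # databases can be mounted and dismounted individually

@dataclass
class StorageGroup:
    name: str
    log_path: str                                   # one log set per group
    databases: List[Database] = field(default_factory=list)

    def add_database(self, db: Database) -> None:
        self.databases.append(db)

# One server hosting two Storage Groups, each with its own log set
sg1 = StorageGroup("SG1", log_path=r"E:\logs\sg1")
sg1.add_database(Database("Mailbox Store 1"))
sg1.add_database(Database("Public Folder Store"))
sg2 = StorageGroup("SG2", log_path=r"F:\logs\sg2")
sg2.add_database(Database("Mailbox Store 2"))
```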
Storage Dilemmas
- New MDB or new Storage Group?
- Reasons for having a new group
- Circular logging requirements (e.g. NNTP feed)
- Different backup schedules
- Hard partitioning of each company's data
- Clustering
Multi-MDB/SG Deployment
[Diagram: example disk layout, each element on its own set of spindles: System/Boot partition (C:), Pagefile (D:), Transaction Log Sets 1-4 on separate partitions (E:, F:, G:, H:), database partition for SGs 1 and 2 (M:), and database partition for SGs 3 and 4 (N:)]
Clustering
- 2-, 3-, or 4-node clustering
- Datacenter Server required for 4 nodes
- Clustering is Active/Active
- Storage Groups are cluster resources
- Dynamic mount of Storage Groups upon failure
- Run nodes at an appropriate load (see the sketch below)
- 2 nodes: 50% max load
- 3 nodes: 66% max load
- 4 nodes: 75% max load
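The load figures above follow from simple arithmetic: in an n-node Active/Active cluster, the surviving n-1 nodes must absorb a failed node's work, so no node should run above (n-1)/n of capacity. A quick sketch of that calculation (the deck truncates 2/3 to 66%):

```python
# Why the 50/66/75% figures: in an n-node Active/Active cluster the
# surviving (n-1) nodes must be able to absorb a failed node's load.
def max_node_load(n_nodes: int) -> float:
    """Maximum sustained load per node, as a fraction of capacity."""
    return (n_nodes - 1) / n_nodes

for n in (2, 3, 4):
    print(f"{n}-node cluster: run each node at no more than "
          f"{int(max_node_load(n) * 100)}%")
# 2-node cluster: run each node at no more than 50%
# 3-node cluster: run each node at no more than 66%
# 4-node cluster: run each node at no more than 75%
```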
Clustering Walkthrough
[Diagram: one cluster node fails (X); its Storage Groups (e.g. SG4-SG6) are dynamically mounted from the shared disk array by a surviving node, which keeps hosting its own Storage Groups]
Client Access
- You don't need to upgrade clients to take advantage of multiple MDBs
- Mailbox location is transparent to the client
- Clients call into the mailbox namespace
- Mailboxes can be moved around
- Mixed-mode
- Anywhere within the same Admin Group
- Native-mode
- Anywhere within the forest
Backup and Recovery
- New NTBACKUP
- Enhanced Backup API
- Granularity is at the MDB level
- A single-MDB backup also backs up the transaction logs
- You need to back up the entire Storage Group to flush the logs
- Online MDB recovery
- Assumes the transaction logs are intact
- Replays transactions that carry the appropriate MDB ID
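The recovery behaviour described above can be pictured as a filter over the Storage Group's shared log set: each log record is stamped with the database it belongs to, and an online restore of one MDB replays only the records carrying that MDB's ID. A conceptual sketch, not the real ESE log format:

```python
# Conceptual sketch of per-MDB log replay -- not the real ESE log format.
from typing import Iterable, List, NamedTuple

class LogRecord(NamedTuple):
    mdb_id: str     # which database in the Storage Group the change belongs to
    operation: str  # e.g. "insert message", "update folder"

def replay_for_mdb(log_set: Iterable[LogRecord], mdb_id: str) -> List[LogRecord]:
    """Return the records an online restore of a single MDB would replay."""
    return [rec for rec in log_set if rec.mdb_id == mdb_id]

shared_logs = [
    LogRecord("MDB1", "insert message"),
    LogRecord("MDB2", "update folder"),
    LogRecord("MDB1", "delete message"),
]
print(replay_for_mdb(shared_logs, "MDB1"))  # only MDB1's transactions replayed
```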
Front End / Back End
- Protocol and Database split
- Works with POP3, IMAP4, and HTTP
- Advantages
- Unified namespace
- Scalability / Load balancing
Front-End Deployment
[Diagram: a bank of Front-End servers accepts client requests such as /exchange/pbowden, /exchange/davidmad, and /disc/foo, looks each mailbox or folder up in the directory, and proxies the request to the back-end server that hosts it (ServerA, ServerB, or ServerC)]
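A minimal sketch of the front-end idea, with made-up server names and a hard-coded lookup table standing in for the directory: the front end holds no mailboxes, it simply resolves the user's home server and proxies the request there, which is what makes the single /exchange namespace possible.

```python
# Conceptual front-end routing sketch. Server names and the lookup table
# are made up; a real front end queries the directory for the user's
# home mailbox store rather than using a static dictionary.
HOME_SERVER = {
    "pbowden":  "ServerA",
    "larryl":   "ServerB",
    "jkenerso": "ServerC",
}

def route(url: str) -> str:
    """Map a client URL such as /exchange/pbowden to its back-end server."""
    alias = url.rstrip("/").split("/")[-1]
    backend = HOME_SERVER.get(alias)
    if backend is None:
        raise LookupError(f"no mailbox found for {alias!r}")
    return backend

print(route("/exchange/pbowden"))  # -> ServerA; the front end proxies there
```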
Web Store
- In the past
- Where do I put my data?
- Public Folders
- Web Server
- File share
- Now
- Web Store
Content Indexing
- Enabled on a per-MDB basis
- Microsoft Search Service
- Transparent to clients
- Outlook will automatically use this data
- Will also search inside attachments and embedded objects
Key Points
- Multiple MDBs provide great scalability
- Enhanced clustering
- Front-End servers provide a unified namespace for Internet clients
- Web Store and Content Indexing open up huge possibilities
- No need for clients to be upgraded
Message Routing
Terminology Buster: Routing
From 2000 Feet
- Performance
- SMTP native message transfer
- SMTP becomes a full peer of X.400
- Enhanced SMTP services
- Functionality
- New routing architecture
- Link State Algorithm
- Transport Event Sinks
Routing Groups
- Defines a set of meshed servers
- Similar to sites in Exchange Server 5.5
- All message transfer is SMTP-based
[Diagram: two Routing Groups, each a set of fully meshed servers, linked by Routing Group Connectors (RGCs)]
Planning Routing Groups
- Routing group design
- Resilient links required inside the RG
- Plan RGs on network bandwidth
- Identify traffic patterns
- Design ramifications
- No alignment with namespace
- No RPC involved
- Start with a 1:1 site-to-RG mapping
- RGs can be changed dynamically
Connector Options
[Diagram: RG1 and RG2 joined by one of three connector types]
- Routing Group Connector (int. routing)
- SMTP Connector (DNS routing)
- X.400 Connector (int. routing)
Common SMTP Questions
- Why use SMTP?
- Common standard/better interoperability
- High-throughput
- Does SMTP mean bigger messages?
- Not for most messages
- Exchange will decide on the best transport
- Is SMTP slower than what I have today?
- No, transfer will be fast over all types of network links
More Questions
- Doesn't SMTP mean no security?
- Options for authentication and encryption
- What if I want compression?
- Easy to implement through Sinks
Link State Information
- Terminology
- LSA: Link State Algorithm
- What is it?
- The new Routing mechanism
- Replaces the GWART
- Uses a mechanism similar to update sequence numbers
RG Master and Propagation
- Each RG has a Master
- Owns the table for the RG
- Immediate propagation of data
- Intra-RG propagation
- LSA uses TCP connection on port 3044
- Inter-RG propagation
- Uses X-LINK2STATE SMTP verb extension
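The propagation model can be sketched as a "newest version wins" merge, echoing the update-sequence-number comparison from the previous slide. The code below is a conceptual illustration only; it is not the X-LINK2STATE wire format or the actual LSA implementation.

```python
# Conceptual link-state merge -- not the X-LINK2STATE wire format.
# Each connector's state carries a sequence number; a server keeps an
# update only if it is newer than what it already holds, then forwards it.
from typing import Dict, Tuple

LinkStateTable = Dict[str, Tuple[str, int]]   # connector -> (state, sequence)

def apply_update(table: LinkStateTable, connector: str,
                 state: str, seq: int) -> bool:
    """Merge one entry; return True if it was new and should be re-propagated."""
    _, current_seq = table.get(connector, ("", -1))
    if seq <= current_seq:
        return False            # stale or duplicate: drop, do not forward
    table[connector] = (state, seq)
    return True                 # newer: keep and forward to peers

table: LinkStateTable = {}
apply_update(table, "RG1->RG2", "UP", 1)
apply_update(table, "RG1->RG2", "DOWN", 2)   # newer: the link is marked down
apply_update(table, "RG1->RG2", "UP", 1)     # stale: ignored
print(table)                                  # {'RG1->RG2': ('DOWN', 2)}
```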
Routing Walkthrough
[Diagram: five Routing Groups (RG1-RG5) joined by Routing Group Connectors with costs of 10 and 20; failed connectors (X) are advertised at infinite cost, so messages are rerouted over the remaining links, and if every route has infinite cost the message waits in the queue; each RG's master maintains the link state table]
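The walkthrough behaviour can be reproduced with a simple shortest-path calculation over connector costs, treating a down connector as infinite cost; if every route ends up infinite, the message waits in the queue. The topology and the 10/20 costs below are illustrative, and the routine is a plain Dijkstra search rather than the actual routing engine.

```python
# Cost-based route selection sketch; illustrative topology, not Exchange code.
import heapq

INF = float("inf")   # a down connector is advertised at infinite cost

links = {
    "RG1": {"RG2": 10, "RG3": INF},   # the RG1->RG3 connector is down
    "RG2": {"RG4": 20},
    "RG3": {"RG5": 10},
    "RG4": {"RG5": 20},
    "RG5": {},
}

def cheapest_cost(src: str, dst: str) -> float:
    """Dijkstra over connector costs; INF means 'wait in queue'."""
    best = {src: 0}
    heap = [(0, src)]
    while heap:
        cost, rg = heapq.heappop(heap)
        if rg == dst:
            return cost
        for nxt, link_cost in links.get(rg, {}).items():
            new_cost = cost + link_cost
            if new_cost < best.get(nxt, INF):
                best[nxt] = new_cost
                heapq.heappush(heap, (new_cost, nxt))
    return INF

print(cheapest_cost("RG1", "RG5"))   # 50, via RG2 and RG4; INF would mean queue
```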
Mixed-Vintage Routing
[Diagram: a routing topology containing Exchange 5.5 servers alongside Exchange 2000 servers in the same organization]
Event Sinks
- Programmatic access to SMTP/NNTP
- Easier to write than IMS extensions!
- Good documentation
- Can be written in any COM-compatible language
- Types
- Protocol
- Transport
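Real Exchange 2000 sinks are COM components (typically VBScript, VB, or C++) registered against the SMTP or NNTP service; the Python below is only a conceptual sketch of the model, with made-up names, showing how the transport fires an event and lets every registered sink inspect or modify the message.

```python
# Conceptual sketch of a transport event sink pipeline -- the function and
# type names are made up; real sinks are COM components bound to SMTP/NNTP.
from typing import Callable, Dict, List

Message = Dict[str, str]            # stand-in for the real message object
Sink = Callable[[Message], None]

registered_sinks: List[Sink] = []

def register_sink(sink: Sink) -> None:
    """Bind a sink so it runs for every message the transport handles."""
    registered_sinks.append(sink)

def on_arrival(msg: Message) -> None:
    """Fire the arrival event: each registered sink sees the message in turn."""
    for sink in registered_sinks:
        sink(msg)

def disclaimer_sink(msg: Message) -> None:
    """Example sink: append a disclaimer to every message body."""
    msg["body"] += "\n-- This message passed through a transport sink."

register_sink(disclaimer_sink)
message = {"to": "user@example.com", "body": "Hello"}
on_arrival(message)
print(message["body"])
```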
Key Points
- SMTP is good
- High performance over all links
- Routing Groups are flexible
- Can change with the infrastructure
- Native connectors
- No more RPC worries
- No more ping-pong
- Event Sinks are very cool
Questions?