Title: MOVE4: Upgrading Your Database to OpenEdge 10
1. MOVE-4: Upgrading Your Database to OpenEdge 10
Gus Björklund
Wizard, Vice President Technology
2. Audience Survey
3. Why Upgrade to OpenEdge 10 RDBMS?
4. Why Upgrade to OpenEdge 10 RDBMS?
The 10.1A RDBMS is better, stronger, faster
5. Why Upgrade to OpenEdge 10 RDBMS?
New and improved across Business Agility, High Availability, Maintenance, Security, Performance, and Limits / Scalability:
- Add table/field/index online
- New datatypes
- Add extents online
- Data archival
- XML support
- Save key events
- Replication failback
- New IDE
- New log file format
- Truncate log file online
- Defragment tables online
- OO extensions to the 4GL
- Clusters
- JTA support
- AI archiving utility
- Type II storage
- Fast quiet points
- Audit core service
- Audit archive utility
- SQL improvements
- Improved index concurrency
- Improved DB defaults
- Audit roles defined and enforced
- Adaptive transaction end locking
- Multi-threaded utilities
- Type 4 JDBC drivers
6. General Upgrade Strategy
- Backup
- Keep dbanalys and promon data for reference
- Install OpenEdge 10 (no need to overwrite V9)
- Upgrade the database to 10
- Run UPDATE STATISTICS for SQL (see the SQL sketch after this list)
- Do your backups!
- Recompile 4GL code
- Run your application
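As a hedged illustration of the UPDATE STATISTICS step: the OpenEdge SQL statement below can be issued through the sqlexp command-line client once the database is served. The database name, port, and credentials are placeholders, and in practice the statement is repeated for each table.

  sqlexp -db mydb -S 6748 -user sysprogress -password secret

  UPDATE TABLE STATISTICS AND INDEX STATISTICS AND ALL COLUMN STATISTICS
      FOR PUB.customer;
  COMMIT;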
7. The 5-Minute Upgrade
8. Database Server Conversion Steps
- Upgrade clients first, then the database server
- Preparation:
- Backup the database
- Truncate BI, disable AI
- Install OpenEdge 10
- Run the conversion utility (commands sketched below)
- Backup the database
- Start the database
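A minimal command sketch of these steps, assuming a database named mydb and backup file names of your choosing (the conv910 step must be run with the OpenEdge 10 proutil):

  probkup mydb mydb.bck
  proutil mydb -C truncate bi
  rfutil mydb -C aimage end
  proutil mydb -C conv910
  probkup mydb mydb-v10.bck
  proserve mydb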
9. The 5-Minute Rule
- Conversion program runs in place, in 5 minutes or less
- Mostly, we upgrade the schema tables
- No changes to records or indexes
- No changes to physical structures
10. Database Convert
proutil <db> -C conv910 -B 512
probkup <db>
11. What Happens After 5 Minutes?
- You can run the database!
- It will very likely run better than before
- Fragmentation may still exist
- Most of your DBA scripts should work fine
- All data is still in Type I data areas
- Optimize when time permits ...
12. Do Some More Work!
What if I want more performance than I get with the fast convert?
13. Move Data to Type II Data Areas
14. Moving to Type II Data Areas
- Before:
- First upgrade with the conversion utility
- Move the schema tables
- Create Type II data areas (structure-file sketch after this list)
- Move tables and indexes
- After:
- Truncate the old data area
- Delete the old data area
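One hedged way to create the Type II areas: describe them in a structure (.st) file, where the number after the semicolon is the blocks-per-cluster setting (512 for table and index areas, per the storage settings below), then add them with prostrct. The area names, numbers, records-per-block values, and paths here are placeholders.

  # mydb-add.st
  d "Data Area":10,64;512 /db/mydb_10.d1
  d "Index Area":11,64;512 /db/mydb_11.d1

  prostrct add mydb mydb-add.st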
15. Before the Move
16. Moving Schema Tables
- proutil <db> -C mvsch
- Renumbers the existing schema area
- Creates a new schema area, no. 6
- Copies the schema tables
- Deletes the old schema tables
17. Moving Tables and Indexes
- Tablemove and Indexmove
- Dump and Load
18Storage Settings
- data block size 4k or 8k
- bi ai block size should match each other
- Type II Data Area Cluster Sizes
- Table areas 512
- Index areas 512
- Dump and load required to change data block size
19. Using Table and Index Move
20. Table Move
proutil <db> -C tablemove owner-name.table-name table-area index-area
Can move just the table, or the table and its indexes (preferred); an example follows.
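As a concrete illustration, with a hypothetical database name and the placeholder area names from the structure-file sketch above:

  proutil mydb -C tablemove PUB.customer "Data Area" "Index Area"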
21. Index Move
proutil <db> -C idxmove owner-name.index-name area-name
22. After the Move
23. Truncating an Area
- proutil <db> -C truncate bi
- proutil <db> -C truncate area <area-name>
- Area is logically truncated
- Extents can be deleted:
prostrct remove <db> <extent-type> <area-name>
24. If You Can't tablemove: Dumping and Loading
25. Dump and Load Strategy
- Before:
- Backup
- Upgrade with the conversion utility
- Move the schema tables
- Dump definitions and tables
- Create a new database with Type II data areas
- Load definitions
- Load table data
- Build indexes
- Backup
26. Dump Data Definitions
- Use the Data Administration tool
- Dump all table and index definitions
- Produces .df files
27. Dump/Load Choices
- Dictionary Dump (to text)
- Dictionary load
- Bulkload
- Custom Dump Programs
- Custom loader
- Custom buffer-copy Programs
- Binary Dump
- Binary load
28. Dictionary Dump
- Easy
- Slow
- 2 GB file size limit
- Can't choose dump order
- Have to dump the entire table
29. Bulkload
- proutil <db> -yy n -C bulkload <fd-file> -B 1000
- Use the dictionary or custom code to export the data
- 2 GB file limit
- Loads one table at a time
- Faster than dictionary load
- Slower than binary load
30. Custom Dump Programs
- Have to write them yourself (but not too hard)
- Use the EXPORT and IMPORT statements (sketch after this list)
- Can dump a subset of the data
- Can run multiple loads concurrently
- but not into Type I data areas
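A minimal 4GL sketch of such a dump/load pair, assuming a hypothetical customer table with a cust-num index. In the loader, each REPEAT iteration is a transaction, so the ENDKEY condition raised by IMPORT at end-of-file undoes the final empty CREATE.

  /* dump.p -- export one table to a text file */
  OUTPUT TO VALUE("customer.d").
  FOR EACH customer NO-LOCK USE-INDEX cust-num:
      EXPORT customer.
  END.
  OUTPUT CLOSE.

  /* load.p -- import the file into the new database */
  INPUT FROM VALUE("customer.d").
  REPEAT TRANSACTION:
      CREATE customer.
      IMPORT customer.
  END.
  INPUT CLOSE.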
31. buffer-copy
- Have to write it yourself (sketch after this list)
- Planning required
- Can do it while applications use the old database
- Can copy part of the data
- Can choose dump order via an index
- No dump file needed
- Can load multiple tables concurrently
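A hedged sketch of the buffer-copy approach, assuming the old and new databases are both connected under the logical names olddb and newdb and have identical customer table definitions:

  /* copy.p -- copy rows directly, no intermediate dump file */
  FOR EACH olddb.customer NO-LOCK:
      CREATE newdb.customer.
      BUFFER-COPY olddb.customer TO newdb.customer.
  END.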
32. Binary Dump and Load
- Easy
- No 2 GB file size limit
- Can choose dump order (by index)
- Can dump part of data
- Can load multiple tables concurrently
33. Using Binary Dump and Load
34. Binary Dump and Load
- Dump from Type I data areas
- Create the new database structure
- Load into Type II data areas
- proutil <db> -C dump <table> <dir> -index 0
- Dump performance vs. read performance:
- choose an index based on read order instead
- 10x differences are possible
- proutil <db> -C load <table>.bd build indexes
- Load with build indexes
- Load into a truncated or new area
- (truncated rather than emptied)
35. Large Databases Dump and Load: Fastest Overall Process
- Binary dump and load
- Multiple streams (3-5 per CPU)
- Dump on the smallest index (fewest blocks)
- Index rebuild in the same step as the load
- saves one phase
- may be faster, YMMV
- Parallel load into separate Type II data areas when possible
- but a minor point
36. Large Databases Dump and Load: Fastest Overall Process
- Tune for high activity
- Dump with:
- -RO (YMMV) and a high -B
- Load with (partial command sketch after this list):
- a high -B, -r
- 1-2 APWs per CPU
- -pwqdelay 10, -pwscan 1024, -pwwmax 1024
- 16K BI blocks, large clusters
- no AI / 2PC
- At every step, spread activity across disks / controllers
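Two of these load-side settings expressed as commands, as a hedged sketch (database, table, and size values are placeholders; -biblocksize and -bi are given in KB):

  proutil newdb -C truncate bi -biblocksize 16 -bi 16384
  proutil newdb -C load customer.bd build indexes -B 10000 -r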
37. After the Load
38. After Loading You Need To
- Build indexes
- can do in groups or all at once
- by area, by table, by owner
- by area might make sense
- don't forget sort scratch space (-SS)
- Backup your database
39. Index Rebuild After Load
- Index rebuild by area, table, or schema:
proutil <db> -C idxbuild table <table>
proutil <db> -C idxbuild area <area>
- Other values (combined example after this list):
- -SG 64 (sort groups)
- -SS filename (scratch space location list)
- -TM 32 (merge buffers)
- -TB 32 (temp block size)
- -B 1000
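Putting those values together into one hedged invocation that rebuilds everything at once (the database name and scratch-space list file name are placeholders):

  proutil mydb -C idxbuild all -SG 64 -SS scratch.lst -TM 32 -TB 32 -B 1000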
40. Review
41. Summary
- The 10.1A RDBMS is better, stronger, faster even with NO application changes
- Conversion is quick
- Optimize at your leisure
- Upgrade when you go home
42. More Info
- Related sessions:
- DB8: Highly Parallel Dump and Load
- MOVE14: Migrating Your Authentication System to OpenEdge 10.1A and Beyond
- INNOV1: OpenEdge 10.1A Overview
- INNOV12: OpenEdge Database Product Roadmap
43. Want Answers?