Title: OPS-10: Moving V8/V9 RDBMS to OpenEdge 10
1 OPS-10: Moving V8/V9 RDBMS to OpenEdge 10
Rob Marshall
Principal Solutions Consultant
2 What's in OpenEdge RDBMS?
Performance
- Type II Storage Areas
- Fast Drop Temp tables
- Increased shared memory (-B up to 1 billion)
- Internal algorithmic enhancements
  - Buffers, Locks, Indexing
  - Improved APW scanning
- Auto Record Defrag
- Enhanced Txn Backout
- New Defaults
Visibility
- Log File
  - New format
  - Significant events
  - Improved management
- Db I/O by User by Object
- Database Describe
Large Database Support
- 64-bit Rowids
- 64-bit Sequences
- 64-bit Integer Datatype
- Large Index Key Entries (1970 bytes)
- 32,000 areas
- 8 TB shared memory
Datatype Support
- BLOB, CLOB
- Datetime, Datetime-TZ
- INT64 (no conversion)
3 What's in OpenEdge RDBMS?
High Availability
- Online schema adds
  - Sequences
  - Tables
  - Fields
  - Indexes w/ fast fill
- Online space management
- Enable/Disable AI online
- Enable AI Mgmt online
- HA Clusters bundled
Maintenance
- Index Rebuild
  - By area, table, active status
- .st file syntax checker
- AI Management
- Multi-threaded utilities
  - idxbuild, binary dump
- Binary Dump without index
- Binary Load performance
- Index Fix with NO-LOCK
Security
4 Agenda
- General Migration Strategy
- The Fast and Easy Upgrade
- Physical Upgrade
- Tuning Opportunity
5 Basic Steps
First detail the plan, review the plan, and test the plan; THEN execute.
- Preparation
  - Truncate BI; disable AI, 2PC, and Replication (V9)
  - Backup (V8/V9)
- Install
  - Install OpenEdge
  - Don't need to uninstall V8/V9
  - Don't overwrite your current Progress directory
- Upgrade
  - Upgrade DB to OpenEdge 10
  - Do your backups!!!!
- Recompile/re-deploy your ABL code if the client is OpenEdge
- Run the application and Test, Test and... Test
6 The Application
- Safe
  - Install OpenEdge on your existing test machine
  - proutil <test-db> -C conv910
  - Recompile application and test
- Fast and Loose
  - Upgrade a remote or local client site
  - Recompile application and test
- The Rollout
  - Upgrade remote systems with OpenEdge
  - Remote client/server, remote AppServers
  - Re-deploy the newly built application (PROPATH) and test
- ABL code needs to be recompiled/re-deployed only when upgrading the client to R10. In 3-tier configurations (with AppServer) the client could still be V9. This is not possible via SQL, or for V8.
- You will have to convert a V8 db to V9 before converting to OpenEdge 10.
7 Agenda
- General Migration Strategy
- The Fast and Easy Upgrade
- Physical Upgrade
- Tuning Opportunity
8 Deployment
- Preparation
  - Truncate BI
  - Disable AI, 2PC, Replication (V9)
  - Backup database (V8/9)
  - Validate backup
- Install OpenEdge 10 on the server machine
  - And everywhere else! (Clients must be upgraded before the server can be)
- Recompile and re-deploy application (if need be)
  - Client/server V9 to OpenEdge 10 is disallowed
  - V9 clients to an R10 AppServer to an R10 database is allowed
  - No SQL V9 to OpenEdge permitted
9 Connectivity: Progress V9 and OpenEdge 10
- Mixed ABL configurations of Progress V9 and OpenEdge 10 are supported
  - One version back (clients to servers)
  - One version forward (clients to Application Server)
- SQL must match

  Application Server | Client | Database
  -------------------+--------+---------
          10         |   10   |   10
           9         |   10   |   10
           9         |    9   |   10     (NEW in OpenEdge 10)
          10         |   10   |    9
           9         |   10   |    9

Refer to product-by-product information for further details.
10 Database Conversion
- Run conversion utility
  - proutil <db> -C conv910 -B 512
- Conversion runs in 5 minutes or less
  - Basically just a schema upgrade
  - No changes to user records or indexes
  - No changes to physical structures
- Backup database
- Re-start database and application
11 You are now Good To Go with OpenEdge
- No physical changes to existing user data
  - Data remains in Type I storage areas
  - Data fragmentation exists as before
- Optimize physical layout when time permits...
12 Agenda
- General Migration Strategy
- The Fast and Easy Upgrade
- Physical Upgrade
- Tuning Opportunity
13 Storage Areas?
- Performance, Scalability, Maintenance
- Take advantage of new features
- No adverse application effects
  - Physical reorg does NOT change the application
  - Object location is abstracted from the language by an internal mapping layer
  - Different physical deployments can run with the same compiled r-code
14 How to Get There
- Preparation (same as before)
  - Truncate BI, disable AI, backup, validate, install
- Before Physical Reorg
  - Upgrade database to OpenEdge 10
    - Conversion utility
    - prostrct create (a must if changing block size)
- Physical Updates (no r-code changes required)
  - Separate schema from user data
  - Create new storage areas
    - Specify records per block
    - Specify Type II cluster sizes
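As a back-of-the-envelope aid for the "records per block" choice above, here is a sketch in Python. The heuristic and the 20-byte per-record overhead are illustrative assumptions, not figures from this presentation; use your tabanalys mean record sizes as input.

```python
def records_per_block(block_size, mean_record_size, overhead=20):
    """Suggest a records-per-block setting (a power of 2, 1..256).

    Heuristic sketch: divide the block size by the mean record size
    plus an assumed per-record overhead, then round down to a power
    of 2, which is the form the .st file expects.
    """
    fit = block_size // (mean_record_size + overhead)
    rpb = 1
    while rpb * 2 <= fit and rpb < 256:
        rpb *= 2
    return rpb

# e.g. 8 KB blocks, ~240-byte customer records
print(records_per_block(8192, 240))   # -> 16
```

Rounding down rather than up errs toward wasting a little block space instead of fragmenting records across blocks.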
15 How to Get There
- Physical Reorg
  - Spread data out amongst new areas
  - Move indexes
  - Online options vs offline options
  - Database block size changes are offline
- After Reorg
  - Reclaim unused space
  - Truncate old data area
  - Delete old data area
16 How to Get There
- Option 1: In place (same database)
  - Transformation done all at once or over time
  - Add new storage areas to the existing database
  - Migrate data from old areas to new
  - Reclaim space from old areas
- Option 2: New database
  - Transformation done in one maintenance window
  - Dump old data
  - Create new database
  - Load into new database
  - Prodel old database
- Option 3: Mix of Option 1 and Option 2 (custom)
  - Create new database
  - Move data from old database to new database
  - Reclaim space by deleting the old database
17 Getting Started: Separate user data from schema
18 Moving schema tables
Separate schema from user data (in place):
proutil <db> -C mvsch (offline operation)
- Renames the existing Schema Area to "Old Default Area"
19 Moving schema tables
- Creates a new Schema Area
- Moves the schema tables and indexes into it
20 Moving schema tables
- You move data to new user areas
- You truncate the Old Default Area
21 Physical Changes: Location, Location, Location
- Create .st file with the new layout
- Set records per block
- Use Type II Storage Areas
  - Tables: 64- or 512-block clusters
  - Indexes: 8- or 64-block clusters

  d "Cust/Bill Indexes":7,1;8 /d_array2/myDB_7.d1 f 512000
  d "Cust/Bill Indexes":7,1;8 /d_array2/myDB_7.d2
  d "Customer Data":8,16;64 /d_array2/myDB_8.d1 f 1024000
  d "Customer Data":8,16;64 /d_array2/myDB_8.d2
  d "Billing Data":9,32;512 /d_array2/myDB_9.d1 f 1024000
  d "Billing Data":9,32;512 /d_array2/myDB_9.d2
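If you maintain many areas, .st extent lines like these can be generated rather than hand-typed. A minimal Python sketch; the helper name is mine, and the `d "Name":area,rpb;cluster path [f sizeKB]` layout it emits is a summary of the example above, so check the prostrct documentation for the authoritative grammar:

```python
def st_line(area_name, area_num, rpb, cluster, path, fixed_kb=None):
    """Build one structure-file (.st) line for a Type II data area.

    fixed_kb, when given, marks a fixed-size extent (the trailing
    'f <size-in-KB>'); omit it for a variable extent.
    """
    line = 'd "%s":%d,%d;%d %s' % (area_name, area_num, rpb, cluster, path)
    if fixed_kb is not None:
        line += ' f %d' % fixed_kb
    return line

# First extent fixed at 512000 KB, second variable (matches the slide):
print(st_line("Cust/Bill Indexes", 7, 1, 8, "/d_array2/myDB_7.d1", 512000))
print(st_line("Cust/Bill Indexes", 7, 1, 8, "/d_array2/myDB_7.d2"))
```

Always run the generated file through `prostrct add <db> new.st -validate` before applying it.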
22 Physical Changes
- Validate first
  - prostrct add <db> new.st -validate
    "The Structure file format is valid. (12619)"
- Then update
  - prostrct add <db> new.st
  - OR prostrct addonline <db> new.st
23 Moving Tables and Indexes
3 options for data movement:
- Table move and index move
  - Online (by primary index)
- Dump and Load (D&L)
  - With or without index rebuild
  - Application must be offline
- Suggestion: a mix of options 1 and 2
  - 1st: purge/archive unneeded data
  - Table move small tables (by number of blocks)
  - D&L everything else
24 Option 1: Table/Index Move
Pros and Cons
- Advantages
  - Online (with NO-LOCK access)
  - Dump and load in one step
  - Schema is automatically updated
  - No object changes to deal with
  - Parallelism
- Disadvantages
  - Only NO-LOCK access during the move
  - Moves but doesn't rebuild indexes
  - Too slow for large tables
  - Changes are logged!
    - BI growth may be of concern
25 Table Move
- proutil <db> -C tablemove [owner-name.]table-name table-area [index-area]
- Moves/reorganizes a table by its primary index
- Move a table AND its indexes
  - Preferred for performance
- Fast for small tables
26 Index Move
- proutil <db> -C idxmove [owner-name.]table-name.index-name area-name
- Moves an index from one area to another
  - Does NOT alter/optimize the index block layout
- Fast, but does not rebuild indexes
- Online, but changes to the owning table are blocked
27 Option 2: Dump and Load
3 dump-and-load flavors:
- Textual data
  - ASCII dump
  - ASCII load
  - Bulk load followed by index rebuild
- Binary
  - Binary dump
  - Binary load
    - With index rebuild
    - Followed by index rebuild
- Custom (textual or raw)
  - D&L with triggers
  - Buffer-Copy / Raw data transfer / Export/Import
  - Can be tricky; you may want help
28 Dump and Load: General Strategy
- Create new database structure
  - Add to existing DB, or
  - New database
- Run tabanalys
- Dump table data, sequence values, _User table
- Data definitions
  - Dump definitions
  - Modify storage area locations
  - Load definitions
- Load table data
- Build indexes (if needed); 10.1C can specify pack factor
- Run tabanalys
- Backup
If you have the disk space, creating a new db saves time.
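The before/after tabanalys runs in the strategy above exist to prove nothing was lost. A minimal sketch of that comparison, assuming you have already parsed the per-table record counts out of the two tabanalys reports (the parsing itself, and the sample counts, are illustrative):

```python
def compare_counts(before, after):
    """Compare per-table record counts taken from tabanalys output
    before the dump and after the load; return the tables whose
    counts differ, mapped to (expected, loaded)."""
    problems = {}
    for table, expected in before.items():
        loaded = after.get(table, 0)
        if loaded != expected:
            problems[table] = (expected, loaded)
    return problems

before = {"customer": 41210, "billing": 980342, "order": 120774}
after  = {"customer": 41210, "billing": 980000, "order": 120774}
print(compare_counts(before, after))   # -> {'billing': (980342, 980000)}
```

An empty result is your green light to take the post-load backup; any mismatch means re-checking the dump files before going live.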
29 Dumping the data
30 Dictionary Data Dump
- Database Admin tool
- OR: run prodict/dump_d.p ("<table>", "<dir>", "<codepage>").
- Advantages
  - Fast and easy
  - Parallel
  - No endian issues
- Disadvantages
  - 2 GB file size limit pre-10.1C
  - Can't choose dump order
  - Have to dump the entire table
  - Must ensure no one changes the table between dump and load
31 Using Binary Dump
- Advantages
  - Fastest, and easy
  - No 2 GB file size limit
  - No endian issues
  - Can choose dump order (by index)
  - Can dump table data in portions
  - Multi-threaded (10.1B)
  - Can dump multiple tables concurrently (parallel)
- Disadvantages
  - Must ensure no table changes between dump and load (unless using triggers as well)
32 Binary Dump Specified
- proutil <db> -C dumpspecified <table.field> <operator> <field-value1> [AND <operator> <field-value2>] <dir> -preferidx <idx-name>
  - 10.1B03 allows multiple values
- Switches
  - table.field MUST be the lead participant in the index
  - Valid operators: LT, GE, LE, GT, EQ
  - -preferidx determines the specific index to use
  - -index and -thread are ignored
- Performance
  - Threaded is preferred
  - Can run in parallel with many unique values
- Cautions
  - Avoid using descending indexes
  - There is a risk of missing a range
33 Binary Dump Specified
Finding the median value:

DEFINE VARIABLE i        AS INTEGER NO-UNDO.
DEFINE VARIABLE max-recs AS INTEGER INITIAL 0 NO-UNDO.
DEFINE VARIABLE median   AS INTEGER NO-UNDO.

FOR EACH mytable NO-LOCK USE-INDEX <preferred-idx>:
    max-recs = max-recs + 1.
END.

median = max-recs / 2.

DO i = 1 TO median:
    FIND NEXT mytable NO-LOCK USE-INDEX <preferred-idx>.
END.

DISPLAY i SUBSTR(field1, 1, 192).
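When running several dumpspecified processes in parallel, the key range has to be split so that every value falls in exactly one slice; otherwise you hit the "risk of missing a range" caution above. A Python sketch of such a partitioner; the table and field names in the printed commands are illustrative, not from the talk:

```python
def partition_ranges(low, high, n):
    """Split the half-open integer key range [low, high) into n
    roughly equal [lo, hi) slices, one per parallel dump process.
    Adjacent slices share a boundary, so every key is dumped
    exactly once and no range is missed."""
    step, rem = divmod(high - low, n)
    out, start = [], low
    for i in range(n):
        end = start + step + (1 if i < rem else 0)
        out.append((start, end))
        start = end
    return out

# e.g. cust-num 1..1,000,000 split across 4 dump processes
for lo, hi in partition_ranges(1, 1000001, 4):
    print("dumpspecified customer.cust-num GE %d AND customer.cust-num LT %d" % (lo, hi))
```

Using GE on the lower bound and LT on the upper bound keeps the slices non-overlapping without needing to know the exact key values at the boundaries.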
34 Binary Dump Threaded
- proutil <db> -C dump <table> <dir> -index <n>
  -thread 1 -threadnum <n>
  -dumpfile <filelist> -Bp 64
- -index <n>
  - Choose the index based on read order
  - -index 0
    - Faster dump, slower read
    - Assumes coming from Type II
- -thread 1 indicates a threaded dump
  - Number of threads is automatic (= number of CPUs)
  - -threadnum caps it (maximum of number of CPUs x 2)
  - Threads are only available in multi-user mode
  - Workgroup only supports 1 thread
- The -dumpfile list is used as input for the load
35 Don't forget SQL
- SQL tables and views
  - These need to be dumped and loaded as well
- You can use different commands to move the information
  - sqlschema
  - sqldump
  - sqlload
36 Data Dump Completed: Reorganize the Area/Object Configuration
37 Dump / Modify data definitions
- Use the Data Administration tool
- OR: run prodict/dump_df.p ("ALL", "<mydb>.df", "").
- If using bulk load:
  - run prodict/dump_fd.p ("ALL", "<mydb>.fd").
38 Dump / Modify data definitions
- Update .df files
  - Optionally delete the old table
  - Change the table's area information
  - Delete/drop tables
- Load data definitions
  - Data Administration tool
  - OR: run prodict/load_df.p ("<mytable>.df").
39 Alternative Data Definition Modification
If all data in the area was dumped:
- Truncate objects for fast move/delete
  - proutil <db> -C truncate area "Old Default Area"
  - Warns, then deletes data (but NOT schema)
- Rebuild/activate empty indexes (if moving)
  - proutil <db> -C idxbuild inactiveindexes
  - Can be done ONLINE; not just the build but the activate
- Move empty tables/indexes to the new area
  - proutil <db> -C tablemove <table> <area> [index-area]
40 Load the data back in (finally). Remember to load all data files. Be sure to validate the data loaded.
41 Loading
- Things to consider...
- Enable large file support
  - In the operating system (ulimit)
  - In the filesystem / volume groups
  - In the database
42 Bulkload
- proutil <db> -C bulkload <fd-file> -B 1000 -i -Mf 10
- Data input from a dictionary or custom data dump
- Mentioned here for completeness only
- Drawbacks
  - 2 GB file limit (pre-10.1C)
  - Loads one table at a time (single-user)
  - Does not insert index entries
    - Requires index rebuild as a separate step
  - No advantage over other loads
  - Slower than all other loads
43 Dictionary Load
- Data Administration tool
- OR: run prodict/load_d.p ("table1", "table1.d").
- Data input from a dictionary or custom data dump
- 2 GB file limit per load (pre-10.1C)
- Load data in parallel (to separate tables)
- Inserts index entries
  - Index tree not perfect
- Performance close to binary load + index rebuild (when loading multiple tables)
44 Binary Load
- proutil <db> -C load <table>.bd build
- Load to a new or truncated area
  - Truncated, rather than emptied
- Parallel load to different tables
  - Same or different areas without scatter (when using Type II areas)
- Optionally load with build indexes
  - Somewhat better performance
45 Binary Load
From a threaded dump or dumpspecified:
- proutil <db> -C load <table>.bd -dumplist <filename>
- Dump list file:
  /usr1/db/mytable.bd
  /usr1/db/mytable.bd2
  /usr1/db/mytable.bd3
- Must load ALL dumps (.bd, .bd2, .bd3, ...)
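Because every .bd/.bd2/.bd3... file must be loaded, it can help to build the -dumplist file mechanically instead of by hand. A sketch in Python; the helper name is mine, and it assumes the numbered-extension naming shown above:

```python
import os
import re

def collect_dump_files(directory, table):
    """Find every binary dump file for a table (<table>.bd,
    <table>.bd2, <table>.bd3, ...) in numeric order, so none is
    missed when writing the -dumplist input file."""
    pat = re.compile(re.escape(table) + r"\.bd(\d*)$")
    hits = []
    for name in os.listdir(directory):
        m = pat.match(name)
        if m:
            # plain ".bd" sorts first, then ".bd2", ".bd3", ...
            hits.append((int(m.group(1) or 1), os.path.join(directory, name)))
    return [path for _, path in sorted(hits)]
```

Writing the result one path per line gives exactly the file format the -dumplist switch expects.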
46 Tuning the Process
Tune for high (non-recoverable) activity:
- Dump with
  - -RO, a high -B and/or -Bp
  - Dump on the index with the fewest blocks (if possible)
- Load with
  - A high -B, -r or -i
  - BIW, 1.5 APWs per CPU
  - Very large BI clusters with 16K BI blocks
  - No AI/2PC
- Spread data, BI, and temp files across disks/controllers
Only use -r / -i when complete data recovery is possible.
47 After the Load
Build indexes (where applicable):
- proutil <db> -C idxbuild [all | table <table> | area <area> | schema <owner> | activeindexes | inactiveindexes] -thread n -threadnum n -T <dir> -TM n -TB <blocksize> -B n -SG n -SS <file> -pfactor n
- Many new idxbuild choices
- Helpful parameters
  - -SG 64 (sort groups)
  - -SS filename (file containing the sort file list)
  - -TM 32 (merge buffers)
  - -TB 31 (temp block size)
  - -B 1000
- Run tabanalys
  - Validate records
- Backup your database
48 Reclaim space
For areas that were emptied:
- proutil <db> -C truncate area <area-name>
  - Warns, then deletes data
- proutil <db> -C truncate area
  - Only truncates empty areas (but all of them)
- Area is logically truncated (option 2)
  - Extents can be deleted
  - prostrct remove <db> d <old-area-name>
49 After the Load
Think you're done?
- Don't forget....
- Run UPDATE STATISTICS for SQL/ODBC
50 Agenda
- General Migration Strategy
- The Fast and Easy Upgrade
- Physical Upgrade
- Tuning Opportunity
51 Tuning Opportunity
- -Bt client parameter (temp tables are Type II storage areas)
- 10.1B changes the default temp-table block size
  - From 1K to 4K
  - -tmpbsize 1 restores the old behavior (client parameter)
- Monitor BI cluster size
  - BI notes are bigger in OpenEdge 10
  - BI grow
52 In Summary
- Conversion is quick
- Physical upgrade at your leisure
- Lots of physical re-org options
- Rollout can be simple
- 10,000 customers on OpenEdge
53 Relevant Exchange Sessions
- OPS-1: DBA 101 - How Healthy is Your Database Today?
- OPS-8: Alerts, Alarms, Pages and Harbingers of Trouble
- OPS-14: Effective OpenEdge Database Configuration
- OPS-18: Data Management and Platforms Roadmap
- OPS-19: What is IPv6 and Why Should I Care
- OPS-20: Data Management and Platforms Info Exchange
- OPS-23: OpenEdge Performance Basics
- OPS-28: A New Spin on Some Old Latches
54 Questions?
55 Thank You