Title: Flashback Archive
2 High Availability
- Flashback Archive
- Data Guard
- Streams
- Online Maintenance
- Data Recovery Advisor
3 Flashback Data Archive
4 Data History and Retention
- Data retention and change control requirements are growing
  - Regulatory oversight and compliance: Sarbanes-Oxley, HIPAA, Basel II, internal audit
  - Business needs
    - Extract the temporal dimension of data
    - Understand past behavior and manage customer relationships profitably
- Failure to maintain appropriate history retention is expensive
  - Legal risks
  - Loss of reputation
- Current approaches to managing historical data are inefficient and often ineffective
5 Data History and Retention - Requirements
- Historical data needs to be secure and tamper-proof
  - Unauthorized users should not be able to access historical data
  - No one should be able to update historical data
- Easily accessible from existing applications
  - Seamless access
  - Should not require special interfaces or application changes
- Minimal performance overhead
- Optimal storage footprint
  - Historical data volume can easily grow into hundreds of terabytes
- Easy to set up historical data capture and configure retention policies
6 Managing Data History - Current Approaches
- Application or mid-tier level
  - Combines business logic and archive policies
  - Increases complexity
  - No centralized management
  - Data integrity issues if underlying data is updated directly
- Database level
  - Enabled using triggers
  - Significant performance and maintenance overhead
- External or third-party
  - Mine redo logs
  - History stored in a separate database
  - Cannot seamlessly query OLTP and history data
- None of the above approaches meets all customer requirements
- Customers are therefore forced to make significant compromises
7 Introducing Flashback Data Archive
- Transparently tracks historical changes to all Oracle data in a highly secure and efficient manner
- Historical data is stored in the database and can be retained for as long as you want
- Special kernel optimizations minimize the performance overhead of capturing historical data
- Historical data is stored in compressed form to minimize storage requirements
- Automatically prevents end users from changing historical data
- Seamless access to archived historical data using the AS OF SQL construct:

  SELECT * FROM product_information
    AS OF TIMESTAMP '02-MAY-05 12.00 AM'
    WHERE product_id = 3060;
8 Introducing Flashback Data Archive
- Extremely easy to set up - enable history capture in minutes!
- Completely transparent to applications
- Centralized and automatic management
  - Policy-based
  - Multiple tables can share the same retention and purge policies
  - Automatic purge of aged history
(Diagram: data is automatically purged once it ages past the retention period defined by the retention policy)
9 How Does Flashback Data Archive Work?
- Primary source for history is the undo data
- History is stored in automatically created history tables inside the archive
- Transactions and their undo records on tracked tables are marked for archival
  - Undo records are not recycled until history is archived
- History is captured asynchronously by a new background process (fbda)
  - Default capture interval is 5 minutes
  - Capture interval is self-tuned based on system activity
  - The process tries to maximize undo-data reads from the buffer cache for better performance
- INSERTs do not generate history records
10 Flashback Data Archive and DDL
- Possible to add columns to tracked tables
- Automatically disallows any other DDL that invalidates history
  - Dropping or truncating a table
  - Dropping or modifying a column
- Must disable archiving before performing any major changes
  - Disabling archiving discards already collected history
- Flashback Data Archive guarantees historical data capture and maintenance
  - Any operation that invalidates history or prevents historical capture will be disallowed
11 Creating a Flashback Data Archive - Enable History Tracking
- Create a tablespace (Automatic Segment Space Management is required)
- Create a flashback data archive
- Set the retention period
- Enable archiving on desired tables
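The steps above can be sketched in SQL. This is a minimal sketch; the tablespace, archive and table names (`fba_ts`, `fba1`, `oe.product_information`) and the sizes are illustrative, not from the slides:

```sql
-- Tablespace with Automatic Segment Space Management (required for the archive)
CREATE TABLESPACE fba_ts
  DATAFILE 'fba_ts01.dbf' SIZE 1G
  SEGMENT SPACE MANAGEMENT AUTO;

-- Flashback data archive with a retention policy
CREATE FLASHBACK ARCHIVE fba1
  TABLESPACE fba_ts
  QUOTA 10G
  RETENTION 5 YEAR;

-- Enable history tracking on the desired table
ALTER TABLE oe.product_information FLASHBACK ARCHIVE fba1;
```

Once the table is tracked, past versions can be queried with `AS OF` without any application changes.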
12 Managing Flashback Data Archive
- SYS_FBA_HIST_ - internal history table
  - Replica of the tracked table with additional timestamp columns
  - Partitioned for faster performance
  - No modifications allowed to internal partitions
  - Compression reduces the disk space required
  - No out-of-box indexes
  - Support for copying primary key indexes from the tracked table in later releases (TBD)
- Applications don't need to access internal tables directly
  - Use AS OF to seamlessly query history
- Alerts generated when a flashback data archive is 90% full
- Automatically purges historical data after expiration of the specified retention period
  - Supports ad-hoc purge by administrators (privileged operation)
13 Summary
- Managing historical data should no longer be an onerous task
- Flashback Data Archive provides a secure, efficient, easy-to-use and application-transparent solution
  - Easy to implement
  - Centralized, integrated and queryable
  - Highly storage- and performance-efficient
  - Automatic, policy-based management
- Reduces the cost of compliance
- Can be used for a variety of other purposes
  - Auditing, human error correction, etc.
14 Data Guard
15 Data Guard Enhancements
- Better standby resource utilization
- Enhanced HA/DR functionality
- Improved performance
Data Guard becomes an integral part of IT operations
16 Physical Standby with Real-Time Query
(Diagram: the primary database continuously ships and applies redo to the physical standby database, which concurrently serves real-time queries)
- Read-only queries on the physical standby, concurrent with redo apply
- Supports RAC on primary and/or standby
- Queries see transactionally consistent results
- Handles all data types, but not as flexible as logical standby
17 Real-Time Query Benefits
- Immediate appeal to a large installed customer base for physical standby
- Allows leveraging existing physical standby assets for excellent business use
- Satisfies several customers with specific requirements
  - Telecom: service schedules for technicians
  - Medical: access patient medical reports
  - Finance: provide management-level reports
  - Transportation: provide responses to package-tracking queries
  - Web business: scale-out read access for catalog browsing
- Significant differentiator compared to storage mirroring
  - Mirror volumes are offline during mirroring
18 Snapshot Standby - Leverage the Standby Database for Testing
- Preserves zero data loss, although no real-time query or fast failover
- Truly leverages DR hardware for multiple purposes
- Similar to storage snapshots, but provides DR at the same time and uses a single copy of storage
19 Snapshot Standby - Easier than the Manual Steps in 10.2
- 10.2
  - Standby:
    - alter database recover managed standby database cancel
    - create restore point before_lt guarantee flashback database
  - Primary:
    - alter system archive log current
    - alter system set log_archive_dest_state_2=defer
  - Standby:
    - alter database activate standby database
    - startup mount force
    - alter database set standby database to maximize performance
    - alter system set log_archive_dest_state_2=defer
    - alter database open
    - PERFORM TESTING, ARCHIVE LOGS NOT SHIPPED
    - startup mount force
    - flashback database to restore point before_lt
    - alter database convert to physical standby
    - startup mount force
    - alter database recover managed standby database disconnect from session
- 11.1
  - Standby:
    - alter database convert to snapshot standby
    - PERFORM TESTING, ARCHIVE LOGS CONTINUE TO BE SHIPPED
    - alter database convert to physical standby
20 Use Physical Standby to Detect Lost Writes
- Use the new initialization parameter db_lost_write_protect
- Compares versions of blocks on the standby with those in the incoming redo stream
- A version discrepancy implies lost writes
- Can use the standby to fail over and restore data consistency
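Enabling the check is a one-line parameter change. A minimal sketch, assuming the documented 11g values (`NONE`, `TYPICAL`, `FULL`); the parameter should be set on both the primary and the standby:

```sql
-- Record block-read versions in the redo stream (primary) and
-- compare them against standby blocks during redo apply (standby)
ALTER SYSTEM SET db_lost_write_protect = TYPICAL SCOPE = BOTH;
```

`FULL` extends the checking to reads of blocks changed by direct-path operations, at a higher overhead.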
21 Enhanced Data Guard Security
- SYS user and password files no longer required for redo transmission authentication
- Authentication possible using SSL - requires ASO, OID
  - Uses PK certificates
  - Requires all Data Guard databases to be in the same enterprise domain
- Authentication still possible using a password file (default)
- A non-SYS user can be specified through the redo_transport_user parameter
  - This user must have the SYSOPER privilege
  - Requires the password for this user to be the same at the primary and all standbys
  - Upon SYSDBA/SYSOPER changes, the password file must be copied from the primary to all physical standby databases
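A hedged sketch of setting up a non-SYS transport user (the user name `redo_xport` is illustrative; the same password must be used at the primary and every standby, per the slide):

```sql
-- Dedicated user for redo transport authentication, with SYSOPER only
CREATE USER redo_xport IDENTIFIED BY "Same_Password_Everywhere1";
GRANT SYSOPER TO redo_xport;

-- Tell Data Guard to authenticate redo transmission as this user
ALTER SYSTEM SET redo_transport_user = REDO_XPORT SCOPE = BOTH;
```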
22 Enhanced SQL Apply
- Support for
  - XMLType data type (CLOB)
  - Transparent Data Encryption (TDE)
  - DBMS_FGA (Fine-Grained Auditing)
  - DBMS_RLS (Virtual Private Database)
  - Role-specific DBMS_SCHEDULER jobs (PRIMARY, LOGICAL STANDBY, BOTH)
- Dynamic SQL Apply parameter changes
- Support for parallel DDL execution on the standby database
23 Enhanced Fast-Start Failover
- Supports Maximum Performance (ASYNC) mode
  - Automatic failover for a long-distance standby
  - Data-loss exposure limited using the Broker property FastStartFailoverLagLimit (default 30 seconds, minimum 6 seconds)
- Immediate fast-start failover for user-configurable health conditions
  - ENABLE FAST_START FAILOVER CONDITION <value>
  - Condition examples:
    - Datafile Offline
    - Corrupted Controlfile
    - Corrupted Dictionary
    - Inaccessible Logfile
    - Stuck Archiver
    - Any explicit ORA-xyz error
- Apps can request fast-start failover using the API DBMS_DG.INITIATE_FS_FAILOVER
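The two mechanisms above can be sketched as follows. The condition string and the PL/SQL reason text are illustrative; `DBMS_DG.INITIATE_FS_FAILOVER` returns a status code:

```sql
-- In DGMGRL: fail over immediately if the archiver becomes stuck
-- DGMGRL> ENABLE FAST_START FAILOVER CONDITION "Stuck Archiver";

-- From the application: request an immediate fast-start failover
DECLARE
  status BINARY_INTEGER;
BEGIN
  status := DBMS_DG.INITIATE_FS_FAILOVER('Application-detected failure');
  DBMS_OUTPUT.PUT_LINE('Failover request status: ' || status);
END;
/
```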
24 Data Guard Performance Improvements
- Faster failover
  - Failover in seconds with Fast-Start Failover
- Faster redo transport
  - Optimized async transport for Maximum Performance mode
  - Redo transport compression for gap fetching - new COMPRESSION attribute for log_archive_dest_n
- Faster redo apply
  - Parallel media recovery optimization
- Faster SQL Apply
  - Internal optimizations
- Fast incremental backup on physical standby database
  - Support for block change tracking
25 Backup and Recovery Enhancements for Data Guard
- Better manageability
  - Make persistent RMAN configurations for primary/standby databases by just connecting to the catalog, without connecting to each database
  - Archived logs can be configured to be deleted when they have been shipped to or applied on any standby database
  - Backups can be taken on any combination of primary or physical standby databases
  - Backup control file can be restored directly for any standby database
26 Rolling Database Upgrades Using Transient Logical Standby
- Start rolling database upgrades with physical standbys
- Temporarily convert the physical standby to a logical standby to perform the upgrade
- Data type restrictions are limited to the short upgrade window
- No need for a separate logical standby for the upgrade
- Also possible in 10.2 (more manual steps)
(Diagram: Physical → Logical → Upgrade → Physical. Leverage your physical standbys!)
27 Streams
28 Streams Overview
(Diagram: changes captured from the redo logs at the source database are propagated and applied at the target database; a Transparent Gateway extends apply to a non-Oracle database)
- All sites active and updateable
- Automatic conflict detection, optional resolution
- Supports data transformations
- Flexible configurations: n-way, hub-and-spoke, ...
- Database platform / release / schema structure can differ
- Provides HA for applications where update conflicts can be avoided or managed
29 Streams Enhancements in Oracle Database 11g
- Additional data type support
- Table data comparison
- Synchronous capture
- Manageability and diagnosability improvements
- Performance improvements
- Streams AQ enhancements
30 Newly Supported Data Types
- XMLType
  - Storage: CLOB
- Transparent Data Encryption (TDE)
  - Default: capture TDE → apply TDE
  - PRESERVE_ENCRYPTION apply parameter controls behaviour when destination columns are not encrypted
31 Table Data Comparison
- Compare data between live sources (DBMS_COMPARISON)
- Compare 11.1 with 10.1, 10.2 or 11.1
- Recheck
  - In-flight data
  - Rows that are different
- Converge feature
  - Identify the "truth" database (local or remote) for row diffs
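A minimal sketch of a comparison run. The comparison name, schema, table and database link (`cmp_products`, `OE.PRODUCT_INFORMATION`, `remote_site`) are illustrative, not from the slides:

```sql
DECLARE
  scan_info  DBMS_COMPARISON.COMPARISON_TYPE;
  consistent BOOLEAN;
BEGIN
  -- Define a comparison between a local table and its copy over a db link
  DBMS_COMPARISON.CREATE_COMPARISON(
    comparison_name => 'cmp_products',
    schema_name     => 'OE',
    object_name     => 'PRODUCT_INFORMATION',
    dblink_name     => 'remote_site');

  -- Scan both sides; record per-row differences for later recheck/converge
  consistent := DBMS_COMPARISON.COMPARE(
    comparison_name => 'cmp_products',
    scan_info       => scan_info,
    perform_row_dif => TRUE);

  IF NOT consistent THEN
    -- Rows differ; DBMS_COMPARISON.CONVERGE can then make one side win
    DBMS_OUTPUT.PUT_LINE('Differences found, scan id ' || scan_info.scan_id);
  END IF;
END;
/
```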
32 Synchronous Capture
- Available in all editions of Oracle Database 11g
- Efficient internal mechanism to immediately capture changes
  - Changes captured as part of the user transaction
  - DML only
  - LCRs enqueued persistently to disk
- When to use
  - Replicating a few low-activity tables of a highly active source database
  - Capture from the redo logs cannot be implemented
- Created with DBMS_CAPTURE_ADM.CREATE_SYNC_CAPTURE
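A hedged setup sketch (queue, process and table names are illustrative). In 11g, adding a table rule with `streams_type => 'sync_capture'` creates the synchronous capture process implicitly if it does not exist yet:

```sql
BEGIN
  -- Queue that will hold the persistently enqueued LCRs
  DBMS_STREAMS_ADM.SET_UP_QUEUE(
    queue_table => 'strm_queue_tab',
    queue_name  => 'strm_queue');

  -- Capture DML on one low-activity table as part of each user transaction
  DBMS_STREAMS_ADM.ADD_TABLE_RULES(
    table_name   => 'oe.product_information',
    streams_type => 'sync_capture',
    streams_name => 'sync_cap',
    queue_name   => 'strm_queue');
END;
/
```

`DBMS_CAPTURE_ADM.CREATE_SYNC_CAPTURE` can be used instead when an explicit rule set is preferred.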
33 Streams Performance Advisor
- Auto-discovery of the Streams topology across multiple databases
- Automatic performance analysis across all databases
- Per-stream analysis
  - Time-based analysis of each component (waits, CPU, etc.) using ASH
  - Bottleneck components
  - Top wait events of the bottleneck
- Per-component analysis
  - Throughput and latency
  - Aborted or enabled
- Integrated with ADDM
- Streams errors are integrated with server-generated alerts
34 Split and Merge of Streams
- Challenge
  - With hub-and-spoke configurations, when one destination is unavailable, all destinations are hit with a performance impact because the capture queue spills to disk after 5 minutes
- Solution
  - Split the queue between live and down destinations
  - Merge the queues after recovery
  - Maintains high performance for all replicas
  - Automated, fast catch-up for the unavailable replica
35 Streams Hub with 3 Spokes
(Diagram: the source database enqueues LCRs; propagations A, B and C deliver them to the apply process at each spoke)
36 Split Streams - Site A Unavailable
(Diagram: the stream for unavailable site A is split off from the shared queue, so sites B and C are unaffected)
37 Split Streams - Site A Available
(Diagram: once site A is available again, its split stream catches up)
38 Merge Streams - Original Configuration
(Diagram: the caught-up stream is merged back, restoring the original hub-and-spoke configuration with propagations A, B and C)
39 Streams - More Manageability Improvements
- Automated alerts
  - Abort of capture, propagation, or apply
  - First error in DBA_APPLY_ERROR
- Propagation uses the scheduler
- Improved error messages
  - ORA-1403 → ORA-26786 or ORA-26787
  - Custom DML handlers need to handle these new exceptions
  - More detail added to many error messages
- Cross-database LCR tracking
  - Trace Streams messages from start to finish
  - DBMS_CAPTURE_ADM.SET_MESSAGE_TRACKING('mylabel')
  - V$STREAMS_MESSAGE_TRACKING
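A sketch of the tracking workflow (the label is illustrative; verify the view's column list against your release):

```sql
-- Label all LCRs generated in this session so they can be traced end to end
BEGIN
  DBMS_CAPTURE_ADM.SET_MESSAGE_TRACKING(tracking_label => 'mylabel');
END;
/

-- Later, on each database along the path, follow the labelled messages
SELECT tracking_label, component_name, component_type, action
  FROM V$STREAMS_MESSAGE_TRACKING;
```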
40 Streams Performance
- CERN reporting >5000 txns/s in 10.2.0.3
  - OpenLAB presentation: http://openlab-mu-internal.web.cern.ch/openlab-mu-internal/Documents/3_Presentations/Slides/2007/DW_openlab_qr1_2007.pdf
- 11g performance improvements
  - Common case 10.2 → 11.1: almost double
41 Streams Advanced Queuing (AQ) New Features
- JMS/AQ performance improvements
- Direct Streams AQ support in JDBC
- Scalable event notification
  - Grouping notifications by time
  - Multiple notification processes for scalability
- Improved manageability
  - Scheduler support
  - Performance views
42 Flashback Transaction
43 Flashback Transaction
- Automatically finds and backs out a transaction and all its dependent transactions
- Utilizes undo, archived redo logs, and supplemental logging
- Finalize changes with commit, or roll back
- Dependent transactions include
  - Write after write
  - Primary key insert after delete
- Faster and easier than the laborious manual approach
- DBMS_FLASHBACK.TRANSACTION_BACKOUT
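A hedged sketch of backing out one transaction and its dependents. The XID value is illustrative; in practice it would come from FLASHBACK_TRANSACTION_QUERY:

```sql
DECLARE
  -- Transaction id(s) to back out (hypothetical value)
  xids SYS.XID_ARRAY := SYS.XID_ARRAY('05000B00A4080000');
BEGIN
  -- CASCADE also backs out the dependent transactions
  DBMS_FLASHBACK.TRANSACTION_BACKOUT(
    numtxns => 1,
    xids    => xids,
    options => DBMS_FLASHBACK.CASCADE);
  -- Inspect the result, then finalize with COMMIT (or ROLLBACK to discard)
  COMMIT;
END;
/
```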
44 Flashback Transaction - Enterprise Manager Support
45 Data Recovery Advisor
46 Data Recovery Advisor - The Motivation
(Diagram: Investigation → Planning → Recovery cycle)
- Oracle provides robust tools for data repair
  - RMAN: physical media loss or corruptions
  - Flashback: logical errors
  - Data Guard: physical or logical problems
- However, problem diagnosis and choosing the right solution can be error-prone and time-consuming
- Errors are more likely during emergencies
47 Data Recovery Advisor
- Oracle Database tool that automatically diagnoses data failures, presents repair options, and executes repairs at the user's request
- Determines failures based on symptoms
  - E.g. an open failed because datafiles f045.dbf and f003.dbf are missing
- Failure information recorded in the diagnostic repository (ADR)
- Flags problems before the user discovers them, via automated health monitoring
- Intelligently determines recovery strategies
  - Aggregates failures for efficient recovery
  - Presents only feasible recovery options
  - Indicates any data loss for each option
- Can automatically perform selected recovery steps
- Reduces downtime by eliminating confusion
48 Data Recovery Advisor - Enterprise Manager Support
49 Data Recovery Advisor - RMAN Command-Line Interface
- RMAN> list failure
  - Lists all previously detected failures
- RMAN> advise failure
  - Presents recommended recovery options
- RMAN> repair failure
  - Repairs database failures (defaults to the first repair option from the most recent ADVISE FAILURE)
- RMAN> change failure 5 priority low
  - Changes failure priority (with the exception of critical-priority failures, e.g. missing control file)
50 Recovery Manager, Ultra-Safe Mode and Online Operations
51 RMAN Enhancements
- Better performance
  - Intra-file parallel backup and restore of single data files (multi-section backup)
  - Faster backup compression (ZLIB, 40% faster)
- Better security
  - Virtual private catalog allows consolidation of RMAN repositories while maintaining separation of responsibilities
- Lower space consumption
  - Duplicate a database or create a standby database over the network, avoiding intermediate staging areas
- Integration with the Windows Volume Shadow Copy Service (VSS) API
  - Allows the database to participate in snapshots coordinated by VSS-compliant backup-management tools and storage products
  - Database is automatically recovered upon snapshot restore via RMAN
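Multi-section backup can be sketched with one RMAN command; the section size, channel count and datafile number are illustrative:

```sql
-- RMAN: split one large datafile into 500 MB sections and back the
-- sections up in parallel across the configured channels
-- RMAN> CONFIGURE DEVICE TYPE DISK PARALLELISM 4;
-- RMAN> BACKUP SECTION SIZE 500M DATAFILE 6;
```

Each section becomes its own backup piece, so restore can also proceed in parallel.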
52 Ultra-Safe Mode
- The DB_ULTRA_SAFE parameter provides an easy way to turn on the safest mode. It affects the default values of the following parameters:
  - DB_BLOCK_CHECKING, which initiates checking of database blocks. This check can often prevent memory and data corruption.
  - DB_BLOCK_CHECKSUM, which initiates the calculation and storage of a checksum in the cache header of every data block when writing it to disk. Checksums assist in detecting corruption caused by underlying disks, storage systems or I/O systems.
  - DB_LOST_WRITE_PROTECT, which initiates checking for "lost writes". A data block lost write occurs when the I/O subsystem signals the completion of a block write that has not actually been completed in persistent storage; a physical standby database can detect it, because the corresponding write did complete on the primary database.
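A minimal sketch of enabling the mode, assuming the documented values `OFF`, `DATA_ONLY` and `DATA_AND_INDEX`:

```sql
-- One switch that raises the defaults of DB_BLOCK_CHECKING,
-- DB_BLOCK_CHECKSUM and DB_LOST_WRITE_PROTECT to their safest settings;
-- static parameter, so it takes effect at the next restart
ALTER SYSTEM SET db_ultra_safe = DATA_AND_INDEX SCOPE = SPFILE;
```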
53 Online Operations - Redefinition Improvements
- Fast add column with default value
- Invisible indexes speed application migration and testing
- Online index build with NO pause to DML
- No recompilation of dependent objects when online redefinition does not logically affect them
- More resilient execution of table DDL operations
- Support for online redefinition of tables with materialized view logs
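The first two items can be sketched in SQL; the table, column and index names are illustrative:

```sql
-- 11g fast add column: adding a NOT NULL column with a default is a
-- metadata-only change; existing rows are not physically updated
ALTER TABLE orders ADD (status VARCHAR2(10) DEFAULT 'NEW' NOT NULL);

-- Invisible index: maintained by DML but ignored by the optimizer,
-- so its effect can be tested without disturbing existing plans
CREATE INDEX orders_status_ix ON orders (status) INVISIBLE;
ALTER INDEX orders_status_ix VISIBLE;
```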
55 High Availability
- Flashback Archive
- Data Guard
- Streams
- Online Maintenance
- Data Recovery Advisor