Title: Scalability and Reliability using Oracle Real Application Clusters
1. Scalability and Reliability using Oracle Real Application Clusters
- Suresh Chaganti
- Syed Asharaf
2. Agenda
- Introductions
- Real Application Clusters Defined
- Architecture of Real Application Cluster
- Memory Structures
- Technology Stack Components
- Storage Considerations
- Pre-Installation Tasks
- Installing Real Application Clusters
- Case Study I: Benchmark Statistics with Oracle 10g
- Case Study II: Migrating to RAC on 11i
- Closing Thoughts
3. About Cozent
- Cozent is an Oracle services firm based in Naperville, IL.
- Cozent specializes in implementation of the Oracle E-Business Suite, design of high availability solutions using Oracle RAC, and support services for Oracle Databases and the E-Business Suite.
4. Real Application Clusters
- A cluster comprises multiple interconnected computers or servers that appear as one server to end users and applications.
- Single-instance: a one-to-one relationship between the Oracle database and the instance.
- RAC environments: a one-to-many relationship between the database and instances.
- The combined processing power of the multiple servers can provide greater throughput, scalability, and availability.
- Oracle Clusterware is a portable cluster management solution that is integrated with the Oracle database.
5. Real Application Clusters - Architecture
- Two or more database instances, each with its own memory structures and background processes
- A RAC database is a logically or physically shared-everything database
- All datafiles, control files, PFILEs, and redo log files in RAC environments must reside on cluster-aware shared disks
- At least one additional thread of redo for each instance (see the SQL sketch after this list)
- An instance-specific undo tablespace
- A private interconnect is required between the cluster nodes
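A hedged sketch of the per-instance objects mentioned above, as they could be added for a second instance. The instance name RAC2, the disk group +DATA, and all sizes are illustrative placeholders, not values from the case studies.

    sqlplus / as sysdba <<'EOF'
    -- add a second thread of redo for instance 2 (group numbers and sizes are examples)
    ALTER DATABASE ADD LOGFILE THREAD 2
      GROUP 3 ('+DATA') SIZE 100M,
      GROUP 4 ('+DATA') SIZE 100M;
    ALTER DATABASE ENABLE PUBLIC THREAD 2;
    -- create an instance-specific undo tablespace for instance 2
    CREATE UNDO TABLESPACE UNDOTBS2 DATAFILE '+DATA' SIZE 500M AUTOEXTEND ON;
    -- bind the new thread and undo tablespace to instance RAC2
    ALTER SYSTEM SET thread=2 SCOPE=SPFILE SID='RAC2';
    ALTER SYSTEM SET undo_tablespace='UNDOTBS2' SCOPE=SPFILE SID='RAC2';
    EOF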
6. The Real Application Clusters Memory Structures
- Each instance has a buffer cache in its System Global Area (SGA).
- Using Cache Fusion, RAC environments logically combine each instance's buffer cache to enable the instances to process data as if the data resided in a single, combined cache.
- The SGA size requirements for RAC are greater than the SGA requirements for single-instance Oracle databases due to Cache Fusion.
7. RAC High Availability Components
- Voting Disk
- Manages cluster membership by way of a health check
- RAC uses the voting disk to determine which instances are members of the cluster
- Must reside on shared disk
- Oracle Cluster Registry (OCR)
- Maintains cluster configuration information, as well as configuration information about any cluster database within the cluster
- Must reside on shared disk (both components can be inspected with the commands after this list)
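Once Clusterware is running, both components can be checked from the command line; a minimal sketch using the 10g CRS utilities (output layout varies by release):

    # list the configured voting disk(s)
    crsctl query css votedisk
    # check OCR integrity and report its location
    ocrcheck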
8. Installing Oracle Clusterware and Real Application Clusters
- Pre-Installation
- User equivalence
- Network connectivity
- Directory and file permissions
- Use the Cluster Verification Utility to check requirements
- Two-phase installation
- Oracle Clusterware installation
- Use Oracle Universal Installer (OUI) to install Oracle Clusterware
- Installed in a separate home, ORA_CRS_HOME
- Real Application Clusters installation and database creation
- Use Oracle Universal Installer (OUI) to install the RDBMS
- Install the RDBMS in a separate ORACLE_HOME
- Use DBCA to create and configure databases
9. Storage Considerations
- Types of files to consider:
- Voting disk
- Oracle Cluster Registry
- Shared database files
- Oracle software
- Recovery files
- Any combination of the supported storage options can be used for these files
- For each file system choice, the disks/devices need to be partitioned and configured; see the installation manual for full details
10. Certification Matrix
- This is leading-edge technology; pay close attention to the certification matrix
- OCFS2 is not yet certified for Linux AS 4 64-bit, Itanium, POWER, and s390x (at the time of writing)
- Oracle may not have tested ALL hardware configurations/vendors
11. Pre-Installation Tasks
- Use CVU to confirm that all pre-installation requirements are met:
- /dev/dvdrom/crs/Disk1/cluvfy/runcluvfy.sh stage -pre crsinst -n node1,node2
- cluvfy -help (to display help and command information)
- Xserver availability (to run OUI in GUI mode)
- Ensure the required operating system groups (dba, oinstall) and users (oracle, nobody) exist
- Ensure the oracle user can ssh to the local node and remote nodes without being prompted for a password; this is the single most important thing to check (see the sketch after this list)
- Ensure the date is synchronized on the cluster nodes using NTP or some other common mechanism; if the date settings on the nodes differ, the installation may fail while copying binaries to the remote nodes
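A minimal sketch of the user-equivalence and clock check, assuming two nodes named node1 and node2 and that ssh keys are already in place:

    # run as the oracle user; every command should complete without a password prompt
    for node in node1 node2; do
      ssh $node 'hostname; date'
    done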
12. Pre-Installation Tasks - Continued
- Ensure enough RAM, swap, and TMP space
- Requirements are installation specific; refer to the installation manual
- Two network adapters, one each for the public connection and the private interconnect between the nodes
- Configure /etc/hosts so that entries for the private, public, and VIP hostnames of ALL nodes are available on each node (see the sample entries after this list)
- Ensure that all required RPMs are installed
- Check the latest release notes for errata
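Illustrative /etc/hosts entries for a two-node cluster; the hostnames and addresses below are placeholders, not values from the case studies:

    # public interfaces
    192.168.1.101   node1
    192.168.1.102   node2
    # private interconnect
    10.0.0.1        node1-priv
    10.0.0.2        node2-priv
    # virtual IPs managed by Oracle Clusterware
    192.168.1.111   node1-vip
    192.168.1.112   node2-vip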
13. Pre-Installation Tasks - Continued
- Configure kernel parameters as per the installation guide (an example of typical settings follows this list)
- Create the ORACLE_BASE and ORACLE_HOME directories
- Configure the oracle user's environment
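A hedged example of the kind of settings involved; the values below are typical of 10g-era Linux installation guides and are illustrative only, so take the exact values from the guide for your release and platform:

    # /etc/sysctl.conf excerpts (example values only)
    kernel.shmmax = 2147483648
    kernel.shmall = 2097152
    kernel.sem = 250 32000 100 128
    fs.file-max = 65536
    net.ipv4.ip_local_port_range = 1024 65000

    # oracle user's ~/.bash_profile excerpts (paths are placeholders)
    export ORACLE_BASE=/u01/app/oracle
    export ORACLE_HOME=$ORACLE_BASE/product/10.2.0/db_1
    export PATH=$ORACLE_HOME/bin:$PATH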
14. Final Checks
- Run the Cluster Verification Utility after completing the pre-installation tasks to ensure that the following aspects are covered (individual component checks are sketched after this list):
- Node Reachability: all of the specified nodes are reachable from the local node
- User Equivalence: the required user equivalence exists on all of the specified nodes
- Node Connectivity: connectivity exists between all the specified nodes through the public and private network interconnections, and at least one subnet exists that connects each node and contains public network interfaces that are suitable for use as virtual IPs (VIPs)
- Administrative Privileges: the oracle user has the proper administrative privileges to install Oracle Clusterware on the specified nodes
- Shared Storage Accessibility: the OCR device and voting disk are shared across all the specified nodes
- System Requirements: all system requirements are met for installing Oracle Clusterware software, including kernel version, kernel parameters, memory, swap space, temporary directory space, and required users and groups
- Kernel Packages: all required operating system software packages are installed
- Node Applications: the virtual IP (VIP), Oracle Notification Service (ONS), and Global Services Daemon (GSD) node applications are functioning on each node
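These areas can also be verified one at a time with CVU component checks; a brief sketch, with node names as placeholders (run cluvfy -help for the full option list):

    cluvfy comp nodereach -n node1,node2              # node reachability
    cluvfy comp nodecon -n node1,node2                # node connectivity
    cluvfy comp admprv -n node1,node2 -o user_equiv   # user equivalence
    cluvfy comp sys -p crs -n node1,node2             # system requirements for Clusterware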
15. Installing Clusterware and RDBMS
- Very straightforward with Oracle Universal Installer if all the pre-installation requirements are met
- Run in two phases: first to install Clusterware, second to install the Oracle RAC RDBMS software
- Run DBCA to configure the ASM instance (if the storage option chosen is ASM)
- Run DBCA to create the database (a quick post-creation check is sketched below)
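After DBCA completes, a quick sanity check of the cluster resources; the database name racdb and node name node1 are placeholders:

    srvctl status database -d racdb     # all instances should report as running
    srvctl status nodeapps -n node1     # VIP, GSD, ONS and listener on a node
    crs_stat -t                         # summary of all CRS-managed resources (10g)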
16. Case Study I: Benchmark Statistics for 10g R2 on RAC
- Background
- Client is in the healthcare industry, analyzing a high volume of claims and eligibility information along with a transaction processing application for disease management
17. Before and After RAC
- Environment before RAC
- Custom application running on a single-node, Windows-based Oracle 9i RDBMS
- Reporting and data warehouse requirements met from the same database
- Performance and scalability issues
- Environment after RAC
- 3-node 10g R2 RAC system on Linux AS 3 running on 2-way AMD 64-bit processor machines
- 9 GB main memory on all 3 nodes
- 4.5 GB SGA configured
- Instances separated for production and data warehouse
- ASM configured for database files
18. Benchmark Statistics on Stress Testing
- Used a TPC-C compliant set of data
- Simulates a real workload by allowing a large number of concurrent connections performing user-defined transactions simultaneously
- Used to gauge server performance under a CPU- and memory-intensive Oracle database workload
- The middle 80% of transactions are used for the benchmark readings
19. Test Configuration
- Executed in several iterations of 5, 10, 25, 50, 100, 150, 200, and 250 sessions
- Each iteration executed 100,000 transactions
- Initially executed on a single node at a time
- The test was then repeated with sessions connected across the three available nodes depending on load
20. Results
- The server handled various database loads optimally and scaled very well
- The three-node RAC delivered steady database throughput and was able to handle a user load of up to 800 concurrent users (the configured limit)
- Loads larger than 800 users resulted in errors because of the database configuration limit of 800 users
- The database could be altered if needed to increase the user limit
Sessions    TPS
5           4744
10          6735
25          6776
50          6735
100         6701
150         6617
200         6583
250         6575
300         6567
500         6531
21. Case Study II: Migrating to 11i RAC on a 9i Database
- Background
- Client is in the discrete manufacturing industry with plants all over North America. The requirement was for a 24x7 environment running the Oracle 11i (11.5.8) E-Business Suite application
22. Environment and Configuration
- Oracle 11.5.8 E-Business Suite application
- Oracle database version 9.2.0.7
- Oracle Cluster File System (OCFS) V1
- 4-node environment
- 2 database nodes
- 2 application server nodes
- Concurrent Manager tier on the 2 database nodes
- Reports and Forms tier on the 2 application server nodes
- Parallel Concurrent Processing configured
- Virtual host name configured
23. Key Steps in Implementing RAC on 11i
- Configure SAN storage with shared disks
- Install and configure OCFS
- Install Oracle Cluster Manager
- Install Oracle 9i (9.2.0.4) and upgrade the database to the latest release
- Clone the production instance to the new servers
- Apply all prerequisite patches to the cloned production instance for converting to RAC (e.g., ATG_PF.H, AD.I)
- Convert the database to RAC
- Configure the application tier for RAC
- Create database instance 2 on node 2
- Create application tier 2 on node 2
- Configure the virtual host and configure the application to use the virtual host name
- Configure PCP (Parallel Concurrent Processing)
24. Critical Step: Enabling AutoConfig
- Copy the appsutil, appsoui, and oui22 directories from the OLD_ORACLE_HOME to the NEW_ORACLE_HOME
- Set the environment variables ORACLE_HOME, LD_LIBRARY_PATH, and TNS_ADMIN to point to the NEW_ORACLE_HOME; set the ORACLE_SID variable to the instance name running on this database node
- Shut down the instance and the database listener
- Start the instance using the parameter file init<SID>.ora, then start the database listener
- Generate the instance-specific XML file using NEW_ORACLE_HOME/appsutil/bin/adbldxml.sh tier=db appsuser=<APPS user> appspasswd=<APPS password>
- Execute the AutoConfig utility (adconfig.sh) on the database tier from NEW_ORACLE_HOME/appsutil/bin; verify the log file located at NEW_ORACLE_HOME/appsutil/log/<context_name>/<MMDDhhmm> (these steps are condensed into commands after this list)
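A condensed, hedged sketch of the steps above as shell commands; the ORACLE_HOME path, SID, context file name, and APPS credentials are placeholders (the context file normally follows the <SID>_<hostname>.xml convention):

    # point the environment at the new ORACLE_HOME
    export ORACLE_HOME=/u01/app/oracle/product/9.2.0
    export LD_LIBRARY_PATH=$ORACLE_HOME/lib
    export TNS_ADMIN=$ORACLE_HOME/network/admin
    export ORACLE_SID=PROD1

    # build the instance-specific context file, then run AutoConfig on the database tier
    cd $ORACLE_HOME/appsutil/bin
    ./adbldxml.sh tier=db appsuser=apps appspasswd=appspwd
    ./adconfig.sh contextfile=$ORACLE_HOME/appsutil/PROD1_node1.xml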
25. Critical Step: Converting the Database to RAC
- Execute the AutoConfig utility on the application tier; verify the AutoConfig log file located at APPL_TOP/admin/<context_name>/log/<MMDDhhmm>
- Execute AD_TOP/bin/admkappsutil.pl to generate appsutil.zip for the database tier
- Transfer this appsutil.zip to the database tier into the NEW_ORACLE_HOME
- Unzip this file to create the appsutil directory in the NEW_ORACLE_HOME
- Execute AutoConfig on the database tier from NEW_ORACLE_HOME/appsutil/scripts/<context_name> by using adautocfg.sh
- Verify the AutoConfig log file located at NEW_ORACLE_HOME/appsutil/log/<context_name>/<MMDDhhmm>
- Execute the following command to gather all the information about the instance: from NEW_ORACLE_HOME/appsutil/scripts/<context_name>, run perl adpreclone.pl database
- Shut down the instance
- Ensure that the listener process on the database tier is also stopped
- Execute the following from NEW_ORACLE_HOME/appsutil/clone/bin: perl adcfgclone.pl database
- Answer the prompted questions (the sequence is condensed into commands after this list)
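The same sequence condensed into commands; NEW_ORACLE_HOME, the database node, and <context_name> are placeholders:

    # application tier: regenerate appsutil.zip for the database tier
    perl $AD_TOP/bin/admkappsutil.pl
    # copy the generated appsutil.zip into $NEW_ORACLE_HOME on the database node, then:

    # database tier: unpack, gather instance information, and convert
    cd $NEW_ORACLE_HOME && unzip -o appsutil.zip
    cd $NEW_ORACLE_HOME/appsutil/scripts/<context_name>
    perl adpreclone.pl database
    # shut down the instance and database listener before running adcfgclone
    cd $NEW_ORACLE_HOME/appsutil/clone/bin
    perl adcfgclone.pl database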
26. Critical Step: Converting the Database to RAC (Contd.)
- The process will:
- Create an instance-specific context file
- Create an instance-specific environment file
- Create a RAC-parameter-specific init.ora file
- Recreate the control files
- Create redo log threads for the other instances in the cluster
- Create undo tablespaces for the other instances in the cluster
- Execute AutoConfig on the database tier
- Start the instance and the database listener on the local host
27. Critical Step: Configure the Applications Environment for RAC
- Execute AutoConfig by using AD_TOP/bin/adconfig.sh contextfile=APPL_TOP/admin/<context_file>
- Verify the AutoConfig log located at APPL_TOP/admin/<context_name>/log/<MMDDhhmm> for errors; source the environment by using the latest environment file generated
- Verify the tnsnames.ora and listener.ora files located in the 8.0.6 ORACLE_HOME at ORACLE_HOME/network/admin and in IAS_ORACLE_HOME/network/admin; ensure that the correct TNS aliases are generated for load balancing and failover (an illustrative alias appears after this list)
- Verify the dbc file located at FND_SECURE; ensure that the parameter APPS_JDBC_URL is configured with all instances in the environment and that load_balance is set to ON
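For illustration only, a load-balancing TNS alias of the kind AutoConfig generates might look like the following; the alias name, database hosts, port, and service name are placeholders:

    PROD_BALANCE =
      (DESCRIPTION =
        (ADDRESS_LIST =
          (LOAD_BALANCE = YES)
          (FAILOVER = YES)
          (ADDRESS = (PROTOCOL = TCP)(HOST = dbnode1)(PORT = 1521))
          (ADDRESS = (PROTOCOL = TCP)(HOST = dbnode2)(PORT = 1521))
        )
        (CONNECT_DATA = (SERVICE_NAME = PROD))
      )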
28. Critical Step: Load Balancing the Oracle Apps Environment
- Run the Context Editor through the Oracle Applications Manager interface to set the values of "Tools OH TWO_TASK", "iAS OH TWO_TASK", and "Apps JDBC Connect Alias"
- To load balance the forms-based applications database connections, set the value of "Tools OH TWO_TASK" to point to the <database_name>_806_balance alias generated in the tnsnames.ora file
- To load balance the self-service applications database connections, set the values of "iAS OH TWO_TASK" and "Apps JDBC Connect Alias" to point to the <database_name>_balance alias generated in the tnsnames.ora file
- Execute AutoConfig by using AD_TOP/bin/adconfig.sh contextfile=APPL_TOP/admin/<context_file>
- Restart the applications processes by using the latest scripts generated after the AutoConfig execution
- Set the profile option "Application Database Id" to the dbc file name generated at FND_TOP/secure
- Update session_cookie_name in the table ICX_PARAMETERS to <service_name> (a SQL sketch follows)
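A minimal sketch of that last update as SQL, run as the APPS schema owner; the service name PROD and the APPS password are placeholders:

    sqlplus apps/<apps_password> <<'EOF'
    -- point the session cookie name at the RAC service name (placeholder value)
    UPDATE icx_parameters SET session_cookie_name = 'PROD';
    COMMIT;
    EOF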
29. Critical Step: Configure Parallel Concurrent Processing
- Prerequisites for setting up PCP
- Configure the application to use GSM (Generic Service Management); the GSM profile option should be set to YES
- Setting up PCP
- Ensure that all prerequisite patches are applied
- Edit the applications context file through the Oracle Applications Manager interface and set the value of APPLDCP to ON and the "Concurrent Manager TWO_TASK" value to the instance alias
- Execute AutoConfig by using COMMON_TOP/admin/scripts/<context_name>/adautocfg.sh on all concurrent nodes
- Source the application environment by using APPL_TOP/APPSORA.env
- Check the configuration files tnsnames.ora and listener.ora located under the 8.0.6 ORACLE_HOME at ORACLE_HOME/network/admin/<context>; ensure that you have the information for all the other concurrent nodes in the FNDSM and FNDFS entries
30. Critical Step: Configure Parallel Concurrent Processing (Contd.)
- Setting up PCP (continued)
- Restart the application listener processes on each application node
- Log on to Oracle E-Business Suite 11i Applications using the SYSADMIN login and the System Administrator responsibility; navigate to the Install > Nodes screen and ensure that each node in the cluster is registered
- Navigate to the Concurrent > Manager > Define screen, and set up the primary and secondary node names for all the concurrent managers according to the desired configuration for each node's workload; the Internal Concurrent Manager should be defined on the primary PCP node only
- Set the APPLCSF environment variable on all the CP nodes to point to a log directory on a shared file system
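For example, in the applications environment file on each concurrent processing node (the path is a placeholder and must be visible from every CP node):

    export APPLCSF=/shared/common/applcsf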
31. Closing Thoughts
- There are many choices to be made; analyze and decide what best fits your environment
- Pay close attention to the certification matrix, documentation, and release notes
- Expect some issues with any configuration
32. Thank You!
- For any questions or comments, please contact:
- suresh.chaganti@cozent.com
- syed.asharaf@cozent.com