DBA Tips Archive for Oracle

  


Data Guard Configuration Example - (10g, Physical Standby)

by Jeff Hunter, Sr. Database Administrator


Introduction

Oracle Data Guard (known as Oracle Standby Database prior to Oracle9i) forms an extension to the Oracle RDBMS and provides organizations with high availability, data protection, and disaster recovery for enterprise databases. Oracle Data Guard provides the DBA with services for creating, maintaining, managing, and monitoring one or more standby databases. The functionality included with Oracle Data Guard enables enterprise data systems to survive both data corruption and major disasters.

This article provides instructions for creating and configuring a physical standby database from a primary database using Oracle Database 10g Release 2 (10.2) operating in maximum performance protection mode. It should be noted that several different methods exist to create a physical standby database configuration and that this is just one of those ways. The method outlined in this guide presents a simple approach that should be easy to implement in most situations. In fact, if you break down the essential tasks required to build a physical standby database, it is essentially nothing more than taking a backup of the primary database, creating a standby controlfile, transferring the files to the standby host, mounting the standby database, putting the standby database in managed recovery mode (Redo Apply), and starting remote archiving from the primary database (Redo Transport). Obviously there are a number of smaller steps I am leaving out, all of which will be discussed in more depth throughout this guide.

All configuration parameters related to the Oracle instance and networking will be discussed as well as how to place the standby database in Managed Recovery Mode.

Introduction to Oracle Data Guard

The standby database feature of Oracle was first introduced with the release of Oracle 7 in the early 1990s. The design was fairly simple: Oracle used media recovery to apply archive logs to a remote standby database; however, none of the automation we now take for granted was present in this release of the product. DBAs were required to write custom scripts that shipped and applied archive logs to the remote standby database. It wasn't until Oracle8i that some form of automation was introduced, relying on Oracle Net Services to transfer and apply archived redo logs. DBAs were still required to supply scripts that handled gap resolution and resynchronized the primary and standby database when they lost connectivity with one another. Also included in Oracle8i was a set of pre-written scripts that simplified the switchover and failover process.

With the introduction of Oracle9i, the standby database feature was renamed to Oracle Data Guard. In addition to the re-branding of the product, Oracle delivered a comprehensive automated solution for disaster recovery that was fully integrated with the database kernel. Finally, a fully integrated disaster recovery solution without the need to maintain custom written scripts! Oracle9i also provided a vast array of new features which included automatic gap resolution, enhanced redo transport methods (synchronous and asynchronous redo transport), the ability to configure zero data loss, and the concept of protection modes.

Until Oracle9i Release 2, the only standby database type available was the physical standby database. A physical standby database is an identical, block-for-block copy of the primary database and is kept in sync with the primary using media recovery (also referred to as Redo Apply). Oracle introduced a new type of standby database with Oracle9i Release 2 named Logical Standby Database. This new type of standby database keeps in sync with the primary database using SQL Apply (versus Redo Apply used with a physical standby database). A logical standby database remains open for user access while logical records are being received and applied from the primary database which makes this a great candidate for a reporting database.

When the standby database site is hosted in a different geographical location than the primary site, it provides for an excellent High Availability (HA) solution. When creating a standby database configuration, the DBA should always attempt to keep the primary and standby database sites identical as well as keeping the physical location of the production database transparent to the end user. This allows for an easy role transition scenario for both planned and unplanned outages. When the secondary (standby) site is identical to the primary site, it allows predictable performance and response time after failing over (or switching over) from the primary site.

Oracle Database Enterprise Edition Requirement

Oracle Data Guard is only available as a bundled feature included within the Enterprise Edition release of the Oracle Database software. It is not available with Oracle Database Standard Edition. With the exception of performing a rolling database upgrade using a logical standby database, it is mandatory that the same release of Oracle Database Enterprise Edition be installed on the primary database and all standby databases!

While it remains possible to simulate a standby database environment running Oracle Database Standard Edition, doing so requires the DBA to develop custom scripts that manually transfer archived redo log files and apply them to the standby database, similar to the methods used to maintain a standby database with Oracle 7. The drawback of this type of configuration is that it does not provide the ease-of-use, manageability, performance, and disaster-recovery capabilities available with Data Guard.

Standby Database Types

There are two types of standby databases that can be created with Oracle Data Guard — physical or logical. Deciding which of the two types of standby databases to create is critical and depends on the nature of the business needs the organization is trying to satisfy.

A physical standby database is an identical, block-for-block copy of the primary database and is kept in sync with the primary using media recovery. As redo gets generated on the primary database, it gets transferred to the standby database where an RFS process receives the primary redo and applies the change vectors directly to the standby database. A physical standby database is an excellent choice for disaster recovery.

A logical standby database works in a different manner: it keeps in sync with the primary by transforming redo data received from the primary database into SQL statements and then executing those SQL statements against the standby database. With a logical standby database, the standby remains open for user access in read/write mode while still receiving and applying logical records from the primary. While a physical standby database is an exact physical replica of the primary, a logical standby database is not. Because Oracle is applying SQL statements to the standby database rather than performing media recovery (as is done with a physical standby database), it is possible for the logical standby database to contain the same logical data but have a different physical structure. A logical standby database is an excellent solution for a reporting database while at the same time retaining the attributes of a disaster recovery solution. Not only does a logical standby database contain the same logical information as the primary, it can also support the creation of additional objects to support improved reporting requirements.

Data Protection Modes

After deciding between a physical or logical standby database, the next major decision is which data protection mode should be used to operate the Data Guard configuration. At the heart of this decision lies the answer to one important question — how much data loss is your organization willing to endure in the event of a failover? The obvious answer to expect from management is none. Configuring Data Guard with guaranteed no data loss, however, requires a significant investment in equipment and other resources necessary to provide support for this type of environment.

An Oracle Database 10g Data Guard configuration will always run in one of three data protection modes:

  * Maximum Protection
  * Maximum Availability
  * Maximum Performance

Each of the three modes provides a high degree of data protection; however, they differ with regard to data availability and performance of the primary database.

When selecting a protection mode, always consider the one that best meets the needs of your business. Carefully take into account the need to protect the data against any loss vs. availability and performance expectations of the primary database. An in-depth discussion on the three available data protection modes and how redo transport works to support them is beyond the scope of this guide. To keep the article simple, I will be using the default protection mode of Maximum Performance.
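If you ever need to confirm which mode an existing configuration is running in, the current protection mode and level are exposed in v$database. This quick check is my addition and not part of the original steps:

```sql
-- Confirm the data protection mode the configuration is running in.
-- A default Oracle 10g configuration reports MAXIMUM PERFORMANCE.
SQL> select protection_mode, protection_level from v$database;
```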

 

For a detailed discussion on the various Oracle Data Guard protection modes, see my article entitled 'Data Protection Modes'.

Hardware and O/S Configuration

Let's take a look at the Oracle configuration and a summary of the parameters that will be used to create the physical standby database configuration described in this guide.

Primary Database

    Oracle Release:                                   Oracle 10g Release 2 — (10.2.0.5)
    Host Name:                                        vmlinux1.idevelopment.info — (192.168.1.160)
    Operating System:                                 Red Hat Linux 5 — (CentOS 5.5)
    Database Name (db_name):                          modesto
    Database Domain (db_domain):                      idevelopment.info
    Oracle SID:                                       modesto
    Database Unique Name (db_unique_name):            modesto
    TNS Alias:                                        modesto.idevelopment.info
    Service Names:                                    modesto.idevelopment.info, modesto
    Database Files (db_create_file_dest):             /u02/oradata
    Flash Recovery Area (db_recovery_file_dest):      /u03/flash_recovery_area
    Local Archive Destination (log_archive_dest_1):   location=use_db_recovery_file_dest — (all_logfiles,all_roles)
    Remote Archive Destination (log_archive_dest_2):  service=turlock — (online_logfiles,primary_role)

Physical Standby Database

    Oracle Release:                                   Oracle 10g Release 2 — (10.2.0.5)
    Host Name:                                        vmlinux2.idevelopment.info — (192.168.1.162)
    Operating System:                                 Red Hat Linux 5 — (CentOS 5.5)
    Database Name (db_name):                          modesto
    Database Domain (db_domain):                      idevelopment.info
    Oracle SID:                                       turlock
    Database Unique Name (db_unique_name):            turlock
    TNS Alias:                                        turlock.idevelopment.info
    Service Names:                                    turlock.idevelopment.info, turlock
    Database Files (db_create_file_dest):             /u02/oradata
    Flash Recovery Area (db_recovery_file_dest):      /u03/flash_recovery_area
    Local Archive Destination (log_archive_dest_1):   location=use_db_recovery_file_dest — (all_logfiles,all_roles)
    Remote Archive Destination (log_archive_dest_2):  service=modesto — (online_logfiles,primary_role)

It is assumed that Oracle Database 10g Release 2 and all patchsets have been installed on both nodes in the Oracle Data Guard configuration. A separate guide on this site covers installing Oracle Database 10g R2 on the Red Hat Linux 5 platform.

Configure the Primary Database

The first phase in creating an Oracle Data Guard configuration involves performing the necessary tasks on the primary database. While several of the tasks outlined in this section are not mandatory, I will provide explanations of their benefits and how their application can result in a more stable disaster recovery solution. As several of the steps included in this section involve bouncing the primary database, try to consolidate those steps in order to reduce the number of bounces.

  1. Enable Archivelog Mode

    Oracle Data Guard relies on redo from the primary database in order to maintain the standby. This requires the primary database be placed into archivelog mode. Use the following to verify that the primary database is in archivelog mode:


    SQL> archive log list
    Database log mode              No Archive Mode
    Automatic archival             Disabled
    Archive destination            USE_DB_RECOVERY_FILE_DEST
    Oldest online log sequence     32
    Current log sequence           34

    The output from the above command shows that the primary database is not in archivelog mode. Perform the following steps to place the primary database into archivelog mode:


    SQL> shutdown immediate
    Database closed.
    Database dismounted.
    ORACLE instance shut down.

    SQL> startup mount
    ORACLE instance started.

    Total System Global Area 1241513984 bytes
    Fixed Size                  1273420 bytes
    Variable Size             318767540 bytes
    Database Buffers          905969664 bytes
    Redo Buffers               15503360 bytes
    Database mounted.

    SQL> alter database archivelog;

    Database altered.

    SQL> alter database open;

    Database altered.

    SQL> archive log list
    Database log mode              Archive Mode
    Automatic archival             Enabled
    Archive destination            USE_DB_RECOVERY_FILE_DEST
    Oldest online log sequence     34
    Next log sequence to archive   36
    Current log sequence           36

    Prior to Oracle Database 10g, it was also required to set the log_archive_start initialization parameter to TRUE in order to enable automatic archiving. This is no longer necessary as automatic archiving is enabled by default when the database is placed into archivelog mode with Oracle Database 10g or higher.
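    As a supplementary check (my addition, not part of the original steps), archivelog mode can also be confirmed from the data dictionary:

```sql
-- LOG_MODE should report ARCHIVELOG once the database has been
-- placed into archivelog mode.
SQL> select log_mode from v$database;
```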

  2. Create a Password File

    As part of the new redo transport security and authentication features, it is now mandatory that each database in an Oracle Data Guard configuration utilize a password file. In addition, the SYS password must be identical on every database in order for redo transport to function. If a password file does not exist for the primary database, create one using the following steps:


    [oracle@vmlinux1 ~]$ cd $ORACLE_HOME/dbs
    [oracle@vmlinux1 dbs]$ orapwd file=orapwmodesto password=MySysPassword

    After creating the password file, set the remote_login_passwordfile initialization parameter to EXCLUSIVE in the spfile on the primary database. Since this parameter cannot be dynamically modified for the current running instance, the change must be made in the spfile and the database bounced in order for it to take effect:


    SQL> alter system set remote_login_passwordfile=exclusive scope=spfile;

    System altered.

    SQL> shutdown immediate
    Database closed.
    Database dismounted.
    ORACLE instance shut down.

    SQL> startup open
    ORACLE instance started.

    Total System Global Area 1241513984 bytes
    Fixed Size                  1273420 bytes
    Variable Size             318767540 bytes
    Database Buffers          905969664 bytes
    Redo Buffers               15503360 bytes
    Database mounted.
    Database opened.

  3. Enable Force Logging (optional)

    Any nologging operations performed on the primary database do not get fully logged within the redo stream. As Oracle Data Guard relies on the redo stream to maintain the standby database, this can result in data inconsistencies between the primary and standby along with a massive headache for the DBA to resolve. To prevent this from occurring, one solution is to place the primary database into force logging mode. In this mode, all nologging operations are permitted to run without error; however, the changes are placed in the redo stream anyway. Although this is considered an optional step, I make it mandatory when designing an Oracle Data Guard configuration for my clients. Overlooking it on a production environment can cost the DBA considerable time when an actual disaster recovery needs to be performed.

    To place the primary database in force logging mode, connect as SYS and run the following:


    SQL> alter database force logging;

    Database altered.

    To verify force logging is enabled for the database:


    SQL> select force_logging from v$database;

    FORCE_LOGGING
    --------------
    YES

  4. Create Standby Redo Logs (optional)

    Certain data protection modes within Oracle Data Guard, such as maximum availability and maximum protection, mandate the use of standby redo logs. Regardless of the protection mode (even when using maximum performance), it is highly recommended to utilize standby redo logs as generally more data can be recovered during a failover than without them.

    When creating the standby redo logs, it is recommended to have one more standby redo log file group than the number of online redo log file groups on the primary database. In addition, the standby redo log file groups must be the same size as the online redo log file groups. The recommended number of standby redo log file groups can be calculated using the following formula:

    (# of online redo log file groups on primary + 1) * maximum # of threads

    The example database used in this guide is not configured for Oracle RAC and therefore only contains one thread. The primary database is configured with three online redo log file groups sized at 50MB each:


    SQL> select group#, thread#, bytes, members from v$log;

        GROUP#    THREAD#      BYTES    MEMBERS
    ---------- ---------- ---------- ----------
             1          1   52428800          2
             2          1   52428800          2
             3          1   52428800          2

    The number of standby redo logs required for the physical standby database in this example is (3 + 1) * 1 = 4 at 50MB each.
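    The arithmetic above can also be scripted. The sketch below simply encodes the formula; the variable names are my own, and the counts are assumed to match the example database:

```shell
# Encode the standby redo log sizing formula:
#   (# of online redo log file groups + 1) * maximum # of threads
online_groups=3   # from v$log on the primary (this example database)
max_threads=1     # single instance, so only one thread
srl_groups=$(( (online_groups + 1) * max_threads ))
echo "$srl_groups standby redo log groups recommended"   # prints: 4 standby redo log groups recommended
```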

    A best practice generally followed is to create the standby redo logs on both the primary and the standby database so as to make role transitions smoother. By creating the standby redo logs at this stage, it is assured that they will exist on both the primary and the newly created standby database.

    From the primary database, connect as SYS and run the following to create four standby redo log file groups:


    SQL> alter database add standby logfile thread 1 group 4 size 50m;

    Database altered.

    SQL> alter database add standby logfile thread 1 group 5 size 50m;

    Database altered.

    SQL> alter database add standby logfile thread 1 group 6 size 50m;

    Database altered.

    SQL> alter database add standby logfile thread 1 group 7 size 50m;

    Database altered.

    To verify the new standby redo log files:


    SQL> select group#, type, member from v$logfile order by group#, member;

        GROUP# TYPE    MEMBER
    ---------- ------- -----------------------------------------------------------------
             1 ONLINE  /u02/oradata/MODESTO/onlinelog/o1_mf_1_6jjc8hr8_.log
             1 ONLINE  /u03/flash_recovery_area/MODESTO/onlinelog/o1_mf_1_6jjc8jxq_.log
             2 ONLINE  /u02/oradata/MODESTO/onlinelog/o1_mf_2_6jjc8mvj_.log
             2 ONLINE  /u03/flash_recovery_area/MODESTO/onlinelog/o1_mf_2_6jjc8nvv_.log
             3 ONLINE  /u02/oradata/MODESTO/onlinelog/o1_mf_3_6jjc8qrw_.log
             3 ONLINE  /u03/flash_recovery_area/MODESTO/onlinelog/o1_mf_3_6jjc8ry2_.log
             4 STANDBY /u02/oradata/MODESTO/onlinelog/o1_mf_4_6hvg3qk9_.log
             4 STANDBY /u03/flash_recovery_area/MODESTO/onlinelog/o1_mf_4_6hvg3rnm_.log
             5 STANDBY /u02/oradata/MODESTO/onlinelog/o1_mf_5_6hvg48f3_.log
             5 STANDBY /u03/flash_recovery_area/MODESTO/onlinelog/o1_mf_5_6hvg49gr_.log
             6 STANDBY /u02/oradata/MODESTO/onlinelog/o1_mf_6_6hvg4m9j_.log
             6 STANDBY /u03/flash_recovery_area/MODESTO/onlinelog/o1_mf_6_6hvg4nb4_.log
             7 STANDBY /u02/oradata/MODESTO/onlinelog/o1_mf_7_6hvg4w3d_.log
             7 STANDBY /u03/flash_recovery_area/MODESTO/onlinelog/o1_mf_7_6hvg4xbl_.log
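    The v$standby_log view offers another way to confirm the standby redo log groups and their sizes. This check is my addition and not part of the original article:

```sql
-- Each standby redo log group should match the 50MB online redo log size;
-- UNASSIGNED status is expected until redo transport begins using them.
SQL> select group#, thread#, bytes, status from v$standby_log;
```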

  5. Configure the Primary Database Initialization Parameters

    Most of the configuration options for Data Guard are implemented using initialization parameters for the Oracle instance. When the primary and standby machines are identical (memory, disks, CPU, etc.) the initialization file for both databases should be nearly identical.

    When configuring the initialization parameters on the primary database, it is important to consider future role transitions. The initialization parameters that control redo transport and redo apply should be configured so that the database will seamlessly operate in either role with no parameter modifications required. When the database is mounted on a primary controlfile, the standby parameters are not read and are not put into effect, so they will not affect the operation of the database while in the primary role.

    The parameters listed below should be placed in the initialization file for the primary database. Notice that I will initially 'DEFER' log_archive_dest_state_2 until the standby database has been created, mounted, and placed in managed recovery mode. Also note that the db_file_name_convert and log_file_name_convert parameters are only required if the paths will be different between the primary and standby host.


    # ---[ Dump Destination Parameters ] --- #
    audit_file_dest='/u01/app/oracle/admin/modesto/adump'
    background_dump_dest='/u01/app/oracle/admin/modesto/bdump'
    core_dump_dest='/u01/app/oracle/admin/modesto/cdump'
    user_dump_dest='/u01/app/oracle/admin/modesto/udump'

    # ---[ Role-independent Parameters ] --- #
    archive_lag_target=900
    compatible='10.2.0.5.0'
    control_files='/u02/oradata/MODESTO/controlfile/o1_mf_6hc6stn9_.ctl','/u03/flash_recovery_area/MODESTO/controlfile/o1_mf_6hc6styy_.ctl'
    db_name='modesto'
    db_domain='idevelopment.info'
    db_create_file_dest='/u02/oradata'
    db_recovery_file_dest='/u03/flash_recovery_area'
    dispatchers='(PROTOCOL=TCP) (SERVICE=modestoXDB)'
    instance_name='modesto'
    log_archive_config='dg_config=(modesto,turlock)'
    log_archive_max_processes=4
    remote_login_passwordfile='exclusive'

    # ---[ Primary Role Parameters ] --- #
    db_unique_name='modesto'
    log_archive_dest_1='location=use_db_recovery_file_dest valid_for=(all_logfiles,all_roles) db_unique_name=modesto'
    log_archive_dest_2='service=turlock.idevelopment.info valid_for=(online_logfiles,primary_role) db_unique_name=turlock'
    log_archive_dest_state_1='enable'
    log_archive_dest_state_2='defer'
    service_names='modesto.idevelopment.info, modesto'

    # ---[ Standby Role Parameters ] --- #
    db_file_name_convert='/TURLOCK/','/MODESTO/'
    log_file_name_convert='/TURLOCK/','/MODESTO/'
    fal_server='modesto','turlock'
    fal_client='modesto'
    standby_file_management='auto'

    Several of the initialization parameters listed above cannot be dynamically modified and therefore require the primary database to be bounced:


    SQL> shutdown immediate
    Database closed.
    Database dismounted.
    ORACLE instance shut down.

    SQL> startup
    ORACLE instance started.

    Total System Global Area 1241513984 bytes
    Fixed Size                  1273420 bytes
    Variable Size             318767540 bytes
    Database Buffers          905969664 bytes
    Redo Buffers               15503360 bytes
    Database mounted.
    Database opened.
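    Once the instance is back up, the archive destinations can be sanity-checked through v$archive_dest. This supplementary query is my addition; with log_archive_dest_state_2 deferred, destination 2 should show DEFERRED:

```sql
-- Destination 1 should be VALID; destination 2 remains DEFERRED until
-- the standby database is ready to receive redo.
SQL> select dest_id, status, destination
  2  from v$archive_dest
  3  where dest_id in (1,2);
```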

  6. Create a Backup of the Primary Database

    A physical standby database can be created using either a hot or cold backup of the primary as long as all of the necessary archivelogs are available to bring the standby database to a consistent state. For the purpose of this guide, I will be performing an online (hot) backup of the primary database using RMAN. The RMAN backupsets will be written to a staging directory located outside of the Flash Recovery Area; namely /u04/oracle/dg_staging. I start by creating the staging directory on both the primary and standby hosts:


    [oracle@vmlinux1 ~]$ mkdir -p /u04/oracle/dg_staging

    [oracle@vmlinux2 ~]$ mkdir -p /u04/oracle/dg_staging

    From the primary host, perform an RMAN backup of the primary database that places the backupset into the staging directory:


    [oracle@vmlinux1 ~]$ rman target sys/MySysPassword@modesto

    RMAN> backup device type disk format '/u04/oracle/dg_staging/%U' database plus archivelog;

  7. Create a Standby Controlfile

    Using the same process as above, create a standby controlfile in the staging directory using RMAN:


    [oracle@vmlinux1 ~]$ rman target sys/MySysPassword@modesto

    RMAN> backup device type disk format '/u04/oracle/dg_staging/%U' current controlfile for standby;

  8. Prepare an Initialization Parameter File for the Standby Database

    Create an initialization parameter file for the standby database using the primary as the source. The primary database in this example uses an spfile, which will need to be copied to a pfile so it can be modified and used by the standby database. When configuring the standby database later in this guide, I will convert the modified standby pfile back to an spfile.

    From the primary database, create a pfile in the staging directory:


    SQL> create pfile='/u04/oracle/dg_staging/initturlock.ora' from spfile;

    File created.

    Next, modify the necessary parameters in the new pfile to allow the database to operate in the standby role:


    # ---[ Memory Parameters ] --- #
    turlock.__db_cache_size=905969664
    turlock.__java_pool_size=16777216
    turlock.__large_pool_size=16777216
    turlock.__shared_pool_size=285212672
    turlock.__streams_pool_size=0

    # ---[ Dump Destination Parameters ] --- #
    audit_file_dest='/u01/app/oracle/admin/turlock/adump'
    background_dump_dest='/u01/app/oracle/admin/turlock/bdump'
    core_dump_dest='/u01/app/oracle/admin/turlock/cdump'
    user_dump_dest='/u01/app/oracle/admin/turlock/udump'

    # ---[ Role-independent Parameters ] --- #
    archive_lag_target=900
    compatible='10.2.0.5.0'
    control_files='/u02/oradata/TURLOCK/controlfile/o1_mf_6hc6stn9_.ctl','/u03/flash_recovery_area/TURLOCK/controlfile/o1_mf_6hc6styy_.ctl'
    db_name='modesto'
    db_domain='idevelopment.info'
    db_create_file_dest='/u02/oradata'
    db_recovery_file_dest='/u03/flash_recovery_area'
    dispatchers='(PROTOCOL=TCP) (SERVICE=turlockXDB)'
    instance_name='turlock'
    log_archive_config='dg_config=(modesto,turlock)'
    log_archive_max_processes=4
    remote_login_passwordfile='exclusive'

    # ---[ Primary Role Parameters ] --- #
    db_unique_name='turlock'
    log_archive_dest_1='location=use_db_recovery_file_dest valid_for=(all_logfiles,all_roles) db_unique_name=turlock'
    log_archive_dest_2='service=modesto.idevelopment.info valid_for=(online_logfiles,primary_role) db_unique_name=modesto'
    log_archive_dest_state_1='enable'
    log_archive_dest_state_2='enable'
    service_names='turlock.idevelopment.info, turlock'

    # ---[ Standby Role Parameters ] --- #
    db_file_name_convert='/MODESTO/','/TURLOCK/'
    log_file_name_convert='/MODESTO/','/TURLOCK/'
    fal_server='modesto','turlock'
    fal_client='turlock'
    standby_file_management='auto'

  9. Transfer Files to the Standby Host

    Using an OS remote copy utility, transfer the backup of the primary database, standby controlfile, and standby initialization parameter file to the standby host (vmlinux2):


    [oracle@vmlinux1 ~]$ scp /u04/oracle/dg_staging/* vmlinux2:/u04/oracle/dg_staging/
    0blv2spa_1_1                     100%  124KB 124.0KB/s   00:00
    0clv2spd_1_1                     100%  664MB   9.1MB/s   01:13
    0dlv2sqg_1_1                     100% 7936KB   7.8MB/s   00:01
    0elv2sqj_1_1                     100% 3072     3.0KB/s   00:00
    0flv2ss4_1_1                     100% 7904KB   7.7MB/s   00:01
    initturlock.ora                  100% 1705     1.7KB/s   00:00

Configure Oracle Net Components

Oracle Data Guard depends on Oracle Net as the transport mechanism to propagate changes made on the primary database to the standby. The Oracle Net listener and Oracle Net aliases must be configured on both the primary and standby host in order to facilitate Data Guard's communication requirements. From a high level, the following Oracle Net components need to be configured:

Oracle Net Listener

Although not mandatory, create a named static listener entry in the listener.ora file for each database in the Data Guard configuration:

Primary Host


SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (SID_NAME = PLSExtProc)
      (ORACLE_HOME = /u01/app/oracle/product/10.2.0/db_1)
      (PROGRAM = extproc)
    )
    (SID_DESC =
      (GLOBAL_DBNAME = MODESTO.IDEVELOPMENT.INFO)
      (SID_NAME = modesto)
      (ORACLE_HOME = /u01/app/oracle/product/10.2.0/db_1)
    )
  )

INBOUND_CONNECT_TIMEOUT_LISTENER = 0

LISTENER =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = TCP)(HOST = vmlinux1.idevelopment.info)(PORT = 1521))
      (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC0))
    )
  )

Standby Host


SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (SID_NAME = PLSExtProc)
      (ORACLE_HOME = /u01/app/oracle/product/10.2.0/db_1)
      (PROGRAM = extproc)
    )
    (SID_DESC =
      (GLOBAL_DBNAME = TURLOCK.IDEVELOPMENT.INFO)
      (SID_NAME = turlock)
      (ORACLE_HOME = /u01/app/oracle/product/10.2.0/db_1)
    )
  )

INBOUND_CONNECT_TIMEOUT_LISTENER = 0

LISTENER =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = TCP)(HOST = vmlinux2.idevelopment.info)(PORT = 1521))
      (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC0))
    )
  )

Restart Oracle Net Listener Service

After making changes to the listener.ora, restart the Oracle Net listener service on the primary and standby host:


[oracle@vmlinux1 ~]$ lsnrctl reload

[oracle@vmlinux2 ~]$ lsnrctl reload

Verify Oracle Net Listener Services


[oracle@vmlinux1 ~]$ lsnrctl services | grep '^Service'
Services Summary...
Service "MODESTO.IDEVELOPMENT.INFO" has 2 instance(s).
Service "PLSExtProc" has 1 instance(s).
Service "modestoXDB.idevelopment.info" has 1 instance(s).
Service "modesto_XPT.idevelopment.info" has 1 instance(s).

[oracle@vmlinux2 ~]$ lsnrctl services | grep '^Service'
Services Summary...
Service "PLSExtProc" has 1 instance(s).
Service "TURLOCK.IDEVELOPMENT.INFO" has 1 instance(s).

Oracle Net Aliases

The primary and standby host should contain an Oracle Net alias in the tnsnames.ora file for all primary and standby net service names:


MODESTO.IDEVELOPMENT.INFO =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = vmlinux1.idevelopment.info)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = modesto.idevelopment.info)
    )
  )

TURLOCK.IDEVELOPMENT.INFO =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = vmlinux2.idevelopment.info)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = turlock.idevelopment.info)
    )
  )

Note that each of the Oracle Net alias entries above contain the domain IDEVELOPMENT.INFO. In my Oracle network configuration, I defined the default domain in the sqlnet.ora file on every host which provides name resolution for Oracle Net clients. When this parameter is set, the default domain name is automatically appended to any unqualified net service name or alias:


NAMES.DEFAULT_DOMAIN = IDEVELOPMENT.INFO

Configure the Standby Database

This section contains the steps used to create, mount, and start Redo Apply services for the physical standby database.

  1. Create the Standby Password File

    As part of the new redo transport security and authentication features, it is now mandatory that each database in an Oracle Data Guard configuration utilize a password file. In addition, the SYS password must be identical on every database in order for redo transport to function. Create the password file on the standby database using the following steps:


    [oracle@vmlinux2 ~]$ cd $ORACLE_HOME/dbs
    [oracle@vmlinux2 dbs]$ orapwd file=orapwturlock password=MySysPassword

  2. Create an spfile for the Standby Instance

    Using the prepared standby initialization parameter file created and copied from the primary host, convert the pfile to an spfile by entering the following command on the standby instance:


    [oracle@vmlinux2 ~]$ sqlplus / as sysdba

    SQL*Plus: Release 10.2.0.5.0 - Production on Tue Dec 7 22:14:02 2010

    Copyright (c) 1982, 2010, Oracle.  All Rights Reserved.

    Connected to an idle instance.

    SQL> create spfile from pfile='/u04/oracle/dg_staging/initturlock.ora';

    File created.

    SQL> !ls -l $ORACLE_HOME/dbs
    total 12
    -rw-r----- 1 oracle oinstall 1536 Dec 8 22:20 orapwturlock
    -rw-r----- 1 oracle oinstall 4608 Dec 8 22:20 spfileturlock.ora

    The above command assumes that the standby initialization parameter file copied from the primary host is located in the staging directory /u04/oracle/dg_staging.

  3. Create and Start the Standby Instance

    Start by creating the "dump directories" on the standby host as referenced in the standby initialization parameter file:


    [oracle@vmlinux2 ~]$ mkdir -p /u01/app/oracle/admin/turlock/adump
    [oracle@vmlinux2 ~]$ mkdir -p /u01/app/oracle/admin/turlock/bdump
    [oracle@vmlinux2 ~]$ mkdir -p /u01/app/oracle/admin/turlock/cdump
    [oracle@vmlinux2 ~]$ mkdir -p /u01/app/oracle/admin/turlock/dpdump
    [oracle@vmlinux2 ~]$ mkdir -p /u01/app/oracle/admin/turlock/pfile
    [oracle@vmlinux2 ~]$ mkdir -p /u01/app/oracle/admin/turlock/scripts
    [oracle@vmlinux2 ~]$ mkdir -p /u01/app/oracle/admin/turlock/udump

    Next, create and verify all directories on the standby host that will be used for database files and the Flash Recovery Area:


    [oracle@vmlinux2 ~]$ mkdir -p /u02/oradata/TURLOCK/controlfile
    [oracle@vmlinux2 ~]$ mkdir -p /u02/oradata/TURLOCK/datafile
    [oracle@vmlinux2 ~]$ mkdir -p /u02/oradata/TURLOCK/onlinelog
    [oracle@vmlinux2 ~]$ mkdir -p /u03/flash_recovery_area/TURLOCK/archivelog
    [oracle@vmlinux2 ~]$ mkdir -p /u03/flash_recovery_area/TURLOCK/autobackup
    [oracle@vmlinux2 ~]$ mkdir -p /u03/flash_recovery_area/TURLOCK/backupset
    [oracle@vmlinux2 ~]$ mkdir -p /u03/flash_recovery_area/TURLOCK/controlfile
    [oracle@vmlinux2 ~]$ mkdir -p /u03/flash_recovery_area/TURLOCK/onlinelog
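
    The long runs of mkdir commands above can also be condensed into a few loops. The sketch below is a convenience only; the BASE variable is hypothetical (it defaults to a scratch prefix so the script can be dry-run safely) and should be dropped on a real standby host so the directories land on the actual mount points.

    ```shell
    #!/bin/sh
    # Sketch: create the standby dump, database file, and Flash Recovery Area
    # directories in loops rather than one mkdir per line.
    # BASE is a hypothetical prefix for safe dry-runs; remove it on a real host.
    BASE=${BASE:-/tmp/dg_dirs_demo}

    for d in adump bdump cdump dpdump pfile scripts udump; do
        mkdir -p "$BASE/u01/app/oracle/admin/turlock/$d"
    done

    for d in controlfile datafile onlinelog; do
        mkdir -p "$BASE/u02/oradata/TURLOCK/$d"
    done

    for d in archivelog autobackup backupset controlfile onlinelog; do
        mkdir -p "$BASE/u03/flash_recovery_area/TURLOCK/$d"
    done

    # List what was created so it can be checked against the listings above.
    find "$BASE" -type d | sort
    ```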

    After verifying the appropriate environment variables are set on the standby host ($ORACLE_SID, $ORACLE_HOME, $PATH, $LD_LIBRARY_PATH), start the physical standby instance:


    [oracle@vmlinux2 ~]$ echo $ORACLE_SID
    turlock

    [oracle@vmlinux2 ~]$ sqlplus / as sysdba

    SQL*Plus: Release 10.2.0.5.0 - Production on Tue Dec 7 22:35:34 2010

    Copyright (c) 1982, 2010, Oracle.  All Rights Reserved.

    Connected to an idle instance.

    SQL> startup nomount
    ORACLE instance started.

    Total System Global Area 1241513984 bytes
    Fixed Size                  1273420 bytes
    Variable Size             318767540 bytes
    Database Buffers          905969664 bytes
    Redo Buffers               15503360 bytes

  4. Create the Physical Standby Database

    From the standby host where the standby instance was just started, duplicate the primary database as a standby using RMAN:


    [oracle@vmlinux2 ~]$ rman target sys/MySysPassword@modesto auxiliary sys/MySysPassword@turlock

    Recovery Manager: Release 10.2.0.5.0 - Production on Tue Dec 7 22:51:22 2010

    Copyright (c) 1982, 2007, Oracle.  All rights reserved.

    connected to target database: MODESTO (DBID=2026707242)
    connected to auxiliary database: MODESTO (not mounted)

    RMAN> duplicate target database for standby;

    Starting Duplicate Db at 08-DEC-2010 22:24:52
    using target database control file instead of recovery catalog
    allocated channel: ORA_AUX_DISK_1
    channel ORA_AUX_DISK_1: sid=156 devtype=DISK

    contents of Memory Script:
    {
       restore clone standby controlfile;
       sql clone 'alter database mount standby database';
    }
    executing Memory Script

    Starting restore at 08-DEC-2010 22:24:53
    using channel ORA_AUX_DISK_1

    channel ORA_AUX_DISK_1: starting datafile backupset restore
    channel ORA_AUX_DISK_1: restoring control file
    channel ORA_AUX_DISK_1: reading from backup piece /u04/oracle/dg_staging/0flv2ss4_1_1
    channel ORA_AUX_DISK_1: restored backup piece 1
    piece handle=/u04/oracle/dg_staging/0flv2ss4_1_1 tag=TAG20101208T220036
    channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:00:01
    output filename=/u02/oradata/TURLOCK/controlfile/o1_mf_6j0m05vj_.ctl
    output filename=/u03/flash_recovery_area/TURLOCK/controlfile/o1_mf_6j0m068b_.ctl
    Finished restore at 08-DEC-2010 22:24:54

    sql statement: alter database mount standby database

    contents of Memory Script:
    {
       set newname for tempfile  1 to "/u02/oradata/TURLOCK/datafile/o1_mf_temp_6hc6v3jd_.tmp";
       switch clone tempfile all;
       set newname for datafile  1 to "/u02/oradata/TURLOCK/datafile/o1_mf_system_6hc6t6l0_.dbf";
       set newname for datafile  2 to "/u02/oradata/TURLOCK/datafile/o1_mf_undotbs1_6hc6trl0_.dbf";
       set newname for datafile  3 to "/u02/oradata/TURLOCK/datafile/o1_mf_sysaux_6hc6tyvd_.dbf";
       set newname for datafile  4 to "/u02/oradata/TURLOCK/datafile/o1_mf_example_6hc6vf79_.dbf";
       set newname for datafile  5 to "/u02/oradata/TURLOCK/datafile/o1_mf_users_6hc6vlf0_.dbf";
       restore
       check readonly
       clone database
       ;
    }
    executing Memory Script

    executing command: SET NEWNAME
    renamed temporary file 1 to /u02/oradata/TURLOCK/datafile/o1_mf_temp_6hc6v3jd_.tmp in control file

    executing command: SET NEWNAME
    executing command: SET NEWNAME
    executing command: SET NEWNAME
    executing command: SET NEWNAME
    executing command: SET NEWNAME

    Starting restore at 08-DEC-2010 22:25:01
    using channel ORA_AUX_DISK_1

    channel ORA_AUX_DISK_1: starting datafile backupset restore
    channel ORA_AUX_DISK_1: specifying datafile(s) to restore from backup set
    restoring datafile 00001 to /u02/oradata/TURLOCK/datafile/o1_mf_system_6hc6t6l0_.dbf
    restoring datafile 00002 to /u02/oradata/TURLOCK/datafile/o1_mf_undotbs1_6hc6trl0_.dbf
    restoring datafile 00003 to /u02/oradata/TURLOCK/datafile/o1_mf_sysaux_6hc6tyvd_.dbf
    restoring datafile 00004 to /u02/oradata/TURLOCK/datafile/o1_mf_example_6hc6vf79_.dbf
    restoring datafile 00005 to /u02/oradata/TURLOCK/datafile/o1_mf_users_6hc6vlf0_.dbf
    channel ORA_AUX_DISK_1: reading from backup piece /u04/oracle/dg_staging/0clv2spd_1_1
    channel ORA_AUX_DISK_1: restored backup piece 1
    piece handle=/u04/oracle/dg_staging/0clv2spd_1_1 tag=TAG20101208T215908
    channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:00:45
    Finished restore at 08-DEC-2010 22:25:47

    contents of Memory Script:
    {
       switch clone datafile all;
    }
    executing Memory Script

    datafile 1 switched to datafile copy
    input datafile copy recid=6 stamp=737245547 filename=/u02/oradata/TURLOCK/datafile/o1_mf_system_6j0m0gkf_.dbf
    datafile 2 switched to datafile copy
    input datafile copy recid=7 stamp=737245547 filename=/u02/oradata/TURLOCK/datafile/o1_mf_undotbs1_6j0m0gmf_.dbf
    datafile 3 switched to datafile copy
    input datafile copy recid=8 stamp=737245547 filename=/u02/oradata/TURLOCK/datafile/o1_mf_sysaux_6j0m0glf_.dbf
    datafile 4 switched to datafile copy
    input datafile copy recid=9 stamp=737245547 filename=/u02/oradata/TURLOCK/datafile/o1_mf_example_6j0m0gnn_.dbf
    datafile 5 switched to datafile copy
    input datafile copy recid=10 stamp=737245547 filename=/u02/oradata/TURLOCK/datafile/o1_mf_users_6j0m0gol_.dbf
    Finished Duplicate Db at 08-DEC-2010 22:25:48

    RMAN> exit

    Recovery Manager complete.

    The RMAN duplicate process above reads the backup set that was transferred to the staging directory, restores the standby controlfile, mounts the database on that controlfile, and restores the physical database files.

  5. Start Redo Apply on the Standby Database

    Now that the standby is in place, start Redo Apply on the standby database by putting it in managed recovery mode. This instructs the standby database to begin applying changes from archived redo logs transferred from the primary database:


    SQL> alter database recover managed standby database disconnect;

    Database altered.

     

    To use Real Time Apply, run the following alternate version of the alter database command to place the standby database in managed recovery mode:

    SQL> alter database recover managed standby database using current logfile disconnect;
    
    Database altered.

Start Remote Archiving

With the standby database now in managed recovery mode, the next and final phase is to start shipping redo data to the standby from the primary.

Earlier in this guide, in the section "Configure the Primary Database Initialization Parameters", I mentioned that it is good practice to initially set the log_archive_dest_state_2 initialization parameter to DEFER on the primary until the standby database is created, mounted, and placed in managed recovery mode. This is the destination that transfers redo data to the standby database. From the primary database, enable Redo Transport Services by issuing the following statement:


SQL> alter system set log_archive_dest_state_2=enable scope=both;

System altered.

With the protection mode used in this guide (maximum performance), archiving of redo data to the remote standby does not occur until after a log switch. By default, a log switch occurs when an online redo log becomes full, which means the standby database does not receive new redo until then.

To force the current redo logs to be archived immediately, use the following statement on the primary database:


SQL> alter system archive log current;

System altered.

At this point, the standby database will continue to apply changes from archive redo logs being transferred from the primary.

Verifying the Physical Standby Database

With the standby and primary databases now in operation, the next step is to verify the Data Guard configuration. This will ensure that Redo Transport on the primary and Redo Apply on the physical standby are working correctly.

Given this Data Guard configuration is running in maximum performance mode, the validation tasks will involve switching redo log files from the primary and verifying those log files are being shipped and applied to the physical standby database.

Redo Transport

From the primary database, perform a log switch and then verify that transmission of the archived redo log file was successful:


SQL> alter system switch logfile;

System altered.

SQL> select status, error from v$archive_dest where dest_id = 2;

STATUS    ERROR
--------- ---------------------------------------------------------
VALID

If the transmission was successful, the status of the destination will be VALID as shown above. If for any reason the transmission was unsuccessful, the status will be INVALID and the full text of the error message will be populated in the ERROR column which can be used to investigate and correct the issue.

Redo Apply

To verify Redo Apply, identify the existing archived redo logs on the standby, archive a log or two from the primary, and then check the standby database again. This test will ensure that redo data was shipped from the primary and then successfully received, archived, and applied to the standby.

First, identify the existing archived redo logs on the standby database:


[oracle@vmlinux2 ~]$ sqlplus sys/MySysPassword@turlock as sysdba

SQL> alter session set nls_date_format = 'DD-MON-YYYY HH24:MI:SS';

Session altered.

SQL> select sequence#, first_time, next_time, archived, applied from v$archived_log order by sequence#;

 SEQUENCE# FIRST_TIME           NEXT_TIME            ARCHIVED APPLIED
---------- -------------------- -------------------- -------- -------
       320 10-DEC-2010 08:34:06 10-DEC-2010 08:49:39 YES      YES
       321 10-DEC-2010 08:49:39 10-DEC-2010 09:05:15 YES      YES
       322 10-DEC-2010 09:05:15 10-DEC-2010 09:20:48 YES      YES
       323 10-DEC-2010 09:20:48 10-DEC-2010 09:36:20 YES      YES
       324 10-DEC-2010 09:36:20 10-DEC-2010 09:51:52 YES      YES
       325 10-DEC-2010 09:51:52 10-DEC-2010 10:07:27 YES      YES
       326 10-DEC-2010 10:07:27 10-DEC-2010 10:14:23 YES      YES
       327 10-DEC-2010 10:14:23 10-DEC-2010 10:14:49 YES      YES
       328 10-DEC-2010 10:14:49 10-DEC-2010 10:30:23 YES      YES

From the primary database, archive the current log using the following SQL statement:


SQL> alter system archive log current;

System altered.

Go back to the standby database and re-query the V$ARCHIVED_LOG view to verify redo data was shipped, received, archived, and applied:


SQL> select sequence#, first_time, next_time, archived, applied from v$archived_log order by sequence#;

 SEQUENCE# FIRST_TIME           NEXT_TIME            ARCHIVED APPLIED
---------- -------------------- -------------------- -------- -------
       320 10-DEC-2010 08:34:06 10-DEC-2010 08:49:39 YES      YES
       321 10-DEC-2010 08:49:39 10-DEC-2010 09:05:15 YES      YES
       322 10-DEC-2010 09:05:15 10-DEC-2010 09:20:48 YES      YES
       323 10-DEC-2010 09:20:48 10-DEC-2010 09:36:20 YES      YES
       324 10-DEC-2010 09:36:20 10-DEC-2010 09:51:52 YES      YES
       325 10-DEC-2010 09:51:52 10-DEC-2010 10:07:27 YES      YES
       326 10-DEC-2010 10:07:27 10-DEC-2010 10:14:23 YES      YES
       327 10-DEC-2010 10:14:23 10-DEC-2010 10:14:49 YES      YES
       328 10-DEC-2010 10:14:49 10-DEC-2010 10:30:23 YES      YES
       329 10-DEC-2010 10:30:23 10-DEC-2010 10:39:56 YES      YES
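
Checks like the one above lend themselves to scripting. The sketch below extracts the highest applied sequence from a V$ARCHIVED_LOG listing; the here-document is sample data standing in for live sqlplus output, since the parsing step is the only part that can be demonstrated without a running database.

```shell
# Sketch: find the highest applied log sequence in a V$ARCHIVED_LOG listing.
# The here-document is sample data standing in for real sqlplus output;
# the APPLIED column is the last field on each row.
max_applied=$(awk '$NF == "YES" { if ($1 + 0 > max + 0) max = $1 } END { print max }' <<'EOF'
       320 10-DEC-2010 08:34:06 10-DEC-2010 08:49:39 YES      YES
       321 10-DEC-2010 08:49:39 10-DEC-2010 09:05:15 YES      YES
       322 10-DEC-2010 09:05:15 10-DEC-2010 09:20:48 YES      NO
EOF
)
echo "highest applied sequence: $max_applied"
# prints: highest applied sequence: 321
```

In practice the here-document would be replaced by a sqlplus -s invocation running the same SELECT against the standby.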

Monitoring the alert.log of the Standby Database

Querying the V$ARCHIVED_LOG view from the standby database is a good way to ensure Redo Transport and Redo Apply are doing their jobs correctly. In addition, I also like to tail the alert.log file of the standby database as a double check.

From the standby database, perform a tail -f against the alert.log while issuing the "alter system archive log current" statement from the primary:


[oracle@vmlinux2 ~]$ cd /u01/app/oracle/admin/turlock/bdump
[oracle@vmlinux2 bdump]$ tail -f alert_turlock.log
RFS[2]: Successfully opened standby log 5: '/u02/oradata/TURLOCK/onlinelog/o1_mf_5_6j0ml7nw_.log'
Fri Dec 10 10:14:51 EST 2010
Media Recovery Log /u03/flash_recovery_area/TURLOCK/archivelog/2010_12_10/o1_mf_1_326_6j4jzbpc_.arc
Media Recovery Log /u03/flash_recovery_area/TURLOCK/archivelog/2010_12_10/o1_mf_1_327_6j4jzcbm_.arc
Media Recovery Waiting for thread 1 sequence 328
Fri Dec 10 10:30:24 EST 2010
RFS[1]: Successfully opened standby log 4: '/u02/oradata/TURLOCK/onlinelog/o1_mf_4_6j0mkynk_.log'
Fri Dec 10 10:30:26 EST 2010
Media Recovery Log /u03/flash_recovery_area/TURLOCK/archivelog/2010_12_10/o1_mf_1_328_6j4kwjb5_.arc
Media Recovery Waiting for thread 1 sequence 329
Fri Dec 10 10:39:57 EST 2010
RFS[2]: Successfully opened standby log 4: '/u02/oradata/TURLOCK/onlinelog/o1_mf_4_6j0mkynk_.log'
Fri Dec 10 10:39:58 EST 2010
Media Recovery Log /u03/flash_recovery_area/TURLOCK/archivelog/2010_12_10/o1_mf_1_329_6j4lgfsr_.arc
Media Recovery Waiting for thread 1 sequence 330
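
When watching the alert.log, the line of interest is usually the most recent "Media Recovery Waiting" message, since it names the next sequence the standby expects. That check can be scripted as well; the sketch below runs against a captured fragment of the log written to a hypothetical path, where a real run would point at alert_turlock.log in the bdump directory.

```shell
# Sketch: report the log sequence the standby is currently waiting for.
# /tmp/alert_sample.log is a captured fragment standing in for the real
# alert_turlock.log in the bdump directory.
cat > /tmp/alert_sample.log <<'EOF'
Media Recovery Log /u03/flash_recovery_area/TURLOCK/archivelog/2010_12_10/o1_mf_1_328_6j4kwjb5_.arc
Media Recovery Waiting for thread 1 sequence 329
Media Recovery Log /u03/flash_recovery_area/TURLOCK/archivelog/2010_12_10/o1_mf_1_329_6j4lgfsr_.arc
Media Recovery Waiting for thread 1 sequence 330
EOF

# Take the last "waiting" line; the sequence number is its final field.
waiting_seq=$(grep 'Media Recovery Waiting' /tmp/alert_sample.log | tail -1 | awk '{ print $NF }')
echo "standby is waiting for sequence $waiting_seq"
# prints: standby is waiting for sequence 330
```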

Deletion Policy for Archived Redo Log Files In Flash Recovery Area

By default, archived redo log files in a flash recovery area become eligible for deletion once they have been backed up to a tertiary device or made obsolete (as defined by the RMAN retention policy). Eligible files are eventually deleted automatically to make space when the disk space in the flash recovery area becomes full. However, you can change this default deletion policy on either the primary or standby database using the following RMAN command.


CONFIGURE ARCHIVELOG DELETION POLICY TO [CLEAR | NONE | APPLIED ON STANDBY];

This section describes the command qualifiers and provides examples for setting up a deletion policy for archived redo log files which are already applied on standby database in Oracle Database 10g and above.

Using the APPLIED ON STANDBY Clause

Use the APPLIED ON STANDBY clause so that archived redo log files that have been applied on all mandatory standby destinations will be deleted. The actions taken when you specify this clause are described in the following table.

APPLIED ON STANDBY Clause

    When the APPLIED ON STANDBY clause      Then, these files are eligible
    is configured on ...                    for deletion ...
    --------------------------------------  ------------------------------------------
    The primary database                    Archived redo log files in the flash
                                            recovery area that were applied on all
                                            mandatory standby databases.

    A standby database that has one or      Archived redo log files in the flash
    more mandatory cascading standby        recovery area that were applied on all
    databases                               mandatory cascading standby databases.

    A standby database that has no          Archived redo log files in the flash
    cascading standby databases             recovery area that were applied on the
                                            standby database.

Using the CLEAR Clause

Use the CLEAR clause to disable the deletion policy that was previously set up with the RMAN CONFIGURE ARCHIVELOG DELETION POLICY command. The Oracle database will resume the default deletion policy behavior, which is to delete archived redo log files that are backed up or obsolete to make space if disk space in the flash recovery area becomes full.

Using the NONE Clause

Use the NONE clause so that archived redo logs in the flash recovery area that have been backed up or made obsolete per the RMAN retention policy are eligible for deletion. This is the default configuration. Archived redo log files that are backed up or obsolete are deleted to make space if the disk space in the flash recovery area becomes full.

Using the CONFIGURE ARCHIVELOG DELETION POLICY Command

As described in the table above, the APPLIED ON STANDBY setting can be configured on both the primary and standby database. In most cases, the APPLIED ON STANDBY setting should not be configured on the database that performs and maintains RMAN backups for recovery purposes. For example:

When backups of archived redo log files are taken on the primary database:

  1. Issue the following command on the standby database:


    CONFIGURE ARCHIVELOG DELETION POLICY TO APPLIED ON STANDBY;

  2. Issue the following command on the primary database:


    CONFIGURE ARCHIVELOG DELETION POLICY TO NONE;

When backups of archived redo log files are taken on the standby database:

  1. Issue the following command on the primary database:


    CONFIGURE ARCHIVELOG DELETION POLICY TO APPLIED ON STANDBY;

  2. Issue the following command on the standby database:


    CONFIGURE ARCHIVELOG DELETION POLICY TO NONE;
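
Since these policies are typically set once per database, they can be kept in small RMAN command files and applied noninteractively. The sketch below only writes such a file; actually applying it requires a live instance, and the file path and invocation shown in the comment are illustrative rather than taken from this configuration.

```shell
# Sketch: keep the deletion policy in an RMAN command file so it can be
# applied noninteractively. The file path here is illustrative.
cat > /tmp/standby_delpol.rman <<'EOF'
CONFIGURE ARCHIVELOG DELETION POLICY TO APPLIED ON STANDBY;
EOF

# On the standby host one would then run something like:
#   rman target / cmdfile=/tmp/standby_delpol.rman
cat /tmp/standby_delpol.rman
```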

Post-Creation Steps

At this point, the physical standby database is running and can provide the maximum performance level of data protection (the default). The following describes additional steps you can take with the physical standby database:

Activating a Physical Standby Database (Role Transition)

A database can operate in one of two mutually exclusive modes in an Oracle Data Guard configuration: primary or standby. Whenever the role is changed between the primary and standby, this is referred to as a role transition. Role transition plays an important part in Data Guard by providing an interface that allows DBAs to activate a standby database to take over as the primary production database, or vice versa. There are two types of role transitions supported in Oracle 10g Data Guard: switchover and failover. Knowing the difference between the two role transition methods is crucial.

Click here for a guide on how to perform role transitions and activating a physical standby database.

Further Reading

Additional information on Oracle 10g Release 2 Data Guard can be found in the Data Guard Concepts and Administration. This guide is available from the Oracle Documentation Library website located at the following address http://download.oracle.com/docs/cd/B19306_01/server.102/b14239.pdf.

About the Author

Jeffrey Hunter is an Oracle Certified Professional, Java Development Certified Professional, Author, and an Oracle ACE. Jeff currently works as a Senior Database Administrator for The DBA Zone, Inc. located in Pittsburgh, Pennsylvania. His work includes advanced performance tuning, Java and PL/SQL programming, developing high availability solutions, capacity planning, database security, and physical / logical database design in a UNIX / Linux server environment. Jeff's other interests include mathematical encryption theory, tutoring advanced mathematics, programming language processors (compilers and interpreters) in Java and C, LDAP, writing web-based database administration tools, and of course Linux. He has been a Sr. Database Administrator and Software Engineer for over 20 years and maintains his own website at: http://www.iDevelopment.info. Jeff graduated from Stanislaus State University in Turlock, California, with a Bachelor's degree in Computer Science and Mathematics.



Copyright (c) 1998-2017 Jeffrey M. Hunter. All rights reserved.

All articles, scripts and material located at the Internet address of http://www.idevelopment.info are the copyright of Jeffrey M. Hunter and are protected under copyright laws of the United States. This document may not be hosted on any other site without my express, prior, written permission. Application to host any of the material elsewhere can be made by contacting me at jhunter@idevelopment.info.

I have made every effort and taken great care in making sure that the material included on my web site is technically accurate, but I disclaim any and all responsibility for any loss, damage or destruction of data or any other property which may arise from relying on it. I will in no case be liable for any monetary damages arising from such loss, damage or destruction.

Last modified on
Thursday, 26-Jan-2012 16:33:48 EST