DBA Tips Archive for Oracle

  


Data Guard Configuration Example - (Oracle9i, Physical Standby)

by Jeff Hunter, Sr. Database Administrator


Contents

  1. Overview
  2. Introduction to Data Guard
  3. Hardware and O/S Configuration
  4. Configuring Nodes for Remote Access
  5. Configuring the Primary Database
  6. Configuring the Standby Database
  7. Start Remote Archiving
  8. Verifying the Physical Standby Database
  9. Role Transitions
  10. Which Role Transition Operation Should I Use?
  11. Implementing Role Switchover Operations
  12. Implementing Role Failover Operations
  13. Further Reading
  14. About the Author



Overview

Oracle Data Guard (known as Oracle Standby Database prior to Oracle9i) provides organizations with high availability, data protection, and disaster recovery for enterprise databases. It gives the DBA a comprehensive set of services for creating, maintaining, managing, and monitoring one or more standby databases, and it enables enterprise data systems to survive both data corruption and major disasters.

This article provides detailed instructions for creating and configuring a physical standby database from a primary database in Oracle9i Release 2 (9.2). After creating and configuring the physical standby database, I will then go into the details of Role Transitions using both switchover and failover operations and when to use either of them.

To keep the article simple, I will be using a protection mode of Maximum Performance. (Maximum Performance is the default protection mode.)

  For a detailed discussion on the various Oracle Data Guard protection modes, see my article entitled "Data Protection Modes".

All configuration parameters related to the Oracle instance and networking will be discussed as well as how to place the standby database in Managed Recovery Mode.



Introduction to Data Guard

The Standby Database feature of Oracle was first introduced in Oracle Release 7. Since then, Oracle has added many new features to the standby database option, making it an excellent choice for your high availability requirements.

With the introduction of Oracle9i, Oracle changed the name of Standby Database to Oracle Data Guard. The Data Guard software in Oracle9i is used to maintain a standby database - a real-time copy of a primary database. When the standby database site is hosted in a different geographical location than the primary site, it provides an excellent High Availability (HA) solution. When creating a standby database configuration, the DBA should always attempt to keep the primary and standby database sites identical as well as keep the physical location of the production database transparent to the end user. This allows for an easy switchover scenario for both planned and unplanned outages. When the secondary (standby) site is identical to the primary site, it allows predictable performance and response time after failing over (or switching over) from the primary site.



Hardware and O/S Configuration

Now, let's take a look at the Oracle database configuration and parameters we will be using for our physical standby database configuration.

Primary Database
Oracle Release: Oracle9i Release 2 - (9.2.0.8.0)
Machine Name: vmlinux3.idevelopment.info
Operating System: Red Hat Linux 4 - (CentOS 4.4)
Oracle SID: TESTDB
Instance Service Names: TESTDB.IDEVELOPMENT.INFO
TNS Service Name: TESTDB_VMLINUX3.IDEVELOPMENT.INFO
Standby Database
Oracle Release: Oracle9i Release 2 - (9.2.0.8.0)
Machine Name: vmlinux4.idevelopment.info
Operating System: Red Hat Linux 4 - (CentOS 4.4)
Oracle SID: TESTDB
Instance Service Names: TESTDB.IDEVELOPMENT.INFO
TNS Service Name: TESTDB_VMLINUX4.IDEVELOPMENT.INFO



Configuring Nodes for Remote Access

One other note about my configuration. I have both of my Linux servers configured with an .rhosts file for the oracle user in order to allow the use of the r* commands (like rcp). This allows me to expedite the initial copying of database files from the primary host to the standby host by not requiring me to manually login (FTP) with a username and password. Although it is only necessary to configure the standby database server with an .rhosts file (since it is the standby database server that needs to identify which servers will be trusted), I like to configure both nodes for remote access.

  The rsh daemon on the standby database server validates UNIX users using the /etc/hosts.equiv file or the .rhosts file found in the user's (oracle's) home directory.

  The use of rcp (and any other of the r* commands) is not required for normal Data Guard operation. rcp, however, does expedite the initial copying of database files from the primary host to the standby host! After copying all database files from the primary database server to the standby database server, this feature can be disabled.

Before configuring the .rhosts file, we must first make sure that we have the rsh RPMs installed on each node in the Data Guard configuration:

# rpm -q rsh rsh-server
rsh-0.17-25.4
rsh-server-0.17-25.4
From the above, we can see that both the rsh and rsh-server packages are installed.

  If rsh is not installed, run the following command from the CD where the RPM is located:
# su -
# rpm -ivh rsh-0.17-25.4.rpm rsh-server-0.17-25.4.rpm

To enable the "rsh" service, the "disable" attribute in the /etc/xinetd.d/rsh file must be set to "no" and xinetd must be reloaded. This can be done by running the following commands on both nodes in the Data Guard configuration:

# su -
# chkconfig rsh on
# chkconfig rlogin on
# chkconfig rsync on
# chkconfig rexec on
# service xinetd reload
Reloading configuration: [  OK  ]

Here is a copy of the .rhosts file I have configured on both of my Linux servers for the oracle user account. This file should reside in the $HOME directory for the oracle user account:

Example .rhosts File
+vmlinux3.idevelopment.info oracle
+vmlinux4.idevelopment.info oracle

  Before attempting to test the rsh command, ensure that you are using the correct version of rsh. By default, Red Hat Linux puts /usr/kerberos/sbin at the head of the $PATH variable. This will cause the Kerberos version of rsh to be executed.

I will typically rename the Kerberos version of rsh so that the normal rsh command will be used. Use the following:

# su -

# which rsh
/usr/kerberos/bin/rsh

# mv /usr/kerberos/bin/rsh /usr/kerberos/bin/rsh.original
# mv /usr/kerberos/bin/rcp /usr/kerberos/bin/rcp.original
# mv /usr/kerberos/bin/rlogin /usr/kerberos/bin/rlogin.original

# which rsh
/usr/bin/rsh

For security reasons, the permissions of the .rhosts file should be 600 and the file should be owned by the "oracle" user account. If you are instead using a global /etc/hosts.equiv file, it should be owned by root with its permissions set to 600; in fact, some systems will only honor the contents of this file if it is owned by root and its permissions are set to 600. (A sketch for the /etc/hosts.equiv case appears after the example below.)

$ chmod 600 ~/.rhosts
$ ls -l ~/.rhosts
-rw-------  1 oracle dba 0 Jun 21 17:08 /u01/app/oracle/.rhosts
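If you opt for the global /etc/hosts.equiv file instead of per-user .rhosts files, a minimal sketch of setting the ownership and permissions (the file contents would mirror the .rhosts entries shown above):

# chown root:root /etc/hosts.equiv
# chmod 600 /etc/hosts.equiv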

After configuring the .rhosts file on both computers, I like to perform a simple test from each machine to ensure that the configuration is valid:

FROM vmlinux3 MACHINE

    $ id
    uid=175(oracle) gid=115(dba) groups=115(dba) context=user_u:system_r:unconfined_t

    $ rsh vmlinux4 hostname
    vmlinux4


FROM vmlinux4 MACHINE

    $ id
    uid=175(oracle) gid=115(dba) groups=115(dba) context=user_u:system_r:unconfined_t

    $ rsh vmlinux3 hostname
    vmlinux3



Configuring the Primary Database

Let's start by configuring and preparing our primary database.

Although not required, I have the primary database configured with an SPFILE instead of the traditional text initialization parameter file.

  1. Enable archiving and define a local archiving destination

    The primary database must be in ARCHIVELOG mode, be configured for automatic archiving, and have at least one local archiving destination defined. To verify that the database meets all of these requirements, you can use the following command:

    SQL> archive log list
    Database log mode              Archive Mode
    Automatic archival             Enabled
    Archive destination            /u02/oraarchive/TESTDB
    Oldest online log sequence     1
    Next log sequence to archive   2
    Current log sequence           2

      In the output above, we can see that Archive Log Mode is enabled and that the local archive destination is /u02/oraarchive/TESTDB. If Archive Log Mode is not enabled for your primary database, you can enable it as follows. First, define the following instance parameters:
    log_archive_dest_1        = 'location=/u02/oraarchive/TESTDB MANDATORY'
    log_archive_dest_state_1  = 'enable'
    log_archive_format        = 'arch_t%t_s%s.dbf'
    log_archive_start         = true
    Next, place the primary database in archive log mode:
    SQL> shutdown immediate;
    SQL> startup mount;
    SQL> alter database archivelog;
    SQL> alter database open;
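    Since the primary database in this article uses an SPFILE, one way to persist these parameters before the restart is with ALTER SYSTEM ... SCOPE=SPFILE. This is only a sketch; adjust the archive destination and format to match your environment:

    SQL> alter system set log_archive_dest_1='location=/u02/oraarchive/TESTDB MANDATORY' scope=spfile;
    SQL> alter system set log_archive_dest_state_1='enable' scope=spfile;
    SQL> alter system set log_archive_format='arch_t%t_s%s.dbf' scope=spfile;
    SQL> alter system set log_archive_start=true scope=spfile;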

  2. Enable forced logging

    The following step is optional but highly recommended, as it can save the DBA considerable time if disaster recovery ever needs to be implemented and users have been performing NOLOGGING operations.

    You should place the primary database in FORCE LOGGING mode. This option is a new feature in Oracle Release 9.2 and overrides any user transaction that attempts to specify NOLOGGING. If a user were able to perform NOLOGGING operations on the primary database, some changes would not be fully logged in the redo stream that maintains the standby database. Any unlogged operations would invalidate the standby database and would require substantial DBA intervention to manually propagate them. By placing the primary database in force logging mode, NOLOGGING operations are still permitted to run without error, but their changes are placed into the redo stream anyway. You can use the following statement to place the primary database in force logging mode:

    SQL> alter database force logging;
    
    Database altered.

      The above statement may take a substantial amount of time to complete. The database must wait for all unlogged direct write I/O operations to finish.

    Also, if the database is already in force logging mode, you will receive the following error:

    SQL> alter database force logging;
    alter database force logging
    *
    ERROR at line 1:
    ORA-12920: database is already in force logging mode
    Use the following query to determine the logging mode of the database:
    SQL> select force_logging from v$database;
    
    FORCE_LOGGING
    -------------
    YES

  3. Create a password file

    A new feature of Oracle9i Data Guard is log transport security and authentication. All databases in a Data Guard configuration must be configured with a password file, and the password for the SYS database account must be identical on every system for log transport services to function. If your primary database is already configured with a password file, you can move on to the next step. If you need to configure a password file, use the following:

    $ cd $ORACLE_HOME/dbs
    $ orapwd file=orapwTESTDB password=change_on_install
    Once the password file is created, you will need to configure the instance to use it by setting the following instance parameter in NOMOUNT mode:
    SQL> shutdown immediate
    SQL> startup nomount
    SQL> alter system set remote_login_passwordfile=exclusive scope=spfile;
    SQL> alter database mount;
    SQL> alter database open;
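    As a quick sanity check (not required by Data Guard), you can confirm that the password file is in use and that SYS has been granted SYSDBA through it by querying the V$PWFILE_USERS view:

    SQL> select username, sysdba, sysoper from v$pwfile_users;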

  4. Configuring the Oracle networking files

    Configure the Oracle Networking Components that allow the primary database to communicate with the standby database. On the primary host, create a net service name that the primary database can use to connect to the standby database.

    Also on the primary host, create a net service name that the standby database, when running on the primary host (after a switchover), can use to connect to the new primary database running on the standby host.

    You should first put the following two entries in the tnsnames.ora file on the primary host. Keep in mind that these same entries will appear on both the primary and standby hosts:

    TNS Names Entries

     TESTDB_VMLINUX3.IDEVELOPMENT.INFO =
       (DESCRIPTION =
         (ADDRESS_LIST =
           (ADDRESS = (PROTOCOL = TCP)(HOST = vmlinux3.idevelopment.info)(PORT = 1521))
         )
         (CONNECT_DATA =
           (SERVICE_NAME = TESTDB.IDEVELOPMENT.INFO)
         )
       )

     TESTDB_VMLINUX4.IDEVELOPMENT.INFO =
       (DESCRIPTION =
         (ADDRESS_LIST =
           (ADDRESS = (PROTOCOL = TCP)(HOST = vmlinux4.idevelopment.info)(PORT = 1521))
         )
         (CONNECT_DATA =
           (SERVICE_NAME = TESTDB.IDEVELOPMENT.INFO)
         )
       )

    Now, you will need to set up the listener.ora file on the primary host:

    Listener.ora File on the Primary Host

     LISTENER =
       (DESCRIPTION_LIST =
         (DESCRIPTION =
           (ADDRESS_LIST =
             (ADDRESS = (PROTOCOL = TCP)(HOST = vmlinux3.idevelopment.info)(PORT = 1521))
           )
           (ADDRESS_LIST =
             (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC))
           )
         )
       )

     SID_LIST_LISTENER =
       (SID_LIST =
         (SID_DESC =
           (SID_NAME = PLSExtProc)
           (ORACLE_HOME = /u01/app/oracle/product/9.2.0)
           (PROGRAM = extproc)
           (ENVS = "EXTPROC_DLLS=ANY")
         )
         (SID_DESC =
           (GLOBAL_DBNAME = TESTDB.IDEVELOPMENT.INFO)
           (SID_NAME = TESTDB)
           (ORACLE_HOME = /u01/app/oracle/product/9.2.0)
         )
       )

    Once the Oracle networking files are configured, be sure to restart the TNS listener so that it picks up the latest (valid) listener.ora file:

    $ lsnrctl stop
    $ lsnrctl start

    Finally, you should set the dead connection detection in the sqlnet.ora file. Keep in mind that this is only a requirement for the sqlnet.ora file on the standby database server. However, I like to have it already configured in the sqlnet.ora file for the primary database server if a switchover needs to occur. I then do not have to remember to set it if the primary were to transition to the role of standby. Here is an example sqlnet.ora file:

    Sqlnet.ora File on the Primary Host

     NAMES.DIRECTORY_PATH = (TNSNAMES, LDAP)
     BEQUEATH_DETACH = yes
     AUTOMATIC_IPC = ON
     NAMES.DEFAULT_DOMAIN = IDEVELOPMENT.INFO
     SQLNET.EXPIRE_TIME = 2
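    Once the networking files are in place on the primary host, connectivity to each net service name can be sanity-checked with the tnsping utility. A quick sketch:

    $ tnsping TESTDB_VMLINUX3.IDEVELOPMENT.INFO
    $ tnsping TESTDB_VMLINUX4.IDEVELOPMENT.INFO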

  5. Set initialization parameters on the primary database

    We now need to configure several initialization parameters on the primary database. Most of the configuration options for Data Guard are implemented as initialization parameters for the Oracle instance. When the primary and standby hosts are identical (memory, disks, CPU, etc.), the initialization files for both databases should be almost identical - with the exception of four key parameters. This greatly simplifies role transition between the primary and standby databases. The four parameters whose values on the standby database differ from the primary database are:

    FAL_CLIENT
    FAL_SERVER
    LOG_ARCHIVE_DEST_2
    LOG_ARCHIVE_DEST_STATE_2
    This step provides an overview of the parameters, set on the primary database, that are important for a successful standby database configuration.
    ...
    archive_lag_target             = 0
    compatible                     = '9.2.0.8.0'
    control_file_record_keep_time  = 7
    control_files                  = '/u02/oradata/TESTDB/control01.ctl',
                                     '/u02/oradata/TESTDB/control02.ctl',
                                     '/u02/oradata/TESTDB/control03.ctl'
    db_name                        = 'TESTDB'
    fal_client                     = 'TESTDB_VMLINUX3.IDEVELOPMENT.INFO'
    fal_server                     = 'TESTDB_VMLINUX4.IDEVELOPMENT.INFO'
    instance_name                  = 'TESTDB'
    local_listener                 = ''
    log_archive_dest_1             = 'location=/u02/oraarchive/TESTDB mandatory'
    log_archive_dest_2             = 'service=TESTDB_VMLINUX4 optional reopen=15'
    log_archive_dest_state_1       = enable
    log_archive_dest_state_2       = defer
    log_archive_format             = 'arch_t%t_s%s.dbf'
    log_archive_start              = true
    remote_archive_enable          = true
    remote_login_passwordfile      = exclusive
    service_names                  = 'TESTDB.IDEVELOPMENT.INFO'
    standby_archive_dest           = '/u02/oraarchive/TESTDB'
    standby_file_management        = auto
    ...

      Notice that I set log_archive_dest_state_2 to defer. I like to keep the state of this destination set to defer until the standby database has been created and is in the MOUNT state.

      Note that I included values for both FAL_CLIENT and FAL_SERVER on the primary database even though these two values are only recognized when defined on the standby database by log apply services. Doing this makes role transition easier to manage if this primary database ever has to switch roles from being the primary database to a standby database. If this role transition were to occur, then TESTDB_VMLINUX4 would be the primary database and TESTDB_VMLINUX3 would become the standby database as defined above. For more information about setting FAL_CLIENT and FAL_SERVER, see my article "Log Gap Detection and Resolution".

  6. Take cold backup of the primary database

    We now need to take a backup of the primary database that can be used to copy over to the standby database server. This can be either a cold or hot (online) backup of the database. I prefer to perform a cold backup for this example. Keep in mind that you will need to have all of the archived redo logs from the primary database in order to bring the standby database to a consistent state.

    To perform our cold backup, we will need to bring down the primary database and perform a backup of all database files.

      You only need to back up the database files - there is no need to back up the online redo log files or control files.

    To determine the database files that need to be backed up, you can query the V$DATAFILE view. For example:

    SQL> select name from v$datafile;
    
    NAME
    -------------------------------------------
    /u02/oradata/TESTDB/system01.dbf
    /u02/oradata/TESTDB/undotbs1_01.dbf
    /u02/oradata/TESTDB/cwmlite01.dbf
    /u02/oradata/TESTDB/drsys01.dbf
    /u02/oradata/TESTDB/indx01.dbf
    /u02/oradata/TESTDB/odm01.dbf
    /u02/oradata/TESTDB/tools01.dbf
    /u02/oradata/TESTDB/users01.dbf
    /u02/oradata/TESTDB/xdb01.dbf
    /u02/oradata/TESTDB/perfstat01.dbf
    
    10 rows selected.
    We now shut down the primary database:
    SQL> shutdown immediate
    Database closed.
    Database dismounted.
    ORACLE instance shut down.
    After closing the primary database, copy all database files to the standby database server.

      Note that in most cases you will want to limit the amount of time the primary database is down. If the network is slow and/or there is a large number of database files, copy the database files to a temporary directory on the local machine, bring up the primary database, and then transfer the database files from the temporary directory to the standby database server. It is also possible to completely eliminate the outage of the primary database by using RMAN to create a hot backup of the database and then transferring the RMAN backup sets to the standby database server (see the sketch at the end of this step).

    For me, this step is easy since all of my database files are on /u02:

    $ rsh vmlinux4 mkdir -p /u02/oradata/TESTDB
    $ rcp -r /u02/oradata/TESTDB/*.dbf vmlinux4:/u02/oradata/TESTDB
    Once all database files are copied to their new location (or to a temporary location), you should now bring up the primary database:
    SQL> startup open
    ORACLE instance started.
    
    Total System Global Area  252777712 bytes
    Fixed Size                   451824 bytes
    Variable Size             218103808 bytes
    Database Buffers           33554432 bytes
    Redo Buffers                 667648 bytes
    Database mounted.
    Database opened.
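    As mentioned in the note earlier in this step, an RMAN hot backup can eliminate the primary database outage altogether. The following is only a sketch of that alternative; it assumes the /u02/orabackup/TESTDB backup directory that appears in the RMAN output later in this article:

    $ rman target /

    RMAN> backup database format '/u02/orabackup/TESTDB/df_%U';
    RMAN> backup archivelog all format '/u02/orabackup/TESTDB/al_%U';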

  7. Create a controlfile for the standby database

    After bringing up the primary database, create a standby control file for the standby database, as follows:

    SQL> alter database create standby controlfile as '/u01/app/oracle/control01.ctl';
    
    Database altered.

      The standby controlfile must be created after the last timestamp of the backup datafiles.

      You can also create the standby controlfile from within RMAN as follows:
    RMAN> backup current controlfile for standby;
    
    Starting backup at 22-JUL-06
    allocated channel: ORA_DISK_1
    channel ORA_DISK_1: sid=17 devtype=DISK
    channel ORA_DISK_1: starting full datafile backupset
    channel ORA_DISK_1: specifying datafile(s) in backupset
    including standby controlfile in backupset
    channel ORA_DISK_1: starting piece 1 at 22-JUL-06
    channel ORA_DISK_1: finished piece 1 at 22-JUL-06
    piece handle=/u02/orabackup/TESTDB/backup_db_TESTDB_S_10_P_1_T_596422041 comment=NONE
    channel ORA_DISK_1: backup set complete, elapsed time: 00:00:02
    Finished backup at 22-JUL-06
    
    Starting Control File and SPFILE Autobackup at 22-JUL-06
    piece handle=/u02/orabackup/TESTDB/c-2362875026-20060722-00 comment=NONE
    Finished Control File and SPFILE Autobackup at 22-JUL-06

  8. Prepare initialization parameter file for the standby database

    As noted earlier, I am using an SPFILE for the primary database (and will be using one for the standby database once it is created). In this step, we create a traditional text initialization parameter file from the server parameter file used by the primary database. We can then copy this text file to the standby host.

    SQL> create pfile='/u01/app/oracle/initTESTDB.ora' from spfile;
    
    File created.

      When configuring the standby database, we will be converting this file back to a server parameter file (SPFILE) after it is modified to contain the parameter values appropriate for use with the physical standby database.

  9. Copy all files from the primary host to the standby host

    When I took the cold backup of the Oracle database, I used the rcp command to transfer all of the database files to the standby host. If you have not yet copied the database files to the standby host, do so now. Also transfer the standby controlfile and the text initialization parameter file created in the previous steps to the standby host.

    $ rcp control01.ctl vmlinux4:
    $ rcp initTESTDB.ora vmlinux4:

  10. Use same file and directory naming conventions

    When setting up the standby database, use the same naming convention for all database files and directories. This, however, is not always possible. If you cannot use the same directory structure and/or the same file names on the standby database, you can use the procedures in this section for creating a mapping of the primary database and standby database directories and file names.

      This is required if the standby database is on the same machine as the primary. If the above init<SID>.ora parameters are used, you MUST use alter database rename file <oldname> to <newname> when the standby database is activated.

    The two initialization parameters that are used to create the mappings of directory structure and file names between the primary and standby database are the following:

    • db_file_name_convert - Specify the location of the primary database datafiles followed by the standby location of the datafiles. This parameter converts the filenames of the primary database datafiles to the filenames of the standby datafiles. This parameter is required if the standby database is on the same system as the primary database or if the directory structure where the datafiles are located on the standby site differs from the primary site.

    • log_file_name_convert - Specify the location of the primary database online redo logs followed by the standby location of the logs. This parameter converts the filenames of the primary database online redo logs to the filenames of the standby logs. This parameter is required if the standby database is on the same system as the primary database or if the directory structure where the logs are located on the standby site differs from the primary site.
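    For illustration only, suppose the standby host had to keep its datafiles and online redo logs under a hypothetical /u03/oradata/TESTDB directory while the primary uses /u02/oradata/TESTDB. The standby initialization parameters might then include:

    db_file_name_convert   = '/u02/oradata/TESTDB', '/u03/oradata/TESTDB'
    log_file_name_convert  = '/u02/oradata/TESTDB', '/u03/oradata/TESTDB'

    Since the primary and standby hosts in this article use identical directory structures, neither parameter is needed in our configuration.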



Configuring the Standby Database

Let's continue by configuring and preparing our standby database.
  1. Configuring the Oracle networking files

    Configure the Oracle Networking Components for the standby database. You should first put the following two entries in the tnsnames.ora file for the standby host.

    Keep in mind that the entries below will appear in both the primary and standby hosts:

    TNS Names Entries

     TESTDB_VMLINUX3.IDEVELOPMENT.INFO =
       (DESCRIPTION =
         (ADDRESS_LIST =
           (ADDRESS = (PROTOCOL = TCP)(HOST = vmlinux3.idevelopment.info)(PORT = 1521))
         )
         (CONNECT_DATA =
           (SERVICE_NAME = TESTDB.IDEVELOPMENT.INFO)
         )
       )

     TESTDB_VMLINUX4.IDEVELOPMENT.INFO =
       (DESCRIPTION =
         (ADDRESS_LIST =
           (ADDRESS = (PROTOCOL = TCP)(HOST = vmlinux4.idevelopment.info)(PORT = 1521))
         )
         (CONNECT_DATA =
           (SERVICE_NAME = TESTDB.IDEVELOPMENT.INFO)
         )
       )

    Now, you will need to set up the listener.ora file for the standby host. The listener.ora file can be copied from the primary database server with only one change: the HOST value must reference the standby host (vmlinux4.idevelopment.info).

    Listener.ora File on the Standby Host

     LISTENER =
       (DESCRIPTION_LIST =
         (DESCRIPTION =
           (ADDRESS_LIST =
             (ADDRESS = (PROTOCOL = TCP)(HOST = vmlinux4.idevelopment.info)(PORT = 1521))
           )
           (ADDRESS_LIST =
             (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC))
           )
         )
       )

     SID_LIST_LISTENER =
       (SID_LIST =
         (SID_DESC =
           (SID_NAME = PLSExtProc)
           (ORACLE_HOME = /u01/app/oracle/product/9.2.0)
           (PROGRAM = extproc)
           (ENVS = "EXTPROC_DLLS=ANY")
         )
         (SID_DESC =
           (GLOBAL_DBNAME = TESTDB.IDEVELOPMENT.INFO)
           (SID_NAME = TESTDB)
           (ORACLE_HOME = /u01/app/oracle/product/9.2.0)
         )
       )

    Finally, you should set the dead connection detection in the sqlnet.ora file. Here is an example sqlnet.ora file:

    Sqlnet.ora File on the Standby Host

     NAMES.DIRECTORY_PATH = (TNSNAMES, LDAP)
     BEQUEATH_DETACH = yes
     AUTOMATIC_IPC = ON
     NAMES.DEFAULT_DOMAIN = IDEVELOPMENT.INFO
     SQLNET.EXPIRE_TIME = 2

    Once the Oracle networking files are configured, be sure to restart the TNS listener so that it picks up the latest (valid) listener.ora file:

    $ lsnrctl stop
    $ lsnrctl start

  2. Create a server parameter file for the standby database

    Remember that text initialization parameter file we created and transferred from the primary database? Well, we need to make modifications to it and create an SPFILE for the standby database. Let's start by putting the text initialization parameter file in the $ORACLE_HOME/dbs directory:

    $ cp initTESTDB.ora $ORACLE_HOME/dbs
    $ cd $ORACLE_HOME/dbs
    Now, let's make the appropriate modifications to the text initialization parameter file. Below, I provide an overview of the parameters that are important for a successful physical standby database configuration.

    Earlier in this article, I mentioned that in a robust Data Guard configuration, the initialization parameters for the primary and standby databases should be identical with the exception of four key parameters:

    FAL_CLIENT
    FAL_SERVER
    LOG_ARCHIVE_DEST_2
    LOG_ARCHIVE_DEST_STATE_2
    The values that differ from the primary initialization file are the four parameters noted above:
    ...
    archive_lag_target             = 0
    compatible                     = '9.2.0.8.0'
    control_file_record_keep_time  = 7
    control_files                  = '/u02/oradata/TESTDB/control01.ctl',
                                     '/u02/oradata/TESTDB/control02.ctl',
                                     '/u02/oradata/TESTDB/control03.ctl'
    db_name                        = 'TESTDB'
    fal_client                     = 'TESTDB_VMLINUX4.IDEVELOPMENT.INFO'
    fal_server                     = 'TESTDB_VMLINUX3.IDEVELOPMENT.INFO'
    instance_name                  = 'TESTDB'
    local_listener                 = ''
    log_archive_dest_1             = 'location=/u02/oraarchive/TESTDB mandatory'
    log_archive_dest_2             = 'service=TESTDB_VMLINUX3 optional reopen=15'
    log_archive_dest_state_1       = enable
    log_archive_dest_state_2       = defer
    log_archive_format             = 'arch_t%t_s%s.dbf'
    log_archive_start              = true
    remote_archive_enable          = true
    remote_login_passwordfile      = exclusive
    service_names                  = 'TESTDB.IDEVELOPMENT.INFO'
    standby_archive_dest           = '/u02/oraarchive/TESTDB'
    standby_file_management        = auto
    ...
    Finally, on the idle standby database, use the CREATE SPFILE ... SQL statement to create a server parameter file for the standby database from the text initialization parameter file that was just edited:
    $ sqlplus "/ as sysdba"
    
    Connected to an idle instance.
    
    SQL> create spfile from pfile='?/dbs/initTESTDB.ora';
    
    File created.

  3. Copy the standby controlfile to the appropriate location

    Remember that standby controlfile we created on the primary database server and transferred to the standby host? Well, we need to copy that file to the appropriate locations (and name them correctly) for the standby database as identified in your init<SID>.ora file for the standby database. For my example, I have the control_files defined as follows:

    control_files =   '/u02/oradata/TESTDB/control01.ctl'
                    , '/u02/oradata/TESTDB/control02.ctl'
                    , '/u02/oradata/TESTDB/control03.ctl'
    $ cd
    $ cp control01.ctl /u02/oradata/TESTDB/control01.ctl
    $ cp control01.ctl /u02/oradata/TESTDB/control02.ctl
    $ cp control01.ctl /u02/oradata/TESTDB/control03.ctl

  4. Create Oracle Password File

    Create an Oracle Password File for the standby database:

    $ cd $ORACLE_HOME/dbs
    $ orapwd file=orapwTESTDB password=change_on_install

      Keep in mind that the Oracle Password File is required only if you will be using the Data Guard Broker. It is also a requirement that the password for SYS be the same on both the primary and standby databases.

  5. Start the physical standby database

    On the standby database, issue the following SQL statements to start and mount the database in standby mode:

    SQL> startup nomount
    ORACLE instance started.
    
    Total System Global Area  252777712 bytes
    Fixed Size                   451824 bytes
    Variable Size             218103808 bytes
    Database Buffers           33554432 bytes
    Redo Buffers                 667648 bytes
    
    SQL> alter database mount standby database;
    
    Database altered.

  6. Initiate log apply services

    On the standby database, start log apply services as follows:

    SQL> alter database recover managed standby database disconnect from session;
    
    Database altered.

      The example includes the DISCONNECT FROM SESSION option so that log apply services run in a background session. This option is new in Oracle9i.

  7. Begin shipping redo data to the standby database

    Back when we were configuring the primary database server, we deferred the LOG_ARCHIVE_DEST_2 destination. This is the destination that transfers redo data to the standby database defined in our configuration. Now that the standby database is mounted, we should enable this destination on the primary database:

    SQL> alter system set log_archive_dest_state_2=enable scope=both;
    
    System altered.



Start Remote Archiving

With the protection mode we are using (maximum performance), archiving of redo logs to the remote standby location does not occur until after a log switch. A log switch occurs, by default, when an online redo log becomes full.

To force the current redo logs to be archived immediately, use the ALTER SYSTEM ... SQL statement on the primary database. For example:

SQL> alter system archive log current;

System altered.



Verifying the Physical Standby Database

Now that we have created a physical standby database and set up log transport services, we should verify that database modifications are being successfully shipped from the primary database to the standby database.

To see the new archived redo logs that were received on the standby database, you should first identify the existing archived redo logs on the standby database, archive a few logs on the primary database, and then check the standby database again.

The following steps show how to perform these tasks.

  1. Identify the existing archived redo logs

    On the standby database, query the V$ARCHIVED_LOG view to identify existing archived redo logs. For example:

    SQL> alter session set nls_date_format = 'DD-MON-YYYY HH24:MI:SS';
    
    Session altered.
    
    SQL> select sequence#, first_time, next_time
      2  from v$archived_log order by sequence#;
    
     SEQUENCE# FIRST_TIME           NEXT_TIME
    ---------- -------------------- --------------------
             3 21-JUL-2006 14:45:49 22-JUL-2006 01:23:08
             4 22-JUL-2006 01:23:08 22-JUL-2006 01:25:04

  2. Archiving the current log

    On the primary database, archive the current log using the following SQL statement:

    SQL> alter system archive log current;
    
    System altered.

  3. Verify that the new archived redo log was received

    From the standby database, query the V$ARCHIVED_LOG view again to verify the redo log was received:

    SQL> select sequence#, first_time, next_time
      2  from v$archived_log order by sequence#;
    
     SEQUENCE# FIRST_TIME           NEXT_TIME
    ---------- -------------------- --------------------
             3 21-JUL-2006 14:45:49 22-JUL-2006 01:23:08
             4 22-JUL-2006 01:23:08 22-JUL-2006 01:25:04
             5 22-JUL-2006 01:25:04 22-JUL-2006 01:25:41
    The logs are now available for the log apply services to apply redo data to the standby database. We check this in the next step.

  4. Verify that the new archived redo log was applied

    From the standby database, query the V$ARCHIVED_LOG view to verify the archived redo log was applied.

    SQL> select sequence#, archived, applied
      2  from v$archived_log order by sequence#;
    
     SEQUENCE# ARCHIVED APPLIED
    ---------- -------- -------
             3 YES      YES
             4 YES      YES
             5 YES      YES

  5. Monitoring the alert.log of the standby database

    Querying V$ARCHIVED_LOG is a good way to ensure the log transport and log apply services are doing their jobs. I also like to tail the alert.log file of the standby database to watch media recovery being applied.

    From the standby database, run the following tail -f against the alert.log while issuing the "alter system archive log current;" statement on the primary database:

    $ cd ~oracle/admin/TESTDB/bdump
    
    $ tail -f alert_TESTDB.log
    Media Recovery Log /u02/oraarchive/TESTDB/arch_t1_s3.dbf
    Media Recovery Waiting for thread 1 seq# 4
    Fri Jul 21 20:40:01 2006
    Restarting dead background process QMN0
    QMN0 started with pid=9
    Fri Jul 21 20:41:43 2006
    Media Recovery Log /u02/oraarchive/TESTDB/arch_t1_s4.dbf
    Media Recovery Waiting for thread 1 seq# 5
    Media Recovery Log /u02/oraarchive/TESTDB/arch_t1_s5.dbf
    Media Recovery Waiting for thread 1 seq# 6
    Media Recovery Log /u02/oraarchive/TESTDB/arch_t1_s6.dbf
    Media Recovery Waiting for thread 1 seq# 7



Role Transitions

At this point, you are happy, your customer is happy, management is happy - everyone has the peace of mind that their data is being replicated to a disaster recovery site. But the time will eventually come when the inevitable occurs and the primary database becomes unavailable. You are then faced with failing over production activities to the standby database. Other circumstances can also arise where scheduled maintenance needs to occur on the primary database and database operations need to be switched over to the standby database. In either case, the roles of the primary database and the standby database need to be changed. This is known as Role Transition and is the subject of this and the following sections.

Any database within a Data Guard configuration can operate in one of the following mutually exclusive roles: primary or standby. Services within Data Guard allow you to change these roles dynamically by issuing SQL statements that will be fully explained in the following sections.

There are two types of role transitions supported in Oracle9i Data Guard: switchover and failover.



Which Role Transition Operation Should I Use?

When faced with the decision of which role transition is best for a given situation, always choose the one that minimizes downtime and has the least potential for data loss. You should also consider how the change will affect any other standby databases in the configuration.


One key point to consider is that if it would be faster to repair the primary database (from failure or a simple planned hardware/software upgrade), the most efficient method would be to perform the tasks and then to bring up the primary database as quickly as possible and not perform any type of role transition. This method can impose less risk to the system and does not require any client software to be re-configured.

The sections that follow describe how to perform both switchover and failover operations.



Implementing Role Switchover Operations

A switchover operation is a reversible role transition between the primary database and one of its standby databases. With a switchover, there is no data loss, and the old primary database remains in the Data Guard configuration as a standby database. This operation is generally used to reduce primary database downtime during a scheduled outage, such as an operating system or hardware upgrade. After the required tasks for the planned outage are completed, the primary and standby databases can be transitioned back to their original roles.

This section uses the same two-node Data Guard configuration that was created in this article - a physical primary database (TESTDB on vmlinux3.idevelopment.info) sending redo to a physical standby database (TESTDB on vmlinux4.idevelopment.info). The Data Guard protection mode for the primary database is set to maximum performance, which means the standby database does not receive any redo data until the primary database fills its current online redo log and archives it. The data received and applied to the standby database is only as current as the last archived redo log sent from the primary database.

In this section, I will be performing a role transition where TESTDB on vmlinux3 will transition its role from primary to standby while the TESTDB database on vmlinux4 will transition its role from standby to primary using the graceful switchover operation. This will all occur without any loss of data!

A switchover operation takes place in two phases:

  1. The existing primary database is transitioned to a standby role. After this first phase is implemented, the Data Guard configuration temporarily has two standby databases.

  2. A standby database (in this example, there was only one physical standby database) is transitioned to the primary role.

Use the following steps to perform the switchover operation.


Preparing the Data Guard environment for physical database switchover

  1. Identify all initialization parameters to change

    One of the first steps before attempting the switchover operation is to identify all initialization parameters that must be changed to complete the role transition. Two of the more important parameters are the log_archive_dest_n and log_archive_dest_state_n parameters on the current standby database. When a switchover or failover operation occurs, you must ensure that all of the standby databases continue to receive logs from the new primary database. With the configuration we implemented in this article, most of the initialization parameters are in place for the standby database to assume the primary role.

    On the current physical primary database, verify the following are set:

    ...
    fal_client                     = 'TESTDB_VMLINUX4.IDEVELOPMENT.INFO'
    fal_server                     = 'TESTDB_VMLINUX3.IDEVELOPMENT.INFO'
    log_archive_dest_1             = 'location=/u02/oraarchive/TESTDB mandatory'
    log_archive_dest_2             = 'service=TESTDB_VMLINUX4 optional reopen=15'
    log_archive_dest_state_1       = ENABLE
    log_archive_dest_state_2       = ENABLE
    ...

    On the current physical standby database, verify the following are set:

    ...
    fal_client                     = 'TESTDB_VMLINUX3.IDEVELOPMENT.INFO'
    fal_server                     = 'TESTDB_VMLINUX4.IDEVELOPMENT.INFO'
    log_archive_dest_1             = 'location=/u02/oraarchive/TESTDB mandatory'
    log_archive_dest_2             = 'service=TESTDB_VMLINUX3 optional reopen=15'
    log_archive_dest_state_1       = ENABLE
    log_archive_dest_state_2       = DEFER
    ...

      If your Data Guard configuration contains other standby databases, verify that the new primary database contains a log_archive_dest_n definition for each standby database. This ensures each standby database will continue to receive redo data from the new primary database.

  2. Verify no active database users

    Query the V$SESSION view from both the primary and standby database to verify no active users are connected:

    SELECT username FROM v$session
    WHERE status = 'ACTIVE' AND username IS NOT NULL AND username != 'SYS';
    
    no rows selected

  3. Verify only one primary and one standby instance are up in a RAC environment

    If you are working in a Real Application Clusters (RAC) environment, verify that all but the one primary database instance and one standby instance are shut down. When using Oracle RAC, only one primary database instance and only one standby instance can perform the switchover operation. All other instances need to be shutdown before attempting the switchover operation!
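    A quick way to see which instances are currently running (assuming a RAC environment) is to query GV$INSTANCE from any available instance:

    SQL> select inst_id, instance_name, status from gv$instance;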

  4. Verify primary database is open and standby database is mounted

    Since we are working with a physical standby database, ensure the physical primary database is OPEN and the physical standby database is MOUNTED.

    The physical standby database that is planned to transition to the primary role must be mounted before any switchover operation can begin. Ideally, the physical standby database will also be actively recovering archived redo logs when the database roles are switched. If the physical standby database is open for READ-ONLY access, the switchover can still take place; however it will require additional time.
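    A simple check that can be run against each database is to query the OPEN_MODE column of V$DATABASE; the primary should report READ WRITE while the standby should report MOUNTED:

    SQL> select name, open_mode from v$database;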

  5. Place current standby database in ARCHIVELOG mode

    Place the current standby database that will become the primary database in ARCHIVELOG mode:

    SQL> alter database recover managed standby database cancel;
    SQL> shutdown immediate
    SQL> startup nomount
    SQL> alter database mount standby database;
    SQL> alter database archivelog;
    SQL> alter database recover managed standby database disconnect from session;

  6. Remove any DELAY setting for redo data application

    If you have set up any type of DELAY=n attribute for redo data application on the standby database, remove it, as shown in the sketch below.
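    The following sketch shows two ways to do this, assuming a DELAY attribute had previously been added to the remote destination on the primary:

    -- On the primary: redefine the destination without the DELAY attribute
    SQL> alter system set log_archive_dest_2='service=TESTDB_VMLINUX4 optional reopen=15' scope=both;

    -- On the standby: override any delay when restarting managed recovery
    SQL> alter database recover managed standby database nodelay disconnect from session;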


From the current primary database

  1. Verify that it is possible to perform a switchover

    From the primary database, query the SWITCHOVER_STATUS column of v$database to verify that switchover to standby is possible.

    SQL> SELECT switchover_status FROM v$database;
    
    SWITCHOVER_STATUS
    ------------------
    TO STANDBY

      In order to perform a switchover operation, all sessions to the database need to be disconnected. In Oracle9i R1 this was a manual process. With the release of Oracle9i R2 (version 9.2.0) this process can be automated with the "with session shutdown" clause that has been added to the ALTER DATABASE COMMIT TO SWITCHOVER command.

    If SWITCHOVER_STATUS returns "SESSIONS ACTIVE" then you should either disconnect all sessions manually or when performing the next step, append the "with session shutdown" clause. For example:

    SQL> alter database commit to switchover to physical standby with session shutdown;

    Note that the clause also works with the switchover to primary command.

    During normal operations it is acceptable to see the value for SWITCHOVER_STATUS on the primary to be "SESSIONS ACTIVE" or "TO STANDBY". During normal operations on the standby it is acceptable to see the values of "NOT ALLOWED" or "SESSIONS ACTIVE".
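    If you prefer to disconnect the remaining sessions manually rather than use the "with session shutdown" clause, a sketch of identifying and killing them follows (the SID and SERIAL# values are hypothetical):

    SQL> select sid, serial#, username, program from v$session
      2  where username is not null and username != 'SYS';

    SQL> alter system kill session '23,119';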

  2. Initiate switchover operation on the primary database

    To transition the current primary database to the physical standby database role, use the following SQL statement on the primary database:

    SQL> alter database commit to switchover to physical standby;
    
    Database altered.

      As noted in the previous section, all sessions to the database need to be disconnected. If the SWITCHOVER_STATUS column of v$database indicates "SESSIONS ACTIVE", you can use the "with session shutdown" clause:
    SQL> alter database commit to switchover to physical standby with session shutdown;

    After the above statement completes, the primary database is converted into a standby database.

      The current controlfile is backed up to the current SQL session trace file (found in the user_dump_dest directory) before the switchover operation commits. This makes it possible to reconstruct a current controlfile, if necessary.

  3. Shutdown and restart the former primary instance

    Shut down the former primary instance and restart it without mounting the database. After starting the instance, mount the database as a physical standby database:

    SQL> shutdown immediate
    ORA-01507: database not mounted
    
    
    ORACLE instance shut down.
    
    SQL> startup nomount
    ORACLE instance started.
    
    Total System Global Area  252777556 bytes
    Fixed Size                   451668 bytes
    Variable Size             218103808 bytes
    Database Buffers           33554432 bytes
    Redo Buffers                 667648 bytes
    
    SQL> alter database mount standby database;
    
    Database altered.
    Note that at this point in the switchover process, both databases are configured as standby databases!

  4. Defer the remote archive destination on the old primary

    Defer the remote archive destination on the old physical primary database:

    SQL> alter system set log_archive_dest_state_2 = defer;
    
    System altered. 


From the current standby database

  1. Verify that the physical standby can be converted to the new primary

    After you transition the physical primary database to the physical standby role and the switchover notification is received by the standby database in the configuration, you should verify that the switchover notification was processed by the target standby database. This is done by querying the SWITCHOVER_STATUS column of the v$database view from the old (current) physical standby database:

    SQL> SELECT switchover_status FROM v$database;
    
    SWITCHOVER_STATUS
    ------------------
    SWITCHOVER PENDING

      Note that if the status returns "SESSIONS ACTIVE" then you should append the "with session shutdown" clause to the command in the next step.

  2. Transition the physical standby database role to the primary role

    It is now possible to switch the physical standby database from the standby role to the primary role. This can be done when the standby database instance is either mounted in managed recovery mode or open for READ-ONLY access. It must be mounted in one of these modes so that the primary database switchover operation request can be coordinated.

      The ALTER DATABASE ... command used to perform the switchover (below) automatically creates online redo logs if they do not already exist. This may cause a significant increase in the time it takes for the COMMIT operation to complete. For this reason, Oracle recommends that the DBA manually add online redo logs to the target standby database when it is created. That task, however, is beyond the scope of this article and will not be addressed.

    Run the following SQL command to transition the physical standby database role to the primary role:

    SQL> alter database commit to switchover to primary;
    
    Database altered.

      If this were version 9.0.1 then you would have first needed to cancel managed recovery prior to issuing the above command. If on 9.2.0 and you started managed recovery with the "through all switchover" clause then managed recovery should also be canceled before issuing the above command.

  3. Shutdown and restart the new primary database

    To complete the transition of this standby database role to the new primary role, the instance needs to be shutdown and restarted:

    SQL> shutdown immediate
    ORA-01507: database not mounted
    
    
    ORACLE instance shut down.
    
    SQL> startup open
    ORACLE instance started.
    
    Total System Global Area  252777556 bytes
    Fixed Size                   451668 bytes
    Variable Size             218103808 bytes
    Database Buffers           33554432 bytes
    Redo Buffers                 667648 bytes
    Database mounted.
    Database opened.

      There is no need to shut down and restart any other standby databases (not involved in the switchover) in the Data Guard configuration that were online at the time of the switchover operation. These standby databases will continue to function normally after the switchover completes.


From the new physical standby database

Issue the following statement from the new physical standby database to begin managed recovery operations:
SQL> alter database recover managed standby database disconnect from session;


From the new primary database

Finally, from the new primary database, enable archiving to the new physical standby database, begin sending redo to the new physical standby database and create tempfiles for any of the temporary tablespaces:
SQL> alter system set log_archive_dest_state_2 = enable;

System altered.

SQL> alter system archive log current;

System altered.

SQL> alter tablespace temp add tempfile '/u02/oradata/TESTDB/temp01.dbf' size 50m reuse
  2  autoextend on next 100m maxsize unlimited;

Tablespace altered.



Implementing Role Failover Operations

A failover operation transitions a standby database to the primary role in response to a failure of the primary database. During a failover operation, one of the standby databases transitions to the primary role and the old primary database is rendered unable to participate in the configuration (although it could be rebuilt and added back into the configuration as a standby database!). You would typically choose the failover method only when the primary database becomes unavailable and there is no possibility of restoring it to service within a reasonable amount of time.

The amount of data loss incurred by a failover operation depends on the protection mode under which the old primary database was operating. A failover should be performed only after as much of the available redo data as possible has been propagated to the standby database once the primary database becomes unavailable.

This section uses the same two-node Data Guard configuration that was created in this article - a physical primary database (TESTDB on vmlinux3.idevelopment.info) sending redo to a physical standby database (TESTDB on vmlinux4.idevelopment.info). The Data Guard protection mode for the primary database is set to maximum performance, which means the standby database does not receive any redo data until the primary database fills its current online redo log and archives it. The data received and applied to the standby database is only as current as the last archived redo log sent from the primary database.

In this section, I will be performing a role transition where TESTDB on vmlinux3 (the current primary database) is considered unavailable and rendered unusable, while the TESTDB database on vmlinux4 (the current standby database) will transition its role from standby to primary using a failover operation. Since our primary database was configured for maximum performance, there is a good likelihood that some data loss will occur when transitioning the physical standby database to the primary role in this configuration. Also with this protection mode, any other standby databases in the configuration would need to be rebuilt.

  Note that the failover operation used in this example will result in some data loss, and any other standby databases used in the Data Guard configuration (not involved in the transition) would need to be rebuilt. With our protection mode of maximum performance, the data received and applied to the current standby database is only as current as the last redo log switch on the primary database. The configuration in this example does not use standby redo logs, which are only a requirement when configuring the higher protection modes (e.g. maximum availability and maximum protection). Without standby redo logs available, you will always encounter some data loss (depending on the changes since the latest log switch on the primary database).

  After performing a failover operation, you can optionally restore the original state of the Data Guard configuration by performing the following steps:

  1. Recreate the failed primary database as a new standby database using a copy of the new primary database. This article provides all of the steps necessary to create a new physical standby database from a primary database.

  2. Add the database to the Data Guard configuration as a new standby database.

  3. Finally, perform a switchover to transition the database to the primary role and restore the configuration to the original pre-failed state.

The following steps can be used to perform a failover operation. In general, before performing any failover operation, you should transfer as much available and unapplied redo data as possible from the primary database to the standby database to reduce the amount of data loss.

The following subsections describe the steps performed during a failover operation involving a physical standby database.


Preparing the Data Guard environment for physical database failover

  1. Identify all initialization parameters to change

    One of the first steps before attempting the failover operation is to identify all initialization parameters that must be changed to complete the role transition. Two of the more important parameters are the log_archive_dest_n and log_archive_dest_state_n parameters on the current standby database. When a failover operation occurs and the standby database being transitioned was configured for maximum protection mode, you must ensure that all of the standby databases that were also set up for maximum protection continue to receive logs from the new primary database. With the configuration we implemented in this article, most of the initialization parameters are in place for the standby database to assume the primary role; however, since our protection mode was set to maximum performance, any other standby databases in the configuration would have to be rebuilt.

    On the current physical standby database, verify the following are set:

    ...
    fal_client                     = 'TESTDB_VMLINUX3.IDEVELOPMENT.INFO'
    fal_server                     = 'TESTDB_VMLINUX4.IDEVELOPMENT.INFO'
    log_archive_dest_1             = 'location=/u02/oraarchive/TESTDB mandatory'
    log_archive_dest_2             = 'service=TESTDB_VMLINUX3 optional reopen=15'
    log_archive_dest_state_1       = ENABLE
    log_archive_dest_state_2       = DEFER
    ...

      If your Data Guard configuration contains other standby databases configured for maximum protection mode, verify that the new primary database contains a log_archive_dest_n definition for each standby database. This ensures each standby database will continue to receive redo data from the new primary database.

  2. Verify only one standby instance is up in a RAC environment

    If you are working in a Real Application Clusters (RAC) environment, verify that only one standby instance is up and active during the failover operation. When using Oracle RAC, only one standby instance can perform the failover operation. All other instances need to be shutdown before attempting the failover operation!

  3. Place physical standby database in maximum performance mode

    If the physical standby database involved in the failover operation is in maximum protection mode, it will need to be placed in maximum performance mode. This step is required because you cannot fail over to a physical standby database that is in maximum protection mode.

    Place the physical standby database involved in the failover operation to maximum performance mode by issuing the following SQL statement on the physical standby database:

    SQL> alter database set standby database to maximize performance;
    
    Database altered. 
    
    SQL> select protection_mode, protection_level from v$database;
    
    PROTECTION_MODE      PROTECTION_LEVEL
    -------------------- --------------------
    MAXIMUM PERFORMANCE  MAXIMUM PERFORMANCE

      The Data Guard configuration being used in this scenario was already in maximum performance mode and therefore does not require the above SQL statement to be run. It doesn't hurt, however, to run it to ensure the physical standby database is configured for maximum performance.

      The above ALTER DATABASE ... statement will not succeed if a primary database in maximum protection mode is still actively communicating with this standby database. Because a failover operation irreversibly removes the original primary database from the Data Guard configuration, this behavior serves to protect a primary database operating in maximum protection mode from the effects of an unintended failover operation.


Performing the role transition (to primary) from the current standby database

  1. Identify and resolve any archived redo log gaps

    Query the view v$archive_gap on the target standby database to determine whether there is a gap in the archived redo logs received from the primary database. This view contains the sequence numbers of the archived redo logs that are known to be missing for each thread. The data returned reflects the highest gap only.

    SQL> select * from v$archive_gap;
    
    no rows selected

      For more information about Log Gap Detection and Resolution, see my article "Log Gap Detection and Resolution".

      If the above query against v$archive_gap returns a record, it will display the archived redo logs missing from the standby database as illustrated in the following example:

    SQL> select * from v$archive_gap;
    
       THREAD# LOW_SEQUENCE# HIGH_SEQUENCE#
    ---------- ------------- --------------
             1            51             55

    From the output above, the physical standby database is currently missing logs from sequence 51 to sequence 55 for thread 1. If possible, locate and copy all of the identified missing archived redo logs to the target standby database. If working in a RAC environment, this would need to include missing archived redo logs from all threads. Once copied over, the missing archived redo logs will need to be registered with the target standby database:

    SQL> alter database register logfile '/u02/oraarchive/TESTDB/arch_t1_s51.dbf';
    SQL> alter database register logfile '/u02/oraarchive/TESTDB/arch_t1_s52.dbf';
    SQL> alter database register logfile '/u02/oraarchive/TESTDB/arch_t1_s53.dbf';
    SQL> alter database register logfile '/u02/oraarchive/TESTDB/arch_t1_s54.dbf';
    SQL> alter database register logfile '/u02/oraarchive/TESTDB/arch_t1_s55.dbf';
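
    After copying and registering the missing archived redo logs, it is worth verifying that the gap has been resolved. The following is only a sketch using the thread and sequence range from the example above; re-running the query against v$archive_gap should then return no rows:

    SQL> select thread#, sequence#, name
      2  from v$archived_log
      3  where thread# = 1 and sequence# between 51 and 55;

    SQL> select * from v$archive_gap;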

  2. Copy and register any other missing archived redo logs

    Determine if there are any other missing archived redo logs by querying the view v$archived_log on all other available databases in the configuration (if any others exist) to obtain the highest sequence number for each thread. For example:

    SQL> select unique thread# as "Thread", max(sequence#)
      2  over (partition by thread#) as "Last" from v$archived_log;
    
        Thread       Last
    ---------- ----------
             1         56

    If you find any archived redo logs that contain sequence numbers higher than the highest sequence number available on the target standby database, copy and register them to the target standby database. Again, this must be done for all threads if you are in a RAC environment.

    SQL> alter database register logfile '<log_file_spec_n>';
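
    To determine the highest sequence number currently available on the target standby database itself (for comparison with the output above), a similar query can be run on the target standby. This is only a sketch against the same V$ARCHIVED_LOG view:

    SQL> select thread# as "Thread", max(sequence#) as "Last"
      2  from v$archived_log
      3  group by thread#;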

      About partial archived redo logs

    It is possible to copy over and register what is known as a partial archived redo log file. A partial archived redo log file contains all of the primary database redo data received by the standby database when the primary database fails, but the archived redo log is not automatically registered in the standby database.

    When you register a partial archived redo log, it prevents the recovery of the standby redo logs (if they exist). Therefore, whether or not you have registered a partial archived redo log determines which failover command you will need to run (see the next step).

    You will know that you have registered a partial archived redo log if you receive the following message when attempting to register it:

    Register archivelog 'filespec1' was created due to a network disconnect; archivelog contents are valid but missing subsequent data

  3. Initiate the failover operation on the target physical standby database

    Since our Data Guard configuration was set up in maximum performance mode, it was not configured with any standby redo logs. Therefore, run the following SQL statement from the target physical standby database to initiate the failover operation. Keep in mind that this statement will cause (minimal) data loss since the most recent changes on the failed primary database are no longer available.

    SQL> alter database recover managed standby database finish skip standby logfile;
    
    Database altered.

    The above SQL statement performs incomplete recovery up to the last SCN contained in the latest archived redo log available at the physical standby database. The next step transitions the target standby database to the primary role.

      If your Data Guard configuration did contain standby redo logs (as required with the higher protection modes such as maximum availability and maximum protection) AND you have not registered any partial archived redo log files, then you need to stop normal managed recovery and then initiate the failover operation using the following SQL statements:

    SQL> alter database recover managed standby database cancel;
     
    Database altered.
    
    SQL> alter database recover managed standby database finish;
    
    Database altered.
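
    Whichever form of the FINISH statement is used, you can optionally confirm that the standby database is ready to assume the primary role before moving on to the next step. A minimal sketch is to check the SWITCHOVER_STATUS column of V$DATABASE, which should typically report TO PRIMARY at this point:

    SQL> select switchover_status from v$database;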

  4. Convert the physical standby database to the primary role

    After the ALTER DATABASE RECOVER MANAGED STANDBY DATABASE ... FINISH; SQL statement successfully completes, you will then need to transition the target physical standby database to the primary database role by issuing the following SQL statement:

    SQL> alter database commit to switchover to primary;
    
    Database altered.

      The above SQL statement will only succeed if the correct "ALTER DATABASE RECOVER MANAGED STANDBY DATABASE ... FINISH;" statement was issued (in the previous step). If you omitted the "...SKIP STANDBY LOGFILE" clause even though you have no standby redo log files, the "COMMIT TO SWITCHOVER" will fail with an error indicating that more media recovery is required.

    If the "COMMIT TO SWITCHOVER" fails for any reason, you have to use the "ACTIVATE" SQL statement which forces the failover operation (and may cause data loss!):

    SQL> alter database activate standby database;

    After the "COMMIT TO SWITCHOVER" successfully completes, you can no longer use this database as a standby database. Also, subsequent redo logs from the original primary database cannot be applied.

  5. Shutdown and restart the new primary database

    To complete the failover operation, you need to shut down the new primary database and restart it in read/write mode using the appropriate initialization parameter file (or SPFILE) for the primary role:

    SQL> shutdown immediate
    ORA-01507: database not mounted
    
    
    ORACLE instance shut down.
    
    SQL> startup
    ORACLE instance started.
    
    Total System Global Area  252777556 bytes
    Fixed Size                   451668 bytes
    Variable Size             218103808 bytes
    Database Buffers           33554432 bytes
    Redo Buffers                 667648 bytes
    Database mounted.
    Database opened.
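
    At this point the new primary database is open in read/write mode. If you want to confirm the role transition, a simple sketch is to query V$DATABASE, which should now report a database role of PRIMARY:

    SQL> select name, database_role, protection_mode from v$database;

    Also note that with the parameter settings used in this article, log_archive_dest_2 on the new primary still points back to the failed primary host and remains deferred (log_archive_dest_state_2 = DEFER). Once the failed primary has been rebuilt as a new standby database, that destination can be enabled (for example, with ALTER SYSTEM SET log_archive_dest_state_2=ENABLE) to resume redo shipping.
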
  6. Create tempfiles for any of the temporary tablespaces

    Finally, create tempfiles for any of the temporary tablespaces:

    SQL> alter tablespace temp add tempfile '/u02/oradata/TESTDB/temp01.dbf' size 50m reuse
      2  autoextend on next 100m maxsize unlimited;
    
    Tablespace altered.
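
    If you are not certain which temporary tablespaces are missing tempfiles, one way to identify them (a sketch only) is to compare DBA_TABLESPACES against DBA_TEMP_FILES:

    SQL> select tablespace_name from dba_tablespaces where contents = 'TEMPORARY'
      2  minus
      3  select tablespace_name from dba_temp_files;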



Further Reading

Additional information on Oracle9i Release 2 Data Guard can be found in the Oracle9i Data Guard Concepts and Administration guide. This guide is available from the Oracle Documentation Library website at the following address: http://download-east.oracle.com/docs/cd/B10501_01/server.920/a96653.pdf.



About the Author

Jeffrey Hunter is an Oracle Certified Professional, Java Development Certified Professional, Author, and an Oracle ACE. Jeff currently works as a Senior Database Administrator for The DBA Zone, Inc. located in Pittsburgh, Pennsylvania. His work includes advanced performance tuning, Java and PL/SQL programming, developing high availability solutions, capacity planning, database security, and physical / logical database design in a UNIX / Linux server environment. Jeff's other interests include mathematical encryption theory, tutoring advanced mathematics, programming language processors (compilers and interpreters) in Java and C, LDAP, writing web-based database administration tools, and of course Linux. He has been a Sr. Database Administrator and Software Engineer for over 20 years and maintains his own website at: http://www.iDevelopment.info. Jeff graduated from Stanislaus State University in Turlock, California, with a Bachelor's degree in Computer Science and Mathematics.



Copyright (c) 1998-2017 Jeffrey M. Hunter. All rights reserved.

All articles, scripts and material located at the Internet address of http://www.idevelopment.info are the copyright of Jeffrey M. Hunter and are protected under copyright laws of the United States. This document may not be hosted on any other site without my express, prior, written permission. Application to host any of the material elsewhere can be made by contacting me at jhunter@idevelopment.info.

I have made every effort and taken great care in making sure that the material included on my web site is technically accurate, but I disclaim any and all responsibility for any loss, damage or destruction of data or any other property which may arise from relying on it. I will in no case be liable for any monetary damages arising from such loss, damage or destruction.

Last modified on Thursday, 17-Apr-2014 02:52:46 EDT