DBA Tips Archive for Oracle

  


Add a Node to an Existing Oracle RAC 11g R2 Cluster on Linux - (RHEL 5)

by Jeff Hunter, Sr. Database Administrator


Introduction

As your organization grows, so too does your need for more application and database resources to support the company's IT systems. Oracle RAC 11g provides a scalable framework that allows DBAs to effortlessly extend the database tier to support this increased demand. As the number of users and transactions increases, additional Oracle instances can be added to the Oracle database cluster to distribute the extra load.

This document is an extension to my article "Building an Inexpensive Oracle RAC 11g R2 on Linux - (RHEL 5)". Contained in this new article are the steps required to add a single node to an already running and configured two-node Oracle RAC 11g Release 2 (11.2.0.3) for Linux x86_64 environment on the CentOS 5 Linux platform. All shared disk storage for Oracle RAC is based on iSCSI using Openfiler running on a separate node (also known as the Network Storage Server). Although this article was written and tested on CentOS 5 Linux, it should work unchanged with Red Hat Enterprise Linux 5 or Oracle Linux 5.

To add nodes to an existing Oracle RAC, Oracle Corporation recommends using the Oracle cloning procedures described in the Oracle Universal Installer and OPatch User's Guide. This article, however, uses manual procedures to add nodes and instances to the existing Oracle RAC. The manual method described in this guide involves using the addNode.sh script to extend the Oracle Grid Infrastructure home and then the Oracle Database home to the new node, and finally extending the cluster database by adding a new instance on the new node. In other words, you extend the software and the new instance onto the new Oracle RAC node in the same order in which you installed the Grid Infrastructure and Oracle Database software components on the existing RAC.

This article assumes the following:

Oracle Documentation

While this guide provides detailed instructions for successfully extending an Oracle RAC 11g system, it is by no means a substitute for the official Oracle documentation (see list below). In addition to this guide, users should also consult the following Oracle documents to gain a full understanding of alternative configuration options, installation, and administration with Oracle RAC 11g. Oracle's official documentation site is docs.oracle.com.

Example Configuration

The example configuration used in this guide stores all physical database files (data, online redo logs, control files, archived redo logs) on ASM in an ASM disk group named +RACDB_DATA while the Fast Recovery Area is created in a separate ASM disk group named +FRA.

The new three-node Oracle RAC and the network storage server will be configured as described in the table below after adding the new Oracle RAC node (racnode3).

Oracle RAC / Openfiler Nodes
Node Name    Instance Name   Database Name             Processor                            RAM   Operating System
racnode1     racdb1          racdb.idevelopment.info   1 x Dual Core Intel Xeon, 3.00 GHz   4GB   CentOS 5.5 - (x86_64)
racnode2     racdb2          racdb.idevelopment.info   1 x Dual Core Intel Xeon, 3.00 GHz   4GB   CentOS 5.5 - (x86_64)
racnode3     racdb3          racdb.idevelopment.info   1 x Dual Core Intel Xeon, 3.00 GHz   4GB   CentOS 5.5 - (x86_64)
openfiler1                                             2 x Intel Xeon, 3.00 GHz             6GB   Openfiler 2.3 - (x86_64)

Network Configuration
Node Name    Public IP       Private IP      Virtual IP      SCAN Name              SCAN IP
racnode1     192.168.1.151   192.168.2.151   192.168.1.251   racnode-cluster-scan   192.168.1.187
racnode2     192.168.1.152   192.168.2.152   192.168.1.252                          192.168.1.188
racnode3     192.168.1.153   192.168.2.153   192.168.1.253                          192.168.1.189
openfiler1   192.168.1.195   192.168.2.195

Oracle Software Components
Software Component    OS User   Primary Group   Supplementary Groups        Home Directory   Oracle Base / Oracle Home
Grid Infrastructure   grid      oinstall        asmadmin, asmdba, asmoper   /home/grid       /u01/app/grid
                                                                                             /u01/app/11.2.0/grid
Oracle RAC            oracle    oinstall        dba, oper, asmdba           /home/oracle     /u01/app/oracle
                                                                                             /u01/app/oracle/product/11.2.0/dbhome_1

Storage Components
Storage Component         File System   Volume Size   ASM Disk Group Name   ASM Redundancy   Openfiler Volume Name
OCR/Voting Disk           ASM           2GB           +CRS                  External         racdb-crs1
Database Files            ASM           32GB          +RACDB_DATA           External         racdb-data1
ASM Cluster File System   ASM           32GB          +DOCS                 External         racdb-acfsdocs1
Fast Recovery Area        ASM           32GB          +FRA                  External         racdb-fra1
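Before extending the cluster, it can be useful to confirm the existing ASM disk groups from one of the current nodes. For example, as the grid user on racnode1 (this assumes the grid user's login environment already sets the Grid home and ORACLE_SID=+ASM1 as in the original guide; the output will simply reflect the disk groups listed above):


[grid@racnode1 ~]$ asmcmd lsdg      # summary of the +CRS, +RACDB_DATA, +DOCS and +FRA disk groups
[grid@racnode1 ~]$ asmcmd lsdsk     # ASM disks backing those disk groups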

The following is a conceptual look at what the environment will look like after adding the third Oracle RAC node (racnode3) to the cluster.

Figure 1: Adding racnode3 to the current Oracle RAC 11g Release 2 System

This article is only designed to work as documented with absolutely no substitutions. The only exception here is the choice of vendor hardware (i.e. machines, networking equipment, and internal / external hard drives). Ensure that the hardware you purchase from the vendor is supported on Red Hat Enterprise Linux 5 and Openfiler 2.3 (Final Release).

Hardware and Costs

The hardware used to build the third node in our example Oracle RAC 11g environment consists of a Linux server and components that can be purchased at many local computer stores or over the Internet.

Oracle RAC Node 3 - (racnode3)
Dell PowerEdge T100

Dual Core Intel(R) Xeon(R) E3110, 3.0 GHz, 6MB Cache, 1333MHz
4GB, DDR2, 800MHz
160GB 7.2K RPM SATA 3Gbps Hard Drive
Integrated Graphics - (ATI ES1000)
Integrated Gigabit Ethernet - (Broadcom NetXtreme II 5722)
16x DVD Drive
No Keyboard, Monitor, or Mouse - (Connected to KVM Switch)

US$500
1 x Ethernet LAN Card

Used for RAC interconnect and Openfiler networked storage.

Each Linux server for Oracle RAC should contain at least two NIC adapters. The Dell PowerEdge T100 includes an embedded Broadcom NetXtreme II 5722 Gigabit Ethernet NIC that will be used to connect to the public network. A second NIC adapter will be used for the private network (RAC interconnect and Openfiler networked storage). Select the appropriate NIC adapter that is compatible with the maximum data transmission speed of the network switch to be used for the private network. For the purpose of this article, I used a Gigabit Ethernet switch (and a 1Gb Ethernet card) for the private network.

US$90
Miscellaneous Components
2 x Network Cables

Category 6 patch cable - (Connect racnode3 to public network)
Category 6 patch cable - (Connect racnode3 to interconnect Ethernet switch)

US$10
US$10
Total US$630

Install and Configure the Linux Operating System on the New Node

Install the Linux Operating System

Install the Linux operating system on the new Oracle RAC node using the same procedures documented in the original guide describing the two-node Oracle RAC 11g Release 2 configuration.

When configuring the machine name and networking, ensure to follow the same conventions used in the existing Oracle RAC system. For example, the following describes the network configuration used for the new node.

Oracle RAC Node Network Configuration

(racnode3)
eth0
Enable IPv4 support ON
Dynamic IP configuration (DHCP) - (select Manual configuration) OFF
IPv4 Address 192.168.1.153
Prefix (Netmask) 255.255.255.0
Enable IPv6 support OFF
eth1
Enable IPv4 support ON
Dynamic IP configuration (DHCP) - (select Manual configuration) OFF
IPv4 Address 192.168.2.153
Prefix (Netmask) 255.255.255.0
Enable IPv6 support OFF

Continue by manually setting your hostname. I used racnode3 for the new Oracle RAC node. Finish this dialog off by supplying your gateway and DNS servers.

Install Required Linux Packages for Oracle RAC

Install all required Linux packages on the new Oracle RAC node using the same procedures documented in the original guide describing the two-node Oracle RAC 11g Release 2 configuration.

Network Configuration

The current Oracle RAC is not using Grid Naming Service (GNS) to assign IP addresses. The existing cluster uses the traditional method of manually assigning static IP addresses in Domain Name Service (DNS). I often refer to this traditional method of manually assigning IP addresses as the "DNS method" because all IP addresses should be resolved using DNS.

When using the DNS method for assigning IP addresses, Oracle recommends that all static IP addresses be manually configured in DNS for the new Oracle RAC node before extending the Oracle Grid Infrastructure software. This includes the public IP address for the node, the private interconnect IP address, and the virtual IP address (VIP).

The following table displays the network configuration that will be used when adding a third node to the existing Oracle RAC. Note that every IP address will be registered in DNS and the hosts file for each Oracle RAC node with the exception of the SCAN virtual IP. The SCAN virtual IP will only be registered in DNS.

New Three-Node Oracle RAC Network Configuration
Identity         Name                   Type      IP Address      Resolved By
Node 1 Public    racnode1               Public    192.168.1.151   DNS and hosts file
Node 1 Private   racnode1-priv          Private   192.168.2.151   DNS and hosts file
Node 1 VIP       racnode1-vip           Virtual   192.168.1.251   DNS and hosts file
Node 2 Public    racnode2               Public    192.168.1.152   DNS and hosts file
Node 2 Private   racnode2-priv          Private   192.168.2.152   DNS and hosts file
Node 2 VIP       racnode2-vip           Virtual   192.168.1.252   DNS and hosts file
Node 3 Public    racnode3               Public    192.168.1.153   DNS and hosts file
Node 3 Private   racnode3-priv          Private   192.168.2.153   DNS and hosts file
Node 3 VIP       racnode3-vip           Virtual   192.168.1.253   DNS and hosts file
SCAN VIP 1       racnode-cluster-scan   Virtual   192.168.1.187   DNS
SCAN VIP 2       racnode-cluster-scan   Virtual   192.168.1.188   DNS
SCAN VIP 3       racnode-cluster-scan   Virtual   192.168.1.189   DNS

Update DNS

Update DNS by adding an entry for the new Oracle RAC node in the forward and reverse zone definition files.
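For example, in a BIND name server the new entries would look similar to the following (the zone file names shown are placeholders; use the zone files already defined on your DNS server and remember to increment each zone's serial number):


; forward zone - (e.g. idevelopment.info.zone) - illustrative excerpt
racnode3        IN  A    192.168.1.153
racnode3-priv   IN  A    192.168.2.153
racnode3-vip    IN  A    192.168.1.253

; reverse zone for the public network - (e.g. 1.168.192.in-addr.arpa.zone)
153             IN  PTR  racnode3.idevelopment.info.
253             IN  PTR  racnode3-vip.idevelopment.info.

; reverse zone for the private network - (e.g. 2.168.192.in-addr.arpa.zone)
153             IN  PTR  racnode3-priv.idevelopment.info.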

Next, configure the new node for name resolution by editing the "/etc/resolv.conf" file so that it contains the IP address of your DNS name server and the domain you have configured.


nameserver 192.168.1.195
search idevelopment.info

After modifying the /etc/resolv.conf file on the new node, verify that DNS is functioning correctly by testing forward and reverse lookups using the nslookup command-line utility. Perform tests similar to the following from each node to all other nodes in your cluster.


[root@racnode1 ~]# nslookup racnode3.idevelopment.info
Server:         192.168.1.195
Address:        192.168.1.195#53

Name:   racnode3.idevelopment.info
Address: 192.168.1.153


[root@racnode1 ~]# nslookup racnode3
Server:         192.168.1.195
Address:        192.168.1.195#53

Name:   racnode3.idevelopment.info
Address: 192.168.1.153


[root@racnode1 ~]# nslookup 192.168.1.153
Server:         192.168.1.195
Address:        192.168.1.195#53

153.1.168.192.in-addr.arpa      name = racnode3.idevelopment.info.


[root@racnode1 ~]# nslookup racnode-cluster-scan
Server:         192.168.1.195
Address:        192.168.1.195#53

Name:   racnode-cluster-scan.idevelopment.info
Address: 192.168.1.187
Name:   racnode-cluster-scan.idevelopment.info
Address: 192.168.1.188
Name:   racnode-cluster-scan.idevelopment.info
Address: 192.168.1.189


[root@racnode1 ~]# nslookup 192.168.1.187
Server:         192.168.1.195
Address:        192.168.1.195#53

187.1.168.192.in-addr.arpa      name = racnode-cluster-scan.idevelopment.info.

Update /etc/hosts

After configuring DNS, update the /etc/hosts file on all Oracle RAC nodes to include entries for the new node being added and to remove any entries that have to do with IPv6.


# Do not remove the following line, or various programs
# that require network functionality will fail.
127.0.0.1        localhost.localdomain localhost

# Public Network - (eth0)
192.168.1.151    racnode1.idevelopment.info        racnode1
192.168.1.152    racnode2.idevelopment.info        racnode2
192.168.1.153    racnode3.idevelopment.info        racnode3

# Private Interconnect - (eth1)
192.168.2.151    racnode1-priv.idevelopment.info   racnode1-priv
192.168.2.152    racnode2-priv.idevelopment.info   racnode2-priv
192.168.2.153    racnode3-priv.idevelopment.info   racnode3-priv

# Public Virtual IP (VIP) addresses - (eth0:1)
192.168.1.251    racnode1-vip.idevelopment.info    racnode1-vip
192.168.1.252    racnode2-vip.idevelopment.info    racnode2-vip
192.168.1.253    racnode3-vip.idevelopment.info    racnode3-vip

# Private Storage Network for Openfiler - (eth1)
192.168.1.195    openfiler1.idevelopment.info      openfiler1
192.168.2.195    openfiler1-priv.idevelopment.info openfiler1-priv

Verify the network configuration by using the ping command to test the connection from each node in the cluster to the new Oracle RAC node being added.


# ping -c 3 racnode1.idevelopment.info
# ping -c 3 racnode2.idevelopment.info
# ping -c 3 racnode3.idevelopment.info
# ping -c 3 racnode1-priv.idevelopment.info
# ping -c 3 racnode2-priv.idevelopment.info
# ping -c 3 racnode3-priv.idevelopment.info
# ping -c 3 openfiler1.idevelopment.info
# ping -c 3 openfiler1-priv.idevelopment.info
# ping -c 3 racnode1
# ping -c 3 racnode2
# ping -c 3 racnode3
# ping -c 3 racnode1-priv
# ping -c 3 racnode2-priv
# ping -c 3 racnode3-priv
# ping -c 3 openfiler1
# ping -c 3 openfiler1-priv

Cluster Time Synchronization Service

The current Oracle RAC uses Oracle Cluster Time Synchronization Service (CTSS) to synchronize time across the cluster instead of NTP, which means that the NTP service will need to be de-configured and de-installed on the new Oracle RAC node before installing Oracle Grid Infrastructure.


[root@racnode3 ~]# /sbin/service ntpd stop
[root@racnode3 ~]# chkconfig ntpd off
[root@racnode3 ~]# mv /etc/ntp.conf /etc/ntp.conf.original
[root@racnode3 ~]# rm /var/run/ntpd.pid
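CTSS on the existing cluster can be confirmed from any active node; once Grid Infrastructure is extended to racnode3 later in this guide, the same check should report active mode there as well. For example:


[grid@racnode1 ~]$ crsctl check ctss     # should report that CTSS is running in active mode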

Create Job Role Separation Operating System Privileges Groups, Users, and Directories

Perform the following user, group, directory configuration, and setting shell limit tasks for the grid and oracle users on the new Oracle RAC node.

The Oracle Grid Infrastructure and Oracle Database software is installed using the optional Job Role Separation configuration. One OS user is created to own each Oracle software product — "grid" for the Oracle Grid Infrastructure owner and "oracle" for the Oracle Database software. Both Oracle software owners must have the Oracle Inventory group (oinstall) as their primary group, so that each Oracle software installation owner can write to the central inventory (oraInventory), and so that OCR and Oracle Clusterware resource permissions are set correctly.

It is important that the UID and GID of the grid and oracle user accounts on the new Oracle RAC node be identical to that of the existing RAC nodes.
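For example, check the values on one of the existing nodes first and use the same numeric IDs when creating the accounts on racnode3.


[root@racnode1 ~]# id grid
uid=1100(grid) gid=1000(oinstall) groups=1000(oinstall),1200(asmadmin),1201(asmdba),1202(asmoper)

[root@racnode1 ~]# id oracle
uid=1101(oracle) gid=1000(oinstall) groups=1000(oinstall),1201(asmdba),1300(dba),1301(oper)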

grid

Start by creating the recommended OS groups and user for Grid Infrastructure on the new Oracle RAC node.


[root@racnode3 ~]# groupadd -g 1000 oinstall
[root@racnode3 ~]# groupadd -g 1200 asmadmin
[root@racnode3 ~]# groupadd -g 1201 asmdba
[root@racnode3 ~]# groupadd -g 1202 asmoper
[root@racnode3 ~]# useradd -m -u 1100 -g oinstall -G asmadmin,asmdba,asmoper -d /home/grid -s /bin/bash -c "Grid Infrastructure Owner" grid

[root@racnode3 ~]# id grid
uid=1100(grid) gid=1000(oinstall) groups=1000(oinstall),1200(asmadmin),1201(asmdba),1202(asmoper)

Set the password for the grid account.


[root@racnode3 ~]# passwd grid
Changing password for user grid.
New UNIX password: xxxxxxxxxxx
Retype new UNIX password: xxxxxxxxxxx
passwd: all authentication tokens updated successfully.

Log in to the new Oracle RAC node as the grid user account and create a .bash_profile. When setting the Oracle environment variables in the login script for the new node, make certain to assign a unique Oracle SID for ASM (i.e. ORACLE_SID=+ASM3).


[root@racnode3 ~]# su - grid

.bash_profile (grid)
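The full login scripts from the original two-node guide apply here unchanged except for the SID. As a minimal sketch (assuming the same OFA paths used throughout this article), the grid user's .bash_profile on racnode3 would contain something like the following; the oracle user's profile is analogous with ORACLE_SID=racdb3, ORACLE_BASE=/u01/app/oracle and ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_1.


# .bash_profile (grid user on racnode3) - minimal sketch
if [ -f ~/.bashrc ]; then
    . ~/.bashrc
fi

export ORACLE_SID=+ASM3                  # unique ASM SID for this node
export ORACLE_BASE=/u01/app/grid
export ORACLE_HOME=/u01/app/11.2.0/grid
export GRID_HOME=$ORACLE_HOME
export PATH=$ORACLE_HOME/bin:$PATH
umask 022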

oracle

Next, create the recommended OS groups and user for the Oracle Database software on the new Oracle RAC node.


[root@racnode3 ~]# groupadd -g 1300 dba
[root@racnode3 ~]# groupadd -g 1301 oper
[root@racnode3 ~]# useradd -m -u 1101 -g oinstall -G dba,oper,asmdba -d /home/oracle -s /bin/bash -c "Oracle Software Owner" oracle

[root@racnode3 ~]# id oracle
uid=1101(oracle) gid=1000(oinstall) groups=1000(oinstall),1201(asmdba),1300(dba),1301(oper)

Set the password for the oracle account.


[root@racnode3 ~]# passwd oracle
Changing password for user oracle.
New UNIX password: xxxxxxxxxxx
Retype new UNIX password: xxxxxxxxxxx
passwd: all authentication tokens updated successfully.

Log in to the new Oracle RAC node as the oracle user account and create a .bash_profile. When setting the Oracle environment variables in the login script for the new node, make certain to assign a unique Oracle SID for the instance (i.e. ORACLE_SID=racdb3).


[root@racnode3 ~]# su - oracle

.bash_profile (oracle)

Verify that the user nobody exists.

  1. To determine if the user exists, enter the following command.


    [root@racnode3 ~]# id nobody
    uid=99(nobody) gid=99(nobody) groups=99(nobody)

    If this command displays information about the nobody user, then you do not have to create that user.

  2. If the user nobody does not exist, then enter the following command to create it.


    [root@racnode3 ~]# /usr/sbin/useradd nobody

Configure an Oracle base path compliant with an Optimal Flexible Architecture (OFA) structure and correct permissions.


[root@racnode3 ~]# mkdir -p /u01/app/grid
[root@racnode3 ~]# mkdir -p /u01/app/11.2.0/grid
[root@racnode3 ~]# chown -R grid:oinstall /u01
[root@racnode3 ~]# mkdir -p /u01/app/oracle
[root@racnode3 ~]# chown oracle:oinstall /u01/app/oracle
[root@racnode3 ~]# chmod -R 775 /u01

To improve the performance of the software on Linux systems, you must increase the following resource limits for the Oracle software owner users (grid, oracle).


[root@racnode3 ~]# cat >> /etc/security/limits.conf <<EOF
grid soft nproc 2047
grid hard nproc 16384
grid soft nofile 1024
grid hard nofile 65536
oracle soft nproc 2047
oracle hard nproc 16384
oracle soft nofile 1024
oracle hard nofile 65536
EOF


[root@racnode3 ~]# cat >> /etc/pam.d/login <<EOF
session required pam_limits.so
EOF


[root@racnode3 ~]# cat >> /etc/profile <<EOF
if [ \$USER = "oracle" ] || [ \$USER = "grid" ]; then
    if [ \$SHELL = "/bin/ksh" ]; then
        ulimit -p 16384
        ulimit -n 65536
    else
        ulimit -u 16384 -n 65536
    fi
    umask 022
fi
EOF

Configure the New Node for Remote Access using SSH

During the creation of the existing Oracle RAC, the installation of Oracle Grid Infrastructure and the Oracle Database software was performed from only one node in the RAC cluster, namely from racnode1 as the grid and oracle user accounts, respectively. The Oracle Universal Installer (OUI) on that node then used the ssh and scp commands to run remote commands on, and copy the Oracle software to, all other nodes within the RAC cluster. The grid and oracle user accounts on the node running the OUI (runInstaller) had to be trusted by all other nodes in the RAC cluster. This meant that the grid and oracle user accounts had to be able to run the secure shell commands (ssh or scp) on the Linux server executing the OUI (racnode1) against all other Linux servers in the cluster without being prompted for a password. The same security requirements hold true when extending Oracle RAC.

User equivalence will be configured so that the Oracle Grid Infrastructure and Oracle Database software can be securely copied from racnode1 to the new Oracle RAC node (racnode3) using ssh and scp without being prompted for a password. Setting up SSH user equivalence has been greatly simplified in Oracle 11g with the runSSHSetup.sh script, which can be found in the GRID_HOME/oui/bin and ORACLE_HOME/oui/bin directories. This script sets up SSH equivalence between the local host and the specified remote hosts so that you can execute commands on the remote hosts without providing any password or confirmation.

From the machine you will be using to extend the Oracle RAC (racnode1), run the runSSHSetup.sh script as grid and then as oracle to set up SSH user equivalence.

grid


[grid@racnode1 ~]$ $GRID_HOME/oui/bin/runSSHSetup.sh -user grid -hosts "racnode2 racnode3" -advanced -exverify
This script will setup SSH Equivalence from the host 'racnode1.idevelopment.info' to specified remote hosts.

ORACLE_HOME = /u01/app/11.2.0/grid
JAR_LOC = /u01/app/11.2.0/grid/oui/jlib
SSH_LOC = /u01/app/11.2.0/grid/oui/jlib
OUI_LOC = /u01/app/11.2.0/grid/oui
JAVA_HOME = /u01/app/11.2.0/grid/jdk

Checking if the remote hosts are reachable.
ClusterLogger - log file location: /home/grid/Logs/remoteInterfaces2012-04-18_05-00-26-PM.log
Failed Nodes : racnode2 racnode3
Remote host reachability check succeeded.
All hosts are reachable. Proceeding further...

NOTE : As part of the setup procedure, this script will use ssh and scp to copy files between the local host and the remote hosts.
You may be prompted for the password during the execution of the script.
AS PER SSH REQUIREMENTS, THIS SCRIPT WILL SECURE THE USER HOME DIRECTORY AND THE .ssh DIRECTORY BY REVOKING GROUP AND WORLD WRITE PRIVILEDGES TO THESE directories.
Do you want to continue and let the script make the above mentioned changes (yes/no)?
yes

If The files containing the client public and private keys already exist on the local host. The current private key may or may not have a passphrase associated with it. In case you remember the passphrase and do not want to re-run ssh-keygen, type 'no'. If you type 'yes', the script will remove the old private/public key files and, any previous SSH user setups would be reset.
Enter 'yes', 'no'
no
Enter the password: xxxxx

ClusterLogger - log file location: /home/grid/Logs/remoteInterfaces2012-04-18_05-00-33-PM.log
Logfile Location : /tmp/SSHSetup2012-04-18_05-00-33-PM
Checking binaries on remote hosts...
Doing SSHSetup...
Please be patient, this operation might take sometime...Dont press Ctrl+C...
Validating remote binaries..
Remote binaries check succeeded
Local Platform:- Linux
------------------------------------------------------------------------
Verifying SSH setup
===================
The script will now run the date command on the remote nodes using ssh to verify if ssh is setup correctly.
IF THE SETUP IS CORRECTLY SETUP, THERE SHOULD BE NO OUTPUT OTHER THAN THE DATE AND SSH SHOULD NOT ASK FOR PASSWORDS.
If you see any output other than date or are prompted for the password, ssh is not setup correctly and you will need to resolve the issue and set up ssh again.
The possible causes for failure could be:
1. The server settings in /etc/ssh/sshd_config file do not allow ssh for user grid.
2. The server may have disabled public key based authentication.
3. The client public key on the server may be outdated.
4. ~grid or ~grid/.ssh on the remote host may not be owned by grid.
5. User may not have passed -shared option for shared remote users or may be passing the -shared option for non-shared remote users.
6. If there is output in addition to the date, but no password is asked, it may be a security alert shown as part of company policy.
   Append the additional text to the <OMS HOME>/sysman/prov/resources/ignoreMessages.txt file.
------------------------------------------------------------------------
--racnode2:--
Running /usr/bin/ssh -x -l grid racnode2 date to verify SSH connectivity has been setup from local host to racnode2.
IF YOU SEE ANY OTHER OUTPUT BESIDES THE OUTPUT OF THE DATE COMMAND OR IF YOU ARE PROMPTED FOR A PASSWORD HERE, IT MEANS SSH SETUP HAS NOT BEEN SUCCESSFUL.
Please note that being prompted for a passphrase may be OK but being prompted for a password is ERROR.
Wed Apr 18 17:01:13 EDT 2012
------------------------------------------------------------------------
--racnode3:--
Running /usr/bin/ssh -x -l grid racnode3 date to verify SSH connectivity has been setup from local host to racnode3.
IF YOU SEE ANY OTHER OUTPUT BESIDES THE OUTPUT OF THE DATE COMMAND OR IF YOU ARE PROMPTED FOR A PASSWORD HERE, IT MEANS SSH SETUP HAS NOT BEEN SUCCESSFUL.
Please note that being prompted for a passphrase may be OK but being prompted for a password is ERROR.
Wed Apr 18 17:00:17 EDT 2012
------------------------------------------------------------------------
------------------------------------------------------------------------
Verifying SSH connectivity has been setup from racnode2 to racnode2
------------------------------------------------------------------------
IF YOU SEE ANY OTHER OUTPUT BESIDES THE OUTPUT OF THE DATE COMMAND OR IF YOU ARE PROMPTED FOR A PASSWORD HERE, IT MEANS SSH SETUP HAS NOT BEEN SUCCESSFUL.
Wed Apr 18 17:01:14 EDT 2012
------------------------------------------------------------------------
------------------------------------------------------------------------
Verifying SSH connectivity has been setup from racnode2 to racnode3
------------------------------------------------------------------------
IF YOU SEE ANY OTHER OUTPUT BESIDES THE OUTPUT OF THE DATE COMMAND OR IF YOU ARE PROMPTED FOR A PASSWORD HERE, IT MEANS SSH SETUP HAS NOT BEEN SUCCESSFUL.
Wed Apr 18 17:00:17 EDT 2012
------------------------------------------------------------------------
-Verification from racnode2 complete-
------------------------------------------------------------------------
Verifying SSH connectivity has been setup from racnode3 to racnode2
------------------------------------------------------------------------
IF YOU SEE ANY OTHER OUTPUT BESIDES THE OUTPUT OF THE DATE COMMAND OR IF YOU ARE PROMPTED FOR A PASSWORD HERE, IT MEANS SSH SETUP HAS NOT BEEN SUCCESSFUL.
Wed Apr 18 17:01:14 EDT 2012
------------------------------------------------------------------------
------------------------------------------------------------------------
Verifying SSH connectivity has been setup from racnode3 to racnode3
------------------------------------------------------------------------
IF YOU SEE ANY OTHER OUTPUT BESIDES THE OUTPUT OF THE DATE COMMAND OR IF YOU ARE PROMPTED FOR A PASSWORD HERE, IT MEANS SSH SETUP HAS NOT BEEN SUCCESSFUL.
Wed Apr 18 17:00:18 EDT 2012
------------------------------------------------------------------------
-Verification from racnode3 complete-
SSH verification complete.

oracle


[oracle@racnode1 ~]$ $ORACLE_HOME/oui/bin/runSSHSetup.sh -user oracle -hosts "racnode2 racnode3" -advanced -exverify
This script will setup SSH Equivalence from the host 'racnode1.idevelopment.info' to specified remote hosts.

ORACLE_HOME = /u01/app/oracle/product/11.2.0/dbhome_1
JAR_LOC = /u01/app/oracle/product/11.2.0/dbhome_1/oui/jlib
SSH_LOC = /u01/app/oracle/product/11.2.0/dbhome_1/oui/jlib
OUI_LOC = /u01/app/oracle/product/11.2.0/dbhome_1/oui
JAVA_HOME = /u01/app/oracle/product/11.2.0/dbhome_1/jdk

Checking if the remote hosts are reachable.
ClusterLogger - log file location: /home/oracle/Logs/remoteInterfaces2012-04-18_04-52-53-PM.log
Failed Nodes : racnode2 racnode3
Remote host reachability check succeeded.
All hosts are reachable. Proceeding further...

NOTE : As part of the setup procedure, this script will use ssh and scp to copy files between the local host and the remote hosts.
You may be prompted for the password during the execution of the script.
AS PER SSH REQUIREMENTS, THIS SCRIPT WILL SECURE THE USER HOME DIRECTORY AND THE .ssh DIRECTORY BY REVOKING GROUP AND WORLD WRITE PRIVILEDGES TO THESE directories.
Do you want to continue and let the script make the above mentioned changes (yes/no)?
yes

If The files containing the client public and private keys already exist on the local host. The current private key may or may not have a passphrase associated with it. In case you remember the passphrase and do not want to re-run ssh-keygen, type 'no'. If you type 'yes', the script will remove the old private/public key files and, any previous SSH user setups would be reset.
Enter 'yes', 'no'
no
Enter the password: xxxxx

ClusterLogger - log file location: /home/oracle/Logs/remoteInterfaces2012-04-18_04-53-02-PM.log
Logfile Location : /tmp/SSHSetup2012-04-18_04-53-02-PM
Checking binaries on remote hosts...
Doing SSHSetup...
Please be patient, this operation might take sometime...Dont press Ctrl+C...
Validating remote binaries..
Remote binaries check succeeded
Local Platform:- Linux
------------------------------------------------------------------------
Verifying SSH setup
===================
The script will now run the date command on the remote nodes using ssh to verify if ssh is setup correctly.
IF THE SETUP IS CORRECTLY SETUP, THERE SHOULD BE NO OUTPUT OTHER THAN THE DATE AND SSH SHOULD NOT ASK FOR PASSWORDS.
If you see any output other than date or are prompted for the password, ssh is not setup correctly and you will need to resolve the issue and set up ssh again.
The possible causes for failure could be:
1. The server settings in /etc/ssh/sshd_config file do not allow ssh for user oracle.
2. The server may have disabled public key based authentication.
3. The client public key on the server may be outdated.
4. ~oracle or ~oracle/.ssh on the remote host may not be owned by oracle.
5. User may not have passed -shared option for shared remote users or may be passing the -shared option for non-shared remote users.
6. If there is output in addition to the date, but no password is asked, it may be a security alert shown as part of company policy.
   Append the additional text to the <OMS HOME>/sysman/prov/resources/ignoreMessages.txt file.
------------------------------------------------------------------------
--racnode2:--
Running /usr/bin/ssh -x -l oracle racnode2 date to verify SSH connectivity has been setup from local host to racnode2.
IF YOU SEE ANY OTHER OUTPUT BESIDES THE OUTPUT OF THE DATE COMMAND OR IF YOU ARE PROMPTED FOR A PASSWORD HERE, IT MEANS SSH SETUP HAS NOT BEEN SUCCESSFUL.
Please note that being prompted for a passphrase may be OK but being prompted for a password is ERROR.
Wed Apr 18 16:53:40 EDT 2012
------------------------------------------------------------------------
--racnode3:--
Running /usr/bin/ssh -x -l oracle racnode3 date to verify SSH connectivity has been setup from local host to racnode3.
IF YOU SEE ANY OTHER OUTPUT BESIDES THE OUTPUT OF THE DATE COMMAND OR IF YOU ARE PROMPTED FOR A PASSWORD HERE, IT MEANS SSH SETUP HAS NOT BEEN SUCCESSFUL.
Please note that being prompted for a passphrase may be OK but being prompted for a password is ERROR.
Wed Apr 18 16:52:44 EDT 2012
------------------------------------------------------------------------
------------------------------------------------------------------------
Verifying SSH connectivity has been setup from racnode2 to racnode2
------------------------------------------------------------------------
IF YOU SEE ANY OTHER OUTPUT BESIDES THE OUTPUT OF THE DATE COMMAND OR IF YOU ARE PROMPTED FOR A PASSWORD HERE, IT MEANS SSH SETUP HAS NOT BEEN SUCCESSFUL.
Wed Apr 18 16:53:41 EDT 2012
------------------------------------------------------------------------
------------------------------------------------------------------------
Verifying SSH connectivity has been setup from racnode2 to racnode3
------------------------------------------------------------------------
IF YOU SEE ANY OTHER OUTPUT BESIDES THE OUTPUT OF THE DATE COMMAND OR IF YOU ARE PROMPTED FOR A PASSWORD HERE, IT MEANS SSH SETUP HAS NOT BEEN SUCCESSFUL.
Wed Apr 18 16:52:45 EDT 2012
------------------------------------------------------------------------
-Verification from racnode2 complete-
------------------------------------------------------------------------
Verifying SSH connectivity has been setup from racnode3 to racnode2
------------------------------------------------------------------------
IF YOU SEE ANY OTHER OUTPUT BESIDES THE OUTPUT OF THE DATE COMMAND OR IF YOU ARE PROMPTED FOR A PASSWORD HERE, IT MEANS SSH SETUP HAS NOT BEEN SUCCESSFUL.
Wed Apr 18 16:53:41 EDT 2012
------------------------------------------------------------------------
------------------------------------------------------------------------
Verifying SSH connectivity has been setup from racnode3 to racnode3
------------------------------------------------------------------------
IF YOU SEE ANY OTHER OUTPUT BESIDES THE OUTPUT OF THE DATE COMMAND OR IF YOU ARE PROMPTED FOR A PASSWORD HERE, IT MEANS SSH SETUP HAS NOT BEEN SUCCESSFUL.
Wed Apr 18 16:52:45 EDT 2012
------------------------------------------------------------------------
-Verification from racnode3 complete-
SSH verification complete.
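With user equivalence in place for both accounts, a quick manual sanity check from racnode1 should return only the date and hostname from each remote node without any password prompt; for example, as the grid user (repeat as oracle):


[grid@racnode1 ~]$ ssh racnode2 "date;hostname"
[grid@racnode1 ~]$ ssh racnode3 "date;hostname"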

Configure the New Linux Server for Oracle

Perform the following OS configuration procedures on the new Oracle RAC node.


[root@racnode3 ~]# cat >> /etc/sysctl.conf <<EOF

# Controls the maximum number of shared memory segments system wide
kernel.shmmni = 4096

# Sets the following semaphore values:
# SEMMSL_value  SEMMNS_value  SEMOPM_value  SEMMNI_value
kernel.sem = 250 32000 100 128

# Sets the maximum number of file-handles that the Linux kernel will allocate
fs.file-max = 6815744

# Defines the local port range that is used by TCP and UDP
# traffic to choose the local port
net.ipv4.ip_local_port_range = 9000 65500

# Default setting in bytes of the socket "receive" buffer which
# may be set by using the SO_RCVBUF socket option
net.core.rmem_default=262144

# Maximum setting in bytes of the socket "receive" buffer which
# may be set by using the SO_RCVBUF socket option
net.core.rmem_max=4194304

# Default setting in bytes of the socket "send" buffer which
# may be set by using the SO_SNDBUF socket option
net.core.wmem_default=262144

# Maximum setting in bytes of the socket "send" buffer which
# may be set by using the SO_SNDBUF socket option
net.core.wmem_max=1048576

# Maximum number of allowable concurrent asynchronous I/O requests
fs.aio-max-nr=1048576
EOF

Activate all kernel parameters for the system.


[root@racnode3 ~]# sysctl -p
net.ipv4.ip_forward = 0
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.default.accept_source_route = 0
kernel.sysrq = 0
kernel.core_uses_pid = 1
net.ipv4.tcp_syncookies = 1
kernel.msgmnb = 65536
kernel.msgmax = 65536
kernel.shmmax = 68719476736
kernel.shmall = 4294967296
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
fs.file-max = 6815744
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048576
fs.aio-max-nr = 1048576

Configure Access to the Shared Storage

Ensure that the new Oracle RAC node has access to the shared storage.

If the shared volumes for your Oracle RAC are configured using Openfiler and iSCSI, then certain tasks will need to be performed so that the new Oracle RAC node can access the iSCSI volumes.

Configure Network Security on the Openfiler Storage Server

From the Openfiler Storage Control Center home page, log in as an administrator. The default administration login credentials for Openfiler are:

https://openfiler1.idevelopment.info:446/

Username: openfiler
Password: password

Network access needs to be set up in Openfiler in order to identify the new Oracle RAC node so that it can access the iSCSI volumes through the storage (private) network. Navigate to [System] / [Network Setup]. The "Network Access Configuration" section (at the bottom of the page) allows an administrator to set up networks and/or hosts that will be allowed to access resources exported by the Openfiler appliance. Add the new Oracle RAC node individually rather than allowing the entire network to have access to Openfiler resources.

The following image shows the results of adding the new Oracle RAC node.

Figure 2: Configure Openfiler Network Access for the New Oracle RAC Node

Before the iSCSI client on the new Oracle RAC node can access the shared volumes, it needs to be granted the appropriate permissions to the associated iSCSI targets. From the Openfiler Storage Control Center, navigate to [Volumes] / [iSCSI Targets]. Under the "Target Configuration" sub-tab, use the pull-down menu to select one of the current RAC iSCSI targets in the section "Select iSCSI Target" and then click the [Change] button.

Figure 3: Select iSCSI Target

Click on the grey sub-tab named "Network ACL" (next to "LUN Mapping" sub-tab). For the currently selected iSCSI target, change the "Access" for the new Oracle RAC node from 'Deny' to 'Allow' and click the [Update] button. This needs to be performed for all of the RAC iSCSI targets.

The following image shows the results of granting access to the crs1 target for the new Oracle RAC node.

Figure 4: Update Network ACL for the New Oracle RAC Node

Configure the iSCSI Initiator

After granting access to the iSCSI targets from Openfiler, configure the iSCSI initiator on the new Oracle RAC node in order to access the shared volumes.

Installing the iSCSI (initiator) service

Determine if the iscsi-initiator-utils package is installed on the new Oracle RAC node.


[root@racnode3 ~]# rpm -qa --queryformat "%{NAME}-%{VERSION}-%{RELEASE} (%{ARCH})\n"| grep iscsi-initiator-utils

If the iscsi-initiator-utils package is not installed, load CD/DVD #1 into the machine and perform the following:


[root@racnode3 ~]# mount -r /dev/cdrom /media/cdrom
[root@racnode3 ~]# cd /media/cdrom/CentOS
[root@racnode3 ~]# rpm -Uvh iscsi-initiator-utils-*
[root@racnode3 ~]# cd /
[root@racnode3 ~]# eject

Verify the iscsi-initiator-utils package is now installed.


[root@racnode3 ~]# rpm -qa --queryformat "%{NAME}-%{VERSION}-%{RELEASE} (%{ARCH})\n"| grep iscsi-initiator-utils
iscsi-initiator-utils-6.2.0.871-0.16.el5 (x86_64)

Configure the iSCSI (initiator) service

Next, start the iscsid service and enable it to automatically start when the system boots. Also configure the iscsi service, which logs in to the iSCSI targets needed at system startup, to start automatically.


[root@racnode3 ~]# service iscsid start
Turning off network shutdown. Starting iSCSI daemon:       [  OK  ]
                                                           [  OK  ]

[root@racnode3 ~]# chkconfig iscsid on
[root@racnode3 ~]# chkconfig iscsi on

Now that the iSCSI service is started, use the iscsiadm command-line interface to discover all available targets on the network storage server to verify the configuration is functioning properly.


[root@racnode3 ~]# iscsiadm -m discovery -t sendtargets -p openfiler1-priv
192.168.2.195:3260,1 iqn.2006-01.com.openfiler:racdb.acfs1
192.168.2.195:3260,1 iqn.2006-01.com.openfiler:racdb.fra1
192.168.2.195:3260,1 iqn.2006-01.com.openfiler:racdb.crs1
192.168.2.195:3260,1 iqn.2006-01.com.openfiler:racdb.data1

Log In to iSCSI Targets

Manually log in to each of the available iSCSI targets using the iscsiadm command-line interface.


[root@racnode3 ~]# iscsiadm -m node -T iqn.2006-01.com.openfiler:racdb.acfs1 -p 192.168.2.195 -l
[root@racnode3 ~]# iscsiadm -m node -T iqn.2006-01.com.openfiler:racdb.crs1 -p 192.168.2.195 -l
[root@racnode3 ~]# iscsiadm -m node -T iqn.2006-01.com.openfiler:racdb.data1 -p 192.168.2.195 -l
[root@racnode3 ~]# iscsiadm -m node -T iqn.2006-01.com.openfiler:racdb.fra1 -p 192.168.2.195 -l

Ensure the client will automatically log in to each of the targets listed above when the machine is booted (or the iSCSI initiator service is started/restarted).


[root@racnode3 ~]# iscsiadm -m node -T iqn.2006-01.com.openfiler:racdb.acfs1 -p 192.168.2.195 --op update -n node.startup -v automatic
[root@racnode3 ~]# iscsiadm -m node -T iqn.2006-01.com.openfiler:racdb.crs1 -p 192.168.2.195 --op update -n node.startup -v automatic
[root@racnode3 ~]# iscsiadm -m node -T iqn.2006-01.com.openfiler:racdb.data1 -p 192.168.2.195 --op update -n node.startup -v automatic
[root@racnode3 ~]# iscsiadm -m node -T iqn.2006-01.com.openfiler:racdb.fra1 -p 192.168.2.195 --op update -n node.startup -v automatic

Create Persistent Local SCSI Device Names

Create persistent local SCSI device names for each of the iSCSI target names using udev. Having a consistent local SCSI device name that indicates which iSCSI target it maps to helps to differentiate between the volumes when configuring ASM. Although this is not a strict requirement since we will be using ASMLib 2.0 for all volumes, it provides a means of self-documentation to quickly identify the name and location of each iSCSI volume.

Start by creating the following rules file /etc/udev/rules.d/55-openiscsi.rules on the new Oracle RAC node.


# /etc/udev/rules.d/55-openiscsi.rules
KERNEL=="sd*", BUS=="scsi", PROGRAM="/etc/udev/scripts/iscsidev.sh %b",SYMLINK+="iscsi/%c/part%n"

Next, create a directory where udev scripts can be stored and then create the UNIX SHELL script that will be called when this event is received.


[root@racnode3 ~]# mkdir -p /etc/udev/scripts

Create the UNIX shell script /etc/udev/scripts/iscsidev.sh.


#!/bin/sh

# FILE: /etc/udev/scripts/iscsidev.sh

BUS=${1}
HOST=${BUS%%:*}

[ -e /sys/class/iscsi_host ] || exit 1

file="/sys/class/iscsi_host/host${HOST}/device/session*/iscsi_session*/targetname"

target_name=$(cat ${file})

# This is not an open-scsi drive
if [ -z "${target_name}" ]; then
    exit 1
fi

# Check if QNAP drive
check_qnap_target_name=${target_name%%:*}
if [ $check_qnap_target_name = "iqn.2004-04.com.qnap" ]; then
    target_name=`echo "${target_name%.*}"`
fi

echo "${target_name##*.}"

After creating the UNIX shell script, make it executable.


[root@racnode3 ~]# chmod 755 /etc/udev/scripts/iscsidev.sh

Now that udev is configured, restart the iSCSI service.


[root@racnode3 ~]# service iscsi stop
Logging out of session [sid: 1, target: iqn.2006-01.com.openfiler:racdb.crs1, portal: 192.168.2.195,3260]
Logging out of session [sid: 2, target: iqn.2006-01.com.openfiler:racdb.acfs1, portal: 192.168.2.195,3260]
Logging out of session [sid: 3, target: iqn.2006-01.com.openfiler:racdb.data1, portal: 192.168.2.195,3260]
Logging out of session [sid: 4, target: iqn.2006-01.com.openfiler:racdb.fra1, portal: 192.168.2.195,3260]
Logout of [sid: 1, target: iqn.2006-01.com.openfiler:racdb.crs1, portal: 192.168.2.195,3260]: successful
Logout of [sid: 2, target: iqn.2006-01.com.openfiler:racdb.acfs1, portal: 192.168.2.195,3260]: successful
Logout of [sid: 3, target: iqn.2006-01.com.openfiler:racdb.data1, portal: 192.168.2.195,3260]: successful
Logout of [sid: 4, target: iqn.2006-01.com.openfiler:racdb.fra1, portal: 192.168.2.195,3260]: successful
Stopping iSCSI daemon:                                     [  OK  ]

[root@racnode3 ~]# service iscsi start
iscsid dead but pid file exists
Turning off network shutdown. Starting iSCSI daemon:       [  OK  ]
                                                           [  OK  ]
Setting up iSCSI targets:
Logging in to [iface: default, target: iqn.2006-01.com.openfiler:racdb.acfs1, portal: 192.168.2.195,3260]
Logging in to [iface: default, target: iqn.2006-01.com.openfiler:racdb.fra1, portal: 192.168.2.195,3260]
Logging in to [iface: default, target: iqn.2006-01.com.openfiler:racdb.crs1, portal: 192.168.2.195,3260]
Logging in to [iface: default, target: iqn.2006-01.com.openfiler:racdb.data1, portal: 192.168.2.195,3260]
Login to [iface: default, target: iqn.2006-01.com.openfiler:racdb.acfs1, portal: 192.168.2.195,3260]: successful
Login to [iface: default, target: iqn.2006-01.com.openfiler:racdb.fra1, portal: 192.168.2.195,3260]: successful
Login to [iface: default, target: iqn.2006-01.com.openfiler:racdb.crs1, portal: 192.168.2.195,3260]: successful
Login to [iface: default, target: iqn.2006-01.com.openfiler:racdb.data1, portal: 192.168.2.195,3260]: successful
                                                           [  OK  ]

Let's see if our hard work paid off. Verify that the new Oracle RAC node is able to see each of the volumes that contain the partition (i.e. part1).


[root@racnode3 ~]# ls -l /dev/iscsi/*
/dev/iscsi/acfs1:
total 0
lrwxrwxrwx 1 root root  9 Apr 30 10:30 part -> ../../sdb
lrwxrwxrwx 1 root root 10 Apr 30 10:30 part1 -> ../../sdb1

/dev/iscsi/crs1:
total 0
lrwxrwxrwx 1 root root  9 Apr 30 10:30 part -> ../../sdc
lrwxrwxrwx 1 root root 10 Apr 30 10:30 part1 -> ../../sdc1

/dev/iscsi/data1:
total 0
lrwxrwxrwx 1 root root  9 Apr 30 10:30 part -> ../../sde
lrwxrwxrwx 1 root root 10 Apr 30 10:30 part1 -> ../../sde1

/dev/iscsi/fra1:
total 0
lrwxrwxrwx 1 root root  9 Apr 30 10:30 part -> ../../sdd
lrwxrwxrwx 1 root root 10 Apr 30 10:30 part1 -> ../../sdd1

Install and Configure ASMLib 2.0

The existing Oracle RAC is using the ASMLib support library to provide persistent paths and permissions for storage devices used with Oracle ASM.

Copy the ASMLib 2.0 libraries and the kernel driver from one of the existing Oracle RAC nodes. Install the ASMLib software on the new Oracle RAC node.
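For example, assuming the three RPMs were staged in a directory such as /home/oracle/software/asmlib on racnode1 (a hypothetical path used only for illustration), they could be pulled over with scp before installing them:


[root@racnode3 ~]# mkdir -p /root/asmlib && cd /root/asmlib
[root@racnode3 asmlib]# scp racnode1:/home/oracle/software/asmlib/oracleasm*.rpm .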


[root@racnode3 ~]# rpm -Uvh oracleasm-2.6.18-194.el5-2.0.5-1.el5.x86_64.rpm \
> oracleasmlib-2.0.4-1.el5.x86_64.rpm \
> oracleasm-support-2.1.7-1.el5.x86_64.rpm
warning: oracleasm-2.6.18-194.el5-2.0.5-1.el5.x86_64.rpm: Header V3 DSA signature: NOKEY, key ID 1e5e0159
Preparing...                ########################################### [100%]
   1:oracleasm-support      ########################################### [ 33%]
   2:oracleasm-2.6.18-194.el########################################### [ 67%]
   3:oracleasmlib           ########################################### [100%]

Enter the following command to run the oracleasm initialization script with the configure option.


[root@racnode3 ~]# /usr/sbin/oracleasm configure -i
Configuring the Oracle ASM library driver.

This will configure the on-boot properties of the Oracle ASM library
driver.  The following questions will determine whether the driver is
loaded on boot and what permissions it will have.  The current values
will be shown in brackets ('[]').  Hitting <ENTER> without typing an
answer will keep that current value.  Ctrl-C will abort.

Default user to own the driver interface []: grid
Default group to own the driver interface []: asmadmin
Start Oracle ASM library driver on boot (y/n) [n]: y
Scan for Oracle ASM disks on boot (y/n) [y]: y
Writing Oracle ASM library driver configuration: done

Enter the following command to load the oracleasm kernel module.


[root@racnode3 ~]# /usr/sbin/oracleasm init
Creating /dev/oracleasm mount point: /dev/oracleasm
Loading module "oracleasm": oracleasm
Mounting ASMlib driver filesystem: /dev/oracleasm

To make the volumes available on the new Oracle RAC node, enter the following command.


[root@racnode3 ~]# /usr/sbin/oracleasm scandisks
Reloading disk partitions: done
Cleaning any stale ASM disks...
Scanning system for ASM disks...
Instantiating disk "CRSVOL1"
Instantiating disk "FRAVOL1"
Instantiating disk "DATAVOL1"
Instantiating disk "ACFS1"

Verify that the new Oracle RAC node has identified the disks that are marked as Automatic Storage Management disks.


[root@racnode3 ~]# /usr/sbin/oracleasm listdisks
ACFS1
CRSVOL1
DATAVOL1
FRAVOL1

Pre-installation Tasks for Oracle Grid Infrastructure for a Cluster

After configuring the hardware and operating system on the node you want to add, use the Cluster Verification Utility (CVU) to verify that the node is reachable by the other nodes in the cluster, that user equivalence is set up correctly, that all required Linux packages have been installed, that the node has access to the shared storage, and that it passes the other checks needed before it can be added to the cluster. Also verify that, if you are logging in to a remote system using an X terminal, an X11 display server is properly configured.

Install the cvuqdisk Package for Linux

Install the operating system package cvuqdisk on the new Oracle RAC node. Without cvuqdisk, CVU cannot discover shared disks and you will receive the error message "Package cvuqdisk not installed" when the CVU is run. Copy the cvuqdisk RPM from one of the existing Oracle RAC nodes.
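The RPM can typically be found under the cv/rpm directory of the Grid Infrastructure home on an existing node (or on the Grid Infrastructure installation media); for example, copying it from racnode1 into a working directory on racnode3:


[root@racnode3 ~]# mkdir -p /root/rpm && cd /root/rpm
[root@racnode3 rpm]# scp racnode1:/u01/app/11.2.0/grid/cv/rpm/cvuqdisk-1.0.9-1.rpm .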

Set the environment variable CVUQDISK_GRP to point to the group that will own cvuqdisk, which for this article is oinstall, and then install the cvuqdisk package.


[root@racnode3 rpm]# CVUQDISK_GRP=oinstall; export CVUQDISK_GRP
[root@racnode3 rpm]# rpm -iv cvuqdisk-1.0.9-1.rpm
Preparing packages for installation...
cvuqdisk-1.0.9-1

Verify the cvuqdisk utility was successfully installed.


[root@racnode3 rpm]# ls -l /usr/sbin/cvuqdisk
-rwsr-xr-x 1 root oinstall 14000 Sep  3  2011 /usr/sbin/cvuqdisk

Verify New Node (HWOS)

From one of the active nodes in the existing cluster, log in as the Oracle Grid Infrastructure owner and run the CVU post-hardware and operating system installation stage check to ensure that racnode3 (the Oracle RAC node to be added) is ready from the perspective of the hardware and operating system.


[root@racnode1 ~]# su - grid

[grid@racnode1 ~]$ echo $GRID_HOME
/u01/app/11.2.0/grid

[grid@racnode1 ~]$ echo $ORACLE_HOME
/u01/app/11.2.0/grid

[grid@racnode1 ~]$ $GRID_HOME/bin/cluvfy stage -post hwos -n racnode3

Review the CVU report.

If the CVU was successful, the command will end with:

Post-check for hardware and operating system setup was successful.

Otherwise, the CVU will print meaningful error messages.

Verify Peer (REFNODE)

As the Oracle Grid Infrastructure owner, run the CVU again, this time to determine the readiness of the new Oracle RAC node. Use the comp peer option to obtain a detailed comparison of the properties of a reference node that is part of your existing cluster environment with the node to be added in order to determine any conflicts or compatibility issues with required Linux packages, kernel settings, and so on.

Specify the reference node (racnode1 in this example) against which you want CVU to compare the node to be added (the node(s) specified after the -n option). Also provide the name of the Oracle inventory O/S group as well as the name of the OSDBA O/S group.


[grid@racnode1 ~]$ $GRID_HOME/bin/cluvfy comp peer -refnode racnode1 -n racnode3 -orainv oinstall -osdba dba -verbose

Review the CVU report.

Invariably, the CVU will report:

Verification of peer compatibility was unsuccessful.

This is because the report simply looks for mismatches between the properties of the nodes being compared. Certain properties will undoubtedly differ; for example, the amount of available memory, the amount of free disk space for the Grid home, and the free space in /tmp will rarely match exactly. Such mismatches can be safely ignored. Differences in kernel settings and required Linux packages, however, should be addressed before extending the cluster.

Verify New Node (NEW NODE PRE)

Use CVU as the Oracle Grid Infrastructure owner one last time to determine the integrity of the cluster and whether it is ready for the new Oracle RAC node to be added.


[grid@racnode1 ~]$ $GRID_HOME/bin/cluvfy stage -pre nodeadd -n racnode3 -fixup -verbose

Review the CVU report.

If the CVU was successful, the command will end with:

Pre-check for node addition was successful.

Otherwise, the CVU will create fixup scripts (if the -fixup option was specified) with instructions to fix the cluster or node if the verification fails.

When the shared storage is Oracle ASM and ASMLib is being used, there are cases where you may receive the following error from CVU:

ERROR:
PRVF-5449 : Check of Voting Disk location "ORCL:CRSVOL1(ORCL:CRSVOL1)" failed on the following nodes:

        racnode3:No such file or directory

PRVF-5431 : Oracle Cluster Voting Disk configuration check failed

As documented in Oracle BUG #10310848, this error can be safely ignored. The error is a result of having the voting disks stored in Oracle ASM, which is a new feature of Oracle 11g Release 2.

Logging In to a Remote System Using X Terminal

Several of the applications used when extending the Oracle RAC have a Graphical User Interface (GUI) and require the use of an X11 display server. The most notable of these GUI applications (better known as X applications) is the Database Configuration Assistant (DBCA). If you are not logged directly on to the graphical console of a node but are instead using a remote client like SSH, PuTTY, or Telnet to connect to the node, any X application will require an X11 display server installed on the client. For example, if you are making a remote terminal connection to racnode1 from a Windows workstation, you would need to install an X11 display server on that Windows client (Xming, for example). If you intend to run any of the Oracle GUI applications from a Windows workstation or other system with an X11 display server installed, then perform the following actions.

  1. Start the X11 display server software on the client workstation.

  2. Configure the security settings of the X server software to permit remote hosts to display X applications on the local system.

  3. From the client workstation, SSH or Telnet to the server on which you want to run the GUI applications.

  4. Set the DISPLAY environment variable.


    [root@racnode1 ~]# su - grid

    [grid@racnode1 ~]$ DISPLAY=<your local workstation>:0.0
    [grid@racnode1 ~]$ export DISPLAY

    [grid@racnode1 ~]$ # TEST X CONFIGURATION BY RUNNING xterm
    [grid@racnode1 ~]$ xterm &

    Figure 5: Test X11 Display Server on Windows; Run xterm from Node 1 (racnode1)

Extend Oracle Grid Infrastructure for a Cluster to the New Node

From one of the active nodes in the existing Oracle RAC, log in as the Grid Infrastructure owner (grid when using Job Role Separation) and execute the addNode.sh script to install and configure the Oracle Grid Infrastructure software on the new node. The same addNode.sh script has been used in previous releases and allowed either a GUI or a silent/console install. However, with 11g Release 2 (11.2), the only mode to run the script is using the -silent option. The GUI installation method is no longer available. Furthermore, there are different options depending on whether or not you are using Grid Naming Service (GNS).

 

As documented in My Oracle Support [ID 1267569.1], the addNode.sh script will not complete unless all of the prerequisite checks for Grid Infrastructure are successful. For example, users who receive the PRVF-5449 message (or any other error message) from the CVU will need to set the environment variable IGNORE_PREADDNODE_CHECKS=Y before running addNode.sh in order to bypass the node addition pre-check; otherwise, the silent node addition will fail without showing any errors to the console.
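If you do need to bypass the pre-check, export the variable in the same shell session that will run addNode.sh. For example:


[grid@racnode1 bin]$ export IGNORE_PREADDNODE_CHECKS=Y
[grid@racnode1 bin]$ ./addNode.sh -silent "CLUSTER_NEW_NODES={racnode3}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={racnode3-vip}"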

Navigate to the Grid_home/oui/bin directory on one of the existing nodes in the cluster and run the addNode.sh script using the following syntax, where racnode3 is the name of the node that you are adding and racnode3-vip is the VIP name for the node.

If you are not using GNS (like me):


[grid@racnode1 ~]$ id
uid=1100(grid) gid=1000(oinstall) groups=1000(oinstall),1200(asmadmin),1201(asmdba),1202(asmoper)

[grid@racnode1 ~]$ cd $GRID_HOME/oui/bin

[grid@racnode1 bin]$ ./addNode.sh -silent "CLUSTER_NEW_NODES={racnode3}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={racnode3-vip}"

If you are using GNS:


[grid@racnode1 ~]$ id
uid=1100(grid) gid=1000(oinstall) groups=1000(oinstall),1200(asmadmin),1201(asmdba),1202(asmoper)

[grid@racnode1 ~]$ cd $GRID_HOME/oui/bin

[grid@racnode1 bin]$ ./addNode.sh -silent "CLUSTER_NEW_NODES={racnode3}"

If the command is successful, you should see a prompt similar to the following:


...

The following configuration scripts need to be executed as the "root" user in each new cluster node.
Each script in the list below is followed by a list of nodes.
/u01/app/oraInventory/orainstRoot.sh #On nodes racnode3
/u01/app/11.2.0/grid/root.sh #On nodes racnode3
To execute the configuration scripts:
    1. Open a terminal window
    2. Log in as "root"
    3. Run the scripts in each cluster node

The Cluster Node Addition of /u01/app/11.2.0/grid was successful.
Please check '/tmp/silentInstall.log' for more details.

Run the orainstRoot.sh and root.sh commands on the new Oracle RAC node. The root.sh script performs the work of configuring Grid Infrastructure on the new node and includes adding High Availability Services to the /etc/inittab so that CRS starts up when the machine starts. When root.sh completes, all services for Oracle Grid Infrastructure will be running.


[root@racnode3 ~]# /u01/app/oraInventory/orainstRoot.sh
[root@racnode3 ~]# /u01/app/11.2.0/grid/root.sh
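Before moving on, a quick sanity check on racnode3 can confirm that root.sh registered Oracle High Availability Services and brought the stack up (the full set of verification steps follows in the next section):


[root@racnode3 ~]# grep ohasd /etc/inittab                       # the init.ohasd respawn entry added by root.sh
[root@racnode3 ~]# /u01/app/11.2.0/grid/bin/crsctl check crs     # all daemons should report online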

It is best practice to run the CVU from one of the initial nodes in the Oracle RAC one last time to verify the cluster is integrated and that the new node has been successfully added to the cluster at the network, shared storage, and clusterware levels.


[grid@racnode1 ~]$ $GRID_HOME/bin/cluvfy stage -post nodeadd -n racnode3 -verbose

Verify Oracle Grid Infrastructure for a Cluster on the New Node

After extending Oracle Grid Infrastructure, run the following tests from the new Oracle RAC node as the grid user to verify that the installation and configuration were successful. If successful, the Oracle Clusterware daemons, the TNS listener, the ASM instance, etc. should have been started by the root.sh script.

Check CRS Status


[grid@racnode3 ~]$ crsctl check crs
CRS-4638: Oracle High Availability Services is online
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online

Check Clusterware Resources


[grid@racnode3 ~]$ crs_stat -t -v
Name           Type           R/RA   F/FT   Target    State     Host
----------------------------------------------------------------------
ora.CRS.dg     ora....up.type 0/5    0/     ONLINE    ONLINE    racnode1
ora.DOCS.dg    ora....up.type 0/5    0/     ONLINE    ONLINE    racnode1
ora.FRA.dg     ora....up.type 0/5    0/     ONLINE    ONLINE    racnode1
ora....ER.lsnr ora....er.type 0/5    0/     ONLINE    ONLINE    racnode1
ora....N1.lsnr ora....er.type 0/5    0/0    ONLINE    ONLINE    racnode2
ora....N2.lsnr ora....er.type 0/5    0/0    ONLINE    ONLINE    racnode3
ora....N3.lsnr ora....er.type 0/5    0/0    ONLINE    ONLINE    racnode1
ora....DATA.dg ora....up.type 0/5    0/     ONLINE    ONLINE    racnode1
ora.asm        ora.asm.type   0/5    0/     ONLINE    ONLINE    racnode1
ora.cvu        ora.cvu.type   0/5    0/0    ONLINE    ONLINE    racnode1
ora.gsd        ora.gsd.type   0/5    0/     OFFLINE   OFFLINE
ora....network ora....rk.type 0/5    0/     ONLINE    ONLINE    racnode1
ora.oc4j       ora.oc4j.type  0/1    0/2    ONLINE    ONLINE    racnode1
ora.ons        ora.ons.type   0/3    0/     ONLINE    ONLINE    racnode1
ora.racdb.db   ora....se.type 0/2    0/1    ONLINE    ONLINE    racnode1
ora....nfo.svc ora....ce.type 0/0    0/0    ONLINE    ONLINE    racnode1
ora....SM1.asm application    0/5    0/0    ONLINE    ONLINE    racnode1
ora....E1.lsnr application    0/5    0/0    ONLINE    ONLINE    racnode1
ora....de1.gsd application    0/5    0/0    OFFLINE   OFFLINE
ora....de1.ons application    0/3    0/0    ONLINE    ONLINE    racnode1
ora....de1.vip ora....t1.type 0/0    0/0    ONLINE    ONLINE    racnode1
ora....SM2.asm application    0/5    0/0    ONLINE    ONLINE    racnode2
ora....E2.lsnr application    0/5    0/0    ONLINE    ONLINE    racnode2
ora....de2.gsd application    0/5    0/0    OFFLINE   OFFLINE
ora....de2.ons application    0/3    0/0    ONLINE    ONLINE    racnode2
ora....de2.vip ora....t1.type 0/0    0/0    ONLINE    ONLINE    racnode2
ora....SM3.asm application    0/5    0/0    ONLINE    ONLINE    racnode3
ora....E3.lsnr application    0/5    0/0    ONLINE    ONLINE    racnode3
ora....de3.gsd application    0/5    0/0    OFFLINE   OFFLINE
ora....de3.ons application    0/3    0/0    ONLINE    ONLINE    racnode3
ora....de3.vip ora....t1.type 0/0    0/0    ONLINE    ONLINE    racnode3
ora....ry.acfs ora....fs.type 0/5    0/     ONLINE    ONLINE    racnode1
ora.scan1.vip  ora....ip.type 0/0    0/0    ONLINE    ONLINE    racnode2
ora.scan2.vip  ora....ip.type 0/0    0/0    ONLINE    ONLINE    racnode3
ora.scan3.vip  ora....ip.type 0/0    0/0    ONLINE    ONLINE    racnode1

 

The crs_stat command is deprecated in Oracle Clusterware 11g Release 2 (11.2).

Check Cluster Nodes


[grid@racnode3 ~]$ olsnodes -n
racnode1        1
racnode2        2
racnode3        3

Oracle TNS Listener Process


[grid@racnode3 ~]$ ps -ef | grep lsnr | grep -v 'grep' | grep -v 'ocfs' | awk '{print $9}'
LISTENER_SCAN2
LISTENER

Confirming Oracle ASM Function for Oracle Clusterware Files

If you installed the OCR and voting disk files on Oracle ASM, then use the following command syntax as the Grid Infrastructure installation owner to confirm that your Oracle ASM installation is running.


[grid@racnode3 ~]$ srvctl status asm -a
ASM is running on racnode3,racnode1,racnode2
ASM is enabled.

Check Oracle Cluster Registry (OCR)


[grid@racnode3 ~]$ ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          3
         Total space (kbytes)     :     262120
         Used space (kbytes)      :       3592
         Available space (kbytes) :     258528
         ID                       : 1546531707
         Device/File Name         :       +CRS
                                    Device/File integrity check succeeded

                                    Device/File not configured

                                    Device/File not configured

                                    Device/File not configured

                                    Device/File not configured

         Cluster registry integrity check succeeded

         Logical corruption check bypassed due to non-privileged user

Check Voting Disk


[grid@racnode3 ~]$ crsctl query css votedisk
##  STATE    File Universal Id                File Name Disk group
--  -----    -----------------                --------- ---------
 1. ONLINE   7fe9ad5212f84fb5bf48192cede68454 (ORCL:CRSVOL1) [CRS]
Located 1 voting disk(s).

Extend Oracle Database Software to the New Node

From one of the active nodes in the existing Oracle RAC, log in as the Oracle owner (oracle) and execute the addNode.sh script to install and configure the Oracle Database software on the new node. As with Oracle Grid Infrastructure, addNode.sh can only be run with the -silent option; the GUI installation method is no longer available.


[oracle@racnode1 ~]$ id
uid=1101(oracle) gid=1000(oinstall) groups=1000(oinstall),1201(asmdba),1300(dba),1301(oper)

[oracle@racnode1 ~]$ cd $ORACLE_HOME/oui/bin

[oracle@racnode1 bin]$ ./addNode.sh -silent "CLUSTER_NEW_NODES={racnode3}"

If the command is successful, you should see a prompt similar to the following:


...
The following configuration scripts need to be executed as the "root" user in each new cluster node.
Each script in the list below is followed by a list of nodes.
/u01/app/oracle/product/11.2.0/dbhome_1/root.sh #On nodes racnode3
To execute the configuration scripts:
    1. Open a terminal window
    2. Log in as "root"
    3. Run the scripts in each cluster node

The Cluster Node Addition of /u01/app/oracle/product/11.2.0/dbhome_1 was successful.
Please check '/tmp/silentInstall.log' for more details.

Run the root.sh command on the new Oracle RAC node as directed:


[root@racnode3 ~]# /u01/app/oracle/product/11.2.0/dbhome_1/root.sh

Change Group Ownership of 'oracle' Binary when using Job Role Separation

If your Oracle RAC is configured using Job Role Separation, the $ORACLE_HOME/bin/oracle binary may not have the proper group ownership after extending the Oracle Database software on the new node. This will prevent the Oracle Database software owner (oracle) from accessing the ASMLib driver or ASM disks on the new node as stated in My Oracle Support [ID 1084186.1] and [ID 1054033.1]. For example, after extending the Oracle Database software, the oracle binary on the new node is owned by


-rwsr-s--x 1 oracle oinstall

instead of


-rwsr-s--x 1 oracle asmadmin

The group ownership for the $ORACLE_HOME/bin/oracle binary on the new node should be set to the value of the ASM Administrators Group (OSASM) which in this guide is asmadmin. When using ASMLib, you can always determine the OSASM group using the oracleasm configure command.


[root@racnode3 ~]# /usr/sbin/oracleasm configure
ORACLEASM_ENABLED=true
ORACLEASM_UID=grid
ORACLEASM_GID=asmadmin
ORACLEASM_SCANBOOT=true
ORACLEASM_SCANORDER=""
ORACLEASM_SCANEXCLUDE=""

For other platforms, check installAction<date>.log for the OSASM setting.
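For example, a quick way to search for the group settings in the installer logs is shown below. This is only a minimal sketch: the path assumes the central inventory location used in this guide (/u01/app/oraInventory), and the exact strings recorded in the log vary by release, so treat it as a starting point rather than a definitive check.


[root@racnode3 ~]# grep -i "osasm" /u01/app/oraInventory/logs/installAction*.log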

As the grid user, run the setasmgidwrap command to set the $ORACLE_HOME/bin/oracle binary to the proper group ownership.


[root@racnode3 ~]# su - grid

[grid@racnode3 ~]$ cd $GRID_HOME/bin
[grid@racnode3 bin]$ ./setasmgidwrap o=/u01/app/oracle/product/11.2.0/dbhome_1/bin/oracle

[grid@racnode3 bin]$ ls -l /u01/app/oracle/product/11.2.0/dbhome_1/bin/oracle
-rwsr-s--x 1 oracle asmadmin 232399319 Apr 26 18:25 /u01/app/oracle/product/11.2.0/dbhome_1/bin/oracle*

 

This change is only required for the Oracle Database software ($ORACLE_HOME/bin/oracle). Do not modify the $GRID_HOME/bin/oracle binary ownership for Oracle Grid Infrastructure.

 

Warning: Whenever a patch is applied to the database ORACLE_HOME, verify that the ownership and permissions described above are still correct afterward and fix them if necessary.
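A minimal post-patch sanity check, using the paths from this guide, is to re-examine the binary and, if the group has reverted to oinstall, re-run setasmgidwrap as shown earlier. This assumes the grid user's login environment sets ORACLE_HOME to the Grid Infrastructure home.


[root@racnode3 ~]# ls -l /u01/app/oracle/product/11.2.0/dbhome_1/bin/oracle
[root@racnode3 ~]# su - grid -c "/u01/app/11.2.0/grid/bin/setasmgidwrap o=/u01/app/oracle/product/11.2.0/dbhome_1/bin/oracle"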

Add New Instance to the Cluster Database

Use either the Oracle Database Configuration Assistant (DBCA) GUI or the SRVCTL command-line interface to add a new instance to the existing cluster database running on the new Oracle RAC node. Specifically, an instance named racdb3 will be added to the pre-existing racdb cluster database.

This section describes both methods that can be used to add a new Oracle instance to an existing cluster database — DBCA or SRVCTL.

Database Configuration Assistant — GUI Method

To use the GUI method, log in to one of the active nodes in the existing Oracle RAC as the Oracle owner (oracle) and execute the DBCA.


[oracle@racnode1 ~]$ dbca &

Welcome Screen: Select Oracle Real Application Clusters database.

Operations: If your database is administrator-managed, select Instance Management. If your database is policy-managed, the Instance Management option is not available; to increase the number of database instances, add more nodes to the server pool.

Instance Management: Select Add an Instance.

List of cluster databases: From the List of Cluster Databases page, select the active Oracle RAC database to which you want to add an instance. Enter the user name and password for a database user that has SYSDBA privileges.

List of cluster database instances: Review the existing instances for the cluster database and click Next to add a new instance.

Instance naming and node selection: On the Adding an Instance page, enter the instance name in the field at the top of the page if the name that DBCA provides does not match your existing instance naming scheme. Then select the target node name from the list and click Next.

Instance Storage: Expand the Tablespaces, Datafiles, and Redo Log Groups nodes to verify that a new UNDO tablespace and redo log groups for a new thread are being created for the new instance, then click Finish.

Summary: Review the information on the Summary dialog and click OK.

Progress: DBCA displays a progress dialog while it performs the instance addition operation.

End of Add Instance: When DBCA completes the instance addition operation, it displays a dialog asking whether you want to perform another operation. Click No to exit DBCA.

SRVCTL — Command-Line Method

To use the command-line method (SRVCTL), start by logging in to the new Oracle RAC node as the Oracle owner (oracle) and create the Oracle dependencies such as the password file, init.ora, oratab, and admin directories for the new instance.


[oracle@racnode3 ~]$ echo $ORACLE_HOME
/u01/app/oracle/product/11.2.0/dbhome_1

[oracle@racnode3 ~]$ cd $ORACLE_HOME/dbs

[oracle@racnode3 dbs]$ mv initracdb1.ora initracdb3.ora
[oracle@racnode3 dbs]$ cat initracdb3.ora
SPFILE='+RACDB_DATA/racdb/spfileracdb.ora'

[oracle@racnode3 dbs]$ mv orapwracdb1 orapwracdb3

[oracle@racnode3 dbs]$ echo "racdb3:$ORACLE_HOME:N" >> /etc/oratab

[oracle@racnode3 dbs]$ mkdir -p $ORACLE_BASE/admin/racdb/adump
[oracle@racnode3 dbs]$ mkdir -p $ORACLE_BASE/admin/racdb/dpdump
[oracle@racnode3 dbs]$ mkdir -p $ORACLE_BASE/admin/racdb/hdump
[oracle@racnode3 dbs]$ mkdir -p $ORACLE_BASE/admin/racdb/pfile
[oracle@racnode3 dbs]$ mkdir -p $ORACLE_BASE/admin/racdb/scripts
[oracle@racnode3 dbs]$ mkdir -p $ORACLE_BASE/diag

From one of the active nodes in the existing Oracle RAC, log in as the Oracle owner (oracle) and issue the following commands to create the needed public log thread, undo tablespace, and instance parameter entries for the new instance.


[oracle@racnode1 ~]$ . oraenv
ORACLE_SID = [racdb1] ? racdb1
The Oracle base remains unchanged with value /u01/app/oracle

[oracle@racnode1 ~]$ sqlplus / as sysdba

SQL> alter database add logfile thread 3 group 5 ('+FRA','+RACDB_DATA') size 50M, group 6 ('+FRA','+RACDB_DATA') size 50M;

Database altered.

SQL> alter database enable public thread 3;

Database altered.

SQL> create undo tablespace undotbs3 datafile '+RACDB_DATA' size 500M autoextend on next 100m maxsize 8g;

Tablespace created.

SQL> alter system set undo_tablespace=undotbs3 scope=spfile sid='racdb3';

System altered.

SQL> alter system set instance_number=3 scope=spfile sid='racdb3';

System altered.

SQL> alter system set cluster_database_instances=3 scope=spfile sid='*';

System altered.

Update the Oracle Cluster Registry (OCR) with the new instance being added to the cluster database as well as changes to any existing service(s). Specifically, add racdb3 to the racdb cluster database and verify the results.


[oracle@racnode3 ~]$ srvctl add instance -d racdb -i racdb3 -n racnode3

[oracle@racnode3 ~]$ srvctl status database -d racdb -v
Instance racdb1 is running on node racnode1 with online services racdbsvc.idevelopment.info. Instance status: Open.
Instance racdb2 is running on node racnode2 with online services racdbsvc.idevelopment.info. Instance status: Open.
Instance racdb3 is not running on node racnode3

[oracle@racnode3 ~]$ srvctl config database -d racdb
Database unique name: racdb
Database name: racdb
Oracle home: /u01/app/oracle/product/11.2.0/dbhome_1
Oracle user: oracle
Spfile: +RACDB_DATA/racdb/spfileracdb.ora
Domain: idevelopment.info
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: racdb
Database instances: racdb1,racdb2,racdb3
Disk Groups: RACDB_DATA,FRA
Mount point paths:
Services: racdbsvc.idevelopment.info
Type: RAC
Database is administrator managed

With all of the prerequisites satisfied and OCR updated, start the racdb3 instance on the new Oracle RAC node.


[oracle@racnode3 ~]$ srvctl start instance -d racdb -i racdb3

Add New Instance to any Services - (Optional)

After adding the new instance to the configuration using either DBCA or SRVCTL, add the new instance to any services you may have.


[oracle@racnode3 ~]$ srvctl add service -d racdb -s racdbsvc.idevelopment.info -r racdb3 -u

[oracle@racnode3 ~]$ srvctl start service -d racdb

[oracle@racnode3 ~]$ srvctl config service -d racdb -s racdbsvc.idevelopment.info
Service name: racdbsvc.idevelopment.info
Service is enabled
Server pool: racdb_racdbsvc.idevelopment.info
Cardinality: 3
Disconnect: false
Service role: PRIMARY
Management policy: AUTOMATIC
DTP transaction: false
AQ HA notifications: false
Failover type: NONE
Failover method: NONE
TAF failover retries: 0
TAF failover delay: 0
Connection Load Balancing Goal: LONG
Runtime Load Balancing Goal: NONE
TAF policy specification: NONE
Edition:
Preferred instances: racdb3,racdb1,racdb2
Available instances:

Verify New Instance


[oracle@racnode3 ~]$ srvctl status database -d racdb -v
Instance racdb1 is running on node racnode1 with online services racdbsvc.idevelopment.info. Instance status: Open.
Instance racdb2 is running on node racnode2 with online services racdbsvc.idevelopment.info. Instance status: Open.
Instance racdb3 is running on node racnode3 with online services racdbsvc.idevelopment.info. Instance status: Open.


SQL> select inst_id, instance_name, status,
  2         to_char(startup_time, 'DD-MON-YYYY HH24:MI:SS') as "START_TIME"
  3  from gv$instance order by inst_id;

   INST_ID INSTANCE_NAME    STATUS       START_TIME
---------- ---------------- ------------ --------------------
         1 racdb1           OPEN         26-APR-2012 22:12:38
         2 racdb2           OPEN         27-APR-2012 00:16:50
         3 racdb3           OPEN         26-APR-2012 23:11:22

Configure TNSNAMES

When the Oracle Database software was extended, the current $ORACLE_HOME/network/admin/tnsnames.ora file was copied to the new node; it contains entries for all of the initial instances. Update the tnsnames.ora file on each node by adding entries for the new instance.


RACDB3.IDEVELOPMENT.INFO =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = racnode3-vip.idevelopment.info)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = racdb.idevelopment.info)
      (INSTANCE_NAME = racdb3)
    )
  )

LISTENERS_RACDB3.IDEVELOPMENT.INFO =
  (ADDRESS_LIST =
    (ADDRESS = (PROTOCOL = TCP)(HOST = racnode3-vip.idevelopment.info)(PORT = 1521))
  )

LISTENERS_RACDB.IDEVELOPMENT.INFO =
  (ADDRESS_LIST =
    (ADDRESS = (PROTOCOL = TCP)(HOST = racnode1-vip.idevelopment.info)(PORT = 1521))
    (ADDRESS = (PROTOCOL = TCP)(HOST = racnode2-vip.idevelopment.info)(PORT = 1521))
    (ADDRESS = (PROTOCOL = TCP)(HOST = racnode3-vip.idevelopment.info)(PORT = 1521))
  )
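A quick way to confirm the new entry resolves from each node is tnsping, shown here from the first node as a minimal sketch (run it with the oracle user's environment set; output will vary).


[oracle@racnode1 ~]$ tnsping RACDB3.IDEVELOPMENT.INFO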

OEM Database Control

If you configured Oracle Enterprise Manager (Database Control), add the new instance to DB Control monitoring without recreating the repository.

The URL for this example is: https://racnode1.idevelopment.info:1158/em

Check OEM DB Control Cluster Configuration

After extending the cluster database by adding a new instance, use the emca utility from one of the original Oracle RAC nodes to check the current DB Control cluster configuration.


[oracle@racnode1 ~]$ emca -displayConfig dbcontrol -cluster

STARTED EMCA at Apr 27, 2012 12:21:40 AM
EM Configuration Assistant, Version 11.2.0.3.0 Production
Copyright (c) 2003, 2011, Oracle.  All rights reserved.

Enter the following information:
Database unique name: racdb
Service name: racdb.idevelopment.info
Do you wish to continue? [yes(Y)/no(N)]: y
Apr 30, 2012 9:05:58 PM oracle.sysman.emcp.EMConfig perform
INFO: This operation is being logged at /u01/app/oracle/cfgtoollogs/emca/racdb/emca_2012_04_30_21_05_17.log.
Apr 30, 2012 9:05:58 PM oracle.sysman.emcp.EMDBPostConfig showClusterDBCAgentMessage
INFO:
****************  Current Configuration  ****************
 INSTANCE            NODE           DBCONTROL_UPLOAD_HOST
----------        ----------        ---------------------
racdb             racnode1          racnode1.idevelopment.info
racdb             racnode2          racnode1.idevelopment.info
racdb             racnode3          <not configured>

Enterprise Manager configuration completed successfully
FINISHED EMCA at Apr 30, 2012 9:05:58 PM

Add Instance to DB Control Monitoring

In this example, the OEM Database Control agent is running on the initial two Oracle RAC nodes. The agent for the new instance will need to be configured and started from one of the original Oracle RAC nodes.


[oracle@racnode3 ~]$ emca -addInst db

STARTED EMCA at Apr 30, 2012 9:51:49 PM
EM Configuration Assistant, Version 11.2.0.3.0 Production
Copyright (c) 2003, 2011, Oracle.  All rights reserved.

Enter the following information:
Database unique name: racdb
Service name: racdb.idevelopment.info
Node name: racnode3
Database SID: racdb3
Do you wish to continue? [yes(Y)/no(N)]: y
Apr 30, 2012 9:52:36 PM oracle.sysman.emcp.EMConfig perform
INFO: This operation is being logged at /u01/app/oracle/cfgtoollogs/emca/racdb/racdb3/emca_2012_04_30_21_51_35.log.
Apr 30, 2012 9:56:29 PM oracle.sysman.emcp.util.GeneralUtil initSQLEngineLoacly
WARNING: null
Apr 30, 2012 9:56:29 PM oracle.sysman.emcp.ParamsManager checkListenerStatusForDBControl
WARNING: Error initializing SQL connection. SQL operations cannot be performed
Apr 30, 2012 9:56:33 PM oracle.sysman.emcp.util.DBControlUtil stopOMS
INFO: Stopping Database Control (this may take a while) ...
Apr 30, 2012 9:58:36 PM oracle.sysman.emcp.EMDBCConfig instantiateOC4JConfigFiles
INFO: Propagating /u01/app/oracle/product/11.2.0/dbhome_1/oc4j/j2ee/OC4J_DBConsole_racnode3_racdb to remote nodes ...
Apr 30, 2012 10:26:35 PM oracle.sysman.emcp.EMAgentConfig deployStateDirs
INFO: Propagating /u01/app/oracle/product/11.2.0/dbhome_1/racnode3_racdb to remote nodes ...
Apr 30, 2012 10:33:19 PM oracle.sysman.emcp.util.DBControlUtil secureDBConsole
INFO: Securing Database Control (this may take a while) ...
Apr 30, 2012 10:48:58 PM oracle.sysman.emcp.util.DBControlUtil startOMS
INFO: Starting Database Control (this may take a while) ...
Apr 30, 2012 10:59:09 PM oracle.sysman.emcp.EMDBPostConfig performAddInstConfiguration
INFO: Database Control started successfully
Apr 30, 2012 11:13:20 PM oracle.sysman.emcp.EMDBPostConfig showClusterDBCAgentMessage
INFO:
****************  Current Configuration  ****************
 INSTANCE            NODE           DBCONTROL_UPLOAD_HOST
----------        ----------        ---------------------
racdb             racnode1          racnode1.idevelopment.info
racdb             racnode2          racnode1.idevelopment.info
racdb             racnode3          racnode1.idevelopment.info

Apr 30, 2012 11:13:20 PM oracle.sysman.emcp.EMDBPostConfig invoke
WARNING:
************************  WARNING  ************************

Management Repository has been placed in secure mode wherein Enterprise Manager data will be encrypted. The encryption key has been placed in the file: /u01/app/oracle/product/11.2.0/dbhome_1/racnode1_racdb/sysman/config/emkey.ora. Ensure this file is backed up as the encrypted data will become unusable if this file is lost.

***********************************************************
Enterprise Manager configuration completed successfully
FINISHED EMCA at Apr 30, 2012 11:13:20 PM

    

Figure 6: Oracle Enterprise Manager - (Database Console)

Extend Oracle ACFS Cluster File System to the New Node

The existing Oracle RAC is configured with Oracle ASM Cluster File System (Oracle ACFS) and Oracle ASM Dynamic Volume Manager (ADVM), which are used as a shared file system to store files maintained outside of the Oracle database. A new Oracle ASM disk group named DOCS was created along with a new Oracle ASM volume named docsvol1 in that disk group. Finally, a cluster file system was created for the new volume with a mount point of /oradocs on all Oracle RAC nodes. This cluster file system will now be extended to the new Oracle RAC node.

Verify the volume device(s) are externalized to the OS on the new node and appear dynamically as special file(s) in the /dev/asm directory.


[root@racnode3 ~]# ls -l /dev/asm/
total 0
brwxrwx--- 1 root asmadmin 252, 76289 Apr 26 18:44 docsvol1-149

Manually start the Oracle ASM volume driver on the new Oracle RAC node (if necessary).


[root@racnode3 ~]# /u01/app/11.2.0/grid/bin/acfsload start -s

Verify the modules were successfully loaded.


[root@racnode3 ~]# lsmod | grep oracle
oracleacfs           1670360  2
oracleadvm            260320  6
oracleoks             321904  2 oracleacfs,oracleadvm
oracleasm              84136  1

Configure the Oracle ASM volume driver to load automatically on system startup.


[root@racnode3 ~]# cat > /etc/init.d/acfsload <<EOF
#!/bin/sh
# chkconfig: 2345 30 21
# description: Load Oracle ASM volume driver on system startup
ORACLE_HOME=/u01/app/11.2.0/grid
export ORACLE_HOME
\$ORACLE_HOME/bin/acfsload start -s
EOF

[root@racnode3 ~]# chmod 755 /etc/init.d/acfsload

[root@racnode3 ~]# chkconfig --add acfsload

[root@racnode3 ~]# chkconfig --list | grep acfsload
acfsload        0:off   1:off   2:on    3:on    4:on    5:on    6:off

Verify the Oracle Grid Infrastructure 'ora.registry.acfs' resource exists.


[root@racnode3 ~]# su - grid -c crs_stat | grep acfs
NAME=ora.registry.acfs
TYPE=ora.registry.acfs.type

Copy the Oracle ACFS executables to /sbin and set the appropriate permissions. The Oracle ACFS executables are located in the GRID_HOME/install/usm/EL5/<ARCHITECTURE>/<KERNEL_VERSION>/<FULL_KERNEL_VERSION>/bin directory or in the /u01/app/11.2.0/grid/install/usm/cmds/bin directory (12 files) and include any file without the *.ko extension.


[root@racnode3 ~]# cd /u01/app/11.2.0/grid/install/usm/EL5/x86_64/2.6.18-8/2.6.18-8.el5-x86_64/bin
[root@racnode3 bin]# cp acfs* /sbin; chmod 755 /sbin/acfs*
[root@racnode3 bin]# cp advmutil* /sbin; chmod 755 /sbin/advmutil*
[root@racnode3 bin]# cp fsck.acfs* /sbin; chmod 755 /sbin/fsck.acfs*
[root@racnode3 bin]# cp mkfs.acfs* /sbin; chmod 755 /sbin/mkfs.acfs*
[root@racnode3 bin]# cp mount.acfs* /sbin; chmod 755 /sbin/mount.acfs*

[root@racnode3 ~]# cd /u01/app/11.2.0/grid/install/usm/cmds/bin
[root@racnode3 bin]# cp acfs* /sbin; chmod 755 /sbin/acfs*
[root@racnode3 bin]# cp advmutil* /sbin; chmod 755 /sbin/advmutil*
[root@racnode3 bin]# cp fsck.acfs* /sbin; chmod 755 /sbin/fsck.acfs*
[root@racnode3 bin]# cp mkfs.acfs* /sbin; chmod 755 /sbin/mkfs.acfs*
[root@racnode3 bin]# cp mount.acfs* /sbin; chmod 755 /sbin/mount.acfs*

Modify each of the Oracle ACFS shell scripts copied to the /sbin directory (above) to include the ORACLE_HOME for Grid Infrastructure. These scripts require access to certain Oracle shared libraries found in the Grid Infrastructure Oracle home. Since many of the Oracle ACFS shell scripts are executed as the root user account, the ORACLE_HOME environment variable is typically not set in that shell, which causes the executables to fail. For example:


[root@racnode1 ~]# /sbin/acfsutil registry
/sbin/acfsutil.bin: error while loading shared libraries: libhasgen11.so: cannot open shared object file: No such file or directory

An easy workaround to get past this error is to set the ORACLE_HOME environment variable for the Oracle Grid Infrastructure home in the Oracle ACFS shell scripts on all Oracle RAC nodes. The ORACLE_HOME should be set at the beginning of the file after the header comments as shown in the following example:


#!/bin/sh
#
# Copyright (c) 2001, 2009, Oracle and/or its affiliates. All rights reserved.
#

ORACLE_HOME=/u01/app/11.2.0/grid
ORA_CRS_HOME=%ORA_CRS_HOME%

if [ ! -d $ORA_CRS_HOME ]; then
    ORA_CRS_HOME=$ORACLE_HOME
fi
...

Add the ORACLE_HOME environment variable for the Oracle Grid Infrastructure home, as shown above, to each of the Oracle ACFS shell scripts copied to /sbin on all Oracle RAC nodes.
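A minimal sketch of making that change in one pass is shown below. The file list is an assumption based on the wrapper scripts copied to /sbin in the previous step; review each script before editing, since some of the copied files are binaries rather than shell scripts.


# Hypothetical helper: add the Grid Infrastructure ORACLE_HOME near the top
# of each ACFS wrapper script copied to /sbin (file list is an assumption).
for f in /sbin/acfsutil /sbin/advmutil /sbin/fsck.acfs /sbin/mkfs.acfs /sbin/mount.acfs; do
    # skip files that already define it
    grep -q '^ORACLE_HOME=/u01/app/11.2.0/grid' "$f" || \
        sed -i '1a ORACLE_HOME=/u01/app/11.2.0/grid' "$f"
done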

Verify the volume device(s).


[root@racnode3 ~]# ls -l /dev/asm
total 0
brwxrwx--- 1 root asmadmin 252, 76289 Apr 26 18:44 docsvol1-149

Create a directory that will be used to mount the new Oracle ACFS.


[root@racnode3 ~]# mkdir /oradocs

Mount the volume.


[root@racnode3 ~]# /bin/mount -t acfs /dev/asm/docsvol1-149 /oradocs

Verify that the cluster file system mounted properly.


[root@racnode3 ~]# mount
/dev/mapper/VolGroup00-LogVol00 on / type ext3 (rw)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
devpts on /dev/pts type devpts (rw,gid=5,mode=620)
/dev/mapper/VolGroup01-LogVol00 on /local1 type ext3 (rw)
/dev/sda1 on /boot type ext3 (rw)
tmpfs on /dev/shm type tmpfs (rw)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw)
oracleasmfs on /dev/oracleasm type oracleasmfs (rw)
/dev/asm/docsvol1-149 on /oradocs type acfs (rw)

Verify that the volume (and mount point) is registered in the Oracle ACFS mount registry so that Oracle Grid Infrastructure will mount and unmount volumes on startup and shutdown.


[root@racnode3 ~]# /sbin/acfsutil registry
Mount Object:
  Device: /dev/asm/docsvol1-149
  Mount Point: /oradocs
  Disk Group: DOCS
  Volume: DOCSVOL1
  Options: none
  Nodes: all
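If the volume had not already been registered, it could be added to the Oracle ACFS mount registry with acfsutil registry. The following is a hedged example using the device and mount point from this guide; the registry is cluster-wide, so this only needs to be done once.


[root@racnode3 ~]# /sbin/acfsutil registry -a /dev/asm/docsvol1-149 /oradocs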

This concludes adding a node to an existing Oracle RAC.

About the Author

Jeffrey Hunter is an Oracle Certified Professional, Java Development Certified Professional, Author, and an Oracle ACE. Jeff currently works as a Senior Database Administrator for The DBA Zone, Inc. located in Pittsburgh, Pennsylvania. His work includes advanced performance tuning, Java and PL/SQL programming, developing high availability solutions, capacity planning, database security, and physical / logical database design in a UNIX / Linux server environment. Jeff's other interests include mathematical encryption theory, tutoring advanced mathematics, programming language processors (compilers and interpreters) in Java and C, LDAP, writing web-based database administration tools, and of course Linux. He has been a Sr. Database Administrator and Software Engineer for over 20 years and maintains his own website at http://www.iDevelopment.info. Jeff graduated from Stanislaus State University in Turlock, California, with a Bachelor's degree in Computer Science and Mathematics.



Copyright (c) 1998-2014 Jeffrey M. Hunter. All rights reserved.

All articles, scripts and material located at the Internet address of http://www.idevelopment.info are the copyright of Jeffrey M. Hunter and are protected under copyright laws of the United States. This document may not be hosted on any other site without my express, prior, written permission. Application to host any of the material elsewhere can be made by contacting me at jhunter@idevelopment.info.

I have made every effort and taken great care in making sure that the material included on my web site is technically accurate, but I disclaim any and all responsibility for any loss, damage or destruction of data or any other property which may arise from relying on it. I will in no case be liable for any monetary damages arising from such loss, damage or destruction.

Last modified on
Thursday, 14-Jun-2012 23:27:25 EDT