DBA Tips Archive for Oracle

  


Building an Inexpensive Oracle RAC 10g R2 on Linux - (CentOS 4.2 / FireWire)

by Jeff Hunter, Sr. Database Administrator


Contents

  1. Overview
  2. Oracle RAC 10g Overview
  3. Shared-Storage Overview
  4. FireWire Technology
  5. Hardware & Costs
  6. Install the Linux Operating System
  7. Network Configuration
  8. Obtain & Install FireWire Modules
  9. Create "oracle" User and Directories
  10. Create Partitions on the Shared FireWire Storage Device
  11. Configure the Linux Servers for Oracle
  12. Configure the "hangcheck-timer" Kernel Module
  13. Configure RAC Nodes for Remote Access
  14. All Startup Commands for Each RAC Node
  15. Check RPM Packages for Oracle10g Release 2
  16. Install & Configure Oracle Cluster File System (OCFS2)
  17. Install & Configure Automatic Storage Management (ASMLib 2.0)
  18. Download Oracle10g RAC Software
  19. Install Oracle10g Clusterware Software
  20. Install Oracle10g Database Software
  21. Install Oracle10g Companion CD Software
  22. Create TNS Listener Process
  23. Create the Oracle Cluster Database
  24. Verify TNS Networking Files
  25. Create / Alter Tablespaces
  26. Verify the RAC Cluster & Database Configuration
  27. Starting / Stopping the Cluster
  28. Transparent Application Failover - (TAF)
  29. Conclusion
  30. Acknowledgements
  31. About the Author



Overview

One of the most efficient ways to become familiar with Oracle10g Real Application Clusters (RAC) technology is to have access to an actual Oracle10g RAC cluster. In learning this new technology, you will soon start to realize the benefits Oracle10g RAC has to offer, such as fault tolerance, new levels of security, load balancing, and the ease of upgrading capacity. The problem, though, is the price of the hardware required for a typical production RAC configuration. A small two-node cluster, for example, could run anywhere from $10,000 to well over $20,000. This does not even include the heart of a production RAC environment, the shared storage. In most cases, this would be a Storage Area Network (SAN), which generally starts at $10,000.

For those who simply want to become familiar with Oracle10g RAC, this article provides a low-cost alternative: configuring an Oracle10g RAC system using commercial off-the-shelf components and downloadable software, at an estimated cost of $1200 to $1800. The system will consist of a dual node cluster (each node with a single processor), both running Linux (CentOS 4.2 or Red Hat Enterprise Linux 4 Update 2), Oracle10g Release 2, OCFS2, and ASMLib 2.0, with shared disk storage based on IEEE1394 (FireWire) drive technology. (Of course, you could also consider building a virtual cluster on a VMware Virtual Machine, but the experience won't quite be the same!)

  This article marks the last in a series that uses FireWire technology as the shared storage medium for building an inexpensive Oracle10g RAC system. Future releases of this article will adopt iSCSI; more specifically, they will build a network storage server using Openfiler. Powered by rPath Linux, Openfiler is a free browser-based network storage management utility that delivers file-based Network Attached Storage (NAS) and block-based Storage Area Networking (SAN) in a single framework. Openfiler supports CIFS, NFS, HTTP/DAV, and FTP; however, I will only be making use of its iSCSI capabilities to implement an inexpensive SAN for the shared storage component required by Oracle10g RAC.

Please note that this is not the only way to build a low-cost Oracle10g RAC system. I have worked on other solutions that utilize an implementation based on SCSI rather than FireWire for shared storage. In most cases, SCSI will cost more than a FireWire solution, where an inexpensive SCSI configuration will consist of:

Keep in mind that some motherboards may already include built-in SCSI controllers.

It is important to note that the FireWire configuration described in this article should never be run in a production environment and that it is not supported by Oracle or any other vendor. In a production environment, fibre channel — the high-speed serial-transfer interface that can connect systems and storage devices in either point-to-point or switched topologies — is the technology of choice. FireWire offers a low-cost alternative to fibre channel for testing and development, but it is not ready for production.

Although in past articles I used raw partitions for storing files on shared storage, here we will make use of the Oracle Cluster File System V2 (OCFS2) and Oracle Automatic Storage Management (ASM). The two Linux servers will be configured as follows:

Oracle Database Files
RAC Node Name Instance Name Database Name $ORACLE_BASE File System / Volume Manager for DB Files
linux1 orcl1 orcl /u01/app/oracle Automatic Storage Management (ASM)
linux2 orcl2 orcl /u01/app/oracle Automatic Storage Management (ASM)
Oracle Clusterware Shared Files
File Type File Name Partition Mount Point File System
Oracle Cluster Registry (OCR) /u02/oradata/orcl/OCRFile /dev/sda1 /u02/oradata/orcl Oracle's Cluster File System, Release 2 (OCFS2)
Voting Disk /u02/oradata/orcl/CSSFile /dev/sda1 /u02/oradata/orcl Oracle's Cluster File System, Release 2 (OCFS2)

  With Oracle Database 10g Release 2 (10.2), Cluster Ready Services, or CRS, is now called Oracle Clusterware.

The Oracle Clusterware software will be installed to /u01/app/oracle/product/crs on each of the nodes that make up the RAC cluster. However, the Clusterware software requires that two of its files, the "Oracle Cluster Registry (OCR)" file and the "Voting Disk" file, be shared with all nodes in the cluster. These two files will be installed on shared storage using Oracle's Cluster File System, Release 2 (OCFS2). It is possible (but not recommended by Oracle) to use RAW devices for these files; however, it is not possible to use ASM for these two Clusterware files.

  Starting with Oracle Database 10g Release 2 (10.2), Oracle Clusterware should be installed in a separate Oracle Clusterware home directory which is non-release specific. This is a change to the Optimal Flexible Architecture (OFA) rules. You should not install Oracle Clusterware in a release-specific Oracle home mount point (/u01/app/oracle/product/10.2.0/... for example), as succeeding versions of Oracle Clusterware will overwrite the Oracle Clusterware installation in the same path. Also, if Oracle Clusterware 10g Release 2 (10.2) detects an existing Oracle Cluster Ready Services installation, it overwrites the existing installation in the same path.

The Oracle10g Release 2 Database software will be installed into a separate Oracle Home; namely /u01/app/oracle/product/10.2.0/db_1 on each of the nodes that make up the RAC cluster. All of the Oracle physical database files (data, online redo logs, control files, archived redo logs), will be installed to different partitions of the shared drive being managed by Automatic Storage Management (ASM).
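
To help keep these two software locations straight, here is a rough sketch of the environment variables they imply for the "oracle" user. The variable names ORA_CRS_HOME and ORACLE_SID shown below are common conventions and only an assumption at this point; the actual "oracle" user environment is configured later in this article.

export ORACLE_BASE=/u01/app/oracle
export ORA_CRS_HOME=$ORACLE_BASE/product/crs           # Oracle Clusterware home (non-release specific)
export ORACLE_HOME=$ORACLE_BASE/product/10.2.0/db_1    # Oracle Database home
export ORACLE_SID=orcl1                                # use orcl2 on the second node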

  The Oracle database files could have just as well been stored on the Oracle Cluster File System (OCFS2). Using ASM, however, makes the article that much more interesting!

  This article is only designed to work as documented with absolutely no substitutions!

If you are interested in configuring the same type of configuration for Oracle10g Release 1, please see my article entitled "Building an Inexpensive Oracle RAC 10g Release 1 Configuration on Linux - (WBEL 3.0)".

If you are interested in configuring the same type of configuration for Oracle9i, please see my article entitled "Building an Inexpensive Oracle RAC 9i Configuration on Linux".



Oracle RAC 10g Overview

Oracle RAC, introduced with Oracle9i, is the successor to Oracle Parallel Server (OPS). RAC allows multiple instances to access the same database (storage) simultaneously. It provides fault tolerance, load balancing, and performance benefits by allowing the system to scale out; at the same time, because all nodes access the same database, the failure of one instance will not cause the loss of access to the database.

At the heart of Oracle10g RAC is a shared disk subsystem. All nodes in the cluster must be able to access all of the data, redo log files, control files and parameter files for all nodes in the cluster. The data disks must be globally available in order to allow all nodes to access the database. Each node has its own redo log file(s) and UNDO tablespace, but the other nodes must be able to access them (and the shared control file) in order to recover that node in the event of a system failure.

The biggest difference between Oracle RAC and OPS is the addition of Cache Fusion. With OPS, a request for data from one node to another required the data to be written to disk first; only then could the requesting node read that data. With Cache Fusion, data is passed along a high-speed interconnect using a sophisticated locking algorithm.

Not all clustering solutions use shared storage. Some vendors use an approach known as a Federated Cluster, in which data is spread across several machines rather than shared by all. With Oracle10g RAC, however, multiple nodes use the same set of disks for storing data. With Oracle10g RAC, the data files, redo log files, control files, and archived log files reside on shared storage on raw-disk devices, a NAS, ASM, or on a clustered file system. Oracle's approach to clustering leverages the collective processing power of all the nodes in the cluster and at the same time provides failover security.

Pre-configured Oracle10g RAC solutions are available from vendors such as Dell, IBM and HP for production environments. This article, however, focuses on putting together your own Oracle10g RAC environment for development and testing by using Linux servers and a low-cost shared disk solution: FireWire.

For more background about Oracle RAC, visit the Oracle RAC Product Center on OTN.



Shared-Storage Overview

Today, fibre channel is one of the most popular solutions for shared storage. As mentioned earlier, fibre channel is a high-speed serial-transfer interface that is used to connect systems and storage devices in either point-to-point or switched topologies. Protocols supported by fibre channel include SCSI and IP. Fibre channel configurations can support as many as 127 nodes and have a throughput of up to 2.12 gigabits per second. Fibre channel, however, is very expensive. Just the fibre channel switch alone can start at around $1000. This does not even include the fibre channel storage array and high-end drives, which can reach prices of about $300 for a 36GB drive. A typical fibre channel setup that includes fibre channel cards for the servers runs roughly $10,000, and that does not include the cost of the servers that make up the cluster.

A less expensive alternative to fibre channel is SCSI. SCSI technology provides acceptable performance for shared storage, but for administrators and developers who are used to GPL-based Linux prices, even SCSI can come in over budget, at around $2,000 to $5,000 for a two-node cluster.

Another popular solution is the Sun NFS (Network File System) found on a NAS. It can be used for shared storage but only if you are using a network appliance or something similar. Specifically, you need servers that guarantee direct I/O over NFS, TCP as the transport protocol, and read/write block sizes of 32K.

The shared storage that will be used for this article is based on IEEE1394 (FireWire) drive technology. FireWire is able to offer a low-cost alternative to Fibre Channel for testing and development, but should never be used in a production environment.



FireWire Technology

Developed by Apple Computer and Texas Instruments, FireWire is a cross-platform implementation of a high-speed serial data bus. With its high bandwidth, long distances (up to 100 meters in length) and high-powered bus, FireWire is being used in applications such as digital video (DV), professional audio, hard drives, high-end digital still cameras and home entertainment devices. Today, FireWire operates at transfer rates of up to 800 megabits per second, while next-generation FireWire calls for a theoretical bit rate of 1600 Mbps and then up to a staggering 3200 Mbps. That's 3.2 gigabits per second. This will make FireWire indispensable for transferring massive data files and for even the most demanding video applications, such as working with uncompressed high-definition (HD) video or multiple standard-definition (SD) video streams.

The following chart shows speed comparisons of the various types of disk interfaces. For each interface, I provide the maximum transfer rate in kilobits (Kb), kilobytes (KB), megabits (Mb), megabytes (MB), gigabits (Gb), and gigabytes (GB) per second. As you can see, the capabilities of IEEE1394 compare very favorably with the other disk interface and network technologies available today.

Disk Interface / Network / BUS    Speed (maximum transfer rate per second)
    Kb    KB    Mb    MB    Gb    GB
Serial 115 14.375 0.115 0.014    
Parallel (standard) 920 115 0.92 0.115    
10Base-T Ethernet     10 1.25    
IEEE 802.11b wireless Wi-Fi (2.4 GHz band)     11 1.375    
USB 1.1     12 1.5    
Parallel (ECP/EPP)     24 3    
SCSI-1     40 5    
IEEE 802.11g wireless WLAN (2.4 GHz band)     54 6.75    
SCSI-2 (Fast SCSI / Fast Narrow SCSI)     80 10    
100Base-T Ethernet (Fast Ethernet)     100 12.5    
ATA/100 (parallel)     100 12.5    
IDE     133.6 16.7    
Fast Wide SCSI (Wide SCSI)     160 20    
Ultra SCSI (SCSI-3 / Fast-20 / Ultra Narrow)     160 20    
Ultra IDE     264 33    
Wide Ultra SCSI (Fast Wide 20)     320 40    
Ultra2 SCSI     320 40    
FireWire 400 - (IEEE1394a)     400 50    
USB 2.0     480 60    
Wide Ultra2 SCSI     640 80    
Ultra3 SCSI     640 80    
FireWire 800 - (IEEE1394b)     800 100    
Gigabit Ethernet     1000 125 1  
PCI - (33 MHz / 32-bit)     1064 133 1.064  
Serial ATA I - (SATA I)     1200 150 1.2  
Wide Ultra3 SCSI     1280 160 1.28  
Ultra160 SCSI     1280 160 1.28  
PCI - (33 MHz / 64-bit)     2128 266 2.128  
PCI - (66 MHz / 32-bit)     2128 266 2.128  
AGP 1x - (66 MHz / 32-bit)     2128 266 2.128  
Serial ATA II - (SATA II)     2400 300 2.4  
Ultra320 SCSI     2560 320 2.56  
FC-AL Fibre Channel     3200 400 3.2  
PCI-Express x1 - (bidirectional)     4000 500 4  
PCI - (66 MHz / 64-bit)     4256 532 4.256  
AGP 2x - (133 MHz / 32-bit)     4264 533 4.264  
Serial ATA III - (SATA III)     4800 600 4.8  
PCI-X - (100 MHz / 64-bit)     6400 800 6.4  
PCI-X - (133 MHz / 64-bit)       1064 8.512 1
AGP 4x - (266 MHz / 32-bit)       1066 8.528 1
10G Ethernet - (IEEE 802.3ae)       1250 10 1.25
PCI-Express x4 - (bidirectional)       2000 16 2
AGP 8x - (533 MHz / 32-bit)       2133 17.064 2.1
PCI-Express x8 - (bidirectional)       4000 32 4
PCI-Express x16 - (bidirectional)       8000 64 8



Hardware & Costs

The hardware used to build our example Oracle10g RAC environment consists of two Linux servers and components that can be purchased at any local computer store or over the Internet.

Server 1 - (linux1)
  Dimension 2400 Series
     - Intel Pentium 4 Processor at 2.80GHz
     - 1GB DDR SDRAM (at 333MHz)
     - 40GB 7200 RPM Internal Hard Drive
     - Integrated Intel 3D AGP Graphics
     - Integrated 10/100 Ethernet - (Broadcom BCM4401)
     - CDROM (48X Max Variable)
     - 3.5" Floppy
     - No monitor (Already had one)
     - USB Mouse and Keyboard
$620
  1 - Ethernet LAN Card

Each Linux server should contain two NIC adapters. The Dell Dimension includes an integrated 10/100 Ethernet adapter that will be used to connect to the public network. The second NIC adapter will be used for the private interconnect.

       Linksys 10/100 Mbps - (LNE100TX) - (Used for Interconnect to linux2)

$20
  1 - FireWire Card

The following is a list of FireWire I/O cards that contain the correct chipset, allow for multiple logins, and should work with this article (no guarantees however). FireWire I/O cards with chipsets made by Texas Instruments (TI) or VIA Technologies (VIA) are known to work.

     FireWire 400

       Belkin FireWire 3-Port 1394 PCI Card - (F5U501-APL)
       SIIG 3-Port 1394 I/O Card - (NN-300012)
       SIIG FireWire 800 PCI-32T host adapter - (NN-830112)
       StarTech 4 Port IEEE-1394 PCI Firewire Card - (PCI1394_4)
       Adaptec FireConnect 4300 FireWire PCI Card - (1890600)

     FireWire 800

       SIIG FireWire 800 PCI-32T host adapter - (NN-830112)
       LaCie FireWire 800 PCI Card - (107755) - See warning below!

  Warning: I was unable to obtain concurrent logins to the FireWire drive when using the LaCie FireWire 800 PCI Card - (107755) in both nodes. To resolve this, I used two different FireWire PCI cards: the LaCie FireWire 800 PCI Card - (107755) in one node and the SIIG FireWire 800 PCI-32T host adapter - (NN-830112) in the second node. Please refer to the section Troubleshooting Concurrent Logins to the FireWire Drive for further details.

$30
Server 2 - (linux2)
  Dimension 2400 Series
     - Intel Pentium 4 Processor at 2.80GHz
     - 1GB DDR SDRAM (at 333MHz)
     - 40GB 7200 RPM Internal Hard Drive
     - Integrated Intel 3D AGP Graphics
     - Integrated 10/100 Ethernet - (Broadcom BCM4401)
     - CDROM (48X Max Variable)
     - 3.5" Floppy
     - No monitor (Already had one)
     - USB Mouse and Keyboard
$620
  1 - Ethernet LAN Card

Each Linux server should contain two NIC adapters. The Dell Dimension includes an integrated 10/100 Ethernet adapter that will be used to connect to the public network. The second NIC adapter will be used for the private interconnect.

       Linksys 10/100 Mbps - (LNE100TX) - (Used for Interconnect to linux1)

$20
  1 - FireWire Card

The following is a list of FireWire I/O cards that contain the correct chipset, allow for multiple logins, and should work with this article (no guarantees however). FireWire I/O cards with chipsets made by Texas Instruments (TI) or VIA Technologies (VIA) are known to work.

     FireWire 400

       Belkin FireWire 3-Port 1394 PCI Card - (F5U501-APL)
       SIIG 3-Port 1394 I/O Card - (NN-300012)
       SIIG FireWire 800 PCI-32T host adapter - (NN-830112)
       StarTech 4 Port IEEE-1394 PCI Firewire Card - (PCI1394_4)
       Adaptec FireConnect 4300 FireWire PCI Card - (1890600)

     FireWire 800

       SIIG FireWire 800 PCI-32T host adapter - (NN-830112)
       LaCie FireWire 800 PCI Card - (107755) - See warning below!

  Warning: I was unable to obtain concurrent logins to the FireWire drive when using the LaCie FireWire 800 PCI Card - (107755) in both nodes. To resolve this, I used two different FireWire PCI cards: the LaCie FireWire 800 PCI Card - (107755) in one node and the SIIG FireWire 800 PCI-32T host adapter - (NN-830112) in the second node. Please refer to the section Troubleshooting Concurrent Logins to the FireWire Drive for further details.

$30
Miscellaneous Components
  FireWire Hard Drive

The following is a list of FireWire drives (and enclosures) that contain the correct chipset, allow for multiple logins, and should work with this article (no guarantees however):

     FireWire 400

       Maxtor OneTouch III - 500GB FireWire 400/USB 2.0 Drive - (F01G500)
       Maxtor OneTouch III - 300GB FireWire 400/USB 2.0 Drive - (F01G300)

       Maxtor OneTouch II 300GB USB 2.0 / IEEE 1394a External Hard Drive - (E01G300)
       Maxtor OneTouch II 250GB USB 2.0 / IEEE 1394a External Hard Drive - (E01G250)
       Maxtor OneTouch II 200GB USB 2.0 / IEEE 1394a External Hard Drive - (E01A200)

       LaCie Hard Drive, Design by F.A. Porsche 250GB, FireWire 400 - (300703U)
       LaCie Hard Drive, Design by F.A. Porsche 160GB, FireWire 400 - (300702U)
       LaCie Hard Drive, Design by F.A. Porsche 80GB, FireWire 400 - (300699U)

       Dual Link Drive Kit, FireWire Enclosure, ADS Technologies - (DLX185)
           Maxtor Ultra 200GB ATA-133 (Internal) Hard Drive - (L01P200)

       Maxtor OneTouch 250GB USB 2.0 / IEEE 1394a External Hard Drive - (A01A250)
       Maxtor OneTouch 200GB USB 2.0 / IEEE 1394a External Hard Drive - (A01A200)

     FireWire 800

       LaCie d2 Hard Drive Extreme with Triple Interface 500GB - (300793U)
       LaCie d2 Hard Drive Extreme with Triple Interface 300GB - (300791U)
       LaCie d2 Hard Drive Extreme with Triple Interface 250GB - (300790U)
       LaCie d2 Hard Drive Extreme with Triple Interface 160GB - (301033U)

       FireWire 800+USB 2.0 3.5" Enclosure, SIIG - (NN-HDK112-S1)
           Maxtor Ultra 200GB ATA-133 (Internal) Hard Drive - (L01P200)

       Maxtor OneTouch II - 500GB FireWire 800 & USB Drive - (E01W500)
       Maxtor OneTouch II - 300GB FireWire 800 & USB Drive - (E01W300)
       Maxtor OneTouch II - 200GB FireWire 800 & USB Drive - (E01V200)

       Ensure that the FireWire drive that you purchase supports multiple logins. If the drive has a chipset that does not allow for concurrent access from more than one server, the disk and its partitions can only be seen by one server at a time. Disks with the Oxford 911 chipset (FireWire 400), Oxford 912 chipset (FireWire 800), or Oxford 922 chipset (FireWire 800) are known to work. Note that the Oxford 912 chipset is newer and faster than Oxford 922. Here are the details about the disk that I purchased for this test:
  Vendor: Maxtor
  Model: OneTouch II
  Mfg. Part No. or KIT No.: E01G300
  Capacity: 300 GB
  Cache Buffer: 16 MB
  Rotational Speed (rpm): 7200 RPM
  Interface Transfer Rate: 400 Mbits/s
  "Combo" Interface: IEEE 1394 / USB 2.0 and USB 1.1 compatible

$280
  1 - Extra FireWire Cable

Each node in the RAC configuration will need to connect to the shared storage device (the FireWire hard drive). The FireWire hard drive will come supplied with one FireWire cable. You will need to purchase one additional FireWire cable to connect the second node to the shared storage. Select the appropriate FireWire cable that is compatible with the data transmission speed (FireWire 400 / FireWire 800) and the desired cable length.

     FireWire 400

       Belkin 6-pin to 6-pin 1394 Cable, 3 ft. - (F3N400-03-ICE)
       Belkin 6-pin to 6-pin 1394 Cable, 14 ft. - (F3N400-14-ICE)

     FireWire 800

       SIIG FireWire 800 9-pin to 9-pin Cable, 2.0 Meters - (CB-899011)
       Belkin 9-Pin to 9-Pin FireWire 800 Cable, 6 ft. - (F3N405-06-APL)
       Belkin 9-Pin to 9-Pin FireWire 800 Cable, 14 ft. - (F3N405-14-APL)

$20
  1 - Ethernet hub or switch

Used for the interconnect between linux1-priv and linux2-priv. A question I often receive is about substituting the Ethernet switch (used for interconnect linux1-priv / linux2-priv) with a crossover CAT5 cable. I would not recommend this. I have found that when using a crossover CAT5 cable for the interconnect, whenever I took one of the PCs down, the other PC would detect a "cable unplugged" error, and thus the Cache Fusion network would become unavailable.

       Linksys EtherFast 10/100 5-port Ethernet Switch - (EZXS55W)

$25
  4 - Network Cables

       Category 5e patch cable - (Connect linux1 to public network)
       Category 5e patch cable - (Connect linux2 to public network)
       Category 5e patch cable - (Connect linux1 to interconnect ethernet switch)
       Category 5e patch cable - (Connect linux2 to interconnect ethernet switch)

$5
$5
$5
$5

Total     $1685  

  I have received several emails since posting this article asking if the Maxtor OneTouch external drive (and the other external hard drives I have listed) has two IEEE1394 (FireWire) ports. All of the drives that I have listed and tested do have two IEEE1394 ports located on the back of the drive.


We are about to start the installation process. Now that we have talked about the hardware that will be used in this example, let's take a conceptual look at what the environment would look like:


Figure 1: Oracle10g Release 2 Testing RAC Configuration

As we start to go into the details of the installation, it should be noted that most of the tasks within this document will need to be performed on both servers. I will indicate at the beginning of each section whether the task(s) should be performed on both nodes or on a single node.



Install the Linux Operating System


  Perform the following installation on all nodes in the cluster!

After procuring the required hardware, it is time to start the configuration process. The first task we need to perform is to install the Linux operating system. As already mentioned, this article will use CentOS 4.2. Although I have used Red Hat Fedora in the past, I wanted to switch to a Linux environment that would guarantee all of the functionality required by Oracle. This is where CentOS comes in. The CentOS Enterprise Linux project takes the Red Hat Enterprise Linux 4 source RPMs and compiles them into a free clone of the Red Hat Enterprise Server 4 product. This provides a free and stable version of the Red Hat Enterprise Linux 4 (AS/ES) operating environment that I can now use for testing different Oracle configurations. Over the last several months, I have been moving away from Fedora as I need a stable environment that is not only free, but as close to the actual Oracle-supported operating system as possible. While CentOS is not the only project providing this functionality, I tend to stick with it as it is stable and reacts quickly to updates released by Red Hat.


Downloading CentOS Enterprise Linux

Use the links (below) to download CentOS Enterprise Linux 4.2. After downloading CentOS, you will then want to burn each of the ISO images to CD.

  CentOS Enterprise Linux

  If you are downloading the above ISO files to a MS Windows machine, there are many options for burning these images (ISO files) to a CD. You may already be familiar with and have the proper software to burn images to CD. If you are not familiar with this process and do not have the required software to burn images to CD, here are just two (of many) software packages that can be used:

  UltraISO
  Magic ISO Maker


Installing CentOS Enterprise Linux

This section provides a summary of the screens used to install CentOS Enterprise Linux. For more detailed installation instructions, it is possible to use the manuals from Red Hat Linux http://www.redhat.com/docs/manuals/. I would suggest, however, that the instructions I have provided below be used for this Oracle10g RAC configuration.

  Before installing the Linux operating system on both nodes, you should have the FireWire and two NIC interfaces (cards) installed.

Also, before starting the installation, ensure that the FireWire drive (our shared storage drive) is NOT connected to either of the two servers. You may also choose to connect both servers to the FireWire drive and simply turn the power off to the drive.

Although none of this is mandatory, it is how I will be performing the installation and configuration for this article.

After downloading and burning the CentOS images (ISO files) to CD, insert CentOS Disk #1 into the first server (linux1 in this example), power it on, and answer the installation screen prompts as noted below. After completing the Linux installation on the first node, perform the same Linux installation on the second node, substituting the node name linux2 for linux1 and using different IP addresses where appropriate.

Boot Screen

The first screen is the CentOS Enterprise Linux boot screen. At the boot: prompt, hit [Enter] to start the installation process.
Media Test
When asked to test the CD media, tab over to [Skip] and hit [Enter]. If there were any errors, the media burning software would have warned us. After several seconds, the installer should then detect the video card, monitor, and mouse. The installer then goes into GUI mode.
Welcome to CentOS Enterprise Linux
At the welcome screen, click [Next] to continue.
Language / Keyboard Selection
The next two screens prompt you for the Language and Keyboard settings. Make the appropriate selections for your configuration.
Installation Type
Choose the [Custom] option and click [Next] to continue.
Disk Partitioning Setup
Select [Automatically partition] and click [Next] to continue.

If there was a previous installation of Linux on this machine, the next screen will ask if you want to "remove" or "keep" old partitions. Select the option to [Remove all partitions on this system]. Also, ensure that the [hda] drive is selected for this installation. I also keep the checkbox [Review (and modify if needed) the partitions created] selected. Click [Next] to continue.

You will then be prompted with a dialog window asking if you really want to remove all partitions. Click [Yes] to acknowledge this warning.

Partitioning
The installer will then allow you to view (and modify if needed) the disk partitions it automatically selected. In almost all cases, the installer will choose 100MB for /boot, double the amount of RAM for swap, and the rest going to the root (/) partition. I like to have a minimum of 1GB for swap. For the purpose of this install, I will accept all automatically preferred sizes. (Including 2GB for swap since I have 1GB of RAM installed.)

Starting with RHEL 4, the installer will create the same disk configuration as just noted, but will create it using the Logical Volume Manager (LVM). For example, it will partition the first hard drive (/dev/hda for my configuration) into two partitions - one for the /boot partition (/dev/hda1) and the remainder of the disk dedicated to an LVM volume group named VolGroup00 (/dev/hda2). The LVM volume group (VolGroup00) is then partitioned into two LVM logical volumes - one for the root file system (/) and another for swap. I basically check that it created at least 1GB of swap. Since I have 1GB of RAM installed, the installer created 2GB of swap. That said, I just accept the default disk layout.
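
Once the install finishes and the system is up, the resulting layout can be confirmed with a few standard commands. This is just an optional sanity check; the exact output will vary with your disk size:

# df -h        - (root (/) and /boot file systems)
# swapon -s    - (swap partition and size)
# lvscan       - (LVM logical volumes created by the installer)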

Boot Loader Configuration
The installer will use the GRUB boot loader by default. To use the GRUB boot loader, accept all default values and click [Next] to continue.
Network Configuration
I made sure to install both NIC interfaces (cards) in each of the Linux machines before starting the operating system installation. This screen should have successfully detected each of the network devices.

First, make sure that each of the network devices is checked [Active on boot]. The installer may choose to not activate eth1.

Second, [Edit] both eth0 and eth1 as follows. You may choose to use different IP addresses for both eth0 and eth1 and that is OK. If possible, try to put eth1 (the interconnect) on a different subnet than eth0 (the public network):

eth0:
- Check OFF the option to [Configure using DHCP]
- Leave the [Activate on boot] checked ON
- IP Address: 192.168.1.100
- Netmask: 255.255.255.0

eth1:
- Check OFF the option to [Configure using DHCP]
- Leave the [Activate on boot] checked ON
- IP Address: 192.168.2.100
- Netmask: 255.255.255.0

Continue by setting your hostname manually. I used "linux1" for the first node and "linux2" for the second. Finish this dialog off by supplying your gateway and DNS servers.

Firewall
On this screen, make sure to select [No firewall] and click [Next] to continue. You may be prompted with a warning dialog about not setting the firewall. If this occurs, simply hit [Proceed] to continue.
Additional Language Support / Time Zone
The next two screens allow you to select additional language support and time zone information. In almost all cases, you can accept the defaults.
Set Root Password
Select a root password and click [Next] to continue.
Package Group Selection
Scroll down to the bottom of this screen and select [Everything] under the "Miscellaneous" section. Click [Next] to continue.

Please note that the installation of Oracle does not require all Linux packages to be installed. My decision to install all packages was for the sake of brevity. Please see section "Check RPM Packages for Oracle10g Release 2" for a more detailed look at the critical packages required for a successful Oracle installation.

Also note that with some RHEL 4 distributions, you will not get the "Package Group Selection" screen by default. There, you are asked to simply "Install default software packages" or "Customize software packages to be installed". Select the option to "Customize software packages to be installed" and click [Next] to continue. This will then bring up the "Package Group Selection" screen. Now, scroll down to the bottom of this screen and select [Everything] under the "Miscellaneous" section. Click [Next] to continue.
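
If you would rather not install [Everything], it is worth spot-checking a few of the packages Oracle typically needs once the install is done. The package names below are only examples I am assuming here; the complete required list is covered in the "Check RPM Packages for Oracle10g Release 2" section:

# rpm -q gcc make binutils libaio compat-libstdc++-33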

About to Install
This screen is basically a confirmation screen. Click [Next] to start the installation. During the installation process, you will be asked to switch disks to Disk #2, Disk #3, and then Disk #4. Click [Continue] to start the installation process.

Note that with CentOS 4.2, the installer will ask to switch to Disk #2, Disk #3, Disk #4, Disk #1, and then back to Disk #4.

Graphical Interface (X) Configuration
With most RHEL 4 distributions (not the case with CentOS 4.2), when the installation is complete, the installer will attempt to detect your video hardware. Ensure that the installer has detected and selected the correct video hardware (graphics card and monitor) to properly use the X Windows server. You will continue with the X configuration in the next several screens.
Congratulations
And that's it. You have successfully installed CentOS Enterprise Linux on the first node (linux1). The installer will eject the CD from the CD-ROM drive. Take out the CD and click [Exit] to reboot the system.

When the system boots into Linux for the first time, it will prompt you with another Welcome screen. The following wizard allows you to configure the date and time, add any additional users, test the sound card, and install any additional CDs. The only screen I care about is the time and date (and, if you are using CentOS 4.x, the monitor/display settings). As for the others, simply run through them as there is nothing additional that needs to be installed (at this point anyway!). If everything was successful, you should now be presented with the login screen.

Perform the same installation on the second node
After completing the Linux installation on the first node, repeat the above steps for the second node (linux2). When configuring the machine name and networking, be sure to configure the proper values. For my installation, this is what I configured for linux2:

First, make sure that each of the network devices is checked [Active on boot]. The installer may choose to not activate eth1.

Second, [Edit] both eth0 and eth1 as follows:

eth0:
- Check OFF the option to [Configure using DHCP]
- Leave the [Activate on boot] checked ON
- IP Address: 192.168.1.101
- Netmask: 255.255.255.0

eth1:
- Check OFF the option to [Configure using DHCP]
- Leave the [Activate on boot] checked ON
- IP Address: 192.168.2.101
- Netmask: 255.255.255.0

Continue by setting your hostname manually. I used "linux2" for the second node. Finish this dialog off by supplying your gateway and DNS servers.



Network Configuration


  Perform the following network configuration on all nodes in the cluster!

  Although we configured several of the network settings during the installation of CentOS Enterprise Linux, it is important to not skip this section as it contains critical steps that are required for a successful RAC environment.


Introduction to Network Settings

During the Linux O/S install, we already configured the IP address and host name for each of the nodes. We now need to configure the /etc/hosts file as well as adjust several of the network settings for the interconnect.

Each node should have one static IP address for the public network and one static IP address for the private cluster interconnect. The private interconnect should only be used by Oracle to transfer Cluster Manager and Cache Fusion related data. Note that Oracle does not support using the public network interface for the interconnect. You must have one network interface for the public network and another network interface for the private interconnect. For a production RAC implementation, the interconnect should be at least gigabit or more and only be used by Oracle.


Configuring Public and Private Network

In our two node example, we need to configure the network on both nodes for access to the public network as well as their private interconnect.

The easiest way to configure network settings in Red Hat Linux is with the program Network Configuration. This application can be started from the command-line as the "root" user account as follows:

# su -
# /usr/bin/system-config-network &

  Do not use DHCP naming for the public IP address or the interconnects - we need static IP addresses!

Using the Network Configuration application, you need to configure both NIC devices as well as the /etc/hosts file. Both of these tasks can be completed using the Network Configuration GUI. Notice that the /etc/hosts entries are the same for both nodes.

Our example configuration will use the following settings:

Server 1 - (linux1)
Device IP Address Subnet Gateway Purpose
eth0 192.168.1.100 255.255.255.0 192.168.1.1 Connects linux1 to the public network
eth1 192.168.2.100 255.255.255.0   Connects linux1 (interconnect) to linux2 (linux2-priv)
/etc/hosts
127.0.0.1        localhost      loopback

# Public Network - (eth0)
192.168.1.100    linux1
192.168.1.101    linux2

# Private Interconnect - (eth1)
192.168.2.100    linux1-priv
192.168.2.101    linux2-priv

# Public Virtual IP (VIP) addresses for - (eth0)
192.168.1.200    linux1-vip
192.168.1.201    linux2-vip

Server 2 - (linux2)
Device IP Address Subnet Gateway Purpose
eth0 192.168.1.101 255.255.255.0 192.168.1.1 Connects linux2 to the public network
eth1 192.168.2.101 255.255.255.0   Connects linux2 (interconnect) to linux1 (linux1-priv)
/etc/hosts
127.0.0.1        localhost      loopback

# Public Network - (eth0)
192.168.1.100    linux1
192.168.1.101    linux2

# Private Interconnect - (eth1)
192.168.2.100    linux1-priv
192.168.2.101    linux2-priv

# Public Virtual IP (VIP) addresses for - (eth0)
192.168.1.200    linux1-vip
192.168.1.201    linux2-vip

  Note that the virtual IP addresses only need to be defined in the /etc/hosts file (or your DNS) for both nodes. The public virtual IP addresses will be configured automatically by Oracle when you run the Oracle Universal Installer, which starts Oracle's Virtual Internet Protocol Configuration Assistant (VIPCA). All virtual IP addresses will be activated when the srvctl start nodeapps -n <node_name> command is run. Although I am getting ahead of myself, this is the Host Name/IP Address that will be configured in the client(s) tnsnames.ora file for each Oracle Net Service Name. All of this will be explained much later in this article!
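
For reference, the settings entered through the Network Configuration GUI end up in the interface scripts under /etc/sysconfig/network-scripts. The following is a rough sketch of what the two files on linux1 might contain, using the values from the table above; the exact set of directives written by the GUI may vary slightly:

/etc/sysconfig/network-scripts/ifcfg-eth0 - (public network)
DEVICE=eth0
BOOTPROTO=static
ONBOOT=yes
IPADDR=192.168.1.100
NETMASK=255.255.255.0
GATEWAY=192.168.1.1

/etc/sysconfig/network-scripts/ifcfg-eth1 - (private interconnect)
DEVICE=eth1
BOOTPROTO=static
ONBOOT=yes
IPADDR=192.168.2.100
NETMASK=255.255.255.0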


In the screen shots below, only node 1 (linux1) is shown. Be sure to make all the proper network settings on both nodes!



Figure 2: Network Configuration Screen - Node 1 (linux1)



Figure 3: Ethernet Device Screen - eth0 (linux1)



Figure 4: Ethernet Device Screen - eth1 (linux1)



Figure 5: Network Configuration Screen - /etc/hosts (linux1)


Once the network is configured, you can use the ifconfig command to verify everything is working. The following example is from linux1:

$ /sbin/ifconfig -a
eth0      Link encap:Ethernet  HWaddr 00:0D:56:FC:39:EC
          inet addr:192.168.1.100  Bcast:192.168.1.255  Mask:255.255.255.0
          inet6 addr: fe80::20d:56ff:fefc:39ec/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:835 errors:0 dropped:0 overruns:0 frame:0
          TX packets:1983 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:705714 (689.1 KiB)  TX bytes:176892 (172.7 KiB)
          Interrupt:3

eth1      Link encap:Ethernet  HWaddr 00:0C:41:E8:05:37
          inet addr:192.168.2.100  Bcast:192.168.2.255  Mask:255.255.255.0
          inet6 addr: fe80::20c:41ff:fee8:537/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:9 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:0 (0.0 b)  TX bytes:546 (546.0 b)
          Interrupt:11 Base address:0xe400

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
          RX packets:5110 errors:0 dropped:0 overruns:0 frame:0
          TX packets:5110 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:8276758 (7.8 MiB)  TX bytes:8276758 (7.8 MiB)

sit0      Link encap:IPv6-in-IPv4
          NOARP  MTU:1480  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 b)  TX bytes:0 (0.0 b)
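
After both nodes have been configured, I also like to run a quick connectivity check from each node using the host names defined in /etc/hosts. The example below is run from linux1; use the linux1 names when running from linux2. (Do not expect the virtual IP names to respond yet - the VIPs are not activated until VIPCA runs much later in the article.)

$ ping -c 3 linux2
$ ping -c 3 linux2-priv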


About Virtual IP

Why do we have a Virtual IP (VIP) in 10g? Why does it just return a dead connection when its primary node fails?

It's all about availability of the application. When a node fails, the VIP associated with it is supposed to be automatically failed over to some other node. When this occurs, two things happen.

  1. The new node re-arps the world indicating a new MAC address for the address. For directly connected clients, this usually causes them to see errors on their connections to the old address.

  2. Subsequent packets sent to the VIP go to the new node, which will send error RST packets back to the clients. This results in the clients getting errors immediately.

This means that when the client issues SQL to the node that is now down, or traverses the address list while connecting, rather than waiting on a very long TCP/IP time-out (~10 minutes), the client receives a TCP reset. In the case of SQL, this is ORA-3113. In the case of connect, the next address in tnsnames is used.

Going one step further is making use of Transparent Application Failover (TAF). With TAF successfully configured, it is possible to avoid ORA-3113 errors altogether! TAF will be discussed in more detail in the section "Transparent Application Failover - (TAF)".

Without using VIPs, clients connected to a node that died will often wait a 10 minute TCP timeout period before getting an error. As a result, you don't really have a good HA solution without using VIPs.

Source - Metalink: "RAC Frequently Asked Questions" (Note:220970.1)
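
To make the address-list behavior described above a bit more concrete, here is a minimal sketch of what a client tnsnames.ora entry using the two VIP names from this article could look like. This is only an illustration at this point; the actual Net Service Names (including the TAF parameters) are created and verified much later in this article and may differ:

ORCL =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = linux1-vip)(PORT = 1521))
    (ADDRESS = (PROTOCOL = TCP)(HOST = linux2-vip)(PORT = 1521))
    (LOAD_BALANCE = yes)
    (CONNECT_DATA =
      (SERVICE_NAME = orcl)
    )
  )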


Make sure RAC node name is not listed in loopback address

Ensure that the node names (linux1 or linux2) are not included for the loopback address in the /etc/hosts file. If the machine name is listed in the loopback address entry as below:
    127.0.0.1        linux1 localhost.localdomain localhost
it will need to be removed as shown below:
    127.0.0.1        localhost.localdomain localhost

  If the RAC node name is listed for the loopback address, you will receive the following error during the RAC installation:
ORA-00603: ORACLE server session terminated by fatal error
or
ORA-29702: error occurred in Cluster Group Service operation
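
A quick way to verify this on each node is a simple grep of the hosts file; the resulting line should not contain linux1 or linux2:

# grep "^127.0.0.1" /etc/hosts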


Adjusting Network Settings

From Oracle 9.2.0.1 onwards, Oracle makes use of UDP as the default protocol on Linux for inter-process communication (IPC), such as Cache Fusion and Cluster Manager buffer transfers between instances within the RAC cluster.

Oracle strongly suggests adjusting the default and maximum send buffer size (SO_SNDBUF socket option) to 256KB, and the default and maximum receive buffer size (SO_RCVBUF socket option) to 256KB.

The receive buffers are used by TCP and UDP to hold received data until it is read by the application. With TCP, the receive buffer cannot overflow because the peer is not allowed to send data beyond the buffer size window. With UDP, however, datagrams that do not fit in the socket receive buffer are discarded, which means a fast sender can overwhelm a slow receiver.

  The default and maximum window size can be changed in the /proc file system without reboot:
# su - root

# sysctl -w net.core.rmem_default=262144
net.core.rmem_default = 262144

# sysctl -w net.core.wmem_default=262144
net.core.wmem_default = 262144

# sysctl -w net.core.rmem_max=262144
net.core.rmem_max = 262144

# sysctl -w net.core.wmem_max=262144
net.core.wmem_max = 262144

The above commands made the changes to the already running O/S. You should now make the above changes permanent (for each reboot) by adding the following lines to the /etc/sysctl.conf file for each node in your RAC cluster:

# Default setting in bytes of the socket receive buffer
net.core.rmem_default=262144

# Default setting in bytes of the socket send buffer
net.core.wmem_default=262144

# Maximum socket receive buffer size which may be set by using
# the SO_RCVBUF socket option
net.core.rmem_max=262144

# Maximum socket send buffer size which may be set by using 
# the SO_SNDBUF socket option
net.core.wmem_max=262144
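
Rather than re-running the individual sysctl -w commands after editing the file, the settings in /etc/sysctl.conf can be (re)loaded and then spot-checked as follows:

# sysctl -p
# sysctl net.core.rmem_default net.core.wmem_default net.core.rmem_max net.core.wmem_max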


Check and turn off UDP ICMP rejections:

During the Linux installation process, I indicated not to configure the firewall option. (By default, the option to configure a firewall is selected by the installer.) This has burned me several times, so I like to double-check that the firewall option is not configured and to ensure UDP ICMP filtering is turned off.

If UDP ICMP is blocked or rejected by the firewall, the Oracle Clusterware software will crash after several minutes of running. When the Oracle Clusterware process fails, you will see something similar to the following in the <machine_name>_evmocr.log file:

08/29/2005 22:17:19
oac_init:2: Could not connect to server, clsc retcode = 9
08/29/2005 22:17:19
a_init:12!: Client init unsuccessful : [32]
ibctx:1:ERROR: INVALID FORMAT
proprinit:problem reading the bootblock or superbloc 22
When experiencing this type of error, the solution is to remove the UDP ICMP (iptables) rejection rule - or to simply turn the firewall option off. The Oracle Clusterware software will then start and operate normally without crashing. The following commands should be executed as the root user account:

  1. Check to ensure that the firewall option is turned off. If the firewall option is stopped (like it is in my example below) you do not have to proceed with the following steps.
    # /etc/rc.d/init.d/iptables status
    Firewall is stopped.

  2. If the firewall option is operating you will need to first manually disable UDP ICMP rejections:
    # /etc/rc.d/init.d/iptables stop
    
    Flushing firewall rules: [  OK  ]
    Setting chains to policy ACCEPT: filter [  OK  ]
    Unloading iptables modules: [  OK  ]

  3. Then, to turn UDP ICMP rejections off for next server reboot (which should always be turned off):
    # chkconfig iptables off 



Obtain & Install FireWire Modules


  Perform the following FireWire module install and configuration on all nodes in the cluster!


Overview

The next step is to obtain and install the FireWire modules that support the use of IEEE1394 devices with multiple logins.

  In previous articles, it was required to download and install both a new Linux kernel, (e.g. the Oracle Technet Supplied 2.6.9-11.0.0.10.3.EL #1 Linux kernel), and the supporting FireWire modules. As of November 2005, oss.oracle.com now provides pre-compiled FireWire modules for the 2.6.9-22.EL and 2.6.9-22.0.1.EL Linux kernels. Installing a new Linux kernel is no longer required. We will only need to install and configure the supporting FireWire modules!

  I am using the term "multiple logins" a bit loosely in this article. Strictly speaking, "multiple logins" is not allowed by the IEEE1394 specification, as it is only a point-to-point protocol; multiple hosts (initiators) on a single bus are therefore prohibited by IEEE1394. The term "multiple logins" is often confused with "concurrent sessions", which is supported by the IEEE1394 specification and simply means that the device allows multiple outstanding requests simultaneously (similar to the SCSI protocol).

In previous releases of this article, I included the steps to download a patched version of the Linux kernel (the C source code) and then compile it. Thanks to Oracle's Linux Projects development group, this is no longer a requirement. Oracle now provides a pre-compiled module that supports the sharing of FireWire drives. The instructions for downloading and installing the supporting FireWire modules are included in this section. Before going into the details of how to perform these actions, however, let's take a moment to discuss the changes that are required to support sharing of the FireWire drive.

While FireWire drivers already exist for Linux, they often do not support shared storage. Normally, when you log on to an O/S, the O/S associates the driver to a specific drive for that machine alone. This implementation simply will not work for our RAC configuration. The shared storage (our FireWire hard drive) needs to be accessed by more than one node. We need to enable the FireWire driver to provide nonexclusive access to the drive so that multiple servers - the nodes that comprise the cluster - will be able to access the same storage. This is accomplished by removing the bit mask that identifies the machine during login in the source code, which allows nonexclusive access to the FireWire hard drive. All other nodes in the cluster log in to the same drive during their logon session using the same modified driver, so they too have nonexclusive access to the drive.

Our implementation describes a dual node cluster (each with a single processor), each server running CentOS 4.2 Enterprise Linux. Keep in mind that the process of installing the supporting FireWire modules will need to be performed on both Linux nodes. CentOS Enterprise Linux 4.2 includes kernel 2.6.9-22.EL #1. Knowing this, we now need to download the matching FireWire module from:

  Oracle Technet Supplied FireWire Modules


Download one of the following files for the supporting FireWire Modules:


Install the supporting FireWire modules, as root:

Install the supporting FireWire modules package by running either of the following:
# rpm -ivh oracle-firewire-modules-2.6.9-22.EL-1286-1.i686.rpm   - (for single processor)
  - OR -
# rpm -ivh oracle-firewire-modules-2.6.9-22.ELsmp-1286-1.i686.rpm   - (for multiple processors)
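
Once the rpm command completes, you can verify that the package registered correctly (the package name will match whichever of the two RPMs you installed):

# rpm -qa | grep oracle-firewire-modules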


Add module options:

Add the following lines to /etc/modprobe.conf:

options sbp2 exclusive_login=0

It is vital that the exclusive_login parameter of the Serial Bus Protocol module (sbp2) be set to zero to allow multiple hosts to log in to and access the FireWire disk concurrently.
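
If you prefer to add this line from the command line rather than with an editor, something like the following works on each node (note the append redirect >>), after which you can confirm the entry:

# echo "options sbp2 exclusive_login=0" >> /etc/modprobe.conf
# grep sbp2 /etc/modprobe.conf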


Perform the above tasks on the second Linux server:

With the supporting FireWire modules installed on the first Linux server, move on to the second Linux server and repeat the same tasks in this section on it.

  [Perform the same tasks on linux2]


Connect FireWire drive to each machine and boot with the new FireWire modules installed:

After you have performed the above tasks on both nodes in the cluster, power down both of them:
===============================

# hostname
linux1

# init 0

===============================

# hostname
linux2

# init 0

===============================
After both machines are powered down, connect each of them to the back of the FireWire drive.

Finally, power on each Linux server one at a time and watch for the "Probing for New Hardware" section during the boot process.

  RHEL 4 users will be prompted during the boot process on both nodes at the "Probing for New Hardware" section for your FireWire hard drive. Simply select the option to "Configure" the device and continue the boot process.

If you are not prompted during the "Probing for New Hardware" section for the new FireWire drive, you will need to run the following commands and reboot the machine. Do not put these commands in a script and attempt to run them - run them interactively at the command-line:

# rpm -e oracle-firewire-modules-2.6.9-22.EL-1286-1
# rpm -Uvh oracle-firewire-modules-2.6.9-22.EL-1286-1.i686.rpm

# modprobe -r sbp2
# modprobe -r sd_mod
# modprobe -r ohci1394

# modprobe ohci1394
# modprobe sd_mod
# modprobe sbp2

# /usr/sbin/kudzu

# init 6
After running /usr/sbin/kudzu (above), you should be prompted to "Configure" the new drive. At times this didn't work on the first attempt; when it didn't, I had to power everything down, power it back up, and perform the modprobe tasks (above) again.


Check and turn off UDP ICMP rejections:

After rebooting each machine (above) check to ensure that the firewall option is turned off (stopped):
# /etc/rc.d/init.d/iptables status
Firewall is stopped.


Loading the FireWire stack:

  Starting with Red Hat Enterprise Linux 3 (and of course CentOS Enterprise Linux!), the loading of the FireWire stack should already be configured!

In most cases, the loading of the FireWire stack will already be configured in the /etc/rc.sysinit file. The commands that are contained within this file that are responsible for loading the FireWire stack are:

# modprobe sbp2
# modprobe ohci1394
In older versions of Red Hat, this was not the case and these commands would have to be manually run or put within a startup file. With Red Hat Enterprise Linux 3 and higher, these commands are already put within the /etc/rc.sysinit file and run on each boot.
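
If you want to confirm this on your own system, a simple grep of the file will show whether the modprobe lines are present:

# grep -E "sbp2|ohci1394" /etc/rc.sysinit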


Check for SCSI Device:

After each machine has rebooted, the kernel should automatically detect the shared disk as a SCSI device (/dev/sdXX). This section will provide several commands that should be run on all nodes in the cluster to verify the FireWire drive was successfully detected and being shared by all nodes in the cluster.

For this configuration, I performed the above procedures on both nodes at the same time. The following commands and results are from my linux2 machine. Again, make sure that you run the following commands on all nodes to ensure both machines can log in to the shared drive.

Let's first check to see that the FireWire adapter was successfully detected:

# lspci
00:00.0 Host bridge: Intel Corporation 82845G/GL[Brookdale-G]/GE/PE DRAM Controller/Host-Hub Interface (rev 01)
00:02.0 VGA compatible controller: Intel Corporation 82845G/GL[Brookdale-G]/GE Chipset Integrated Graphics Device (rev 01)
00:1d.0 USB Controller: Intel Corporation 82801DB/DBL/DBM (ICH4/ICH4-L/ICH4-M) USB UHCI Controller #1 (rev 01)
00:1d.1 USB Controller: Intel Corporation 82801DB/DBL/DBM (ICH4/ICH4-L/ICH4-M) USB UHCI Controller #2 (rev 01)
00:1d.2 USB Controller: Intel Corporation 82801DB/DBL/DBM (ICH4/ICH4-L/ICH4-M) USB UHCI Controller #3 (rev 01)
00:1d.7 USB Controller: Intel Corporation 82801DB/DBM (ICH4/ICH4-M) USB2 EHCI Controller (rev 01)
00:1e.0 PCI bridge: Intel Corporation 82801 PCI Bridge (rev 81)
00:1f.0 ISA bridge: Intel Corporation 82801DB/DBL (ICH4/ICH4-L) LPC Interface Bridge (rev 01)
00:1f.1 IDE interface: Intel Corporation 82801DB (ICH4) IDE Controller (rev 01)
00:1f.3 SMBus: Intel Corporation 82801DB/DBL/DBM (ICH4/ICH4-L/ICH4-M) SMBus Controller (rev 01)
00:1f.5 Multimedia audio controller: Intel Corporation 82801DB/DBL/DBM (ICH4/ICH4-L/ICH4-M) AC'97 Audio Controller (rev 01)
01:04.0 Ethernet controller: Linksys NC100 Network Everywhere Fast Ethernet 10/100 (rev 11)
01:06.0 FireWire (IEEE 1394): Texas Instruments TSB43AB23 IEEE-1394a-2000 Controller (PHY/Link)
01:09.0 Ethernet controller: Broadcom Corporation BCM4401 100Base-T (rev 01)
Second, let's check to see that the modules are loaded:
# lsmod |egrep "ohci1394|sbp2|ieee1394|sd_mod|scsi_mod"
sd_mod                 17217  0
ohci1394               35784  0
sbp2                   23948  0
scsi_mod              121293  2 sd_mod,sbp2
ieee1394              298228  2 ohci1394,sbp2
Third, let's make sure the disk was detected and an entry was made by the kernel:
# cat /proc/scsi/scsi
Attached devices:
Host: scsi0 Channel: 00 Id: 01 Lun: 00
  Vendor: Maxtor   Model: OneTouch II      Rev: 023g
  Type:   Direct-Access                    ANSI SCSI revision: 06
Now let's verify that the FireWire drive is accessible for multiple logins and shows a valid login:
# dmesg | grep sbp2
sbp2: $Rev: 1265 $ Ben Collins <bcollins@debian.org>
ieee1394: sbp2: Maximum concurrent logins supported: 2
ieee1394: sbp2: Number of active logins: 1
ieee1394: sbp2: Logged into SBP-2 device

From the above output, you can see that the FireWire drive I have can support concurrent logins by up to 2 servers. It is vital that you have a drive where the chipset supports concurrent access for all nodes within the RAC cluster.

One other test I like to perform is to run a quick fdisk -l from each node in the cluster to verify that it is really being picked up by the O/S. Your drive may show that the device does not contain a valid partition table, but this is OK at this point of the RAC configuration.

# fdisk -l

Disk /dev/hda: 40.0 GB, 40000000000 bytes
255 heads, 63 sectors/track, 4863 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/hda1   *           1          13      104391   83  Linux
/dev/hda2              14        4863    38957625   8e  Linux LVM

Disk /dev/sda: 300.0 GB, 300090728448 bytes
255 heads, 63 sectors/track, 36483 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1               1       36483   293049666    c  W95 FAT32 (LBA)


Rescan SCSI bus no longer required:

  With Red Hat Enterprise Linux 3 and 4 (and you guessed it, CentOS Enterprise Linux), you no longer need to rescan the SCSI bus in order to detect the disk! The disk should be detected automatically by the kernel as seen from the tests you performed above.

In older versions of the kernel, I would need to run the rescan-scsi-bus.sh script in order to detect the FireWire drive. The purpose of this script was to create the SCSI entry for the node by using the following command:

echo "scsi add-single-device 0 0 0 0" > /proc/scsi/scsi

Starting with Red Hat Enterprise Linux 3, this is no longer required and the disk should be detected automatically.


Troubleshooting SCSI Device Detection:

If you are having troubles with any of the procedures (above) in detecting the SCSI device, you can try the following:
# rpm -e oracle-firewire-modules-2.6.9-22.EL-1286-1
# rpm -Uvh oracle-firewire-modules-2.6.9-22.EL-1286-1.i686.rpm
# modprobe -r sbp2
# modprobe -r sd_mod
# modprobe -r ohci1394
# modprobe ohci1394
# modprobe sd_mod
# modprobe sbp2
# /usr/sbin/kudzu

You may also want to unplug any USB devices connected to the server. The system may not be able to recognize your FireWire drive if you have a USB device attached!


Troubleshooting Concurrent Logins to the FireWire Drive:

One of the first things to verify is that you are using a FireWire drive that contains the correct chipset and allows for multiple logins. If the FireWire drive has a chipset that does not allow for concurrent access from more than one server, the disk and its partitions can only be seen by one server at a time. Disks with the Oxford 911 chipset (FireWire 400), Oxford 912 chipset (FireWire 800), or Oxford 922 chipset (FireWire 800) are known to work. Note that the Oxford 912 chipset is newer and faster than Oxford 922. For a full list of FireWire drives (and enclosures) I have tested, please see the section Verified and Tested FireWire Hard Drives.

Although I have only run into this situation once, there can be problems with the FireWire cards (the IEEE1394 controller cards). For example, in one of my tests (using FireWire 800) I was unable to obtain concurrent logins to the FireWire drive when using the LaCie FireWire 800 PCI Card - (107755) in both nodes. While the first node was able to login to the FireWire drive, it was acquiring it for exclusive access and causing the second node to fail its login process to the drive. For example:

From linux1:

# dmesg | grep sbp2
sbp2: $Rev: 1265 $ Ben Collins 
ieee1394: sbp2: Maximum concurrent logins supported: 2
ieee1394: sbp2: Number of active logins: 0
ieee1394: sbp2: Logged into SBP-2 device
From linux2:
# dmesg | grep sbp2
sbp2: $Rev: 1265 $ Ben Collins 
ieee1394: sbp2: Maximum concurrent logins supported: 2
ieee1394: sbp2: Number of active logins: 1
ieee1394: sbp2: Error logging into SBP-2 device - login failed
sbp2: probe of 00d04b690809290b-0 failed with error -16
ieee1394: sbp2: Maximum concurrent logins supported: 2
ieee1394: sbp2: Number of active logins: 1
ieee1394: sbp2: Error logging into SBP-2 device - login failed
sbp2: probe of 00d04b690809290b-0 failed with error -16

I have seen postings that indicate this can be resolved by using the sbp2 option "serialize_io=1" defined in the /etc/modprobe.conf file. For example, the entry in the /etc/modprobe.conf file would be:

options sbp2 serialize_io=1 exclusive_login=0
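One way to add this entry and have the sbp2 module pick it up right away is sketched below (if an "options sbp2" line already exists in your /etc/modprobe.conf, edit that line rather than appending a second one):

# echo "options sbp2 serialize_io=1 exclusive_login=0" >> /etc/modprobe.conf
# modprobe -r sbp2
# modprobe sbp2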
Although this has been used to resolve some of the cases with failed concurrent logins, it did not resolve the problem I was having with installing the LaCie FireWire 800 PCI Card - (107755) in both nodes.

Another solution to resolve failed concurrent logins is to use a different FireWire card in each of the nodes - for example, one that uses the Texas Instruments (TI) chipset and another that uses the VIA Technologies (VIA) chipset. In my case, I was able to resolve the problem by simply using two different FireWire cards from different vendors: the LaCie FireWire 800 PCI Card - (107755) in one node and the SIIG FireWire 800 PCI-32T host adapter - (NN-830112) in the second node. Although they both use the TI chipset, this was enough to resolve the failed concurrent logins. After this, both nodes were able to successfully log in to the FireWire drive.



Create "oracle" User and Directories


  Perform the following tasks on all nodes in the cluster!

  I will be using the Oracle Cluster File System, Release 2 (OCFS2) to store the files required to be shared for the Oracle Clusterware software. When using OCFS2, the UID of the UNIX user "oracle" and GID of the UNIX group "dba" must be the same on all machines in the cluster. If either the UID or GID is different, the files on the OCFS2 file system will show up as "unowned" or may even be owned by a different user. For this article, I will use 175 for the "oracle" UID and 115 for the "dba" GID.


Create Group and User for Oracle

Let's continue this example by creating the UNIX dba group and oracle user account along with all appropriate directories.

# mkdir -p /u01/app
# groupadd -g 115 dba
# useradd -u 175 -g 115 -d /u01/app/oracle -s /bin/bash -c "Oracle Software Owner" -p oracle oracle
# chown -R oracle:dba /u01
# passwd oracle
# su - oracle
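To confirm that the "oracle" UID and "dba" GID really do match on every node (as required by the OCFS2 note above), a quick check on each node should return the values used in this article:

# id oracle
uid=175(oracle) gid=115(dba) groups=115(dba)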

  When you are setting the Oracle environment variables for each RAC node, ensure to assign each RAC node a unique Oracle SID!

For this example, I used:

  • linux1 : ORACLE_SID=orcl1
  • linux2 : ORACLE_SID=orcl2

  The Oracle Universal Installer (OUI) requires at least 400MB of free space in the /tmp directory.

You can check the available space in /tmp by running the following command:

# df -k /tmp

Verify that the "Available" column reports at least 400MB (409600 1K-blocks) of free space.
If for any reason, you do not have enough space in /tmp, you can temporarily create space in another file system and point your TEMP and TMPDIR to it for the duration of the install. Here are the steps to do this:
# su -
# mkdir /<AnotherFilesystem>/tmp
# chown root.root /<AnotherFilesystem>/tmp
# chmod 1777 /<AnotherFilesystem>/tmp
# export TEMP=/<AnotherFilesystem>/tmp     # used by Oracle
# export TMPDIR=/<AnotherFilesystem>/tmp   # used by Linux programs
                                           #   like the linker "ld"
When the installation of Oracle is complete, you can remove the temporary directory using the following:
# su -
# rmdir /<AnotherFilesystem>/tmp
# unset TEMP
# unset TMPDIR


Create Login Script for oracle User Account

After creating the "oracle" UNIX user account on both nodes, make sure that you are logged in as the oracle user and verify that the environment is set up correctly by using the following .bash_profile:

.bash_profile for Oracle User
# .bash_profile

# Get the aliases and functions
if [ -f ~/.bashrc ]; then
      . ~/.bashrc
fi

alias ls="ls -FA"
alias s="screen -DRRS iPad -t iPad"

# User specific environment and startup programs
export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=$ORACLE_BASE/product/10.2.0/db_1
export ORA_CRS_HOME=$ORACLE_BASE/product/crs
export ORACLE_PATH=$ORACLE_BASE/dba_scripts/sql:.:$ORACLE_HOME/rdbms/admin

# Each RAC node must have a unique ORACLE_SID. (i.e. orcl1, orcl2,...)
export ORACLE_SID=orcl1

export PATH=.:${PATH}:$HOME/bin:$ORACLE_HOME/bin
export PATH=${PATH}:/usr/bin:/bin:/usr/bin/X11:/usr/local/bin
export PATH=${PATH}:$ORACLE_BASE/dba_scripts/bin
export ORACLE_TERM=xterm
export TNS_ADMIN=$ORACLE_HOME/network/admin
export ORA_NLS10=$ORACLE_HOME/nls/data
export NLS_DATE_FORMAT="DD-MON-YYYY HH24:MI:SS"
export LD_LIBRARY_PATH=$ORACLE_HOME/lib
export LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:$ORACLE_HOME/oracm/lib
export LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:/lib:/usr/lib:/usr/local/lib
export CLASSPATH=$ORACLE_HOME/JRE
export CLASSPATH=${CLASSPATH}:$ORACLE_HOME/jlib
export CLASSPATH=${CLASSPATH}:$ORACLE_HOME/rdbms/jlib
export CLASSPATH=${CLASSPATH}:$ORACLE_HOME/network/jlib
export THREADS_FLAG=native
export TEMP=/tmp
export TMPDIR=/tmp
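As a quick sanity check (not part of the original steps), log back in as the oracle user on each node and confirm that the node-specific values are being picked up from the .bash_profile. On linux1, for example, this should return something like:

$ echo $ORACLE_SID
orcl1
$ echo $ORACLE_HOME
/u01/app/oracle/product/10.2.0/db_1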


Create Mount Point for OCFS2 / Clusterware

Finally, let's create the mount point for the Oracle Cluster File System, Release 2 (OCFS2) that will be used to store the two Oracle Clusterware shared files. These commands will need to be run as the "root" user account:
$ su -
# mkdir -p /u02/oradata/orcl
# chown -R oracle:dba /u02



Create Partitions on the Shared FireWire Storage Device


  Create the following partitions from only one node in the cluster!


Overview

The next step is to create the required partitions on the FireWire (shared) drive. As mentioned earlier in this article, I will be using Oracle's Cluster File System, Release 2 (OCFS2) to store the two files to be shared for Oracle's Clusterware software. We will then be using Automatic Storage Management (ASM) to create three ASM volumes; two for all physical database files (data/index files, online redo log files, and control files) and one for the Flash Recovery Area (RMAN backups and archived redo log files).

The following table lists the individual partitions that will be created on the FireWire (shared) drive and what files will be contained on them.

Oracle Shared Drive Configuration
File System Type   Partition    Size      Mount Point          ASM Diskgroup Name     File Types
OCFS2              /dev/sda1    1 GB      /u02/oradata/orcl    -                      Oracle Cluster Registry (OCR) File - (~100 MB)
                                                                                      Voting Disk - (~20 MB)
ASM                /dev/sda2    50 GB     ORCL:VOL1            +ORCL_DATA1            Oracle Database Files
ASM                /dev/sda3    50 GB     ORCL:VOL2            +ORCL_DATA1            Oracle Database Files
ASM                /dev/sda4    100 GB    ORCL:VOL3            +FLASH_RECOVERY_AREA   Oracle Flash Recovery Area
Total                           201 GB


Create All Partitions on FireWire Shared Storage

As shown in the table above, my FireWire drive shows up as the SCSI device /dev/sda. The fdisk command is used in Linux for creating (and removing) partitions. For this configuration, I will be creating four partitions - one for Oracle's Clusterware shared files and the other three for ASM (to store all Oracle database files and the Flash Recovery Area). Before creating the new partitions, it is important to remove any existing partitions (if they exist) on the FireWire drive:
# fdisk /dev/sda
Command (m for help): p

Disk /dev/sda: 300.0 GB, 300090728448 bytes
255 heads, 63 sectors/track, 36483 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1               1       36483   293049666    c  W95 FAT32 (LBA)


Command (m for help): d
Selected partition 1

Command (m for help): p

Disk /dev/sda: 300.0 GB, 300090728448 bytes
255 heads, 63 sectors/track, 36483 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System


Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-36483, default 1): 1
Last cylinder or +size or +sizeM or +sizeK (1-36483, default 36483): +1G


Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 2
First cylinder (124-36483, default 124): 124
Last cylinder or +size or +sizeM or +sizeK (124-36483, default 36483): +50G

Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 3
First cylinder (6204-36483, default 6204): 6204
Last cylinder or +size or +sizeM or +sizeK (6204-36483, default 36483): +50G

Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Selected partition 4
First cylinder (12284-36483, default 12284): 12284
Last cylinder or +size or +sizeM or +sizeK (12284-36483, default 36483): +100G

Command (m for help): p

Disk /dev/sda: 300.0 GB, 300090728448 bytes
255 heads, 63 sectors/track, 36483 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1               1         123      987966   83  Linux
/dev/sda2             124        6203    48837600   83  Linux
/dev/sda3            6204       12283    48837600   83  Linux
/dev/sda4           12284       24442    97667167+  83  Linux

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.
After creating all required partitions, you should now inform the kernel of the partition changes using the following syntax as the "root" user account:
# partprobe

# fdisk -l /dev/sda
Disk /dev/sda: 300.0 GB, 300090728448 bytes
255 heads, 63 sectors/track, 36483 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1               1         123      987966   83  Linux
/dev/sda2             124        6203    48837600   83  Linux
/dev/sda3            6204       12283    48837600   83  Linux
/dev/sda4           12284       24442    97667167+  83  Linux

  The FireWire drive (and partitions created) will be exposed as a SCSI device.
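Since the partitions were created from only one node, it is also worth confirming (a quick verification, not part of the original steps) that the second node sees the identical partition table. If it does not, re-read the partition table on that node as the "root" user account. For example, from linux2:

# partprobe
# fdisk -l /dev/sda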



Configure the Linux Servers for Oracle


  Perform the following configuration procedures on all nodes in the cluster!

  The kernel parameters discussed in this section will need to be defined on every node within the cluster every time the machine is booted. This section provides very detailed information about setting those kernel parameters required for Oracle. Instructions for placing them in a startup script (/etc/sysctl.conf) are included in section "All Startup Commands for Each RAC Node".


Overview

This section focuses on configuring both Linux servers - getting each one prepared for the Oracle10g RAC installation. This includes verifying enough swap space, setting shared memory and semaphores, setting the maximum amount of file handles, setting the IP local port range, setting shell limits for the oracle user, activating all kernel parameters for the system, and finally how to verify the correct date and time for each node in the cluster.

Throughout this section you will notice that there are several different ways to configure (set) these parameters. For the purpose of this article, I will be making all changes permanent (through reboots) by placing all commands in the /etc/sysctl.conf file.


Swap Space Considerations

To check the amount of swap space that has been allocated on each node, use either of the following commands:

# cat /proc/swaps
Filename                                Type            Size    Used    Priority
/dev/mapper/VolGroup00-LogVol01         partition       2031608 0       -1
-OR-
# cat /proc/meminfo | grep SwapTotal
SwapTotal:     2031608 kB


Setting Shared Memory

Shared memory allows processes to access common structures and data by placing them in a shared memory segment. This is the fastest form of Inter-Process Communications (IPC) available - mainly due to the fact that no kernel involvement occurs when data is being passed between the processes. Data does not need to be copied between processes.

Oracle makes use of shared memory for its Shared Global Area (SGA) which is an area of memory that is shared by all Oracle background and foreground processes. Adequate sizing of the SGA is critical to Oracle performance since it is responsible for holding the database buffer cache, shared SQL, access paths, and so much more.

To determine all shared memory limits, use the following:

# ipcs -lm

------ Shared Memory Limits --------
max number of segments = 4096
max seg size (kbytes) = 32768
max total shared memory (kbytes) = 8388608
min seg size (bytes) = 1

Setting SHMMAX

The SHMMAX parameter defines the maximum size (in bytes) for a shared memory segment. The Oracle SGA is composed of shared memory and it is possible that incorrectly setting SHMMAX could limit the size of the SGA. When setting SHMMAX, keep in mind that the size of the SGA should fit within one shared memory segment. An inadequate SHMMAX setting could result in the following:
ORA-27123: unable to attach to shared memory segment

You can determine the value of SHMMAX by performing the following:

# cat /proc/sys/kernel/shmmax
33554432
The default value for SHMMAX is 32MB. This is often too small to configure the Oracle SGA. I generally set the SHMMAX parameter to 2GB using the following methods:
  • You can alter the default setting for SHMMAX without rebooting the machine by making the changes directly to the /proc file system (/proc/sys/kernel/shmmax) by using the following command:
    # sysctl -w kernel.shmmax=2147483648

  • You should then make this change permanent by inserting the kernel parameter in the /etc/sysctl.conf startup file:
    # echo "kernel.shmmax=2147483648" >> /etc/sysctl.conf

Setting SHMMNI

We now look at the SHMMNI parameter. This kernel parameter is used to set the maximum number of shared memory segments system wide. The default value for this parameter is 4096.

You can determine the value of SHMMNI by performing the following:

# cat /proc/sys/kernel/shmmni
4096
The default setting for SHMMNI should be adequate for our Oracle10g Release 2 RAC installation.

Setting SHMALL

Finally, we look at the SHMALL shared memory kernel parameter. This parameter controls the total amount of shared memory (in pages) that can be used at one time on the system. In short, the value of this parameter should always be at least:
ceil(SHMMAX/PAGE_SIZE)
The default size of SHMALL is 2097152 and can be queried using the following command:
# cat /proc/sys/kernel/shmall
2097152
The default setting for SHMALL should be adequate for our Oracle10g Release 2 RAC installation.

  The page size in Red Hat Linux on the i386 platform is 4096 bytes. You can, however, use bigpages which supports the configuration of larger memory page sizes.
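As a quick arithmetic check (a sketch that assumes the SHMMAX value of 2147483648 set above and the 4096 byte page size noted here), you can confirm that the default SHMALL already satisfies ceil(SHMMAX/PAGE_SIZE):

# getconf PAGE_SIZE
4096
# echo $(( (2147483648 + 4095) / 4096 ))
524288

Since 524288 pages is well below the default SHMALL of 2097152 pages, no change is required.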


Setting Semaphores

Now that we have configured our shared memory settings, it is time to take care of configuring our semaphores. The best way to describe a semaphore is as a counter that is used to provide synchronization between processes (or threads within a process) for shared resources like shared memory. Semaphore sets are supported in System V where each one is a counting semaphore. When an application requests semaphores, it does so using "sets".

To determine all semaphore limits, use the following:

# ipcs -ls

------ Semaphore Limits --------
max number of arrays = 128
max semaphores per array = 250
max semaphores system wide = 32000
max ops per semop call = 32
semaphore max value = 32767
You can also use the following command:
# cat /proc/sys/kernel/sem
250     32000   32      128

Setting SEMMSL

The SEMMSL kernel parameter is used to control the maximum number of semaphores per semaphore set.

Oracle recommends setting SEMMSL to the largest PROCESSES instance parameter setting in the init.ora file for all databases on the Linux system plus 10. Also, Oracle recommends setting the SEMMSL to a value of no less than 100.

Setting SEMMNI

The SEMMNI kernel parameter is used to control the maximum number of semaphore sets in the entire Linux system.

Oracle recommends setting the SEMMNI to a value of no less than 100.

Setting SEMMNS

The SEMMNS kernel parameter is used to control the maximum number of semaphores (not semaphore sets) in the entire Linux system.

Oracle recommends setting the SEMMNS to the sum of the PROCESSES instance parameter setting for each database on the system, adding the largest PROCESSES twice, and then finally adding 10 for each Oracle database on the system.

Use the following calculation to determine the maximum number of semaphores that can be allocated on a Linux system. It will be the lesser of:

SEMMNS  -or-  (SEMMSL * SEMMNI)
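As a worked example using the semaphore values shown earlier (SEMMSL=250, SEMMNS=32000, SEMMNI=128): SEMMSL * SEMMNI = 250 * 128 = 32000, which is the same as SEMMNS, so the effective system-wide limit remains 32000 semaphores.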

Setting SEMOPM

The SEMOPM kernel parameter is used to control the number of semaphore operations that can be performed per semop system call.

The semop system call (function) provides the ability to perform operations on multiple semaphores with one semop system call. Since a semaphore set can have a maximum of SEMMSL semaphores per semaphore set, it is recommended to set SEMOPM equal to SEMMSL.

Oracle recommends setting the SEMOPM to a value of no less than 100.

Setting Semaphore Kernel Parameters

Finally, we see how to set all semaphore parameters. In the following, the only parameter I care about changing (raising) is SEMOPM. All other default settings should be sufficient for our example installation.
  • You can alter the default setting for all semaphore settings without rebooting the machine by making the changes directly to the /proc file system (/proc/sys/kernel/sem) by using the following command:
    # sysctl -w kernel.sem="250 32000 100 128"

  • You should then make this change permanent by inserting the kernel parameter in the /etc/sysctl.conf startup file:
    # echo "kernel.sem=250 32000 100 128" >> /etc/sysctl.conf


Setting File Handles

When configuring the Red Hat Linux server, it is critical to ensure that the maximum number of file handles is large enough. The setting for file handles denotes the number of open files that you can have on the Linux system.

Use the following command to determine the maximum number of file handles for the entire system:

# cat /proc/sys/fs/file-max
102563

Oracle recommends that the file handles for the entire system be set to at least 65536.
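Following the same pattern used for the shared memory and semaphore parameters above (a sketch; the value 65536 matches the fs.file-max entry that appears later in this article's /etc/sysctl.conf), you can set and persist this parameter as follows:

# sysctl -w fs.file-max=65536
# echo "fs.file-max=65536" >> /etc/sysctl.conf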

  You can query the current usage of file handles by using the following:
# cat /proc/sys/fs/file-nr
825     0       65536
The file-nr file displays three parameters:
  • Total allocated file handles
  • Currently used file handles
  • Maximum file handles that can be allocated

  If you need to increase the value in /proc/sys/fs/file-max, then make sure that the ulimit is set properly. Usually for Linux 2.4 and 2.6 it is set to unlimited. Verify the ulimit setting by issuing the ulimit command:
# ulimit
unlimited


Setting IP Local Port Range

Configure the system to allow a local port range of 1024 through 65000.

Use the following command to determine the value of ip_local_port_range:

# cat /proc/sys/net/ipv4/ip_local_port_range
32768   61000
The default value for ip_local_port_range is ports 32768 through 61000. Oracle recommends a local port range of 1024 to 65000.
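Following the same pattern (a sketch; the values match the net.ipv4.ip_local_port_range entry that appears later in this article's /etc/sysctl.conf), you can set and persist the new range as follows:

# sysctl -w net.ipv4.ip_local_port_range="1024 65000"
# echo "net.ipv4.ip_local_port_range = 1024 65000" >> /etc/sysctl.conf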


Setting Shell Limits for the oracle User

To improve the performance of the software on Linux systems, Oracle recommends you increase the following shell limits for the oracle user:

Shell Limit                                               Item in limits.conf   Hard Limit
Maximum number of open file descriptors                   nofile                65536
Maximum number of processes available to a single user    nproc                 16384

To make these changes, run the following as root:

cat >> /etc/security/limits.conf <<EOF
oracle soft nproc 2047
oracle hard nproc 16384
oracle soft nofile 1024
oracle hard nofile 65536
EOF

cat >> /etc/pam.d/login <<EOF
session required /lib/security/pam_limits.so
EOF
Update the default shell startup file for the "oracle" UNIX account.
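The exact contents of that update are not reproduced in this article; the following is a sketch based on the commonly documented Oracle guidance for the Bourne, Bash, or Korn shell, added to /etc/profile so that the raised limits take effect when the oracle user logs in:

if [ $USER = "oracle" ]; then
    if [ $SHELL = "/bin/ksh" ]; then
        ulimit -p 16384
        ulimit -n 65536
    else
        ulimit -u 16384 -n 65536
    fi
fi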


Activating All Kernel Parameters for the System

At this point, we have covered all of the required Linux kernel parameters needed for a successful Oracle installation and configuration. Within each section above, we configured the Linux system to persist each of the kernel parameters on system startup by placing them all in the /etc/sysctl.conf file.

We could reboot at this point to ensure all of these parameters are set in the kernel or we could simply "run" the /etc/sysctl.conf file by running the following command as root. Perform this on each node of the cluster!

# sysctl -p

net.ipv4.ip_forward = 0
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.default.accept_source_route = 0
kernel.sysrq = 0
kernel.core_uses_pid = 1
net.core.rmem_default = 262144
net.core.wmem_default = 262144
net.core.rmem_max = 262144
net.core.wmem_max = 262144
kernel.shmmax = 2147483648
kernel.sem = 250 32000 100 128
fs.file-max = 65536
net.ipv4.ip_local_port_range = 1024 65000


Setting the Correct Date and Time on All Cluster Nodes

During the installation of Oracle Clusterware, the Database, and the Companion CD, the Oracle Universal Installer (OUI) first installs the software to the local node running the installer (i.e. linux1). The software is then copied remotely to all of the remaining nodes in the cluster (i.e. linux2). During the remote copy process, the OUI will execute the UNIX "tar" command on each of the remote nodes to extract the files that were archived and copied over. If the date and time on the node performing the install is greater than that of the node it is copying to, the OUI will throw an error from the "tar" command indicating it is attempting to extract files stamped with a time in the future:

Error while copying directory 
    /u01/app/oracle/product/crs with exclude file list 'null' to nodes 'linux2'.
[PRKC-1002 : All the submitted commands did not execute successfully]
---------------------------------------------
linux2:
   /bin/tar: ./bin/lsnodes: time stamp 2006-09-13 09:21:34 is 735 s in the future
   /bin/tar: ./bin/olsnodes: time stamp 2006-09-13 09:21:34 is 735 s in the future
   ...(more errors on this node)

Please note that although this would seem like a severe error from the OUI, it can safely be disregarded as a warning. The "tar" command DOES actually extract the files; however, when you perform a listing of the files (using ls -l) on the remote node, they will be missing the time field until the time on the server is greater than the timestamp of the file.

Before starting any of the above noted installations, ensure that each member node of the cluster is set as closely as possible to the same date and time. Oracle strongly recommends using the Network Time Protocol feature of most operating systems for this purpose, with all nodes using the same reference Network Time Protocol server.

Accessing a Network Time Protocol server, however, may not always be an option. In this case, when manually setting the date and time for the nodes in the cluster, ensure that the date and time of the node you are performing the software installations from (linux1) is less than all other nodes in the cluster (linux2). I generally use a 20 second difference as shown in the following example:

Setting the date and time from linux1:

# date -s "9/13/2006 23:00:00"

Setting the date and time from linux2:

# date -s "9/13/2006 23:00:20"

The two-node RAC configuration described in this article does not make use of a Network Time Protocol server.



Configure the "hangcheck-timer" Kernel Module


  Perform the following configuration procedures on all nodes in the cluster!

Oracle 9.0.1 and 9.2.0.1 used a userspace watchdog daemon called watchdogd to monitor the health of the cluster and to restart a RAC node in case of a failure. Starting with Oracle 9.2.0.2 (and still the case in Oracle10g Release 2), the watchdog daemon has been deprecated in favor of a Linux kernel module named hangcheck-timer which addresses availability and reliability problems much better. The hangcheck-timer is loaded into the Linux kernel and checks if the system hangs. It sets a timer and checks the timer after a certain amount of time. There is a configurable threshold that, if exceeded, will reboot the machine. Although the hangcheck-timer module is not required for Oracle Clusterware (Cluster Manager) operation, it is highly recommended by Oracle.


The hangcheck-timer.ko Module

The hangcheck-timer module uses a kernel-based timer that periodically checks the system task scheduler to catch delays in order to determine the health of the system. If the system hangs or pauses, the timer resets the node. The hangcheck-timer module uses the Time Stamp Counter (TSC) CPU register, which is a counter that is incremented at each clock signal. The TSC offers much more accurate time measurements since this register is updated by the hardware automatically.

Much more information about the hangcheck-timer project can be found here.


Installing the hangcheck-timer.ko Module

The hangcheck-timer module was normally shipped only by Oracle; however, this module is now included with Red Hat Linux AS starting with kernel versions 2.4.9-e.12 and higher, so it should already be present. Use the following to ensure that you have the module included:
# find /lib/modules -name "hangcheck-timer.ko"
/lib/modules/2.6.9-22.EL/kernel/drivers/char/hangcheck-timer.ko
In the above output, we care about the hangcheck timer object (hangcheck-timer.ko) in the /lib/modules/2.6.9-22.EL/kernel/drivers/char directory.


Configuring and Loading the hangcheck-timer Module

There are two key parameters to the hangcheck-timer module:

  • hangcheck_tick: defines how often (in seconds) the hangcheck-timer checks the health of the node. This article uses a value of 30 seconds.
  • hangcheck_margin: defines the maximum hang delay (in seconds) that will be tolerated before the node is reset. This article uses a value of 180 seconds.

  The two hangcheck-timer module parameters indicate how long a RAC node must hang before it will reset the system. A node reset will occur when the following is true:
system hang time > (hangcheck_tick + hangcheck_margin)
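As a worked example using the values configured later in this article: with hangcheck_tick=30 and hangcheck_margin=180, a node will only be reset after it has been unresponsive for more than 30 + 180 = 210 seconds.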


Configuring Hangcheck Kernel Module Parameters

Each time the hangcheck-timer kernel module is loaded (manually or by Oracle) it needs to know what value to use for each of the two parameters we just discussed: (hangcheck-tick and hangcheck-margin).

These values need to be available after each reboot of the Linux server. To do this, make an entry with the correct values to the /etc/modprobe.conf file as follows:

# su -
# echo "options hangcheck-timer hangcheck_tick=30 hangcheck_margin=180" >> /etc/modprobe.conf
Each time the hangcheck-timer kernel module gets loaded, it will use the values defined by the entry I made in the /etc/modprobe.conf file.


Manually Loading the Hangcheck Kernel Module for Testing

Oracle is responsible for loading the hangcheck-timer kernel module when required. It is for this reason that it is not required to perform a modprobe or insmod of the hangcheck-timer kernel module in any of the startup files (i.e. /etc/rc.local).

It is only out of pure habit that I continue to include a modprobe of the hangcheck-timer kernel module in the /etc/rc.local file. Someday I will get over it, but realize that it does not hurt to include a modprobe of the hangcheck-timer kernel module during startup.

So to keep myself sane and able to sleep at night, I always configure the loading of the hangcheck-timer kernel module on each startup as follows:

# echo "/sbin/modprobe hangcheck-timer" >> /etc/rc.local

  You don't have to manually load the hangcheck-timer kernel module using modprobe or insmod after each reboot. The hangcheck-timer module will be loaded by Oracle (automatically) when needed.

Now, to test the hangcheck-timer kernel module to verify it is picking up the correct parameters we defined in the /etc/modprobe.conf file, use the modprobe command. Although you could load the hangcheck-timer kernel module by passing it the appropriate parameters (e.g. insmod hangcheck-timer hangcheck_tick=30 hangcheck_margin=180), we want to verify that it is picking up the options we set in the /etc/modprobe.conf file.

To manually load the hangcheck-timer kernel module and verify it is using the correct values defined in the /etc/modprobe.conf file, run the following command:

# su -
# modprobe hangcheck-timer
# grep Hangcheck /var/log/messages | tail -2
Dec 19 21:04:40 linux2 kernel: Hangcheck: starting hangcheck timer 0.5.0 (tick is 30 seconds, margin is 180 seconds)



Configure RAC Nodes for Remote Access


  Perform the following configuration procedures on all nodes in the cluster!

Before you can install and use Oracle Real Application Clusters, you must configure either secure shell (SSH) or remote shell (RSH) for the "oracle" UNIX user account on all cluster nodes. The goal here is to set up user equivalence for the "oracle" UNIX user account. User equivalence enables the "oracle" UNIX user account to access all other nodes in the cluster (running commands and copying files) without the need for a password. This can be configured using either SSH or RSH where SSH is the preferred method. Oracle added support in 10g Release 1 for using the SSH tool suite for setting up user equivalence. Before Oracle10g, user equivalence had to be configured using remote shell.

  If the Oracle Universal Installer in 10g does not detect the presence of the secure shell tools (ssh and scp), it will attempt to use the remote shell tools instead (rsh and rcp).

So, why do we have to setup user equivalence? Installing Oracle Clusterware and the Oracle Database software is only performed from one node in a RAC cluster. When running the Oracle Universal Installer (OUI) on that particular node, it will use the ssh and scp commands (or rsh and rcp commands if using remote shell) to run remote commands on and copy files (the Oracle software) to all other nodes within the RAC cluster. The "oracle" UNIX user account on the node running the OUI (runInstaller) must be trusted by all other nodes in your RAC cluster. This means that you must be able to run the secure shell commands (ssh or scp) or the remote shell commands (rsh and rcp) on the Linux server you will be running the OUI from against all other Linux servers in the cluster without being prompted for a password.

  The use of secure shell or remote shell is not required for normal RAC operation. This configuration, however, must be enabled for RAC and patchset installations as well as for creating the clustered database.

The first step is to decide which method of remote access to use - secure shell or remote shell. Both of them have their pros and cons. Remote shell, for example, is extremely easy to set up and configure. It takes fewer steps to construct and is always available in the terminal session when logging on to the trusted node (the node you will be performing the install from). The connection to the remote nodes, however, is not secure during the installation and any patching process. Secure shell, on the other hand, does provide a secure connection when installing and patching but does require a greater number of steps. It also needs to be enabled in the terminal session each time the oracle user logs in to the trusted node. The official Oracle documentation only describes the steps for setting up secure shell, which is considered the preferred method.

Both methods for configuring user equivalence are described in the following two sections:

       Using the Secure Shell Method
       Using the Remote Shell Method


Using the Secure Shell Method

This section describes how to configure OpenSSH version 3.

To determine if SSH is installed and running, enter the following command:

# pgrep sshd
2808
4440
4442
6067
6069
If SSH is running, then the response to this command is a list of process ID number(s). Please run this command on all nodes in the cluster to verify the SSH daemons are installed and running!

  To find out more about SSH, refer to the man page:
# man ssh


Creating RSA and DSA Keys on Each Node

The first step in configuring SSH is to create RSA and DSA key pairs on each node in the cluster. The command to do this will create a public and private key for both RSA and DSA (for a total of four keys per node). The content of the RSA and DSA public keys will then need to be copied into an authorized key file which is then distributed to all nodes in the cluster.

Use the following steps to create the RSA and DSA key pairs. Please note that these steps will need to be completed on all nodes in the cluster:

  1. Logon as the "oracle" UNIX user account.
    # su - oracle

  2. If necessary, create the .ssh directory in the "oracle" user's home directory and set the correct permissions on it:
    $ mkdir -p ~/.ssh
    $ chmod 700 ~/.ssh

  3. Enter the following command to generate an RSA key pair (public and private key) for version 3 of the SSH protocol:
    $ /usr/bin/ssh-keygen -t rsa
    At the prompts:
    • Accept the default location for the key files.
    • Enter and confirm a pass phrase. This should be different from the "oracle" UNIX user account password however it is not a requirement.

    This command will write the public key to the ~/.ssh/id_rsa.pub file and the private key to the ~/.ssh/id_rsa file. Note that you should never distribute the private key to anyone!

  4. Enter the following command to generate a DSA key pair (public and private key) for version 3 of the SSH protocol:
    $ /usr/bin/ssh-keygen -t dsa
    At the prompts:
    • Accept the default location for the key files.
    • Enter and confirm a pass phrase. This should be different from the "oracle" UNIX user account password however it is not a requirement.

    This command will write the public key to the ~/.ssh/id_dsa.pub file and the private key to the ~/.ssh/id_dsa file. Note that you should never distribute the private key to anyone!

  5. Repeat the above steps for each node in the cluster.

Now that each node contains a public and private key for both RSA and DSA, you will need to create an authorized key file on one of the nodes. An authorized key file is nothing more than a single file that contains a copy of everyone's (every node's) RSA and DSA public key. Once the authorized key file contains all of the public keys, it is then distributed to all other nodes in the cluster.

Complete the following steps on one of the nodes in the cluster to create and then distribute the authorized key file. For the purpose of this article, I am using linux1:

  1. First, determine if an authorized key file already exists on the node (~/.ssh/authorized_keys). In most cases this will not exist since this article assumes you are working with a new install. If the file doesn't exist, create it now:
    $ touch ~/.ssh/authorized_keys
    $ cd ~/.ssh
    $ ls -l *.pub
    -rw-r--r--  1 oracle dba 603 Aug 31 23:40 id_dsa.pub
    -rw-r--r--  1 oracle dba 223 Aug 31 23:36 id_rsa.pub
    NOTE: The listing above should show the id_rsa.pub and id_dsa.pub public keys created in the previous section.

  2. In this step, use SSH to copy the content of the ~/.ssh/id_rsa.pub and ~/.ssh/id_dsa.pub public keys from each node in the cluster to the authorized key file just created (~/.ssh/authorized_keys). Again, this will be done from linux1. You will be prompted for the "oracle" UNIX user account password for each node accessed. Notice that when using SSH to access the node you are on (linux1), the first access prompts for the "oracle" UNIX user account password, while the second attempt prompts for the pass phrase used to unlock the private key. For any of the remaining nodes, it will always ask for the "oracle" UNIX user account password.

    The following example is being run from linux1 and assumes a two-node cluster, with nodes linux1 and linux2:

    $ ssh linux1 cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
    The authenticity of host 'linux1 (192.168.1.100)' can't be established.
    RSA key fingerprint is 61:8a:f9:9e:28:a2:b7:d3:70:8d:dc:76:ca:d9:23:43.
    Are you sure you want to continue connecting (yes/no)? yes
    Warning: Permanently added 'linux1,192.168.1.100' (RSA) to the list of known hosts.
    oracle@linux1's password: xxxxx
    
    $ ssh linux1 cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
    Enter passphrase for key '/u01/app/oracle/.ssh/id_rsa': xxxxx
    
    $ ssh linux2 cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
    The authenticity of host 'linux2 (192.168.1.101)' can't be established.
    RSA key fingerprint is 84:2b:bd:eb:31:2c:23:36:55:c2:ee:54:d2:23:6a:e4.
    Are you sure you want to continue connecting (yes/no)? yes
    Warning: Permanently added 'linux2,192.168.1.101' (RSA) to the list of known hosts.
    oracle@linux2's password: xxxxx
    
    $ ssh linux2 cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
    oracle@linux2's password: xxxxx

      The first time you use SSH to connect to a node from a particular system, you may see a message similar to the following:
    The authenticity of host 'linux1 (192.168.1.100)' can't be established.
    RSA key fingerprint is 61:8a:f9:9e:28:a2:b7:d3:70:8d:dc:76:ca:d9:23:43.
    Are you sure you want to continue connecting (yes/no)? yes
    Enter yes at the prompt to continue. You should not see this message again when you connect from this system to the same node.

  3. At this point, we have the content of the RSA and DSA public keys from every node in the cluster in the authorized key file (~/.ssh/authorized_keys) on linux1. We now need to copy it to the remaining nodes in the cluster. In our two-node cluster example, the only remaining node is linux2. Use the scp command to copy the authorized key file to all remaining nodes in the cluster:
    $ scp ~/.ssh/authorized_keys linux2:.ssh/authorized_keys
    oracle@linux2's password: xxxxx
    authorized_keys                         100% 1652     1.6KB/s   00:00

  4. Change the permission of the authorized key file for each node in the cluster by logging into the node and running the following:
    $ chmod 600 ~/.ssh/authorized_keys

  5. At this point, if you use ssh to log in to or run a command on another node, you are prompted for the pass phrase that you specified when you created the DSA key. For example, test the following from linux1:
    $ ssh linux1 hostname
    Enter passphrase for key '/u01/app/oracle/.ssh/id_rsa': xxxxx
    linux1
    
    $ ssh linux2 hostname
    Enter passphrase for key '/u01/app/oracle/.ssh/id_rsa': xxxxx
    linux2

      If you see any other messages or text, apart from the host name, then the Oracle installation can fail. Make any changes required to ensure that only the host name is displayed when you enter these commands. You should ensure that any part of a login script that generates output or asks questions is modified so that it acts only when the shell is an interactive shell.


Enabling SSH User Equivalency for the Current Shell Session

When running the OUI, it will need to run the secure shell tool commands (ssh and scp) without being prompted for a pass phrase. Even though SSH is configured on all nodes in the cluster, using the secure shell tool commands will still prompt for a pass phrase. Before running the OUI, you need to enable user equivalence for the terminal session you plan to run the OUI from. For the purpose of this article, all Oracle installations will be performed from linux1.

User equivalence will need to be enabled on any new terminal shell session before attempting to run the OUI. If you log out and log back in to the node you will be performing the Oracle installation from, you must enable user equivalence for the terminal shell session as this is not done by default.

To enable user equivalence for the current terminal shell session, perform the following steps:

  1. Logon to the node where you want to run the OUI from (linux1) as the "oracle" UNIX user account.
    # su - oracle

  2. Enter the following commands:
    $ exec /usr/bin/ssh-agent $SHELL
    $ /usr/bin/ssh-add
    Enter passphrase for /u01/app/oracle/.ssh/id_rsa: xxxxx
    Identity added: /u01/app/oracle/.ssh/id_rsa (/u01/app/oracle/.ssh/id_rsa)
    Identity added: /u01/app/oracle/.ssh/id_dsa (/u01/app/oracle/.ssh/id_dsa)
    At the prompts, enter the pass phrase for each key that you generated.

  3. If SSH is configured correctly, you will be able to use the ssh and scp commands without being prompted for a password or pass phrase from this terminal session:
    $ ssh linux1 "date;hostname"
    Wed Sep 13 17:35:30 EDT 2006
    linux1
    
    $ ssh linux2 "date;hostname"
    Wed Sep 13 17:36:14 EDT 2006
    linux2

      The commands above should display the date set on each node along with its hostname. If any of the nodes prompt for a password or pass phrase then verify that the ~/.ssh/authorized_keys file on that node contains the correct public keys.

    Also, if you see any other messages or text, apart from the date and hostname, then the Oracle installation can fail. Make any changes required to ensure that only the date and hostname are displayed when you enter these commands. You should ensure that any part of a login script that generates output or asks questions is modified so that it acts only when the shell is an interactive shell.

  4. The Oracle Universal Installer is a GUI interface and requires the use of an X Server. From the terminal session enabled for user equivalence (the node you will be performing the Oracle installations from), set the environment variable DISPLAY to a valid X Windows display:

    Bourne, Korn, and Bash shells:

    $ DISPLAY=<Any X-Windows Host>:0
    $ export DISPLAY
    C shell:
    $ setenv DISPLAY <Any X-Windows Host>:0
    After setting the DISPLAY variable to a valid X Windows display, you should perform another test of the current terminal session to ensure that X11 forwarding is not enabled:
    $ ssh linux1 hostname
    linux1
    
    $ ssh linux2 hostname
    linux2

      If you are using a remote client to connect to the node performing the installation, and you see a message similar to: "Warning: No xauth data; using fake authentication data for X11 forwarding." then this means that your authorized keys file is configured correctly; however, your SSH configuration has X11 forwarding enabled. For example:
    $ export DISPLAY=melody:0
    $ ssh linux2 hostname
    Warning: No xauth data; using fake authentication data for X11 forwarding.
    linux2
    Note that having X11 Forwarding enabled will cause the Oracle installation to fail. To correct this problem, create a user-level SSH client configuration file for the "oracle" UNIX user account that disables X11 Forwarding:

    • Using a text editor, edit or create the file ~/.ssh/config
    • Make sure that the ForwardX11 attribute is set to no. For example, insert the following into the ~/.ssh/config file:
      Host *
              ForwardX11 no

  5. You must run the Oracle Universal Installer from this terminal session or remember to repeat the steps to enable user equivalence (steps 2, 3, and 4 from this section) before you start the Oracle Universal Installer from a different terminal session.


Remove any stty Commands

When installing the Oracle software, any hidden login files on the system (e.g. .bashrc, .cshrc, .profile) will cause the installation process to fail if they contain stty commands.

To avoid this problem, you must modify these files to suppress all output on STDERR as in the following examples:

  • Bourne, Bash, or Korn shell:
    if [ -t 0 ]; then
        stty intr ^C
    fi

  • C shell:
    test -t 0
    if ($status == 0) then
        stty intr ^C
    endif

  If there are hidden files that contain stty commands that are loaded by the remote shell, then OUI indicates an error and stops the installation.


Using the Remote Shell Method

The services provided by remote shell are disabled by default on most Linux systems. This section describes the tasks required for enabling and configuring user equivalence for use by the Oracle Universal Installer when commands should be run and files copied to the remote nodes in the cluster using the remote shell tools. The goal is to enable the Oracle Universal Installer to use rsh and rcp to run commands and copy files to a remote node without being prompted for a password. Please note that using the remote shell method for configuring user equivalence is not secure.

The rsh daemon validates users using the /etc/hosts.equiv file or the .rhosts file found in the user's (oracle's) home directory.

First, let's make sure that we have the rsh RPMs installed on each node in the RAC cluster:

# rpm -q rsh rsh-server
rsh-0.17-25.3
rsh-server-0.17-25.3
From the above, we can see that we have the rsh and rsh-server installed.

  If rsh is not installed, run the following command from the CD where the RPM is located:
# su -
# rpm -ivh rsh-0.17-25.3.i386.rpm rsh-server-0.17-25.3.i386.rpm

To enable the "rsh" and "rlogin" services, the "disable" attribute in the /etc/xinetd.d/rsh file must be set to "no" and xinetd must be reloaded. This can be done by running the following commands on all nodes in the cluster:

# su -
# chkconfig rsh on
# chkconfig rlogin on
# service xinetd reload
Reloading configuration: [  OK  ]
To allow the "oracle" UNIX user account to be trusted among the RAC nodes, create the /etc/hosts.equiv file on all nodes in the cluster:
# su -
# touch /etc/hosts.equiv
# chmod 600 /etc/hosts.equiv
# chown root.root /etc/hosts.equiv
Now add all RAC nodes to the /etc/hosts.equiv file similar to the following example for all nodes in the cluster:
# cat /etc/hosts.equiv
+linux1 oracle
+linux2 oracle
+linux1-priv oracle
+linux2-priv oracle

  In the above example, the second field permits only the oracle user account to run rsh commands on the specified nodes. For security reasons, the /etc/hosts.equiv file should be owned by root and the permissions should be set to 600. In fact, some systems will only honor the content of this file if the owner of this file is root and the permissions are set to 600.

  Before attempting to test your rsh command, ensure that you are using the correct version of rsh. By default, Red Hat Linux puts /usr/kerberos/bin at the head of the $PATH variable. This will cause the Kerberos version of rsh to be executed.

I will typically rename the Kerberos version of rsh so that the normal rsh command will be used. Use the following:

# su -

# which rsh
/usr/kerberos/bin/rsh

# mv /usr/kerberos/bin/rsh /usr/kerberos/bin/rsh.original
# mv /usr/kerberos/bin/rcp /usr/kerberos/bin/rcp.original
# mv /usr/kerberos/bin/rlogin /usr/kerberos/bin/rlogin.original

# which rsh
/usr/bin/rsh

You should now test your connections and run the rsh command from the node that will be performing the Oracle Clusterware and 10g RAC installation. I will be using the node linux1 to perform all installs so this is where I will run the following commands from:

# su - oracle

$ rsh linux1 ls -l /etc/hosts.equiv
-rw-------  1 root  root  68 Sep 27 23:37 /etc/hosts.equiv

$ rsh linux1-priv ls -l /etc/hosts.equiv
-rw-------  1 root  root  68 Sep 27 23:37 /etc/hosts.equiv

$ rsh linux2 ls -l /etc/hosts.equiv
-rw-------  1 root  root  68 Sep 27 23:45 /etc/hosts.equiv

$ rsh linux2-priv ls -l /etc/hosts.equiv
-rw-------  1 root  root  68 Sep 27 23:45 /etc/hosts.equiv

  Unlike when using secure shell, no other actions or commands are needed to enable user equivalence using the remote shell. User equivalence will be enabled for the "oracle" UNIX user account after successfully logging in to a terminal session.



All Startup Commands for Each RAC Node


  Verify that the following startup commands are included on all nodes in the cluster!

Up to this point, we have talked in great detail about the parameters and resources that need to be configured on all nodes for the Oracle10g RAC configuration. This section will take a deep breath and recap those parameters, commands, and entries (in previous sections of this document) that need to happen on each node when the machine is booted.

In this section, I provide all of the commands, parameters, and entries that have been discussed so far and that will need to be included in the startup scripts for each Linux node in the RAC cluster. For each of the startup files below, I list the entries that should be included in order to produce a working RAC node.


/etc/modprobe.conf

All parameters and values to be used by kernel modules.

/etc/modprobe.conf
alias eth0 b44
alias eth1 tulip
alias snd-card-0 snd-intel8x0
options snd-card-0 index=0
alias usb-controller ehci-hcd
alias usb-controller1 uhci-hcd
options sbp2 exclusive_login=0
alias scsi_hostadapter sbp2
options hangcheck-timer hangcheck_tick=30 hangcheck_margin=180


/etc/sysctl.conf

We wanted to adjust the default and maximum send buffer size as well as the default and maximum receive buffer size for the interconnect. This file also contains those parameters responsible for configuring shared memory, semaphores, file handles, and local IP range for use by the Oracle instance.

  First, verify that each of the required kernel parameters are configured in the /etc/sysctl.conf file. Then, ensure that each of these parameters are truly in effect by running the following command on each node in the cluster:
# sysctl -p
net.ipv4.ip_forward = 0
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.default.accept_source_route = 0
kernel.sysrq = 0
kernel.core_uses_pid = 1
net.core.rmem_default = 262144
net.core.wmem_default = 262144
net.core.rmem_max = 262144
net.core.wmem_max = 262144
kernel.shmmax = 2147483648
kernel.sem = 250 32000 100 128
fs.file-max = 65536
net.ipv4.ip_local_port_range = 1024 65000

/etc/sysctl.conf
# Kernel sysctl configuration file for Red Hat Linux
#
# For binary values, 0 is disabled, 1 is enabled.  See sysctl(8) and
# sysctl.conf(5) for more details.

# Controls IP packet forwarding
net.ipv4.ip_forward = 0

# Controls source route verification
net.ipv4.conf.default.rp_filter = 1

# Controls the System Request debugging functionality of the kernel
kernel.sysrq = 0

# Controls whether core dumps will append the PID to the core filename.
# Useful for debugging multi-threaded applications.
kernel.core_uses_pid = 1

# Default setting in bytes of the socket receive buffer
net.core.rmem_default=262144

# Default setting in bytes of the socket send buffer
net.core.wmem_default=262144

# Maximum socket receive buffer size which may be set by using
# the SO_RCVBUF socket option
net.core.rmem_max=262144

# Maximum socket send buffer size which may be set by using
# the SO_SNDBUF socket option
net.core.wmem_max=262144

# +---------------------------------------------------------+
# | SHARED MEMORY                                           |
# +---------------------------------------------------------+
kernel.shmmax=2147483648

# +---------------------------------------------------------+
# | SEMAPHORES                                              |
# | ----------                                              |
# |                                                         |
# | SEMMSL_value  SEMMNS_value  SEMOPM_value  SEMMNI_value  |
# |                                                         |
# +---------------------------------------------------------+
kernel.sem=250 32000 100 128

# +---------------------------------------------------------+
# | FILE HANDLES                                            |
# ----------------------------------------------------------+
fs.file-max=65536

# +---------------------------------------------------------+
# | LOCAL IP RANGE                                          |
# ----------------------------------------------------------+
net.ipv4.ip_local_port_range=1024 65000


/etc/hosts

All machine/IP entries for nodes in the RAC cluster.

/etc/hosts
# Do not remove the following line, or various programs
# that require network functionality will fail.
127.0.0.1        localhost.localdomain   localhost
# Public Network - (eth0)
192.168.1.100    linux1
192.168.1.101    linux2
# Private Interconnect - (eth1)
192.168.2.100    linux1-priv
192.168.2.101    linux2-priv
# Public Virtual IP (VIP) addresses for - (eth0)
192.168.1.200    linux1-vip
192.168.1.201    linux2-vip
192.168.1.106    melody
192.168.1.102    alex
192.168.1.105    bartman


/etc/hosts.equiv

  The /etc/hosts.equiv file is only required when using the remote shell method to establish remote access and user equivalency.

Allow logins to each node as the oracle user account without the need for a password when using the remote shell method for enabling user equivalency.

/etc/hosts.equiv
+linux1 oracle
+linux2 oracle
+linux1-priv oracle
+linux2-priv oracle


/etc/rc.local

Loading the hangcheck-timer kernel module.

/etc/rc.local
#!/bin/sh
#
# This script will be executed *after* all the other init scripts.
# You can put your own initialization stuff in here if you don't
# want to do the full Sys V style init stuff.

touch /var/lock/subsys/local

# +---------------------------------------------------------+
# | HANGCHECK TIMER                                         |
# | (I do not believe this is required, but doesn't hurt)   |
# ----------------------------------------------------------+

/sbin/modprobe hangcheck-timer



Check RPM Packages for Oracle10g Release 2


  Perform the following checks on all nodes in the cluster!


Overview

When installing the Linux O/S (CentOS or Red Hat Enterprise Linux 4), you should verify that all required RPMs are installed. If you followed the instructions I used for installing Linux, you would have installed Everything, in which case you will have all of the required RPM packages. However, if you performed another installation type (e.g. "Advanced Server"), you may be missing some packages and will need to install them. All of the required RPMs are on the Linux CDs/ISOs.


Check Required RPMs

The following packages (keeping in mind that the version number for your Linux distribution may vary slightly) must be installed.

make-3.80-5
glibc-2.3.4-2.9
glibc-devel-2.3.4-2.9
glibc-headers-2.3.4-2.9
glibc-kernheaders-2.4-9.1.87
cpp-3.4.3-22.1
compat-db-4.1.25-9
compat-gcc-32-3.2.3-47.3
compat-gcc-32-c++-3.2.3-47.3
compat-libstdc++-33-3.2.3-47.3
compat-libstdc++-296-2.96-132.7.2
openmotif-2.2.3-9.RHEL4.1
setarch-1.6-1
To query package information (gcc and glibc-devel for example), use the "rpm -q <PackageName> [, <PackageName>]" command as follows:
# rpm -q gcc glibc-devel
gcc-3.4.3-22.1
glibc-devel-2.3.4-2.9
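Rather than querying one package at a time, all of the required packages can be checked in a single pass; a quick sketch where any package reported as "is not installed" will need to be added from the CDs/ISOs:
# rpm -q make glibc glibc-devel glibc-headers glibc-kernheaders cpp \
      compat-db compat-gcc-32 compat-gcc-32-c++ compat-libstdc++-33 \
      compat-libstdc++-296 openmotif setarch | grep "not installed"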
If you need to install any of the above packages (which you should not have to if you installed Everything), use the "rpm -Uvh <PackageName.rpm>" command. For example, to install the gcc-3.4.3-22.1 package, use:
# rpm -Uvh gcc-3.4.3-22.1.i386.rpm
Reboot the System
If you made any changes to the O/S, reboot all nodes in the cluster before attempting to install any of the Oracle components!!!
# init 6



Install and Configure Oracle Cluster File System (OCFS2)


  Most of the configuration procedures in this section should be performed on all nodes in the cluster! Creating the OCFS2 file system, however, should only be executed on one of the nodes in the RAC cluster.


Overview

It is now time to install the Oracle Cluster File System, Release 2 (OCFS2). OCFS2, developed by Oracle Corporation, is a Cluster File System which allows all nodes in a cluster to concurrently access a device via the standard file system interface. This allows for easy management of applications that need to run across a cluster.

OCFS (Release 1) was released in December 2002 to enable Oracle Real Application Cluster (RAC) users to run the clustered database without having to deal with RAW devices. The file system was designed to store database related files, such as data files, control files, redo logs, archive logs, etc. OCFS2 is the next generation of the Oracle Cluster File System. It has been designed to be a general purpose cluster file system. With it, one can store not only database related files on a shared disk, but also store Oracle binaries and configuration files (shared Oracle Home) making management of RAC even easier.

In this article, I will be using the latest release of OCFS2 (OCFS2 Release 1.2.3-1 at the time of this writing) to store the two files that are required to be shared by the Oracle Clusterware software. Along with these two files, I will also be using this space to store the shared SPFILE for all Oracle RAC instances.

See the following page for more information on OCFS2 (including Installation Notes) for Linux:

  OCFS2 Project Documentation


Download OCFS

First, let's download the latest OCFS2 distribution. The OCFS2 distribution consists of two sets of RPMs; namely, the kernel module and the tools. The latest kernel module is available for download from http://oss.oracle.com/projects/ocfs2/files/ and the tools from http://oss.oracle.com/projects/ocfs2-tools/files/.

Download the appropriate RPMs starting with the latest OCFS2 kernel module (the driver). With CentOS 4.2 Enterprise Linux, I am using kernel release 2.6.9-22.EL. The appropriate OCFS2 kernel module was found in the latest release of OCFS2 at the time of this writing (OCFS2 Release 1.2.3-1). The available OCFS2 kernel modules for Linux kernel 2.6.9-22.EL are listed below. Always download the latest OCFS2 kernel module that matches the distribution, platform, kernel version and the kernel flavor (smp, hugemem, psmp, etc).

  ocfs2-2.6.9-22.EL-1.2.3-1.i686.rpm - (for single processor)
  ocfs2-2.6.9-22.ELsmp-1.2.3-1.i686.rpm - (for multiple processors)
  ocfs2-2.6.9-22.ELhugemem-1.2.3-1.i686.rpm - (for hugemem)
For the tools, simply match the platform and distribution. You should download both the OCFS2 tools and the OCFS2 console applications.
  ocfs2-tools-1.2.1-1.i386.rpm - (OCFS2 tools)
  ocfs2console-1.2.1-1.i386.rpm - (OCFS2 console)

  The OCFS2 Console is optional but highly recommended. The ocfs2console application requires e2fsprogs, glib2 2.2.3 or later, vte 0.11.10 or later, pygtk2 (EL4) or python-gtk (SLES9) 1.99.16 or later, python 2.3 or later and ocfs2-tools.

  If you were curious as to which OCFS2 driver release you need, use the OCFS2 release that matches your kernel version. To determine your kernel release:
$ uname -a
Linux linux1 2.6.9-22.EL #1 Sat Oct 8 17:48:27 CDT 2005 i686 i686 i386 GNU/Linux
In the absence of the string "smp" after the string "EL", we are running a single processor (Uniprocessor) machine. If the string "smp" were to appear, then you would be running on a multi-processor machine.


Install OCFS2

I will be installing the OCFS2 files onto two single-processor machines. The installation process is simply a matter of running the following command on all nodes in the cluster as the root user account:
$ su -
# rpm -Uvh ocfs2-2.6.9-22.EL-1.2.3-1.i686.rpm \
       ocfs2console-1.2.1-1.i386.rpm \
       ocfs2-tools-1.2.1-1.i386.rpm
Preparing...                ########################################### [100%]
   1:ocfs2-tools            ########################################### [ 33%]
   2:ocfs2-2.6.9-22.EL      ########################################### [ 67%]
   3:ocfs2console           ########################################### [100%]


Disable SELinux (RHEL4 U2 Only)

RHEL4 U2 users (CentOS 4.2 is based on RHEL4 U2) are advised that OCFS2 currently does not work with SELinux enabled. If you are using RHEL4 U2 (which includes us since we are using CentOS 4.2), you will need to disable SELinux (using the system-config-securitylevel tool) in order for the O2CB service to run.

  A ticket has been logged with Red Hat on this issue.

To disable SELinux, run the "Security Level Configuration" GUI utility:

# /usr/bin/system-config-securitylevel &


This will bring up the following screen:


Figure 6: Security Level Configuration Opening Screen


Now, click the SELinux tab and uncheck the "Enabled" checkbox. After clicking [OK], you will be presented with a warning dialog. Simply acknowledge this warning by clicking "Yes". Your screen should now look like the following after disabling the SELinux option:


Figure 7: SELinux Disabled


After making this change on both nodes in the cluster, each node will need to be rebooted to implement the change. SELinux must be disabled before you can continue with configuring OCFS2!

# init 6
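
If you would prefer not to use the GUI (for example, when working from a console without an X server), the same result can be achieved by editing /etc/selinux/config directly and rebooting; a sketch of the relevant line, assuming the stock RHEL4 / CentOS 4.2 file layout:

/etc/selinux/config
# SELINUX= can be set to: enforcing, permissive, or disabled.
SELINUX=disabled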


Configure OCFS2

The next step is to generate and configure the /etc/ocfs2/cluster.conf file on each node in the cluster. The easiest way to accomplish this is to run the GUI tool ocfs2console. In this section, we will not only create and configure the /etc/ocfs2/cluster.conf file using ocfs2console, but will also create and start the cluster stack O2CB. When the /etc/ocfs2/cluster.conf file is not present, (as will be the case in our example), the ocfs2console tool will create this file along with a new cluster stack service (O2CB) with a default cluster name of ocfs2. This will need to be done on all nodes in the cluster as the root user account:
$ su -
# ocfs2console &
This will bring up the GUI as shown below:


Figure 8: ocfs2console Screen


Using the ocfs2console GUI tool, perform the following steps:

  1. Select [Cluster] -> [Configure Nodes...]. This will start the OCFS2 Cluster Stack (Figure 9). Acknowledge this Information dialog box by clicking [Close]. You will then be presented with the "Node Configuration" dialog.
  2. On the "Node Configurtion" dialog, click the [Add] button.
    • This will bring up the "Add Node" dialog.
    • In the "Add Node" dialog, enter the Host name and IP address for the first node in the cluster. Leave the IP Port set to its default value of 7777. In my example, I added both nodes using linux1 / 192.168.1.100 for the first node and linux2 / 192.168.1.101 for the second node.
    • Click [Apply] on the "Node Configuration" dialog - All nodes should now be "Active" as shown in Figure 10.
    • Click [Close] on the "Node Configuration" dialog.
  3. After verifying all values are correct, exit the application using [File] -> [Quit]. This needs to be performed on all nodes in the cluster.



Figure 9: Starting the OCFS2 Cluster Stack


The following dialog shows the OCFS2 settings I used for the nodes linux1 and linux2:


Figure 10: Configuring Nodes for OCFS2


After exiting the ocfs2console, you will have a /etc/ocfs2/cluster.conf similar to the following. This process needs to be completed on all nodes in the cluster and the OCFS2 configuration file should be exactly the same for all of the nodes:

/etc/ocfs2/cluster.conf
node:
        ip_port = 7777
        ip_address = 192.168.1.100
        number = 0
        name = linux1
        cluster = ocfs2

node:
        ip_port = 7777
        ip_address = 192.168.1.101
        number = 1
        name = linux2
        cluster = ocfs2

cluster:
        node_count = 2
        name = ocfs2


O2CB Cluster Service

Before we can do anything with OCFS2 like formatting or mounting the file system, we need to first have OCFS2's cluster stack, O2CB, running (which it will be as a result of the configuration process performed above). The stack includes the following services:

  • NM: Node Manager that keeps track of all the nodes in cluster.conf
  • HB: Heartbeat service that issues up/down notifications when nodes join or leave the cluster
  • TCP: Handles communication between the nodes
  • DLM: Distributed lock manager that keeps track of all locks, their owners, and their status
  • CONFIGFS: User-space driven configuration file system mounted at /config
  • DLMFS: User-space interface to the kernel-space DLM

All of the above cluster services have been packaged in the o2cb system service (/etc/init.d/o2cb). Here is a short listing of some of the more useful commands and options for the o2cb system service.

  The following commands are for demonstration purposes only and should not be run when installing and configuring OCFS2!
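A representative sampling of the actions supported by the o2cb script (a sketch, assuming the stock /etc/init.d/o2cb shipped with ocfs2-tools 1.2):

  • /etc/init.d/o2cb status - Report whether the O2CB modules are loaded and the cluster is online
  • /etc/init.d/o2cb load - Load the O2CB modules (configfs, ocfs2_nodemanager, ocfs2_dlm, ocfs2_dlmfs)
  • /etc/init.d/o2cb online ocfs2 - Bring the cluster named ocfs2 online
  • /etc/init.d/o2cb offline ocfs2 - Take the cluster named ocfs2 offline
  • /etc/init.d/o2cb unload - Unload the O2CB modules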


Configure O2CB to Start on Boot

We would now like to configure the on-boot properties of the O2CB driver so that the cluster stack services will start on each boot. All of the tasks within this section will need to be performed on both nodes in the cluster.

  With the version of OCFS2 I am using for this article, there is a bug where the driver does not get loaded on each boot even after configuring the on-boot properties to do so. Even after configuring the on-boot properties according to the official OCFS2 documentation, you will still get the following error on each boot:
...
Mounting other filesystems:
     mount.ocfs2: Unable to access cluster service

Cannot initialize cluster mount.ocfs2:
    Unable to access cluster service Cannot initialize cluster [FAILED]
...
Red Hat changed the way the service is registered between chkconfig-1.3.11.2-1 and chkconfig-1.3.13.2-1. The O2CB script used to work with the former.

Before attempting to configure the on-boot properties:

  • REMOVE the following lines in /etc/init.d/o2cb
    ### BEGIN INIT INFO
    # Provides: o2cb
    # Required-Start:
    # Should-Start:
    # Required-Stop:
    # Default-Start: 2 3 5
    # Default-Stop:
    # Description: Load O2CB cluster services at system boot.
    ### END INIT INFO

  • Re-register the o2cb service.
    # chkconfig --del o2cb
    # chkconfig --add o2cb
    # chkconfig --list o2cb
    o2cb            0:off   1:off   2:on    3:on    4:on    5:on    6:off
    
    # ll /etc/rc3.d/*o2cb*
    lrwxrwxrwx  1 root root 14 Sep 29 11:56 /etc/rc3.d/S24o2cb -> ../init.d/o2cb
    The service should be S24o2cb in the default runlevel.

After resolving the bug I listed (above), we can now continue to set the on-boot properties as follows:

# /etc/init.d/o2cb offline ocfs2
# /etc/init.d/o2cb unload
# /etc/init.d/o2cb configure
Configuring the O2CB driver.

This will configure the on-boot properties of the O2CB driver.
The following questions will determine whether the driver is loaded on
boot.  The current values will be shown in brackets ('[]').  Hitting
<ENTER> without typing an answer will keep that current value.  Ctrl-C
will abort.

Load O2CB driver on boot (y/n) [n]: y
Cluster to start on boot (Enter "none" to clear) [ocfs2]: ocfs2
Writing O2CB configuration: OK
Loading module "configfs": OK
Mounting configfs filesystem at /config: OK
Loading module "ocfs2_nodemanager": OK
Loading module "ocfs2_dlm": OK
Loading module "ocfs2_dlmfs": OK
Mounting ocfs2_dlmfs filesystem at /dlm: OK
Starting cluster ocfs2: OK


Format the OCFS2 File System

We can now start to make use of the partitions we created in the section Create Partitions on the Shared FireWire Storage Device. Well, at least the first partition!

If the O2CB cluster is offline, start it. The format operation needs the cluster to be online, as it needs to ensure that the volume is not mounted on some node in the cluster.

Earlier in this document, we created the directory /u02/oradata/orcl under the section Create Mount Point for OCFS / Clusterware. This section contains the commands to create and mount the file system to be used for the Cluster Manager - /u02/oradata/orcl.

  Unlike the other tasks in this section, creating the OCFS2 file system should only be executed on one of the nodes in the RAC cluster. I will be executing all commands in this section from linux1 only.

  Note that it is possible to create and mount the OCFS2 file system using either the GUI tool ocfs2console or the command-line tool mkfs.ocfs2. From the ocfs2console utility, use the menu [Tasks] -> [Format].

See the instructions below on how to create the OCFS2 file system using the command-line tool mkfs.ocfs2.

To create the file system, we can use the Oracle executable mkfs.ocfs2. For the purpose of this example, I run the following command only from linux1 as the root user account:

$ su -
# mkfs.ocfs2 -b 4K -C 32K -N 4 -L oradatafiles /dev/sda1

mkfs.ocfs2 1.2.1
Filesystem label=oradatafiles
Block size=4096 (bits=12)
Cluster size=32768 (bits=15)
Volume size=1011646464 (30873 clusters) (246984 blocks)
1 cluster groups (tail covers 30873 clusters, rest cover 30873 clusters)
Journal size=16777216
Initial number of node slots: 4
Creating bitmaps: done
Initializing superblock: done
Writing system files: done
Writing superblock: done
Formatting Journals: done
Writing lost+found: done
mkfs.ocfs2 successful


Mount the OCFS2 File System

Now that the file system is created, we can mount it. Let's first do it using the command-line, then I'll show how to include it in the /etc/fstab to have it mount on each boot.

  Mounting the file system will need to be performed on all nodes in the Oracle RAC cluster as the root user account.

First, here is how to manually mount the OCFS2 file system from the command-line. Remember that this needs to be performed as the root user account:

$ su -
# mount -t ocfs2 -o datavolume,nointr /dev/sda1 /u02/oradata/orcl
If the mount was successful, you will simply get your prompt back. We should, however, run the following checks to ensure the file system is mounted correctly. Let's use the mount command to ensure that the new file system is really mounted. This should be performed on all nodes in the RAC cluster:
# mount
/dev/mapper/VolGroup00-LogVol00 on / type ext3 (rw)
none on /proc type proc (rw)
none on /sys type sysfs (rw)
none on /dev/pts type devpts (rw,gid=5,mode=620)
usbfs on /proc/bus/usb type usbfs (rw)
/dev/hda1 on /boot type ext3 (rw)
none on /dev/shm type tmpfs (rw)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw)
cartman:SHARE2 on /cartman type nfs (rw,addr=192.168.1.120)
configfs on /config type configfs (rw)
ocfs2_dlmfs on /dlm type ocfs2_dlmfs (rw)
/dev/sda1 on /u02/oradata/orcl type ocfs2 (rw,_netdev,datavolume,nointr,heartbeat=local)

  Please take note of the datavolume option I am using to mount the new file system. Oracle database users must mount any volume that will contain the Voting Disk file, Cluster Registry (OCR), Data files, Redo logs, Archive logs and Control files with the datavolume mount option so as to ensure that the Oracle processes open the files with the O_DIRECT flag. The nointr option ensures that the I/O's are not interrupted by signals.

Any other type of volume, including an Oracle home (which I will not be using for this article), should not be mounted with this mount option.

  Why does it take so much time to mount the volume? It takes around 5 seconds for a volume to mount. It does so as to let the heartbeat thread stabilize. In a later release, Oracle plans to add support for a global heartbeat, which will make most mounts instant.


Configure OCFS to Mount Automatically at Startup

Let's take a look at what we have done so far. We downloaded and installed the Oracle Cluster File System, Release 2 (OCFS2), which will be used to store the files needed by Oracle Clusterware. After going through the install, we loaded the OCFS2 module into the kernel and then formatted the clustered file system. Finally, we mounted the newly created file system. This section walks through the steps responsible for mounting the new OCFS2 file system each time the machine(s) are booted.

We start by adding the following line to the /etc/fstab file on all nodes in the RAC cluster:

/dev/sda1     /u02/oradata/orcl    ocfs2   _netdev,datavolume,nointr     0 0

  Notice the "_netdev" option for mounting this file system. The _netdev mount option is a must for OCFS2 volumes. This mount option indicates that the volume is to be mounted after the network is started and dismounted before the network is shutdown.

Now, let's make sure that the ocfs2.ko kernel module is being loaded and that the file system will be mounted during the boot process.

If you have been following along with the examples in this article, the actions to load the kernel module and mount the OCFS2 file system should already be enabled. However, we should still check those options by running the following on all nodes in the RAC cluster as the root user account:

$ su -
# chkconfig --list o2cb
o2cb            0:off   1:off   2:on    3:on    4:on    5:on    6:off
The flags for runlevels 2, 3, 4, and 5 should be set to "on".


Check Permissions on New OCFS2 File System

Use the ls command to check ownership. The permissions should be set to 0775 with owner "oracle" and group "dba".

Let's first check the permissions:

# ls -ld /u02/oradata/orcl
drwxr-xr-x  3 root root 4096 Sep 29 12:11 /u02/oradata/orcl
As we can see from the listing above, the oracle user account (and the dba group) will not be able to write to this directory. Let's fix that:
# chown oracle.dba /u02/oradata/orcl
# chmod 775 /u02/oradata/orcl
Let's now go back and re-check that the permissions are correct for each node in the cluster:
# ls -ld /u02/oradata/orcl
drwxrwxr-x  3 oracle dba 4096 Sep 29 12:11 /u02/oradata/orcl


Adjust the O2CB Heartbeat Threshold

This is a very important section when configuring OCFS2 for use by Oracle Clusterware's two shared files on our FireWire drive. During testing, I was able to install and configure OCFS2, format the new volume, and finally install Oracle Clusterware (with its two required shared files; the voting disk and OCR file), located on the new OCFS2 volume. I was able to install Oracle Clusterware and see the shared drive, however, during my evaluation I was receiving many lock-ups and hanging after about 15 minutes when the Clusterware software was running on both nodes. It always varied on which node would hang (either linux1 or linux2 in my example). It also didn't matter whether there was a high I/O load or none at all for it to crash (hang).

Let's keep in mind that the configuration we are creating is a rather low-end setup with slow disk access to the FireWire drive. This is by no means a high-end setup and it is therefore susceptible to bogus timeouts.

After looking through the trace files for OCFS2, it was apparent that access to the voting disk was too slow (exceeding the O2CB heartbeat threshold) and causing the Oracle Clusterware software (and the node) to crash.

The solution I used was to simply increase the O2CB heartbeat threshold from its default setting of 7, to 601 (and in some cases as high as 901). This is a configurable parameter that is used to compute the time it takes for a node to "fence" itself. First, let's see how to determine what the O2CB heartbeat threshold is currently set to. This can be done by querying the /proc file system as follows:

# cat /proc/fs/ocfs2_nodemanager/hb_dead_threshold
7
We see that the value is 7, but what does this value represent? Well, it is used in the formula below to determine the fence time (in seconds):
[fence time in seconds] = (O2CB_HEARTBEAT_THRESHOLD - 1) * 2
So, with an O2CB heartbeat threshold of 7, we would have a fence time of:
(7 - 1) * 2 = 12 seconds
I want a much larger threshold (1200 seconds to be exact) given our slower FireWire disks. For 1200 seconds, I will want an O2CB_HEARTBEAT_THRESHOLD of 601 as shown below:
(601 - 1) * 2 = 1200 seconds

Let's see now how to increase the O2CB heartbeat threshold from 7 to 601. This will need to be performed on both nodes in the cluster. We first need to modify the file /etc/sysconfig/o2cb and set O2CB_HEARTBEAT_THRESHOLD to 601:

/etc/sysconfig/o2cb
# O2CB_ENABLED: 'true' means to load the driver on boot.
O2CB_ENABLED=true

# O2CB_BOOTCLUSTER: If not empty, the name of a cluster to start.
O2CB_BOOTCLUSTER=ocfs2

# O2CB_HEARTBEAT_THRESHOLD: Iterations before a node is considered dead.
O2CB_HEARTBEAT_THRESHOLD=601

After modifying the file /etc/sysconfig/o2cb, we need to alter the o2cb configuration. Again, this should be performed on all nodes in the cluster.

# umount /u02/oradata/orcl/
# /etc/init.d/o2cb unload
# /etc/init.d/o2cb configure

Load O2CB driver on boot (y/n) [y]: y
Cluster to start on boot (Enter "none" to clear) [ocfs2]: ocfs2
Writing O2CB configuration: OK
Loading module "configfs": OK
Mounting configfs filesystem at /config: OK
Loading module "ocfs2_nodemanager": OK
Loading module "ocfs2_dlm": OK
Loading module "ocfs2_dlmfs": OK
Mounting ocfs2_dlmfs filesystem at /dlm: OK
Starting cluster ocfs2: OK
We can now check again to make sure the new setting took effect for the o2cb cluster stack:
# cat /proc/fs/ocfs2_nodemanager/hb_dead_threshold
601

  It is important to note that the value of 601 I used for the O2CB heartbeat threshold will not work for all of the FireWire drives listed for this article. In some cases, the O2CB heartbeat threshold value had to be increased to as high as 901 in order to prevent OCFS2 from panicking the kernel.


Replace CFQ I/O Scheduler with the DEADLINE I/O Scheduler - (RHEL4 U2 and U3 Users)

A bug has been encountered with the default CFQ I/O scheduler which causes a process doing heavy I/O to temporarily starve out other processes. While this is not fatal for most environments, it is for OCFS2 as the heartbeat thread is expected to read from and write to the heartbeat area at least once every 12 seconds (by default). A bug along with the fix has been filed with Red Hat. Red Hat is expected to have this fixed in the RHEL4 U4 release. SLES9 SP3 (kernel 2.6.5-7.257) includes this fix. Until this issue is resolved, one is advised to use the DEADLINE I/O scheduler. To use it, add "elevator=deadline" to the kernel command line found in the /boot/grub/grub.conf file. This will need to be performed on both nodes in the cluster.

/boot/grub/grub.conf
# grub.conf generated by anaconda
#
# Note that you do not have to rerun grub after making changes to this file
# NOTICE:  You have a /boot partition.  This means that
#          all kernel and initrd paths are relative to /boot/, eg.
#          root (hd0,0)
#          kernel /vmlinuz-version ro root=/dev/VolGroup00/LogVol00
#          initrd /initrd-version.img
#boot=/dev/sda
default=0
timeout=5
splashimage=(hd0,0)/grub/splash.xpm.gz
hiddenmenu
title CentOS-4 i386 (2.6.9-22.EL)
    root (hd0,0)
    kernel /vmlinuz-2.6.9-22.EL ro root=/dev/VolGroup00/LogVol00 rhgb quiet elevator=deadline
    initrd /initrd-2.6.9-22.EL.img

After making this change, the system will need to be rebooted for the change to take effect. (See the next task)


Reboot Both Nodes

Before starting the next section, this would be a good place to reboot all of the nodes in the RAC cluster. When the machines come up, ensure that the cluster stack services are being loaded and the new OCFS2 file system is being mounted:
# mount
/dev/mapper/VolGroup00-LogVol00 on / type ext3 (rw)
none on /proc type proc (rw)
none on /sys type sysfs (rw)
none on /dev/pts type devpts (rw,gid=5,mode=620)
usbfs on /proc/bus/usb type usbfs (rw)
/dev/hda1 on /boot type ext3 (rw)
none on /dev/shm type tmpfs (rw)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw)
cartman:SHARE2 on /cartman type nfs (rw,addr=192.168.1.120)
configfs on /config type configfs (rw)
ocfs2_dlmfs on /dlm type ocfs2_dlmfs (rw)
/dev/sda1 on /u02/oradata/orcl type ocfs2 (rw,_netdev,datavolume,nointr,heartbeat=local)


You should also verify that the O2CB heartbeat threshold is set correctly (to our new value of 601):

# cat /proc/fs/ocfs2_nodemanager/hb_dead_threshold
601


Lastly, verify the new kernel command line enabled the DEADLINE I/O scheduler:

# cat /proc/cmdline
ro root=/dev/VolGroup00/LogVol00 rhgb quiet elevator=deadline


How to Determine OCFS2 Version

To determine which version of OCFS2 is running, use:
# cat /proc/fs/ocfs2/version
OCFS2 1.2.3 Wed Jul 26 12:34:15 PDT 2006 (build 6b798aaadf626d3b137c3952809b2f38)
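
The installed OCFS2 RPMs can also be queried directly as a cross-check of the driver and tools versions; a sketch:
# rpm -qa | grep -i ocfs2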



Install and Configure Automatic Storage Management (ASMLib 2.0)


  Most of the installation and configuration procedures should be performed on all nodes in the cluster! Creating the ASM disks, however, will only need to be performed on a single node within the cluster.


Introduction

In this section, we will configure Automatic Storage Management (ASM) to be used as the file system / volume manager for all Oracle physical database files (data, online redo logs, control files, archived redo logs) and a Flash Recovery Area.

ASM was introduced in Oracle10g Release 1 and is used to relieve the DBA of having to manage individual files and drives. ASM is built into the Oracle kernel and provides the DBA with a way to manage thousands of disk drives 24x7 for both single and clustered instances of Oracle. All of the files and directories to be used for Oracle will be contained in a disk group. ASM automatically performs load balancing in parallel across all available disk drives to prevent hot spots and maximize performance, even with rapidly changing data usage patterns.

I start this section by first discussing the ASMLib 2.0 libraries and its associated driver for Linux plus other methods for configuring ASM with Linux. Next, I will provide instructions for downloading the ASM drivers (ASMLib Release 2.0) specific to our Linux kernel.

Lastly, I will install and configure the ASMLib 2.0 drivers while finishing off the section with a demonstration of how I created the ASM disks.

If you would like to learn more about Oracle ASMLib 2.0, visit http://www.oracle.com/technology/tech/linux/asmlib/

The next section, "Methods for Configuring ASM with Linux", discusses the two methods for using ASM on Linux and is for reference only!


(FOR REFERENCE ONLY!) - Methods for Configuring ASM with Linux

  This section is nothing more than a reference that describes the two different methods for configuring ASM on Linux. The commands in this section are not meant to be run on any of the nodes in the cluster!

When I first started this article, I wanted to focus on using ASM for all database files. I was curious to see how well ASM worked (load balancing / fault tolerance) with this test RAC configuration. There are two different methods to configure ASM on Linux:

  • ASM with ASMLib I/O: All Oracle database files are created on raw block devices managed by ASM through the ASMLib calls. RAW character devices are not required with this method.
  • ASM with Standard Linux I/O: All Oracle database files are created on raw character devices managed by ASM using standard Linux I/O system calls. This method requires binding RAW devices to the disk partitions used by ASM.

In this article, I will be using the "ASM with ASMLib I/O" method. Oracle states (in Metalink Note 275315.1) that "ASMLib was provided to enable ASM I/O to Linux disks without the limitations of the standard UNIX I/O API". I plan on performing several tests in the future to identify the performance gains in using ASMLib. Those performance metrics and testing details are out of scope of this article and therefore will not be discussed.

Before discussing the installation and configuration details of ASMLib 2.0, I thought it would be interesting to talk briefly about the second method "ASM with Standard Linux I/O". If you were to use this method, (which is a perfectly valid solution, just not the method we will be implementing in this article), you should be aware that Linux does not use RAW devices by default. Every Linux RAW device you want to use must be bound to the corresponding block device using the RAW driver. For example, if you wanted to use the partitions we created in the "Create Partitions on the Shared FireWire Storage Device" section, (/dev/sda2, /dev/sda3, /dev/sda4), you would need to perform the following tasks:

  1. Edit the file /etc/sysconfig/rawdevices as follows:
    # raw device bindings
    # format:  <rawdev> <major> <minor>
    #          <rawdev> <blockdev>
    # example: /dev/raw/raw1 /dev/sda1
    #          /dev/raw/raw2 8 5
    /dev/raw/raw2 /dev/sda2
    /dev/raw/raw3 /dev/sda3
    /dev/raw/raw4 /dev/sda4
    The RAW device bindings will be created on each reboot.

  2. You would then want to change ownership of all raw devices to the "oracle" user account:
    # chown oracle:dba /dev/raw/raw2; chmod 660 /dev/raw/raw2
    # chown oracle:dba /dev/raw/raw3; chmod 660 /dev/raw/raw3
    # chown oracle:dba /dev/raw/raw4; chmod 660 /dev/raw/raw4

  3. The last step is to reboot the server to bind the devices or simply restart the rawdevices service:
    # service rawdevices restart

Like I mentioned earlier, the above example was just to demonstrate that there is more than one method for using ASM with Linux. Now let's move on to the method that will be used for this article, "ASM with ASMLib I/O".


Download the ASMLib 2.0 Packages

We start this section by downloading the latest ASMLib 2.0 libraries and the driver from OTN. At the time of this writing, the latest release of the ASMLib driver was 2.0.3-1. Like the Oracle Cluster File System, we need to download the version for the Linux kernel and number of processors on the machine. We are using kernel 2.6.9-22.EL #1 while the machines I am using are both single processor machines:
# uname -a
Linux linux1 2.6.9-22.EL #1 Sat Oct 8 17:48:27 CDT 2005 i686 i686 i386 GNU/Linux

  If you do not currently have an account with Oracle OTN, you will need to create one. This is a FREE account!


  Oracle ASMLib Downloads for Red Hat Enterprise Linux 4 AS

  oracleasm-2.6.9-22.EL-2.0.3-1.i686.rpm - (for single processor)
  oracleasm-2.6.9-22.ELsmp-2.0.3-1.i686.rpm - (for multiple processors)
  oracleasm-2.6.9-22.ELhugemem-2.0.3-1.i686.rpm - (for hugemem)
You will also need to download the following ASMLib tools:
  oracleasmlib-2.0.2-1.i386.rpm - (Userspace library)
  oracleasm-support-2.0.3-1.i386.rpm - (Driver support files)


Install ASMLib 2.0 Packages

This installation needs to be performed on all nodes in the RAC cluster as the root user account:
$ su -
# rpm -Uvh oracleasm-2.6.9-22.EL-2.0.3-1.i686.rpm \
       oracleasmlib-2.0.2-1.i386.rpm \
       oracleasm-support-2.0.3-1.i386.rpm
Preparing...                ########################################### [100%]
   1:oracleasm-support      ########################################### [ 33%]
   2:oracleasm-2.6.9-22.EL  ########################################### [ 67%]
   3:oracleasmlib           ########################################### [100%]


Configure and Load the ASMLib 2.0 Packages

Now that we have downloaded and installed the ASMLib 2.0 packages for Linux, we need to configure and load the ASM kernel module. This task needs to be run on all nodes in the RAC cluster as the root user account:
$ su -
# /etc/init.d/oracleasm configure
Configuring the Oracle ASM library driver.

This will configure the on-boot properties of the Oracle ASM library
driver.  The following questions will determine whether the driver is
loaded on boot and what permissions it will have.  The current values
will be shown in brackets ('[]').  Hitting <ENTER> without typing an
answer will keep that current value.  Ctrl-C will abort.

Default user to own the driver interface []: oracle
Default group to own the driver interface []: dba
Start Oracle ASM library driver on boot (y/n) [n]: y
Fix permissions of Oracle ASM disks on boot (y/n) [y]: y
Writing Oracle ASM library driver configuration: [  OK  ]
Creating /dev/oracleasm mount point: [  OK  ]
Loading module "oracleasm": [  OK  ]
Mounting ASMlib driver filesystem: [  OK  ]
Scanning system for ASM disks: [  OK  ]


Create ASM Disks for Oracle

In the section "Create Partitions on the Shared FireWire Storage Device", we created three Linux partitions to be used for storing Oracle database files like online redo logs, database files, control files, archived redo log files, and a flash recovery area.

Here is a list of those partitions we created for use by ASM:

Oracle ASM Partitions Created

  File System Type    Partition     Size      ASM Disk Name    File Types
  ASM                 /dev/sda2     50 GB     ORCL:VOL1        Oracle Database Files
  ASM                 /dev/sda3     50 GB     ORCL:VOL2        Oracle Database Files
  ASM                 /dev/sda4     100 GB    ORCL:VOL3        Flash Recovery Area
  Total                             200 GB

The last task in this section is to create the ASM Disks.

  Creating the ASM disks only needs to be done on one node in the RAC cluster as the root user account. I will be running these commands on linux1. On the other nodes, you will need to perform a scandisk to recognize the new volumes. When that is complete, you should then run the oracleasm listdisks command on all nodes to verify that all ASM disks were created and available.

  If you are repeating this article using the same hardware (actually, the same shared drive), you may get a failure when attempting to create the ASM disks. If you do receive a failure, try listing all ASM disks using:
# /etc/init.d/oracleasm listdisks
VOL1
VOL2
VOL3
As you can see, the results show that I have three volumes already defined. If you have the three volumes already defined from a previous run, go ahead and remove them using the following commands. After removing the previously created volumes, use the "oracleasm createdisk" commands (below) to create the volumes.
# /etc/init.d/oracleasm deletedisk VOL1
Removing ASM disk "VOL1" [  OK  ]
# /etc/init.d/oracleasm deletedisk VOL2
Removing ASM disk "VOL2" [  OK  ]
# /etc/init.d/oracleasm deletedisk VOL3
Removing ASM disk "VOL3" [  OK  ]

$ su -
# /etc/init.d/oracleasm createdisk VOL1 /dev/sda2
Marking disk "/dev/sda2" as an ASM disk [  OK  ]

# /etc/init.d/oracleasm createdisk VOL2 /dev/sda3
Marking disk "/dev/sda3" as an ASM disk [  OK  ] 

# /etc/init.d/oracleasm createdisk VOL3 /dev/sda4
Marking disk "/dev/sda4" as an ASM disk [  OK  ]


On all other nodes in the RAC cluster, you must perform a scandisk to recognize the new volumes:

# /etc/init.d/oracleasm scandisks
Scanning system for ASM disks [  OK  ]


We can now test that the ASM disks were successfully created by using the following command on all nodes in the RAC cluster as the root user account:

# /etc/init.d/oracleasm listdisks
VOL1
VOL2
VOL3
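
If you want to confirm which block device backs a given ASM disk label (or the reverse), the oracleasm script also provides a querydisk action; a sketch, run as the root user account:

# /etc/init.d/oracleasm querydisk VOL1
# /etc/init.d/oracleasm querydisk /dev/sda2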



Download Oracle10g RAC Software


  The following download procedures only need to be performed on one node in the cluster!


Overview

The next logical step is to install Oracle Clusterware Release 2 (10.2.0.1.0), Oracle Database 10g Release 2 (10.2.0.1.0), and finally the Oracle Database 10g Companion CD Release 2 (10.2.0.1.0) for Linux x86 software. However, we must first download and extract the required Oracle software packages from the Oracle Technology Network (OTN).

  If you do not currently have an account with Oracle OTN, you will need to create one. This is a FREE account!

In this section, we will be downloading and extracting the required software from Oracle to only one of the Linux nodes in the RAC cluster - namely linux1. This is the machine from which I will be performing all of the Oracle installs. The Oracle installer will copy the required software packages to all other nodes in the RAC configuration using the remote access method we set up in the section "Configure RAC Nodes for Remote Access".

Login to the node that you will be performing all of the Oracle installations from as the "oracle" user account. In this example, I will be downloading the required Oracle software to linux1 and saving them to "/u01/app/oracle/orainstall".


Oracle Clusterware Release 2 (10.2.0.1.0) for Linux x86

First, download the Oracle Clusterware Release 2 for Linux x86.

  Oracle Clusterware Release 2 (10.2.0.1.0)


Oracle Database 10g Release 2 (10.2.0.1.0) for Linux x86

Next, we need to download the Oracle Database 10g Release 2 (10.2.0.1.0) Software for Linux x86. This can be downloaded from the same page used to download the Oracle Clusterware Release 2 software:

  Oracle Database 10g Release 2 (10.2.0.1.0)


Oracle Database 10g Companion CD Release 2 (10.2.0.1.0) for Linux x86

Finally, we should download the Oracle Database 10g Companion CD for Linux x86. This can be downloaded from the same page used to download the Oracle Clusterware Release 2 software:

  Oracle Database 10g Companion CD Release 2 (10.2.0.1.0)


As the "oracle" user account, extract the three packages you downloaded to a temporary directory. In this example, I will use "/u01/app/oracle/orainstall".

Extract the Clusterware package as follows:

# su - oracle
$ cd ~oracle/orainstall
$ unzip 10201_clusterware_linux32.zip

Then extract the Oracle10g Database Software:

$ cd ~oracle/orainstall
$ unzip 10201_database_linux32.zip

Finally, extract the Oracle10g Companion CD Software:

$ cd ~oracle/orainstall
$ unzip 10201_companion_linux32.zip



Install Oracle10g Clusterware Software


  Perform the following installation procedures from only one node in the cluster! The Oracle Clusterware software will be installed to all other nodes in the cluster by the Oracle Universal Installer.


Overview

We are ready to install the Cluster part of the environment - the Oracle Clusterware. In the last section, we downloaded and extracted the install files for Oracle Clusterware to linux1 in the directory /u01/app/oracle/orainstall/clusterware. This is the only node we need to perform the install from. During the installation of Oracle Clusterware, you will be asked for the nodes to configure in the RAC cluster. Once the actual installation starts, it will copy the required software to all nodes using the remote access we configured in the section "Configure RAC Nodes for Remote Access".

So, what exactly is the Oracle Clusterware responsible for? It contains all of the cluster and database configuration metadata along with several system management features for RAC. It allows the DBA to register and invite an Oracle instance (or instances) to the cluster. During normal operation, Oracle Clusterware will send messages (via a special ping operation) to all nodes configured in the cluster - often called the heartbeat. If the heartbeat fails for any of the nodes, it checks with the Oracle Clusterware configuration files (on the shared disk) to distinguish between a real node failure and a network failure.

After installing Oracle Clusterware, the Oracle Universal Installer (OUI) used to install the Oracle10g database software (next section) will automatically recognize these nodes. Like the Oracle Clusterware install we will be performing in this section, the Oracle10g database software only needs to be run from one node. The OUI will copy the software packages to all nodes configured in the RAC cluster.


Oracle Clusterware Shared Files

The two shared files (actually file groups) used by Oracle Clusterware will be stored on the Oracle Cluster File System, Release 2 (OCFS2) we created earlier. The two shared Oracle Clusterware file groups are:

  • Oracle Cluster Registry (OCR) - /u02/oradata/orcl/OCRFile and its mirror /u02/oradata/orcl/OCRFile_mirror
  • CRS Voting Disk - /u02/oradata/orcl/CSSFile and its mirrors /u02/oradata/orcl/CSSFile_mirror1 and /u02/oradata/orcl/CSSFile_mirror2

  It is not possible to use Automatic Storage Management (ASM) for the two shared Oracle Clusterware files: Oracle Cluster Registry (OCR) or the CRS Voting Disk files. The problem is that these files need to be in place and accessible BEFORE any Oracle instances can be started. For ASM to be available, the ASM instance would need to be run first.

  The two shared files could be stored on the OCFS2, shared RAW devices, or another vendor's clustered file system.


Verifying Terminal Shell Environment

Before starting the Oracle Universal Installer, you should first verify you are logged onto the server you will be running the installer from (i.e. linux1) then run the xhost command as root from the console to allow X Server connections. Next, login as the oracle user account. If you are using a remote client to connect to the node performing the installation (SSH / Telnet to linux1 from a workstation configured with an X Server), you will need to set the DISPLAY variable to point to your local workstation. Finally, verify remote access / user equivalence to all nodes in the cluster:

Verify Server and Enable X Server Access

# hostname
linux1

# xhost +
access control disabled, clients can connect from any host

Login as the oracle User Account and Set DISPLAY (if necessary)

# su - oracle

$ # IF YOU ARE USING A REMOTE CLIENT TO CONNECT TO THE
$ # NODE PERFORMING THE INSTALL
$ DISPLAY=<your local workstation>:0.0
$ export DISPLAY

Verify Remote Access / User Equivalence

Verify you are able to run the Secure Shell commands (ssh or scp) or the Remote Shell commands (rsh and rcp) on the Linux server you will be running the Oracle Universal Installer from against all other Linux servers in the cluster without being prompted for a password.

When using the secure shell method, user equivalence will need to be enabled on any new terminal shell session before attempting to run the OUI. To enable user equivalence for the current terminal shell session, perform the following steps remembering to enter the pass phrase for each key that you generated when prompted:

$ exec /usr/bin/ssh-agent $SHELL
$ /usr/bin/ssh-add
Enter passphrase for /u01/app/oracle/.ssh/id_rsa: xxxxx
Identity added: /u01/app/oracle/.ssh/id_rsa (/u01/app/oracle/.ssh/id_rsa)
Identity added: /u01/app/oracle/.ssh/id_dsa (/u01/app/oracle/.ssh/id_dsa)

$ ssh linux1 "date;hostname"
Wed Sep 13 18:27:17 EDT 2006
linux1

$ ssh linux2 "date;hostname"
Wed Sep 13 18:28:00 EDT 2006
linux2

When using the remote shell method, user equivalence is generally defined in the /etc/hosts.equiv file for the oracle user account and is enabled on all new terminal shell sessions:

$ rsh linux1 "date;hostname"
Tue Sep 12 12:37:17 EDT 2006
linux1

$ rsh linux2 "date;hostname"
Tue Sep 12 12:38:00 EDT 2006
linux2


Installing Oracle Clusterware

  CSS Timeout Computation in 10g RAC 10.1.0.3 and higher

Please note that after the Oracle Clusterware software is installed, you will need to modify the CSS timeout value for Clusterware. This is especially true for 10.1.0.3 and higher as the CSS timeout is computed differently than with 10.1.0.2. Several problems have been documented as a result of the CSS daemon timing out starting with Oracle 10.1.0.3 on the Linux platform (including IA32, IA64, and x86-64). This has been a big problem for me in the past, especially during database creation (DBCA). During the database creation process, for example, it was not uncommon for the database creation process to fail with the error: ORA-03113: end-of-file on communication channel. The key error was reported in the log file $ORA_CRS_HOME/css/log/ocssd1.log as:

clssnmDiskPingMonitorThread: voting device access hanging (45010 miliseconds)
The problem is essentially slow disks and the default value for CSS misscount. The CSS misscount value is the number of heartbeats missed before CSS evicts a node. CSS uses this number to calculate the time after which an I/O operation to the voting disk should be considered timed out and thus terminating itself to prevent split brain conditions. The default value for CSS misscount on Linux for Oracle 10.1.0.2 and higher is 60. The formula for calculating the timeout value (in seconds), however, did change from release 10.1.0.2 to 10.1.0.3.

With 10.1.0.2, the timeout value was calculated as follows:

time_in_secs > CSS misscount, then EXIT
With the default value of 60, for example, the timeout period would be 60 seconds.

Starting with 10.1.0.3, the formula was changed to:

disktimeout_in_secs = MAX((3 * CSS misscount)/4, CSS misscount - 15)
Again, using the default CSS misscount value of 60, this would result in a timeout of 45 seconds.

This change was motivated mainly in order to allow for a faster cluster reconfiguration in case of node failure. With the default CSS misscount value of 60 in 10.1.0.2, we would have to wait at least 60 seconds for a timeout, where this same default value of 60 can be shaved by 15 seconds to 45 seconds starting with 10.1.0.3.

OK, so why all the talk about CSS misscount? As I mentioned earlier, I would often have the database creation process (or other high I/O loads on the system) fail as a result of Oracle Clusterware crashing. The high I/O would cause lengthy timeouts for CSS while attempting to query the voting disk. When the calculated timeout was exceeded, Oracle Clusterware crashed. This has been common with this article as the FireWire drives we are using are not the fastest. The slower the drive, the more often this will occur.

Well, the good news is that we can modify the CSS misscount value from its default value of 60 (for Linux) to allow for lengthier timeouts. For the drives I have been using with this article, I have been able to get away with a CSS misscount value of 360. Although I haven't been able to verify this, I believe the CSS misscount can be set as large as 600.
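Plugging the value of 360 into the 10.1.0.3+ formula shown above gives a much more forgiving disk timeout for our slow FireWire storage:
disktimeout_in_secs = MAX((3 * 360)/4, 360 - 15) = MAX(270, 345) = 345 seconds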

So how do we modify the default value for CSS misscount? Well, there are several ways. The easiest way is to modify the $ORA_CRS_HOME/install/rootconfig file for Oracle Clusterware before running the root.sh script on each node in the cluster. (The instructions for modifying the $ORA_CRS_HOME/install/rootconfig script are included in the "Execute Configuration Scripts" step below.)

If Oracle Clusterware is already installed, you can still modify the CSS misscount value using the $ORA_CRS_HOME/bin/crsctl command. (The instructions for verifying and modifying the CSS misscount using crsctl can be found in the section "Verify Oracle Clusterware / CSS misscount value".)

The following tasks are used to install the Oracle Clusterware:

$ cd ~oracle
$ /u01/app/oracle/orainstall/clusterware/runInstaller -ignoreSysPrereqs

Screen Name Response
Welcome Screen Click Next
Specify Inventory directory and credentials Accept the default values:
   Inventory directory: /u01/app/oracle/oraInventory
   Operating System group name: dba
Specify Home Details Set the Name and Path for the ORACLE_HOME (actually the $ORA_CRS_HOME that I will be using in this article) as follows:
   Name: OraCrs10g_home
   Path: /u01/app/oracle/product/crs
Product-Specific Prerequisite Checks The installer will run through a series of checks to determine if the node meets the minimum requirements for installing and configuring the Oracle Clusterware software. If any of the checks fail, you will need to manually verify the check that failed by clicking on the checkbox. For my installation, all checks passed with no problems.

Click Next to continue.

Specify Cluster Configuration Cluster Name: crs
Public Node Name Private Node Name Virtual Node Name
linux1 linux1-priv linux1-vip
linux2 linux2-priv linux2-vip
Specify Network Interface Usage
Interface Name Subnet Interface Type
eth0 192.168.1.0 Public
eth1 192.168.2.0 Private
Specify Oracle Cluster Registry (OCR) Location Starting with Oracle Database 10g Release 2 (10.2) with RAC, Oracle Clusterware provides for the creation of a mirrored Oracle Cluster Registry (OCR) file, enhancing cluster reliability. For the purpose of this example, I did choose to mirror the OCR file by keeping the default option of "Normal Redundancy":

Specify OCR Location: /u02/oradata/orcl/OCRFile
Specify OCR Mirror Location: /u02/oradata/orcl/OCRFile_mirror

Specify Voting Disk Location Starting with Oracle Database 10g Release 2 (10.2) with RAC, CSS has been modified to allow you to configure CSS with multiple voting disks. In 10g Release 1 (10.1), you could configure only one voting disk. By enabling multiple voting disk configuration, the redundant voting disks allow you to configure a RAC database with multiple voting disks on independent shared physical disks. This option facilitates the use of the iSCSI network protocol, and other Network Attached Storage (NAS) storage solutions. Note that to take advantage of the benefits of multiple voting disks, you must configure at least three voting disks. For the purpose of this example, I did choose to mirror the voting disk by keeping the default option of "Normal Redundancy":

Voting Disk Location: /u02/oradata/orcl/CSSFile
Additional Voting Disk 1 Location: /u02/oradata/orcl/CSSFile_mirror1
Additional Voting Disk 2 Location: /u02/oradata/orcl/CSSFile_mirror2

Summary Click Install to start the installation!
Execute Configuration Scripts After the installation has completed, you will be prompted to run the orainstRoot.sh and root.sh scripts. Open a new console window on each node in the RAC cluster, (starting with the node you are performing the install from), as the "root" user account.

Navigate to the /u01/app/oracle/oraInventory directory and run orainstRoot.sh ON ALL NODES in the RAC cluster.


Within the same new console window on each node in the RAC cluster, (starting with the node you are performing the install from), stay logged in as the "root" user account.

  As mentioned earlier in the "CSS Timeout Computation in 10g RAC 10.1.0.3" section, you should modify the entry for CSS misscount from 60 to 360 in the file $ORA_CRS_HOME/install/rootconfig as follows (on each node in the cluster). Change the following entry that can be found on line 356:

CLSCFG_MISCNT="-misscount 60"

to

CLSCFG_MISCNT="-misscount 360"
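
If you prefer to make this change from the command line instead of an editor, a one-liner such as the following accomplishes the same thing on each node (a sketch; back up the file first and confirm the change before running root.sh):

# cp $ORA_CRS_HOME/install/rootconfig $ORA_CRS_HOME/install/rootconfig.orig
# sed -i 's/-misscount 60/-misscount 360/' $ORA_CRS_HOME/install/rootconfig
# grep misscount $ORA_CRS_HOME/install/rootconfig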

Now, navigate to the /u01/app/oracle/product/crs directory and locate the root.sh file for each node in the cluster - (starting with the node you are performing the install from). Run the root.sh file ON ALL NODES in the RAC cluster ONE AT A TIME.

You will receive several warnings while running the root.sh script on all nodes. These warnings can be safely ignored.

The root.sh script may take a while to run. When running root.sh on the last node, you will receive a critical error and the output should look like:

...
Expecting the CRS daemons to be up within 600 seconds.
CSS is active on these nodes.
    linux1
    linux2
CSS is active on all nodes.
Waiting for the Oracle CRSD and EVMD to start
Oracle CRS stack installed and running under init(1M)
Running vipca(silent) for configuring nodeapps
The given interface(s), "eth0" is not public. Public interfaces should be used to configure virtual IPs.

This issue is specific to Oracle 10.2.0.1 only (noted in BUG 4437727) and needs to be resolved before continuing. The easiest workaround is to re-run vipca (GUI) manually as root from the last node on which the error occurred. Please keep in mind that vipca is a GUI, so you will need to set your DISPLAY variable to point to your X server:

# $ORA_CRS_HOME/bin/vipca

When the "VIP Configuration Assistant" appears, this is how I answered the screen prompts:

   Welcome: Click Next
   Network interfaces: Select the public interface - eth0
   Virtual IPs for cluster nodes:
       Node Name: linux1
       IP Alias Name: linux1-vip
       IP Address: 192.168.1.200
       Subnet Mask: 255.255.255.0

       Node Name: linux2
       IP Alias Name: linux2-vip
       IP Address: 192.168.1.201
       Subnet Mask: 255.255.255.0

   Summary: Click Finish
   Configuration Assistant Progress Dialog: Click OK after configuration is complete.
   Configuration Results: Click Exit

Go back to the OUI and acknowledge the "Execute Configuration scripts" dialog window.

End of installation At the end of the installation, exit from the OUI.


Verify Oracle Clusterware / CSS misscount value

As previously discussed in the section "CSS Timeout Computation in 10g RAC 10.1.0.3", I mentioned the need to modify the CSS misscount value from its default value of 60 to 360 (or higher). Within that section I discussed how this can be accomplished by modifying the $ORA_CRS_HOME/install/rootconfig script before running root.sh on each node in the cluster. If you were not able to modify the CSS misscount value before running root.sh, you can still perform this action by using the $ORA_CRS_HOME/bin/crsctl program. For example, to obtain the current value for CSS misscount, use the following:
   $ORA_CRS_HOME/bin/crsctl get css misscount
   360
If you get back a value of 60, you will want to modify it to 360 (as root) as follows:
   $ORA_CRS_HOME/bin/crsctl set css misscount 360


Verify Oracle Clusterware Installation

After the installation of Oracle Clusterware, we can run through several tests to verify the install was successful. Run the following commands on all nodes in the RAC cluster.

Check cluster nodes

$ /u01/app/oracle/product/crs/bin/olsnodes -n
linux1  1
linux2  2
Check Oracle Clusterware Auto-Start Scripts
$ ls -l /etc/init.d/init.*
-r-xr-xr-x  1 root root  1951 Oct  4 14:21 /etc/init.d/init.crs*
-r-xr-xr-x  1 root root  4714 Oct  4 14:21 /etc/init.d/init.crsd*
-r-xr-xr-x  1 root root 35394 Oct  4 14:21 /etc/init.d/init.cssd*
-r-xr-xr-x  1 root root  3190 Oct  4 14:21 /etc/init.d/init.evmd*
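
As an additional sanity check, the overall health of the Clusterware daemons can be queried with crsctl; a sketch, with output that should resemble the following on a healthy node:

$ /u01/app/oracle/product/crs/bin/crsctl check crs
CSS appears healthy
CRS appears healthy
EVM appears healthy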



Install Oracle10g Database Software


  Perform the following installation procedures from only one node in the cluster! The Oracle database software will be installed to all other nodes in the cluster by the Oracle Universal Installer.


Overview

After successfully installing the Oracle Clusterware software, the next step is to install the Oracle10g Release 2 Database Software (10.2.0.1.0) with Real Application Clusters (RAC).

  For the purpose of this example, we will forgo the "Create Database" option when installing the Oracle10g Release 2 software. We will, instead, create the database using the Database Creation Assistant (DBCA) after the Oracle10g Database Software install.

Like the Oracle Clusterware install (previous section), the Oracle10g database software only needs to be run from one node. The OUI will copy the software packages to all nodes configured in the RAC cluster.


Verifying Terminal Shell Environment

As discussed in the previous section, (Install Oracle10g Clusterware Software), the terminal shell environment needs to be configured for remote access and user equivalence to all nodes in the cluster before running the Oracle Universal Installer. Note that you can use the same terminal shell session as in the previous section, in which case you do not have to repeat any of the actions described below with regards to setting up remote access and the DISPLAY variable:

Login as the oracle User Account and Set DISPLAY (if necessary)

# su - oracle

$ # IF YOU ARE USING A REMOTE CLIENT TO CONNECT TO THE
$ # NODE PERFORMING THE INSTALL
$ DISPLAY=<your local workstation>:0.0
$ export DISPLAY

Verify Remote Access / User Equivalence

Verify you are able to run the Secure Shell commands (ssh or scp) or the Remote Shell commands (rsh and rcp) on the Linux server you will be running the Oracle Universal Installer from against all other Linux servers in the cluster without being prompted for a password.

When using the secure shell method, user equivalence will need to be enabled on any new terminal shell session before attempting to run the OUI. To enable user equivalence for the current terminal shell session, perform the following steps remembering to enter the pass phrase for each key that you generated when prompted:

$ exec /usr/bin/ssh-agent $SHELL
$ /usr/bin/ssh-add
Enter passphrase for /u01/app/oracle/.ssh/id_rsa: xxxxx
Identity added: /u01/app/oracle/.ssh/id_rsa (/u01/app/oracle/.ssh/id_rsa)
Identity added: /u01/app/oracle/.ssh/id_dsa (/u01/app/oracle/.ssh/id_dsa)

$ ssh linux1 "date;hostname"
Wed Sep 13 18:27:17 EDT 2006
linux1

$ ssh linux2 "date;hostname"
Wed Sep 13 18:28:00 EDT 2006
linux2

When using the remote shell method, user equivalence is generally defined in the /etc/hosts.equiv file for the oracle user account and is enabled on all new terminal shell sessions:

$ rsh linux1 "date;hostname"
Tue Sep 12 12:37:17 EDT 2006
linux1

$ rsh linux2 "date;hostname"
Tue Sep 12 12:38:00 EDT 2006
linux2


Install Oracle10g Release 2 Database Software

The following tasks are used to install the Oracle10g Release 2 Database Software:
$ cd ~oracle
$ /u01/app/oracle/orainstall/database/runInstaller -ignoreSysPrereqs

Screen Name Response
Welcome Screen Click Next
Select Installation Type I selected the Enterprise Edition option. If you need other components like Oracle Label Security or if you want to simply customize the environment, select Custom.
Specify Home Details Set the Name and Path for the ORACLE_HOME as follows:
   Name: OraDb10g_home1
   Location: /u01/app/oracle/product/10.2.0/db_1
Specify Hardware Cluster Installation Mode Select the Cluster Installation option then select all nodes available. Click Select All to select all servers: linux1 and linux2.

  If the installation stops here and the status of any of the RAC nodes is "Node not reachable", perform the following checks:

  • Ensure the Oracle Clusterware is running on the node in question.
  • Ensure you are able to reach the node in question from the node you are performing the installation from.
Product-Specific Prerequisite Checks The installer will run through a series of checks to determine if the node meets the minimum requirements for installing and configuring the Oracle database software. If any of the checks fail, you will need to manually verify the check that failed by clicking on the checkbox.

It is possible to receive an error about the available swap space not meeting its minimum requirements:

"Checking available swap space requirements... Expected result: 3036MB Actual Result: 1983MB"

In most cases, you will have enough swap space for this test configuration even though (as shown above) it falls short of the installer's expected minimum, and the warning can be safely ignored. Simply click the check-box for "Checking available swap space requirements..." and click Next to continue.

Select Database Configuration Select the option to Install database Software only.

Remember that we will create the clustered database as a separate step using dbca.

Summary Click Install to start the installation!
Root Script Window - Run root.sh After the installation has completed, you will be prompted to run the root.sh script. It is important to keep in mind that the root.sh script will need to be run ON ALL NODES in the RAC cluster ONE AT A TIME starting with the node you are running the database installation from.

First, open a new console window on the node you are installing the Oracle10g database software from as the root user account. For me, this was "linux1".

Navigate to the /u01/app/oracle/product/10.2.0/db_1 directory and run root.sh (a short example is shown after this table).

After running the root.sh script on all nodes in the cluster, go back to the OUI and acknowledge the "Execute Configuration scripts" dialog window.

End of installation At the end of the installation, exit from the OUI.
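As a brief illustration of the root.sh step described in the table above (the script's prompts and output are not reproduced here), run the following as the root user, first on linux1 and then on linux2:

# cd /u01/app/oracle/product/10.2.0/db_1
# ./root.sh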



Install Oracle10g Companion CD Software


  Perform the following installation procedures from only one node in the cluster! The Oracle10g Companion CD software will be installed to all other nodes in the cluster by the Oracle Universal Installer.


Overview

After successfully installing the Oracle Database software, the next step is to install the Oracle10g Release 2 Companion CD software (10.2.0.1.0).

Please keep in mind that this is an optional step. For the purpose of this article, my testing database will often make use of the Java Virtual Machine (Java VM) and Oracle interMedia and therefore will require the installation of the Oracle Database 10g Companion CD. The type of installation to perform will be the Oracle Database 10g Products installation type.

This installation type includes the Natively Compiled Java Libraries (NCOMP) files to improve Java performance. If you do not install the NCOMP files, the "ORA-29558:JAccelerator (NCOMP) not installed" error occurs when a database that uses Java VM is upgraded to the patch release.

Like the Oracle Clusterware and Database installs (previous sections), the Oracle10g Companion CD software install only needs to be run from one node. The OUI will copy the software packages to all nodes configured in the RAC cluster.


Verifying Terminal Shell Environment

As discussed in the previous section (Install Oracle10g Database Software), the terminal shell environment needs to be configured for remote access and user equivalence to all nodes in the cluster before running the Oracle Universal Installer. Note that you can utilize the same terminal shell session used in the previous section, in which case you do not have to take any of the actions described below with regard to setting up remote access and the DISPLAY variable:

Login as the oracle User Account and Set DISPLAY (if necessary)

# su - oracle

$ # IF YOU ARE USING A REMOTE CLIENT TO CONNECT TO THE
$ # NODE PERFORMING THE INSTALL
$ DISPLAY=<your local workstation>:0.0
$ export DISPLAY

Verify Remote Access / User Equivalence

Verify that you are able to run the Secure Shell commands (ssh or scp) or the Remote Shell commands (rsh and rcp) against all other Linux servers in the cluster from the Linux server you will be running the Oracle Universal Installer on, without being prompted for a password.

When using the secure shell method, user equivalence will need to be enabled on any new terminal shell session before attempting to run the OUI. To enable user equivalence for the current terminal shell session, perform the following steps remembering to enter the pass phrase for each key that you generated when prompted:

$ exec /usr/bin/ssh-agent $SHELL
$ /usr/bin/ssh-add
Enter passphrase for /u01/app/oracle/.ssh/id_rsa: xxxxx
Identity added: /u01/app/oracle/.ssh/id_rsa (/u01/app/oracle/.ssh/id_rsa)
Identity added: /u01/app/oracle/.ssh/id_dsa (/u01/app/oracle/.ssh/id_dsa)

$ ssh linux1 "date;hostname"
Wed Sep 13 18:27:17 EDT 2006
linux1

$ ssh linux2 "date;hostname"
Wed Sep 13 18:28:00 EDT 2006
linux2

When using the remote shell method, user equivalence is generally defined in the /etc/hosts.equiv file for the oracle user account and is enabled on all new terminal shell sessions:

$ rsh linux1 "date;hostname"
Tue Sep 12 12:37:17 EDT 2006
linux1

$ rsh linux2 "date;hostname"
Tue Sep 12 12:38:00 EDT 2006
linux2


Install Oracle10g Companion CD Software

The following tasks are used to install the Oracle10g Companion CD Software:
$ cd ~oracle
$ /u01/app/oracle/orainstall/companion/runInstaller -ignoreSysPrereqs

Screen Name Response
Welcome Screen Click Next
Select a Product to Install Select the Oracle Database 10g Products 10.2.0.1.0 option.
Specify Home Details Set the destination for the ORACLE_HOME Name and Path to that of the previous Oracle10g Database software install as follows:
   Name: OraDb10g_home1
   Path: /u01/app/oracle/product/10.2.0/db_1
Specify Hardware Cluster Installation Mode The Cluster Installation option will be selected along with all of the available nodes in the cluster by default. Stay with these default options and click Next to continue.

  If the installation stops here and the status of any of the RAC nodes is "Node not reachable", perform the following checks:

  • Ensure the Oracle Clusterware is running on the node in question.
  • Ensure you are able to reach the node in question from the node you are performing the installation from.
Product-Specific Prerequisite Checks The installer will run through a series of checks to determine if the node meets the minimum requirements for installing and configuring the Oracle10g Companion CD Software. If any of the checks fail, you will need to manually verify the check that failed by clicking on the checkbox. For my installation, all checks passed with no problems.

Click Next to continue.

Summary On the Summary screen, click Install to start the installation!
End of installation At the end of the installation, exit from the OUI.



Create TNS Listener Process


  Perform the following configuration procedures from only one node in the cluster! The Network Configuration Assistant (NETCA) will set up the TNS listener in a clustered configuration on all nodes in the cluster.


Overview

The Database Configuration Assistant (DBCA) requires the Oracle TNS Listener process to be configured and running on all nodes in the RAC cluster before it can create the clustered database.

The process of creating the TNS listener only needs to be performed from one of the nodes in the RAC cluster. All changes will be made and replicated to all nodes in the cluster. On one of the nodes (I will be using linux1), bring up the Network Configuration Assistant (NETCA) and run through the process of creating a new TNS listener and configuring the node for local access.


Verifying Terminal Shell Environment

As discussed in the previous section (Install Oracle10g Companion CD Software), the terminal shell environment needs to be configured for remote access and user equivalence to all nodes in the cluster before running the Network Configuration Assistant (NETCA). Note that you can utilize the same terminal shell session used in the previous section, in which case you do not have to take any of the actions described below with regard to setting up remote access and the DISPLAY variable:

Login as the oracle User Account and Set DISPLAY (if necessary)

# su - oracle

$ # IF YOU ARE USING A REMOTE CLIENT TO CONNECT TO THE
$ # NODE PERFORMING THE INSTALL
$ DISPLAY=<your local workstation>:0.0
$ export DISPLAY

Verify Remote Access / User Equivalence

Verify that you are able to run the Secure Shell commands (ssh or scp) or the Remote Shell commands (rsh and rcp) against all other Linux servers in the cluster from the Linux server you will be running the Oracle Universal Installer on, without being prompted for a password.

When using the secure shell method, user equivalence will need to be enabled on any new terminal shell session before attempting to run the OUI. To enable user equivalence for the current terminal shell session, perform the following steps remembering to enter the pass phrase for each key that you generated when prompted:

$ exec /usr/bin/ssh-agent $SHELL
$ /usr/bin/ssh-add
Enter passphrase for /u01/app/oracle/.ssh/id_rsa: xxxxx
Identity added: /u01/app/oracle/.ssh/id_rsa (/u01/app/oracle/.ssh/id_rsa)
Identity added: /u01/app/oracle/.ssh/id_dsa (/u01/app/oracle/.ssh/id_dsa)

$ ssh linux1 "date;hostname"
Wed Sep 13 18:27:17 EDT 2006
linux1

$ ssh linux2 "date;hostname"
Wed Sep 13 18:28:00 EDT 2006
linux2

When using the remote shell method, user equivalence is generally defined in the /etc/hosts.equiv file for the oracle user account and is enabled on all new terminal shell sessions:

$ rsh linux1 "date;hostname"
Tue Sep 12 12:37:17 EDT 2006
linux1

$ rsh linux2 "date;hostname"
Tue Sep 12 12:38:00 EDT 2006
linux2


Run the Network Configuration Assistant

To start the NETCA, run the following GUI utility as the oracle user account:
$ netca &

The following table walks you through the process of creating a new Oracle listener for our RAC environment.

Screen Name Response
Select the Type of Oracle Net Services Configuration Select Cluster configuration.
Select the nodes to configure Select all of the nodes: linux1 and linux2.
Type of Configuration Select Listener configuration.
Listener Configuration - Next 6 Screens The following screens are now like any other normal listener configuration. You can simply accept the default parameters for the next six screens:
   What do you want to do: Add
   Listener name: LISTENER
   Selected protocols: TCP
   Port number: 1521
   Configure another listener: No
   Listener configuration complete! [ Next ]
You will be returned to this Welcome (Type of Configuration) Screen.
Type of Configuration Select Naming Methods configuration.
Naming Methods Configuration The following screens are:
   Selected Naming Methods: Local Naming
   Naming Methods configuration complete! [ Next ]
You will be returned to this Welcome (Type of Configuration) Screen.
Type of Configuration Click Finish to exit the NETCA.


Verify TNS Listener Configuration

The Oracle TNS listener process should now be running on all nodes in the RAC cluster:
$ hostname
linux1

$ ps -ef | grep lsnr | grep -v 'grep' | grep -v 'ocfs' | awk '{print $9}'
LISTENER_LINUX1

=====================

$ hostname
linux2

$ ps -ef | grep lsnr | grep -v 'grep' | grep -v 'ocfs' | awk '{print $9}'
LISTENER_LINUX2
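If you prefer to query the listener directly rather than relying on the process list, lsnrctl can be used for the same verification. The listener names below are the ones NETCA generated for this configuration (run the first on linux1 and the second on linux2); output is not shown here:

$ lsnrctl status LISTENER_LINUX1

$ lsnrctl status LISTENER_LINUX2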



Create the Oracle Cluster Database


  The database creation process should only be performed from one node in the cluster!


Overview

We will be using the Oracle Database Configuration Assistant (DBCA) to create the clustered database.

Before executing the Database Configuration Assistant, make sure that $ORACLE_HOME and $PATH are set appropriately for the $ORACLE_BASE/product/10.2.0/db_1 environment.

You should also verify that all services we have installed up to this point (Oracle TNS listener, Oracle Clusterware processes, etc.) are running before attempting to start the clustered database creation process.
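If the oracle user's login scripts do not already set these variables, a minimal sketch (based on the directory layout used in this article) for the current session would be:

$ export ORACLE_BASE=/u01/app/oracle
$ export ORACLE_HOME=$ORACLE_BASE/product/10.2.0/db_1
$ export PATH=$ORACLE_HOME/bin:$PATH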


Verifying Terminal Shell Environment

As discussed in the previous section (Create TNS Listener Process), the terminal shell environment needs to be configured for remote access and user equivalence to all nodes in the cluster before running the Database Configuration Assistant (DBCA). Note that you can utilize the same terminal shell session used in the previous section, in which case you do not have to take any of the actions described below with regard to setting up remote access and the DISPLAY variable:

Login as the oracle User Account and Set DISPLAY (if necessary)

# su - oracle

$ # IF YOU ARE USING A REMOTE CLIENT TO CONNECT TO THE
$ # NODE PERFORMING THE INSTALL
$ DISPLAY=<your local workstation>:0.0
$ export DISPLAY

Verify Remote Access / User Equivalence

Verify that you are able to run the Secure Shell commands (ssh or scp) or the Remote Shell commands (rsh and rcp) against all other Linux servers in the cluster from the Linux server you will be running the Oracle Universal Installer on, without being prompted for a password.

When using the secure shell method, user equivalence will need to be enabled on any new terminal shell session before attempting to run the OUI. To enable user equivalence for the current terminal shell session, perform the following steps remembering to enter the pass phrase for each key that you generated when prompted:

$ exec /usr/bin/ssh-agent $SHELL
$ /usr/bin/ssh-add
Enter passphrase for /u01/app/oracle/.ssh/id_rsa: xxxxx
Identity added: /u01/app/oracle/.ssh/id_rsa (/u01/app/oracle/.ssh/id_rsa)
Identity added: /u01/app/oracle/.ssh/id_dsa (/u01/app/oracle/.ssh/id_dsa)

$ ssh linux1 "date;hostname"
Wed Sep 13 18:27:17 EDT 2006
linux1

$ ssh linux2 "date;hostname"
Wed Sep 13 18:28:00 EDT 2006
linux2

When using the remote shell method, user equivalence is generally defined in the /etc/hosts.equiv file for the oracle user account and is enabled on all new terminal shell sessions:

$ rsh linux1 "date;hostname"
Tue Sep 12 12:37:17 EDT 2006
linux1

$ rsh linux2 "date;hostname"
Tue Sep 12 12:38:00 EDT 2006
linux2


Create the Clustered Database

To start the database creation process, run the Database Configuration Assistant as the "oracle" UNIX user account as follows:

$ dbca &
Screen Name Response
Welcome Screen Select Oracle Real Application Clusters database.
Operations Select Create a Database.
Node Selection Click the Select All button to select all servers: linux1 and linux2.
Database Templates Select Custom Database
Database Identification Select:
   Global Database Name: orcl.idevelopment.info
   SID Prefix: orcl

  I used idevelopment.info for the database domain. You may use any domain. Keep in mind that this domain does not have to be a valid DNS domain.

Management Option Leave the default options here, which are to Configure the Database with Enterprise Manager / Use Database Control for Database Management.
Database Credentials I selected to Use the Same Password for All Accounts. Enter the password (twice) and make sure the password does not start with a digit.
Storage Options For this article, we will select to use Automatic Storage Management (ASM).
Create ASM Instance Supply the SYS password to use for the new ASM instance. Also, starting with Oracle10g Release 2, the ASM instance server parameter file (SPFILE) needs to be on a shared disk. You will need to modify the default entry for "Create server parameter file (SPFILE)" to reside on the OCFS2 partition as follows: /u02/oradata/orcl/dbs/spfile+ASM.ora. All other options can stay at their defaults.

You will then be prompted with a dialog box asking if you want to create and start the ASM instance. Select the OK button to acknowledge this dialog.

The DBCA will now create and start the ASM instance on all nodes in the RAC cluster.

ASM Disk Groups To start, click the Create New button. This will bring up the "Create Disk Group" window with the three volumes we configured earlier using ASMLib.

If the volumes we created earlier in this article do not show up in the "Select Member Disks" window: (ORCL:VOL1, ORCL:VOL2, and ORCL:VOL3) then click on the "Change Disk Discovery Path" button and input "ORCL:VOL*".

For the first "Disk Group Name", I used the string ORCL_DATA1. Select the first two ASM volumes (ORCL:VOL1 and ORCL:VOL2) in the "Select Member Disks" window. Keep the "Redundancy" setting to Normal.

After verifying all values in this window are correct, click the OK button. This will present the "ASM Disk Group Creation" dialog. When the ASM Disk Group Creation process is finished, you will be returned to the "ASM Disk Groups" windows.

Click the Create New button again. For the second "Disk Group Name", I used the string FLASH_RECOVERY_AREA. Select the last ASM volume (ORCL:VOL3) in the "Select Member Disks" window. Set the "Redundancy" option to External.

After verifying all values in this window are correct, click the OK button. This will present the "ASM Disk Group Creation" dialog.

When the ASM Disk Group Creation process is finished, you will be returned to the "ASM Disk Groups" window with two disk groups created and selected. Select only one of the disk groups by using the checkbox next to the newly created Disk Group Name ORCL_DATA1 (ensure that the disk group for FLASH_RECOVERY_AREA is not selected) and click Next to continue.

Database File Locations I selected to use the default which is Use Oracle-Managed Files:
   Database Area: +ORCL_DATA1
Recovery Configuration Check the option for Specify Flash Recovery Area.

For the Flash Recovery Area, click the [Browse] button and select the disk group name +FLASH_RECOVERY_AREA.

My disk group has a size of about 100GB. I used a Flash Recovery Area Size of 90GB (91136 MB).

Database Content I left all of the Database Components (and destination tablespaces) set to their default value, although it is perfectly OK to select the Sample Schemas. This option is available since we installed the Oracle Companion CD software.
Database Services For this test configuration, click Add, and enter the Service Name: orcltest. Leave both instances set to Preferred and for the "TAF Policy" select Basic.
Initialization Parameters Change any parameters for your environment. I left them all at their default settings.
Database Storage Change any parameters for your environment. I left them all at their default settings.
Creation Options Keep the default option Create Database selected and click Finish to start the database creation process.

Click OK on the "Summary" screen.

End of Database Creation At the end of the database creation, exit from the DBCA.

When exiting the DBCA, another dialog will come up indicating that it is starting all Oracle instances and HA service "orcltest". This may take several minutes to complete. When finished, all windows and dialog boxes will disappear.

When the Oracle Database Configuration Assistant has completed, you will have a fully functional Oracle RAC cluster running!
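At this point it can be reassuring to confirm that all of the cluster resources (instances, ASM, listeners, VIPs, and so on) are registered with Oracle Clusterware and ONLINE. One quick way to do that, assuming the Clusterware home used throughout this article, is:

$ /u01/app/oracle/product/crs/bin/crs_stat -t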


Create the orcltest Service

During the creation of the Oracle clustered database, we added a service named "orcltest" that will be used to connect to the database with TAF enabled. During several of my installs, the service was added to the tnsnames.ora, but was never updated as a service for each Oracle instance.

Use the following to verify the orcltest service was successfully added:

SQL> show parameter service

NAME                 TYPE        VALUE
-------------------- ----------- --------------------------------
service_names        string      orcl.idevelopment.info, orcltest

If the only service defined was for orcl.idevelopment.info, then you will need to manually add the service to both instances:

SQL> show parameter service

NAME                 TYPE        VALUE
-------------------- ----------- --------------------------
service_names        string      orcl.idevelopment.info

SQL> alter system set service_names = 
  2  'orcl.idevelopment.info, orcltest.idevelopment.info' scope=both;
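Because each instance registers its services with the listeners dynamically (through PMON), another quick way to confirm the change has taken effect is to ask the listener which services it is handling. A sketch, using the listener name created earlier on linux1 (output not shown):

$ lsnrctl services LISTENER_LINUX1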



Verify TNS Networking Files


  Ensure that the TNS networking files are configured on all nodes in the cluster!


listener.ora

We already covered how to create a TNS listener configuration file (listener.ora) for a clustered environment in the section Create TNS Listener Process. The listener.ora file should be properly configured and no modifications should be needed.

For clarity, I included a copy of the listener.ora file from my node linux1:

listener.ora
# listener.ora.linux1 Network Configuration File:
# /u01/app/oracle/product/10.2.0/db_1/network/admin/listener.ora.linux1
# Generated by Oracle configuration tools.

LISTENER_LINUX1 =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = TCP)(HOST = linux1-vip)(PORT = 1521)(IP = FIRST))
      (ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.1.100)(PORT = 1521)(IP = FIRST))
    )
  )

SID_LIST_LISTENER_LINUX1 =
  (SID_LIST =
    (SID_DESC =
      (SID_NAME = PLSExtProc)
      (ORACLE_HOME = /u01/app/oracle/product/10.2.0/db_1)
      (PROGRAM = extproc)
    )
  )


tnsnames.ora

Here is a copy of my tnsnames.ora file that was configured by Oracle and can be used for testing the Transparent Application Failover (TAF). This file should already be configured on each node in the RAC cluster.

You can include any of these entries on other client machines that need access to the clustered database.

tnsnames.ora
# tnsnames.ora Network Configuration File:
# /u01/app/oracle/product/10.2.0/db_1/network/admin/tnsnames.ora
# Generated by Oracle configuration tools.

LISTENERS_ORCL =
  (ADDRESS_LIST =
    (ADDRESS = (PROTOCOL = TCP)(HOST = linux1-vip)(PORT = 1521))
    (ADDRESS = (PROTOCOL = TCP)(HOST = linux2-vip)(PORT = 1521))
  )

ORCL2 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = linux2-vip)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = orcl.idevelopment.info)
      (INSTANCE_NAME = orcl2)
    )
  )

ORCL1 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = linux1-vip)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = orcl.idevelopment.info)
      (INSTANCE_NAME = orcl1)
    )
  )

ORCLTEST =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = linux1-vip)(PORT = 1521))
    (ADDRESS = (PROTOCOL = TCP)(HOST = linux2-vip)(PORT = 1521))
    (LOAD_BALANCE = yes)
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = orcltest.idevelopment.info)
      (FAILOVER_MODE =
        (TYPE = SELECT)
        (METHOD = BASIC)
        (RETRIES = 180)
        (DELAY = 5)
      )
    )
  )

ORCL =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = linux1-vip)(PORT = 1521))
    (ADDRESS = (PROTOCOL = TCP)(HOST = linux2-vip)(PORT = 1521))
    (LOAD_BALANCE = yes)
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = orcl.idevelopment.info)
    )
  )

EXTPROC_CONNECTION_DATA =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC0))
    )
    (CONNECT_DATA =
      (SID = PLSExtProc)
      (PRESENTATION = RO)
    )
  )


Connecting to Clustered Database From an External Client

This is an optional step, but I like to perform it in order to verify my TNS files are configured correctly. Use another machine (e.g., a Windows machine connected to the network) that has Oracle installed (either 9i or 10g) and add the TNS entries that were created for the clustered database (from the tnsnames.ora on either of the nodes in the cluster).

Then try to connect to the clustered database using all available service names defined in the tnsnames.ora file:

C:\> sqlplus system/manager@orcl2
C:\> sqlplus system/manager@orcl1
C:\> sqlplus system/manager@orcltest
C:\> sqlplus system/manager@orcl



Create / Alter Tablespaces

When creating the clustered database, we left all tablespaces set to their default size. Since I am using a large drive for the shared storage, I like to make a sizable testing database.

This section provides several optional SQL commands I used to modify and create all tablespaces for my testing database. Please keep in mind that the database file names (OMF files) I used in this example may differ from what Oracle creates for your environment. The following query can be used to determine the file names for your environment:

SQL> select tablespace_name, file_name
  2  from dba_data_files
  3  union
  4  select tablespace_name, file_name
  5  from dba_temp_files;

TABLESPACE_NAME     FILE_NAME
--------------- --------------------------------------------------
EXAMPLE         +ORCL_DATA1/orcl/datafile/example.257.570913311
INDX            +ORCL_DATA1/orcl/datafile/indx.270.570920045
SYSAUX          +ORCL_DATA1/orcl/datafile/sysaux.260.570913287
SYSTEM          +ORCL_DATA1/orcl/datafile/system.262.570913215
TEMP            +ORCL_DATA1/orcl/tempfile/temp.258.570913303
UNDOTBS1        +ORCL_DATA1/orcl/datafile/undotbs1.261.570913263
UNDOTBS2        +ORCL_DATA1/orcl/datafile/undotbs2.265.570913331
USERS           +ORCL_DATA1/orcl/datafile/users.264.570913355

$ sqlplus "/ as sysdba"

SQL> create user scott identified by tiger default tablespace users;
SQL> grant dba, resource, connect to scott;

SQL> alter database datafile '+ORCL_DATA1/orcl/datafile/users.264.570913355' resize 1024m;
SQL> alter tablespace users add datafile '+ORCL_DATA1' size 1024m autoextend off;

SQL> create tablespace indx datafile '+ORCL_DATA1' size 1024m
  2  autoextend on next 50m maxsize unlimited
  3  extent management local autoallocate
  4  segment space management auto;

SQL> alter database datafile '+ORCL_DATA1/orcl/datafile/system.262.570913215' resize 800m;

SQL> alter database datafile '+ORCL_DATA1/orcl/datafile/sysaux.260.570913287' resize 500m;

SQL> alter tablespace undotbs1 add datafile '+ORCL_DATA1' size 1024m
  2  autoextend on next 50m maxsize 2048m;

SQL> alter tablespace undotbs2 add datafile '+ORCL_DATA1' size 1024m
  2  autoextend on next 50m maxsize 2048m;

SQL> alter database tempfile '+ORCL_DATA1/orcl/tempfile/temp.258.570913303' resize 1024m;

Here is a snapshot of the tablespaces I have defined for my test database environment:

Status    Tablespace Name TS Type      Ext. Mgt.  Seg. Mgt.    Tablespace Size    Used (in bytes) Pct. Used
--------- --------------- ------------ ---------- --------- ------------------ ------------------ ---------
ONLINE    UNDOTBS1        UNDO         LOCAL      MANUAL         1,283,457,024         85,065,728         7
ONLINE    SYSAUX          PERMANENT    LOCAL      AUTO             524,288,000        275,906,560        53
ONLINE    USERS           PERMANENT    LOCAL      AUTO           2,147,483,648            131,072         0
ONLINE    SYSTEM          PERMANENT    LOCAL      MANUAL           838,860,800        500,301,824        60
ONLINE    EXAMPLE         PERMANENT    LOCAL      AUTO             157,286,400         83,820,544        53
ONLINE    INDX            PERMANENT    LOCAL      AUTO           1,073,741,824             65,536         0
ONLINE    UNDOTBS2        UNDO         LOCAL      MANUAL         1,283,457,024          3,801,088         0
ONLINE    TEMP            TEMPORARY    LOCAL      MANUAL         1,073,741,824         27,262,976         3
                                                            ------------------ ------------------ ---------
avg                                                                                                      22
sum                                                              8,382,316,544        976,355,328

8 rows selected.



Verify the RAC Cluster & Database Configuration


  The following RAC verification checks should be performed on all nodes in the cluster! For this article, I will only be performing checks from linux1.


Overview

This section provides several srvctl commands and SQL queries that can be used to validate your Oracle10g RAC configuration.

  There are five node-level tasks defined for SRVCTL:

  • Adding and deleting node level applications.
  • Setting and unsetting the environment for node-level applications.
  • Administering node applications.
  • Administering ASM instances.
  • Starting and stopping a group of programs that includes virtual IP addresses, listeners, Oracle Notification Services, and Oracle Enterprise Manager agents (for maintenance purposes).


Status of all instances and services

$ srvctl status database -d orcl
Instance orcl1 is running on node linux1
Instance orcl2 is running on node linux2


Status of a single instance

$ srvctl status instance -d orcl -i orcl2
Instance orcl2 is running on node linux2


Status of a named service globally across the database

$ srvctl status service -d orcl -s orcltest
Service orcltest is running on instance(s) orcl2, orcl1


Status of node applications on a particular node

$ srvctl status nodeapps -n linux1
VIP is running on node: linux1
GSD is running on node: linux1
Listener is running on node: linux1
ONS daemon is running on node: linux1


Status of an ASM instance

$ srvctl status asm -n linux1
ASM instance +ASM1 is running on node linux1.


List all configured databases

$ srvctl config database
orcl


Display configuration for our RAC database

$ srvctl config database -d orcl
linux1 orcl1 /u01/app/oracle/product/10.2.0/db_1
linux2 orcl2 /u01/app/oracle/product/10.2.0/db_1


Display all services for the specified cluster database

$ srvctl config service -d orcl
orcltest PREF: orcl2 orcl1 AVAIL:


Display the configuration for node applications - (VIP, GSD, ONS, Listener)

$ srvctl config nodeapps -n linux1 -a -g -s -l
VIP exists.: /linux1-vip/192.168.1.200/255.255.255.0/eth0:eth1
GSD exists.
ONS daemon exists.
Listener exists.


Display the configuration for the ASM instance(s)

$ srvctl config asm -n linux1
+ASM1 /u01/app/oracle/product/10.2.0/db_1


All running instances in the cluster

SELECT
    inst_id
  , instance_number inst_no
  , instance_name inst_name
  , parallel
  , status
  , database_status db_status
  , active_state state
  , host_name host
FROM gv$instance
ORDER BY inst_id;

 INST_ID  INST_NO INST_NAME  PAR STATUS  DB_STATUS    STATE     HOST
-------- -------- ---------- --- ------- ------------ --------- -------
       1        1 orcl1      YES OPEN    ACTIVE       NORMAL    linux1
       2        2 orcl2      YES OPEN    ACTIVE       NORMAL    linux2


All database files in the ASM disk groups

select name from v$datafile
union
select member from v$logfile
union
select name from v$controlfile
union
select name from v$tempfile;

NAME
-------------------------------------------
+FLASH_RECOVERY_AREA/orcl/controlfile/current.258.570913191
+FLASH_RECOVERY_AREA/orcl/onlinelog/group_1.257.570913201
+FLASH_RECOVERY_AREA/orcl/onlinelog/group_2.256.570913211
+FLASH_RECOVERY_AREA/orcl/onlinelog/group_3.259.570918285
+FLASH_RECOVERY_AREA/orcl/onlinelog/group_4.260.570918295
+ORCL_DATA1/orcl/controlfile/current.259.570913189
+ORCL_DATA1/orcl/datafile/example.257.570913311
+ORCL_DATA1/orcl/datafile/indx.270.570920045
+ORCL_DATA1/orcl/datafile/sysaux.260.570913287
+ORCL_DATA1/orcl/datafile/system.262.570913215
+ORCL_DATA1/orcl/datafile/undotbs1.261.570913263
+ORCL_DATA1/orcl/datafile/undotbs1.271.570920865
+ORCL_DATA1/orcl/datafile/undotbs2.265.570913331
+ORCL_DATA1/orcl/datafile/undotbs2.272.570921065
+ORCL_DATA1/orcl/datafile/users.264.570913355
+ORCL_DATA1/orcl/datafile/users.269.570919829
+ORCL_DATA1/orcl/onlinelog/group_1.256.570913195
+ORCL_DATA1/orcl/onlinelog/group_2.263.570913205
+ORCL_DATA1/orcl/onlinelog/group_3.266.570918279
+ORCL_DATA1/orcl/onlinelog/group_4.267.570918289
+ORCL_DATA1/orcl/tempfile/temp.258.570913303

21 rows selected.


All ASM disks that belong to the 'ORCL_DATA1' disk group

SELECT path
FROM   v$asm_disk
WHERE  group_number IN (select group_number
                        from v$asm_diskgroup
                        where name = 'ORCL_DATA1');

PATH
----------------------------------
ORCL:VOL1
ORCL:VOL2



Starting / Stopping the Cluster

At this point, everything has been installed and configured for Oracle10g RAC. We have all of the required software installed and configured plus we have a fully functional clustered database.

With all of the work we have done up to this point, a popular question might be, "How do we start and stop services?". If you have followed the instructions in this article, all services should start automatically on each reboot of the Linux nodes. This would include Oracle Clusterware, all Oracle instances, Enterprise Manager Database Console, etc.

There are times, however, when you might want to shutdown a node and manually start it back up. Or you may find that Enterprise Manager is not running and need to start it. This section provides the commands (using SRVCTL) responsible for starting and stopping the cluster environment.

Ensure that you are logged in as the "oracle" UNIX user. I will be running all of the commands in this section from linux1:

# su - oracle

$ hostname
linux1


Stopping the Oracle10g RAC Environment

The first step is to stop the Enterprise Manager Database Console, followed by the Oracle instance. Once the instance (and related services) is down, bring down the ASM instance. Finally, shut down the node applications (Virtual IP, GSD, TNS Listener, and ONS).
$ export ORACLE_SID=orcl1
$ emctl stop dbconsole
$ srvctl stop instance -d orcl -i orcl1
$ srvctl stop asm -n linux1
$ srvctl stop nodeapps -n linux1
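Note that the commands above leave Oracle Clusterware itself running. If you also need to take the Clusterware stack down on a node (for example, before powering it off for maintenance), that is done as the root user; a minimal sketch, assuming the Clusterware home used throughout this article:

# /u01/app/oracle/product/crs/bin/crsctl stop crs

Use crsctl start crs to bring it back up manually; it will also start automatically on reboot through the init scripts verified earlier.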


Starting the Oracle10g RAC Environment

The first step is to start the node applications (Virtual IP, GSD, TNS Listener, and ONS). Once the node applications are successfully started, then bring up the ASM instance. Finally, bring up the Oracle instance (and related services) and the Enterprise Manager Database console.
$ export ORACLE_SID=orcl1
$ srvctl start nodeapps -n linux1
$ srvctl start asm -n linux1
$ srvctl start instance -d orcl -i orcl1
$ emctl start dbconsole


Start / Stop All Instances with SRVCTL

Start / Stop all of the instances and their enabled services. I just included this for fun as a way to bring down all instances!
$ srvctl start database -d orcl

$ srvctl stop database -d orcl



Transparent Application Failover - (TAF)


Overview

It is not uncommon for businesses of today to demand 99.99% or even 99.999% availability for their enterprise applications. Think about what it would take to ensure a downtime of no more than a half hour, or even no downtime at all, during the year. To answer many of these high availability requirements, businesses are investing in mechanisms that provide for automatic failover when one participating system fails. When considering the availability of the Oracle database, Oracle10g RAC provides a superior solution with its advanced failover mechanisms. Oracle10g RAC includes the required components that all work within a clustered configuration responsible for providing continuous availability - when one of the participating systems fails within the cluster, the users are automatically migrated to the other available systems.

A major component of Oracle10g RAC that is responsible for failover processing is the Transparent Application Failover (TAF) option. All database connections (and processes) that lose their connections are reconnected to another node within the cluster. The failover is completely transparent to the user.

This final section provides a short demonstration on how automatic failover works in Oracle10g RAC. Please note that a complete discussion of failover in Oracle10g RAC would be an article in its own right. My intention here is to present a brief overview and example of how it works.

One important note before continuing is that TAF happens automatically within the OCI libraries. This means that your application (client) code does not need to change in order to take advantage of TAF. Certain configuration steps, however, will need to be done on the Oracle TNS file tnsnames.ora.

  Keep in mind that, at the time of this article, applications using the Java thin (JDBC) driver cannot participate in TAF since the thin driver never reads the tnsnames.ora file.


Setup tnsnames.ora File

Before demonstrating TAF, we need to verify that a valid entry exists in the tnsnames.ora file on a non-RAC client machine (if you have a Windows machine lying around). Ensure that you have Oracle RDBMS software installed. (Actually, you only need a client install of the Oracle software.)

During the creation of the clustered database in this article, I created a new service that will be used for testing TAF named ORCLTEST. It provides all of the necessary configuration parameters for load balancing and failover. You can copy the contents of this entry to the %ORACLE_HOME%\network\admin\tnsnames.ora file on the client machine (my Windows laptop is being used in this example) in order to connect to the new Oracle clustered database:

tnsnames.ora File Entry for Clustered Database
...
ORCLTEST =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = linux1-vip)(PORT = 1521))
    (ADDRESS = (PROTOCOL = TCP)(HOST = linux2-vip)(PORT = 1521))
    (LOAD_BALANCE = yes)
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = orcltest.idevelopment.info)
      (FAILOVER_MODE =
        (TYPE = SELECT)
        (METHOD = BASIC)
        (RETRIES = 180)
        (DELAY = 5)
      )
    )
  )
...


SQL Query to Check the Session's Failover Information

The following SQL query can be used to check a session's failover type, failover method, and if a failover has occurred. We will be using this query throughout this example.
COLUMN instance_name    FORMAT a13
COLUMN host_name        FORMAT a9
COLUMN failover_method  FORMAT a15
COLUMN failed_over      FORMAT a11

SELECT
    instance_name
  , host_name
  , NULL AS failover_type
  , NULL AS failover_method
  , NULL AS failed_over
FROM v$instance
UNION
SELECT
    NULL
  , NULL
  , failover_type
  , failover_method
  , failed_over
FROM v$session
WHERE username = 'SYSTEM';

Transparent Application Failover Demonstration

From a Windows machine (or other non-RAC client machine), login to the clustered database using the orcltest service as the SYSTEM user:
C:\> sqlplus system/manager@orcltest

COLUMN instance_name    FORMAT a13
COLUMN host_name        FORMAT a9
COLUMN failover_method  FORMAT a15
COLUMN failed_over      FORMAT a11

SELECT
    instance_name
  , host_name
  , NULL AS failover_type
  , NULL AS failover_method
  , NULL AS failed_over
FROM v$instance
UNION
SELECT
    NULL
  , NULL
  , failover_type
  , failover_method
  , failed_over
FROM v$session
WHERE username = 'SYSTEM';


INSTANCE_NAME HOST_NAME FAILOVER_TYPE FAILOVER_METHOD FAILED_OVER
------------- --------- ------------- --------------- -----------
orcl1         linux1
                        SELECT        BASIC           NO
DO NOT log out of the above SQL*Plus session! Now that we have run the query (above), we will shut down the orcl1 instance on linux1 using the abort option. To perform this operation, we can use the srvctl command-line utility as follows:
# su - oracle
$ srvctl status database -d orcl
Instance orcl1 is running on node linux1
Instance orcl2 is running on node linux2

$ srvctl stop instance -d orcl -i orcl1 -o abort

$ srvctl status database -d orcl
Instance orcl1 is not running on node linux1
Instance orcl2 is running on node linux2
Now let's go back to our SQL session and rerun the SQL statement in the buffer:
COLUMN instance_name    FORMAT a13
COLUMN host_name        FORMAT a9
COLUMN failover_method  FORMAT a15
COLUMN failed_over      FORMAT a11

SELECT
    instance_name
  , host_name
  , NULL AS failover_type
  , NULL AS failover_method
  , NULL AS failed_over
FROM v$instance
UNION
SELECT
    NULL
  , NULL
  , failover_type
  , failover_method
  , failed_over
FROM v$session
WHERE username = 'SYSTEM';


INSTANCE_NAME HOST_NAME FAILOVER_TYPE FAILOVER_METHOD FAILED_OVER
------------- --------- ------------- --------------- -----------
orcl2         linux2
                        SELECT        BASIC           YES

SQL> exit
From the above demonstration, we can see that the session has now failed over to instance orcl2 on linux2.
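To return the cluster to its original state after the test, simply restart the aborted instance using the same srvctl syntax shown earlier and confirm that both instances are running again:

$ srvctl start instance -d orcl -i orcl1

$ srvctl status database -d orcl
Instance orcl1 is running on node linux1
Instance orcl2 is running on node linux2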



Conclusion

Oracle10g RAC allows the DBA to configure a database solution with superior fault tolerance and load balancing. Those DBAs, however, who want to become more familiar with the features and benefits of Oracle10g RAC will find that the cost of configuring even a small RAC cluster runs in the range of $15,000 to $20,000.

This article has hopefully given you an economical solution to setting up and configuring an inexpensive Oracle10g Release 2 RAC Cluster using CentOS 4 Linux (or Red Hat Enterprise Linux 4) and FireWire technology. The RAC solution presented in this article can be put together for around $1800 and will provide the DBA with a fully functional Oracle10g Release 2 RAC cluster. While this solution should be stable enough for testing and development, it should never be considered for a production environment.



Acknowledgements

An article of this magnitude and complexity is generally not the work of one person alone. Although I was able to author and successfully demonstrate the validity of the components that make up this configuration, there are several other individuals that deserve credit in making this article a success.

First, I would like to thank Werner Puschitz for his outstanding work on "Installing Oracle Database 10g with Real Application Cluster (RAC) on Red Hat Enterprise Linux Advanced Server 3". This article, along with several others of his, provided information on Oracle10g RAC that could not be found in any other Oracle documentation. Without his hard work and research into issues like configuring and installing the hangcheck-timer kernel module, properly configuring UNIX shared memory, and configuring ASMLib, this article may have never come to fruition. If you are interested in examining technical articles on Linux internals and in-depth Oracle configurations written by Werner Puschitz, please visit his excellent website at www.puschitz.com.

I would next like to thank Wim Coekaerts, Joel Becker, Manish Singh and the entire team at Oracle's Linux Projects Development Group. The professionals in this group made the job of upgrading the Linux kernel to support IEEE1394 devices with multiple logins (and several other significant modifications) a seamless task. The group provides the pre-compiled FireWire modules for Red Hat Enterprise Linux 4.2 (which also works with CentOS Linux) along with many other useful tools and documentation at oss.oracle.com.

In addition, I would like to thank the following vendors for generously supplying the hardware for this article; Maxtor, D-Link, SIIG, and LaCie.



About the Author

Jeffrey Hunter is an Oracle Certified Professional, Java Development Certified Professional, Author, and an Oracle ACE. Jeff currently works as a Senior Database Administrator for The DBA Zone, Inc. located in Pittsburgh, Pennsylvania. His work includes advanced performance tuning, Java and PL/SQL programming, developing high availability solutions, capacity planning, database security, and physical / logical database design in a UNIX, Linux, and Windows server environment. Jeff's other interests include mathematical encryption theory, programming language processors (compilers and interpreters) in Java and C, LDAP, writing web-based database administration tools, and of course Linux. He has been a Sr. Database Administrator and Software Engineer for over 18 years and maintains his own website at: http://www.iDevelopment.info. Jeff graduated from Stanislaus State University in Turlock, California, with a Bachelor's degree in Computer Science.



Copyright (c) 1998-2014 Jeffrey M. Hunter. All rights reserved.

All articles, scripts and material located at the Internet address of http://www.idevelopment.info is the copyright of Jeffrey M. Hunter and is protected under copyright laws of the United States. This document may not be hosted on any other site without my express, prior, written permission. Application to host any of the material elsewhere can be made by contacting me at jhunter@idevelopment.info.

I have made every effort and taken great care in making sure that the material included on my web site is technically accurate, but I disclaim any and all responsibility for any loss, damage or destruction of data or any other property which may arise from relying on it. I will in no case be liable for any monetary damages arising from such loss, damage or destruction.

Last modified on Sunday, 18-Mar-2012 22:19:10 EDT