Connecting to an iSCSI Target with Open-iSCSI Initiator using Linux
by Jeff Hunter, Sr. Database Administrator
This article shows you how to use the Open-iSCSI Initiator software to add a new volume on Linux from an iSCSI target; namely Openfiler. This new volume will be formatted with an ext3 file system and can then be used to store any type of file; in this example, it will store Oracle database files.
Included in this article will be how to configure an iSCSI target on Openfiler, configure the Open-iSCSI Initiator on the Oracle database server to discover and add a new volume, how to format the new volume with an ext3 file system, and finally how to configure the file system to be mounted on each boot.
Before discussing the tasks in this article, let's take a conceptual look at what my environment looks like. An Oracle database is installed and configured on the node linux3 while all network storage is being provided by Openfiler on the node openfiler1. A new 36GB iSCSI logical volume will be carved out on Openfiler which will then be discovered by the Oracle database server linux3. After discovering the new iSCSI volume from linux3, the volume will be partitioned, formatted with an ext3 file system, and mounted to the directory /u03. All machines in my example configuration have two network interfaces: one for the public network (192.168.1.0) and a second for storage traffic (192.168.2.0):
Figure 1: Example iSCSI Hardware Configuration
The Linux Open-iSCSI Initiator is a built-in package included with Red Hat Enterprise Linux 5 and later; however, in most cases it does not get installed by default. The Open-iSCSI Initiator software is included in the iscsi-initiator-utils package which can be found on CD #1. You can connect to an iSCSI volume at a shell prompt with just a few commands as will be demonstrated in this article.
For more information and download location of Open-iSCSI, please visit: http://www.open-iscsi.org.
Powered by rPath Linux, Openfiler is a free browser-based network storage management utility that delivers file-based Network Attached Storage (NAS) and block-based Storage Area Networking (SAN) in a single framework. Openfiler supports CIFS, NFS, HTTP/DAV, and FTP; however, we will only be making use of its iSCSI capabilities to implement an inexpensive SAN for the network storage to be used for Oracle database files.
For many years, the only technology that existed for building a network based storage solution was a Fibre Channel Storage Area Network (FC SAN). Based on an earlier set of ANSI protocols called Fiber Distributed Data Interface (FDDI), Fibre Channel was developed to move SCSI commands over a storage network.
Several of the advantages to FC SAN include greater performance, increased disk utilization, improved availability, better scalability, and, most important to us, support for server clustering! Still today, however, FC SANs suffer from three major disadvantages. The first is price. While the costs involved in building a FC SAN have come down in recent years, the cost of entry still remains prohibitive for small companies with limited IT budgets. The second is incompatible hardware components. Since its adoption, many product manufacturers have interpreted the Fibre Channel specifications differently from each other, which has resulted in scores of interconnect problems. When purchasing Fibre Channel components from a common manufacturer, this is usually not a problem. The third disadvantage is the fact that a Fibre Channel network is not Ethernet! It requires a separate network technology along with a second skill set that needs to exist among the data center staff.
With the popularity of Gigabit Ethernet and the demand for lower cost, Fibre Channel has recently been given a run for its money by iSCSI-based storage systems. Today, iSCSI SANs remain the leading competitor to FC SANs.
Ratified on February 11, 2003 by the Internet Engineering Task Force (IETF), the Internet Small Computer System Interface, better known as iSCSI, is an Internet Protocol (IP)-based storage networking standard for establishing and managing connections between IP-based storage devices, hosts, and clients. iSCSI is a data transport protocol defined in the SCSI-3 specifications framework and is similar to Fibre Channel in that it is responsible for carrying block-level data over a storage network. Block-level communication means that data is transferred between the host and the client in chunks called blocks. Database servers depend on this type of communication (as opposed to the file level communication used by most NAS systems) in order to work properly. Like a FC SAN, an iSCSI SAN should be a separate physical network devoted entirely to storage, however, its components can be much the same as in a typical IP network (LAN).
While iSCSI has a promising future, many of its early critics were quick to point out some of its inherent shortcomings with regards to performance. The beauty of iSCSI is its ability to utilize an already familiar IP network as its transport mechanism. The TCP/IP protocol, however, is very complex and CPU intensive. With iSCSI, most of the processing of the data (both TCP and iSCSI) is handled in software and is much slower than Fibre Channel which is handled completely in hardware. The overhead incurred in mapping every SCSI command onto an equivalent iSCSI transaction is excessive. For many the solution is to do away with iSCSI software initiators and invest in specialized cards that can offload TCP/IP and iSCSI processing from a server's CPU. These specialized cards are sometimes referred to as an iSCSI Host Bus Adaptor (HBA) or a TCP Offload Engine (TOE) card. Also consider that 10-Gigabit Ethernet is a reality today!
So with all of this talk about iSCSI, does this mean the death of Fibre Channel anytime soon? Probably not. Fibre Channel has clearly demonstrated its capabilities over the years with its capacity for extremely high speeds, flexibility, and robust reliability. Customers who have strict requirements for high performance storage, large complex connectivity, and mission critical reliability will undoubtedly continue to choose Fibre Channel.
As with any new technology, iSCSI comes with its own set of acronyms and terminology. For the purpose of this article, it is only important to understand the difference between an iSCSI initiator and an iSCSI target.
Basically, an iSCSI initiator is a client device that connects and initiates requests to some service offered by a server (in this case an iSCSI target). The iSCSI initiator software will need to exist on the client node which in this article is a database server on machine linux3.
An iSCSI initiator can be implemented using either software or hardware. Software iSCSI initiators are available for most major operating system platforms. For this article, we will be using the free Linux Open-iSCSI software driver found in the iscsi-initiator-utils RPM. The iSCSI software initiator is generally used with a standard network interface card (NIC), a Gigabit Ethernet card in most cases. A hardware initiator is an iSCSI HBA (or a TCP Offload Engine (TOE) card), which is basically just a specialized Ethernet card with a SCSI ASIC on-board to offload all the work (TCP and SCSI commands) from the system CPU. iSCSI HBAs are available from a number of vendors, including Adaptec, Alacritech, Intel, and QLogic.
An iSCSI target is the "server" component of an iSCSI network. This is typically the storage device that contains the information you want and answers requests from the initiator(s). For the purpose of this article, the node openfiler1 will be the iSCSI target.
Perform the following configuration tasks on the network storage server (openfiler1).
Openfiler administration is performed using the Openfiler Storage Control Center, a browser-based tool accessed over an https connection on port 446. For example: https://openfiler1:446/
From the Openfiler Storage Control Center home page, log in as an administrator. The default administration login credentials for Openfiler are username openfiler and password password.
The first page the administrator sees is the [Status] / [System Overview] screen.
To use Openfiler as an iSCSI storage server, we have to perform six major tasks: set up iSCSI services, configure network access, identify and partition the physical storage, create a new volume group, create all logical volumes, and finally, create new iSCSI targets for each of the logical volumes.
To control services, we use the Openfiler Storage Control Center and navigate to [Services] / [Enable/Disable]:
Figure 2: Enable iSCSI Openfiler Service
To enable the iSCSI service, click on the 'Enable' link under the 'iSCSI target server' service name. After that, the 'iSCSI target server' status should change to 'Enabled'.
The ietd program implements the user level part of iSCSI Enterprise Target software for building an iSCSI storage system on Linux. With the iSCSI target enabled, we should be able to SSH into the Openfiler server and see the iscsi-target service running:
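For example, something like the following from an SSH session on openfiler1 should show the target service up and the ietd daemon running (a sketch; the exact service name can vary slightly between Openfiler releases):

```shell
# Verify the iSCSI target service is running on the Openfiler server
service iscsi-target status

# Alternatively, look for the ietd daemon itself
ps -ef | grep '[i]etd'
```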
The next step is to configure network access in Openfiler to identify the client node (linux3) that will need to access the iSCSI volume(s) through the storage network (192.168.2.0). Note that iSCSI logical volumes will be created later on in this section. Also note that this step does not actually grant the appropriate permissions to the iSCSI volume required by the client node. That will be accomplished later in this section by updating the ACL for the new logical volume.
As in the previous section, configuring network access is accomplished using the Openfiler Storage Control Center by navigating to [General] / [Local Networks]. The "Local networks configuration" section (at the bottom of the page) allows an administrator to set up networks and/or hosts that will be allowed to access resources exported by the Openfiler appliance. For the purpose of this article, we will want to add the client node individually rather than allowing the entire 192.168.2.0 network to have access to Openfiler resources.
When entering the client node, note that the 'Name' field is just a logical name used for reference only. As a convention when entering nodes, I simply use the node name defined for that IP address. Next, when entering the actual node in the 'Network/Host' field, always use its IP address even though its host name may already be defined in your /etc/hosts file or DNS. Lastly, when entering actual hosts in our Class C network, use a subnet mask of 255.255.255.255.
It is important to remember that you will be entering the IP address of the storage network (eth1) for each of the RAC nodes in the cluster.
The following image shows the results of adding the network access permissions for linux3:
Figure 3: Configure Openfiler Network Access for Client linux3
Storage devices like internal IDE/SATA/SCSI disks, external USB or FireWire drives, external arrays, or any other storage can be connected to the Openfiler server and served to clients. Once these devices are discovered at the OS level, Openfiler Storage Control Center can be used to set up and manage all of that storage.
My Openfiler server has 6 x 73GB 15K SCSI disks and 1 x 500GB SATA II disk. The six SCSI disks are configured as a RAID 0 stripe and exclusively used by iSCSI clients to store database files. Since this article concentrates on provisioning storage for Oracle database files, I will only be discussing the SCSI disks configuration.
Each of the six SCSI disks was configured with a single primary 'RAID array member' partition type that spanned the entire disk.
Since all of the disks will contain a single primary partition that spans the entire disk, most of the options were left at their default setting where the only modification was to change the 'Partition Type' from 'Extended partition' to 'RAID array member'.
To see this and to start the process of creating iSCSI volumes, navigate to [Volumes] / [Physical Storage Mgmt.] from the Openfiler Storage Control Center:
Figure 4: Openfiler Physical Storage
As mentioned in the previous section, the six SCSI disks on my Openfiler server are configured as a software RAID 0 meta-disk device:
Figure 5: Openfiler Software RAID Management
The next step was to create a Volume Group simply named scsi for the SCSI RAID 0 group created in the previous section.
From the Openfiler Storage Control Center, navigate to [Volumes] / [Volume Group Mgmt.]:
Figure 6: New Volume Group
Finally, I created a Logical Volume which is what gets discovered and used by the iSCSI client node. For the purpose of this example, I will be creating a new 36GB logical volume named linux3-data-1. The logical volume will be created using the volume group (scsi) created in the previous section.
From the Openfiler Storage Control Center, navigate to [Volumes] / [Create New Volume] and select the newly created volume group scsi. Then, enter the values to make a new iSCSI logical volume making certain to select iSCSI as the filesystem type.
After creating a new logical volume, the application will point you to the "List of Existing Volumes" screen. If you want to create another logical volume, you will need to click back to the "Create New Volume" tab to create the next logical volume:
Figure 7: New Logical (iSCSI) Volumes
After creating the new logical volume, the "List of Existing Volumes" screen should look as follows:
Figure 8: New Logical (iSCSI) Volume
Before an iSCSI client can have access to the newly created iSCSI logical volume, it needs to be granted the appropriate permissions. A while back, I illustrated how to configure network access in Openfiler for the client host (linux3-san), which granted it access rights to Openfiler resources. I now need to grant that node access to the newly created iSCSI logical volume.
From the Openfiler Storage Control Center, navigate to [Volumes] / [List of Existing Volumes]. This will present the screen shown in the previous section. For the new iSCSI logical volume, click on the 'Edit' link (under the Properties column). This will bring up the 'Edit properties' screen for that volume. Scroll to the bottom of this screen, change the host access from 'Deny' to 'Allow' for the linux3 node and click the 'Update' button.
Figure 9: Grant Host Access to Logical (iSCSI) Volume
Every time a new logical volume is added, you will need to restart the associated service on the Openfiler server. In my case, I created a new iSCSI logical volume so I needed to restart the iSCSI target (iscsi-target) service. This will make the new iSCSI target available to all clients on the network who have privileges to access it.
To restart the iSCSI target service, use the Openfiler Storage Control Center and navigate to [Services] / [Enable/Disable]. The iSCSI target service should already be enabled (several sections back). If so, disable the service then enable it again. (See Figure 2)
The same task can be achieved through an SSH session on the Openfiler server:
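A sketch of that SSH session (again assuming the iscsi-target service name used by this Openfiler release):

```shell
# Restart the iSCSI target service from an SSH session on openfiler1
service iscsi-target restart
```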
An iSCSI client can be any system (Linux, Unix, MS Windows, Apple Mac, etc.) for which iSCSI support (a driver) is available. In this article, the client is an Oracle database server (linux3) running CentOS 5.
In this section I will be configuring the iSCSI software initiator on the Oracle database server linux3. Red Hat Enterprise Linux (and CentOS 5) includes the Open-iSCSI software initiator which can be found in the iscsi-initiator-utils RPM.
All iSCSI management tasks like discovery and logins will use the command-line interface iscsiadm which is included with Open-iSCSI.
The iSCSI software initiator on linux3 will be configured to automatically login to the network storage server (openfiler1) and discover the iSCSI volume created in the previous section. I will then go through the steps of creating a persistent local SCSI device name (i.e. /dev/iscsi/linux3-data-1) for the discovered iSCSI target name using udev. Having a consistent local SCSI device name, and knowing which iSCSI target it maps to, is highly recommended in order to distinguish between multiple SCSI devices. Before I can do any of this, however, I must first install the iSCSI initiator software!
With Red Hat Enterprise Linux 5 (and CentOS 5), the Open-iSCSI iSCSI software initiator does not get installed by default. The software is included in the iscsi-initiator-utils package which can be found on CD #1. To determine if this package is installed (which in most cases, it will not be), perform the following on the client node (linux3):
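A quick RPM query tells you whether the package is present:

```shell
# Check whether the Open-iSCSI initiator package is installed
rpm -q iscsi-initiator-utils
```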
If the iscsi-initiator-utils package is not installed, load CD #1 into the machine and perform the following:
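Something along these lines installs it from the media (the mount point and package directory are assumptions; they differ between RHEL and CentOS media layouts):

```shell
# Mount CD #1 and install the initiator package
mount -r /dev/cdrom /media/cdrom
cd /media/cdrom/CentOS
rpm -Uvh iscsi-initiator-utils-*.rpm
cd /
eject
```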
After verifying that the iscsi-initiator-utils package is installed, start the iscsid service and enable it to automatically start when the system boots. I will also configure the iscsi service, which logs in to the iSCSI targets needed at system startup, to start automatically.
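On RHEL 5 / CentOS 5 this is done with service and chkconfig:

```shell
# Start the iscsid daemon now, and enable both services at boot
service iscsid start
chkconfig iscsid on
chkconfig iscsi on
```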
Now that the iSCSI service is started, use the iscsiadm command-line interface to discover all available targets on the network storage server:
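A sendtargets discovery against the storage network looks like this (192.168.2.195 is an assumed address for openfiler1's storage interface, eth1; substitute your own):

```shell
# Discover all targets exported by the network storage server
iscsiadm -m discovery -t sendtargets -p 192.168.2.195
```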
At this point the iSCSI initiator service has been started and the client node was able to discover the available target(s) from the network storage server. The next step is to manually login to the available target(s) which can be done using the iscsiadm command-line interface. Note that I had to specify the IP address and not the host name of the network storage server (openfiler1-san) - I believe this is required given the discovery (above) shows the targets using the IP address.
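The manual login, using the target name discovered above and the assumed storage-network address:

```shell
# Manually log in to the discovered iSCSI target
iscsiadm -m node -T iqn.2006-01.com.openfiler:scsi.linux3-data-1 \
    -p 192.168.2.195 -l
```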
The next step is to make certain the client will automatically login to the target(s) listed above when the machine is booted (or the iSCSI initiator service is started/restarted):
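With Open-iSCSI this is a node record update (same assumed address as above):

```shell
# Configure automatic login to this target at boot / service start
iscsiadm -m node -T iqn.2006-01.com.openfiler:scsi.linux3-data-1 \
    -p 192.168.2.195 --op update -n node.startup -v automatic
```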
In this section, I will go through the steps to create a persistent local SCSI device name (/dev/iscsi/linux3-data-1) which will be mapped to the new iSCSI target name. This will be done using udev. Having a consistent local SCSI device name (for example /dev/mydisk1 or /dev/mydisk2) is highly recommended in order to distinguish between multiple SCSI devices (/dev/sda or /dev/sdb) when the node is booted or the iSCSI initiator service is started/restarted.
When the database server node boots and the iSCSI initiator service is started, it will automatically login to the target(s) configured in a random fashion and map them to the next available local SCSI device name. For example, the target iqn.2006-01.com.openfiler:scsi.linux3-data-1 may get mapped to /dev/sda when the node boots. I can actually determine the current mappings for all targets (if there were multiple targets) by looking at the /dev/disk/by-path directory:
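The by-path entries encode the target portal and IQN, so a listing like this reveals the current mapping:

```shell
# List the current iSCSI target to local SCSI device mappings
ls -l /dev/disk/by-path/*iscsi*
```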
Using the output from the above listing, we can establish the following current mappings:
    iSCSI Target Name                               SCSI Device Name
    ----------------------------------------------  ----------------
    iqn.2006-01.com.openfiler:scsi.linux3-data-1    /dev/sda
Ok, so I only have one target discovered which maps to /dev/sda. But what if there were multiple targets configured (say, iqn.2006-01.com.openfiler:scsi.linux3-data-2) or better yet, I had multiple removable SCSI devices on linux3? This mapping could change every time the node is rebooted. For example, if I had a second target discovered on linux3 (i.e. iqn.2006-01.com.openfiler:scsi.linux3-data-2), after a reboot it may be determined that the second iSCSI target iqn.2006-01.com.openfiler:scsi.linux3-data-2 gets mapped to the local SCSI device /dev/sda and iqn.2006-01.com.openfiler:scsi.linux3-data-1 gets mapped to the local SCSI device /dev/sdb, or vice versa.
As you can see, it is impractical to rely on using the local SCSI device names like /dev/sda or /dev/sdb given there is no way to predict the iSCSI target mappings after a reboot.
What we need is a consistent device name we can reference like /dev/iscsi/linux3-data-1 that will always point to the appropriate iSCSI target through reboots. This is where the Dynamic Device Management tool named udev comes in. udev provides a dynamic device directory using symbolic links that point to the actual device using a configurable set of rules. When udev receives a device event (for example, the client logging in to an iSCSI target), it matches its configured rules against the available device attributes provided in sysfs to identify the device. Rules that match may provide additional device information or specify a device node name and multiple symlink names and instruct udev to run additional programs (a SHELL script for example) as part of the device event handling process.
The first step is to create a new rules file. This file will be named /etc/udev/rules.d/55-openiscsi.rules and contain only a single line of name=value pairs used to receive events we are interested in. It will also define a call-out SHELL script (/etc/udev/scripts/iscsidev.sh) to handle the event.
Create the following rules file /etc/udev/rules.d/55-openiscsi.rules on the client node linux3:
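The rule below follows the common Open-iSCSI/udev pattern for RHEL 5: match SCSI disk devices, call the script with the bus id (%b), and build symlinks under /dev/iscsi from the script's output (%c). Treat it as a sketch; udev key names changed in later releases.

```shell
# FILE: /etc/udev/rules.d/55-openiscsi.rules
KERNEL=="sd*", BUS=="scsi", PROGRAM="/etc/udev/scripts/iscsidev.sh %b", SYMLINK+="iscsi/%c/part%n"
```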
Next, create the UNIX SHELL script that will be called when this event is received. Let's first create a separate directory on the linux3 node where udev scripts can be stored:
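Create the directory for udev call-out scripts:

```shell
mkdir -p /etc/udev/scripts
```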
Finally, create the UNIX shell script /etc/udev/scripts/iscsidev.sh:
After creating the UNIX SHELL script, change it to executable:
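Make the script executable so udev can run it:

```shell
chmod 755 /etc/udev/scripts/iscsidev.sh
```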
Now that udev is configured, restart the iSCSI initiator service:
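Restarting the initiator service forces a logout/login cycle so the new rule fires:

```shell
service iscsi restart
```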
Let's see if our hard work paid off:
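A listing of the new dynamic device directory should now show the symlink(s) udev created:

```shell
ls -l /dev/iscsi/*
```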
The listing above shows that udev did the job it was supposed to do! We now have a consistent set of local device name(s) that can be used to reference the iSCSI targets through reboots. For example, we can safely assume that the device name /dev/iscsi/linux3-data-1/part will always reference the iSCSI target iqn.2006-01.com.openfiler:scsi.linux3-data-1. We now have a consistent iSCSI target name to local device name mapping which is described in the following table:
    iSCSI Target Name                               Local Device Name
    ----------------------------------------------  -------------------------------
    iqn.2006-01.com.openfiler:scsi.linux3-data-1    /dev/iscsi/linux3-data-1/part
I now need to create a single primary partition on the new iSCSI volume that spans the entire size of the volume. The fdisk command is used in Linux for creating (and removing) partitions. You can use the default values when creating the primary partition as the default action is to use the entire disk. You can safely ignore any warnings that may indicate the device does not contain a valid DOS partition (or Sun, SGI or OSF disklabel).
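A sketch of the fdisk session (fdisk is interactive; the responses are shown as comments):

```shell
fdisk /dev/iscsi/linux3-data-1/part
#   Command: n                    (new partition)
#   Type:    p                    (primary), partition number 1
#   First / Last cylinder:        accept the defaults (entire disk)
#   Command: w                    (write partition table and exit)
```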
The next step is to create an ext3 file system on the new partition. Provided with the RHEL distribution is a script named /sbin/mkfs.ext3 which makes the task of creating an ext3 file system seamless. Here is an example session of using the mkfs.ext3 script on linux3:
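With the udev rule above, the new first partition appears as part1 under the same directory:

```shell
# Format partition 1 of the new iSCSI volume with ext3
mkfs.ext3 /dev/iscsi/linux3-data-1/part1
```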
Now that the new iSCSI volume is partitioned and formatted, the final step is to mount the new volume. For this example, I will be mounting the new volume on the directory /u03.
Create the /u03 directory before attempting to mount the new volume:
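Create the mount point:

```shell
mkdir -p /u03
```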
Next, edit the /etc/fstab on linux3 and add an entry for the new volume:
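An entry along these lines does the job; the _netdev option is important for iSCSI volumes because it defers mounting until networking (and therefore the iSCSI session) is up:

```shell
# /etc/fstab entry for the new iSCSI volume
/dev/iscsi/linux3-data-1/part1  /u03  ext3  _netdev  0 0
```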
After making the new entry in the /etc/fstab file, it is now just a matter of mounting the new iSCSI volume:
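With the fstab entry in place, a bare mount of the mount point suffices:

```shell
mount /u03
df -h /u03
```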
It is my hope that this article has provided valuable insight into how you can take advantage of networked storage and the iSCSI configuration process. As you can see, the process is fairly straightforward. Just as it was simple to configure the Open-iSCSI Initiator on Linux, it is just as easy to remove it, and that is the subject of this section.
Unmount the File System
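Unmount the volume first:

```shell
umount /u03
```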
After unmounting the file system, remove (or comment out) its related entry from the /etc/fstab file:
Manually Logout of iSCSI Target(s)
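The logout mirrors the earlier login (same assumed storage-network address):

```shell
# Log out of the iSCSI target
iscsiadm -m node -T iqn.2006-01.com.openfiler:scsi.linux3-data-1 \
    -p 192.168.2.195 -u
```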
Verify we are logged out of the iSCSI target by looking at the /dev/disk/by-path directory. If no other iSCSI targets exist on the client node, then after logging out from the iSCSI target, the mappings for all targets should be gone and the following command should not find any files or directories:
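After the logout, this listing should come back empty:

```shell
ls -l /dev/disk/by-path/*iscsi*
```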
Delete Target and Disable Automatic Login
Update the record entry on the client node to disable automatic logins to the iSCSI target:
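Setting node.startup back to manual disables the automatic login configured earlier:

```shell
iscsiadm -m node -T iqn.2006-01.com.openfiler:scsi.linux3-data-1 \
    -p 192.168.2.195 --op update -n node.startup -v manual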
Delete the iSCSI target:
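Deleting the node record removes the target from the initiator database:

```shell
iscsiadm -m node -T iqn.2006-01.com.openfiler:scsi.linux3-data-1 \
    -p 192.168.2.195 --op delete
```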
Remove udev Rules Files
If the iSCSI target being removed is the only remaining target and you don't plan on adding any further iSCSI targets in the future, then it is safe to remove the iSCSI rules file and its call-out script:
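Remove the rules file and its call-out script created earlier:

```shell
rm -f /etc/udev/rules.d/55-openiscsi.rules
rm -f /etc/udev/scripts/iscsidev.sh
```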
Disable the iSCSI (Initiator) Service
If the iSCSI target being removed is the only remaining target and you don't plan on adding any further iSCSI targets in the future, then it is safe to disable the iSCSI Initiator Service:
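Stop both services and remove them from the boot sequence:

```shell
service iscsi stop
service iscsid stop
chkconfig iscsi off
chkconfig iscsid off
```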
Jeffrey Hunter is an Oracle Certified Professional, Java Development Certified Professional, Author, and an Oracle ACE. Jeff currently works as a Senior Database Administrator for The DBA Zone, Inc. located in Pittsburgh, Pennsylvania. His work includes advanced performance tuning, Java and PL/SQL programming, developing high availability solutions, capacity planning, database security, and physical / logical database design in a UNIX / Linux server environment. Jeff's other interests include mathematical encryption theory, tutoring advanced mathematics, programming language processors (compilers and interpreters) in Java and C, LDAP, writing web-based database administration tools, and of course Linux. He has been a Sr. Database Administrator and Software Engineer for over 20 years and maintains his own website site at: http://www.iDevelopment.info. Jeff graduated from Stanislaus State University in Turlock, California, with a Bachelor's degree in Computer Science and Mathematics.
Copyright (c) 1998-2017 Jeffrey M. Hunter. All rights reserved.
All articles, scripts and material located at the Internet address of http://www.idevelopment.info are the copyright of Jeffrey M. Hunter and are protected under copyright laws of the United States. This document may not be hosted on any other site without my express, prior, written permission. Application to host any of the material elsewhere can be made by contacting me at email@example.com.
I have made every effort and taken great care in making sure that the material included on my web site is technically accurate, but I disclaim any and all responsibility for any loss, damage or destruction of data or any other property which may arise from relying on it. I will in no case be liable for any monetary damages arising from such loss, damage or destruction.
Last modified on
Wednesday, 28-Dec-2011 14:10:37 EST