
Connecting to an iSCSI Target with Open-iSCSI Initiator using Linux

by Jeff Hunter, Sr. Database Administrator


Introduction

This article shows how to use the Open-iSCSI Initiator software on Linux to add a new volume from an iSCSI target, namely Openfiler. The new volume will be formatted with an ext3 file system and can then be used to store any type of file; in this example it will hold Oracle database files.

This article covers how to configure an iSCSI target on Openfiler, how to configure the Open-iSCSI Initiator on the Oracle database server to discover and add the new volume, how to format the new volume with an ext3 file system, and finally how to configure the file system to be mounted at each boot.

Before discussing the tasks in this article, let's take a conceptual look at my example environment. An Oracle database is installed and configured on the node linux3, while all network storage is provided by Openfiler on the node openfiler1. A new 36GB iSCSI logical volume will be carved out on Openfiler and then discovered by the Oracle database server linux3. After discovering the new iSCSI volume from linux3, the volume will be partitioned, formatted with an ext3 file system, and mounted on the directory /u03. All machines in my example configuration have two network interfaces — one for the public network (192.168.1.0) and a second for storage traffic (192.168.2.0):

Figure 1: Example iSCSI Hardware Configuration

About Linux Open-iSCSI Initiator

The Linux Open-iSCSI Initiator is included with Red Hat Enterprise Linux 5 and later; however, in most cases it is not installed by default. The Open-iSCSI Initiator software is provided in the iscsi-initiator-utils package, which can be found on CD #1. You can connect to an iSCSI volume at a shell prompt with just a few commands, as will be demonstrated in this article.
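For example, once the initiator software is installed, discovering a target and logging in to it typically takes only two commands. The host and target names below are the ones used later in this article:


[root@linux3 ~]# iscsiadm -m discovery -t sendtargets -p openfiler1-san
[root@linux3 ~]# iscsiadm -m node -T iqn.2006-01.com.openfiler:scsi.linux3-data-1 -p 192.168.2.195 --login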

For more information and download location of Open-iSCSI, please visit: http://www.open-iscsi.org.

About Openfiler

Powered by rPath Linux, Openfiler is a free browser-based network storage management utility that delivers file-based Network Attached Storage (NAS) and block-based Storage Area Networking (SAN) in a single framework. Openfiler supports CIFS, NFS, HTTP/DAV, and FTP; however, we will only be making use of its iSCSI capabilities to implement an inexpensive SAN for the network storage to be used for Oracle database files.

iSCSI Technology

For many years, the only technology that existed for building a network-based storage solution was a Fibre Channel Storage Area Network (FC SAN). Based on an earlier set of ANSI protocols called Fiber Distributed Data Interface (FDDI), Fibre Channel was developed to move SCSI commands over a storage network.

The advantages of a FC SAN include greater performance, increased disk utilization, improved availability, better scalability, and, most important to us, support for server clustering! Even today, however, FC SANs suffer from three major disadvantages. The first is price. While the costs involved in building a FC SAN have come down in recent years, the cost of entry remains prohibitive for small companies with limited IT budgets. The second is incompatible hardware components. Since its adoption, many product manufacturers have interpreted the Fibre Channel specifications differently, which has resulted in scores of interconnect problems. When purchasing Fibre Channel components from a single manufacturer, this is usually not a problem. The third disadvantage is the fact that a Fibre Channel network is not Ethernet: it requires a separate network technology and a second skill set within the data center staff.

With the popularity of Gigabit Ethernet and the demand for lower cost, Fibre Channel has recently been given a run for its money by iSCSI-based storage systems. Today, iSCSI SANs remain the leading competitor to FC SANs.

Ratified on February 11, 2003 by the Internet Engineering Task Force (IETF), the Internet Small Computer System Interface, better known as iSCSI, is an Internet Protocol (IP)-based storage networking standard for establishing and managing connections between IP-based storage devices, hosts, and clients. iSCSI is a data transport protocol defined in the SCSI-3 specifications framework and is similar to Fibre Channel in that it is responsible for carrying block-level data over a storage network. Block-level communication means that data is transferred between the host and the client in chunks called blocks. Database servers depend on this type of communication (as opposed to the file level communication used by most NAS systems) in order to work properly. Like a FC SAN, an iSCSI SAN should be a separate physical network devoted entirely to storage, however, its components can be much the same as in a typical IP network (LAN).

While iSCSI has a promising future, many of its early critics were quick to point out some of its inherent shortcomings with regard to performance. The beauty of iSCSI is its ability to utilize an already familiar IP network as its transport mechanism. The TCP/IP protocol, however, is very complex and CPU intensive. With iSCSI, most of the processing of the data (both TCP and iSCSI) is handled in software and is much slower than Fibre Channel, which is handled completely in hardware. The overhead incurred in mapping every SCSI command onto an equivalent iSCSI transaction is excessive. For many, the solution is to do away with iSCSI software initiators and invest in specialized cards that can offload TCP/IP and iSCSI processing from a server's CPU. These specialized cards are sometimes referred to as an iSCSI Host Bus Adaptor (HBA) or a TCP Offload Engine (TOE) card. Also consider that 10-Gigabit Ethernet is a reality today!

So with all of this talk about iSCSI, does this mean the death of Fibre Channel anytime soon? Probably not. Fibre Channel has clearly demonstrated its capabilities over the years with its capacity for extremely high speeds, flexibility, and robust reliability. Customers who have strict requirements for high performance storage, large complex connectivity, and mission critical reliability will undoubtedly continue to choose Fibre Channel.

As with any new technology, iSCSI comes with its own set of acronyms and terminology. For the purpose of this article, it is only important to understand the difference between an iSCSI initiator and an iSCSI target.

iSCSI Initiator

Basically, an iSCSI initiator is a client device that connects and initiates requests to some service offered by a server (in this case an iSCSI target). The iSCSI initiator software will need to exist on the client node, which in this article is the database server linux3.

An iSCSI initiator can be implemented using either software or hardware. Software iSCSI initiators are available for most major operating system platforms. For this article, we will be using the free Linux Open-iSCSI software driver found in the iscsi-initiator-utils RPM. The iSCSI software initiator is generally used with a standard network interface card (NIC) — a Gigabit Ethernet card in most cases. A hardware initiator is an iSCSI HBA (or a TCP Offload Engine (TOE) card), which is basically just a specialized Ethernet card with a SCSI ASIC on-board to offload all the work (TCP and SCSI commands) from the system CPU. iSCSI HBAs are available from a number of vendors, including Adaptec, Alacritech, Intel, and QLogic.

iSCSI Target

An iSCSI target is the "server" component of an iSCSI network. This is typically the storage device that contains the information you want and answers requests from the initiator(s). For the purpose of this article, the node openfiler1 will be the iSCSI target.

Configure iSCSI Target

Perform the following configuration tasks on the network storage server (openfiler1).

Openfiler administration is performed using the Openfiler Storage Control Center, a browser-based tool accessed over an https connection on port 446. For example:

https://openfiler1.idevelopment.info:446/

From the Openfiler Storage Control Center home page, log in as an administrator. The default administration login credentials for Openfiler are username openfiler and password password.

The first page the administrator sees is the [Status] / [System Overview] screen.

To use Openfiler as an iSCSI storage server, we have to perform six major tasks — set up iSCSI services, configure network access, identify and partition the physical storage, create a new volume group, create all logical volumes, and finally, create new iSCSI targets for each of the logical volumes.

Services

To control services, we use the Openfiler Storage Control Center and navigate to [Services] / [Enable/Disable]:

Figure 2: Enable iSCSI Openfiler Service

To enable the iSCSI service, click on the 'Enable' link under the 'iSCSI target server' service name. After that, the 'iSCSI target server' status should change to 'Enabled'.

The ietd program implements the user-level part of the iSCSI Enterprise Target software for building an iSCSI storage system on Linux. With the iSCSI target enabled, we should be able to SSH into the Openfiler server and see the iscsi-target service running:


[root@openfiler1 ~]# service iscsi-target status
ietd (pid 14243) is running...

Network Access Restriction

The next step is to configure network access in Openfiler to identify the client node (linux3) that will need to access the iSCSI volume(s) through the storage network (192.168.2.0). Note that iSCSI logical volumes will be created later on in this section. Also note that this step does not actually grant the appropriate permissions to the iSCSI volume required by the client node. That will be accomplished later in this section by updating the ACL for the new logical volume.

As in the previous section, configuring network access is accomplished using the Openfiler Storage Control Center by navigating to [General] / [Local Networks]. The "Local networks configuration" section (at the bottom of the page) allows an administrator to set up networks and/or hosts that will be allowed to access resources exported by the Openfiler appliance. For the purpose of this article, we will want to add the client node individually rather than allowing the entire 192.168.2.0 network to have access to Openfiler resources.

When entering the client node, note that the 'Name' field is just a logical name used for reference only. As a convention when entering nodes, I simply use the node name defined for that IP address. Next, when entering the actual node in the 'Network/Host' field, always use its IP address even though its host name may already be defined in your /etc/hosts file or DNS. Lastly, when entering actual hosts in our Class C network, use a subnet mask of 255.255.255.255.

It is important to remember that you will be entering the IP address of the client node's storage network interface (eth1), not its public address.
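If you are unsure which address to enter, you can check the storage interface on the client node first. The example below assumes the storage interface on linux3 is also named eth1; adjust the interface name to match your configuration:


[root@linux3 ~]# ifconfig eth1 | grep "inet addr"    # eth1 assumed to be the storage (192.168.2.0) interface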

The following image shows the results of adding the network access permissions for linux3:

Figure 3: Configure Openfiler Network Access for Client linux3

Physical Storage

Storage devices like internal IDE/SATA/SCSI disks, external USB or FireWire drives, external arrays, or any other storage can be connected to the Openfiler server and served to clients. Once these devices are discovered at the OS level, Openfiler Storage Control Center can be used to set up and manage all of that storage.

My Openfiler server has 6 x 73GB 15K SCSI disks and 1 x 500GB SATA II disk. The six SCSI disks are configured as a RAID 0 stripe and exclusively used by iSCSI clients to store database files. Since this article concentrates on provisioning storage for Oracle database files, I will only be discussing the SCSI disks configuration.

Each of the six SCSI disks was configured with a single primary 'RAID array member' partition type that spanned the entire disk.


If the new partition was not going to be part of a software RAID group, you would select the partition type as 'Physical volume'.

Since all of the disks will contain a single primary partition that spans the entire disk, most of the options were left at their default settings; the only modification was to change the 'Partition Type' from 'Extended partition' to 'RAID array member'.

To see this and to start the process of creating iSCSI volumes, navigate to [Volumes] / [Physical Storage Mgmt.] from the Openfiler Storage Control Center:

Figure 4: Openfiler Physical Storage

Software RAID Management

As mentioned in the previous section, the six SCSI disks on my Openfiler server are configured as a software RAID 0 meta-disk device:

Figure 5: Openfiler Software RAID Management

Volume Group Management

The next step was to create a Volume Group simply named scsi for the SCSI RAID 0 group created in the previous section.

From the Openfiler Storage Control Center, navigate to [Volumes] / [Volume Group Mgmt.]:

Figure 6: New Volume Group

Logical Volumes

Finally, I created a Logical Volume which is what gets discovered and used by the iSCSI client node. For the purpose of this example, I will be creating a new 36GB logical volume named linux3-data-1. The logical volume will be created using the volume group (scsi) created in the previous section.

From the Openfiler Storage Control Center, navigate to [Volumes] / [Create New Volume] and select the newly created volume group scsi. Then, enter the values to make a new iSCSI logical volume making certain to select iSCSI as the filesystem type.

After creating a new logical volume, the application will point you to the "List of Existing Volumes" screen. If you want to create another logical volume, you will need to click back to the "Create New Volume" tab to create the next logical volume:

Figure 7: New Logical (iSCSI) Volumes

After creating the new logical volume, the "List of Existing Volumes" screen should look as follows:

Figure 8: New Logical (iSCSI) Volume

Grant Access Rights to New Logical Volume(s)

Before an iSCSI client can have access to the newly created iSCSI logical volume, it needs to be granted the appropriate permissions. Earlier in this article, I illustrated how to configure host/network access in Openfiler for the client node (linux3-san). I now need to grant that node access to the newly created iSCSI logical volume.

From the Openfiler Storage Control Center, navigate to [Volumes] / [List of Existing Volumes]. This will present the screen shown in the previous section. For the new iSCSI logical volume, click on the 'Edit' link (under the Properties column). This will bring up the 'Edit properties' screen for that volume. Scroll to the bottom of this screen, change the host access from 'Deny' to 'Allow' for the linux3 node and click the 'Update' button.

Figure 9: Grant Host Access to Logical (iSCSI) Volume

Make iSCSI Target(s) Available to Client(s)

Every time a new logical volume is added, you will need to restart the associated service on the Openfiler server. In my case, I created a new iSCSI logical volume so I needed to restart the iSCSI target (iscsi-target) service. This will make the new iSCSI target available to all clients on the network who have privileges to access it.

To restart the iSCSI target service, use the Openfiler Storage Control Center and navigate to [Services] / [Enable/Disable]. The iSCSI target service should already be enabled (several sections back). If so, disable the service then enable it again. (See Figure 2)

The same task can be achieved through an SSH session on the Openfiler server:


[root@openfiler1 ~]# service iscsi-target restart
Stopping iSCSI target service:                             [  OK  ]
Starting iSCSI target service:                             [  OK  ]

Configure iSCSI Initiator and New Volume

An iSCSI client can be any system (Linux, Unix, MS Windows, Apple Mac, etc.) for which iSCSI support (a driver) is available. In this article, the client is an Oracle database server (linux3) running CentOS 5.

In this section I will be configuring the iSCSI software initiator on the Oracle database server linux3. Red Hat Enterprise Linux (and CentOS 5) includes the Open-iSCSI software initiator which can be found in the iscsi-initiator-utils RPM.


This is a change from previous versions of RHEL (4.x) which included the Linux iscsi-sfnet software driver developed as part of the Linux-iSCSI Project.

All iSCSI management tasks like discovery and logins will use the command-line interface iscsiadm which is included with Open-iSCSI.

The iSCSI software initiator on linux3 will be configured to automatically log in to the network storage server (openfiler1) and discover the iSCSI volume created in the previous section. I will then go through the steps of creating a persistent local SCSI device name (i.e. /dev/iscsi/linux3-data-1) for the discovered iSCSI target name using udev. Having a consistent local SCSI device name, and knowing which iSCSI target it maps to, is highly recommended in order to distinguish between multiple SCSI devices. Before I can do any of this, however, I must first install the iSCSI initiator software!

Installing the iSCSI (Initiator) Service

With Red Hat Enterprise Linux 5 (and CentOS 5), the Open-iSCSI iSCSI software initiator does not get installed by default. The software is included in the iscsi-initiator-utils package which can be found on CD #1. To determine if this package is installed (which in most cases, it will not be), perform the following on the client node (linux3):


[root@linux3 ~]# rpm -qa | grep iscsi-initiator-utils

If the iscsi-initiator-utils package is not installed, load CD #1 into the machine and perform the following:


[root@linux3 ~]# mount -r /dev/cdrom /media/cdrom
[root@linux3 ~]# cd /media/cdrom/CentOS
[root@linux3 ~]# rpm -Uvh iscsi-initiator-utils-6.2.0.865-0.8.el5.i386.rpm
[root@linux3 ~]# cd /
[root@linux3 ~]# eject

Configure the iSCSI (Initiator) Service

After verifying that the iscsi-initiator-utils package is installed, start the iscsid service and enable it to start automatically when the system boots. I will also configure the iscsi service, which logs in to the iSCSI targets needed at system startup, to start automatically.


[root@linux3 ~]# service iscsid start
Turning off network shutdown. Starting iSCSI daemon:       [  OK  ]
                                                           [  OK  ]
[root@linux3 ~]# chkconfig iscsid on
[root@linux3 ~]# chkconfig iscsi on
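
As an optional sanity check (not part of the original steps), you can confirm that both services are now configured to start in the standard runlevels:


[root@linux3 ~]# chkconfig --list iscsi
[root@linux3 ~]# chkconfig --list iscsid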

Now that the iSCSI service is started, use the iscsiadm command-line interface to discover all available targets on the network storage server:


[root@linux3 ~]# iscsiadm -m discovery -t sendtargets -p openfiler1-san
192.168.2.195:3260,1 iqn.2006-01.com.openfiler:scsi.linux3-data-1

Manually Login to iSCSI Target(s)

At this point the iSCSI initiator service has been started and the client node was able to discover the available target(s) from the network storage server. The next step is to manually log in to the available target(s), which can be done using the iscsiadm command-line interface. Note that I had to specify the IP address rather than the host name of the network storage server (openfiler1-san); I believe this is required because the discovery (above) reports the target using the IP address.


[root@linux3 ~]# iscsiadm -m node -T iqn.2006-01.com.openfiler:scsi.linux3-data-1 -p 192.168.2.195 --login
Logging in to [iface: default, target: iqn.2006-01.com.openfiler:scsi.linux3-data-1, portal: 192.168.2.195,3260]
Login to [iface: default, target: iqn.2006-01.com.openfiler:scsi.linux3-data-1, portal: 192.168.2.195,3260]: successful

Configure Automatic Login

The next step is to make certain the client will automatically login to the target(s) listed above when the machine is booted (or the iSCSI initiator service is started/restarted):


[root@linux3 ~]# iscsiadm -m node -T iqn.2006-01.com.openfiler:scsi.linux3-data-1 -p 192.168.2.195 --op update -n node.startup -v automatic
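
To verify the change, you can dump the node record and look for the node.startup setting. On the Open-iSCSI version shipped with RHEL/CentOS 5, running iscsiadm in node mode with no operation should simply print the record for that target:


[root@linux3 ~]# iscsiadm -m node -T iqn.2006-01.com.openfiler:scsi.linux3-data-1 -p 192.168.2.195 | grep node.startup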

Create Persistent Local SCSI Device Names

In this section, I will go through the steps to create a persistent local SCSI device name (/dev/iscsi/linux3-data-1) which will be mapped to the new iSCSI target name. This will be done using udev. Having a consistent local SCSI device name (for example /dev/mydisk1 or /dev/mydisk2) is highly recommended in order to distinguish between multiple SCSI devices (/dev/sda or /dev/sdb) when the node is booted or the iSCSI initiator service is started/restarted.

When the database server node boots and the iSCSI initiator service is started, it will automatically log in to the configured target(s) in a non-deterministic order and map each one to the next available local SCSI device name. For example, the target iqn.2006-01.com.openfiler:scsi.linux3-data-1 may get mapped to /dev/sda when the node boots. I can determine the current mappings for all targets (if there were multiple targets) by looking at the /dev/disk/by-path directory:


[root@linux3 ~]# (cd /dev/disk/by-path; ls -l *openfiler* | awk '{FS=" "; print $9 " " $10 " " $11}')
ip-192.168.2.195:3260-iscsi-iqn.2006-01.com.openfiler:scsi.linux3-data-1 -> ../../sda

Using the output from the above listing, we can establish the following current mappings:

Current iSCSI Target Name to Local SCSI Device Name Mappings

iSCSI Target Name                               Local SCSI Device Name
iqn.2006-01.com.openfiler:scsi.linux3-data-1    /dev/sda

Ok, so I only have one target discovered, which maps to /dev/sda. But what if there were multiple targets configured (say, iqn.2006-01.com.openfiler:scsi.linux3-data-2), or better yet, what if I had multiple removable SCSI devices on linux3? This mapping could change every time the node is rebooted. For example, if I had a second target discovered on linux3 (i.e. iqn.2006-01.com.openfiler:scsi.linux3-data-2), after a reboot the second iSCSI target iqn.2006-01.com.openfiler:scsi.linux3-data-2 might get mapped to the local SCSI device /dev/sda and iqn.2006-01.com.openfiler:scsi.linux3-data-1 might get mapped to the local SCSI device /dev/sdb, or vice versa.

As you can see, it is impractical to rely on using the local SCSI device names like /dev/sda or /dev/sdb given there is no way to predict the iSCSI target mappings after a reboot.

What we need is a consistent device name we can reference like /dev/iscsi/linux3-data-1 that will always point to the appropriate iSCSI target through reboots. This is where the Dynamic Device Management tool named udev comes in. udev provides a dynamic device directory using symbolic links that point to the actual device using a configurable set of rules. When udev receives a device event (for example, the client logging in to an iSCSI target), it matches its configured rules against the available device attributes provided in sysfs to identify the device. Rules that match may provide additional device information or specify a device node name and multiple symlink names and instruct udev to run additional programs (a SHELL script for example) as part of the device event handling process.
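
As a quick illustration of the attribute involved (assuming a single iSCSI session is currently logged in), you can read the same sysfs file that the call-out script created below will parse; it should contain nothing but the iSCSI target name:


[root@linux3 ~]# cat /sys/class/iscsi_host/host*/device/session*/iscsi_session*/targetname
iqn.2006-01.com.openfiler:scsi.linux3-data-1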

The first step is to create a new rules file. This file will be named /etc/udev/rules.d/55-openiscsi.rules and will contain a single rule, made up of name=value pairs, that matches the events we are interested in. The rule also defines a call-out shell script (/etc/udev/scripts/iscsidev.sh) to handle each event.

Create the following rules file /etc/udev/rules.d/55-openiscsi.rules on the client node linux3:


# /etc/udev/rules.d/55-openiscsi.rules
KERNEL=="sd*", BUS=="scsi", PROGRAM="/etc/udev/scripts/iscsidev.sh %b", SYMLINK+="iscsi/%c/part%n"

Next, create the UNIX SHELL script that will be called when this event is received. Let's first create a separate directory on the linux3 node where udev scripts can be stored:


[root@linux3 ~]# mkdir -p /etc/udev/scripts

Finally, create the UNIX shell script /etc/udev/scripts/iscsidev.sh:


#!/bin/sh

# FILE: /etc/udev/scripts/iscsidev.sh

BUS=${1}
HOST=${BUS%%:*}

[ -e /sys/class/iscsi_host ] || exit 1

# Read the iSCSI target name for this session from sysfs
file="/sys/class/iscsi_host/host${HOST}/device/session*/iscsi_session*/targetname"
target_name=$(cat ${file})

# This is not an Open-iSCSI drive
if [ -z "${target_name}" ]; then
    exit 1
fi

# Check if QNAP drive
check_qnap_target_name=${target_name%%:*}
if [ $check_qnap_target_name = "iqn.2004-04.com.qnap" ]; then
    target_name=`echo "${target_name%.*}"`
fi

# Return the last component of the target name (e.g. linux3-data-1)
echo "${target_name##*.}"

After creating the UNIX shell script, make it executable:


[root@linux3 ~]# chmod 755 /etc/udev/scripts/iscsidev.sh

Now that udev is configured, restart the iSCSI initiator service:


[root@linux3 ~]# service iscsi stop
Logging out of session [sid: 3, target: iqn.2006-01.com.openfiler:scsi.linux3-data-1, portal: 192.168.2.195,3260]
Logout of [sid: 3, target: iqn.2006-01.com.openfiler:scsi.linux3-data-1, portal: 192.168.2.195,3260]: successful
Stopping iSCSI daemon:
/etc/init.d/iscsi: line 33:  5143 Killed                  /etc/init.d/iscsid stop

[root@linux3 ~]# service iscsi start
iscsid dead but pid file exists
Turning off network shutdown. Starting iSCSI daemon:       [  OK  ]
                                                           [  OK  ]
Setting up iSCSI targets: Logging in to [iface: default, target: iqn.2006-01.com.openfiler:scsi.linux3-data-1, portal: 192.168.2.195,3260]
Login to [iface: default, target: iqn.2006-01.com.openfiler:scsi.linux3-data-1, portal: 192.168.2.195,3260]: successful
                                                           [  OK  ]

Let's see if our hard work paid off:


[root@linux3 ~]# ls -l /dev/iscsi/
total 0
drwxr-xr-x 2 root root 60 Apr  7 01:57 linux3-data-1

[root@linux3 ~]# ls -l /dev/iscsi/linux3-data-1/
total 0
lrwxrwxrwx 1 root root 9 Apr  7 01:57 part -> ../../sda

The listing above shows that udev did the job it was supposed to do! We now have a consistent set of local device names that can be used to reference the iSCSI targets across reboots. For example, we can safely assume that the device name /dev/iscsi/linux3-data-1/part will always reference the iSCSI target iqn.2006-01.com.openfiler:scsi.linux3-data-1. This iSCSI target name to local device name mapping is described in the following table:

iSCSI Target Name to Local Device Name Mappings

iSCSI Target Name                               Local Device Name
iqn.2006-01.com.openfiler:scsi.linux3-data-1    /dev/iscsi/linux3-data-1/part
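
If you ever want to double-check this mapping, resolving the symlink shows the local SCSI device it currently points to (for example /dev/sda, as in the listing above):


[root@linux3 ~]# readlink -f /dev/iscsi/linux3-data-1/part
/dev/sda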

Create Primary Partition on iSCSI Volume

I now need to create a single primary partition on the new iSCSI volume that spans the entire size of the volume. The fdisk command is used in Linux for creating (and removing) partitions. You can use the default values when creating the primary partition as the default action is to use the entire disk. You can safely ignore any warnings that may indicate the device does not contain a valid DOS partition (or Sun, SGI or OSF disklabel).


[root@linux3 ~]# fdisk /dev/iscsi/linux3-data-1/part

Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-36864, default 1): 1
Last cylinder or +size or +sizeM or +sizeK (1-36864, default 36864): 36864

Command (m for help): p

Disk /dev/iscsi/linux3-data-1/part: 38.6 GB, 38654705664 bytes
64 heads, 32 sectors/track, 36864 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes

                         Device Boot      Start         End      Blocks   Id  System
/dev/iscsi/linux3-data-1/part1                1       36864    37748720   83  Linux

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.

Create File System on new iSCSI Volume / Partition

The next step is to create an ext3 file system on the new partition. Provided with the RHEL distribution is a script named /sbin/mkfs.ext3 which makes the task of creating an ext3 file system seamless. Here is an example session of using the mkfs.ext3 script on linux3:


[root@linux3 ~]# mkfs.ext3 -b 4096 /dev/iscsi/linux3-data-1/part1
mke2fs 1.39 (29-May-2006)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
4718592 inodes, 9437180 blocks
471859 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=0
288 block groups
32768 blocks per group, 32768 fragments per group
16384 inodes per group
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
        4096000, 7962624

Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 26 mounts or
180 days, whichever comes first.  Use tune2fs -c or -i to override.
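
The last two lines of the mkfs.ext3 output note that the file system will be checked automatically every 26 mounts or 180 days. If you would rather not have a surprise fsck delay a reboot, you can optionally disable the mount-count and time-based checks with tune2fs (this tweak is optional and not required for the rest of this article):


[root@linux3 ~]# tune2fs -c -1 -i 0 /dev/iscsi/linux3-data-1/part1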

Mount the New File System

Now that the new iSCSI volume is partitioned and formatted, the final step is to mount it. For this example, I will be mounting the new volume on the directory /u03.

Create the /u03 directory before attempting to mount the new volume:


[root@linux3 ~]# mkdir -p /u03

Next, edit the /etc/fstab file on linux3 and add an entry for the new volume:


/dev/VolGroup00/LogVol00        /               ext3    defaults        1 1
LABEL=/boot                     /boot           ext3    defaults        1 2
tmpfs                           /dev/shm        tmpfs   defaults        0 0
devpts                          /dev/pts        devpts  gid=5,mode=620  0 0
sysfs                           /sys            sysfs   defaults        0 0
proc                            /proc           proc    defaults        0 0
/dev/VolGroup00/LogVol01        swap            swap    defaults        0 0
/dev/iscsi/linux3-data-1/part1  /u03            ext3    _netdev         0 0
cartman:SHARE2                  /cartman        nfs     defaults        0 0
domo:Public                     /domo           nfs     defaults        0 0

After making the new entry in the /etc/fstab file, it is now just a matter of mounting the new iSCSI volume:


[root@linux3 ~]# mount /u03

[root@linux3 ~]# df -k
Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/mapper/VolGroup00-LogVol00
                      56086828  21905480  31286296  42% /
/dev/hda1               101086     19160     76707  20% /boot
tmpfs                  1037056         0   1037056   0% /dev/shm
cartman:SHARE2       306562280      8448 306247272   1% /cartman
domo:Public          1919782912 329519744 1590263168  18% /domo
/dev/sda1             37156400    180240  35088724   1% /u03
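
Since this volume will eventually hold Oracle database files, a typical follow-up is to create a data directory on the new file system and hand ownership to the Oracle software owner. The user and group names below (oracle and oinstall) are assumptions; substitute whatever your Oracle installation actually uses:


[root@linux3 ~]# mkdir /u03/oradata
[root@linux3 ~]# chown oracle:oinstall /u03/oradata    # oracle:oinstall assumed; adjust to your environment
[root@linux3 ~]# chmod 775 /u03/oradata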

Logout and Remove an iSCSI Target from a Linux Client

It is my hope that this article has provided valuable insight into how you can take advantage of networked storage and the iSCSI configuration process. As you can see, the process is fairly straightforward. Just as it was simple to configure the Open-iSCSI Initiator on Linux, it is just as easy to remove it, and that is the subject of this section.

  1. Unmount the File System


    [root@linux3 ~]# cd
    [root@linux3 ~]# umount /u03

    After unmounting the file system, remove (or comment out) its related entry from the /etc/fstab file:


    # /dev/iscsi/linux3-data-1/part1 /u03 ext3 _netdev 0 0

  2. Manually Logout of iSCSI Target(s)


    [root@linux3 ~]# iscsiadm -m node -T iqn.2006-01.com.openfiler:scsi.linux3-data-1 -p 192.168.2.195 --logout
    Logging out of session [sid: 4, target: iqn.2006-01.com.openfiler:scsi.linux3-data-1, portal: 192.168.2.195,3260]
    Logout of [sid: 4, target: iqn.2006-01.com.openfiler:scsi.linux3-data-1, portal: 192.168.2.195,3260]: successful

    Verify we are logged out of the iSCSI target by looking at the /dev/disk/by-path directory. If no other iSCSI targets exist on the client node, then after logging out from the iSCSI target, the mappings for all targets should be gone and the following command should not find any files or directories:


    [root@linux3 ~]# (cd /dev/disk/by-path; ls -l *openfiler* | awk '{FS=" "; print $9 " " $10 " " $11}')
    ls: *openfiler*: No such file or directory

  3. Delete Target and Disable Automatic Login

    Update the record entry on the client node to disable automatic logins to the iSCSI target:


    [root@linux3 ~]# iscsiadm -m node -T iqn.2006-01.com.openfiler:scsi.linux3-data-1 -p 192.168.2.195 --op update -n node.startup -v manual

    Delete the iSCSI target:


    [root@linux3 ~]# iscsiadm -m node --op delete --targetname iqn.2006-01.com.openfiler:scsi.linux3-data-1

  4. Remove udev Rules Files

    If the iSCSI target being removed is the only remaining target and you don't plan on adding any further iSCSI targets in the future, then it is safe to remove the iSCSI rules file and its call-out script:


    [root@linux3 ~]# rm /etc/udev/rules.d/55-openiscsi.rules
    [root@linux3 ~]# rm /etc/udev/scripts/iscsidev.sh

  5. Disable the iSCSI (Initiator) Service

    If the iSCSI target being removed is the only remaining target and you don't plan on adding any further iSCSI targets in the future, then it is safe to disable the iSCSI Initiator Service:


    [root@linux3 ~]# service iscsid stop
    [root@linux3 ~]# chkconfig iscsid off
    [root@linux3 ~]# chkconfig iscsi off

About the Author

Jeffrey Hunter is an Oracle Certified Professional, Java Development Certified Professional, Author, and an Oracle ACE. Jeff currently works as a Senior Database Administrator for The DBA Zone, Inc. located in Pittsburgh, Pennsylvania. His work includes advanced performance tuning, Java and PL/SQL programming, developing high availability solutions, capacity planning, database security, and physical / logical database design in UNIX, Linux, and Windows server environments. Jeff's other interests include mathematical encryption theory, programming language processors (compilers and interpreters) in Java and C, LDAP, writing web-based database administration tools, and of course Linux. He has been a Sr. Database Administrator and Software Engineer for over 18 years and maintains his own website at: http://www.iDevelopment.info. Jeff graduated from Stanislaus State University in Turlock, California, with a Bachelor's degree in Computer Science.



Copyright (c) 1998-2014 Jeffrey M. Hunter. All rights reserved.

All articles, scripts, and material located at the Internet address http://www.idevelopment.info are the copyright of Jeffrey M. Hunter and are protected under the copyright laws of the United States. This document may not be hosted on any other site without my express, prior, written permission. Application to host any of the material elsewhere can be made by contacting me at jhunter@idevelopment.info.

I have made every effort and taken great care in making sure that the material included on my web site is technically accurate, but I disclaim any and all responsibility for any loss, damage or destruction of data or any other property which may arise from relying on it. I will in no case be liable for any monetary damages arising from such loss, damage or destruction.

Last modified on Wednesday, 28-Dec-2011 14:10:37 EST