

Connecting to an iSCSI Target with the Solaris iSCSI Initiator

by Jeff Hunter, Sr. Database Administrator


Contents

  1. Introduction
  2. About Openfiler
  3. iSCSI Technology
  4. Configure iSCSI Target
  5. Configure iSCSI Initiator and New Volume
  6. Logout and Remove an iSCSI Target from a Solaris Client
  7. About the Author


Introduction

This article shows how to use the iSCSI initiator software on the Solaris platform to add a new volume from an iSCSI target, namely Openfiler. The new volume will be formatted with a UFS file system and can then be used to store any type of file; in this example, it will hold Oracle database files.

This article covers how to configure an iSCSI target on Openfiler, how to configure the iSCSI initiator software on the Oracle database server to discover and add a new volume, how to format the new volume with a UFS file system, and finally how to configure the file system to be mounted at each boot.

Before discussing the tasks in this article, let's take a conceptual look at my environment. An Oracle database is installed and configured on the node alex, while all network storage is being provided by Openfiler on the node openfiler1. A new 36GB iSCSI logical volume will be carved out on Openfiler and then discovered by the Oracle database server alex. After discovering the new iSCSI volume from alex, the volume will be partitioned, formatted with a UFS file system, and mounted on the directory /u04. All machines in my example configuration have two network interfaces — one for the public network (192.168.1.0) and a second for storage traffic (192.168.2.0):



Figure 1: Example iSCSI Hardware Configuration



About Openfiler

Powered by rPath Linux, Openfiler is a free, browser-based network storage management utility that delivers file-based Network Attached Storage (NAS) and block-based Storage Area Networking (SAN) in a single framework. Openfiler supports CIFS, NFS, HTTP/DAV, and FTP; however, we will only be making use of its iSCSI capabilities to implement an inexpensive SAN for the network storage used for Oracle database files.



iSCSI Technology

For many years, the only technology that existed for building a network-based storage solution was a Fibre Channel Storage Area Network (FC SAN). Based on an earlier set of ANSI protocols called Fiber Distributed Data Interface (FDDI), Fibre Channel was developed to move SCSI commands over a storage network.

The advantages of an FC SAN include greater performance, increased disk utilization, improved availability, better scalability, and support for server clustering. Even today, however, FC SANs suffer from three major disadvantages. The first is price: while the costs involved in building an FC SAN have come down in recent years, the cost of entry still remains prohibitive for small companies with limited IT budgets. The second is incompatible hardware components: since its adoption, many product manufacturers have interpreted the Fibre Channel specifications differently from each other, which has resulted in scores of interconnect problems. When purchasing Fibre Channel components from a single manufacturer, this is usually not a problem. The third disadvantage is the fact that a Fibre Channel network is not Ethernet! It requires a separate network technology along with a second skill set for the data center staff.

With the popularity of Gigabit Ethernet and the demand for lower cost, Fibre Channel has recently been given a run for its money by iSCSI-based storage systems. Today, iSCSI SANs remain the leading competitor to FC SANs.

Ratified on February 11th, 2003 by the Internet Engineering Task Force (IETF), the Internet Small Computer System Interface, better known as iSCSI, is an Internet Protocol (IP)-based storage networking standard for establishing and managing connections between IP-based storage devices, hosts, and clients. iSCSI is a data transport protocol defined in the SCSI-3 specifications framework and is similar to Fibre Channel in that it is responsible for carrying block-level data over a storage network. Block-level communication means that data is transferred between the host and the client in chunks called blocks. Database servers depend on this type of communication (as opposed to the file level communication used by most NAS systems) in order to work properly. Like a FC SAN, an iSCSI SAN should be a separate physical network devoted entirely to storage, however, its components can be much the same as in a typical IP network (LAN).

While iSCSI has a promising future, many of its early critics were quick to point out some of its inherent shortcomings with regard to performance. The beauty of iSCSI is its ability to use an already familiar IP network as its transport mechanism. The TCP/IP protocol, however, is very complex and CPU intensive. With iSCSI, most of the processing of the data (both TCP and iSCSI) is handled in software and is much slower than Fibre Channel, which is handled completely in hardware. The overhead incurred in mapping every SCSI command onto an equivalent iSCSI transaction is excessive. For many, the solution is to do away with iSCSI software initiators and invest in specialized cards that can offload TCP/IP and iSCSI processing from a server's CPU. These specialized cards are sometimes referred to as an iSCSI Host Bus Adaptor (HBA) or a TCP Offload Engine (TOE) card. Also consider that 10-Gigabit Ethernet is a reality today!

As with any new technology, iSCSI comes with its own set of acronyms and terminology. For the purpose of this article, it is only important to understand the difference between an iSCSI initiator and an iSCSI target.

iSCSI Initiator

Basically, an iSCSI initiator is a client device that connects to, and initiates requests to, some service offered by a server (in this case, an iSCSI target). The iSCSI initiator software will need to exist on the client node, which in this article is the Oracle database server on the machine alex.

An iSCSI initiator can be implemented using either software or hardware. Software iSCSI initiators are available for most major operating system platforms. For this article, we will be using the iSCSI initiator software included with the Solaris 10 Operating System:

iSCSI Packages on Solaris 10
Package Name     Package Description
SUNWiscsir       Sun iSCSI Device Driver (root)
SUNWiscsitgtr    Sun iSCSI Target (Root)
SUNWiscsitgtu    Sun iSCSI Target (Usr)
SUNWiscsiu       Sun iSCSI Management Utilities (usr)
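
A quick way to confirm that the initiator packages are present on the client is to query them with pkginfo (a minimal check; only the two initiator packages from the table above, SUNWiscsir and SUNWiscsiu, are needed for this article):

[root@alex /]# pkginfo SUNWiscsir SUNWiscsiu

Each installed package is listed with its category and description; an error message indicates that the package is missing.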

The iSCSI software initiator is generally used with a standard network interface card (NIC) — a Gigabit Ethernet card in most cases. A hardware initiator is an iSCSI HBA (or a TCP Offload Engine (TOE) card), which is basically just a specialized Ethernet card with a SCSI ASIC on-board to offload all the work (TCP and SCSI commands) from the system CPU. iSCSI HBAs are available from a number of vendors, including Adaptec, Alacritech, Intel, and QLogic.

iSCSI Target

An iSCSI target is the "server" component of an iSCSI network. This is typically the storage device that contains the information you want and answers requests from the initiator(s). For the purpose of this article, the node openfiler1 will be the iSCSI target.

So with all of this talk about iSCSI, does this mean the death of Fibre Channel anytime soon? Probably not. Fibre Channel has clearly demonstrated its capabilities over the years with its capacity for extremely high speeds, flexibility, and robust reliability. Customers who have strict requirements for high performance storage, large complex connectivity, and mission critical reliability will undoubtedly continue to choose Fibre Channel.



Configure iSCSI Target

Openfiler administration is performed using the Openfiler Storage Control Center — a browser-based tool accessed over an https connection on port 446. For example:
https://openfiler1:446/
From the Openfiler Storage Control Center home page, log in as an administrator. The default administration login credentials for Openfiler are:

    Username: openfiler
    Password: password

The first page the administrator sees is the [Accounts] / [Authentication] screen. Configuring user accounts and groups is not necessary for this article and will therefore not be discussed.

To use Openfiler as an iSCSI storage server, we have to perform three major tasks: set up iSCSI services, configure host access, and create physical storage.


Services

To control services, use the Openfiler Storage Control Center and navigate to [Services] / [Enable/Disable]:


Figure 2: Enable iSCSI Openfiler Service

To enable the iSCSI service, click 'Enable' under the 'iSCSI target' service name. After that, the 'iSCSI target' status should change to 'Enabled'.

The ietd program implements the user-level portion of the iSCSI Enterprise Target software used to build the iSCSI storage system. With the iSCSI target enabled, we should be able to SSH into the Openfiler server and see the iscsi-target service running:

[root@openfiler1 ~]# service iscsi-target status
ietd (pid 3267) is running...
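
As an additional check, you can verify that ietd is listening for initiator connections on the Openfiler server (this assumes the default iSCSI port of 3260):

[root@openfiler1 ~]# netstat -tnl | grep 3260

A LISTEN entry on port 3260 confirms that the target is ready to accept initiator logins.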


Host Access Restriction

The next step is to configure host access in Openfiler so the database server (alex) has permissions to the iSCSI volumes through the storage network (192.168.2.0).

  iSCSI volumes will be created in the next section!

Again, this task can be completed using the Openfiler Storage Control Center by navigating to [General] / [Local Networks]. The Local Networks screen allows an administrator to set up networks and/or hosts that will be allowed to access resources exported by the Openfiler appliance. For the purpose of this article, we want to add the Oracle database server individually rather than allowing the entire 192.168.2.0 network to have access to Openfiler resources.

When entering the database server node, note that the 'Name' field is just a logical name used for reference only. As a convention when entering nodes, I simply use the node name defined for that IP address. Next, when entering the actual node in the 'Network/Host' field, always use its IP address even though its host name may already be defined in your /etc/hosts file or DNS. Lastly, when entering actual hosts in our Class C network, use a subnet mask of 255.255.255.255.

It is important to remember that you will be entering the IP address of the storage network (eth1) for the database server machine.

The following image shows the results of adding the host access permissions for alex:


Figure 3: Configure Openfiler Host Access for the Client alex


Physical Storage

Storage devices like internal IDE/SATA/SCSI disks, external USB or FireWire drives, external arrays, or any other storage can be connected to the Openfiler server and served to clients. Once these devices are discovered at the OS level, Openfiler Storage Control Center can be used to set up and manage all of that storage.

My Openfiler server has 6 x 73GB 15K SCSI disks and 1 x 500GB SATA II disk. The six SCSI disks are configured as a RAID 0 stripe and exclusively used by iSCSI clients to store database files. Since this article concentrates on provisioning storage for Oracle database files, I will only be discussing the SCSI disks configuration.

Each of the six SCSI disks was configured with a single primary 'RAID array member' partition type that spanned the entire disk.

  If the new partition was not going to be part of a software RAID group, you would select the partition type as 'Physical volume'.

Since all of the disks will contain a single primary partition that spans the entire disk, most of the options were left at their default settings; the only modification was to change the 'Partition Type' from 'Extended partition' to 'RAID array member'.

To see this and to start the process of creating iSCSI volumes, navigate to [Volumes] / [Physical Storage Mgmt.] from the Openfiler Storage Control Center:


Figure 4: Openfiler Physical Storage


Software RAID Management

As mentioned in the previous section, the six SCSI disks on my Openfiler server are configured as a software RAID 0 meta-disk device:


Figure 5: Openfiler Software RAID Management


Volume Group Management

The next step was to create a Volume Group simply named scsi for the SCSI RAID 0 group created in the previous section.

From the Openfiler Storage Control Center, navigate to [Volumes] / [Volume Group Mgmt.]:


Figure 6: New Volume Group


Logical Volumes

Finally, I created a Logical Volume, which is what gets discovered and used by the iSCSI client node. For this example, I created a new 36GB logical volume named alex-data-1. The logical volume was created using the volume group (scsi) created in the previous section.

From the Openfiler Storage Control Center, navigate to [Volumes] / [Create New Volume] and select the newly created volume group scsi. Then, enter the values for the new iSCSI logical volume, making certain to select iSCSI as the filesystem type.

After creating a new logical volume, the application will point you to the "List of Existing Volumes" screen. If you want to create another logical volume, you will need to click back to the "Create New Volume" tab to create the next logical volume:


Figure 7: New Logical (iSCSI) Volumes


After creating the new logical volume, the "List of Existing Volumes" screen should look as follows:


Figure 8: New Logical (iSCSI) Volume


Grant Access Rights to New Logical Volume(s)

Before an iSCSI client can have access to the newly created iSCSI logical volume, it needs to be granted the appropriate permissions. Earlier in this article, I illustrated how to configure host access in Openfiler (the entry for alex-san) so that the database server has access rights to Openfiler resources. I now need to grant that host access to the newly created iSCSI logical volume.

From the Openfiler Storage Control Center, navigate to [Volumes] / [List of Existing Volumes]. This will present the screen shown in the previous section. For the new iSCSI logical volume, click on the 'Edit' link (under the Properties column). This will bring up the 'Edit properties' screen for that volume. Scroll to the bottom of this screen, change the host access from 'Deny' to 'Allow' for the alex node and click the 'Update' button.


Figure 9: Grant Host Access to Logical (iSCSI) Volume


Make iSCSI Target(s) Available to Client(s)

Every time a new logical volume is added, you will need to restart the associated service on the Openfiler server. In my case, I created a new iSCSI logical volume so I needed to restart the iSCSI target (iscsi-target) service. This will make the new iSCSI target available to all clients on the network who have privileges to access it.

To restart the iSCSI target service, use the Openfiler Storage Control Center and navigate to [Services] / [Enable/Disable]. The iSCSI target service should already be enabled (several sections back). If so, disable the service then enable it again. (See Figure 2)

The same task can be achieved through an SSH session on the Openfiler server:

[root@openfiler1 ~]# service iscsi-target restart
Stopping iSCSI target service: [  OK  ]
Starting iSCSI target service: [  OK  ]



Configure iSCSI Initiator and New Volume

An iSCSI client can be any system (Linux, Unix, MS Windows, Apple Mac, etc.) for which iSCSI support (a driver) is available. In this article, the client is an Oracle database server (alex) running the Solaris 10 Operating System (SPARC).

The Solaris 10 Operating System includes and installs the iSCSI initiator software by default. All iSCSI management tasks, such as discovery and logins, are performed with the iscsiadm command-line interface, which is included with the Solaris 10 Operating System (see the package list in the iSCSI Initiator section earlier in this article).


The iSCSI software initiator on alex will be configured to log in automatically to the network storage server (openfiler1) and discover the iSCSI volume created in the previous section.
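
Before configuring discovery, it can be helpful to confirm that the initiator is operational and to note its node name and default parameters (the output is not shown here since the node name and settings differ on every system):

[root@alex /]# iscsiadm list initiator-node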


Configure iSCSI Target Discovery

After verifying that the iSCSI software packages are installed on the client machine (alex) and that the iSCSI target (Openfiler) is configured, run the following from the client machine to add the Openfiler server as a discovery address. Note that the Openfiler network storage server is accessed through the private storage network at the address 192.168.2.195 (the default iSCSI port is 3260).
[root@alex /]# iscsiadm add discovery-address 192.168.2.195:3260
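
To confirm that the discovery address was registered, list the configured discovery addresses (a simple sanity check; the output should include an entry similar to the one below):

[root@alex /]# iscsiadm list discovery-address
Discovery Address: 192.168.2.195:3260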


Enable the iSCSI Target Discovery Method

Run the following command to configure the SendTargets method of discovery from the client node:
[root@alex /]# iscsiadm modify discovery --sendtargets enable
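
Once the SendTargets method is enabled, the Solaris initiator queries the discovery address and logs in to any targets it is permitted to access. The following commands can be used to confirm that the discovery method is active and that the Openfiler target was found (the output will vary with your configuration, so it is not reproduced here):

[root@alex /]# iscsiadm list discovery
[root@alex /]# iscsiadm list target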


Create the iSCSI Device Links for the Local System

The next step is to create the iSCSI device links for the local system on the client machine alex. These links will be created in the /dev/rdsk directory. As before, run the following command on the client machine alex:
[root@alex /]# devfsadm -i iscsi
After the devices have been discovered by the Solaris iSCSI initiator, the login negotiation occurs automatically. The Solaris iSCSI driver determines the number of LUNs available and creates the device nodes. Then, the iSCSI devices can be treated as any other SCSI device.
[root@alex /]# dmesg | grep alex-data-1
Apr 14 12:50:07 alex iscsi: [ID 240218 kern.notice] NOTICE: iscsi session(19) 
iqn.2006-01.com.openfiler:scsi.alex-data-1 online

Apr 14 12:50:07 alex scsi: [ID 799468 kern.info] sd0 at iscsi0: 
name 0000iqn.2006-01.com.openfiler%3Ascsi.alex-data-10001,0, 
bus address 0000iqn.2006-01.com.openfiler%3Ascsi.alex-data-10001,0

Apr 14 12:50:07 alex genunix: [ID 936769 kern.info] sd0 is 
/iscsi/disk@0000iqn.2006-01.com.openfiler%3Ascsi.alex-data-10001,0

Apr 14 12:50:07 alex scsi: [ID 107833 kern.warning] WARNING: 
/iscsi/disk@0000iqn.2006-01.com.openfiler%3Ascsi.alex-data-10001,0 (sd0):

Apr 14 12:50:07 alex genunix: [ID 408114 kern.info] 
/iscsi/disk@0000iqn.2006-01.com.openfiler%3Ascsi.alex-data-10001,0 (sd0) online

Apr 14 12:50:07 alex scsi: [ID 107833 kern.warning] WARNING: 
/iscsi/disk@0000iqn.2006-01.com.openfiler%3Ascsi.alex-data-10001,0 (sd0):


Create Primary Partition on iSCSI Volume

Now that the new iSCSI LUN has been successfully discovered, the next step is to create a new partition (or partitions).

Under Solaris, the command used to partition disks is format, an interactive tool similar to FDISK under MS-DOS. In most cases you will be allocating all available space to a single slice; if so, simply allocate all cylinders to a partition on slice 4 of the disk. If the new iSCSI volume were going to be used as an Oracle ASM volume (which is not the case in this article), you would first create a small 128MB partition on slice 0 before creating the data partition on slice 4. As a general practice, I always create the small 128MB partition on slice 0 anyway, just in case I later decide to reformat the disk for use as an Oracle ASM volume. You will also notice that slice 2 is already labeled "backup"; leave slice 2 as is. Once all of the required partitions have been created and the partition table is ready, write the new partition table to disk and label the disk. Labeling the disk can also be done from within the interactive format session.

[root@alex /]# format
Searching for disks...done

c1t0d0: configured with capacity of 35.98GB


AVAILABLE DISK SELECTIONS:
       0. c0t0d0 
          /pci@1f,0/ide@d/dad@0,0
       1. c0t2d0 
          /pci@1f,0/ide@d/dad@2,0
       2. c1t0d0 
          /iscsi/disk@0000iqn.2006-01.com.openfiler%3Ascsi.alex-data-10001,0
Specify disk (enter its number): 2
selecting c1t0d0
[disk formatted]
Disk not labeled.  Label it now? y


FORMAT MENU:
        disk       - select a disk
        type       - select (define) a disk type
        partition  - select (define) a partition table
        current    - describe the current disk
        format     - format and analyze the disk
        repair     - repair a defective sector
        label      - write label to the disk
        analyze    - surface analysis
        defect     - defect list management
        backup     - search for backup labels
        verify     - read and display labels
        save       - save new disk/partition definitions
        inquiry    - show vendor, product and revision
        volname    - set 8-character volume name
        !<cmd>     - execute <cmd>, then return
        quit
format> partition


PARTITION MENU:
        0      - change '0' partition
        1      - change '1' partition
        2      - change '2' partition
        3      - change '3' partition
        4      - change '4' partition
        5      - change '5' partition
        6      - change '6' partition
        7      - change '7' partition
        select - select a predefined table
        modify - modify a predefined partition table
        name   - name the current table
        print  - display the current table
        label  - write partition map and label to the disk
        !<cmd> - execute <cmd>, then return
        quit

partition> modify
Select partitioning base:
        0. Current partition table (default)
        1. All Free Hog
Choose base (enter number) [0]? 1

Part      Tag    Flag     Cylinders        Size            Blocks
  0       root    wm       0               0         (0/0/0)           0
  1       swap    wu       0               0         (0/0/0)           0
  2     backup    wu       0 - 4605       35.98GB    (4606/0/0) 75464704
  3 unassigned    wm       0               0         (0/0/0)           0
  4 unassigned    wm       0               0         (0/0/0)           0
  5 unassigned    wm       0               0         (0/0/0)           0
  6        usr    wm       0               0         (0/0/0)           0
  7 unassigned    wm       0               0         (0/0/0)           0

Do you wish to continue creating a new partition
table based on above table[yes]? yes
Free Hog partition[6]? 6
Enter size of partition '0' [0b, 0c, 0.00mb, 0.00gb]: 128mb
Enter size of partition '1' [0b, 0c, 0.00mb, 0.00gb]:
Enter size of partition '3' [0b, 0c, 0.00mb, 0.00gb]:
Enter size of partition '4' [0b, 0c, 0.00mb, 0.00gb]: 35.855gb
Warning: no space available for '5' from Free Hog partition
Warning: no space available for '7' from Free Hog partition

Part      Tag    Flag     Cylinders        Size            Blocks
  0       root    wm       0 -   15      128.00MB    (16/0/0)     262144
  1       swap    wu       0               0         (0/0/0)           0
  2     backup    wu       0 - 4605       35.98GB    (4606/0/0) 75464704
  3 unassigned    wm       0               0         (0/0/0)           0
  4 unassigned    wm      16 - 4605       35.86GB    (4590/0/0) 75202560
  5 unassigned    wm       0               0         (0/0/0)           0
  6        usr    wm       0               0         (0/0/0)           0
  7 unassigned    wm       0               0         (0/0/0)           0

Okay to make this the current partition table[yes]? yes
Enter table name (remember quotes): "data1"

Ready to label disk, continue? yes

partition> quit

FORMAT MENU:
        disk       - select a disk
        type       - select (define) a disk type
        partition  - select (define) a partition table
        current    - describe the current disk
        format     - format and analyze the disk
        repair     - repair a defective sector
        label      - write label to the disk
        analyze    - surface analysis
        defect     - defect list management
        backup     - search for backup labels
        verify     - read and display labels
        save       - save new disk/partition definitions
        inquiry    - show vendor, product and revision
        volname    - set 8-character volume name
        !<cmd>     - execute <cmd>, then return
        quit
format> quit
[root@alex /]#
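
To double-check the resulting slice layout outside of the interactive format session, the volume table of contents can be printed with prtvtoc (the device name below assumes the iSCSI disk discovered earlier, c1t0d0):

[root@alex /]# prtvtoc /dev/rdsk/c1t0d0s2

Slice 0 should report the small 128MB partition and slice 4 the remaining space (roughly 35.86GB in this example).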


Create File System on new iSCSI Volume / Partition

Create a UFS file system on the new disk using the newfs command. The device name should be /dev/rdsk/c1t0d0s4 if you partitioned as above.
[root@alex /]# newfs /dev/rdsk/c1t0d0s4
newfs: construct a new file system /dev/rdsk/c1t0d0s4: (y/n)? y
/dev/rdsk/c1t0d0s4:     75202560 sectors in 12240 cylinders of 48 tracks, 128 sectors
        36720.0MB in 765 cyl groups (16 c/g, 48.00MB/g, 5824 i/g)
super-block backups (for fsck -F ufs -o b=#) at:
 32, 98464, 196896, 295328, 393760, 492192, 590624, 689056, 787488, 885920,
Initializing cylinder groups:
...............
super-block backups for last 10 cylinder groups at:
 74226080, 74324512, 74422944, 74521376, 74619808, 74718240, 74816672,
 74915104, 75013536, 75111968


Create Mount Point

Create a mount point that will be used to mount the new disk somewhere in the current file system:
[root@alex /]# mkdir -p /u04


Mount the New File System

If you want the iSCSI volume mounted automatically at boot, edit /etc/vfstab and add a line for the new file system. It should look like this (all on one line, with tabs separating the fields):

/etc/vfstab
#device                 device                  mount                   FS      fsck    mount   mount
#to mount               to fsck                 point                   type    pass    at boot options
#
fd                      -                       /dev/fd                 fd      -       no      -
/proc                   -                       /proc                   proc    -       no      -
/dev/dsk/c0t0d0s1       -                       -                       swap    -       no      -
/dev/dsk/c0t0d0s0       /dev/rdsk/c0t0d0s0      /                       ufs     1       no      -
/dev/dsk/c0t0d0s7       /dev/rdsk/c0t0d0s7      /export/home            ufs     2       yes     -
/dev/dsk/c0t0d0s5       /dev/rdsk/c0t0d0s5      /u01                    ufs     2       yes     -
/dev/dsk/c0t0d0s6       /dev/rdsk/c0t0d0s6      /u02                    ufs     2       yes     -
/dev/dsk/c0t2d0s7       /dev/rdsk/c0t2d0s7      /u03                    ufs     2       yes     -
/devices                -                       /devices                devfs   -       no      -
sharefs                 -                       /etc/dfs/sharetab       sharefs -       no      -
ctfs                    -                       /system/contract        ctfs    -       no      -
objfs                   -                       /system/object          objfs   -       no      -
swap                    -                       /tmp                    tmpfs   -       yes     -
cartman:share2          -                       /cartman                nfs     -       yes     vers=3
domo:Public             -                       /domo                   nfs     -       yes     vers=3
/dev/dsk/c1t0d0s4       /dev/rdsk/c1t0d0s4      /u04                    ufs     2       yes     -

This will mount the file system on /u04 at boot time.
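
Since the entry now exists in /etc/vfstab, the file system can also be mounted immediately without a reboot and then verified (the reported size will be slightly less than 36GB due to file system overhead):

[root@alex /]# mount /u04
[root@alex /]# df -h /u04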



Logout and Remove an iSCSI Target from a Solaris Client

It is my hope that this article has provided valuable insight into how you can take advantage of networked storage and the iSCSI configuration process. As you can see, the process is fairly straightforward. Removing an iSCSI target from a Solaris client is just as easy as configuring it, and that is the subject of this section.

  1. Unmount the File System:

    [root@alex /]# cd
    [root@alex /]# umount /u04

  2. Remove the iSCSI Target Discovery Address:

    [root@alex /]# iscsiadm remove discovery-address 192.168.2.195:3260

  3. Disable the SendTargets Discovery Method:

    [root@alex /]# iscsiadm modify discovery --sendtargets disable

  4. Remove (or comment out) the related entry in the /etc/vfstab file:

    # /dev/dsk/c1t0d0s4       /dev/rdsk/c1t0d0s4      /u04                    ufs     2       yes     -

  5. Clean out the iSCSI Device Links for the Local System (see the verification commands below):

    [root@alex /]# devfsadm -C
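
To confirm the clean-up, verify that the discovery address and target are no longer known to the initiator (both commands should return no entries once the target has been completely removed):

[root@alex /]# iscsiadm list discovery-address
[root@alex /]# iscsiadm list target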



About the Author

Jeffrey Hunter is an Oracle Certified Professional, Java Development Certified Professional, Author, and an Oracle ACE. Jeff currently works as a Senior Database Administrator for The DBA Zone, Inc. located in Pittsburgh, Pennsylvania. His work includes advanced performance tuning, Java and PL/SQL programming, developing high availability solutions, capacity planning, database security, and physical / logical database design in a UNIX, Linux, and Windows server environment. Jeff's other interests include mathematical encryption theory, programming language processors (compilers and interpreters) in Java and C, LDAP, writing web-based database administration tools, and of course Linux. He has been a Sr. Database Administrator and Software Engineer for over 18 years and maintains his own website at: http://www.iDevelopment.info. Jeff graduated from Stanislaus State University in Turlock, California, with a Bachelor's degree in Computer Science.



Last modified on: Saturday, 18-Sep-2010 18:23:23 EDT