For these tests, we set up a dedicated Gigabit Ethernet iSCSI network. We started with an Adaptec SANblock 2-Gbps FC enclosure with 14 73-GB drives. From the SANblock, we ran a 1-Gbps FC connection to a McData 1620 Internetworking Switch, which provided the iSCSI link and handled the fabric translation from FC. We connected the McData unit over Gigabit Ethernet fiber (our 1620 has fiber Gigabit Ethernet ports only) to a Dell 5224 PowerConnect switch, which provided the Gigabit Ethernet copper connection for the iSCSI cards to be installed in our Dell 2650 dual 2.6-GHz Pentium test server. We removed all drives from the server and created three identical Windows 2000 Server boot drives, patched to current levels. This let us configure each card on a separate system disk and avoid cross-contamination.
In our previous test of iSCSI HBAs, we learned that because most iSCSI ASICs use parallel processing, you must provide multiple iSCSI targets to achieve reasonable efficiency. This is a result of the off-loading process and is not an issue in FC networks or with iSCSI traffic running on conventional NICs. But it should be a consideration for anyone evaluating iSCSI: An HBA attached to a single-target array probably won't perform optimally. Therefore, the finishing touch for our test bench was to configure the SANblock's 14 drives as four identical 135-GB RAID 5 arrays, leaving two drives to run as hot spares. We then partitioned each array and formatted it with NTFS, creating four LUNs, designated as targets and loaded with our 5-GB Iometer data file.
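The per-LUN partition-and-format step can be scripted rather than done through Disk Management. A minimal sketch using diskpart scripting (available on Windows XP/2003 and later; on Windows 2000 itself the GUI would be used) is below. The disk number, drive letter, and volume label are placeholders, not values from our bench; the sequence would be repeated for each of the four LUNs.

```shell
rem Sketch only: script one LUN's partition/format (placeholder values).
rem lun1.txt would contain the diskpart commands:
rem   select disk 1
rem   create partition primary
rem   assign letter=E
rem   exit
diskpart /s lun1.txt

rem Format the new partition with NTFS so it can serve as an iSCSI target LUN.
format E: /FS:NTFS /V:LUN1 /Q
```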
This generation of HBAs, with graphical interfaces rather than command-line configurations, was far easier to install and manage than the previous one. Adaptec's iSCSI HBA management appears as a simple system tool in the Control Panel, and QLogic provides a Java-based configuration application for setup. Alacritech uses Microsoft's native iSCSI initiator, which lets any Windows computer with an Ethernet NIC use iSCSI-enabled storage devices.
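Microsoft's software initiator can also be driven from the command line through its iscsicli tool, which is how a connection like Alacritech's would typically be scripted. The sketch below shows the basic discover-and-login sequence; the portal address and target IQN are placeholders, not values from our test bench.

```shell
rem Sketch: attaching to an iSCSI target with Microsoft's software
rem initiator via iscsicli (placeholder portal and IQN values).

rem Register the storage array's portal with the initiator service.
iscsicli QAddTargetPortal 192.168.10.20

rem Discover the targets that portal exposes.
iscsicli ListTargets

rem Log in to one of the discovered targets; its LUNs then appear
rem to Windows as ordinary disks.
iscsicli QLoginTarget iqn.2003-01.com.example:storage.lun1
```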
We changed the Iometer protocol used previously, increasing the maximum transfer size to 2 MB and adding a test to measure concurrent bidirectional read and write performance (see "Test Bed," page 13). To achieve full efficiency, which demands multiple IP streams, we created a topology in which four workers were each assigned their own LUN. Under this setup, each worker drove an independent data stream and could perform a different task concurrently.