I've got an ESXi cluster in the lab, on which I've been evaluating several storage solutions to eventually move into a production environment.
Initially, we looked at MS Storage Server, OpenFiler 2.99, and FreeNAS 8 RC. OpenFiler 2.99 had beaucoup installer issues with our hardware of choice, which put it out of the running. MS Storage Server is somewhat pricey, particularly for our application. Thus, we've been putting FreeNAS through its paces of late.
On the testbed machine, I have 8GB memory, 2x 3.2GHz Xeon (Pentium D-era, two physical cores per processor), a single Intel gigabit NIC, and 23 drives on 3 SAT2-MV8 Supermicro dumb controllers. The drives are broken up as follows:
1 - OS drive (on a local SATA, not attached to any of the SAT2's)
1 - Spare drive (not associated with any volume)
4 - 2 mirrored pairs intended as a secondary datastore
2 - 1 mirrored pair used as an NFS / CIFS store
16 - 8 mirrored pairs established as an iSCSI target
All drives are 750GB 7200RPM drives with 16MB cache made by Seagate.
Using CIFS or NFS to the mirrored store described above, I can generally reach speeds of about 400-600 megabits/second. Given the constraints of the drives and the network, this is more than acceptable.
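For anyone wanting to reproduce those numbers, a simple sequential transfer is enough of a sanity check; something along these lines works from any Linux client with the share mounted (the mount point here is a placeholder, not my actual path):

```shell
# Write 1 GiB of zeros to the NFS mount and let dd report the transfer rate
# (/mnt/nfs-test is a placeholder mount point)
dd if=/dev/zero of=/mnt/nfs-test/throughput.bin bs=1M count=1024

# Read it back to /dev/null to get a rough sequential read figure
dd if=/mnt/nfs-test/throughput.bin of=/dev/null bs=1M

# Clean up the test file
rm /mnt/nfs-test/throughput.bin
```

Note dd reports bytes/second, so ~50-75 MB/s corresponds to the 400-600 megabits/second range above.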
However, iSCSI from my ESXi machine never gets above roughly 70 megabits/second. It doesn't matter how many connections are initiated, nor what the max burst length or queue depth is set to; iSCSI won't exceed 70 megabits/second, even though I know the network, the disks, and the machine can all handle more. The iSCSI initiator is ESXi 5.0 using default settings. The 16-disk array is essentially RAID 10, and it is presented in its entirety as a device extent through iSCSI, yielding a LUN of about 5.4TB; ESXi uses a max_file_size of 2TB (thus, no vmdk can exceed 2TB) and, I believe, a block size of 1MB.
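For reference, the initiator-side tuning I experimented with was done via esxcli on the ESXi 5.0 host; a rough sketch of the commands is below (the adapter name vmhba33 is a placeholder for whatever the software iSCSI adapter is actually called on your host, and the values shown are examples, not recommendations):

```shell
# List iSCSI adapters to find the software initiator's name (vmhba33 below is a placeholder)
esxcli iscsi adapter list

# Show the current per-adapter iSCSI parameters, including MaxBurstLength
esxcli iscsi adapter param get --adapter=vmhba33

# Raise MaxBurstLength on the software initiator (value is in bytes; 1 MiB here is an example)
esxcli iscsi adapter param set --adapter=vmhba33 --key=MaxBurstLength --value=1048576
```

None of these moved the needle past ~70 megabits/second in my testing.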
Is there any way to boost iSCSI performance? I'd like to stick with using iSCSI and FreeNAS, given their respective strong performance and simplicity of configuration.