I'm having what looks like an iSCSI tuning problem. I just built out a SAN box, and with a little dd if=/dev/zero action I can consistently get between 250 and 350MB/sec locally, depending on the bs and count values. For a 5-disk RAID-Z, that feels about right and I'm not at all disappointed in those numbers. When I hit the same pool over iSCSI, though, I'm having trouble getting more than 70MB/sec of throughput so far.
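For reference, the local test was nothing fancier than dd straight at the pool, along these lines, give or take the bs/count values (the path is a placeholder; also worth noting that /dev/zero compresses to nothing, so with compression enabled on the dataset the numbers flatter the disks a bit):

```shell
# Plain sequential-write test against the pool, no iSCSI in the path.
# TESTFILE is a placeholder -- point it at a file on the RAID-Z dataset.
TESTFILE="${TESTFILE:-/tmp/dd-seq-test}"
dd if=/dev/zero of="$TESTFILE" bs=1M count=64
rm -f "$TESTFILE"
```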
Here's what I got:
Core i3-21xx (Sandy Bridge, dual core, hyperthreading)
8GB RAM, can't recall if ECC or not
5x750GB WDs
4 onboard NICs, Intel something, maybe 82574L
At first I was playing with a hypervisor currently in production. Initiating from there, I couldn't regularly exceed 60MB/sec. I gave up testing against production machines and switched to an unused hypervisor: bonded its two NICs, bonded two of the SAN's NICs (FEC, not LACP), and now have a nice blank slate to work with. After the NIC bonding on both sides, I've gone from 60ish up to 70-75ish.

Since then I've experimented with zvol device extents versus file extents, with different MTUs on each side (currently 9000 on both), and I've adjusted, sometimes wildly, the target values that seem relevant to throughput. Nothing has made any difference except bonding the two NICs on each side, and unfortunately my hypervisor is fresh out of NICs to bond.

One other curiosity: if I create four targets and benchmark them simultaneously, they each come out to around 40-45MB/sec. Still not as much as I'd hoped for, but it does seem to imply that the bonded NICs can move the traffic, the switch can move the traffic, et cetera. 4x40 far exceeds a single gigabit line, so I feel confident it's not a network issue, at least not until I can get up to around 200MB/sec. The switch is an oldish but still fairly decent 3Com; aside from the lack of LACP support (FEC only), it's never let me down.
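For context, by "target values" I mean the iSCSI negotiation parameters exposed in istgt.conf (istgt being FreeNAS's target). A rough sketch of the sort of fragment I've been editing; the parameter names are real istgt options, but the values are just examples I've cycled through, not recommendations:

```
# istgt.conf [Global] fragment -- illustrative values only.
# These control the iSCSI data-transfer negotiation with the initiator.
MaxSessions              16
MaxConnections           8
MaxOutstandingR2T        16        # concurrent R2Ts per command
FirstBurstLength         262144    # bytes of unsolicited data per command
MaxBurstLength           1048576   # max bytes per data sequence
MaxRecvDataSegmentLength 262144    # max bytes per data PDU
```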
All the evidence I've found suggests an iSCSI issue, not a ZFS issue and not a network issue. Anyone disagree?
I just rebooted it with some of the suggestions in the ZFS tuning guide, but I don't expect them to be of much help in these tests. Any thoughts or suggestions? Something obvious that I've totally overlooked?
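For what it's worth, the tunables in question are of this flavor (a /boot/loader.conf sketch; the names are real FreeBSD ZFS knobs, but the values are illustrative guesses for an 8GB box, not what the guide literally prescribes):

```
# Illustrative /boot/loader.conf entries -- example values, not gospel.
vfs.zfs.arc_max="6442450944"    # cap the ARC around 6GB on an 8GB box
vfs.zfs.prefetch_disable="0"    # leave file-level prefetch enabled
vfs.zfs.txg.timeout="5"         # seconds between transaction group syncs
```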
For the record, I also tried OMV, OpenFiler, and Windows as targets, and none of them did better than 30MB/sec. FreeNAS is off to the best start, and my experience with pfSense has been very positive as well, so I'd like to stay in the BSD camp on this one. I just don't know the intricate details of iSCSI yet.
Cheers, everyone; appreciate any advice!