Slow speed with FreeNAS iSCSI target


gzartman (Contributor, joined Nov 1, 2013, 105 messages)
jgreco said:
If it shows each device averaging at least 25MB/sec, that's about what I'd expect for PCI-X.


It's a PCI-X card plugged into a PCI slot. I'm hamstrung by the limitations of the old PCI technology. I just bought the box a little over a week ago and didn't realize until afterward that some idiot had put a PCI-X board in a motherboard with a PCIe interface. Once I get my PCIe RAID card, let's have another look at the system.

I plugged 6x1TB drives into the motherboard SATA interface and created a 6-drive RAIDZ pool, and here's my zpool iostat for a dd write to disk. Not bad:

[screenshot: zpool iostat output during the dd write]
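
For anyone repeating the test, it was basically along these lines (the pool name and file path are placeholders, not my exact commands):

  # Sequential write load: stream zeros into a file on the pool
  dd if=/dev/zero of=/mnt/tank/ddtest bs=1m count=10000
  # In a second shell, watch per-device throughput once per second
  zpool iostat -v tank 1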


I want to utilize all 12 bays in my 2U box, so I need that IBM M1015 card. The PCI-X RAID card is getting chucked.
 

jgreco (Resident Grinch, joined May 29, 2011, 18,680 messages)
Okay, well I missed the PCI bit, that'll limit you to more like 100MB/sec aggregate... ow. Still, if you were only getting 40MB/sec write, it isn't clear that the controller was a serious bottleneck to that operation. ZFS will happily be gathering the next transaction group while the previous one is writing, so I'm still doubtful that this is the actual issue.

So the natural question becomes, what's performance over iSCSI like with the motherboard ports? The unfortunate problem is that iSCSI performance is often substantially less than what you get from local dd activities...
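
One rough way to measure that from the client side, assuming a Linux initiator and a scratch LUN that shows up as /dev/sdb (a placeholder, and this overwrites whatever is on that LUN):

  # Sequential write straight to the iSCSI block device, bypassing the page cache
  dd if=/dev/zero of=/dev/sdb bs=1M count=4096 oflag=direct
  # Read it back the same way
  dd if=/dev/sdb of=/dev/null bs=1M count=4096 iflag=direct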
 

cyberjock (Inactive Account, joined Mar 25, 2012, 19,526 messages)
jgreco said:
Okay, well I missed the PCI bit, that'll limit you to more like 100MB/sec aggregate... ow. Still, if you were only getting 40MB/sec write, it isn't clear that the controller was a serious bottleneck to that operation. ZFS will happily be gathering the next transaction group while the previous one is writing, so I'm still doubtful that this is the actual issue.

So the natural question becomes, what's performance over iSCSI like with the motherboard ports? The unfortunate problem is that iSCSI performance is often substantially less than what you get from local dd activities...

All that you just said is precisely why I said...

Hint: An M1015 isn't likely to help. I'm an iSCSI user and have an M1015. :)

40MB/sec is about what I get, give or take a little. High random I/O can drop that below 15MB/sec easily. I wasn't just talking out of my booty. I really was serious. I know exactly what's going on, and he's gonna be doing that "Fifth:... blah blah blah buy more hardware" thing if he wants better performance. And not just an M1015. ;)
 

jgreco (Resident Grinch, joined May 29, 2011, 18,680 messages)
Okay, fine, didn't notice I was stealing your thunder. Composing msgs on a smartphone sux! :)
 

gzartman (Contributor, joined Nov 1, 2013, 105 messages)
jgreco said:
So the natural question becomes, what's performance over iSCSI like with the motherboard ports? The unfortunate problem is that iSCSI performance is often substantially less than what you get from local dd activities...

I'm getting just over 45MB/s over iSCSI, so slightly better. You do lose a lot with iSCSI. I guess I'm used to my Promise Vtrak box, but I keep forgetting that thing was designed to be nothing but an iSCSI target. As you pointed out, it probably has a big write cache, and it also has dual LAN. The downside is you can't do anything else with it besides present iSCSI targets.


I've been reading your post on crossflashing the IBM M1015, so I'm excited to try that along with an SSD L2ARC on the ZFS pool. Hopefully I didn't waste my money on this CPU/motherboard combo, but it was within my budget for what I wanted: a box with 12 hot-swap bays and a Xeon 5500 or newer CPU.
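
For the L2ARC part, the command-line version is just the following (the pool name and SSD device are placeholders; on FreeNAS the usual route is the volume manager in the GUI):

  # Attach an SSD as a cache (L2ARC) device to the pool
  zpool add tank cache ada6
  # Verify the cache vdev shows up
  zpool status tank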
 

jgreco (Resident Grinch, joined May 29, 2011, 18,680 messages)
Frustrating, right? Do make sure you also walk through testing of the other subsystems, like networking, "just in case." But the one thing we've noticed on a consistent basis is that clock speed is important.
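
For the networking check, something like iperf between the FreeNAS box and the initiator is the usual sanity test (the address below is a placeholder for the FreeNAS box):

  # On the FreeNAS box: start an iperf server
  iperf -s
  # On the client: push TCP traffic for 30 seconds and report throughput
  iperf -c 192.168.1.10 -t 30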

You may also be able to mitigate somewhat by bludgeoning TCP buffer sizes and other iSCSI settings upward, which could improve throughput a bit at the cost of some latency (for products like ESXi that care about that). No promises it'll work, just what I'd try in that situation.
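
As a sketch of what bludgeoning the buffers upward might look like on the FreeBSD side: these sysctls exist, but the values below are only starting points to experiment with, not known-good numbers (set them as Sysctls in the FreeNAS GUI if you want them to survive a reboot):

  # Raise the ceiling on socket buffer sizes (16 MB here, purely illustrative)
  sysctl kern.ipc.maxsockbuf=16777216
  # Let TCP auto-tuning grow send/receive buffers up toward that ceiling
  sysctl net.inet.tcp.sendbuf_max=16777216
  sysctl net.inet.tcp.recvbuf_max=16777216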
 