Squeezing more write speed from RAID, iSCSI, and AFP


wuxia

Dabbler
Joined
Jan 7, 2016
Messages
49
Code:
root@nas storage # dd if=/dev/zero of=test.bin bs=2048k count=1024k
1048576+0 records in
1048576+0 records out
2199023255552 bytes transferred in 1344.964770 secs (1,635,004,354 bytes/sec)

root@nas storage # dd if=test.bin of=/dev/null bs=2048k
1048576+0 records in
1048576+0 records out
2199023255552 bytes transferred in 1146.017894 secs (1,918,838,499 bytes/sec)
titan_rw, these are some spectacular numbers! Would you share the hardware of this machine, please? I'm building a 24 HDD system right now and could use some help. Thanks.
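For anyone trying to reproduce this test: dd reading from /dev/zero produces highly compressible data, so on a dataset with compression enabled the result mostly reflects CPU speed rather than disk speed. A minimal sketch, assuming a pool named tank mounted under /mnt as FreeNAS does:

Code:
# Throwaway dataset with compression off, so the zeros aren't compressed away
zfs create -o compression=off tank/benchtest
cd /mnt/tank/benchtest

# Sequential write, then sequential read back (2 TiB, same sizes as above)
dd if=/dev/zero of=test.bin bs=2048k count=1024k
dd if=test.bin of=/dev/null bs=2048k

# Clean up
cd / && zfs destroy tank/benchtest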
 

titan_rw

Guru
Joined
Sep 1, 2012
Messages
586
titan_rw, these are some spectacular numbers! Would you share the hardware of this machine, please? I'm building a 24 HDD system right now and could use some help. Thanks.

It's an E5-1650 on an X9SRH-7TF with 64 GB of RAM.

It's in a Norco chassis with each drive plugged directly into an HBA, either the onboard one or one of two additional IBM M1015s.

The pool the test was run on is 18 Seagate 7200 RPM desktop drives arranged as three 6-drive RAIDZ2 vdevs.

The other pool in the NAS is six 6 TB WD Red drives in RAIDZ2.
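A layout like that can be built from the shell roughly as follows; this is only a sketch with hypothetical da0-da17 device names (on FreeNAS you would normally create the pool through the GUI instead):

Code:
# Three 6-drive RAIDZ2 vdevs in a single pool (hypothetical device names)
zpool create tank \
    raidz2 da0  da1  da2  da3  da4  da5  \
    raidz2 da6  da7  da8  da9  da10 da11 \
    raidz2 da12 da13 da14 da15 da16 da17

# Verify the vdev layout
zpool status tank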
 

wuxia

Dabbler
Joined
Jan 7, 2016
Messages
49
It's an E5-1650 on an X9SRH-7TF with 64 GB of RAM.

It's in a Norco chassis with each drive plugged directly into an HBA, either the onboard one or one of two additional IBM M1015s.

The pool the test was run on is 18 Seagate 7200 RPM desktop drives arranged as three 6-drive RAIDZ2 vdevs.

The other pool in the NAS is six 6 TB WD Red drives in RAIDZ2.
Thank you very much. Do you think I could achieve similar speeds with a similar setup, except that I was planning to use one IBM M1015 and the SAS expander of the SC846BE1C chassis? In other words, do you think more separate SAS connections vs. all drives going through one expander has any speed benefit? I'm asking because I haven't seen many ZFS speeds close to 2 GB/s here, and if separate connections do matter I can switch to the SC846BA, which allows for six SFF-8087 connections (with two more IBM M1015s, of course).
 

titan_rw

Guru
Joined
Sep 1, 2012
Messages
586
I haven't actually used an expander, but as I understand it, it shouldn't affect things too much. You should still get ~2400 MB/s through a single SFF-8087 connector, which is close to, but still a bit above, the throughput you'd expect from the pool.

During a scrub of the 18-drive pool I saw a 1.96 GB/s scrub speed. Pretty impressive.
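For the bandwidth math: a SAS2 SFF-8087 cable carries four 6 Gb/s lanes, and after 8b/10b encoding each lane moves about 600 MB/s of payload, so roughly 4 × 600 MB/s ≈ 2400 MB/s per connector. To watch scrub throughput on a live pool (pool name tank assumed):

Code:
# Kick off a scrub and watch the scan rate reported by zpool status
zpool scrub tank
zpool status tank    # while the scrub runs, the "scanned ... at .../s" line shows the current rate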
 

VictorR

Contributor
Joined
Dec 9, 2015
Messages
143
The pool the test was run on is 18 Seagate 7200 RPM desktop drives arranged as three 6-drive RAIDZ2 vdevs.

I wish we could have used 3 x 6-drive vdevs; the extra bandwidth would have been great. But in our application the capacity loss was too much: 96 TB vs. 80 TB.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Not sure that more vdevs would benefit sequential operations.

It could, but I'm thinking it's more likely to actually reduce speeds a bit, since fewer of the drives contain useful data (more of them hold parity), even though you end up pulling from the same number of disks. It would be an interesting experiment.
 

VictorR

Contributor
Joined
Dec 9, 2015
Messages
143
It could, but I'm thinking it's more likely to actually reduce speeds a bit, since fewer of the drives contain useful data (more of them hold parity), even though you end up pulling from the same number of disks. It would be an interesting experiment.

I really regret that I didn't get to do more testing on different configurations with this box, but the company wanted to start moving raw footage onto it right away and start editing. (I know, not wise... at least we have a good backup scheme.)

We did throw several 2K, 4K, and even 6K streams per client at it, and here's where I think the real-world performance may have exceeded the iperf test numbers: it seemed able to sustain multiple streams whose total bandwidth exceeded the tested read rates.
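As an aside, the multi-stream case can be approximated with iperf itself by running parallel TCP streams and comparing the aggregate to the single-stream number. A rough sketch; the hostname is a placeholder, and the stream count should match your editing workload:

Code:
# On the NAS (server side)
iperf -s

# On a client workstation: 4 parallel TCP streams for 30 seconds
iperf -c nas.example.com -P 4 -t 30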
 

VictorR

Contributor
Joined
Dec 9, 2015
Messages
143
The E5-1650 looks real nice. Team that up with the right single (or dual) CPU mobo (12 Gb/s capable) with 3x PCIe 3.0 slots (or more), one 12 Gb/s SAS3 HBA (AOC-S3008L-L8e?) for each vdev, 4/6 TB SAS3 drives, and gobs of RAM, and it could get interesting.

Oh to dream......
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
There's little point to a dual-CPU motherboard with the E5-1650 for ZFS. On a single-socket board like the X10SRL-F you get 7 PCIe slots, but the way they do that is by running x8 electrical in x16 physical slots, etc., effectively downsizing each slot. On dual-CPU motherboards that typically isn't done as aggressively, but you must have the second CPU installed for a lot of the slots to even be functional, and it's probably not even a supported configuration to run an E5-16xx in those boards. The 7 PCIe slots on the single-socket board are the better deal.
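On a FreeBSD-based system you can check what a slot actually negotiated; pciconf reports the PCI Express link width per device. A quick check (grepping for the LSI SAS2 driver, mps, is an assumption about which HBA is installed):

Code:
# The PCI-Express capability line shows the negotiated width, e.g. "link x8(x8)"
pciconf -lvc | grep -A 10 '^mps'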
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
You can already go 256GB RAM on a single board for a very reasonable price (~$1600).
 

VictorR

Contributor
Joined
Dec 9, 2015
Messages
143
There's little point to a dual-CPU motherboard with the E5-1650 for ZFS. ... The 7 PCIe slots on the single-socket board are the better deal.

Thanks, I spent quite a bit of time trying to figure that out Friday night, without much luck.

You can already go 256GB RAM on a single board for a very reasonable price (~$1600).
True. But if you want to go beyond that it gets very expensive.

Hahaha!! As we found out! After ordering our Q30 (X10DRL, dual CPU), I inquired about boosting it from 256 GB to 512 GB of DDR4 RDIMM. Their sales rep called back a few minutes later and said, "I don't think you want to do that." The jump from 32 GB to 64 GB modules would have added another $7,000 to the price (~$1,200 per module)...
 