Need advice on drives/drive setup - 10-bay NAS


SuF1X

Dabbler
Joined
Sep 19, 2018
Messages
35
Hey All,

I just got myself the following set-up:

Fractal Design R4 65

Supermicro X10SL7-F Motherboard

Xeon E3-1230 V3 CPU

Kingston 2x 8GB KVR16E11/8 ECC RAM

Corsair RM 550W PSU

Use case: photo/video storage and editing off the NAS, so I would like 500 MB/s+ speeds.

Will also be building a 10 Gbps network - could do with some advice on optimal (and cost-effective) kit for that too.

I have a modded case with 10 slots. I want to future-proof the build within reason and am thinking of adding 10 x 4TB drives.

1. Can someone advise which drives I should go for? I'm in the UK, so the prices I'm finding are:
a) WD Red - £113.00
b) Seagate IronWolf - £109.00
c) ?
2. What kind of redundancy set-up should I do?
a) 2x5 drives, striped?
b) 2x RAIDZ2 - 5+5 drives?
c) 10-drive single pool with RAIDZ2 or RAIDZ3?
Also, what would be the approximate performance?

Would love to get great speed with the fewest drives given over to parity, while keeping reliability. As always, an impossible request :)

I know my RAM will need upgrading soon - the question is, with this amount of storage, do I need to upgrade it first?

Also, once built, can I add an SSD as cache afterwards, or does it need to be added as a vdev as part of the initial build?


Your advice would be really appreciated.
 
Joined
Jul 3, 2015
Messages
926
1. Personally, I'd go with the WD Red NAS drives, as history seems to have shown they are worth the slightly higher cost.

2. Someone once told me that when building storage systems you have to consider three things: capacity, performance and integrity. They then told me you can never have all three, so you must choose the two that are most important to you. Once you have done that, the setup almost picks itself.

RAM-wise, 16GB will be fine until you get the time/money to add more, provided you're not planning on spinning up loads of VMs.

Yes, you can add cache drives later, be it SLOG or L2ARC, and also remove them without issue. That's probably the area you want to research most, as in a lot of cases neither is useful and they can sometimes be rather unhelpful.
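For what it's worth, adding and removing them later is only a couple of commands. A rough sketch, with the pool name "tank" and the SSD device names as placeholders (on FreeNAS you would normally do this from the GUI rather than the shell):

  # Add an L2ARC (read cache) device, then a SLOG (log) device
  zpool add tank cache ada1
  zpool add tank log ada2

  # Both can later be removed again without harming the pool
  zpool remove tank ada1
  zpool remove tank ada2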

Best of luck buddy.
 

SuF1X

Dabbler
Joined
Sep 19, 2018
Messages
35
Thanks!
2. Someone once told me that when building storage systems you have to consider three things: capacity, performance and integrity. They then told me you can never have all three, so you must choose the two that are most important to you. Once you have done that, the setup almost picks itself.
Regarding this one - I want to balance all three. What kind of speeds would I be expecting from a, b and c, approximately?
 
Joined
Jul 3, 2015
Messages
926
a. Not sure what you mean by '2x5 drives, striped'?
b. faster than c
c. slower than b

General rule of thumb: mirrors are fast, RAIDZ not so fast, Z2 slow and Z3 very slow. Also, the more vdevs, the better performance-wise. Each vdev will run at roughly the speed of a single drive; add another vdev and in theory (give or take a bit, normally take) you double the performance. More vdevs = more performance.
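If it helps to see what that means in practice, here is a rough sketch of how a couple of those 10-disk layouts would be built. The pool name "tank" and the device names da0-da9 are placeholders, and on FreeNAS you would normally create the pool from the GUI, but the vdev structure is the same:

  # Two 5-disk RAIDZ2 vdevs striped together (your option b)
  zpool create tank raidz2 da0 da1 da2 da3 da4 raidz2 da5 da6 da7 da8 da9

  # One 10-disk RAIDZ2 vdev (your option c: single vdev, most usable space, slowest)
  zpool create tank raidz2 da0 da1 da2 da3 da4 da5 da6 da7 da8 da9

  # For comparison, five 2-disk mirror vdevs (the fast end of the rule of thumb)
  zpool create tank mirror da0 da1 mirror da2 da3 mirror da4 da5 mirror da6 da7 mirror da8 da9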

:D
 

SuF1X

Dabbler
Joined
Sep 19, 2018
Messages
35
a. Not sure what you mean by '2x5 drives, striped'?
b. faster than c
c. slower than b
This is just mean, but it makes sense :)

a) - 5 vdevs of 2 drives

As for b and c - I want to know the approximate speed. I need 500 MB/s+ to be sustainable.

To be fair, if I do 5 + 5 it's not the biggest loss of storage; I could live with that. But I'm hoping someone has a build like this and speeds they could share :)

Thanks for pointing me in the right direction though! Really nice of you.
 
Joined
Jul 3, 2015
Messages
926
How fast are the drives you are going to buy?

SuF1X

Dabbler
Joined
Sep 19, 2018
Messages
35
How fast are the drives you are going to buy?
WD Red 4TB NAS Drive Specifications
Specification              6 TB          1 TB
Interface speed            6 Gb/s        6 Gb/s
Internal transfer rate     175 MB/s      150 MB/s
Cache (MB)                 64            64
Rotational speed (RPM)     IntelliPower  IntelliPower

Likely this one, then.
 
Joined
Jul 3, 2015
Messages
926
Internal transfer rate 175 MB/s 150 MB/s
OK, so let's take the smaller figure of 150 MB/s and say each vdev will run at roughly that speed. If you have two vdevs you get 300 MB/s, and if you have three vdevs, 450 MB/s.

Do you see where I'm going with this? :)
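A quick back-of-envelope way of applying that to the layouts discussed, using the 150 MB/s figure (this is just vdev count times one drive's rate, not a benchmark):

  # Rough sequential-throughput estimates per layout
  for layout in "5x2-mirror:5" "2x5-RAIDZ2:2" "1x10-RAIDZ2:1" "1x10-RAIDZ3:1"; do
    name=${layout%:*}; vdevs=${layout#*:}
    echo "$name ~ $((vdevs * 150)) MB/s sequential"
  done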
 

SuF1X

Dabbler
Joined
Sep 19, 2018
Messages
35
OK, so let's take the smaller figure of 150 MB/s and say each vdev will run at roughly that speed. If you have two vdevs you get 300 MB/s, and if you have three vdevs, 450 MB/s.

Do you see where I'm going with this? :)
Ahh, geez. I was under the impression that drives within a vdev add up in some way... that is very slow then!
 
Joined
Jul 3, 2015
Messages
926
Vdevs are striped, so performance comes from adding more stripes, i.e. more vdevs.
 
Joined
Jul 3, 2015
Messages
926
I'm just trying out a few of your suggested configs on my test box, so give me a bit and I'll come back with some real-world benchmark figures.
 
Joined
Jul 3, 2015
Messages
926
OK, so I've run some benchmarking tests based on your suggested configs and you can see the results below. Please note that benchmarking isn't an exact science: workloads vary, as do the results depending on the tools you use and the parameters you set, but this will give you an idea. Some of the results even surprised me a little.

I used iozone -t 1 -i 0 -i 1 -r 1M -s 300g, as my test system has 256GB of RAM and I wanted to avoid any ZFS caching, so this is just disk performance. I also disabled compression on the dataset.
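In other words, the run boiled down to something like this (the dataset name is a placeholder): switch compression off so the numbers reflect raw disk throughput rather than lz4 gains, then let iozone run its write and read tests with 1MB records over a 300GB file.

  zfs set compression=off tank/bench
  iozone -t 1 -i 0 -i 1 -r 1M -s 300g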

The test system is a Supermicro 36-bay all-in-one with an Intel Xeon E5-2623 v4 @ 2.60GHz and 256GB of RAM.

The drives used for testing were 10TB HGST 4Kn SAS.

1 x 1 disk:
Writes: avg throughput per process = 246373.77 kB/sec
Reads: avg throughput per process = 243517.39 kB/sec

1 x 5 disk Z2:
Writes: avg throughput per process = 414676.81 kB/sec
Reads: avg throughput per process = 679087.06 kB/sec

2 x 5 disk Z2:
Writes: avg throughput per process = 805717.69 kB/sec
Reads: avg throughput per process = 1170987.50 kB/sec

1 x 10 disk Z2:
Writes: avg throughput per process = 983882.06 kB/sec
Reads: avg throughput per process = 1106588.00 kB/sec

1 x 10 disk Z3:
Writes: avg throughput per process = 816047.12 kB/sec
Reads: avg throughput per process = 1245721.12 kB/sec

5 x 2 disk mirrors:
Writes: avg throughput per process = 988146.25 kB/sec
Reads: avg throughput per process = 793428.81 kB/sec
 

SuF1X

Dabbler
Joined
Sep 19, 2018
Messages
35
OK, so I've run some benchmarking tests based on your suggested configs and you can see the results below. ...


Really appreciate you running this test. Based on this, the 1x10 Z2 and Z3 are actually rather good. If you still have them configured - I think FreeNAS supports tracking IOPS - could you check 2x5 vs 1x10? It should be roughly twice as much on 2 vdevs.

Very interesting indeed!
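If it helps, I believe the per-vdev numbers (operations per second and bandwidth) can be watched while a test runs with something like this - pool name "tank" is a placeholder:

  zpool iostat -v tank 5

That should print a per-vdev breakdown every 5 seconds, so the 2x5 pool ought to show the load spread across both vdevs.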
 
Joined
Jul 3, 2015
Messages
926
Sorry buddy, back to the day job now. Hope this helped a bit.
 