BUILD FreeNAS with 100+ TB for archiving

Status
Not open for further replies.

farmerpling2

Patron
Joined
Mar 20, 2017
Messages
224
Guys you are really really helpful, thank you very much.

farmerpling2:
Very interesting, I didn't look at it this way.
I have to reconsider the disk selection.
I have to admit I'm a little bit afraid of Helium disks - I'm worried about the longevity; Helium is very volatile.

Helium volatile? Are we talking about the same gas? Helium is inert in most situations: it does not burn and is used in welding and airships, etc. Maybe you are thinking of hydrogen?

Helium is quite inert and is nothing to worry about. Maybe I am missing something from my days in chemistry???

I would not bat an eye about getting Helium drives. They use less power and run cooler.

Your goal should be to have something that works with the fewest problems possible. With more than 8-16 drives you need to think about the vibration harmonics that many drives create and the problems this can cause.

You should stay away from any drives specified for "1-8 bay" NAS use. These drives are meant for low-end NAS boxes. You are more in mid-range territory and want the Enterprise features, IMHO. You want it to work for 5+ years without failures, and when things do fail, to easily replace the drive with another one that is good for another 5+ years.

Best of luck!
 

farmerpling2

Patron
Joined
Mar 20, 2017
Messages
224
Sorry, I'm not a native English speaker, so my wording is probably wrong.
What I meant is that Helium has a tendency to "escape".

Ahh... Yes. WD bought HGST and is using their technology. Helium drives have been looked at for 10+ years, and manufacturers have finally figured out how to harness it. Do a Google search and you can read some write-ups about it.

I think they have harnessed the beastly little atom. :smile:
 

rosabox

Explorer
Joined
Jun 8, 2016
Messages
77
farmerpling2: I’m still not convinced about Helium disks: "helium being such a light and small monoatomic gas, it can diffuse through damn nearly anything that other gases couldn't" but I’ll reconsider them.
The lowest I was looking at were Seagate IronWolf PRO and/or WD Red Pro - both should be OK for 8+ bay NAS use - but I'm actually also considering some Enterprise disks, and if the budget allows, I'll definitely go for Enterprise disks.

Status update:
Chassis: 847BE1C-R1K28LPB https://www.supermicro.com/products/chassis/4U/847/SC847BE1C-R1K28LPB
Board: X10SRi-F https://www.supermicro.com/products/motherboard/Xeon/C600/X10SRi-F.cfm
CPU: Xeon® E5-2603 v4 https://ark.intel.com/products/92993/Intel-Xeon-Processor-E5-2603-v4-15M-Cache-1_70-GHz
(as I'll be using rsync, which AFAIK is single threaded, should I consider a higher clocked CPU or is this 1.7GHz Xeon OK?)
or
Xeon® E5-1620 v4 https://ark.intel.com/products/92991/Intel-Xeon-Processor-E5-1620-v4-10M-Cache-3_50-GHz
(4 cores, 8 threads, 3.5GHz base, 3.8GHz turbo - should be powerful enough)
HBA: AOC-S3008L-L8e http://www.supermicro.com/products/accessories/addon/AOC-S3008L-L8e.cfm
Boot device: 2x SuperDOM 16GB SSD-DM016-SMCMVN1 https://www.supermicro.com/products/nfo/SATADOM.cfm
RAM: 4x16GB MEM-DR416L-SL02-ER24 https://www.amazon.com/Supermicro-Certified-MEM-DR416L-SL02-ER24-Samsung-DDR4-2400/dp/B01DTIUTVY
HDD: not decided yet ...
 

Dice

Wizard
Joined
Dec 11, 2015
Messages
1,410
Status update:
Chassis: 847BE1C-R1K28LPB https://www.supermicro.com/products/chassis/4U/847/SC847BE1C-R1K28LPB
Board: X10SRi-F https://www.supermicro.com/products/motherboard/Xeon/C600/X10SRi-F.cfm
CPU: Xeon® E5-2603 v4 https://ark.intel.com/products/92993/Intel-Xeon-Processor-E5-2603-v4-15M-Cache-1_70-GHz
(as I'll be using rsync, which AFAIK is single threaded, should I consider a higher clocked CPU or is this 1.7GHz Xeon OK?)
or
Xeon® E5-1620 v4 https://ark.intel.com/products/92991/Intel-Xeon-Processor-E5-1620-v4-10M-Cache-3_50-GHz
(4 cores, 8 threads, 3.5GHz base, 3.8GHz turbo - should be powerful enough)
HBA: AOC-S3008L-L8e http://www.supermicro.com/products/accessories/addon/AOC-S3008L-L8e.cfm
Boot device: 2x SuperDOM 16GB SSD-DM016-SMCMVN1 https://www.supermicro.com/products/nfo/SATADOM.cfm
RAM: 4x16GB MEM-DR416L-SL02-ER24 https://www.amazon.com/Supermicro-Certified-MEM-DR416L-SL02-ER24-Samsung-DDR4-2400/dp/B01DTIUTVY
*drool*
However, I'd look at 32GB RAM modules. Last time I checked the premium for density was acceptable.
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
Samba benefits from single-threaded performance. I'd consider the 1620.
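
As for rsync being single-threaded: if that one thread ever becomes the bottleneck, the usual workaround is to split the transfer into several rsync processes, e.g. one per top-level directory. A rough sketch of the idea in Python (the source path, destination and job count below are just placeholders, not anything from your setup):

Code:
#!/usr/bin/env python3
# Sketch: run one rsync per top-level directory so a single rsync
# process is not the limit. Paths, host and job count are made up.
import concurrent.futures
import subprocess
from pathlib import Path

SOURCE = Path("/mnt/tank/archive")        # local dataset to push (assumed)
DEST = "backup@192.168.1.50:/mnt/tank/"   # remote target (assumed)
JOBS = 4                                  # parallel rsync processes

def sync(subdir: Path) -> int:
    # -a: archive mode, -H: keep hardlinks, --delete: mirror deletions
    return subprocess.run(["rsync", "-aH", "--delete", str(subdir), DEST]).returncode

if __name__ == "__main__":
    subdirs = [p for p in SOURCE.iterdir() if p.is_dir()]
    with concurrent.futures.ThreadPoolExecutor(max_workers=JOBS) as pool:
        codes = list(pool.map(sync, subdirs))
    print("failed jobs:", sum(1 for rc in codes if rc != 0))

Whether that is worth the extra complexity depends on where your bottleneck really is; on an archive box it may well be the disks rather than one CPU core.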
 

rosabox

Explorer
Joined
Jun 8, 2016
Messages
77
I’m leaning towards the 1620, it looks much more powerful and the price is only slightly higher.
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
With the 26xx chips, you're paying a significant premium for the ability to use them in a two-socket system. Since you're looking at a single-socket board, this is wasted. But also consider the 1650.
 

Evertb1

Guru
Joined
May 31, 2016
Messages
700
And IMHO if you have a hot spare for each RaidZ2, then you might as well just use Raidz3 instead
Uhh, isn't having a hot spare about replacing a failing disk (quickly and automatically)? That holds no matter whether you are running a RaidZ2 or a RaidZ3 setup. If a disk fails in a RaidZ3 you would still want/need to replace it. Having a hot spare can't be bad, I think. And in a production situation the cost should be less of a consideration.
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
Uhh, isn't having a hot spare about replacing a failing disk (quickly and automatically)? That holds no matter whether you are running a RaidZ2 or a RaidZ3 setup. If a disk fails in a RaidZ3 you would still want/need to replace it. Having a hot spare can't be bad, I think. And in a production situation the cost should be less of a consideration.

What's more reliable? raidz3 or Raidz2?

What's more reliable? Raidz2 + hot spare or raidz3 without hot spare?

Assuming you have a limit on your resources, bays, drives or capacity, it is better to increase your redundancy and performance than to add a 'useless' hot spare

Were you planning on not replacing the hot spare?

It only makes sense when you have one hot spare for multiple vdevs
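
To put some rough numbers on it, here is a toy calculation. It assumes a made-up vdev width and a made-up probability that any surviving drive dies during the resilver window, and it treats failures as independent (real ones are correlated), so take it as illustration only:

Code:
#!/usr/bin/env python3
# Toy model: one drive has already failed and is being resilvered
# (onto the hot spare for Z2, onto a replacement for Z3).
# p = chance that any given surviving drive also dies during that window.
from math import comb

def at_least(n: int, p: float, k: int) -> float:
    """P(at least k of n drives fail), X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

WIDTH = 10      # drives per vdev (assumed)
P_FAIL = 0.02   # per-drive failure chance during the window (assumed)
survivors = WIDTH - 1

# Degraded RAIDZ2: one parity left, two more failures lose the vdev.
loss_z2_plus_spare = at_least(survivors, P_FAIL, 2)
# Degraded RAIDZ3: two parity left, three more failures lose the vdev.
loss_z3 = at_least(survivors, P_FAIL, 3)

print(f"Z2 + hot spare, loss during resilver: {loss_z2_plus_spare:.5f}")
print(f"Z3, loss during resilver:             {loss_z3:.6f}")

With those made-up numbers the degraded Z3 vdev is roughly twenty times less likely to be lost during the window, simply because it has to lose one more drive than the Z2 that is resilvering onto its spare.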
 

Evertb1

Guru
Joined
May 31, 2016
Messages
700
What's more reliable? raidz3 or Raidz2?

What's more reliable? Raidz2 + hot spare or raidz3 without hot spare?

Assuming you have a limit on your resources, bays, drives or capacity, it is better to increase your redundancy and performance than to add a 'useless' hot spare

Were you planning on not replacing the hot spare?

It only makes sense when you have one hot spare for multiple vdevs

Sorry, but I think the question should be "what is more reliable, RaidZ2 or RaidZ3?". The hot spare should not be in that equation.

I feel that having a hot spare does not add to the reliability of your RaidZx. If a disk fails, it needs to be replaced. What having a hot spare does is buy you some time: the resilvering onto the replacement starts at once. For some that will be convenient or even important; others couldn't care less.

Choosing a specific RaidZ configuration should come first and should not be influenced by whether or not you have a hot spare. If somebody decides that RaidZ2 is the ticket, then so be it.

By the way: according to the OP, his resources etc. are not his biggest problem.
 

farmerpling2

Patron
Joined
Mar 20, 2017
Messages
224
farmerpling2: I’m still not convinced about Helium disks: "helium being such a light and small monoatomic gas, it can diffuse through damn nearly anything that other gases couldn't" but I’ll reconsider them.

Helium-filled drives have been around since 2013 (three to four years now) and they work. Just google "helium disk drive" and there is a lot of info about them. I would not bat an eye about using them.

We see three vendors supplying them, which is a good sign that the technology works. The product specs look good as well.
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
Sorry but I think the question should be "what is more reliable Raidz2 of Raidz3?". The hot spare should not be in that equation.

I started with that question. And the hot spare is part of the equation, as dedicating a drive to being a hot spare, when it could've been an extra parity drive instead, is stupid. Unless you have multiple vdevs and only one spare shared between them.

You can think of an extra parity drive as a live spare if you prefer.
 

Evertb1

Guru
Joined
May 31, 2016
Messages
700
You can think of an extra parity drive as a live spare if you prefer.

I would have a hard time thinking of an extra parity drive as a live spare. But that aside.

What I tried to point out (in my faulty English) is that choosing a certain RaidZ configuration can be based on a valid reason (or use case, if you prefer). Choosing to have a hot spare can also be based on a valid reason. Just stating that with a different RaidZ configuration the hot spare is not needed is a bit short-sighted. And as long as I don't know why somebody wants a certain RaidZ configuration and/or a hot spare, I would not call it stupid (expensive, maybe :) )
 

farmerpling2

Patron
Joined
Mar 20, 2017
Messages
224
My 2 cents...

With one vdev, you are generally better off having a Z3 than a Z2 + spare. With Z3 you get better availability without having to resilver the spare into a Z2. That resilvering window is a period of reduced availability.

A 10TB drive could take 10-20+ hours to resilver, depending on vdev usage and the speed of the drives. A Z2 dropping to Z1-level protection, and those 10-20+ hours of resilvering before it gets back to full Z2, is the window that you want to reduce. It is the whole reason why we use RAID parity drives.
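
Quick back-of-the-envelope on where that 10-20+ hour figure comes from (the throughput values are assumptions; a resilver on a well-used vdev rarely runs at full sequential speed):

Code:
# Rough resilver-time estimate for a 10TB drive at assumed average rates.
drive_tb = 10
for mb_per_s in (100, 150, 250):
    hours = drive_tb * 1e12 / (mb_per_s * 1e6) / 3600
    print(f"{mb_per_s} MB/s -> ~{hours:.0f} hours")
# Prints roughly 28, 19 and 11 hours respectively.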

If you had two Z2 vdevs and wanted to have a spare, that makes more sense, but you still have windows of increased probability of other drives failing and possibly losing a whole vdev. It would be best to have two Z3s and no spare, or two Z3s with a shared spare.

If your budget will allow, go Z3 for SAFEST reduction of hard drive failure. RAID just reduces that chance of hard drive failures causing loss of data.

A good backup is still REQUIRED...
 

Evertb1

Guru
Joined
May 31, 2016
Messages
700
My 2 cents...

With one vdev, you are generally better off having a Z3 than a Z2 + spare. With Z3 you get better availability without having to resilver the spare into a Z2. That resilvering window is a period of reduced availability.
...

I never argued that Z2 + spare is safer than Z3. I just think that you can't simply state "forget about hot spares and go for Z3". First decide what Zx configuration is best for your use case. I suspect that if somebody is thinking about having hot spares, there is a need for high availability and safety of the data. He or she would likely be better off with a Z3 configuration (and still have that hot spare?), but that's just an assumption. And if there is one thing I have learned in my years as a professional in the IT industry, it is that assumptions kill you.
...
If your budget will allow, go Z3 for SAFEST reduction of hard drive failure. RAID just reduces that chance of hard drive failures causing loss of data.
...
I don't think that a Z3 (or Z2) configuration does much for reduction of hard disk failure, but I know what you mean :)
 

farmerpling2

Patron
Joined
Mar 20, 2017
Messages
224
I don't think that a Z3 (or Z2) configuration does much for reduction of hard disk failure, but I know what you mean :)

Crappy "wording" on my part.

Z3 provides better protection from vdev failure by allowing more devices to fail before the vdev is considered unusable.

When someone asks which is better:
  1. Z3, no spare
  2. Z2 with spare
  3. Z3 with spare
#1 wins in almost all cases.

FWIW, I play around with spares and with removing devices from a vdev to force failure/recovery. I do this as part of simple testing to see how robust ZFS is. So I am not allergic to spares, I just prefer the better protection that Z3 provides. I personally use #2 because of a lack of $$$ and SATA ports for how I want to run things.

#3 is generally overkill for most home users. If your ZFS machine is for enterprise usage and you cannot afford to have down time, go with #3. You will pay for it...

Take care!
 

rosabox

Explorer
Joined
Jun 8, 2016
Messages
77
The new toy finally arrived :smile:

The final configuration is:
Mobo - Supermicro X10SDV-2C-TP4F
Chassis - Supermicro CSE-847BE1C-R1K28LPB
RAM - 2x 32GB DDR4 MEM-DR-432L-HL02
HBA - Supermicro AOC-S3008L-L8E
DOM - 2x 16GB Supermicro SSD-DM016-SMCMV
Cabling - 2x Supermicro CBL-SAST-0531
HDD - 18x Toshiba Nearline 8TB 3.5" SATA3 (2x 8-disk vdevs + 2 spares)

I like it very much, so far :smile:
Now the burn in and testing begins ...
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,994
Now the burn in and testing begins ...
Make sure you take your time and do it for as long as it takes. Do not rush the tests. I hope all works well for you.
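
If it helps, this is roughly the kind of thing people mean by taking your time: long SMART self-tests on every disk (plus a destructive write pass, e.g. badblocks, before any real data goes on the pool), repeated over several days. A small sketch, assuming smartmontools is installed and that the disks show up as /dev/da0 through /dev/da17 behind the HBA:

Code:
#!/usr/bin/env python3
# Kick off a long SMART self-test on every data disk, then report the
# attributes people usually watch. Device names are placeholders.
import subprocess

DISKS = [f"/dev/da{i}" for i in range(18)]

def start_long_test(dev: str) -> None:
    subprocess.run(["smartctl", "-t", "long", dev], check=True)

def report(dev: str) -> None:
    out = subprocess.run(["smartctl", "-a", dev],
                         capture_output=True, text=True).stdout
    flagged = [line for line in out.splitlines()
               if "Reallocated_Sector" in line or "Pending_Sector" in line]
    print(dev, *flagged, sep="\n  ")

if __name__ == "__main__":
    for d in DISKS:
        start_long_test(d)
    # Come back hours later (a long test on an 8TB drive takes a while)
    # and run report(d) for each disk; non-zero reallocated or pending
    # sectors on a brand-new drive are a good reason to consider an RMA.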
 