BUILD FreeNAS with 100+ TB for archiving

Status
Not open for further replies.

rosabox

Explorer
Joined
Jun 8, 2016
Messages
77
I'm planning a FreeNAS with 100+ TB capacity mostly for archiving.
I would like to use Supermicro hardware.
The raw idea is:
- use 8TB disks IronWolf PRO ST8000NE0004 https://www.amazon.com/8TB-7200RPM-256MB-IRONWOLF-PRO/dp/B06XX4HBY8 and/or WD Red Pro 8TB (WD8001FFWX) https://www.amazon.com/3-5-Inch-SATAIII-7200rpm-Internal-WD8001FFWX/dp/B01H33VQDG/
- 128GB ECC RAM
- IPMI
- rack mountable chassis with 16 or 24 3.5" hot-swap drive bays
- redundant PSU
- Gigabit Ethernet
- PCIe slots for the HBA adapter(s) and in the future maybe faster LAN

16x8TB drives in RAID-Z3 gives approximately 80TB of usable capacity

24x8TB drives in RAID-Z3 gives approximately 132TB of usable capacity
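As a sanity check on figures like these, here is a minimal sketch of how usable capacity can be estimated, assuming a single RAID-Z vdev, no ZFS metadata or padding overhead, and vendor "TB" (10^12 bytes) converted to TiB (2^40 bytes):

```python
# Rough usable-capacity estimate for a single RAID-Z vdev.
# Assumptions: one vdev, no ZFS metadata/padding overhead, and
# vendor TB (10^12 bytes) converted to TiB (2^40 bytes).

def usable_tib(drives: int, parity: int, size_tb: float) -> float:
    """Data drives times drive size, converted from TB to TiB."""
    data_drives = drives - parity
    return data_drives * size_tb * 1e12 / 2**40

# 16x 8TB in a single RAID-Z3 vdev (parity=3): 13 data drives
print(round(usable_tib(16, 3, 8), 1))   # ~94.6 TiB
# 24x 8TB in a single RAID-Z3 vdev: 21 data drives
print(round(usable_tib(24, 3, 8), 1))   # ~152.8 TiB
```

Real-world numbers will come in lower once metadata, allocation padding, and the recommended fill limit are accounted for, which is part of why estimates in threads like this vary.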

Any ideas, tips, specific recommendations?
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,994
Question: Are you planning to use deduplication? That affects your RAM size. If not, you shouldn't need so much RAM; 64GB would be fine if you are only using this server for archiving. More RAM is good to have, just not required.

The drives you selected are 7200 RPM which means they consume more power and create more heat, and generally cost more than the 5400 RPM versions. Do you have a specific reason for selecting these drives?

The capacity is a bit off but close. You are looking at three vdevs of 8 drives each = 36TB @ RAID-Z3 or 43TB @ RAID-Z2 per vdev.

Add in a small SSD as the boot device. You don't need a pair, one will do fine.
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
Any ideas, tips, specific recommendations?
To echo, and perhaps elaborate on, what @joeschmuck said, you aren't going to want all 16-24 disks in a single vdev, as performance really tanks when vdevs get too wide. You might get away with two 12-disk vdevs, but three 8-disk vdevs would be a better plan.

Other than that, keep an eye out for used complete servers--I've recently seen systems on eBay that are very similar to mine for around $1200.
 

Dice

Wizard
Joined
Dec 11, 2015
Messages
1,410
16x8TB drives in RAID-Z3 gives approximately 80TB of usable capacity

24x8TB drives in RAID-Z3 gives approximately 132TB of usable capacity
These suggestions, if intended per vdev, are far too wide.
 

rosabox

Explorer
Joined
Jun 8, 2016
Messages
77
First - thank you all for your answers, I really appreciate it.

No plan for deduplication.
But before putting this NAS in "production", we want to test some other usage scenarios and if the results are good, then we plan to build more - customized for the planned usage.

The HDD selection is based on the manufacturers' recommendations (for NAS systems with more than 8 HDDs). I'm open to other HDDs.
I've also read that it is better to combine HDDs from different manufacturers - because of flawed series of HDDs?

An SSD as a system/boot device is planned but I forgot to mention it.

Another thing I forgot to mention - the NAS will serve as another layer of protection against ransomware, the plan is to copy backups to the NAS using rsync.

danb35:
I remember reading somewhere that 12 is the recommended maximum for a vdev, but I don't remember why.
The company will pay for the hardware and it has to be new - eBay is not an option.

Dice:
Vdevs 8 disks wide are OK? In case of 8 disk vdevs, is RAID-Z2 OK?

PS: I redid the calculations and it definitely looks like I'll need a 24-bay chassis.
3 vdevs, 8 HDDs each, RAID-Z3 = 95TB
3 vdevs, 8 HDDs each, RAID-Z2 = 114TB
 
Last edited:

Dice

Wizard
Joined
Dec 11, 2015
Messages
1,410
we want to test some other usage scenarios and if the results are good, then we plan to build more - customized for the planned usage.
Excellent.
Just keep in mind that FreeNAS performance is very dependent on the use case scenario; different scenarios require quite different setups to perform well.
Drawing conclusions without understanding how to optimize, and the impact optimization has on performance, can easily be misleading.

I've also read, that it is better to combine HDDs from different manufactures - because of flawed series of HDDs?
Don't worry about this part.
However, worry about conducting a proper burn-in procedure. Link in my sig, or check out the resource section for burn-in scripts by @Spearfoot.

Vdevs 8 disks wide are OK? In case of 8 disk vdevs, is RAID-Z2 OK?
Most definitely.
I would not go beyond 10 drives myself, except for 11-drive RAID-Z3 vdevs in very specific circumstances.
RAID-Z2 vs RAID-Z3 comes down to how well you maintain the box and how quickly problems can be addressed.
I'd personally be very content with 10-drive-wide RAID-Z2 vdevs, provided that drives are burned in, a spare drive is ready and tested on site, and SMART and scrub schedules are set up along with email notifications that actually get read and acted upon. Useful links in my sig.

3x 8-drive RAID-Z2 is a really nice setup for a 24-bay box. However, you'd not have an "easy slot" available during a resilver to add the new drive without removing the old one first (which is considered best practice).
IMO a 3x 7-drive RAID-Z2 solution is pretty neat. IIRC space efficiency is slightly better at 7 drives wide than 8 drives wide for RAID-Z2.
There are some merits to 36-bay enclosures....

Another thing that has been left out of the discussion so far: CPU horsepower.
Interpreting your use case as a box mostly/only for receiving incoming backups for archive purposes over 1Gbit LAN, your hardware requirements for satisfactory performance could be cut quite a bit.
There is really no need for an E5 platform, nor for a Xeon CPU at all.

X11SSL + HBA + 64GB RAM + i3-6100 is a setup well beyond capable.

On the other hand, as mentioned by @danb35, there are second-hand options to check out.
If power consumption and heat are not a problem, there are typically 36-bay SAS2-compatible boxes out there which are a good deal regardless of whether you use the interiors or not.
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,994
If this is a production asset then of course I'd shoot for the 7200 RPM drives, and ensure you purchase a few extras to replace failures.

RAID-Z2 with 8-disk x 8TB vdevs is fine, but you need to understand the risks. If this is considered mission-critical data, I'd go with RAID-Z3, but only you can determine that. I feel RAID-Z2 using 8-disk vdevs is safe. What hurts the most is the size of the hard drives, so using the 7200 RPM drives will speed up the resilvering process during a drive replacement.

What is the storage capacity that you need? If you have 50TB of data to back up today, then double it. Also, realize that to maintain a healthy pool you should store no more than 80% of the capacity. For iSCSI you are looking at 50% of capacity. So if you have 50TB of data x 2 = 100TB, minus 20% = 80TB of real storage for shares.
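The sizing arithmetic above can be written out as a small sketch, using the 80% fill figure from this post:

```python
# joeschmuck's sizing rule of thumb as arithmetic:
# double today's data, and remember only ~80% of a pool is safely usable.

def usable_for_shares(pool_tb: float, fill_limit: float = 0.8) -> float:
    """Capacity you can actually fill while keeping the pool healthy."""
    return pool_tb * fill_limit

data_today_tb = 50
pool_tb = data_today_tb * 2           # plan for growth: 100 TB raw
print(usable_for_shares(pool_tb))     # 80.0 TB of real storage for shares
```

For iSCSI workloads, the same sketch applies with `fill_limit=0.5`, per the 50% figure above.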

One last thing... We see stuff like this all the time: Joe Blow builds a FreeNAS system, the company is happy, Joe leaves the company, the system starts to have issues, and the company is clueless about what to do. Build yourself a self-help guide for replacing a failing hard drive and hang it on the server. This will save a lot of time and frustration.
 

rosabox

Explorer
Joined
Jun 8, 2016
Messages
77
Thank you again :smile:

I've already built one FreeNAS - my home NAS:
SilverStone DS380
ASRock C2550D4I
32 GB RAM
8 HDDs in RAID-Z2
So it will not be my first :smile:

Dice:
The plan is to have at least one spare disk ready, three preferably.
I have SMART tests and scrubs set up on my home NAS (with email notification) - I'll copy that :smile:
The "easy slot" - I didn't think about that, thank you for mentioning it. Hmm, 36 bays :smile: .. and I started with the idea of 16 bays.
CPU horsepower - I was even thinking about an Avoton board, but the limit of 64GB RAM put me off.
As already mentioned - second hand is not an option (sadly).

joeschmuck:
It's not mission critical, more of a "cold" storage.
Current backups are around 40 TB, that's why I'm aiming at 100+ TB.
My calculations already account for the 80% capacity limit.
It's for a company, so proper documentation is mandatory.
 

Dice

Wizard
Joined
Dec 11, 2015
Messages
1,410
That was a reassuring post, showing you've enough under your belt to make this happen the right way.
 

rosabox

Explorer
Joined
Jun 8, 2016
Messages
77
I'm trying really hard to think it through so that the new NAS fulfills our expectations :smile:

I've got some ideas and have some questions :smile:
Say I buy a 36-bay chassis with a board, a CPU, and 64GB ECC RAM (with the option to expand to 128GB ECC RAM or more), and to keep the cost low, I buy only 16 disks.
I create two 7-HDD RAID-Z2 vdevs + 1 spare disk per vdev, and I create one big pool on the vdevs.
Is it a good idea to have the spare disk(s) already in the chassis?
Does FreeNAS automatically resilver when a disk dies?
When I later buy another set of 8 HDDs and create a new vdev, can I expand the pool with it? I think yes, and only new data gets written across all three vdevs - am I correct?
What if the new disks are bigger, say 10-12TB - is it a problem?
Any error in my thinking? Any drawbacks?
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
Is it a good idea to have the spare disk(s) already in the chassis?
Will qualified personnel be available onsite to identify and replace a failed disk in a timely manner? If so, I wouldn't. If the server is going to be remote, so that it may take a few days before someone could get to it to do the replacement, a hot spare would make more sense.
Does FreeNAS automatically resilver when a disk dies?
Yes, if you've added the spares to the pool as spares. I'd lean against doing this, as noted above.
When I later buy another set of 8 HDDs and create a new vdev, can I expand the pool with it? I think yes, and only new data gets written across all three vdevs - am I correct?
Correct, though writes are going to favor the new vdev until the usage is mostly balanced.
What if the new disks are bigger, say 10-12TB - is it a problem?
No problem at all--note my system config.
 

rosabox

Explorer
Joined
Jun 8, 2016
Messages
77
Yes, it'll be onsite with qualified personnel during standard working hours.

I have to do some reading on the spare vs hot spare ...

Now I have to pick some nice board + CPU + RAM + 36-bay chassis :smile:
When I'm done, I'll report back to verify my choice.
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
For the chassis, the one in my sig works very well, though I think there's an updated model that adds two 2.5" hot-swap bays (good for SSDs).

Edit: Here's the updated unit. SAS3 expander backplanes (rather than SAS2), 36 3.5" hot-swap bays, 2 2.5" hot-swap bays. The regular 847 chassis still supports up to four 2.5" devices with optional brackets, but of course they wouldn't be hot-swappable then.

Edit 2: It may well be worth contacting iX for a quote. I've heard very good things about their support, and of course they know FreeNAS pretty well.
 
Last edited:

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
I have to do some reading on the spare vs hot spare ...
In short, a hot spare (at least as I'm using the term) is a disk that's in the server, connected, powered up, and added to the pool as a spare. If any disk in the pool fails, FreeNAS will immediately begin resilvering to that disk.

A "warm spare" would be a disk that's installed, connected, and powered up, but not part of the pool. This will require manual intervention in the web GUI to begin resilvering, but you wouldn't need to do anything with the hardware.

The benefit to either of these is that the disk is right there, ready to go, and there's no need to touch the hardware in order to replace a failed disk. The downside is that the spare is wearing out at the same rate as the other disks in the pool.

My preference is a cold spare--a disk that's on hand, burned in, and tested, but not connected or powered up. When a disk fails, you install the cold spare and kick off the resilver. As long as you have staff on site who can do the job (and with a hot-swap chassis like you'd be using, that shouldn't be difficult), it seems to me that the downsides of warm and hot spares outweigh the upsides.
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419

farmerpling2

Patron
Joined
Mar 20, 2017
Messages
224
I must have 7200 RPM Enterprise stamped on my forehead...

I would suggest looking at WD Gold Datacenter, only $20 more than Red NAS Pro last time I checked. The 8TB and 10TB are helium-filled, so they run cooler than most other drives. I do not remember how much cooler, but it was in the range of 5400 RPM drives, if my memory serves me correctly.

The next table is the yearly electricity cost per drive. Notice that the cost of electricity is not that much different between the drives. The WD 10TB is cheaper to run than the WD Red NAS Home!

IMHO, we worry about the cost of electricity too much. There is not that much difference. It was very different years ago, when we had multi-platter 14" drives that needed 20-amp 240-volt AC.

A 20% duty cycle (active/idle) is used in the numbers below.
  • WD 6TB Gold Enterprise: $8.14 electricity / year (NAND memory for faster access)
  • WD 8TB Gold Enterprise: $6.00 electricity / year (helium-filled, runs cooler)
  • WD 10TB Gold Enterprise: $5.85 electricity / year (helium-filled, runs cooler)
  • WD 8TB Red NAS Pro: $6.11 electricity / year
  • WD 8TB Red NAS Home: $5.88 electricity / year
  • Seagate 10TB Enterprise NAS Helium: $?? (helium-filled, so I would expect very good power; still looking for specs on it)
----------------------------------
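For reference, a hedged sketch of how a yearly electricity figure like these can be estimated. The wattages and the $0.12/kWh rate below are hypothetical placeholders, not the inputs behind the table:

```python
# Back-of-envelope yearly electricity cost per drive.
# The wattages and the $0.12/kWh rate are hypothetical placeholders.

def yearly_electricity_usd(active_w: float, idle_w: float,
                           duty: float = 0.2, usd_per_kwh: float = 0.12) -> float:
    """Blend active/idle power by duty cycle, then price a year of runtime."""
    avg_w = active_w * duty + idle_w * (1 - duty)
    kwh_per_year = avg_w * 24 * 365 / 1000
    return kwh_per_year * usd_per_kwh

# e.g. a hypothetical drive: 7.0 W active, 5.0 W idle, 20% duty cycle
print(round(yearly_electricity_usd(7.0, 5.0), 2))   # ~5.68
```

As the table shows, the spread between drives at these power levels is only a couple of dollars per year.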

The next table normalizes $/TB/year to a 5-year warranty for rough comparison purposes. The WD Home has a 3-year warranty vs. the 5-year warranty of the others. Just remember that the MTBF of the WD Red Home is about half that of the Enterprise, and the Red NAS Pro a little less than the Enterprise. The chance of a failure for Enterprise is much lower than for the Red Home.

The Enterprise drives also have a much higher rated TB-per-year workload specification than the WD Red NAS Pro or Home.
  • WD 6TB Gold Enterprise: $9.00 /TB/year
  • WD 8TB Gold Enterprise: $9.50 /TB/year
  • WD 10TB Gold Enterprise: $9.40 /TB/year
  • WD 8TB Red NAS Pro: $9.00 /TB/year
  • WD 8TB Red NAS Home: $7.80 /TB/year
  • Seagate 10TB Enterprise NAS Helium: $8.44 /TB/year
The Seagate and WD Enterprise are the best for a business need, IMHO. They are better made, and fewer failures are likely to occur.
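A rough sketch of how such a $/TB/year normalization can be computed: amortize the purchase price over the warranty period, per TB of capacity. The $380 price below is an illustrative placeholder, not a real quote:

```python
# $/TB/year: purchase price amortized over the warranty period, per TB.
# The $380 price is a hypothetical placeholder, not a real quote.

def usd_per_tb_year(price_usd: float, size_tb: float, warranty_years: float) -> float:
    """Amortized cost per terabyte per year of warranty coverage."""
    return price_usd / (size_tb * warranty_years)

# e.g. a hypothetical 8 TB drive at $380 with a 5-year warranty
print(round(usd_per_tb_year(380, 8, 5), 2))   # 9.5
```

A 3-year-warranty drive would be called with `warranty_years=3`, which is how the shorter WD Home warranty penalizes its otherwise lower price.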

See the spreadsheet as all that info is in it...

Hope this helps...
 
Last edited:

CraigD

Patron
Joined
Mar 8, 2016
Messages
343
I create two 7-HDD RAID-Z2 vdevs + 1 spare disk per vdev, and I create one big pool on the vdevs
You only need one hot spare per pool
 

rosabox

Explorer
Joined
Jun 8, 2016
Messages
77
Guys you are really really helpful, thank you very much.

danb35:
I like the "updated unit" chassis, I'll check it out closely but I think I have a candidate :)

Most probably I'll contact iX and our Supermicro resellers/distributors for sure.

I was thinking of the "warm spare" variant, meaning 1 "warm spare" for the whole pool + one spare per vdev (in the closet).

Stux:
Yes, in a server room - UPS, AC, sensors, cameras, security, etc.

farmerpling2:
Very interesting, I didn't look at it that way.
I have to reconsider the disk selection.
I have to admit I'm a little bit afraid of helium disks - I'm worried about the longevity, helium is very volatile. <- bad word, see post #20 for explanation


Edit: Another question came to my mind - as I'll be using rsync, which AFAIK is single-threaded, should I consider a higher-clocked (less threaded) CPU, or is a low-clocked Xeon (1.7GHz) OK?
 
Last edited:

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
Helium is a noble gas... it's not volatile ;)

But it is hard to contain.

Are you perhaps thinking of Hydrogen?
 

rosabox

Explorer
Joined
Jun 8, 2016
Messages
77
Sorry, I'm not a native English speaker, so my wording is probably wrong.
What I meant is that helium has a tendency to "escape".
 