Low-Cost, High-Capacity Storage Server

Status
Not open for further replies.

Fireball81

Explorer
Joined
Apr 24, 2016
Messages
51
Hey guys,

I need to build a cost-efficient storage server, because I need to move around 30TB (27TB real) of data off our production storage for backup purposes.
As performance isn't much of a concern for this kind of backup machine, I have a few things in mind and would like to know your opinion:

I would like to make it an S1151 build, but I will pick some S1150 components for comparison.
Here is my current hardware selection, with a few questions afterwards:

S1151 build:
- Pentium G4400
- Supermicro X11SSH-CTF (a little bit of overkill, but a future-proof choice)
(alternatively: Supermicro X11SSM-F with FreeNAS installed on an SLC USB stick)
- 32GB ECC RAM
- 4x Seagate IronWolf NAS HDD 10TB (ST10000VN0004)
- Seasonic G550 550W PSU
- Some tower case that can fit at least 8x 3.5" drives

The plan is to build a RAIDZ vdev that gives me roughly the 25-27TB of usable storage I need. There should be enough headroom to later add another RAIDZ vdev for an additional ~25TB of usable storage.
All the data on this backup machine will also be backed up to LTO5 tapes.
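For reference, the usable space of the planned 4-drive RAIDZ vdev can be sanity-checked with a little unit arithmetic (RAIDZ1 keeps one disk of parity; vendor TB vs. OS-reported TiB accounts for the rest). A minimal sketch:

```shell
# 4x 10TB in RAIDZ1 leaves 3 data disks. Vendors count TB (10^12 bytes);
# the OS reports TiB (2^40 bytes), which is where the "missing" space goes.
USABLE_TIB=$(awk 'BEGIN { printf "%.1f", 3 * 10 * 1e12 / 2^40 }')
echo "~${USABLE_TIB} TiB usable before ZFS overhead"
```

That lands right at the "real 27TB" figure, before ZFS metadata overhead eats into it.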

S1150 build:
- Pentium G3220
- Supermicro X10SL7-F
- 32GB ECC RAM
- 4x Seagate IronWolf NAS HDD 10TB (ST10000VN0004)
- Seasonic G550 550W PSU
- Some tower case that can fit at least 8x 3.5" drives

Same plan as with the S1151 build: I will start with a RAIDZ pool consisting of 4 drives, giving me ~25TB of usable space. In the near future I would like to add another 4 drives to the pool.

Is there anything I am missing right now, any huge mistakes?

A few questions, if you don't mind:
Is the X11SSH-CTF fully compatible with the latest FreeNAS version? There were some rumors in late 2016 that the board wasn't fully compatible; what's the status quo on this one?
Can I easily flash the LSI SAS 3008 to HBA/IT mode? Will the PSU suffice for 8 drives overall?
Thanks for all your help and opinions, I appreciate it very much. :)

Dennis
 

tvsjr

Guru
Joined
Aug 29, 2015
Messages
959
RAIDZ isn't really appropriate for drives of this size, due to the expected unrecoverable error rate of each drive. You basically can't complete a scrub without exceeding the URE rate of the drive. Cyberjock, I believe, has written a much more detailed treatise on this.

Long story short, you should consider tossing an additional drive in the mix and going to RAIDZ2. Most of us find 6 drives in RAIDZ2 to be a happy place. There are plenty of chassis that will handle 12 drives (so you can add your second vdev). You can get enough connections by adding a second HBA or using a SAS expander.
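For illustration, creating such a 6-wide RAIDZ2 pool from the CLI would look roughly like this (pool and device names are made up; on FreeNAS you would normally do this through the GUI so the partitioning and swap layout come out right):

```shell
# Dry run: the command is only printed. Substitute your real device names
# (check `camcontrol devlist`) and drop the echo to actually create the pool.
CMD="zpool create backup raidz2 da0 da1 da2 da3 da4 da5"
echo "$CMD"
```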

For your PSU, review jgreco's guidance here: https://forums.freenas.org/index.php?threads/proper-power-supply-sizing-guidance.38811/

You need to add something for a boot drive. Either USB sticks (blergh) or a small SSD (or, to be more paranoid, a mirrored pair of SSDs). Lots of people find USB sticks to be just fine, but they aren't really designed for this purpose and you'll eventually kill them.
 

Fireball81

Explorer
Joined
Apr 24, 2016
Messages
51
Hi, thank you for your helpful reply. If you don't mind, I have a few more questions following your post.
I will take your advice and will probably go with a vdev of 5x10TB hard drives in a RAIDZ2 configuration, or 6x8TB in RAIDZ2.
My idea would be to use, say, the 6x8TB drives on the SATA ports of the Intel chipset; later on, I could use the SAS3 ports of the LSI 3008 to expand my pool.
Can I expand a ZFS pool over different storage controllers, or would it be better to create one pool per storage controller?

Thank you
 

Jailer

Not strong, but bad
Joined
Sep 12, 2014
Messages
4,977
Can i expand a zfs pool over different storage controllers
You certainly can and that's one of the wonderful features of FreeNAS. Just make sure to flash the controller to IT mode so it works as an HBA.
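Adding the second vdev later is a single command regardless of which controller the new disks hang off; a sketch with made-up pool and device names:

```shell
# Dry run: prints the command only. da6..da11 stand in for the six new disks
# on the second controller; ZFS addresses disks, not controllers.
CMD="zpool add backup raidz2 da6 da7 da8 da9 da10 da11"
echo "$CMD"
```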
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
Also, as a backup device... it may be worth considering that an 8-way 10TB RAIDZ2 would get you your circa 60TB of storage... BUT you can't convert a 4- or 5-way RAIDZ2 to an 8-way without backing up and restoring.

(taking into account TiB vs TB, overheads, etc.)

But as you will be backing up to tape... maybe that's worth considering.

8-way Z2 is also a happy place... but at 10TB that perhaps becomes iffy too.
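The TiB-vs-TB adjustment mentioned above, sketched for the 8-way 10TB RAIDZ2 case:

```shell
# 8x 10TB RAIDZ2 keeps 6 data disks. 60 vendor "TB" (10^12 bytes) shrinks once
# counted in binary TiB (2^40 bytes), before ZFS metadata and free-space headroom.
RAW_TB=$((6 * 10))
TIB=$(awk -v tb="$RAW_TB" 'BEGIN { printf "%.1f", tb * 1e12 / 2^40 }')
echo "${RAW_TB} TB raw -> ${TIB} TiB before overhead"
```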
 

Bhoot

Patron
Joined
Mar 28, 2015
Messages
241
I would suggest building an 8-disk RAIDZ2 (or even a RAIDZ3) with either 4TB (~$140 each) or 6TB (~$200 each) disks. Two-disk redundancy is always better than one, especially when you are talking about huge amounts of storage. The resilver of a failed drive will take a long time, and God forbid another drive fails during it; that is going to be a lot of pain. You will definitely save some money with this config while getting comparable storage (especially with the 6TB variant). Then you can periodically buy 10TB disks and replace them one at a time (read up on upgrading a server).
You might want to read @Bidule0hm's reliability/space calculator here.
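The one-at-a-time upgrade path boils down to two commands per disk (device names here are hypothetical; wait for each resilver to finish before touching the next drive):

```shell
# Dry run: the commands are printed, not executed. With autoexpand on, the pool
# grows automatically once the LAST small disk has been swapped for a big one.
CMD1="zpool set autoexpand=on backup"
CMD2="zpool replace backup ada2 ada8"   # old small disk -> new 10TB disk
printf '%s\n%s\n' "$CMD1" "$CMD2"
```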

//edit: Typo
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
I would suggest building an 8-disk RAIDZ2 (or even a RAIDZ3) with either 4TB (~$140 each) or 6TB (~$200 each) disks. Two-disk redundancy is always better than one, especially when you are talking about huge amounts of storage. The resilver of a failed drive will take a long time, and God forbid another drive fails during it; that is going to be a lot of pain. You will definitely save some money with this config while getting comparable storage (especially with the 6TB variant). Then you can periodically buy 10TB disks and replace them one at a time (read up on upgrading a server).
You might want to read @Bidule0hm's reliability/space calculator here.

Before you know it, you end up justifying a 24 bay 4U rack mount chassis.

https://forums.freenas.org/index.ph...24-supermicro-x10-sri-f-xeon-e5-1650v4.46262/
 

Fireball81

Explorer
Joined
Apr 24, 2016
Messages
51
Sorry for not replying to this thread for the last couple of weeks.
After a lot of back and forth, I think my decision will come down to either:

- a Supermicro SC846 with an expander backplane (BPN-SAS2-846EL1) and a 1200W PSU (PWS-1K21P-1R)
- an XCase RM424 LowNoise

The SC846 sells for roughly 350€ on eBay, which is fine considering the included expander backplane; the SAS2 backplane would be more than enough, since I will copy my data over a simple gigabit Ethernet link after all.
A reason for concern, though, is the 1200W PSU. I assume it will be pretty damn loud, since it's not one of the latest models, which have improved in that regard as far as I know.
The server has to stand right in the office, and I don't want to kill my colleagues with the noise that beast will probably make, even though the price is pretty tempting.

The XCase, on the other hand, is affordable too, but it does not have an expander backplane, so the embedded LSI 3008 of the X11SSH-CTF I posted above won't suffice to reach all 24 drive bays.
As I am just starting with 6x8TB drives in a RAIDZ2, I lean towards the XCase, because I am sure I can make it a pretty quiet system.
A few more questions, if you don't mind:

Question about drive count in a RAIDZ2 vdev:
With 8TB HDDs, what's the highest disk count you would choose per vdev?
Let's say I go from the LSI 3008 to the backplane of the XCase and can therefore access 8 drive bays out of the 24 in total.
Would 8x8TB also make for a good RAIDZ2 set, or is 6 drives the highest count you would recommend for 8TB disks in a RAIDZ2 setup?

Question about spanning a single vdev across multiple controllers:

Let's assume, on the other hand, that I go with 6x8TB vdevs.
The LSI 3008 gives me two SAS connectors for 8 drives in total, and if I later added another HBA, like an IBM M1115, I could connect another 8 drives.
The question now is: if I stick with 6x8TB vdevs, would it be a problem that those vdevs span the LSI 3008 and the M1115? The first vdev would be handled entirely by the LSI 3008, but for the second vdev, 2 slots would come from the LSI 3008 while the other 4 would come from the M1115.

Thank you very much for your kind help.
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
It's actually beneficial to span across controllers, since with the right setup every vdev can withstand a controller failure. For example, if you had four 6-way RAIDZ2 vdevs, each vdev could take at most two drives from any one controller's 4-lane SAS ports; if a whole controller then fails, you only lose two drives from each vdev, and the system can keep on keeping on until you can replace the controller.
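As an illustration, a single RAIDZ2 vdev can freely mix disks from both HBAs; with hypothetical device names (adaN on the onboard LSI 3008, daN on the M1115):

```shell
# Dry run: one 6-way RAIDZ2 vdev drawing 2 disks from one controller and 4 from
# the other. Note that losing the controller behind da0-da3 would cost this vdev
# 4 drives at once, more than Z2 can absorb, which is why spreading at most
# 2 disks per vdev per controller is the goal.
CMD="zpool add backup raidz2 ada4 ada5 da0 da1 da2 da3"
echo "$CMD"
```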
 

Fireball81

Explorer
Joined
Apr 24, 2016
Messages
51
Thank you for clearing this up for me.
Back to my question about the drive count inside a vdev: can you tell me what best practice is for a RAIDZ2 vdev?
From my perspective, I would like to go with 8x8TB per RAIDZ2 vdev; is that something I should avoid in favor of 6x8TB, or what do you recommend?

What PSU should I choose for a maximum drive count of 24? I have decided to buy the XCase RM424 LowNoise I mentioned earlier.
I don't think we will ever need to fill all 24 drive bays of the case, but just in case.
Realistically we will go for 16 drives max; can I go with the Seasonic G-750, or should I aim for a bit more wattage?
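As a rough back-of-the-envelope check (the per-drive and base figures below are assumptions, not datasheet values; jgreco's sizing thread linked earlier in the thread is the real reference):

```shell
# Worst case is simultaneous spin-up. Assume ~25 W peak per drive spinning up
# and ~100 W for board, CPU and fans; staggered spin-up lowers this a lot.
DRIVES=16
PEAK_W=$((DRIVES * 25 + 100))
echo "estimated spin-up peak for ${DRIVES} drives: ${PEAK_W} W"
```

By the same estimate, a full 24 drives would peak near 700 W, so a quality 750 W unit is about the floor if the chassis is ever filled.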

Thanks
 