Second FreeNAS build: Plex/ownCloud box

Status
Not open for further replies.

Algwyn

Dabbler
Joined
Sep 16, 2016
Messages
16
Five years ago, I set up my first FreeNAS box, based on an HP MicroServer (N40L NHP G7 EU), with 16 GB of RAM and 5 WD Red 3 TB disks in RAID-Z1.
This box has run smoothly with minimal maintenance over the last five years. It was set up with a single jail for Transmission, and was otherwise used mainly as a media file server.

It is now time to set up a new box, as:
  • Disks are now full, with barely 100 GB of free space
  • Disks are showing early signs of failure (two disks have bad sectors :oops::()
  • RAM cannot be increased; it's already at the system maximum, limiting the potential disk capacity to around 16 TB
  • The processor is too slow to support a Plex server
The requirements for the new box are as follows:
  • 30-40 TB disk capacity, in RAID-Z2 (moving from Z1 to Z2 as per cyberjock's recommendation)
  • Run a few jails (Transmission, Plex, ownCloud, sickbeard, ...) and an AFS file server
  • CPU powerful enough to run high-quality transcodes (4K video) for Plex
  • Be a bit future-proof, with the ability to bring capacity to 50-60 TB in a few years (4K video takes a huge amount of disk space)
  • Compact chassis, about 40x40x30 cm (WxDxH), to meet space constraints [updated]
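The ~16 TB ceiling mentioned above comes from the old community rule of thumb of roughly 1 GB of RAM per TB of raw pool storage. A minimal sketch of that guideline (it is a planning heuristic, not a hard FreeNAS limit):

```python
# Community rule of thumb: ~1 GB of RAM per TB of raw pool storage.
# This is a sizing guideline only, not an enforced limit.

def max_pool_tb(ram_gb: int) -> int:
    """Raw pool size (TB) the 1-GB-per-TB guideline pairs with a given RAM size."""
    return ram_gb  # 1 GB RAM <-> 1 TB of disk

print(max_pool_tb(16))  # old N40L box at its 16 GB max: ~16 TB, matching the limit above
print(max_pool_tb(64))  # planned box upgraded to 64 GB: ~64 TB, covering the 50-60 TB target
```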
After reviewing the various hardware guides, I have come up with the following build:

CPU option 1: 1 x Intel - Xeon E3-1230 V5 3.4GHz
CPU option 2: 1 x Intel Xeon E3-1230 V6 3.50 GHz
Motherboard: 1 x Supermicro - X11SSL-CF Micro ATX LGA1151 Motherboard
Memory: 1 x Crucial 32GB Kit (2 x 16GB) DDR4-2400 ECC UDIMM (CT7982583)
Storage: 2 x Samsung SSD 860 EVO, 250 GB - SATA III 2.5" - (MZ-76E250B/EU)
Storage: 6 x Western Digital - Red Pro 10TB (WD101KFBX)
Case: 1 x Fractal Design - Node 804 MicroATX Mid Tower Case
Power Supply: 1 x Corsair - SF 600W 80+ Gold Certified Fully-Modular SFX Power Supply
Other: 1 x SilverStone Technology Universal ATX to SFX Power Supply Bracket RL-PP08B
Other: 1 x Adaptec 2279900-R SATA cable, silver

A few factors are not fully settled:
  • Hard drives: WD Red seems to have the best value here in the EU. WD Red Pro, with a 5-year warranty, seems a good option. HGST Ultrastar He10 could be another option.
  • SSD: 250 GB is overkill ... but it's the minimum size in the latest SSD generations ... may select a smaller size
  • Motherboard: opted for an X11 motherboard to be able to increase RAM to 64 GB. I haven't seen any suitable motherboard with DDR3 RAM, which would have been better given the price of DDR4 (and the ability to reuse the RAM from my current box)
  • RAM: will start at 32 GB, and raise to 64 GB as needed
  • PSU: opted for an SFX power supply, as the Node 804 has very limited space between the disks and the PSU.

Any advice/comments are welcome!
 
Last edited:

joeinaz

Contributor
Joined
Mar 17, 2016
Messages
188
I am interested in your rate of growth. Your initial design uses six 10TB disks to get 30-40 TB of capacity. Any thought of using twelve 4TB disks to meet your capacity needs? The smaller disks have three benefits:

1. Growth is easy; you can replace the 4TB disks with 6TB or 8TB disks to grow. To get 60TB usable with RAIDZ2 and 6 drives would need 15TB(?) disks. I think the largest currently available on Amazon is 12TB, at about $450 USD.

2. In the event a disk is lost, the rebuild time on a 10TB disk will be long (days/weeks).

3. In the event a disk is lost outside of warranty, the replacement cost of a 10TB disk is high. A 4TB disk will be much less.
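The sizing arithmetic above can be sketched quickly. This assumes a single RAID-Z2 vdev and counts raw capacity only; ZFS metadata, padding, and the usual ~80% fill guideline all reduce the real usable figure:

```python
# Back-of-the-envelope RAID-Z2 sizing: in one RAID-Z2 vdev, two disks' worth
# of space goes to parity. Raw capacity only -- real usable space is lower.

def raidz2_usable_tb(n_disks: int, disk_tb: float) -> float:
    """Raw usable TB of a single RAID-Z2 vdev of n_disks drives."""
    return (n_disks - 2) * disk_tb

def disk_size_needed_tb(target_tb: float, n_disks: int) -> float:
    """Per-disk size needed to hit a target usable capacity with RAID-Z2."""
    return target_tb / (n_disks - 2)

print(raidz2_usable_tb(6, 10))     # 6x10TB -> 40.0 TB raw usable
print(raidz2_usable_tb(12, 4))     # 12x4TB as one (wide) vdev -> 40.0 TB raw usable
print(disk_size_needed_tb(60, 6))  # 60 TB usable from 6 drives -> 15.0 TB disks
```

This confirms the "15TB(?)" figure: 60 TB spread over the 4 data disks of a 6-drive RAID-Z2 vdev needs 15 TB per disk.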
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
In the event a disk is lost outside of warranty, the replacement cost of a 10TB disk is high.
OTOH, "outside of warranty" will be 3+ years from now, so current pricing isn't especially relevant.
 

joeinaz

Contributor
Joined
Mar 17, 2016
Messages
188
OTOH, "outside of warranty" will be 3+ years from now, so current pricing isn't especially relevant.

I don't assume the drives in question are new "boxed" drives. OEM and/or refurbished disks may have little or no warranty available to the user.
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
refurbished disks
Wouldn't touch those with a ten-foot pole, not after my experience with a half-dozen "white label" 6 TB disks (of which five failed within a year).
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
They manage to be less reliable than zip disks!
 

joeinaz

Contributor
Joined
Mar 17, 2016
Messages
188
Well, those are as good as paperweights.

I agree with you on refurbished disks or printers. Never my first choice. In 2013, I returned a failed disk (under warranty) to Seagate, and they sent back a "certified repaired" disk, also marked "Recertified". I reluctantly added that disk to my NAS device. Last year I rebuilt my NAS and replaced that and other disks with Seagate 2TB disks, but my "certified repaired" disk is still in service 5 years later.
 

Inxsible

Guru
Joined
Aug 14, 2017
Messages
1,123
The case of your choice supports 10 drives. You might want to consider a Chenbro chassis (48 bays!!) or a Supermicro 4U chassis (24 bays). Those chassis will give you a whole lot of options for upgrading, since you have "upgrade" as a requirement.

Supermicro chassis always seem to have a price premium attached. But the seller of the Chenbro chassis will take $350 for it, as 2-3 members of this forum have bought from him/her. And the Chenbro is brand new (just a discontinued model).
 

Algwyn

Dabbler
Joined
Sep 16, 2016
Messages
16
The case of your choice supports 10 drives. You might want to consider a Chenbro chassis (48 bays!!) or a Supermicro 4U chassis (24 bays). Those chassis will give you a whole lot of options for upgrading, since you have "upgrade" as a requirement.

Supermicro chassis always seem to have a price premium attached. But the seller of the Chenbro chassis will take $350 for it, as 2-3 members of this forum have bought from him/her. And the Chenbro is brand new (just a discontinued model).

One requirement that I did not specify is that I have limited space for this new FreeNAS box; basically it has to fit in a 40x40x30 cm space. Hence the choice of the Node 804 as the chassis.
 

Algwyn

Dabbler
Joined
Sep 16, 2016
Messages
16
I am interested in your rate of growth. Your initial design uses six 10TB disks to get 30-40 TB of capacity. Any thought of using twelve 4TB disks to meet your capacity needs? The smaller disks have three benefits:

1. Growth is easy; you can replace the 4TB disks with 6TB or 8TB disks to grow. To get 60TB usable with RAIDZ2 and 6 drives would need 15TB(?) disks. I think the largest currently available on Amazon is 12TB, at about $450 USD.

2. In the event a disk is lost, the rebuild time on a 10TB disk will be long (days/weeks).

3. In the event a disk is lost outside of warranty, the replacement cost of a 10TB disk is high. A 4TB disk will be much less.

The trade-offs on disk sizes are an interesting topic.

With a setup of 6x10TB disks, I would extend capacity by adding another vdev of 6x10TB disks (or whatever size seems relevant at that time). With that approach, growth is also easy.

Upgrading a setup of 12x4TB disks means replacing each disk with a bigger one. The drawback is that you don't reuse your initial disk investment.

Regarding the rebuild time, I am a bit puzzled. In the ZFS RAID size and reliability calculator, the MTTR (Mean Time To Recovery) is calculated from the usable data space, not the drive's size. Is that a limitation of the calculator's model? How much does drive size impact the MTTR vs. total usable data space?
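As a rough illustration of how the two views fit together: ZFS resilvers only allocated blocks, so rebuild time scales with the data actually stored per disk (which is why a calculator can work from used space), but a larger disk at the same fill ratio simply holds more data. A minimal sketch, where the 100 MB/s sustained rate is an assumed figure for illustration, not a measured one:

```python
# Rough resilver-time estimate. ZFS rewrites only the allocated share of the
# replaced disk, so the dominant factor is data-per-disk, not raw disk size --
# but a bigger disk at the same fill ratio holds proportionally more data.

def resilver_hours(disk_tb: float, fill_ratio: float, mb_per_s: float = 100.0) -> float:
    """Hours to rewrite the allocated share of one disk at a sustained rate."""
    data_bytes = disk_tb * 1e12 * fill_ratio      # allocated bytes on the disk
    return data_bytes / (mb_per_s * 1e6) / 3600   # bytes / (bytes per second) -> hours

print(round(resilver_hours(4, 0.8), 1))   # 4 TB disk, 80% full: ~8.9 h
print(round(resilver_hours(10, 0.8), 1))  # 10 TB disk, 80% full: ~22.2 h
```

Real resilvers are often slower than a sequential-rate estimate like this, since the pool stays in use and the reads are not purely sequential; the point is only the proportional scaling with data per disk.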
 

Algwyn

Dabbler
Joined
Sep 16, 2016
Messages
16
I've started building my new FreeNAS.
I've used the following config:

CPU: 1 x Intel Xeon E3-1230 V6 3.50 GHz
Motherboard: 1 x Supermicro - X11SSL-CF Micro ATX LGA1151 Motherboard
Memory: 1 x Crucial 32GB Kit (2 x 16GB) DDR4-2400 ECC UDIMM (CT7982583)
Storage: 2 x Samsung SSD 860 EVO, 250 GB - SATA III 2.5" - (MZ-76E250B/EU)
Storage: 6 x Seagate IronWolf PRO 8 TB, ST8000NE0004
Case: 1 x Fractal Design - Node 804 MicroATX Mid Tower Case
Power Supply: 1 x Seasonic SSR-750PX
Other: 1 x Adaptec 2279900-R SATA cable, silver

Amazon had a good price on the IronWolf Pro ... however, when I received them, I found out that the middle screw holes were missing from the sides of these drives! A quick check on the forum showed that this was a known issue ...

Damned ... will return them and switch to WD Red Pro ...
 

ZeroNine

Dabbler
Joined
Feb 1, 2017
Messages
20
I've started building my new FreeNAS.
I've used the following config:

CPU: 1 x Intel Xeon E3-1230 V6 3.50 GHz
Motherboard: 1 x Supermicro - X11SSL-CF Micro ATX LGA1151 Motherboard
Memory: 1 x Crucial 32GB Kit (2 x 16GB) DDR4-2400 ECC UDIMM (CT7982583)
Storage: 2 x Samsung SSD 860 EVO, 250 GB - SATA III 2.5" - (MZ-76E250B/EU)
Storage: 6 x Seagate IronWolf PRO 8 TB, ST8000NE0004
Case: 1 x Fractal Design - Node 804 MicroATX Mid Tower Case
Power Supply: 1 x Seasonic SSR-750PX
Other: 1 x Adaptec 2279900-R SATA cable, silver

Amazon had a good price on the IronWolf Pro ... however, when I received them, I found out that the middle screw holes were missing from the sides of these drives! A quick check on the forum showed that this was a known issue ...

Damned ... will return them and switch to WD Red Pro ...

Hi,

Walking the exact same path as you in terms of build, in the EU. Got the Node 804, the Supermicro and 10x4TB WD Reds. Would like to ask your thoughts on:
  1. Do you think the 1230 v6 is enough for your planned use? Struggling with that one right now.
  2. Any issues with cabling with the Seasonic? Were you able to cable everything without extras?
  3. Any issues with the RAM? The recommended RAM for the X11SSL-CF is nowhere to be (easily) found.
 

ZeroNine

Dabbler
Joined
Feb 1, 2017
Messages
20
I used the same motherboard, but the E3-1230v5 instead of the v6 since I didn't want to deal with BIOS updates. Availability in the UK for the key required to unlock this functionality via IPMI was also virtually non-existent, but I don't know how different that is in other parts of the EU.

I know other users have had troubles with Kingston RAM, but I ended up going for it when the Samsung RAM on the QVL was no longer available.

I used this: Kingston ValueRAM DDR4 16 GB DIMM CL15 Unbuffered ECC Memory, which appears to be available on amazon.fr too, for €192.90.

I've had no issues with it, and to be honest I don't anticipate any either.

I got lucky; I asked the vendor (on amazon.de) and they said it was updated to 2.0.

Thanks for the hints!
 

Algwyn

Dabbler
Joined
Sep 16, 2016
Messages
16
Hi,

Walking the exact same path as you in terms of build, in the EU. Got the Node 804, the Supermicro and 10x4TB WD Reds. Would like to ask your thoughts on:
  1. Do you think the 1230 v6 is enough for your planned use? Struggling with that one right now.
  2. Any issues with cabling with the Seasonic? Were you able to cable everything without extras?
  3. Any issues with the RAM? The recommended RAM for the X11SSL-CF is nowhere to be (easily) found.

Regarding the CPU, I haven't yet started to load it ... I'm still busy copying files to the new server.

Cabling with the Seasonic was OK. I added a SATA power cable, as I needed 3 to cover all the disks. Would you like me to post some pictures of the setup?

No issue with the RAM.

So far everything is working OK.

I just need to replace 2 of the WD Red Pro disks, as I was not able to register them with WD for warranty (they seem to be OEM disks, not eligible for warranty support).
All were purchased from Amazon at the same time. It seems they have issues with their supply chain.

Regarding the RAM, I ordered it directly from Crucial on their website. Best option here in the EU.
When Micron RAM is recommended, Crucial is their retail brand, which is more readily available (at least here in the EU).
They have a very nice RAM compatibility selector, with all the Supermicro motherboards referenced ...
 
Last edited:

Algwyn

Dabbler
Joined
Sep 16, 2016
Messages
16
I used the same motherboard, but the E3-1230v5 instead of the v6 since I didn't want to deal with BIOS updates. Availability in the UK for the key required to unlock this functionality via IPMI was also virtually non-existent, but I don't know how different that is in other parts of the EU.

To which update are you referring?
 