Proposed FreeNAS build

Status
Not open for further replies.

AllanB

Dabbler
Joined
Feb 6, 2017
Messages
11
For a photo archive. Planning on running RAID-Z3, expanding as needed out to 12 drives.

Case: Fractal Design Define XL. I already have this; holds 13 or so drives, or more with some creativity.
Main drives: 8 x Western Digital Red 8 TB WD80EFRX. 64 TB total, 40 TB capacity in RAID-Z3.
Motherboard: Supermicro X11SSH-CTF LGA 1151. Has 8x SATA and 8x SAS ports, 10 Gb Ethernet, and the C236 chipset. Manufacturer link.
Processor: Intel Xeon E3-1225 v5. There are a lot of options here, this seems to be decent.
Memory: 2 x Crucial 16GB Single 2133 DDR4 PC4-17000 ECC. 32 GB to start, can be expanded to 64 GB if needed.
FreeNAS drive: M.2 Crucial MX300 525GB. Lots of options here. Smaller would probably work fine but doesn't really save much money.

I haven't looked into power supplies, fans, CPU cooling or cables yet.

Does this look like a good start? I'm not really expecting to do a lot of transcoding but there might be some. System will primarily be a photo archive with just 1 or 2 users.

Thanks for any tips
 

Spearfoot

He of the long foot
Moderator
Joined
May 13, 2015
Messages
2,478
Looks good!

The E3-1225 v5 has built-in graphics support that you don't need in a FreeNAS server - the trailing '5' in the part number is the giveaway. Choose one of the Xeons from this series without the graphics processor, with a part number ending in zero (E3-1230 v5, E3-1240 v5, etc.):

http://ark.intel.com/products/family/88210/Intel-Xeon-Processor-E3-v5-Family#@Server

Seasonic and EVGA make high-quality PSUs that are popular with forum users.

Good luck!
 

Dice

Wizard
Joined
Dec 11, 2015
Messages
1,410
I'd reconsider the CPU selection.
If you are aiming to just run a backup machine without multiple (3+) transcoding streams, you'll still be fine with an i3-6100 at roughly half the cost.
 

AllanB

Dabbler
Joined
Feb 6, 2017
Messages
11
Thanks for the CPU tips

Reading more about ZFS, I'm seeing the drive count situation is more complex than I'd thought. My previous experience is with mdadm, which is very relaxed: I ran an 8-drive RAID6 and later added 4 more drives, and it just did a rebuild and poof, more space.

As I understand it now, a ZFS pool implements the RAID level per vdev. You can't expand a vdev by adding more disks (though you can grow one by replacing all its disks with higher-capacity disks). You expand the pool by creating new vdevs and adding them.

That throws a bit of a wrench into my "start with 8 drives and add four more later" thoughts. So I'm thinking of starting with a larger set - 10 or 12 drives. That should be sufficient for a long while; I can figure out a way forward if/when I need to.
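
To sanity-check the options, a quick back-of-the-envelope sketch (parity arithmetic only - real usable space will come in lower once ZFS overhead and free-space headroom are accounted for; the helper function is just mine for illustration):

```python
# Raw capacity of candidate RAID-Z3 layouts with 8 TB drives.
# Parity arithmetic only: ZFS metadata, padding, and the usual
# "keep some space free" guideline all reduce the real usable number.
DRIVE_TB = 8

def usable_tb(drives: int, parity: int) -> int:
    """Data capacity of one RAID-Z vdev: (drives - parity) * drive size."""
    return (drives - parity) * DRIVE_TB

for width in (8, 10, 12):
    print(f"{width}-wide RAID-Z3: {usable_tb(width, 3)} TB")
# 8-wide: 40 TB, 10-wide: 56 TB, 12-wide: 72 TB
```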
 

iposner

Explorer
Joined
Jul 16, 2011
Messages
55
I don't know why you don't go for a RAIDZ2 instead of a RAIDZ3 - this way you'll have more capacity, and your rebuild in the event of a single disk loss will still be better protected than RAIDZ1.
 

StephenFry

Contributor
Joined
Apr 9, 2012
Messages
171
I don't know why you don't go for a RAIDZ2 instead of a RAIDZ3
These are 8TB drives; I'd pick RAIDZ3 any time. Unless you really want to squeeze every TB of storage out of your system, or are just using it as (temporary) storage for unimportant material, the little extra cost is worth it, IMO.

OP says it's a photo archive. If someone is building a 40TB archive, I doubt everything will be backed up elsewhere, making Z3 even more relevant. But that's for OP to admit to ;)
 

AllanB

Dabbler
Joined
Feb 6, 2017
Messages
11
I don't know why you don't go for a RAIDZ2 instead of a RAIDZ3

Well, storytime. When I had a 12-drive RAID6 on mdadm, a drive failed on Tuesday. "I'll fix it on the weekend," I thought. Then another drive failed on Thursday. So I powered down the computer, got two drives and installed them, fired it back up, and waited through the 24-hour rebuild. The rebuild succeeded. But my lesson learned was that for a 10-ish drive array which you plan to run for years, even RAID6/RAIDZ2 is closer to the edge than you might think.

For my backup, the unedited CR2/JPG photos are backed up at a second site. Then on the RAID, all the JPG, XMP, and PSD files are backed up locally, but not the CR2s, which take up about 80% of the capacity. If the RAID fails, the JPG/XMP/PSD files could be restored from local backup and rejoined with the CR2 files recovered from the offsite backup. It would be annoying to do, though.
 

iposner

Explorer
Joined
Jul 16, 2011
Messages
55
1) That was 12 drives, this is 8. More drives, greater chance of simultaneous failure.

2) Isn't that a story more about operational processes than about configuration? Why would you not replace the failed drive ASAP following notification of failure?

What this basically comes down to is trading performance and capacity versus cost and redundancy. It's a judgement call.
 
Joined
Apr 9, 2015
Messages
1,258
1) That was 12 drives, this is 8. More drives, greater chance of simultaneous failure.

The larger the drive, the more chances for UREs resulting in rebuild errors. Also, running multiple vdevs increases the risk of problems, since one vdev failure kills the whole pool.
http://www.zdnet.com/article/why-raid-6-stops-working-in-2019/ If the OP wants to expand the pool by adding a second vdev, then RAID-Z3 is pretty much a must; even without it, RAID-Z2 would be pretty risky with 8TB drives.

Though I think 2019 is a bit optimistic, since we are already seeing drives in the 10TB range - I'd say we are already getting there today. At current growth rates I expect 2025 to bring drives in the 20TB range, and even RAID-Z3 will be problematic around then.
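
To put rough numbers on the URE point, a sketch assuming the common consumer-drive spec of one unrecoverable read error per 10^14 bits and independent errors (both are simplifications, so this illustrates scale rather than predicting anything):

```python
import math

# Probability of hitting at least one URE while reading a given amount of
# data, assuming 1 URE per 1e14 bits and independent errors.
URE_PER_BIT = 1e-14

def p_at_least_one_ure(read_tb: float) -> float:
    bits = read_tb * 1e12 * 8  # TB -> bits
    return 1 - math.exp(-bits * URE_PER_BIT)

# Worst case: resilvering one failed 8 TB drive in an 8-wide vdev on a
# nearly full pool means reading the ~7 surviving drives, ~56 TB.
print(f"{p_at_least_one_ure(7 * 8):.0%}")  # ~99%
```

With RAID-Z2/Z3 a URE hit during a resilver can still be repaired from the remaining parity, which is exactly why the extra parity matters at these drive sizes.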
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
Well, two 6-drive RAID-Z2 vdevs means one now and one later, and you lose only 1 drive of space vs. a 12-way RAID-Z3. But you gain double the IOPS.

Or two 7-way RAID-Z2 vdevs, if you can find space for one more drive in the case eventually.

RAID is not a backup. You still want a backup plan.
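
The arithmetic, in case anyone wants to check it (raw capacity only, data drives x 8 TB):

```python
# The trade-off in numbers: data drives x 8 TB, raw capacity only.
DRIVE_TB = 8

twelve_wide_z3 = (12 - 3) * DRIVE_TB      # 9 data drives -> 72 TB
two_6_wide_z2  = 2 * (6 - 2) * DRIVE_TB   # 8 data drives -> 64 TB
two_7_wide_z2  = 2 * (7 - 2) * DRIVE_TB   # 10 data drives -> 80 TB

print(twelve_wide_z3 - two_6_wide_z2)  # 8 TB: the one drive of space given up
# Random IOPS scale roughly with vdev count, so two vdevs ~ double the IOPS.
```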
 

CraigD

Patron
Joined
Mar 8, 2016
Messages
343
I like what @Stux is saying

I would take it a step further

Your motherboard can support 16 drives. To me this looks like an 8-drive RAID-Z3 vdev, with the option to add another vdev the same at a later date.

This means another case like the Antec Twelve Hundred with 3-4 cages installed, or even the case I use with 3 cages.

Have Fun
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
I like what @Stux is saying

I would take it a step further

Your motherboard can support 16 drives. To me this looks like an 8-drive RAID-Z3 vdev, with the option to add another vdev the same at a later date.

This means another case like the Antec Twelve Hundred with 3-4 cages installed, or even the case I use with 3 cages.

Have Fun

Or a rackmount case - say, a 24-bay, for up to three 8-disk vdevs.
 

StephenFry

Contributor
Joined
Apr 9, 2012
Messages
171
Or a rackmount case - say, a 24-bay, for up to three 8-disk vdevs.

A good suggestion, depending on where the machine is going to be placed.
OP has a fairly cool-running Fractal case that is easy to keep quiet. A rackmount, in my (demanding) cooling experience, almost always needs noisy cooling.
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419

StephenFry

Contributor
Joined
Apr 9, 2012
Messages
171
These kinds of builds are why I inserted the 'almost' - nice job ;)
 

AllanB

Dabbler
Joined
Feb 6, 2017
Messages
11
The build described above is now built and is operating well. It's getting loaded with data.

I should note the X11SSH-CTF motherboard actually won't fit the Crucial M.2 MX300 drive. It turns out M.2 drives come in many different sizes: that Crucial drive is size 2280 (meaning 22 mm x 80 mm), and the motherboard only fits size 2260. I used a USB thumb drive instead.
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
The larger the drive, the more chances for UREs resulting in rebuild errors. Also, running multiple vdevs increases the risk of problems, since one vdev failure kills the whole pool.
Just to put some perspective on this, for everyone's benefit: I had to replace a 6TB drive in one of my servers at work that uses ZFS, and it took about 36 hours to resilver.
You could easily lose another drive in that time. I don't really look forward to the next server my organization plans to buy, which will have 60 x 8TB drives. The wheels are in motion on that already, and we don't really have much choice because of storage density - we can't make the building bigger to add more racks. Some of the old servers I will be decommissioning next year have 2TB drives, and the data on one of those whole servers could fit on about 4 drives in a more modern server, with redundancy.
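
For a rough sense of scale (a naive average rate and a naive extrapolation - actual resilver time depends heavily on pool fullness, fragmentation, and load; the drive sizes are the ones mentioned above):

```python
# Average rate implied by a 6 TB drive resilvering in ~36 hours, and a
# naive extrapolation to an 8 TB drive at the same rate.
size_tb, hours = 6, 36
rate_mb_per_s = size_tb * 1e6 / (hours * 3600)   # TB -> MB, hours -> seconds
print(f"~{rate_mb_per_s:.0f} MB/s average")                     # ~46 MB/s
print(f"~{8 * 1e6 / rate_mb_per_s / 3600:.0f} hours for 8 TB")  # ~48 hours
```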
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
Isn't that a SATA SSD, anyway? The X11SSH-CTF only supports PCIe SSDs in the M.2 slot.
 

Joined
Apr 9, 2015
Messages
1,258
Isn't that a SATA SSD, anyway? The X11SSH-CTF only supports PCIe SSDs in the M.2 slot.

Could use an adapter card that converts a PCIe slot into an M.2 slot.
 
