BUILD First FreeNAS Build

Status
Not open for further replies.

jimboooooo

Dabbler
Joined
Feb 21, 2017
Messages
13
I've been looking to put together a FreeNAS build for a while now and over the last few months have been reading and piecing together a list of hardware. I have a 28TB unraid machine but that will not cut it any longer for family photos and videos, time machine backups, personal repos, etc.

I want this build to be expandable as time goes on. I went with the Node 804 case because it supports up to 10 3.5" + 2 2.5" drives. I decided to go with 2 x 8TB mirrored vdevs and have the ability to add 2 more mirrored vdevs later. I've gone back and forth on raidz2 or mirrored vdevs hundreds of times now. This article pushed me into the mirrored vdev court, but I'm always open to other thoughts on the matter :). I am also planning on running a number of jails (and Docker containers when that is supported), so I'm thinking of 2x500GB SSDs in RAID 1 outside of the pool just for that purpose, plus a 4TB drive just for downloading/transcoding of media.

I went with the Supermicro X11SSL-CF mobo because it can connect up to 14 drives. I figure I'll save the SAS connectors for the vdev drives and use the SATA connections for the other drives.

Do these parts sound alright? I'm hoping I didn't miss something obvious.

CPU: Intel - Xeon E3-1275 V5 3.6GHz Quad-Core Processor ($336.44 @ OutletPC)
Motherboard: Supermicro - X11SSL-CF Micro ATX LGA1151 Motherboard ($252.99)
Memory: Crucial - 16GB (1 x 16GB) DDR4-2133 Memory ($184.99 @ B&H)
Memory: Crucial - 16GB (1 x 16GB) DDR4-2133 Memory ($184.99 @ B&H)
Storage: Samsung - 850 EVO-Series 500GB 2.5" Solid State Drive ($178.89 @ OutletPC)
Storage: Samsung - 850 EVO-Series 500GB 2.5" Solid State Drive ($178.89 @ OutletPC)
Storage: Western Digital - Red 4TB 3.5" 5400RPM Internal Hard Drive ($133.99 @ SuperBiiz)
Storage: Western Digital - Red 8TB 3.5" 5400RPM Internal Hard Drive ($263.99 @ SuperBiiz)
Storage: Western Digital - Red 8TB 3.5" 5400RPM Internal Hard Drive ($263.99 @ SuperBiiz)
Storage: Western Digital - Red 8TB 3.5" 5400RPM Internal Hard Drive ($263.99 @ SuperBiiz)
Storage: Western Digital - Red 8TB 3.5" 5400RPM Internal Hard Drive ($263.99 @ SuperBiiz)
Case: Fractal Design - Node 804 MicroATX Mid Tower Case ($89.99 @ SuperBiiz)
Power Supply: SeaSonic - 650W 80+ Gold Certified Fully-Modular ATX Power Supply ($99.90 @ Amazon)
Case Fan: Fractal Design - FD-FAN-SSR2-120 40.6 CFM 120mm Fan ($9.88 @ OutletPC)
Case Fan: Fractal Design - FD-FAN-SSR2-120 40.6 CFM 120mm Fan ($9.88 @ OutletPC)
Case Fan: Fractal Design - FD-FAN-SSR2-120 40.6 CFM 120mm Fan ($9.88 @ OutletPC)
Case Fan: Fractal Design - FD-FAN-SSR2-140 66.0 CFM 140mm Fan ($11.99 @ SuperBiiz)
Other: SanDisk Ultra Fit 16GB USB 3.0 Flash Drive SDCZ43-016G-G46 ($10.85 @ Amazon)
Other: SanDisk Ultra Fit 16GB USB 3.0 Flash Drive SDCZ43-016G-G46 ($10.85 @ Amazon)
Other: Tripp Lite DVI to VGA Monitor Cable, High Resolution cable with RGB Coax (DVI-A M to HD15 M) 10-ft.(P556-010) ($9.74 @ Amazon)
Total: $2770.10
 

NASbox

Guru
Joined
May 8, 2012
Messages
650
Looks very good... quality parts... Fractal makes a great case... SeaSonic makes a great PSU, and the WD/Samsung drives are great as well.
HGST drives are also a good option for HDDs.

All those fans might be a bit noisy, but with that small form factor you may well need them to clear the heat.

Depending on what you have in mind for expansion, if you can spare the space, you may want to go with a larger case.

I need more drives for backups, so I've got to scrap my case and rebuild my system (going with a Corsair 750D; with extra HD cages it will take 14 HDDs + 4 SSDs). Good room for airflow, and I plan on using a hot-swap bay for offsite backup.

Good luck with your build... let us know how you make out.
 

Evertb1

Guru
Joined
May 31, 2016
Messages
700
Looks like a great wish/shopping list. I envy you. :)

About that article about the mirrored vdevs: I think it is well known within the community and a lot of it makes sense, but not everybody agrees with it. Keep in mind that pools with redundancy are all about availability and not about data protection. With mirrored vdevs, resilvering is a breeze if a disk fails. But you can't afford to lose another disk in the same vdev. With, for example, RAIDZ2, resilvering means harder work for your system, but losing any second disk is not fatal. Just look critically at your use case before you decide.

Go for a bigger case if you can. I have a hard time keeping my drives at a reasonable temperature in my midsize case.

I started out with mirrored USB sticks as boot devices (they were SanDisk Ultras as well, coincidentally). I lost a couple of them and bought an SSD because I did not need the hassle. A lot of FreeNAS users have used them for years, but others have been just as unlucky as I have been.

About that video cable: your motherboard has IPMI. You will not need a monitor or keyboard for your FreeNAS box. IPMI is great and not hard to use. You will have access to the BIOS and can even mount bootable devices or images through the IPMI interface.
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
I figure I'll save the SAS connectors for the vdev drives and use the SATA connections for the other drives.
What do you mean by this? Every drive is a vdev, or is part of a vdev.

As to the SSDs, meh--it certainly won't hurt anything, but it really isn't going to do anything helpful either, at least for jails (it could for VMs).

Unless the 1275 CPU is the same price as (or less than) the 1270, go for the 1270--the built-in GPU on the 1275 won't help you at all.
 

jimboooooo

Dabbler
Joined
Feb 21, 2017
Messages
13
What do you mean by this? Every drive is a vdev, or is part of a vdev.
I want the SSDs and the one 4tb drive to live outside of any pool just for use by applications. Not for storage.

Unless the 1275 CPU is the same price as (or less than) the 1270, go for the 1270--the built-in GPU on the 1275 won't help you at all.
Ok I'll make the switch.

Depending on what you have in mind for expansion, if you can spare the space, you may want to go with a larger case.

I need more drives for backups, so I've got to scrap my case and rebuild my system (going with a Corsair 750D; with extra HD cages it will take 14 HDDs + 4 SSDs). Good room for airflow, and I plan on using a hot-swap bay for offsite backup.

I was thinking of just adding 4 more drives (2 mirrored vdevs) into the pool. That's all I could really do with that case and the number of non-storage drives I currently have set up. That case looks great. I'll definitely consider switching to it and buying another cage.

About that article about the mirrored vdevs: I think it is well known within the community and a lot of it makes sense, but not everybody agrees with it. Keep in mind that pools with redundancy are all about availability and not about data protection. With mirrored vdevs, resilvering is a breeze if a disk fails. But you can't afford to lose another disk in the same vdev. With, for example, RAIDZ2, resilvering means harder work for your system, but losing any second disk is not fatal. Just look critically at your use case before you decide.

About that video cable: your motherboard has IPMI. You will not need a monitor or keyboard for your FreeNAS box. IPMI is great and not hard to use. You will have access to the BIOS and can even mount bootable devices or images through the IPMI interface.

I'm on the fence about switching (again, for the 100th time) to raidz2 just based on that. If the system needed a new drive then I would stop any activity just for resilvering, so I guess I don't need fast resilvering. I.cannot.lose.my.family.photos, so maybe raidz2 is the way to go. Sorry for adding that cable; it's for another project.


Thanks for the advice everyone!
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
I want the SSDs and the one 4tb drive to live outside of any pool just for use by applications.
Not possible. They may be part of a different pool (or even of two different pools), but they'll still be in a pool.
 

jimboooooo

Dabbler
Joined
Feb 21, 2017
Messages
13
Not possible. They may be part of a different pool (or even of two different pools), but they'll still be in a pool.

Ok good to know! I wanted raid 1 for the two SSDs so I guess that means I'd want a new pool with a mirrored vdev within that pool. Then another pool and single drive vdev for that other hdd.
 

Arwen

MVP
Joined
May 17, 2014
Messages
3,611
Ok good to know! I wanted raid 1 for the two SSDs so I guess that means I'd want a new pool with a mirrored vdev within that pool. Then another pool and single drive vdev for that other hdd.
Yep, that's exactly how ZFS would work for your use case. 3 pools:
  1. 2 vDevs, each with 2 x 8TB HDD in a Mirror
  2. 1 vDev, with 2 x 500GB SSD in a Mirror
  3. 1 vDev, with 1 x 4TB HDD, no redundancy
Simply create them separately and use appropriate names.
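As a rough sketch, the layout above maps to `zpool create` commands like these (the device names are hypothetical placeholders, and on FreeNAS you'd normally create pools through the web GUI, which also handles swap partitions and gptid labels for you):

```shell
# Pool 1: two mirrored vdevs of 2 x 8TB HDDs (ZFS stripes across the mirrors)
zpool create tank mirror ada0 ada1 mirror ada2 ada3

# Pool 2: one mirrored vdev of 2 x 500GB SSDs
zpool create apps mirror ada4 ada5

# Pool 3: a single 4TB HDD, no redundancy
zpool create scratch ada6
```

Creating pools from the CLI on FreeNAS bypasses the middleware, so treat this only as an illustration of the vdev/pool structure.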

Note we say Mirrored vDevs in ZFS terminology, rather than RAID-1. Similar concept and result, though ZFS does things differently.

One last thing: ZFS also allows you to hot-replace disks. This is different from pulling a failed disk, putting in a replacement, and then causing a rebuild (aka re-silver in ZFS terminology). If you have a free disk slot, and the failing disk has not completely failed, use this fancy disk replacement. It can help with some types of second-disk-failure scenarios.
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,994
Overall not a bad build however here is my two cents...

Do not use USB 3.0 flash drives; instead opt for USB 2.0 flash drives, or better yet just one smallish SSD. With all the money you are sinking into this build, a single SSD is really the way to go, and you will be much happier in the long run from a reliability point of view.

I want the SSDs and the one 4tb drive to live outside of any pool just for use by applications. Not for storage.
I don't understand what you are asking for. In FreeNAS everything is storage. As previously mentioned, these would still be treated as a vdev no matter how you treat them. You could use them for VMs or jails, but it's still all storage.

Do you have something specific in mind? If you provide an example then we can tell you if it will work or you are barking up the wrong tree. There is no sense wasting money if this fails to meet your ideas.
 

jimboooooo

Dabbler
Joined
Feb 21, 2017
Messages
13
Do you have something specific in mind? If you provide an example then we can tell you if it will work or you are barking up the wrong tree. There is no sense wasting money if this fails to meet your ideas.

Yep, that's exactly how ZFS would work for your use case. 3 pools:
  1. 2 vDevs, each with 2 x 8TB HDD in a Mirror
  2. 1 vDev, with 2 x 500GB SSD in a Mirror
  3. 1 vDev, with 1 x 4TB HDD, no redundancy
This setup is basically what I was thinking about once y'all informed me that I must create vdevs even for the SSDs and the extra 4TB drive.

Keep in mind that pools with redundancy are all about availability and not about data protection. With mirrored vdevs, resilvering is a breeze if a disk fails. But you can't afford to lose another disk in the same vdev. With, for example, RAIDZ2, resilvering means harder work for your system, but losing any second disk is not fatal. Just look critically at your use case before you decide.

Data protection is more important to me, so I think I'll be switching to raidz2. I realize that once you create a vdev you can't 'add to it', so I think I'll switch to more, smaller drives to keep my total price at or below my original $2770.10 price point. My upgrade path would then be to eventually replace all the drives with larger ones.

I'm thinking:
  1. 9x2TB raidz2 vdev in its own pool for actual data storage
  2. 2x500GB mirrored vdev in its own pool for application storage
  3. 1x4TB vdev in its own pool for 'temp' storage for things like Plex transcoding and downloading of media
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,994
So your RAIDZ2 of nine 2TB drives would give you 10TB of usable storage. I'm not sure if you understand how to grow a pool, but maybe you do. Let's say you outgrow the 10TB of storage and now you want more. If you replace a few of the hard drives with 4TB drives, nothing happens. You need to replace all nine of the hard drives with larger drives for the space to become available. In general we try to guide people to use smaller vdevs in order to facilitate this better. You could go for six 4TB drives and have a few more TBs of storage up front, and when you need more space you only need to replace 6 hard drives.
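To put rough numbers on the two layouts, here's a back-of-the-envelope calculation: convert marketing TB to TiB, subtract the two parity drives, then knock off the ~20% free space a healthy pool wants. Real ZFS overhead varies a bit, so treat these as estimates only.

```python
def raidz2_usable_tib(num_drives, drive_tb, free_fraction=0.20):
    """Rough usable space for a single RAIDZ2 vdev, in TiB."""
    drive_tib = drive_tb * 1e12 / 2**40   # marketing TB -> TiB
    data_drives = num_drives - 2          # RAIDZ2 spends two drives on parity
    return data_drives * drive_tib * (1 - free_fraction)

print(f"9 x 2TB RAIDZ2: {raidz2_usable_tib(9, 2):.1f} TiB usable")
print(f"6 x 4TB RAIDZ2: {raidz2_usable_tib(6, 4):.1f} TiB usable")
```

This lands close to the 10TB and 11TB figures quoted above.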

Also, I like to tell people to imagine how much storage they would need for 3 to 5 years, that is where they should plan. 3 years due to warranty, 5 years because drives tend to last that long. I'm almost at the 5 year point with my WD Reds and I will purchase replacement drives during the holidays hoping for a good holiday sale.

Lastly, I'm not sure why you want to separate the data into different pools. I could see using a mirror of SSDs if you are using iSCSI and have a system to support it, and that 10Gb Ethernet connection, but in general a well designed pool will exceed even SSD performance. But it's your money, and if you do find some advantage once you have the system all together, I would like to hear about it. The only advantage I can think of is that you can take the single drive or mirror to another machine, without moving the main pool, and access the files. I'm narrow minded right now.

Anyway, I hope things go very well for you when you get this system working.
 

jimboooooo

Dabbler
Joined
Feb 21, 2017
Messages
13
So your RAIDZ2 of nine 2TB drives would give you 10TB of usable storage. I'm not sure if you understand how to grow a pool, but maybe you do. Let's say you outgrow the 10TB of storage and now you want more. If you replace a few of the hard drives with 4TB drives, nothing happens. You need to replace all nine of the hard drives with larger drives for the space to become available.

Yep I understood that bit.

Also, I like to tell people to imagine how much storage they would need for 3 to 5 years, that is where they should plan. 3 years due to warranty, 5 years because drives tend to last that long.
I'll keep that in mind. Thanks!

Lastly, I'm not sure about why you want to seperate the data into different pools.
My understanding is that if I had all 3 of the vdevs I specified above in a single pool, losing just that single 4TB vdev would mean losing the ENTIRE pool.

If I went with a single pool made up of the 6x4TB raidz2 vdev you mentioned (and just got rid of my SSDs and the other 4TB drive vdev) and had all my media as well as application storage on it, then zfs/freenas would decide which specific drives that data was stored on. So it could be that both media and application data are stored on the same drive. My concern is that individual drives might be taxed more than others. If I separate the application storage from the media storage, then the drives used for media are only used when needed. Please tell me if my logic here is flawed :)

Anyway, i hope things go very well for you when you get this system working.
Thanks! I'm wanting to report back with my final build once I'm done.
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,994
then zfs/freenas would decide which specific drives that data was stored on. So it could be that both media and application data are stored on the same drive. My concern is that individual drives might be taxed more than others.
Data is striped (spread evenly across) all hard drives in the vdev/pool. If you saved a Word document to the pool, it would be spread across all drives evenly, and with parity as well (I'm simplifying this a bit). Remember, if you lose two hard drives in a RAIDZ2, all your data is still there. If you lost a third drive, all your data is gone. There are no favorites among hard drives. I run my drives continuously, meaning they are spinning and the heads are not parked. I disabled the parking shortly after I got them. There is a general thought that a drive in motion runs longer than one that has frequent spinup cycles. My drives are almost 5 years of age and have no failures, yet. I can't say enough about the reliability of my WD Red drives.

So let's say you have a single vdev/pool and you wanted your data separated. You would create several datasets, and that would keep them functionally separate. You could then share those datasets as you desire. This would simplify everything. Six 4TB drives would give you 11TB of usable space after subtracting the 20% free space needed to maintain a healthy pool. If you really need another 4TB of storage, then add one more 4TB drive for a total of 7 drives; this would be better than adding a 4TB drive as its own vdev, in my opinion, but you will need to make up your own mind. Buy the hardware and put it together, play with it for a few days creating vdevs and destroying them. Don't commit (move) your data to the system until you are set on a strategy. Play with it.
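The dataset approach can be sketched like this (the pool and dataset names are made up; on FreeNAS you would create the datasets under Storage in the GUI and then share each one individually):

```shell
# One pool, several datasets that stay functionally separate
zfs create tank/media      # movies, music, family photos
zfs create tank/apps       # jail/application storage
zfs create tank/scratch    # temp space for downloads/transcoding

# Each dataset can carry its own properties, e.g. a quota on the temp space
zfs set quota=4T tank/scratch
```

Datasets can also be snapshotted and replicated independently, which matters for the family-photos backup discussion later in this thread.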

Use a RAID calculator (two are in my signature) and ensure you subtract 20% to get the no-kidding capacity. ZFS is a great file system, but it does use up a lot of the space in order to keep your data safe.
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
Definitely a good idea to keep an SSD pool separate from an HD pool. The vastly different storage characteristics mean it's a good idea to keep them separate so you can take advantage of each.

For instance performance on the ssd pool and capacity on the hd pool.

You would probably even replicate the ssd pool to your hd pool to back it up.
 

jimboooooo

Dabbler
Joined
Feb 21, 2017
Messages
13
Here is everything I purchased.

CPU: Intel - Xeon E3-1275 V5 3.6GHz Quad-Core Processor ($336.79 @ SuperBiiz)
Motherboard: Supermicro - X11SSL-CF Micro ATX LGA1151 Motherboard ($266.14)
RAM: 2 x Samsung 16GB DDR4 PC4-17000 (2133MHz) ECC MEM-DR416L-SL01-EU21 ($176.01)
Storage: Samsung - 2 x 850 EVO-Series 500GB 2.5" Solid State Drive ($142.98 @ Newegg)
Storage: 9 x Western Digital - Red 2TB 3.5" 5400RPM Internal Hard Drive ($81.88 @ OutletPC)
Case: Fractal Design - Node 804 MicroATX Mid Tower Case ($89.99 @ SuperBiiz)
Power Supply: SeaSonic - 650W 80+ Gold Certified Fully-Modular ATX Power Supply ($107.89 @ Newegg)
Case Fan: 3 x Fractal Design - FD-FAN-SSR2-120 40.6 CFM 120mm Fan ($9.88 @ OutletPC)
Case Fan: 1 x Fractal Design - FD-FAN-SSR2-140 66.0 CFM 140mm Fan ($11.99 @ SuperBiiz)
UPS: CyberPower - CP1500PFCLCD UPS ($214.95 @ Amazon)
Other: 2 x SanDisk Ultra Fit 16GB USB 3.0 Flash Drive SDCZ43-016G-G46 ($12.99 @ Amazon)
Other: StarTech.com 4x SATA Power Splitter Adapter Cable (PYO4SATA) ($6.27)
Other: StarTech.com 4x SATA Power Splitter Adapter Cable (PYO4SATA) ($6.27)
Other: 2 x CableCreation Internal HD Mini SAS (SFF-8643 Host) - 4x SATA (Target) Angle Cable, SFF-8643 for Controller, 4 Sata Connect to hard drive, 0.5M ($25.58 @ Amazon)
Other: Tripp Lite DVI to VGA Monitor Cable, High Resolution cable with RGB Coax (DVI-A M to HD15 M) 10-ft.(P556-010) ($9.68 @ Amazon)
Total: $2531.65


For the fan push/pull setup I have 4x120mm in the front of the case pulling air in, plus 1x120mm on the back (mobo side) and 1x140mm on the back (HDD side) pushing air out. The top two 120mm fans in the front and the back two are connected to the mobo. The others are connected to the kind-of-ghetto fan controller built into the case. I guess this is a positive pressure setup.

Getting the right cables took the longest time.
The 2x SFF-8643 cables connect 8 HDDs to the 2 mini-SAS connectors. I would highly suggest these specific cables because the SATA connections are right-angle. There is barely any space above the PSU, and these make it so you aren't bending the SATA adapters. The 2x StarTech.com power splitters connect those drives to the PSU.
The 2 SSDs installed in the front of the case and the 9th HDD installed in front of the mobo were connected with the SATA cables (straight, not right-angle) that came with the mobo and the power cables that came with the PSU.


I've put together the build and ran memtest86+ without issue for 26 hours. I'm currently following this burn-in guide for 8 of the 9 HDDs. One was DOA and is being RMA'd.
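For anyone following along, a typical drive burn-in pass looks roughly like this (`/dev/ada0` is a placeholder for each drive in turn, and the `badblocks` step is destructive, so only run it before any data is on the disks):

```shell
# SMART self-tests via smartmontools
smartctl -t short /dev/ada0   # quick electrical/mechanical self-test
smartctl -t long /dev/ada0    # full-surface self-test (takes hours)
smartctl -a /dev/ada0         # review results and error counters afterwards

# Destructive full-surface write/read test -- WIPES THE DRIVE
badblocks -ws /dev/ada0
```

The exact sequence and options vary between burn-in guides; the idea is simply to exercise every sector before trusting the drive with real data.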

I'll post pics when all is done.
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
4x SATA Power Splitter Adapter Cable (PYO4SATA) ($6.27)

I'd be concerned about these cables. The SATA power connector is not rated to power 4 drives. It's not even rated to power 2. This is a fire hazard.
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
Thanks! Are there any splitters you'd recommend?

The best thing is to not use any splitters or adapters if you can. But assuming you can't: you just can't split SATA power, but you can split Molex 4-pin peripheral connectors. And if you do split Molex peripheral connectors, you need to use a quality one.


Molex 4-pin peripheral connectors are rated for about 4 drives...

So you should be able to use a pair of these:
https://www.amazon.com/Monoprice-10...51&sr=1-60&keywords=4+pin+molex+to+sata+power

http://www.playtool.com/pages/psuconnectors/connectors.html#peripheral
"Maximum Current per circuit: 13A"
I don't know of any official definition of the maximum current allowed in a peripheral cable. The connector can handle 13 amps according to the manufacturer. But you normally find 18 awg wire in the peripheral cables. If you have an 18 inch cable (about a half a meter) and are running 13 amps through 18 gauge wire then you get a voltage drop of about 0.25 volts counting both the power wire and the ground (it's got to go both ways) and the dissipation is about 3.3 watts. That's not good. I've just played it safe and listed the maximum current as 5 amps.

vs SATA power, where the maximum current per pin is 1.5A. IIRC the combined limit is about 4.5A per voltage rail.
http://www.playtool.com/pages/psuconnectors/connectors.html#sata

Since a drive will pull up to 36W at spinup, that's say 3A at 12V. Thus you can only safely split a SATA connector 1.5x. And there are some backplanes that do this, i.e. they use 2 SATA power connectors to run 3 drives, or 3 to run 5.

Whereas the 13A Molex connector will be good for 3A per drive, for a total of 12A with 4 drives.

Highly simplified :)
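The arithmetic behind those figures can be spelled out; the 18 AWG resistance, 18-inch run, and 36W spinup number are the assumptions quoted above, so this is just reproducing the estimate, not an official rating.

```python
AWG18_OHM_PER_M = 0.0210   # approx. resistance of 18 AWG copper wire
CABLE_LEN_M = 0.457        # approx. 18 inch run
CURRENT_A = 13.0           # quoted per-circuit rating for the Molex connector

# Voltage drop across power wire + ground return (current flows both ways)
loop_resistance = AWG18_OHM_PER_M * CABLE_LEN_M * 2
v_drop = CURRENT_A * loop_resistance
watts_lost = CURRENT_A * v_drop
print(f"drop ~{v_drop:.2f} V, dissipating ~{watts_lost:.1f} W in the cable")

# Spinup budget: a 36W drive at 12V draws about 3A
spinup_amps = 36 / 12
print(f"Molex (13A): ~{13 / spinup_amps:.1f} drives at spinup")
print(f"SATA (4.5A): ~{4.5 / spinup_amps:.1f} drives at spinup")
```

Which is why a Molex run can feed roughly four spinning-up drives while a single SATA connector can't even feed two.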

I can't say what the cable your Molex connector is attached to is rated at, but I would use no more than 2 of the Molex connectors on each run, and use the two closest to the PSU that you can get away with.

It should be safe to use all the SATA power connectors coming out of your PSU, as long as you never split them!
 

Evertb1

Guru
Joined
May 31, 2016
Messages
700
I am with Stux on this one. Don't ever use splitters on SATA connectors, and avoid them on Molex if you can. Why don't you find out if you can order some extra SATA power cables for your PSU? I have a SeaSonic FOCUS Plus 650 Gold for my desktop. It has 4 connectors for SATA, peripherals and floppy, and it came with 2 SATA power cables with 4 connectors each. Adding 2 more SATA cables would be good for 16 drives. You most likely don't need the Molex or floppy cables.
 

jimboooooo

Dabbler
Joined
Feb 21, 2017
Messages
13

I actually had one of these laying around. So...

I connected the 8 HDDs on the 2 mini-SAS connectors with the 2 x 4-connector SATA power cords provided with the PSU.
I used the Monoprice Molex-to-SATA adapter above with one of the PSU-provided Molex cords to connect the front 2 SSDs. For the last HDD I'd need another Molex-to-SATA cord.

Adding 2 more SATA cables would be good for 16 drives. You most likely don't need the Molex or floppy cables.

Or I should do that for the 9th HDD and the 2 SSDs.

Thanks y'all!
 
Joined
Feb 2, 2016
Messages
574
I.cannot.lose.my.family.photos so maybe raidz2 is the way to go.
Data protection is more important to me so I think I'll be switching to raidz2.

RAIDZ2 versus mirrors isn't going to save your family photos. Backups are going to save your family photos. Redundant data inside a single chassis at a single location is a false sense of security. Sometimes as technicians we get caught up in nuance and forget the bigger picture. All it would take to lose your family photos is a fire, a theft, an accidental deletion, dumb luck, etc.

My home FreeNAS is a testament to engineering greatness. But my family photos are safe because they are snapshotted and replicated offsite.

Other than that, nice build.

Cheers,
Matt
 