New build help


BlueJ007

Dabbler
Joined
Jun 25, 2018
Messages
12
Hello,

I'm looking to replace my Synology with FreeNAS, and I'm after some configuration advice. The NAS has two primary uses: Plex media storage and a VMware homelab, both over iSCSI. I have a separate system running Plex, so I don't need the NAS CPU to handle transcoding. The homelab runs 10-20 VMs: domain controller, SQL, vCenter appliance, and Windows and Linux application servers. Currently, my Plex media needs 10TB and VMware about 1TB, and my Synology and VMware host are directly connected via 10Gb SFP+.

For media, I'm thinking either a single RAIDZ2 vdev of 8 x 4TB drives or two RAIDZ2 vdevs of 4 x 4TB drives in one pool, plus a separate pool for VMware, but I'm not sure whether that VMware pool should use mirrored, striped, or RAIDZ2 vdevs. What would you all recommend? Also, should the VMware pool consist of HDDs or SSDs?

Hardware
Motherboard: Supermicro X11SSM-F
CPU: Xeon E3-1230v6
RAM: Crucial 16GB DDR4-2133 CT16G4WFD8213 x 2
Boot SSD: Samsung 860 EVO 250GB
HBA: LSI 9211-8i
NIC: Intel X520-DA2

Thanks!
 

m0nkey_

MVP
Joined
Oct 27, 2015
Messages
2,739
If you're going to be using FreeNAS as storage for ESXi, you most certainly will want to use mirrored vdevs. The more mirrored pairs you put in the pool, the more IOPS you're going to get.
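For illustration, here's roughly what that looks like from the command line (the pool and device names are made up, and on FreeNAS you'd normally build the pool through the GUI rather than the shell):

```
# Create a pool from two mirrored pairs; ZFS stripes writes across the vdevs.
zpool create vmpool mirror da0 da1 mirror da2 da3

# Adding another mirrored pair later grows the pool and adds more IOPS.
zpool add vmpool mirror da4 da5
```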
 

BlueJ007

Dabbler
Joined
Jun 25, 2018
Messages
12
If you're going to be using FreeNAS as storage for ESXi, you most certainly will want to use mirrored vdevs. The more mirrored pairs you put in the pool, the more IOPS you're going to get.
My understanding is mirrored vdevs are similar to RAID1. Is that correct? If so, is having multiple mirrored vdevs similar to RAID10?
 

IQless

Contributor
Joined
Feb 13, 2017
Messages
142

BlueJ007

Dabbler
Joined
Jun 25, 2018
Messages
12
I read it several times and it was very informative. The zpool examples were especially helpful. I also read the Terminology and Abbreviations Primer a few times. Based on reading some other posts, I'm leaning towards two RAIDZ2 vdevs of 6 x 4TB drives in one zpool, two mirrored vdevs of 500GB SSDs in another zpool, and two SSD boot drives.
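To make sure I have the layout straight, here's a rough CLI sketch of what I'm planning (disk names are just placeholders, and I realize the FreeNAS GUI would normally handle this):

```
# Media pool: two 6-disk RAIDZ2 vdevs striped together (12 x 4TB)
zpool create media \
  raidz2 da0 da1 da2 da3 da4 da5 \
  raidz2 da6 da7 da8 da9 da10 da11

# VM pool: two mirrored pairs of 500GB SSDs (4 SSDs total)
zpool create vmware mirror ada0 ada1 mirror ada2 ada3
```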

Now, I just need to find a case which can hold 12 3.5" and 6 2.5" drives. Any suggestions?
 

Inxsible

Guru
Joined
Aug 14, 2017
Messages
1,123
Now, I just need to find a case which can hold 12 3.5" and 6 2.5" drives. Any suggestions?
With a requirement of at least 18 drives, you would be best served by a 4U rackmount chassis with 6 x 3.5"-to-2.5" adapters.
 

joeinaz

Contributor
Joined
Mar 17, 2016
Messages
188
My Lian Li PC-A76X can hold twelve 3.5" disks internally and has two 5.25" slots that can take the 2.5" disks using a 4-bay enclosure in each 5.25" bay.
 

BlueJ007

Dabbler
Joined
Jun 25, 2018
Messages
12
My Lian Li PC-A76X can hold twelve 3.5" disks internally and has two 5.25" slots that can take the 2.5" disks using a 4-bay enclosure in each 5.25" bay.

How well does that Lian Li case cool?

I've been looking around for towers that can either hold the drives internally or have enough 5.25" slots to use enclosures for hot-swap functionality. I was looking around for an Antec Twelve Hundred, but they're hard to find or expensive ($500+). I can't find any details about the PC-A76X on Lian Li's site, but they still show the PC-A75 (similar to yours) and the PC-A79 (similar to the Antec).

Another problem I'm trying to solve is deciding on a CPU. This system will primarily be used as iSCSI storage for my Plex server and ESXi, so I was originally thinking of the E3-1230 V6. However, I've read various discussions about using a Xeon-D instead, since it has a lower power draw. I found a great Xeon-D motherboard in the Supermicro X10SDV-4C-7TP4F, but it's a Flex ATX board and I'm uncertain whether it'll fit in any of the above cases.
 

joeinaz

Contributor
Joined
Mar 17, 2016
Messages
188
I have three FreeNAS builds I am testing currently.

The first system has a dual-processor Supermicro X8 E-ATX motherboard in the full-tower Lian Li PC-A76X. There are three 140mm fans right next to the 12 disks, another fan in the rear of the case, and spots for 2 additional fans on the door. It's cool and quiet.

The second system has a uniprocessor Supermicro X9 ATX motherboard in a mid-tower Antec 900 case. The disks are housed in 3 iStarUSA 4-disk enclosures; each enclosure has a 140mm fan and holds 4 disks, for a total of 12 disks in the system. The server itself has a 140mm fan in the rear and a 200mm fan in the top. I have yet to fire this system up.

The third system has a uniprocessor Supermicro X10 ATX motherboard in a custom mid-tower server. The disks are housed in 2 iStarUSA 5-disk enclosures; each enclosure has a 140mm fan and holds 5 disks, for a total of 10 disks in the system. The server itself has two 80mm fans in the rear. For many years this was my FreeNAS system, and it stayed fairly cool.
 

CraigD

Patron
Joined
Mar 8, 2016
Messages
343
I was looking around for an Antec Twelve Hundred, but they're hard to find or expensive ($500+).

The Antec Nine Hundred will do the job with some additional hardware:

- two 5-in-3 hot-swap cages (5 x 3.5" drives in 3 x 5.25" bays each)
- one 3-in-2 cage (3 x 3.5" drives in 2 x 5.25" bays)
- one 4 x 2.5" cage in a single 5.25" bay for the remaining data drives
- a dual 2.5" SSD-to-PCI internal mounting kit for the boot drives

Have Fun
PS: I can confirm the Antec 1200 is a great case; I picked mine up second-hand six months ago for NZ$60 (US$40).
 

BlueJ007

Dabbler
Joined
Jun 25, 2018
Messages
12
Ok, I may be over-thinking this, so any advice will be greatly appreciated.

Regarding PCIe lanes: if I'm planning to eventually have up to 20 drives (12 for media, 2 for boot, and 6 for VMware), I would need 2 x 8-port LSI HBAs plus the motherboard ports. Each LSI adapter uses 8 lanes. Add another 8 lanes for the 10Gb NIC and I'm at 24. Every socket 1151 CPU I've looked at has a max of 16 lanes. I'd have to move up to a Xeon E5 to get enough lanes, but that seems overpowered for my use case. The best route seems to be a single 16-port LSI HBA, but man, those are considerably more expensive. Am I headed down the right route, or have I missed something?
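Here's my rough lane tally, in case I'm miscounting somewhere (these are just the advertised link widths, nothing I've measured):

```
# 2 x LSI 9211-8i HBAs   -> 2 x PCIe x8 = 16 lanes
# 1 x Intel X520-DA2 NIC -> 1 x PCIe x8 =  8 lanes
echo $(( 2*8 + 8 ))   # 24 lanes requested vs. 16 CPU lanes on socket 1151
```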
 

IQless

Contributor
Joined
Feb 13, 2017
Messages
142
I think you could go with a SAS expander (something like the Intel RES2SV240).
Someone else can check my rough math below on whether this will limit bandwidth to the drives.
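Assuming a single SFF-8087 uplink from the 9211-8i to the expander and roughly 600MB/s of usable throughput per SAS2 lane (both assumptions, not measurements):

```
# Uplink: 4 x SAS2 lanes at ~600MB/s usable each
echo $(( 4*600 ))   # ~2400 MB/s shared by everything behind the expander
# That's on the order of a dozen HDDs at ~200MB/s sequential, and well above
# what a single 10GbE link (~1100MB/s) can move anyway.
```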
 
Joined
May 10, 2017
Messages
838
Ok, I may be over-thinking this, so any advice will be greatly appreciated.

Regarding PCIe lanes: if I'm planning to eventually have up to 20 drives (12 for media, 2 for boot, and 6 for VMware), I would need 2 x 8-port LSI HBAs plus the motherboard ports. Each LSI adapter uses 8 lanes. Add another 8 lanes for the 10Gb NIC and I'm at 24. Every socket 1151 CPU I've looked at has a max of 16 lanes. I'd have to move up to a Xeon E5 to get enough lanes, but that seems overpowered for my use case. The best route seems to be a single 16-port LSI HBA, but man, those are considerably more expensive. Am I headed down the right route, or have I missed something?
You'll only have 16 CPU lanes, but there are also the PCH lanes; e.g., the 10GbE NIC can go in one of the x4 PCH slots available on the X11SSM-F.
 

BlueJ007

Dabbler
Joined
Jun 25, 2018
Messages
12
You'll only have 16 CPU lanes, but there are also the PCH lanes; e.g., the 10GbE NIC can go in one of the x4 PCH slots available on the X11SSM-F.
Interesting. I figured since the x8 slot was only wired as a x4, that would hinder its performance. I see your Main Build is similar to what I'm aiming for, and I'm curious how you're connecting 20 drives with only 16 ports (8 on the motherboard and 8 on the 9211-8i). Also, how are you powering them all as I haven't found a PSU with enough SATA connections?
 
Joined
May 10, 2017
Messages
838
I figured since the x8 slot was only wired as a x4, that would hinder its performance.

An x4 link is still capable of 2000MB/s, and a 10GbE NIC maxes out at around 1100MB/s, so there's plenty of bandwidth.
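That 2000MB/s figure comes from the X520 being a PCIe 2.0 card, at roughly 500MB/s of usable bandwidth per lane:

```
# PCIe 2.0: ~500MB/s usable per lane
echo $(( 4*500 ))   # 2000 MB/s for an x4 link
```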

I'm curious how you're connecting 20 drives with only 16 ports

The remaining four are on a 4-port Marvell 9230 controller. I don't usually recommend these cheap controllers over LSI HBAs, but I already had it when I upgraded my SSD pool from 8 to 12 drives, so I decided to give it a shot. It has been 100% trouble-free for a few months now, so I'll keep it for the time being.

Also, how are you powering them all as I haven't found a PSU with enough SATA connections?

The 12 SSDs are in two 5.25" enclosures, each of which needs a single Molex connector; the other 8 disks are powered by 2 x Corsair modular SATA power cables (4 plugs each).
 

BlueJ007

Dabbler
Joined
Jun 25, 2018
Messages
12
Ok, so I've narrowed the case down to either a Lian Li PC-A76 or a Fractal Design Define XL R2. Both can hold the requisite number of drives. Does anyone aside from joeinaz have experience with either case, how well it cools, and how quiet it is?
 

IQless

Contributor
Joined
Feb 13, 2017
Messages
142
I have the Fractal Design Define XL R2, and I must say I find the drive cages a bit restrictive when it comes to airflow. I did not like the temps on my drives when I only had the two Noctua NF-A14s on the intake. I ended up jerry-rigging 2 more fans with zip ties on the other side of the cage in a "push-pull" wannabe configuration. This helped get the temps down a few more °C, and it gave more airflow over the motherboard. On my X8 board, the PCH overheated on me due to the poor airflow :\
Have you considered the R5 as an alternative? I have one; the drive cages are much more "vented", and it is possible to rotate the cages in more ways than the XL allows.

Oh, and one more thing: my XL has these "lips" in the 5.25" bays that won't accept all types of add-in drive cages. Something to be aware of (it can be fixed with a Dremel and a steel cutting wheel).
 

BlueJ007

Dabbler
Joined
Jun 25, 2018
Messages
12
I have the Fractal Design Define XL R2, and I must say I find the drive cages a bit restrictive when it comes to airflow. I did not like the temps on my drives when I only had the two Noctua NF-A14s on the intake. I ended up jerry-rigging 2 more fans with zip ties on the other side of the cage in a "push-pull" wannabe configuration. This helped get the temps down a few more °C, and it gave more airflow over the motherboard. On my X8 board, the PCH overheated on me due to the poor airflow :\
Have you considered the R5 as an alternative? I have one; the drive cages are much more "vented", and it is possible to rotate the cages in more ways than the XL allows.

Oh, and one more thing: my XL has these "lips" in the 5.25" bays that won't accept all types of add-in drive cages. Something to be aware of (it can be fixed with a Dremel and a steel cutting wheel).

Thanks for the feedback. Someone on another forum recommended the Nanoxia Deep Silence 6. It can easily handle all the drives I need, and it supports a push-pull config on the first drive cage. Also, its included fans provide more airflow and are quieter than Noctua fans, according to their specs. So, here's another contender :)
 