First FreeNAS Build - Sanity Check

Status
Not open for further replies.

Reader

Cadet
Joined
Nov 17, 2018
Messages
6
I am currently running an Ubuntu server (still on 16.10 /facepalm) with some of the hardware listed below. I've been mounting an ever-increasing number of drives individually at /media[1-9]. As that system is out of SATA ports and running out of space, I figured it's time to move to something with more expandability (and some redundancy). I was able to grab the RPC-4224 very cheaply, so the project has been kicked off.

Inspiration: https://forums.freenas.org/index.ph...24-supermicro-x10-sri-f-xeon-e5-1650v4.46262/

Case: NORCO RPC-4224 4U Rackmount Server Case with 24 Hot-Swappable SATA/SAS Drive Bays (Purchased for $200.00)
CPU: Intel - Xeon E5-1650 V3 3.5 GHz 6-Core Processor (Purchased For $250.00)
Motherboard: Supermicro - MBD-X10SRL-F-O ATX LGA2011-3 Narrow Motherboard (Purchased For $238.96)
Memory: 2x Samsung - 16 GB (1 x 16 GB) Registered DDR4-2133 Memory (Purchased For $119.00 ea)
OS: 2x SanDisk Cruzer Fit CZ33 16GB USB 2.0 Low-Profile Flash Drive- SDCZ33-016G-B35 (Purchased for $6.50 ea)
Storage: 12x Western Digital - easystore 8 TB External Hard Drive [Shucked] (Purchased For $150.00 ea)
Power Supply: EVGA - SuperNOVA G3 1000 W 80+ Gold Certified Fully-Modular ATX Power Supply (Purchased for $99.00)
HBA: 2x IBM Serveraid M1015 SAS/SATA Controller 46M0831 (Purchased For $45.00 ea)
GPU: PNY NVIDIA Quadro P2000 Professional Graphics Board - (VCQP2000-PB) Graphic Cards (Purchased for $350.00)
-------
CPU Cooler: Noctua - NH-U9DXi4 37.8 CFM CPU Cooler (Purchased For $58.95)
Case Fan: 3x Noctua - NF-F12 PWM 54.97 CFM 120mm Fan (Purchased For $19.50 ea)
Case Fan: 2x Noctua - NF-A8 PWM 32.66 CFM 80mm Fan (Purchased For $15.95 ea)
Other: 2x Cable Matters Internal Mini-SAS to 4x SATA Reverse Breakout Cable 1.6 Feet / 0.5m (Purchased For $13.99 ea)
Other: 2x CableCreation 2-Pack Internal Mini SAS(SFF-8087) 36Pin Right Angle Male to Internal Mini SAS (SFF-8087) 36Pin Male Cable, 0.75 Meter (Purchased For $22.99 ea)
Other: XF TIMES HDMI to VGA Gold Plated Active Video Adapter Cable 1080P HDMI Digital to VGA Analog Converter Cable (6 Feet/ 1.8 Meters)
Generated by PCPartPicker 2018-11-19 10:26 EST-0500

Could someone do a sanity check on the above hardware for me?

Does anyone know of any right angle (8087) reverse breakout cables?

I already have the GPU and CPU, and they have served me well in their current deployment as a Plex server. I've read that using the P2000 with a Plex server on FreeNAS works fine, so the plan is to move to an all-in-one solution. Does anyone have experience with HW-accelerated transcoding in Plex running on FreeNAS that would indicate this plan won't work? (Apparently FreeBSD doesn't support hardware transcoding :(. I could do something with ESXi and host VMs for Plex and FreeNAS instead, though...)

Only 6 of the drives are not currently in use by the old system. My current plan is to create a vdev with 4+2 RAIDZ2, import the old data, then create another 4+2 vdev and add that to the pool.
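
For what it's worth, here is roughly what that two-step build looks like at the command line. This is just a sketch: "tank" and the da* device names are placeholders, and I understand the FreeNAS GUI is the recommended way to actually build the pool.

  # Step 1: create the pool with one 6-disk RAIDZ2 vdev (4 data + 2 parity)
  zpool create tank raidz2 da0 da1 da2 da3 da4 da5

  # ...copy the data over from the old Ubuntu box, then...

  # Step 2: add a second 6-disk RAIDZ2 vdev; new writes get striped across both vdevs
  zpool add tank raidz2 da6 da7 da8 da9 da10 da11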

I've read that RAIDZ2 vdevs should use 2^n+2 drives (for sector boundary reasons), so I should use vdevs of 6 or 10 drives. 24 bays will not allow me to have 3x 10-drive vdevs, even though the cost efficiency per unit of usable space is better with the wider vdevs. 2x 10-drive vdevs would give me 16x 8 TB (nominal) of usable space; 4x 6-drive vdevs would also give 16x 8 TB (nominal), but would cost ~$600.00 more in drives and take up 4 more bays. I'm leaning towards the 4x 6-drive vdevs because I can expand 6 drives at a time instead of 10, which keeps the up-front cost lower (this is already very expensive for me). I'd like to hear other opinions though. How bad would it be to do 3x 8-drive vdevs?
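
A quick back-of-the-envelope comparison of the layouts (nominal capacity only, ignoring TB vs TiB and ZFS overhead, and assuming the ~$150 shucked drive price):

  # Usable space for N vdevs of W-wide RAIDZ2 with 8 TB drives: N * (W - 2) * 8
  echo "2x 10-wide RAIDZ2: $((2 * (10 - 2) * 8)) TB usable from $((2 * 10)) drives"
  echo "4x  6-wide RAIDZ2: $((4 * (6 - 2) * 8)) TB usable from $((4 * 6)) drives"
  echo "3x  8-wide RAIDZ2: $((3 * (8 - 2) * 8)) TB usable from $((3 * 8)) drives"

That works out to 128 TB from 20 drives, 128 TB from 24 drives, and 144 TB from 24 drives respectively, so the 3x 8-wide layout would actually give the most nominal space for a full chassis.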

I purchased the HDMI to VGA converter as I do not own any VGA monitors. I would like to do everything over IPMI (a first for me as well), but I couldn't tell whether it is enabled by default on the motherboard, and I felt having the converter would be useful anyway. My understanding of IPMI is that I can either navigate to it in a browser like a web interface or use Supermicro's IPMI tool. I've read that this can be a security risk if it becomes exposed to the internet. I do not have UPnP enabled on my router, so I should be fine as long as I don't manually forward the port, right?
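
Assuming IPMI is enabled, the plan is to manage it from another machine on the LAN with something like the following (the IP, user, and password are placeholders, and I know the Supermicro default of ADMIN/ADMIN needs to be changed right away):

  # Check chassis/power state and sensor readings over the LAN (IPMI 2.0 lanplus)
  ipmitool -I lanplus -H 192.168.1.50 -U ADMIN -P 'changeme' chassis status
  ipmitool -I lanplus -H 192.168.1.50 -U ADMIN -P 'changeme' sdr list

The same address should also serve the web interface and remote console in a browser.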

My current plan pre-installation:
  1. Cross flash to IT mode Guide (SAS ID); verification sketch below
  2. Hardware Validation Guide
  3. Update MoBo bios
Should I be doing anything else?
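
For step 1, once the M1015s are flashed, I was planning to confirm they are really in IT mode with LSI's sas2flash utility, something like this (exact output and version numbers will vary):

  # List all detected LSI SAS2 controllers and their firmware
  sas2flash -listall
  # Show full details (firmware/BIOS versions, SAS address) for the first controller;
  # the firmware reported should be the IT (initiator-target) image
  sas2flash -c 0 -list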
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
You don't need a GPU for FreeNAS.

Sent from my SAMSUNG-SGH-I537 using Tapatalk
 

Ender117

Patron
Joined
Aug 20, 2018
Messages
219
First, may I ask how you got so many easystore drives? For example, it was limited to 3 per person in the recent Best Buy deal.


Could someone do a sanity check on the above hardware for me?
Nothing I can see that won't work.

Does anyone know of any right angle (8087) reverse breakout cables?

Why? You said you got 2 HBAs.

I already have the GPU and CPU, and they have served me well in their current deployment as a Plex server. I've read that using the P2000 with a Plex server on FreeNAS works fine, so the plan is to move to an all-in-one solution. Does anyone have experience with HW-accelerated transcoding in Plex running on FreeNAS that would indicate this plan won't work? (Apparently FreeBSD doesn't support hardware transcoding :(. I could do something with ESXi and host VMs for Plex and FreeNAS instead, though...)
https://forums.freenas.org/index.php?threads/absolutely-must-virtualize-freenas-a-guide-to-not-completely-losing-your-data.12714/

Only 6 of the drives are not currently in use by the old system. My current plan is to create a vdev with 4+2 RAIDZ2, import the old data, then create another 4+2 vdev and add that to the pool.

Depends on how full your first vdev gets during the process; it could become heavily fragmented, and performance would suffer. There is no defrag in ZFS other than destroying and re-creating the pool. It might not matter to you, but it's worth keeping in mind.
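
If you want to keep an eye on it, free-space fragmentation is reported per pool (the pool name here is a placeholder):

  # FRAG is free-space fragmentation, not file fragmentation; a high value on a
  # nearly full pool is what hurts write performance
  zpool list -o name,size,allocated,free,fragmentation,capacity tank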

I've read that RAIDZ2 vdevs should use 2^n+2 drives (for sector boundary reasons), so I should use vdevs of 6 or 10 drives. 24 bays will not allow me to have 3x 10-drive vdevs, even though the cost efficiency per unit of usable space is better with the wider vdevs. 2x 10-drive vdevs would give me 16x 8 TB (nominal) of usable space; 4x 6-drive vdevs would also give 16x 8 TB (nominal), but would cost ~$600.00 more in drives and take up 4 more bays. I'm leaning towards the 4x 6-drive vdevs because I can expand 6 drives at a time instead of 10, which keeps the up-front cost lower (this is already very expensive for me). I'd like to hear other opinions though. How bad would it be to do 3x 8-drive vdevs?


If I understand it correctly, with compression enabled (which is the default) this no longer matters.

I purchased the HDMI to VGA converter as I do not own any VGA monitors. I would like to do everything over IPMI (a first for me as well), but I couldn't tell whether it is enabled by default on the motherboard, and I felt having the converter would be useful anyway. My understanding of IPMI is that I can either navigate to it in a browser like a web interface or use Supermicro's IPMI tool. I've read that this can be a security risk if it becomes exposed to the internet. I do not have UPnP enabled on my router, so I should be fine as long as I don't manually forward the port, right?


I would also put it on its own VLAN and subnet and only allow communication from/to management nodes. This requires a managed switch, though.

My current plan pre-installation:
  1. Cross flash to IT mode Guide (SAS ID)
  2. Hardware Validation Guide
  3. Update MoBo bios
Should I be doing anything else?
Maybe get a UPS?
 

Reader

Cadet
Joined
Nov 17, 2018
Messages
6
First, may I ask how you got so many easystore drives? For example, it was limited to 3 per person in the recent Best Buy deal.

Well, I went to the retail store and they didn't enforce the limit, so I got 6 there. Of the other 6, I got 3 from Best Buy via eBay and 3 from their online store.

Why? You said you got 2 HBAs.
24 bays: 8 on each HBA, then the remaining 8 from the motherboard's SATA ports with a reverse breakout cable. This was cheaper than buying a SAS expander and saves a PCIe slot versus a third HBA. The right-angle connectors are because they fit better in the case.

https://forums.freenas.org/index.php?threads/absolutely-must-virtualize-freenas-a-guide-to-not-completely-losing-your-data.12714/
I will take a look at this, thank you.

Depends on how full your first vdev gets during the process; it could become heavily fragmented, and performance would suffer. There is no defrag in ZFS other than destroying and re-creating the pool. It might not matter to you, but it's worth keeping in mind.
I hadn't heard of any issues with fragmentation before in FreeNAS. I will do some research on this.

If I understand it correctly, with compression enabled (which is the default) this no longer matters.
So running the 8-disk vdev will be fine then?

I would also put it on its own VLAN and subnet and only allow communication from/to management nodes. This requires a managed switch, though.
I will look into this when I build a pfsense box.

Maybe get a UPS?
Already taken care of :)
 