I am currently running an Ubuntu server (on 16.10 /facepalm) with some of the hardware below. I've been mounting an ever-increasing number of drives individually in /media[1-9]. Since that system is out of SATA connections and running out of space, I figured it's time to move to something with more expandability (and some redundancy). I was able to grab the RPC-4224 very cheaply, so the project has been kicked off.
Inspiration: https://forums.freenas.org/index.ph...24-supermicro-x10-sri-f-xeon-e5-1650v4.46262/
Case: NORCO RPC-4224 4U Rackmount Server Case with 24 Hot-Swappable SATA/SAS Drive Bays (Purchased for $200.00)
CPU: Intel - Xeon E5-1650 V3 3.5 GHz 6-Core Processor (Purchased For $250.00)
Motherboard: Supermicro - MBD-X10SRL-F-O ATX LGA2011-3 Narrow Motherboard (Purchased For $238.96)
Memory: 2x Samsung - 16 GB (1 x 16 GB) Registered DDR4-2133 Memory (Purchased For $119.00 ea)
OS: 2x SanDisk Cruzer Fit CZ33 16GB USB 2.0 Low-Profile Flash Drive- SDCZ33-016G-B35 (Purchased for $6.50 ea)
Storage: 12x Western Digital - easystore 8 TB External Hard Drive [Shucked] (Purchased For $150.00 ea)
Power Supply: EVGA - SuperNOVA G3 1000 W 80+ Gold Certified Fully-Modular ATX Power Supply (Purchased for $99.00)
HBA: 2x IBM Serveraid M1015 SAS/SATA Controller 46M0831 (Purchased For $45.00 ea)
GPU: PNY NVIDIA Quadro P2000 Professional Graphics Board - (VCQP2000-PB) Graphic Cards (Purchased for $350.00)
-------
CPU Cooler: Noctua - NH-U9DXi4 37.8 CFM CPU Cooler (Purchased For $58.95)
Case Fan: 3x Noctua - NF-F12 PWM 54.97 CFM 120mm Fan (Purchased For $19.50 ea)
Case Fan: 2x Noctua - NF-A8 PWM 32.66 CFM 80mm Fan (Purchased For $15.95 ea)
Other: 2x Cable Matters Internal Mini-SAS to 4x SATA Reverse Breakout Cable 1.6 Feet / 0.5m (Purchased For $13.99 ea)
Other: 2x CableCreation 2-Pack Internal Mini SAS(SFF-8087) 36Pin Right Angle Male to Internal Mini SAS (SFF-8087) 36Pin Male Cable, 0.75 Meter (Purchased For $22.99 ea)
Other: XF TIMES HDMI to VGA Gold Plated Active Video Adapter Cable 1080P HDMI Digital to VGA Analog Converter Cable (6 Feet/ 1.8 Meters)
Generated by PCPartPicker 2018-11-19 10:26 EST-0500
Could someone do a sanity check on the above hardware for me?
Does anyone know of any right angle (8087) reverse breakout cables?
I already have the GPU and CPU, and they have served me well in their current deployment as a Plex server. I've read that using the P2000 with a Plex server on FreeNAS works fine, so the plan is to move to an all-in-one solution. Does anyone have experience using HW acceleration in Plex on FreeNAS that would indicate this plan won't work? From what I've read, FreeBSD apparently doesn't support hardware transcoding :(. I could run ESXi and host separate VMs for Plex and FreeNAS instead, though...
Only 6 of the drives are not currently in use by the old system. My current plan is to create a vdev with 4+2 RAIDZ2, import the old data, then create another 4+2 vdev and add that to the pool.
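At the command line, the two-step plan above could look roughly like this (a hedged sketch only; the pool name and device names are placeholders, and on FreeNAS the pool should normally be built through the GUI/middleware so it stays in the config database):

```shell
# First 4+2 RAIDZ2 vdev from the six free drives (device names hypothetical)
zpool create tank raidz2 da0 da1 da2 da3 da4 da5

# ...migrate data off the old server (rsync, or zfs send/recv)...

# Once the old drives are freed up and moved into the new chassis,
# stripe a second 4+2 RAIDZ2 vdev into the same pool
zpool add tank raidz2 da6 da7 da8 da9 da10 da11
```

One caveat: a vdev added with `zpool add` cannot be removed from the pool later, so each addition is effectively permanent.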
I've read that RAIDZ2 vdevs should use 2^n+2 drives (for sector-boundary reasons), so I should use vdevs of 6 or 10 drives. With 24 bays I can't fit 3x 10-drive vdevs, even though that would be more cost-efficient per unit of usable space. 2x 10-drive vdevs would give me 16x8 TB (nominal) usable space; 4x 6-drive vdevs would also give 16x8 TB (nominal) usable space, but would cost ~$600.00 more in total and take up 4 more bays. I'm leaning towards the 4x 6-drive vdevs because of the lower initial cost (this is already very expensive for me), but I'd like to hear other opinions. How bad would it be to do 3x 8-drive vdevs?
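For the nominal numbers above, RAIDZ2 loses two drives per vdev to parity, so the comparison works out as (a quick back-of-the-envelope script, nominal TB only, ignoring ZFS overhead):

```shell
#!/bin/sh
# Nominal usable capacity for RAIDZ2 layouts of 8 TB drives:
# usable = vdevs * (width - 2 parity drives) * 8 TB
usable_tb() { echo $(( $1 * ($2 - 2) * 8 )); }

echo "2x 10-drive vdevs: $(( 2 * 10 )) bays, $(usable_tb 2 10) TB usable (nominal)"
echo "4x 6-drive vdevs:  $(( 4 * 6 )) bays, $(usable_tb 4 6) TB usable (nominal)"
echo "3x 8-drive vdevs:  $(( 3 * 8 )) bays, $(usable_tb 3 8) TB usable (nominal)"
```

The 3x 8-drive option actually yields the most nominal space (18 data drives vs. 16), at the cost of ignoring the 2^n+2 rule of thumb.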
I purchased the HDMI-to-VGA converter because I do not own any VGA monitors. I would like to do everything over IPMI (a first for me as well), but I couldn't tell whether it is enabled by default on this motherboard, and I felt the converter would be useful anyway. My understanding of IPMI is that I can either navigate to it in a browser like a web interface or use Supermicro's IPMI tool. I've read that it can be a security risk if it becomes exposed; I do not have UPnP enabled on my router, so I should be fine as long as I don't manually forward the port, right?
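Besides the browser interface, the BMC can also be queried remotely with the open-source ipmitool utility. A hedged example (the BMC address and credentials are placeholders, and the factory default Supermicro login should be changed immediately):

```shell
# Power/chassis state of the remote BMC (IP, user, password are placeholders)
ipmitool -I lanplus -H 192.168.1.50 -U ADMIN -P 'changeme' chassis status

# Fan speeds, temperatures, and voltages
ipmitool -I lanplus -H 192.168.1.50 -U ADMIN -P 'changeme' sensor list
```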
My current plan pre-installation:
Should I be doing anything else?