BUILD Proposed 11x6TB home server build from junk found on eBay


Vik

Dabbler
Joined
Nov 13, 2015
Messages
11
A bit of background: I am currently running a five-year-old 6-bay QNAP NAS with 5x2TB WD RE-GP disks in RAID-5. The NAS stores media and backup files, and it has almost run out of space. I also have a bunch of single disks scattered around the house that add another 20-25TB worth of media that needs proper, safe storage. In addition, I have an ESXi host with a bunch of VMs (domain controllers, file server, Plex, CrashPlan destination server, Linux and Windows lab machines, etc.) that run nicely on a Xeon E3 machine.

I am looking for a solution to consolidate my media library and backup storage in one place. Today I need ~30TB of storage, with plans to expand in the future (~10TB of growth per year over the next 4 years).
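To put rough numbers on that growth (a minimal projection, assuming the ~10TB/year estimate holds):

Code:
# Projected storage need, assuming ~10 TB/year of growth (my estimate).
today_tb = 30
growth_tb_per_year = 10

for year in range(5):
    print(f"year {year}: {today_tb + growth_tb_per_year * year} TB")
# -> 30, 40, 50, 60, 70 TB over the next four years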

I don't plan to have a full backup solution. My disaster recovery plan would look like this: I will repopulate the QNAP with 6x3TB drives in RAID-6 (3TB is the largest disk the NAS box can recognize) and use it as a backup for critical and unique files (such as home videos). Most of the media that I have is not unique, so if disaster strikes, I would just have to reacquire those files elsewhere.

Initially I was planning on building the NAS around a 4U 24-bay chassis, but then reconsidered due to noise concerns and footprint (I don't really have a well-ventilated space for a 900+mm deep rack). The plan is to start with a full-tower case and build a JBOD expansion unit in the future, using another case with its own power supply and a SAS expander connected through SFF-8088 to the FreeNAS machine.

I do not have a huge budget, and I am planning to get most of the hardware used from eBay (drives, power supply, fans and cables excepted).

Here is my shopping list:

  • Case: Lian Li PC-A75. 12 bays, good ventilation, plenty of space for an E-ATX motherboard and expansion cards. Bought used on eBay for $83 delivered
  • Motherboard: Supermicro X8DTI-LN4F. Bought used on eBay for $113 (I/O shield included)
  • RAM: 48GB (3x16GB) Samsung M393B2G70BH0-YH9 (ECC RDIMM from the QVL list). Bought used on eBay for $249 in total. The motherboard has 6 DIMM slots per CPU, so I have room for expansion in the future
  • CPU: Xeon X5667 3.06GHz. $40 on eBay. Although the motherboard supports dual CPUs, I'm just going with a single processor - I feel one CPU should be sufficient for what I am planning to do
  • CPU Fan: Noctua NH-U12DXi4. $64 from newegg
  • Power Supply: SeaSonic Platinum SS-860XP2 860W. $125 after discount and rebate (new)
  • HBA: IBM M1015. $100 on eBay with full bracket
  • SAS Cables: 2 x SFF-8087 breakout cables (about $7 each on eBay)
  • SATA Cables: 3 x SATA cables
  • Disks: 11x6TB WD Red (brand new), in a single RAIDZ3 vdev - see the rough capacity math below
  • NIC: plan to add a Chelsio S320 or T420 10GbE card in the future
  • Boot Device: 2 x 16GB Sandisk Cruzer FIT USB 2.0 (SDCZ33-016G-A11) - $8 each from Amazon
  • USB Header: 2 x USB A Female to USB Motherboard 4 Pin Header - $4 each from Amazon
I haven't gotten my hands on the case yet, but I might need to replace its fans. I think that's everything in a nutshell. If anybody has any suggestions or alterations, please let me know.
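For anyone sanity-checking the pool size, here is a rough back-of-envelope on the 11x6TB RAIDZ3 layout (simple drives-minus-parity math; it ignores ZFS metadata and padding, so the real usable figure will be somewhat lower):

Code:
# Rough usable-capacity estimate for an 11-wide RAIDZ3 of 6TB drives.
# Assumption: (drives - parity) * drive size; ignores ZFS overhead.
drives, parity, size_tb = 11, 3, 6

raw_tb   = drives * size_tb                 # 66 TB raw
data_tb  = (drives - parity) * size_tb      # 48 TB before overhead
data_tib = data_tb * 1e12 / 2**40           # ~43.7 TiB as ZFS reports it
at_80pct = data_tib * 0.8                   # ~34.9 TiB at the ~80% fill guideline

print(raw_tb, data_tb, round(data_tib, 1), round(at_80pct, 1))

That should comfortably cover the ~30TB I need today, with the JBOD expansion picking up the growth later.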

Thank you for your help!
 

tvsjr

Guru
Joined
Aug 29, 2015
Messages
959
The X56xx/E56xx generation of processors is known for being power-hungry. Remember, if it costs you an extra $100 today but saves you $5/mo. on your power bill, it's worth it. Plus, power-hungry/heat-producing components get you twice... once generating the heat, and again getting rid of it (HVAC).
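The break-even arithmetic is simple enough (using the example figures above):

Code:
# Payback period for spending more up front on a more efficient part.
extra_cost      = 100   # USD extra today
monthly_savings = 5     # USD/month off the power bill

print(extra_cost / monthly_savings, "months to break even")  # 20 months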
 

Vik

Dabbler
Joined
Nov 13, 2015
Messages
11
I'd note that 11 drives probably require more power than a 660W PSU. It isn't the steady-state power requirement, it's the spinup juice that gets you.

Thank you, jgreco. Do you think 750W would suffice? Do I really need to bump the requirement up to 850W? The X5667 is only 95W TDP, and I am going with a single CPU.
 

Vik

Dabbler
Joined
Nov 13, 2015
Messages
11
The X56xx/E56xx generation of processors is known for being power-hungry. Remember, if it costs you an extra $100 today but saves you $5/mo. on your power bill, it's worth it. Plus, power-hungry/heat-producing components get you twice... once generating the heat, and again getting rid of it (HVAC).

The X5667's TDP is only 95W, but how bad could its idle power consumption really be? Most of the time the server would be doing nothing. I mean, if it draws 5W more at idle than a modern E5-1620 v3, that's only about 44 kWh, or roughly $10 per year.
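The back-of-envelope behind that number (the electricity rate is just my assumption, roughly $0.23/kWh):

Code:
# Yearly cost of an extra 5W of idle draw; the rate is an assumed figure.
extra_watts    = 5
hours_per_year = 24 * 365          # 8760 h
rate_per_kwh   = 0.23              # USD/kWh, assumed local rate

kwh  = extra_watts * hours_per_year / 1000   # ~43.8 kWh/year
cost = kwh * rate_per_kwh                    # ~$10/year
print(round(kwh, 1), round(cost, 2))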
 

Vik

Dabbler
Joined
Nov 13, 2015
Messages
11
Plus, power-hungry/heat-producing components get you twice... once generating the heat, and again getting rid of it (HVAC).

Actually it should keep my heating bill lower this winter!
 

Vik

Dabbler
Joined
Nov 13, 2015
Messages
11
I overlooked the boot device. A few questions:
  • Would a 16GB SuperDOM be sufficient?
  • The motherboard does not have the yellow SATA connector, but it does have separate power pins for a DOM. Does a SATA DOM like this one (Newegg SUPERMICRO SSD-DM016-PHI SATA DOM) come with a power cable like the one pictured here (picture on the right)?
  • Can I mirror two DOMs? Will mirroring work with FreeNAS?
  • For mirroring I would need a Y power cable. Where can I get one?
 

Fuganater

Patron
Joined
Sep 28, 2015
Messages
477
Just get 2x 16GB SanDisk Cruzer Fit drives and mirror your install. Better yet, buy 3 and keep one as a spare.
 
Joined
Apr 9, 2015
Messages
1,258
Rather than doing SATA DOMs, just get a pair of SSDs. They are cheap and take standard power connections. Or a set of USB drives.

As for the Hyper 212 EVO: I have a pair on an X8DT6-F and they work great. The one issue I came across is that the motherboard's backplate is attached to the CPU retention bracket. You have to make a choice: use the backplate that comes with the EVO and lose the retention bracket, or get some longer screws and use them instead of the ones that come with the EVO.

I went with the longer screws, took the clips off the stock ones, and removed them. Plus, the Supermicro backplate is a lot stiffer than the replacement one that comes with the EVO. A couple of washers and a couple of nylon locknuts and I was in business. I can say that, at least with this cooler, a little modification makes it work pretty easily. Some others may not work that way.

(Attached photos: 20151006_153817.jpg and 20151006_153824.jpg, showing the modified cooler mounting.)
 

tvsjr

Guru
Joined
Aug 29, 2015
Messages
959
You might also want to consider a different drive configuration if you want to do any expansion in the future. As it is, you'll need to replace all 11 drives to grow the vdev you intend to create, or (following the recommendations of cyberjock's PPT) add another 11-drive vdev to the pool. If you go to 12 drives, you could do two 6-drive vdevs, both in RAIDZ2. That should provide somewhat better performance and better expansion opportunities in the future.
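To put rough numbers on the two layouts (same simplified drives-minus-parity math, ignoring ZFS overhead):

Code:
# Data-space comparison of the two layouts under discussion.
size_tb = 6

option_a = (11 - 3) * size_tb       # one 11-wide RAIDZ3: 48 TB of data space, 11 drives
option_b = 2 * (6 - 2) * size_tb    # two 6-wide RAIDZ2:  48 TB of data space, 12 drives

print(option_a, option_b)

Same data space either way; the 12-drive layout costs one more disk but gives you two vdevs to stripe across and lets you grow 6 drives at a time instead of 11.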
 

Vik

Dabbler
Joined
Nov 13, 2015
Messages
11
Just get 2x 16GB SanDisk Cruzer Fit drives and mirror your install. Better yet, buy 3 and keep one as a spare.

OK, I added two 16GB SanDisk Cruzer Fits with internal USB headers to keep it neat and clean on the outside.
 

Vik

Dabbler
Joined
Nov 13, 2015
Messages
11
As for the Hyper 212 EVO: I have a pair on an X8DT6-F and they work great. The one issue I came across is that the motherboard's backplate is attached to the CPU retention bracket. You have to make a choice: use the backplate that comes with the EVO and lose the retention bracket, or get some longer screws and use them instead of the ones that come with the EVO.

I went with the longer screws, took the clips off the stock ones, and removed them. Plus, the Supermicro backplate is a lot stiffer than the replacement one that comes with the EVO. A couple of washers and a couple of nylon locknuts and I was in business. I can say that, at least with this cooler, a little modification makes it work pretty easily. Some others may not work that way.

This sounds like another reason to go with Noctua. Does anyone have experience with the Noctua NH-L12 on LGA1366 / X8 boards?
 

Vik

Dabbler
Joined
Nov 13, 2015
Messages
11
You might also want to consider a different drive configuration if you want to do any expansion in the future. As it is, you'll need to replace 11 drives to grow the vdev you intend to create, or (following the recommendations of cyberjock's PPT) add another 11-drive vdev to the pool. If you go to 12 drives, you could do 2 6-drive vdevs, both in RAIDZ2. That should provide somewhat better performance and better expansion opportunities in the future.

1 x 11-drive vdev in RAIDZ3 vs. 2 x 6-drive vdevs in RAIDZ2 is something that continues to bug me. It comes down to greed vs. the promise of better performance (and I do plan to get 10GbE for the NAS, the ESXi host and my desktop workstation in the near future).
Well, I guess I have to decide whether I want to get another 6TB drive or not. I have a few weeks to make this decision while I'm still building, burning in and testing...
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Thank you, jgreco. Do you think 750W would suffice? Do I really need to bump the requirement up to 850W? The X5667 is only 95W TDP, and I am going with a single CPU.

My opinion's fully documented in the power supply sizing sticky. In there, I show the how and the why. You do it the way I show, and you should end up with a plenty nice comfy safety margin. Safety margin is good because when you're selecting a component that could potentially fry your multi-thousand-dollar system if it fails in a bad way, you want to make a respectful selection. You cannot eliminate the chance of failures, but you can make smart selections that reduce the chance.

Of course you can compromise away that safety margin. Of course it should "work". I could drive around without wearing my seatbelt because, y'know, I never get in accidents.

My advice is not to "think" that something might suffice but to actually do the math and prove that it'd suffice.
 

tvsjr

Guru
Joined
Aug 29, 2015
Messages
959
1 x 11-drive vdev in RAIDZ3 vs. 2 x 6-drive vdevs in RAIDZ2 is something that continues to bug me. It comes down to greed vs. the promise of better performance (and I do plan to get 10GbE for the NAS, the ESXi host and my desktop workstation in the near future).
Well, I guess I have to decide whether I want to get another 6TB drive or not. I have a few weeks to make this decision while I'm still building, burning in and testing...
I missed the part about wanting to use this for a VM datastore. Keep in mind you get the IOPS of the slowest drive in each vdev... since you only have one, your performance will be horrid. The strong recommendation is for striped mirrors if you need decent performance, typically combined with a mirrored pair of SSD SLOG devices. There are many threads discussing performance for ESXi/NFS/iSCSI - you might want to do some searching...
You can't just look at this from a "how much data do I want to store" perspective. You also need to understand how fast you want to access it. I can tell you that running 10 VMs on a Synology 5-drive NAS, which is EXT4, not ZFS, is painful. Just two systems trying to run Windows Update or yum update will make the rest of the VMs catatonic.
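A rough way to think about it (the per-drive figure is just a ballpark for 5400-ish RPM NAS drives):

Code:
# Ballpark random-I/O capability: a pool delivers roughly
# (number of vdevs) x (random IOPS of one member drive).
drive_iops = 75                       # assumed ballpark for a WD Red

one_raidz3       = 1 * drive_iops     # ~75 IOPS
two_raidz2       = 2 * drive_iops     # ~150 IOPS
six_mirror_pairs = 6 * drive_iops     # ~450 IOPS (12 drives as striped mirrors)

print(one_raidz3, two_raidz2, six_mirror_pairs)

None of those numbers are anywhere close to what a handful of busy VMs will ask for, which is why striped mirrors plus an SSD SLOG is the usual recommendation for datastores.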
 

Vik

Dabbler
Joined
Nov 13, 2015
Messages
11
Of course you can compromise away that safety margin. Of course it should "work". I could drive around without wearing my seatbelt because, y'know, I never get in accidents.

My advice is not to "think" that something might suffice but to actually do the math and prove that it'd suffice.

Ok, here is the math.

By wattage:
  • CPU: 95W (limited by TDP)
  • Motherboard: 50W (a wild guess)
  • RAM: less than 34W (that would be the maximum power draw for 96GB worth of 1.35V DDR3 RDIMMs, according to this article)
  • HBA: 20W (for two M1015s - if I add the 2nd one in future)
  • NIC: less than 18W (for a power-hungry Chelsio S320 or T420)
  • Fans: less than 6W (6 fans in total, under 1W per fan, according to this)
  • Drives: not more than 315.6W (1.75A peak draw on the +12V rail during spin-up = 21W, plus not more than 5.3W for the rest of the circuitry = 26.3W max per drive, times 12 drives max)
  • Boot devices: less than 1W (<100mA times 5V times 2 devices)
  • TOTAL: less than 539.6W, or less than 72% of a 750W PSU
By current draw from +12V rail:
  • Drives: 21A (12 drives x 1.75A)
  • Fans: 0.5A
  • I don't think anything else in this server draws current from +12V
  • TOTAL: 21.5A - about 35% of the SeaSonic G-750's +12V rail capacity
The design is limited by total wattage, and unless I miscalculated something, a 750W PSU would provide a sufficient safety margin for the server in its maximum imaginable configuration.
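Here is the same budget as a quick script, so the assumptions are easy to tweak:

Code:
# Power budget for the maximum imaginable configuration.
# All figures are the estimates from the list above.
cpu, motherboard, ram = 95, 50, 34     # W
hba, nic, fans, boot  = 20, 18, 6, 1   # W

spinup_amps   = 1.75                   # A on +12V during spin-up (WD spec)
drive_other_w = 5.3                    # W for the rest of the drive circuitry
drives        = 12                     # maximum the case will hold
drive_w       = spinup_amps * 12 + drive_other_w   # 26.3 W per drive

total_w = cpu + motherboard + ram + hba + nic + fans + boot + drives * drive_w
print(round(total_w, 1), f"{total_w / 750:.0%} of 750W")   # 539.6 W, 72%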
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
  • Drives: not more than 315.6W (1.75A peak draw on the +12V rail during spin-up = 21W, plus not more than 5.3W for the rest of the circuitry = 26.3W max per drive, times 12 drives max)

May be worth noting that frequent contributor and electronic shop terror @Bidule0hm produced charts that showed the 1.75A peak draw spec listed for the WD Red may be fullacrap.

https://forums.freenas.org/index.ph...drive-spin-up-peak-current.38885/#post-237516

I'd suggest you still calculate more like 2.1A for those. It probably still comes out okay.
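For reference, re-running the drive term of the earlier budget with 2.1A instead of 1.75A looks roughly like this:

Code:
# Same budget as before, but with a 2.1A spin-up figure per drive.
drive_w = 2.1 * 12 + 5.3       # ~30.5 W per drive during spin-up
other_w = 224                  # everything except the drives, from the list above
total_w = other_w + 12 * drive_w

print(round(total_w), f"{total_w / 750:.0%} of 750W")   # ~590 W, ~79%

Tighter, but still within the envelope of a 750W unit.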
 

Vik

Dabbler
Joined
Nov 13, 2015
Messages
11
May be worth noting that frequent contributor and electronic shop terror @Bidule0hm produced charts that showed the 1.75A peak draw spec listed for the WD Red may be fullacrap.

OK, I've ordered the SS-860XP2 (860W).

Everything is on its way now.
I will provide an update once the server is built.
Thank you for the help.
 