New and Used Parts Build: Your Current Recommendations

Joined
Nov 3, 2016
Messages
17
Hi there,

I'm picking up this NAS project again, inspired by all of the new builds on this forum! I'm here for a few specific recommendations on currently priced components for an all-new NAS build:

Purpose:
-File storage / backup (mostly video, project files, large files)
-Light streaming of transcoded DVDs and Blu-rays at 1080p (no need for 4K, as I sit more than 8 feet away from a 40-inch TV)
-Possible VMs down the road (definitely not a priority right now)

Budget:
-$1000 max to hit the ground running. Most of the money should be devoted to as many disks as I can pick up.
-Already picked up the Fractal Node 804 MicroATX case for this build.

MicroATX Motherboard:
-Have a SuperMicro X10DaI workstation setup here as my primary work computer. This X10DaI setup is what I will be primarily using to offload / transfer / access my files stored on the NAS. The X10DaI workstation has:
  • Dual Gigabit Ethernet LAN ports via Intel® i210
  • 32 GB ECC DDR4-2133 RAM (4x8GB sticks), Hynix memory (HMA41GR7MFR4N-TFTD)
  • Dual Xeon E5-2650 v3 processors (20 cores / 40 threads total)
-As I see it, I have two options:
1) Choose something compatible with my X10 / DDR4 setup so I can swap DDR4 RAM and/or CPUs between the X10 workstation and the NAS when upgrading or changing anything in either setup
2) Go with a cheaper used X9 DDR3 setup for budget reasons, giving me the ability to pick up a powerful system for less and contribute more $$ towards a larger pool of drives

The only constraints here are budget, form factor (MicroATX), and manufacturer (SuperMicro), but I can easily drill holes into my case (I already have experience doing that for my X10DaI's SSI-EEB form factor). If you know of some great used or new motherboards, please throw out your personal recommendations! This is what I need the most help with.

CPU:
Willing to buy used. Thinking about an E5-2650 v2 or E5-2670 v2 for this setup since they come highly recommended on this forum. Which would be best, or is there another CPU that might fit my needs better?

Drives:
I think my tight budget only allows for six 4TB drives right now, which should give me roughly 11 TB of usable space to start with if I place them in a RAIDZ2 pool, if I understand correctly. I initially thought of going with the WD Red 4TBs, but saw some good recommendations for the Seagate IronWolf Pros / Barracudas and the HGST Deskstar NAS line. I have a combo of two Samsung EVO 840s, several WD Blues, and an HGST DeskStar NAS 3TB drive in my workstation, and they've all performed admirably. I saw the Disk Price Analysis tool here, and I'm afraid that, after reviewing that spreadsheet, I still don't see a clear winner. Any strong recommendations for any of the brands I've mentioned above? I want to purchase the storage drives new for warranty / RMA reasons.
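For what it's worth, here's the back-of-the-envelope math behind that ~11 TB figure; a minimal sketch, assuming the usual ~1.6% ZFS overhead and the "keep it under 80% full" guidance (both are rules of thumb, not exact values):
[CODE]
# Rough usable-capacity estimate for a 6 x 4 TB RAIDZ2 pool.
# Assumptions: TB -> TiB conversion, ~1.6% ZFS metadata/slop overhead,
# and the common "stay under 80% full" guidance.

def raidz2_usable_tib(drives, size_tb, overhead=0.016, fill=0.80):
    data_drives = drives - 2                        # RAIDZ2 spends two drives on parity
    raw_tib = data_drives * size_tb * 1e12 / 2**40  # decimal TB to binary TiB
    after_overhead = raw_tib * (1 - overhead)
    return after_overhead, after_overhead * fill

formatted, practical = raidz2_usable_tib(6, 4.0)
print(f"Formatted capacity  : ~{formatted:.1f} TiB")   # ~14.3 TiB
print(f"Practical (80% full): ~{practical:.1f} TiB")   # ~11.4 TiB
[/CODE]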

Boot System:
Will use new USB drives (Cruzer Fit 16GB), one as primary and one as a backup, or pick up a couple of used small SSDs (Intel 350s, 40 GB) from eBay and mirror them. Does this seem reasonable?

Everything else:
I'd like to have a 10GbE connection to / from the X10DaI workstation. I already have a battery backup unit and will look at 500W Gold-rated PSUs. I feel like I can figure those things out down the line (with some pointers) once I've settled on the right motherboard / CPU / drive combo.

Your help is very much appreciated! ;)
 

joeinaz

Contributor
Joined
Mar 17, 2016
Messages
188
Your workstation motherboard has lots of horsepower and could host everything in a virtual environment. Several thoughts:

1. Create a system that is compatible with your new case and your workstation hardware. There is a MicroATX Supermicro motherboard that will do this: the X10SRM-F. Approximate cost from eBay is as follows: X10 motherboard $300; E5-2670 v3 $370; 6x 4TB disks about $650; 32GB of RAM about $300. Total: about $1,620.

2. Use half of the existing hardware to start the FreeNAS system; pull one CPU and 16 GB of RAM from your workstation. Now your system just needs a new motherboard ($300) and disks ($650), which comes in under $1,000.

3. Use a different case to house an X9 motherboard; Approximate cost from eBay is as follows: ATX case $50, X9SRL-F motherboard $200; E5-2650 v1 $80; 6x 4TB disks about $650; 32GB of RAM about $140. Total about: $1120.

4. Use a different case to house the workstation motherboard and virtualize the entire solution. The cost would include an E-ATX case $150; and 6x 4 TB disks $650. You make maximum use of your resources.

Option 2 likely best meets your needs and budget for a place to start.
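To make the budget comparison explicit, here's a quick tally of the four options above (the prices are just the rough eBay estimates listed, nothing more):
[CODE]
# Quick tally of the four options against the $1,000 target.
options = {
    "1: new X10SRM-F build":         [300, 370, 650, 300],    # board, CPU, disks, RAM
    "2: split workstation hardware": [300, 650],               # board, disks
    "3: used X9 in a new ATX case":  [50, 200, 80, 650, 140],  # case, board, CPU, disks, RAM
    "4: virtualize on the X10DaI":   [150, 650],               # E-ATX case, disks
}
budget = 1000
for name, parts in options.items():
    total = sum(parts)
    print(f"Option {name}: ${total} ({'within' if total <= budget else 'over'} budget)")
[/CODE]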
 

Inxsible

Guru
Joined
Aug 14, 2017
Messages
1,123
The cheapest option would be to convert your workstation into a server and virtualize the whole thing under a hypervisor like ESXi or Hyper-V. This solves your future requirement of having VMs. Virtualize FreeNAS and create a VM for your desktop (whatever you use today). All this requires is installation time.

You haven't mentioned what case you currently use for the workstation, nor how much of its capacity is already in use. So I don't know how many drives your workstation case can support, but I am assuming you have space available to add more drives.

  1. With the earmarked $1000, you can buy a few drives and a basic chromebook/notebook (if you don't have one already) to be able to ssh into the new server/workstation VM to let it do the work.
  2. Build a basic desktop in the Node 804 case that you have already purchased so you can ssh into your server/VM
 
Joined
Nov 3, 2016
Messages
17
Thanks for your responses, @joeinaz and @Inxsible! I didn't realize that virtualization was a viable option, because I read this thread: Please do not run FreeNAS in production as a Virtual Machine! and was scared off from considering it. Clearly I have the server-grade hardware needed to do this, but it'd be my first time building a NAS, let alone virtualizing one via the free version of ESXi.

Will quickly run through your comments here:

@joeinaz said:
1. Create a system that is compatible with your new case and your workstation hardware. There is a MicroATX Supermicro motherboard that will do this: the X10SRM-F. Approximate cost from eBay is as follows: X10 motherboard $300; E5-2670 v3 $370; 6x 4TB disks about $650; 32GB of RAM about $300. Total: about $1,620.
This is the only motherboard I could find that fits the MicroATX requirement, and clearly this approach is out of my budget, so at this point it looks like I need to repurpose the case I have for something else.

@joeinaz said:
2. Use half of the existing hardware to start the FreeNAS system; pull one CPU and 16 GB of RAM from your workstation. Now your system just needs a new motherboard ($300) and disks ($650), which comes in under $1,000.
The X10DaI workstation is still in use as my primary workstation for motion graphics, heavy After Effects, Cinema 4D and Premiere usage, so I still need the 32 GB and both CPUs to be available for that use (but am happy to devote some of the cores / threads and allocate some of the RAM towards a virtualized NAS setup). I'm running Win 10 Pro on the workstation and still need to use it as is.

@joeinaz said:
3. Use a different case to house an X9 motherboard; Approximate cost from eBay is as follows: ATX case $50, X9SRL-F motherboard $200; E5-2650 v1 $80; 6x 4TB disks about $650; 32GB of RAM about $140. Total about: $1120.
This seems reasonable; it just requires reallocating my current case to another project - maybe a backup NAS down the road? That would be OK. This option is definitely in the mix.

@joeinaz said:
4. Use a different case to house the workstation motherboard and virtualize the entire solution. The cost would include an E-ATX case $150; and 6x 4 TB disks $650. You make maximum use of your resources.
Question: I already have the X10DaI workstation mounted in an ATX case (the Corsair 760T), so I don't necessarily need to relocate the motherboard, correct? See my comment below for more details on my existing setup.

@Inxsible said:
You haven't mentioned what case you use currently for the workstation nor have you mentioned how much data/capacity is already full in your workstation. So I don't know how many drives your workstation case can support but I am assuming you have space available to add more drives.
I use the Corsair 760T for my current workstation. The X10DaI has space for 10 direct-attach SATA drives, so I would need to get an HBA (like this one, maybe?) and use that for all the drives the VM controls, as I see PCI passthrough is important for virtualization.

Drive capacity potential of the case:
The case itself has two hard drive caddies that hold 6 x 3.5" drives, plus 4 x 2.5" drive mounts. Even with the length of my GTX 970, I can easily fit two more hard drive caddies in there, giving me a total of 12 x 3.5" drives.
Potential: 12 x 3.5" drives, 4 x 2.5" drives.
Current usage: 2x 500GB EVO 840 SSDs (one boot, one scratch/cache), 4x 1TB WD Blue drives (two RAID0 pools), and 1x 3TB Deskstar NAS backup drive. I will not need the 3TB drive after I migrate my data over to the NAS. I have other external backups, so I would say my total data currently stands at about 8TB.

I have two MacBook Pros that I can use to ssh into the virtual server/workstation component, but if I need a dedicated system, I have no problems with getting a cheap notebook or building a cheap desktop to do that.

If virtualization is the way to go...
This is exactly what I need to do, according to @cyberjock:
@cyberjock said:
You install ESXi on a hard drive(one that has nothing else on it).
You install FreeNAS on a USB stick(one that has nothing else on it).
You create a FreeNAS VM in ESXi and recover your config to it.

Then you do whatever you want to do at that point. Create a zpool or whatever.

So that means, component-wise, I need a hard drive (a small SSD?), two USB sticks ($12 for 2) or two small SSDs ($50 for 2) (one for backup), an HBA ($59.55), splitter cables ($16 for 2), and maybe more RAM to help drive this thing without seeing a performance drop on my workstation?

With this info, is virtualization still a viable path for my first FreeNAS experience?
 

Inxsible

Guru
Joined
Aug 14, 2017
Messages
1,123
I'd say yes. The only question is where you would copy your data while you transition your workstation into a server.

For that, as I suggested, buy an 8 or 10TB drive, copy the data over, and use it as a backup. Then use the existing drives to install ESXi and virtualize your FreeNAS and your Windows 10 workstation. Your Macs are fine for SSHing into the VMs.

Sell the Node 804 to recoup some of the cost of the 8/10TB drive.

This is still the most cost-efficient option. Many users on these forums run ESXi and virtualize FreeNAS. Read through a few ESXi threads to get over any initial hesitation you might have. Ask questions if you have any.
 
Joined
Nov 3, 2016
Messages
17
Sifting through ESXi threads on this forum, and posting questions as they come up:

@Stux said:
The free version of ESXi has a per VM limit of 8 vCPU.
So this means only 8 virtual cores would be available per VM? That works out great in this build by @Stux for NAS purposes, but it means a Windows 10 VM capped at 8 vCPUs would be pretty hindered compared to the 20 physical cores / 40 threads available on my current build. Am I understanding the limitations of the free version of ESXi correctly?

Since the FreeNAS VM would run concurrently while I'm working in a Windows 10 VM (and the Win 10 VM will be tied up for render sessions), I'm concerned about RAM as well. @Stux's ESXi build used 32 GB of RAM, so do I need to source more RAM for FreeNAS? I feel like the answer is yes, but maybe this isn't an immediate need and can be addressed down the road?
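To put rough numbers on that concern, here's a quick allocation sketch; the 8 GB FreeNAS baseline and the 1 GB-per-TB guideline are the usual forum rules of thumb, and the hypervisor reserve is my own assumption:
[CODE]
# Rough RAM budgeting for an all-in-one ESXi host with 32 GB installed.
# Assumptions: 8 GB FreeNAS baseline + ~1 GB per TB of raw pool (rule of thumb),
# ~2 GB reserved for the hypervisor itself, free ESXi caps each VM at 8 vCPUs.

total_ram_gb = 32
pool_raw_tb = 24                         # e.g. 6 x 4 TB

freenas_ram = 8 + pool_raw_tb            # ~32 GB by the rule of thumb
esxi_reserve = 2
win10_ram = total_ram_gb - esxi_reserve - freenas_ram

print(f"FreeNAS VM wants ~{freenas_ram} GB")
print(f"Left for Windows 10: {win10_ram} GB")   # negative = more RAM needed
print("Each VM also gets at most 8 of the 40 threads under free ESXi")
[/CODE]
By that math, the answer to "do I need more RAM?" looks like a clear yes if I want the render VM to stay usable.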

I see that some consumer-grade Nvidia cards (specifically GeForce ones) have problems passing through via ESXi. Clearly I'd like to make sure I can pass my GPU through for CUDA purposes before blowing up / rearranging my workstation to create an awesome AIO solution. I have a GTX 970, and documentation on being able to pass that card through is spotty. And if I can't see the system on my DisplayPort monitor to set it up, because my system has no iGPU and the X10DaI does not appear to have IPMI (I couldn't find verification of IPMI), does that throw a wrench into the initial setup process?
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
I wouldn't necessarily suggest using an ESXi based VM for actual work-station work. It can be done, but there are a lot of compromises regarding GPU and USB etc. It works much better if you can pass through a GPU and a USB controller...

And as NVidia is in the business of selling expensive GPUs targeted at VM companies, they actively break using their consumer GPUs in VM environments... although there are workarounds.

It is a nice science project though!

Paul Braren at TinkerTry goes into some detail on how he turned a Xeon D all-in-one (in the same form factor as the FreeNAS Mini) into a workstation ESXi host. His big issue was finding HHHL graphics cards, which in a full desktop case would not be so hard, but it's worth checking out:

https://tinkertry.com/esxi-gpu-passthrough-update-for-xeon-d-superserver
https://tinkertry.com/superserverworkstation

And specifically:
https://tinkertry.com/superserverwo...e-aware-of-for-those-using-windows-10-as-a-vm

BUT this is probably the best resource:
https://www.virtuallifestyle.nl/201...rver-gfx-750ti-pci-passthrough-windows-10-vm/

Re: ESXi free limitations. There are primarily three: 1) 8 vCPUs per VM, 2) no vCenter control, and 3) no VMware API-based scripting.

The vCenter stuff is not a big deal if you're happy managing a VM manually, rather than managing a fleet of VMs on multiple hypervisors on multiple pieces of hardware. And there are workarounds for most of the scripting stuff... i.e., you can run scripts on the ESXi host, but you can't use the VMware API from a remote client to effect changes on the host. BUT that remote client can log in to the ESXi host and make most changes through scripting! This is how people get FreeNAS to start/shut down various VMs, control boot order, etc.
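As a concrete illustration of that workaround, here's a minimal sketch of a remote client SSHing into the host and poking it with vim-cmd; the hostname, credentials, and VM ID are placeholders, and it assumes SSH is enabled on the host and the paramiko library is installed:
[CODE]
# Minimal sketch: drive free ESXi over SSH instead of the licensed VMware API.
# Host, credentials, and VM ID below are placeholders, not real values.
import paramiko

HOST, USER, PASSWORD = "esxi.local", "root", "changeme"

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect(HOST, username=USER, password=PASSWORD)

def run(cmd):
    """Run a command on the ESXi host and return its stdout."""
    _, stdout, _ = client.exec_command(cmd)
    return stdout.read().decode()

print(run("vim-cmd vmsvc/getallvms"))    # list registered VMs and their IDs
run("vim-cmd vmsvc/power.on 2")          # power on the VM with ID 2 (example)

client.close()
[/CODE]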

The other thing to be aware of is that if you want to host VMs on the FreeNAS iSCSI volumes, then SLOGs become critical.

At the end of the day, I'd probably suggest setting up a nice NAS and using your workstation as a workstation. Up to you though, and it is an interesting 'science project' ;)
 
Joined
Nov 3, 2016
Messages
17
Thanks for the detailed reply, @Stux!

Looks like what I have on my hands is, as you said, a cool science project. If this workstation were just for home use, I'd be happy to experiment with my current setup, but at the end of the day it's probably smarter to set up a nice NAS separately from the workstation. I read through all of your links and they were awesome!

That said, all of this research into what's possible with ESXi and virtualization has sold me 100%. I have no problem with the free ESXi limitations you stated, and I'm fine with manual VM management. SLOGs are definitely important to me, so that's a consideration I'm weighing within my budget.

As for the actual hardware: since I'm on a tight budget, I'm looking at DDR3. There aren't too many SuperMicro DDR3 / MicroATX options. If there are potential problems, if I'm cheaping out by choosing the X9 platform, if I need to switch to ATX or another form factor, or if I need to look towards X10 or X11 for ESXi purposes (I've read there are some problems with X9 and ESXi passthrough), please let me know.

RAM Choices and Motherboard Options
I can pick up 16 GB of DDR3 1333 ECC RAM locally for $20, and can choose from the following:
-Samsung M393B5273CH0-CH9 4GB 2Rx8 PC3 10600R-09-10-B0-D2 (64 GB avail)
-Hynix HMT351R7BFR8A-H9 4GB 2Rx8 PC3L 10600R-9-10-B0 (64 GB avail)
-Hynix HMT151R7TFR4C-H9 4GB 2Rx4 PC3 10600R-9-10-E1 (with heat sink) (64 GB avail)

I can't do a reverse memory lookup on SuperMicro's website to verify which motherboards are compatible. Is there a tool I can use other than looking up each motherboard's memory list individually? If not, that's OK; I will sift through the MicroATX X9 boards and try to see what works with that RAM.

So this is what I have in mind:
  • SuperMicro X9SCM-F (~$60)
  • 16GB DDR3 RAM ($20), limited by number of RAM slots on the motherboard and config of memory (4x4GB)
  • SLOG device of some sort (need suggestions; the Intel P3700 looks cost-prohibitive)
  • HBA (probably Dell H310) (~$60)
  • Intel X540-T2 10GbE NIC (two cards, one for my workstation and one for the NAS) ($150 x 2)
I also found this Supermicro 24-bay build to be an interesting read, because it addresses many of the same issues I have. I'd love to have real-time editing capabilities on the NAS, but my priority is online archival data; I also love the idea of utilizing VMs for a small render farm.

My home office is tiny and the living room is on the other side of my office wall, so I can easily set up a 10GbE network on the cheap. My X10DaI workstation only has i210 dual gigabit ports, so it needs an adapter card. Does it make sense to get an X9 board if I am thinking of building a small 10GbE network? I see SATA2 speeds as a potential bottleneck as well, but I know my needs for this NAS are not quite enterprise-grade and I don't want to overbuild. At the same time, I don't want to end up replacing hardware, and I hope this build lasts beyond 7 years of use.
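To sanity-check that bottleneck worry, here's the rough arithmetic I'm working from; the per-drive throughput is an assumption for 7200 RPM rust, and the other figures are just the interface specs minus encoding/protocol overhead:
[CODE]
# Where is the bottleneck: the SATA2 ports, the pool, or the network?
# Per-drive sequential throughput (~150 MB/s) is an assumption for 7200 RPM HDDs.

sata2_per_port = 300          # 3 Gb/s minus 8b/10b encoding, MB/s
ten_gbe_usable = 1100         # ~10 Gb/s minus protocol overhead, MB/s
hdd_sequential = 150          # assumed per-drive streaming rate, MB/s
data_disks = 4                # 6-wide RAIDZ2 = 4 data disks

pool_streaming = data_disks * hdd_sequential   # best-case sequential read

print(f"Single HDD vs SATA2 port: {hdd_sequential} vs {sata2_per_port} MB/s -> port is not the limit")
print(f"Pool streaming vs 10GbE : {pool_streaming} vs {ten_gbe_usable} MB/s -> the pool is the limit")
[/CODE]
So SATA2 ports shouldn't hold back individual spinning drives; it's the six-disk pool itself that won't saturate 10GbE.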

TL;DR: does an X9 build for an ESXi/FreeNAS AIO solution running a few VMs (rendering, Adobe Media Encoder, gaming, etc.) work, or should I focus on an X10/X11 build?

Thanks for your help and clever nuggets of inspiration!
 
Joined
Nov 3, 2016
Messages
17
Hi there, guys.

Resurrecting this thread with my most recent purchases:
  • A SuperMicro 847 36-bay chassis / X8DTU-F / L5520 / Dynatron G666 combo: $175
Going to try to put together this build using the X8DTU-F so I can learn more about my needs before building something more powerful, especially since I made that purchase specifically for the chassis, not the included components. Then, after I've outgrown the X8DTU-F, I'll move on to an X10/X11 build once it's cheaper.

Plan to toss the included L5520 and replace it with dual L5640s (recommendations welcomed on Westmere CPUs). They're not supported on ESXi 6.7, which means I will be stuck on ESXi 6.5 until I upgrade. I think that's all right as this is my first NAS build and I'm still learning.

I am thinking of future-proofing the rear 2U backplane of my chassis with a BPN-SAS3-826SEL1 backplane I can pick up locally for $90, and populating that with 2 x 6TB RAIDZ2 at first, then adding a pool of 6 drives to supplement it down the line. I will not be using the front backplane in this config; instead I'll focus on filling the rear section of the chassis, and when the front SAS3 4U backplanes come down in price, I'll replace the existing SAS8467EL1 backplane on the front. I plan to populate the backplane with regular ol' 4TB-or-larger 7200 RPM HDDs, so I'm definitely not saturating the connection, but the idea is that I will be reasonably future-proofed for when the prices do come down.

In that case, do I connect only the SAS3 backplane (the one that will be in use) for now, and then, when I have hard drives to populate the front backplane, connect that via a separate HBA?

And in order to complete the SAS3 connectivity setup, I need to purchase the following to complement that SAS3 backplane:

- Two Mini SAS SFF-8643 cables
- A SAS3 8 port HBA

I know this use case is a bit specific, just trying to determine if this makes sense. :)
 

Inxsible

Guru
Joined
Aug 14, 2017
Messages
1,123
A SuperMicro 847 36-bay chassis / X8DTU-F / L5520 / Dynatron G666 combo: $175
That's a great price for an 847 chassis by itself, let alone one that comes with some components. Your plan seems fine. The one thing I am not sure about is how the rear backplane works in such a chassis, having had no experience with one. Is it independent, or does it work in conjunction with the front backplane?

I'll let someone else with more experience in dual backplane chassis comment on that aspect.
 
Joined
Nov 3, 2016
Messages
17
Hi there, @Inxsible! Thanks for your two cents; it's helpful to know I am going in the right direction here :)

I am looking at getting the SAS3-826SEL1 backplane at $90 and a SuperMicro AOC-S3008L-L8E at $160 as that is cheaper than the SAS3 LSI options I can find with US-based shipping locations.

My question here is really this: how can I best take advantage of SAS3 in the future? Is there anything I can do now to get my system ready for SAS3 speeds, or to put on a future upgrade list? I suspect I need to address a) network bottlenecks, b) rust speeds, and c) X8 hardware limitations, in that order, but I want to confirm.
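For context on whether SAS3 buys anything with spinning drives, here's the rough bandwidth arithmetic; the per-drive figure is again an assumption, and the uplink numbers assume a single x4 cable from the HBA to the expander backplane:
[CODE]
# Expander uplink bandwidth vs. what 12 bays of spinning rust can actually push.
# Per-drive rate (~180 MB/s) is an assumption for modern 7200 RPM drives.

lanes = 4                         # one SFF-8087/8643 cable = 4 lanes
sas2_uplink = lanes * 600         # 6 Gb/s per lane is roughly 600 MB/s usable
sas3_uplink = lanes * 1200        # 12 Gb/s per lane is roughly 1200 MB/s usable
drives, per_drive = 12, 180       # fully populated 826 backplane

print(f"12 HDDs streaming : ~{drives * per_drive} MB/s")
print(f"SAS2 x4 uplink    : ~{sas2_uplink} MB/s")
print(f"SAS3 x4 uplink    : ~{sas3_uplink} MB/s")
print("10GbE network     : ~1100 MB/s (the likely real-world ceiling anyway)")
[/CODE]
In other words, a SAS2 uplink already covers a full 2U backplane of rust, and the network is the tighter limit; SAS3 only starts to matter with SSDs.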

If SAS3 is completely useless to me (which seems likely, based on other comments here in the forum), it still seems cheaper to skip SAS2 altogether and go directly to SAS3, because the SAS2 backplanes on eBay cost more than the SAS3 backplane I have found; that's the crux of my reasoning. Am I approaching this problem correctly? If so, I have two options:

A) Go for the SAS3 upgrade, ~ $250

B) Stay with the SAS-826/846-7EL1 backplanes until SAS2 or SAS3 prices come down (could be quite a while): ~$57 for a SAS2 HBA (or even less for an older SAS HBA).

Thoughts?
 
Joined
Nov 3, 2016
Messages
17
Still looking for recommendations on whether I should stick with SAS for my dual-backplane chassis or switch over to SAS3 for the 2U backplane: picking up a SAS3-826SEL1 / AOC-S3008L-L8e combo for ~$250, or just sticking with SAS at $57 for a new SAS HBA. I am contemplating picking up 7200 RPM 4TB SAS3 hard drives, but I'm not sure the speed is going to be all that great, as they are still mechanical hard drives and not SSDs.

Looking for a good bang-for-the-buck / futureproofing ratio that is somewhat logical :)

Any strong feelings one way or another?
 

Inxsible

Guru
Joined
Aug 14, 2017
Messages
1,123
Stick with SAS and see if the speeds are up to your liking. If not, then go ahead and buy another backplane.
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
If you can get a SAS3 backplane at a reasonable price, less than SAS2, then it is a lucky thing. May as well go for it.
The thing about spinning rust is that it is slow; the mechanics of it limit the speed. The fastest drives are still in the SATA2 speed zone.
If you had an SSD pool, it would matter; I don't see it making a difference for mechanical drives any time soon.

 