My first FreeNAS build: Xeon D-1518 vs Xeon E5-2603 v4

Status
Not open for further replies.

mouseskowitz

Dabbler
Joined
Jul 25, 2017
Messages
36
Where to start? Well, my current storage solution is a 20-bay Norco case with the motherboard from a TS140 as an ESXi host, with a Server 2012 R2 VM that has the Essentials role installed and direct access to most of the 13 drives / 34 TB of formatted storage, of which ~26 TB is used. There are currently 8 other VMs running for various other tasks as well.

The plan is to build a storage solution that will serve two ESXi hosts. The thought is to have a mirrored set of SSDs for OS drives via iSCSI. The mass storage would then be NFS that the OS connects to; the whole 50% usage limit for iSCSI seems to prohibit its usefulness for mass storage. The storage would be used for media, file storage, NVR storage, and computer backups. It would make sense to run a Plex server directly on the FreeNAS, but most other things would probably stay on my ESXi box, or eventually two.
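For a rough sense of what that 50% guideline costs, here is a minimal capacity sketch in Python. The 50% (iSCSI/block) and 80% (NFS) fill levels are the rules of thumb commonly quoted for ZFS, not hard limits, and the 8 x 3 TB RAIDZ2 layout is just the example pool from this thread.

```python
# Back-of-the-envelope comparison of iSCSI (block) vs NFS occupancy guidelines
# on a ZFS pool. Parity math only; real usable space will be a bit lower.

def raidz2_usable(drives, size_tb):
    # RAIDZ2 keeps two drives' worth of parity per vdev.
    return (drives - 2) * size_tb

pool_tb = raidz2_usable(drives=8, size_tb=3)   # ~18 TB before overhead
print(f"Pool after parity: {pool_tb:.1f} TB")
print(f"Comfortable fill as iSCSI (~50% rule): {pool_tb * 0.5:.1f} TB")
print(f"Comfortable fill as NFS   (~80% rule): {pool_tb * 0.8:.1f} TB")
```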

The goal with the underlying hardware is to have something that will be sufficient for at least 5, and hopefully closer to 10, years with just upgrading the sizes of the drives.

There are two different directions I can go with this. One is to build the FreeNAS in my current Norco case and get a different one for the ESXi. Two is to keep the ESXi where it is and get everything new for the FreeNAS.

For option one I would go with the Supermicro X10SDV-4C-7TPF4-0 and 64 GB of RAM. I would need to get a case, PSU, and SFP+ card for the ESXi box. Also, migrating storage would be interesting, as two motherboards would be hooked into the storage in one case.

For option two I would get a ThinkServer RD450 with a Xeon E5-2603 v4 and 64 GB of RAM. Since the case only has 8 bays, I would add a ThinkServer SA120 to bring the total up to 20 bays. I would need SFP+ cards for both the FreeNAS and the ESXi; I'm thinking of something like the Intel X520-DA2. I haven't quite figured out the HBA yet, as I would need something with 2 external and 2 internal ports.

I'm still trying to figure out how to make the data transition. I'd like to make my current 8 x 3 TB drives into one z2 pool and add 5 more 4 TB drives to my current 3 for a media pool that I'm going back and forth between z1 and z2 on. I already have 2 x 500 GB SSDs for the mirror. I'm thinking I'll have to downsize my video and DVR collections to make the move, 20 TB and 4 TB respectively.
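To put numbers on the z1 vs z2 question for that 8 x 4 TB media pool, here is a quick sketch (plain parity math only, ignoring ZFS overhead and the usual ~80% fill guideline):

```python
# Usable space for the planned 8 x 4 TB media pool under RAIDZ1 vs RAIDZ2.
# Simple parity math; real ZFS usable space will be somewhat lower.

def raidz_usable(drives, size_tb, parity):
    return (drives - parity) * size_tb

for parity, name in [(1, "RAIDZ1"), (2, "RAIDZ2")]:
    tb = raidz_usable(drives=8, size_tb=4, parity=parity)
    print(f"{name}: {tb} TB usable, survives {parity} drive failure(s)")
```

The difference in usable space is one drive's worth (4 TB); the difference in resiliency is surviving a second failure during a resilver.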

Does anyone see any issues with either of my plans? Any significant advantages of one over the other? The cost difference would basically be the HBA and an SFP+ card. Any other things I may not have taken into account?
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
Thoughts or comments anyone?
The problem I see is that to reuse any of your existing hardware, you will need a fairly significant amount of down-time and a place to put all the data during the reconfiguration. I was stuck in this predicament a few years ago and the solution I came up with is to build an entire second NAS and copy everything onto it. Then I reconfigured the hardware, built the new storage system, and copied everything back. Now I maintain replication between the two systems; one is the primary and the other is a full online backup of everything. I have twice the number of drives invested, but I have zero chance of losing anything unless I do something really wrong.

You mentioned using SFP+ cards to interface the hardware; do you have a 10Gb switch? How many devices are you going to want to interconnect at 10Gb speed? I would suggest having the two ESXi hosts as separate compute nodes and having the storage as a separate device that they can both access. This will give you the ability to vMotion your VMs from one to the other.
You mention that you are torn about how to configure your drives. I suggest using this page to get an idea of how different configurations of drives affect data access speed: http://wintelguy.com/raidperf.pl
The calculator is geared toward standard RAID, but it will give you an idea. RAID groups are somewhat similar to vdevs in ZFS parlance. I have found that the numbers are fairly close, but there is more overhead in ZFS because of the checksum calculations and verification that ZFS does to make sure your data is not corrupted.

I give you the link so you can check for yourself if you don't want to take my word for it, but generally, the more vdevs you have, the better the performance. That is why many people choose to use a pool of mirror sets, but I don't like that approach because of the loss of resiliency to failure. For the sake of speed, I would suggest 3-drive RAIDZ1 sets, with a minimum of four of them in your pool. More is faster. I have a system that I manage (not my personal equipment) that has 4 SAS controllers with 8 drives direct-wired to each controller for maximum bandwidth to the drives. The access speed is phenomenal. It is a dual Xeon system with 32 threads and around 500 GB of RAM, and it is amazing what it can do.
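To make the "more vdevs is faster" point concrete, here is a rough sketch of how random IOPS scale with vdev count for a few 12-disk layouts. It uses the common approximation that each vdev delivers roughly the random IOPS of a single member disk; the 120 IOPS per-disk figure is an assumed value for a 7200 rpm drive, not a benchmark.

```python
# Rough random-IOPS and capacity comparison of pool layouts built from 12 disks.
# Approximation: each vdev ~ the random IOPS of one member disk.

DISK_IOPS = 120   # assumed random IOPS for a 7200 rpm drive
DISKS = 12

layouts = {
    "6 x 2-way mirrors":  {"vdevs": 6, "parity_per_vdev": 1},
    "4 x 3-disk RAIDZ1":  {"vdevs": 4, "parity_per_vdev": 1},
    "2 x 6-disk RAIDZ2":  {"vdevs": 2, "parity_per_vdev": 2},
    "1 x 12-disk RAIDZ2": {"vdevs": 1, "parity_per_vdev": 2},
}

for name, cfg in layouts.items():
    iops = cfg["vdevs"] * DISK_IOPS
    data_disks = DISKS - cfg["vdevs"] * cfg["parity_per_vdev"]
    print(f"{name:20s} ~{iops:4d} random IOPS, {data_disks}/{DISKS} disks of capacity")
```

Sequential throughput scales differently, but for VM datastores the random IOPS number is usually what hurts first.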

How much storage do you want to end up with at the end of the build? You really need to pin down what it is you want to accomplish and that will help nail down what hardware you need to have. I like to build with Supermicro parts because you can find retired data-center components on eBay at bargain prices that you can mix and match to get just the system that fits your plan, instead of something that doesn't quite fit.
 

mouseskowitz

Dabbler
Joined
Jul 25, 2017
Messages
36
The problem I see is that to reuse any of your existing hardware, you will need a fairly significant amount of down-time and a place to put all the data during the reconfiguration. I was stuck in this predicament a few years ago and the solution I came up with is to build an entire second NAS and copy everything onto it.
I'm thinking along the same lines. Not sure that the budget will allow for it.

You mentioned using SFP+ cards to interface the hardware; do you have a 10Gb switch? How many devices are you going to want to interconnect at 10Gb speed? I would suggest having the two ESXi hosts as separate compute nodes and having the storage as a separate device that they can both access.
I do not have a 10Gb switch at this point. Everything would be in the rack, and it would be the ESXi and storage hosts talking to each other. I should be able to do that with dual-port cards.

How much storage do you want to end up with at the end of the build? You really need to pin down what it is you want to accomplish and that will help nail down what hardware you need to have.
This is what I'm trying to nail down. It would be nice to have everything on one raidz2 pool, but to do this and still have some room for expansion I'd be looking at well over $2000 just for the storage, 30+ TB usable. There are two problems with this. One, I'm not sure I could convince my wife that it's necessary, and two, it may well cost more than the convenience of having the movies on there is worth.

I'm starting to lean towards the RD450 with the E5-2603 v4 and 16 GB of RAM. I would use it for OS hosting for ESXi via iSCSI and for backups of stuff that is actually valuable enough to justify the storage price. So, I'd be looking at about 1 TB of data, plus another TB or two of computer backups. For that I'd look for ~5 TB of raidz2. I'll have to do some more pondering on whether to use 2 TB, 3 TB, or 4 TB drives to get there if I go that route. If I did do that, I would keep using my current JBOD as I am now. I can always get the SAS enclosure further down the line to move the media storage to FreeNAS.
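For the 2 TB vs 3 TB vs 4 TB question, here is a quick sketch of how many drives of each size a single RAIDZ2 vdev would need to clear ~5 TB usable. Plain (n - 2) x size parity math; the 5 TB target and drive sizes are just the figures from this post.

```python
# Drives needed per size for a single RAIDZ2 vdev to reach ~5 TB usable.
# Plain (n - 2) * size math; actual usable space will be somewhat lower.
import math

TARGET_TB = 5

for size_tb in (2, 3, 4):
    drives = max(4, math.ceil(TARGET_TB / size_tb) + 2)  # +2 for the RAIDZ2 parity drives
    usable = (drives - 2) * size_tb
    print(f"{size_tb} TB drives: {drives} drives -> {usable} TB usable")
```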

It's a whole lot easier to grow to 30TB of storage by getting drives here and there over the course of 10 years than to buy it all at once.
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
I am worried about the compatibility of the RD450's built-in hardware with the FreeNAS (BSD) operating system. The specific one that you link to says it comes with the 110i RAID controller, which I looked up; that is a SATA interface that only supports six drives. I would also want a faster CPU. You mentioned ESXi: are you planning to use the FreeNAS to run VMs (it can, you know)? For example, I run Plex Media Server on mine and it transcodes video using the CPU. Or did you just want it to be the datastore for the VMs running on separate ESXi compute nodes? I am not sure how you plan to use this system.
 

mouseskowitz

Dabbler
Joined
Jul 25, 2017
Messages
36
I will have to look at the RAID card again. I was planning on passing the drives through and not configuring the RAID. I'm assuming that since there are 8 drive bays, they would all be active. That would give me room for the two SSDs and six HDDs I'm planning to start with. Future expansion would be done with something like the ThinkServer SA120 DAS.
The ESXi would be a separate machine or machines. I was originally thinking of running Plex on the FreeNAS, but if I don't have my media storage on it, there wouldn't be much point in doing that.
In all my reading I really haven't seen much on how much CPU is needed for FreeNAS. If it runs on Atom CPUs, I would assume that an E5-2603 v4 would be more than sufficient as long as I don't want to do much with VMs. I currently have an i3-4130 in my ESXi machine and would like to get an E5-2620 v4 for a second one. I need to dig into what a Plex server needs, but I would think I should be able to use either of those.
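On the Plex question, the rule of thumb in Plex's own guidance is roughly 2000 PassMark points per simultaneous 1080p software transcode. Here is a quick sketch comparing the CPUs mentioned here; the PassMark scores are approximate placeholder values from memory, so check current numbers before relying on them.

```python
# Rough estimate of simultaneous 1080p Plex software transcodes per CPU,
# using Plex's ~2000 PassMark per stream rule of thumb.
# Scores below are approximate placeholders, not authoritative figures.

PASSMARK_PER_1080P_STREAM = 2000

cpus = {
    "Core i3-4130 (current ESXi box)":     4800,
    "Xeon E5-2603 v4 (proposed FreeNAS)":  4900,
    "Xeon E5-2620 v4 (planned ESXi host)": 9700,
}

for name, score in cpus.items():
    streams = score // PASSMARK_PER_1080P_STREAM
    print(f"{name}: roughly {streams} simultaneous 1080p transcodes")
```

Direct Play and Direct Stream barely touch the CPU, so this only matters when clients force a transcode.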
 