Looking for build advice for a production home system

Status
Not open for further replies.

ChrisL

Cadet
Joined
Sep 15, 2017
Messages
5
Hi all,

I have a collection of decent workstation-level hardware for a home system, but I'm trying to figure out the best way to configure it. I don't want to make incorrect, difficult-to-remediate decisions without asking here first. I have read as much as I can without having a system in place, and I think it's the right time to post this question.

Expected usage:
  • Family file storage - videos, photos, other shared fileserver-type data.
  • Possible backend (iSCSI or NFS) datastore for an ESXi 6.5 box with 10-15 VMs. These VMs are strictly for my personal use, and include such things as MRTG, SQL Express, etc. Typical techie stuff, but with no particular performance requirements.
  • If I can get sufficient performance, I'd like to store Steam games on the NAS and run them on my desktop. This requires low-latency reads and reliable performance.
Available hardware:
  • VMware server: Modern Dell workstation with 2 SSDs and 2 HDDs. Currently holding about 2TB of fileserver data. ECC RAM, 1Gbit and dual 10Gbit NICs.
  • FreeNAS box: Modern Dell workstation with 64GB of ECC RAM, LSI HBA in IT mode, dual 10Gbit NIC and 1Gbit ports.
  • Storage: 2x Samsung 840 Pro 256 GB, 2x Samsung 840 Pro 512GB, 6x 2TB WD Black, 3x 4TB WD Black. Over the long term, I could replace my 2TB drives with 4TB drives, but not soon (i.e., not until at least next year)
  • My desktop has a 10Gbit NIC as well
  • My switch has two 10Gbit ports. If I connect my workstation to one, I need to choose which server gets the other one.

My questions (I would not be surprised if some of these are dumb, but I hope they are not):
  • Given my use case and hardware, are my SSDs of any use at all? Will an L2ARC just waste ARC RAM?
  • I think I should make an 8-disk RAID10-style pool (i.e., four 2-disk mirrors) at 2TB per mirror (even on my few 4TB disks), and grow the total pool size by replacing 2TB disks with 4TB disks over time. Is this safe or advisable?
  • What safety/performance tradeoffs can I expect on my VMware box with async vs. sync iSCSI from ESXi, given a SLOG built from my non-enterprise 256GB Samsung 840 Pros?
  • Can I expect to fill a 10Gbit connection with reads? What about writes?
  • Should I present my CIFS shares directly from the NAS or through a Windows VM? How well does the FreeNAS CIFS server perform relative to Windows Server 2016 with Windows 10 clients?
  • This is a production system for a home network, so risk involves having my wife yell at me.
 

BigDave

FreeNAS Enthusiast
Joined
Oct 6, 2013
Messages
2,479
Expected usage:
  • Family file storage - videos, photos, other shared fileserver-type data.
My opinion is based on not knowing your backup strategy, so please keep that in mind.
The above item was at the top of your list, so I'll assume it's #1 in importance, OK?

  • 2TB per mirror (even on my few 4TB disks), and increase my total array size by replacing 2s with 4s over time. Is this safe or advisable?
A pool made up of mirrored vdevs gives faster performance and is easily expandable, but is not as safe
for your data. I do not and would not keep that type of data protected only by a mirrored vdev, and
if this statement does not make sense to you, I see more study in your future ;)

Having said that, IF you have properly protected backups, the risk of having a pool of mirrored pairs
is only a loss of personal time if your pool takes a dump. This would perhaps only get you The Look,
instead of being yelled at :):):)
 

ChrisL

Cadet
Joined
Sep 15, 2017
Messages
5
My opinion is based on not knowing your backup strategy, so please keep that in mind.
The above item was at the top of your list, so I'll assume it's #1 in importance, OK?


A pool made up of mirrored vdevs gives faster performance and is easily expandable, but is not as safe
for your data. I do not and would not keep that type of data protected only by a mirrored vdev, and
if this statement does not make sense to you, I see more study in your future ;)

Having said that, IF you have properly protected backups, the risk of having a pool of mirrored pairs
is only a loss of personal time if your pool takes a dump. This would perhaps only get you The Look,
instead of being yelled at :):):)

Thanks for that - I use CrashPlan Pro and I'm confident in my ability to keep the data backed up reliably (either with the CrashPlan plugin or via mapped-drive trickery from a Windows host). The main reason for using a pool of mirrored vdevs is VMware performance. I'd consider double-parity RAID (RAIDZ2, I think?) if VMware would perform well enough for my purposes. I suppose that's not too hard to test.

I haven't been able to find any FreeNAS support for tiered writes. Is there any?
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
I haven't been able to find any FreeNAS support for tiered writes. Is there any?
No, partly because it would either require Block Pointer Rewrite, violate some of ZFS's atomicity guarantees, or block read I/O for absurd amounts of time.
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
840 Pros are not suitable for SLOG.

Have you considered an SSD pool for your VMs, and an HD RAIDZ2 pool for your bulk storage?

Put your desktop and FreeNAS on the switch. Run a separate link from FreeNAS direct to VM host for 'storage'.

As it's not production... really, you could even stripe the VM SSD disks, and replicate them to your bulk storage array....
 

ChrisL

Cadet
Joined
Sep 15, 2017
Messages
5
Thanks for all the insight. Right now it looks like this might be the best bet:
  • Directly attach the two 512GB SSDs to my VMware box, where I can back up VMs both to the NAS and to the other SSD.
  • Create an 8-disk RAIDZ2 vdev: 8x2TB = 16TB raw, minus 2x2TB of parity = 12TB, minus ~30% free-space reserve ≈ 8TB usable.
  • Is there any downside to using a 256GB 840 Pro as an L2ARC? Based on jgreco's post here, I should be OK with a 256GB L2ARC given 64GB of physical RAM.
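To sanity-check the capacity arithmetic and the L2ARC sizing in the list above, here's a quick sketch. The 30% free-space reserve and the 4-5x L2ARC-to-RAM guideline are the rules of thumb discussed in this thread, not exact ZFS figures:

```python
# Rough RAIDZ2 usable-capacity check for the 8x2TB layout above.
# The 30% free-space reserve is a rule of thumb, not an exact ZFS number.

def raidz2_usable_tb(disks: int, size_tb: float, reserve: float = 0.30) -> float:
    """Raw capacity minus two parity disks' worth, minus a free-space reserve."""
    data_tb = (disks - 2) * size_tb  # RAIDZ2 gives two disks to parity
    return round(data_tb * (1 - reserve), 1)

print(raidz2_usable_tb(8, 2.0))  # 8.4 (TB), close to the ~8TB estimate

# L2ARC sizing: a commonly cited guideline is to keep L2ARC within roughly
# 4-5x RAM, since L2ARC headers themselves consume ARC memory.
print(256 / 64)  # 4.0, so a 256GB L2ARC with 64GB RAM is within the guideline
```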



And Stux - this does count as a production system in the context of a home environment: My RTO is long but my RPO is short.
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
This is a production system for a home network, so risk involves having my wife yell at me.

Will agree to disagree if the risk does not involve bankruptcy.
 

ChrisL

Cadet
Joined
Sep 15, 2017
Messages
5
Will agree to disagree if the risk does not involve bankruptcy.

What does bankruptcy have to do with anything? I'm talking about a machine that, in case of catastrophic failure, would need to be replaced and have its data & services restored.


Anyway, the system is important to me, which is why I'm asking questions before digging myself into a configuration hole.
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
What does bankruptcy have to do with anything? I'm talking about a machine that, in case of catastrophic failure, would need to be replaced and have its data & services restored.


Anyway, the system is important to me, which is why I'm asking questions before digging myself into a configuration hole.

The difference is how far down the rabbit hole you want to go. If the ESXi host were true production, where a single transaction can't be lost, then you'd want to be using dual PLP SLOGs in a mirror.

Alternatively, if you just want to avoid excessive downtime in the worst case, then simple periodic (as often as every minute) replication from an SSD pool to the bulk pool might be sufficient.

Which is what I was trying (poorly and tersely) to get at.
 
Last edited:

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
So, let's try again.

1) On your bulk storage, since you want to serve VMs, using mirrors is a great idea. 4 pairs. Might as well keep the spare 4TB to replace whatever drive fails first, or obtain another 4TB now and use 5 pairs, depending on ports and bays, of course. The nice thing with mirrors is that you can easily add another pair of drives to grow the space, or easily replace two drives to grow the space. If you replace both drives in one of the mirrors with larger drives, that mirror will grow instantly and you'll have more space. Alternatively, you can just add another mirror whenever you want.

Every vdev you add to your pool increases the random I/O (IOPS) of the pool by the IOPS potential of the slowest disk in the vdev, and mirrors get you the largest number of redundant vdevs out of a set of drives. Mirrors also get you optimal sequential read performance, but sequential write performance is halved relative to a stripe with no redundancy. RAIDZ2 etc. does have better sequential write performance.
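The IOPS-scaling rule above can be sketched numerically. The 75 IOPS per 7200rpm disk is an assumed illustrative figure, not a measurement of these particular drives:

```python
# Back-of-envelope random IOPS for the layouts under discussion, using the
# rule above: each vdev contributes roughly the IOPS of its slowest member.

DISK_IOPS = 75  # assumed figure for one 7200rpm WD Black, for illustration

def pool_random_iops(vdevs: int, per_disk: int = DISK_IOPS) -> int:
    return vdevs * per_disk

print(pool_random_iops(4))  # 300: four mirror pairs
print(pool_random_iops(5))  # 375: five mirror pairs
print(pool_random_iops(1))  # 75: one wide RAIDZ2 vdev, same drives
```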

BUT, mirrors are not infallible: you are only ever one disk failure and a bad block away from pool corruption. You should always have a backup. RAIDZ2 means any two disks can fail before you have an issue, but it is ill-suited to VM datastore usage. Good for media, though.

Maybe a local backup is a good use of that spare 4TB drive, for your most critical 4TB of data.

2) For ESXi VM storage on a FreeNAS datastore, whether it's NFS or iSCSI, in order to have transactional safety you want the ESXi writes to be synced immediately. This is the default with NFS; with iSCSI you can enable it with sync=always on the iSCSI zvol. The problem is that this can be very slow, but with 4 or 5 mirrors you at least get 4 or 5x the IOPS of a single disk, which may be satisfactory. If it's not, then you can add an enterprise-grade SSD as SLOG, with high sync sequential write performance and Power Loss Protection, either SATA or PCIe. The PCIe option is by far the best, and the best option there is the Intel P3700.

The problem is, there is a hole in this strategy. What if your SLOG fails when it's needed, i.e. after a sudden power failure or unexpected system crash? If that happens, then you lose the synchronous writes which weren't committed to the pool. The way to protect against this is to have two SLOGs in a mirror... which doubles the potentially already expensive cost of the SLOG.

A good use for one of the 256GB Samsung drives is probably as an L2ARC, since you do have 64GB of RAM for ARC. And a good use for the 512GB drives would be a dedicated SSD pool for VM datastores. Depending on risk, you may not even need redundancy on that pool, as you could replicate it to your bulk storage pool and easily restore it if an SSD did fail.

For example, IIRC you have two 256GB SSDs and two 512GB ones. In a stripe, that'd get you 1.5TB of fast storage. You could then use your HDs as bulk storage in RAIDZ2 for media and to continuously replicate the SSD pool. This would mean no need for a SLOG or L2ARC, as the SSD pool would be plenty fast without them. Alternatively, mirror the SSDs and get 750GB of high-speed SSD pool.

And still replicate to your HD pool, as a backup.
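The two SSD-pool options above come out like this, assuming the 2x256GB + 2x512GB drives listed earlier in the thread:

```python
# Capacity of the SSD-pool layouts discussed: stripe everything with no
# redundancy, or mirror the 256GB pair and the 512GB pair.

ssds_gb = [256, 256, 512, 512]

stripe_gb = sum(ssds_gb)  # no redundancy; rely on replication to the HD pool
mirror_gb = 256 + 512     # one mirrored pair of each size

print(stripe_gb)  # 1536, i.e. the ~1.5TB figure above
print(mirror_gb)  # 768, i.e. the ~750GB figure above
```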

3) You should be able to saturate 10GbE, or at least come close. In your scenario, I would hook the FreeNAS box and the desktop up to the 10GbE switch, and use a direct link between the FreeNAS box and the ESXi host for the iSCSI/NFS traffic. This gets the storage traffic off your LAN, and it's also nice to get the LAN traffic off your SAN.

4) I believe the Windows Server 2016 CIFS server is more performant for Windows 10 clients than the FreeNAS one, but perhaps the FreeNAS one is performant enough with tuning. I have not personally tuned FreeNAS's SMB server to run efficiently over 10GbE yet; all my clients are still on gigabit.

5) Whether you use a SLOG or not is a performance question. If you do, whether you mirror it is a question of how much risk of lost transactions you're willing to take. In my personal scenarios, I'm happy with the risk of a single SLOG: if I suffer VM corruption due to simultaneous SLOG and system failure, I'm happy to restore from backups with some transaction losses. Spontaneous simultaneous SLOG/system failure should be rare.

I went into some detail on my SLOG tests for my ESXi/FreeNAS AIO home build here, where I link to some testing I did, and some other benchmarks on the various SLOG options.
 
Last edited:

ChrisL

Cadet
Joined
Sep 15, 2017
Messages
5
Hi Stux,

I really appreciate that infodump - I read your linked post, too. I don't have a device that would make a good SLOG, so I won't use one. I doubt it would make a real difference for my particular usage pattern anyway.

Current plan:
  • VM storage on mirrored 512GB SSD
  • Bulk datastore on RAIDZ2 rust - mostly 2TB with a couple of 4TB disks.
  • 10G point-to-point between servers
  • 10G NAS-to-switch and desktop-to-switch
  • Learn about tuning the FreeNAS CIFS server to make it happy over 10G
  • 256GB L2ARC

My remaining questions (for now!):
  • Was my math correct? Is ~8TB the practical usable capacity of eight 2TB disks in RAIDZ2? Is 6x2TB + 2x4TB the same, or will ZFS use more space on the larger disks?
  • My raidz2 vdev will start with mixed disk sizes. Is there anything I need to know about this, given that next year I want to increase usable capacity by replacing 2TB with 4TB disks? Is there a way I could accidentally prevent that from being possible?
  • Can you point me at a resource for overprovisioning my SSDs from within FreeNAS? Or do I need to put them into another box to do it?
 
Last edited:

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
Is 6x2TB + 2x4TB the same, or will ZFS use more space on the larger disks?
If those are all together in RAIDZn, ZFS treats every device in the vdev as though it were the size of the smallest device in the vdev. So no, it won't use more space on the larger disks.
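That smallest-device rule, in numbers, for the mixed 2TB/4TB RAIDZ2 layout being planned:

```python
# ZFS sizes every member of a RAIDZn vdev to the smallest device, so the
# extra half of each 4TB disk sits idle until all members are 4TB.

def raidz_data_tb(sizes_tb, parity=2):
    """Pre-reserve data capacity: (members - parity) x smallest member."""
    return (len(sizes_tb) - parity) * min(sizes_tb)  # smallest device wins

print(raidz_data_tb([2] * 6 + [4] * 2))  # 12: behaves like eight 2TB disks
print(raidz_data_tb([4] * 8))            # 24: once every 2TB disk is replaced
```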
My raidz2 vdev will start with mixed disk sizes. Is there anything I need to know about this
I'm pretty sure the Volume Manager won't let you do this without switching to the Manual Setup button - which isn't a big deal, but just be aware that you might need to.
 