Hardware config and iSCSI with Windows

jcl123

Dabbler
Joined
Jul 6, 2019
Messages
23
Hello all,

OK, I have a bunch of hardware to build some new servers; I was using a QNAP for years until it died. I have been struggling to decide between FreeNAS and Windows with Storage Spaces for a while, but I am a bit fearful of ReFS until it has a few more years and iterations under its belt. FreeNAS and ZFS seem like a much more stable option for important data.

At the same time, I am not an experienced Linux/UNIX/etc. guy, and honestly I find Samba annoying for a bunch of reasons. On the QNAP, the translation between the network shares and the file system always bothered me, along with differences in administration behavior and a long list of other SMB quirks; I much prefer to use Server 2019.

So, what would happen if I instead just presented an iSCSI target to a Windows VM? More complex, but potentially the best of both worlds?
I am building FreeNAS as a VM on VMware ESXi anyway (with pass-through), so I could also present iSCSI for VMFS while I am at it.

So, I am wondering what people think of this idea?

Obviously the hardware matters, so if people are in favor of this idea, I could use some advice on how specifically to implement it.
What I have is listed below; I have been trying to follow the guides and recommendations as closely as possible.
I am building two servers and then backing up or replicating between them (I need to decide the best way to do that).
There will be a third copy at another site or in the cloud such as Backblaze (TBD).

Hardware:
- Two Supermicro motherboards with integrated 12Gb/s LSI SAS HBAs (IT mode)
- One CPU is 16-core and the other is 12-core
- 128GB of RAM each, all ECC RDIMMs (using 16GB DIMMs)
- A pair of Intel 10GbE NICs
- Disks
- I have many 4TB 7200RPM Seagate enterprise SAS drives for data
- I have 7x WD 10TB SATA drives ("shucked" from USB enclosures); this is where the data currently resides, on a Windows machine
- I have a bunch of 200GB SAS enterprise SSDs with PLP; these are SLC, 10 DWPD high endurance (Samsung)
- I have a bunch of 400GB SAS enterprise SSDs, also with PLP; these are TLC but also 10 DWPD high endurance (Seagate)
- I have 4x enterprise NVMe drives; these are Intel TLC but only medium endurance, I think 2-3 DWPD
- I have one 16-bay Supermicro case with 2x platinum power supplies as well as a UPS
- The other case is a normal tower case, but it can handle 12 3.5" disks plus a couple more 2.5" SSDs
- I did convert my QNAP into a JBOD with a nice Intel expander, so I could do another 10x drives with that on either machine

Some other supporting information:
- I have 20TB of data currently; growth has slowed down a lot recently, so 30-40TB available is plenty
- Pictures, home videos (4K and VR/360), DVD/BD ISOs, and backups of other computers (tax records and documents don't amount to much)
- I care mostly about data security, and then maybe power consumption and noise
- These are in the basement where it is cold, and I have tuned the fans for low noise already
- I don't plan on creating a whole bunch of high-performance VMs, if any at all
- Performance is important, but data is being served over gigabit to a dozen or so computers
- It may not seem like it, but I do like simple; I work as a Windows/VMware admin/engineer

Questions:
- If you are just doing normal file sharing, most recommendations say don't bother with L2ARC/SLOG SSDs unless you are doing iSCSI
- But if I am doing iSCSI, maybe I do need them, although if it is mainly for a file share, maybe that's a corner case?
- When you get into that, I see a lot of controversy over the size and configuration of the SSDs used for L2ARC/SLOG
- I am leaning towards using the SAS SSDs rather than the NVMe drives because of the higher endurance
- I am wondering how much RAM to give FreeNAS; it seems pretty hungry, maybe 32-64GB?
- I am seeing that you don't want to create a vdev wider than 8-12 or so drives

Hoping some people with a lot of track time with these things can point me in the right direction.

Thanks in advance

-JCL
 

Samuel Tai

Never underestimate your own stupidity
Moderator
Joined
Apr 24, 2020
Messages
5,399
On FreeNAS, an iSCSI target can only be dedicated to a single initiator, and appears as an empty drive needing to be formatted. It can't be shared with multiple initiators. Is this what you intend?

If you don't want to deal with Samba, then you may be better off constructing an ESX host, and just running Windows Server within. Then a separate FreeNAS box can provide iSCSI volumes to the Windows Server.
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
If you don't want to deal with Samba, then you may be better off constructing an ESX host, and just running Windows Server within. Then a separate FreeNAS box can provide iSCSI volumes to the Windows Server.
I believe that's exactly what @jcl123 is describing as the intention here, except that FreeNAS will be a VM on the same host.

I'd suggest we figure out a way to make SMB behave, as there will be some lost efficiency by doing it this way. It certainly looks like the hardware is up to the task, though I'll need to take some time to dig into this when I have access to a full keyboard again.
 

jcl123

Dabbler
Joined
Jul 6, 2019
Messages
23
Thank you for the reply.

On FreeNAS, an iSCSI target can only be dedicated to a single initiator, and appears as an empty drive needing to be formatted. It can't be shared with multiple initiators. Is this what you intend?

No, not sharing, although that is good to know in case I ever want to play with Windows clustering....

Multiple targets, one (or more) would be to a Windows VM and formatted NTFS.
Another (or more) would be to ESXi and formatted VMFS.


If you don't want to deal with Samba, then you may be better off constructing an ESX host, and just running Windows Server within. Then a separate FreeNAS box can provide iSCSI volumes to the Windows Server.

How is that different than having the FreeNAS and Windows machines as VMs on the same ESXi host for my use case?

-JCL
 

Samuel Tai

Never underestimate your own stupidity
Moderator
Joined
Apr 24, 2020
Messages
5,399
Logically it's the same, but since FreeNAS is only officially supported on bare metal, not as a VM, you may have fewer problems this way.
 

jcl123

Dabbler
Joined
Jul 6, 2019
Messages
23
I'd suggest we figure out a way to make SMB behave, as there will be some lost efficiency by doing it this way. It certainly looks like the hardware is up to the task, though I'll need to take some time to dig into this when I have access to a full keyboard again.

I agree it will lose some efficiency.
If I have to, I can live with Samba; I am not saying it is "bad", just exploring other ways to do it.

I don't see how it can be made to "behave"; the underlying file system on a *nix OS is different from NTFS, including the way it stores metadata. Samba is only "extremely similar" to native Microsoft SMB; there are numerous key differences that cause all kinds of niggling points. With Windows I also get all the native interfaces for management, software compatibility, etc.
It all comes down to what you are trying to do, I guess. I used Samba on the QNAP for years, and at work I use Windows; I prefer Windows.
I would probably prefer *nix if I mainly used NFS or other protocols non-native to Windows.

On the other hand, I think ZFS on FreeNAS is a far better and more mature rendition of a copy-on-write (COW) file system with excellent data integrity. ReFS and Storage Spaces might catch up some day, but I don't think they are there yet.

I think the best thing is FreeNAS basically has one job, and it does that really well with few frills.

-JCL
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
We can certainly look at the situation in more detail; if your client systems are Windows and you'll be doing more than basic ACLs then you may indeed find that there's no substitute for a Windows SMB server, but I've found that for the most part FreeNAS behaves admirably.

I'm still on mobile, but if you're able to identify the model numbers of the SSDs in question (I could make some educated guesses, but confirmation is always best), then we can figure out the best way to use them.

Generally speaking though, block storage implies mirrors, and VMFS implies that SLOG/sync writes are necessary.
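To illustrate what "mirrors for block storage" looks like in practice, here's a rough sketch from the shell (the pool and device names are made up; on FreeNAS you'd normally build this in the web UI):

# A pool of striped two-way mirrors (the RAID10 analogue), which is
# what you want for block storage IOPS. "flash" and da0-da3 are
# placeholder names for a pool and four SSDs.
zpool create flash mirror da0 da1 mirror da2 da3
zpool status flash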
 

jcl123

Dabbler
Joined
Jul 6, 2019
Messages
23
Logically it's the same, but since FreeNAS is only officially supported on bare metal, not as a VM, you may have fewer problems this way.

Yes, I know. This is not running in production for a business; this is a home media server. I am not expecting official support, which I would have to pay for anyway. The hardware I have is way overkill to dedicate to a bare-metal FreeNAS box for my needs. And I am following all the best practices for virtualization, so even if ESXi died, I could boot FreeNAS on bare metal and see the volume.

I *could* do that with another system I have, an older and much lower-end quad-core with 32GB ECC RAM, and I have a spare LSI 9400-16i, so I could build a bare-metal box. But it is maxed out at 32GB, and some guides on here recommend 64GB+ for iSCSI use cases, although granted this might be a corner case if I am only using iSCSI for regular files.

But then my other box becomes overkill for just a couple Windows VMs, I want to try to do things somewhat efficiently.

-JCL
 

jcl123

Dabbler
Joined
Jul 6, 2019
Messages
23
We can certainly look at the situation in more detail; if your client systems are Windows and you'll be doing more than basic ACLs then you may indeed find that there's no substitute for a Windows SMB server, but I've found that for the most part FreeNAS behaves admirably.

Bingo ;) I have a lot of Windows 10 clients around the house. Yes, I can beat Samba into submission, I did it for years, but it is a much better experience with Server 2019 <-> Windows 10.

I'm still on mobile, but if you're able to identify the model numbers of the SSDs in question (I could make some educated guesses, but confirmation is always best), then we can figure out the best way to use them.

Sure, I can post more details later in the day. I really appreciate the help.

Generally speaking though, block storage implies mirrors, and VMFS implies that SLOG/sync writes are necessary.

Yes, I am not looking to have a single vdev do both of these things.
If I have one target for VMFS, then that will probably be on an all-flash mirror vdev with some of the 400GB SSDs; I don't need massive amounts of space for that. Actually, that might be a good fit for the NVMe disks, although I could have more than one.

For the target used with the Windows VM, it would just be hosting ordinary files on NTFS. Maybe in that case it does not need the SLOG/sync at all? Especially since 80% of my data is archival in nature and does not change. The other fringe benefit is that I can use Windows de-duplication without needing half a TB of RAM ;)

Is the SLOG/sync "global", or is it associated with a specific vdev? This is where things start to go past my current level of knowledge of FreeNAS.

But I might also just do the file share and not the one for VMFS. I am likely going to need to buy a native RAID controller anyway, because VMware does not support SATA/chipset RAID, so I could just have that serve all the VMFS needs and only do file storage with FreeNAS to make it simpler. I like the idea of having that kind of resiliency in my VM storage, but I can always just back the VMs up there.

-JCL
 

Samuel Tai

Never underestimate your own stupidity
Moderator
Joined
Apr 24, 2020
Messages
5,399
Is the SLOG/sync "global", or is it associated with a specific vdev? This is where things start to go past my current level of knowledge of FreeNAS.

It's per pool.
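For illustration, a sketch of what that looks like at the command line (the pool and device names are placeholders; the GUI does the same thing under the hood):

# The SLOG is a log vdev attached to the pool itself, ideally mirrored:
zpool add tank log mirror da8 da9
# It can also be removed later, again at the pool level
# (the vdev name, e.g. mirror-1, comes from zpool status):
zpool remove tank mirror-1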
 

Samuel Tai

Never underestimate your own stupidity
Moderator
Joined
Apr 24, 2020
Messages
5,399
Maybe in that case it does not need the SLOG/sync at all?

Since you intend to provide iSCSI targets, you will benefit from having a SLOG vdev in your pool to handle sync writes to the zvols being shared. Even if your use case is mostly reads, there will be writes to update the access times.
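If you want to guarantee the SLOG actually gets used for the iSCSI traffic, you can force synchronous semantics on the zvol. A rough sketch (the zvol path tank/iscsi/vmfs01 is a made-up example):

# Force every write to this zvol to commit to the ZIL/SLOG
# before being acknowledged to the initiator:
zfs set sync=always tank/iscsi/vmfs01
# Confirm the setting:
zfs get sync tank/iscsi/vmfs01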
 

jcl123

Dabbler
Joined
Jul 6, 2019
Messages
23
Since you intend to provide iSCSI targets, you will benefit from having a SLOG vdev in your pool to handle sync writes to the zvols being shared. Even if your use case is mostly reads, there will be writes to update the access times.
OK, fair enough.
 

ljvb

Dabbler
Joined
Jul 14, 2014
Messages
30
On FreeNAS, an iSCSI target can only be dedicated to a single initiator, and appears as an empty drive needing to be formatted. It can't be shared with multiple initiators. Is this what you intend?

If you don't want to deal with Samba, then you may be better off constructing an ESX host, and just running Windows Server within. Then a separate FreeNAS box can provide iSCSI volumes to the Windows Server.
That is not strictly true... For Windows servers, yes, the "drive" is unique to that machine. For VMware, I have the same target shared between 3 VMware hosts, but I suspect vCenter is facilitating that (even though I initially configured iSCSI on the ESXi host before I set up vCenter).
 

jcl123

Dabbler
Joined
Jul 6, 2019
Messages
23
That is not strictly true... For Windows servers, yes, the "drive" is unique to that machine. For VMware, I have the same target shared between 3 VMware hosts, but I suspect vCenter is facilitating that (even though I initially configured iSCSI on the ESXi host before I set up vCenter).
Yeah, I thought that was odd as well; the target shouldn't care.
On both VMware and Windows, the clients do the job of making sure they don't step on each other's data.
VMFS is a clustered file system from the get-go, and on Windows you just load failover clustering.

-JCL
 

Samuel Tai

Never underestimate your own stupidity
Moderator
Joined
Apr 24, 2020
Messages
5,399
Yeah, I thought that was odd as well; the target shouldn't care.
On both VMware and Windows, the clients do the job of making sure they don't step on each other's data.
VMFS is a clustered file system from the get-go, and on Windows you just load failover clustering.

-JCL

It depends on how you intend to mount the target. If you attach the target to VMware, which then provides it to the Windows guest, then yes, VMFS understands how to deconflict multiple initiators.

If you attach the target to the Windows guest directly, which is what I thought you intended above, then no. Windows initiators aren't cluster-aware.
 

jcl123

Dabbler
Joined
Jul 6, 2019
Messages
23
If you attach the target to the Windows guest directly, which is what I thought you intended above, then no. Windows initiators aren't cluster-aware.

Well, they are, but only if you add the Failover Clustering feature and set up clustering (and know what you are doing).
 

Samuel Tai

Never underestimate your own stupidity
Moderator
Joined
Apr 24, 2020
Messages
5,399
Well, they are, but only if you add the Failover Clustering feature and set up clustering (and know what you are doing).

You're correct, but most people don't want to bear the extra licensing cost entailed in such a setup. I assumed you'd be part of this population.
 

jcl123

Dabbler
Joined
Jul 6, 2019
Messages
23
You're correct, but most people don't want to bear the extra licensing cost entailed in such a setup. I assumed you'd be part of this population.
If it were on my own dime, I would agree. I do a lot of work with them at work, and I am allowed to use those licenses in my home lab for non-production use.
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
Bingo ;) I have a lot of Windows 10 clients around the house. Yes, I can beat Samba into submission, I did it for years, but it is a much better experience with Server 2019 <-> Windows 10.
If you have the Server 2019 licenses already, that can certainly help tilt things that way. Since your workload is mostly reads, we can try to violate a few of the other "best practices" with regard to block storage as well.

Sure, I can post more details later in the day. I really appreciate the help.
No problem. Given the specs, I have my suspicions that a "certain brand of storage" was recently decommissioned at your office, and you were permitted to benefit from that once the drives were erased. ;)

Yes, I am not looking to have a single vdev do both of these things.
If I have one target for VMFS, then that will probably be on an all-flash mirror vdev with some of the 400GB SSDs; I don't need massive amounts of space for that. Actually, that might be a good fit for the NVMe disks, although I could have more than one.
Quick ZFS terminology lesson: the "pool" is the top-level aggregate, which in turn is made up of vdevs. Think of the "vdevs" as the "5" in a RAID50 arrangement, and the "pool" that mashes them together as the "0" piece of it. More details on that front later.
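As a concrete sketch of that RAID50 analogy (the pool name and da0-da11 device names are placeholders, not a recommendation for your specific drives):

# One pool, two RAIDZ2 vdevs: each vdev supplies the parity piece,
# and the pool stripes across them like the "0" in RAID60:
zpool create tank raidz2 da0 da1 da2 da3 da4 da5 \
                  raidz2 da6 da7 da8 da9 da10 da11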

For the target used with the Windows VM, it would just be hosting ordinary files on NTFS. Maybe in that case it does not need the SLOG/sync at all? Especially since 80% of my data is archival in nature and does not change. The other fringe benefit is that I can use Windows de-duplication without needing half a TB of RAM ;)
This would be a good solution: a RAIDZ2 pool with a zvol (think "a carved-out piece of your ZFS pool") provisioned over iSCSI, either as an RDM through VMware or as direct in-guest iSCSI (although the latter might complicate the network layout), and left async. The main OS drive would be on your (sync) VMFS.
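A rough sketch of how that zvol might be carved out and left async (the names and size are illustrative only):

# Sparse (thin-provisioned) 20T zvol for the Windows file server:
zfs create -s -V 20T tank/windows-files
# sync=standard is the default: only writes the client explicitly
# flags as synchronous hit the ZIL; everything else stays async:
zfs set sync=standard tank/windows-files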

I won't discourage the use of dedup in Windows either. The current state of dedup on ZFS is basically "don't" - there's some work in progress to correct this, but as of now it's inline and extremely memory-heavy. The upcoming special vdev type in TrueNAS 12 helps mitigate this, but it doesn't change the core methodology of how it works.

Is the SLOG/sync "global" or is it associated with a specific VDEV?, this is where things start to go past my current level of knowledge of FreeNAS.
Sync can be set at the pool, dataset, or individual zvol level, so you can even have different levels of data assurance and power-loss protection within the same system.
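Since the property is inherited down the tree, mixing assurance levels is just a matter of where you set it. A small sketch with made-up dataset names:

# Default everything in the pool to fully synchronous...
zfs set sync=always tank
# ...but opt a scratch dataset out entirely (no power-loss protection there):
zfs set sync=disabled tank/scratch
# Review the effective values across the hierarchy:
zfs get -r sync tank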

But I might also just do the file share and not the one for VMFS. I am likely going to need to buy a native RAID controller anyway, because VMware does not support SATA/chipset RAID, so I could just have that serve all the VMFS needs and only do file storage with FreeNAS to make it simpler. I like the idea of having that kind of resiliency in my VM storage, but I can always just back the VMs up there.

-JCL
You'll likely need a second controller anyway, since the proper way to do FreeNAS as a VM involves PCI passthrough of the entire HBA to the VM, and you need a storage device visible to VMware in order to store and boot the FreeNAS VMX.

Question; how many of the different drives (SSD and 4T SAS) have you got? That can impact your pool decisions.

And for a guess:

Your 200G drives are Samsung SM1625's
Your 400G drives are Seagate 1200's
Your 4T drives are Seagate Constellation ES (1st gen)
And your NVMe drives are P3608 (but I'm unsure)

Let me know how many I got right. ;)
 