Best practice for iSCSI or NFS share for ESXi storage


Scharbag

Guru
Joined
Feb 1, 2012
Messages
620
Currently, I have a 480GB mirror of SSDs on FreeNAS shared out to ESXi over iSCSI. It works well, but man oh man do VMs eat up space like it is going out of style.

My question is this: I have 2 spare 4TB spinning disks, 2 480GB SSDs and 2 80GB SSDs that I can, without too much fuss, free up to make another pool for the purpose of providing storage to ESXi. I was thinking of creating a 4TB mirror that also has a mirrored ZIL/SLOG made from the 80GB SSDs. That would still leave me the 480GB SSDs to do something with. I could use the 480GB SSDs for L2ARC but I think that would be a waste given I have 64GB of RAM reserved for FreeNAS and my server is for home use.
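For reference, as I understand it a SLOG only ever holds a few seconds' worth of in-flight sync writes before ZFS flushes them to the pool, so 80GB is far more than it can ever use. Here is a back-of-the-envelope sketch (assuming the ZFS default txg timeout of roughly 5 seconds and a couple of outstanding transaction groups; the numbers are only illustrative):

```python
# Rough upper bound on useful SLOG capacity: the SLOG only needs to hold a few
# seconds of incoming sync writes (assumes the default ~5 s txg timeout).
GIB = 1024 ** 3

def slog_gib_needed(link_gbits: float, txg_timeout_s: float = 5.0, txgs: int = 2) -> float:
    bytes_per_second = link_gbits * 1e9 / 8      # line rate of the network link
    return bytes_per_second * txg_timeout_s * txgs / GIB

for link in (1, 10):                             # 1 GbE and 10 GbE front ends
    print(f"{link} GbE -> ~{slog_gib_needed(link):.1f} GiB of SLOG is plenty")
```

So even a small over-provisioned slice of each 80GB SSD would be more SLOG than a 10GbE link can fill.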

Another option might be to add the 2 80GB drives to my existing Z2 pool as ZIL/SLOG devices and then just use my production pool to provide VM space to ESXi. That would allow me to keep the 2 4TB drives assigned as spares to my production and backup pools. This still leaves me with 2 480GB SSDs that I can continue to use as a fast mirrored pool for whatever I want.
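If I go that route, my understanding is that attaching the log vdev to the existing pool is a one-liner (FreeNAS would normally do this from the GUI; the pool and device names below are just placeholders):

```python
# Sketch of adding a mirrored SLOG to an existing pool from the command line.
# "tank" and the da* device names are placeholders; substitute your own pool/gptids.
import subprocess

subprocess.run(["zpool", "add", "tank", "log", "mirror", "da4", "da5"], check=True)
```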

My host also has 2 275GB SSDs connected to SATA, so ESXi has native storage, but those drives have no redundancy at all. I would prefer to use them only to boot the FreeNAS VM.

Given the above, what should I do? Any other configurations that I should consider?

Thanks,
 

Noctris

Contributor
Joined
Jul 3, 2013
Messages
163
Just a question: what do these VMs do? I recently moved all of the overhead data (like shared files etc.) off the VMDKs and onto LUNs on FreeNAS, and then shrunk the VMDKs, which freed up quite a bit of space. At this point, I only have the OS + databases on an SSD datastore; everything else runs from spinning disks.
 

Scharbag

Guru
Joined
Feb 1, 2012
Messages
620
The VMs do not really do much: an Emby server, some metadata stuff, and a few other multi-use Windows VMs.

Just a question: what do these VMs do? I recently moved all of the overhead data (like shared files etc.) off the VMDKs and onto LUNs on FreeNAS, and then shrunk the VMDKs, which freed up quite a bit of space. At this point, I only have the OS + databases on an SSD datastore; everything else runs from spinning disks.

How do you shrink the VMDKs? Because I would LOVE to reduce their size: I created these VMs on VirtualBox with 60GB disks, but they use far less than that for the OS.

Cheers,
 

Noctris

Contributor
Joined
Jul 3, 2013
Messages
163
What I mean is that I never put any data on the OS drive other than the OS, which makes it possible to go as low as 40GB per VM. With thin provisioning, that limits how much space they actually consume. The data drives themselves I mount over iSCSI as LUNs on a "cheaper" pool (i.e. spinning disks).

That being said, the trick is to clear all the cruft off the OS disk every so often, then Storage vMotion the VM to another datastore and back to regain the disk space.
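One way to make sure the freed space actually comes back after the move is to zero the free space inside the guest first (on Windows the usual tool is sdelete -z). A rough sketch of that zeroing step, assuming you do it by hand; the filename and chunk size are arbitrary, and note it briefly fills the OS disk to 100%, so pick a quiet moment:

```python
# Zero the guest's free space so a thin VMDK can shrink during Storage vMotion.
import os

ZERO_FILE = "zerofill.tmp"    # temporary file on the guest's OS disk
CHUNK = 64 * 1024 * 1024      # write 64 MiB of zeros at a time

try:
    with open(ZERO_FILE, "wb") as f:
        zeros = b"\0" * CHUNK
        while True:
            f.write(zeros)    # keep writing until the filesystem is full
            f.flush()
            os.fsync(f.fileno())
except OSError:
    pass                      # disk full -> every free block is now zeroed
finally:
    if os.path.exists(ZERO_FILE):
        os.remove(ZERO_FILE)  # delete the filler so the space is free again
```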

As to the usage of the SSDs, it's a bit difficult to say without an exact use case. For my home, I have a couple of SSDs for the VMs that need the performance (I run a slave for my company's MySQL cluster and some other stuff I like to go really fast). The rest of the "not so important" stuff I just put on the spinning-disk datastore.

What I personally would do (but this is without knowing your use case): set up the 2 x 480GB as a mirror and add them to the ESXi pool, then grow the pool. The 2 x 4TB could be a nice mirror for putting the data LUNs of your VMs on (if they don't need a lot of IO). For performance: are the 2 x 275GB SSDs in your ESXi host being used as local host cache? If not, do that; it will improve performance.

This leaves you with the 2 x 80GB SSDs which, in all honesty, I would not really know what to do with. They might make a nice extra datastore or a place to put your FreeNAS jails, but 80GB is a bit light (for me) to put VMs on; they would be better suited as boot devices. In fact, in your place I think I would swap the 2 x 275GB SSDs out of the ESXi host and use the 80GB drives as the FreeNAS boot, and try to make a mirror out of them (isn't there any RAID in the ESXi host?) just to be sure. That would give you 2 x 275GB SSDs to add to your FreeNAS pool, which is a bit better.

Just my 2 cents of course ;-)
 

jdong

Explorer
Joined
Mar 14, 2016
Messages
59
With NFS, you'll definitely want a SLOG if you want any hint of write performance, since ESXi issues its NFS writes as sync writes.

With iSCSI, it's less important to have a SLOG. In fact, unless the guest OSes themselves are aggressively issuing sync writes (e.g. running a database workload or a nested file-sharing workload), the SLOG/ZIL might go completely unused with iSCSI, unless you set sync=always (which IMO is usually not necessary for iSCSI-backed ESXi servers).
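If you ever do want iSCSI to honor the same guarantees as NFS, the knob is the dataset's sync property. A minimal sketch of checking and flipping it (the zvol name is a placeholder; on FreeNAS you can also do this from the GUI):

```python
# Minimal sketch: read and (optionally) force the sync property on the zvol
# backing an iSCSI extent. "tank/esxi-zvol" is a placeholder name.
import subprocess

def get_sync(dataset: str) -> str:
    out = subprocess.run(["zfs", "get", "-H", "-o", "value", "sync", dataset],
                         capture_output=True, text=True, check=True)
    return out.stdout.strip()            # "standard", "always" or "disabled"

def set_sync(dataset: str, value: str) -> None:
    subprocess.run(["zfs", "set", f"sync={value}", dataset], check=True)

print(get_sync("tank/esxi-zvol"))
# set_sync("tank/esxi-zvol", "always")   # forces every write through the SLOG
```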

I would agree with you that with 64GB of RAM, especially if you're seeing good ARC hit rates, an L2ARC won't really benefit you.

I think Noctris raises a good point that using an SSD as ESXi's "host cache" might yield the most meaningful IO performance improvement, but of course you have to balance that against the trade-off that you are layering additional storage technology on top of ESXi which circumvents some of the reliability/resiliency guarantees of the underlying ZFS-backed zvol…


Do your workloads really benefit from SSDs backing the whole pool versus letting the ARC do its job? I'm sure you'll get slightly faster performance, but on my ESXi setup, even with 3 Windows Server 2016 VMs, 2 FreeBSD servers, and a few on-demand Windows VMs, I am seeing a workload that fits comfortably inside 20GB of ARC, with very little read activity spilling over to my spinning disks, even when rebooting the VMs.
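If you want to check that before spending money, the arcstats counters will tell you. A quick sketch that just reads the sysctls on FreeNAS/FreeBSD (the counters are cumulative since boot, so let the box run for a while first):

```python
# Quick ARC hit-rate check on FreeBSD/FreeNAS by reading the arcstats sysctls.
import subprocess

def sysctl(name: str) -> int:
    out = subprocess.run(["sysctl", "-n", name], capture_output=True, text=True, check=True)
    return int(out.stdout.strip())

hits   = sysctl("kstat.zfs.misc.arcstats.hits")
misses = sysctl("kstat.zfs.misc.arcstats.misses")
size   = sysctl("kstat.zfs.misc.arcstats.size")

print(f"ARC size:     {size / 2**30:.1f} GiB")
print(f"ARC hit rate: {100 * hits / (hits + misses):.1f}% since boot")
```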
 

Scharbag

Guru
Joined
Feb 1, 2012
Messages
620
Wow, what an experience.

So I added my 2 80GB SSDs as SLOG devices on my production pool. I was in the process of copying VMs to an NFS datastore using ESXi and all looked good, with the copy progressing at 90MB/s. Then both SLOG SSDs failed at the same time. Awesome. No idea why, but they were oldish. So I checked the data, and all looks good on the pool.

Without the SLOG, transfers slowed to a CRAWL. Terrible. I then used the 480GB SSDs as the SLOG and transfers sped up to 25MB/s. Not great. Abandon that idea!!

So I now have the 2 480GB drives in a mirror shared to ESXi over NFS. I prefer NFS because I can manage the files via FreeNAS rather than ESXi, as I seem to have issues with the datastore browser failing to copy. Not sure what that is all about... If I need more space, I think I will just move forward with buying more SSDs and adding them to the existing SSD-based pool.

The only VM that I will keep on the SATA-connected 275GB SSDs is FreeNAS. And I will add more host cache.

Been a bit of a ride...

Cheers,
 

Scharbag

Guru
Joined
Feb 1, 2012
Messages
620
So, I think I have my system in a stable arrangement.

I now use my 2 480GB SSDs in a mirror to provide storage to ESXi. The 2 local 275GB drives only host FreeNAS. Total overkill but it is working, so I am not going to touch it.

I have also added 2 120GB SSDs (over-provisioned to 16GB) as SLOG devices on the SSD-based pool. Surprisingly, it does make a big difference to write performance over NFS according to iostat. The 120GB drives are cheap ones. If I can find some small SSDs that are SLC type, I would use those on the SSD pool and move the MLC ones to be the SLOG on my spinning pool.

And wow, the SSD-based pool is orders of magnitude faster than the spinning pool for IOPS and write speed according to iostat.
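For anyone who wants to compare datastores the same way, a crude sync-write test you can run from inside a VM on each one looks something like this (not a substitute for iostat or fio; the block size and count are arbitrary):

```python
# Crude sync-write micro-benchmark: time a few thousand fsync'd 4 KiB writes.
import os, time

PATH, BLOCK, COUNT = "syncbench.tmp", b"\0" * 4096, 2000

fd = os.open(PATH, os.O_WRONLY | os.O_CREAT | os.O_TRUNC)
start = time.monotonic()
for _ in range(COUNT):
    os.write(fd, BLOCK)
    os.fsync(fd)                     # force each write to stable storage
elapsed = time.monotonic() - start
os.close(fd)
os.remove(PATH)

print(f"{COUNT / elapsed:.0f} synced 4 KiB writes/s "
      f"({COUNT * len(BLOCK) / elapsed / 2**20:.2f} MiB/s)")
```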

Cheers,
 

Scharbag

Guru
Joined
Feb 1, 2012
Messages
620
Broke down and bought some 120GB Intel S3510 SSDs for SLOG and L2ARC. We will see how they perform compared to the cheap ones.

Also, I abandoned the SSD-only pool as it was just too small for future use. I decided to use a SLOG with my spinning pool instead. IOPS and read performance are stellar. Write performance is suspect, but I think the good Intel SSDs will improve that aspect too.

Cheers,
 