Help with SSD storage upgrade for VMs

xbufu

Cadet
Joined
Jul 18, 2022
Messages
5
Hi,

so I am planning to upgrade my homelab to 10GbE and want to move my VM storage from my ESXi host to my TrueNAS SCALE host. I am planning to build a new pool out of older enterprise SATA SSDs, since I want the VMs to be as responsive as possible and to saturate the 10G link. I will also add a cache SSD to both the new pool and my HDD pool, either an Intel Optane 900P or an HP ioDrive.

For the SSDs, I am looking at refurbished Intel DC S3520 1.6TB SSDs. I have 2 main questions here:

1. The IOPS for the SSD are quite a bit lower than those of consumer SSDs (67.5k/17k vs 98k/90k on an 860 EVO). Is that normal for enterprise SSDs, and is it going to be a problem?
2. RAIDZ(2) vs striped mirrors: Looking through the forums, people recommend striped mirrors for performance (i.e. VMs) and RAIDZ(2) for bulk storage. However, I would lose a lot more usable storage by going with striped mirrors, which means a higher cost per GB. How much would adding the Intel Optane 900P as a cache drive offset the performance loss of RAIDZ? Is the difference going to be that noticeable over a 10G link?

I am mainly going for SSDs instead of HDDs for the obvious performance benefits, and also because I already have a SuperMicro CSE216 case I can throw them into. I'm probably not going to go with HDDs, but in your experience, how much of a difference do they make in the real world (responsiveness, boot time) compared to SSDs?

I appreciate any help I can get here, since I am still quite a beginner when it comes to storage.
 

NugentS

MVP
Joined
Apr 16, 2020
Messages
2,947
You need to post your hardware, as per the forum rules, please.

My 2p worth:
Enterprise drives are more honest about performance than consumer drives. They also tend to hold their performance under load, where consumer drives go hide in a corner and sulk.

The Intel DC S3520 drives are read-intensive drives. They will wear out a lot quicker than mixed-use or write-intensive models; however, they are a lot cheaper. Last time I checked, the S3500s had a DWPD of 1, the S3600s a DWPD of 3, and the S3700s a DWPD of 10. Note, however, that one DWPD on a 1.6TB SSD is still a lot of writes.
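To put a rough number on that (back-of-the-envelope, assuming the usual 5-year warranty window): 1 DWPD on a 1.6TB drive works out to about 1.6 x 365 x 5 ≈ 2,900TB written, i.e. roughly 2.9PB. Once the pool exists you can sanity-check your actual write rate against that with something like the following (the pool name is just a placeholder):

    zpool iostat -v ssdpool 60    # per-vdev read/write throughput, sampled every 60 seconds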

How many drives are you talking about? (You need to post all hardware, including proposed hardware.) Multiple mirrors are much better for VMs than Z1/Z2/Z3. However, a mirrored pair vs a 3-wide Z1 won't see a lot of difference, albeit you will be wearing out 3 disks rather than 2.
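For illustration, the two layouts for six drives would look roughly like this from the command line (pool and device names are placeholders; on TrueNAS you would normally build the pool in the UI instead):

    # three striped mirror vdevs - the usual recommendation for VM/iSCSI use
    zpool create ssdpool mirror sda sdb mirror sdc sdd mirror sde sdf

    # versus a single 6-wide RAIDZ2 vdev - more usable space, fewer IOPS
    zpool create ssdpool raidz2 sda sdb sdc sdd sde sdf

The mirror layout gives you three vdevs' worth of IOPS, while the RAIDZ2 pool behaves roughly like a single vdev, which is why mirrors get recommended for VMs.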

Your cache (L2ARC) is unlikely to be of much use to you. However, the Optane as a SLOG is a whole different kettle of fish, and is a good idea even on a pool of SSDs. ESXi uses sync writes by default to iSCSI, and the Optane 900P will help here due to its performance. Note that in speed terms, sync < sync+SLOG < async, so setting the dataset/zvol to async will be faster than sync+SLOG. However, you then run the risk of data corruption: in the event of a sudden power loss you can lose up to 5 seconds of data, a potential disaster when talking about virtual HDDs. Sync+SLOG is safe (assuming you don't lose the SLOG).
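For reference, attaching the Optane as a SLOG and controlling the sync behaviour per zvol looks roughly like this (pool, device and zvol names are placeholders; the same options are exposed in the TrueNAS UI):

    zpool add ssdpool log nvme0n1            # attach the 900P as a dedicated SLOG
    zfs set sync=always ssdpool/vm-zvol      # safe: sync writes hit the SLOG first
    zfs set sync=disabled ssdpool/vm-zvol    # async: fastest, but risks the last few seconds of writes on power loss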

SSDs vs HDDs for an iSCSI ESXi datastore: this is something I have played with in a non-serious manner. Look at my primary NAS. I have an ESXi datastore on BigPool in sync mode, backed by a SLOG and lots of memory (even under SCALE) for use as ARC. My VMs are not high-performance VMs: AD servers, a couple of Windows clients for specific purposes, VCSA (which, for what it does, uses ridiculous resources), and a Docker host for stuff I want to run separately from most of my containers. These run very well from the HDDs (I honestly can't feel the difference). I do, however, have a dedicated SSD pool which I use in preference. My suspicion is that a VM that hammered the iSCSI link with writes would not fare quite so well on HDDs as on SSDs, but up to a point it works very well. I do have lots of HDDs (10) in mirrors, though, so I have relatively high IOPS for an HDD pool.
 

xbufu

Cadet
Joined
Jul 18, 2022
Messages
5
Posted my existing hardware in my signature. I know 4 drives in RAIDZ2 is not optimal, but I didn't really know what I was doing yet when I got the system. My budget for the new drives would be enough for a maximum of 6 SSDs plus the 2 cache drives.

As for sync vs async, I don't really care that much about the actual VM disks, as the important data is stored on the existing pool anyway. So I think I could go with async there.

My VMs are also not that performance-intensive; they are used mainly as a security lab and for Docker apps. I would, however, go with the SSDs mainly for power consumption and noise, since it's a homelab and power in Europe is expensive.
 

NugentS

MVP
Joined
Apr 16, 2020
Messages
2,947
Just remember: with async you can easily corrupt a virtual disk in the event of a power loss. If you are just writing files to an SMB share, it doesn't matter so much.

Also, another factor to take into consideration.
Let's say you have 6 x 1.6TB drives (1.4TB usable on each).
3 mirrored pairs = 4.2TB usable.
However, you should NOT go over 50% used if you want to avoid horrible slowdowns = 2.1TB usable. Ouch.

The same concept applies to Z1, Z2 & Z3, BTW.
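Spelling that arithmetic out next to the RAIDZ2 alternative (ignoring RAIDZ padding overhead, which can make real zvol space efficiency a bit worse):

6 x 1.6TB drives, ~1.4TB usable each
Striped mirrors: 3 vdevs x 1.4TB = 4.2TB, 50% ceiling = 2.1TB for zvols
6-wide RAIDZ2: 4 data drives x 1.4TB = 5.6TB, 50% ceiling = 2.8TB for zvols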
 