
Physical vs. Virtual FreeNAS + iSCSI vs. NFS: performance testing results.


soulburn

Member
Joined
Jul 6, 2014
Messages
100
Hi everyone,

I've wanted to see how well FreeNAS performs in back-to-back tests on physical vs. virtual hosts on the same hardware for a while now, and I finally have all the parts together to do some initial testing. My vision is a single box that can do it all for my homelab and for testing: ESXi as the hypervisor, with FreeNAS virtualized and hosting a nested datastore for the rest of my ESXi VMs.

I wanted to share my results in case anyone else was curious, as I haven't seen anything like this benchmarked before. This is not a production system and the results are far from scientific, but they should give you a little insight into the performance of this type of configuration.

Pertinent hardware specs:
  • FreeNAS 9.10.1
  • SuperStorage Server 6048R-E1CR36L chassis
  • X10DRH-iT motherboard
  • Dual Xeon 2620 v4 CPUs
  • 128 GB RAM
  • LSI 3008 HBA
  • 14x Seagate IronWolf 10TB 7,200 RPM drives in a single pool consisting of 7x mirrored vdevs striped together (RAID 10).
  • 2x 80 GB Intel SSD DC S3510 overprovisioned to 10 GB for SLOG
  • 1x 256 GB Samsung 850 Pro SSD for cache
  • 2x onboard Intel X540 NICs (MTU 9000)
  • Netgear XS708E 10GbE switch with VLANs set up for the storage network to isolate its traffic.
Physical host setup:

FreeNAS was installed on a USB device in the SuperStorage Server 6048R-E1CR36L chassis with the X10DRH-iT motherboard. The onboard Intel X540 10GbE NICs were plugged into a Netgear XS708E 10GbE switch. On the ESXi host side of things, I used a Supermicro X10SDV-TLN4F based 1U server, which has an Intel Xeon D-1540 SoC and integrated Intel X552 10GbE NICs, also plugged into the Netgear XS708E switch. The screenshots you're seeing below are from a Windows Server 2012 R2 VM on the iSCSI and NFS datastores.
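Since everything on the storage network is set to MTU 9000, here's the quick sanity check I use to confirm jumbo frames actually pass end to end (the 10.0.0.10 address is just a placeholder for your storage-network IP):

```shell
# From the FreeNAS shell: 8972 = 9000 - 20 (IP header) - 8 (ICMP header)
# -D sets the don't-fragment bit, so the ping fails outright if any
# hop in the path is still at MTU 1500
ping -D -s 8972 10.0.0.10

# From the ESXi shell the equivalent is vmkping (-d = don't fragment)
vmkping -d -s 8972 10.0.0.10
```

If the large ping fails but a plain ping works, some link in the path (NIC, vSwitch, or switch port) didn't get the MTU change.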

Physical FreeNAS, iSCSI VM, sync=disabled

iscsi_physical.png

Physical FreeNAS, NFS VM, sync=disabled

nfs_physical.png

I forgot to screenshot the results for the physical host with sync=always, but the writes were the same as what you'll see below with the FreeNAS VM: around 100 MB/s.

VM host setup:


The general process for the VM setup was as follows:

A FreeNAS VM was created on a SATA DOM datastore that is physically in the host. The LSI 3008 HBA was set to passthrough mode in ESXi so the FreeNAS VM could have full access to the disks. VMXNET3 NICs were used for the VM, and the MTU was set to 9000 in both ESXi networking and FreeNAS. Either an iSCSI zvol target or an NFS datastore was set up in the FreeNAS VM and passed back to the ESXi host, where I created a nested datastore. I then created a Windows Server 2012 R2 VM on that nested datastore. The disk performance results below are from the various Windows Server 2012 R2 VMs that were on those datastores. The Intel E1000E NIC was used for the Windows Server 2012 R2 VMs, with the MTU set to 9014 bytes.

VM FreeNAS, iSCSI VM, sync=disabled

iscsi_vm.png
VM FreeNAS, iSCSI VM, sync=enabled

iscsi_vm_sync_writes.png

VM FreeNAS, NFS VM, sync=disabled

nfs_vm.png
VM FreeNAS, NFS VM, sync=enabled

nfs_vm_sync_writes.png
Questions:
  • Based on the info I provided, is there any glaring reason why my performance metrics take such a hit with sync=enabled?
  • Which do you choose for your ESXi datastore, iSCSI or NFS, and why? Right now I'm leaning towards NFS based on these results and the hassle I see in general with iSCSI tuning on FreeNAS.
  • What other disadvantages do you see with this configuration as it relates to virtualizing FreeNAS, considering you can pass through an HBA and this is a non-production environment?
Conclusion:

I'm still not totally sold one way or the other on virtualizing FreeNAS, but the power savings of running fewer servers is tempting. It's also tempting to use sync=disabled for the ESXi datastores, but I know that's not smart, so I need to get that figured out. Additionally, virtualized FreeNAS is pretty annoying after a reboot or power outage: you have to either SSH into the ESXi host to rescan the storage adapters (which I realize you can automate, but that doesn't work if you're using FreeNAS volume encryption) or do it manually from the GUI before you can power on the VMs in the nested datastore hosted by the FreeNAS VM. It's also, of course, a huge negative that this type of setup takes down all of your nested VMs whenever something goes wrong and the system goes down.
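For what it's worth, the rescan I'd be automating is only a couple of esxcli calls, something along these lines (a sketch, assuming shell access to the ESXi host):

```shell
# Rescan all storage adapters so ESXi sees the iSCSI/NFS targets
# exported by the FreeNAS VM once it has finished booting
esxcli storage core adapter rescan --all

# Then rescan for VMFS volumes so the nested datastore mounts
esxcli storage filesystem rescan
```

As mentioned, this doesn't help with FreeNAS volume encryption, since the volume has to be unlocked manually before the targets exist at all.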

Furthermore, my results show that sync=enabled is a huge performance killer, even with Intel DC S3510 SSDs for a SLOG. I'm not sure if I'm doing something wrong, but the performance loss is just too great. I don't really care if sync=disabled winds up destroying the VMs in the ESXi datastore, as Veeam is easy enough to use, but I would be upset if I lost the data on the rest of my FreeNAS volume, so that's concerning. I just can't imagine that the Intel DC S3510 is too slow to use as a proper SLOG and will only put out 100 MB/s. I wish I had an NVMe drive to test SLOG performance with...

At any rate, I will keep testing and see if I can get the performance to an acceptable level whilst still using sync=enabled. My results make me feel like virtualizing FreeNAS will be a viable option for my non-production environment, and I don't really see any downsides as long as you can pass through a proper HBA to the VM, but again I'm not sure which way I'll go just yet as I want to do more testing. I just wanted to share a few of my initial tests and start a post to get some discussion going. I'll report back once I come to a conclusion on how I'm going to set things up or if I have any other interesting data to share. I would enjoy any questions, opinions, or feedback you have on this configuration, and hearing about your similar setups. I'd also really like to figure out why things are so slow with sync=enabled. Thanks for any feedback.
 

Spearfoot

He of the long foot
Moderator
Joined
May 13, 2015
Messages
2,266
Using much more modest hardware than you, I ran FreeNAS virtualized with sync=disabled on my NFS-based VM datastore for over a year before adding an Intel DC S3700 SLOG device. My experience is that turning on synchronous writes does indeed drastically slow down writes, just as you describe. But as a practical matter, it doesn't make that much difference in day-to-day usage. I'm a developer and don't need stellar performance; your mileage may vary.

FWIW, the Intel DC S35xx devices are optimized for reads, whereas the S37xx devices are optimized for writes, meaning the latter are a better choice for a SLOG device, especially as regards durability.

You'd probably get better performance using an Intel DC P3700 as your SLOG device. But one of these will cost some dough...

As for virtualizing FreeNAS, I've been running two FreeNAS VM instances for over a year with absolutely no problems whatsoever.

Good luck!
 

soulburn

Member
Joined
Jul 6, 2014
Messages
100
Using much more modest hardware than you, I ran FreeNAS virtualized with sync=disabled on my NFS-based VM datastore for over a year before adding an Intel DC S3700 SLOG device. My experience is that turning on synchronous writes does indeed drastically slow down writes, just as you describe. But as a practical matter, it doesn't make that much difference in day-to-day usage. I'm a developer and don't need stellar performance; your mileage may vary.

FWIW, the Intel DC S35xx devices are optimized for reads, whereas the S37xx devices are optimized for writes, meaning the latter are a better choice for a SLOG device, especially as regards durability.

You'd probably get better performance using an Intel DC P3700 as your SLOG device. But one of these will cost some dough...

As for virtualizing FreeNAS, I've been running two FreeNAS VM instances for over a year with absolutely no problems whatsoever.

Good luck!
That was exactly the feedback I was looking for. Thanks for the info on the DC S35xx devices. When I got them I just saw they were cheap and had a supercap so I went for them. I guess I have to upgrade to the DC P3700 if I want really good write speed!
 

Mirfster

Doesn't know what he's talking about
Joined
Oct 2, 2015
Messages
3,214
Based on the info I provided, is there any glaring reason why my performance metrics take such a hit with sync=enabled?
As @Spearfoot pointed out, it is your choice of SLOG. There is definitely a performance hit for using sync=always, but not as much as you are seeing. BTW, I use Intel DC S3710s for my SLOGs. For VMs you do want to have sync enabled, unless you are some uncouth rebel like others I won't name... ;)
Which do you choose for your ESXi datastore, iSCSI or NFS, and why? Right now I'm leaning towards NFS based on these results and the hassle I see in general with iSCSI tuning on FreeNAS.
Have used both, but am now on iSCSI. Main reason is that I use both ESXi and MS Server 2012 R2 Hyper-V. So since MS doesn't do NFS for VMs, I had to use iSCSI. Either is fine IMHO and the debate will always be there about which is best...
What other disadvantages do you see with this configuration as it relates to virtualizing FreeNAS, considering you can pass through an HBA and this is a non-production environment?
One thing I am still pondering (I run both stand-alone FreeNAS servers and ESXi w/FreeNAS as a VM) is the dependency between them. As an example, I have multiple servers and keep mucking around with whether I should do this or that. Say I want to take down the ESXi server: since it houses the FreeNAS VM (which in turn is the datastore for VMs), all my VMs need to go down, even the Hyper-V ones.

Keep in mind I used to have all of these items totally separated (still have an ESXi 5.5 U2 Server that I need to migrate). Ah... decisions... decisions ;)
 

Mlovelace

Neophyte Sage
Joined
Aug 19, 2014
Messages
1,065
Have used both, but am now on iSCSI. Main reason is that I use both ESXi and MS Server 2012 R2 Hyper-V. So since MS doesn't do NFS for VMs, I had to use iSCSI. Either is fine IMHO and the debate will always be there about which is best...
Arguably the most important point for a FreeNAS iSCSI-backed ESXi datastore is that VAAI is not supported in FreeNAS via NFS, only iSCSI. So if you choose NFS, you won't be leveraging Write Same Zero, XCOPY, Atomic Test and Set, UNMAP, or Warn & Stun. If you are unfamiliar with those VAAI primitives and why you should be leveraging them, here is a link: FreeNAS VAAI
 

soulburn

Member
Joined
Jul 6, 2014
Messages
100
As @Spearfoot pointed out, it is your choice of SLOG. There is definitely a performance hit for using sync=always, but not as much as you are seeing. BTW, I use Intel DC S3710s for my SLOGs. For VMs you do want to have sync enabled, unless you are some uncouth rebel like others I won't name... ;)

Have used both, but am now on iSCSI. Main reason is that I use both ESXi and MS Server 2012 R2 Hyper-V. So since MS doesn't do NFS for VMs, I had to use iSCSI. Either is fine IMHO and the debate will always be there about which is best...

One thing I am still pondering (I run both stand-alone FreeNAS servers and ESXi w/FreeNAS as a VM) is the dependency between them. As an example, I have multiple servers and keep mucking around with whether I should do this or that. Say I want to take down the ESXi server: since it houses the FreeNAS VM (which in turn is the datastore for VMs), all my VMs need to go down, even the Hyper-V ones.

Keep in mind I used to have all of these items totally separated (still have an ESXi 5.5 U2 Server that I need to migrate). Ah... decisions... decisions ;)
Yes, it's quite annoying when you have to take your whole environment down due to things being nested! Thanks for the feedback!
 

soulburn

Member
Joined
Jul 6, 2014
Messages
100
Arguably the most important point for a FreeNAS iSCSI-backed ESXi datastore is that VAAI is not supported in FreeNAS via NFS, only iSCSI. So if you choose NFS, you won't be leveraging Write Same Zero, XCOPY, Atomic Test and Set, UNMAP, or Warn & Stun. If you are unfamiliar with those VAAI primitives and why you should be leveraging them, here is a link: FreeNAS VAAI
Oh wow, I didn't realize NFS didn't support VAAI. Is that a FreeNAS issue, or does NFS not support VAAI in general? Those features are pretty much required IMO, so the choice is clearly going to be iSCSI. At least that's one less thing I have to decide on! Thanks for letting me know. I'd have been so pissed if I used NFS only to later find out that VAAI wasn't working when I wanted to test it!
 

Mlovelace

Neophyte Sage
Joined
Aug 19, 2014
Messages
1,065
Oh wow, I didn't realize NFS didn't support VAAI. Is that a FreeNAS issue, or does NFS not support VAAI in general? Those features are pretty much required IMO, so the choice is clearly going to be iSCSI. At least that's one less thing I have to decide on! Thanks for letting me know. I'd have been so pissed if I used NFS only to later find out that VAAI wasn't working when I wanted to test it!
Other vendors have developed VAAI plugins for NFS, but FreeNAS doesn't offer anything like that, and I doubt they will moving forward.
 

soulburn

Member
Joined
Jul 6, 2014
Messages
100
Other vendors have developed VAAI plugins for NFS but freeNAS doesn't offer anything like that, and I doubt they will moving forward.
Thanks for the info!

I did a little more testing where I took out the 2x 80 GB Intel SSD DC S3510 (overprovisioned to 10 GB) SLOG devices, and here are the results:

iscsi_physical_no_slog_sync_writes.png

Sure enough, it's even worse. So the lesson so far is that when using 10GbE, iSCSI, and ESXi datastores with FreeNAS, don't bother with anything like the DC S3510; it just isn't fast enough. I'm going to order a 400 GB Intel DC S3700 for the SLOG and will report back once I test that as well.
 

Spearfoot

He of the long foot
Moderator
Joined
May 13, 2015
Messages
2,266
Thanks for the info!

Sure enough, it's even worse. So the lesson so far is that when using 10GbE, iSCSI, and ESXi datastores with FreeNAS, don't bother with anything like the DC S3510; it just isn't fast enough. I'm going to order a 400 GB Intel DC S3700 for the SLOG and will report back once I test that as well.
I hope you're not too disappointed in the S3700, which I suspect will be only marginally faster than the S3500s. Their greater write durability is the main thing that makes S37xx SSDs superior to the S35xx models as SLOG devices.

SATA-based SSDs are entry-level SLOG devices. Something more appropriate for your high-end system would be the Intel NVMe-based 750 (good) or P3700 (better). Either of these should noticeably out-perform the S3500 or S3700 SSDs. Beyond that, you're looking at something really expensive, such as the ZeusRAM. See the thread: "Some insights into SLOG/ZIL with ZFS on FreeNAS" for detailed discussion about this stuff...
 

soulburn

Member
Joined
Jul 6, 2014
Messages
100
I hope you're not too disappointed in the S3700, which I suspect will be only marginally faster than the S3500s. Their greater write durability is the main thing that makes S37xx SSDs superior to the S35xx models as SLOG devices.

SATA-based SSDs are entry-level SLOG devices. Something more appropriate for your high-end system would be the Intel NVMe-based 750 (good) or P3700 (better). Either of these should noticeably out-perform the S3500 or S3700 SSDs. Beyond that, you're looking at something really expensive, such as the ZeusRAM. See the thread: "Some insights into SLOG/ZIL with ZFS on FreeNAS" for detailed discussion about this stuff...
Oops yes! I meant DC P3700. I should have 2x 400GB variants next week.
 

Spearfoot

He of the long foot
Moderator
Joined
May 13, 2015
Messages
2,266
Oops yes! I meant DC P3700. I should have 2x 400GB variants next week.
Ah! I believe you'll be very happy... and I'm green with envy! :)

Here is a benchmark I ran in a Windows 7 VM installed on an NFS datastore on my main FreeNAS system. This is an all-in-one, with the datastore configured on a local network using a virtual switch (as described in this blog article).
cm-boomer-01.jpg

Here is the same benchmark, run on a Windows 7 VM installed on my 'test' FreeNAS system against a SMB share - drive R: - on my main FreeNAS system.
cm-bacon-01.jpg

In both cases I'm running the benchmark against the pool in my main system, a RAIDZ2 array of 7 x 2TB HGST drives with a single 100GB Intel DC S3700 as a SLOG device and sync=always on the NFS-based VM dataset. In the first case, you can clearly see that writes are slower than reads. In the latter, you can see that, despite writes being slower than reads, I nevertheless come pretty close to saturating my 1GbE network. I expect you'll see something similar on your 10GbE network with your high-end SLOG.

Good luck!
 

soulburn

Member
Joined
Jul 6, 2014
Messages
100
Ah! I believe you'll be very happy... and I'm green with envy! :)

Here is a benchmark I ran in a Windows 7 VM installed on an NFS datastore on my main FreeNAS system. This is an all-in-one, with the datastore configured on a local network using a virtual switch (as described in this blog article).
cm-boomer-01.jpg

Here is the same benchmark, run on a Windows 7 VM installed on my 'test' FreeNAS system against a SMB share - drive R: - on my main FreeNAS system.
cm-bacon-01.jpg

In both cases I'm running the benchmark against the pool in my main system, a RAIDZ2 array of 7 x 2TB HGST drives with a single 100GB Intel DC S3700 as a SLOG device and sync=always on the NFS-based VM dataset. In the first case, you can clearly see that writes are slower than reads. In the latter, you can see that, despite writes being slower than reads, I nevertheless come pretty close to saturating my 1GbE network. I expect you'll see something similar on your 10GbE network with your high-end SLOG.

Good luck!
That's great! Thanks for sharing!
 

Dice

Neophyte Sage
Joined
Dec 11, 2015
Messages
1,214
A lot of neat information / guidance in this thread. Thanks.
 

Spearfoot

He of the long foot
Moderator
Joined
May 13, 2015
Messages
2,266
For VMs you do want to have sync enabled, unless you are some uncouth rebel like others I won't name... ;)
Wait! Is this a jab at moi? :rolleyes:
 

soulburn

Member
Joined
Jul 6, 2014
Messages
100
UPDATE:

400 GB DC P3700 came in. Results as follows:
  • Ultimately decided to run FreeNAS on the physical hardware and ditch virtualizing it, because as you'll see below it runs so well that I've decided to introduce this into a production environment at a client's site once I finish up more testing.
    • This client currently uses individually managed ESXi hosts that all have DAS. All hosts currently use the free ESXi hypervisor. This new box will be used as SAN storage for the VM hosts' datastores. The VM hosts will be clustered, managed with vCenter Server, and used with vMotion.
  • Since VAAI is so important I ultimately decided to use iSCSI for the datastores.
  • I removed the 2x 80 GB Intel SSD DC S3510 (overprovisioned to 10 GB for SLOG) from the server.
  • I installed the 400 GB DC P3700.
  • I overprovisioned the 400 GB DC P3700 using the following commands:
    • Code:
      gpart create -s gpt nvd0                                    # create a GPT partition table
      gpart add -t freebsd-zfs -b 2048 -a 4k -l log0 -s 8G nvd0   # 8 GB 4k-aligned partition labeled log0
      zpool add vol0 log gpt/log0                                 # attach the partition to vol0 as a SLOG
      gpart show nvd0                                             # verify the layout
      
  • The DC P3700 now looks like this from the shell and GUI:
dc_p3700_gpart_info.png
dc_p3700_fn_gui_info.png
  • I set up a task to run the following command post init:
init_script.png
  • I spun up a VM with a vmxnet3 NIC, set the MTU to 9000, and re-ran CrystalDiskMark at 16GiB.
    • Physical FreeNAS, iSCSI VM, sync=enabled with the DC P3700 being used as the SLOG.
iscsi_physical_sync_writes.png
I still want to test a few things here and there, but overall I am happy with the results and learned a ton by setting this up. I do have another DC P3700 that I can use as a cache drive, but at this point I am not even sure if I need it. I need to check the ARC hit rates to see whether it's even necessary, and test a few other odds and ends. I'll keep everyone updated as things progress. Thanks to everyone for all the help and feedback thus far!
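For the ARC hit-rate check I mentioned, the plan is just to read the arcstats counters from the FreeNAS shell, something like this (a sketch using the FreeBSD kstat sysctls):

```shell
# Raw ARC hit/miss counters
sysctl kstat.zfs.misc.arcstats.hits kstat.zfs.misc.arcstats.misses

# Turn them into a hit ratio
sysctl -n kstat.zfs.misc.arcstats.hits kstat.zfs.misc.arcstats.misses | \
  awk 'NR==1{h=$1} NR==2{m=$1} END{printf "ARC hit ratio: %.1f%%\n", 100*h/(h+m)}'
```

If the ratio stays high under a realistic workload, an L2ARC device probably won't buy much.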
 

Dice

Neophyte Sage
Joined
Dec 11, 2015
Messages
1,214
I'm a bit interested in how you decided to use only ~10 GB for SLOG? What was the thought process leading up to that?
 

soulburn

Member
Joined
Jul 6, 2014
Messages
100
I'm a bit interested in how you decided to use only ~10 GB for SLOG? What was the thought process leading up to that?
The transaction group (txg) size is limited by the speed of your network link. The default time for a txg is 5 seconds. A 1 Gbit/s link can take in at most about 0.625 GB per txg (1 Gbit/s × 5 s = 5 Gbit ≈ 0.625 GB), so for a 1 Gbit network you only really need a SLOG of around 0.625 GB. For a 10 Gbit network that works out to 6.25 GB; I made it 8 GB. You provision the SLOG this way so the drive can handle things like wear leveling on its own. See here for more info.

Oh, and just for fun, I also tried setting up the SLOG from the GUI and letting it use the whole 400 GB. It made no difference in performance. Using the whole drive would just waste space that could be used for wear leveling.
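The sizing math above as a one-liner, in case anyone wants to plug in their own link speed (the 5-second default txg interval is assumed):

```shell
# Max data one txg can take in: link speed (Gbit/s) / 8 bits-per-byte * 5 s
awk -v gbit=10 'BEGIN { printf "%.3f GB\n", gbit * 1e9 / 8 * 5 / 1e9 }'
# prints "6.250 GB"; with -v gbit=1 it prints "0.625 GB"
```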
 

Mlovelace

Neophyte Sage
Joined
Aug 19, 2014
Messages
1,065
UPDATE:

400 GB DC P3700 came in. Results as follows:
  • Ultimately decided to run FreeNAS on the physical hardware and ditch virtualizing it, because as you'll see below it runs so well that I've decided to introduce this into a production environment at a client's site once I finish up more testing.
    • This client currently uses individually managed ESXi hosts that all have DAS. All hosts currently use the free ESXi hypervisor. This new box will be used as SAN storage for the VM hosts datastores. The VM hosts will be clustered and managed with vCenter Server and used with vMotion.
  • Since VAAI is so important I ultimately decided to use iSCSI for the datastores.
  • I removed the 2x 80 GB Intel SSD DC S3510 (overprovisioned to 10 GB for SLOG) from the server.
  • I installed the 400 GB DC P3700.
  • I overprovisioned the 400 GB DC P3700 using the following commands:
    • Code:
      gpart create -s gpt nvd0
      gpart add -t freebsd-zfs -b 2048 -a 4k -l log0 -s 8G nvd0
      zpool add vol0 log gpt/log0
      gpart show nvd0
      
  • The DC P3700 now looks like this from the shell and GUI:
  • I set up a task to run the following command post init:
  • I spun up a VM with a vmxnet3 NIC, set the MTU to 9000, and re-ran CrystalDiskMark at 16GiB.
    • Physical FreeNAS, iSCSI VM, sync=enabled with the DC P3700 being used as the SLOG.
I still want to test a few things here and there, but overall I am happy with the results and learned a ton by setting this up. I do have another DC P3700 that I can use as a cache drive, but at this point I am not even sure if I need it. I need to check the ARC hit rates to see whether it's even necessary, and test a few other odds and ends. I'll keep everyone updated as things progress. Thanks to everyone for all the help and feedback thus far!
Typically you should add the SLOG to the pool by its gptid, but otherwise it looks good.
Code:
gpart create -s GPT nvd0
gpart add -t freebsd-zfs -a 4k -s 16G nvd0
# run `glabel status` and find the gptid of nvd0p1
zpool add [poolname] log /dev/gptid/[gptid_of_nvd0p1]
 

soulburn

Member
Joined
Jul 6, 2014
Messages
100
Typically you should add the SLOG to the pool by its gptid, but otherwise it looks good.
Code:
gpart create -s GPT nvd0
gpart add -t freebsd-zfs -a 4k -s 16G nvd0
# run `glabel status` and find the gptid of nvd0p1
zpool add [poolname] log /dev/gptid/[gptid_of_nvd0p1]
Thanks for the info. I will remake it as you suggested. What I was using was just based on forum posts.
 