ZIL / SLOG necessary?

Status
Not open for further replies.

Tom85768

Cadet
Joined
Jun 28, 2017
Messages
9
Hi,

I'm using a virtualized FreeNAS VM to serve up NFS for our vSphere 6.5 environment. The VM is built on top of a Dell R730 with 12 x 1.4TB SSDs and 2 x 800GB write-intensive SSDs. I've sized the VM with 4 vCPUs and 200GB RAM, and used 12 separate 1.5TB VMDKs to create a RAID 10 8TB volume. Performance appears excellent.

Is there any benefit to configuring a SLOG or ZIL with a configuration entirely based on enterprise SSDs and 200GB of physical RAM? (I have 512GB available but wanted to stay well within a NUMA boundary.)

Cheers

Tom
 

SweetAndLow

Sweet'NASty
Joined
Nov 6, 2013
Messages
6,421
Tom85768 said:
used 12 separate 1.5TB VMDKs to create a RAID 10 8TB volume

Data loss is in your future. Rethink your setup before messing with a SLOG.

Sent from my Nexus 5X using Tapatalk
 

Tom85768

Cadet
Joined
Jun 28, 2017
Messages
9
Many thanks for your verbose reply. Would you care to expand upon what you think could be the issue? Is it the reliability of the hardware you are concerned about, or the RAID 10 configuration? All configuration was performed with the initial setup wizard.
 

NZ_JJ

Dabbler
Joined
May 25, 2017
Messages
28
Do a forum search on virtualizing FreeNAS.
You'll find out what @SweetAndLow means.
Using virtual disks under ZFS is a VERY bad idea. You will lose your data.
 

SweetAndLow

Sweet'NASty
Joined
Nov 6, 2013
Messages
6,421
Tom85768 said:
Would you care to expand upon what you think could be the issue?
You have to expose full disks to ZFS. I thought this would have been common knowledge, since you have very high-end hardware and are doing this for a company.

It seems like you are using VMDKs and creating a ZFS pool with them.

Have you read the noob slide show? You can also read all the links in my signature. It will be very educational.

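One quick way to sanity-check whether the FreeNAS guest is seeing real disks or virtualized ones is from the guest's shell. This is only a sketch; the device name is an example, and it assumes the FreeBSD tools that ship with FreeNAS:

```shell
# List devices as the CAM layer sees them; disks presented as VMDKs
# typically identify themselves as "VMware Virtual disk" rather than
# by their real vendor/model string.
camcontrol devlist

# SMART data is only available when the guest has the real disk;
# against a VMDK this will fail or return nothing useful.
smartctl -i /dev/da1
```

If smartctl can't read the drive, ZFS can't see disk health or honour cache-flush semantics reliably either, which is the crux of the objection here.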
 

scrappy

Patron
Joined
Mar 16, 2017
Messages
347
SweetAndLow said:
Data loss is in your future. Rethink your setup before messing with a SLOG.

I agree. Virtualized disks being used for ZFS is a big no-no. You must provide direct access to the physical disk(s) from your VMware host. Very recently another forum member posted about an unreadable zpool after a power loss. He, too, was using virtualized disks instead of direct pass-through.
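For readers in the same position, if PCI passthrough of the HBA isn't possible, a physical-mode RDM at least hands the guest the raw device instead of a VMDK. A sketch only; the disk identifier and datastore path below are placeholders, not values from this thread:

```shell
# On the ESXi host: create a physical-mode (pass-through) RDM mapping
# file for the raw disk, then attach that .vmdk to the FreeNAS VM as
# an existing disk.
vmkfstools -z /vmfs/devices/disks/naa.XXXXXXXXXXXXXXXX \
    /vmfs/volumes/datastore1/freenas/disk1-rdm.vmdk
```

Passing through the whole storage controller via VT-d remains the recommended approach, since even a physical RDM still interposes the hypervisor's storage stack between ZFS and the drive.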
 

Tom85768

Cadet
Joined
Jun 28, 2017
Messages
9
Interesting; what exactly is it that makes you think this? I've been using virtual disks with Nexenta ZFS and Openfiler storage systems for years and have never experienced any issues whatsoever. VT-d passthrough is not an option for us, unfortunately. As per the following article, scrubbing is disabled...

http://www.freenas.org/blog/yes-you-can-virtualize-freenas/
 

SweetAndLow

Sweet'NASty
Joined
Nov 6, 2013
Messages
6,421
Tom85768 said:
I've been using virtual disks with Nexenta ZFS and Openfiler storage systems for years and have never experienced any issues whatsoever.
Ha, this is not just some random person on the internet's opinion. This is me stating the facts on how it works and doesn't work. If you did this on other systems, I can tell you it was wrong there also.

This information is clearly documented by every ZFS developer and systems administrator.

Your performance will suffer, data reliability will suffer, and data integrity will suffer.

 

Tom85768

Cadet
Joined
Jun 28, 2017
Messages
9
Hi SweetAndLow,

Firstly, let me thank you for taking the time to reply to my original post. I think the question I asked was whether a ZIL/SLOG is necessary on a system with 200GB of RAM allocated and enterprise SSD drives. Do you have any information regarding that?

Secondly, let me explain that the FreeNAS system I have developed is for a staging environment only, not anything in production. It will be used until we get vSAN implemented, at which point the contents of the NFS exports will be migrated across to that hyperconverged storage.

The third thing I wanted to cover was the technical details. Firstly, you state performance will suffer. Do you care to elaborate on that at all? What is it that makes you think a virtual disk performs worse than a physical disk? Is this something relating to ZFS itself? If so, why do we not have problems with Nexenta, the Solaris-based ZFS implementation? The virtualised FreeNAS solution I have set up is providing 2 NFS exports to approximately 8 ESXi hosts. Pretty soon there will be over 100 running VMs, so I will be more than happy to post some performance details in this thread, but so far it looks very promising. Please also remember the reason the system is virtualised is that FreeNAS on bare metal was not stable at all, as I detailed in my previous thread.

As regards data reliability and integrity, again I need to understand the details. Obviously power loss is potentially a problem for any storage system, which is why we use UPS-conditioned power. Is there something particular to the FreeBSD-based ZFS implementation in FreeNAS that means a virtual disk is unreliable? Given FreeNAS themselves are happy to release a statement saying FreeNAS can be virtualised, I am left wondering what the reasons for your reliability/integrity statement are. If you don't have the details and are just repeating verbatim what you have heard elsewhere, then please don't, as it's the technical details which we wish to discuss here. If you don't have these or don't know, then perhaps someone else does, in which case they may wish to reply. As for the clear documentation which explains the 'technical details', where is this exactly?

Cheers
 

BigDave

FreeNAS Enthusiast
Joined
Oct 6, 2013
Messages
2,479
Tom85768 said:
VT-d passthrough is not an option for us unfortunately.
I don't know if you're interested, but here is another piece of information.
My advice is to read all of it very carefully.
https://forums.freenas.org/index.ph...nas-in-production-as-a-virtual-machine.12484/
 

SweetAndLow

Sweet'NASty
Joined
Nov 6, 2013
Messages
6,421
Tom85768 said:
I think the question I asked was whether a ZIL/SLOG is necessary on a system with 200GB of RAM allocated and enterprise SSD drives.
A SLOG is beneficial when doing sync writes, which happen when using NFS with ESXi.

As for all the other stuff, I'll let you risk your career and do your own research. I don't want to type up all the reasons when you can do a Google search and get 10k results that agree with my opinion.

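For anyone following along, checking whether sync writes are in play and attaching a SLOG looks roughly like this. A sketch only; the pool name `tank` and the device names are assumptions, not taken from the thread:

```shell
# NFS traffic from ESXi is issued as sync writes; confirm the
# dataset honours them (default "standard" respects client requests).
zfs get sync tank

# Attach a mirrored SLOG so the in-flight ZIL isn't lost if one
# log device dies.
zpool add tank log mirror da12 da13

# Verify the log vdev now appears in the pool layout.
zpool status tank
```

Worth noting for the original question: plenty of RAM does not remove the need for this, because sync writes must reach stable storage before ZFS can acknowledge them; a SLOG only moves that commit off the data vdevs, it is not a write cache.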
 
Last edited:

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
I think you mean sync writes. Async writes wouldn't use it at all.

SweetAndLow

Sweet'NASty
Joined
Nov 6, 2013
Messages
6,421
danb35 said:
I think you mean sync writes. Async writes wouldn't use it at all.
Yes, and fixed.

 

IceBoosteR

Guru
Joined
Sep 27, 2016
Messages
503