Removing ZIL from zpool

Status
Not open for further replies.

Dave Genton

Contributor
Joined
Feb 27, 2014
Messages
133
With the current release (9.3), are we yet able to remove a ZIL device from a zpool in production? I used a 128 GB SSD for the ZIL, however I've been watching the Reporting tool since the 9.3 update and it appears the ZIL is not being touched.

Thanks in advance,
Dave
 
L

Guest
I just did it on my test box running a single NFS client and it came out just fine.

Are you running NFS or iSCSI?
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,525
Yet? You've been able to for as long as I've been involved with this project...
 

Dave Genton

Contributor
Joined
Feb 27, 2014
Messages
133
I just did it on my test box running a single NFS client and it came out just fine.

Are you running NFS or iSCSI?

Thanks for the info. I'm running both NFS and iSCSI for the VMware server, and my work laptop also uses an iSCSI LUN for backups and work data storage.

What command did you use?

dave
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,525
You don't do it from the command line. The WebGUI has the ability to remove the slog. Check out the FreeNAS documentation, as it explains how to remove drives from a pool. ;)
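
For reference, the WebGUI removal corresponds to an ordinary ZFS log-vdev removal underneath. A rough sketch of that operation from the shell, using a hypothetical pool name ('tank') and a hypothetical SLOG device label ('gpt/slog0'), would look like this; removing a log vdev does not touch the data in the pool:

zpool status tank              # the SSD should be listed under its own 'logs' section
zpool remove tank gpt/slog0    # detach the hypothetical log device from the pool
zpool status tank              # the 'logs' section should now be gone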
 

Dave Genton

Contributor
Joined
Feb 27, 2014
Messages
133
Yet? You've been able to for as long as I've been involved with this project...

Well, excuse my lack of history with FreeBSD, but I fully disclosed to you where my expertise lies, and *nix OS types are mostly familiar to me because routers and switches use variants as well. My Google search, PRIOR to taking any precious time away from board members, included results that at the very top stated FreeNAS cannot CURRENTLY allow removal... blah blah blah, then went on to say something about it possibly being a bug or misconfiguration, etc. The exact build version was not readily available, and I concluded my skimming of the forum posting Google brought to my attention. Hopefully this justification will result in far fewer unproductive postings such as this one. Or maybe you just haven't been involved very long; I don't care, but if I see the document again when I have a moment to recreate the same search, I will certainly post it up here for you to be advised of, Cyberjock.

d-
 

Dave Genton

Contributor
Joined
Feb 27, 2014
Messages
133
I didn't recall that, but then again I didn't make it home to sleep last night. I will do so on my dinner break, as these servers are all purring like kittens and, with IPMI & KVM, there's no need to remain here :) Love that. I am NOT a server person, but I'm setting up these servers to improve my skill set as well as lab up my SAN, storage fabric, and unified compute labs with converged infrastructure. So forgive my ignorance, but I'll check it out since it's still open on my desk monitor.

On the side: if the ZIL is showing zero hits and the ARC hit report is showing something like 99%, then the ZIL SSD is just drawing power and not needed at this time, correct? Basically, the system doesn't see the IOPS or load required to use the ZIL? I only added it because of heavy VMware ESXi usage, with some VMs booting from iSCSI LUNs hosted on FreeNAS besides the typical NFS shares.

thx again,
dave
 

depasseg

FreeNAS Replicant
Joined
Sep 16, 2014
Messages
2,874
Do some more reading (like Cyberjock's guide to ZFS) on the ZIL. Its sole purpose in life is to satisfy the sync write requirement in case there is a power loss. That's it. If you aren't doing sync writes, then nothing will hit the ZIL.
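
To illustrate that point: the sync behavior can be forced per dataset, and with a hypothetical pool/dataset name the sketch below pushes every write through the ZIL (and therefore the SLOG), which is an easy way to confirm the SSD is actually being exercised:

zfs set sync=always tank/vmware    # hypothetical dataset; all writes now go through the ZIL
zfs get sync tank/vmware           # confirm the property took effect
zfs inherit sync tank/vmware       # revert to the inherited/default behavior afterwards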
 
L

Guest
It is important to retest everything when rev'ing versions.

I am confused that you aren't hitting the ZIL. Maybe the traffic is so small that it isn't showing well? If you like the command line, try a 'zpool iostat -v 1' to see if there is anything going to the ZIL. I would tell you to use zilstat, but it is broken in 9.3; most of the dtrace scripts I have tried are broken in this version.
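
To expand on that: 'zpool iostat -v' prints per-vdev statistics, and a SLOG shows up under its own 'logs' heading. Assuming a hypothetical pool named 'tank':

zpool iostat -v tank 1    # refresh every second; stop with Ctrl+C

If the write operations and bandwidth columns next to the log device stay at zero while a sync-heavy workload (NFS from ESXi, for example) is running, then nothing is reaching the SLOG.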
 
L

Guest
On the side: if the ZIL is showing zero hits and the ARC hit report is showing something like 99%, then the ZIL SSD is just drawing power and not needed at this time, correct? Basically, the system doesn't see the IOPS or load required to use the ZIL? I only added it because of heavy VMware ESXi usage, with some VMs booting from iSCSI LUNs hosted on FreeNAS besides the typical NFS shares.

thx again,
dave

If the ZIL is showing no hits, I would actually wait until we find out whether they are using zilstat underneath in 9.3. It is broken.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,525
The test to find out if you are doing sync writes is easy:

1. Are you using NFS?
2. Do you have any pools or datasets on which you've manually set sync=always?

If the answer to both is "no" then you have no sync writes.
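
A quick way to check point 2 across a whole pool is to list the sync property recursively and show only values that were set by hand (pool name here is hypothetical):

zfs get -r -s local sync tank    # prints only datasets where sync was manually overridden

If that prints nothing and nothing is served over NFS, then by the test above there should be no sync writes hitting the SLOG.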
 

Dave Genton

Contributor
Joined
Feb 27, 2014
Messages
133
The test to find out if you are doing sync writes is easy:

1. Are you using NFS?
2. Do you have any pools or datasets on which you've manually set sync=always?

If the answer to both is "no" then you have no sync writes.

Well agreed, and that's why I put the ZIL in to begin with. VMware, enough said, with their sync writes for everything, but yes they are the heaviest users of the NFS shares, for which performance sucked to the point that I installed the ZIL. I tested with sync writes disabled and could see a huge difference, but then re-enabled for obvious reasons. So I wonder how accurate the Reporting in 9.3 is and why the ZIL shows nil, no pun intended? Only the VMware servers use NFS; all users on this box use AFP shares. VMware and several Macs use iSCSI as well. My network prioritizes and queues NFS and iSCSI in the priority queues, with strict priorities set and DSCP markings kept intact in case they cross VLAN boundaries, but the traffic always stays within the site. It's only the VMware-to-FreeNAS path where the poor writes are seen, and they are dramatic to say the least. I'll run some NFS-based Storage vMotions tomorrow and pull the ZIL reporting then, before yanking the SSD.

How can I get better write performance by redesigning my disk pool(s)? Mirrors and stripes can increase performance, so I'm looking for an option using 8-10 disks to fit my server where I can have two vdevs to spread the reads/writes across, with striping plus parity within each vdev. Make sense?

dave
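
On the pool-layout question: two common ways to split 8 disks into two vdevs are two 4-disk RAIDZ1 vdevs (parity within each vdev, writes striped across both) or four 2-disk mirrors, which usually behave better for VM/iSCSI random I/O. In FreeNAS this would be built through the Volume Manager, but the equivalent ZFS layouts, with hypothetical device names, look roughly like:

zpool create tank raidz da0 da1 da2 da3 raidz da4 da5 da6 da7                     # two RAIDZ1 vdevs, striped
zpool create tank mirror da0 da1 mirror da2 da3 mirror da4 da5 mirror da6 da7     # four striped mirrors

More vdevs generally means more IOPS, which is why the mirrored layout tends to hold up better under sync-heavy VMware traffic.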
 
L

Guest
Is sync disabled on FreeNAS?
The test to find out if you are doing sync writes is easy:

1. Are you using NFS?
2. Do you have any pools or datasets on which you've manually set sync=always?

If the answer to both is "no" then you have no sync writes.

Is sync disabled on FreeNAS by default?
 
L

Guest
Well agreed, and that's why I put the ZIL in to begin with. VMware, enough said, with their sync writes for everything, but yes they are the heaviest users of the NFS shares, for which performance sucked to the point that I installed the ZIL. I tested with sync writes disabled and could see a huge difference, but then re-enabled for obvious reasons. So I wonder how accurate the Reporting in 9.3 is and why the ZIL shows nil, no pun intended? Only the VMware servers use NFS; all users on this box use AFP shares. VMware and several Macs use iSCSI as well. My network prioritizes and queues NFS and iSCSI in the priority queues, with strict priorities set and DSCP markings kept intact in case they cross VLAN boundaries, but the traffic always stays within the site. It's only the VMware-to-FreeNAS path where the poor writes are seen, and they are dramatic to say the least. I'll run some NFS-based Storage vMotions tomorrow and pull the ZIL reporting then, before yanking the SSD.

How can I get better write performance by redesigning my disk pool(s)? Mirrors and stripes can increase performance, so I'm looking for an option using 8-10 disks to fit my server where I can have two vdevs to spread the reads/writes across, with striping plus parity within each vdev. Make sense?

dave

Can you just run a pure iostat while you are running your tests? That should show us if the ZIL is being written to. I would SSH in while testing, although the SSH traffic may interfere slightly with the workload. Best if done on a second NIC.

#iostat -v 1

After a while, press Ctrl+C to stop it.
 
L

Guest
Another way (and what VMware recommends) is to make sure traffic is isolated: one NIC for the VMs, one for AFP.
 

sfcredfox

Patron
Joined
Aug 26, 2014
Messages
340
Is sync disabled on FreeNAS by default?
If this was a question:
No, it's set to sync=standard.
You can query this by doing a 'zfs get sync %poolname%'
Example: 'zfs get sync pool1'

The values are 'standard', 'always', and 'disabled'.

Not sure if that's what you were asking, or if you were answering something else?
 

Dave Genton

Contributor
Joined
Feb 27, 2014
Messages
133
Is sync disabled on FreeNAS?


Is sync disabled on FreeNAS by default?

No, I have set it to "disabled" in the past when doing large transfers. I was doing both iSCSI and NFS: typical NFS shares, plus many VMs resident on iSCSI LUNs that must boot across the network. When I disabled sync writes I could see the difference immediately, like night and day. Since the basis of this was for testing the network and nothing more, it's just a "tool" I like to use for better throughput of remote reads/writes. I always re-enable it before finishing up, so I know I don't forget and the servers are "safe" for use no matter what the task is the next time they are out in the field.
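
For anyone repeating that test, the toggle described above is just the sync property being flipped at the pool (or dataset) level; with a hypothetical pool name:

zfs set sync=disabled tank    # testing only: writes are acknowledged before they are safe on stable storage
zfs set sync=standard tank    # the default; honors sync requests from clients such as ESXi

Leaving sync=disabled in production risks losing the last few seconds of acknowledged writes on a crash or power loss, which is why it gets set back afterwards.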

That is the extent of what I have tweaked/disabled in FreeNAS, and I am speaking strictly about 9.2.x.x when I did this. But I'm pretty certain 9.3 wouldn't go putting data integrity on a back burner. I know VMware ESXi demands sync writes for data integrity, and this is why the poor performance between ESXi and FreeNAS over NFS is always a hot topic of conversation.

Beyond doing this just for testing and comparing, I really cannot complain about the performance of my FreeNAS in almost every situation, and I have about two dozen loader and sysctl entries I've accumulated over time. With all my network and TCP sysctl tweaks it fills all the links I want to turn up, but there is one anchor dragging, a "too damn slow to even tolerate" operation, and again it's confined to ESXi and FreeNAS specifically, and for the most part it seems to be only in one direction.

d-
 

sfcredfox

Patron
Joined
Aug 26, 2014
Messages
340
Have you seen a difference between NFS and iSCSI with regard to the sync performance?

I selected iSCSI when I set up my target. I was under the impression at the time (maybe a false one) that NFS wanted to do EVERYTHING as a sync coming out of ESX to its target. With iSCSI, I thought I read that it was more of a choice that ESX made based on the who/what. I don't have any articles to back that up, but I thought I read one of the ninjas on here explaining it.

My pool is set for sync=standard. I would rather do sync=always, but I don't have an appropriate hardware config for it yet (currently using a RAID controller cache with battery), so I have to choose between bad performance and bad data integrity. This is a test system, as you mentioned, so I'm willing to make the poorer of the two choices.

I keep switching between standard and always just to compare for fun; I'm curious what yours looks like.
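
A very rough way to feel the difference locally (illustrative only: the pool/dataset names are hypothetical, dd of zeros is not a real benchmark, and compression can skew it) is to time the same write burst under each setting:

zfs set sync=always tank/test
dd if=/dev/zero of=/mnt/tank/test/syncfile bs=128k count=8192    # ~1 GiB of writes, hypothetical path
zfs set sync=standard tank/test
dd if=/dev/zero of=/mnt/tank/test/syncfile bs=128k count=8192

With sync=always every write waits on the ZIL (and the SLOG, if present), so the gap between the two runs gives a sense of what the sync requirement costs.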
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,525
Is sync disabled on FreeNAS?


Is sync disabled on FreeNAS by default?

Heck no. That would mean we aren't in compliance with the applicable POSIX stuff. It would also mean that slogs and such would never work until you set sync=standard or sync=always. :P
 

Dave Genton

Contributor
Joined
Feb 27, 2014
Messages
133
Can you just run a pure iostat while you are running your tests? That should show us if the ZIL is being written to. I would SSH in while testing, although the SSH traffic may interfere slightly with the workload. Best if done on a second NIC.

#iostat -v 1

After a while, press Ctrl+C to stop it.

Thank you for the tip, but I cannot execute that command. What am I doing wrong? It says "iostat: illegal option -- v". I tried as sudo as well, via the shell locally and over remote SSH.
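
For what it's worth, that error is expected: the base FreeBSD iostat has no -v flag, so the command was presumably meant to be the zpool variant suggested earlier in the thread:

zpool iostat -v 1    # per-vdev statistics every second; the SLOG appears under the 'logs' heading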
 