Can't remove log device

Status
Not open for further replies.

deasmi

Dabbler
Joined
Mar 21, 2013
Messages
14
When troubleshooting some performance issues I added a log volume to one of my ZFS pools.

If relevant this is running as a VM in ESXi.
Pool drives are connected via direct path I/O on LSI card
Log drive is a VMWare virtual disk ( I know, that's why it needs to go )
freenas# zpool status
  pool: vol1
 state: ONLINE
status: The pool is formatted using a legacy on-disk format. The pool can
        still be used, but some features are unavailable.
action: Upgrade the pool using 'zpool upgrade'. Once this is done, the
        pool will no longer be accessible on software that does not support feature
        flags.
  scan: scrub repaired 0 in 2h47m with 0 errors on Sun Nov 3 02:47:46 2013
config:

        NAME                                            STATE     READ WRITE CKSUM
        vol1                                            ONLINE       0     0     0
          mirror-0                                      ONLINE       0     0     0
            gptid/f3c13654-844d-11e2-b543-005056b32aa7  ONLINE       0     0     0
            gptid/f4087e3d-844d-11e2-b543-005056b32aa7  ONLINE       0     0     0
          mirror-1                                      ONLINE       0     0     0
            gptid/0e55d215-844e-11e2-b543-005056b32aa7  ONLINE       0     0     0
            gptid/0e8dfc82-844e-11e2-b543-005056b32aa7  ONLINE       0     0     0
        logs
          da1                                           ONLINE       0     0     0

errors: No known data errors
So I issue:

zpool remove vol1 da1

After a few seconds the prompt comes back.

# zpool history vol1
..
2013-11-10.09:59:40 zpool remove vol1 da1


However, the log device is still there!
freenas# zpool status vol1

  pool: vol1
 state: ONLINE
status: The pool is formatted using a legacy on-disk format. The pool can
        still be used, but some features are unavailable.
action: Upgrade the pool using 'zpool upgrade'. Once this is done, the
        pool will no longer be accessible on software that does not support feature
        flags.
  scan: scrub repaired 0 in 2h47m with 0 errors on Sun Nov 3 02:47:46 2013
config:

        NAME                                            STATE     READ WRITE CKSUM
        vol1                                            ONLINE       0     0     0
          mirror-0                                      ONLINE       0     0     0
            gptid/f3c13654-844d-11e2-b543-005056b32aa7  ONLINE       0     0     0
            gptid/f4087e3d-844d-11e2-b543-005056b32aa7  ONLINE       0     0     0
          mirror-1                                      ONLINE       0     0     0
            gptid/0e55d215-844e-11e2-b543-005056b32aa7  ONLINE       0     0     0
            gptid/0e8dfc82-844e-11e2-b543-005056b32aa7  ONLINE       0     0     0
        logs
          da1                                           ONLINE       0     0     0

errors: No known data errors

freenas#

I'm clearly doing something very thick, if someone could tell me what I'd be very grateful.
 

deasmi

Dabbler
Joined
Mar 21, 2013
Messages
14
I have run a zpool upgrade on this volume just in case that was causing the issue with no change.
 

DrKK

FreeNAS Generalissimo
Joined
Oct 15, 2013
Messages
3,630
It looks like you added the log disk from the command-line interface instead of the GUI (which is why you don't have a gptid alias). Generally speaking, you should always do anything from the GUI that can be done from the GUI. That is, after all, the entire purpose, in some sense, of FreeNAS.

Also, there appears to be some evidence that you didn't follow all of the carefully curated advice in this document about virtualization.

You probably wouldn't be having this problem if you did this the "right way" by following the manual and the caveats in the virtualization document these guys prepared.

I can't explain why it doesn't remove when you try to (obviously, it should), but what you might do to get rid of the device is shut down the FreeNAS VM entirely and physically disconnect the da1 disk. Once you bring the system back up, da1 will be offline, and you should be able to remove it from the pool. Hopefully.
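A rough sketch of that workaround, assuming the log device still shows up as da1 (these commands need a live pool and root, so treat this as illustrative only):

```shell
# After booting with the virtual disk disconnected, the log vdev should
# no longer show as ONLINE. Confirm, then try to drop it from the pool:
zpool status vol1

# Standalone log devices are removed with 'zpool remove'
# ('zpool detach' only applies to members of a mirror):
zpool remove vol1 da1

# If the pool still objects, explicitly offline the device first:
zpool offline vol1 da1
zpool remove vol1 da1
```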
 

deasmi

Dabbler
Joined
Mar 21, 2013
Messages
14
I did actually follow every single piece of advice in that thread, or rather I followed lots of advice in different threads before that one was written.

And then I thought, let's add a VMDK-backed ZIL as a test, which was in hindsight stupid.
I was trying to solve persistent iSCSI issues with Linux initiators, where adding a ZIL was a suggested fix.

In any event seems this is a (semi-known) bug with ZFS, I've imported the pools on an OpenIndiana build and a FreeBSD build as well to the same effect. It may well be gptid related.
( http://comments.gmane.org/gmane.os.solaris.opensolaris.zfs/45525 for example )

I'm now copying all the data onto a different pool; I'll then destroy this zpool and start again. If I'm still getting the iSCSI issues, it might be time to try Solaris/Linux instead.
 

deasmi

Dabbler
Joined
Mar 21, 2013
Messages
14
Quick update on this.

Once I'd got all the data off and destroyed the zpool I re-created it without the zil.

I then added the ZIL with the GUI and was able to remove it, re-add it, and remove it again without issues.

So as DrKK suggested, adding from the command line seems to be the cause, or more likely the lack of a gptid caused by adding a naked device name.
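For reference, roughly what the GUI does differently when adding a log device: it partitions the disk first and adds the labeled partition, not the raw device. A hedged sketch (the label name 'slog0' is made up for illustration; a real setup would use the gptid alias the system generates):

```shell
# Put a GPT on the disk and create a labeled freebsd-zfs partition:
gpart create -s gpt da1
gpart add -t freebsd-zfs -l slog0 da1

# Add the partition, not the raw disk, as the log vdev:
zpool add vol1 log gpt/slog0

# Removal then uses the same name that 'zpool status' shows:
zpool remove vol1 gpt/slog0
```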
 

DrKK

FreeNAS Generalissimo
Joined
Oct 15, 2013
Messages
3,630
Two things:

1) You want the gptid alias stuff, because you can then change the drives around physically and plug them into different ports or whatever, and the pool is independent of which interface the devices are on. If you add them from the command line, you'd be screwed if you swapped or changed which drive(s) are connected where. You lose nothing by having the gptid, and you gain this much, at least, if not other things. So it's either good or neutral to add all devices from the GUI, and never bad, as far as I can tell. So it's a no-lose situation. ;)

2) There's not much point at all to using FreeNAS if you are not going to do as much as possible from the GUI...after all, FreeNAS with just the CLI is basically what you'd get on FreeBSD itself, right? All of the code for doing FreeBSD things via the FreeNAS GUI has been tested and retested and retested by the devs of FreeNAS. When you're just using the CLI you don't have the benefit of their understanding and testing of the procedures that are governed by the GUI.

Anyway, glad you got it fixed.
 

Dusan

Guru
Joined
Jan 29, 2013
Messages
1,165
1) You want the gptid alias stuff, because you can then change the drives around physically and plug them into different ports or whatever, and the pool is independent of which interface the devices are on. If you add them from the command line, you'd be screwed if you swapped or changed which drive(s) are connected where.
Not really. ZFS itself has no problem using device names. The vdev labels contain all information it needs to properly import a pool even if you switch ports. You definitely would not be screwed if you used device names and then moved the drives around. The only thing that can get confused without GPTIDs is the FreeNAS GUI.
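The labels Dusan mentions can be inspected directly with zdb; a sketch, assuming the member device is da1 (needs a live ZFS system):

```shell
# Each pool member carries vdev labels describing the pool it belongs to;
# this is how ZFS re-identifies drives after they move between ports.
zdb -l /dev/da1
# The output includes the pool name, pool and vdev GUIDs, and the last
# known device path, which the import code uses to reassemble the pool.
```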
 

DrKK

FreeNAS Generalissimo
Joined
Oct 15, 2013
Messages
3,630
Alright. I stand corrected.

But the GUI will be all screwed up. :)
 