L2ARC removed but still being accessed

Status
Not open for further replies.

Teva

Dabbler
Joined
May 16, 2014
Messages
10
I removed an SSD L2ARC drive from my encrypted pool but was still unable to reuse it on the NAS server (dd and the like were failing to write to the drive). I ended up physically removing the drive from the server to wipe it for use in another computer.
The problem is that now, when the NAS is rebooted and the pool mounts, it looks like it is still trying to use the SSD that is no longer present. I don't see the SSD anymore in zpool status, and it also didn't show up after I issued the zpool remove command, before I turned off the server.

The command I used, since the web interface wouldn't let me:
zpool remove core gptid/54e44e7e-2031-11e4-9708-00248182cc52.eli
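
For anyone trying the same thing, the full removal-and-wipe sequence would normally look roughly like this from the shell. This is only a sketch: ada3 is a placeholder, not my actual disk, and the geli detach step is my guess at why dd was failing (the encrypted provider may still have been sitting on top of the raw device).

Code:
# remove the cache (L2ARC) device from the pool
zpool remove core gptid/54e44e7e-2031-11e4-9708-00248182cc52.eli

# confirm the cache vdev no longer shows up
zpool status core

# detach the geli provider so the underlying disk is writable again
geli detach gptid/54e44e7e-2031-11e4-9708-00248182cc52

# only now wipe the disk (ada3 is a placeholder device name)
gpart destroy -F ada3
dd if=/dev/zero of=/dev/ada3 bs=1m count=100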

On boot I see this in the logs:
Sep 15 18:14:25 freenas manage.py: [middleware.notifier:1271] Failed to geli attach gptid/54e44e7e-2031-11e4-9708-00248182cc52: geli: Cannot open gptid/54e44e7e-2031-11e4-9708-00248182cc52: No such file or directory.

Other than that entry in the log, the server is working fine. Is there a conf file somewhere that is still referencing the now-missing drive?
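
If I understand it right, FreeNAS 9.x keeps this kind of state in its SQLite configuration database rather than a flat conf file, so a read-only check like the one below (assuming the usual /data/freenas-v1.db location and that the sqlite3 tool is on the box) should show whether the old gptid is still referenced somewhere. It only inspects the database and doesn't change anything.

Code:
# dump the config database and look for any reference to the removed gptid
sqlite3 /data/freenas-v1.db .dump | grep 54e44e7e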

The last scrub was canceled because I didn't want to wait the 19 hours for it to finish before removing the drive. Scrubs have run weekly for the last few months without any errors, so I'm not too worried about missing a week.
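
(For reference, stopping a running scrub is just the -s flag on zpool scrub, or the equivalent button in the GUI.)

Code:
# stop the scrub currently running on the pool
zpool scrub -s core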

FreeNAS-9.2.1.5-RELEASE-x64 (80c1d35)

Code:
[root@freenas] /# zpool status
  pool: core
state: ONLINE
status: Some supported features are not enabled on the pool. The pool can
        still be used, but some features are unavailable.
action: Enable all features using 'zpool upgrade'. Once this is done,
        the pool may no longer be accessible by software that does not support
        the features. See zpool-features(7) for details.
  scan: scrub canceled on Sun Sep 14 21:57:25 2014
config:

        NAME                                                STATE     READ WRITE CKSUM
        core                                                ONLINE       0     0     0
          raidz2-0                                          ONLINE       0     0     0
            gptid/c3b616c5-6396-11e3-88d5-6805ca0e7899.eli  ONLINE       0     0     0
            gptid/c43807a7-6396-11e3-88d5-6805ca0e7899.eli  ONLINE       0     0     0
            gptid/c4cd861c-6396-11e3-88d5-6805ca0e7899.eli  ONLINE       0     0     0
            gptid/7a39da5a-9728-11e3-8c4d-6805ca0e7899.eli  ONLINE       0     0     0
          raidz2-1                                          ONLINE       0     0     0
            gptid/df8c3d92-dc85-11e3-b5d2-00248182cc52.eli  ONLINE       0     0     0
            gptid/dffa53a8-dc85-11e3-b5d2-00248182cc52.eli  ONLINE       0     0     0
            gptid/e07ef30e-dc85-11e3-b5d2-00248182cc52.eli  ONLINE       0     0     0
            gptid/e10fbebf-dc85-11e3-b5d2-00248182cc52.eli  ONLINE       0     0     0

errors: No known data errors
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
You shouldn't be running zpool commands from the command line. The manual explains how to do this properly from the GUI. The problem is that CLI changes happen behind the web GUI's back, and that gets very ugly.
 

Teva

Dabbler
Joined
May 16, 2014
Messages
10
I kinda figured that would be your response :smile:
No advice, though? Would adding the drive back via the command line and then using the GUI to remove it fix my blunder? Or perhaps exporting the pool and then importing it?
Just looking for some insight.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
The only "good advice" I can give is to try exporting and importing. Of course, that's basically what a reboot does. I'm not completely sure what you've actually done, so I'm not sure how to fix it (and heck, the fix may be "go hack the database", which never goes well).
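
If you do go the export/import route, the raw commands are just the two below, but bear in mind that with an encrypted pool the gptid/*.eli providers have to be geli-attached before the import can find the disks; that is (as far as I know) exactly what the GUI's detach and auto-import workflow handles for you.

Code:
# export the pool (anything using it has to be stopped first)
zpool export core

# re-import it; the geli providers must already be attached,
# otherwise the vdevs won't be visible
zpool import core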
 

Teva

Dabbler
Joined
May 16, 2014
Messages
10
OK, if it helps anyone else, it seems I just wasn't patient enough. No more errors after doing the following:

shut down the NAS
reattach the previous SSD L2ARC (no external changes to the drive since it was removed from the pool)
boot the NAS
decrypt the pool
shut down the NAS
remove the SSD L2ARC drive from the server
boot the NAS
decrypt the pool
no more errors in /var/log/messages

I didn't add it back to the pool from the command line; just having it attached/detached and rebooting a few times seems to have cleared up the orphaned entries.
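
To double-check that nothing stale was left behind, a few read-only commands are enough: the log should be free of the failed geli attach message, zpool status should show no cache vdev, and geli status should list only the eight raidz providers.

Code:
# no more failed geli attach messages since the last boot
grep -i geli /var/log/messages

# pool healthy, no cache vdev listed
zpool status core

# only the pool's own .eli providers attached, no orphan
geli status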

-Don't use the command line
 