How do I get my FreeNAS 8 to forget about a mirrored FreeNAS 7 volume?

Status
Not open for further replies.

electricd7

Explorer
Joined
Jul 16, 2012
Messages
81
Hello,

I recently installed FreeNAS 8 and re-used some disks from a FreeNAS 7 installation. I was able to successfully build a 6-disk RAIDZ2 ZFS pool with the disks, and all seems to be working OK. The problem is that when I shut down the system, it does its thing, and at some point it says something about destroying a "vol1" GEOM RAID1 set, or something to that effect. I have seen references to that volume in a log file as well. The volume "vol1" does not exist anymore (as far as I know), since the disks have been re-used in the RAIDZ2 pool. How do I get FreeNAS to forget about that volume so that it is really gone? I never imported this volume on the FreeNAS 8 installation. Thanks in advance!
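For background, gmirror stores its metadata in the last sector of each member disk, which is why a re-used disk can still "remember" a FreeNAS 7 mirror across a reinstall. A quick way to see what GEOM still knows about, from the FreeNAS 8 shell (a sketch; nothing here is destructive):

# Any assembled mirrors: name, state, components
gmirror status
# Full per-mirror and per-disk detail
gmirror list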

ED7
 

electricd7

Explorer
Joined
Jul 16, 2012
Messages
81
OK, now I really need help. I spoke a little too soon. When I look in my web GUI, I see that my ZFS pool is degraded: the disks that were part of the original vol1 RAID1 set are no longer part of the pool. The volume manager shows the two disks that were in the RAID1 set as available (ada4 and ada5). I can't do anything with them, but if I go to auto-import volume, I see an option for vol1, yet I am unable to import it. Help!
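From the shell, the pool's own view of the missing devices can be checked directly (standard FreeBSD tools; ada4 and ada5 are the disk names from this post):

# Per-device state for every imported pool
zpool status -v
# Partition layout on the two suspect disks
gpart show ada4 ada5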
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
I'm kind of wondering if you actually have redundancy. It almost sounds like two of your drives were never set up as part of the new zpool. Are you sure you are running a RAIDZ2, and is your total disk space correct for your drives?
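Both are quick to check from the shell (standard zpool commands; exact output varies a little by version):

# The layout should show a single raidz2 vdev holding all six disks
zpool status
# SIZE counts parity, so a 6-disk raidz2 should report roughly
# 6x one disk raw, with about 4 disks' worth actually usable
zpool list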
 

electricd7

Explorer
Joined
Jul 16, 2012
Messages
81
Well, I am unsure of what happened. I started with all 6 disks as part of a RAIDZ2 pool. Then, after a couple of reboots for unrelated issues, the pool reports it is degraded and the vol1 volume is available for importing again. I just don't know what needs to be done to wipe out the ada4 and ada5 drives, which were part of the RAID1 set, then later part of my RAIDZ2 pool, and are now reporting that they are in the RAID1 set again. I have my data backed up to an attached USB drive, so I am not above wiping everything and starting over, but I already did that once and it still remembers the old vol1 RAID1 set.
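For reference, the usual shell-level way to make GEOM forget a stale mirror is to stop it and clear the on-disk metadata. A sketch, assuming the data on these disks is expendable or backed up (gmirror clear overwrites the last sector of each disk; verify the member names with gmirror status first):

# Stop the stale mirror device if the kernel has assembled it
gmirror stop vol1
# Erase the gmirror metadata from each former member
gmirror clear ada4
gmirror clear ada5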
 

electricd7

Explorer
Joined
Jul 16, 2012
Messages
81
OK, so I did some more looking and found that vol1 existed again within the OS. I ran gmirror list at the prompt and got the following:

Geom name: vol1
State: COMPLETE
Components: 2
Balance: round-robin
Slice: 4096
Flags: NONE
GenID: 0
SyncID: 4
ID: 642294463
Providers:
1. Name: mirror/vol1
   Mediasize: 2000398933504 (1.8T)
   Sectorsize: 512
   Mode: r0w0e0
Consumers:
1. Name: da0
   Mediasize: 2000398934016 (1.8T)
   Sectorsize: 512
   Mode: r1w1e1
   State: ACTIVE
   Priority: 1
   Flags: NONE
   GenID: 0
   SyncID: 4
   ID: 366372751
2. Name: da1
   Mediasize: 2000398934016 (1.8T)
   Sectorsize: 512
   Mode: r1w1e1
   State: ACTIVE
   Priority: 0
   Flags: NONE
   GenID: 0
   SyncID: 4
   ID: 717348480

That's no good; it shouldn't be there. These are, in fact, the two disks shown as missing from my ZFS pool, so I ran the following two commands:

gmirror remove vol1 da0
gmirror remove vol1 da1

After doing so, gmirror list returned nothing, and I was then able to perform a wipe on both disks from within the GUI. Now here is the latest problem: when I go to view volumes and then volume status, I see two disks marked as missing. I click "replace", point it at the first disk, click OK, and get the following:

Error: Disk replacement failed: "invalid vdev specification, use '-f' to override the following errors:, /dev/gptid/5ea47a7b-dc19-11e1-baa7-002590760c9d is part of active pool 'pool1', "

What do I do now??
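For reference, the error message itself points at the override. The shell equivalent would be roughly the following, where <old-device> is a placeholder for whichever missing member zpool status reports; force only once you are sure the gptid (copied here from the error above) really is the replacement disk:

# Confirm which device is being replaced
zpool status pool1
# Force the replacement past the stale "active pool" label check
zpool replace -f pool1 <old-device> gptid/5ea47a7b-dc19-11e1-baa7-002590760c9d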
 

electricd7

Explorer
Joined
Jul 16, 2012
Messages
81
Never mind... a reboot cleared the errors and the disks reattached automatically. gmirror list still returns no results. Maybe I got it this time. :)
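A few commands to confirm the cleanup stuck (same tools as above; the scrub is optional but verifies the pool end to end):

# Should print nothing now that the mirror is gone
gmirror status
# All six disks should show ONLINE with no errors
zpool status pool1
# Optional integrity pass over the whole pool
zpool scrub pool1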
 