ZFS Volume state is unknown


Robert Trevellyan

Pony Wrangler
Joined
May 16, 2014
Messages
3,778
Would the best way of ascertaining which drives are associated with which pool be to use glabel status?
I was going to say, to find the bad drive, move the 3-drive RAIDZ1 pool to another box and use @Bidule0hm's script to match serial numbers with gptids (or just look at them in View Disks), but if you can't even remember which drives belong to that pool, you'll have to move all 7.
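For reference, the matching can also be done by hand from the shell. A rough sketch (ada2 here is just a placeholder for whichever device node your disks show up as):

[root@freenas] ~# glabel status                            # lists gptid labels and the adaXpY devices behind them
[root@freenas] ~# smartctl -i /dev/ada2 | grep -i serial   # serial number of one of those devices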
 

BRIT

Dabbler
Joined
May 26, 2016
Messages
11
ok ... so here's the latest (thanks to those who are still reading this...)

Long story short, I had an unbuilt server in my garage (6 years!) which was built yesterday. It's up and running with the latest version of FreeNAS, and with some test disks I can confirm that all is working. I inserted all six of the 3TB drives from the ProBox and kept my fingers crossed. Unfortunately, FreeNAS hangs (or rather a couple of the disks hang / time out on the CAM status) to the point that FreeNAS doesn't boot. So, I took #4, #5 and #6 out to see if that helped. Nope. Pulling #1 - #3 out and inserting #4 - #6, however, got FreeNAS up and running.

After logging in, I was successful in importing one of my pools!! :) The volume is reported as "degraded" but at least I'm getting somewhere, at least with this volume. It's my intention to put in 3 x 4TB disks and then (hopefully?) try to move the 3 x 3TB (old) drive data onto the new 3 x 4TB drive volume. Is that possible through the GUI, or is that a lower-level operation? I see that others have done this (here) - would it make sense to get the volume into a "non-degraded" state before attempting to migrate the data over? Is that (last) link the best way of doing things?
 

Robert Trevellyan

Pony Wrangler
Joined
May 16, 2014
Messages
3,778
try to move the 3 x 3TB (old) drive data onto the new 3 x 4TB drive volume. Is that possible through the GUI, or is that a lower-level operation? I see that others have done this (here) - would it make sense to get the volume into a "non-degraded" state before attempting to migrate the data over? Is that (last) link the best way of doing things?
I suggest the following:
  1. Add at least one disk to restore redundancy to the pool, using the directions for Replacing a Failed Drive.
  2. Follow the directions for Replacing Drives to Grow a ZFS Pool to get the 4TB drives installed.
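The GUI is the supported way to do both steps, but for orientation, the underlying ZFS operations look roughly like this (the pool name "tank" and the device names are placeholders; FreeNAS normally partitions the new disk and refers to it by gptid rather than a raw device):

[root@freenas] ~# zpool replace tank <old-member-guid-or-gptid> /dev/ada6   # resilver onto the new disk
[root@freenas] ~# zpool status tank                                         # watch resilver progress
[root@freenas] ~# zpool set autoexpand=on tank                              # once all members are 4TB, let the pool grow

With RAIDZ1 you replace one member at a time and wait for each resilver to finish before starting the next.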
 

BRIT

Dabbler
Joined
May 26, 2016
Messages
11
All,
First and foremost, thank you for all your continued support. After 8.5 hours the 4TB drive completed resilvering. I've put in another 4TB drive and started the replacement of one of the other 3TB drives. Once that's completed, I'll pop in a third 4TB drive and replace the last of the 3TB drives. I guess that way I'll also be increasing the overall size of the volume too. However, I'm already able to read the data from the volume through FreeNAS.

:) Thank you all!! :)

One thing that I read about, but am not seeing, is that the drives I selected to "replace" aren't put into an offline state. Is that supposed to happen (using 9.10-STABLE)? As it happens, I'll know which drives to pull since they'll be the 3TB drives, but I don't want to screw anything up in FreeNAS by pulling them out while it thinks they're still in use.
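From what I can see in zpool status while a replacement is running, the old and new disks both sit under a temporary "replacing" vdev, something along these lines (names below are illustrative, not my actual output):

[root@freenas] ~# zpool status
          raidz1-0                        ONLINE
            replacing-0                   ONLINE
              gptid/<old-3TB-member>      ONLINE
              gptid/<new-4TB-member>      ONLINE  (resilvering)
            gptid/<other-member>          ONLINE

so perhaps the old disk only drops out once the resilver completes?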

...

Now - is there anything that I can do with the "first" set of 3 x 3TB drives? I can post the more verbose errors that I'm getting when FreeNAS boots, in the hope that - perhaps - there's something miraculous I can do. Perhaps set up the drives / configure FreeNAS to reduce timeouts or "ignore" some errors, in the hope that I can at least get FreeNAS to attempt to retrieve whatever it can?
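For example, would a forced read-only import be worth attempting if I can get the box to boot with those drives attached? I'm thinking of something along these lines (just a guess on my part, so please shout if this is a bad idea):

[root@freenas] ~# zpool import                                  # see whether the pool is visible at all
[root@freenas] ~# zpool import -f -o readonly=on VMware_vol1    # forced, read-only, so nothing is written to the suspect disks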
 

BRIT

Dabbler
Joined
May 26, 2016
Messages
11
They're supposed to be in "VMware_vol1". This was the initial set of three drives (#1 - #3). I've done some checking: with drives #1 and #2 plugged in, the server will boot. When #3 is plugged in, the system never really boots and keeps reporting timeout issues, so it's apparent that #3 is the (most) problematic drive. In the hope that there's something I can do with two of the three drives, I've plugged in another 4TB drive along with #1 and #2, and the system boots. When I run "zpool import" I get:


[root@freenas] ~# zpool import
   pool: VMware_vol1
     id: 6121196609444076099
  state: UNAVAIL
 status: The pool was last accessed by another system.
 action: The pool cannot be imported due to damaged devices or data.
    see: http://illumos.org/msg/ZFS-8000-EY
 config:

        VMware_vol1                                     UNAVAIL  insufficient replicas
          raidz1-0                                      UNAVAIL  insufficient replicas
            14244343138776996751                        UNAVAIL  cannot open
            gptid/240db837-f5cd-11e4-a479-00151795dfaa  ONLINE
            5254766704247237793                         UNAVAIL  cannot open
[root@freenas] ~#


I'm guessing that I'm SOL?
 

Robert Trevellyan

Pony Wrangler
Joined
May 16, 2014
Messages
3,778
If you have two working drives from a 3-disk RAIDZ1 pool, you should be able to import the pool. I guess one of the following is true:
  1. The two drives that don't prevent the machine from booting are not from the same pool.
  2. Two of the three drives from that pool are dead.
If it's #1, one of the other drives from the original seven might get that pool up and running.
If it's #2, the pool is lost.
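If you want to check which pool a particular drive thinks it belongs to, you can read its ZFS label directly. Roughly (da0p2 is just an example; point it at the data partition of the drive in question, output trimmed):

[root@freenas] ~# zdb -l /dev/da0p2
...
    name: 'VMware_vol1'
    pool_guid: 6121196609444076099
...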
 

Robert Trevellyan

Pony Wrangler
Joined
May 16, 2014
Messages
3,778
        VMware_vol1                                     DEGRADED
          raidz1-0                                      DEGRADED
            gptid/2360c7e7-f5cd-11e4-a479-00151795dfaa  ONLINE
            gptid/240db837-f5cd-11e4-a479-00151795dfaa  ONLINE
            5254766704247237793                         UNAVAIL  cannot open
You need to find the drive whose gptid starts with "236".
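To map that gptid back to a physical disk and its serial number, something like this should do it (the ada2 device and the serial shown are placeholders):

[root@freenas] ~# glabel status | grep 2360c7e7
gptid/2360c7e7-f5cd-11e4-a479-00151795dfaa     N/A  ada2p2
[root@freenas] ~# smartctl -i /dev/ada2 | grep -i serial
Serial Number:    WD-WCC4NXXXXXXX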
 