zpool import not showing all devices


rporter117

Cadet
Joined
Mar 22, 2016
Messages
5
Something's up with my NAS after it moved with me across town. Suddenly none of the drives connected to one of the controllers is detected, even though both controllers show up in lspci. At least 12 of the 20 bays still work, which is enough to import my 10-disk array. Edit: reseating the connections lets me access all of my disks at once now. I may still have a bad SAS cable.
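For anyone hitting the same symptom, a few commands that are handy for confirming the HBAs are alive and seeing which disks the OS actually enumerates (run from the FreeNAS shell):
Code:
sas2flash -listall      # list the LSI SAS2 HBAs and their firmware (if the sas2flash utility is present)
dmesg | grep -i mps     # check that the mps driver attached to each controller
camcontrol devlist      # list every disk FreeBSD can currently see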

I'm running a fresh, unconfigured install of the latest FreeNAS 9.10.

Hardware:
Intel S5000PSL motherboard
2x Xeon 5150
32GB ECC memory
2x Dell PERC H200 crossflashed to LSI IT-mode firmware
16x Seagate ST31000340NS, 10 of which are in the mirrored array. These drives have a known firmware issue, but I haven't seen any problems with them... yet

Initially, a few disks didn't show up when I issued the zpool import command. I didn't think much of this since this is not a production pool and the data isn't tremendously important; I'd just rather not have to redownload a few terabytes. I wanted to grab about 60 GB off of it, but it was transferring at an abysmal 2 MB/s. I thought this could be because I was doing it over SFTP, but top showed ssh using a minimal amount of CPU while the I/O-wait category sat at a whopping 75%. It's worth noting that I was running the latest updated Fedora Server with ZFS on Linux at the time.
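For what it's worth, a quicker way to have seen whether one disk was dragging the whole pool down would have been to watch per-device I/O, something like:
Code:
zpool iostat -v poolparty 5   # per-vdev throughput and ops, refreshed every 5 seconds
iostat -dx 5                  # per-disk utilization on the Linux side (sysstat package)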

I shut it down to deal with it later. When I turned it back on, the import failed to find a mirror pair. After trying many different things, I returned to FreeNAS. Here is the result of zpool import:
Code:
#zpool import -f -m -F -n
   pool: poolparty
	 id: 3245950555948954969
  state: UNAVAIL
status: The pool was last accessed by another system.
action: The pool cannot be imported due to damaged devices or data.
   see: http://illumos.org/msg/ZFS-8000-EY
config:

	poolparty									   UNAVAIL  missing device
	 gptid/8e64ca66-64e0-11e5-bd8b-0015176425ac	ONLINE (ada0)
	 da5p2										 ONLINE
	 mirror-2									  DEGRADED
	   10757988070345942881						UNAVAIL  cannot open
	   gptid/923a8769-64e0-11e5-bd8b-0015176425ac  ONLINE (da4)
	 mirror-4									  ONLINE
	   da2p2									   ONLINE
	   da3p2									   ONLINE

OK...

Well, not so OK. To make sure the array drives hadn't been swapped around in the move, I examined the rest of the drives with zdb, looking for any with the matching pool name and ID. These two are the only ones that turn up anything related. Some of the other drives had previously been used in a raidz array and, as I found out, IOPS aren't so great with that layout. I have a creeping feeling that I may have given away a couple of the wrong disks. Hopefully that's not a huge deal, since the array imported fine before.
Code:
#zdb -l /dev/da0p2
--------------------------------------------
LABEL 0
--------------------------------------------
	version: 5000
	name: 'poolparty'
	state: 0
	txg: 3034666
	pool_guid: 3245950555948954969
	errata: 0
	hostname: 'mememachine.kewryan.com'
	top_guid: 16062381461016655633
	guid: 16062381461016655633
	hole_array[0]: 3
	vdev_children: 5
	vdev_tree:
		type: 'disk'
		id: 0
		guid: 16062381461016655633
		path: '/dev/disk/by-uuid/3245950555948954969'
		whole_disk: 1
		metaslab_array: 39
		metaslab_shift: 33
		ashift: 12
		asize: 998051414016
		is_log: 0
		create_txg: 4
	features_for_read:
		com.delphix:hole_birth
		com.delphix:embedded_data

<snip redundancies for brevity>

#zdb -l /dev/da1p2
--------------------------------------------
LABEL 0
--------------------------------------------
	version: 5000
	name: 'poolparty'
	state: 0
	txg: 3031861
	pool_guid: 3245950555948954969
	hostid: 2283479323
	hostname: ''
	top_guid: 8003677620862493554
	guid: 11734464301926509782
	hole_array[0]: 3
	vdev_children: 5
	vdev_tree:
		type: 'mirror'
		id: 1
		guid: 8003677620862493554
		metaslab_array: 37
		metaslab_shift: 33
		ashift: 12
		asize: 998052462592
		is_log: 0
		create_txg: 4
		children[0]:
			type: 'disk'
			id: 0
			guid: 11734464301926509782
			path: '/dev/gptid/8fba9f8d-64e0-11e5-bd8b-0015176425ac'
			whole_disk: 1
			create_txg: 4
		children[1]:
			type: 'disk'
			id: 1
			guid: 3389215055776677682
			path: '/dev/gptid/9090d808-64e0-11e5-bd8b-0015176425ac'
			whole_disk: 1
			create_txg: 4
	features_for_read:
		com.delphix:hole_birth
		com.delphix:embedded_data

<snip redundancies for brevity>
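The sweep was essentially a loop like this, run from an sh shell (device globs approximate):
Code:
# dump the ZFS labels of every candidate data partition and keep only the identifying lines
for d in /dev/da*p2 /dev/ada*p2; do
    echo "== $d =="
    zdb -l $d | grep -E 'name|pool_guid'
done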

This is the point where I'm at a loss as to where to go from here. Why doesn't the import include these two other drives? What more information can I provide to help you help me?
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
I take it you tried reseating all the cards and reforming all the connections?
 

SweetAndLow

Sweet'NASty
Joined
Nov 6, 2013
Messages
6,421
Use code tags for all output. Tell us what you think your pool configuration should be. Also, did you double-check that everything is plugged in? Your pool is pretty much a mess and has had who knows what done to it. Word to the wise: if you can't fix it yourself, don't mess with it like you have.
 

depasseg

FreeNAS Replicant
Joined
Sep 16, 2014
Messages
2,874
Which version of FreeNAS are you running?
 

rporter117

Cadet
Joined
Mar 22, 2016
Messages
5
I take it you tried reseating all the cards and reforming all the connections?
I moved things around some more and now I'm able to get all my drives detected at once. I may have one bad SAS cable, though.

Use code tags for all output. Tell us what you think your pool configuration should be. Also, did you double-check that everything is plugged in? Your pool is pretty much a mess and has had who knows what done to it. Word to the wise: if you can't fix it yourself, don't mess with it like you have.

Edited. Now that I'm remembering what I did initially: I did detach drives from the pool to attach another to it. That still doesn't explain why the last vdev is missing after a reboot. Yes, the pool is a mess. I'd like to learn how to recover from my mistake.

Which version of FreeNAS are you running?
Fresh, unconfigured install of FreeNAS 9.10
 

Robert Trevellyan

Pony Wrangler
Joined
May 16, 2014
Messages
3,778
I was running the latest updated Fedora server with ZFS on Linux.
Do you mean the pool was not created on FreeNAS?
the pool is a mess. I'd like to learn how to recover from my mistake.
In the zpool import output you posted above, I see two mirror vdevs and two single-disk vdevs. This means that if either of the single-disk vdevs fails, the pool is lost. You would need to attach another disk to each single-disk vdev, turning them into mirrors, to mitigate this risk. However, there appear to be devices missing. In the end, you'll probably be best off starting over with a new pool.
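For reference, attaching a second disk to one of the single-disk vdevs from the command line would look something like this (the new device name is a placeholder; on FreeNAS it's generally better to do this through the GUI volume manager, which handles the partitioning and gptid labels for you):
Code:
# turn the single-disk vdev into a mirror by attaching a new device to it
zpool attach poolparty gptid/8e64ca66-64e0-11e5-bd8b-0015176425ac daXp2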
now I'm able to get all my drives detected
Then please post new output so we can see the current situation.
I did detach drives from the pool to attach another to it
What does this mean?
 

rporter117

Cadet
Joined
Mar 22, 2016
Messages
5
Do you mean the pool was not created on FreeNAS?
It was created in FreeNAS earlier this year, possibly late last year. I installed Fedora so I could put the OS on mirrored SSDs with the SLOG on them. A previous attempt at this in FreeNAS resulted in an intermittent, bizarre, and cryptic GRUB error, but that's beyond the scope of this thread.

Then please post new output so we can see the current situation.
There will be a lot of output, since some of these drives were used in previous, now-retired pools:
Code:
#zpool import -f -m -F -n
   pool: dogepool
	 id: 12182144821564397731
  state: FAULTED
status: The pool was last accessed by another system.
action: The pool cannot be imported due to damaged devices or data.
	The pool may be active on another system, but can be imported using
	the '-f' flag.
   see: http://illumos.org/msg/ZFS-8000-EY
config:

	dogepool				 FAULTED  corrupted data
	 mirror-0			   DEGRADED
	   7976856621905995257  UNAVAIL  cannot open
	   da3				  ONLINE

   pool: poolparty
	 id: 3245950555948954969
  state: UNAVAIL
status: One or more devices are missing from the system.
action: The pool cannot be imported. Attach the missing
	devices and try again.
   see: http://illumos.org/msg/ZFS-8000-6X
config:

	poolparty									   UNAVAIL  missing device
	 gptid/8e64ca66-64e0-11e5-bd8b-0015176425ac	ONLINE
	 da0p2										 ONLINE
	 mirror-2									  DEGRADED
	   10757988070345942881						UNAVAIL  cannot open
	   gptid/923a8769-64e0-11e5-bd8b-0015176425ac  ONLINE
	 mirror-4									  ONLINE
	   da6p2									   ONLINE
	   da4p2									   ONLINE

	Additional devices are known to be part of this pool, though their
	exact configuration cannot be determined.

   pool: poolparty
	 id: 3388780484617072003
  state: UNAVAIL
status: The pool was last accessed by another system.
action: The pool cannot be imported due to damaged devices or data.
   see: http://illumos.org/msg/ZFS-8000-EY
config:

	poolparty				 UNAVAIL  insufficient replicas
	 raidz1-0				UNAVAIL  insufficient replicas
	   da11p1				ONLINE
	   7046375561975078261   UNAVAIL  cannot open
	   2213831689996095185   UNAVAIL  cannot open
	   11360718170492271791  UNAVAIL  cannot open
	   4601240247631472665   UNAVAIL  cannot open
	 raidz1-2				UNAVAIL  insufficient replicas
	   12825610289144384010  UNAVAIL  cannot open
	   ada3p1				ONLINE
	   8601242925406628546   UNAVAIL  cannot open
	   9400635500860036573   UNAVAIL  cannot open
	   da10p1				ONLINE

   pool: poolparty
	 id: 2341966408268005875
  state: UNAVAIL
status: The pool was last accessed by another system.
action: The pool cannot be imported due to damaged devices or data.
   see: http://illumos.org/msg/ZFS-8000-EY
config:

	poolparty				 UNAVAIL  insufficient replicas
	 raidz1-0				UNAVAIL  insufficient replicas
	   10443595734973985474  UNAVAIL  cannot open
	   6769943795411980695   UNAVAIL  corrupted data
	   17193190293968205878  UNAVAIL  cannot open
	   18124187498660179067  UNAVAIL  corrupted data
	   13055435399542577683  UNAVAIL  cannot open
	   1103471849868477066   UNAVAIL  cannot open
	   7299350896374739444   UNAVAIL  cannot open
	   da11				  ONLINE
[ryan@freenas] /% sudo import -f -m -F -n
Password:
sudo: import: command not found
[ryan@freenas] /% sudo zpool import -f -m -F -n
   pool: dogepool
	 id: 12182144821564397731
  state: FAULTED
status: The pool was last accessed by another system.
action: The pool cannot be imported due to damaged devices or data.
	The pool may be active on another system, but can be imported using
	the '-f' flag.
   see: http://illumos.org/msg/ZFS-8000-EY
config:

	dogepool				 FAULTED  corrupted data
	 mirror-0			   DEGRADED
	   7976856621905995257  UNAVAIL  cannot open
	   da3				  ONLINE

   pool: poolparty
	 id: 3245950555948954969
  state: UNAVAIL
status: One or more devices are missing from the system.
action: The pool cannot be imported. Attach the missing
	devices and try again.
   see: http://illumos.org/msg/ZFS-8000-6X
config:

	poolparty									   UNAVAIL  missing device
	 gptid/8e64ca66-64e0-11e5-bd8b-0015176425ac	ONLINE
	 da0p2										 ONLINE
	 mirror-2									  DEGRADED
	   10757988070345942881						UNAVAIL  cannot open
	   gptid/923a8769-64e0-11e5-bd8b-0015176425ac  ONLINE
	 mirror-4									  ONLINE
	   da6p2									   ONLINE
	   da4p2									   ONLINE

	Additional devices are known to be part of this pool, though their
	exact configuration cannot be determined.

   pool: poolparty
	 id: 3388780484617072003
  state: UNAVAIL
status: The pool was last accessed by another system.
action: The pool cannot be imported due to damaged devices or data.
   see: http://illumos.org/msg/ZFS-8000-EY
config:

	poolparty				 UNAVAIL  insufficient replicas
	 raidz1-0				UNAVAIL  insufficient replicas
	   da11p1				ONLINE
	   7046375561975078261   UNAVAIL  cannot open
	   2213831689996095185   UNAVAIL  cannot open
	   11360718170492271791  UNAVAIL  cannot open
	   4601240247631472665   UNAVAIL  cannot open
	 raidz1-2				UNAVAIL  insufficient replicas
	   12825610289144384010  UNAVAIL  cannot open
	   ada3p1				ONLINE
	   8601242925406628546   UNAVAIL  cannot open
	   9400635500860036573   UNAVAIL  cannot open
	   da10p1				ONLINE

   pool: poolparty
	 id: 2341966408268005875
  state: UNAVAIL
status: The pool was last accessed by another system.
action: The pool cannot be imported due to damaged devices or data.
   see: http://illumos.org/msg/ZFS-8000-EY
config:

	poolparty				 UNAVAIL  insufficient replicas
	 raidz1-0				UNAVAIL  insufficient replicas
	   10443595734973985474  UNAVAIL  cannot open
	   6769943795411980695   UNAVAIL  corrupted data
	   17193190293968205878  UNAVAIL  cannot open
	   18124187498660179067  UNAVAIL  corrupted data
	   13055435399542577683  UNAVAIL  cannot open
	   1103471849868477066   UNAVAIL  cannot open
	   7299350896374739444   UNAVAIL  cannot open
	   da11				  ONLINE

What does this mean?
When the pool was briefly but successfully imported before, I removed the missing members and tried attaching other drives to recreate the mirror vdevs. That threw a "device is too small" error despite the partitions having the same number of sectors. I shut it down to deal with it later, never imagining that the pool wouldn't import on the next boot.
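For comparing the exact partition sizes on FreeNAS before an attach, something along these lines shows the byte and sector counts (daX is a placeholder for the disk that was refused):
Code:
diskinfo -v /dev/da1p2        # sector size, media size in bytes, media size in sectors
diskinfo -v /dev/daXp2        # the partition that was rejected as too small
gpart list da1 | grep Mediasize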
 

SweetAndLow

Sweet'NASty
Joined
Nov 6, 2013
Messages
6,421
[quoted: the full zpool import output from the post above]

Omg what have you done!?
 

Bidule0hm

Server Electronics Sorcerer
Joined
Aug 5, 2013
Messages
3,710
This must win the "biggest number of failed pools on one server" award...
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
And are all your drives online now?

How about unplugging the ones which don't belong to, say, one of the pools, and just trying to get one pool up at a time?

Try importing a single pool at a time, by ID:
Code:
zpool import <id>
zpool import -f <id>

etc.

Part of the reason gptid is used instead of device names is to help prevent some of this.
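If you want to poke around in the one pool you actually care about without risking further writes, a read-only import to an alternate root is also an option, something like this (the mountpoint is just an example):
Code:
zpool import -f -o readonly=on -R /mnt/recovery <id>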
 

Robert Trevellyan

Pony Wrangler
Joined
May 16, 2014
Messages
3,778
Removing the repeats and summarizing, here's what I see. Others may see things differently.
Code:
   pool: dogepool
	 id: 12182144821564397731
  state: FAULTED
status: The pool was last accessed by another system.
action: The pool cannot be imported due to damaged devices or data.

was a single mirrored pair vdev, one disk is missing, remaining disk is corrupted -> lost pool, but possibly some data recoverable with photorec or similar
Code:
   pool: poolparty
	 id: 3245950555948954969
  state: UNAVAIL
status: One or more devices are missing from the system.
action: The pool cannot be imported. Attach the missing
	devices and try again.

was at least two mirrored pair vdevs plus at least two single-disk vdevs, at least one entire vdev is missing -> lost pool, recovery unlikely but maybe photorec or similar can salvage some scraps from individual disks
Code:
   pool: poolparty
	 id: 3388780484617072003
  state: UNAVAIL
status: The pool was last accessed by another system.
action: The pool cannot be imported due to damaged devices or data.

was two RAIDZ1 vdevs, most disks missing -> lost pool, no chance of recovery
Code:
   pool: poolparty
	 id: 2341966408268005875
  state: UNAVAIL
status: The pool was last accessed by another system.
action: The pool cannot be imported due to damaged devices or data.

was a single RAIDZ1 vdev, all but one disk missing -> lost pool, no chance of recovery
 

rporter117

Cadet
Joined
Mar 22, 2016
Messages
5
This must win the "biggest number of failed pools on one server" award...
This is the output with all drives connected at once. The previous pools are from testing out ZFS initially and trying different pool configurations. Originally this was intended to be purely a NAS and used a single raidz vdev to hold my data, but then my requirements changed and I moved over to a striped raidz layout. IOPS still weren't adequate, so I moved to mirrored pairs. I must not have destroyed those pools when I was done with them; I just removed the disks.
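Once I've sorted out which disks are genuinely free, I gather the stale labels from the retired test pools can be wiped with something like the following (destructive, so only after double-checking the device; daXp1 is a placeholder):
Code:
zdb -l /dev/daXp1                 # confirm this label belongs to one of the dead test pools
zpool labelclear -f /dev/daXp1    # then wipe the stale ZFS label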

And are all your drives online now?
How about unplugging the ones which don't belong to, say, one of the pools, and just trying to get one pool up at a time?
Try importing a single pool at a time, by ID:
zpool import <id>
zpool import -f <id>
etc.
Part of the reason gptid is used instead of device names is to help prevent some of this.
I don't need any of the pools other than poolparty, ID 3245950555948954969. That pool was created and later expanded using the FreeNAS GUI, so I have no idea why it shows a mix of gptids and device names.

Removing the repeats and summarizing, here's what I see. Others may see things differently.
Code:
   pool: dogepool
	 id: 12182144821564397731

was a single mirrored pair vdev, one disk is missing, remaining disk is corrupted -> lost pool, but possibly some data recoverable with photorec or similar
That's fine as that's an old, unused pool

Code:
   pool: poolparty
	 id: 3388780484617072003

was two RAIDZ1 vdevs, most disks missing -> lost pool, no chance of recovery
This is fine. It's also an old pool I don't need

Code:
   pool: poolparty
	 id: 2341966408268005875

was a single RAIDZ1 vdev, all but one disk missing -> lost pool, no chance of recovery
Again fine, don't need this one.

Code:
   pool: poolparty
	 id: 3245950555948954969

was at least two mirrored pair vdevs plus at least two single-disk vdevs, at least one entire vdev is missing -> lost pool, recovery unlikely but maybe photorec or similar can salvage some scraps from individual disks
This is the pool I want. With the default blocksize, photorec isn't going to get me any useful data back. Again, my problem here is that after a reboot, a drive decided to drop out of an otherwise importable pool.




Taking from this thread, here's some more diagnostic output:
Code:
[root@freenas ~]# zpool status -v
  pool: freenas-boot
 state: ONLINE
  scan: none requested
config:

	NAME										  STATE	 READ WRITE CKSUM
	freenas-boot								  ONLINE	   0	 0	 0
	 gptid/54ddffcd-81db-11e6-bbf6-0015176425ac  ONLINE	   0	 0	 0

errors: No known data errors
[root@freenas ~]# camcontrol devlist
<ATA ST31000340NS 300J>			at scbus0 target 1 lun 0 (pass0,da0)
<ATA ST31000340NS 300J>			at scbus0 target 9 lun 0 (pass1,da1)
<ATA ST31000340NS 300J>			at scbus0 target 10 lun 0 (pass2,da2)
<ATA ST31000340NS 300J>			at scbus0 target 11 lun 0 (pass3,da3)
<ATA ST31000340NS 300J>			at scbus1 target 2 lun 0 (pass4,da4)
<ATA ST31000340NS 300J>			at scbus1 target 3 lun 0 (pass5,da5)
<ATA ST31000340NS 300J>			at scbus1 target 4 lun 0 (pass6,da6)
<ATA ST31000340NS 300J>			at scbus1 target 5 lun 0 (pass7,da7)
<ATA ST31000340NS 300J>			at scbus1 target 12 lun 0 (pass8,da8)
<ATA ST31000340NS 300J>			at scbus1 target 14 lun 0 (pass9,da9)
<ATA ST31000340NS 300J>			at scbus1 target 15 lun 0 (pass10,da10)
<ATA ST31000340NS 300J>			at scbus1 target 18 lun 0 (da11,pass11)
<ST31000340NS 300J>				at scbus5 target 0 lun 0 (pass12,ada0)
<ST31000340NS 300J>				at scbus6 target 0 lun 0 (pass13,ada1)
<ST31000340NS 300J>				at scbus7 target 0 lun 0 (pass14,ada2)
<ST31000340NS 300J>				at scbus8 target 0 lun 0 (pass15,ada3)
< USB DISK 2.0 PMAP>			   at scbus10 target 0 lun 0 (pass16,da12)
[root@freenas ~]# glabel status
									  Name  Status  Components
						 gpt/NetBSD%20swap	 N/A  ada0p1
gptid/a97e7487-6037-4f8b-8e84-292519e89adf	 N/A  ada0p1
						 gpt/FreeBSD%20ZFS	 N/A  ada0p2
gptid/26617c1a-1dfd-46ed-a338-6bdb68c86041	 N/A  ada0p2
						  gpt/Linux%20RAID	 N/A  ada2p1
gptid/bb57793c-2d3e-4131-95a8-74b0cda30203	 N/A  ada2p1
						  gpt/Linux%20swap	 N/A  ada2p2
gptid/780d644e-b871-4714-a4ba-141882566f66	 N/A  ada2p2
				  gpt/zfs-d9fd95814adbf782	 N/A  ada3p1
gptid/6572ea7e-baa0-42e4-989d-b7a5fb09d9ef	 N/A  ada3p1
gptid/cded0013-f6bb-4658-a9c9-4cbdab23057e	 N/A  ada3p9
gptid/90745712-64e0-11e5-bd8b-0015176425ac	 N/A  da0p1
gptid/921c7d70-64e0-11e5-bd8b-0015176425ac	 N/A  da1p1
gptid/923a8769-64e0-11e5-bd8b-0015176425ac	 N/A  da1p2
gptid/8eda11a3-64e0-11e5-bd8b-0015176425ac	 N/A  da2p1
gptid/8eee5322-64e0-11e5-bd8b-0015176425ac	 N/A  da2p2
gptid/508c4943-96f6-11e5-9540-0015176425ac	 N/A  da4p1
gptid/8e51e535-64e0-11e5-bd8b-0015176425ac	 N/A  da5p1
gptid/8e64ca66-64e0-11e5-bd8b-0015176425ac	 N/A  da5p2
gptid/4a9ffa12-96f6-11e5-9540-0015176425ac	 N/A  da6p1
gptid/8f9e591b-64e0-11e5-bd8b-0015176425ac	 N/A  da7p1
gptid/8fba9f8d-64e0-11e5-bd8b-0015176425ac	 N/A  da7p2
gptid/1539e4f9-5732-4fcd-8cdc-55dd88f24eab	 N/A  da9p1
gptid/9b71b451-0670-4dda-842c-94b72c6b76d3	 N/A  da9p2
				  gpt/zfs-17ed40ec8c2a3b05	 N/A  da10p1
gptid/ecb9c37a-7274-9046-a696-685aba16c4b4	 N/A  da10p1
gptid/e8033ed5-0160-b742-9473-a0e833aa8040	 N/A  da10p9
gptid/54b87a37-81db-11e6-bbf6-0015176425ac	 N/A  da12p1
gptid/54ddffcd-81db-11e6-bbf6-0015176425ac	 N/A  da12p2
gptid/9090d808-64e0-11e5-bd8b-0015176425ac	 N/A  da0p2
gptid/4b35c8a8-96f6-11e5-9540-0015176425ac	 N/A  da6p2
gptid/51218b55-96f6-11e5-9540-0015176425ac	 N/A  da4p2
								   gpt/zfs	 N/A  da11p1
gptid/ea831e6f-52a0-ca4b-a563-5692251063db	 N/A  da11p1
gptid/482f3f96-934a-cf4f-948e-aa1b6ed820fa	 N/A  da11p9
[root@freenas ~]# gpart show
=>		34  1953525101  ada0  GPT  (932G)
		  34		2014		- free -  (1.0M)
		2048	 4192384	 1  netbsd-swap  (2.0G)
	 4194432		1920		- free -  (960K)
	 4196352  1949328776	 2  freebsd-zfs  (930G)
  1953525128		   7		- free -  (3.5K)

=>		34  1953525101  ada2  GPT  (932G)
		  34		2014		- free -  (1.0M)
		2048  1953125001	 1  linux-raid  (931G)
  1953127049		1399		- free -  (700K)
  1953128448	  396687	 2  linux-swap  (194M)

=>		34  1953525101  ada3  GPT  (932G)
		  34		2014		- free -  (1.0M)
		2048  1953505280	 1  !6a898cc3-1dd2-11b2-99a6-080020736631  (932G)
  1953507328	   16384	 9  !6a945a3b-1dd2-11b2-99a6-080020736631  (8.0M)
  1953523712		1423		- free -  (712K)

=>		34  1953525101  da0  GPT  (932G)
		  34		  94	   - free -  (47K)
		 128	 4194304	1  freebsd-swap  (2.0G)
	 4194432  1949330696	2  freebsd-zfs  (930G)
  1953525128		   7	   - free -  (3.5K)

=>		34  1953525101  da1  GPT  (932G)
		  34		  94	   - free -  (47K)
		 128	 4194304	1  freebsd-swap  (2.0G)
	 4194432  1949330696	2  freebsd-zfs  (930G)
  1953525128		   7	   - free -  (3.5K)

=>		34  1953525101  da2  GPT  (932G)
		  34		  94	   - free -  (47K)
		 128	 4194304	1  freebsd-swap  (2.0G)
	 4194432  1949330696	2  freebsd-zfs  (930G)
  1953525128		   7	   - free -  (3.5K)

=>		34  1953525101  da4  GPT  (932G)
		  34		  94	   - free -  (47K)
		 128	 4194304	1  freebsd-swap  (2.0G)
	 4194432  1949330696	2  freebsd-zfs  (930G)
  1953525128		   7	   - free -  (3.5K)

=>		34  1953522988  da5  GPT  (932G)
		  34		  94	   - free -  (47K)
		 128	 4194304	1  freebsd-swap  (2.0G)
	 4194432  1949328584	2  freebsd-zfs  (930G)
  1953523016		   6	   - free -  (3.0K)

=>		34  1953525101  da6  GPT  (932G)
		  34		  94	   - free -  (47K)
		 128	 4194304	1  freebsd-swap  (2.0G)
	 4194432  1949330696	2  freebsd-zfs  (930G)
  1953525128		   7	   - free -  (3.5K)

=>		34  1953525101  da7  GPT  (932G)
		  34		  94	   - free -  (47K)
		 128	 4194304	1  freebsd-swap  (2.0G)
	 4194432  1949330696	2  freebsd-zfs  (930G)
  1953525128		   7	   - free -  (3.5K)

=>		34  1953525101  da9  GPT  (932G)
		  34		2014	   - free -  (1.0M)
		2048  1953125001	1  linux-raid  (931G)
  1953127049		1399	   - free -  (700K)
  1953128448	  396687	2  linux-swap  (194M)

=>		34  1953525101  da10  GPT  (932G)
		  34		2014		- free -  (1.0M)
		2048  1953505280	 1  !6a898cc3-1dd2-11b2-99a6-080020736631  (932G)
  1953507328	   16384	 9  !6a945a3b-1dd2-11b2-99a6-080020736631  (8.0M)
  1953523712		1423		- free -  (712K)

=>	  34  15126461  da12  GPT  (7.2G)
		34	  1024	 1  bios-boot  (512K)
	  1058		 6		- free -  (3.0K)
	  1064  15125424	 2  freebsd-zfs  (7.2G)
  15126488		 7		- free -  (3.5K)

=>		34  1953525101  da11  GPT  (932G)
		  34		2014		- free -  (1.0M)
		2048  1953505280	 1  !6a898cc3-1dd2-11b2-99a6-080020736631  (932G)
  1953507328	   16384	 9  !6a945a3b-1dd2-11b2-99a6-080020736631  (8.0M)
  1953523712		1423		- free -  (712K)
 

Robert Trevellyan

Pony Wrangler
Joined
May 16, 2014
Messages
3,778
my problem here is that after a reboot, a drive decied to drop out from an otherwise importable pool.
There's no way around a missing vdev. If this drive dropping out is what leads to the above-shown (unrecoverable) status of 3245950555948954969, then it appears your only hope is to prevent said drive from dropping. Most likely that's a hardware problem. Perhaps you can clone the drive with something like Gnu ddrescue.
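A minimal sketch of what that might look like (device names are placeholders; ddrescue is available from a Linux live environment or the sysutils/ddrescue port):
Code:
# first pass: copy everything readable, keeping a map of bad areas
ddrescue -f -n /dev/daFAILING /dev/daSPARE rescue.map
# second pass: retry the bad areas a few times
ddrescue -f -r3 /dev/daFAILING /dev/daSPARE rescue.map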
 

SweetAndLow

Sweet'NASty
Joined
Nov 6, 2013
Messages
6,421
There should be awards for this kind of stuff. This is unreliable and amazing at the same time.

 

rporter117

Cadet
Joined
Mar 22, 2016
Messages
5
There's no way around a missing vdev. If this drive dropping out is what leads to the above-shown (unrecoverable) status of 3245950555948954969, then it appears your only hope is to prevent said drive from dropping. Most likely that's a hardware problem. Perhaps you can clone the drive with something like Gnu ddrescue.
I'll see what I can scrape up.

There should be awards for this kind of stuff. This is unreliable and amazing at the same time.
Thanks. Care to photoshop it?
 