HELP!!! CRITICAL: The volume datastore (ZFS) state is UNKNOWN

Not open for further replies.

dan101aba (Cadet, joined Sep 16, 2018, 6 messages)
Hi,
I need your help understanding how to restore my files after a ZFS missing-device error.
I have FreeNAS 9.3 booting from USB, with:
3x 4TB SATA drives (WD)
1x 250GB SSD
After getting an unretryable error on boot, I tried to reinstall a fresh version onto the USB stick, but by mistake I installed it onto the SSD instead.
After installing to USB and uploading my config file, I got the error below:
CRITICAL: The volume datastore (ZFS) state is UNKNOWN
I'm not sure what my ZFS configuration was, or whether the SSD (maybe a cache device?) was part of the pool.
Is there an option to restore the files from the 3 SATA disks?
See below some commands I ran to try to bring the pool up.
Also, when I tried to import with 11.2-BETA3, I got: cannot import 'datastore': no such pool or dataset. Destroy and re-create the pool from a backup source.
Code:
pool: datastore
	 id: 3130324954855864268
  state: UNAVAIL
status: The pool was last accessed by another system.
action: The pool cannot be imported due to damaged devices or data.
   see: http://illumos.org/msg/ZFS-8000-EY
config:

	datastore									   UNAVAIL  missing device
	  raidz1-1									  ONLINE
		gptid/54f3c43f-efd7-11e3-989a-e03f49ac0a05  ONLINE
		gptid/55be4974-efd7-11e3-989a-e03f49ac0a05  ONLINE
		gptid/568b922e-efd7-11e3-989a-e03f49ac0a05  ONLINE

   pool: freenas-boot
	 id: 1736337926885161230
  state: ONLINE
status: The pool is formatted using a legacy on-disk version.
action: The pool can be imported using its name or numeric identifier, though
	some features will not be available without an explicit 'zpool upgrade'.
config:

	freenas-boot  ONLINE
	  ada0p2	ONLINE

[root@freenas] /mnt# zpool import  -f datastore (also import -F -f datastore)
cannot import 'datastore': one or more devices is currently unavailable
[root@freenas] /mnt# glabel status
									  Name  Status  Components
gptid/f40c5b80-b67f-11e8-85f0-e03f49ac0a05	 N/A  ada0p1
gptid/f40d520f-b67f-11e8-85f0-e03f49ac0a05	 N/A  ada0p2
gptid/54e46e1b-efd7-11e3-989a-e03f49ac0a05	 N/A  ada1p1
gptid/54f3c43f-efd7-11e3-989a-e03f49ac0a05	 N/A  ada1p2
gptid/55af7227-efd7-11e3-989a-e03f49ac0a05	 N/A  ada2p1
gptid/55be4974-efd7-11e3-989a-e03f49ac0a05	 N/A  ada2p2
gptid/5679804c-efd7-11e3-989a-e03f49ac0a05	 N/A  ada3p1
gptid/568b922e-efd7-11e3-989a-e03f49ac0a05	 N/A  ada3p2
gptid/7988e498-b9a7-11e8-bb6c-e03f49ac0a05	 N/A  da0p1


[root@freenas] /mnt# gpart show
=>	   34  468862061  ada0  GPT  (223G)
		 34	   1024	 1  bios-boot  (512k)
	   1058		  6		- free -  (3.0k)
	   1064  468861024	 2  freebsd-zfs  (223G)
  468862088		  7		- free -  (3.5k)

=>		34  7814037101  ada1  GPT  (3.7T)
		  34		  94		- free -  (47k)
		 128	 4194304	 1  freebsd-swap  (2.0G)
	 4194432  7809842696	 2  freebsd-zfs  (3.7T)
  7814037128		   7		- free -  (3.5k)

=>		34  7814037101  ada2  GPT  (3.7T)
		  34		  94		- free -  (47k)
		 128	 4194304	 1  freebsd-swap  (2.0G)
	 4194432  7809842696	 2  freebsd-zfs  (3.7T)
  7814037128		   7		- free -  (3.5k)

=>		34  7814037101  ada3  GPT  (3.7T)
		  34		  94		- free -  (47k)
		 128	 4194304	 1  freebsd-swap  (2.0G)
	 4194432  7809842696	 2  freebsd-zfs  (3.7T)
  7814037128		   7		- free -  (3.5k)

=>	  34  61194173  da0  GPT  (29G)
		34	  1024	1  bios-boot  (512k)
	  1058		 6	   - free -  (3.0k)
	  1064  61193136	2  freebsd-zfs  (29G)
  61194200		 7	   - free -  (3.5k)
 

danb35 (Hall of Famer, joined Aug 16, 2011, 15,504 messages)
I'm not sure what my ZFS configuration was, or whether the SSD (maybe a cache device?) was part of the pool.
Well, apparently the SSD was part of the pool, and the zpool import output suggests that you'd striped it in with the RAIDZ1. This is a strikingly stupid, dangerous configuration, as it exposes you to, well, exactly what seems to have happened here. It also would have taken a bit of work to create, as the GUI tries to prevent you from doing that.
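
For what it's worth, a layout like that generally comes from the shell, forcing past zpool's mismatched-replication-level warning; roughly something like this (the gptid below is just a placeholder, not your actual SSD partition):

Code:
# hypothetical sketch - how a lone SSD ends up as a second, unprotected data vdev
# zpool add normally refuses because the new vdev's replication level doesn't
# match the existing raidz1; -f overrides that safety check
zpool add -f datastore gptid/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx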

Recreate the pool and restore from backup.
 

dan101aba (Cadet, joined Sep 16, 2018, 6 messages)
Thanks,
Since I don't have any backup, is there an option to recover the files from the other 3 SATA disks?
 

danb35 (Hall of Famer, joined Aug 16, 2011, 15,504 messages)
is there an option to recover the files from the other 3 SATA disks?
If your pool was configured the way it looks like it was, no, there isn't.
 

HoneyBadger (actually does care; Administrator, Moderator, iXsystems; joined Feb 6, 2014, 5,112 messages)
If your pool was configured the way it looks like it was, no, there isn't.
One missing drive shouldn't kill a Z1; two will, but you should be able to force an import with a single disk missing or offlined. And if the OP actually had the SSD in the Z1 vdev, I'd like to think someone would have noticed that the total pool wasn't ~8TB.
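
If it really were just a three-disk Z1 with one member gone, a forced import along these lines would normally bring it up DEGRADED (sketch only, assuming the usual gptid labels):

Code:
# -d tells zpool import where to look for member labels; -f overrides the
# "last accessed by another system" complaint
zpool import -d /dev/gptid -f datastore
# if that still fails, a rewind import to an earlier txg is worth trying
zpool import -d /dev/gptid -fF datastore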

This part:

Code:
datastore									   UNAVAIL  missing device
	  raidz1-1									  ONLINE
		gptid/54f3c43f-efd7-11e3-989a-e03f49ac0a05  ONLINE
		gptid/55be4974-efd7-11e3-989a-e03f49ac0a05  ONLINE
		gptid/568b922e-efd7-11e3-989a-e03f49ac0a05  ONLINE


seems to suggest that it was a 3x4TB RAIDZ1 vdev that is refusing to come online.

@dan101aba - can you run zdb -l /dev/ada1p2 through zdb -l /dev/ada3p2 and post the output in [CODE] tags please? Let's see if the disks have a vdev label on them. zpool history -e datastore may also give us something of use to see if we can figure out how your pool was set up.
 

danb35 (Hall of Famer, joined Aug 16, 2011, 15,504 messages)
One missing drive shouldn't kill a Z1
No, but it will kill a RAIDZ1 + 1, which is what I suspect OP had before he overwrote the +1 (the SSD) by installing FreeNAS to it.
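
In other words, a topology along these lines (sketch only; device names are illustrative):

Code:
	datastore
	  ada0p2       <- the SSD: a single-disk top-level vdev, no redundancy
	  raidz1-1     <- the three 4TB WDs, redundant among themselves
	    ada1p2
	    ada2p2
	    ada3p2

Lose that single-disk vdev and the whole pool is gone, no matter how healthy the RAIDZ1 part is.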
 

HoneyBadger (actually does care; Administrator, Moderator, iXsystems; joined Feb 6, 2014, 5,112 messages)
No, but it will kill a RAIDZ1 + 1, which is what I suspect OP had before he overwrote the +1 (the SSD) by installing FreeNAS to it.

That's why I'm gunning for a zpool history so we can see if that SSD was added as a cache (you should be good) or striped as a second vdev (RIP)
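
For the record, the two cases would show up in the history as something like this (hypothetical lines, placeholder gptid):

Code:
# harmless: SSD added as L2ARC - losing it never costs pool data
zpool add datastore cache gptid/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
# fatal: SSD added as a second striped data vdev - needs -f to push past the warning
zpool add -f datastore gptid/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx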
 

HoneyBadger (actually does care; Administrator, Moderator, iXsystems; joined Feb 6, 2014, 5,112 messages)
Based on @Chris Moore's re-formatting of this post here in the cross-posted thread:

https://forums.freenas.org/index.ph...store-zfs-state-is-unknown.69982/#post-482693

The SSD was added as a single striped device to the pool.

Code:
root@freenas[~]# zpool import					  
   pool: datastore
	 id: 3130324954855864268
  state: UNAVAIL
status: The pool was last accessed by another system.
action: The pool cannot be imported due to damaged devices or data.
   see: http://illumos.org/msg/ZFS-8000-EY
config:

	datastore									   UNAVAIL  insufficient replicas
	  3359151836782873844						   UNAVAIL  cannot open
	  raidz1-1									  ONLINE
		gptid/54f3c43f-efd7-11e3-989a-e03f49ac0a05  ONLINE
		gptid/55be4974-efd7-11e3-989a-e03f49ac0a05  ONLINE
		gptid/568b922e-efd7-11e3-989a-e03f49ac0a05  ONLINE
root@freenas[~]#
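
The smoking gun is the bare GUID sitting at the same level as raidz1-1 (annotation mine):

Code:
	  3359151836782873844						   UNAVAIL  cannot open   <- top-level single-disk vdev: the overwritten SSD

A data vdev with no redundancy is unrecoverable once its one device is gone, which is why the pool reports insufficient replicas and refuses to import.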


It's dead, Jim.
 

dan101aba (Cadet, joined Sep 16, 2018, 6 messages)
One missing drive shouldn't kill a Z1; two will, but you should be able to force an import with a single disk missing or offlined. And if the OP actually had the SSD in the Z1 vdev, I'd like to think someone would have noticed that the total pool wasn't ~8TB.

This part:

Code:
datastore									   UNAVAIL  missing device
	  raidz1-1									  ONLINE
		gptid/54f3c43f-efd7-11e3-989a-e03f49ac0a05  ONLINE
		gptid/55be4974-efd7-11e3-989a-e03f49ac0a05  ONLINE
		gptid/568b922e-efd7-11e3-989a-e03f49ac0a05  ONLINE


seems to suggest that it was a 3x4TB RAIDZ1 vdev that is refusing to come online.

@dan101aba - can you run zdb -l /dev/ada1p2 through zdb -l /dev/ada3p2 and post the output in [CODE] tags please? Let's see if the disks have a vdev label on them. zpool history -e datastore may also give us something of use to see if we can figure out how your pool was set up.

Thanks, see below the output:

Code:
root@freenas[~]# zdb -l /dev/ada1p2
------------------------------------
LABEL 0
------------------------------------
	version: 5000
	name: 'datastore'
	state: 0
	txg: 27488692
	pool_guid: 3130324954855864268
	hostid: 1353493202
	hostname: 'freenas.local'
	top_guid: 9365343171638722204
	guid: 14207751489329681723
	vdev_children: 2
	vdev_tree:
		type: 'raidz'
		id: 1
		guid: 9365343171638722204
		nparity: 1
		metaslab_array: 35
		metaslab_shift: 36
		ashift: 12
		asize: 11995904212992
		is_log: 0
		create_txg: 4
		children[0]:
			type: 'disk'
			id: 0
			guid: 14207751489329681723
			path: '/dev/gptid/54f3c43f-efd7-11e3-989a-e03f49ac0a05'
			whole_disk: 1
			DTL: 541
			create_txg: 4
		children[1]:
			type: 'disk'
			id: 1
			guid: 17082211688300517477
			path: '/dev/gptid/55be4974-efd7-11e3-989a-e03f49ac0a05'
			whole_disk: 1
			DTL: 540
			create_txg: 4
		children[2]:
			type: 'disk'
			id: 2
			guid: 3998621066254083893
			path: '/dev/gptid/568b922e-efd7-11e3-989a-e03f49ac0a05'
			whole_disk: 1
			DTL: 539
			create_txg: 4
	features_for_read:
		com.delphix:hole_birth
------------------------------------
LABEL 1
------------------------------------
	version: 5000
	name: 'datastore'
	state: 0
	txg: 27488692
	pool_guid: 3130324954855864268
	hostid: 1353493202
	hostname: 'freenas.local'
	top_guid: 9365343171638722204
	guid: 14207751489329681723
	vdev_children: 2
	vdev_tree:
		type: 'raidz'
		id: 1
		guid: 9365343171638722204
		nparity: 1
		metaslab_array: 35
		metaslab_shift: 36
		ashift: 12
		asize: 11995904212992
		is_log: 0
		create_txg: 4
		children[0]:
			type: 'disk'
			id: 0
			guid: 14207751489329681723
			path: '/dev/gptid/54f3c43f-efd7-11e3-989a-e03f49ac0a05'
			whole_disk: 1
			DTL: 541
			create_txg: 4
		children[1]:
			type: 'disk'
			id: 1
			guid: 17082211688300517477
			path: '/dev/gptid/55be4974-efd7-11e3-989a-e03f49ac0a05'
			whole_disk: 1
			DTL: 540
			create_txg: 4
		children[2]:
			type: 'disk'
			id: 2
			guid: 3998621066254083893
			path: '/dev/gptid/568b922e-efd7-11e3-989a-e03f49ac0a05'
			whole_disk: 1
			DTL: 539
			create_txg: 4
	features_for_read:
		com.delphix:hole_birth
------------------------------------
LABEL 2
------------------------------------
	version: 5000
	name: 'datastore'
	state: 0
	txg: 27488692
	pool_guid: 3130324954855864268
	hostid: 1353493202
	hostname: 'freenas.local'
	top_guid: 9365343171638722204
	guid: 14207751489329681723
	vdev_children: 2
	vdev_tree:
		type: 'raidz'
		id: 1
		guid: 9365343171638722204
		nparity: 1
		metaslab_array: 35
		metaslab_shift: 36
		ashift: 12
		asize: 11995904212992
		is_log: 0
		create_txg: 4
		children[0]:
			type: 'disk'
			id: 0
			guid: 14207751489329681723
			path: '/dev/gptid/54f3c43f-efd7-11e3-989a-e03f49ac0a05'
			whole_disk: 1
			DTL: 541
			create_txg: 4
		children[1]:
			type: 'disk'
			id: 1
			guid: 17082211688300517477
			path: '/dev/gptid/55be4974-efd7-11e3-989a-e03f49ac0a05'
			whole_disk: 1
			DTL: 540
			create_txg: 4
		children[2]:
			type: 'disk'
			id: 2
			guid: 3998621066254083893
			path: '/dev/gptid/568b922e-efd7-11e3-989a-e03f49ac0a05'
			whole_disk: 1
			DTL: 539
			create_txg: 4
	features_for_read:
		com.delphix:hole_birth
------------------------------------
LABEL 3
------------------------------------
	version: 5000
	name: 'datastore'
	state: 0
	txg: 27488692
	pool_guid: 3130324954855864268
	hostid: 1353493202
	hostname: 'freenas.local'
	top_guid: 9365343171638722204
	guid: 14207751489329681723
	vdev_children: 2
	vdev_tree:
		type: 'raidz'
		id: 1
		guid: 9365343171638722204
		nparity: 1
		metaslab_array: 35
		metaslab_shift: 36
		ashift: 12
		asize: 11995904212992
		is_log: 0
		create_txg: 4
		children[0]:
			type: 'disk'
			id: 0
			guid: 14207751489329681723
			path: '/dev/gptid/54f3c43f-efd7-11e3-989a-e03f49ac0a05'
			whole_disk: 1
			DTL: 541
			create_txg: 4
		children[1]:
			type: 'disk'
			id: 1
			guid: 17082211688300517477
			path: '/dev/gptid/55be4974-efd7-11e3-989a-e03f49ac0a05'
			whole_disk: 1
			DTL: 540
			create_txg: 4
		children[2]:
			type: 'disk'
			id: 2
			guid: 3998621066254083893
			path: '/dev/gptid/568b922e-efd7-11e3-989a-e03f49ac0a05'
			whole_disk: 1
			DTL: 539
			create_txg: 4
	features_for_read:
		com.delphix:hole_birth
root@freenas[~]#


root@freenas[~]# zdb -l /dev/ada3p2
------------------------------------
LABEL 0
------------------------------------
	version: 5000
	name: 'datastore'
	state: 0
	txg: 27488692
	pool_guid: 3130324954855864268
	hostid: 1353493202
	hostname: 'freenas.local'
	top_guid: 9365343171638722204
	guid: 17082211688300517477
	vdev_children: 2
	vdev_tree:
		type: 'raidz'
		id: 1
		guid: 9365343171638722204
		nparity: 1
		metaslab_array: 35
		metaslab_shift: 36
		ashift: 12
		asize: 11995904212992
		is_log: 0
		create_txg: 4
		children[0]:
			type: 'disk'
			id: 0
			guid: 14207751489329681723
			path: '/dev/gptid/54f3c43f-efd7-11e3-989a-e03f49ac0a05'
			whole_disk: 1
			DTL: 541
			create_txg: 4
		children[1]:
			type: 'disk'
			id: 1
			guid: 17082211688300517477
			path: '/dev/gptid/55be4974-efd7-11e3-989a-e03f49ac0a05'
			whole_disk: 1
			DTL: 540
			create_txg: 4
		children[2]:
			type: 'disk'
			id: 2
			guid: 3998621066254083893
			path: '/dev/gptid/568b922e-efd7-11e3-989a-e03f49ac0a05'
			whole_disk: 1
			DTL: 539
			create_txg: 4
	features_for_read:
		com.delphix:hole_birth
------------------------------------
LABEL 1
------------------------------------
	version: 5000
	name: 'datastore'
	state: 0
	txg: 27488692
	pool_guid: 3130324954855864268
	hostid: 1353493202
	hostname: 'freenas.local'
	top_guid: 9365343171638722204
	guid: 17082211688300517477
	vdev_children: 2
	vdev_tree:
		type: 'raidz'
		id: 1
		guid: 9365343171638722204
		nparity: 1
		metaslab_array: 35
		metaslab_shift: 36
		ashift: 12
		asize: 11995904212992
		is_log: 0
		create_txg: 4
		children[0]:
			type: 'disk'
			id: 0
			guid: 14207751489329681723
			path: '/dev/gptid/54f3c43f-efd7-11e3-989a-e03f49ac0a05'
			whole_disk: 1
			DTL: 541
			create_txg: 4
		children[1]:
			type: 'disk'
			id: 1
			guid: 17082211688300517477
			path: '/dev/gptid/55be4974-efd7-11e3-989a-e03f49ac0a05'
			whole_disk: 1
			DTL: 540
			create_txg: 4
		children[2]:
			type: 'disk'
			id: 2
			guid: 3998621066254083893
			path: '/dev/gptid/568b922e-efd7-11e3-989a-e03f49ac0a05'
			whole_disk: 1
			DTL: 539
			create_txg: 4
	features_for_read:
		com.delphix:hole_birth
------------------------------------
LABEL 2
------------------------------------
	version: 5000
	name: 'datastore'
	state: 0
	txg: 27488692
	pool_guid: 3130324954855864268
	hostid: 1353493202
	hostname: 'freenas.local'
	top_guid: 9365343171638722204
	guid: 17082211688300517477
	vdev_children: 2
	vdev_tree:
		type: 'raidz'
		id: 1
		guid: 9365343171638722204
		nparity: 1
		metaslab_array: 35
		metaslab_shift: 36
		ashift: 12
		asize: 11995904212992
		is_log: 0
		create_txg: 4
		children[0]:
			type: 'disk'
			id: 0
			guid: 14207751489329681723
			path: '/dev/gptid/54f3c43f-efd7-11e3-989a-e03f49ac0a05'
			whole_disk: 1
			DTL: 541
			create_txg: 4
		children[1]:
			type: 'disk'
			id: 1
			guid: 17082211688300517477
			path: '/dev/gptid/55be4974-efd7-11e3-989a-e03f49ac0a05'
			whole_disk: 1
			DTL: 540
			create_txg: 4
		children[2]:
			type: 'disk'
			id: 2
			guid: 3998621066254083893
			path: '/dev/gptid/568b922e-efd7-11e3-989a-e03f49ac0a05'
			whole_disk: 1
			DTL: 539
			create_txg: 4
	features_for_read:
		com.delphix:hole_birth
------------------------------------
LABEL 3
------------------------------------
	version: 5000
	name: 'datastore'
	state: 0
	txg: 27488692
	pool_guid: 3130324954855864268
	hostid: 1353493202
	hostname: 'freenas.local'
	top_guid: 9365343171638722204
	guid: 17082211688300517477
	vdev_children: 2
	vdev_tree:
		type: 'raidz'
		id: 1
		guid: 9365343171638722204
		nparity: 1
		metaslab_array: 35
		metaslab_shift: 36
		ashift: 12
		asize: 11995904212992
		is_log: 0
		create_txg: 4
		children[0]:
			type: 'disk'
			id: 0
			guid: 14207751489329681723
			path: '/dev/gptid/54f3c43f-efd7-11e3-989a-e03f49ac0a05'
			whole_disk: 1
			DTL: 541
			create_txg: 4
		children[1]:
			type: 'disk'
			id: 1
			guid: 17082211688300517477
			path: '/dev/gptid/55be4974-efd7-11e3-989a-e03f49ac0a05'
			whole_disk: 1
			DTL: 540
			create_txg: 4
		children[2]:
			type: 'disk'
			id: 2
			guid: 3998621066254083893
			path: '/dev/gptid/568b922e-efd7-11e3-989a-e03f49ac0a05'
			whole_disk: 1
			DTL: 539
			create_txg: 4
	features_for_read:
		com.delphix:hole_birth
root@freenas[~]#





root@freenas[~]# zpool history -e datastore
invalid option 'e'
usage:
history [-il] [<pool>] ...
root@freenas[~]# zpool history -i datastore
cannot open 'datastore': no such pool
root@freenas[~]#
 

dan101aba (Cadet, joined Sep 16, 2018, 6 messages)
Since I didn't create the pool, I'm not sure what the configuration was. The 3 SATA disks (4TB each) were in the pool; the SSD was also in the system, but I'm not sure whether it was part of the pool. In any case, data was deleted by mistake only on the SSD.

OK, thanks all for your help. I understand that the pool is unrecoverable. Is there any ZFS file-recovery software that can run against the 3 SATA disks?

Thanks,
Dan
 

jro (iXsystems, joined Jul 16, 2018, 80 messages)
There are no ZFS data recovery techniques that are cheap and easy like you would get for NTFS volumes. ZFS data recovery will involve sending the system to professionals who will use proprietary tools and techniques to attempt a recovery. This will likely cost you thousands of dollars, if not tens of thousands. This is precisely why backups are strongly encouraged. Sorry :(
 