Volume Status Unknown / Won't Import

Status
Not open for further replies.

NAS_Noob

Cadet
Joined
Sep 23, 2017
Messages
5
I was copying data to a Samba share on my FreeNAS server when the server locked up and stopped responding. I rebooted it and, upon reboot, discovered that the volume status is unknown and the volume will not import. I am not certain what this means or what to do to recover from it. I have tried a couple of basic commands to learn more; they are listed below with their output.

Code:
root@XXXXXX:~ # zpool import
   pool: XXXXXX_VOLUME
	 id: 17870302958259361307
  state: UNAVAIL
status: One or more devices are missing from the system.
action: The pool cannot be imported. Attach the missing
		devices and try again.
   see: http://illumos.org/msg/ZFS-8000-6X
config:

		XXXXXX_VOLUME									UNAVAIL  missing device
		  gptid/4de1011e-52b9-11e6-9828-e4115b1386c2  ONLINE
		  gptid/4f0bbd79-52b9-11e6-9828-e4115b1386c2  ONLINE
		  gptid/519430b4-52b9-11e6-9828-e4115b1386c2  ONLINE
		  gptid/52c94b22-52b9-11e6-9828-e4115b1386c2  ONLINE

		Additional devices are known to be part of this pool, though their
		exact configuration cannot be determined.


And this:
Code:
root@XXXXXX:~ # camcontrol devlist
<WDC WD20EARX-00PASB0 51.0AB51>	at scbus0 target 0 lun 0 (pass0,ada0)
<WDC WD20EARX-00PASB0 51.0AB51>	at scbus1 target 0 lun 0 (pass1,ada1)
<WDC WD20EARX-00PASB0 51.0AB51>	at scbus2 target 0 lun 0 (pass2,ada2)
<VB0250EAVER HPG0>				 at scbus5 target 0 lun 0 (pass3,ada3)
<SanDisk SanDisk Cruzer 8.02>	  at scbus7 target 0 lun 0 (pass4,da0)


And this:
Code:
root@XXXXXX:~ # gpart show
=>		34  3907029101  ada0  GPT  (1.8T)
		  34		  94		- free -  (47K)
		 128	 4194304	 1  freebsd-swap  (2.0G)
	 4194432  3902834696	 2  freebsd-zfs  (1.8T)
  3907029128		   7		- free -  (3.5K)

=>		34  3907029101  ada1  GPT  (1.8T)
		  34		  94		- free -  (47K)
		 128	 4194304	 1  freebsd-swap  (2.0G)
	 4194432  3902834696	 2  freebsd-zfs  (1.8T)
  3907029128		   7		- free -  (3.5K)

=>		34  3907029101  ada2  GPT  (1.8T)
		  34		  94		- free -  (47K)
		 128	 4194304	 1  freebsd-swap  (2.0G)
	 4194432  3902834696	 2  freebsd-zfs  (1.8T)
  3907029128		   7		- free -  (3.5K)

=>	   34  488397101  ada3  GPT  (233G)
		 34		 94		- free -  (47K)
		128	4194304	 1  freebsd-swap  (2.0G)
	4194432  484202696	 2  freebsd-zfs  (231G)
  488397128		  7		- free -  (3.5K)

=>	  34  15753148  da0  GPT  (7.5G)
		34	  1024	1  bios-boot  (512K)
	  1058		 6	   - free -  (3.0K)
	  1064  15752112	2  freebsd-zfs  (7.5G)
  15753176		 6	   - free -  (3.0K)


And this:
Code:
root@XXXXXX:~ # glabel status
									  Name  Status  Components
gptid/517bc3c7-52b9-11e6-9828-e4115b1386c2	 N/A  ada0p1
gptid/519430b4-52b9-11e6-9828-e4115b1386c2	 N/A  ada0p2
gptid/52b4baa0-52b9-11e6-9828-e4115b1386c2	 N/A  ada1p1
gptid/52c94b22-52b9-11e6-9828-e4115b1386c2	 N/A  ada1p2
gptid/4ef4b36f-52b9-11e6-9828-e4115b1386c2	 N/A  ada2p1
gptid/4f0bbd79-52b9-11e6-9828-e4115b1386c2	 N/A  ada2p2
gptid/4dc6ffe8-52b9-11e6-9828-e4115b1386c2	 N/A  ada3p1
gptid/4de1011e-52b9-11e6-9828-e4115b1386c2	 N/A  ada3p2
gptid/e5a82590-7f85-11e5-a5e9-000c29b5b5ba	 N/A  da0p1


I am not completely sure what I am reviewing here. However, if I had to guess, it looks like one of the drives has failed?
  • Can anyone confirm this?
  • Also, what are the correct next steps from here?
Thanks in advance.

P.S. FreeNAS installation details:
  • FreeNAS-11.0-U3 (c5dcf4416)
  • Platform: AMD Turion(tm) II Neo N40L Dual-Core Processor
  • Memory: 7897 MB (ECC)
  • Hardware: HP Microserver N40L
 

rs225

Guru
Joined
Jun 28, 2014
Messages
878
How many drives should there be? You may need to open your case to count.

Another possible problem is that it looks like you have no redundancy in this pool. If one drive fails, the whole pool goes with it.

The output of zdb -l /dev/ada0p2 should have the details of your pool configuration.
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
However, if I had to guess, it looks like one of the drives has failed?
It kind of looks like it. How many drives do you think you have? Because the system is seeing four and thinks there should be at least one more. And, as @rs225 says, you appear to have built your pool with no redundancy. So if a disk has failed, your pool is gone, and your data with it.
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
In your camcontrol devlist there is a 250 GB Hewlett Packard disk showing up with model # VB0250EAVER.
I wonder what that drive is for, but it is not part of your storage pool. The drives in your storage pool are Western Digital Green model # WD20EARX 2TB drives. At least, I am guessing that is what they are all supposed to be, because three are showing in your listing and the system is expecting four drives.
From the information provided, I would definitely say that one of the WD Green drives is dead, but you might try shutting the system down and checking all the connections. While it is off, disconnect and reconnect everything to make sure there are no loose connections. Then boot it up and see if you get a different result.

The others have already pointed out that it looks like you configured the pool with no redundancy. If that is indeed the case and you can't get the disk back online, I hope you have a backup.
Next time, you should definitely use a RAID-Z2 pool. Six drives with two of them serving as redundancy would give you about the same amount of storage you had, but with the ability to lose one or two disks without losing data.
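For reference, here is a minimal sketch of what that layout looks like at the command line. On FreeNAS you would normally build the pool through the GUI Volume Manager instead, and the pool name and device names below are just placeholders:

Code:
# Hypothetical six-disk RAID-Z2 pool: any two disks can fail without data loss.
zpool create tank raidz2 ada0 ada1 ada2 ada3 ada4 ada5
# Usable space is roughly four disks' worth; verify the layout with:
zpool status tank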

If you need some drives, the Seagate 2TB drives have been working well for me; some of mine have over 5 years on them.
 

NAS_Noob

Cadet
Joined
Sep 23, 2017
Messages
5
How many drives should there be? You may need to open your case to count.

Another possible problem is that it looks like you have no redundancy in this pool. If one drive fails, the whole pool goes with it.

The output of zdb -l /dev/ada0p2 should have the details of your pool configuration.

Here is the output from that command:
Code:
root@XXXXXX:~ # zdb -l /dev/ada0p2
--------------------------------------------
LABEL 0
--------------------------------------------
	version: 5000
	name: 'XXXXXX_VOLUME'
	state: 0
	txg: 6442553
	pool_guid: 17870302958259361307
	hostid: 2757839847
	hostname: ''
	top_guid: 678636844152452019
	guid: 678636844152452019
	vdev_children: 5
	vdev_tree:
		type: 'disk'
		id: 3
		guid: 678636844152452019
		path: '/dev/gptid/519430b4-52b9-11e6-9828-e4115b1386c2'
		whole_disk: 1
		metaslab_array: 37
		metaslab_shift: 34
		ashift: 12
		asize: 1998246641664
		is_log: 0
		create_txg: 4
	features_for_read:
		com.delphix:hole_birth
		com.delphix:embedded_data
--------------------------------------------
LABEL 1
--------------------------------------------
	version: 5000
	name: 'XXXXXX_VOLUME'
	state: 0
	txg: 6442553
	pool_guid: 17870302958259361307
	hostid: 2757839847
	hostname: ''
	top_guid: 678636844152452019
	guid: 678636844152452019
	vdev_children: 5
	vdev_tree:
		type: 'disk'
		id: 3
		guid: 678636844152452019
		path: '/dev/gptid/519430b4-52b9-11e6-9828-e4115b1386c2'
		whole_disk: 1
		metaslab_array: 37
		metaslab_shift: 34
		ashift: 12
		asize: 1998246641664
		is_log: 0
		create_txg: 4
	features_for_read:
		com.delphix:hole_birth
		com.delphix:embedded_data
--------------------------------------------
LABEL 2
--------------------------------------------
	version: 5000
	name: 'XXXXXX_VOLUME'
	state: 0
	txg: 6442553
	pool_guid: 17870302958259361307
	hostid: 2757839847
	hostname: ''
	top_guid: 678636844152452019
	guid: 678636844152452019
	vdev_children: 5
	vdev_tree:
		type: 'disk'
		id: 3
		guid: 678636844152452019
		path: '/dev/gptid/519430b4-52b9-11e6-9828-e4115b1386c2'
		whole_disk: 1
		metaslab_array: 37
		metaslab_shift: 34
		ashift: 12
		asize: 1998246641664
		is_log: 0
		create_txg: 4
	features_for_read:
		com.delphix:hole_birth
		com.delphix:embedded_data
--------------------------------------------
LABEL 3
--------------------------------------------
	version: 5000
	name: 'XXXXXX_VOLUME'
	state: 0
	txg: 6442553
	pool_guid: 17870302958259361307
	hostid: 2757839847
	hostname: ''
	top_guid: 678636844152452019
	guid: 678636844152452019
	vdev_children: 5
	vdev_tree:
		type: 'disk'
		id: 3
		guid: 678636844152452019
		path: '/dev/gptid/519430b4-52b9-11e6-9828-e4115b1386c2'
		whole_disk: 1
		metaslab_array: 37
		metaslab_shift: 34
		ashift: 12
		asize: 1998246641664
		is_log: 0
		create_txg: 4
	features_for_read:
		com.delphix:hole_birth
		com.delphix:embedded_data
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
Here is the output from that command:
Yes, 4 drives, all the same size. If you can't get the failed drive back, your pool is done.
 

rs225

Guru
Joined
Jun 28, 2014
Messages
878
Any missing drive is a problem for this pool. But, since it isn't raidz, the size of each drive can be anything.

Check the cables, see if you can get a drive to reappear, then either back up or come back for help on making a mirror.
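For reference, the basic shape of that operation is zpool attach, which turns a single-disk stripe member into a mirror. A rough sketch; the second gptid is a placeholder for whatever label the new disk gets, and on FreeNAS the GUI is the usual way to do this:

Code:
# Hypothetical: mirror one existing stripe member onto a newly added disk.
zpool attach XXXXXX_VOLUME gptid/4de1011e-52b9-11e6-9828-e4115b1386c2 gptid/NEW-DISK-GPTID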

If you can't revive the drive and you want the data, you have a chance of getting the pool back online if you send the failed drive for a full image data recovery. Don't mention it is RAID or ZFS, and it shouldn't cost a fortune.
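If the drive still responds at all, you could also try pulling a full image yourself before paying anyone. A rough sketch using GNU ddrescue (available as a FreeBSD port; the device names here are assumptions):

Code:
# Copy everything readable from the failing disk (assumed to be ada4) onto a
# same-size replacement (assumed ada5); the map file lets you resume the run.
ddrescue -f /dev/ada4 /dev/ada5 rescue.map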
 

NAS_Noob

Cadet
Joined
Sep 23, 2017
Messages
5
Any missing drive is a problem for this pool. But, since it isn't raidz, the size of each drive can be anything.

Check the cables, see if you can get a drive to reappear, then either back up or come back for help on making a mirror.

If you can't revive the drive and you want the data, you have a chance of getting the pool back online if you send the failed drive for a full image data recovery. Don't mention it is RAID or ZFS, and it shouldn't cost a fortune.

Are there any places that are better than others for a full image data recovery? I want to make sure that I send it to a place that will not cause more harm than good. I found a place that offers a $60 hard drive repair service, but did not know if that would be a wise choice either.
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
If you can't revive the drive and you want the data, you have a chance of getting the pool back online if you send the failed drive for a full image data recovery. Don't mention it is RAID or ZFS, and it shouldn't cost a fortune.
Have you successfully done that before? Where did you send the drive?
 

rs225

Guru
Joined
Jun 28, 2014
Messages
878
$60, no. It will be more than that. Did you identify a dead drive and confirm it doesn't work in another computer?
 

SweetAndLow

Sweet'NASty
Joined
Nov 6, 2013
Messages
6,421
$60, no. It will be more than that. Did you identify a dead drive and confirm it doesn't work in another computer?
Don't do this; of course the drive will not work in another computer. It's using ZFS, so people end up reformatting it the second it touches Windows. Look at the SMART data to figure out whether the drive is working or not.
 

NAS_Noob

Cadet
Joined
Sep 23, 2017
Messages
5
Don't do this; of course the drive will not work in another computer. It's using ZFS, so people end up reformatting it the second it touches Windows. Look at the SMART data to figure out whether the drive is working or not.

Are there particular SMART data/stats that I should look for to confirm a failed drive? I found the following article, but did not know if it would be applicable to FreeNAS/ZFS, etc.

https://www.backblaze.com/blog/what-smart-stats-indicate-hard-drive-failures/

It looks like they are suggesting the following.

Code:
Attribute    Description
SMART 5      Reallocated Sectors Count
SMART 187    Reported Uncorrectable Errors
SMART 188    Command Timeout
SMART 197    Current Pending Sector Count
SMART 198    Uncorrectable Sector Count
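Would something like this be the right way to pull those attributes on FreeNAS? The disk device is a guess on my part, and the attribute names are the ones smartmontools uses for 5/187/188/197/198:

Code:
# Show the SMART attribute table for the first disk and filter for the
# failure-predicting attributes from the Backblaze article:
smartctl -A /dev/ada0 | egrep 'Reallocated_Sector|Reported_Uncorrect|Command_Timeout|Current_Pending|Offline_Uncorrectable'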

They also casually mentioned the following:
  • SMART 189 – High Fly Writes
  • SMART 12 – Power Cycles
Thanks again.
 

SweetAndLow

Sweet'NASty
Joined
Nov 6, 2013
Messages
6,421
The auto-running SMART tests that should have been set up would send you emails, but I suspect that step was never followed. So you need to run smartctl -a /dev/adaX (or /dev/daX), where X is your disk number.
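For example (ada0 here is just a placeholder for whichever disk you are checking):

Code:
smartctl -a /dev/ada0            # full SMART report, including the attribute table
smartctl -t long /dev/ada0       # optionally, kick off a long self-test
smartctl -l selftest /dev/ada0   # review the self-test log once it finishes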
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
The auto-running SMART tests that should have been set up would send you emails, but I suspect that step was never followed. So you need to run smartctl -a /dev/adaX (or /dev/daX), where X is your disk number.
He has WD Green 2TB drives. I thought it might be possible that ZFS ejected the disk for being slow to respond.

 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
Don't do this; of course the drive will not work in another computer.
Certainly it will. It typically won't mount (and definitely not if it's just one disk of a multi-disk stripe), but if it's functioning at all, the device itself should still be recognized.
 

SweetAndLow

Sweet'NASty
Joined
Nov 6, 2013
Messages
6,421
Certainly it will. It typically won't mount (and definitely not if it's just one disk of a multi-disk stripe), but if it's functioning at all, the device itself should still be recognized.
Yeah, and Windows will say, "This disk is not working properly, would you like to fix it?" Click OK and it gets formatted to NTFS.
 

NAS_Noob

Cadet
Joined
Sep 23, 2017
Messages
5
Certainly it will. It typically won't mount (and definitely not if it's just one disk of a multi-disk stripe), but if it's functioning at all, the device itself should still be recognized.

I connected it to another computer and it was not recognized by that computer either. No prompt, nothing. It seems like there might be something else going on with the drive.
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
Are there particular SMART data/stats that I should look for to confirm a failed drive? I found the following article, but did not know if it would be applicable to FreeNAS/ZFS, etc.
The big question is whether, after checking all your connections, the drive is being detected at all. If the drive is not detected, you can't run any tests.
One of your 2TB data drives is not being detected by the operating system (FreeNAS), and unless that is corrected, you can't run any diagnostics on the drive.
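A quick way to check after reseating everything (a sketch; the grep pattern is just my guess at what to look for):

Code:
camcontrol devlist     # should list all four WD20EARX drives plus the HP disk
dmesg | grep -i ada    # kernel messages about drives that appeared or vanished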
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
I connected it to another computer and it was not recognized by that computer either. No prompt, nothing. It seems like there might be something else going on with the drive.
So, I just posted about that. Is the drive making any noise at initial power-on? Does it spin up and then stop? The drive not being detected indicates that the controller on the drive is not talking to the computer, which could be because the drive fails its self-test at power-on.
 