Unable to import Zpool


mowa

Cadet
Joined
Jun 16, 2016
Messages
3
Hi FreeNAS gurus,

I'm unable to import the zpool, either via Import Volume in the web GUI or from the shell.
Last week FreeNAS suddenly crashed, a couple of days after I replaced a defective HDD. The old drive had 1 unreadable sector; after writing that sector all seemed fine, but a couple of days later 3 sectors were unreadable, and finally 6. I replaced it by the book. Last year I replaced an HDD by the book with no problems at all.
Now it ended up in a kdb_enter panic.
First I tried upgrading the config; every fresh install with any of my uploaded backup configs also ended up in a kdb_enter panic.
I also tried this https://headcrash.industries/reference/recovering-freenas-configuration-from-zfs-boot-drive/ with no success.
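For reference, what that article describes boils down to roughly this (a sketch from memory; the boot-environment dataset name and the config path may differ between FreeNAS versions):
Code:
mkdir -p /tmp/recover
# Import the boot pool without mounting, under a temporary altroot
zpool import -f -N -R /tmp/recover freenas-boot
mount -t zfs freenas-boot/ROOT/default /tmp/recover
# The FreeNAS config database should then be here; copy it somewhere safe
cp /tmp/recover/data/freenas-v1.db /some/safe/place/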

After that I tried a fresh install and importing my zpool, without result.


Code:
Freenas Version: FreeNAS-9.10.2-U1 (86c7ef5)
CPU: Intel(R) Celeron(R) CPU G1840 @ 2.80GHz
Memory: 7815MB (Crucial CT51264BA160BJ)
Motherboard: ASRock B85M Pro4
Power: be quiet! System Power 7 300W
HDD: 4x WD Green WD40EZRX, 1x WD Blue WD40EZRZ, 1x Seagate ST4000DM000-1F2168 CC52
Boot drive: 1x ssd PI-291 FCR-HS2SATA 1.04

Code:
[root@freenas ~]# uname -a
FreeBSD freenas.local 10.3-STABLE FreeBSD 10.3-STABLE #0 r295946+1805185(9.10.2-STABLE): Wed Jan 11 17:12:42 UTC 2017 root@gauntlet:/freenas-9.10-releng/_BE/objs/freenas-9.10-releng/_BE/os/sys/FreeNAS.amd64 amd64

Code:
[root@freenas] ~# camcontrol devlist
<ST4000DM000-1F2168 CC52>		  at scbus0 target 0 lun 0 (pass0,ada0)
<WDC WD40EZRZ-00WN9B0 80.00A80>	at scbus1 target 0 lun 0 (pass1,ada1)
<WDC WD40EZRX-00SPEB0 80.00A80>	at scbus2 target 0 lun 0 (pass2,ada2)
<WDC WD40EZRX-00SPEB0 80.00A80>	at scbus3 target 0 lun 0 (pass3,ada3)
<WDC WD40EZRX-00SPEB0 80.00A80>	at scbus4 target 0 lun 0 (pass4,ada4)
<WDC WD40EZRX-22SPEB0 80.00A80>	at scbus5 target 0 lun 0 (pass5,ada5)
<PI-291 FCR-HS2SATA 1.04>		  at scbus7 target 0 lun 0 (pass6,da0)

Code:
[root@freenas ~]# gpart show																										
=>		34  7814037101  ada0  GPT  (3.6T)																						
		  34		  94		- free -  (47K)																					
		 128	 4194304	 1  freebsd-swap  (2.0G)																				
	 4194432  7809842696	 2  freebsd-zfs  (3.6T)																				
  7814037128		   7		- free -  (3.5K)																					
																																   
=>		34  7814037101  ada1  GPT  (3.6T)																						
		  34		  94		- free -  (47K)																					
		 128	 4194304	 1  freebsd-swap  (2.0G)																				
	 4194432  7809842696	 2  freebsd-zfs  (3.6T)																				
  7814037128		   7		- free -  (3.5K)																					
																																   
=>		34  7814037101  ada2  GPT  (3.6T)																						
		  34		  94		- free -  (47K)																					
		 128	 4194304	 1  freebsd-swap  (2.0G)																				
	 4194432  7809842696	 2  freebsd-zfs  (3.6T)																				
  7814037128		   7		- free -  (3.5K)																					
																																   
=>		34  7814037101  ada3  GPT  (3.6T)																						
		  34		  94		- free -  (47K)																					
		 128	 4194304	 1  freebsd-swap  (2.0G)																				
	 4194432  7809842696	 2  freebsd-zfs  (3.6T)																				
  7814037128		   7		- free -  (3.5K)																					
																																   
=>		34  7814037101  ada4  GPT  (3.6T)																						
		  34		  94		- free -  (47K)																					
		 128	 4194304	 1  freebsd-swap  (2.0G)																				
	 4194432  7809842696	 2  freebsd-zfs  (3.6T)																				
  7814037128		   7		- free -  (3.5K)																					
																																   
=>		34  7814037101  ada5  GPT  (3.6T)																						
		  34		  94		- free -  (47K)																					
		 128	 4194304	 1  freebsd-swap  (2.0G)																				
	 4194432  7809842696	 2  freebsd-zfs  (3.6T)																				
  7814037128		   7		- free -  (3.5K)																					
																																   
=>	   34  117231341  da0  GPT  (56G)																							
		 34	   1024	1  bios-boot  (512K)																					  
	   1058		  6	   - free -  (3.0K)																					  
	   1064  117229360	2  freebsd-zfs  (56G)																					
  117230424		951	   - free -  (476K) 

Code:
[root@freenas ~]# glabel status																									
									  Name  Status  Components																	  
gptid/bafc8714-2772-11e6-bd3d-d05099455990	 N/A  ada0p2																		  
gptid/15963456-dfe0-11e6-b62e-d05099455990	 N/A  ada1p1																		  
gptid/15ad79b1-dfe0-11e6-b62e-d05099455990	 N/A  ada1p2																		  
gptid/da25d3a9-26ff-11e5-86c1-d05099455990	 N/A  ada2p1																		  
gptid/da3c2780-26ff-11e5-86c1-d05099455990	 N/A  ada2p2																		  
gptid/d9388397-26ff-11e5-86c1-d05099455990	 N/A  ada3p1																		  
gptid/d95441f5-26ff-11e5-86c1-d05099455990	 N/A  ada3p2																		  
gptid/d7838081-26ff-11e5-86c1-d05099455990	 N/A  ada4p1																		  
gptid/d79ba265-26ff-11e5-86c1-d05099455990	 N/A  ada4p2																		  
gptid/d855c813-26ff-11e5-86c1-d05099455990	 N/A  ada5p1																		  
gptid/d86b9045-26ff-11e5-86c1-d05099455990	 N/A  ada5p2																		  
gptid/47fbce48-e24a-11e6-84d8-d05099455990	 N/A  da0p1																		  
gptid/4800efbc-e24a-11e6-84d8-d05099455990	 N/A  da0p2																		  
gptid/bae80b34-2772-11e6-bd3d-d05099455990	 N/A  ada0p1 

Code:
[root@freenas] ~# zpool import
   pool: TANK
	 id: 14367573279537233406
  state: DEGRADED
 status: One or more devices were being resilvered.
 action: The pool can be imported despite missing or damaged devices.  The
		fault tolerance of the pool may be compromised if imported.
 config:

		TANK											  DEGRADED
		  raidz1-0										DEGRADED
			gptid/d79ba265-26ff-11e5-86c1-d05099455990	ONLINE
			gptid/d86b9045-26ff-11e5-86c1-d05099455990	ONLINE
			gptid/d95441f5-26ff-11e5-86c1-d05099455990	ONLINE
			gptid/da3c2780-26ff-11e5-86c1-d05099455990	ONLINE
			gptid/bafc8714-2772-11e6-bd3d-d05099455990	ONLINE
			replacing-5								   DEGRADED
			  5300142212725832104						 UNAVAIL  cannot open
			  gptid/15ad79b1-dfe0-11e6-b62e-d05099455990  ONLINE


Code:
[root@freenas ~]# zpool status																									 
  pool: TANK																														
state: DEGRADED																													
status: One or more devices is currently being resilvered.  The pool will														   
		continue to function, possibly in a degraded state.																		 
action: Wait for the resilver to complete.																						 
  scan: resilver in progress since Mon Jan 23 01:19:28 2017																		 
		4.27T scanned out of 19.7T at 1/s, (scan is slow, no estimated time)														
		692G resilvered, 21.62% done																								
config:																															 
																																	
		NAME											  STATE	 READ WRITE CKSUM												
		TANK											  DEGRADED	 0	 0	 1												
		  raidz1-0										DEGRADED	 0	 0	 2												
			gptid/d79ba265-26ff-11e5-86c1-d05099455990	ONLINE	   0	 0	 0												
			gptid/d86b9045-26ff-11e5-86c1-d05099455990	ONLINE	   0	 0	 0												
			gptid/d95441f5-26ff-11e5-86c1-d05099455990	ONLINE	   0	 0	 0												
			gptid/da3c2780-26ff-11e5-86c1-d05099455990	ONLINE	   0	 0	 0												
			gptid/bafc8714-2772-11e6-bd3d-d05099455990	ONLINE	   0	 0	 0												
			replacing-5								   DEGRADED	 0	 0	 0												
			  5300142212725832104						 UNAVAIL	  0	 0	 0  was /dev/gptid/23834bb3-dd1a-11e6-8006-d05099455990
			  gptid/15ad79b1-dfe0-11e6-b62e-d05099455990  ONLINE	   0	 0	 0												
																																	
errors: 7 data errors, use '-v' for a list																						 
																																	
  pool: freenas-boot																												
state: ONLINE																													 
  scan: none requested																											 
config:																															 
																																	
		NAME										  STATE	 READ WRITE CKSUM													
		freenas-boot								  ONLINE	   0	 0	 0													
		  gptid/4800efbc-e24a-11e6-84d8-d05099455990  ONLINE	   0	 0	 0													
																																	
errors: No known data errors										 



Running zpool import -f -R /mnt 14367573279537233406 TANK ends up in just a reboot.
Code:
# zpool import -fmNF TANK
cannot import 'TANK': a pool with that name is already created/imported,
and no additional pools with that name were found

I tried all sorts of zpool import combinations.
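For example, the recovery-mode dry run, which only reports whether discarding the last few transactions would make the pool importable, without actually changing anything on disk:
Code:
zpool import -F -n TANK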

I can access my data by:
Code:
sh /etc/rc.initdiskless
zpool import -f -R /mnt -o rdonly=on TANK
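(My guess is that the read-only import works because nothing gets written, so the interrupted resilver isn't resumed and the damaged metadata is never touched; that would also explain why a normal read-write import panics the box.)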
I've built an emergency NAS from all sorts of spare HDDs, my own and some friends', with just enough capacity for a striped pool. I copied all my data to this emergency build with rsync. Fingers crossed nothing breaks. :eek:
As a last resort I can start over from scratch and rsync the data back.
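For what it's worth, the copy itself was nothing fancy, something along these lines (hostname and target path are just placeholders for my setup):
Code:
rsync -avh --progress /mnt/TANK/ root@emergency-nas:/mnt/STRIPE/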

So I want to try one last time, with your kind help and knowledge, to get this sorted. How can I get my zpool imported?
 

Robert Trevellyan

Pony Wrangler
Joined
May 16, 2014
Messages
3,778
Based on your zpool status output, the pool is imported and is busy resilvering.
 

SweetAndLow

Sweet'NASty
Joined
Nov 6, 2013
Messages
6,421
I think you are panicking and going to break something. Stop using the CLI and wait for your disk to be replaced. Why is it being replaced? Did I miss that part of your story?

Sent from my Nexus 5X using Tapatalk
 

gpsguy

Active Member
Joined
Jan 22, 2012
Messages
4,472
As both Robert Trevellyan and SweetAndLow have said, just let it run. It's busy resilvering.

If you need to start over, we recommend going with RAIDz2 instead. 6x4TB drives in RAIDz1 is asking for trouble.
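For illustration, the equivalent layout from the command line would look something like this (device names are examples; on FreeNAS you should always build the pool through the GUI volume manager so it takes care of partitioning and swap for you):
Code:
# 6 disks, any 2 of which may fail without losing the pool
zpool create TANK raidz2 da0 da1 da2 da3 da4 da5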

As a last resort I can start over from scratch and rsync the data back.
 

mowa

Cadet
Joined
Jun 16, 2016
Messages
3
Based on your zpool status output, the pool is imported and is busy resilvering.
I think you are panicking and going to break something. Stop using the CLI and wait for your disk to be replaced. Why is it being replaced? Did I miss that part of your story?

As both Robert Trevellyan and SweetAndLow have said, just let it run. It's busy resilvering.

If you need to start over, we recommend going with RAIDz2 instead. 6x4TB drives in RAIDz1 is asking for trouble.
Thank you all for replying.
My fault, I should have mentioned this.
Code:
replacing-5								 DEGRADED
5300142212725832104						 UNAVAIL  cannot open

As I said, I changed the drive by the book; this, however, only started showing up some days after the resilvering process.
Code:
scan: resilver in progress since Mon Jan 23 01:19:28 2017																		 
		4.27T scanned out of 19.7T at 1/s, (scan is slow, no estimated time)														
		692G resilvered, 21.62% done

Despite what the output says, it is NOT resilvering anymore. After replacing the drive it resilvered for about a day. Around the timestamp shown in the output it crashed for the third time. After that I set up the new system, as mentioned in the opening post.
Normally you should be able to import a pool while it is resilvering, which is not the case right now.
Besides, the system has been up for a week, copying all my data in the meantime. If it were still resilvering it would have finished already.

About the remark that RAIDz1 is asking for trouble: can you please clarify that?
I've been using FreeNAS for years, moving from v7 to v8 in 2011. Until now I was always able to repair things and cope with trouble. The system I'm talking about here has been running since 2015 and doing fine, apart from these WD drives breaking now and then.
 

gpsguy

Active Member
Joined
Jan 22, 2012
Messages
4,472
Back in 2011, you probably weren't using 4TB drives. And the WD Blues (are they rebranded Greens?) aren't rated for 24x7 use in a "server". In the WD line, the Reds would be a better choice.

Anyway, for more information on the case against RAIDz1, search the forum for "cyberjock's guide".

In a nutshell: while RAIDz1 may have been okay in the olden days, with drive sizes under 1TB, as drive sizes have grown and resilvers take longer, the odds of running into an issue on a second drive increase. We've had a number of users on the forum lose their volume to this.
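To put rough numbers on it (back-of-the-envelope, using the 1-per-10^14-bits unrecoverable read error spec commonly quoted for desktop drives): resilvering one drive in a 6x4TB RAIDz1 means reading the five surviving drives in full, roughly 20TB, which is about 1.6x10^14 bits. Statistically that's more than one expected unrecoverable read error during the rebuild, and with RAIDz1 there's no redundancy left to repair it.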

About the remark that RAIDz1 is asking for trouble: can you please clarify that? I've been using FreeNAS for years, moving from v7 to v8 in 2011. Until now I was always able to repair things and cope with trouble. The system I'm talking about here has been running since 2015 and doing fine, apart from these WD drives breaking now and then.

I realize English is probably not your mother tongue, but I'm confused by the statement below.

Normally you should be able to import a pool while it is resilvering, which is not the case right now. Besides, the system has been up for a week, copying all my data in the meantime. If it were still resilvering it would have finished already.

If it's not resilvering, what's going on with it now? Are you still copying data?

Post the results of zpool status -v.

Where are the errors?
 

mowa

Cadet
Joined
Jun 16, 2016
Messages
3
Back in 2011, you probably weren't using 4TB drives. And the WD Blues (are they rebranded Greens?) aren't rated for 24x7 use in a "server". In the WD line, the Reds would be a better choice.
Don't know about the Blues; the Reds are physically the same as the Greens, but with different firmware: TLER and a vibration sensor. Maybe I'll look for a head-parking tweak as a workaround for a longer lifespan.
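Something like the open-source idle3-tools should do it (a hypothetical example, run under Linux since I don't know of a FreeBSD port; WD's own WDIDLE3.EXE from a DOS boot disk does the same job):
Code:
idle3ctl -g /dev/sdb   # read the current idle3 (head-parking) timer
idle3ctl -d /dev/sdb   # disable the timer entirely
# The drive needs a full power cycle before the change takes effect.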
Normally the NAS is not up for a week, just a couple of hours in the evening. It was up that long to keep an eye on the resilvering and scrub process.
If it's not resilvering, what's going on with it now? Are you still copying data?

Post the results of zpool status -v.

Where are the errors?
Nothing is really going on right now. Data copying finished this weekend. The NAS is offline at the moment; I should have posted the zpool status -v earlier. The output below is from yesterday.
Code:
[root@freenas ~]# zpool status -v																								   
  pool: TANK																														
state: DEGRADED																													
status: One or more devices is currently being resilvered.  The pool will														   
		continue to function, possibly in a degraded state.																		 
action: Wait for the resilver to complete.																						 
  scan: resilver in progress since Mon Jan 23 01:19:28 2017																		 
		4.27T scanned out of 19.7T at 1/s, (scan is slow, no estimated time)														
		692G resilvered, 21.62% done																								
config:																															 
																																	
		NAME											  STATE	 READ WRITE CKSUM												
		TANK											  DEGRADED	 0	 0	 0												
		  raidz1-0										DEGRADED	 0	 0	 0												
			gptid/d79ba265-26ff-11e5-86c1-d05099455990	ONLINE	   0	 0	 0												
			gptid/d86b9045-26ff-11e5-86c1-d05099455990	ONLINE	   0	 0	 0												
			gptid/d95441f5-26ff-11e5-86c1-d05099455990	ONLINE	   0	 0	 0												
			gptid/da3c2780-26ff-11e5-86c1-d05099455990	ONLINE	   0	 0	 0												
			gptid/bafc8714-2772-11e6-bd3d-d05099455990	ONLINE	   0	 0	 0												
			replacing-5								   DEGRADED	 0	 0	 0												
			  5300142212725832104						 UNAVAIL	  0	 0	 0  was /dev/gptid/23834bb3-dd1a-11e6-8006-d05099455990
			  gptid/15ad79b1-dfe0-11e6-b62e-d05099455990  ONLINE	   0	 0	 0												
																																	
errors: Permanent errors have been detected in the following files:																 
																																	
		<metadata>:<0x102>																										 
		<metadata>:<0x97>																										   
		/mnt/TANK/Backup/HD1/TV 3/THE LORD OF THE RINGS TRILOGY EXTENDED EDITION 1080pBluRayx264-SiNNERS/The Lord of the Rings 3 - The Return of the King.mkv
		/mnt/TANK/Backup/HD1/backup/Documenten/Zorg/EVV/A3N/A3 23 september/DSCF1536.JPG											
		/mnt/TANK/Backup/HD1/backup/Documenten/Zorg/EVV/A3N/A3 23 september/DSCF1537.JPG											
		/mnt/TANK/Backup/HD1/backup/Google Chrome/Chrome/User Data/Default/Cache/f_0004e5										   
		/mnt/TANK/Backup/HD1/TV 3/Wagner - Meistersinger - Terfel/The_Mastersingers_Acts_1&2.avi									
																																	
  pool: freenas-boot																												
state: ONLINE																													 
  scan: none requested																											 
config:																															 
																																	
		NAME										  STATE	 READ WRITE CKSUM													
		freenas-boot								  ONLINE	   0	 0	 0													
		  gptid/4800efbc-e24a-11e6-84d8-d05099455990  ONLINE	   0	 0	 0													
																																	
errors: No known data errors	 
 