
SOLVED Can't attach disk to existing mirror


Stilez

Guru
Joined
Apr 8, 2016
Messages
529
My pool contains a mix of raw devices (/dev/da...) and GPT formatted devices (/dev/gptid/...).

I am trying to upgrade a mirrored vdev from 2 to 3 disks, and ZFS isn't having any of it. I can't figure out why. The disk is da4, and the vdev I'm trying to add it to is listed as "mirror-2", containing da0.

The disk is large enough and fully wiped, the pool was scrubbed a day ago, and a search here and elsewhere online doesn't shed light on the error I get. Relevant output:

# zpool status -v tank
Code:
  pool: tank
 state: ONLINE
  scan: scrub repaired 0 in 40h1m with 0 errors on Fri Nov 17 14:16:06 2017
config:

		NAME											STATE	 READ WRITE CKSUM
		tank											ONLINE	   0	 0	 0
		  mirror-0									  ONLINE	   0	 0	 0
			gptid/6c62bc1a-0b7b-11e7-86ae-000743144400  ONLINE	   0	 0	 0
			gptid/94cad523-0b45-11e7-86ae-000743144400  ONLINE	   0	 0	 0
		  mirror-1									  ONLINE	   0	 0	 0
			ada0p2									  ONLINE	   0	 0	 0
			gptid/e619dab7-03f1-11e7-8f93-000743144400  ONLINE	   0	 0	 0
		  mirror-2									  ONLINE	   0	 0	 0
			gptid/c68f80ae-01da-11e7-b762-000743144400  ONLINE	   0	 0	 0
			da0										 ONLINE	   0	 0	 0
		  mirror-4									  ONLINE	   0	 0	 0
			da2										 ONLINE	   0	 0	 0
			da3										 ONLINE	   0	 0	 0
errors: No known data errors

# camcontrol devlist
Code:
<SEAGATE ST6000NM0054 ET05>		at scbus0 target 0 lun 0 (pass0,da0)
<ATA ST6000NM0024-1HT SN05>		at scbus0 target 2 lun 0 (da4,pass12)
<ATA ST3500418AS CC49>			 at scbus0 target 3 lun 0 (pass1,da1)
<ATA ST4000DM000-1F21 CC52>		at scbus0 target 4 lun 0 (pass2,da2)
<ATA Hitachi HDS72404 A3B0>		at scbus0 target 6 lun 0 (pass3,da3)
<ST6000NM0004-1FT17Z NN01>		 at scbus1 target 0 lun 0 (pass4,ada0)
<ST6000NM0024-1HT17Z SN02>		 at scbus2 target 0 lun 0 (pass5,ada1)
<ST6000NM0115-1YZ110 SN02>		 at scbus3 target 0 lun 0 (pass6,ada2)
<ST6000NM0024-1HT17Z SN02>		 at scbus4 target 0 lun 0 (pass7,ada3)
<INTEL SSDSA2CT040G3 4PC10362>	 at scbus5 target 0 lun 0 (pass8,ada4)
<INTEL SSDSA2CT040G3 4PC10362>	 at scbus6 target 0 lun 0 (pass9,ada5)
<ST6000NM0024-1HT17Z SN02>		 at scbus7 target 0 lun 0 (pass10,ada6)
<ST6000NM0024-1HT17Z SN02>		 at scbus8 target 0 lun 0 (pass11,ada7)

# glabel status
Code:
									  Name  Status  Components
gptid/219a066e-0433-11e7-9829-000743144400	 N/A  nvd0p1
gptid/9d6c704e-0378-11e7-b762-000743144400	 N/A  nvd1p1
gptid/6c62bc1a-0b7b-11e7-86ae-000743144400	 N/A  ada1p2
gptid/c68f80ae-01da-11e7-b762-000743144400	 N/A  ada2p2
gptid/9df26501-8cf6-11e7-9a6d-000743144400	 N/A  ada3p2
gptid/3b2b904b-02b3-11e7-b762-000743144400	 N/A  ada4p1
							  label/efibsd	 N/A  ada5p1
gptid/fb71e387-016b-11e7-9ddd-000743144400	 N/A  ada5p1
gptid/94cad523-0b45-11e7-86ae-000743144400	 N/A  ada6p2
gptid/e619dab7-03f1-11e7-8f93-000743144400	 N/A  ada7p2
gptid/b0833c89-73d8-11e7-a989-000743144400	 N/A  da1p2
	  gpt/Microsoft%20reserved%20partition	 N/A  raid/r0p1	  [SORRY! *cries*]
gptid/7e7b5af1-6859-464b-af5a-6daae8b991bf	 N/A  raid/r0p1

# gpart show
Code:
=>	   34  488397101  nvd0  GPT  (233G)
		 34		 94		- free -  (47K)
		128  488397000	 1  freebsd-zfs  (233G)
  488397128		  7		- free -  (3.5K)

=>	   34  781422701  nvd1  GPT  (373G)
		 34		 94		- free -  (47K)
		128  781422600	 1  freebsd-zfs  (373G)
  781422728		  7		- free -  (3.5K)

=>		 6  1465130635  ada0  GPT  (5.5T)
		   6		 122		- free -  (488K)
		 128	 6291456	 1  freebsd-swap  (24G)
	 6291584  1458839057	 2  freebsd-zfs  (5.4T)

=>		 34  11721045101  ada1  GPT  (5.5T)
		   34		   94		- free -  (47K)
		  128	  6291456	 1  freebsd-swap  (3.0G)
	  6291584  11714753544	 2  freebsd-zfs  (5.5T)
  11721045128			7		- free -  (3.5K)

=>		 34  11721045101  ada2  GPT  (5.5T)
		   34		   94		- free -  (47K)
		  128	  6291456	 1  freebsd-swap  (3.0G)
	  6291584  11714753544	 2  freebsd-zfs  (5.5T)
  11721045128			7		- free -  (3.5K)

=>		 40  11721045088  ada3  GPT  (5.5T)
		   40		   88		- free -  (44K)
		  128	  6291456	 1  freebsd-swap  (3.0G)
	  6291584  11714753536	 2  freebsd-zfs  (5.5T)
  11721045120			8		- free -  (4.0K)

=>	  34  78165293  ada4  GPT  (37G)
		34	204800	 1  efi  (100M)
	204834		 6		- free -  (3.0K)
	204840  77960480	 2  freebsd-zfs  (37G)
  78165320		 7		- free -  (3.5K)

=>	  34  78165293  ada5  GPT  (37G)
		34	204800	 1  efi  (100M)
	204834		 6		- free -  (3.0K)
	204840  77960480	 2  freebsd-zfs  (37G)
  78165320		 7		- free -  (3.5K)

=>		 34  11721045101  ada6  GPT  (5.5T)
		   34		   94		- free -  (47K)
		  128	  6291456	 1  freebsd-swap  (3.0G)
	  6291584  11714753544	 2  freebsd-zfs  (5.5T)
  11721045128			7		- free -  (3.5K)

=>		 34  11721045101  ada7  GPT  (5.5T)
		   34		   94		- free -  (47K)
		  128	  6291456	 1  freebsd-swap  (3.0G)
	  6291584  11714753544	 2  freebsd-zfs  (5.5T)
  11721045128			7		- free -  (3.5K)

=>	   40  976773088  da1  GPT  (466G)
		 40		 88	   - free -  (44K)
		128	6291456	1  freebsd-swap  (3.0G)
	6291584  970481536	2  freebsd-zfs  (463G)
  976773120		  8	   - free -  (4.0K)

=>	   34  156243901  raid/r0  GPT  (75G)
		 34	 262144		1  ms-reserved  (128M)
	 262178  155981757		   - free -  (74G)

# gpart show da4
Code:
gpart: No such geom: da4

But....
# zpool attach tank da0 da4
Code:
cannot attach da4 to da0: no such pool or dataset

# zpool attach tank /dev/da0 /dev/da4
Code:
cannot attach /dev/da4 to /dev/da0: no such pool or dataset

# zpool attach tank /dev/gptid/c68f80ae-01da-11e7-b762-000743144400 /dev/da4 [note: output corrected; the original had a paste error. Thanks @rs225]
Code:
cannot attach /dev/da4 to /dev/gptid/c68f80ae-01da-11e7-b762-000743144400: no such pool or dataset

What am I missing?

Also, while it's obviously not doing any harm, is it sensible to have a mix of GPT and raw devices like this, or should I prefer one or the other when adding disks and creating mirrors in future? I imagine the only practical effect is ease of use at the CLI - if so, which is likely to be easier?
 

rs225

Guru
Joined
Jun 28, 2014
Messages
878
Try /dev/da0 and /dev/da4

I also notice that in your devlist, da4 is flipped with pass12. I don't know if that means anything.
 

Stilez

Guru
Joined
Apr 8, 2016
Messages
529
Try /dev/da0 and /dev/da4
I already tried that - sorry, I forgot to include it - and also tried with the GUID.
I've now added both of them to the OP so it's clear.

I still have no idea what the issue is.
I also notice that in your devlist, da4 is flipped with pass12. I don't know if that means anything.
A bit of searching suggests it relates to the SCSI passthrough device associated with the disk. This post suggests it's not important; it's just down to the order in which devices attach.
 

rs225

Guru
Joined
Jun 28, 2014
Messages
878
Is it possible your pool name has an unprintable character in it? zdb -l /dev/da0 should output some data that might show anything strange.
 

Stilez

Guru
Joined
Apr 8, 2016
Messages
529
Is it possible your pool name has an unprintable character in it? zdb -l /dev/da0 should output some data that might show anything strange.
Not possible. I recently added mirrors to other vdevs in this pool (a disk in another vdev started to snowball bad sectors last week, and the exact same attach command had no issue adding a spare disk as a third mirror, resilvering, and then detaching the failing disk). If it were the pool name or something wrong with the pool, I think those operations would have been affected too.
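For reference, the replace-by-attach sequence described above looks roughly like this. The device paths are placeholders, not the ones actually used, so this is an outline rather than copy-paste commands:

```
# 1. Attach the spare alongside the failing disk (existing-member first,
#    new disk second), growing the 2-way mirror to 3-way.
zpool attach tank /dev/gptid/<existing-member> /dev/<new-disk>

# 2. Watch the resilver until it completes with 0 errors.
zpool status tank

# 3. Only then drop the failing disk out of the mirror.
zpool detach tank /dev/gptid/<failing-member>
```

Detaching only after the resilver finishes keeps full redundancy throughout the swap.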

Here's the output to that command. It looks fine to me (and was "normal" on the console, no weird formatting or chars):
Code:
# zdb -l /dev/da0

--------------------------------------------
LABEL 0
--------------------------------------------
	version: 5000
	name: 'tank'
	state: 0
	txg: 4511074
	pool_guid: 17735542080203756619
	hostid: 2315700283
	hostname: ''
	top_guid: 12275387705442396426
	guid: 554162750975504770
	vdev_children: 5
	vdev_tree:
		type: 'mirror'
		id: 2
		guid: 12275387705442396426
		whole_disk: 0
		metaslab_array: 36
		metaslab_shift: 35
		ashift: 12
		asize: 5997949091840
		is_log: 0
		create_txg: 4
		children[0]:
			type: 'disk'
			id: 0
			guid: 13374135854265202333
			path: '/dev/gptid/c68f80ae-01da-11e7-b762-000743144400'
			whole_disk: 1
			DTL: 312
			create_txg: 4
		children[1]:
			type: 'disk'
			id: 1
			guid: 554162750975504770
			path: '/dev/da0'
			whole_disk: 1
			DTL: 1515
			create_txg: 4
	features_for_read:
		com.delphix:hole_birth
		com.delphix:embedded_data
--------------------------------------------
LABEL 1
--------------------------------------------
	version: 5000
	name: 'tank'
	state: 0
	txg: 4511074
	pool_guid: 17735542080203756619
	hostid: 2315700283
	hostname: ''
	top_guid: 12275387705442396426
	guid: 554162750975504770
	vdev_children: 5
	vdev_tree:
		type: 'mirror'
		id: 2
		guid: 12275387705442396426
		whole_disk: 0
		metaslab_array: 36
		metaslab_shift: 35
		ashift: 12
		asize: 5997949091840
		is_log: 0
		create_txg: 4
		children[0]:
			type: 'disk'
			id: 0
			guid: 13374135854265202333
			path: '/dev/gptid/c68f80ae-01da-11e7-b762-000743144400'
			whole_disk: 1
			DTL: 312
			create_txg: 4
		children[1]:
			type: 'disk'
			id: 1
			guid: 554162750975504770
			path: '/dev/da0'
			whole_disk: 1
			DTL: 1515
			create_txg: 4
	features_for_read:
		com.delphix:hole_birth
		com.delphix:embedded_data
--------------------------------------------
LABEL 2
--------------------------------------------
	version: 5000
	name: 'tank'
	state: 0
	txg: 4511074
	pool_guid: 17735542080203756619
	hostid: 2315700283
	hostname: ''
	top_guid: 12275387705442396426
	guid: 554162750975504770
	vdev_children: 5
	vdev_tree:
		type: 'mirror'
		id: 2
		guid: 12275387705442396426
		whole_disk: 0
		metaslab_array: 36
		metaslab_shift: 35
		ashift: 12
		asize: 5997949091840
		is_log: 0
		create_txg: 4
		children[0]:
			type: 'disk'
			id: 0
			guid: 13374135854265202333
			path: '/dev/gptid/c68f80ae-01da-11e7-b762-000743144400'
			whole_disk: 1
			DTL: 312
			create_txg: 4
		children[1]:
			type: 'disk'
			id: 1
			guid: 554162750975504770
			path: '/dev/da0'
			whole_disk: 1
			DTL: 1515
			create_txg: 4
	features_for_read:
		com.delphix:hole_birth
		com.delphix:embedded_data
--------------------------------------------
LABEL 3
--------------------------------------------
	version: 5000
	name: 'tank'
	state: 0
	txg: 4511074
	pool_guid: 17735542080203756619
	hostid: 2315700283
	hostname: ''
	top_guid: 12275387705442396426
	guid: 554162750975504770
	vdev_children: 5
	vdev_tree:
		type: 'mirror'
		id: 2
		guid: 12275387705442396426
		whole_disk: 0
		metaslab_array: 36
		metaslab_shift: 35
		ashift: 12
		asize: 5997949091840
		is_log: 0
		create_txg: 4
		children[0]:
			type: 'disk'
			id: 0
			guid: 13374135854265202333
			path: '/dev/gptid/c68f80ae-01da-11e7-b762-000743144400'
			whole_disk: 1
			DTL: 312
			create_txg: 4
		children[1]:
			type: 'disk'
			id: 1
			guid: 554162750975504770
			path: '/dev/da0'
			whole_disk: 1
			DTL: 1515
			create_txg: 4
	features_for_read:
		com.delphix:hole_birth
		com.delphix:embedded_data

I hope this shows something, because I can't think of any issue that should prevent "attach" from working.....
 

rs225

Guru
Joined
Jun 28, 2014
Messages
878
What is your version of FreeNAS? I notice when you used the gptid, the error message referenced the wrong device. Was that a paste error?
 

Stilez

Guru
Joined
Apr 8, 2016
Messages
529
Looks fine. Try ls -l /dev/da? /dev/pass12
I ran it for all da, ada and pass devices, just in case it highlights anything unexpectedly.
Code:
# ls -l /dev/da* /dev/ada* /dev/pass*
crw-r-----  1 root  operator   0x71 Nov 17 03:47 /dev/ada0
crw-r-----  1 root  operator   0x94 Nov 17 03:47 /dev/ada0p1
crw-r-----  1 root  operator   0xbf Nov 17 03:49 /dev/ada0p1.eli
crw-r-----  1 root  operator   0x95 Nov 17 03:47 /dev/ada0p2
crw-r-----  1 root  operator   0x8b Nov 17 03:47 /dev/ada1
crw-r-----  1 root  operator   0x96 Nov 17 03:47 /dev/ada1p1
crw-r-----  1 root  operator  0x1ea Nov 17 03:49 /dev/ada1p1.eli
crw-r-----  1 root  operator   0x97 Nov 17 03:47 /dev/ada1p2
crw-r-----  1 root  operator   0x98 Nov 17 03:47 /dev/ada2
crw-r-----  1 root  operator   0xa6 Nov 17 03:47 /dev/ada2p1
crw-r-----  1 root  operator   0xc1 Nov 17 03:49 /dev/ada2p1.eli
crw-r-----  1 root  operator   0xa7 Nov 17 03:47 /dev/ada2p2
crw-r-----  1 root  operator   0x99 Nov 17 03:47 /dev/ada3
crw-r-----  1 root  operator   0xa8 Nov 17 03:47 /dev/ada3p1
crw-r-----  1 root  operator   0xc7 Nov 17 03:49 /dev/ada3p1.eli
crw-r-----  1 root  operator   0xa9 Nov 17 03:47 /dev/ada3p2
crw-r-----  1 root  operator   0x9c Nov 17 03:47 /dev/ada4
crw-r-----  1 root  operator   0xaa Nov 17 03:47 /dev/ada4p1
crw-r-----  1 root  operator   0xab Nov 17 03:47 /dev/ada4p2
crw-r-----  1 root  operator   0x9d Nov 17 03:47 /dev/ada5
crw-r-----  1 root  operator   0xac Nov 17 03:47 /dev/ada5p1
crw-r-----  1 root  operator   0xad Nov 17 03:47 /dev/ada5p2
crw-r-----  1 root  operator   0x9e Nov 17 03:47 /dev/ada6
crw-r-----  1 root  operator   0xae Nov 17 03:47 /dev/ada6p1
crw-r-----  1 root  operator   0xa0 Nov 17 03:49 /dev/ada6p1.eli
crw-r-----  1 root  operator   0xaf Nov 17 03:47 /dev/ada6p2
crw-r-----  1 root  operator   0x9f Nov 17 03:47 /dev/ada7
crw-r-----  1 root  operator   0xb0 Nov 17 03:47 /dev/ada7p1
crw-r-----  1 root  operator   0x9a Nov 17 03:49 /dev/ada7p1.eli
crw-r-----  1 root  operator   0xb1 Nov 17 03:47 /dev/ada7p2
crw-r-----  1 root  operator   0xa2 Nov 17 03:47 /dev/da0
crw-r-----  1 root  operator   0xa3 Nov 17 03:47 /dev/da1
crw-r-----  1 root  operator   0xb4 Nov 17 03:47 /dev/da1p1
crw-r-----  1 root  operator   0xb6 Nov 17 03:49 /dev/da1p1.eli
crw-r-----  1 root  operator   0xb5 Nov 17 03:47 /dev/da1p2
crw-r-----  1 root  operator   0xa4 Nov 17 03:47 /dev/da2
crw-r-----  1 root  operator   0xa5 Nov 17 03:47 /dev/da3
crw-r-----  1 root  operator  0x1ed Nov 17 04:37 /dev/da4
crw-------  1 root  operator   0x72 Nov 17 03:47 /dev/pass0
crw-------  1 root  operator   0x73 Nov 17 03:47 /dev/pass1
crw-------  1 root  operator   0x7c Nov 17 03:47 /dev/pass10
crw-------  1 root  operator   0x7d Nov 17 03:47 /dev/pass11
crw-------  1 root  operator  0x1ec Nov 17 04:37 /dev/pass12
crw-------  1 root  operator   0x74 Nov 17 03:47 /dev/pass2
crw-------  1 root  operator   0x75 Nov 17 03:47 /dev/pass3
crw-------  1 root  operator   0x76 Nov 17 03:47 /dev/pass4
crw-------  1 root  operator   0x77 Nov 17 03:47 /dev/pass5
crw-------  1 root  operator   0x78 Nov 17 03:47 /dev/pass6
crw-------  1 root  operator   0x79 Nov 17 03:47 /dev/pass7
crw-------  1 root  operator   0x7a Nov 17 03:47 /dev/pass8
crw-------  1 root  operator   0x7b Nov 17 03:47 /dev/pass9
What is your version of FreeNAS? I notice when you used the gptid, the error message referenced the wrong device. Was that a paste error?
Version is 11.0-U4 (54848d13b).

I think I see the gptid inconsistency you mean - it's in the /dev/ responses I edited into the OP. I referred to attaching gptid + da4, but the error refers to gptid + da0. It was a paste error, thanks. I've rerun all of the commands and updated the first post to correct it.
 

rs225

Guru
Joined
Jun 28, 2014
Messages
878
In the original post, the last command uses the gptid in the zpool attach. The error message seems to flip the intent, and also references da0 instead of da4.
 

rs225

Guru
Joined
Jun 28, 2014
Messages
878
The only other thing I notice (which may be nothing) is the device number for da4 is much higher than the others; maybe there is a bug somewhere related to that. A reboot would probably re-order it back down lower.
 

Stilez

Guru
Joined
Apr 8, 2016
Messages
529
Rebooting doesn't help. The device node changes (the new disk is now da1 instead of da4; da0 and the gptid are unchanged), but the result is the same:

Code:
# zpool status -v tank
  pool: tank
 state: ONLINE
  scan: scrub repaired 0 in 40h1m with 0 errors on Fri Nov 17 14:16:06 2017
		NAME											STATE	 READ WRITE CKSUM
		tank											ONLINE	   0	 0	 0
		  mirror-0									  ONLINE	   0	 0	 0
			gptid/6c62bc1a-0b7b-11e7-86ae-000743144400  ONLINE	   0	 0	 0
			gptid/94cad523-0b45-11e7-86ae-000743144400  ONLINE	   0	 0	 0
		  mirror-1									  ONLINE	   0	 0	 0
			ada0p2									  ONLINE	   0	 0	 0
			gptid/e619dab7-03f1-11e7-8f93-000743144400  ONLINE	   0	 0	 0
		  mirror-2									  ONLINE	   0	 0	 0
			gptid/c68f80ae-01da-11e7-b762-000743144400  ONLINE	   0	 0	 0
			da0										 ONLINE	   0	 0	 0
		  mirror-4									  ONLINE	   0	 0	 0
			da3										 ONLINE	   0	 0	 0
			da4										 ONLINE	   0	 0	 0
		logs
		  gptid/9d6c704e-0378-11e7-b762-000743144400	ONLINE	   0	 0	 0
		cache
		  gptid/219a066e-0433-11e7-9829-000743144400	ONLINE	   0	 0	 0
errors: No known data errors

# camcontrol devlist
<SEAGATE ST6000NM0054 ET05>		at scbus0 target 0 lun 0 (pass0,da0)
<ATA ST6000NM0024-1HT SN05>		at scbus0 target 2 lun 0 (pass1,da1)
<ATA ST3500418AS CC49>			 at scbus0 target 3 lun 0 (pass2,da2)
<ATA ST4000DM000-1F21 CC52>		at scbus0 target 4 lun 0 (pass3,da3)
<ATA Hitachi HDS72404 A3B0>		at scbus0 target 6 lun 0 (pass4,da4)
<ST6000NM0004-1FT17Z NN01>		 at scbus1 target 0 lun 0 (pass5,ada0)
<ST6000NM0024-1HT17Z SN02>		 at scbus2 target 0 lun 0 (pass6,ada1)
<ST6000NM0115-1YZ110 SN02>		 at scbus3 target 0 lun 0 (pass7,ada2)
<ST6000NM0024-1HT17Z SN02>		 at scbus4 target 0 lun 0 (pass8,ada3)
<INTEL SSDSA2CT040G3 4PC10362>	 at scbus5 target 0 lun 0 (pass9,ada4)
<INTEL SSDSA2CT040G3 4PC10362>	 at scbus6 target 0 lun 0 (pass10,ada5)
<ST6000NM0024-1HT17Z SN02>		 at scbus7 target 0 lun 0 (pass11,ada6)
<ST6000NM0024-1HT17Z SN02>		 at scbus8 target 0 lun 0 (pass12,ada7)

# glabel status
									  Name  Status  Components
gptid/219a066e-0433-11e7-9829-000743144400	 N/A  nvd0p1
gptid/9d6c704e-0378-11e7-b762-000743144400	 N/A  nvd1p1
gptid/6c62bc1a-0b7b-11e7-86ae-000743144400	 N/A  ada1p2
gptid/c68f80ae-01da-11e7-b762-000743144400	 N/A  ada2p2
gptid/9df26501-8cf6-11e7-9a6d-000743144400	 N/A  ada3p2
gptid/3b2b904b-02b3-11e7-b762-000743144400	 N/A  ada4p1
							  label/efibsd	 N/A  ada5p1
gptid/fb71e387-016b-11e7-9ddd-000743144400	 N/A  ada5p1
gptid/94cad523-0b45-11e7-86ae-000743144400	 N/A  ada6p2
gptid/e619dab7-03f1-11e7-8f93-000743144400	 N/A  ada7p2
gptid/b0833c89-73d8-11e7-a989-000743144400	 N/A  da2p2
gptid/7e7b5af1-6859-464b-af5a-6daae8b991bf	 N/A  raid/r0p1

# zpool attach tank da0 da1
cannot attach da1 to da0: no such pool or dataset

# zpool attach tank /dev/da0 /dev/da1
cannot attach /dev/da1 to /dev/da0: no such pool or dataset

# zpool attach tank /dev/gptid/c68f80ae-01da-11e7-b762-000743144400 /dev/da1
cannot attach /dev/da1 to /dev/gptid/c68f80ae-01da-11e7-b762-000743144400: no such pool or dataset

One thing - what do you make of the last glabel entry, "raid/r0p1"? There aren't any disks outside ZFS control AFAIK (my ZIL has an LSI controller under the hood, but there are only 2 NVMe devices and nvd0+nvd1 both appear elsewhere in the list), and I'm using HBA cards only - no hardware RAID or anything like it in use. It looks anomalous. How can I find out what it refers to?
 

Stilez

Guru
Joined
Apr 8, 2016
Messages
529
Yeah, that was it!

I had wiped the disk fully, so it *should* have been empty. I had also checked it, and all the tools said it was uninitialised.

On a hunch I checked it on Windows, because I sometimes use Intel RST caching, and I had a suspicion that Intel RST stores RAID metadata on the disk separately from any usual data. This *might* have been one of the disks I used in soft RAID while the NAS was being built ages ago, and if so...

Sure enough, as soon as I plugged it into Windows, the RST control panel reported it as "degraded" - not empty. I used the RST control panel to delete the "volume" (i.e. wipe the Intel RST metadata and reset the disk), and when I plugged it back into FreeNAS it attached with no issues at all.

Thank you for the help, @rs225 - I would not have seen that, or thought of it, if you hadn't drawn my attention to the list. It's now happily resilvering.
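For anyone who hits this without a Windows box handy: Intel RST reportedly keeps its metadata in the last sectors of the disk, which is why a "full wipe" that only clears the front of the drive can miss it. A minimal sketch of zeroing both ends of a device - shown here against a scratch file standing in for the real disk (e.g. /dev/da1), since doing this to a real disk destroys everything on it:

```shell
# Stand-in "disk": a 16 MiB scratch file instead of a real /dev/daX device.
disk=/tmp/scratch-disk.img
dd if=/dev/zero of="$disk" bs=1048576 count=16 2>/dev/null

# Size in bytes (GNU stat first, BSD stat as fallback), then in 1 MiB blocks.
size=$(stat -c %s "$disk" 2>/dev/null || stat -f %z "$disk")
blocks=$(( size / 1048576 ))

# Zero the first and last 1 MiB: partition tables live at the front,
# while Intel RST metadata (and the GPT backup header) sit near the end.
dd if=/dev/zero of="$disk" bs=1048576 count=1 conv=notrunc 2>/dev/null
dd if=/dev/zero of="$disk" bs=1048576 count=1 seek=$(( blocks - 1 )) conv=notrunc 2>/dev/null
echo "wiped first and last MiB of $disk"
```

On FreeBSD itself, `graid status` and `graid list` should also reveal what created raid/r0 (presumably the geom_raid module tasting the stale Intel label, though I haven't verified that on this system), and `graid delete` can remove the metadata in place.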
 