Not sure which hard drive has failed?

BTW:
Hello,
I have had drives fail in the past: swap the drive and replace it via the GUI. Simple enough.

This situation is a little different. I am not sure whether the drive has actually failed or whether FreeNAS got stuck in the middle of replacing a drive.

From the "View Disks" page, all the drives are listed and each reports a size and serial number (a failed drive usually does not).
From the "Volume Status" page, if I choose the "UNAVAIL" disk and click "Replace", the system does not find a valid disk to replace it with.

Version: FreeNAS-9.10.2-U6 (561f0d7a1)
Setup (general): 10 x 3TB drives in 5 mirrored vdevs (production), 2 x 1TB SSDs in 1 mirrored vdev (production), 5 x 3TB drives in 1 RAIDZ vdev (backups), 1 x 3TB spare, 1 x 400GB PCIe card (SLOG), 1 x 240GB SSD (L2ARC)

Devices:
Code:
[root@cdanas002] ~# camcontrol devlist
<ATA ST3000DM001-1CH1 CC29>		at scbus0 target 0 lun 0 (pass0,da0)
<ATA ST3000DM001-1CH1 CC29>		at scbus0 target 4 lun 0 (pass1,da1)
<ATA ST3000DM001-1CH1 CC29>		at scbus0 target 6 lun 0 (pass2,da2)
<ATA ST3000VN000-1HJ1 SC60>		at scbus0 target 9 lun 0 (pass3,da3)
<ATA ST3000DM008-2DM1 CC26>		at scbus0 target 12 lun 0 (pass4,da4)
<ATA ST3000DM008-2DM1 CC26>		at scbus0 target 13 lun 0 (pass5,da5)
<ATA ST3000DM008-2DM1 CC26>		at scbus0 target 14 lun 0 (pass6,da6)
<ATA ST3000DM008-2DM1 CC26>		at scbus0 target 15 lun 0 (pass7,da7)
<ATA SanDisk SDSSDXPS 00RL>		at scbus1 target 0 lun 0 (pass8,da8)
<ATA Crucial_CT1024MX MU03>		at scbus1 target 1 lun 0 (pass9,da9)
<ATA Crucial_CT1024MX MU03>		at scbus1 target 2 lun 0 (pass10,da10)
<ATA ST3000DM001-1CH1 CC29>		at scbus2 target 0 lun 0 (pass11,da11)
<ATA ST3000DM001-1CH1 CC29>		at scbus2 target 3 lun 0 (pass12,da12)
<ATA ST3000DM001-1CH1 CC29>		at scbus2 target 4 lun 0 (pass13,da13)
<ATA ST3000DM001-1CH1 CC29>		at scbus2 target 5 lun 0 (pass14,da14)
<ATA ST3000DM001-1CH1 CC29>		at scbus2 target 6 lun 0 (pass15,da15)
<ATA ST3000DM001-1CH1 CC29>		at scbus2 target 11 lun 0 (pass16,da16)
<ATA ST3000DM008-2DM1 CC26>		at scbus2 target 12 lun 0 (pass17,da17)
<ATA ST3000DM008-2DM1 CC26>		at scbus2 target 13 lun 0 (pass18,da18)
<Kingston DataTraveler 2.0 PMAP>   at scbus8 target 0 lun 0 (pass19,da19)
<Kingston DataTraveler 2.0 PMAP>   at scbus9 target 0 lun 0 (pass20,da20)


Device status:
Code:
[root@cdanas002] ~# glabel status
									  Name  Status  Components
gptid/632bbce1-30d7-11e6-a99f-00e081c5a93e	 N/A  nvd0p1
gptid/61e3aafa-30d7-11e6-a99f-00e081c5a93e	 N/A  da0p2
gptid/fb43cc18-30db-11e6-a99f-00e081c5a93e	 N/A  da1p2
gptid/42deccfc-30dc-11e6-a99f-00e081c5a93e	 N/A  da2p2
gptid/44fe493e-7049-11e6-af63-00e081c5a93e	 N/A  da3p2
gptid/1236b0ac-b4f7-11e6-a925-00e081c5a93e	 N/A  da4p2
gptid/34a8ef59-b502-11e6-a925-00e081c5a93e	 N/A  da5p2
gptid/8d2722d2-c9e8-11e6-a925-00e081c5a93e	 N/A  da6p2
gptid/6850ae5b-ffbb-11e6-973f-00e081c5a93e	 N/A  da7p2
gptid/62f706f1-30d7-11e6-a99f-00e081c5a93e	 N/A  da8p1
gptid/2a17f3e2-a443-11e6-a46a-00e081c5a93e	 N/A  da9p2
gptid/2a59dd00-a443-11e6-a46a-00e081c5a93e	 N/A  da10p2
gptid/9a51538d-30dc-11e6-a99f-00e081c5a93e	 N/A  da11p2
gptid/63c6a1c2-30d7-11e6-a99f-00e081c5a93e	 N/A  da12p2
gptid/6401a24e-a49d-11e6-a46a-00e081c5a93e	 N/A  da13p2
gptid/649d4853-a49d-11e6-a46a-00e081c5a93e	 N/A  da14p2
gptid/61ad9ca4-a49d-11e6-a46a-00e081c5a93e	 N/A  da15p2
gptid/6347599e-a49d-11e6-a46a-00e081c5a93e	 N/A  da16p2
gptid/b35746ff-e1ca-11e6-a925-00e081c5a93e	 N/A  da17p2
gptid/2a4e9d36-013d-11e7-973f-00e081c5a93e	 N/A  da18p2
gptid/36d5323f-22c4-11e6-a8e1-00e081c5a93e	 N/A  da19p1
gptid/36f29657-22c4-11e6-a8e1-00e081c5a93e	 N/A  da19p2
gptid/373b6d5b-22c4-11e6-a8e1-00e081c5a93e	 N/A  da20p1
gptid/3758e52b-22c4-11e6-a8e1-00e081c5a93e	 N/A  da20p2


Volume status:

Code:
[root@cdanas002] ~# zpool status CDANAS002_VOL01
  pool: CDANAS002_VOL01
 state: DEGRADED
  scan: scrub repaired 0 in 7h7m with 0 errors on Fri Aug 18 19:07:13 2017
config:

		NAME												STATE	 READ WRITE CKSUM
		CDANAS002_VOL01									 DEGRADED	 0	 0	 0
		  mirror-0										  ONLINE	   0	 0	 0
			gptid/61e3aafa-30d7-11e6-a99f-00e081c5a93e	  ONLINE	   0	 0	 0
			gptid/44fe493e-7049-11e6-af63-00e081c5a93e	  ONLINE	   0	 0	 0
		  mirror-2										  ONLINE	   0	 0	 0
			gptid/8d2722d2-c9e8-11e6-a925-00e081c5a93e	  ONLINE	   0	 0	 0
			gptid/63c6a1c2-30d7-11e6-a99f-00e081c5a93e	  ONLINE	   0	 0	 0
		  mirror-3										  ONLINE	   0	 0	 0
			gptid/fb43cc18-30db-11e6-a99f-00e081c5a93e	  ONLINE	   0	 0	 0
			gptid/1236b0ac-b4f7-11e6-a925-00e081c5a93e	  ONLINE	   0	 0	 0
		  mirror-4										  ONLINE	   0	 0	 0
			gptid/42deccfc-30dc-11e6-a99f-00e081c5a93e	  ONLINE	   0	 0	 0
			gptid/6850ae5b-ffbb-11e6-973f-00e081c5a93e	  ONLINE	   0	 0	 0
		  mirror-5										  DEGRADED	 0	 0	 0
			gptid/9a51538d-30dc-11e6-a99f-00e081c5a93e	  ONLINE	   0	 0	 0
			replacing-1									 DEGRADED	 0	 0	 0
			  spare-0									   DEGRADED	 0	 0	 0
				2758674182642180211						 UNAVAIL	  0	 0	 0  was /dev/gptid/eca80bf2-3334-11e6-83fa-00e081c5a93e
				gptid/34a8ef59-b502-11e6-a925-00e081c5a93e  ONLINE	   0	 0	 0
			  gptid/2a4e9d36-013d-11e7-973f-00e081c5a93e	ONLINE	   0	 0	 0
		logs
		  gptid/632bbce1-30d7-11e6-a99f-00e081c5a93e		ONLINE	   0	 0	 0
		cache
		  gptid/62f706f1-30d7-11e6-a99f-00e081c5a93e		ONLINE	   0	 0	 0
		spares
		  2578709866869120013							   INUSE	 was /dev/gptid/34a8ef59-b502-11e6-a925-00e081c5a93e

errors: No known data errors


[root@cdanas002] ~# glabel status | grep eca80bf2
[root@cdanas002] ~# glabel status | grep 2758674182642180211
[root@cdanas002] ~# glabel status | grep 34a8ef59
gptid/34a8ef59-b502-11e6-a925-00e081c5a93e	 N/A  da5p2


[root@cdanas002] ~# camcontrol identify da5 | grep serial
serial number		 C504F34C
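
Since the UNAVAIL member's gptid (eca80bf2...) no longer matches anything in glabel, it looks like the original disk is simply not visible to the system at all. For the members that are still present I can at least check SMART health, e.g. for the spare that is now on da5 (a rough sketch; smartctl comes from the smartmontools that FreeNAS ships):

Code:
# Overall health verdict plus the usual failure-indicating attributes:
smartctl -a /dev/da5 | grep -iE "overall-health|reallocated|pending|uncorrectable"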



Thanks
B
 

BTW:
Forgot to mention:
FreeNAS is not reporting a failed disk via the general "Alert System" status.
 

BTW:
If you are asking whether I started a resilver operation recently, the answer is no. The last drive failure I had was about two months ago.

If you are asking whether I just replaced a drive and started a resilver now, that is the basis of my question: if I do have a failed drive, I am not sure which one I have to replace.

Are there any other commands I can run to see whether the drive has actually failed, or whether a resilver has happened (history)?
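
For what it's worth, these are the only things I have looked at so far (a rough sketch, using the pool name from my first post):

Code:
# Current pool state, per-device errors, and any resilver in progress:
zpool status -v CDANAS002_VOL01

# Command history for the pool; -i also shows internally logged events:
zpool history -i CDANAS002_VOL01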
Thanks
 

BTW:
So I found this on the history of the volume.

Code:
[root@cdanas002] ~# zpool history CDANAS002_VOL01
History for 'CDANAS002_VOL01':
2016-06-12.15:54:03 zpool create -o cachefile=/data/zfs/zpool.cache -o failmode=continue -o autoexpand=on -O compression=lz4 -O aclmode=passthrough -O aclinherit=passthrough -f -m /CDANAS002_VOL01 -o altroot=/mnt CDANAS002_VOL01 mirror /dev/gptid/61e3aafa-30d7-11e6-a99f-00e081c5a93e /dev/gptid/62a04953-30d7-11e6-a99f-00e081c5a93e cache /dev/gptid/62f706f1-30d7-11e6-a99f-00e081c5a93e log /dev/gptid/632bbce1-30d7-11e6-a99f-00e081c5a93e spare /dev/gptid/63c6a1c2-30d7-11e6-a99f-00e081c5a93e
2016-06-12.15:54:03 zfs inherit mountpoint CDANAS002_VOL01
2016-06-12.15:54:03 zpool set cachefile=/data/zfs/zpool.cache CDANAS002_VOL01
2016-06-12.15:54:08 zfs set dedup=off CDANAS002_VOL01
2016-06-12.16:25:08 zpool add -f CDANAS002_VOL01 mirror /dev/gptid/bd02af39-30db-11e6-a99f-00e081c5a93e /dev/gptid/bdf99e42-30db-11e6-a99f-00e081c5a93e
2016-06-12.16:26:53 zpool add -f CDANAS002_VOL01 mirror /dev/gptid/fb43cc18-30db-11e6-a99f-00e081c5a93e /dev/gptid/fc315671-30db-11e6-a99f-00e081c5a93e
2016-06-12.16:28:53 zpool add -f CDANAS002_VOL01 mirror /dev/gptid/42deccfc-30dc-11e6-a99f-00e081c5a93e /dev/gptid/43d4e796-30dc-11e6-a99f-00e081c5a93e
2016-06-12.16:31:19 zpool add -f CDANAS002_VOL01 mirror /dev/gptid/9a51538d-30dc-11e6-a99f-00e081c5a93e /dev/gptid/9b0e4d4a-30dc-11e6-a99f-00e081c5a93e
2016-06-12.20:24:19 zfs create -o volblocksize=16K -V 2T CDANAS002_VOL01/PRODT2-CDANAS002-LUN-204
2016-06-12.21:13:12 zfs set sync=always CDANAS002_VOL01
2016-06-15.15:57:09 zpool import -c /data/zfs/zpool.cache.saved -o cachefile=none -R /mnt -f 16723384952566375311
2016-06-15.15:57:09 zpool set cachefile=/data/zfs/zpool.cache CDANAS002_VOL01
2016-06-15.16:08:39 zpool replace -f CDANAS002_VOL01 9136240301091673146 gptid/eca80bf2-3334-11e6-83fa-00e081c5a93e
2016-06-15.16:08:54 zpool detach CDANAS002_VOL01 9136240301091673146
2016-06-16.08:31:42 zpool import -c /data/zfs/zpool.cache.saved -o cachefile=none -R /mnt -f 16723384952566375311
2016-06-16.08:31:52 zpool set cachefile=/data/zfs/zpool.cache CDANAS002_VOL01
2016-08-02.18:26:06 zpool import -c /data/zfs/zpool.cache.saved -o cachefile=none -R /mnt -f 16723384952566375311
2016-08-02.18:26:06 zpool set cachefile=/data/zfs/zpool.cache CDANAS002_VOL01
2016-08-02.18:43:45 zpool import -c /data/zfs/zpool.cache.saved -o cachefile=none -R /mnt -f 16723384952566375311
2016-08-02.18:43:45 zpool set cachefile=/data/zfs/zpool.cache CDANAS002_VOL01
2016-08-02.19:05:26 zpool import -c /data/zfs/zpool.cache.saved -o cachefile=none -R /mnt -f 16723384952566375311
2016-08-02.19:05:26 zpool set cachefile=/data/zfs/zpool.cache CDANAS002_VOL01
2016-08-15.16:29:42 zpool import -c /data/zfs/zpool.cache.saved -o cachefile=none -R /mnt -f 16723384952566375311
2016-08-15.16:29:42 zpool set cachefile=/data/zfs/zpool.cache CDANAS002_VOL01
2016-08-15.17:50:41 zpool import -c /data/zfs/zpool.cache.saved -o cachefile=none -R /mnt -f 16723384952566375311
2016-08-15.17:50:41 zpool set cachefile=/data/zfs/zpool.cache CDANAS002_VOL01
2016-08-15.20:31:20 zpool import -c /data/zfs/zpool.cache.saved -o cachefile=none -R /mnt -f 16723384952566375311
2016-08-15.20:31:20 zpool set cachefile=/data/zfs/zpool.cache CDANAS002_VOL01
2016-08-15.20:50:23 zpool import -c /data/zfs/zpool.cache.saved -o cachefile=none -R /mnt -f 16723384952566375311
2016-08-15.20:50:38 zpool set cachefile=/data/zfs/zpool.cache CDANAS002_VOL01
2016-08-19.13:46:38 zpool import -c /data/zfs/zpool.cache.saved -o cachefile=none -R /mnt -f 16723384952566375311
2016-08-19.13:46:38 zpool set cachefile=/data/zfs/zpool.cache CDANAS002_VOL01
2016-08-21.00:00:09 zpool scrub CDANAS002_VOL01
2016-08-22.02:00:09 zpool scrub CDANAS002_VOL01
2016-08-23.04:00:09 zpool scrub CDANAS002_VOL01
2016-08-24.06:00:10 zpool scrub CDANAS002_VOL01
2016-08-25.08:00:09 zpool scrub CDANAS002_VOL01
2016-08-26.10:00:09 zpool scrub CDANAS002_VOL01
2016-08-27.12:00:10 zpool scrub CDANAS002_VOL01
2016-08-28.14:00:09 zpool scrub CDANAS002_VOL01
2016-08-29.16:00:09 zpool scrub CDANAS002_VOL01
2016-08-30.18:00:09 zpool scrub CDANAS002_VOL01
2016-09-01.02:10:56 zpool import -c /data/zfs/zpool.cache.saved -o cachefile=none -R /mnt -f 16723384952566375311
2016-09-01.02:10:56 zpool set cachefile=/data/zfs/zpool.cache CDANAS002_VOL01
2016-09-01.09:27:53 zpool import -c /data/zfs/zpool.cache.saved -o cachefile=none -R /mnt -f 16723384952566375311
2016-09-01.09:27:53 zpool set cachefile=/data/zfs/zpool.cache CDANAS002_VOL01
2016-09-01.09:37:58 zpool replace -f CDANAS002_VOL01 17967836871681542934 gptid/44fe493e-7049-11e6-af63-00e081c5a93e
2016-09-01.09:38:12 zpool detach CDANAS002_VOL01 17967836871681542934
2016-11-06.11:28:32 zpool import -c /data/zfs/zpool.cache.saved -o cachefile=none -R /mnt -f 16723384952566375311
2016-11-06.11:28:42 zpool set cachefile=/data/zfs/zpool.cache CDANAS002_VOL01
2016-11-06.11:57:00 zpool import -c /data/zfs/zpool.cache.saved -o cachefile=none -R /mnt -f 16723384952566375311
2016-11-06.11:57:05 zpool set cachefile=/data/zfs/zpool.cache CDANAS002_VOL01
2016-11-06.12:00:09 zpool scrub CDANAS002_VOL01
2016-11-13.12:36:33 zpool scrub CDANAS002_VOL01
2016-11-14.20:18:04 zpool import -c /data/zfs/zpool.cache.saved -o cachefile=none -R /mnt -f 16723384952566375311
2016-11-14.20:18:04 zpool set cachefile=/data/zfs/zpool.cache CDANAS002_VOL01
2016-11-27.17:32:54 zpool import -c /data/zfs/zpool.cache.saved -o cachefile=none -R /mnt -f 16723384952566375311
2016-11-27.17:32:54 zpool set cachefile=/data/zfs/zpool.cache CDANAS002_VOL01
2016-11-27.18:01:21 zpool scrub CDANAS002_VOL01
2016-11-27.18:13:26 zpool replace CDANAS002_VOL01 4561408088713390006 gptid/1236b0ac-b4f7-11e6-a925-00e081c5a93e
2016-11-27.18:13:45 zpool detach CDANAS002_VOL01 4561408088713390006
2016-11-27.19:08:39 zpool detach CDANAS002_VOL01 9150220252052241131
2016-11-27.19:33:02 zpool add -f CDANAS002_VOL01 spare /dev/gptid/34a8ef59-b502-11e6-a925-00e081c5a93e
2016-12-04.18:30:09 zpool scrub CDANAS002_VOL01
2016-12-12.00:00:09 zpool scrub CDANAS002_VOL01
2016-12-19.00:30:09 zpool scrub CDANAS002_VOL01
2016-12-24.09:52:21 zpool replace CDANAS002_VOL01 13759733357039705620 gptid/8d2722d2-c9e8-11e6-a925-00e081c5a93e
2016-12-24.09:52:37 zpool detach CDANAS002_VOL01 13759733357039705620
2016-12-26.06:00:09 zpool scrub CDANAS002_VOL01
2016-12-28.19:24:58 zfs create -s -o volblocksize=16K -V 4.5T CDANAS002_VOL01/prodt2-cdanas002-lun-205
2016-12-31.08:56:05 zfs create -s -o volblocksize=16K -V 300G CDANAS002_VOL01/templates-cdanas002-lun-206
2017-01-02.06:30:09 zpool scrub CDANAS002_VOL01
2017-01-09.12:00:10 zpool scrub CDANAS002_VOL01
2017-01-16.12:30:09 zpool scrub CDANAS002_VOL01
2017-01-23.18:00:09 zpool scrub CDANAS002_VOL01
2017-01-30.18:30:09 zpool scrub CDANAS002_VOL01
2017-02-04.15:44:35 zpool import -c /data/zfs/zpool.cache.saved -o cachefile=none -R /mnt -f 16723384952566375311
2017-02-04.15:44:40 zpool set cachefile=/data/zfs/zpool.cache CDANAS002_VOL01
2017-02-04.17:06:43 zpool import -c /data/zfs/zpool.cache.saved -o cachefile=none -R /mnt -f 16723384952566375311
2017-02-04.17:06:43 zpool set cachefile=/data/zfs/zpool.cache CDANAS002_VOL01
2017-02-07.00:00:09 zpool scrub CDANAS002_VOL01
2017-02-14.00:30:08 zpool scrub CDANAS002_VOL01
2017-02-21.06:00:09 zpool scrub CDANAS002_VOL01
2017-02-25.07:18:22 zpool import -c /data/zfs/zpool.cache.saved -o cachefile=none -R /mnt -f 16723384952566375311
2017-02-25.07:18:22 zpool set cachefile=/data/zfs/zpool.cache CDANAS002_VOL01
2017-02-28.06:30:08 zpool scrub CDANAS002_VOL01
2017-03-02.21:45:29 zpool replace CDANAS002_VOL01 gptid/43d4e796-30dc-11e6-a99f-00e081c5a93e gptid/6850ae5b-ffbb-11e6-973f-00e081c5a93e
2017-03-04.19:46:40 zpool replace CDANAS002_VOL01 gptid/eca80bf2-3334-11e6-83fa-00e081c5a93e gptid/2a4e9d36-013d-11e7-973f-00e081c5a93e
2017-03-07.12:00:09 zpool scrub CDANAS002_VOL01
2017-03-14.18:00:09 zpool scrub CDANAS002_VOL01
2017-03-18.11:48:00 zpool import -c /data/zfs/zpool.cache.saved -o cachefile=none -R /mnt -f 16723384952566375311
2017-03-18.11:48:11 zpool set cachefile=/data/zfs/zpool.cache CDANAS002_VOL01
2017-03-18.11:55:41 zpool import -c /data/zfs/zpool.cache.saved -o cachefile=none -R /mnt -f 16723384952566375311
2017-03-18.11:55:41 zpool set cachefile=/data/zfs/zpool.cache CDANAS002_VOL01
2017-03-18.14:31:38 zpool import -c /data/zfs/zpool.cache.saved -o cachefile=none -R /mnt -f 16723384952566375311
2017-03-18.14:31:38 zpool set cachefile=/data/zfs/zpool.cache CDANAS002_VOL01
2017-03-21.18:30:09 zpool scrub CDANAS002_VOL01
2017-03-29.00:00:09 zpool scrub CDANAS002_VOL01
2017-04-05.00:30:09 zpool scrub CDANAS002_VOL01
2017-04-12.06:00:09 zpool scrub CDANAS002_VOL01
2017-04-19.06:30:09 zpool scrub CDANAS002_VOL01
2017-04-26.12:00:09 zpool scrub CDANAS002_VOL01
2017-05-03.12:30:09 zpool scrub CDANAS002_VOL01
2017-05-05.15:04:54 zpool import -c /data/zfs/zpool.cache.saved -o cachefile=none -R /mnt -f 16723384952566375311
2017-05-05.15:04:54 zpool set cachefile=/data/zfs/zpool.cache CDANAS002_VOL01
2017-05-10.18:00:09 zpool scrub CDANAS002_VOL01
2017-05-17.18:30:10 zpool scrub CDANAS002_VOL01
2017-05-25.00:00:10 zpool scrub CDANAS002_VOL01
2017-06-01.00:30:09 zpool scrub CDANAS002_VOL01
2017-06-08.06:00:10 zpool scrub CDANAS002_VOL01
2017-06-15.06:30:10 zpool scrub CDANAS002_VOL01
2017-06-17.11:16:56 zpool import -c /data/zfs/zpool.cache.saved -o cachefile=none -R /mnt -f 16723384952566375311
2017-06-17.11:17:11 zpool set cachefile=/data/zfs/zpool.cache CDANAS002_VOL01
2017-06-17.11:23:07 zpool import -c /data/zfs/zpool.cache.saved -o cachefile=none -R /mnt -f 16723384952566375311
2017-06-17.11:23:07 zpool set cachefile=/data/zfs/zpool.cache CDANAS002_VOL01
2017-06-17.13:42:23 zpool import -c /data/zfs/zpool.cache.saved -o cachefile=none -R /mnt -f 16723384952566375311
2017-06-17.13:42:28 zpool set cachefile=/data/zfs/zpool.cache CDANAS002_VOL01
2017-06-22.12:00:09 zpool scrub CDANAS002_VOL01
2017-06-29.12:30:10 zpool scrub CDANAS002_VOL01
2017-07-06.18:00:11 zpool scrub CDANAS002_VOL01
2017-07-13.18:30:11 zpool scrub CDANAS002_VOL01
2017-07-21.00:00:11 zpool scrub CDANAS002_VOL01
2017-07-28.00:30:11 zpool scrub CDANAS002_VOL01
2017-07-31.17:01:12 zpool import -c /data/zfs/zpool.cache.saved -o cachefile=none -R /mnt -f 16723384952566375311
2017-07-31.17:01:12 zpool set cachefile=/data/zfs/zpool.cache CDANAS002_VOL01
2017-07-31.18:27:02 zpool import -c /data/zfs/zpool.cache.saved -o cachefile=none -R /mnt -f 16723384952566375311
2017-07-31.18:27:12 zpool set cachefile=/data/zfs/zpool.cache CDANAS002_VOL01
2017-07-31.18:33:05 zpool import -c /data/zfs/zpool.cache.saved -o cachefile=none -R /mnt -f 16723384952566375311
2017-07-31.18:33:10 zpool set cachefile=/data/zfs/zpool.cache CDANAS002_VOL01
2017-08-04.06:00:11 zpool scrub CDANAS002_VOL01
2017-08-11.06:30:12 zpool scrub CDANAS002_VOL01
2017-08-18.12:00:12 zpool scrub CDANAS002_VOL01
2017-08-19.11:22:46 zfs create -s -o volblocksize=16K -V 3T CDANAS002_VOL01/PRODT2-CDANAS002-LUN-207
2017-08-19.11:22:51 zfs set org.freenas:description=Media Store CDANAS002_VOL01/PRODT2-CDANAS002-LUN-207



If you look at:
Code:
2017-03-02.21:45:29 zpool replace CDANAS002_VOL01 gptid/43d4e796-30dc-11e6-a99f-00e081c5a93e gptid/6850ae5b-ffbb-11e6-973f-00e081c5a93e
2017-03-04.19:46:40 zpool replace CDANAS002_VOL01 gptid/eca80bf2-3334-11e6-83fa-00e081c5a93e gptid/2a4e9d36-013d-11e7-973f-00e081c5a93e


You will notice there are two replace commands but no corresponding detach command afterwards (compare with the earlier replacements, which are each followed by a detach).

Does that mean the resilver failed?
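
For anyone reading along, this is how I have been trying to tell whether that second replace ever completed (a sketch; if the replace had finished and the old member had been detached automatically, the "replacing-1" vdev would normally collapse away, but it is still there):

Code:
# The replacing-1 / spare-0 vdevs are still present under mirror-5:
zpool status -v CDANAS002_VOL01 | grep -A 4 replacing

# Internal events (scan starts/completions) are logged with -i:
zpool history -i CDANAS002_VOL01 | tail -n 40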
 

BTW:
Can anyone give a definitive answer on this?

Did the drive actually fail, or did the resilver process fail?

At this point I cannot find any command that would say for sure.

Thanks
B
 

BTW:
I was not able to get any clarification on this error; even the bug tracker was no real help (https://bugs.freenas.org/issues/25704).

So here is the fix I used (make sure you have good backups first). A worked example with my actual pool and guid follows the steps:

  1. Get into the command line of FreeNAS.
  2. Check the status of the DEGRADED pool:
  3. Code:
    zpool status %NameOfYourPool%

  4. One member will be shown with the status "UNAVAIL" (see my first post for an example). Note its identifier (the long numeric guid).
  5. Manually detach that member from the pool (see http://docs.oracle.com/cd/E19253-01/819-5461/6n7ht6qvt/index.html, example 4-10). Note the argument order: pool first, then the device/guid:
  6. Code:
    zpool detach %NameOfYourPool% %NameOfDrive%

  7. Check the pool status again; it should now show as healthy.
  8. In my case, FreeNAS then permanently kicked/removed the spare drive that had been pulled into this vdev.
  9. Add the spare back into the pool (as per the normal FreeNAS procedure).
  10. Reboot the system (not strictly necessary, but a good idea).
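
In my case the whole thing boiled down to the following (the long number is the guid that zpool status reported for the UNAVAIL member in my first post):

Code:
# Confirm which member is UNAVAIL and note its guid:
zpool status CDANAS002_VOL01

# Detach the stale member (syntax: zpool detach <pool> <device-or-guid>):
zpool detach CDANAS002_VOL01 2758674182642180211

# Verify the pool now reports as healthy:
zpool status CDANAS002_VOL01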
 