May I have your opinion on what happened?

Status
Not open for further replies.

AuBird

Dabbler
Joined
Aug 17, 2015
Messages
29
I feel like this is wrong. That line should end with IT (Initiator Target) instead of IR. The IR firmware has the hardware RAID functionality and FreeNAS doesn't like that. The other thing, which doesn't feel like as big a deal, is this:

That firmware is a little out of date; if I recall correctly, the latest is 20.00.07.00.
I am not saying this is the problem, but it looks like a possibility. Did this system come from iXsystems configured like this?


I cannot say with 100% certainty, but I think so. I remember thinking it was strange when we first bought it, but went with it.
 

AuBird

Dabbler
Joined
Aug 17, 2015
Messages
29
Just checked two others purchased around the same time and they also have the IR branch. Both of these are under a decent load and have been rock solid.

Code:
		Firmware Product ID			: 0x2714 (IR)
		Firmware Version			   : 20.00.02.00
		NVDATA Vendor				  : LSI
		NVDATA Product ID			  : LSI2308-IR



Code:
		Firmware Product ID			: 0x2714 (IR)
		Firmware Version			   : 20.00.04.00
		NVDATA Vendor				  : LSI
		NVDATA Product ID			  : LSI2308-IR
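For anyone checking their own cards, a quick sanity check is to grep a saved sas2flash report for the (IR) marker. This is just a sketch; the sample output is inlined below for illustration, and on a live host you would capture the real report with `sas2flash -list` first:

```shell
# Sample 'sas2flash -list' output inlined for illustration only; on the host,
# capture the real thing with: sas2flash -list > /tmp/sas2flash.txt
cat <<'EOF' > /tmp/sas2flash.txt
Firmware Product ID            : 0x2714 (IR)
Firmware Version               : 20.00.02.00
NVDATA Product ID              : LSI2308-IR
EOF

# Flag IR-branch firmware, which carries the hardware RAID layer.
if grep -q '(IR)' /tmp/sas2flash.txt; then
    echo "IR firmware detected"
fi
```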

 

rs225

Guru
Joined
Jun 28, 2014
Messages
878
I would run a memory test, and then start suspecting power, system, or the controller. If you move the drives and no errors show on scrub, then that tells you the problem is happening during the read process, and is not actually in the drives or what was written. Especially look at that controller firmware.
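One cheap way to test the read path independently of ZFS is to checksum a file once, then re-read and verify it later; a mismatch on re-read points at the controller, cabling, or memory rather than at what was written. A minimal sketch (paths are placeholders, and FreeBSD ships sha256(1) rather than GNU sha256sum):

```shell
# Write a small file of random data and record its checksum.
# /tmp paths are placeholders - for a real test, point these at the pool in question.
dd if=/dev/urandom of=/tmp/readcheck.bin bs=1k count=64 2>/dev/null
sha256sum /tmp/readcheck.bin > /tmp/readcheck.sha

# Re-read and verify; any corruption on the read path shows up as FAILED.
sha256sum -c /tmp/readcheck.sha
```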
 

AuBird

Dabbler
Joined
Aug 17, 2015
Messages
29
Well... this happened. No worries though, we already got our data off before the scrub did its thing. I'll pull the drives out of this chassis and place them in our spare, then I'll start running a memory test. Kinda cool to see what happens when things go south.

Code:
[root@freenas] ~# zpool status -v
  pool: fnas01p01
 state: DEGRADED
status: One or more devices is currently being resilvered.  The pool will
		continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
  scan: resilver in progress since Thu Nov 30 12:45:44 2017
		329G scanned out of 8.65T at 378M/s, 6h24m to go
		28.4G resilvered, 3.71% done
config:

		NAME											  STATE	 READ WRITE CKSUM
		fnas01p01										 DEGRADED	 0	 0   233
		  raidz2-0										DEGRADED	 0	 0   250
			gptid/dc9a2811-615c-11e5-929b-0cc47a55b944	ONLINE	   0	 0	11
			gptid/dce660b8-615c-11e5-929b-0cc47a55b944	DEGRADED	 0	 0	12  too many errors
			spare-2									   DEGRADED	 0	 0	 0
			  gptid/dd342422-615c-11e5-929b-0cc47a55b944  DEGRADED	 0	 0	10  too many errors
			  gptid/bcabc891-62ba-11e5-929b-0cc47a55b944  ONLINE	   0	 0	 0  (resilvering)
			gptid/dd830db8-615c-11e5-929b-0cc47a55b944	ONLINE	   0	 0	 5
			gptid/ddd197f0-615c-11e5-929b-0cc47a55b944	ONLINE	   0	 0	 7
			gptid/de20caf7-615c-11e5-929b-0cc47a55b944	DEGRADED	 0	 0	 8  too many errors
			spare-6									   DEGRADED	 0	 0	 0
			  gptid/de70e4c0-615c-11e5-929b-0cc47a55b944  DEGRADED	 0	 0	 7  too many errors
			  gptid/e00654c8-615c-11e5-929b-0cc47a55b944  ONLINE	   0	 0	 0  (resilvering)
			gptid/debecb5a-615c-11e5-929b-0cc47a55b944	DEGRADED	 0	 0	 5  too many errors
			gptid/df0e3e33-615c-11e5-929b-0cc47a55b944	DEGRADED	 0	 0	 7  too many errors
			gptid/df5e1829-615c-11e5-929b-0cc47a55b944	DEGRADED	 0	 0	10  too many errors
			gptid/dfaea5ca-615c-11e5-929b-0cc47a55b944	ONLINE	   0	 0	 5
		  raidz2-1										DEGRADED	 0	 0   216
			gptid/d880b4e4-62b8-11e5-929b-0cc47a55b944	DEGRADED	 0	 0	10  too many errors
			gptid/d8ed9dcd-62b8-11e5-929b-0cc47a55b944	DEGRADED	 0	 0	 3  too many errors
			gptid/d95e33ef-62b8-11e5-929b-0cc47a55b944	DEGRADED	 0	 0	 6  too many errors
			gptid/d9c8a064-62b8-11e5-929b-0cc47a55b944	DEGRADED	 0	 0	 7  too many errors
			gptid/da3642fb-62b8-11e5-929b-0cc47a55b944	DEGRADED	 0	 0	 5  too many errors
			gptid/daa1e095-62b8-11e5-929b-0cc47a55b944	DEGRADED	 0	 0	 5  too many errors
			gptid/db0bc938-62b8-11e5-929b-0cc47a55b944	DEGRADED	 0	 0	 6  too many errors
			gptid/db7cfec9-62b8-11e5-929b-0cc47a55b944	DEGRADED	 0	 0	12  too many errors
			gptid/dbeac94c-62b8-11e5-929b-0cc47a55b944	DEGRADED	 0	 0	 9  too many errors
			gptid/dc567379-62b8-11e5-929b-0cc47a55b944	DEGRADED	 0	 0	 2  too many errors
			gptid/dcc284fc-62b8-11e5-929b-0cc47a55b944	DEGRADED	 0	 0	14  too many errors
		spares
		  3464253044225404816							 INUSE	 was /dev/gptid/e00654c8-615c-11e5-929b-0cc47a55b944
		  6813808265978131437							 INUSE	 was /dev/gptid/bcabc891-62ba-11e5-929b-0cc47a55b944

errors: Permanent errors have been detected in the following files:

		fnas01p01/rep01lun01:<0x1>


 

AuBird

Dabbler
Joined
Aug 17, 2015
Messages
29
Well, I'll wait to see what the resilver does first.
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
Well, I'll wait to see what the resilver does first.
That is a totally crashed pool. I am happy that you got your data out first, but there isn't any point in waiting for it to try to recover.
You said you have another server you can put the drives in to test them?
I would do that to try to find out whether you got a batch of bad drives or whether it is a bad controller. At this point, I am leaning toward a bad controller, but it could be that you got a bad run of disks.
Here is a link to some scripts the community has put together; one of them is designed to test drives prior to putting them in production. I would run that against these drives, in a different server, to see if the drives are at fault. If the drives work under a different controller, that answers the question.
https://forums.freenas.org/index.ph...for-freenas-scripts-including-disk-burnin.28/
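As a lighter-weight complement to those burn-in scripts, a SMART long self-test on each drive is a common first pass. A dry-run loop that just prints the commands (da0..da3 are placeholder device names; list yours with `camcontrol devlist`):

```shell
# Dry run: print the smartctl long-test command for each drive instead of running it.
# Drop the 'echo' once the device list matches your system.
for d in da0 da1 da2 da3; do
    echo "smartctl -t long /dev/$d"
done
```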
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
Fatal errors on all (or almost all) disks really is fishy. @Chris Moore suggests a bad controller, which is certainly plausible; bad cables are another possibility. I'm not sure about the firmware version on your card; I recall hearing of issues with some of the 20-series versions, but don't recall specifically if 20.00.02 was problematic. The IR firmware isn't necessarily a problem, as long as you aren't actually using the IR features (which you don't appear to be).
 

AuBird

Dabbler
Joined
Aug 17, 2015
Messages
29
Just an update in case anyone was wondering. I've rolled back from 9.10 to 9.3, recreated the pool, and used iozone to fill it to 75% capacity, and then started a scrub. So far no hints of any issues.

Code:
[root@freenas] /mnt/tank/fillmebabyonemoretime# zfs list
tank														32.9T  9.23T	96K  /mnt/tank
tank/.system												1.01M  9.23T   104K  legacy
tank/.system/configs-f1ae6c68bbe041c7bb38cadeec088781		488K  9.23T   488K  legacy
tank/.system/cores											96K  9.23T	96K  legacy
tank/.system/rrd-f1ae6c68bbe041c7bb38cadeec088781			 96K  9.23T	96K  legacy
tank/.system/samba4										  152K  9.23T   152K  legacy
tank/.system/syslog-f1ae6c68bbe041c7bb38cadeec088781		  96K  9.23T	96K  legacy
tank/fillmebabyonemoretime								  32.9T  9.23T  32.9T  /mnt/tank/fillmebabyonemoretime

  pool: tank
 state: ONLINE
  scan: scrub in progress since Tue Dec  5 07:12:47 2017
		1.17T scanned out of 32.9T at 983M/s, 9h23m to go
		0 repaired, 3.56% done
config:

		NAME											STATE	 READ WRITE CKSUM
		tank											ONLINE	   0	 0	 0
		  mirror-0									  ONLINE	   0	 0	 0
			gptid/db76f5f6-d930-11e7-bd1e-0cc47a55b944  ONLINE	   0	 0	 0
			gptid/dbce6ce5-d930-11e7-bd1e-0cc47a55b944  ONLINE	   0	 0	 0
		  mirror-1									  ONLINE	   0	 0	 0
			gptid/dc25a8fc-d930-11e7-bd1e-0cc47a55b944  ONLINE	   0	 0	 0
			gptid/dc7db71b-d930-11e7-bd1e-0cc47a55b944  ONLINE	   0	 0	 0
		  mirror-2									  ONLINE	   0	 0	 0
			gptid/dcd4ba50-d930-11e7-bd1e-0cc47a55b944  ONLINE	   0	 0	 0
			gptid/dd2c3eee-d930-11e7-bd1e-0cc47a55b944  ONLINE	   0	 0	 0
		  mirror-3									  ONLINE	   0	 0	 0
			gptid/dd85d4dc-d930-11e7-bd1e-0cc47a55b944  ONLINE	   0	 0	 0
			gptid/dddfb6ee-d930-11e7-bd1e-0cc47a55b944  ONLINE	   0	 0	 0
		  mirror-4									  ONLINE	   0	 0	 0
			gptid/de3832cc-d930-11e7-bd1e-0cc47a55b944  ONLINE	   0	 0	 0
			gptid/de92c0d1-d930-11e7-bd1e-0cc47a55b944  ONLINE	   0	 0	 0
		  mirror-5									  ONLINE	   0	 0	 0
			gptid/deee82a8-d930-11e7-bd1e-0cc47a55b944  ONLINE	   0	 0	 0
			gptid/df4734ee-d930-11e7-bd1e-0cc47a55b944  ONLINE	   0	 0	 0
		  mirror-6									  ONLINE	   0	 0	 0
			gptid/dfa79c6f-d930-11e7-bd1e-0cc47a55b944  ONLINE	   0	 0	 0
			gptid/e003c820-d930-11e7-bd1e-0cc47a55b944  ONLINE	   0	 0	 0
		  mirror-7									  ONLINE	   0	 0	 0
			gptid/e066c858-d930-11e7-bd1e-0cc47a55b944  ONLINE	   0	 0	 0
			gptid/e0c09847-d930-11e7-bd1e-0cc47a55b944  ONLINE	   0	 0	 0
		  mirror-8									  ONLINE	   0	 0	 0
			gptid/e11d7d66-d930-11e7-bd1e-0cc47a55b944  ONLINE	   0	 0	 0
			gptid/e178f30a-d930-11e7-bd1e-0cc47a55b944  ONLINE	   0	 0	 0
		  mirror-9									  ONLINE	   0	 0	 0
			gptid/e1da00c2-d930-11e7-bd1e-0cc47a55b944  ONLINE	   0	 0	 0
			gptid/e23bac8d-d930-11e7-bd1e-0cc47a55b944  ONLINE	   0	 0	 0
		  mirror-10									 ONLINE	   0	 0	 0
			gptid/e29c1f92-d930-11e7-bd1e-0cc47a55b944  ONLINE	   0	 0	 0
			gptid/e2f60d9d-d930-11e7-bd1e-0cc47a55b944  ONLINE	   0	 0	 0
		  mirror-11									 ONLINE	   0	 0	 0
			gptid/e36091c7-d930-11e7-bd1e-0cc47a55b944  ONLINE	   0	 0	 0
			gptid/e3bcb637-d930-11e7-bd1e-0cc47a55b944  ONLINE	   0	 0	 0

errors: No known data errors

 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
Just an update in case anyone was wondering. I've rolled back from 9.10 to 9.3, recreated the pool, and used iozone to fill it to 75% capacity, and then started a scrub. So far no hints of any issues.
Arranged as a pool of mirrors, it should be much faster, but at the cost of storage capacity. Is that just for testing, or is this how you want the pool configured?
 

AuBird

Dabbler
Joined
Aug 17, 2015
Messages
29
Chris,
This is just for testing. I'll use three RAIDZ2 vdevs for production, or maybe RAIDZ3. Not really sure if RAIDZ3 is worth the storage space hit.

Robert
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
This is just for testing.
Thanks for the update. I am curious where the problem is in this system, drives or controller or something else...
Is this diagnostic testing on the drives in another system or is this still in the original chassis that was having issues?
 

AuBird

Dabbler
Joined
Aug 17, 2015
Messages
29
Original chassis. If/when I figure this out I'll mark this thread as solved with what I found.
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
FYI: IR firmware is not a problem per se. IT is better since it has fewer moving parts, but IR should be a superset of IT.

As for the version, older P20 revisions were very buggy. Definitely update to .07.
 

AuBird

Dabbler
Joined
Aug 17, 2015
Messages
29
So I've reverted back to 9.3, rebuilt the pool, filled it with 33TB of data, ran a scrub, upgraded to 9.10, and then ran another scrub. No errors.
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
Did you change anything, firmware etc., other than just rebuilding the pool as mirrors?

Sent from my SAMSUNG-SGH-I537 using Tapatalk
 

AuBird

Dabbler
Joined
Aug 17, 2015
Messages
29
Did you change anything, firmware etc., other than just rebuilding the pool as mirrors?

No changes to firmware.
 

AuBird

Dabbler
Joined
Aug 17, 2015
Messages
29
So it happened again to another FreeNAS; well, at least one of the two pools on this FreeNAS shows corruption. Same scenario where I upgraded from 9.3 to the latest, this time going from 9.3 to 11.1. I also upgraded the controller firmware from the 02 version to 07 during this upgrade. I shut down the FreeNAS, removed all drives, upgraded to 11.1 and shut down, inserted all drives back into the FreeNAS, powered up and upgraded the pool, then rebooted once more. It was fine all night until backups were replicated to the server using the now-corrupted pool.

My plan now is to just throw a spare EqualLogic in there and copy the data off, then start with a fresh pool.

Code:
root@freenas02:~ # zpool status -v
  pool: fnas02p01_2z
 state: ONLINE
status: One or more devices has experienced an error resulting in data
		corruption.  Applications may be affected.
action: Restore the file in question if possible.  Otherwise restore the
		entire pool from backup.
   see: http://illumos.org/msg/ZFS-8000-8A
  scan: scrub repaired 0 in 2 days 22:52:23 with 0 errors on Tue Nov 21 22:52:26 2017
config:

		NAME											STATE	 READ WRITE CKSUM
		fnas02p01_2z									ONLINE	   0	 0	 0
		  raidz2-0									  ONLINE	   0	 0	 0
			gptid/492612cd-65f7-11e5-b8f3-0cc47a55b7f0  ONLINE	   0	 0	 0
			gptid/4974f654-65f7-11e5-b8f3-0cc47a55b7f0  ONLINE	   0	 0	 0
			gptid/49c20fde-65f7-11e5-b8f3-0cc47a55b7f0  ONLINE	   0	 0	 0
			gptid/4a12e84e-65f7-11e5-b8f3-0cc47a55b7f0  ONLINE	   0	 0	 0
			gptid/4a64eb1f-65f7-11e5-b8f3-0cc47a55b7f0  ONLINE	   0	 0	 0
			gptid/4ab64141-65f7-11e5-b8f3-0cc47a55b7f0  ONLINE	   0	 0	 0
			gptid/4b065e0c-65f7-11e5-b8f3-0cc47a55b7f0  ONLINE	   0	 0	 0
			gptid/4b5831d4-65f7-11e5-b8f3-0cc47a55b7f0  ONLINE	   0	 0	 0
			gptid/4bacc225-65f7-11e5-b8f3-0cc47a55b7f0  ONLINE	   0	 0	 0
			gptid/4bfd2392-65f7-11e5-b8f3-0cc47a55b7f0  ONLINE	   0	 0	 0
			gptid/4c4e1480-65f7-11e5-b8f3-0cc47a55b7f0  ONLINE	   0	 0	 0
		spares
		  gptid/a9c039eb-65f7-11e5-b8f3-0cc47a55b7f0	AVAIL

errors: Permanent errors have been detected in the following files:

		fnas02p01_2z/p01Lun01:<0x1>

  pool: fnas02p02_r10
 state: ONLINE
  scan: scrub repaired 0 in 1 days 05:41:50 with 0 errors on Mon Dec  4 05:41:54 2017
config:

		NAME											STATE	 READ WRITE CKSUM
		fnas02p02_r10								   ONLINE	   0	 0	 0
		  mirror-0									  ONLINE	   0	 0	 0
			gptid/1e24ac5f-938e-11e5-a269-0cc47a55b7f0  ONLINE	   0	 0	 0
			gptid/1e78ab9c-938e-11e5-a269-0cc47a55b7f0  ONLINE	   0	 0	 0
		  mirror-1									  ONLINE	   0	 0	 0
			gptid/1ed1c43c-938e-11e5-a269-0cc47a55b7f0  ONLINE	   0	 0	 0
			gptid/1f26ab0f-938e-11e5-a269-0cc47a55b7f0  ONLINE	   0	 0	 0
		  mirror-2									  ONLINE	   0	 0	 0
			gptid/1f8616d5-938e-11e5-a269-0cc47a55b7f0  ONLINE	   0	 0	 0
			gptid/1fdea31a-938e-11e5-a269-0cc47a55b7f0  ONLINE	   0	 0	 0
		  mirror-3									  ONLINE	   0	 0	 0
			gptid/203c97b1-938e-11e5-a269-0cc47a55b7f0  ONLINE	   0	 0	 0
			gptid/20957c22-938e-11e5-a269-0cc47a55b7f0  ONLINE	   0	 0	 0
		  mirror-4									  ONLINE	   0	 0	 0
			gptid/20f90754-938e-11e5-a269-0cc47a55b7f0  ONLINE	   0	 0	 0
			gptid/215072d8-938e-11e5-a269-0cc47a55b7f0  ONLINE	   0	 0	 0
		logs
		  gptid/43ea9637-6316-11e6-b12e-0cc47a55b7f0	ONLINE	   0	 0	 0
		cache
		  gptid/6faa0391-6316-11e6-b12e-0cc47a55b7f0	ONLINE	   0	 0	 0
		spares
		  gptid/21aeb105-938e-11e5-a269-0cc47a55b7f0	AVAIL

errors: No known data errors

 

AuBird

Dabbler
Joined
Aug 17, 2015
Messages
29
EDIT: Added the note about upgrading to 11.1. Previous scrubs were all on 11.0-U4 with no issue.


Heh, happened again to the original FreeNAS in this post. I'm starting to suspect Windows servers talking to FreeNAS 11.x over iSCSI are breaking things.

FreeNAS 11.1
LSI controller with the 20.07 firmware


I had rebuilt the pool and was running scrub after scrub with no issues. I then upgraded to 11.1, rebuilt it once more, and set up iSCSI sharing, then forgot about it until today, when I ran two scrubs back to back; the second scrub broke it. Or it was a combination of running the scrub and using Crystal Disk 6.0 while the scrub was running.

Code:
  pool: tank
 state: ONLINE
status: One or more devices has experienced an error resulting in data
		corruption.  Applications may be affected.
action: Restore the file in question if possible.  Otherwise restore the
		entire pool from backup.
   see: http://illumos.org/msg/ZFS-8000-8A
  scan: scrub repaired 0 in 0 days 00:02:04 with 1 errors on Fri Dec 29 10:00:24 2017
config:

		NAME											STATE	 READ WRITE CKSUM
		tank											ONLINE	   0	 0	 2
		  raidz3-0									  ONLINE	   0	 0	 4
			gptid/f89ed320-e676-11e7-aa30-0cc47a55b944  ONLINE	   0	 0	 0
			gptid/f9482bd5-e676-11e7-aa30-0cc47a55b944  ONLINE	   0	 0	 0
			gptid/f9fb8f9b-e676-11e7-aa30-0cc47a55b944  ONLINE	   0	 0	 0
			gptid/fab21bdf-e676-11e7-aa30-0cc47a55b944  ONLINE	   0	 0	 0
			gptid/fb69370c-e676-11e7-aa30-0cc47a55b944  ONLINE	   0	 0	 0
			gptid/fc1f9115-e676-11e7-aa30-0cc47a55b944  ONLINE	   0	 0	 0
			gptid/fcd6fc50-e676-11e7-aa30-0cc47a55b944  ONLINE	   0	 0	 0
			gptid/fd8a2e05-e676-11e7-aa30-0cc47a55b944  ONLINE	   0	 0	 0
			gptid/fe442c4b-e676-11e7-aa30-0cc47a55b944  ONLINE	   0	 0	 0
			gptid/fef93952-e676-11e7-aa30-0cc47a55b944  ONLINE	   0	 0	 0
			gptid/ffb36d9e-e676-11e7-aa30-0cc47a55b944  ONLINE	   0	 0	 0
			gptid/00752e34-e677-11e7-aa30-0cc47a55b944  ONLINE	   0	 0	 0
		  raidz3-1									  ONLINE	   0	 0	 0
			gptid/015d1284-e677-11e7-aa30-0cc47a55b944  ONLINE	   0	 0	 0
			gptid/021c091b-e677-11e7-aa30-0cc47a55b944  ONLINE	   0	 0	 0
			gptid/02da8ee8-e677-11e7-aa30-0cc47a55b944  ONLINE	   0	 0	 0
			gptid/039a831f-e677-11e7-aa30-0cc47a55b944  ONLINE	   0	 0	 0
			gptid/045c3ca5-e677-11e7-aa30-0cc47a55b944  ONLINE	   0	 0	 0
			gptid/051ea9c5-e677-11e7-aa30-0cc47a55b944  ONLINE	   0	 0	 0
			gptid/05e55aae-e677-11e7-aa30-0cc47a55b944  ONLINE	   0	 0	 0
			gptid/06ae2705-e677-11e7-aa30-0cc47a55b944  ONLINE	   0	 0	 0
			gptid/07737715-e677-11e7-aa30-0cc47a55b944  ONLINE	   0	 0	 0
			gptid/0834cf21-e677-11e7-aa30-0cc47a55b944  ONLINE	   0	 0	 0
			gptid/08f8911d-e677-11e7-aa30-0cc47a55b944  ONLINE	   0	 0	 0
			gptid/09bd3c6b-e677-11e7-aa30-0cc47a55b944  ONLINE	   0	 0	 0
		cache
		  gptid/0a827262-e677-11e7-aa30-0cc47a55b944	ONLINE	   0	 0	 0

errors: Permanent errors have been detected in the following files:

		tank/zvol0:<0x1>

 

SweetAndLow

Sweet'NASty
Joined
Nov 6, 2013
Messages
6,421
Heh, happened again to the original FreeNAS in this post. I'm starting to suspect Windows servers talking through iSCSI to FreeNAS 11.x breaking things.

FreeNAS 11.1
LSI controller with the 20.07 firmware


I had rebuilt the pool and was running scrub after scrub with no issues. I then rebuilt it once more and setup iSCSI sharing then forgot about it until today when I ran two scrubs back to back, the second scrub broke it. Or a combination of running the scrub and using Crystal Disk 6.0 while the scrub was running.

It shouldn't be Windows Server. What's happening is that data already written to disk is not being read back correctly and can't be fixed. Are you sure this controller is in IT mode, and that no write cache is enabled?
 