devnullius
Patron
At boot, in no particular order...
CAM status: CCB request completed with an error
Retrying command
WRITE(10). CDB: <EDIT>
SCSI sense: MEDIUM ERROR asc:11,0 (unrecovered read error)
Error 5, Unretryable
Info: 0x3df93a48
SCSI sense: HARDWARE FAILURE asc:44,0 (internal target failure)
(then it reboots)
Supermicro X8DA3/X8DAI
Avago MPT SAS2
Background:
I have 1 ZFS pool with 3 disks.
Today, I added 5 more and created a new pool to test it all.
But with all 8 disks attached, I hit all kinds of problems (see the errors above). Unplug any one of them (it doesn't matter which) and all is OK. Worse: after a crash with all 8 plugged in, if I then boot with only 7 disks (having unplugged 1 of the 5 new ones), my first ZFS pool suddenly needs a data redundancy check. After that, the first pool is OK again.
Where should I start looking for this problem? :) I'll be testing a bit more to see if I learn more: one bay always seems to be in the middle of the trouble, but when I double-check that bay on its own, it seems fine. So for now the main problem = 7 is company, 8 is a crowd ;p
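For the next round of testing, this is roughly what I plan to run to narrow things down (just a sketch: da0 is an example device name, my disks may enumerate differently):
# see which disks the HBA actually detects with all 8 attached
camcontrol devlist
# check SMART health per disk (repeat for each of da0 through da7)
smartctl -a /dev/da0
# watch for CAM / mps driver errors after boot
dmesg | grep -iE 'mps|cam'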
Post-edit: I just got this in the console log (I've seen it before with the 8 disks attached) --> The volume ZFSPool1 state is ONLINE: One or more devices are faulted in response to IO failures.
That sounds like something is wrong with one of my 3 original disks, but that is not the case: remove one of the 5 new disks and the first ZFS pool with the 3 disks works like a charm again.
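If those really are just transient IO errors hitting the first pool, I'd expect something like this to confirm the state and reset the error counters once everything is reseated (assuming the pool is still named ZFSPool1):
# show which device(s) ZFS thinks are faulted
zpool status -v ZFSPool1
# clear the fault state, then verify the data
zpool clear ZFSPool1
zpool scrub ZFSPool1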