Hi,
I have a rather strange problem with the checksums on disk.
First, here is my configuration:
Supermicro X10SLH-F Motherboard
Xeon® Processor E3-1240 v3
Supermicro 4x8GB DDR3 1600MHz ECC
Supermicro SuperChassis 846BE1C-R1K28B (with SAS expander)
N2215 - HBA
10x 4TB WD RED
1 volume using RAID-Z2 (on 10x HDD) + 1x 250GB SSD as cache (L2ARC) + 1x 120GB SSD as log (SLOG)
This configuration is almost brand new and worked well for 2 months (the only difference being that I was using 10 old HDDs at the time). After the 2-month test period I destroyed that RAID-Z2 volume, upgraded FreeNAS to the newest version (FreeNAS-9.3-STABLE-201602031011), swapped the old used HDDs for brand new 4TB WD Reds, and created a new volume using the same settings: RAID-Z2 on 10 HDDs + 1 SSD as log + 1 SSD as cache. Since then I have had a high checksum error rate on all of the HDDs (but not on the SSDs used for log and cache).
I checked SMART after running both short and long tests, and none of the 10 disks report any errors.
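For reference, this is roughly how I have been checking the counters: `zpool status -v` for the per-device CKSUM column, and `smartctl -t short` / `smartctl -t long` followed by `smartctl -a` for the SMART results. The snippet below is only a sketch; the pool name and the `da0`/`da1` device names are placeholders, not my actual layout:

```shell
# Diagnostic commands I ran (device names are examples):
#   zpool status -v            # shows READ/WRITE/CKSUM counters per vdev member
#   smartctl -t long /dev/da0  # start a long SMART self-test
#   smartctl -a /dev/da0       # show SMART attributes and self-test log

# A small filter to list only devices with a nonzero CKSUM count,
# fed here with sample `zpool status` output for illustration:
zpool_status_sample='  NAME        STATE     READ WRITE CKSUM
  tank        ONLINE       0     0     0
    raidz2-0  ONLINE       0     0     0
      da0     ONLINE       0     0    12
      da1     ONLINE       0     0     9'

# Column 5 is CKSUM; print device name and count when it is above zero
printf '%s\n' "$zpool_status_sample" | awk '$1 ~ /^da/ && $5 > 0 {print $1, $5}'
```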
Does anyone have an idea what else I can check to solve this problem without destroying the volume and creating it again? Or is there a chance that a bug in the latest version causes this behavior?