I recently had a RAID-Z1 pool (ZFS's RAID 5 equivalent, 3 disks) working for several months without issue. After upgrading to the 11.2 branch and rebuilding the pool from scratch, I keep getting a degraded pool where only 2 of the 3 disks are online (both Seagates).

At first I had 2 WD Blue drives and 1 Seagate (all 3 TB) before the first degradation. The alerts and pool/disk status said one of the WD disks had failed, so I bought a brand new identical Seagate drive and resilvered onto it. There were no issues for a few days, then it reported that the other WD had failed. I ran a memtest and the WD HDD utility against the "failed" WD drive, and no issues/errors were reported.

When I run df -ah from the shell, the disks all appear to have identical sector and disk sizes, but I'm wondering whether mixing drive brands could somehow be causing a problem. I'm happy to post any information needed to troubleshoot this, but I don't understand what's required to keep the RAID/pool healthy. Do I need to buy another Seagate (or another WD, since the drives seem fine according to the WD utility), or is there potentially something else going on? I've checked the SATA cables as well and can't find any specific hardware failure, but I'll be happy to try anyone's advice on this.
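To save a round trip, here are the diagnostics I can run from the shell and post output from. This is just my guess at the useful set; I'm assuming my disks enumerate as ada0-ada2 (swap in whatever camcontrol actually reports) and "tank" is a placeholder for my pool name:

  zpool status -v          # which disk is FAULTED/DEGRADED, plus per-disk read/write/checksum error counts
  camcontrol devlist       # the disks FreeBSD actually sees on the SATA bus
  smartctl -a /dev/ada0    # full SMART report for one disk; repeat for ada1 and ada2
  diskinfo -v /dev/ada0    # real sector size and media size (more reliable for this than df -ah)
  zpool clear tank         # clears the error counters after reseating cables, to see if errors recur

Let me know which of these outputs would actually help, or if there's something better to run.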