While testing a new 4-drive ZFS RAID-Z2 pool (exported as an iSCSI device extent backing an ESXi 5 VM datastore), I pulled 2 of the 4 drives during a VM install to exercise the drive-failure detection, alerting, and rebuild behavior. Thirty minutes later, the FreeNAS GUI still reports the pool as healthy (i.e., the "Alert" button is still green), even though the console logged both pulled drives as lost devices. Refreshing the browser (Firefox) and logging out and back in have no effect: the GUI still believes the pool is healthy despite having lost 2 of its 4 drives.
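To separate a GUI/alerting problem from a ZFS detection problem, it may help to query the pool directly from the console or over SSH; `zpool status` is the standard command for this. A minimal sketch follows, assuming a placeholder pool name of `tank` (substitute the actual pool name):

```sh
# Ask ZFS directly, bypassing the FreeNAS GUI and its alert display.
# "tank" is a placeholder; replace with the real pool name.
zpool status -v tank

# After pulling 2 of 4 drives from a raidz2 vdev, the pool state should
# read DEGRADED, with the two missing disks shown as REMOVED or UNAVAIL.
zpool list tank
```

If `zpool status` shows the pool as DEGRADED while the GUI alert stays green, the fault is in the GUI's alert/status refresh rather than in ZFS itself; if `zpool` also reports the pool as healthy, the problem is deeper than the web interface.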