Hello,
first of all, I'm new to FreeNAS. Looking for a last-bastion backup solution (a backup of the backup), I decided to give it a try.
So I bought an N54L for testing purposes, installed 16 GB RAM, applied the BIOS patch, and installed FreeNAS on a 16 GB Kingston USB stick.
Three old-fashioned 250 GB SATA drives went into a RAID-Z1 pool.
Created a dataset and a CIFS share, set permissions, copied some files -> works as designed.
So the next step was to check what happens if a drive fails during operation.
Therefore I pulled the ada2 device while the system was powered on.
Of course the volume was shown as degraded.
So I reconnected the drive without deleting data or reinitializing it.
And now the odyssey starts.
In Storage->Volumes->tank (the name of the pool)->Volume Status (button on the right of the bottom bar)
the working drives are shown as ada1p2 and ada0p2.
The pulled and reinserted drive is shown as 7850339049424108492.
So I checked (highlighted) it and pressed the Replace button.
In the window that appears I can't select a member disk because the drop-down field is empty.
Googling around, I couldn't find a working method to replace the failed disk.
After a reboot the pool healed automatically (okay, but in normal operation a reboot is not an option).
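For reference, I did all of this through the GUI, but I assume the shell equivalent for the reinsert case would be roughly the following (only a sketch, device and pool names taken from my setup above, and I may well be missing a step):

    # check which member is missing / shown only by its GUID
    zpool status tank
    # bring the reinserted member back online under its original partition
    zpool online tank ada2p2

Is that more or less what the Replace button is supposed to do behind the scenes?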
Next check: now I tried to use a new disk instead of reinserting the old drive.
Pulled the drive during operation -> not able to replace.
After reboot the disk is replaceable as expected!
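Again just for reference, my understanding is that the manual equivalent for the new-disk case would be something like this (only a sketch; I know the GUI normally also creates a swap partition on the new disk, which I am skipping here, and the GUID is the one from my Volume Status screen):

    # let the kernel pick up the freshly inserted disk
    camcontrol rescan all
    # replace the missing member (identified by its GUID) with the new disk
    zpool replace tank 7850339049424108492 /dev/ada2
    # watch the resilver
    zpool status -v tank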
So there are some questions:
Is there a standard procedure to replace disks if they fail during operation?
Especially, is there a way to rescan the drives and create the device files (see the sketch below for the kind of thing I mean)?
Are the N54L drive bays really hot-pluggable once the BIOS patch is installed? If not, that would mean a different hardware setup.
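To clarify what I mean by rescanning (assuming camcontrol is the right tool for the onboard SATA/AHCI controller of the N54L):

    # list the disks the kernel currently sees; after the hot-pull the drive is gone
    camcontrol devlist
    # force a rescan of all buses so a hot-plugged disk gets its /dev/adaX node back without a reboot
    camcontrol rescan all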
Any help appreciated
Regards
Gerd