robindhood
Cadet
- Joined
- Oct 31, 2011
- Messages
- 5
I was showing off the FreeNAS server last night and pulled a drive out to show a friend the quality of the drive sleds (it was powered down at the time).
I guess when I put it back in, it did not reseat correctly in the drive cage. (Check out my profile to see the beast.)
I went in and ran "zpool status" and found one drive marked "unavailable".
I stopped the copy that was running and powered down the machine. I pulled the drive, hit it with compressed air, and reseated it. The machine powered back up fine. I ran "zpool status" again and all drives were online. I checked in the UI and both of my sets report "healthy".
Do I need to take any action (scrub that RAID set) just to be on the safe side? The set in question is 8 drives in RAID-Z2. Will it "self heal" as it runs, the way the descriptions of ZFS imply?
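If a scrub is the right move, I assume it would be something like this ("tank" is just a placeholder for my pool name):

zpool scrub tank       # start a scrub of the pool
zpool status tank      # watch scrub progress and look for any repaired or checksum errors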
BTW - I'm loving this server!! Thank you all for your work on this project as well as all the great information available here in the forums!!
Hood