mikesoultanian
Dabbler
Joined: Aug 3, 2017 · Messages: 43
I have the strangest thing that I can't figure out. I inherited a FreeNAS system with three 24-drive JBODs that have been in service for the past couple of years, so I know all of the drives and drive bays work. I deleted all the existing volumes and pulled out all of the drives because I wanted to scan the serial numbers on them. Most of the drives I put back in were detected, and the system log confirmed it, but some drives weren't detected at all. So I'd try the same drive in a different bay of the same JBOD enclosure, and it would work there. What's up with that? I even tried hooking one of the problem drives up to my workstation, and Windows detects it just fine - partitions it, formats it, etc. I tried diskpart's clean command, but that didn't help; I also tried wiping it with gparted, and that didn't help either.
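In case it helps, here's roughly what I've been checking from the FreeNAS shell when a drive doesn't show up (device names and bus numbers below are just examples, not from my actual system):

```
# Does the OS see the disk at all? Compare the list before and
# after inserting the drive.
camcontrol devlist

# Watch kernel messages live while hot-plugging the drive
tail -f /var/log/messages

# Ask the CAM layer to rescan all SCSI buses for new devices
camcontrol rescan all
```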
Here's another weird one along those lines. I have about 10 drives in a RAID10 configuration, and I thought I'd test a drive failure by pulling one out of its bay (the drive was plugged into bay #1). As expected, I received a degraded message. I plugged the drive back in, there were a bunch of flashing lights, and the volume was healthy again. Next I wanted to test moving the drive to a different bay, so I removed it, the volume reported degraded, I moved it to bay #15, and it switched back to healthy. Here's where it gets weird: when I pulled the drive and put it back in bay #1, FreeNAS wouldn't detect it?! I removed it and reinserted it, and nada, nothing. I put it back in bay #15 and now it's working again.
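To keep track of which bay maps to which device during these tests, I've been using the LSI utility that ships with FreeNAS - this assumes the JBODs hang off an LSI SAS2 HBA, and the controller index 0 below is just a guess for illustration:

```
# List LSI SAS2 controllers and their indexes
sas2ircu LIST

# Show attached drives with enclosure/slot numbers and serials,
# so a physical bay can be matched to its da device
sas2ircu 0 DISPLAY
```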
Are drive bays somehow being marked unusable by FreeNAS? I now have about 16 bays that won't work with any drive I put in them, but I know they were all working bays. Unfortunately I can't reboot this system because I have some VMs running on another volume, otherwise I'd give that a try - but it seems strange that I'd have to reboot anyway. I inserted the other 40 drives without rebooting and FreeNAS detected them just fine, so I don't understand what's up with the remaining disks.
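Since a reboot is off the table, this is the kind of thing I've been trying from the shell to kick the buses back to life without one (the bus number is only an example; `camcontrol devlist -v` shows the real ones):

```
# List buses and controllers known to CAM
camcontrol devlist -v

# Rescan everything for newly inserted disks
camcontrol rescan all

# Reset a single SCSI bus as a last resort - note this can briefly
# disrupt I/O on that bus, so I'd avoid the bus with the live VMs
camcontrol reset 0
```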
Btw, I'm running 9.10 - please let me know if there's any more system information that I can provide.
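Off the top of my head, this is what I can grab and attach if it's useful - tell me what else you'd want:

```
uname -a              # exact FreeBSD build under FreeNAS 9.10
camcontrol devlist    # disks currently visible to the OS
zpool status          # pool layout and health
dmesg | tail -n 50    # recent kernel messages from the hot-plug attempts
```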
thanks!
Mike