Bad drive in RAID 1 array--but which one?


watha

Dabbler
Joined
Mar 3, 2013
Messages
24
I built my own NAS using FreeNAS version 0.7 and an old computer. It filled up, I copied the files to a standard external hard drive, and got to work wiping the NAS for further use. But on booting, my old computer said that my hardware RAID setup was "critical." I guess that means one or both of the drives have gone bad. (I have 2 of them.) But how do I tell which drive is messed up? I wanted to upgrade to FreeNAS 9 anyway and don't mind starting over. But the readout from my computer's BIOS utility doesn't tell me which of the two drives needs replacing. I don't get a serial number, just a model number. Since they're both the same model of drive, that's no help. How do I figure out which drive is busted? Thanks.
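
(One way to pin down which physical drive is which, assuming you can boot a FreeBSD or Linux live environment where the drives show up individually, is to read each drive's serial number with smartmontools and match it against the label on the drive itself. The device names below are placeholders for whatever your system actually reports.)

    # Print identity info, including the serial number, for each drive
    smartctl -i /dev/ada0
    smartctl -i /dev/ada1

    # On FreeBSD/FreeNAS, list attached drives and their device names
    camcontrol devlist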
 

budmannxx

Contributor
Joined
Sep 7, 2011
Messages
120
I think you should post this in the NAS4Free forum. That's the place for what used to be called FreeNAS 0.7. I assume they also have rules about posting system specs and other details that will help them help you.
 

brbubba

Dabbler
Joined
May 14, 2013
Messages
12
Presumably you could run "zpool status" to see which disk is faulted, then offline the bad disk. Turn off the system, remove the drive you suspect is bad, then reboot and see if the array is still only showing the good drive as available. If it is, great: turn off the system again, attach the new drive, and then replace and resilver. The other alternative, if you have the space, is to add the new drive, replace and resilver, and then offline, shut down, and remove the suspect drive.
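
(A rough sketch of that sequence for a ZFS mirror; the pool name "tank" and the device names are placeholders, and the exact arguments depend on how the pool was built.)

    # Check pool health; DEGRADED/FAULTED/UNAVAIL entries point to the bad disk
    zpool status tank

    # Take the suspect disk offline
    zpool offline tank ada1

    # After physically swapping in the replacement, rebuild the mirror onto it
    zpool replace tank ada1 ada2

    # Watch the resilver progress
    zpool status -v tank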
 

watha

Dabbler
Joined
Mar 3, 2013
Messages
24
brbubba said:
Presumably you could run "zpool status" to see which disk is faulted, then offline the bad disk. Turn off the system, remove the drive you suspect is bad, then reboot and see if the array is still only showing the good drive as available. If it is, great: turn off the system again, attach the new drive, and then replace and resilver. The other alternative, if you have the space, is to add the new drive, replace and resilver, and then offline, shut down, and remove the suspect drive.


Okay... ignorance on display. How do I do any of this? I know how to remove the drives, of course. But how do I "replace and resilver"? And what is "zpool status"?

BTW, I was using a hardware RAID. I used the command to delete the array and changed the BIOS to boot the machine as if it had just two plain hard drives. That worked. In this mode, I booted Ubuntu Linux from a thumb drive. It recognized the drives as working, and the drives passed a SMART test. I formatted them as ext4, then dragged and dropped files onto them. They worked just fine. I honestly think there's nothing wrong with them. But my hardware RAID BIOS says that one of them has failed. I just want to wipe them and start over. Why can't I do this?
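
(For anyone reading along, this is roughly what that SMART check looks like from the command line in an Ubuntu live session with smartmontools installed; /dev/sda is a placeholder for however Ubuntu names the drive.)

    # Run a short self-test, wait a couple of minutes, then read back the results
    smartctl -t short /dev/sda
    smartctl -a /dev/sda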
 

brbubba

Dabbler
Joined
May 14, 2013
Messages
12
My apologies... ignore everything I said, I need sleep. If you don't want to use the onboard RAID at all, you could disable it. I'm assuming you have more than one controller on your board?
 

watha

Dabbler
Joined
Mar 3, 2013
Messages
24
It looks like things are coming together. I ran Darik's Boot & Nuke and completely wiped my two drives. It took exactly one day, three minutes, and 35 seconds. Then I ran the ZFS volume manager and tried to create a volume. It worked. Just like that. I guess even after reformatting, the disks still contained some leftover metadata from the previous RAID setup. Now I just have to learn how to set up a ZFS software RAID. BTW, is software RAID as good as hardware? Just asking...
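
(For future reference, a full DBAN pass usually isn't needed just to clear leftover RAID or ZFS metadata; something like the following from a live shell is normally enough. The device name is a placeholder, some controllers also keep metadata at the end of the disk, and both commands destroy whatever is on the drive.)

    # Clear any old ZFS labels from the disk
    zpool labelclear -f /dev/ada1

    # Or zero the start of the disk, where partition tables and most metadata live
    # (use bs=1M on Linux, bs=1m on FreeBSD)
    dd if=/dev/zero of=/dev/ada1 bs=1M count=10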
 