Replacement of RAID-Z drive resulted in loss of data?

Status
Not open for further replies.

RichTJ99

Patron
Joined
Sep 12, 2013
Messages
384
Hi,

I am running FreeNAS on an HP MicroServer. I have 4x 2TB drives running in ZFS RAIDZ1 mode, so my array shows up as a 6TB drive. I decided to 'test' a failure: I turned it off, pulled a drive, and turned it back on. I didn't get any emails or notices about a failure, and I let the system run for a few days. Then I shut it off, plugged the old drive back in, and turned it back on, figuring the array would rebuild itself.

So when I rebooted, all it shows now is a 1GB (not TB) drive size. If I go to the Volume Manager, I see all 4 drives showing as active & healthy. It shows the 6TB (5.something) volume, but that's not what I am accessing.

Any ideas what I did wrong?

I wanted to simulate a drive failure for testing, and I'm glad I did, since clearly I did something wrong.
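For what it's worth, a drive failure can be simulated without physically pulling a disk. This is a hedged sketch only: the pool name `Raid1` and the gptid come from the `zpool status` output later in this thread, and you should confirm which gptid maps to which physical disk (e.g. with `glabel status`) before trying it.

```shell
# Take one member of the pool offline instead of physically pulling it
# (pool name and gptid taken from the zpool status posted below):
zpool offline Raid1 gptid/1bcd811e-1969-11e3-898c-441ea13df98e

# The pool should now report state DEGRADED for this member:
zpool status Raid1

# Bring the same disk back; ZFS resilvers only the blocks that changed
# while it was offline:
zpool online Raid1 gptid/1bcd811e-1969-11e3-898c-441ea13df98e

# Watch the resilver progress and completion:
zpool status Raid1
```

Because the pool is never exported or re-imported, this exercises the degraded/resilver path without the reboot steps described above.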

Thanks,
Rich
 

RichTJ99

Patron
Joined
Sep 12, 2013
Messages
384
Is my issue that I was doing a test of a failure and then put the 'failed' drive back in? Is the data actually gone?

I don't know for sure, but I suspect my 1GB drive is the USB boot drive.


Code:
[root@freenas ~]# zpool status
  pool: Raid1
 state: ONLINE
  scan: none requested
config:

        NAME                                            STATE     READ WRITE CKSUM
        Raid1                                           ONLINE       0     0     0
          raidz1-0                                      ONLINE       0     0     0
            gptid/1b63249f-1969-11e3-898c-441ea13df98e  ONLINE       0     0     0
            gptid/1bcd811e-1969-11e3-898c-441ea13df98e  ONLINE       0     0     0
            gptid/1c699223-1969-11e3-898c-441ea13df98e  ONLINE       0     0     0
            gptid/1ce2996a-1969-11e3-898c-441ea13df98e  ONLINE       0     0     0

errors: No known data errors
[root@freenas ~]#
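A quick way to check the USB-stick suspicion is to compare the mounted filesystems against the pool itself. A sketch, assuming the FreeNAS default mount point of `/mnt/<poolname>` (your paths may differ):

```shell
# List mounted filesystems with sizes; the pool should appear at
# /mnt/Raid1, while the small boot device is mounted at / (the USB stick):
df -h

# Show the pool's raw capacity and free space as ZFS sees it:
zpool list Raid1
```

If `df -h` shows the data actually living on the pool's mount point, the 1GB figure elsewhere is almost certainly the boot device being reported, not the array.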
 

dlavigne

Guest
That looks fine. Post a screenshot of what you think is the error.
 