Removing the HD ID # after replacing HDD in 9.3+

Status
Not open for further replies.

SATELLITE

Dabbler
Joined
Jan 3, 2015
Messages
12
Hi..

I have a problem removing the ID # of the old drive after replacing a 2TB drive with a 4TB drive.

I have tried in the GUI, but I cannot see the replace-drive option (or anything similar) under Disks.

Right now I am running in a degraded state, and as far as I can tell no resilver is happening yet.

The UNAVAIL entry is the drive I am trying to remove and replace with the new 4TB drive, which is listed as running normally.

See below:

[root@MEDIA-SERVER ~]# zpool status
pool: VOL2
state: DEGRADED
status: One or more devices could not be opened. Sufficient replicas exist for
the pool to continue functioning in a degraded state.
action: Attach the missing device and online it using 'zpool online'.
see: http://illumos.org/msg/ZFS-8000-2Q
scan: scrub canceled on Sat Jan 3 19:07:43 2015
config:

NAME STATE READ WRITE CKSUM
VOL2 DEGRADED 0 0 0
raidz1-0 DEGRADED 0 0 0
gptid/7c3bd84b-81c8-11e4-89d2-001b212a6698 ONLINE 0 0 0
gptid/7c9190bd-81c8-11e4-89d2-001b212a6698 ONLINE 0 0 0
gptid/7ce3292b-81c8-11e4-89d2-001b212a6698 ONLINE 0 0 0
gptid/d88a8385-86e9-11e4-89a1-001b212a6698 ONLINE 0 0 0
gptid/7d7e2971-81c8-11e4-89d2-001b212a6698 ONLINE 0 0 0
gptid/7dbcaa6d-81c8-11e4-89d2-001b212a6698 ONLINE 0 0 0
gptid/7df27e4d-81c8-11e4-89d2-001b212a6698 ONLINE 0 0 0
17250287981362600414 UNAVAIL 0 0 0 was /dev/gptid/7e2f50b4-81c8-11e4-89d2-001b212a6
698
gptid/86993603-93aa-11e4-ab44-001b212a6698 ONLINE 0 0 0

errors: No known data errors

pool: freenas-boot
state: ONLINE
scan: scrub repaired 0 in 0h1m with 0 errors on Thu Jan 1 17:16:38 2015
config:

NAME STATE READ WRITE CKSUM
freenas-boot ONLINE 0 0 0
da0p2 ONLINE 0 0 0

errors: No known data errors


Any help would be appreciated.
 

SweetAndLow

Sweet'NASty
Joined
Nov 6, 2013
Messages
6,421

SATELLITE

Dabbler
Joined
Jan 3, 2015
Messages
12
Maybe I was unclear…

I am trying to remove:

17250287981362600414 UNAVAIL 0 0 0 was /dev/gptid/7e2f50b4-81c8-11e4-89d2-001b212a6698

I have no options for this through the GUI, and I am having trouble finding any in the instructions. So, in answer to you: yes, I have read the instructions, and I am not trying to take anything back online, since ALL DRIVES ARE ONLINE and VOL2 has expanded completely.

Thanks.
 

mjws00

Guru
Joined
Jul 25, 2014
Messages
798
Post the output of zpool status in code tags. Indentation matters. Kinda sounds like you added a single drive to your pool. You don't remove devices, you replace them.

Make sure you are getting to the devices from Storage, highlight the pool, then Volume Status.
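Had the failed disk been replaced through that path (or by hand), the command-line equivalent boils down to a single zpool replace. This is only a rough sketch (the new disk's gptid below is a placeholder), and on FreeNAS the GUI route is still preferred because it also takes care of partitioning the new disk:

Code:
# Sketch only: gptid/<new-disk> is a placeholder for the new drive's data partition.
# Offline the failed member (if it is still listed), then swap it out:
zpool offline VOL2 gptid/7e2f50b4-81c8-11e4-89d2-001b212a6698
zpool replace VOL2 gptid/7e2f50b4-81c8-11e4-89d2-001b212a6698 gptid/<new-disk>
zpool status VOL2    # watch the resilver progress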
 

SATELLITE

Dabbler
Joined
Jan 3, 2015
Messages
12
Hi..

I had a 2TB drive fail… I replaced it with a 4TB.

I used the Volume Manager to add the new drive in… now the output (at the top of the post) is giving me all kinds of trouble when I try to remove the old one.

There is no option to replace or remove the drive in the GUI (Storage, View Disks).

Can I do it through the command line?
 

mjws00

Guru
Joined
Jul 25, 2014
Messages
798
You don't use the volume manager to add the drive in. You use the option I described. Post the output as requested and we'll know in 2 seconds.
 

SATELLITE

Dabbler
Joined
Jan 3, 2015
Messages
12
Explain:

Post the output of zpool status in code tags. ??

Not like I posted at the top of the thread?
 

mjws00

Guru
Joined
Jul 25, 2014
Messages
798
The forum kills the indentation. The devices are indented to show how the pool is structured. There is an Insert | Code button that will allow you to post the output and keep the indentation.
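For comparison, in a properly replaced setup every data disk sits one indent level below the raidz1-0 line; anything indented at the same level as raidz1-0 is its own top-level vdev and is striped with the rest of the pool. A trimmed illustration with made-up gptids:

Code:
        NAME                 STATE
        VOL2                 ONLINE
          raidz1-0           ONLINE
            gptid/aaaa....   ONLINE
            gptid/bbbb....   ONLINE
            gptid/cccc....   ONLINE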
 

SATELLITE

Dabbler
Joined
Jan 3, 2015
Messages
12
Code:
[root@MEDIA-SERVER ~]# zpool status                                                                                                
  pool: VOL2                                                                                                                       
state: DEGRADED                                                                                                                   
status: One or more devices could not be opened.  Sufficient replicas exist for                                                    
        the pool to continue functioning in a degraded state.                                                                      
action: Attach the missing device and online it using 'zpool online'.                                                              
   see: http://illumos.org/msg/ZFS-8000-2Q                                                                                         
  scan: scrub in progress since Sat Jan  3 20:02:29 2015                                                                           
        378G scanned out of 7.27T at 212M/s, 9h28m to go                                                                           
        0 repaired, 5.07% done                                                                                                     
config:                                                                                                                            
                                                                                                                                   
        NAME                                            STATE     READ WRITE CKSUM                                                 
        VOL2                                            DEGRADED     0     0     0                                                 
          raidz1-0                                      DEGRADED     0     0     0                                                 
            gptid/7c3bd84b-81c8-11e4-89d2-001b212a6698  ONLINE       0     0     0                                                 
            gptid/7c9190bd-81c8-11e4-89d2-001b212a6698  ONLINE       0     0     0                                                 
            gptid/7ce3292b-81c8-11e4-89d2-001b212a6698  ONLINE       0     0     0                                                 
            gptid/d88a8385-86e9-11e4-89a1-001b212a6698  ONLINE       0     0     0                                                 
            gptid/7d7e2971-81c8-11e4-89d2-001b212a6698  ONLINE       0     0     0                                                 
            gptid/7dbcaa6d-81c8-11e4-89d2-001b212a6698  ONLINE       0     0     0                                                 
            gptid/7df27e4d-81c8-11e4-89d2-001b212a6698  ONLINE       0     0     0                                                 
            17250287981362600414                        UNAVAIL      0     0     0  was /dev/gptid/7e2f50b4-81c8-11e4-89d2-001b212a6
698                                                                                                                                
          gptid/86993603-93aa-11e4-ab44-001b212a6698    ONLINE       0     0     0                                                 
                                                                                                                                   
errors: No known data errors                                                                                                       
                                                                                                                                   
  pool: freenas-boot                                                                                                               
state: ONLINE                                                                                                                     
  scan: scrub repaired 0 in 0h1m with 0 errors on Thu Jan  1 17:16:38 2015                                                         
config:                                                                                                                            
                                                                                                                                   
        NAME        STATE     READ WRITE CKSUM                                                                                     
        freenas-boot  ONLINE       0     0     0                                                                                   
          da0p2     ONLINE       0     0     0                                                                                     
                                                                                                                                   
errors: No known data errors                                       
 

mjws00

Guru
Joined
Jul 25, 2014
Messages
798
See how that last drive, gptid/869..., is in line with the raidz1-0 vdev? You have added a single top-level device to your pool. That is bad news; unfortunately, the pool is now at risk if that single drive fails. ZFS is a little unforgiving of things like that, so the options are pretty limited. Basically you can back up your data and restore to a freshly created pool, or manually mirror the last drive you added. Kinda sucks. Sorry.
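If you go the mirror route, the idea is to attach a partner disk to that stray top-level drive so it becomes a two-way mirror instead of a bare stripe. A rough sketch, assuming a spare disk is available and already carries a GPT data partition (its gptid below is a placeholder):

Code:
# Attach a partner to the single striped drive so it becomes a mirror.
# gptid/<spare-disk> is a placeholder for the spare disk's data partition.
zpool attach VOL2 gptid/86993603-93aa-11e4-ab44-001b212a6698 gptid/<spare-disk>
zpool status VOL2    # the pair should now show up under a mirror vdev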
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
Despite what you think, you did not replace the failed disk. It's still showing as part of the array because it is, and the array is degraded because it isn't there. What you did with the volume manager was not to replace the failing/failed disk, but to add a new single disk to your pool. This disk is now striped with the rest of your array--if it fails, all your data will be lost. If you had followed the instructions in the manual (http://doc.freenas.org/9.3/freenas_storage.html#replacing-a-failed-drive - note that nowhere in these instructions does it say to use the volume manager) for replacing a disk, this would not have happened.

When you saw the red warning text saying "you are trying to add a virtual device of type 'stripe' in a pool that has a virtual device of type 'raidz'" (see https://forums.freenas.org/index.ph...ying-to-add-disks-to-raidz.24163/#post-148178 for a screen shot of the warning you saw), why did you decide you had to do it anyway, and switch to manual mode? You've just compromised the redundancy of your entire pool. Your only option to save this is to back up your data, destroy and recreate your pool.
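If the backup target is itself a ZFS box, replication is one convenient way to make that copy; rsync to any other filesystem works just as well. A sketch only, assuming a second pool named 'backup' exists to receive the data:

Code:
# Snapshot everything in VOL2 recursively, then replicate it to another pool.
zfs snapshot -r VOL2@migrate
zfs send -R VOL2@migrate | zfs receive -F backup/VOL2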

The good news is that you should never have been running an 8-disk RAIDZ1 pool anyway, so this gives you the opportunity to fix that as well.
 

SATELLITE

Dabbler
Joined
Jan 3, 2015
Messages
12
What kind of raidz pool do you suggest?

The pool will be updated to 12 x 6TB drives shortly.
 

mjws00

Guru
Joined
Jul 25, 2014
Messages
798

SATELLITE

Dabbler
Joined
Jan 3, 2015
Messages
12
Thank you for all the help, people… I backed up all my files and rebuilt the array as RAIDZ2.

Everything running 1000%. :)
 

mjws00

Guru
Joined
Jul 25, 2014
Messages
798
Good to hear. Too often people get in over their head and have to scramble to back things up, or simply can't pull it off.

Enjoy your much safer pool.
 