SOLVED Adding a drive and using zpool to change existing stripe to mirror

gohkhm

Cadet
Joined
Jan 1, 2015
Messages
6
Hi all, I currently have FreeNAS 9.2.1.5 with a 2x2TB mirror and a single 1TB stripe, as shown below

  pool: gohvault
 state: ONLINE
  scan: scrub repaired 0 in 1h9m with 0 errors on Tue Dec 30 07:10:42 2014
config:

        NAME                                            STATE     READ WRITE CKSUM
        gohvault                                        ONLINE       0     0     0
          gptid/a07ea483-8db1-11e4-ad30-fcaa14743963    ONLINE       0     0     0
          mirror-1                                      ONLINE       0     0     0
            gptid/a13312a4-8db1-11e4-ad30-fcaa14743963  ONLINE       0     0     0
            gptid/a20ba1e9-8db1-11e4-ad30-fcaa14743963  ONLINE       0     0     0

errors: No known data errors

I just installed a new 1TB drive and would like to convert the stripe to a mirror. I found the following commands on this site and was wondering if they would work

Code:
Run "zpool status" to get the name of your pool and the gptid device name of the current drive
Run "gpart list | less" and find the rawuuid of the new drive. It will match a file in the /dev/gptid folder
Run "zpool attach <pool name> gptid/<existing device gptid> gptid/<new device gptid>"


Or should I use this second method, posted by Dusan?

Let's assume ada0 is your existing disk, ada1 is the new one, tank is the pool name.
Code:
gpart create -s gpt /dev/ada1
gpart add -i 1 -b 128 -t freebsd-swap -s 2g /dev/ada1
gpart add -i 2 -t freebsd-zfs /dev/ada1
Run zpool status and note the gptid of the existing disk
Run glabel status and find the gptid of the newly created partition. It is the gptid associated with ada1p2.
zpool attach tank /dev/gptid/[gptid_of_the_existing_disk] /dev/gptid/[gptid_of_the_new_partition]
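Either way, I'm guessing this would be a quicker way to pull just the rawuuid values instead of paging through the whole gpart list output (assuming the new drive is ada1):

Code:
# print only the rawuuid lines for ada1's partitions
gpart list ada1 | grep rawuuid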

Appreciate any advice. Thanks very much
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,994
I would strongly recommend backing up the data on the striped device, then destroying that pool, then adding the additional drive and creating your mirror, and lastly restoring your data.

You may be able to do this using the CLI; however, the GUI may not recognize it properly.
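For example, if you have another box with enough space, something as simple as rsync over SSH would do for the backup (host and paths below are placeholders; adjust to your setup):

Code:
# copy everything off before destroying anything
rsync -avh /mnt/gohvault/ backupbox:/backups/gohvault/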
 

mjws00

Guru
Joined
Jul 25, 2014
Messages
798
Those commands are outdated. Find Josh Paetzel's post about this. There was a thread on this a couple weeks ago. The key difference is '-b 4k' on the gpart command. Detach and auto-import will resync the GUI.

Much easier and safer to do it Joe's way.
 

mjws00

Guru
Joined
Jul 25, 2014
Messages
798
Yep. I was on my phone, so I didn't have the link handy. Also, it wasn't only the -b that was wrong. Use '-a 4k' instead of 4096... that is what the notifier.py code calls when building partitions. In another thread the two were shown to create different partitions even though they're numerically equivalent. I never looked into why.

As long as the pool gets exported and imported properly, the GUI and FreeNAS should be happy. Obviously these are not everyday procedures; you can screw up, cause yourself grief, and/or lose your data.
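If you want to double-check the result after partitioning, gpart will show you the offsets (with 512-byte sectors, 4K alignment means the start sector is divisible by 8) -- assuming the new drive is ada1:

Code:
# read-only: verify the new partitions start on 4K boundaries
gpart show ada1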
 

gohkhm

Cadet
Joined
Jan 1, 2015
Messages
6
Thanks for the speedy reply. OK, so the updated commands that would work for the current version 9.2.1.5 would be...

Code:
gpart create -s gpt /dev/ada1
gpart add -a 4k -i 1 -s 2g -t freebsd-swap /dev/ada1
gpart add -a 4k -i 2 -t freebsd-zfs /dev/ada1
Run zpool status and note the gptid of the existing disk
Run glabel status and find the gptid of the newly created partition. It is the gptid associated with ada1p2.
zpool attach tank /dev/gptid/[gptid_of_the_existing_disk] /dev/gptid/[gptid_of_the_new_partition]
?
 

mjws00

Guru
Joined
Jul 25, 2014
Messages
798
Looks right to me. I'd probably detach the pool from the GUI first, then auto-import when finished. Not much to it, but easy to screw up if you have no idea what's going on.
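For what it's worth, my understanding is the GUI detach is more or less a zpool export under the hood, so the CLI equivalent would be something like the line below -- but re-importing via the GUI's Auto-Import is what keeps the FreeNAS database in sync:

Code:
# rough CLI equivalent of the GUI detach; the data stays intact on disk
zpool export tank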
 

gohkhm

Cadet
Joined
Jan 1, 2015
Messages
6
I've got ada0 - ada3. The new drive is ada1. When I list my existing partitions, it shows that ada1 already has 3 partitions

Code:
#gpart list | less
Geom name: ada1                                                              
modified: false                                                              
state: OK                                                                    
fwheads: 16                                                                  
fwsectors: 63                                                                
last: 1953525167                                                             
first: 63                                                                    
entries: 4                                                                   
scheme: MBR                                                                  
Providers:                                                                   
1. Name: ada1s1                                                              
   Mediasize: 1028127744 (980M)                                              
   Sectorsize: 512                                                           
   Stripesize: 0                                                             
   Stripeoffset: 32256                                                       
   Mode: r0w0e0                                                              
   rawtype: 131                                                              
   length: 1028127744                                                        
   offset: 32256                                                             
   type: linux-data                                                          
   index: 1                                                                  
   end: 2008124                                                              
   start: 63                                                                 
2. Name: ada1s2                                                              
   Mediasize: 5124349440 (4.8G)                                              
   Sectorsize: 512                                                           
   Stripesize: 0                                                             
   Stripeoffset: 1028160000                                                  
   Mode: r0w0e0                                                              
   rawtype: 131                                                              
   length: 5124349440                                                        
   offset: 1028160000                                                        
   type: linux-data                                                          
   index: 2                                                                  
   end: 12016619                                                             
   start: 2008125                                                            
3. Name: ada1s4                                                              
   Mediasize: 994049763840 (925G)                                            
   Sectorsize: 512                                                           
   Stripesize: 0                                                             
   Stripeoffset: 1857542144                                                  
   Mode: r0w0e0                                                              
   rawtype: 5                                                                
   length: 994049763840                                                      
   offset: 6152509440                                                        
   type: ebr
   index: 4
   end: 1953520064                                                           
   start: 12016620                                                           
Consumers:                                                                   
1. Name: ada1                                                                
   Mediasize: 1000204886016 (931G)                                           
   Sectorsize: 512                                                           
   Mode: r0w0e0     


So my question now is do I need to destroy the ada1 label first?

Code:
gpart destroy -F /dev/ada1


Eeeks...

Thanks in advance.
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,994
I'm watching this thread to see if you destroy your data or not. Be careful. If I were you I'd disconnect all my other drives before destroying the new(er) drive.
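A few read-only checks before you run anything destructive wouldn't hurt either -- assuming the new disk really is ada1:

Code:
# all read-only: confirm model, serial, and current layout first
camcontrol devlist        # list attached devices and their models
diskinfo -v /dev/ada1     # media size and serial number (ident)
gpart show ada1           # current partition table on the suspect drive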
 

mjws00

Guru
Joined
Jul 25, 2014
Messages
798
That is some superb advice by Mr. Shmuck. There isn't a lot of room for "EEK" or oops.

If that is the only drive in the box, the size looks right, and you know you used it in an install before... then destroy the labels.

The only risk at this point is that you have the wrong drive, and nuke your data in error.

Seriously, there is no harm in pulling drives and being paranoid to keep things safe. I do it all the time and I've been at this for decades. That said, adding a drive to a stripe or mirror is pretty trivial and safe in ZFS. The magic here is keeping FreeNAS happy. Where it goes sideways is when people just follow a man page or Oracle documentation. Next thing you know we have a GUI out of whack with the pool, and things aren't being kept track of properly. We are literally running the same code manually that the GUI does.

Take every precaution, then just relax and do it. You'll be fine.

Here's one done just for you, typos and all. Interestingly, the GUI picked up the change without a re-import. First, an ugly zpool (a single drive striped with a mirror) was made via the GUI. Then a mirror was added manually to the single drive.

Code:
login as: root
root@192.168.1.129's password:
FreeBSD 9.3-RELEASE-p5 (FREENAS.amd64) #0 3b4abc3: Mon Dec  8 15:09:41 PST 2014

FreeNAS (c) 2009-2014, The FreeNAS Development Team
All rights reserved.
FreeNAS is released under the modified BSD license.

For more information, documentation, help or support, go here:
http://freenas.org
Welcome to FreeNAS
[root@freenas93] ~# c
[root@freenas93] ~# camcontrol devlist

<NECVMWar VMware IDE CDR10 1.00>   at scbus1 target 0 lun 0 (pass0,cd0)
<VMware Virtual disk 1.0>          at scbus2 target 0 lun 0 (pass1,da0)
<VMware Virtual disk 1.0>          at scbus2 target 1 lun 0 (pass2,da1)
<VMware Virtual disk 1.0>          at scbus2 target 2 lun 0 (pass3,da2)
<VMware Virtual disk 1.0>          at scbus2 target 3 lun 0 (pass4,da3)
<VMware Virtual disk 1.0>          at scbus2 target 4 lun 0 (pass5,da4)
<SanDisk Ultra 1.26>               at scbus4 target 0 lun 0 (da5,pass6)
[root@freenas93] ~# zpool status

  pool: freenas-boot
state: ONLINE
  scan: none requested
config:

NAME        STATE     READ WRITE CKSUM
freenas-boot  ONLINE       0     0     0
  da0p2     ONLINE       0     0     0

errors: No known data errors

  pool: tank
state: ONLINE
  scan: none requested
config:

NAME                                            STATE     READ WRITE CKSUM
tank                                            ONLINE       0     0     0
  gptid/45758f8b-92ab-11e4-ae10-000c29bebb01    ONLINE       0     0     0
  mirror-1                                      ONLINE       0     0     0
    gptid/899e1b49-92ab-11e4-ae10-000c29bebb01  ONLINE       0     0     0
    gptid/89b91ab6-92ab-11e4-ae10-000c29bebb01  ONLINE       0     0     0

errors: No known data errors

[root@freenas93] ~# gpart create -s gpt /dev/da4

da4 created
[root@freenas93] ~# gpart add -a 4k -i -s 2g -t freebsd-swap /dev/da4

gpart: Invalid value for 'i' argument: Invalid argument
[root@freenas93] ~# gpart add -a 4k -i 1 -s 2g -t freebsd-swap /dev/da4

da4p1 added
[root@freenas93] ~# gpart add -a 4k -i 2 -t freebsd-zfs /dev/da4

da4p2 added
[root@freenas93] ~# glabel status

                                      Name  Status  Components
gptid/f1eaeb64-82ff-11e4-870b-000c29bebb01     N/A  da0p1
gptid/45758f8b-92ab-11e4-ae10-000c29bebb01     N/A  da1p2
gptid/899e1b49-92ab-11e4-ae10-000c29bebb01     N/A  da2p2
gptid/89b91ab6-92ab-11e4-ae10-000c29bebb01     N/A  da3p2
gptid/f450228d-92ab-11e4-ae10-000c29bebb01     N/A  da4p1
gptid/0474afa6-92ac-11e4-ae10-000c29bebb01     N/A  da4p2
[root@freenas93] ~# cd /dev/gptid

[root@freenas93] /dev/gptid# ls

./                                    45758f8b-92ab-11e4-ae10-000c29bebb01  f1eaeb64-82ff-11e4-870b-000c29bebb01
../                                   899e1b49-92ab-11e4-ae10-000c29bebb01  f450228d-92ab-11e4-ae10-000c29bebb01
0474afa6-92ac-11e4-ae10-000c29bebb01  89b91ab6-92ab-11e4-ae10-000c29bebb01
[root@freenas93] /dev/gptid# zpool attach tank /dev/gptid/45758f8b-92ab-11e4-ae10-000c29bebb01 /dev/gptid/0474afa6-92ac-11e4-ae10-000c29bebb01

[root@freenas93] /dev/gptid# zpool status

  pool: freenas-boot
state: ONLINE
  scan: none requested
config:

NAME        STATE     READ WRITE CKSUM
freenas-boot  ONLINE       0     0     0
  da0p2     ONLINE       0     0     0

errors: No known data errors

  pool: tank
state: ONLINE
  scan: resilvered 252K in 0h0m with 0 errors on Fri Jan  2 10:22:34 2015
config:

NAME                                            STATE     READ WRITE CKSUM
tank                                            ONLINE       0     0     0
  mirror-0                                      ONLINE       0     0     0
    gptid/45758f8b-92ab-11e4-ae10-000c29bebb01  ONLINE       0     0     0
    gptid/0474afa6-92ac-11e4-ae10-000c29bebb01  ONLINE       0     0     0
  mirror-1                                      ONLINE       0     0     0
    gptid/899e1b49-92ab-11e4-ae10-000c29bebb01  ONLINE       0     0     0
    gptid/89b91ab6-92ab-11e4-ae10-000c29bebb01  ONLINE       0     0     0

errors: No known data errors
[root@freenas93] /dev/gptid#


Good Luck.
 

gohkhm

Cadet
Joined
Jan 1, 2015
Messages
6
I'm watching this thread to see if you destroy your data or not. Be careful. If I were you I'd disconnect all my other drives before destroying the new(er) drive.

Lol. OK. Will remove my mirror first before attempting to destroy the label. Thank you for the sage advice!

Mjws00, thanks so much for your help.
 

gohkhm

Cadet
Joined
Jan 1, 2015
Messages
6
OMG! What a nerve-wracking ride. IT WORKED! No complaints from the FreeNAS GUI. Thanks so much fellas!

Here are the steps, along with my setup, for future reference
  • FreeNAS version 9.2.1.5 booted from a 16GB USB stick
  • One volume consisting of a 2x2TB mirror and a single striped 1TB drive
Steps to convert a single stripe to a mirror
  1. From the GUI, detach the volume, shut down, and unplug the existing drives. Install the fresh 1TB drive.
  2. Start up again and you should only see one disk under View Disks. Run the following command in the Shell to clear any existing partitions on the new drive

    Code:
    gpart destroy -F /dev/ada0


  3. Create the new partitions

    Code:
    gpart create -s gpt /dev/ada0
    gpart add -a 4k -i 1 -s 2g -t freebsd-swap /dev/ada0
    gpart add -a 4k -i 2 -t freebsd-zfs /dev/ada0
  4. Check the new gptids for the drive and copy down the gptid for ada0p2

    Code:
    glabel status
  5. Shut down. Replug all the drives. Restart. Go to Storage and Auto-Import the volume.
  6. Go to Volume Status and take note of the device name for the stripe, e.g. ada1p2.
  7. Check the gptids again and this time find the gptid for ada1p2

    Code:
    glabel status
  8. Finally, run the command to convert the stripe into a mirror, where tank is the name of the volume. IMPORTANT: the gptid of the existing disk comes first

    Code:
    zpool attach tank /dev/gptid/[gptid_of_the_existing_disk] /dev/gptid/[gptid_of_the_new_partition]
  9. Go back to the GUI Volume Status and you'll see a new mirror. You can check the resilvering progress with the following command (and see the note just below)

    Code:
    zpool status
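If you want a more detailed live view while it resilvers, something like this should work too (same placeholder pool name, tank):

Code:
# per-vdev I/O view, refreshing every 5 seconds
zpool iostat -v tank 5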
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,994
Glad it worked and you did no harm in the process.
 

brashquido

Dabbler
Joined
Jun 30, 2014
Messages
17
Perfect. Thank you!
 

Terry Pounds

Dabbler
Joined
Jan 21, 2017
Messages
21
Hi everyone,

I have a 5-disk bastardized ZFS setup running FreeNAS-9.3-STABLE-201604150515. I want to convert this over to a mirror setup. The pool consists of three 3.0TB drives and one 1.0TB drive. I have one new 3.0TB drive I want to add to start the process of building my mirror. Currently I have 5.3TB of available space left out of the 10TB. My question is: if I get the total data on the 4 pool drives below 3.0TB, could I detach the 4 drives and then import the data to my new mirror drive, then decommission the 4 old drives and bring the three 3.0TB drives back as mirror drives?

This thread started out with my issue, but the OP only had one stripe to convert, so this throws me off. Also, he talks about detaching the stripe, but with my setup the GUI will not allow me to detach any drive.

My goal is to convert the pool to four 3.0TB drives in mirrors without setting everything back up from scratch. Most of the data I can dump to a client over the wire, although if I could leave 2.5TB on the 4 pool drives that would be easier, as the space I have on my client would be crammed full if I had to dump all the contents.

So if I am able to complete my goal using the instructions from the beginning of this thread, what are the additional steps I need to do? How do I detach the 4 drives in the GUI?

If I am not able to convert using the above: if I back up my config and dump all files, jails, etc. to a client, could I start from scratch, put the files and jails back the way they were on the new setup, and then restore the config? Or would that not work?
 

Attachments

  • 2017-01-23_10-08-52.jpg
  • 2017-01-23_10-09-49.jpg
  • 2017-01-23_10-10-31.jpg

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,994
What is the output of "zpool status" ?
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,994
Wow, not what I was expecting for an output. So how was your Media pool created? I was expecting it to tell me that it was a RAID-Z1 or something, but it didn't. Well, I could just be out of my knowledge zone too.

My advice right now would be to copy off all the data, destroy your pool, and then rebuild it; that is the one way to ensure it is built correctly, and it's likely to take less time, provided you have local places to store your data. I myself would store my data on a few extra hard drives, either on other computers or by adding single-disk volumes to FreeNAS and copying the data over. Another way is to use DVD-R media to store your data on. I like doing this periodically for my photos and financial records, stuff that is truly important.
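If you go the extra-drives route, zfs send/receive keeps datasets and snapshots intact -- a rough sketch, assuming your pool is named Media and a scratch pool named "scratch" has already been created on the spare disk(s):

Code:
# snapshot everything, then replicate the whole tree to the scratch pool
zfs snapshot -r Media@migrate
zfs send -R Media@migrate | zfs receive -F scratch/Media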
 

Bidule0hm

Server Electronics Sorcerer
Joined
Aug 5, 2013
Messages
3,710
This pool looks like a stripe... I hope there's no data on it, or that there are backups of said data.
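For reference, you can usually tell from zpool status: a striped pool lists the disks directly under the pool name, while RAID-Z groups them under a raidzN-M vdev. Something like:

Code:
# illustrative layouts only, not real output:
#   stripe:                 raidz1:
#     Media                   Media
#       gptid/aaaa...           raidz1-0
#       gptid/bbbb...             gptid/aaaa...
#                                 gptid/bbbb...
zpool status Media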
 