Growing a zpool by replacing disks

Status
Not open for further replies.

cwalkatron

Cadet
Joined
Jan 4, 2014
Messages
2
I have a RAID-Z1 (4x2TB disks) that is working great. I'd like to increase the size of this pool by replacing disks one by one with bigger disks. autoexpand is on, so I think this should work fine. I have good backups offsite, so I'm fine with the zpool being degraded for a bit.

What is the correct way to remove a functioning disk from a zpool to replace it with a larger disk?

I only have 4 bays (HP Proliant Microserver) with no hot swap. The specs are below. This is FreeNAS-9.3-STABLE-201509282017. Thanks for any help.

freenas# camcontrol devlist
<WDC WD20EFRX-68EUZN0 80.00A80> at scbus0 target 0 lun 0 (pass0,ada0)
<WDC WD2002FAEX-007BA0 05.01D05> at scbus1 target 0 lun 0 (pass1,ada1)
<WDC WD20EFRX-68EUZN0 80.00A80> at scbus2 target 0 lun 0 (pass2,ada2)
<WDC WD20EZRX-00DC0B0 80.00A80> at scbus3 target 0 lun 0 (pass3,ada3)
<SanDisk Cruzer Fit 1.26> at scbus7 target 0 lun 0 (pass4,da0)
freenas# glabel status
Name                                         Status  Components
gptid/c5c1db87-92c0-11e4-8685-a0481cb89848   N/A     da0p1
gptid/e47bb823-7dfc-11e3-bf93-a0481cb89848   N/A     ada0p2
gptid/e4e2b58c-7dfc-11e3-bf93-a0481cb89848   N/A     ada1p2
gptid/ce08a4fa-8255-11e3-96f7-a0481cb89848   N/A     ada2p2
gptid/e5d9fe38-7dfc-11e3-bf93-a0481cb89848   N/A     ada3p2
freenas# gpart show
=>          34    31266749  da0  GPT  (14G)
            34        1024    1  bios-boot  (512k)
          1058           6       - free -  (3.0k)
          1064    31265712    2  freebsd-zfs  (14G)
      31266776           7       - free -  (3.5k)

=>          34  3907029101  ada0  GPT  (1.8T)
            34          94        - free -  (47k)
           128     4194304     1  freebsd-swap  (2.0G)
       4194432  3902834696     2  freebsd-zfs  (1.8T)
    3907029128           7        - free -  (3.5k)

=>          34  3907029101  ada1  GPT  (1.8T)
            34          94        - free -  (47k)
           128     4194304     1  freebsd-swap  (2.0G)
       4194432  3902834703     2  freebsd-zfs  (1.8T)

=>          34  3907029101  ada2  GPT  (1.8T)
            34          94        - free -  (47k)
           128     4194304     1  freebsd-swap  (2.0G)
       4194432  3902834696     2  freebsd-zfs  (1.8T)
    3907029128           7        - free -  (3.5k)

=>          34  3907029101  ada3  GPT  (1.8T)
            34          94        - free -  (47k)
           128     4194304     1  freebsd-swap  (2.0G)
       4194432  3902834696     2  freebsd-zfs  (1.8T)
    3907029128           7        - free -  (3.5k)
freenas# zpool list
NAME           SIZE  ALLOC   FREE  EXPANDSZ  FRAG  CAP  DEDUP  HEALTH  ALTROOT
freenas-boot  14.9G  4.83G  10.0G         -     -  32%  1.00x  ONLINE  -
tank          7.25T  5.60T  1.65T         -   19%  77%  1.00x  ONLINE  /mnt

# zpool status tank
pool: tank
state: ONLINE
scan: scrub repaired 0 in 9h52m with 0 errors on Sat Oct 31 09:52:32 2015
config:

NAME                                            STATE   READ WRITE CKSUM
tank                                            ONLINE     0     0     0
  raidz1-0                                      ONLINE     0     0     0
    gptid/e47bb823-7dfc-11e3-bf93-a0481cb89848  ONLINE     0     0     0
    gptid/e4e2b58c-7dfc-11e3-bf93-a0481cb89848  ONLINE     0     0     0
    gptid/ce08a4fa-8255-11e3-96f7-a0481cb89848  ONLINE     0     0     0
    gptid/e5d9fe38-7dfc-11e3-bf93-a0481cb89848  ONLINE     0     0     0

errors: No known data errors
 

BigDave

FreeNAS Enthusiast
Joined
Oct 6, 2013
Messages
2,479
See section 8.1.11 of the manual. The online version is at
http://doc.freenas.org/9.3/
If you still have questions, we'll be here...
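For a box with no spare bays, the cycle in section 8.1.11 boils down to one offline/swap/replace round per disk. Here's a minimal dry-run sketch: it only prints the commands rather than running them, the gptid shown is one of this pool's members, and the guide itself walks through these steps via the GUI's Volume Status screen rather than the CLI.

```shell
#!/bin/sh
# Dry run of the replace-one-disk-at-a-time cycle (FreeNAS 9.3 guide,
# section 8.1.11). Prints the commands instead of executing them;
# device names are placeholders for this pool.
replace_disk() {
  pool=$1; old=$2; new=$3
  echo "zpool offline $pool $old"
  echo "# power off, swap the old 2 TB disk for the larger one, power on"
  echo "zpool replace $pool $old $new"
  echo "zpool status $pool   # wait for the resilver before the next disk"
}
replace_disk tank gptid/e47bb823-7dfc-11e3-bf93-a0481cb89848 ada0
```

Each round leaves the RAID-Z1 with no redundancy until the resilver finishes, which is exactly why good backups matter here.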
 

s53zo

Cadet
Joined
Jan 14, 2014
Messages
5
I have done this (growing the pool) from 4x1TB to 4x3TB with the help of a 5th disk connected via external eSATA, as described in the manual. A bit of rebooting later, everything worked flawlessly.
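With a spare port like that, each swap can skip the offline step entirely: attach the new disk alongside the old one and let `zpool replace` resilver onto it while the pool stays fully redundant. A hypothetical dry-run sketch (ada4 stands in for the eSATA-attached disk; it only prints the commands):

```shell
#!/bin/sh
# Dry run of the spare-port variant: with the new disk attached on a
# 5th (eSATA) port, zpool replace resilvers onto it while the old disk
# stays in the pool, so redundancy is never lost. Names are placeholders.
safe_replace() {
  pool=$1; old=$2; new=$3
  echo "zpool replace $pool $old $new"
  echo "zpool status $pool   # the old member detaches when the resilver ends"
}
safe_replace tank gptid/e47bb823-7dfc-11e3-bf93-a0481cb89848 ada4
```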
 

eruizlimon

Cadet
Joined
Dec 25, 2015
Messages
2
I have recently upgraded the drives in my FreeNAS server (running FreeNAS-9.3-STABLE-201509022158 with autoexpand enabled) from 6x1.5TB to 4x4TB + 2x1.5TB. Since I did not have any open drive ports, this involved replacing one drive at a time according to the instructions in sections 8.1.10 and 8.1.11 of the user guide. After each drive replacement, and again once the whole process was complete, I noticed that the pool size did not increase as expected, so I then followed the instructions in section 8.1.12 of the user guide.

Unfortunately, nothing has changed in spite of my efforts: the total space of my volume is exactly the same as it was prior to the drive replacement.

Does anyone have any ideas regarding what I may have done wrong?

Thanks in advance for your time!
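For reference, the expansion steps in section 8.1.12 reduce to making sure autoexpand is on and then telling ZFS that each replaced member has grown. A dry-run sketch (it only prints the commands; the adaXp2 partition names follow FreeNAS's usual layout and are assumptions here):

```shell
#!/bin/sh
# Dry run of the post-replacement expansion steps (guide section 8.1.12).
# zpool online -e asks ZFS to expand a vdev member onto its new, larger
# device. Prints the commands instead of running them.
expand_pool() {
  pool=$1; shift
  echo "zpool set autoexpand=on $pool"
  for dev in "$@"; do
    echo "zpool online -e $pool $dev"
  done
  echo "zpool list $pool   # SIZE should now reflect the larger disks"
}
expand_pool tank ada0p2 ada1p2 ada2p2 ada3p2
```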
 

rsquared

Explorer
Joined
Nov 17, 2015
Messages
81
The amount of space ZFS uses per drive will never be more than the capacity of the smallest drive in the vdev. In other words, until you replace the last two 1.5 TB drives, the 4 TB drives each provide only 1.5 TB worth of space.
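To put numbers on that: RAID-Z1 usable space is roughly (number of disks - 1) times the smallest disk, so a 6-disk vdev with two 1.5 TB drives left is still capped at about 7.5 TB no matter how big the other four are. A rough sketch of the arithmetic (GB figures; ignores swap partitions, padding, and the TB-vs-TiB difference, so real numbers come out somewhat lower):

```shell
#!/bin/sh
# Rough RAID-Z1 usable capacity: (disks - 1) * smallest disk, in GB.
raidz1_usable() {
  disks=$1; smallest=$2
  echo $(( (disks - 1) * smallest ))
}
echo "now:   $(raidz1_usable 6 1500) GB"   # two 1.5 TB drives still present
echo "after: $(raidz1_usable 6 4000) GB"   # once the last two are replaced
```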
 

eruizlimon

Cadet
Joined
Dec 25, 2015
Messages
2
Hahahahaha... now that would explain things, wouldn't it?

Now I feel a bit foolish... :blush:

Thank you for the clarification... now I'm off to the store for two more drives!

Seriously, thank you.
 