Added 12 drives (3 x RAIDZ2) to my existing zpool, how can I expand the dataset?

Status
Not open for further replies.

azathot

Dabbler
Joined
Jan 13, 2015
Messages
10
Hi there,

I'm running FreeNAS 9.3 (FreeNAS-9.3-STABLE-201501241715).
My pool was created with 2 RAIDZ2 vdevs, each consisting of 4 × 4TB drives.

Today I added another 12 drives and expanded the volume with 3 more 4-drive RAIDZ2 vdevs.
The drives added fine, and the total space for the pool shows 64TB:
[Attached screenshot: fnas_20150126.png]


Code:
pool: pool
state: ONLINE
  scan: none requested
config:

    NAME                                            STATE     READ WRITE CKSUM
    pool                                            ONLINE       0     0     0
     raidz2-0                                      ONLINE       0     0     0
       gptid/3041fe42-9f1d-11e4-8ae8-002590577d8f  ONLINE       0     0     0
       gptid/32d94356-9f1d-11e4-8ae8-002590577d8f  ONLINE       0     0     0
       gptid/3526815b-9f1d-11e4-8ae8-002590577d8f  ONLINE       0     0     0
       gptid/3769f844-9f1d-11e4-8ae8-002590577d8f  ONLINE       0     0     0
     raidz2-1                                      ONLINE       0     0     0
       gptid/39a98def-9f1d-11e4-8ae8-002590577d8f  ONLINE       0     0     0
       gptid/3bdff147-9f1d-11e4-8ae8-002590577d8f  ONLINE       0     0     0
       gptid/3d63a46c-9f1d-11e4-8ae8-002590577d8f  ONLINE       0     0     0
       gptid/3ee2ba9b-9f1d-11e4-8ae8-002590577d8f  ONLINE       0     0     0
     raidz2-2                                      ONLINE       0     0     0
       gptid/d61f800c-a58f-11e4-b88b-002590577d8f  ONLINE       0     0     0
       gptid/d6f88f50-a58f-11e4-b88b-002590577d8f  ONLINE       0     0     0
       gptid/d7d4ed40-a58f-11e4-b88b-002590577d8f  ONLINE       0     0     0
       gptid/d8b294c7-a58f-11e4-b88b-002590577d8f  ONLINE       0     0     0
     raidz2-3                                      ONLINE       0     0     0
       gptid/eeeb6162-a58f-11e4-b88b-002590577d8f  ONLINE       0     0     0
       gptid/efcdc1d5-a58f-11e4-b88b-002590577d8f  ONLINE       0     0     0
       gptid/f0b1b588-a58f-11e4-b88b-002590577d8f  ONLINE       0     0     0
       gptid/f191d150-a58f-11e4-b88b-002590577d8f  ONLINE       0     0     0
     raidz2-4                                      ONLINE       0     0     0
       gptid/07c8bcf9-a590-11e4-b88b-002590577d8f  ONLINE       0     0     0
       gptid/0a06fa1b-a590-11e4-b88b-002590577d8f  ONLINE       0     0     0
       gptid/0c3c2179-a590-11e4-b88b-002590577d8f  ONLINE       0     0     0
       gptid/0e81c468-a590-11e4-b88b-002590577d8f  ONLINE       0     0     0


The available space is showing up as half of that:

Code:
freenas# zfs list pool
NAME   USED  AVAIL  REFER  MOUNTPOINT
pool  4.04T  30.0T   587G  /mnt/pool
freenas# zpool list pool
NAME   SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
pool  72.5T  8.34T  64.2T         -     5%    11%  1.00x  ONLINE  /mnt
freenas#


Is there a way to expand the dataset to reflect the total amount of space?

Thanks for the help everyone!
 

Jailer

Not strong, but bad
Joined
Sep 12, 2014
Messages
4,975
Just curious as to why you added them in 4 disk vdevs and not 2 vdevs of 6?

The available space is showing correctly as you have half of your drives dedicated to parity.
 

Mlovelace

Guru
Joined
Aug 19, 2014
Messages
1,111
You are doing raidz2 with 4 disk vdevs so half of your total storage is devoted to parity. ;)
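To put numbers on it, here is a rough sketch of the capacity math (approximate: ZFS metadata overhead and TB-vs-TiB rounding make the real figures somewhat lower, which is why `zfs list` shows ~34T rather than 40T):

```shell
# Rough capacity math for 5 vdevs of 4-disk RAIDZ2 with 4TB drives.
# Each RAIDZ2 vdev spends 2 disks' worth of space on parity, so a
# 4-disk RAIDZ2 vdev yields only ~2 disks of usable space.
VDEVS=5; DISKS=4; PARITY=2; DISK_TB=4

RAW=$(( VDEVS * DISKS * DISK_TB ))                 # 20 disks * 4TB = 80 TB raw
USABLE=$(( VDEVS * (DISKS - PARITY) * DISK_TB ))   # 10 data disks * 4TB = 40 TB
echo "raw: ${RAW} TB, usable: ${USABLE} TB"
```

So by design, exactly half of the raw space in this layout goes to parity.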
 

azathot

Dabbler
Joined
Jan 13, 2015
Messages
10
Just curious as to why you added them in 4 disk vdevs and not 2 vdevs of 6?

The available space is showing correctly as you have half of your drives dedicated to parity.
Mainly because of my case, a Norco RPC-4220. Each backplane has four drives.

Is there another recommended setup for 20 drives? I could migrate the existing data off the pool and recreate it.
 

Jailer

Not strong, but bad
Joined
Sep 12, 2014
Messages
4,975
Mainly because of my case, a Norco RPC-4220. Each backplane has four drives.

Is there another recommended setup for 20 drives? I could migrate the existing data off the pool and recreate it.

2 × 10-disk RAIDZ2 vdevs is what I would go with if you are starting fresh. But that is a lot of space and data, and it's going to require a lot of memory.
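From the command line, a two-vdev layout like that would look roughly like the sketch below. This is illustrative only: FreeNAS normally builds pools through the GUI using gptid labels, and the `da0`..`da19` device names here are placeholders, not your actual disks.

```shell
# Hypothetical layout: one pool made of two 10-disk RAIDZ2 vdevs.
# Device names are placeholders; do not run this against real disks
# without substituting your own.
zpool create pool \
  raidz2 da0 da1 da2 da3 da4 da5 da6 da7 da8 da9 \
  raidz2 da10 da11 da12 da13 da14 da15 da16 da17 da18 da19
```

With 2 parity disks per 10-disk vdev, only 20% of the raw space goes to parity instead of 50%.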
 

Mlovelace

Guru
Joined
Aug 19, 2014
Messages
1,111
Mainly because of my case, a Norco RPC-4220. Each backplane has four drives.

Is there another recommended setup for 20 drives? I could migrate the existing data off the pool and recreate it.
Depending on your system specs you could try 2 x 10 disk raidz2 vdevs. I would create the zpool with that then test performance.

Edit: I didn't see jailer's response :D
Give it a shot though
 

Jailer

Not strong, but bad
Joined
Sep 12, 2014
Messages
4,975
Heh, a mirrored pair of responses. :D
 

azathot

Dabbler
Joined
Jan 13, 2015
Messages
10
The performance is pretty good so far:
Code:
freenas# gdd if=/dev/zero of=./TESTFILE bs=2048k count=50k
51200+0 records in
51200+0 records out
107374182400 bytes (107 GB) copied, 23.1123 s, 4.6 GB/s
freenas#


The specs on the machine seem ok so far:
Platform: Intel(R) Xeon(R) CPU E31230 @ 3.20GHz
Memory: 32713MB
 

Mlovelace

Guru
Joined
Aug 19, 2014
Messages
1,111
The performance is pretty good so far:
Code:
freenas# gdd if=/dev/zero of=./TESTFILE bs=2048k count=50k
51200+0 records in
51200+0 records out
107374182400 bytes (107 GB) copied, 23.1123 s, 4.6 GB/s
freenas#


The specs on the machine seem ok so far:
Platform: Intel(R) Xeon(R) CPU E31230 @ 3.20GHz
Memory: 32713MB
Be sure that compression is off when you're doing dd tests on the pool; with /dev/zero as the source, compression will give you inflated results.
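To rule compression out, you can check and toggle it around the benchmark. A sketch (substitute your actual pool/dataset name; on FreeNAS 9.3 the default compression is lz4):

```shell
zfs get compression pool        # check the current setting
zfs set compression=off pool    # disable for the benchmark
# ... run the dd test here ...
zfs set compression=lz4 pool    # re-enable afterwards
```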
 

azathot

Dabbler
Joined
Jan 13, 2015
Messages
10
Good call on the compression, this seems a bit more realistic:
Code:
freenas# gdd if=/dev/zero of=./TESTFILE bs=2048k count=50k
51200+0 records in
51200+0 records out
107374182400 bytes (107 GB) copied, 109.158 s, 984 MB/s
freenas#
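As a sanity check on that figure (dd reports decimal megabytes, i.e. 10^6 bytes):

```shell
# Verify dd's reported throughput: bytes copied / elapsed seconds.
BYTES=107374182400
SECS=109.158
awk -v b="$BYTES" -v s="$SECS" 'BEGIN { printf "%.0f MB/s\n", b / s / 1e6 }'
# -> 984 MB/s, matching dd's own summary line
```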
 

Mlovelace

Guru
Joined
Aug 19, 2014
Messages
1,111
Good call on the compression, this seems a bit more realistic:
Code:
freenas# gdd if=/dev/zero of=./TESTFILE bs=2048k count=50k
51200+0 records in
51200+0 records out
107374182400 bytes (107 GB) copied, 109.158 s, 984 MB/s
freenas#
That looks good, I'd go with that pool config.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
I have the norco 4224 and I am running 2 pools.. one is a 10 disk RAIDZ2 (single vdev) of 6TB drives and the other is a 6 disk RAIDZ2 of 3TB drives.

Definitely look at doing 10-disk vdevs (or some other combination that works well for you). I'd definitely avoid 4-disk RAIDZ2s; that's a lot of parity relative to the usable disk space.
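The overhead difference is easy to quantify (ignoring ZFS metadata and allocation padding):

```shell
# Parity overhead as a fraction of raw space for RAIDZ2 at two widths.
# RAIDZ2 always uses 2 disks' worth of parity per vdev, so wider vdevs
# waste proportionally less space.
for disks in 4 10; do
  awk -v d="$disks" 'BEGIN { printf "%2d-disk raidz2: %d%% parity\n", d, 2 / d * 100 }'
done
# -> 4-disk raidz2: 50% parity
# -> 10-disk raidz2: 20% parity
```

The trade-off is that wider vdevs take longer to resilver and still tolerate only two failed disks per vdev.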
 

azathot

Dabbler
Joined
Jan 13, 2015
Messages
10
Thanks guys, I'll migrate the data and move it based on your recommendations. I appreciate the help!
 

azathot

Dabbler
Joined
Jan 13, 2015
Messages
10
Quick update, performance is even better after the new config:
Code:
freenas# gdd if=/dev/zero of=./TESTFILE bs=2048k count=50k
51200+0 records in
51200+0 records out
107374182400 bytes (107 GB) copied, 63.1932 s, 1.7 GB/s
freenas#


Thanks again guys!
 