Extending a Z2 pool

Longshot

Cadet
Joined
Jun 10, 2020
Messages
4
Hello, this is my first FreeNAS build. I took 8x4TB drives from my Windows rig, put them in my FreeNAS box, made a Z2 pool out of them, and copied all of my data to it over the network from my Windows 10 machine. I then wanted to double the capacity of the original Z2 pool by adding a second vdev with the identical model and number of drives, so I went to the original Z2 pool and chose EXTEND to add the 8 new drives. It seems to have worked, because the pool now shows both vdevs of 8 drives each. My problem is that the capacity hasn't increased at all: the original vdev's capacity was about 21TB, and I copied files to it until it was about 87% full. I'm confused because I was expecting 42TB, yet the pool now says it's 44% full.
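A quick back-of-the-envelope check of those numbers (all figures taken from above; everything approximate):

```python
# Sanity check of the figures quoted in the post.
# One 8-wide RAIDZ2 vdev of 4 TB disks loses two disks' worth to parity:
disks, disk_tb, parity = 8, 4, 2
vdev_tb = (disks - parity) * disk_tb   # 24 TB of data space before overhead
vdev_tib = vdev_tb * 1e12 / 2**40      # ~21.8 TiB, near the ~21 TB reported

# Data written before the extend, at ~87% of ~21 TB usable:
used_tb = 0.87 * 21.0                  # ~18.3 TB

# After adding an identical vdev, usable capacity doubles to ~42 TB,
# so the same data should show the pool as roughly:
fill_after = used_tb / (2 * 21.0)
print(f"{fill_after:.1%} full")        # ~43.5%, i.e. the 44% the GUI reports
```

In other words, the reported 44% is exactly what a successful doubling of capacity would look like with ~18TB of data on it.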

I would have searched more, but I didn't realize how hot all my drives were getting now that they are crammed into the server case. I've started to get critical alerts from a few drives saying they are unreadable, yet the dashboard says the pool is healthy. I have a spare, but I haven't set it up yet.

What did I do wrong?

I would also like to know if it's possible to make a striped pool of 2x2TB NVMe drives and use it as a hot spare until I can get a real one.

Thanks!
 

Heracles

Wizard
Joined
Feb 2, 2018
Messages
1,401
Hey @Longshot,

First thing would be to check what the actual situation is... Can you please post the output of:
zpool status -v
 

Longshot

Cadet
Joined
Jun 10, 2020
Messages
4
Warning: settings changed through the CLI are not written to
the configuration database and will be reset on reboot.

root@freenas[~]# zpool status-v
unrecognized command 'status-v'
usage: zpool command args ...
where 'command' is one of the following:

create [-fnd] [-B] [-o property=value] ...
[-O file-system-property=value] ...
[-m mountpoint] [-R root] [-t tempname] <pool> <vdev> ...
destroy [-f] <pool>

add [-fn] <pool> <vdev> ...
remove [-nps] <pool> <device> ...

labelclear [-f] <vdev>

checkpoint [--discard] <pool> ...

list [-Hpv] [-o property[,...]] [-T d|u] [pool] ... [interval [count]]
iostat [-v] [-T d|u] [pool] ... [interval [count]]
status [-vx] [-T d|u] [pool] ... [interval [count]]

online [-e] <pool> <device> ...
offline [-t] <pool> <device> ...
clear [-nF] <pool> [device]
reopen <pool>

attach [-f] <pool> <device> <new-device>
detach <pool> <device>
replace [-f] <pool> <device> [new-device]
split [-n] [-R altroot] [-o mntopts]
[-o property=value] <pool> <newpool> [<device> ...]

initialize [-cs] <pool> [<device> ...]
scrub [-s | -p] <pool> ...

import [-d dir] [-D]
import [-o mntopts] [-o property=value] ...
[-d dir | -c cachefile] [-D] [-f] [-m] [-N] [-R root] [-F [-n]] -a
import [-o mntopts] [-o property=value] ...
[-d dir | -c cachefile] [-D] [-f] [-m] [-N] [-R root] [-F [-n]] [-t]
[--rewind-to-checkpoint] <pool | id> [newpool]
export [-f] <pool> ...
upgrade [-v]
upgrade [-V version] <-a | pool ...>
reguid <pool>

history [-il] [<pool>] ...
get [-Hp] [-o "all" | field[,...]] <"all" | property[,...]> <pool> ...
set <property=value> <pool>
root@freenas[~]# zpool status -v
  pool: NVME4TB
 state: ONLINE
  scan: none requested
config:

        NAME                                            STATE     READ WRITE CKSUM
        NVME4TB                                         ONLINE       0     0     0
          mirror-0                                      ONLINE       0     0     0
            gptid/9bb98a3a-ab1c-11ea-a2b7-00005ddcdda6  ONLINE       0     0     0
            gptid/9bbe737c-ab1c-11ea-a2b7-00005ddcdda6  ONLINE       0     0     0

errors: No known data errors

  pool: RAIDZ201
 state: ONLINE
  scan: resilvered 8K in 0 days 00:00:01 with 0 errors on Wed Jun 10 07:15:03 2020
config:

        NAME                                            STATE     READ WRITE CKSUM
        RAIDZ201                                        ONLINE       0     0     0
          raidz2-0                                      ONLINE       0     0     0
            gptid/c72adf32-a798-11ea-9ed6-00005ddcdda6  ONLINE       0     0     0
            gptid/ce53f9e2-a798-11ea-9ed6-00005ddcdda6  ONLINE       0     0     0
            gptid/cb326506-a798-11ea-9ed6-00005ddcdda6  ONLINE       0     0     0
            gptid/e62396ca-a798-11ea-9ed6-00005ddcdda6  ONLINE       0     0     0
            gptid/e5f468ea-a798-11ea-9ed6-00005ddcdda6  ONLINE       0     0     0
            gptid/e303f61e-a798-11ea-9ed6-00005ddcdda6  ONLINE       0     0     0
            gptid/ebd9f1f0-a798-11ea-9ed6-00005ddcdda6  ONLINE       0     0     0
            gptid/eed54768-a798-11ea-9ed6-00005ddcdda6  ONLINE       0     0     0
          raidz2-1                                      ONLINE       0     0     0
            gptid/166a789e-aace-11ea-87de-00005ddcdda6  ONLINE       0     0     0
            gptid/3b7ed6e8-aace-11ea-87de-00005ddcdda6  ONLINE       0     0     0
            gptid/4204774d-aace-11ea-87de-00005ddcdda6  ONLINE       0     0     0
            gptid/3e9d9ba7-aace-11ea-87de-00005ddcdda6  ONLINE       0     0     0
            gptid/41d46f42-aace-11ea-87de-00005ddcdda6  ONLINE       0     0     0
            gptid/5bb75e81-aace-11ea-87de-00005ddcdda6  ONLINE       0     0     0
            gptid/55fd5ce6-aace-11ea-87de-00005ddcdda6  ONLINE       0     0     0
            gptid/55ccbc7c-aace-11ea-87de-00005ddcdda6  ONLINE       0     0     0

errors: No known data errors

  pool: freenas-boot
 state: ONLINE
  scan: none requested
config:
 

Longshot

Cadet
Joined
Jun 10, 2020
Messages
4
There is a space there; I fumbled the copy and paste...
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
What's the output of zpool list and zfs list -o space?
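For context, `zpool list` reports pool-level totals (which on RAIDZ include parity, so the numbers look inflated), while `zfs list -o space` shows usable space per dataset. A sketch of how the first `zpool list` columns read, with invented values in the format the command prints:

```shell
# NAME, SIZE, ALLOC, FREE are the first columns of 'zpool list' output.
# The values below are made up for illustration; run 'zpool list' for real ones.
line="RAIDZ201  58.0T  25.4T  32.6T"
# SIZE and ALLOC both include parity on RAIDZ pools, so the ratio is
# still a fair measure of how full the pool is:
echo "$line" | awk '{ printf "%s: %.0f%% allocated\n", $1, 100 * $3 / $2 }'
```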
 

Longshot

Cadet
Joined
Jun 10, 2020
Messages
4
Everything seems to be working now. I've been shutting down ASAP so my drives don't overheat. Does it help to do a scrub after a pool extend operation to get current info? Many of my HDs have been inactive, but most were made in 2014, so I expect failures. Can I use 2x2TB=4TB NVMe PCIe SSDs as a hot spare? Would the speed of NVMe be beneficial? Can I use iSCSI to mirror my zpool to a RAID 0 array on my Windows rig for performance, lower power, and redundancy?
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
Does it help to do a scrub after a pool extend operation to get current info?
I can't imagine how.

Can I use 2x2TB=4TB NVMe PCIe SSDs as a hot spare?
Conceivably, but that doesn't seem like a sustainable solution.

Would the speed of NVMe be beneficial?
Not mixed with spinning rust.

Can I use iSCSI to mirror my zpool to a RAID 0 array on my Windows rig for performance, lower power, and redundancy?
I'm not sure what you mean, but that sounds like a crazy setup.
 