Can't expand volume or create new vdevs

Status
Not open for further replies.

pbryan

Dabbler
Joined
Apr 14, 2017
Messages
16
Relevant Info

Hardware: Dell R530, Xeon E5-2630v4, 64GB DDR4, 8x 4TB SAS 7.2K, 2x 16GB SD Card (Mirror)
Software: FreeNAS 9.10.2-U2
Current Pool: 6x 4TB RAID10 (Mirror)

[root@csc-san2 ~]# zpool status
pool: SAN2
state: ONLINE
scan: scrub repaired 0 in 1h15m with 0 errors on Sun Jul 2 01:15:22 2017
config:

NAME            STATE     READ WRITE CKSUM
SAN2            ONLINE       0     0     0
  mirror-0      ONLINE       0     0     0
    da0p2       ONLINE       0     0     0
    da1p2       ONLINE       0     0     0
  mirror-1      ONLINE       0     0     0
    da2p2       ONLINE       0     0     0
    da3p2       ONLINE       0     0     0
  mirror-2      ONLINE       0     0     0
    da4p2       ONLINE       0     0     0
    da5p2       ONLINE       0     0     0

errors: No known data errors

pool: freenas-boot
state: ONLINE
scan: scrub repaired 0 in 0h1m with 0 errors on Wed Jun 21 03:46:58 2017
config:

NAME            STATE     READ WRITE CKSUM
freenas-boot    ONLINE       0     0     0
  da8p2         ONLINE       0     0     0

errors: No known data errors
[root@csc-san2 ~]#


My Problem

My ZFS pool consists of six (6) 4TB drives in RAID10 (mirrors). I have two (2) brand-new 4TB drives in the enclosure that are currently not part of the pool. I want to expand my volume by adding the 2 new drives as an additional mirrored vdev.
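
At the ZFS level, what I'm after is the equivalent of adding a new mirror vdev to the pool. A sketch only (FreeNAS actually partitions the disks and references them by gptid, so the raw device names below are illustrative):

# Sketch only -- FreeNAS would use gptid-labeled partitions, not raw disks
zpool add SAN2 mirror da6 da7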

I tried using the GUI Volume Manager to expand the array; it reports "Volume successfully added." but nothing happens: the volume remains the same (6x4TB), the drives are not added, and no additional vdev is created. I rebooted the system and tried again; no success.

I checked the console for messages during the procedure; the only thing of note is the following:

GEOM: da6: the primary GPT table is corrupt or invalid.
GEOM: da6: using the secondary instead -- recovery strongly advised.
GEOM: da7: the primary GPT table is corrupt or invalid.
GEOM: da7: using the secondary instead -- recovery strongly advised.
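
To see what GEOM is objecting to, the on-disk partition tables can be inspected read-only (a sketch, assuming the affected disks are da6 and da7):

# Read-only: print the partition table GEOM has tasted
gpart show da6
# Read-only: more detail on the scheme and its state
gpart list da6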


I tried to fix the GPT tables using gpart(8):

[root@csc-san2 ~]# gpart recover /dev/da6
da6 recovering is not needed
[root@csc-san2 ~]# gpart recover /dev/da7
da7 recovering is not needed
[root@csc-san2 ~]#
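
Since gpart recover insists nothing needs fixing, a heavier-handed option (which I have not run) would be to wipe the partitioning entirely and let FreeNAS start from a clean slate. A sketch; this destroys anything on the disk, so only for a drive that belongs to no pool:

# DESTRUCTIVE: removes the partition scheme from da6
gpart destroy -F da6
# Zero the start of the disk, where the primary GPT lives
dd if=/dev/zero of=/dev/da6 bs=1m count=2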


I have not tried to expand the array from the command line, as the consensus seems to be to ALWAYS use the GUI or risk future pool failures. So, still in the GUI, I then tried to create a separate mirrored volume using just the 2 new drives; it shows the following error:

Error: Unable to create the pool: cannot open '/dev/gptid/996bf431-6e33-11e7-af2d-a0369fdcac5c': No such file or directory

So now I have 2 drives in my 8-drive enclosure that cannot be added to my pool. I do not want to rebuild my pool from scratch; although I have backups, it seems silly to redo the volume every time I want to add drives. Some of my ZFS boxes are in production, where rebuilding is unacceptable.

I'm not sure what other logs to check, and Googling only turns up reports of the expand process working successfully. What else can I do to troubleshoot?
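
So far the only read-only checks I know to try are these (a sketch; device names assumed):

# Does the gptid from the error message actually exist?
ls /dev/gptid/
# What labels does GEOM currently know about?
glabel status
# Any new kernel/GEOM messages after retrying the GUI operation?
tail /var/log/messages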
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,994
Odd failures.

While I doubt this is the problem or the solution, have you run any testing on the new drives to ensure each one works properly? A SMART long test and badblocks?
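
For reference, a sketch of that testing (device name assumed; the badblocks write test is destructive, so only on a disk with no data):

# Start a long SMART self-test; it runs on the drive in the background
smartctl -t long /dev/da6
# Read the results once the test has finished
smartctl -a /dev/da6
# DESTRUCTIVE write-mode surface test, with progress output
badblocks -ws /dev/da6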
 

Thomas102

Explorer
Joined
Jun 21, 2017
Messages
83
I'd try creating a new zpool from the command line. That way you won't risk breaking your existing pool.
Then check whether any new error messages pop up.

zpool create SAN-test mirror da6 da7
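
Afterwards, something like this would show any fresh kernel/GEOM messages (a sketch):

dmesg | tail -20
tail /var/log/messages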
 

pbryan

Dabbler
Joined
Apr 14, 2017
Messages
16
The SMART long test I ran over the weekend did not flag any drives as questionable. I tried creating a zpool from the command line, and that seems to have worked successfully:

[root@csc-san2 ~]# zpool create SAN-test mirror da6 da7
[root@csc-san2 ~]# zpool status
pool: SAN-test
state: ONLINE
scan: none requested
config:

NAME            STATE     READ WRITE CKSUM
SAN-test        ONLINE       0     0     0
  mirror-0      ONLINE       0     0     0
    da6         ONLINE       0     0     0
    da7         ONLINE       0     0     0

errors: No known data errors

pool: SAN2
state: ONLINE
scan: scrub repaired 0 in 1h15m with 0 errors on Sun Jul 2 01:15:22 2017
config:

NAME            STATE     READ WRITE CKSUM
SAN2            ONLINE       0     0     0
  mirror-0      ONLINE       0     0     0
    da0p2       ONLINE       0     0     0
    da1p2       ONLINE       0     0     0
  mirror-1      ONLINE       0     0     0
    da2p2       ONLINE       0     0     0
    da3p2       ONLINE       0     0     0
  mirror-2      ONLINE       0     0     0
    da4p2       ONLINE       0     0     0
    da5p2       ONLINE       0     0     0

errors: No known data errors

pool: freenas-boot
state: ONLINE
scan: scrub repaired 0 in 0h1m with 0 errors on Wed Jun 21 03:46:58 2017
config:

NAME            STATE     READ WRITE CKSUM
freenas-boot    ONLINE       0     0     0
  da8p2         ONLINE       0     0     0

errors: No known data errors
[root@csc-san2 ~]#


However, I do not see anything in the GUI that says a new pool is available, and the Volume Manager still shows da6 and da7 as available for expanding or creating a pool. I'm not sure how else to "refresh" the GUI short of a reboot.
 

gpsguy

Active Member
Joined
Jan 22, 2012
Messages
4,472
If you look at the top of the output you provided, you will see the new SAN-test mirror volume.

We don't recommend creating volumes from the command line; I realize this was done as a test. FreeNAS stores its configuration in a database, and doing things like this outside the webGUI confuses it. One could fix that, but since it's a test, the pool will probably just get destroyed anyway.
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
There are lots of posts about corrupt GPT tables causing an issue. If these drives were sold as 'new', they probably weren't.

One of the simplest solutions would be to simply format them in a Windows box.
 

Thomas102

Explorer
Joined
Jun 21, 2017
Messages
83
Yes, it was only a test. You must destroy the mirror rather than add it to your existing zpool, because the test mirror uses unpartitioned disks:

zpool destroy SAN-test

Maybe the test changed the disk contents and the GUI will work now?

You can also manually create a GPT on the disks:
http://wonkity.com/~wblock/docs/html/disksetup.html
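
A minimal sketch of the approach in that document, assuming da6 is blank (FreeNAS itself would normally also create a swap partition before the ZFS partition):

# Write a fresh, valid GPT to the disk
gpart create -s gpt da6
# Add a single 1 MiB-aligned ZFS partition spanning the disk
gpart add -t freebsd-zfs -a 1m da6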
 

pbryan

Dabbler
Joined
Apr 14, 2017
Messages
16
I know this thread is kind of old, but things got busy and I wasn't able to update until now.

@gpsguy: Yes, this was only a test. I destroyed the test mirror after testing.
@Thomas102: No, creating the test pool on those drives did not change the outcome; I still couldn't expand the array.
@Stux: I wasn't able to format them in a Windows box before finding a solution.

I was finally able to solve this by updating from 9.10.2-U2 to 9.10.2-U6. On the newer version, I could expand my ZFS pool through the GUI without any issues. I don't know exactly what caused the problem, but whatever it was, it is fixed in 9.10.2-U6.
 