Dissimilar VDEVs in a ZPOOL - Missing space?


gotenks (Cadet, joined Mar 1, 2017, messages: 9)
Hey there!

Long-time lurker, first-time poster.

Running: FreeNAS-9.10.2-U1 (86c7ef5)

I have the following drives installed in a 24-bay personal server, in rows of four. This was a clean build; nothing was transitioned or upgraded.
4x 6TB WD Red
8x 3TB WD Red

My hope was to use each row as its own RAIDZ1 vdev in the pool.

I'm worried I've lost some capacity by letting the FreeNAS GUI handle the ZFS volume/dataset creation. To put it another way, I'm afraid that only 3TB of each 6TB drive is being used, but I can't say for sure.

Am I going to have to rebuild the pool through the command line in FreeNAS in order to take advantage of one higher-capacity vdev?

zpool list
Code:
NAME           SIZE  ALLOC   FREE  EXPANDSZ  FRAG  CAP  DEDUP  HEALTH  ALTROOT
freenas-boot   111G  2.14G   109G         -     -   1%  1.00x  ONLINE  -
zp0           43.5T  12.6T  30.9T         -   10%  28%  1.00x  ONLINE  /mnt

zp0 (otherwise known as my tank) shows me the space I want to see when I run 'zpool list', but the next command shows I don't actually have that much.

zfs list
Code:
NAME                                                    USED  AVAIL  REFER  MOUNTPOINT
freenas-boot                                           2.14G   105G    31K  none
freenas-boot/ROOT                                      2.04G   105G    25K  none
freenas-boot/ROOT/9.10.2                               14.2M   105G   650M  /
freenas-boot/ROOT/9.10.2-U1                            2.03G   105G   654M  /
freenas-boot/ROOT/default                              3.77M   105G   511M  legacy
freenas-boot/grub                                      86.2M   105G  6.34M  legacy
zp0                                                    9.15T  21.5T   128K  /mnt/zp0
zp0/.system                                            72.6M  21.5T  22.0M  legacy
zp0/.system/configs-5ece5c906a8f4df886779fae5cade8a5   7.08M  21.5T  7.08M  legacy
zp0/.system/cores                                      38.1M  21.5T  38.1M  legacy
zp0/.system/rrd-5ece5c906a8f4df886779fae5cade8a5        128K  21.5T   128K  legacy
zp0/.system/samba4                                      767K  21.5T   767K  legacy
zp0/.system/syslog-5ece5c906a8f4df886779fae5cade8a5    4.52M  21.5T  4.52M  legacy
zp0/data                                               9.14T  21.5T  9.14T  /mnt/zp0/data
zp0/jails                                              6.70G  21.5T   227K  /mnt/zp0/jails
zp0/jails/.warden-template-standard_10_3               3.12G  21.5T  3.11G  /mnt/zp0/jails/.warden-template-standard_10_3
zp0/jails/plex                                         3.58G  21.5T  6.29G  /mnt/zp0/jails/
 

gotenks (Cadet, joined Mar 1, 2017, messages: 9)
zpool status

Code:
  pool: freenas-boot
 state: ONLINE
status: One or more devices are configured to use a non-native block size.
        Expect reduced performance.
action: Replace affected devices with devices that support the configured
        block size, or migrate data to a properly configured pool.
  scan: scrub repaired 0 in 0h0m with 0 errors on Sun Feb 12 03:45:17 2017
config:

        NAME          STATE     READ WRITE CKSUM
        freenas-boot  ONLINE       0     0     0
          ada0p2      ONLINE       0     0     0  block size: 512B configured, 4096B native

errors: No known data errors

  pool: zp0
 state: ONLINE
  scan: scrub repaired 1.10M in 1h47m with 0 errors on Mon Feb 13 11:12:36 2017
config:

        NAME                                            STATE     READ WRITE CKSUM
        zp0                                             ONLINE       0     0     0
          raidz1-0                                      ONLINE       0     0     0
            gptid/532720b3-f0f8-11e6-a3de-00259057bb07  ONLINE       0     0     0
            gptid/53de2bff-f0f8-11e6-a3de-00259057bb07  ONLINE       0     0     0
            gptid/54abdb23-f0f8-11e6-a3de-00259057bb07  ONLINE       0     0     0
            gptid/555f4b16-f0f8-11e6-a3de-00259057bb07  ONLINE       0     0     0
          raidz1-1                                      ONLINE       0     0     0
            gptid/562c5581-f0f8-11e6-a3de-00259057bb07  ONLINE       0     0     0
            gptid/56e752f8-f0f8-11e6-a3de-00259057bb07  ONLINE       0     0     0
            gptid/57a71401-f0f8-11e6-a3de-00259057bb07  ONLINE       0     0     0
            gptid/586458f3-f0f8-11e6-a3de-00259057bb07  ONLINE       0     0     0
          raidz1-2                                      ONLINE       0     0     0
            gptid/59130982-f0f8-11e6-a3de-00259057bb07  ONLINE       0     0     0
            gptid/5a1ede44-f0f8-11e6-a3de-00259057bb07  ONLINE       0     0     0
            gptid/5b33d161-f0f8-11e6-a3de-00259057bb07  ONLINE       0     0     0
            gptid/5c4d2a31-f0f8-11e6-a3de-00259057bb07  ONLINE       0     0     0

errors: No known data errors

 

gpsguy (Active Member, joined Jan 22, 2012, messages: 4,472)
Well, you do have 3 distinct vdevs. You'll need to map the gptids to the hard disks to ensure that all four 6TB drives are in the same vdev.
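For reference, a minimal way to do that mapping from the FreeNAS shell; the gptid fragment, the ada1 device name, and the serial shown in the comments below are placeholders, not values taken from this pool:
Code:
# Show which partition/device each gptid label points at
glabel status | grep gptid
#   gptid/xxxxxxxx-....   N/A   ada1p2      <- this label lives on ada1
# Then read the capacity and serial number of that device
smartctl -i /dev/ada1 | grep -E 'Serial Number|User Capacity'
#   Serial Number:   WD-WXxxxxxxxxx
#   User Capacity:   6,001,175,126,016 bytes [6.00 TB]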
 

gotenks (Cadet, joined Mar 1, 2017, messages: 9)
It sounds like I'm going to have to back up my data, re-create the pool via the command line, and make sure the disks are labelled well.
 

gpsguy (Active Member, joined Jan 22, 2012, messages: 4,472)
Please don't use the command line to create the pool. Many users who do this don't do it the way FreeNAS intends, and they end up with problems down the road. You can do everything you need from the webGUI.

Did you check to see if the 4x6TB drives are in the same vdev?

@Bidule0hm has a script (look in his signature) that makes it easy to map gptids to serial numbers.
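Besides the script, a quick sanity check from the shell is to look at the per-vdev sizes: if the four 6TB drives ended up together, one raidz1 vdev should show roughly twice the raw size of the other two. The figures below are what that layout would look like, not actual output from this system:
Code:
zpool list -v zp0
# NAME          SIZE  ALLOC   FREE  ...
# zp0          43.5T  12.6T  30.9T
#   raidz1-0   21.8T    ...    ...          <- 4 x 6TB
#   raidz1-1   10.9T    ...    ...          <- 4 x 3TB
#   raidz1-2   10.9T    ...    ...          <- 4 x 3TB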

 

Mr_N (Patron, joined Aug 31, 2013, messages: 289)
I wouldn't make a pool of multiple vdevs and rely on RAIDZ1 either...

Buy two more 6TB drives and make a pool with two vdevs, 8x3TB and 6x6TB, using RAIDZ2 for each.

And on top of that, the Backblaze disk-failure stats for 2016 put the 6TB Reds at the second-highest failure rate :|
 

Robert Trevellyan (Pony Wrangler, joined May 16, 2014, messages: 3,778)
re-create the pool
Most likely the 3TB and 6TB drives are mixed in each vdev, as you suspected. You can rebuild the pool using the GUI and avoid this by building it one vdev at a time. After initially creating the pool from four equal-size drives, proceed to extend the pool four drives at a time. You will need to use the 'manual' interface because of the mismatched vdevs.
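Purely as an illustration of what "one vdev at a time" means (stick with the Volume Manager as gpsguy advised; the daN device names here are placeholders), the GUI steps correspond to a layout built like this:
Code:
# Step 1: create the pool from the four 6TB drives as a single RAIDZ1 vdev
zpool create zp0 raidz1 da0 da1 da2 da3
# Steps 2 and 3: extend the pool with the 3TB drives, four at a time
zpool add zp0 raidz1 da4 da5 da6 da7
zpool add zp0 raidz1 da8 da9 da10 da11
# Note: the FreeNAS Volume Manager also partitions each disk (swap plus a
# data partition referenced by gptid), which this sketch leaves out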

But as noted above, RAIDZ1 is not recommended for drives larger than 1TB. One way to look at it is to calculate the chance that a second drive failure destroys the entire pool. If you go with 3 x 4-drive RAIDZ1 and one drive has already failed, 3 of the remaining 11 drives sit in the same (now unprotected) vdev, so the chance that a second failure destroys the pool is 3/11, or about 27%. If you build with RAIDZ2 vdevs, in any combination, the chance of a second drive failure destroying the pool is precisely 0.
 

Robert Trevellyan (Pony Wrangler, joined May 16, 2014, messages: 3,778)
Come to think of it, your zfs list output is right about what you'd expect from 3 x 6TB + 6 x 3TB, converting TB to TiB and accepting some filesystem overhead: 30.6TiB = 33.6TB, i.e. only about 2.4TB short of 36TB.
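Working that conversion out explicitly, as a rough check using the same figures and without worrying about exactly where the overhead goes:
Code:
# Expected usable data space: one 4x6TB RAIDZ1 (3 x 6TB) plus
# two 4x3TB RAIDZ1 (6 x 3TB) = 36TB, or in TiB:
echo "scale=1; (3*6 + 6*3) * 1000^4 / 1024^4" | bc    # ~32.7 TiB
# zfs list reports 9.15T used + 21.5T available = ~30.6 TiB = ~33.6TB,
# so the gap to 36TB is overhead, not wasted space on the 6TB drives.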
 