After dataset restore onto new pool, available space weirdly low


qqBazz

Dabbler
Joined
Nov 5, 2015
Messages
34
I'm trying to figure out why a dataset is showing a surprisingly small amount of available space when the underlying pool has plenty.

I had a FreeNAS RAIDZ1 with five drives in it that I wanted to erase and recreate using eight drives. I snapshotted the existing dataset and replicated it onto an external drive, which I treated very, very carefully. :) (For the record, I was generally following the instructions in this excellent post, including saving a pre-move system configuration to file.)

I then destroyed the zpool and created a brand-new RAIDZ2 out of 8x2TB drives. Gingerly plugging in the external drive, I started a zfs send | zfs recv from it in order to put the snapshot onto the new RAIDZ2. All seemed to go perfectly: I restored all the data, mounted and spot-checked it, and destroyed the snapshots so as not to be chewing up space in the background going forward.

The pool itself looks totally as I expected:

Code:
scarecrow% zpool list first-five
NAME         SIZE  ALLOC   FREE  EXPANDSZ  FRAG   CAP  DEDUP  HEALTH  ALTROOT
first-five  14.5T  5.19T  9.31T         -   20%   35%  1.00x  ONLINE  /mnt


... which I read as "14.5TiB in raw storage, of which data + parity are currently using 5.19TiB, leaving 9.31TiB available." All good.

The part that confuses me is the dataset I've put onto the new volume, which shows only 4.29TiB available. That's way, way less than 9.31TiB.


Code:
scarecrow% zfs list -o name,used,avail,refer,usedds,usedchild
NAME                                                    USED  AVAIL  REFER  USEDDS  USEDCHILD
first-five                                             2.52T  4.29T   128K    128K      2.52T
first-five/Backups                                      570G  4.29T   570G    570G          0
first-five/homedirs                                    18.9G  4.29T  18.9G   18.9G          0
first-five/jails                                       20.5G  4.29T  2.81G   2.81G      17.7G
first-five/jails/.warden-template-VirtualBox-4.3.12     741M  4.29T   741M    741M          0
first-five/jails/.warden-template-pluginjail--x64       796M  4.29T   791M    791M          0
first-five/jails/.warden-template-pluginjail-9.3-x64    508M  4.29T   508M    508M          0
first-five/jails/.warden-template-pluginjail-open-x86   209K  4.29T   128K    128K          0
first-five/jails/.warden-template-standard             2.10G  4.29T  2.10G   2.10G          0
first-five/jails/.warden-template-standard--x64        2.08G  4.29T  2.02G   2.02G          0
first-five/jails/.warden-template-standard-9.3-x64     2.06G  4.29T  2.06G   2.06G          0
first-five/jails/.warden-template-standard-open-x86     209K  4.29T   128K    128K          0
first-five/jails/plexmediaserver_1                     8.69G  4.29T  8.69G   8.69G          0
first-five/jails/raven                                  832M  4.29T  2.77G    832M          0
first-five/stuff                                       1.92T  4.29T  1.92T   1.92T          0
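
For completeness, the stock space-accounting view asks the same question with the standard columns (usedsnap, usedrefreserv, and so on). I'm only sketching the command here rather than pasting its output:

Code:
scarecrow% zfs list -r -o space first-five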


I wondered if snapshots could be grabbing some of that space, but it would appear not:


Code:
scarecrow% zfs list -t snapshot
NAME                                                          USED  AVAIL  REFER  MOUNTPOINT
first-five/Backups@auto-20170108.1028-1w                     13.6M      -   568G  -
first-five/Backups@auto-20170109.1432-1w                     12.5M      -   569G  -
first-five/Backups@auto-20170111.1259-1w                     19.6M      -   569G  -
first-five/jails/.warden-template-VirtualBox-4.3.12@clean      93K      -   741M  -
first-five/jails/.warden-template-pluginjail--x64@clean      4.50M      -   791M  -
first-five/jails/.warden-template-pluginjail-9.3-x64@clean    163K      -   508M  -
first-five/jails/.warden-template-pluginjail-open-x86@clean  81.4K      -   128K  -
first-five/jails/.warden-template-standard@clean              163K      -  2.10G  -
first-five/jails/.warden-template-standard--x64@clean        68.0M      -  2.02G  -
first-five/jails/.warden-template-standard-9.3-x64@clean      174K      -  2.06G  -
first-five/jails/.warden-template-standard-open-x86@clean    81.4K      -   128K  -
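
As a belt-and-braces check on the same point, the usedbysnapshots property can be queried directly; given the listing above, every value should come back at or near zero:

Code:
scarecrow% zfs get -r -o name,value usedbysnapshots first-five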


Having gone through a fair amount of work to set up this new 8x2TB array, I'm a little disheartened to see about half as much free space as I'd hoped. I don't believe I've got any quotas set. Am I misunderstanding something, or does that seem like a weirdly small amount of free space to be seeing in this context? Could restoring the original system config have brought the original zpool allocation size with it?
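
In case it helps anyone answering: here's how I plan to double-check the quota question rather than trusting my memory. These are stock zfs get invocations, nothing specific to my setup beyond the pool name:

Code:
# non-default values here would point at space being held back by a quota or reservation
scarecrow% zfs get -r quota,refquota,reservation,refreservation first-five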
 

qqBazz

Dabbler
Joined
Nov 5, 2015
Messages
34
The Storage tab shows the same numbers, rounded. (I'm also a little puzzled as to why a zpool with only about 2.5T of datasets on it is listing 5.2T used.)

[Attached screenshot: Storage tab (Screen Shot 2017-01-11 at 6.20.39 PM.png)]
 

Robert Trevellyan

Pony Wrangler
Joined
May 16, 2014
Messages
3,778
OK, you have a pool with approximately 7TiB total storage capacity, of which 2.5TiB are in use. The top row is the raw storage without accounting for redundancy. My guess is you don't actually have 8x2TB in RAIDZ2. Maybe you have striped mirrors, or something else.

What is the output of zpool status?
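
If it's handy, I believe zpool list -v will also show the vdev layout at a glance (assuming your FreeNAS build's zpool supports -v, which recent ones do); more than one group indented under the pool name means more than one vdev:

Code:
zpool list -v first-five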
 

qqBazz

Dabbler
Joined
Nov 5, 2015
Messages
34
Here's zpool status, which ... looks like a RAIDZ2, to my novice eye?

Code:
scarecrow% zpool status

  pool: first-five
state: ONLINE
  scan: scrub repaired 1.62M in 2h32m with 0 errors on Wed Jan 11 15:57:44 2017
config:

        NAME                                            STATE     READ WRITE CKSUM
        first-five                                      ONLINE       0     0     0
          raidz2-0                                      ONLINE       0     0     0
            gptid/6e073ec6-d7ae-11e6-8813-d05099192dd3  ONLINE       0     0     0
            gptid/6ec770bb-d7ae-11e6-8813-d05099192dd3  ONLINE       0     0     0
            gptid/6f8efffc-d7ae-11e6-8813-d05099192dd3  ONLINE       0     0     0
            gptid/704fb052-d7ae-11e6-8813-d05099192dd3  ONLINE       0     0     0
          raidz2-1                                      ONLINE       0     0     0
            gptid/710d1084-d7ae-11e6-8813-d05099192dd3  ONLINE       0     0     0
            gptid/71d2faf6-d7ae-11e6-8813-d05099192dd3  ONLINE       0     0     0
            gptid/72998e74-d7ae-11e6-8813-d05099192dd3  ONLINE       0     0     0
            gptid/7354cc84-d7ae-11e6-8813-d05099192dd3  ONLINE       0     0     0

errors: No known data errors

  pool: freenas-boot
state: ONLINE
  scan: scrub repaired 0 in 0h5m with 0 errors on Fri Dec 30 03:50:42 2016
config:

        NAME                                            STATE     READ WRITE CKSUM
        freenas-boot                                    ONLINE       0     0     0
          mirror-0                                      ONLINE       0     0     0
            gptid/3f1e82fa-c612-11e4-b263-d050995afbcc  ONLINE       0     0     0
            gptid/0a790ea0-c65b-11e4-a38b-d050995afbcc  ONLINE       0     0     0

errors: No known data errors
 

qqBazz

Dabbler
Joined
Nov 5, 2015
Messages
34
... crap, or is that indicating a mirrored pair of 4-disk RAIDZ2s? PEBCAK strikes again.

(Yes, that would seem to be what it's indicating. My bad for misinterpreting the arrangement the GUI Volume Manager presented me with when I started the process with 8 disks: "4x2x2.0TiB" signified a mirrored pair of 4-disk RAIDZ2 vdevs.)
 
Last edited:

MrToddsFriends

Documentation Browser
Joined
Jan 12, 2015
Messages
1,338
... crap, or is that indicating a mirrored pair of 4-disk RAIDZ2s? Sigh.

In fact it's a stripe over two raidz2 vdevs, with the capacity of four disks reserved for redundancy in total (two in each vdev).
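
Back-of-the-envelope numbers, assuming each 2TB disk is 2x10^12 bytes and ignoring ZFS metadata and slop overhead (so treat these as rough estimates, not promises):

Code:
# current layout: two 4-disk raidz2 vdevs -> 2 * (4-2) data disks * 2 TB = 8 TB
echo "scale=2; 2*(4-2)*2*10^12 / 2^40" | bc   # prints 7.27 (TiB)
# single 8-disk raidz2 -> (8-2) data disks * 2 TB = 12 TB
echo "scale=2; (8-2)*2*10^12 / 2^40" | bc     # prints 10.91 (TiB)

Which lines up with the roughly 7TiB that the GUI and zfs list are reporting for the current layout.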
 

qqBazz

Dabbler
Joined
Nov 5, 2015
Messages
34
I'm a cautious guy, I'll be honest, but reserving four disks out of eight is more cautious than I'd really intended. I'll just start over with a fresh 1x8x2TB RAIDZ2 zpool. Thanks, all.
 