qqBazz (Dabbler)
Joined: Nov 5, 2015 · Messages: 34
I'm trying to figure out why a dataset is showing a surprisingly low amount of available space left to it, despite the underlying pool having lots.
I had a FreeNAS RAIDZ1 with five drives in it that I wanted to erase and recreate using eight drives. I snapshotted the existing dataset and sent it to an external drive, which I treated very, very carefully. :) (For the record, I was generally following the instructions in this excellent post, including saving a pre-move system configuration to file.)
I then destroyed the zpool and created a brand-new RAIDZ2 out of 8x2TB drives. Gingerly plugging in the external drive, I started a
zfs send | zfs recv
from it in order to put the snapshot onto the new RAIDZ2. All seemed to go perfectly: I restored all the data, mounted and spot-checked it, and destroyed the snapshots so as not to be chewing up space in the background going forward.

The pool itself looks totally as I expected:
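For reference, the migration was roughly this shape (the pool, dataset, and snapshot names below are hypothetical stand-ins, not my exact commands):

```shell
# Recursively snapshot the old pool's datasets (hypothetical names throughout)
zfs snapshot -r oldpool/data@migrate

# Send the full replication stream to a dataset on the external drive
zfs send -R oldpool/data@migrate | zfs recv -F external/backup

# ... destroy the old pool, build the new 8x2TB RAIDZ2, then restore:
zfs send -R external/backup@migrate | zfs recv -F first-five

# Finally, drop the migration snapshots so they stop holding space
zfs destroy -r first-five@migrate
```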
scarecrow% zpool list first-five
NAME SIZE ALLOC FREE EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
first-five 14.5T 5.19T 9.31T - 20% 35% 1.00x ONLINE /mnt
... which I read as "14.5TiB in raw storage, of which data + parity are currently using 5.19TiB, leaving 9.31TiB available." All good.
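As a back-of-the-envelope check of that reading (assuming the standard ZFS behavior that zpool list reports raw capacity with parity included):

```shell
# ALLOC + FREE should add up to SIZE from the zpool list output above (TiB)
awk 'BEGIN { printf "%.2f\n", 5.19 + 9.31 }'   # prints 14.50

# Rough usable capacity for an 8-disk RAIDZ2: raw size * (8 - 2) / 8
awk 'BEGIN { printf "%.3f\n", 14.5 * 6 / 8 }'  # prints 10.875
```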
The part that confuses me is when I'm looking at the dataset I've put onto my new volume, which shows me 4.29TiB available. That's way, way less than 9.31TiB.
scarecrow% zfs list -o name,used,avail,refer,usedds,usedchild
NAME USED AVAIL REFER USEDDS USEDCHILD
first-five 2.52T 4.29T 128K 128K 2.52T
first-five/Backups 570G 4.29T 570G 570G 0
first-five/homedirs 18.9G 4.29T 18.9G 18.9G 0
first-five/jails 20.5G 4.29T 2.81G 2.81G 17.7G
first-five/jails/.warden-template-VirtualBox-4.3.12 741M 4.29T 741M 741M 0
first-five/jails/.warden-template-pluginjail--x64 796M 4.29T 791M 791M 0
first-five/jails/.warden-template-pluginjail-9.3-x64 508M 4.29T 508M 508M 0
first-five/jails/.warden-template-pluginjail-open-x86 209K 4.29T 128K 128K 0
first-five/jails/.warden-template-standard 2.10G 4.29T 2.10G 2.10G 0
first-five/jails/.warden-template-standard--x64 2.08G 4.29T 2.02G 2.02G 0
first-five/jails/.warden-template-standard-9.3-x64 2.06G 4.29T 2.06G 2.06G 0
first-five/jails/.warden-template-standard-open-x86 209K 4.29T 128K 128K 0
first-five/jails/plexmediaserver_1 8.69G 4.29T 8.69G 8.69G 0
first-five/jails/raven 832M 4.29T 2.77G 832M 0
first-five/stuff 1.92T 4.29T 1.92T 1.92T 0
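Summing the root dataset's own numbers from the listing above makes the gap concrete (figures as reported, in TiB):

```shell
# USED + AVAIL for the root dataset, per the zfs list output above (TiB)
awk 'BEGIN { printf "%.2f\n", 2.52 + 4.29 }'   # prints 6.81, vs. 9.31 raw FREE at the pool level
```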
I wondered if snapshots could be grabbing some of that space, but it would appear not.
scarecrow% zfs list -t snapshot
NAME USED AVAIL REFER MOUNTPOINT
first-five/Backups@auto-20170108.1028-1w 13.6M - 568G -
first-five/Backups@auto-20170109.1432-1w 12.5M - 569G -
first-five/Backups@auto-20170111.1259-1w 19.6M - 569G -
first-five/jails/.warden-template-VirtualBox-4.3.12@clean 93K - 741M -
first-five/jails/.warden-template-pluginjail--x64@clean 4.50M - 791M -
first-five/jails/.warden-template-pluginjail-9.3-x64@clean 163K - 508M -
first-five/jails/.warden-template-pluginjail-open-x86@clean 81.4K - 128K -
first-five/jails/.warden-template-standard@clean 163K - 2.10G -
first-five/jails/.warden-template-standard--x64@clean 68.0M - 2.02G -
first-five/jails/.warden-template-standard-9.3-x64@clean 174K - 2.06G -
first-five/jails/.warden-template-standard-open-x86@clean 81.4K - 128K -
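Totalling the three Backups snapshots, which are by far the largest entries, shows snapshots are nowhere near the missing terabytes; everything else in the listing is in the KiB-to-MiB range:

```shell
# Sum of the three Backups snapshots' USED, in MiB
awk 'BEGIN { printf "%.1f\n", 13.6 + 12.5 + 19.6 }'   # prints 45.7
```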
Having gone through a fair amount of work to set up this new 8x2TB array, I'm a little disheartened to see about half as much free space as I'd hoped. I don't believe I've got any quotas set. Am I misunderstanding something, or does that seem like a weirdly small amount of free space to be seeing in this context? Could restoring the original system config have brought the original zpool allocation size with it?