Hello,
Summary
I'm trying to create a new volume, smaller than the "free" space shown by zpool list, but I get an error "pool out of space".
What is happening?
Explanation
A few weeks ago, I set up a FreeNAS server to store backups.
The server has a single zpool with a single raidz2 vdev of 12 3TB disks, so I have about 30TB usable.
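As a sanity check on that 30TB figure, here is the back-of-the-envelope calculation (it ignores metadata and padding overhead):

```shell
# raidz2 keeps 2 of the 12 disks' worth of space for parity,
# leaving 10 data disks of 3 TB each.
data_disks=$(( 12 - 2 ))
usable_tb=$(( data_disks * 3 ))
echo "~${usable_tb} TB usable"
```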
Code:
# zpool status -v
  pool: tank1
 state: ONLINE
  scan: scrub in progress since Sun Jul 7 13:12:09 2013
        1.27T scanned out of 19.2T at 212M/s, 24h32m to go
        0 repaired, 6.61% done
config:

        NAME                                            STATE     READ WRITE CKSUM
        tank1                                           ONLINE       0     0     0
          raidz2-0                                      ONLINE       0     0     0
            gptid/288e4cef-c94e-11e2-a3c8-002590c06ac8  ONLINE       0     0     0
            gptid/28f02ee5-c94e-11e2-a3c8-002590c06ac8  ONLINE       0     0     0
            gptid/2950aaa5-c94e-11e2-a3c8-002590c06ac8  ONLINE       0     0     0
            gptid/29b63d91-c94e-11e2-a3c8-002590c06ac8  ONLINE       0     0     0
            gptid/2a18235c-c94e-11e2-a3c8-002590c06ac8  ONLINE       0     0     0
            gptid/2a7b7f98-c94e-11e2-a3c8-002590c06ac8  ONLINE       0     0     0
            gptid/2ade6ea4-c94e-11e2-a3c8-002590c06ac8  ONLINE       0     0     0
            gptid/2b41c4dd-c94e-11e2-a3c8-002590c06ac8  ONLINE       0     0     0
            gptid/2ba51ce9-c94e-11e2-a3c8-002590c06ac8  ONLINE       0     0     0
            gptid/2c05c849-c94e-11e2-a3c8-002590c06ac8  ONLINE       0     0     0
            gptid/2c683e36-c94e-11e2-a3c8-002590c06ac8  ONLINE       0     0     0
            gptid/2cca9251-c94e-11e2-a3c8-002590c06ac8  ONLINE       0     0     0

errors: No known data errors
I created a few volumes and deleted some. Right now, I have the following volumes and datasets configured:
- one dataset with a quota of 10G
- three volumes of 5 terabytes
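Just to spell out the nominal reservation those three volumes add up to (a rough figure; it ignores raidz2 overhead and the 10G dataset quota):

```shell
# Three zvols of 5 TB each, created non-sparse, so each one
# reserves its full size up front.
volumes=3
volsize_tb=5
reserved_tb=$(( volumes * volsize_tb ))
echo "${reserved_tb} TB reserved in total"
```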
Volumes are exported with iSCSI and formatted with NTFS.
Today I want to create another volume, but both the GUI and the command line say the pool is "out of space".
zpool list says otherwise:
Code:
# zpool list
NAME    SIZE  ALLOC   FREE    CAP  DEDUP  HEALTH  ALTROOT
tank1  32.5T  19.2T  13.3T    58%  1.00x  ONLINE  /mnt
But one of my volumes, the most heavily used one, where most backups are written, shows strange used-space numbers. It's tank1/veeambackups below:
Code:
# zfs list
NAME                   USED  AVAIL  REFER  MOUNTPOINT
tank1                 21.9T  2.45T   356K  /mnt/tank1
tank1/be-backups      5.16T  6.13T  1.47T  -
tank1/ftp             6.81G  3.19G  6.81G  /mnt/tank1/ftp
tank1/veeam-archives  5.16T  6.09T  1.51T  -
tank1/veeambackups    11.6T  2.45T  11.6T  -
I don't have any snapshot at all:
Code:
# zfs list -t snapshot
no datasets available
Volume sizes are 5T:
Code:
# zfs get volsize
NAME                  PROPERTY  VALUE  SOURCE
tank1                 volsize   -      -
tank1/be-backups      volsize   5T     local
tank1/ftp             volsize   -      -
tank1/veeam-archives  volsize   5T     local
tank1/veeambackups    volsize   5T     local
I have another, two-year-old server running OpenIndiana that serves the same purpose, though it is much smaller (8 1TB drives), and it doesn't show this behaviour.
What is happening? Is this normal? How can I reclaim that space?
Thanks for reading.
F.