Free storage 'disappeared' with RAID-Z1 volume

Status
Not open for further replies.
Joined
Jan 14, 2014
Messages
3
Hi all,

I'm running FreeNAS on a QNAP TS-809U-RP unit. It's got 8 x 4TB drives in addition to the flash that I'm booting off.

I configured 7 of the drives in RAID-Z1 and left the 8th as a spare. It gave me around 21TB of usable space.

I created a ZFS dataset in that volume with a 12TB reservation and copied 7.8TB of data into it.

Now, at some point (maybe after a restart?) the RAID-Z1 volume seems to have shrunk dramatically. I had around 7 to 8TB available after creating the dataset, but recently it's dropped to 1.2TB for no apparent reason.

Here's some output:

Code:
[root@ukcvtnas02] /mnt/volume1/backup_perforce/.zfs/snapshot# zpool status
  pool: volume1
 state: ONLINE
  scan: scrub repaired 0 in 0h0m with 0 errors on Fri Jan 10 00:38:36 2014
config:

    NAME                                            STATE     READ WRITE CKSUM
    volume1                                         ONLINE       0     0     0
      raidz1-0                                      ONLINE       0     0     0
        gptid/56fb804e-798c-11e3-ba8c-00089bcb2c04  ONLINE       0     0     0
        gptid/57dfcbf1-798c-11e3-ba8c-00089bcb2c04  ONLINE       0     0     0
        gptid/58beb597-798c-11e3-ba8c-00089bcb2c04  ONLINE       0     0     0
        gptid/59a155cc-798c-11e3-ba8c-00089bcb2c04  ONLINE       0     0     0
        gptid/5a7cf141-798c-11e3-ba8c-00089bcb2c04  ONLINE       0     0     0
        gptid/5b604875-798c-11e3-ba8c-00089bcb2c04  ONLINE       0     0     0
        gptid/5c483970-798c-11e3-ba8c-00089bcb2c04  ONLINE       0     0     0
    spares
      gptid/5d35e878-798c-11e3-ba8c-00089bcb2c04    AVAIL   

errors: No known data errors


Code:
[root@ukcvtnas02] /mnt/volume1/backup_perforce/.zfs/snapshot# df -h
Filesystem                 Size    Used   Avail Capacity  Mounted on
/dev/ufs/FreeNASs1a        926M    596M    255M    70%    /
devfs                      1.0k    1.0k      0B   100%    /dev
/dev/md0                   4.6M    3.7M    477k    89%    /etc
/dev/md1                   823k    2.0k    756k     0%    /mnt
/dev/md2                   149M     29M    107M    22%    /var
/dev/ufs/FreeNASs4          19M    1.4M     16M     7%    /data
volume1                    1.2T    256k    1.2T     0%    /mnt/volume1
volume1/backup_perforce     12T    7.8T    4.2T    65%    /mnt/volume1/backup_perforce
/dev/md3                   1.9G    463M    1.3G    26%    /var/tmp/.cache


Does anyone have any idea what might cause the total space to shrink like this? I'm completely lost...

Help much appreciated, thanks.
 
Joined
Jan 14, 2014
Messages
3
My fault: I'd set the same value for both the reservation and the quota on the dataset. That's what made the space disappear.

Removing the reservation and leaving the quota in place made the space reappear. I don't really understand the behaviour, though.
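
For anyone hitting the same thing, here's roughly what I'd done. The dataset name and values match my setup above, but the commands are a sketch rather than a transcript from my shell:

Code:
# Setting a reservation carves that much space out of the pool
# immediately, used or not, so the parent's free space drops:
zfs set reservation=12T volume1/backup_perforce
zfs set quota=12T volume1/backup_perforce

# volume1's AVAIL now excludes the full 12T reservation:
zfs list -o name,used,avail,reservation,quota -r volume1

# Clearing the reservation (keeping the quota) gives the space back:
zfs set reservation=none volume1/backup_perforce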
 

Dusan

Guru
Joined
Jan 29, 2013
Messages
1,165
Removing the reservation and leaving the quota in place made the space reappear. I don't really understand the behaviour, though.
Read the documentation on the quota, reservation and refreservation properties here: http://www.freebsd.org/cgi/man.cgi?query=zfs
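In short: a quota only limits how much the dataset may consume, while a reservation guarantees that amount by subtracting it from the pool's free space up front, which is why your free space "vanished". You can see what's set with (dataset name taken from your output):

Code:
zfs get quota,reservation,refreservation volume1/backup_perforce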
I'm running FreeNAS on a QNAP TS-809U-RP unit. It's got 8 x 4TB drives in addition to the flash that I'm booting off.
Do I understand this right? You are running a 21TB ZFS pool with 2GB of memory? Seven 4TB drives in RAIDZ1? I hope your data are not important, because this has fail written all over it:
  • 8GB is the minimum recommended memory for ZFS in FreeNAS (http://doc.freenas.org/index.php/Hardware_Recommendations#RAM)
  • To resilver a full pool after a drive fails and is replaced, the system will need to read 6 * 4TB = 2.4 * 10^13 bytes = 1.92 * 10^14 bits. The unrecoverable read error (URE) rate of current drives is 1 in 10^14 bits, which all but guarantees a URE during the resilver, leading to data loss (rough numbers below).
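To put a rough number on it (back-of-the-envelope, assuming independent bit errors at the rated 1 in 10^14):

Code:
bits read     = 6 drives * 4TB * 8 bits/byte  = 1.92 * 10^14 bits
expected UREs = 1.92 * 10^14 / 10^14          = 1.92
P(no URE)     = (1 - 10^-14)^(1.92 * 10^14)   ≈ e^-1.92 ≈ 0.15

That's only about a 15% chance of getting through the resilver clean.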
I configured 7 of the drives in RAID-Z1 and left the 8th as a spare. It gave me around 21TB of usable space.
Spares will not automatically replace a failed drive in FreeNAS...
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
And spares shouldn't be used unless you are already at RAIDZ3. At lower RAIDZ levels you should simply integrate the drive into the pool and bump up the RAIDZ level.
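
For example, instead of 7 drives in RAIDZ1 plus a spare, all 8 drives could go into a single RAIDZ2 vdev. Device names below are placeholders, and on FreeNAS you would normally do this through the GUI rather than at the shell:

Code:
# Hypothetical pool creation -- da0..da7 stand in for the 8 disks.
# RAIDZ2 survives any two drive failures, so a URE during resilver
# no longer means immediate data loss.
zpool create volume1 raidz2 da0 da1 da2 da3 da4 da5 da6 da7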
 
Joined
Jan 14, 2014
Messages
3
Thanks for all the advice gents. This really isn't important data and is just me testing a few configurations with old data.

Great advice on the error rate and drive configuration, though; it's not something that had occurred to me. Appreciate the insights.
 