Hello,
I've had a rather strange problem for some time with a FreeNAS-8.2.0-RELEASE-p1-x64 (r11950) setup.
I have about 124 GB of files in a storage pool called prod, which has one child dataset called prod/admin.
I take periodic snapshots of both datasets, which I keep for 45 days; the average snapshot delta is about 20~30 MB. Snapshots are set to be non-recursive.
Everything is stored on a RAIDZ array of 2 TB disks, so there should be plenty of space, but ZFS claims that 1.24 TB is already used, which shouldn't be correct, yet that is indeed what it reports.
Here is the additional info I can provide:
Code:
# zfs get all prod | grep used
prod  used                  1.24T  -
prod  usedbysnapshots       1.12T  -
prod  usedbydataset         124G   -
prod  usedbychildren        2.49G  -
prod  usedbyrefreservation  0      -
When I list all my snapshots and add up every single byte, I do not get anywhere near the "usedbysnapshots" value. Here is how I proceeded:
Code:
# zfs get -pH used | grep prod@ | cut -f3 > /tmp/snapusedbytes
# awk '{s+=$1} END {print s}' /tmp/snapusedbytes
5511597056
5511597056 / 1024^3 = 5.13 GB... a far cry from the 1.12 TB reportedly used by snapshots.
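For completeness, the conversion above can be double-checked with a one-liner (awk is used here just for the floating-point division):

```shell
# Convert the summed snapshot bytes to GiB (1 GiB = 1024^3 bytes).
awk 'BEGIN { printf "%.2f GiB\n", 5511597056 / (1024^3) }'
# prints "5.13 GiB"
```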
Even counting the snapshots of the pool's children (replacing grep prod@ with grep @ in the code above) gives me no more than 5748693504 bytes.
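For what it's worth, here is that broader summation as a single pipeline; the sample lines below are invented stand-ins for real `zfs get -pH used` output, and I use awk's $3 instead of cut -f3 so the sketch doesn't depend on tab-delimited columns:

```shell
# Invented sample of parsable `zfs get -pH used` output:
# dataset lines have no '@' in the name, snapshot lines do.
cat > /tmp/sample_used <<'EOF'
prod used 1365649612800 -
prod@auto-20120730.0600-45d used 21562368 -
prod/admin@auto-20120730.0600-45d used 1048576 -
EOF

# Keep every snapshot (any name containing '@') and sum the byte counts.
grep @ /tmp/sample_used | awk '{ s += $3 } END { print s }'
# prints 22610944 for this sample (21562368 + 1048576)
```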
Now the strange thing is that zfs get used says the pool really does take 1.24 TB... but not the snapshots.
Code:
# zfs get -pH used | head
prod	used	1365649612800	-
prod@auto-20120730.0600-45d	used	21562368	-
prod@auto-20120730.1000-45d	used	10911232	-
prod@auto-20120730.1400-45d	used	28395520	-
prod@auto-20120730.1800-45d	used	7174144	-
prod@auto-20120731.0844-45d	used	4149698560	-
prod@auto-20120731.1244-45d	used	9136128	-
prod@auto-20120731.1644-45d	used	3654144	-
prod@auto-20120801.0600-45d	used	19076608	-
prod@auto-20120801.1000-45d	used	9630208	-
When I try to check with standard BSD tools, I end up with normal disk usage:
Code:
# zfs get mountpoint prod
NAME  PROPERTY    VALUE      SOURCE
prod  mountpoint  /mnt/prod  default
# du -h -d1 /mnt/prod
1.5G	/mnt/prod/admin
1.5K	/mnt/prod/.freenas
 87G	/mnt/prod/IPN
 27G	/mnt/prod/SARL
 59M	/mnt/prod/SCI
217M	/mnt/prod/indus
9.4G	/mnt/prod/archives
125G	/mnt/prod
I do not really know where to look anymore... When I remove a snapshot that ZFS claims uses about 30 MB, the disk frees up 10-20 GB.
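One way to measure this without actually deleting anything might be a dry-run destroy, assuming the ZFS version shipped with FreeNAS 8.2 supports the -n flag (the snapshot name below is just one taken from the listing above):

```shell
# -n makes it a dry run (nothing is destroyed), -v prints what
# would happen, including the space that would be reclaimed.
# Requires a live pool; snapshot name is only an example.
zfs destroy -n -v prod@auto-20120731.0844-45d
```

If the "would reclaim" figure it prints is much larger than the snapshot's own "used" value, that would at least confirm the discrepancy before deletion rather than after.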
I would be really happy if someone could give me a clue... even a small one.
I'm willing to run some tests, and even upload the config DB (after removing passwords and sensitive info).
Thanks.
PS: I used zfs get -pH used instead of zfs list -t snapshot because the numbers come out in a parsable format, but the latter command lists the same sizes in K, M & G.