After upgrade to 9.3, no available space

Status
Not open for further replies.

FlangeMonkey

Contributor
Joined
Dec 6, 2012
Messages
111
Hi Guys,

Really strange issue: I upgraded from 9.2.1.9 to 9.3 and the pool needed upgrading. However, both before and after the pool upgrade the available space was 0, although there was space prior to the upgrade.

I have also deleted some files and have seen used space drop; however, available space stays constant at 0.

Any ideas? I have not seen this type of issue before with ZFS.

Thanks,
 

FlangeMonkey

Contributor
Joined
Dec 6, 2012
Messages
111
I can delete files that aren't on a dataset with snapshots, and used space drops, but available space stays constant at 0.
Screenshot 2014-12-15 15.53.17.png
Screenshot 2014-12-15 15.56.36.png
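For reference, a sketch of how to check from the CLI where the space has gone (the pool name `tank` is a placeholder; substitute your own):

```shell
# Per-dataset space accounting; the USEDSNAP column shows how much
# space is pinned by snapshots rather than by live files.
zfs list -o space -r tank

# Per-snapshot usage, to find the snapshots holding the most space.
zfs list -t snapshot -r tank

# Pool-level view, including the capacity percentage.
zpool list -o name,size,allocated,free,capacity tank
```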
 

dlavigne

Guest
That's why. I forget at which exact percentage it stops showing available space, but I suspect it is somewhere around the 90% range.
 

FlangeMonkey

Contributor
Joined
Dec 6, 2012
Messages
111
That cannot be right, unless it's a new thing, because prior to the upgrade it was showing fine, even down to 15-20GB of free space.
 

dlavigne

Guest
Yup, how View Volumes displays space, and which columns are displayed, changed in 9.3.
 

FlangeMonkey

Contributor
Joined
Dec 6, 2012
Messages
111
That's a bit silly if that is the case. It's not just the columns view: 'zfs list' and 'df -h' show the same thing. Even when I try to do things in the OS, I get messages like "No space left on device".

Can you point me to where that is noted in the changelog, so I can see the % figure?
 

dlavigne

Guest
Sorry, I don't remember which git commit or bug # addressed this, but both are searchable if you're curious.

From a feature standpoint it is desirable, as letting the pool go over 90% is a bad thing. We don't want users to think they still have usable space and keep adding stuff in this scenario.
 

FlangeMonkey

Contributor
Joined
Dec 6, 2012
Messages
111
In my opinion, it's not recommended to go above 85% usage due to performance degradation, mainly because of copy-on-write and snapshots. I don't care about the performance drop because I currently hit the box at 1GB and the box isn't doing much more than file serving. I even tested it with 20GB free and was getting 300MB writes and reads.

This type of thing makes it feel like a nanny approach, and that irritates me, because I was happy to fill it to 99% with an understanding of the downsides... Not to mention I'll lose another 1.6TB on a 16TB pool, when the performance doesn't make much of a difference to me.

For me, that is one way to push me to another platform. It shouldn't be enforced in this way, or there should be an option to override it.
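As a back-of-envelope check of the 1.6TB figure above (pure arithmetic, not a ZFS query; the 10% reserve is the assumed threshold):

```shell
# Space made unusable by a 10% reserve on a 16TB pool.
awk 'BEGIN { pool_tb = 16; reserve_pct = 10; printf "%.1f TB reserved\n", pool_tb * reserve_pct / 100 }'
# 1.6 TB reserved
```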
 

FlangeMonkey

Contributor
Joined
Dec 6, 2012
Messages
111
Are you sure about the soft cap? There is no mention of it so far.

From the FreeNAS 9.3 User Guide:
"At 90% capacity, ZFS switches from performance- to space-based optimization, which has massive performance implications. For maximum write performance and to prevent problems with drive replacement, add more capacity before a pool reaches 80%. If you are using iSCSI, it is recommended to not let the pool go over 50% capacity to prevent fragmentation issues."
 