Don't go over 80%?


Steven Sedory

Explorer
My company is building a 220TB storage appliance for surveillance storage. I would prefer to go with FreeNAS/ZFS, but in my experience there are serious issues when going over 80% of maximum capacity. The reason I want to shy away from that is the price difference between 275TB and 220TB, 275TB being what's needed to end up with 220TB usable after slicing off 20%.

Is there a way to tune the OS to handle the data differently for this application? If not, I think we're going to go with a MegaRAID card instead of HBAs and just pass the volumes to a different OS.
 

cyberjock

Inactive Account
The 80% problem is not a programmatic one. The problem is that your free space becomes increasingly fragmented, which means ZFS has to do more work to fill the pool further.

95% is when ZFS actually changes its behavior. AFAIK that's hard coded, but if you are already unhappy at 80%, you won't make it to 95% before jumping off a bridge. :P
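
If you want to see where a pool sits relative to those thresholds, both capacity and free-space fragmentation are exposed as pool properties on any reasonably recent FreeNAS/OpenZFS. A minimal check from the shell, with 'tank' standing in for your own pool name:

Code:
# How full the pool is and how fragmented its free space is ('tank' is an example name)
zpool list tank
# Or just the two numbers relevant to this thread:
zpool get capacity,fragmentation tank

Note that FRAG describes the fragmentation of the remaining free space, not of files already written, so treat it as an early-warning number rather than a measure of how scattered the existing data is.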
 

anodos

Sambassador
iXsystems
My company is building a 220TB storage appliance for surveillance storage. I would prefer to go with FreeNAS/ZFS, but in my experience there are serious issues when going over 80% of maximum capacity. The reason I want to shy away from that is the price difference between 275TB and 220TB, 275TB being what's needed to end up with 220TB usable after slicing off 20%.
This is more of an issue with using a copy-on-write (CoW) filesystem. Microsoft's ReFS has the same 'problem'. The only real solution is to use a non-CoW filesystem. Of course, if you do that, you lose all of the advantages of CoW.
 

Kayman

Dabbler
The 80% problem is not a programmatic one. The problem is that your free space becomes increasingly fragmented, which means ZFS has to do more work to fill the pool further.

95% is when ZFS actually changes its behavior. AFAIK that's hard coded, but if you are already unhappy at 80%, you won't make it to 95% before jumping off a bridge. :p

So write speed starts to degrade when you go beyond 80% capacity; that's understandable, but what about read speed? What if your FreeNAS is used as an archive? 99% of the time things are written once and never deleted, so write speed isn't that critical, only read speed. The data is mostly Blu-ray MKV files, so sizes are anywhere from 5-15 GB per file. Once, say, 90% capacity is reached, will the read speed hold up so the NAS is still usable (able to stream the MKVs directly to one user, maybe two)? Will it still be safe, as in no extra risk of losing the pool, and what exactly happens at 95% capacity?
 

cyberjock

Inactive Account
Read speeds will follow the write speed, to some extent. Fragmented files will be read more slowly, which will affect pool performance. So if you want to keep your pool from fragmenting itself to hell, keep the pool below 80% full. Note that there are no ZFS defrag tools, so aside from doing short-term "overfilling" of a drive, you can expect that once you've trashed a pool and it's badly fragmented the only fix is to repave and restore from backup.

As for your Blu-ray example, you clearly understand the problem. There is no single answer to your question, though, as the factors that affect performance involve things like "pool history", which is very specific to your server, the data it stores, and how you've consumed space in the past.

There is no "risk" of overfilling the pool causing a loss of the pool, except when you hit 100%. At 95% ZFS will deliberately fragment files in an effort to optimize disk space usage (rather than performance), so going above 95% is a terrible idea for so many reasons.
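
One way to enforce that headroom mechanically, rather than relying on discipline, is to put a quota on the dataset that actually receives the recordings so it can never push the pool past roughly 80%. This isn't something FreeNAS sets up for you; the dataset name and the 220T figure below are just stand-ins for your own layout:

Code:
# Cap the recording dataset at ~80% of usable space (name and size are examples)
zfs set quota=220T tank/surveillance
zfs get quota,used,available tank/surveillance

Once the cap is reached, writes into that dataset fail with a quota error (EDQUOT), which is far easier to recover from than a pool driven to 100%.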
 

Steven Sedory

Explorer
Thank you for all of your input. With the application that I'll be using, camera surveillance, what would you say would be the safe amount of the total capacity to consume without performance issues?
 

Sokonomi

Contributor
I'm a little late to this party, but it might still be relevant.

Would it be a good idea to create a "dummy" dataset and have it reserve some (5 to 20%) of the pool space? I'm guessing this would prevent some accidents with programs that have no regard for remaining space.

I'd hate to have an automatic download or cloud service jump off the deep end and cement my pool shut. :')
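
For what it's worth, ZFS has a property aimed at exactly this: a reservation on an otherwise empty dataset holds space back from everything else in the pool. A minimal sketch, with 'tank', 'tank/spacer' and the 2T amount all as placeholders:

Code:
# Empty dataset whose only job is to hold space back (names and size are examples)
zfs create tank/spacer
zfs set refreservation=2T tank/spacer
# If the pool ever fills up anyway, release the headroom to recover:
zfs set refreservation=none tank/spacer

The refreservation counts against the pool's available space immediately, so runaway writers hit "out of space" while there is still real free space left to dig yourself out with.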
 

Tim1962

Patron
Cyberjock said "Note that there are no ZFS defrag tools, so aside from doing short-term 'overfilling' of a drive, you can expect that once you've trashed a pool and it's badly fragmented the only fix is to repave and restore from backup."

The basic topic of the OP is out of my league, but my humble home NAS (4×3 TB RAIDZ2) has been at 90% for a fortnight or so, with a new additional (unraided) drive going in today if the hardware plays along. That should bring it down to 60% or so, I think. Will the fragmentation issues that have presumably developed resolve themselves? Everything works for my lowish demands.
 

Alecmascot

Guru
Cyberjock said ....... but my humble home NAS (4×3 TB RAIDZ2) has been at 90% for a fortnight or so, with a new additional (unraided) drive going in today if the hardware plays along. That should bring it down to 60% or so, I think. Will the fragmentation issues that have presumably developed resolve themselves? Everything works for my lowish demands.

You cannot add a single drive to your pool and preserve any kind of redundancy.....
 

Tim1962

Patron
I know. The extra drive is there to offload some "backups of backups" and thereby free some space on the "real" pool. It will be a stand-alone disk.
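
In case it helps anyone following along, offloading a dataset to a separate single-disk pool can be done with snapshots and send/receive. A rough sketch, assuming the extra drive shows up as da8 and the data to move lives in tank/backups (both are placeholder names):

Code:
# Single-disk pool on the new drive -- no redundancy, so treat it as scratch space only
zpool create scratch da8
# Copy the dataset over, then free the space on the main pool once the copy is verified
zfs snapshot -r tank/backups@move
zfs send -R tank/backups@move | zfs recv scratch/backups
zfs destroy -r tank/backups

The GUI's replication or rsync tasks get to the same place; the point is simply that the space ends up freed on the main pool.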
 

Alecmascot

Guru
I know. The extra drive is there to offload some "backups of backups" and thereby free some space on the "real" pool. It will be a stand-alone disk.

But not in the pool......
 

Steven Sedory

Explorer
I'm a little late to this party, but it might still be relevant.

Would it be a good idea to create a "dummy" dataset and have it reserve some (5 to 20%) of the pool space? I'm guessing this would prevent some accidents with programs that have no regard for remaining space.

I'd hate to have an automatic download or cloud service jump off the deep end and cement my pool shut. :')

I think so. Can someone else confirm?
 

Tim1962

Patron
But not in the pool......

Nope, but the pool will be 1-2 TB lighter.

Therefore, if it was at 90% of, let's say, 10 TB (can't check the maths right now) = 9 TB of data,
minus 2 TB,
it will now be at 70% of 10 TB = 7 TB.
 