"Slowing down as it fills up" was what I meant :). If you prefer, is "slowing down as the occupancy rate rises" better?
If we want to be pedantic, it is actually slowing down as
fragmentation increases, in combination with the occupancy rate.
The part that can fool you is that on a fresh pool, you can really run all the way out to 95%+ full at full speed. And if you never wrote anything to it again, reads would also be at full speed, because there'd be no fragmentation. People benchmark on fresh pools, which is, to be blunt, kind of pointless, because it gives you rosy results that are not representative of long-term behavior.
It's the small-block rewrites that get you. I've talked about this endlessly as well; you'll often see me refer to a Delphix graph. That graph shows "steady state" behaviour, which is what you get after running a pool at a given percent full for a long time. The steady state is as bad as it gets (or at least as bad as it is likely to get).
This forumware won't let me insert the image anymore, so here's a link:
https://extranet.www.sol.net/files/freenas/fragmentation/delphix-small.png
So it isn't saying that at 50% or 90% full your pool will immediately suck. It's saying that over time, as lots of writes and frees happen, your performance will be much better at 25% capacity than at 90%. The horrifying thing is that there isn't a ton of difference between 50% and 90%. But also, look at 10%: you get AMAZING write speeds, even for random data, because ZFS is basically transforming most writes into sequential writes.
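To make the steady-state idea concrete, here's a toy simulation. This is emphatically not how ZFS actually allocates space (no spacemaps, no metaslabs, no first-fit/best-fit logic); it just models a pool as a free-space bitmap and churns it with random frees and allocations, then measures how long the contiguous free runs are. Even in this crude model, free space ends up far more scattered at 90% occupancy than at 25%:

```python
import random

def mean_free_run(occupancy, slots=20000, churn=100000, seed=42):
    """Fill a bitmap to `occupancy`, then churn it: free one random
    used slot and allocate one random free slot, over and over.
    Returns the average length of contiguous free runs afterwards."""
    rng = random.Random(seed)
    n_used = int(slots * occupancy)
    used = [True] * n_used + [False] * (slots - n_used)
    for _ in range(churn):
        # free a random used slot
        i = rng.randrange(slots)
        while not used[i]:
            i = rng.randrange(slots)
        used[i] = False
        # allocate a random free slot (models scattered steady-state writes)
        j = rng.randrange(slots)
        while used[j]:
            j = rng.randrange(slots)
        used[j] = True
    # measure contiguous free runs
    runs, run = [], 0
    for u in used:
        if not u:
            run += 1
        elif run:
            runs.append(run)
            run = 0
    if run:
        runs.append(run)
    return sum(runs) / len(runs)

for occ in (0.25, 0.50, 0.90):
    print(f"{int(occ * 100)}% full: mean free run = "
          f"{mean_free_run(occ):.1f} slots")
```

Shorter free runs mean the allocator has fewer chances to lay data down contiguously, which is exactly the "steady state" penalty the Delphix graph is showing.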
Thanks!
Post #9 in your link is exactly what I was looking for.
Follow-up question, if I may: what's so compelling about iSCSI that folks are happy to meet its requirements, rather than using NFS?
A few I can think of:
- We've standardized on it
- Multipathing is very nice
- My app is better supported / only supports iSCSI
- My app performs much better on iSCSI
- We've always done things this way
You forgot the common one:
For block storage such as VMs or databases, the problem is similar for both NFS and iSCSI, as are the requirements.
https://www.ixsystems.com/community...res-more-resources-for-the-same-result.28178/
The thing is, the idea that NFS and iSCSI work out to requiring similar resources is obvious to some people and not to others. My article there is really trying to say that large blobs of storage that involve random writes to portions in the middle present a very different challenge than traditional NAS-style "store these files" usage. ZFS gets to exploit context when you create a media library on your NAS and store large files: storing each file is done efficiently and as contiguously as free space allows, the files are static, and you won't be frequently updating random little blocks inside them. Minimal impact on fragmentation.
This should never, ever be confused with storing VMDK or database files. That is challenging regardless of NFS or iSCSI. Block rewrites increase fragmentation, and it is THIS that actually requires more resources. You need more ARC to hold more metadata, because free space is much more scattershot across the pool. You need more ARC and L2ARC to mitigate poor read speeds due to increased fragmentation. Etc.
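A back-of-the-envelope way to see why the metadata burden differs (illustrative arithmetic only, not an ARC sizing formula): block storage at a small volblocksize simply has far more blocks for ZFS to track than a media library at a large recordsize, for the same amount of data:

```python
TiB = 2**40

def block_count(total_bytes, block_size):
    """How many blocks this much data breaks into at a given block size."""
    return total_bytes // block_size

# 10 TiB of big static media files at the common 1M recordsize,
# versus 10 TiB of VM/database blocks on a zvol at 16K volblocksize
media = block_count(10 * TiB, 1024 * 1024)
zvol = block_count(10 * TiB, 16 * 1024)

print(f"media library: {media:,} blocks")
print(f"zvol:          {zvol:,} blocks ({zvol // media}x more)")
```

Every one of those blocks comes with a block pointer, lives under indirect blocks, and can be individually rewritten and freed, scattering free space for the spacemaps to describe. That's where the extra ARC goes.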