Ahrens: ZFS performance is fine above 80%

Patrick M. Hausen

Hall of Famer
Joined
Nov 25, 2013
Messages
7,776
Those were the days ... :smile:
[Image: Livingston PortMaster 2E 10-port communication server]
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
Not to rain on the nostalgia parade by coming back to the main subject of the thread or anything, but ...

As with so many things in ZFS, especially when performance is being discussed, the phrase "results may vary" applies strongly to how full you can make your pool. Contributing factors include:

- Hardware resources available (total pool size, amount of memory, speed of the vdevs)
- Granularity of data access, both in recordsize and in the actual client-facing I/O size
- Volume of I/O and the ratio of reads to writes
- Nature of the workload in terms of CRUD (Create, Read, Update, Delete)
- Your own barometer of "reasonable" or "acceptable" performance
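
If you want to see where a given pool actually sits on a couple of those axes, the stock OpenZFS tools already report fill level and fragmentation; a quick check might look something like this (the pool name "tank" is just a placeholder):

Code:
# Overall size, allocation, free space, fragmentation and fill percentage for the pool
zpool list -o name,size,allocated,free,fragmentation,capacity,health tank

# Or query just the two properties this thread keeps circling back to
zpool get capacity,fragmentation tank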

One user may be able to fill right up to 99.9% full because they're just archiving endless amounts of questionably-ethically-acquired media, and never deleting any of it. When they run out of space, they add more vdevs, JBODs, etc. They don't care about the performance because they're never accessing it at rates beyond whatever an H.265 Blu-Ray rip comes out at these days.

Another user might wire together a few dozen 2TB SAS drives and never allow it to fill beyond 25% capacity because they want to carve out LUNs for a devops team to screw around with, resulting in a bunch of random I/O and overwrites.
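
A rough sketch of that second kind of setup (names and sizes here are purely illustrative, and the refreservation trick is just one way of enforcing the headroom) would be thin-provisioned zvols for the LUNs plus an empty dataset that holds back the space you never want consumed:

Code:
# Thin-provisioned 500G zvol to export as a LUN (example name and size only)
zfs create -s -V 500G tank/devops01

# Empty dataset whose refreservation pins free space the rest of the pool can't use,
# so the pool can never fill past your chosen watermark
zfs create -o refreservation=6T tank/slop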

There's been a fair amount of what @jgreco aptly described as "fossilized knowledge" spread around about ZFS; some of it was accurate for its time, and some of it was out to lunch to begin with. I have no doubt that some of the things I've said in the past, am saying now, and will say in the future will turn out to be inaccurate, and I hope anyone coming upon such a statement will look at the context surrounding it as well. (Although I'm reserving the right to be an old fogey about SMR drives for an indeterminate period.)

So I'm glad to see that when a conversation comes up about one of the "sacred cows" of ZFS, we can actually have a debate and discussion over it and bring up relevant points, like the changes in technology (affordable NAND, the IOPS-per-TB problem HDDs face and that multi-actuator drives are trying to beat back), rather than just throwing ad hominems and le downvotes like another site that may start with R and end with Eddit.
 

HolyK

Ninja Turtle
Moderator
Joined
May 26, 2011
Messages
654
just archiving endless amounts of questionably-ethically-acquired media, and never deleting any of it. When they run out of space, they add more vdevs,
Since when did hoarding Linux ISOs become "questionably-ethically-acquired media"? :eek:
 

ornias

Wizard
Joined
Mar 6, 2020
Messages
1,458
Since when did hoarding Linux ISOs become "questionably-ethically-acquired media"? :eek:
Depends, maybe he is hoarding ICBM distros?

uBOOMtu

I can assure you, it made a significant impact...
 