snicke (Explorer, joined May 5, 2015, 74 messages)
My bet is that @jgreco will advise "go default". :P
Or criticize whatever choice you make. :D
Been playing with dtrace and samba this morning trying to make something work. Forgot to make coffee till a little bit ago. In other words, I'm wearing my grumpy pants. :D

Wow, what brought that on?
I've been wondering who's been pilfering my nasty pills.
So @jgreco, in a mixed environment with a lot of big files (photos, videos, movies) but also smaller files (documents, source code, etc.), plus a place to put VMs and jails, which recordsize would you choose?
Because we all have to make a choice. ;)
Well, if you have to choose, choose to put your VMs on a mirror of SSDs, and then this becomes a nonissue. And at this point SSD is almost cheap enough that I don't even feel bad saying it.
Whether any of these downsides are applicable to any given scenario is, of course, a different matter entirely.
And then go for recordsize=1M on your main RAIDZ2/RAIDZ3 pool for mixed files, where large photos, movies, and videos make up the largest part of the pool?
That's the important line; I made this thread for home systems, not enterprise systems with VMs, etc. Plus we are talking about RAID-Z pools only, which aren't really recommended for VMs or DBs.
Big files will naturally benefit from a large recordsize. Data such as VMs will suffer on a pool that is designed for large-file use, because the workloads are just so different. If it's only a VM or two and poor performance is acceptable, then you're probably just going to do the RAIDZ2/RAIDZ3 thing with the default recordsize, and then make a zvol with a 16K or maybe 32K block size. You can't "fix" this; all you can do is mitigate.
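The mitigation above can be sketched with the zfs CLI; the pool/dataset names (tank/media, tank/vm-disk0) and the 50G size are hypothetical, just for illustration:

```shell
# Dataset with a large recordsize for sequential media files
zfs create -o recordsize=1M tank/media

# Separate zvol for a VM disk with a small volume block size.
# Note that volblocksize must be chosen at creation time;
# it cannot be changed on an existing zvol.
zfs create -V 50G -o volblocksize=16K tank/vm-disk0
```

recordsize, by contrast, can be changed later, but it only affects files written after the change.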
Right, but this is the software/configuration equivalent of the "server grade hardware" problem. People come in here hoping to make do, because they are hoping that their 2009-era 8GB box with five drives in RAIDZ2 can store their files AND host a few VMs. I fully appreciate their despair at discovering that their hopes are unrealistic. Hardware is now cheap enough (and has been for a while) that running a few VMs is not impractical or impossible.
Yeah, but I can't do anything about that... if they didn't read the most basic pieces of info, they likely won't read my thread either, so in the end... :D
I'm approaching five years of having fought that battle here on the FreeNAS forums.
Even those of us who are still learning but trying to help take some of the pressure off you guys are fighting the battle of people who insist on doing it their way. I remember the other day where the Linus was strong with one and I likely made him very upset.

So maybe it was you who has been pilfering jgreco's nasty pills... :D
Here is a link, from one of the ZFS devs, explaining why the 2^n + p drive-count rule no longer applies:
http://blog.delphix.com/matt/2014/06/06/zfs-stripe-width/
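The core arithmetic behind that post can be sketched roughly as follows. This is a simplified model (the function name and the 4K-sector assumption are mine), ignoring compression and gang blocks: RAIDZ adds parity per stripe row and pads each allocation to a multiple of (parity + 1) sectors so leftover gaps stay usable.

```python
import math

def raidz_alloc_sectors(data_sectors, width, parity):
    """Approximate sectors RAIDZ allocates for one block:
    parity sectors are added per stripe row, and the total is
    rounded up to a multiple of (parity + 1)."""
    rows = math.ceil(data_sectors / (width - parity))
    total = data_sectors + rows * parity
    pad = (-total) % (parity + 1)
    return total + pad

# A 128K block on 4K sectors = 32 data sectors.
# On a 6-wide RAIDZ2 (a "2^n + p" width), 32 data sectors
# need 8 rows of 2 parity sectors each:
print(raidz_alloc_sectors(32, 6, 2))  # 48 sectors (32 data + 16 parity)

# On a 5-wide RAIDZ2 the same block costs more:
print(raidz_alloc_sectors(32, 5, 2))  # 54 sectors
```

The point of the post is that once compression is on, block sizes vary anyway, so chasing an "even" width buys little; and a large recordsize amortizes the parity-and-padding overhead regardless of width.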
Nice write up, thanks for the post.