
Proposed Alternative to SSD Caching. Bad idea?

Status
Not open for further replies.

Micheal

Neophyte
Joined
Nov 3, 2016
Messages
9
First off, I'm not even convinced that SSD caching is a best practice, so here's my (possibly stupid, but please humor me) question:

Say I have 12x 2TB hard drives in my volume (the RAID type doesn't really come into play for this question, but say I use RAIDZ2, since that's the option I usually use). With that said, would it be possible to steal, say, 5GB from each drive to form a 60GB RAID0 equivalent in FreeNAS, to be used as your SSD cache? In theory, and in my experience, that performs better than a single SSD drive, since a single SATA port is capped at about 450 MB/s.

Or would it be better to have loads of RAM? Does FreeNAS just handle caching with the free memory? Even with 72GB of RAM on my Dell PowerEdge, I've seen RAM utilization upwards of 60% without much really going on, though it eventually goes back down. I really can't complain about my first FreeNAS build: I'm getting close to the theoretical maximum over gigabit, about 115 MB/s of real-world throughput (out of the 125 MB/s that 1000 Mbps / 8 makes theoretically possible), with 4 computers transferring gigs of data at the same time (mostly tested with LAN Speed Test). I was honestly impressed. It was also surprising to see that network performance is better from a physical machine than from a virtual one on the same server. (The exception is the CrystalDiskMark drive benchmark, which shows over 1 GB/s on the pool, and you won't see anywhere near that from a desktop hard drive, SSD or otherwise. The VM's NIC, though, only managed about 65 MB/s; that may be a VM setting or VM overhead, and I was also connected remotely at the time.)

I'm really just asking from 1) a theoretical point of view, 2) a proposed optimization/performance standpoint, and 3) a best (or at least not-worst) practices standpoint.

Much appreciated and thanks.

Best,

Micheal
 
Joined
Jan 13, 2016
Messages
129
Robert is right, RAM is "the only way to fly", the more the better.

If you want more speed, go with 10 gigabit on FreeNAS and server/stations.

"would it be possible to steal, say, 5GB from each drive to form a 60GB RAID0 equivalent in FreeNAS, to be used as your SSD cache (which in theory and in my experience performs better than a single SSD drive, due to the limitations of the SATA port, say capped at about 450 MB/s)?" - NO

As for SSD caching: read the manual. It's likely you will never need it. RAM, you will.

VMs almost never hit 1 gigabit speeds if the physical NIC in the server is also 1 gigabit.

If you have a server with a 10 gigabit NIC and 6 VMs on it, each with a 1 gigabit virtual NIC, the 6 VMs together can push 6 gigabits of traffic over the server's 10 gigabit physical NIC.
 

Micheal

Neophyte
Joined
Nov 3, 2016
Messages
9
I don't really feel like my question is getting answered well here, and perhaps that's my fault for a misleading post; I really don't know, but I appreciate all the responses, so thank you. I might just have to try it out and compare the benefits of both, as I'd expect a RAID0 volume (in this case) to perform at over 2 GB/s. I do not mean gigabit, nor the NIC; I mean hard drive read/write speed, which is faster than basically any consumer SSD, for no extra cost. I was just thinking it was worth a consideration, not just a brush-off by the community. And I already have a lot of RAM, and always plan on keeping a lot of RAM with FreeNAS (around 72GB to 128GB), since used PowerEdge servers are cheap (you'd pay more for the hard drives alone than for ALL the other hardware).

I was wanting performance for scaling, not for VMs. Several of the machines that would be connected are physical (desktops, netbooks/laptops, tablets, phones, and two Android KIII TV boxes at gigabit). Many tablets are limited to 10/100 NICs, so their theoretical connection is capped at 12.5 MB/s, but all the desktops are gigabit, so they're capped at a 125 MB/s limit. Also, wireless would obviously be much slower and less reliable.
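For quick reference, the theoretical caps above come straight from dividing link speed by 8 bits per byte. A minimal shell sketch (the function name is just for illustration):

```shell
# Theoretical max throughput in MB/s for a link speed given in Mbps.
mbps_to_mbs() {
    awk "BEGIN { print $1 / 8 }"
}

mbps_to_mbs 100    # 10/100 NIC  -> 12.5
mbps_to_mbs 1000   # gigabit NIC -> 125
```

Real-world overhead (protocol headers, TCP, SMB/NFS) is why ~115 MB/s observed over gigabit is already close to the best you can do.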

Also, I think you misunderstood: I reached over 1 GB/s in hard drive read/write thanks to the RAIDZ2 with 12 disks. I'm not expecting, nor do I want, more than 1 Gbps through the network per machine. I'm shooting for fewer bottlenecks as it is, and this is a budget build, so I don't need 10 Gbps NICs and/or a new switch. Lastly, I put all the network ports on the server (4 of them) in an LACP LAGG, so in theory it can utilize 4 Gbps of aggregate bandwidth to a managed Netgear switch.

The performance is currently great; I'm not complaining. I'm worried about scalability and reliability over time, with many devices concurrently connected. Plus I'm a geek at heart, always will be, and am just interested in the benchmarks, potential benefits, and insight in general, which I was hoping someone else already had, this being the FreeNAS community. I don't mean to throw any punches, but I was hoping for something more.

And what is this manual you speak of? I can find the FreeNAS documentation, but no such manual, and caching isn't really mentioned.

I did, however, find this in the user manual: "If an SSD is dedicated as a cache device, it is known as an L2ARC and ZFS uses it to store more reads which can increase random read performance. However, adding an L2ARC is not a substitute for insufficient RAM as L2ARC needs RAM in order to function." I understand it's not a substitute; that wasn't really my question. But reading through the ZFS Primer documentation, without direct answers or real-world benchmarks, the answer I get from FreeNAS is "Maybe". And "Maybe" to me means I'll have to test it out on my workbench (when I have it up, lol).

See https://doc.freenas.org/9.3/zfsprimer.html
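For what it's worth, attaching a dedicated cache device to an existing pool is a one-liner at the CLI. This is only a sketch: the pool name tank and the device name ada3 are placeholders, not anything from this thread.

```shell
# Attach an SSD as an L2ARC cache device to an existing pool.
# Pool "tank" and device "ada3" are hypothetical; substitute your own.
zpool add tank cache ada3

# The device should now appear under a "cache" heading.
zpool status tank
```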

Thank you,

Micheal
 

Micheal

Neophyte
Joined
Nov 3, 2016
Messages
9
From what I've read now (more on ZFS than FreeNAS in particular), an SSD or RAID0 cache probably isn't needed for the ARC/L2ARC unless you don't have enough RAM, but it could be useful for the ZIL (ZFS intent log). The zilstat utility by Richard Elling can be used to measure the stress level/load on the ZIL, which should tell me whether this option would be beneficial before spending the time on physically testing it (though I think it would be fun anyway, as something new to learn). Additionally, Ben Rockwood wrote a utility called arc_summary that reports on the Adaptive Replacement Cache (ARC), to determine whether an SSD or RAID0 volume would be beneficial there.
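As a rough first check before reaching for zilstat or arc_summary, the ARC hit ratio can be computed from two counters; on FreeBSD/FreeNAS these are exposed via sysctl under kstat.zfs.misc.arcstats (a sketch; the helper function name is my own):

```shell
# Percentage of ARC lookups served from RAM, given hit/miss counters.
arc_hit_ratio() {
    awk "BEGIN { printf \"%.1f\", 100 * $1 / ($1 + $2) }"
}

# On a live FreeBSD/FreeNAS box the counters would come from:
#   hits=$(sysctl -n kstat.zfs.misc.arcstats.hits)
#   misses=$(sysctl -n kstat.zfs.misc.arcstats.misses)
arc_hit_ratio 900 100   # -> 90.0
```

A consistently high hit ratio suggests the ARC already fits the working set and an L2ARC would add little.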

I guess, until I physically test everything, I've answered my own question. Apparently I was just looking in the wrong place (asking the FreeNAS community/forums/documentation instead of going directly to the ZFS source). Hopefully this can help someone else.

Thanks!

http://constantin.glez.de/blog/2011/02/frequently-asked-questions-about-flash-memory-ssds-and-zfs
https://doc.freenas.org/9.3/zfsprimer.html
 
Joined
Jan 18, 2017
Messages
3
What specifically did you feel wasn't answered? Or maybe better asked, what problem are you trying to solve?

Your original post asked about taking 5GB away from each of 12 SATA disks to "be used as your SSD Cache". This is simply not possible. When you add a cache device to the zpool, you supply a device pointer, so anything else, regardless of perceived/potential performance, is not going to work.

RAM is superior to SSD in terms of both bandwidth and latency. RAM operates at tens of GB/s, where SSDs and other storage are still in the hundreds of MB/s. Additionally, RAM latency is measured in nanoseconds, where SSD and other storage latency is measured in microseconds.

If you apply that to the L2ARC caching concept, you can see how, in many cases, utilizing an L2ARC could actually cause more harm than good.

Until the cost of RAM becomes an issue, or your hardware hits its limit on how much RAM it can actually use, RAM is going to be the best way to increase general performance.

To be more specific, if you've hit a point where your board doesn't support any more RAM, you have to think through the cost of the board that supports more RAM, the cost of the new RAM, the potential new CPU you would need for the new board, and so on. At that point, you could be looking at a $1,000+ investment, and depending on your ultimate need and use of the NAS, it might be worth considering using the L2ARC. It's a cheap test, but you may still find that it's not enough to relieve whatever problem you're having.

From your second post it seemed more that the problem trying to be solved would be accommodating the increased number of other machines and devices connecting to the NAS. In a situation like that, you'd likely benefit more from something like link aggregation, as that would help you effectively handle more concurrent connections.

While not perfect, I would measure your ARC hit/miss ratio; until you reach a point where you have a poor hit rate, 72GB of RAM should suffice.
 

Micheal

Neophyte
Joined
Nov 3, 2016
Messages
9
Thank you, that was a very good answer: what I was expecting, or "hoping" for, the first time around. A one-word response didn't answer anything, and neither did a response to a response rather than to my question. I still don't understand why a RAID0 volume (separate from the RAIDZ2 volume) couldn't act as an SSD-style cache volume, but maybe I'll take a look. Thanks again.
 
Joined
Jan 18, 2017
Messages
3
Thank you, that was a very good answer: what I was expecting, or "hoping" for, the first time around. A one-word response didn't answer anything, and neither did a response to a response rather than to my question. I still don't understand why a RAID0 volume (separate from the RAIDZ2 volume) couldn't act as an SSD-style cache volume, but maybe I'll take a look. Thanks again.
Using a separate set of dedicated hardware isn't an issue, but "picking away" from other existing drives, as you originally asked, is.

Again though, nothing is going to come close to the performance of RAM. And if you're putting an L2ARC in place when there doesn't need to be one, you're going to negatively impact your system.
 

Robert Trevellyan

Pony Wrangler
Joined
May 16, 2014
Messages
3,778
Theoretically, you could make a RAID0 volume and tell ZFS to use it for L2ARC, by partitioning the hard drives and building your vdevs out of partitions. The FreeNAS GUI doesn't support this approach, so you'd be using the CLI. The physical drives would be doing double duty, as part of the L2ARC and as persistent storage, so it seems very unlikely that there would be a performance benefit.
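A sketch of what that CLI approach might look like on FreeBSD, purely for illustration: the device name, the gpt label, and the pool name tank are all hypothetical, the FreeNAS GUI won't track any of it, and it assumes each disk still has 5GB of unpartitioned space.

```shell
# Carve a 5 GB partition off one member disk and hand it to ZFS as cache.
# Repeat per disk; "ada0", the label, and the pool name are placeholders.
gpart add -t freebsd-zfs -s 5G -l cache0 ada0
zpool add tank cache gpt/cache0
```

Note this still leaves the same spindles doing double duty, which is exactly why the performance benefit is doubtful.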
I'll have to test it out on my workbench
Yes, only testing with a realistic workload can tell you for sure which configuration will deliver the best performance.
 

tvsjr

Neophyte Sage
Joined
Aug 29, 2015
Messages
943
(which in theory and in my experience performs better than a single SSD drive, due to the limitations of the SATA port, say capped at about 450 MB/s)?
You're only looking at performance from the perspective of bandwidth. For both L2ARC and SLOG, latency and IOPS are FAR more critical than bandwidth. The purpose of cache (of any type) is to return a fairly small amount of information REALLY fast. Returning a torrent of data, but taking 20ms to do it, is worthless... the system may as well go to the drive pool.
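To see why, effective throughput for small random I/O is IOPS times I/O size, not the link's bandwidth. A quick sketch with made-up numbers:

```shell
# MB/s delivered when a device can only sustain a given IOPS count
# at a given I/O size in bytes. The numbers below are illustrative only.
iops_throughput_mbs() {
    awk "BEGIN { printf \"%.1f\", $1 * $2 / 1048576 }"
}

# A spinning disk doing ~100 random 4 KiB reads/sec moves well under
# 1 MB/s, no matter how fast the SATA link is.
iops_throughput_mbs 100 4096   # -> 0.4
```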
 

Donny Davis

Member
Joined
Jul 31, 2015
Messages
139
Before this post goes too sideways, can you specify if you are looking for write or read caching?

Say, similar to the way a HW RAID card would?
 

Micheal

Neophyte
Joined
Nov 3, 2016
Messages
9
From what I've read, the only thing that might benefit is ZIL (ZFS intent log) caching, so I'll probably just stick to RAM, but it would be easy enough to monitor and test at some point. It's mostly just a fun curiosity now, and I appreciate all the input; it at least pushed me in the right direction and more or less answered my question. It also made me think of the possibility of running an OS or hard disk off an iSCSI LUN over 10GbE (for a physical computer), which in theory could approach 1,250 MB/s reads/writes (the 10GbE line rate), which is really just an interesting thought. I'll consider trying the two as little side tests on my workbench. I'd be curious to try it with some games and see the performance.

The discussion is much appreciated. Thanks.
 

wblock

Documentation Engineer
Joined
Nov 14, 2014
Messages
1,505
an iSCSI LUN over 10GbE (for a physical computer), which in theory could approach 1,250 MB/s reads/writes
Your RAM is likely already quite a bit faster than that. It depends on the system, but for an E3-1230v3, maximum memory bandwidth is 25.6 GB/s.
 

brando56894

Dedicated Sage
Joined
Feb 15, 2014
Messages
1,506
More RAM is always better; max that out first before you even think about adding an L2ARC. I have 64 GB in my server and added an L2ARC just for the hell of it... and it saw about 2% utilization. If you don't think an AHCI SATA SSD is fast enough, get yourself a PCIe or M.2 NVMe drive; I have a 256 GB Samsung 960 Evo and it writes at about 850 MB/sec and reads at 2 GB/sec.
 