Is it possible for multiple zpools to use the same cache disk?

Patrick M. Hausen

Hall of Famer
Joined
Nov 25, 2013
Messages
7,776

So the YouTuber opted out of swap and went all in for L2ARC. Have you watched it?
YouTubers do stupid things sometimes. Stating he was using an SSD cache instead of swap is just nonsense. Generally you do not want to create the swap partition on your boot medium - so far he is correct, but for the wrong reasons. Putting swap on the boot drive will put additional write load and wear on what might be a single SSD. So yes, skip that.

Later, when you create your HDD pool, TrueNAS will automatically create swap on all of these drives. That's what I was referring to.
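For what it's worth, you can check that later from the shell. On TrueNAS CORE (FreeBSD) this lists the active swap devices:
Code:
# list the active swap devices and their sizes
swapinfo -h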

Still one question: how much memory are you going to put in this server?
 

vn_mnm

Explorer
Joined
Nov 23, 2020
Messages
66
I thought you were going to have a VM pool on SSDs?
My plan is to store the VMs on the 2.5" SATA SSD (Samsung 850 Evo, 256 GB). But since the NVMe SSD is much faster than any of the SATA drives, I intend to divide the NVMe drive into smaller partitions, each used as a cache for one of the zpools (e.g. NAS, VM, etc.).
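For reference, the shell equivalent of that plan on TrueNAS CORE would look roughly like this (device names, sizes, and pool names are made up, and the GUI is the supported way to do it):
Code:
# partition the NVMe drive (nvd0 is an example device name)
gpart create -s gpt nvd0
gpart add -t freebsd-zfs -s 100G -l cache-nas nvd0
gpart add -t freebsd-zfs -s 100G -l cache-vm nvd0

# attach one partition as L2ARC cache to each pool
zpool add NAS cache gpt/cache-nas
zpool add VM cache gpt/cache-vm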
 

vn_mnm

Explorer
Joined
Nov 23, 2020
Messages
66
Still one question: how much memory are you going to put in this server?
Currently I have 4x 8 GB DDR4 ECC UDIMM 2400 sticks. I plan to upgrade to 4x 16 GB (or 32 GB) DDR4 ECC UDIMM 2666 as soon as I can find second-hand sticks pulled from corporate servers.

So, L2ARC is for the very near future.
 

Patrick M. Hausen

Hall of Famer
Joined
Nov 25, 2013
Messages
7,776
I'd set up the server without L2ARC and watch the ARC statistics in operation. If you have a cache hit rate near 100%, as is frequently the case for a home server with 32 or 64 G of memory, an L2ARC does not buy you anything. In the worst case it will slow down your system, because you have less memory available for the ARC.

So instead of guesswork, no matter how well founded (by all of us, not attacking you!), bring the system into operation and measure. It all depends so much on your particular working set and load ...
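Measuring is straightforward once the box is up. arc_summary is the tool that produced the statistics I quote further down in this thread:
Code:
# print ARC size and hit/miss statistics
arc_summary

# watch the ARC counters live (ships with OpenZFS), updated every 5 seconds
arcstat 5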
 

vn_mnm

Explorer
Joined
Nov 23, 2020
Messages
66
Putting swap on the boot drive will put additional write load and wear on what might be a single SSD. So yes, skip that.
So my system will still function normally without swap on the boot drive, won't it? Will there be any significant difference in speed between running with swap and swap-less?
 

Patrick M. Hausen

Hall of Famer
Joined
Nov 25, 2013
Messages
7,776
So my system will still function normally without swap on the boot drive, won't it? Will there be any significant difference in speed between running with swap and swap-less?
Yes and no. Yes, it will boot. No, no noticeable difference at all. Swap is not a speed-up mechanism, rather the opposite. It gives the operating system some maneuvering room when running out of physical memory, at the expense of speed. RAM is always faster than even the fastest NVMe SSD. When running low on memory the OS will clean up cache pages and other things to make room for new data, but it needs some wiggle room to do that. That's when the swap space might come into play.
Without swap, everything might work just as well, or heavy memory pressure might cause the system to fail hard.

Again, it all depends on many details ...
 

vn_mnm

Explorer
Joined
Nov 23, 2020
Messages
66
bring the system into operation and measure.
Thanks for your patience, buddy. Some questions before I assemble this server:
  1. If I replace the OS drive with an M.2 SATA one, will it bottleneck the performance of the NVMe SSDs in the VM's zpool?
  2. Assuming my system already has 128 GB of RAM in it, should I use an SLOG for the CCTV zpool and an L2ARC for the NAS zpool, or vice versa?
  3. As the cache disks for the NAS & CCTV pools will be 128~256 GB SATA SSDs alongside the HDDs included in these pools, will they boost the respective pool's read (write) speed significantly?
  4. Would the Transcend SSD452P, with its PLP functionality, be a good alternative to Intel Optane ( https://us.transcend-info.com/embedded/product/embedded-ssd-solutions/ssd452p-ssd452p-i )?
Transcend's PLP technology: https://us.transcend-info.com/embedded/technology/power-loss-protection-plp


Thank you in advance again. Much appreciated.
 

Patrick M. Hausen

Hall of Famer
Joined
Nov 25, 2013
Messages
7,776
If I replace the OS drive with an M.2 SATA one, will it bottleneck the performance of the NVMe SSDs in the VM's zpool?
No.

Assuming my system already has 128 GB of RAM in it, should I use an SLOG for the CCTV zpool and an L2ARC for the NAS zpool, or vice versa?
How will client systems be accessing these two pools? If they are using SMB, an SLOG will buy you absolutely nothing, as has been repeatedly brought up. An SLOG only helps in the case of synchronous writes, e.g. when serving VM storage via iSCSI or NFS.
Again: if you are only using SMB, there will practically never be any write to your SLOG device. An SLOG is not a write cache.
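A quick way to see which of your datasets would even generate sync writes (pool name made up):
Code:
# show the sync policy for every dataset on the pool
# sync=standard: only writes the client explicitly requests as
# synchronous hit the ZIL/SLOG - SMB clients practically never do
zfs get -r sync NAS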

As the cache disks for the NAS & CCTV pools will be 250/256 GB SATA SSDs alongside the HDDs included in these pools, will they boost the respective pool's read (write) speed significantly?
With 128 G of memory there is a high probability of a good cache hit rate. Build the machine without L2ARC and measure the hit rate, then decide if an L2ARC might be useful.

One of your pools is for CCTV, right? That's mostly "write once and forget"? In that case an L2ARC will buy you nothing.

Why are you so obsessed with "cache SSDs"? SLOG is not a cache, and L2ARC only improves things if the ARC in RAM is full and the working set is so large that you have frequent cache misses. ZFS will utilise all RAM for caching anyway.

Would the Transcend SSD452P, with its PLP functionality, be a good alternative to Intel Optane ( https://us.transcend-info.com/embedded/product/embedded-ssd-solutions/ssd452p-ssd452p-i )?
No idea, sorry.

Please read the ZFS primer:

This is my NAS at home with a mix of 3 VMs, 7 jails, some file sharing:
Code:
ARC total accesses (hits + misses):                                 9.7G
        Cache hit ratio:                               99.8 %       9.7G
        Cache miss ratio:                               0.2 %      17.9M
        Actual hit ratio (MFU + MRU hits):             99.8 %       9.7G
        Data demand efficiency:                        94.0 %     148.8M
        Data prefetch efficiency:                       5.7 %       4.9M

A 99.8% cache hit rate means an L2ARC would not be doing anything to improve perceived performance. That's what I meant by "build, test, measure" first.

HTH,
Patrick
 


vn_mnm

Explorer
Joined
Nov 23, 2020
Messages
66
So no SLOG.
I see. I am asking about the possibility of using an L2ARC SSD for the NAS zpool and another one as SLOG for the CCTV zpool. From your responses to my question, I gather that this combination is a viable possibility, am I correct?
 

Patrick M. Hausen

Hall of Famer
Joined
Nov 25, 2013
Messages
7,776
How will client systems (cameras?) access the CCTV pool?
 

vn_mnm

Explorer
Joined
Nov 23, 2020
Messages
66
How will client systems (cameras?) access the CCTV pool?
I believe I will be setting up a VM (or Docker, depending on whether it is in the repository or not) running either Shinobi or ZoneMinder. From there, clients will access it locally using the mobile app, and remotely via VPN.
 

Patrick M. Hausen

Hall of Famer
Joined
Nov 25, 2013
Messages
7,776
Because SSD is much faster than HDD, regardless of the interface it is implemented on.
Correct. But RAM is still faster than any SSD. And ZFS very aggressively caches everything using all available RAM. So if introducing an SSD cache leads to data ending up on the SSD instead of in the RAM cache, you will actually slow down your system.

L2ARC needs RAM for the management of said L2ARC. That amount of RAM is not available for caching anymore. So possibly data that without L2ARC would have been cached in RAM is now cached on the SSD ...

Only when the "hot" data exceeds the size of the ARC can an L2ARC be useful.

I believe I will be setting up a VM (or Docker, depending on whether it is in the repository or not) running either Shinobi or ZoneMinder. From there, clients will access it locally using the mobile app, and remotely via VPN.
Will all the storage for your CCTV reside inside the VM's virtual disks or will there be a network drive/sharing involved? If the former, will you set the VM pool to sync=always?
 

vn_mnm

Explorer
Joined
Nov 23, 2020
Messages
66
Will all the storage for your CCTV reside inside the VM's virtual disks or will there be a network drive/sharing involved?
I will dedicate the entire CCTV zpool just to the CCTV VM (Docker).
I am asking about the possibility of using an L2ARC SSD for the NAS zpool and another one as SLOG for the CCTV zpool.
What do you think, buddy? Is this a viable possibility in case more memory is needed?
 

Patrick M. Hausen

Hall of Famer
Joined
Nov 25, 2013
Messages
7,776
Answering your last two posts in one:

1. Repeating myself: an L2ARC will only improve things if your ARC is full and you get cache misses. And I cannot tell you beforehand if that will be the case. You can only put the machine into production and measure.

2. Unless you set sync=always for your VM pool an SLOG will not improve anything. Not. at. all.

Now what does sync=always do?

In ZFS, all writes go to RAM in the regular case. That's the fastest way possible. And then the RAM is flushed to stable storage every couple of seconds in what is called a transaction group. If the power fails or the system crashes before the data is written, it is lost.
The regular case is called an asynchronous write and applies to nearly all sharing (SMB and the like).

OTOH, if you serve block storage to a hypervisor host for VMs, the VMs themselves already do all sorts of caching (they run a complete OS), so when the guest OS flushes data to its virtual "disk" it is good practice to turn that into a synchronous write on the storage system. That means the write will only be acknowledged when the data has been written to stable storage, not while it is only in RAM. Synchronous writes are the default for iSCSI and NFS.

VMs running on TrueNAS use asynchronous writes by default. That is fast, but not as secure as the block storage protocols above. Setting sync=always for the pool enforces synchronous writes.
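Setting it is a one-liner - the dataset name here is just an example:
Code:
# force all writes to this dataset to be synchronous
zfs set sync=always CCTV/vms

# revert to the default behaviour
zfs set sync=standard CCTV/vms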

Now synchronous writes are orders of magnitude slower than asynchronous ones. (Remember? Nothing is as fast as RAM.) So ZFS can use an SLOG - a "really really super fast super reliable gold-pressed latinum coated SSD" - to speed up synchronous writes. Or rather not the writes themselves, but the acknowledgement of the successful write. The data is still written to RAM - but also to the SLOG in parallel. And as soon as the data is on the SLOG, the write can be acked. Then the data is flushed from memory to the pool drives in the next transaction group just like with asynchronous writes, but if the system crashes now, there's a copy on the SLOG that can be replayed at startup time.
Once the data is successfully flushed to storage (from RAM, not from the SLOG!) the SLOG data is invalidated and the space on the SLOG can be used for something new.

See? The SLOG is never read in normal operation. That's why it is not a write cache. It can speed up the acknowledgement "I got this, trust me" of synchronous writes. Still these synchronous writes will be orders of magnitude slower than asynchronous writes, because nothing is faster than RAM.
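If, after measuring, you decide to try one: adding and removing a log vdev is non-destructive, so you can experiment (device and pool names are made up; the GUI is the supported way on TrueNAS):
Code:
# add a dedicated log device (SLOG) to the pool
zpool add CCTV log gpt/slog0

# remove it again if it does not help
zpool remove CCTV gpt/slog0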

So you need an SSD with a high sustained write rate and exceptional endurance. And, if possible, PLP. I would go with what the regulars here recommend - there are dedicated threads about particular SLOG devices. Don't try to go cheap. And don't expect any magic "speedup": a synchronous write with an SLOG will still be orders of magnitude slower than an asynchronous write. The SLOG just makes synchronous writes less painful in situations where you need them.
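One way to sanity-check a candidate's sync write latency on FreeBSD/TrueNAS CORE is diskinfo's built-in test (destructive - only run it against a disk without data on it; device name is an example):
Code:
# naive benchmark of synchronous write performance
# WARNING: this writes to the raw device!
diskinfo -wS /dev/nvd0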
 

Etorix

Wizard
Joined
Dec 30, 2020
Messages
2,134
SLOG is NOT a "write cache".
SLOG is not a performance booster.
SLOG is only used by sync writes.
Turning on sync writes IS a performance killer.
Sync writes with a SLOG are not as bad as plain sync writes, but still a lot worse than async writes.

A HDD pool can easily saturate a 1 GbE link.
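The rough numbers behind that statement:
Code:
1 GbE link:        ~125 MB/s raw, ~110 MB/s usable after protocol overhead
one 7200 rpm HDD:  ~150-250 MB/s sequential
=> even a small HDD pool outruns the network for sequential I/O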
 