To SLOG or not to SLOG?

Sprint

Explorer
Joined
Mar 30, 2019
Messages
72
Hi all

So, my build is ever-evolving, and I continue to read and learn more with each passing day, but I'm at the point where I'd like some input, and also to check that my understanding is accurate...

My question is about how best to implement a write cache (a SLOG?), but first let me give you some background and specs.

I'm using a pair of 10-core Xeons and 256GB of RAM. The idea was to build a single server to do both my mass storage and host my VMs, and after learning I could virtualize FreeNAS and doing heaps of research, that's the route I took, and so far it's been working superbly. I have 18 VMs (although most are test machines and short-term VMs; only 8 are powered up at the moment).

My build thread is here if you’re interested, has pictures too!! - https://www.ixsystems.com/community...includes-pictures-of-build.76118/#post-530943

I have 2x LSI 9207-8i cards (soon to be 3) being passed through to the FreeNAS VM, which has 64GB of RAM and 8 cores assigned.
6x 4TB WD Reds in RAIDZ2 (secondary array)
8x 8TB WD Reds in RAIDZ2 (primary array, being installed imminently)
I may well double that to 128GB or more soon; haven't decided yet.
I have a pair of cheap 240GB SanDisk desktop SATA SSDs I was using as L2ARC (until I removed that pool to make way for the 8TB drives that are about to be installed), and 2x 512GB Samsung 860 Pro SATA SSDs that are currently in my Synology box as cache drives in front of the aforementioned 8TB drives. I mention these because I have them but am not sure what to do with them, and wonder if they might be of use...

The VMs in ESXi are all stored on the (smaller) array (and will be migrated to the bigger one once it's installed). But as I continue learning (shout-out to jgreco for the thread he linked me to in my build guide - https://www.ixsystems.com/community/threads/some-insights-into-slog-zil-with-zfs-on-freenas.13633/), I realised that the array my VMs are on had sync set to standard, and that under iSCSI this effectively meant sync writes were disabled. I went and changed the dataset the zvols live in to sync=always, as I don't want to risk my VM data. Now that I understood it better, I knew my writes would take a hit, and it seemed like the only safe option, but wow: I went from sequential write speeds of 2.3GB/s to 11MB/s!!! (tested in a VM using CrystalDiskMark). I also intend (once I install a 10Gb NIC in both the server and my desktop) to be editing 4K video projects directly off the NAS (same pool as the VMs, 'primary'). So although I now know that a SLOG won't be as quick as my current speeds (with sync writes effectively off under iSCSI), at least I'd get some performance back and retain my margin of safety.
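For anyone following along, the change was just the standard ZFS property (the pool/dataset names here are placeholders, not my actual names):

```shell
# Show the current sync policy on the dataset holding the zvols
zfs get sync tank/vm-zvols

# Force every write to be honoured as a synchronous write
# (safe for VM data, but slow without a fast SLOG)
zfs set sync=always tank/vm-zvols
```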

My understanding is that using SATA drives as a SLOG would add too much latency through the HBA and is a total non-starter. I looked at last-gen Intel PCIe SSDs but they are very expensive, so I found myself looking at the 280GB 900p PCIe Optane drives (after seeing them on Level1Techs). Something Wendell mentioned in his video was that you can partition the drive up, so I wonder if this is something I could do: assign perhaps 40GB for use as a SLOG, and say 200GB as L2ARC?
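In case it helps the discussion, the partition-it-up idea would look roughly like this on FreeBSD (a sketch only; the device name, labels, and pool name are assumptions, not a recommendation):

```shell
# Carve the Optane into a small SLOG partition and a larger
# L2ARC partition (nvd0 = the NVMe device, tank = the pool)
gpart create -s gpt nvd0
gpart add -t freebsd-zfs -s 40G  -l slog0  nvd0
gpart add -t freebsd-zfs -s 200G -l l2arc0 nvd0

# Attach each partition to the pool in its respective role
zpool add tank log   gpt/slog0
zpool add tank cache gpt/l2arc0
```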

What are people's thoughts on this as a practice? I know that in an ideal/money-is-no-object world you'd have separate drives for L2ARC and SLOG, maybe even mirror them, but this is a private system used 95% by me. Would it be fair to say this is safer than running with sync writes disabled? Would this let me kill two birds with one stone and save the SATA SSDs from L2ARC duty? (In which case I might buy a handful more and make a high-speed SSD array for editing out of them.) Or is there an alternative I'm not seeing?

All thoughts/suggestions/criticism welcome. I'm learning so be gentle...!
 

Mlovelace

Guru
Joined
Aug 19, 2014
Messages
1,111
It is not advisable to use a single device for both SLOG and L2ARC, the non-pedantic reason being that you don't want I/O contention on the SLOG. You should first determine whether your system would benefit from an L2ARC at all: if the workload is not suited to it, installing an L2ARC will actually hurt system performance, since the L2ARC's header entries consume RAM that could otherwise hold ARC data. Increasing system memory will give better performance than an L2ARC. That being said, if you determine an L2ARC is beneficial, the best-suited drives you already have for it are the 2x 512GB Samsung 860s.
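One rough way to check this (tool names vary a bit by FreeNAS version) is to watch the ARC hit ratio under your real workload; if ARC hits are already high, an L2ARC has little left to catch:

```shell
# Summarise ARC size and hit/miss statistics
arc_summary.py

# Or sample ARC hit/miss counters every 5 seconds
arcstat.py 5
```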
 

Sprint

Explorer
Joined
Mar 30, 2019
Messages
72
OK, here's a question: I have 2 arrays (and may add a third, SSD, pool in the not-very-distant future...). If I did get a 240GB Optane PCIe card, would that serve as a SLOG for all my pools? Or does it have to be assigned to just a single pool? If so, could I partition it up into, say, 3x 20GB partitions and assign each partition to a pool?
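For context on the mechanics: a log vdev belongs to exactly one pool, but nothing stops separate partitions of one device from each serving a different pool (a sketch with hypothetical pool and partition names; whether it's wise is another question):

```shell
# One partition per pool, each acting as that pool's SLOG
zpool add pool1 log gpt/slog1
zpool add pool2 log gpt/slog2
zpool add pool3 log gpt/slog3
```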
 

Spearfoot

He of the long foot
Moderator
Joined
May 13, 2015
Messages
2,478
--snip--
The VMs in ESXi are all stored on the (smaller) array (and will be migrated to the bigger one once installed)
--snip--
IOPS scale by vdev, and you only have 1 vdev in the RAIDZ2 pool where you're storing your VMs. Changing this datastore from RAIDZ2 to a set of mirrors will help improve performance. If you reconfigured your 6 x 4TB RAIDZ2 to a set of 3 mirrors you'd triple your IOPS -- at the expense of losing storage capacity, of course.
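A minimal sketch of that layout (placeholder device and pool names; note the pool has to be destroyed and recreated, so migrate the data off first):

```shell
# 6 x 4TB as three striped 2-way mirrors instead of one RAIDZ2 vdev
zpool create tank \
  mirror da0 da1 \
  mirror da2 da3 \
  mirror da4 da5
```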
 

Sprint

Explorer
Joined
Mar 30, 2019
Messages
72
IOPS scale by vdev, and you only have 1 vdev in the RAIDZ2 pool where you're storing your VMs. Changing this datastore from RAIDZ2 to a set of mirrors will help improve performance. If you reconfigured your 6 x 4TB RAIDZ2 to a set of 3 mirrors you'd triple your IOPS -- at the expense of losing storage capacity, of course.

Hmmmm, 3 mirrors of 2 would only lose me 4TB over my RAIDZ2 of 6 drives.... That might actually be doable: keep the VMs on that pool (and add an SSD as L2ARC).

On the other hand... I'm fairly happy with the performance at the moment; it's just for me, no other clients, and anything is a step up from my Synology box, which maxed out at 150Mb/s. But I do want to edit off the future SSD array over 10Gb, so for that I'm thinking perhaps 4x 512GB Samsung 860 Pros (so I'll need two more) in mirrors..... and a SLOG partition so I can enable sync writes..... Hmmmm, the plan continues to evolve :D Thanks for the suggestion, that's got me thinking....
 

Snow

Patron
Joined
Aug 1, 2014
Messages
309
Why not just use the extra SSDs for fast VM or jail storage disks? I had to remove my SLOG: as jgreco has said in the past, it can slow things down more than it speeds them up! I think once you pass 8 disks, unless you need PLP, it just slows the array down. I think extra memory & tunables are the best way to get better performance out of FreeNAS, rather than adding a SLOG/L2ARC. On another note: unless you have a system that can actually use a SLOG/L2ARC (databases or a heavy number of users), I think you should save your money.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Hmmmm, 3 mirrors of 2 would only lose me 4TB over my RAIDZ2 of 6 drives.... That might actually be doable: keep the VMs on that pool (and add an SSD as L2ARC)..

Please go and read the link I provided. RAIDZ does not use a fixed amount of space for parity, and if you're doing block storage with it, you can quickly write yourself into a bad corner where you are losing gobs of space to poor optimization.
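To illustrate with one worked case (the numbers assume 4 KiB sectors, i.e. ashift=12, on a 6-disk RAIDZ2):

```
8 KiB zvol block on a 6-disk RAIDZ2, ashift=12:
  data     = 2 sectors (8 KiB)
  parity   = 2 sectors
  padding  = allocations round up to a multiple of p+1 = 3 sectors
  total    = 6 sectors = 24 KiB of raw space for 8 KiB of data
Only 1/3 of the raw space holds data, versus the 4/6 = 2/3
you'd expect from "2 parity drives out of 6".
```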
 