Sprint · Explorer · Joined Mar 30, 2019 · Messages: 72
Hi all
So, my build is ever evolving, and I continue to read and learn more with each passing day, but I'm at the point where I'd like some input, and also to check that my understanding is accurate...
My question is about how best to implement a write cache (a SLOG?), but first let me give you some background and specs.
I'm using a pair of 10-core Xeons and 256GB of RAM. The idea was to build a single server to do both my mass storage and host my VMs, and after learning I could virtualize FreeNAS and doing heaps of research, that's the route I took, and so far it's been working superbly. I have 18 VMs (although most are test machines and short-term VMs; only 8 are powered up at the moment).
My build thread is here if you're interested (it has pictures too!!): https://www.ixsystems.com/community...includes-pictures-of-build.76118/#post-530943
I have 2x LSI 9207-8i cards (soon to be 3) being passed through to the FreeNAS VM, which has 64GB of RAM and 8 cores assigned.
6x 4TB WD Reds in RAIDZ2 (secondary array)
8x 8TB WD Reds in RAIDZ2 (primary array, being installed imminently)
I may well double the FreeNAS VM's RAM to 128GB or more soon; haven't decided yet.
I have a pair of cheap desktop 240GB SanDisk SATA SSDs I was using as L2ARC (until I removed that pool to make way for the 8TB drives that are about to be installed), and 2x 512GB Samsung 860 Pro SATA SSDs that are currently in my Synology box as cache drives for the aforementioned 8TB drives. I mention these because I have them but am not sure what to do with them, and wonder if they might be of use...
The VMs in ESXi are all stored on the (smaller) array (and will be migrated to the bigger one once it's installed). As I continued learning (shout-out to jgreco for this thread he linked me to in my build guide: https://www.ixsystems.com/community/threads/some-insights-into-slog-zil-with-zfs-on-freenas.13633/), I realised that the array my VMs are on had sync set to 'standard', and that under iSCSI this effectively meant sync writes were disabled. I went and changed the dataset the zvols live in to sync=always, as I don't want to risk my VM data. Now that I understood it better, I knew my writes would take a hit and it seemed like the only safe option, but wow: I went from sequential write speeds of 2.3GB/s to 11MB/s!!! (tested in a VM using CrystalDiskMark). I also intend (once I install a 10Gb NIC in both the server and my desktop) to edit 4K video projects directly off the NAS (same pool as the VMs, 'primary'). So although I now know a SLOG won't get me back to my previous speeds (which I only saw because sync writes were effectively off under iSCSI), at least I'd regain some performance and retain my margin of safety.
My understanding is that using SATA drives as a SLOG would add too much latency through the HBA and is a total non-starter. I looked at last-gen Intel PCIe SSDs but they are very expensive, so I found myself looking at the 280GB 900p PCIe Optane drives (after seeing them on Level1Techs). Something Wendell mentioned in his video was that you can partition the drive up, so I wonder if this is something I could do: assign perhaps 40GB for use as a SLOG, and say 200GB as L2ARC?
What are people's thoughts on this as a practice? I know that in an ideal, money-is-no-object world you'd have separate drives for L2ARC and SLOG, maybe even mirror them, but this is a private system used 95% by me. Would it be fair to say this is safer than running with sync writes disabled? Would this let me kill two birds with one stone and save using the SATA SSDs as L2ARC? (In which case I might buy a handful more and make a high-speed SSD array for editing out of them.) Or is there an alternative I'm not seeing?
All thoughts/suggestions/criticism welcome. I'm learning, so be gentle...!