And a bonus stupid question: wouldn't it be better to just have a UPS (providing power-loss protection for the entire system, i.e. FreeNAS would have the chance to finish what it's doing and shut down cleanly) as opposed to paying a premium for an SSD with PLP?
(Home use in mind, not enterprise-y use, obviously.)
I would not recommend using the drive for both boot and SLOG. Although it is possible, it's not supported.
And to answer the first part of my question, I found the thread "Using Intel Optane as both boot drive and ZIL/SLOG?"
Is that really, really, really not recommended?
Say you were considering getting a stupid fast SLOG... because your VMs are too slow...
You could create a SLOG on a RAM disk. This is an incredibly stupid idea, but it does demonstrate the maximum performance gains to be had from using the absolutely fastest SLOG you could possibly get...
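For illustration, the RAM-disk experiment described above can be sketched with FreeBSD's memory-backed disks; this is a hedged sketch only, assuming a FreeBSD-based FreeNAS/TrueNAS CORE system and a pool named "tank" (the pool name and size are placeholders), and it should never be done on a pool with data you care about:

```shell
# Sketch only -- a RAM-backed SLOG loses all in-flight sync writes on any
# crash or power loss. For benchmarking the upper bound only.

# Create an 8 GiB memory-backed disk as unit 0
mdconfig -a -t swap -s 8g -u 0

# Attach it to the pool as a log (SLOG) device
zpool add tank log /dev/md0

# ...run your sync-write benchmark against the pool...

# Detach the log device and destroy the RAM disk afterwards
zpool remove tank md0
mdconfig -d -u 0
```

Log vdevs are one of the few vdev types that can be removed cleanly afterwards, which is what makes this a usable (if reckless) benchmarking trick.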
Potentially yes, but you are missing one of the most important features of a SLOG: it makes sure your writes are stored on some kind of permanent media. A RAM disk would be faster, but those writes would be lost in a power outage or software crash, and that could leave you with a corrupted pool. If you are willing to roll the dice on something like that (data loss and pool corruption), you should just force writes via shares to sync=disabled instead.
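For reference, the "just turn sync off" alternative mentioned above is a single dataset property; a minimal sketch, assuming a hypothetical dataset tank/vms backing the share:

```shell
# Assumes a dataset named tank/vms backing the share -- adjust to your layout.
# With sync=disabled, ZFS acknowledges sync writes before they hit stable
# storage, so recently acknowledged writes can vanish on power loss or crash.
zfs set sync=disabled tank/vms

# Check the current setting
zfs get sync tank/vms

# Revert to honoring client sync requests (the default)
zfs set sync=standard tank/vms
```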
Stux - Is this statement accurate? Can one get write speeds as high as when using a RAM disk as the fastest SLOG? And if so, what makes the following example using Optane so fast vs. your example, where ~400 MB/s was the max?
https://www.servethehome.com/exploring-best-zfs-zil-slog-ssd-intel-optane-nand/
Funny you should say that. The motherboard blew up in a lightning storm.

@Stux: A few questions if you don't mind.
How is your VM performance on your Fractal Node 304 build? I'm in the process of building an 804 case that will be modded to hang 6x additional drives on the mobo side, for a total of 14 (16 drives if you include the base/floor). The main pool will be 8x 4TB HGST 7200rpm SAS drives (mirrored pairs). Wondering how your VMs run on that 8c/16t Xeon-D + 128GB RDIMM platform?
I own 128GB (4x 32GB) of RDIMMs, and with today's world/market/prices... the 8c/16t X10SDV board you are running costs about the same as an mATX LGA 3647 board with a Xeon Scalable... and PMem 100 DIMMs instead of that P3700 PCIe SSD you put in your Node 304.
I'm looking for a rough hardware baseline. I see some people say they are running an E3-1220 v5 with 32GB of UDIMMs and are happy, while the next person is running a dual-socket E5-2699 v4 with 512GB of RAM and a SLOG worth more than the HDDs in their NAS, and they aren't happy with performance.
I'm worried about latency and don't have a clue about performance with a 10Gb network and a pool of 4x mirrored pairs (8 drives). Optane PMem 100 sticks are about $100 each for the 128GB size. Yes, that's much more than needed for a SLOG... possibly use a portion of the rest for small-file or metadata space? A 6-channel Xeon SP means 6x DIMMs. I own 4x 32GB RDIMMs, so it would be 256GB in the DIMM slots: 128GB Optane plus 128GB DDR4 RDIMM.
I have heard that mirroring PMem DIMMs across CPU sockets (over the QPI/UPI links) is stupid-slow. But on a single-socket mATX board, does anyone have performance numbers/metrics for mirroring a SLOG/ZIL on PMem (dare I say a persistent "RAM drive")? I also cannot find any info on what exactly is supported for free vs. paid enterprise vs. unsupported. It seems like PMem (not sure whether the 100 or 200 series) is 100% supported in the paid version of Core. In the community (free) version it will work via the CLI, but with no web UI management? I have found no information about what is or is not supported in Scale.
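For what it's worth, on a Linux-based system (i.e. SCALE) PMem provisioned in App Direct mode is normally exposed as a block device via ndctl and can then be added as a log vdev like any other disk. A hedged sketch only, assuming ndctl is installed, the DIMMs are already provisioned in App Direct mode, and a pool named "tank" exists (all placeholders):

```shell
# Create an fsdax-mode namespace; this exposes a /dev/pmemN block device
ndctl create-namespace --mode=fsdax

# List namespaces to confirm the resulting device names
ndctl list

# Mirror the SLOG across two PMem namespaces for redundancy
zpool add tank log mirror /dev/pmem0 /dev/pmem1
```

Whether the middleware/web UI tolerates log vdevs added from the CLI this way is exactly the support question raised above, so treat this as an experiment, not a supported configuration.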
@Stux: If you were to build your Node 304 again today (today, in the middle of this price and chipset mess), what hardware would you choose?
Thanks.