NVDIMM for SLOG question

lonelyzinc

Dabbler
Joined
Aug 8, 2019
Messages
35
I'm new to FreeNAS and don't currently have any NVDIMMs. My question is for someone who has used NVDIMMs as a SLOG before:

Does it just show up as a storage device in the UI, or is it more complicated to configure? I can't find any documentation about it.

 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
If you have a board that supports NVDIMM correctly, as well as the required battery/capacitor pack to make it actually "NV" on power loss, they show up as pmem devices and run extremely well.
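
Adding one as a log vdev is a one-liner (a minimal sketch, assuming a pool named tank and the device showing up as pmem0):

Code:
# attach the pmem device as a dedicated log (SLOG) vdev
zpool add tank log pmem0
# it should then be listed under "logs" in the pool layout
zpool status tank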

 

lonelyzinc

Dabbler
Joined
Aug 8, 2019
Messages
35
If you have a board that supports NVDIMM correctly, as well as the required battery/capacitor pack to make it actually "NV" on power loss, they show up as pmem devices and run extremely well.

Thanks for the info, I reached out to Supermicro and this is what they said about my build:

Yes, X10DRi supports NVDIMM.

Please note that you must update the BIOS to R2.1 or later, which includes an RC code update and adds JEDEC NVDIMM support.


Great, now I am trying to get a hold of the people who produce PowerGEMs, AgigA Tech... wow, very difficult!!

Do you think there would be any reason to get a 32 GB NVDIMM for the system in my sig, or would I likely see no improvement over a 16 GB? I even read that Micron states most of their NVDIMM customers require 64 GB?? Not sure where the bottleneck would be at this point. And by the way, do you know anything about Direct Access mode? Is it utilized by FreeNAS?
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
Good luck with the hunt for a supported PowerGem. You may want to ask @Rand about experiences with NVDIMMs as well.

Do you think there would be any reason to get a 32 GB NVDIMM for the system in my sig, or would I likely see no improvement over a 16 GB? I even read that Micron states most of their NVDIMM customers require 64 GB?? Not sure where the bottleneck would be at this point.

Potentially; ZFS by default only allows for 4GB of dirty data in RAM (and you can think of the SLOG as a mirror of your RAM), but you could increase this amount to the full size of your NVDIMM device, with the understanding that it would eat up that much main memory and make it unavailable for use by the ARC as read cache. With a quad 10GbE link you could ingest a whole lot of data from the wire very quickly, so you'll want to ensure your back-end vdevs can keep up; lots of fast SSDs are going to be required here.
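
You can check the current cap before changing anything (a quick sketch using the FreeBSD sysctl name; on a box with plenty of RAM it sits at the 4GB clamp):

Code:
# current dirty data cap, in bytes (default: 10% of RAM, clamped to 4 GiB)
sysctl vfs.zfs.dirty_data_max
vfs.zfs.dirty_data_max: 4294967296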

And by the way, do you know anything about Direct Access mode? Is it utilized by FreeNAS?

Currently there's no support for this mode in ZFS, although the real-world overhead of going through the NVMe block driver layer is minimal compared to the actual write latency of most devices. NVDIMM is where you could start to split hairs and call it necessary. I'm sure the iX team is working on it. ;)
 

alexr

Explorer
Joined
Apr 14, 2016
Messages
59
First, you'll have to navigate the technomumbojumbo in the BIOS config to get the NVDIMM set to save and restore state.

Then you'll want to add tunables to load the nvdimm driver: nvdimm_load and ntb_load.
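
In /boot/loader.conf terms those come out as follows (a sketch; in the FreeNAS UI you would add them as loader-type tunables rather than editing the file by hand):

Code:
# load the NVDIMM kernel modules at boot
nvdimm_load="YES"
ntb_load="YES"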

As a safety measure, I've been removing the log device before doing an OS update, since I can't be sure the update won't require additional futzing around with tunables or something else, then adding it back after a successful upgrade.

This is totally a YMMV thing in that iX has made it clear that you're on your own here since it's not TrueNAS.
 

lonelyzinc

Dabbler
Joined
Aug 8, 2019
Messages
35
First, you'll have to navigate the technomumbojumbo in the BIOS config to get the NVDIMM set to save and restore state.

Then you'll want to add tunables to load the nvdimm driver: nvdimm_load and ntb_load.

As a safety measure, I've been removing the log device before doing an OS update, since I can't be sure the update won't require additional futzing around with tunables or something else, then adding it back after a successful upgrade.

This is totally a YMMV thing in that iX has made it clear that you're on your own here since it's not TrueNAS.
Thanks for adding to this. I think the Supermicro engineering team (although slow to reply via email) will be able to assist with the BIOS config. It's funny because a lot of this stuff is so new (at least to the X10 generation boards, as NVDIMM support was added in a BIOS update) that it's not even documented in their memory guides. Since this is a used build I'm creating, I'm grateful for all the help I can get from them.

Basically, I saw that the top-of-the-line TrueNAS M50 has NVDIMM and thought: why not add it to my build? But it's very hard to source the parts (a 10-week backorder for an AgigA PowerGEM, I've heard from one of the few suppliers, Arrow), and there seem to be very few people actually using them here on the forums.

I think I might just use an Intel Optane for SLOG instead, since it appears to be more popular and better documented on here.

Makes me think... if I do have issues with the NVDIMM as a SLOG, what would I expect them to be? Loss of the data being written during a power outage if the PowerGEM device isn't working, or worse?
 

Rand

Guru
Joined
Dec 30, 2013
Messages
906
So first - while X10 boards officially support NVDIMMs, I never got one to run as expected on mine (also an X10DRi; not sure of the BIOS version, but I am sure I updated to the latest 6 months ago). Also, they only support NVDIMM on dual-CPU boards, just to mention it.

I have had more luck with Scalable boards, and only the 4th board I tried (an X11SPH-nCTPF) actually provided the full support that I needed.
Full support in this case means that I was able to update my (bought used) NVDIMM modules to the latest firmware - they ran on some of the other boards too.
Now I can't remember whether the NVDIMM actually ran as an NVDIMM on my X10DRi (this only matters if you need the firmware update, which you shouldn't since you're buying new anyway), but I would suppose it should if it's officially supported. Please keep in mind that while SM supports running them on their boards, they don't actually support them in the sense of providing help with getting them to run properly (at least they didn't/couldn't 6 months ago in Europe).

Regarding obtaining them -
You should be able to get them from Micron/Crucial, as they produce their own; of course HP/Dell have them too for their boxes.
But I know that the NVDIMMs I have (16GB 2666) have been EOL'ed already, so you might need to look at 32GB modules to find availability (or buy used after all). Either way you need matching PowerGEMs, as you know - and I have been told that's not plug&play either (as in, I got some from eBay and was told they don't match my NVDIMMs, but I have not tried them).

The quoted availability of PGs at Arrow is always 10 weeks - it was the same 6 months ago - but that is not actual availability, just a placeholder. Have you actually ordered some or just inquired?

For PGs I have been told there might be availability at https://www.i-components.com too, but I'm not sure whether that's only for the 2.5" models or the HHL card too.
 

Rand

Guru
Joined
Dec 30, 2013
Messages
906
Let me put this question from the PM in the open, as it might be interesting for more people:
By the way, something that has been on my mind lately that maybe you know the answer to: if all my DIMMs are going to be either 64 GB or 32 GB, does it matter if the NVDIMM is 16 GB? Would there be any point to even having two 16 GB NVDIMMs, since the only intent is SLOG, and it cannot be raided?

I know Supermicro warns about matching RAM sizes, and since an NVDIMM also has DRAM, I thought maybe I would need to do the same.

Not sure about the exact consequences of mixing sizes, perhaps they are minor?

At this point it's quite unclear how NVDIMMs are handled in the memory chain - I recently started a thread on STH to try to get some feedback (https://forums.servethehome.com/index.php?threads/memory-config-diagnosis-tool.26232/) but nobody has been able to answer my questions yet.

1. At this point in my tests it looks like the NVDIMM counts toward the number of memory channels in use, so in order to get maximum bandwidth you should use 5+1 or 4+2 modules (regular/NVDIMM), or 2+1 to get triple channel.
2. It should not matter which size your modules are; ranks would be more relevant, but I would assume the impact is minuscule.
3. You cannot mix LRDIMMs with NVDIMMs (which are regular RDIMMs).
4. Unless I am mistaken, you should be able to mirror or stripe NVDIMMs like any regular drive (see the sketch below this list). It used to not be possible, but at this point it is, AFAIK.
5. The size you need for your NVDIMM depends on how much traffic you could possibly ingest over the network in 5s (or whatever you set your transaction group flush duration to). For 100Gb/s that would be some 8GB/s of realistic throughput, so you'd need a 40GB SLOG - but realistically you won't reach that, so 16GB is plenty ;)
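
A minimal sketch of point 4, assuming the two modules show up as pmem0/pmem1 and a hypothetical pool named tank:

Code:
# mirrored SLOG across two NVDIMMs
zpool add tank log mirror pmem0 pmem1
# or, alternatively, striped as two independent log vdevs
zpool add tank log pmem0 pmem1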
 

kdragon75

Wizard
Joined
Aug 7, 2016
Messages
2,457
With a quad 10GbE link you could ingest a whole lot of data from the wire very quickly, so you'll want to ensure your back-end vdevs can keep up; lots of fast SSDs are going to be required here.
My understanding is that ZFS monitors disk latency to throttle IO in order to prevent thrashing. I don't know the details of the time scales on which this operates, though, so depending on just how bursty a workload is, this could be overwhelmed by sufficiently fast network connections.

if I do have issues with the NVDIMM as a SLOG, what would I expect them to be? Loss of the data being written during a power outage if the PowerGEM device isn't working
Yeah, I would think that's about it. The funny thing about using a fancy SLOG device is that you never actually read it back unless something goes wrong, like a power failure.
 

lonelyzinc

Dabbler
Joined
Aug 8, 2019
Messages
35
So first - while X10 boards officially support NVDIMMs, I never got one to run as expected on mine (also an X10DRi; not sure of the BIOS version, but I am sure I updated to the latest 6 months ago). Also, they only support NVDIMM on dual-CPU boards, just to mention it.

I have had more luck with Scalable boards, and only the 4th board I tried (an X11SPH-nCTPF) actually provided the full support that I needed.
Full support in this case means that I was able to update my (bought used) NVDIMM modules to the latest firmware - they ran on some of the other boards too.
Now I can't remember whether the NVDIMM actually ran as an NVDIMM on my X10DRi (this only matters if you need the firmware update, which you shouldn't since you're buying new anyway), but I would suppose it should if it's officially supported. Please keep in mind that while SM supports running them on their boards, they don't actually support them in the sense of providing help with getting them to run properly (at least they didn't/couldn't 6 months ago in Europe).

Regarding obtaining them -
You should be able to get them from Micron/Crucial, as they produce their own; of course HP/Dell have them too for their boxes.
But I know that the NVDIMMs I have (16GB 2666) have been EOL'ed already, so you might need to look at 32GB modules to find availability (or buy used after all). Either way you need matching PowerGEMs, as you know - and I have been told that's not plug&play either (as in, I got some from eBay and was told they don't match my NVDIMMs, but I have not tried them).

The quoted availability of PGs at Arrow is always 10 weeks - it was the same 6 months ago - but that is not actual availability, just a placeholder. Have you actually ordered some or just inquired?

For PGs I have been told there might be availability at https://www.i-components.com too, but I'm not sure whether that's only for the 2.5" models or the HHL card too.

I spoke with AgigA today, and the primary sales guy there said that they do in fact have the specific PowerGEM I want in stock. He is putting me in touch with his specific contact at Arrow, since when I called by phone and checked out the website, it said the part was unavailable.

Will probably order a single used 16GB Micron NVDIMM-N, since they cost about a tenth of what a new 32GB Micron does.
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
My understanding is that ZFS monitors disk latency to throttle IO in order to prevent thrashing. I don't know the details of the time scales on which this operates, though, so depending on just how bursty a workload is, this could be overwhelmed by sufficiently fast network connections.

Much like a Facebook relationship status, "it's complicated" is the best way to briefly describe ZFS's write throttling.

ZFS throttles incoming writes from the "outside" based on the amount of outstanding dirty data (pending txgs not yet committed to the vdevs), and with 40Gbps combined that's about a 5GB/s potential ingest rate - at smaller block sizes, that could potentially overwhelm even an NVDIMM, so you might experience a tiny bit of throttle there.

But even if you had an infinitely fast SLOG device, you're ultimately bottlenecked by how fast you can flush that data to stable vdevs. That at least happens async (for the most part), but even with 36x SAS drives in mirrors, the limited IOPS and seek times of physical drives might eventually make it bog down, especially as usage increases and free space becomes more fragmented.

I'd suggest increasing the dirty data max from the default 4GB to 16GB (the full size of the NVDIMM) in order to have the largest buffer available for incoming writes. With the default 60% throttle threshold, that lets you eat about 9.6GB of burst writes before ZFS starts to slow things down from the outside.
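
As tunables, that would look roughly like this (a sketch, assuming FreeBSD's sysctl names as used by FreeNAS; note that vfs.zfs.dirty_data_max_max also defaults to 4GB and clamps the other value, so it has to be raised as well):

Code:
# 16 GiB expressed in bytes (16 * 1024^3)
vfs.zfs.dirty_data_max_max=17179869184
vfs.zfs.dirty_data_max=17179869184
# throttling begins at this percentage of dirty_data_max (default 60)
# 0.60 * 16 GiB = ~9.6 GiB of burst absorbed before writes are delayed
vfs.zfs.delay_min_dirty_percent=60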
 

Rand

Guru
Joined
Dec 30, 2013
Messages
906
I spoke with AgigA today, and the primary sales guy there said that they do in fact have the specific PowerGEM I want in stock. He is putting me in touch with his specific contact at Arrow, since when I called by phone and checked out the website, it said the part was unavailable.

Will probably order a single used 16GB Micron NVDIMM-N, since they cost about a tenth of what a new 32GB Micron does.
Have you confirmed with him that the module you want matches the NVDIMM you are looking to get?
And Arrow will allow you to buy a single one? They wanted me to get 50 when I last asked (minimum order size). My contact at AgigA was willing to lower that to 20 for me, but even that is a wee bit too much at 200 bucks each :p
 

ehsab

Dabbler
Joined
Aug 2, 2020
Messages
45
@Rand did you (or anyone else for that matter) get any further on this topic?
I'm really hoping to use Intel's DCPMM in DA mode for SLOG and L2ARC for my NVMe disks, but I can't find any information showing that this has been done and works.
We are now 6+ months on from when you discussed this, and I was hoping to get some positive feedback :)
 

Rand

Guru
Joined
Dec 30, 2013
Messages
906
I run an NVDIMM (-N) module happily on my box; I have not dabbled in Optane NVDIMMs yet.

I run it as a block device, so probably not memory mode:
Code:
dmesg|grep -i nvdi
nvdimm_root0: <NVDIMM root> on acpi0
nvdimm0: <NVDIMM region 16GB interleave 1> at iomem 0x8080000000-0x847fffffff numa-domain 0 on nvdimm_root0
pmem0: <PMEM region 16GB> at iomem 0x8080000000-0x847fffffff numa-domain 0 on nvdimm_root0
 

ehsab

Dabbler
Joined
Aug 2, 2020
Messages
45
I run an NVDIMM (-N) module happily on my box; I have not dabbled in Optane NVDIMMs yet.

I run it as a block device, so probably not memory mode:
Code:
dmesg|grep -i nvdi
nvdimm_root0: <NVDIMM root> on acpi0
nvdimm0: <NVDIMM region 16GB interleave 1> at iomem 0x8080000000-0x847fffffff numa-domain 0 on nvdimm_root0
pmem0: <PMEM region 16GB> at iomem 0x8080000000-0x847fffffff numa-domain 0 on nvdimm_root0
What are the specifications of that NVDIMM?
What does a diskinfo -wS run look like?

Are you running FreeNAS on that box?
 

alexr

Explorer
Joined
Apr 14, 2016
Messages
59
I'm set up similarly to Rand. iX sales suggested it for my box. I had to add two modules to make it work with FreeNAS.
 