Intel Optane Memory module not showing up in TrueNAS

sniperguy135

Cadet
Joined
May 4, 2022
Messages
8
Hi all! Just wanted to say that I'm new to this, and TrueNAS has been quite the adventure so far.

As the title suggests, I have obtained a 16GB Intel Optane Memory M10 module and wish to use it as a cache vdev for my main storage pool. However, I cannot find it in the GUI under Storage -> Disks, nor can I see it in gpart:
Code:
root@HomeNAS[~]# gpart show
=>       40  488397088  ada0  GPT  (233G)
         40     532480     1  efi  (260M)
     532520   59965440     2  freebsd-zfs  (29G)
   60497960       2008        - free -  (1.0M)
   60499968  427896832     3  freebsd-zfs  (204G)
  488396800        328        - free -  (164K)

=>        40  3907029088  ada1  GPT  (1.8T)
          40          88        - free -  (44K)
         128     4194304     1  freebsd-swap  (2.0G)
     4194432  3902834696     2  freebsd-zfs  (1.8T)

=>        40  3907029088  ada2  GPT  (1.8T)
          40          88        - free -  (44K)
         128     4194304     1  freebsd-swap  (2.0G)
     4194432  3902834696     2  freebsd-zfs  (1.8T)

=>        40  3907029088  ada3  GPT  (1.8T)
          40          88        - free -  (44K)
         128     4194304     1  freebsd-swap  (2.0G)
     4194432  3902834696     2  freebsd-zfs  (1.8T)

=>        40  3907029088  ada4  GPT  (1.8T)
          40          88        - free -  (44K)
         128     4194304     1  freebsd-swap  (2.0G)
     4194432  3902834696     2  freebsd-zfs  (1.8T)

root@HomeNAS[~]#

The module is installed in the Ultra M.2 Slot, and does show in BIOS under NVME storage. It also showed in BIOS when installed in the second m.2 slot, but that slot shares bandwidth with the SATA ports I'm currently using.
ASRock B450 Pro4 [P4.20]
Ryzen 5 2400G
24 (8 + 16) GB Corsair Vengeance DDR4
250GB Crucial MX500, 29GB boot pool
4x 2TB Seagate Barracuda ST2000DM008, RAID-Z
Realtek RTL8111H


Please let me know if I should provide any additional information. Thanks!
 

NugentS

MVP
Joined
Apr 16, 2020
Messages
2,949
Can you post your hardware spec as per forum rules please?
 

Patrick M. Hausen

Hall of Famer
Joined
Nov 25, 2013
Messages
7,776
If there is no partition table on the device, gpart won't show it, AFAIK. nvmecontrol devlist should. If it doesn't, there's a hardware issue. In that case see the previous post, please.
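[Not part of the original post: Patrick's distinction can be illustrated mechanically. gpart only lists devices that carry a partition table, while nvmecontrol devlist enumerates every attached NVMe controller regardless of partitioning. Below is a minimal sketch of counting controllers from that output; the sample text and model string are hypothetical, not taken from the poster's machine.]

```shell
# Hypothetical sample of `nvmecontrol devlist` output with one controller
# attached; the model string below is an assumption, not the actual drive.
devlist=' nvme0: INTEL MEMPEK1W016GA
    nvme0ns1 (13732MB)'

# Controller lines start with "nvmeN:"; namespace lines ("nvmeNnsM") don't match.
controllers=$(printf '%s\n' "$devlist" | grep -c '^ *nvme[0-9][0-9]*:')
echo "controllers: $controllers"
```

If this counts 0 controllers on a system where the BIOS sees the drive, the OS is not attaching the controller at all, which points at a slot/lane or compatibility problem rather than missing partitioning.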
 

NugentS

MVP
Joined
Apr 16, 2020
Messages
2,949
Are you running afoul of slots being used disabling other things?
Can you see the nvme in BIOS?
Slots
- AMD Ryzen series CPUs (Vermeer, Matisse, Cezanne, Renoir, Summit Ridge and Pinnacle Ridge): 2 x PCI Express 3.0 x16 slots (PCIE2: x16 mode; PCIE4: x4 mode)*
- AMD Ryzen series CPUs (Picasso, Raven Ridge): 2 x PCI Express 3.0 x16 slots (PCIE2: x8 mode; PCIE4: x4 mode)*
- AMD Athlon series CPUs: 2 x PCI Express 3.0 x16 slots (PCIE2: x4 mode; PCIE4: x2 mode)*
- 4 x PCI Express 2.0 x1 slots
- Supports AMD Quad CrossFireX™ and CrossFireX™**
*Supports NVMe SSD as boot disks. If M2_1 is occupied, PCIE4 will be disabled.
**This feature is only supported with Ryzen Series CPUs (Vermeer, Matisse, Cezanne, Renoir, Summit Ridge, Pinnacle Ridge, Picasso and Raven Ridge).

Storage
- 4 x SATA3 6.0 Gb/s connectors, support RAID (RAID 0, RAID 1 and RAID 10), NCQ, AHCI and Hot Plug*
- 2 x SATA3 6.0 Gb/s connectors by ASMedia ASM1061, support NCQ, AHCI and Hot Plug
- 1 x Ultra M.2 Socket (M2_1), supports M Key type 2242/2260/2280 M.2 PCI Express module up to Gen3 x4 (32 Gb/s) (with Vermeer, Matisse, Cezanne, Renoir, Picasso, Summit Ridge, Raven Ridge and Pinnacle Ridge) or Gen3 x2 (16 Gb/s) (with Athlon series APU)**
- 1 x M.2 Socket (M2_2), supports M Key type 2230/2242/2260/2280/22110 M.2 SATA3 6.0 Gb/s module and M.2 PCI Express module up to Gen3 x2 (16 Gb/s)**
*M2_2, SATA3_3 and SATA3_4 share lanes. If either one of them is in use, the others will be disabled.
**If M2_1 is occupied, PCIE4 will be disabled. Supports NVMe SSD as boot disks. Supports ASRock U.2 Kit.

Your hardware spec (which I really hope wasn't there before, when I looked earlier) mentions a separate RTL network card - is that correct?
I have posted the important part of the tech spec for the motherboard above

Lastly "250GB Crucial MX500, 29GB boot pool"
Does that mean you have created a 29GB partition on the MX500 and boot from that, with the intention of using the remaining space as a pool?
[Not that I think it's related to your issue in any way, if you have]
 
Last edited:

sniperguy135

Cadet
Joined
May 4, 2022
Messages
8
Thank you all for your replies!
Can you post your hardware spec as per forum rules please?
My hardware specs were in the spoilers, but I'll post them here again:
ASRock B450 Pro4 [P4.20]
Ryzen 5 2400G
24 (8 + 16) GB Corsair Vengeance DDR4
250GB Crucial MX500, 29GB boot pool
4x 2TB Seagate Barracuda ST2000DM008, RAID-Z
Realtek RTL8111H

If there is no partition table on the device, gpart won't show it, AFAIK. nvmecontrol devlist should.
TIL, I'll check this later. If it shows under nvmecontrol devlist, how would I go about making it show up under Disks?

Are you running afoul of slots being used disabling other things?
Can you see the nvme in BIOS?
The module shows up in the BIOS, and I've checked that the M.2 slot it's using (M2_1) does not conflict with other slots (the R5 2400G is Raven Ridge):
- 1 x Ultra M.2 Socket (M2_1), supports M Key type 2242/2260/2280 M.2 PCI Express module up to Gen3 x4 (32 Gb/s) (with Vermeer, Matisse, Cezanne, Renoir, Picasso, Summit Ridge, Raven Ridge and Pinnacle Ridge) or Gen3 x2 (16 Gb/s) (with Athlon series APU)**
- 1 x M.2 Socket (M2_2), supports M Key type 2230/2242/2260/2280/22110 M.2 SATA3 6.0 Gb/s module and M.2 PCI Express module up to Gen3 x2 (16 Gb/s)**
*M2_2, SATA3_3 and SATA3_4 share lanes. If either one of them is in use, the others will be disabled.
**If M2_1 is occupied, PCIE4 will be disabled. Supports NVMe SSD as boot disks. Supports ASRock U.2 Kit.
Your hardware spec (which I really hope wasn't there before, when I looked earlier) mentions a separate RTL network card
It's the one on the motherboard; I included it as it's a field in the forum rules post. Should I exclude it?
Does that mean you have created a 29GB partition on the MX500 and boot from that, with the intention of using the remaining space as a pool?
[Not that I think it's related to your issue in any way, if you have]
Yes, it's actually from one of Patrick M. Hausen's guides. It's not officially supported, but it leaves me some space for jails/plugins/VMs without using my main pool.

I'll check nvmecontrol devlist when I'm with my setup again, but I'm still open to suggestions!
 

NugentS

MVP
Joined
Apr 16, 2020
Messages
2,949
When you say cache - do you mean L2ARC?
If it's in the BIOS, then TN should see it as nvd0.
Scale or Core?
 

sniperguy135

Cadet
Joined
May 4, 2022
Messages
8
When you say cache - do you mean L2ARC?
If it's in the BIOS, then TN should see it as nvd0.
Scale or Core?
Yes, L2ARC.
Currently running Core.
I'll be with my setup in a few hours, and will check nvmecontrol devlist for an nvd0.

Thanks!
 

sniperguy135

Cadet
Joined
May 4, 2022
Messages
8
If there is no partition table on the device, gpart won't show it, AFAIK. nvmecontrol devlist should. If it doesn't, there's a hardware issue. In that case see the previous post, please.
Result I got from nvmecontrol devlist:
Code:
root@HomeNAS[~]# nvmecontrol devlist
No NVMe controllers found.
root@HomeNAS[~]#
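[Not suggested in the thread itself, but a natural follow-on diagnostic: check whether the controller is visible on the PCIe bus at all, e.g. with `pciconf -lv` on CORE; an NVMe controller reports PCI class 0x010802. The sample output below is illustrative, not from the actual system.]

```shell
# Illustrative `pciconf -lv` line for an NVMe controller; the device address
# and vendor string here are assumptions, not real values from this system.
sample='nvme0@pci0:1:0:0: class=0x010802 rev=0x00 hdr=0x00
    vendor = "Intel Corporation"'

# Base class 0x01 (mass storage), subclass 0x08 (NVM), prog-if 0x02 (NVMe).
if printf '%s\n' "$sample" | grep -q 'class=0x0108'; then
    echo "NVMe-class PCI device present"
else
    echo "no NVMe-class PCI device found"
fi
```

If the device does not appear even at the PCI level, that points to a physical connection, slot, or lane-sharing problem rather than a driver or partitioning issue.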


On a separate note, the system was unresponsive and had to be shut down unsafely. This has happened a few times, even prior to installing the NVMe drive. One drive also fails to show up sometimes. Could it be a motherboard issue?
 

NugentS

MVP
Joined
Apr 16, 2020
Messages
2,949
Maybe - I've got no other ideas, I'm afraid. It should just work.
 

sniperguy135

Cadet
Joined
May 4, 2022
Messages
8
That's a shame :/ Thanks for your help though
Hopefully someone has encountered this before too
 

sniperguy135

Cadet
Joined
May 4, 2022
Messages
8
Just wanted to update:

It's working now, and it's probably by virtue of tech rule 101: switch it off and on, take it out and put it back in.
Here's some info and steps I did in case someone (unfortunately) comes across this situation:

I have my personal rig running an R5 5600X on a Pro4 X570 (same family as my TN motherboard).
When plugged into the M2_3 slot, the Optane is detected by my personal rig. In Windows, Disk Management also prompted me to initialize the disk, and I chose GPT.

I put another M.2 PCIe SSD into the TN rig, and it was detected by TN through nvmecontrol devlist, which means it isn't a hardware issue. At this point, still no joy with the Optane drive.

So, I booted TN with the Optane drive on my personal rig. It showed up in nvmecontrol devlist as nvme0.
Subsequently, when I booted TN on the original rig, it showed up there as well, both in the CLI and the GUI. So my guess is that it's just a really, really obscure bug where TN needs to see the drive exist once before it will subsequently find it again.

Since everything is working for now, I won't try a reboot, but if I do I'll post an update if this solution sticks. Thanks for your replies!



Update: Upon restart, it disappeared. :( I'm guessing it's just my specific config that is not playing well with everything.
 
Last edited:

rkbest

Dabbler
Joined
Nov 12, 2021
Messages
11
A very relatable issue happened to me, but slightly different. I may start another thread for it.
I had an Optane M10 as SLOG on a ZFS pool in CORE and it was working perfectly. I decided to upgrade to TrueNAS SCALE (not a fresh install). Everything worked at first, but then I found the ZFS pool degraded. On inspection, the NVMe M10 Optane was not even listed. I removed the NVMe and the pool came online, but the NVMe is not detected by TrueNAS SCALE.
Troubleshooting more: I restarted and selected the TrueNAS CORE option from the GRUB entries. This loaded CORE, but the GUI was all messed up. Still, some things were the same, so I went into Storage > Disks and the M10 Optane is listed fine. This seems more like a bug, or a limitation of SCALE working with the M10 Optane.
 

sniperguy135

Cadet
Joined
May 4, 2022
Messages
8
Back with some updates on my issue:
Bought an NVMe-to-PCIe x4 adapter, and the M10 seems to be reliably showing up now. So I think it's just a really obscure bug.
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
Subsequently, when I booted TN on the original rig, it showed up there as well, both in the CLI and the GUI. So my guess is that it's just a really, really obscure bug where TN needs to see the drive exist once before it will subsequently find it again.
I don't believe that for a second.

It's far more likely there was just a poor connection of the module that was somehow OK enough for the BIOS, but not for anything else. Once you put it back in, you must have done a better job and all was well.
 