TrueNAS Scale Crashing on New Build

Joined
May 14, 2023
Messages
6
Hello all,

System Specs:
Mobo:
Topton N5105
CPU: Intel Celeron N5105
RAM: TEAMGROUP Elite DDR4 64GB Kit (2 x 32GB) 3200MHz PC4-25600 CL22 Unbuffered Non-ECC (Running @2933 MHz)
Drives:

SSD: PNY CS1030 250GB M.2 NVMe (x2) RAID 1
HDD: Seagate IronWolf 4TB NAS 5400 RPM 64MB Cache (x5) RAIDZ1
Network Cards:
Intel i226-V (x4) - only using one

Currently, this is set up to run Proxmox with TrueNAS Scale inside a VM, because of how unstable Scale has been on its own. I originally ran TrueNAS Core and had no crashing issues until I switched to Scale, which has way better features. Ideally, if Scale runs stably, I would eliminate Proxmox and just transfer the pool. The VM has 50 GB of RAM allocated to it and 104 GB on the SSD. The HDDs are all passed through to TrueNAS.

I am having instability on TrueNAS Scale. During large data transfers, I find that the system crashes; the CPU doesn't appear to be overloaded and the HDD temps look fine. I have read a bit of the forum and know this might be related to not having enough RAM for ARC, which is where my suspicion tends to drift. I understand that the motherboard is new and might not be fully supported either.

Any advice/tips/tricks would be much appreciated!
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
I don't see any information on your disk controller; you need to pass your HBA or disk controller through to TrueNAS using PCIe passthru. This is a key thing you need to do to have a stable virtualized TrueNAS. When you say "The HDDs are all passed through to TrueNAS" it sounds like you've skipped this important step.
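
As a rough sketch (the PCI address below is made up; yours will differ, and this only works if the platform's IOMMU actually functions), whole-controller passthrough on Proxmox looks something like:

lspci -nn | grep -i sata            # find the SATA controller's PCI address on the host
qm set 100 -hostpci0 0000:01:00.0   # hand the entire controller to VM 100, not individual disks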

See the full primer on virtualization at

 
Joined
Jun 2, 2019
Messages
591
@KarmaGotMyBack

Is this the motherboard?



@jgreco

Could this be a contributor?

  • Storage
    • 2x M.2 NVMe PCIe 3.0 socket (2280)
    • 6x SATA 3.0 connectors, 5x of which are implemented through a JMicron JMB585 PCIe Gen3 x2 to x5 SATA 3.0 bridge
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Could this be a contributor?
6x SATA 3.0 connectors, 5x of which are implemented through a JMicron JMB585 PCIe Gen3 x2 to x5 SATA 3.0 bridge

Ouch. You trying to give up being an elvisimprsntr and instead trying to be shrlckdtctv? Good catch.

To the OP, the JMB585 chipset has been known to be problematic for some users, and especially if you are attempting to pass thru the controller, I have no idea if this is going to work out. We know that at least some of the ASMedia controllers (specifically the ASM106x) work fine on bare metal and I've seen a few reports of VT-d success with them IIRC, but the JMicron controllers have been problematic for many users in any role.

I'll think about it and see if I have any suggestions. I think we need clarification on what's being done with the disks here as far as passthru goes.
 
Joined
May 14, 2023
Messages
6
I am still learning some of the terms, so correct me if I am off or wrong.

VT-d is 'enabled' in the BIOS but not actually supported. So from what I can find, I can't pass the SATA controller (JMB585) through on Proxmox due to 'No IOMMU detected'.
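
For anyone following along, this is roughly how the check looks on the Proxmox host (generic commands, not specific to this board):

cat /proc/cmdline                # should include intel_iommu=on on Intel platforms
dmesg | grep -e DMAR -e IOMMU    # look for IOMMU/DMAR initialization messages

If the kernel command line is missing intel_iommu=on, or dmesg shows no DMAR/IOMMU remapping messages, controller passthrough won't be available.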



What I did to pass through the drives in Proxmox was:

qm set 100 -scsi1 /dev/disk/by-id/ata-ST4000VN006-3CW104_ZW605M61
qm set 100 -scsi2 /dev/disk/by-id/ata-ST4000VN006-3CW104_ZW6066PE
qm set 100 -scsi3 /dev/disk/by-id/ata-ST4000VN006-3CW104_ZW60ECTJ
qm set 100 -scsi4 /dev/disk/by-id/ata-ST4000VN006-3CW104_ZW60EQ1K
qm set 100 -scsi5 /dev/disk/by-id/ata-ST4000VN006-3CW104_ZW604W72

@KarmaGotMyBack

Is this the motherboard?


Yes.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
VT-d is 'enabled' in the BIOS but not actually supported. So from what I can find, I can't pass the SATA controller (JMB585) through on Proxmox due to 'No IOMMU detected'.

Okay, so this system is not usable for virtualization.

What I did to pass through the drives in proxmox was in:

A very dangerous trick that will eventually cause corruption.

This is usually the point at which people get angry at me for telling it like it is. Feel free. Go ahead. But this setup is not expected to work reliably, and for ZFS, that means it isn't expected to work. Please consider checking in the Resources section of the forum for suggestions for boards that are known to work well with TrueNAS.
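
One quick way to see the issue from inside the guest (not from this specific system, just how this kind of setup generally presents): with disks mapped in via qm set -scsiN, SCALE sees QEMU virtual disks rather than the real drives, for example:

lsblk -o NAME,MODEL,SERIAL    # models show up as QEMU HARDDISK, not the actual IronWolf drives
smartctl -i /dev/sda          # little or no real SMART data reaches the guest

ZFS is then working through an emulation layer it can't see past.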
 
Joined
May 14, 2023
Messages
6
Okay, so this system is not usable for virtualization.



A very dangerous trick that will eventually cause corruption.

This is usually the point at which people get angry at me for telling it like it is. Feel free. Go ahead. But this setup is not expected to work reliably, and for ZFS, that means it isn't expected to work. Please consider checking in the Resources section of the forum for suggestions for boards that are known to work well with TrueNAS.
I had been reading many of your posts and had basically realized this as well from what you have already written.

To be honest, I don't even want to run Proxmox. But when I run only Scale, it crashes like no other. I am not really trying to fix the virtualization issues, just the general stability of Scale. I would happily run just Scale without it being a VM, but it (Scale) doesn't survive more than an hour of uptime before it fails. Any ideas on that setup and why that would be the case?

Does that make sense how I worded that?
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Any ideas on that setup and why that would be the case?

No. I'm a FreeBSD guy, so my go-to answer here, especially in light of the fact that you indicated CORE had been working fine, would be to run CORE, and then for whatever Kubernetes or container stuff you feel compelled to run, install some friendly Linux as a VM inside CORE's bhyve hypervisor and give that a shot.

You get the benefits of CORE's stability (in general, and also infrequent update requirements) while also getting the "way better features" that don't work all that well on SCALE anyways. Just my offhand opinion, and trying to give you something viable to do. Debugging Linux is a major PITA in my experience so if I can avoid it... :smile:
 
Joined
May 14, 2023
Messages
6
No. I'm a FreeBSD guy, so my go-to answer here, especially in light of the fact that you indicated CORE had been working fine, would be to run CORE, and then for whatever Kubernetes or container stuff you feel compelled to run, install some friendly Linux as a VM inside CORE's bhyve hypervisor and give that a shot.

You get the benefits of CORE's stability (in general, and also infrequent update requirements) while also getting the "way better features" that don't work all that well on SCALE anyways. Just my offhand opinion, and trying to give you something viable to do. Debugging Linux is a major PITA in my experience so if I can avoid it... :smile:
Thanks for the reply.

For the time being, I switched back to Scale only (no Proxmox) and will see if it wants to be stable. Otherwise, I might go back to CORE. (Unless you happen to know any mini-ITX motherboards with nearly the same specs.) I wouldn't mind a faster CPU and a board that supports VT-d and ECC RAM...
I am not sure that exists in that form factor, though.
 

Whattteva

Wizard
Joined
Mar 5, 2013
Messages
1,824
Debugging Linux is a major PITA in my experience so if I can avoid it... :smile:
Something tells me you're going to have a mighty fun time here, because from my anecdotal observations, new users overwhelmingly skew toward SCALE due to "ZOMG APPS and LINUX!!!"
 

Whattteva

Wizard
Joined
Mar 5, 2013
Messages
1,824
For the time being, I switched back to Scale only (no Proxmox) and will see if it wants to be stable. Otherwise, I might go back to CORE. (Unless you happen to know any mini-ITX motherboards with nearly the same specs.) I wouldn't mind a faster CPU and a board that supports VT-d and ECC RAM...
I am not sure that exists in that form factor, though.
Why the insistence on mini-ITX? Micro-ATX isn't that much bigger and opens up your options a lot more. It also tends to have better expansion options and airflow/cooling.
 
Joined
May 14, 2023
Messages
6
Why the insistence on mini-ITX? Micro-ATX isn't that much bigger and opens up your options a lot more. It also tends to have better expansion options and airflow/cooling.
I totally agree on all those points. For this first NAS I built, I wanted something small and clean-looking but still pretty functional, so I used a JONSBO N1 Mini-ITX NAS chassis.

I am already wishing I had a bigger case and motherboard though.
 
Joined
May 14, 2023
Messages
6
Update
I would say this has been resolved. My solutions were to not run TrueNAS Scale as a VM, since my hardware doesn't support it, and to disable turbo boost on my CPU in the BIOS. The combination has made it stable.
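
For reference, if anyone wants to check or toggle this from inside SCALE rather than the BIOS, something like the following should work, assuming the intel_pstate driver is in use on this CPU (the runtime change is not persistent across reboots):

cat /sys/devices/system/cpu/intel_pstate/no_turbo       # 1 means turbo is disabled
echo 1 > /sys/devices/system/cpu/intel_pstate/no_turbo  # disable turbo at runtime (as root)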
 