ARC with virtualized TrueNAS: a shame to use only 50% of memory?!

Oedzesg

Dabbler
Joined
Oct 24, 2020
Messages
19
Hello all.

I recently installed TrueNAS SCALE on Proxmox and everything works great!

My system has 64 GB of ECC memory, of which I have passed 32 GB to TrueNAS.

Total storage capacity: 4x 4 TB WD Red HDD and 4x 2 TB WD Red SSD.
So 32 GB seemed sufficient to me.

Since TrueNAS is virtualized and no other services are running in it, it seems a shame that the ARC only uses 50% of the memory.


Is it safe to set it to, say, 28 GB?

Thank you in advance.
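
For what such a change would even look like, here is a minimal sketch (Python, run as root inside the SCALE VM) of reading and, commented out, raising the OpenZFS ARC ceiling through the zfs_arc_max module parameter. The 28 GiB figure is just the number from the question; as the replies below explain, raising the Linux default is not necessarily advisable.

```python
# Minimal sketch: inspect (and optionally raise) the OpenZFS ARC ceiling on
# Linux via the zfs_arc_max module parameter, which is expressed in bytes.
# A value of 0 means "use the built-in default" (about half of RAM on Linux).
# Run as root inside the VM; the change does not persist across reboots.

ARC_MAX_PARAM = "/sys/module/zfs/parameters/zfs_arc_max"

TARGET_GIB = 28                        # the value asked about above
target_bytes = TARGET_GIB * 1024**3

with open(ARC_MAX_PARAM) as f:
    current = int(f.read().strip())
print(f"current zfs_arc_max: {current} bytes (~{current / 1024**3:.1f} GiB)")

# Uncomment to actually apply the new ceiling:
# with open(ARC_MAX_PARAM, "w") as f:
#     f.write(str(target_bytes))
print(f"candidate zfs_arc_max: {target_bytes} bytes ({TARGET_GIB} GiB)")
```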
 

Whattteva

Wizard
Joined
Mar 5, 2013
Messages
1,824
I think that's a SCALE thing. My CORE VM uses 30.6/32 GB (~95%) of my RAM straight out of the box with no modifications.
 

Oedzesg

Dabbler
Joined
Oct 24, 2020
Messages
19
I think that's a SCALE thing. My CORE VM

Thanks for the reply!

Are you using CORE on Proxmox, ESXi, or bare metal?

Somehow I have the feeling that Core is only there for Docker and better VMs.

I debated for a long time whether I should choose Proxmox or ESXi.

Proxmox does not support FreeBSD, and on the other hand ESXi does not support software RAID for my VM disks and boot drive.

In this YouTube video, Scale vs Core performance, you can also clearly see that CORE is more for pure storage and SCALE is more for Docker, VMs, etc.

Since my only goal with TrueNAS is a NAS that is as stable as possible and nothing more, I'm still hesitating between Proxmox with SCALE and ESXi with CORE.


Tough choices all around.
 

Whattteva

Wizard
Joined
Mar 5, 2013
Messages
1,824
Are you using CORE on Proxmox, ESXi, or bare metal?
Proxmox.
Somehow I have the feeling that Core is only there for Docker and better VMs.
You mean SCALE is only there for Docker and better VMs, with which I would tend to agree.
I debated for a long time whether I should choose Proxmox or ESXi.
I ended up going with Proxmox because there was something ESXi didn't support; I don't remember specifically, but it may have been my HBA.
Proxmox does not support FreeBSD, and on the other hand ESXi does not support software RAID for my VM disks and boot drive.
It doesn't? That's news to me. I'm running both TrueNAS CORE and a vanilla FreeBSD VM on my Proxmox, and they're both running very solidly.
Since my only goal with TrueNAS is a NAS that is as stable as possible and nothing more, I'm still hesitating between Proxmox with SCALE and ESXi with CORE.
I don't know if this is any reassurance, but I'm currently running Proxmox with vanilla FreeBSD, TrueNAS CORE, and even a FreeBSD-based router (OPNsense) with great success. If you can't already tell, I'm a big FreeBSD fan.
 

Oedzesg

Dabbler
Joined
Oct 24, 2020
Messages
19
That's news to me. I'm running both TrueNAS CORE and a vanilla FreeBSD VM on my Proxmox, and they're both running very solidly.

Everything I've read recently says that ESXi goes well with FreeBSD, and almost everyone advises against FreeBSD on Proxmox.

Good to hear that things are stable for you. I'm definitely going to give it a chance.
 

Whattteva

Wizard
Joined
Mar 5, 2013
Messages
1,824
Everything I've read recently says that ESXi goes well with FreeBSD, and almost everyone advises against FreeBSD on Proxmox.

Good to hear that things are stable for you. I'm definitely going to give it a chance.
If you're interested in my HW setup, feel free to check the "primary" system in my sig.
 

Oedzesg

Dabbler
Joined
Oct 24, 2020
Messages
19
Feel free to check the "primary" system in my sig.

Looks good!!

OS: TrueNAS CORE 13.0-U3.1 STABLE VM on Proxmox 7.3-3 with 4 cores and 32 GB of non-ballooned RAM.

I also have ballooning set to 0, but somehow the memory has not been fully passed on, and according to the report in Proxmox there is sometimes a few percent difference in what the TrueNAS VM consumes.


[Screenshots: memory_truenas_vm.png, truenas_vm.png, hardware_truenas_vm.png]


My setup:

* Silverstone CS381
* Supermicro X12STH-LN4F
* Intel Xeon E-2356G
* 4x Kingston 16 GB ECC DDR4-3200 (will upgrade to 4x 32 GB)
* 2x Noctua 120 mm fans
* 2x Noctua 80 mm fans
* Noctua CPU cooler
* 2x WD Red SA500 (boot drive mirror)
* Linkreal PCIe 3.0 x8 to M.2 NVMe adapter (2x mirrored Samsung 980 Pro 1 TB / 2x mirrored Samsung 980 Pro 500 GB)
* Dell PERC H310 HBA
* Mellanox MCX312B-XCCT CX312B ConnectX-3 Pro 10GbE SFP+ dual-port PCIe NIC

I initially planned to build two servers: one with TrueNAS on bare metal and one VM server with Proxmox.

But since energy costs are quite high here in the Netherlands, I decided to cram all my hardware into one box, and it works great that way.

Unfortunately, it is very tight on PCIe lanes.
 

Whattteva

Wizard
Joined
Mar 5, 2013
Messages
1,824
I also have ballooning set to 0, but somehow the memory has not been fully passed on, and according to the report in Proxmox there is sometimes a few percent difference in what the TrueNAS VM consumes.
This is probably because instead of entering 32000 MiB, you actually need to put in 32768 MiB of RAM (32 x 1024). Here is how mine looks:
[Screenshot of the VM memory setting]
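
As a side note on the numbers themselves, the "few percent difference" mentioned above is exactly what a 32000-vs-32768 MiB entry produces; a quick sketch:

```python
# Why entering 32000 MiB instead of 32768 MiB leaves the VM a bit short of 32 GiB.
requested_gib = 32
correct_mib = requested_gib * 1024    # 32 GiB is exactly 32768 MiB
typed_mib = 32000                     # a "round" decimal entry

shortfall_mib = correct_mib - typed_mib
print(f"correct entry : {correct_mib} MiB")
print(f"typed entry   : {typed_mib} MiB")
print(f"shortfall     : {shortfall_mib} MiB "
      f"(~{shortfall_mib / correct_mib:.1%} of the intended 32 GiB)")
```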

But since energy costs are quite high here in the Netherlands, I decided to cram all my hardware into one box, and it works great that way.
My energy costs aren't exactly cheap either here in New York City: 15c/kWh supply plus another 15c/kWh delivery, for a total of 30c/kWh. And that's before other taxes and fees, which total around $50 before even using ANY electricity (basic service charge + customer charge, whatever those mean). It's really infuriating looking at my bill sometimes.
Unfortunately, it is very tight on PCIe lanes.
Fortunately for me, my board does not have that problem. Plenty of PCIe lanes.
 
Last edited:

Oedzesg

Dabbler
Joined
Oct 24, 2020
Messages
19
you actually need to put in 32768 MiB of RAM (32 x 1024)


I understood that, but somehow a mistake crept in.

[Screenshot: memory.png]


energy costs aren't exactly cheap either here in New York City.

Unfortunately, some events in the Far East affect the whole world.

For the TrueNAS CORE VM:
Guest OS type: "Other" or just "Linux"?

Thanks for helping me out.
 

Whattteva

Wizard
Joined
Mar 5, 2013
Messages
1,824

Davvo

MVP
Joined
Jul 12, 2022
Messages
3,222

Whattteva

Wizard
Joined
Mar 5, 2013
Messages
1,824
It's a Linux (kernel?) thing. IIRC there were workarounds.
While on that subject, I think this is also a SCALE thing; I think it's Kubernetes, if memory serves. SCALE's CPU usage is ridiculous compared to CORE, as you can see in the shot below. I'm running SCALE just for experimental purposes. It has NO DATA and only 4 TrueCharts apps running... not even doing anything (just deployed and left uncustomized). The FreeBSD VM listed there, on the other hand, is running Jellyfin, Transmission, and a couple of other VNET jails ACTIVELY transmitting/receiving files, and it uses far fewer CPU cycles. SCALE is using almost as much as the Windows 11 VM that runs a GUI!!! This reason alone is probably why I would basically never deploy SCALE in any production capacity.

Man, this is why I don't use Linux, except on the desktop. Also, if you're wondering why TrueNAS2 is missing, it's because TrueNAS2 is a bare-metal machine in my second sig.
[Screenshot of per-VM CPU usage in Proxmox]
 
Last edited:
Joined
Oct 22, 2019
Messages
3,641
It's a Linux (kernel?) thing. IIRC there were workarounds.
Unlike on FreeBSD, ZFS is "second class" on Linux. :frown:

The ZFS ARC is also limited to 50% of RAM on Linux, whereas on FreeBSD it defaults to "as high as RAM minus 1 GiB".
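
To put rough numbers on that for a 32 GiB VM like the one in this thread (a sketch; the exact FreeBSD default has shifted slightly between releases):

```python
# Rough default ARC ceilings for a VM with 32 GiB of RAM.
# Linux/SCALE : about half of physical memory.
# FreeBSD/CORE: about all of physical memory minus 1 GiB.

ram_gib = 32
linux_default_gib = ram_gib / 2        # ~16 GiB available to the ARC
freebsd_default_gib = ram_gib - 1      # ~31 GiB available to the ARC

print(f"Linux/SCALE default ARC max : ~{linux_default_gib:.0f} GiB")
print(f"FreeBSD/CORE default ARC max: ~{freebsd_default_gib:.0f} GiB")
```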

We're in a strange valley between two worlds when it comes to TrueNAS:

Core is more "stable", and ZFS performs better in a FreeBSD base versus a Linux base.

Yet SCALE is where polish and new features will be developed, as Core is beginning to feel like a stepchild in the family. (Yes, they'll still "support" it, for now.) Just look at the whole "Plugins" debacle. I don't even like submitting bug reports against Core anymore, since if anything they'll just fix/implement it in SCALE, unless it's an Armageddon-level bug.
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
It's a Linux (kernel?) thing. IIRC there were workarounds.
Adjusting the ARC upwards on SCALE/Linux isn't recommended, as it may compromise system stability due to free-memory fragmentation.

I don't consider tuning this higher than the default to be safe on SCALE. With any VMs configured, manual tuning can be overridden, because the middleware automatically adjusts the ARC max as needed. Beyond that, the default max is the limit because the Linux kernel's allocator can end up using more memory than is physically installed past that point. See the commit message:
https://github.com/openzfs/zfs/commit/518b4876022eee58b14903da09b99c01b8caa754

Going beyond 1/2 of physical memory may compromise system stability on Linux.
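
For anyone who just wants to see what ceiling the middleware has actually settled on, rather than override it, the live figures can be read from the ARC kstats; a small sketch for Linux/SCALE, using the arcstats file that OpenZFS normally exposes:

```python
# Read the live ARC statistics exposed by OpenZFS on Linux.
# "c_max" is the current ARC ceiling and "size" is the current ARC footprint,
# both in bytes. On SCALE the middleware may adjust c_max (e.g. when VMs are
# configured), so reading it is safer than assuming the 50%-of-RAM default.

ARCSTATS = "/proc/spl/kstat/zfs/arcstats"

stats = {}
with open(ARCSTATS) as f:
    for line in f.readlines()[2:]:     # first two lines are kstat headers
        name, _kind, value = line.split()
        stats[name] = int(value)

gib = 1024**3
print(f"ARC ceiling (c_max): {stats['c_max'] / gib:.1f} GiB")
print(f"ARC size           : {stats['size'] / gib:.1f} GiB")
```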
 