High CPU Usage

chuck32

Guru
Joined
Jan 14, 2023
Messages
623
Hello all,

I set up 4x 4 TB drives in RAIDZ2 in my system:

asrock b560m-itx/ac
i3 10100
32 GB RAM
Delock 5 port SATA PCI Express x4 Card
4x WD Red Plus 4 TB
+ SSDs for Proxmox and VM backups
TrueNAS as a VM

and I assigned 6 cores and 16 GB RAM to TrueNAS. When I copy files I hit almost 100% usage on all 6 cores during the transfer (via an SMB share from a Windows client).
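In case it matters, the allocation is just the normal Proxmox CPU/RAM setting; from the shell it would be roughly the following (VM ID 100 is just an example):

  qm set 100 --cores 6 --memory 16384      # 6 vCPUs and 16 GiB RAM for the guest
  qm config 100 | grep -E 'cores|memory'   # verify what the VM actually got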

On a live dataset I had compression set to ZSTD, then changed it to LZ4 (no change), and then turned compression off. The CPU load is still that high during transfers.
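For reference, that's the equivalent of the following on the shell (pool/dataset name is just a placeholder); as far as I understand, changing the property only affects newly written blocks, existing data keeps its old compression:

  zfs set compression=zstd tank/share
  zfs set compression=lz4 tank/share
  zfs set compression=off tank/share
  zfs get compression,compressratio tank/share   # check the current setting and ratio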

Any ideas where I can look to see what may not be configured properly? The documentation states "TrueNAS does not require two cores, as most halfway-modern 64-bit CPUs likely already have at least two." Hence I believe such high system loads are not to be expected.
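For anyone wanting to dig in, the TrueNAS shell has the usual FreeBSD tools to see where the time goes during a transfer:

  top -SHP    # per-CPU view, with kernel processes and individual threads visible
  gstat -p    # activity on the physical disks during the copy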

edit: added screenshots for further information


Thanks in advance!
 

Attachments

  • Screenshot 2023-02-07 180221.png
  • Untitled.png
  • Screenshot 2023-02-07 180537.png
Last edited:

chuck32

Guru
Joined
Jan 14, 2023
Messages
623
Originally the VM had 4 cores and 12 GB RAM, and when I saw that all four cores were at 100% I added two more and some RAM.
The CPU has hyper-threading; at first I thought I couldn't assign more than 4 cores, but 4 cores at 100% in TrueNAS corresponded to around 50% load on the Proxmox host.
 

ChrisRJ

Wizard
Joined
Oct 23, 2020
Messages
1,919
There is not a lot of information about how the virtualization is done, and there are a lot of ways to screw this up. That something seems to be working does not mean it will keep doing so in a critical situation.

Please have a thorough look at this:

Also, the Delock SATA card may be a cause for concern (depending on the chipset etc.).
 

chuck32

Guru
Joined
Jan 14, 2023
Messages
623
There is not a lot of information about how the virtualization is done, and there are a lot of ways to screw this up. That something seems to be working does not mean it will keep doing so in a critical situation.
Sorry, I'll try to provide more information.

The hypervisor is Proxmox (Linux 5.15.74-1-pve #1 SMP PVE 5.15.74-1)
I downloaded the TrueNAS CORE 13.0-U3.1 iso and created a new VM with more or less the default settings and installed from the iso.
The WD Red Plus drives are attached via the HBA; Proxmox is running on a 2.5" SSD (on the mainboard's SATA ports).
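In case it's relevant for judging the setup: the usual recommendation is to hand the whole SATA controller to the guest via PCIe passthrough instead of using virtual disks. On Proxmox that would look roughly like this (VM ID and PCI address are examples; IOMMU/VT-d has to be enabled, and pcie=1 needs the q35 machine type):

  lspci | grep -i sata                     # find the PCI address of the Delock card
  qm set 100 --hostpci0 01:00.0,pcie=1     # pass the controller through to the TrueNAS VM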

1675803656167.png

1675803765687.png


Please have a thorough look at this:
I read that one after I built the server. Originally I purchased the server to host my Home Assistant instance, with the idea of also being able to set up a NAS solution. If I had been set on TrueNAS beforehand (and had known I'd spend the $$$ on the hard drives) I'd probably have chosen different hardware.
When I'm done playing around with my Proxmox machine and have all my VMs, containers etc. set up, I could still think about migrating to TrueNAS as my hypervisor and see whether everything, or most of it, will still work the way I want.*
* edit: actually I could look into that. I basically just need one or two Ubuntu Server VMs (WireGuard, Paperless, Pi-hole, Heimdall, Portainer) and my Home Assistant instance. From what I've read since you provided the link again, virtualizing TrueNAS is not the best idea, so this may be my best bet going forward. That's the beauty of VMs, though: I can just back up all my VMs, install TrueNAS bare metal, and if I feel like going back I just deploy Proxmox and restore my VMs. I'm not too sure about how to move my VM contents to TrueNAS-hosted VMs, though. But setting up Pi-hole again etc. shouldn't be too hard since I can save my config.
I'd only need an NVMe SSD then, since my mainboard only has four SATA ports.
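The backup and restore part would be roughly this with vzdump (VM ID, storage name and dump filename are placeholders):

  # on Proxmox, before reinstalling
  vzdump 100 --mode stop --compress zstd --storage local
  # later, on a fresh Proxmox install, restore the dump
  qmrestore /var/lib/vz/dump/vzdump-qemu-100-<timestamp>.vma.zst 100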

Also, the Delock SATA card may be a cause for concern (depending on the chipset etc.).
It uses a JMicron JMB585 chipset. I didn't find the mentioned HBA cards for a reasonable price when I started realizing I need an HBA (I now discovered some are affordable used on eBay, but well). Call me naive or enlighten me, but in my use case I think that if the HBA card fails I'm looking at a week or so of downtime, and I still have my local mirrors + Backblaze backup (although I want to shift to B2 storage using TrueNAS remote encryption, so retrieving the files would depend on the server running; however, I could boot TrueNAS from a stick or another SSD and use my backed-up config). All data will be backed up to our local machines anyway. It would be nice, and for the money spent desirable, to have TrueNAS running reliably and profiting from ZFS, but I don't think I'll be lost without it, as it should not replace my other backups.

@HoneyBadger I changed to two cores and allocated even more RAM:
Two cores running at almost 100% equal around a quarter of the host's CPU load, hence I think I could assign 8 cores. But that's not too important, since none of my other applications are really CPU intensive or need that many cores anyway. Apparently except for TrueNAS.

1675805827068.png
 
Last edited:

chuck32

Guru
Joined
Jan 14, 2023
Messages
623
Update
I got an NVMe SSD (way too big though; I only realized afterwards that I can't use it for anything else like VMs) and installed TrueNAS bare metal.

So far the CPU load seems okay. For some reason I didn't save the screenshot from the latest larger SMB transfer, but I recall the load was around 10%, which seems fine.
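For the record, the numbers can also be pulled from the shell during a transfer, something along these lines (pool name is a placeholder):

  zpool iostat -v tank 5    # per-vdev/per-disk throughput, refreshed every 5 seconds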

I will need some time to get used to the new hypervisor, but if bare metal is the safest bet I'll take it.
 
Last edited: