TrueNAS SCALE VM with Nextcloud crashes every few hours

Kienaba

Explorer
Joined
May 24, 2022
Messages
52
Hello,

I just switched to TrueNAS to run my Nextcloud. Yesterday I installed everything, and now I am uploading my old Nextcloud files to the new Nextcloud on TrueNAS. But this sync stops every few hours, because the VM crashes with this error:

2023-07-05 22:55:22.135+0000: shutting down, reason=crashed

This is very annoying, as I thought TrueNAS would run without any problems. I did a clean installation, and yet now there are problems...

At the beginning I had a problem where the system clock did not match the current time, so the reports in TrueNAS were all null. I then adjusted the time and that fixed it.

I am also very unsure about my VM configuration. It would be very nice if someone could tell me the optimal configuration for my Nextcloud VM. The TrueNAS PC has 64 GB of DDR4, and I actually wanted to give 32 GB to Nextcloud, even though I know this is overkill. The CPU is an AMD Ryzen 7 5700U with 8 cores and 16 threads in total, and I had assigned 4 cores to the VM.

Attached are a few pictures of my configuration. I have two volumes for the Nextcloud VM, both encrypted with AES: one volume for the Nextcloud system and one volume for the Nextcloud data.

And just now I set the VM to 16 GB RAM, because ZFS was somehow using more than 32 GB of RAM, which prevented me from giving the original 32 GB to the VM. This is a bit annoying; I would really like to use 32 GB of RAM. Maybe the crash is also because of that, i.e. that I assign too much RAM to the VM? But why would too much RAM be a problem?

Thank you....
 

Attachments: 1.png, 2.png, 3.png, 4.png, 5.png, 6.png

Kienaba

Explorer
Joined
May 24, 2022
Messages
52
Now I have changed my VM settings again. Can someone tell me if this is fine?

I changed 1 thread to 2 threads. Does this mean 2 threads per core, or 2 threads in total? I want 2 threads per core.
And I changed the RAM back to 32 GB. So far no crash. Now the ZFS cache is again 32 GB, services (the VM with 32 GB) is also 32 GB, and there is 0.4 GB of free RAM left. Is this okay?
Does ZFS always need 50% of the RAM, so that I can only use 50% of the RAM I add to my server? Or will the ZFS cache adjust? So right now it is a 32 GB cache, and when I add a new VM with 16 GB RAM, will the ZFS cache shrink to 16 GB?
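If it helps, I think I can check the actual ARC numbers in the TrueNAS shell and post them here. As far as I understand (this is just my assumption from reading about OpenZFS on Linux, I have not verified it on SCALE), the current ARC size, target and maximum are in the arcstats file:

awk '$1=="size" || $1=="c" || $1=="c_max"' /proc/spl/kstat/zfs/arcstats
free -h

The first command should print the ARC size, target and limit in bytes, and free -h shows the overall RAM usage as Linux sees it.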
 

Whattteva

Wizard
Joined
Mar 5, 2013
Messages
1,824
I changed 1 thread to 2 threads. Does this mean 2 threads per core, or 2 threads in total? I want 2 threads per core.
And I changed the RAM back to 32 GB. So far no crash. Now the ZFS cache is again 32 GB, services (the VM with 32 GB) is also 32 GB, and there is 0.4 GB of free RAM left.
Linux has a nasty tendency to just kill processes (the OOM killer) when it runs out of RAM. I'm guessing this is what is happening to you.
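If you want to confirm that, the kernel log should contain an entry when the OOM killer terminates the qemu process backing your VM. I'm assuming plain standard Linux tooling here (I don't run SCALE myself), but something like this should find it:

dmesg -T | grep -i -E 'out of memory|oom-kill|killed process'
journalctl -k | grep -i oom

If a qemu process shows up there around the time of the 22:55 crash in your log, the VM isn't really crashing on its own; the host is running out of memory and killing it.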

Is this okay?
Does ZFS always need 50% of the RAM, so that I can only use 50% of the RAM I add to my server? Or will the ZFS cache adjust? So right now it is a 32 GB cache, and when I add a new VM with 16 GB RAM, will the ZFS cache shrink to 16 GB?
Is it okay? Well, it depends. I know on CORE (FreeBSD-based) it's totally fine. In fact, you WANT ZFS to use that free RAM; unused RAM is wasted RAM and an opportunity to improve performance. But FreeBSD has treated ZFS as a first-class citizen for far, far longer than Linux and has much better (more stable) memory management when it comes to freeing up the ZFS cache when other apps/services need it.
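If SCALE does not shrink the ARC quickly enough for your VM, the generic OpenZFS workaround on Linux is to cap the ARC yourself. I can't vouch for how this interacts with the TrueNAS middleware (it may manage the ARC limit on its own, so check the SCALE documentation first), but in plain OpenZFS terms it looks like this, run as root:

cat /sys/module/zfs/parameters/zfs_arc_max
echo 17179869184 > /sys/module/zfs/parameters/zfs_arc_max

The first command shows the current limit (0 means the built-in default), and the second caps the ARC at 16 GiB; note that the setting is not persistent across reboots.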

Unfortunately, I can't really tell you how SCALE will behave because I don't run it in any production capacity. I don't use SCALE Apps and have no plans to, so I don't see myself ever running SCALE over CORE. Others here with more SCALE experience may be able to shed more light on it.
 

Kienaba

Explorer
Joined
May 24, 2022
Messages
52
Thanks @Whattteva!

Maybe you can help me with this: I do not understand the overview of my datasets. Please check the screenshot. The External dataset is correct: 400 GB in use and 3 TB free. But I do not understand the numbers for "Pool". I think I understand the first one, "1 TiB / 662 GiB": the total space is around 1.8 TB, so a little more than 1 TB is in use and 662 GB is free. That seems okay to me and makes sense.

But what is wrong with my volumes?! I gave my Nextcloud data volume 1000 GB, and that amount was indeed added to the totals of "Pool". But the Nextcloud volume should only hold around 700 GB of data, so it makes no sense that it shows 800 GB free.

The same with the Nextcloud system volume: I gave it 100 GB and it shows 743 GB free?! That makes no sense. And the same for the Pi-hole volume.
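If it helps, I can run something like this in the TrueNAS shell and post the output here. I am assuming the zvol properties (volsize, used, refreservation, available) would show where the numbers in the GUI come from, but maybe I am just misreading how the overview reports free space for zvols:

zfs list -t volume -o name,volsize,used,refreservation,available -r Pool

("Pool" here is just the name of my pool from the screenshot.)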
 

Attachments: e4bd0679c4b91bb09fc2fa4fa01509ef.png