Best practices for boot/data drive setup (TN Scale on Proxmox)

daryusnslr

Cadet
Joined
Oct 26, 2023
Messages
9
I'm still debating TrueNAS SCALE vs CORE for the VM. I started using TrueNAS SCALE last year, so I'm more comfortable with it and Linux, but everyone says TrueNAS CORE is more stable and uses RAM more efficiently. (On SCALE, due to an OpenZFS-on-Linux default, the ARC only uses 50% of the available RAM, but there is a workaround where you set the zfs_arc_max value until SCALE gets better RAM management built in.) I think I'll use SCALE.
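For reference, a minimal sketch of that workaround (assuming a shell on the SCALE host; the 90% fraction is just an example, and the sysfs write needs root and does not survive a reboot on its own):

```shell
# Compute ~90% of total RAM in bytes as a candidate zfs_arc_max value.
total_kb=$(awk '/MemTotal/ {print $2}' /proc/meminfo)
arc_max=$(( total_kb * 1024 * 90 / 100 ))
echo "suggested zfs_arc_max: $arc_max bytes"

# To apply immediately (as root):
#   echo "$arc_max" > /sys/module/zfs/parameters/zfs_arc_max
```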

As for Nextcloud, I must thank you for showing me this; it looks like a painless way to run Nextcloud. Before seeing NcVM, I was using Nextcloud via TrueCharts, but their updates kept breaking things, which got frustrating. I also recently learned about NC AIO and tried it on my Debian Docker VM, and setup-wise it all seemed fine, so I was just going to continue using that tbh, but yeah, NcVM looks very tempting lmao.

That's my story too!! I only have Linux experience; the words FreeBSD (and jails :eek: ) still sound scary to me. I actually thought of finding a way to set up that NcVM (from a .vma image, for example) on SCALE and obviate the need for a full-featured hypervisor altogether. My online searches didn't yield any straightforward path to that, since everything needs to be set up in the GUI, but as a last shot I will attempt an Ubuntu Server VM on TN SCALE and give the NC installer script a try...
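(For later reference, a hypothetical sketch of that attempt: the installer script name comes from the nextcloud/vm GitHub repo; check that repo's README for the current download instructions before running anything as root.)

```shell
# Hypothetical: fetch and run the Nextcloud VM installer inside a fresh
# Ubuntu Server VM. Verify the URL against the nextcloud/vm repo first.
curl -fsSL -O https://raw.githubusercontent.com/nextcloud/vm/master/nextcloud_install_production.sh
sudo bash nextcloud_install_production.sh
```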
 

KingKaido

Dabbler
Joined
Oct 23, 2022
Messages
23
That's my story too!! I only have Linux experience; the words FreeBSD (and jails :eek: ) still sound scary to me. I actually thought of finding a way to set up that NcVM (from a .vma image, for example) on SCALE and obviate the need for a full-featured hypervisor altogether. My online searches didn't yield any straightforward path to that, since everything needs to be set up in the GUI, but as a last shot I will attempt an Ubuntu Server VM on TN SCALE and give the NC installer script a try...
Yeah, TrueNAS is a NAS first and a hypervisor second. I thought I'd be okay with that, but I've been getting into self-hosting a lot these days, and while creating Debian or Windows VMs hasn't been a bad experience for the most part, there have been a lot of annoying things that are easier to do on another OS. So I found it made sense to just virtualise TrueNAS and let it focus on its key feature. I've learnt so much this year about Linux, Unix, Docker, and ZFS; it's been fun, but I guess I just wanted to increase my knowledge by using a proper hypervisor and creating a bunch of VMs easily, if you get me.
Edit: Like in your example, creating that NC VM looks way more involved, while creating a macOS VM is way easier on Proxmox by copying the image or following guides/YouTube videos. There aren't that many guides for TrueNAS SCALE since it's just a NAS OS.
 
Last edited:

Whattteva

Wizard
Joined
Mar 5, 2013
Messages
1,824
I actually didn't know about FreeBSD and was primarily a Linux user until I started using FreeNAS 8.3 (precursor to TrueNAS) probably about 10 years ago. Since then, I'm all in on FreeBSD. In my opinion, it is a much cleaner and more coherent OS than the haphazard thing Linux is. pf is also a much better firewall with far saner syntax, in my opinion, and FreeBSD boot environments are a thing of beauty. And of course, jails... a much more mature container system that was well ahead of its time, before the term "container" was even coined.

While I do run Proxmox (Linux) as my hypervisor, all the VMs are FreeBSD (OPNsense, TrueNAS CORE, and a vanilla FreeBSD for all other services). In my opinion, there is very little reason to ever go with SCALE when you have a full-blown hypervisor at your disposal, with the exception of familiarity, which you both have alluded to.

Finally, my CORE VM idles at 0-1% while my SCALE VM idles at as much as 10% for no reason. If I was on the fence about SCALE before, this seals the deal. BTW, I run SCALE only in an experimental capacity, so it isn't even hosting any real pool or users! It is only running a few apps like Nextcloud, Syncthing, etc. that are doing nothing (not being used in any capacity, just idling).

I think these pictures speak for themselves: the CORE graph's upper limit is only 5%, averaging around 1%, while SCALE goes all the way to 100% and averages around 10%. It's not even remotely close. Also, the bugs are crazy. The spike to 110% at the beginning is some apps stuck at "deploying" forever and sucking up ghost CPU cycles for no reason. I deleted those apps and you can see the CPU usage dropping in response, but it still never truly idles (bottoming out at 10%).

CORE:
1698422629673.png

SCALE:
1698422564525.png
 
Last edited:

KingKaido

Dabbler
Joined
Oct 23, 2022
Messages
23
Hey,
I just wanted to post an update on my successful migration :)
There were a few problems getting it stable, but they were all due to using Kingston A400 SSDs as the boot devices. They have no DRAM cache, so as soon as you start writing 4 GB+ to the SSD (by uploading an ISO or something), the whole system would slow down; I had to wait 5-10 minutes until I could access the Proxmox GUI again. So yeah, AVOID Kingston A400 SSDs like the plague: with no DRAM cache to hold the map of the NAND flash, the drive uses precious system RAM or the slow NAND flash itself for the map. I thought it was exclusively a 120 GB problem, but it extends across their whole 120 GB to 960 GB range; none of them have a DRAM cache. Ultimately the fix was buying 2 x 250 GB Samsung 870 EVO SSDs to mirror as a boot pool. It's kind of overkill, but AFAIK no one makes a 120 GB SSD with a DRAM cache, so you've got to get the 250 GB Samsung 870 EVO or Crucial MX500. PCPartPicker is a good resource for finding out whether an SSD has a DRAM cache. Since I swapped out the Kingston SSDs it's been smooth: Proxmox doesn't freeze anymore and I can upload big ISOs again.
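For anyone who wants to check a drive before trusting it, a rough sustained-write test along these lines (file path and size are just examples) will expose the post-cache collapse; DRAM-less drives often start near their rated speed and drop to double digits once the controller's cache fills:

```shell
# Write 4 GiB to the filesystem on the drive under test.
# conv=fdatasync forces the data to the device before dd reports a rate.
dd if=/dev/zero of=/var/tmp/writetest bs=1M count=4096 conv=fdatasync status=progress
rm /var/tmp/writetest
```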
(Side note: I always wondered why I used to get random 'Device /dev/*** is causing slow I/O on pool boot-pool' emails, or got them when updating TrueNAS back when that was my main OS. It turned out the SSDs were slowing down due to the missing DRAM lmao. I didn't notice it as much because my System Dataset Pool was on an NVMe pool lmao.)

I must say I'm a big fan of the hypervisor experience: easy USB & PCI passthrough, easily cloning VMs or creating templates, backing up and restoring, testing multiple TrueNAS versions (CORE or SCALE), compartmentalising everything. It's amazing :)

The next step is getting one of those NAS boxes like the ZimaCube and running TrueNAS bare metal on it for proper backups ;)
Screenshot 2023-11-02 at 13.50.14.png
Screenshot 2023-11-02 at 14.09.07.png
 
Last edited:

daryusnslr

Cadet
Joined
Oct 26, 2023
Messages
9
Thanks for posting your update. You likely saved me some hassle by pointing out the issue you had with those 120 GB disks; they were exactly what I had planned to use as boot drives when I get some time to put together my new server this weekend. Are you planning to use a SLOG device for your virtualized TrueNAS?
 

KingKaido

Dabbler
Joined
Oct 23, 2022
Messages
23
Thanks for posting your update. You likely saved me some hassle by pointing out the issue you had with those 120 GB disks; they were exactly what I had planned to use as boot drives when I get some time to put together my new server this weekend. Are you planning to use a SLOG device for your virtualized TrueNAS?
I need to do more research into SLOG. From what I remember, it's best used in situations where a lot of data is being written to the server via zvols, VMs, or iSCSI, but that's not my use case at the moment. Also, I've run out of PCIe NVMe slots on my motherboard, so I'd need to get one of those PLX-chip-based x16-to-4x-NVMe cards, unless a SATA SSD is okay to use? I see a lot of people use Optane for it because of the high write endurance, and most of those are NVMe based.

So I'll throw the question back to you so I can understand SLOG more: what would be the purpose of one in your use case?
 

daryusnslr

Cadet
Joined
Oct 26, 2023
Messages
9
There is a lot of detailed information/discussion in the resources section... From my cursory readings so far, it seems that since I intend to set up a virtualized TrueNAS CORE and have a Nextcloud VM access the ZFS pool via NFS, a SLOG could help performance... but I'm not entirely sure. I won't delve into that before testing the system without any advanced features and learning more about the potential benefit. The enterprise SSDs that meet the requirements are expensive anyway, so better to make sure it makes sense!
 

KingKaido

Dabbler
Joined
Oct 23, 2022
Messages
23
There is a lot of detailed information/discussion in the resources section... From my cursory readings so far, it seems that since I intend to set up a virtualized TrueNAS CORE and have a Nextcloud VM access the ZFS pool via NFS, a SLOG could help performance... but I'm not entirely sure. I won't delve into that before testing the system without any advanced features and learning more about the potential benefit. The enterprise SSDs that meet the requirements are expensive anyway, so better to make sure it makes sense!
Ooo I see, Nextcloud; in that case I can see the appeal, but yeah, I agree it's best to test performance first to see if you even need the boost. I did that with a special metadata vdev, and since I've removed it I don't even notice the difference... (or maybe I set it up wrong)

But I will say, with Nextcloud, are you going to be writing a lot to it, or mostly reading? If it's the latter, a SLOG won't be needed as much.
 
Last edited:

Etorix

Wizard
Joined
Dec 30, 2020
Messages
2,134
I need to do more research into SLOG. From what I remember, it's best used in situations where a lot of data is being written to the server via zvols, VMs, or iSCSI, but that's not my use case at the moment.
SLOG is only useful for synchronous writes. No sync writes = no SLOG.
If you do have lots of data to write, asynchronous is always going to beat the best SLOG money could buy—and it won't even be a contest.
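A quick way to sanity-check this before spending money is to look at the relevant ZFS properties and watch the log vdev under load; a sketch, with placeholder pool/dataset names:

```shell
# If sync=disabled (or the workload never issues sync writes), a SLOG does nothing.
zfs get sync,logbias tank/nextcloud

# Under real load, watch whether writes actually land on the log vdev.
zpool iostat -v tank 5
```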
 

Lord Baldrick

Dabbler
Joined
Oct 5, 2023
Messages
20
Given I've only just ordered hardware for my NAS (I intend to run Proxmox & TN CORE), it seems daft for *me* to suggest an answer to running Docker on Proxmox... but I've seen some YouTubers doing just that by running Portainer under Proxmox. Has anyone here tried that?
 

Davvo

MVP
Joined
Jul 12, 2022
Messages
3,222
Given I've only just ordered hardware for my NAS (I intend to run Proxmox & TN Core)
Make sure to read the following resource.
 

Whattteva

Wizard
Joined
Mar 5, 2013
Messages
1,824
Given I've only just ordered hardware for my NAS (I intend to run Proxmox & TN CORE), it seems daft for *me* to suggest an answer to running Docker on Proxmox... but I've seen some YouTubers doing just that by running Portainer under Proxmox. Has anyone here tried that?
I run both Proxmox and TrueNAS CORE, but I don't run Docker or Portainer. I'm a bit old-fashioned and like to set up my services in a plain ol' FreeBSD (or Debian) VM with Caddy as the reverse proxy entry point. I don't think I could live without Caddy; the automatic HTTPS is just too good.
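For the curious, the whole appeal fits in a few lines of Caddyfile (the domain and upstream address are placeholders); Caddy obtains and renews the TLS certificate for the site automatically:

```
cloud.example.com {
    reverse_proxy 192.168.1.20:8080
}
```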
 

Lord Baldrick

Dabbler
Joined
Oct 5, 2023
Messages
20

daryusnslr

Cadet
Joined
Oct 26, 2023
Messages
9
@KingKaido did you end up trying Nextcloud VM/AIO on a virtualized TrueNAS NFS share? I've had success with the VM on my new server, though I haven't officially migrated my old server's data yet; I'm still testing different recovery scenarios. Moving the database and configuration from a TrueCharts NC app seems tricky; not sure if it's worth the hassle... I might just move the data and set up everything else from scratch. Given I'm a total noob, I found Proxmox quite easy to set up and navigate so far. PCIe/USB passthrough has been working OK.
 
Last edited:

Davvo

MVP
Joined
Jul 12, 2022
Messages
3,222
@Davvo - thanks, I have now. Although the "Please do not run FreeNAS in production as a Virtual Machine!" link it contains is apparently only accessible with admin access; can that be changed?

 

KingKaido

Dabbler
Joined
Oct 23, 2022
Messages
23
@KingKaido did you end up trying Nextcloud VM/AIO on a virtualized TrueNAS NFS share? I've had success with the VM on my new server, though I haven't officially migrated my old server's data yet; I'm still testing different recovery scenarios. Moving the database and configuration from a TrueCharts NC app seems tricky; not sure if it's worth the hassle... I might just move the data and set up everything else from scratch. Given I'm a total noob, I found Proxmox quite easy to set up and navigate so far. PCIe/USB passthrough has been working OK. My hardware:

Motherboard: ASRock Rack X570D4U
CPU: AMD Ryzen 7 5700X
RAM: 2 x 32 GB 3200 MHz ECC UDIMMs (Micron)
Boot devices: 2 x 250 GB Samsung 870 EVO (ZFS mirror, running Proxmox)
VM storage: Samsung 970 EVO Plus NVMe
NAS storage: a bunch of IronWolf/WD Red CMR HDDs (attached to an LSI 9207-8i, firmware v20.00.07.00, passed through to the virtual TrueNAS CORE)
GPU: NVIDIA Quadro P2200 (passed through to a Windows VM for now)
PSU: Corsair RM750x
Case: Fractal Node 804 with additional fans, one blowing directly on the LSI card's heat sink
UPS: none yet...
Hey again :)

I'm running Nextcloud AIO in a Debian Docker VM and it's been really solid. I even have it set up with HTTPS/SSL via Nginx Proxy Manager using my own domain. I do need to try the Nextcloud VM, but when I migrated to Proxmox I wanted to set everything up quickly, so I stuck with what I know.
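For anyone following along, starting the AIO master container behind an external reverse proxy looks roughly like this; this is a sketch based on the nextcloud/all-in-one README's reverse-proxy instructions, so check the README for your version before copying it:

```shell
# AIO master container, reverse-proxy mode: the Apache container will listen
# on port 11000 for the proxy (Nginx Proxy Manager here) to forward to.
sudo docker run \
  --init --sig-proxy=false \
  --name nextcloud-aio-mastercontainer \
  --restart always \
  --publish 8080:8080 \
  --env APACHE_PORT=11000 \
  --env APACHE_IP_BINDING=0.0.0.0 \
  --volume nextcloud_aio_mastercontainer:/mnt/docker-aio-config \
  --volume /var/run/docker.sock:/var/run/docker.sock:ro \
  nextcloud/all-in-one:latest
```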

Getting the data from TrueNAS over to a VM will be tricky, as there are many Nextcloud implementations (e.g. with a separate database or one built in, or using different Docker providers). It might be a challenge, but since I copied my TrueNAS Plex data over to a Debian Docker setup, it must be possible. What you need is to download heavyscript on your TrueNAS, then mount your Nextcloud containers' datasets (via the TrueNAS shell) using:

Code:
# mount the app's dataset with heavyscript, following its CLI prompts
--mount

# then change into wherever the dataset is temporarily mounted
cd /mnt/**insertpool**/directory/where/nextcloud/is/temporarily/stored

# look through the data (via cd, or by creating an SMB share), then copy all
# the Nextcloud data (not the dataset) to your main pool:
cp -R /whereitsstored /newlocation

# unmount when you're done -- especially if you want to run Nextcloud on
# TrueNAS again
--unmount


Hopefully all your data is there; then when you set up either NC AIO or the VM, you've got to copy the data over.
But tbh, it might be easier to just download all the data that's specifically on Nextcloud and recreate everything from scratch.

In terms of NFS and SMB with Nextcloud, yeah, it's working brilliantly. I actually have both set up: a folder passed through via a Debian NFS mount, and SMB via Nextcloud. I prefer the SMB folder because it shows last-modified dates and folder sizes.
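If the SMB side is done with Nextcloud's External storage app, the CLI equivalent looks roughly like this; mount point, host, share name, and credentials below are all placeholders, and the commands assume you run them from the Nextcloud installation directory:

```shell
# Enable the External storage app, then create an SMB mount (placeholders).
sudo -u www-data php occ app:enable files_external
sudo -u www-data php occ files_external:create /NAS smb password::password \
  -c host=192.168.1.10 -c share=nextcloud -c user=ncuser -c password=secret

# Verify the new mount is listed.
sudo -u www-data php occ files_external:list
```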

P.S. I'll probably update the comment with better formatting/grammar haha
 
Last edited:

daryusnslr

Cadet
Joined
Oct 26, 2023
Messages
9
Hey again :)

I'm running Nextcloud AIO in a Debian Docker VM and it's been really solid. I even have it set up with HTTPS/SSL via Nginx Proxy Manager using my own domain. I do need to try the Nextcloud VM, but when I migrated to Proxmox I wanted to set everything up quickly, so I stuck with what I know.

Getting the data from TrueNAS over to a VM will be tricky, as there are many Nextcloud implementations (e.g. with a separate database or one built in, or using different Docker providers). It might be a challenge, but since I copied my TrueNAS Plex data over to a Debian Docker setup, it must be possible. What you need is to download heavyscript on your TrueNAS, then mount your Nextcloud containers' datasets (via the TrueNAS shell) using
Thanks, yes, a full migration seems tricky. In my case I didn't care about the old Nextcloud user config etc. in the TrueCharts app; I only cared about the data. After setting up the VM, I copied the files over into the NFS share and ran
Code:
sudo -u www-data php /var/www/nextcloud/occ files:scan --all
inside the VM to register them in the database, so no time wasted!
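(For anyone replicating this: the NFS share has to be mounted inside the VM first, e.g. with an /etc/fstab line along these lines, where the server IP, export path, and mount point are placeholders:)

```
192.168.1.10:/mnt/tank/nextcloud  /mnt/ncdata  nfs  defaults,_netdev  0  0
```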
 