Best practices for boot/data drive setup (TN Scale on Proxmox)

mikeintexas

Dabbler
Joined
Mar 11, 2013
Messages
14
Hello Everyone,

I've read a bunch of threads on virtualizing TN Core and Scale, but much of it is about running on top of ESXi, which I have very little experience with. I've decided to try and virtualize almost everything in my home environment (except my gaming PC...for now) and want to give Proxmox a go.

In my few weeks of testing Proxmox on a dedicated bare metal server, I've had several Win10 VMs running well with no issues. I've enabled IOMMU and have been able to pass through HBAs and NICs to those VMs, so I feel semi-confident in giving it a shot with a virtualized instance of TN Scale. My Linux command line experience is limited, but I can follow a how-to guide like no one's business. :D Just being honest about my Linux-fu.

I've not tested virtual anything with TN Scale yet; I figured I'd come here first and hopefully avoid a lot of mistakes and frustration. :)

I understand the hardware has to support what I'm trying to accomplish, and am building a new server to replace all the old bare metal servers and the dozen or so Win10 boxes scattered throughout the house.

Supermicro H12SSL
Epyc 7302P
512GB Reg ECC
2 x Broadcom/LSI 9400-16i HBA
8 x 4TB SSDs (Crucial/consumer grade drives - for live data storage and VMs)
6 x 18TB Seagate Exos spinners (for backup of VMs and data storage)
Intel X520-2 for in-rack comms b/t servers (will be building a separate TN box for a backup server)
Onboard 1GbE NICs for serving data to users

My question is regarding boot and data drives for TN itself. As I understand it, it's best practice to pass an HBA and its drives directly to TN for storage/shares. What about the boot drives? For bare metal anything, I always have a mirror set up for the boot drive.

As I see it, I have two choices (maybe there are more?):
1. Create a dedicated 2-drive mirror ZFS pool on Proxmox and install TN Scale on it, then pass the HBA with the storage drives to TN
2. Install TN on the HBA, using 2 of the drives connected to the HBA

What is the best practice and what am I missing? My goal here is stability and reliability. Thank you in advance.
 

KingKaido

Dabbler
Joined
Oct 23, 2022
Messages
23
Hey, it's a shame no one replied to your dilemma. What did you end up doing?

I'm currently going through the same problem. I currently run TrueNAS SCALE bare metal, but I don't really like the way SCALE handles apps and iGPU passthrough to VMs (you can't pass your iGPU through to a VM, only to apps), so this is annoying, as I wanted to run all my apps in a Debian Docker VM, and moving Plex over is the last hurdle.

So my plan is to use my 2 x 1TB NVMe drives (which at the moment are used for special metadata, but I can remove them from the pool) and install Proxmox on them, then pass my HBA through from Proxmox to a TrueNAS SCALE VM and hopefully be up and running fairly quickly.

But since I currently use 2 x 120GB SSDs as my boot drives, do I retire them and use Proxmox storage (which should be a ZFS mirror), allocating 35-40GB to the boot drive (plus I'll have the option of snapshots of the boot drive via Proxmox ZFS)? Or do I connect the 2 SSDs to the HBA so the TrueNAS VM has full access to them, and in theory I should be able to boot from them if anything goes wrong with Proxmox? But I'm not 100% sure it's as easy as that.

But yeah, any help will be appreciated!

Edit:
https://www.truenas.com/community/threads/proxmox-where-to-install-truenas-local-zfs-or-physical-drive-attached-to-hba.108678/
has some great discussion on how to do it
 

Heracles

Wizard
Joined
Feb 2, 2018
Messages
1,401
TrueNAS needs direct physical access to the drives that ZFS is on.
The thing is, the boot device itself is also ZFS, so the best option is to pass through the boot device as well.

But corruption of the boot device is nowhere near as consequential as corruption in the data pool. As such, as long as you have good and frequent backups of your config, you should be in a good-enough situation.

So just make sure you back up your config to the data pool, or somewhere outside of Proxmox, at least daily. Should something go wrong, you re-install TrueNAS, re-import that config, and then re-import the pool.
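
As an illustration, that daily backup can be as simple as a cron job in TrueNAS that copies the config database to a dataset on the data pool. A minimal sketch, assuming a pool named tank and a configs dataset (adjust to your own names):

# Run daily as root from a TrueNAS cron job (paths and pool name are assumptions)
cp /data/freenas-v1.db "/mnt/tank/configs/truenas-config-$(date -I).db"
# Keep only the last 30 copies (assumed retention)
ls -1t /mnt/tank/configs/truenas-config-*.db | tail -n +31 | xargs -r rm -f

That is the same database the GUI exports when you download the config from the web interface, so either method gives you a usable backup.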
 

KingKaido

Dabbler
Joined
Oct 23, 2022
Messages
23
TrueNAS needs direct physical access to the drives that ZFS is on.
The thing is, the boot device itself is also ZFS, so the best option is to pass through the boot device as well.

But corruption of the boot device is nowhere near as consequential as corruption in the data pool. As such, as long as you have good and frequent backups of your config, you should be in a good-enough situation.

So just make sure you back up your config to the data pool, or somewhere outside of Proxmox, at least daily. Should something go wrong, you re-install TrueNAS, re-import that config, and then re-import the pool.
Thank you for the speedy response!

Okay, that makes a lot of sense.
So in theory, if I were to install Proxmox on a separate SSD and attach my 2 TNS boot SSDs to my HBA, I could just run TrueNAS in a VM without having to reinstall anything?
 

Heracles

Wizard
Joined
Feb 2, 2018
Messages
1,401
2 x 120GB SSD
That is overkill as boot drives...

So indeed, I would save these for something else.

So the first step is to back up everything (config and data) in case something goes wrong during the modification.
You turn off your TrueNAS.
You re-install TrueNAS using virtual drives as boot drives.
You pass the controller through to that virtual TrueNAS.
You import your config into that TrueNAS.
If your pool does not load by itself with that, you can try to import it manually.

Once running, make sure to do regular backups of your config outside of TrueNAS.

With that, the ZFS structure that hosts your data is safe (on disks reached through the controller you passed through). The ZFS boot pool is not as well protected, but the regular backups will help you re-install when needed, and that process is 100% safe for your data pool.
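
For reference, on the Proxmox side that boils down to creating the VM with a small virtual boot disk and handing it the whole HBA. A rough sketch, where the VM ID, storage name, ISO file and PCI address are all assumptions you would substitute with your own values:

# Find the HBA's PCI address on the Proxmox host (e.g. 01:00.0 for an LSI/Broadcom SAS controller)
lspci | grep -i -e lsi -e sas
# Create the TrueNAS VM with a 32 GB virtual boot disk on the Proxmox ZFS storage
qm create 100 --name truenas --memory 32768 --cores 4 --ostype l26 \
  --net0 virtio,bridge=vmbr0 --scsihw virtio-scsi-pci \
  --scsi0 local-zfs:32 --cdrom local:iso/TrueNAS-SCALE.iso
# Pass the whole controller (and therefore every disk attached to it) through to the VM
qm set 100 --hostpci0 0000:01:00.0

After installing and uploading the config, do any manual pool import from the TrueNAS web UI's import pool function rather than the command line, so the middleware knows about the pool.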
 

KingKaido

Dabbler
Joined
Oct 23, 2022
Messages
23
That is overkill as boot drives...

So indeed, I would save these for something else.
Haha, for real? They are leftover drives from when I used to do GPU mining, but it's good to know I can use them for other stuff now.
So the first step is to back up everything (config and data) in case something goes wrong during the modification.
You turn off your TrueNAS.
You re-install TrueNAS using virtual drives as boot drives.
You pass the controller through to that virtual TrueNAS.
You import your config into that TrueNAS.
If your pool does not load by itself with that, you can try to import it manually.

Once running, make sure to do regular backups of your config outside of TrueNAS.

With that, the ZFS structure that hosts your data is safe (on disks reached through the controller you passed through). The ZFS boot pool is not as well protected, but the regular backups will help you re-install when needed, and that process is 100% safe for your data pool.
Okay, brilliant! I will try this over the weekend, and hopefully it goes smoothly...
Many thanks :)
 

Whattteva

Wizard
Joined
Mar 5, 2013
Messages
1,824
I've read a bunch of threads on virtualizing TN Core and Scale, but much of it is about running on top of ESXi, which I have very little experience with. I've decided to try and virtualize almost everything in my home environment (except my gaming PC...for now) and want to give Proxmox a go.
You're in luck. I have been virtualizing TN CORE on Proxmox for around 9 months now with no issues.

In my few weeks of testing Proxmox on a dedicated bare metal server, I've had several Win10 VMs running well with no issues. I've enabled IOMMU and have been able to pass through HBAs and NICs to those VMs, so I feel semi-confident in giving it a shot with a virtualized instance of TN Scale. My Linux command line experience is limited, but I can follow a how-to guide like no one's business. :D Just being honest about my Linux-fu.

I've not tested virtual anything with TN Scale yet; I figured I'd come here first and hopefully avoid a lot of mistakes and frustration. :)
Are you extensively using the Apps feature on SCALE? If not, run CORE instead. It's more stable and its ZFS ARC handling is way better (you can use 99% of the allocated RAM). In my opinion, SCALE only makes sense if you plan to use the Apps feature extensively.... But then, you're running Proxmox!!! You can probably spin up another VM that is better suited for it.

My question is regarding boot and data drives for TN itself. As I understand it, it's best practice to pass an HBA and its drives directly to TN for storage/shares. What about the boot drives? For bare metal anything, I always have a mirror set up for the boot drive.
It's best practice to pass the HBA through for the ZFS data array, not really the boot drive. TrueNAS is designed to be a "firmware", so loss of the boot drive is virtually inconsequential. You can recreate that boot drive in mere minutes as long as you have the config file. In fact, since you're virtualizing, you don't really need to use a physical drive. Just install it on a ZVOL on the boot drive of your Proxmox install. In my experience, Proxmox is not built like a firmware the way TrueNAS is, so that one you probably do want to mirror.
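
A nice side effect of that, as a rough sketch (the VM ID and zvol name are assumptions; check yours with zfs list): the virtual boot disk is just a zvol on the Proxmox host, so you can snapshot it before a TrueNAS upgrade and roll back if the update goes badly.

# On the Proxmox host, the TrueNAS boot disk shows up as an ordinary zvol
zfs list -t volume
# Snapshot it before an upgrade, and roll back (with the VM powered off) if needed
zfs snapshot rpool/data/vm-100-disk-0@pre-upgrade
zfs rollback rpool/data/vm-100-disk-0@pre-upgrade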

Hope that helps.

Bonus tip:
I'd probably switch to enterprise SSDs for the Proxmox drives where you plan on installing VMs. Consumer drives are notoriously slow on Proxmox when you're running really intensive IO operations like OS installs, bulk file transfers, PBS, or Ceph. In my experience, any of those operations will spike your IO delay to 30+%, to the point that the system is almost unusable and all VMs on that drive will crawl or stop responding entirely.
 

ChrisRJ

Wizard
Joined
Oct 23, 2020
Messages
1,919
So my plan is to use my 2 x 1TB NVMe drives (which at the moment are used for special metadata, but I can remove them from the pool) [..]
Are you sure about that? IIRC a metadata vdev cannot be removed from the pool once it is there. I may be wrong here, but better safe than sorry.
 

ChrisRJ

Wizard
Joined
Oct 23, 2020
Messages
1,919
Bonus tip:
I'd probably switch to enterprise SSDs for the Proxmox drives where you plan on installing VMs. Consumer drives are notoriously slow on Proxmox when you're running really intensive IO operations like OS installs, bulk file transfers, PBS, or Ceph. In my experience, any of those operations will spike your IO delay to 30+%, to the point that the system is almost unusable and all VMs on that drive will crawl or stop responding entirely.
Would that include SSDs like the Samsung 980/990 PRO? Their QVO drives are of course a bad idea, but the PROs ...
 

Etorix

Wizard
Joined
Dec 30, 2020
Messages
2,134
Are you sure about that? IIRC a metadata vdev cannot be removed from the pool once it is there. I may be wrong here, but better safe than sorry.
It should be the same rule as for the "main" data vdev: Can remove if the pool is fully made of mirrors; cannot remove any vdev if raidz# is involved somewhere.
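
For what it's worth, a minimal sketch of what the removal looks like from a shell, assuming a pool named tank (zpool status shows the real vdev name, and the Remove button in the TrueNAS UI does the same thing):

# Confirm the layout first: removal only works when no raidz vdev is present in the pool
zpool status tank
# Remove the special metadata mirror by the vdev name shown in zpool status (name assumed)
zpool remove tank mirror-2
# The vdev's contents are evacuated onto the remaining vdevs; progress shows up in zpool status
zpool status tank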
 

KingKaido

Dabbler
Joined
Oct 23, 2022
Messages
23
It should be the same rule as for the "main" data vdev: Can remove if the pool is fully made of mirrors; cannot remove any vdev if raidz# is involved somewhere.
Okay, good, haha. I was just googling around trying to see if it was possible; I think I dodged a bullet by using striped mirrors for my setup. I plan to upgrade to raidz2 soon, but I need to buy 2 more drives. But yeah, I think I'm done with the special metadata vdev; I haven't really seen the benefit, since most of my data is videos, and when I run VMs/apps I use an NVMe pool.

Also, worst case, I have a 1 x 10TB backup pool with my key data on it (the rest can be downloaded again), so I'd just have to wipe the pool and restore.
 

KingKaido

Dabbler
Joined
Oct 23, 2022
Messages
23
[Screenshot attached: Screenshot 2023-10-05 at 08.27.12.png]

I have the option to remove the metadata vdev, but tbh it could error when you actually try to remove it.
I'll report back when I do the big move to Proxmox!
 

Whattteva

Wizard
Joined
Mar 5, 2013
Messages
1,824
Would that include SSDs like the Samsung 980/990 PRO? Their QVO drives are of course a bad idea, but the PROs ...
According to this link below, you should stay away from the 800 series (both EVO and PRO). It only has data for the 950 PRO, but the numbers look good, so I would say the 900 series is probably a safe choice. The 800 series, on the other hand... those single-digit numbers are so atrocious that even high-quality HDDs would give you better performance.

 

KingKaido

Dabbler
Joined
Oct 23, 2022
Messages
23
Hey again, sorry to use this thread again, but tomorrow I'm finally going to switch to Proxmox and I have a dilemma.

I currently have a Debian (Docker) VM on TrueNAS SCALE, but when I install Proxmox, create the TrueNAS VM, and restore, I'm assuming I can't run the Debian VM inside the new TrueNAS SCALE VM, because it would be nested virtualisation...

Would this work as a temporary solution until I recreate the VM in Proxmox? Or would I need to back up the data to a local machine with an app like Syncthing, then recreate the VM in Proxmox and copy the files back?
 

ChrisRJ

Wizard
Joined
Oct 23, 2020
Messages
1,919
IIRC there is direct support for Docker in Proxmox, so you would not need the VM per se. Also, if the VM is important to you, there is a disaster recovery plan in place, right? ;-) If so, that would be a great opportunity to test it.
 

Whattteva

Wizard
Joined
Mar 5, 2013
Messages
1,824
IIRC there is direct support for Docker in Proxmox, so you would not need the VM per se. Also, if the VM is important to you, there is a disaster recovery plan in place, right? ;-) If so, that would be a great opportunity to test it.
Proxmox does not have Docker support. It has LXC support, which is different. Well, I am still running 7.3 (been dragging my feet to upgrade to 8.0), so I'm not sure if 8.0 adds support for it.
 

daryusnslr

Cadet
Joined
Oct 26, 2023
Messages
9
I hope it's OK for me to jump into your discussion with a follow-up question (this is my first time posting on this forum!). I intend to set up something similar to what KingKaido describes, i.e. a Proxmox VM for TrueNAS using HBA PCIe passthrough. What would be the best way for other VMs running on the same Proxmox node (e.g. a Nextcloud VM) to access the ZFS datasets created by this virtualized TrueNAS? Would I need to create an NFS share? I'm a noob in virtualization; I have only been using TN Scale on bare metal in the past.
 

KingKaido

Dabbler
Joined
Oct 23, 2022
Messages
23
I hope it's OK for me to jump into your discussion with a follow-up question (this is my first time posting on this forum!). I intend to set up something similar to what KingKaido describes, i.e. a Proxmox VM for TrueNAS using HBA PCIe passthrough. What would be the best way for other VMs running on the same Proxmox node (e.g. a Nextcloud VM) to access the ZFS datasets created by this virtualized TrueNAS? Would I need to create an NFS share? I'm a noob in virtualization; I have only been using TN Scale on bare metal in the past.
Hey, yeah, pretty much NFS.
I plan to connect all my VMs back to the TrueNAS VM with NFS shares; I might also look into locking the NFS shares down by IP address for security reasons. For Nextcloud, I think I'll create a separate (Debian + Docker) VM for it, allocate 2GB of RAM, run Nextcloud AIO, and set up the container to store all its config files via an NFS share too, so the VM doesn't need much storage allocated to it, and then connect to my (TrueNAS) data via NFS or SMB.
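
For anyone following along, the client side of that is small. A minimal sketch, assuming a made-up TrueNAS VM address and dataset path (the IP restriction itself is configured per share on the TrueNAS side):

# On the Debian VM: install the NFS client and mount the TrueNAS export at boot
apt install nfs-common
mkdir -p /mnt/appdata
# /etc/fstab entry (IP address and paths are assumptions)
192.168.1.50:/mnt/tank/appdata  /mnt/appdata  nfs  defaults,_netdev  0  0
# Test it without rebooting
mount -a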
 

daryusnslr

Cadet
Joined
Oct 26, 2023
Messages
9
Hey, yeah, pretty much NFS.
I plan to connect all my VMs back to the TrueNAS VM with NFS shares; I might also look into locking the NFS shares down by IP address for security reasons. For Nextcloud, I think I'll create a separate (Debian + Docker) VM for it, allocate 2GB of RAM, run Nextcloud AIO, and set up the container to store all its config files via an NFS share too, so the VM doesn't need much storage allocated to it, and then connect to my (TrueNAS) data via NFS or SMB.
Thanks! OK, so hopefully it works... I'm still a bit nervous about going down this path, as my system's primary function will be storage, plus a couple of VMs for miscellaneous purposes; so TrueNAS on bare metal would have been ideal based on what I've learned on the forum, but Nextcloud apps on TN Scale are too annoying when one needs to update them. I'm curious, what made you choose TN Scale vs TN Core to virtualize on Proxmox? And why NC AIO vs NcVM? I've tried the latter on a different system and the scripted initial setup went very smoothly.
 

KingKaido

Dabbler
Joined
Oct 23, 2022
Messages
23
Thanks! OK, so hopefully it works... I'm still a bit nervous about going down this path, as my system's primary function will be storage, plus a couple of VMs for miscellaneous purposes; so TrueNAS on bare metal would have been ideal based on what I've learned on the forum, but Nextcloud apps on TN Scale are too annoying when one needs to update them. I'm curious, what made you choose TN Scale vs TN Core to virtualize on Proxmox? And why NC AIO vs NcVM? I've tried the latter on a different system and the scripted initial setup went very smoothly.
I'm still debating TrueNAS Scale vs Core for the VM. I started using TrueNAS Scale last year, so I'm more comfortable with it and Linux, but everyone says TrueNAS Core is more stable, plus it uses RAM more efficiently (on Scale, the ZFS ARC is limited to 50% of available RAM by default, but there is a workaround where you set the zfs_arc_max value, until TrueNAS Scale gets better RAM management built in). I think I'll use Scale.
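
(For reference, the workaround people use is a post-init command that writes the ARC cap to the ZFS module parameter; the value below is only an example for a 128GB machine, not a recommendation:)

# Assumed example: cap the ARC at 112 GiB (112 * 1024^3 bytes) on a 128GB box
# Added as a post-init command in the TrueNAS SCALE UI so it survives reboots
echo 120259084288 > /sys/module/zfs/parameters/zfs_arc_max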

For Nextcloud, I must thank you for showing me this; it looks like a painless way to run Nextcloud. Before seeing NcVM, I was using Nextcloud with TrueCharts, but they keep breaking things with updates, which got frustrating. Plus, I recently learned about NC AIO and tried it on my Debian Docker VM, and it all seemed fine setup-wise, so I was just going to continue using that, tbh. But yeah, NcVM looks very tempting, lmao.
 