Question on P2V Migration Strategy

maxtoid

Cadet
Joined
Jun 4, 2020
Messages
3
Hello all,

I've been researching virtualizing my current TrueNAS installation due to consistent issues with jails. From all the research I've done I feel like I have a pretty clear understanding of the process, but just wanted to run my plan by those with far more experience than me.

Currently I have a SuperMicro SuperStorage 6049P-E1CR24L server running TrueNAS 12.0-RELEASE. The OS is installed on a Samsung 970 Pro 256GB NVMe M.2 SSD, with a second NVMe SSD (Samsung 970 EVO Plus 512GB M.2) for jail storage. The 6049P has two onboard 10GbE ports via an Intel X557, and I have a PCIe add-on card (Intel X550-T2) for two more 10GbE ports.

After my recent upgrade to TrueNAS 12.0 (and after I upgraded all my storage pools and jails) I began having significant issues with the jails. I don't know enough about FreeBSD to troubleshoot them reliably, and I can't downgrade to 11.3 without losing my data since the storage pools are already upgraded. So I'm making the move to virtualize and get away from jails (a move I'd been wanting to make for a while, given jail issues even on 11.3; this situation is simply accelerating my timeline).

I should mention that I'm looking at Proxmox for this setup. The guides recommend ESXi, but from my research it looks like Proxmox is more stable with FreeNAS/TrueNAS these days than it used to be. Any reason I should reroute and go with ESXi over Proxmox?

From all my research, it seems like I should be able to back up the data from each of my jails (since I'll have to recreate them from scratch in Docker anyway) and follow this approach to get my desired virtual setup:
  1. Pull the 24 3.5in HDDs from the chassis (just for extra safety; I recognize this step isn't actually required, but it doesn't hurt during the reformat process).
  2. Install Proxmox on the 256GB NVMe SSD that TrueNAS was previously installed on.
  3. Configure the 512GB NVMe SSD for VM storage.
  4. Set up a VM for TrueNAS and install TrueNAS 12.0-RELEASE.
  5. Re-insert all 24 3.5in HDDs and pass both storage controllers (I believe the 6049P has two controllers handling the 24 3.5in bays) through to the TrueNAS VM (a rough command sketch follows this list).
  6. Pass the X550-T2 card through to the TrueNAS VM for 10GbE connectivity.
  7. Set up TrueNAS 12.0 similarly to my current installation, then import the zpools and re-create the shares.
  8. Set up Docker and Portainer, and rebuild all previous jails from scratch as Docker containers.
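For reference, here is a rough sketch of what I understand steps 5-7 to look like from the Proxmox command line (the VM ID, PCI addresses and pool name are placeholders I'd take from lspci and the GUI, so please correct me if this is off):

Code:
# On the Proxmox host: pass the storage controllers and the X550-T2 to the TrueNAS VM
# (VM ID and PCI addresses are placeholders; pcie=1 assumes the q35 machine type)
qm set 100 -hostpci0 01:00.0,pcie=1   # first SAS controller
qm set 100 -hostpci1 02:00.0,pcie=1   # second SAS controller, if there really are two
qm set 100 -hostpci2 03:00.0,pcie=1   # Intel X550-T2

# Inside the TrueNAS VM afterwards, the existing pools should be importable
# via the GUI (Storage -> Import Pool) or from a shell:
zpool import          # lists pools visible to the VM
zpool import tank     # 'tank' is a placeholder pool name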
Anything I'm missing or anything in this plan that raises a red flag?

The one other question I had: a few of my jails, such as Plex and Nextcloud, relied on mount points. My research shows that I can mount NFS shares in Docker, but I'm wondering whether that's the best approach given the new setup. If I'm trying not to run any jails in the TrueNAS VM, it seems like NFS mount points should be sufficient for Plex and Nextcloud, but I wanted to double-check people's thoughts on that.
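From what I've read, the Docker side of the NFS mounts would look something like this (the server address, export path and image are just placeholders to show what I mean, not a tested config):

Code:
# Create an NFS-backed volume pointing at a share exported by the TrueNAS VM
docker volume create \
  --driver local \
  --opt type=nfs \
  --opt o=addr=192.168.1.10,rw,nfsvers=4 \
  --opt device=:/mnt/tank/media \
  plex-media

# Use it in a container; the in-container mount point depends on the image's docs
docker run -d --name plex -v plex-media:/media plexinc/pms-docker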

Appreciate any tips / insight / recommendations!
 

Dice

Wizard
Joined
Dec 11, 2015
Messages
1,410
Hello,

I used FreeNAS on ESXi from 2016 to 2019. By around 2019 I had already been running pfSense virtualized on Proxmox, with a passed-through external PCIe quad-port NIC, successfully for about three years. Being a usual suspect around these forums, I stuck closely to the advice provided here and resisted the temptation to put FreeNAS on Proxmox. As my experience with ESXi steadily deteriorated, and inversely so with Proxmox, I migrated everything over to Proxmox. Not a single regret.

Red Flags? I need to start with those.
- You really want to pass the Broadcom SAS3008 AOC controller through to the TrueNAS VM. This is the most important part of a successful virtualization mission. I would not consider it done until the controller's driver is blocked from loading in the host kernel.
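A minimal sketch of what I mean by blocking the driver, assuming the SAS3008 uses the mpt3sas module (confirm with lspci -nnk; the vendor:device IDs below are examples, take yours from lspci -nn):

Code:
# Identify the controller and the driver currently claiming it
lspci -nnk | grep -A3 -i sas3008

# Either blacklist the driver on the host...
echo "blacklist mpt3sas" > /etc/modprobe.d/blacklist-mpt3sas.conf

# ...or bind the device to vfio-pci by vendor:device ID
echo "options vfio-pci ids=1000:0097" > /etc/modprobe.d/vfio.conf

# Rebuild the initramfs and reboot
update-initramfs -u -k all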

- Storage configuration: typically you'd want a single pool, or pools divided by performance requirement and, in specific cases, by redundancy requirement.
Closer to the typical forum guidelines would be a single pool out of the 8TBs, spreading those drives across the vdevs (the 8TBs at least): 18 drives is a perfect 3x RAIDZ2, six drives wide.
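Purely to illustrate that layout (you'd build the pool through the TrueNAS UI rather than the shell, and the pool and device names here are made up):

Code:
# 18 drives as three 6-wide RAIDZ2 vdevs in one pool
zpool create tank \
  raidz2 da0  da1  da2  da3  da4  da5  \
  raidz2 da6  da7  da8  da9  da10 da11 \
  raidz2 da12 da13 da14 da15 da16 da17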

- The RAIDZ1 on 5x 10TB drives is a massive <yikes>.

- WD80EFAX: are these part of the SMR scandal or not?

- You don't need to pass through NICs, at least not for starters. If you experience network speed issues that seem weird, passing a NIC through is a route to experiment with later to see whether it helps.
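That is, start the TrueNAS VM with a plain virtio NIC on the host bridge; something along these lines (VM ID and bridge name are whatever your setup uses):

Code:
qm set 100 --net0 virtio,bridge=vmbr0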

On your Proxmox setup plan, a few comments:
- You have such a beautiful amount of overkill in the host drive setup.

- The Proxmox system drive will leave free space for you to store VMs.
That is, your 256GB alone will be plentiful; adding the 512GB NVMe directly to the host merely for VM storage is a waste.
I'd see two better use cases for it. I have not tried passing an individual NVMe drive through, but if it can be done safely, I'd look at using it either as L2ARC or as fast storage inside TrueNAS, with the intention of hosting VMs on it.
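If you do end up handing the 512GB NVMe to the TrueNAS VM, two rough options I'm aware of (untested by me for NVMe, so treat this as a sketch; the device path and VM ID are placeholders):

Code:
# a) Whole-disk passthrough as a virtual disk (simpler, no IOMMU involved)
qm set 100 -scsi1 /dev/disk/by-id/nvme-Samsung_SSD_970_EVO_Plus_512GB_XXXXXXXX

# b) PCIe passthrough of the NVMe controller itself (closer to bare metal)
lspci -nn | grep -i nvme              # find its PCI address first
qm set 100 -hostpci3 04:00.0,pcie=1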

TrueNAS excels at delivering storage. Thus, in Proxmox setups I like to have the virtualized TrueNAS serve storage back to the host for its VMs. In my setup, I have a cluster where one virtualized instance of TrueNAS serves the whole cluster with storage for VMs.
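For reference, attaching an NFS export from the TrueNAS VM back to Proxmox as VM storage is a one-liner (storage name, address and export path are placeholders):

Code:
pvesm add nfs truenas-vmstore \
  --server 192.168.1.10 \
  --export /mnt/tank/vmstore \
  --content images,rootdir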

Random Heads up:
- Be careful during setup: the 'local-zfs' dialogue is somewhat hidden in the installer (browse around during installation and you'll find it). It is important, as otherwise you'll lose out on replication of this storage area and the VMs on it, something that'll come in handy when playing with clusters (which is an unavoidable path...).

- Follow the passthrough recommendations in the PVE manual, and do the checks to confirm the driver modules are actually being blocked.
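The checks I mean are roughly these (the PCI address and module name are examples for the SAS controller):

Code:
# The passed-through device should report vfio-pci, not mpt3sas
lspci -nnk -s 01:00.0 | grep "Kernel driver in use"

# The storage driver should not be loaded on the host at all
lsmod | grep mpt3sas

# And the IOMMU should actually be enabled
dmesg | grep -e DMAR -e IOMMU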

- Startup & shutdown order: a big hassle for all-in-one setups, since client VMs relying on storage from the TrueNAS VM need that storage to become available before they boot. In ESXi this was a major inconvenience. In my experience it is a non-issue in Proxmox (maybe, guessing wildly, the VMs keep trying to start rather than timing out as ESXi does?). If you'd like, boot order and delays can be set individually.
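For example (VM IDs and delays made up): give the TrueNAS VM the lowest order with a generous 'up' delay and let the clients follow:

Code:
qm set 100 --startup order=1,up=180   # TrueNAS first, allow ~3 minutes
qm set 101 --startup order=2,up=30    # e.g. the Docker host VM next
qm set 102 --startup order=3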

- Lack of a guest agent: this means shutdowns in Proxmox are not as graceful as in ESXi. There are potential fixes, such as experimenting with different CPU emulation types; I've used the default successfully. I recognize this as the largest pitfall of Proxmox & BSD at this point in time.

- There is a lot you can learn from, and be inspired by, in the documented ESXi implementations around the forum. Some of the scripts, setups and cool features are really well thought out. Look through the resource section or search; IIRC Spearfoot and Stux have done excellent script setups, and joeschmuck has a fantastic thread. Insights from those readings definitely transfer to Proxmox setups.

- You have a dual-socket system. Those carry a performance penalty with passed-through NVMe drives if not configured correctly. I'll leave a link to a source on the Proxmox forum.
In case the link dies, the TL;DR is: enable NUMA, give the VM both sockets, and set the CPU type to 'host' to allow proper mapping and full potential.
Be mindful of the CPU settings, as they can have an impact on migration of VMs within a cluster, definitely if the host CPUs are different.
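The CLI equivalent of that TL;DR, with a placeholder VM ID and core count:

Code:
# NUMA on, both sockets exposed, CPU type 'host'
# note: the 'host' CPU type can complicate live migration between dissimilar hosts
qm set 100 --numa 1 --sockets 2 --cores 8 --cpu host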


I wish you the best of luck.
Hope to hear more about your pilgrimage to Proxmox ;)
 