This content follows TrueNAS 25.04 (Fangtooth) releases.
TrueNAS 25.04 introduces support for Containers (named Instances in pre-25.04.2 releases), enabling lightweight isolation similar to jails in TrueNAS CORE.
In TrueNAS 25.04.2 and later, virtual machines are created and appear on the Virtual Machines screen.
Legacy VMs created in 25.04.0 or 25.04.1 using the Instances feature, and some VMs migrated to those versions from TrueNAS 24.10, continue to function and appear on the Containers screen.
Legacy VMs on the Containers screen do not autostart in 25.10 or later.
VMs created in 24.10 or earlier appear in different locations depending on the migration path:
Previously migrated to 25.04.0 or 25.04.1
VMs with Zvols that were imported using the Move option appear on the Containers screen.
VMs with Zvols that were imported using the Clone option appear on the Virtual Machines screen.
Direct upgrade to 25.04.2
VMs on 24.10 systems that upgrade directly to 25.04.2 (skipping 25.04.0/25.04.1) automatically migrate to the Virtual Machines screen without manual action.
Users with existing VMs on the Containers screen should consider migrating associated zvols to the Virtual Machines screen at this time to ensure compatibility with future TrueNAS releases.
For more information, see Migrating Instances VMs.
Containers (Linux system containers) are an experimental feature intended for community testing only.
Functionality could change significantly between releases, and containers might not upgrade reliably.
Use this feature for testing purposes only—do not rely on it for production workloads.
Long-term stability is planned for future TrueNAS Community Edition releases.
Make all configuration changes using the TrueNAS UI.
Operations using the command line are not supported and might not persist on upgrade.
For assistance or to discuss this feature with other TrueNAS users, visit our community forums. To report bugs, submit an issue on TrueNAS Jira.
TrueNAS includes built-in virtualization capabilities that let you run multiple operating systems on a single system, maximizing hardware utilization and consolidating workloads.
A virtual machine (VM) is a software-based computer that runs inside your TrueNAS system, appearing as a separate physical machine to the operating system installed within it. VMs use virtualized hardware components, including network interfaces, storage, graphics adapters, and other devices, providing complete isolation between different operating systems and applications.
VMs offer stronger isolation than containers but require more system resources, making them ideal for running full operating systems, legacy applications, or services that need dedicated environments.
Enterprise-licensed High Availability (HA) systems do not support virtual machines.
What system resources do VMs require?
TrueNAS assigns a portion of system RAM and a new zvol to each VM.
While a VM is running, these resources are not available to the host computer or other VMs.
Virtualization requires:
x86 machine running a recent Linux kernel
Intel processor with VT (Virtualization Technology) extensions, OR
AMD processor with SVM extensions (AMD-V)
Users cannot create VMs unless the host system supports these features.
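You can confirm from a Linux shell whether the host CPU exposes these extensions. This check is informational only and not part of the TrueNAS UI; `vmx` and `svm` are the standard Linux `/proc/cpuinfo` flag names for Intel VT-x and AMD-V.

```shell
# Count CPU flags indicating hardware virtualization support.
# vmx = Intel VT-x, svm = AMD-V. A count of 0 means the CPU lacks support,
# or virtualization is disabled in the system firmware (BIOS/UEFI).
grep -E -c 'vmx|svm' /proc/cpuinfo || echo "No VT-x/AMD-V support detected"
```

If the count is 0 on hardware that should support virtualization, check the firmware settings for a disabled virtualization option.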
Creating a Virtual Machine
Before creating a VM, obtain an installer .iso or image file for the OS you intend to install, and create a zvol on a storage pool that is available for both the virtual disk and the OS install file.
If the VM needs to access local NAS storage, you need to create a network bridge to allow communication.
See Accessing TrueNAS Storage from a VM below for more information.
To create a new VM, go to Virtual Machines and click Add to open the Create Virtual Machine configuration screen.
If you have not yet added a virtual machine to your system, click Add Virtual Machines to open the same screen.
Select the operating system you want to use from the Guest Operating System dropdown list.
Compare the recommended specifications for the guest operating system with your available host system resources when allocating virtual CPUs, cores, threads, and memory size.
Change other Operating System settings per your use case.
Select UTC as the VM system time from the System Clock dropdown if you do not want to use the default Local setting.
Select UEFI from the Boot Method dropdown, unless using an older OS that requires Legacy BIOS.
Select Enable Secure Boot to enable cryptographic verification of boot loaders, operating system kernels, and drivers during VM startup.
This security feature prevents unauthorized or malicious code from running during the boot process by checking digital signatures against trusted certificates.
Secure Boot is required for Windows 11 and some Linux distributions, and can be optional or unsupported for older operating systems.
Select Enable Trusted Platform Module (TPM) to provide a virtual TPM 2.0 device for the VM.
TPM provides hardware-based security functions, including secure key storage, cryptographic operations, and platform attestation.
This is required for Windows 11 and enhances security for other operating systems that support TPM.
Select Enable Display to enable a SPICE Virtual Network Computing (VNC) remote connection for the VM.
If Enable Display is selected, the Bind and Password fields display:
Enter a display Password.
Use the dropdown menu to change the default IP address in Bind if you want to use a specific address as the display network interface. Otherwise, leave it set to 0.0.0.0.
The Bind menu populates with any existing logical interfaces, such as static routes, configured on the system.
Bind cannot be edited after VM creation.
If you selected Windows as the Guest Operating System, the Virtual CPUs field displays a default value of 2.
The VM operating system might have operational or licensing restrictions on the number of CPUs.
Do not allocate too much memory to a VM. Activating a VM with all available memory allocated to it can slow the host system or prevent other VMs from starting.
Leave CPU Mode set to Custom if you want to select a CPU model.
Use Memory Size and Minimum Memory Size to specify how much RAM to dedicate to this VM.
To dedicate a fixed amount of RAM, enter a value (minimum 256 MiB) in the Memory Size field and leave Minimum Memory Size empty.
To allow for memory usage flexibility (sometimes called ballooning), define a specific value in the Minimum Memory Size field and a larger value in Memory Size.
The VM uses the Minimum Memory Size for normal operations, but can dynamically allocate up to the defined Memory Size value in situations where the VM requires additional memory.
Reviewing available memory from within the VM typically shows the Minimum Memory Size.
Select the network interface type from the Adapter Type dropdown list. Intel e82585 (e1000) offers the highest level of compatibility with most operating systems. Select VirtIO only if the guest operating system supports VirtIO para-virtualized network drivers.
Select the network interface card to use from the Attach NIC dropdown list.
If the VM needs to access local NAS storage, attach a network bridge interface.
Click Next.
Upload installation media for the operating system you selected.
TrueNAS does not have a list of approved GPUs at this time, but TrueNAS does support various GPUs from NVIDIA, Intel, and AMD.
As of 24.10, TrueNAS does not automatically install NVIDIA drivers. Instead, users must manually install drivers from the UI. For detailed instructions, see Installing NVIDIA Drivers.
Confirm your VM settings, then click Save.
Adding and Removing Devices
After creating the VM, you can add or remove virtual devices.
Click on the VM row on the Virtual Machines screen to expand it and show the options, then click Devices.
An active VM displays options for Display and Serial Shell connections.
When a Display device is configured, remote clients can connect to VM display sessions using a SPICE client, or by installing a 3rd party remote desktop server inside your VM.
SPICE clients are available from the SPICE Protocol site.
If the display connection screen appears distorted, try adjusting the display device resolution.
Use the State toggle or click Stop to follow a standard procedure to do a clean shutdown of the running VM.
Click Power Off to halt and deactivate the VM, which is similar to unplugging a computer.
If the VM does not have a guest OS installed, the VM State toggle and Stop button might not function as expected.
The State toggle and Stop button send an ACPI power down command to the VM operating system, but since an OS is not installed, these commands time out.
Use the Power Off button instead.
Installing an OS
After configuring the VM in TrueNAS and attaching an OS .iso file, start the VM and begin installing the operating system.
Some operating systems can require specific settings to function properly in a virtual machine.
For example, vanilla Debian can require advanced partitioning when installing the OS.
Refer to the documentation for your chosen operating system for tips and configuration instructions.
Installing Debian OS Example
Upload the Debian .iso to the TrueNAS system and attach it to the VM as a CD-ROM device.
This example uses Debian 12 and basic configuration recommendations.
Modify settings as needed to suit your use case.
Click Virtual Machines, then Add to use the VM wizard.
Configure settings as needed.
Select the physical interface to associate with the VM.
Installation Media
Select the installation ISO if it is already uploaded to local storage.
If the ISO is not uploaded, select Upload an installer image file.
Select a dataset to store the ISO, click Choose file, then click Upload. Wait for the upload to complete.
GPU
Leave the default values.
Confirm Options
Verify the information is correct and then click Save.
After creating the VM, start it. Expand the VM entry and click Start.
Click Display to open a SPICE interface and see the Debian Graphical Installation screens.
Press Enter to start the Debian Graphical Install.
a. Enter your localization settings for Language, Location, and Keymap.
b. Debian automatically configures networking and assigns an IP address with DHCP.
If the network configuration fails, click Continue and do not configure the network yet.
c. Enter a name in Hostname.
d. Enter a Domain name.
e. Enter the root password and re-enter the root password.
f. Enter a name in New User.
g. Select the username for your account or accept the generated name.
h. Enter and re-enter the password for the user account.
i. Choose the time zone, Eastern in this case.
Detect and partition disks.
a. Select Guided - use entire disk to partition.
b. Select the available disk.
c. Select All files in one partition (recommended for new users).
d. Select Finish partitioning and write changes to disk.
e. Select Yes when asked Write the changes to disks?
Install the base system:
a. Select No to the question Scan extra installation media.
b. Select Yes when asked Continue without a network mirror.
Install software packages:
a. Select No when asked Participate in the package usage survey.
b. Select Standard system utilities.
c. Click Continue when the installation finishes.
After the Debian installation finishes, close the display window.
Remove the device or edit the device order.
In the expanded section for the VM, click Power Off to stop the new VM.
a. Click Devices.
b. Remove the CD-ROM device containing the install media or edit the device order to boot from the Disk device.
To remove the CD-ROM from the devices, click the more options icon and select Delete.
Click Delete Device.
To edit the device boot order, click the more options icon and select Edit.
Change the CD-ROM Device Order to a value greater than that of the existing Disk device, such as 1005.
Click Save.
Return to the Virtual Machines screen and expand the new VM again.
Click Start, then click Display.
What if GRUB does not start automatically?
If GRUB does not run when you start the VM, enter the following commands after each start.
At the shell prompt:
Enter FS0: and press Enter.
Enter cd EFI and press Enter.
Enter cd Debian and press Enter.
Enter grubx64.efi and press Enter.
To ensure it starts automatically, create the startup.nsh file at the root directory on the VM. To create the file:
Go to the Shell.
At the shell prompt, enter edit startup.nsh.
In the editor:
a. Enter FS0: and press Enter.
b. Enter cd EFI and press Enter.
c. Enter cd Debian and press Enter.
d. Enter grubx64.efi and press Enter.
Use the Control+s keys (Command+s for Mac OS) then press Enter.
Use the Control+q keys to quit.
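When saved, the startup.nsh file contains exactly the four commands entered above, one per line:

```text
FS0:
cd EFI
cd Debian
grubx64.efi
```

The UEFI shell runs this file automatically at startup, so the VM boots GRUB without manual intervention.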
Close the display window.
To test if it boots up on startup:
a. Power off the VM.
b. Click Start.
c. Click Display.
d. Log into your Debian VM.
Configuring Virtual Machine Network Access
Configure VM network settings during or after installation of the guest OS.
To communicate with a VM from other parts of your local network, use the IP address configured or assigned by DHCP within the VM.
To confirm network connectivity, send a ping to and from the VM and other nodes on your local network.
Debian OS Example
Open a terminal in the Debian VM.
Enter ip addr and record the address.
Enter ping followed by the IP address or hostname of another client on the network (not your TrueNAS host).
Confirm the ping is successful.
To confirm internet access, you can also ping a known web server, such as ping google.com.
Log in to another client on the network and ping the IP address of your new VM.
Confirm the ping is successful.
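The checks above, as entered in a terminal on the Debian VM. The IP address shown is a placeholder; substitute the address of a client on your own network.

```shell
# Show the VM interface addresses; note the inet value on the LAN interface.
ip addr show
# Ping another client on the local network (example address, not the TrueNAS host).
ping -c 4 192.168.1.50
# Ping a public hostname to confirm Internet access and DNS resolution.
ping -c 4 google.com
```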
Accessing TrueNAS Storage From a VM
By default, VMs are unable to communicate directly with the host NAS.
If you want to access your TrueNAS SCALE directories from a VM, for example to connect to a TrueNAS data share, you have multiple options.
If your system has more than one physical interface, you can assign your VMs to a NIC other than the primary one your TrueNAS server uses. This method makes communication more flexible but does not offer the potential speed of a bridge.
If you have only one physical interface, create a bridge interface for the VM to use: stop all existing apps, VMs, and services using the current interface, edit the interface and VMs, create the bridge, and add the bridge to the VM device.
See Accessing NAS from VM for more information.
Migrating Instances VMs
The storage volumes (zvols) for virtual machines created using the Instances option in TrueNAS 25.04.0 or 25.04.1 can migrate to new VMs created using the Virtual Machines screen options in 25.10 and later.
The process involves:
Identifying the hidden storage volumes (zvols) associated with the Instance VMs.
Determining which zvol contains the actual VM data by checking the volume size.
Renaming (and moving) the zvols to a new dataset where they can be seen and used by a new VM.
(Highly Recommended) Configuring zvol properties to match those of natively-created VM zvols.
Creating a new VM and selecting the migrated zvol as the storage volume.
Before You Begin
Before beginning the process:
Identify the zvol names associated with the Instance VM.
Take a snapshot or back up the zvol for the Instance VM.
Using ZFS commands to rename and move an existing zvol can damage data stored in the volume. Having a backup is a critical step to restoring data if something goes wrong in the process.
Verify the VM is operational and has Internet access, then stop the VM before you upgrade to 25.10 or a later release.
Identify the dataset where you want to move the volume in 25.10 or later.
We do not recommend renaming or moving the volume more than once, as it increases the risk of possible data corruption or loss.
You do not need to log in as the root user if the logged-in admin user has permission to use sudo commands.
If not, go to Credentials > Users, edit the user to allow sudo commands, or verify the root user has an active password to switch to this user when entering commands in the Shell screen.
Migrating a Zvol for an Instance VM
This procedure applies to the zvol for an Instance VM that has data you want to preserve, and access from a new VM in 25.10 or later.
While on 25.04.1 or a later maintenance release:
Go to Instances, click on Configuration, and then Manage Volumes to open the Volumes window.
The Volumes window lists all Instance VMs and the associated storage volumes (zvols).
Record the volume name or take a screenshot of the information to refer to later when entering commands in the Shell screen.
Zvol names are similar to the VM name but not identical.
Optionally, you can highlight all the listed information and copy/paste it into a text file, but this is not necessary.
While on the Instances screen, verify the VM is operational and the network is operating as expected.
One way to verify external network access is to check Internet access. Stop the VM before upgrading.
Repeat for each zvol that you plan to migrate into a new VM in later releases.
Go to Datasets, locate the pool associated with Instances (Containers), and take a recursive snapshot to back up all Instances VM zvols.
These zvols are in the hidden .ix-virt directory created in the pool Instances uses, selected when you configure the feature.
To verify the pool, you can go to Containers > Configure > Global Settings and look at the Pool setting.
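If you prefer the command line, a recursive snapshot of the hidden Instances dataset can also be taken from the Shell. This sketch assumes the tank pool used in the examples later in this article; the snapshot name pre-migration is arbitrary.

```shell
# Recursively snapshot every zvol under the hidden .ix-virt dataset.
sudo zfs snapshot -r tank/.ix-virt@pre-migration
# Confirm the snapshots exist before proceeding with the upgrade.
sudo zfs list -t snapshot -r tank/.ix-virt
```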
Go to System > Update, and update to the next publicly available maintenance or major version release.
Follow the release migration paths outlined in the version release notes or the Software Releases article.
After updating from a 25.04.x release, VMs created using Instances screens show on the Containers screen, and are stopped.
Some VMs experience various issues, such as loss of network connectivity, or are stopped and do not start.
Refer to the troubleshooting tips below for more information. 25.10 releases correct some issues encountered with migrated VMs in 25.04.2.4.
Troubleshooting VM Issues
If upgrading from 24.10 to 25.04, VMs are visible and running, but are expected to have issues because the 25.04 release does not fully support these older VMs.
VMs with a Windows OS installed could require converting to VirtIO-SCSI disks to get reconnected to the Internet.
To restore connectivity, try clean-mounting the system from the mounted drive, first from within the VM and then on the TrueNAS system (host).
Follow this by removing driver syntax added to raw QEMU files.
If a new VM is created in 25.04.2.1 and it fails to run, stop all containers.
In the VM configuration, delete the current NIC, then select the bridge before selecting the NIC again to restore functionality.
VMs created using the Instances feature initially show on the Virtual Machine screen as running when they are not running, but this state corrects on its own.
If a VM with Windows OS is created in 25.04.0 using the Virtual Machine screens (not Instances in 25.04.1) the VM should run.
If this VM cannot find the NIC, delete the NIC in the configuration from the Devices screen for that VM, and then reconfigure it to restore functionality.
Go to Containers to see which VMs are listed, then click Configuration, and then Manage Volumes to open the Volumes window.
Take a screenshot of the volumes listed, or copy/paste the volumes and VM information into a text file to use later in this procedure.
Go to System > Shell. Exit to the Linux prompt for the system.
Note: This is where the logged in admin user needs sudo permissions, or where the root user must have a password configured to enter the following commands to find, rename/move, and verify each Instance zvol is properly configured.
Enter the following commands at the Linux system prompt:
Storage conventions differ based on VM history:
Migrated VMs (from pre-Incus TrueNAS) use custom/default_* zvols for actual VM data
VMs created in 25.04.0 or 25.04.1 use .block zvols for actual VM data
Small .block files (56K) are stubs and should not be migrated
a. Locate the hidden zvols for the Instance VMs by entering:
sudo zfs list -t volume -r -d 10 poolname
Where:
-d 10 shows datasets up to 10 levels deep
poolname is the name of the pool associated with the Instance VMs.
If you have multiple pools associated with the Instance VMs, repeat this command with the name of that pool to show hidden zvols in that pool.
The .ix-virt directory contains the zvols used in Instance VMs. Check the USED or REFER columns to identify the actual VM storage:
For migrated VMs: Use the custom/default_* zvol (typically several GB or more)
For VMs created in 25.04.0 or 25.04.1: Use the .block zvol that shows significant storage usage (not 56K stubs)
Ignore: Stub .block files showing only 56K, and zvols not in the .ix-virt directory
The output includes other zvols in the pool if your system has non-instance VMs configured in the pool name entered in the command.
Example Command Output
Example showing migrated VMs (custom/ zvols with actual data):
re-minir-102% sudo zfs list -t volume -r tank
NAME USED AVAIL REFER MOUNTPOINT
tank/.ix-virt/custom/default_vm2410linux-8cppg 40.6G 1.66T 40.6G - ← Migrate this (actual data)
tank/.ix-virt/custom/default_vm2410win-mvqznj 40.2G 1.66T 40.2G - ← Migrate this (actual data)
tank/.ix-virt/virtual-machines/vm2410linux.block 56K 1.66T 56K - ← Stub (ignore)
tank/.ix-virt/virtual-machines/vm2410win.block 56K 1.66T 56K - ← Stub (ignore)
tank/vms/previously-migrated 35.1G 1.70T 35.1G -
Example showing VMs created in 25.04.0/25.04.1 (.block zvols with actual data):
qe-realmini% sudo zfs list -t volume -r tank
NAME USED AVAIL REFER MOUNTPOINT
tank/.ix-virt/virtual-machines/TrueNAS.block 6.98G 2.55T 6.98G - ← Migrate this (actual data)
tank/.ix-virt/virtual-machines/fdsa.block 25.9M 2.55T 248M - ← Migrate this (actual data)
tank/.ix-virt/virtual-machines/debian.block 56K 2.55T 56K - ← Stub (ignore)
tank/.ix-virt/virtual-machines/mint.block 56K 2.55T 56K - ← Stub (ignore)
In the examples above:
Zvols with custom/default_* in the path showing significant storage (40+ GB) belong to migrated VMs and should be migrated.
Zvols with the .block extension showing significant storage (6.98G, 25.9M) belong to native Incus VMs and should be migrated.
Small .block files at 56K are stubs and should be ignored.
After successfully migrating and confirming functionality of all VMs, the remaining stub .block files (56K) in .ix-virt/virtual-machines/ can optionally be deleted to clean up the hidden dataset.
b. Rename (and move) each volume in the .ix-virt directory to a new location where you can select it when configuring a new VM.
Repeat for each volume you want to migrate to a new VM. Do not rename or move stub .block files (56K).
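The rename commands, assuming the tank pool, the vms destination dataset, and the zvol names used in the examples in this article (substitute your own pool, dataset, and zvol names):

```shell
# Migrated VM (custom/ zvol): move it out of the hidden .ix-virt dataset.
sudo zfs rename tank/.ix-virt/custom/default_debian1-urec9f tank/vms/default_debian1-urec9f

# VM created in 25.04.0 or 25.04.1 (.block zvol):
sudo zfs rename tank/.ix-virt/virtual-machines/TrueNAS.block tank/vms/TrueNAS.block
```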
In the examples, default_debian1-urec9f or TrueNAS.block is the name of a hidden zvol and the name given to the migrated zvol.
We do not recommend renaming the migrated zvol, to minimize potential issues with the migration process.
For .block zvols, you can keep or remove the .block extension in the target name.
In the examples, vms is the dataset used as the location to store the migrated zvols for VMs. Change this to the location on your system.
The command renames the zvol, moves it to the specified location, and returns to the system Linux prompt.
To verify the zvol moved, enter the sudo zfs list -t volume -r tank command again. The output should show the zvol in the new location.
c. (Highly Recommended) Set zvol properties to match those of natively-created VM zvols.
Enter the following command for each zvol you migrated:
For migrated VMs (custom/ zvols):
sudo zfs set volmode=default primarycache=all secondarycache=all tank/vms/default_debian1-urec9f
For VMs created in 25.04.0 or 25.04.1 (.block zvols):
sudo zfs set volmode=default primarycache=all secondarycache=all tank/vms/TrueNAS.block
Where:
tank is the pool name.
vms is the dataset where the zvol is stored.
default_debian1-urec9f or TrueNAS.block is the name of the zvol
This command sets the volume properties to match those used by zvols created through the Virtual Machines screen, ensuring optimal performance and behavior.
Containers VMs use different property settings that are not optimal for virtual machine workloads.
After completing the commands listed above for each zvol you want to migrate, go to Datasets and verify that all volumes you migrated show on the screen.
Create the new VM using the migrated zvol. Repeat these steps for each zvol you migrated.
Go to Virtual Machines, click on Add to open the Create Virtual Machine wizard.
a. Complete the first screen by entering a name for the new VM. Select the operating system used by the Instances VM, enter a brief description, then, if using the Bind setting, enter a password. Click Next.
b. Configure the CPU and Memory settings, and then click Next.
c. On the Disks wizard screen, select Use existing disk image, click in Select Existing Zvol and select the volume moved from the Instances VM.
If you move multiple zvols, refer to the screenshot or text file with the VM/zvol list to select the correct zvol for this new VM.
d. Click Next until you get to the confirmation screen, then click Create to add the VM.
After adding the new VM, click on it to expand it, and click Devices.
Click Edit for the Disk device, and enter 1000 in the Device Order field.
Setting the disk to 1000 makes the disk device the first in the boot order for the VM.
Putting the disk first in the boot order, ahead of any CD-ROM device with an OS on it added when creating the VM, prevents the volume from being overwritten by booting from that CD-ROM device.
Click Save.
Return to the Virtual Machines screen, expand the VM, then click Start to verify it opens as expected and has Internet access.