
FreeNAS on Proxmox. What's the current state of play?

Joined
Sep 13, 2014
Messages
141
I'm in the early stages of planning my next FreeNAS server build and I'm strongly leaning toward going the virtualization route (using an ASRock X470D4U with an M1015 and possibly a PCIe Intel quad port NIC passed through to the FreeNAS VM). My goal is to replace my current main "X10" server (see sig), which in turn would become my secondary backup server. The plan is that in the event that something happens to the new "X470" server, I should be able to just move the disks back to my X10 server and import the pool.

I really like the look of Proxmox (and the fact that it's FOSS is important to me) versus VMware ESXi, which is admittedly more of a known quantity but not libre. In doing my initial research on these forums, I've seen a handful of mentions of Proxmox not being ideal because it's KVM-based, but I'm struggling to find any examples of, or details on, why KVM and FreeNAS don't play nicely together. The other issue is that most, if not all, of the search results I found on this forum are several years old, so they may be out of date.

Would someone please provide more details on whether/why FreeNAS and KVM don't play nice, or at least point me in the right direction?

Is anyone else running FreeNAS on Proxmox? Any thoughts, opinions, or bits of advice you could share would be welcome.
 

sretalla

Wizened Sage
Joined
Jan 1, 2016
Messages
3,925
why FreeNAS and KVM don't play nice, or at least point me in the right direction?
It's not that they don't/can't, but there's no QEMU guest agent support baked into FreeNAS (whereas VMware guest support is), so your best bet is to emulate VMware hardware for everything you can (VGA, network, SCSI).

If you follow the best practice for virtualizing properly, it's probably going to be fine. https://www.ixsystems.com/community...uide-to-not-completely-losing-your-data.12714
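For example, from the Proxmox host shell (a sketch only; the VM ID 100 and bridge vmbr0 are placeholders for your own values):

```shell
# Emulate VMware hardware for the FreeNAS VM (ID 100 is hypothetical).
# VMware-emulated NIC -- FreeNAS ships the VMware network driver:
qm set 100 --net0 vmxnet3,bridge=vmbr0

# VMware-compatible display:
qm set 100 --vga vmware

# VMware-style (LSI) SCSI controller for any virtual disks:
qm set 100 --scsihw lsi
```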

Is anyone else running FreeNAS on Proxmox?
I run a TrueNAS CORE nightly VM on it... nothing of note other than the lack of guest agent support, so no clean shutdowns from the Proxmox GUI. I have no important data there.
 

SubtleOrc

Newbie
Joined
May 24, 2020
Messages
2
I'm running it at the moment, but it's in test mode for me. I came from running Solaris 11.2 on an HP MicroServer N36L/N40L, and that has been rock solid for 8 years now... so my idea of stable and other people's might differ slightly.
 

jasestu

Newbie
Joined
Jun 8, 2020
Messages
2
I've got FreeNAS up and running on Proxmox on my old i7-2600K based system. I've given it 3 × 2TB drives in RAIDZ to play with and a 128GB SSD as cache.
It's running and looks nice, but transfer speeds seem low (around 20MB/s), and this is copying from a local NTFS SATA drive across to the ZFS pool (transferring existing files into the pool for use going forward). Not sure if that's related to Proxmox or if something else in my configuration is bottlenecking it.
Virtualizing a NAS is often called out as a bad idea, but I only have the one spare machine and want to run pfSense, Windows 10, and a NAS, so I'm lumping it all under Proxmox. Incidentally, I did try ESXi, Hyper-V, and Ubuntu Server as other approaches, but Proxmox has by far been the cleanest for my situation, as long as I can be comfortable that FreeNAS is performing to its potential.
 

sretalla

Wizened Sage
Joined
Jan 1, 2016
Messages
3,925
transfer speeds seem low (like 20MB/s)
What NIC have you used in Proxmox for the FreeNAS VM? I have found that the VMXNET3 NIC works well (since VMware drivers are included in FreeNAS).
 

jasestu

Newbie
Joined
Jun 8, 2020
Messages
2
What NIC have you used in Proxmox for the FreeNAS VM? I have found that the VMXNET3 NIC works well (since VMware drivers are included in FreeNAS).
VirtIO.
Now that I've had a chance to run some tests, I'm basically saturating my gigabit Ethernet (110 MB/s write, 120 MB/s read), so I'm going to resist the urge to keep tinkering, since I can't ask any more of it (until I put in some 10Gb cards and a switch).

<minutes pass>

Now I've tested it from another VM on the same box as the NAS, and according to NAS Performance Tester 1.7 I'm seeing the numbers below, so yeah, happy with the performance. It must have been something else bottlenecking that local transfer; I'll have to figure it out in due course, since I do want to use that route to back up the NAS to an external drive that I rotate with another at an offsite location.

-----------------------------
Running warmup...
Running a 400MB file write on I: 5 times...
Iteration 1: 555.56 MB/sec
Iteration 2: 664.48 MB/sec
Iteration 3: 566.58 MB/sec
Iteration 4: 638.99 MB/sec
Iteration 5: 566.57 MB/sec
-----------------------------
Average (W): 598.43 MB/sec
-----------------------------
Running a 400MB file read on I: 5 times...
Iteration 1: 429.18 MB/sec
Iteration 2: 160.00 MB/sec
Iteration 3: 573.04 MB/sec
Iteration 4: 290.70 MB/sec
Iteration 5: 441.50 MB/sec
-----------------------------
Average (R): 378.88 MB/sec
-----------------------------
 

overshoot

Member
Joined
Jul 16, 2019
Messages
69
I am actually using FreeNAS in Proxmox with a 10Gb network card on both ends, and it is working fine.
Speed between my Mac Pro and FreeNAS reaches 10Gb/s in iPerf3.
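In case it helps anyone reproduce the test (assuming iperf3 is installed on both ends; the hostname is just an example):

```shell
# On the FreeNAS box (server side):
iperf3 -s

# On the Mac (client side), 4 parallel streams for 10 seconds:
iperf3 -c freenas.local -P 4 -t 10

# Reverse direction (-R makes the server send to the client):
iperf3 -c freenas.local -P 4 -R
```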

I had to pass through the HBA card and the SFP+ card for best performance.
When I let Proxmox share the 10Gb network adapter, I would get a brief spike at 10Gb/s and then it would rapidly slow down to around 100Mb/s for some reason.

So far so good for me with the 11.3 version and a few macOS clients.
It's been more stable with macOS since the upgrade to 11.3.

I am switching a customer with a similar config from FreeNAS running bare metal on a Dell T430 to Proxmox + FreeNAS, as he will need a Windows 10 VM.
Crossing fingers...
 

sretalla

Wizened Sage
Joined
Jan 1, 2016
Messages
3,925
When I let Proxmox share the 10Gb network adapter, I would get a brief spike at 10Gb/s and then it would rapidly slow down to around 100Mb/s for some reason.
I would start explaining that with a guess that it's a hardware offload issue... I see it too on a gigabit adapter of mine that isn't well supported in Proxmox, where I need to turn off hardware offloading with:
ethtool -K eno1 tso off gso off (where eno1 is the NIC).

(Actually, I add it as a post-up line in the bridge config for that NIC's bridge in /etc/network/interfaces.)
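Something like this, for illustration (the interface name eno1, bridge name, and addresses will differ on your system):

```shell
# /etc/network/interfaces on the Proxmox host -- relevant stanza only
auto vmbr0
iface vmbr0 inet static
        address 192.168.1.10/24
        gateway 192.168.1.1
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
        # Disable TCP/generic segmentation offload on the physical NIC
        # each time the bridge comes up:
        post-up ethtool -K eno1 tso off gso off
```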
 

overshoot

Member
Joined
Jul 16, 2019
Messages
69
Thanks for the info.
Since I have other VMs running and they don't need 10Gb, I believe having the 10Gb interface dedicated to my FreeNAS VM makes more sense.

Good to hear there's a workaround for it, though.
 

fyboqyovjy

Neophyte
Joined
Jul 6, 2020
Messages
10
I'm currently taking my first steps with Proxmox and FreeNAS.
Everything is working well so far.

But is there any way to install the QEMU guest agent in the FreeNAS VM?

If I try to shut down the FreeNAS VM (either via Proxmox or via FreeNAS itself), it won't shut down, but reboots instead.
I can't turn it off...
 

fyboqyovjy

Neophyte
Joined
Jul 6, 2020
Messages
10
Change the processor to Qemu64.
It's a known issue.

I've read about the CPU types in the Proxmox documentation and it says:
In short, if you care about live migration and moving VMs between nodes, leave the kvm64 default. If you don’t care about live migration or have a homogeneous cluster where all nodes have the same CPU, set the CPU type to host, as in theory this will give your guests maximum performance.

Since I don't plan to migrate the VM to other nodes, I've set the CPU type to host.
And indeed, now I can properly shut down the VM.
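For reference, the same change from the Proxmox CLI (the VM ID 100 is just an example):

```shell
# Pass the host CPU's feature flags straight through to the guest:
qm set 100 --cpu host

# Or, the qemu64 type suggested above:
qm set 100 --cpu qemu64
```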

Thanks for the hint!
 

GeneL

Neophyte
Joined
Oct 8, 2020
Messages
9
So, this thread is a little old, but I just came across it. I'm running FreeNAS on Proxmox and it seems to be working pretty well. I'm running a Ryzen 5 1600 six-core processor on an ASRock X470 Taichi motherboard with 32GB of memory. This is a home lab system. I have run a Samba server on it for years (well, on it and its predecessor processor and motherboard). With Samba I used a product called Greyhole, which is a simplistic redundancy solution. Greyhole is good for dealing with large files with low turnover, but I was increasingly running git and Eclipse on my client computers and that was giving it trouble, so I decided it was time to upgrade. I chose FreeNAS because I knew about it, and, well, because it's free.

Having read the advice about virtualizing FreeNAS (here), I decided I wanted FreeNAS to have control of its main pool disks. I considered using PCI passthrough, but I also wanted the disks in an external enclosure because of other things I do with the system. I found a nice little USB 3.2 enclosure for 4 drives on Amazon (this one). The device is nice because it appears as 4 separate USB devices that Proxmox can assign to different VMs. It is limited to 10Gb/s total across all 4 drives. I put three 6TB WD Red Pro drives in this box and assigned them to the FreeNAS VM (two of these drives were recovered from the Greyhole pool). The USB passthrough gives FreeNAS complete control of the drives. Currently I have 12GB of memory assigned to the FreeNAS VM. (I am running FreeNAS 11.3-U4.1.)

After the Greyhole pool was completely transferred to the FreeNAS system, I used its remaining two 4TB drives as a mirrored pair that is used just for backup. (So now my FreeNAS has two pools, Main_pool and Backup.)

I am running FreeNAS with a kvm64 virtual processor and a virtio Ethernet adapter. When I do large file reads over the network it saturates the 1Gb/s Ethernet, so I am happy with that performance. Also, I have had no trouble with startup or shutdown.

I made no attempt to install any VM guest tools on the FreeNAS system (most of my systems run Debian or Mint, and I'm just not too familiar with FreeBSD yet).

Things I have learned:
If you export a pool from FreeNAS, you can import it on the Proxmox server and access it directly. But if Proxmox wants to upgrade the pool and you allow it, you won't be able to import it back into FreeNAS.
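Roughly, the round trip looks like this (using my pool name as an example; the commands need root, and the commented-out upgrade is the one-way step to avoid):

```shell
# In FreeNAS: export the pool (the GUI's Export/Disconnect does the same)
zpool export Main_pool

# On the Proxmox host: list importable pools, then import
zpool import
zpool import Main_pool

# DO NOT run this on Proxmox if you ever want the pool back in FreeNAS --
# it enables feature flags that FreeNAS's older ZFS may not understand:
# zpool upgrade Main_pool
```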

The USB-C 3.2 interface is adequate for my needs (small web site, email server, Plex server, software development, CAD). However, I am pretty sure I could get better performance with a dedicated PCIe SAS/SCSI adapter.

FreeNAS is in some way aware it is virtualized and can interact with Proxmox. More about that later.

ACLs are confusing as hell.

Finally, I want to talk about the issue that brought me here. I configured the VM to have a minimum of 4GB and a max of 12GB of memory. It used 4GB all the time; if I really stressed it, the cache would grow and more than 4GB would be used, but as soon as the load was removed it went back to using 4GB. I changed the minimum to 6GB and, sure enough, it uses 6GB, sometimes going over a little and then going back to the minimum setting. This is why I say FreeNAS is aware of being virtualized and is interacting with the virtual host (Proxmox).

So I came to the forums to see if anyone else was doing this and had this problem.
 

jgreco

Resident Grinch
Moderator
Joined
May 29, 2011
Messages
13,538
Finally, I want to talk about the issue that brought me here. I configured the VM to have a minimum of 4GB and a max of 12GB of memory. It used 4GB all the time; if I really stressed it, the cache would grow and more than 4GB would be used, but as soon as the load was removed it went back to using 4GB. I changed the minimum to 6GB and, sure enough, it uses 6GB, sometimes going over a little and then going back to the minimum setting.

So I came to the forums to see if anyone else was doing this and had this problem.
Why is this a problem? ZFS expects to use system memory for caching. That's part of the deal. ZFS will use up to "all available memory" for caching. This doesn't mean it will consistently use every bit of it; for example, if the ARC is filled with a bunch of data from a large file, and you close the file and delete it, there are no longer any references to that, so that memory is freed. If you open a large program on the NAS, there too, you will see ARC released if space is needed for it. I'm *guessing* that the latter is what you are seeing, and it is just more obvious because you have so little memory.
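If you want to watch the ARC doing this, the counters are visible from the FreeNAS shell (these are the standard FreeBSD ZFS sysctl names):

```shell
# Current ARC size, in bytes:
sysctl kstat.zfs.misc.arcstats.size

# ARC target size and configured maximum, in bytes:
sysctl kstat.zfs.misc.arcstats.c kstat.zfs.misc.arcstats.c_max
```

Watch `size` fall after deleting a large cached file, or when a memory-hungry process starts.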

And on that topic: you do need to boost your minimum to 8GB. You know that warning you saw when you downloaded FreeNAS, that it requires 8GB minimum? That means you really do need to do that. Failure to do so used to result in loss of pools and other interesting problems, unless you like discovering new and strange ones. The NAS middleware is massive and needs memory, as does the ARC.

This is why I say FreeNAS is aware of being virtualized and is interacting with the virtual host (Proxmox).
News to me. If you get something like a memory balloon driver running, this would "interact" in this manner, but it's a bad idea overall and you shouldn't do it.
 

sretalla

Wizened Sage
Joined
Jan 1, 2016
Messages
3,925
FreeNAS is in some way aware it is virtualized and can interact with Proxmox. More about that later.
No, that's not the case unless you're running the guest agent and utilities (which are not included and difficult to compile on FreeNAS/TrueNAS).

As mentioned by @jgreco, what you're doing is a bad idea: 8GB minimum, or don't expect to run without problems.

ACLs are confusing as hell.
Not specific to FreeNAS.
 

GeneL

Neophyte
Joined
Oct 8, 2020
Messages
9
First, I have not installed any guest agent or balloon driver. Whatever is happening should not be due to something I loaded, because this is a pretty straightforward installation of FreeNAS that, for now, I have tried to keep as minimal as possible. I went to get a listing from pkg, but the list is 377 entries, so not now. I can up the memory to a minimum of 8GB, and probably will.

However, consider the image below from the dashboard: obviously FreeNAS has 12GiB configured but is using only 6.6GiB, which is just ever so slightly above what I configured as the minimum memory. If I had not sat and looked at this carefully, I would never have known there was some limit on how much memory FreeNAS would keep in use. It always uses whatever I configure as the minimum. I really wanted to configure FreeNAS with a 16GiB max and a 4GiB min so it could just use what it wanted, but right now it looks like it will just use the minimum.

So, if you guys think I should up the minimum to 8GiB, I will certainly do that. But I still don't understand the behavior. I'm kind of loath to experiment with this server, so I think I will spin up another test machine to torture and see if I can quantify the behavior. It'll probably take a couple of days; I'll get back to you.

Gene

[Attached screenshot: Shot0001.jpg — FreeNAS dashboard memory usage]
 

NickF

Member
Joined
Jun 12, 2014
Messages
94
I don't want to be that guy... but I am confused. Virtualizing FreeNAS on ESXi makes sense in some specific applications, but virtualizing it on Proxmox makes very little sense, since Proxmox has ZFS natively baked into it. Why not use the native ZFS in the hypervisor?

iXsystems is literally taking the concept of Proxmox and building TrueNAS SCALE around that idea...

As for the memory, of course it will use the minimum. ZFS is always going to try to use all of your unused memory...
 

jgreco

Resident Grinch
Moderator
Joined
May 29, 2011
Messages
13,538
I don't want to be that guy... but I am confused. Virtualizing FreeNAS on ESXi makes sense in some specific applications, but virtualizing it on Proxmox makes very little sense, since Proxmox has ZFS natively baked into it. Why not use the native ZFS in the hypervisor?
Your PC has a filesystem built into it. Why don't you just store your files on that instead of on FreeNAS? :smile:

A hypervisor and a NAS are two very different things. A hypervisor does not specialize in file storage and can't easily handle serving up CIFS, NFS, AFP, Active Directory, replication, etc.
 

GeneL

Neophyte
Joined
Oct 8, 2020
Messages
9
I don't want to be that guy... but I am confused. Virtualizing FreeNAS on ESXi makes sense in some specific applications, but virtualizing it on Proxmox makes very little sense, since Proxmox has ZFS natively baked into it. Why not use the native ZFS in the hypervisor?

iXsystems is literally taking the concept of Proxmox and building TrueNAS SCALE around that idea...

As for the memory, of course it will use the minimum. ZFS is always going to try to use all of your unused memory...
First, about being "that guy": I admit it can be irritating to be looking something up, to finally find a place where the question was asked, and to see that the only answer is "Why on earth do you want to do that?". But in this case it's a fair question, and I actually evaluated this possibility. To jgreco's point, there is a lot of stuff in FreeNAS that is not available in Proxmox. I hope to experiment with Active Directory and NFS more than I have in the past. Also, the management interface is really nice.

But in my case there is another reason, and that has to do with system philosophy. I built this system as a platform for experimenting with operating systems, file systems, web servers, and TCP/IP comms. I currently do research on wireless data communications for a living and have done OS and compiler work in the past, so it's kind of a continuing education project. And I have wrecked a lot of virtual machines in the process of my experiments. Because of this, the philosophy of the system is "don't mess with the VM host platform". I observe it and experiment with it by configuring and running unusual VM configurations, but I don't modify it. The Proxmox installation is as vanilla as I can make it and still run the system. It's also not backed up. All of the VMs are backed up, and I have backups of my data both on site and off site. So I should be able to reload Proxmox and restore all the VMs from their last backup.

So I did investigate ZFS on Proxmox, but I decided that using it would involve entirely too much mucking about with the VM host.

I do like the tools and features of FreeNAS and may eventually put it on its own physical system. But the decision was basically made over issues of system stability. If I blow up ZFS on Proxmox, it might cripple the whole system. If I blow it up in a VM, I may have to restore the VM from backup and reload data from backups, but I am prepared for that.

This also explains why I consider the interaction between the FreeNAS memory system and the Proxmox memory system interesting. With no VM-aware code loaded, they should not be able to interact, but my testing makes it appear that they do. I would like to pull down the FreeBSD source and look at the memory manager to see what is really happening, but sadly, I don't think I'm going to have the time.

Gene
 

SillyPosition

Junior Member
Joined
Dec 31, 2018
Messages
20
Hey @GeneL, I'm thinking of doing much the same.
I have a server running FreeNAS, and I want a fast, headache-free way to migrate to Proxmox with FreeNAS virtualized.
I have a slight misunderstanding about the installation part:
if I install Proxmox to one of the drives I have attached, how do I then pass through my SATA controller to FreeNAS (for managing pools)?
All I have is a Supermicro X11SSH board, which has an onboard controller; I have nothing aside from it.
So if I pass through the SATA controller, how would Proxmox have access to its own storage?
 