FreeNAS on Proxmox. What's the current state of play?

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,681
I have a slight misunderstanding on the installation part,

Do you? I don't think you've misunderstood at all. Your head seems to be screwed on at least generally straight.

If I install Proxmox to one of the drives I have attached, how do I then pass through my SATA controller to FreeNAS (for managing pools)?
All I have is a Supermicro X11SSH board, which has an onboard controller; I have nothing besides it.
So if I pass through the SATA controller, how would Proxmox have access to its storage?

It wouldn't. Just because you want to do something, doesn't make it possible or practical.

First, Proxmox isn't known to work well with FreeNAS. ESXi is. Proxmox might be able to be made to work with FreeNAS, but it's a significant unknown.

Typically you need to arrange for storage for the hypervisor to boot from, and also to run VM's from. You cannot run a FreeNAS VM from storage being provided by FreeNAS, because FreeNAS isn't running until FreeNAS is booted. This is called a bootstrap paradox.

Therefore you will be arranging some additional storage for Proxmox. You more or less already figured this out, which is great. :smile: It means the rest of this will probably make sense.

ESXi will allow you to boot from USB thumb drives, but won't let you build a VM on one. The typical way to set up ESXi is to have an additional controller to handle the FreeNAS drives, or, if you're doing anything that even vaguely requires reliability, to get a RAID controller that is well-supported by ESXi, hook up two SSDs to it, configure it for RAID1, and then, voila, hypervisor boot and VM storage are there. If you like maintaining a lower parts inventory, you can use a low-end LSI RAID controller (you know, the same ones we crossflash to IT mode for use with FreeNAS).

You will need to figure out what to do for Proxmox.

I believe the 8 SATA ports on the X11SSH show up as a single controller, so it is all-or-nothing to pass that through.

There is also an M.2 NVMe slot IIRC. Your best bet is probably cramming a boot device in there. Otherwise you are looking at burning a PCIe slot. Someone will inevitably show up and suggest you jam a bunch of USB thumb drives in, but these will tend to burn out quickly.
 

GeneL

Cadet
Joined
Oct 8, 2020
Messages
9
As near as I know, you can't do that. If you want to pass through a SATA controller, you will have to have a separate SATA controller. If you have a free motherboard slot, you can get a non-RAID controller very cheap (about $30 on Amazon). That's why I found my USB 3 solution really satisfying: I did not have to do PCI passthrough, though it looks really easy. Anyway, you can't give the main disk controller to a guest machine.

Gene
 

GeneL

Cadet
Joined
Oct 8, 2020
Messages
9
Please do not post incorrect information.
The nature of civil discourse is such that the minimum of manners is that if you're going to say someone is "wrong", you at least explain in what way you think they are wrong.

Gene
 

SillyPosition

Dabbler
Joined
Dec 31, 2018
Messages
20
As near as I know, you can't do that. If you want to pass through a SATA controller, you will have to have a separate SATA controller. If you have a free motherboard slot, you can get a non-RAID controller very cheap (about $30 on Amazon). That's why I found my USB 3 solution really satisfying: I did not have to do PCI passthrough, though it looks really easy. Anyway, you can't give the main disk controller to a guest machine.

Gene

Thanks, GeneL.
So do you actually operate the entire FreeNAS VM and the disks attached to it only via the USB 3 hub, which is mounted into the VM itself? I assume the flash disks for the FreeNAS OS are also mounted in that enclosure.
I misread it the first time; now it makes a lot of sense. So eventually the onboard SATA controller is accessed and used only by the hypervisor?

I do have a spare PCIe slot on my motherboard, but I wasn't sure what controller is good to use with FreeNAS.
Your solution is very interesting, but that enclosure isn't cheap. It's probably a good idea only if you don't have the spare PCIe slot.
What is the impact of getting a regular non-RAID controller? I mostly see recommendations for the Supermicro LSI 3008, which is quite expensive. Do I even care about a RAID controller, given that FreeNAS/ZFS handles the storage, mirroring, etc.?
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,681
The nature of civil discourse is such that the minimum of manners is that if you're going to say someone is "wrong", you at least explain in what way you think they are wrong.

Gene

I quoted what was wrong. I would have expected that if I quoted one single thing and called it "incorrect," and as a moderator, asked someone not to post incorrect information, that this would be sufficiently clear. I do not *think* they are wrong. They are in fact wrong.
 

GeneL

Cadet
Joined
Oct 8, 2020
Messages
9
Ok, but I maintain it is not incorrect.

Let's start by defining what the "main controller" is. From the operating system's point of view the controller it booted from is the "main" one. So, one could reasonably argue that I should have said "boot controller", but I am parochial enough to view everything from the OS's point of view.

Now, the Proxmox documentation says (in reference to PCI passthrough) "the host must not use the card" and proceeds to tell you how to blacklist the driver via modprobe.

I maintain the host system will need to use the disk controller it boots from, and that would preclude using that controller with PCI passthrough.

If your position is that I should have said "boot disk controller" instead of "main disk controller", I'll buy that, but you would have made things clearer to future readers of this thread if you had said so.

If I have misinterpreted the documentation, or you know something I don't, I'm anxious to know what you know.
 

GeneL

Cadet
Joined
Oct 8, 2020
Messages
9
Thanks, GeneL.
So do you actually operate the entire FreeNAS VM and the disks attached to it only via the USB 3 hub, which is mounted into the VM itself? I assume the flash disks for the FreeNAS OS are also mounted in that enclosure.
I misread it the first time; now it makes a lot of sense. So eventually the onboard SATA controller is accessed and used only by the hypervisor?

I do have a spare PCIe slot on my motherboard, but I wasn't sure what controller is good to use with FreeNAS.
Your solution is very interesting, but that enclosure isn't cheap. It's probably a good idea only if you don't have the spare PCIe slot.
What is the impact of getting a regular non-RAID controller? I mostly see recommendations for the Supermicro LSI 3008, which is quite expensive. Do I even care about a RAID controller, given that FreeNAS/ZFS handles the storage, mirroring, etc.?

That's not quite how it is. Each of the four drives in the enclosure appears as a different USB device (I assume there is one controller for each drive and an internal hub), so each of the drives can be used with USB passthrough individually. I have given three of the drives to FreeNAS using USB passthrough. So it's not the USB host controller or hub that is passed through, just three of the drive controllers (the fourth slot is empty). The FreeNAS system boots from a virtualized disk on the host (it is flash, but FreeNAS does not know that). When the FreeNAS system boots, it sees a root controller and root hub, but those are virtual; only the disk controllers and drives are real.

Let me say that my system is a "home lab" or "hobby" system. I am not really qualified to recommend hardware for production systems. (If the stuff I build professionally survives 14 days in the open ocean, I have met my reliability requirements.) If you are also setting up a "home lab" or "hobby" system, then consider this: PCIe SATA controllers are fairly generic, and the 4-lane/4-port controllers are going to be performance-constrained by the drives, not the PCIe bus, and probably not by the controller chip. I have a 4-port card that I paid about $30 for. It can handle one HDD at the same speed as the motherboard controller, but might not keep up with a good SSD (I haven't tried it, and there is still the 6 Gb/s limit of the SATA bus). Also, the recommendations I have read discourage using a hardware RAID controller with ZFS, so I would think you would want a non-RAID controller anyway. The Supermicro controller looks really nice, and if you need 8 drives it could be the way to go. There is the issue of FreeBSD/FreeNAS compatibility, which you could read up on. Frankly, for the cost of one of the cheap 4-port cards, I might be tempted to just buy one and experiment.
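As a back-of-envelope sanity check on that bandwidth claim (a rough sketch, not a benchmark): SATA III signals at 6 Gbit/s with 8b/10b encoding, so the usable payload is roughly 600 MB/s per port, well above what a single HDD can sustain but in the ballpark of a good SSD.

```shell
# Back-of-envelope SATA III throughput: 6 Gbit/s line rate, and 8b/10b
# encoding leaves 8 payload bits per 10 line bits; 8 bits per byte.
line_rate_mbit=6000
payload_mbit=$(( line_rate_mbit * 8 / 10 ))   # 4800 Mbit/s of payload
payload_mbyte=$(( payload_mbit / 8 ))         # 600 MB/s per port
echo "${payload_mbyte} MB/s"
```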

So there are my thoughts and opinions on the subject.
Good luck with your system.
Gene
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,681
Ok, but I maintain it is not incorrect.

You can maintain whatever you like, but this is compsci, and we're talking demonstrably factual issues. You are incorrect. If you post your incorrectness here, you will get significant pushback.

Let's start by defining what the "main controller" is. From the operating system's point of view the controller it booted from is the "main" one. So, one could reasonably argue that I should have said "boot controller", but I am parochial enough to view everything from the OS's point of view.

Now, the Proxmox documentation says (in reference to PCI passthrough) "the host must not use the card" and proceeds to tell you how to blacklist the driver via modprobe.

I maintain the host system will need to use the disk controller it boots from, and that would preclude using that controller with PCI passthrough.

If your position is that I should have said "boot disk controller" instead of "main disk controller", I'll buy that, but you would have made things clearer to future readers of this thread if you had said so.

If I have misinterpreted the documentation, or you know something I don't, I'm anxious to know what you know.

I'm not interested in Bill Clinton-style wordsmithing over the definition of the term "main". The fact of the matter is that you could boot the hypervisor off an NVMe SSD or a USB thumb drive. Neither of those has a "disk controller" as the term is normally understood, so you can definitely pass the system board's "main disk controller" through to a guest OS. It is also possible to add a different storage controller for boot. For example, design requirements in the professional world often require that system storage be protected, let's say RAID1, so it isn't unusual to add in an LSI 9270CV-8i and a trio of disks or SSDs to get fully protected storage for the hypervisor. The onboard SCU or SATA controller is actually useless for this, so despite being the "main" controller, it's unusable for anything other than passthru.

Finally, you can boot your hypervisor via PXE and pass through *all* storage devices to a guest, or distribute them amongst guests (plural), so even the things you say in the message I've quoted in this post are incorrect, because there is no reason that you need a hard disk controller of any sort to boot. And that brings us around to my original request: please do not post incorrect information; it adds confusion to an already complicated area for newbies. I had already described an option in post #21 that suggested using an NVMe SSD for Proxmox boot, which would have allowed the poster to pass through the X11SSH's controller, and which was literally right above your "can't pass the controller" comment.

Because a reasonable solution that disproved your comment in post #22 had already been posted as the last two paragraphs of post #21 right above it, I didn't feel it particularly necessary to go into detail when I admonished you in #23. Please try to remember that participants, even moderators, are community members unless they have an "iXsystems" badge. It's particularly rude to go around demanding that people re-explain to you something that's already been discussed in-thread. I don't get paid to manufacture bespoke explanations, or to participate on these forums at all, for that matter.
 

mikecentola

Cadet
Joined
Oct 29, 2020
Messages
2
Hey folks!

Sorry to beat a dead horse again and I'm obviously playing with a non-recommended setup here, but I have a couple questions.

Hardware is a Dell R510 w/ H700 hw raid card & 12 x 3TB SAS drives + 2 x 240GB SSD RAID1. It is part of my proxmox cluster of 6 servers.

Proxmox is installed on the SSDs with an extra 100G LVM-thin pool. I was able to create a VM with a 32G LV for TrueNAS and got it installed with no issues at all. The plan is to pass the single 18TB RAID10 LUN to TrueNAS. I know I won't get any of the features of ZFS and will have to rely solely on the hardware RAID for failure detection, but I've come to terms with that; I'm essentially using TrueNAS because I love the interface and want an easy way to set up the SMB and NAS shares for my employees.

I'm struggling with getting the qemu-guest-agent port to work. I was able to clone the repo (QEMU Guest Agent patched for FreeBSD), and it complained at first about "Cannot open /usr/ports/Mk/bsd.port.mk", which I saw in the issues means it needs the full ports tree to compile.

I ran portsnap fetch and portsnap extract which completed without any issues.

When I go to compile qemu-guest-agent I'm getting this error:

Code:
make: "/usr/ports/Mk/bsd.port.mk" line 1175: Unable to determin Os version
Either define OSVERSION, install /usr/include/sys/param.h or define SRC_BASE.
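
(An aside on that error: OSVERSION is FreeBSD's numeric __FreeBSD_version, which the ports framework normally reads from /usr/include/sys/param.h; when that header is missing, the value can be supplied to make by hand. A hypothetical sketch, using a stand-in string for the real param.h line:)

```shell
# sys/param.h carries a line like: #define __FreeBSD_version 1202000
# (1202000 corresponds to FreeBSD 12.2). Extract the number and pass
# it to make as OSVERSION. The variable below is a stand-in for the
# real header line, since this system is missing /usr/include/sys/param.h.
param_line='#define __FreeBSD_version 1202000'
osversion=$(echo "$param_line" | awk '{print $3}')
echo "$osversion"
# then, as an illustrative workaround: make OSVERSION="$osversion"
```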


I am very familiar with linux variants, but haven't worked with FreeBSD in many many years. Any help would be awesome!
 

Tomek_

Cadet
Joined
Nov 13, 2020
Messages
6
Do the build in a jail instead:
  • configure the essentials and install ports-mgmt/portdowngrade
  • portdowngrade emulators/qemu to r538082 or perhaps port the aborche patches to the newer version
  • build from aborche qemu repo pointing your MASTERDIR to the downgraded qemu port
  • install manually from jail to your TrueNAS following the instructions from aborche

Before you can run the agent, you need to get virtio_console running, which isn't available on TrueNAS. You'll need to compile the module yourself, so go to your jail again and follow the instructions for building a kernel; you'll need to check out the 12.2 release to your /usr/src, then build modules/virtio and manually move the module back to TrueNAS.

Hope this helps.
 

mikecentola

Cadet
Joined
Oct 29, 2020
Messages
2
Do the build in a jail instead:
  • configure the essentials and install ports-mgmt/portdowngrade
  • portdowngrade emulators/qemu to r538082 or perhaps port the aborche patches to the newer version
  • build from aborche qemu repo pointing your MASTERDIR to the downgraded qemu port
  • install manually from jail to your TrueNAS following the instructions from aborche

Before you can run the agent, you need to get virtio_console running, which isn't available on TrueNAS. You'll need to compile the module yourself, so go to your jail again and follow the instructions for building a kernel; you'll need to check out the 12.2 release to your /usr/src, then build modules/virtio and manually move the module back to TrueNAS.

Hope this helps.

Thank you. I could probably follow that with some googling. I'm still trying to decide whether to keep plugging away with TrueNAS in a VM on top of hardware RAID10, or to just drop back to a Debian or CentOS server with Webmin and do everything manually.
 

fsociety3765

Explorer
Joined
Feb 2, 2021
Messages
61
Do the build in a jail instead:
  • configure the essentials and install ports-mgmt/portdowngrade
  • portdowngrade emulators/qemu to r538082 or perhaps port the aborche patches to the newer version
  • build from aborche qemu repo pointing your MASTERDIR to the downgraded qemu port
  • install manually from jail to your TrueNAS following the instructions from aborche

Before you can run the agent, you need to get virtio_console running, which isn't available on TrueNAS. You'll need to compile the module yourself, so go to your jail again and follow the instructions for building a kernel; you'll need to check out the 12.2 release to your /usr/src, then build modules/virtio and manually move the module back to TrueNAS.

Hope this helps.
Hi,

I've been trying to decipher your post with regard to achieving this. Are you able to elaborate? I have a jail set up for the build. The first step seems to be to get virtio_console up and running. In the same jail I have installed git and cloned the latest TrueNAS build repo to /usr/build. You mention following the instructions, but I'm not sure which instructions you mean. The instructions for the repo I cloned would, I guess, do a full build of TrueNAS, where I just need to build the virtio module, right?

Are you able to advise?

Any help would be greatly appreciated.

Thanks,

FS
 

Tomek_

Cadet
Joined
Nov 13, 2020
Messages
6
Check out vanilla FreeBSD in your jail:
svn co https://svnweb.freebsd.org/base/release/12.2.0/ /usr/src

Build just the virtio module:
cd /usr/src/sys/modules/virtio
make


Your .ko binary is going to be somewhere like (depending on architecture):
/usr/obj/usr/src/amd64.amd64/sys/modules/virtio/console/

Copy that somewhere local and load the module with kldload.
 

fsociety3765

Explorer
Joined
Feb 2, 2021
Messages
61
Check out vanilla FreeBSD in your jail:
svn co https://svnweb.freebsd.org/base/release/12.2.0/ /usr/src

Build just the virtio module:
cd /usr/src/sys/modules/virtio
make


Your .ko binary is going to be somewhere like (depending on architecture):
/usr/obj/usr/src/amd64.amd64/sys/modules/virtio/console/

Copy that somewhere local and load the module with kldload.
Thanks! Made some progress.

Checked out the FreeBSD source and built the virtio module. I have a virtio_console.ko now, which I have moved to /boot/kernel.

But when I run kldload, I just get "Operation not permitted". I have tried "kldload virtio_console", "kldload virtio_console.ko", and "kldload /boot/kernel/virtio_console.ko".

All return:
Operation not permitted

Is this perhaps some jail setting? I have had a look over the settings for the jail, but nothing sticks out at me that would allow this operation.

Any ideas?

Thanks.
 

fsociety3765

Explorer
Joined
Feb 2, 2021
Messages
61
You need to load the module on your TrueNAS kernel, outside of the jail.
Awesome. Thanks.

Got that loaded. I copied it to the /boot/modules directory and also edited loader.conf to load the module on boot.

I have cloned the aborche repo down inside the jail. It mentions something about editing some file before doing the make, but I don't seem to have this file: /usr/ports/emulators/qemu.

If I try to just run make inside the repo, it throws an error:

root@qemu-guest-agent-builder:~/qemu-guest-agent # make
make: "/usr/share/mk/bsd.port.mk" line 32: Cannot open /usr/ports/Mk/bsd.port.mk
make: "/root/qemu-guest-agent/Makefile" line 174: Malformed conditional (${ARCH}
== "amd64")
make: "/root/qemu-guest-agent/Makefile" line 178: Malformed conditional (${ARCH}
== "powerpc")
make: "/root/qemu-guest-agent/Makefile" line 182: Malformed conditional (${ARCH}
== "powerpc64")
make: "/root/qemu-guest-agent/Makefile" line 186: Malformed conditional (${ARCH}
== "sparc64")

I see in your list of things there is something about ports. Are you able to point me in the right direction? I have so far installed ports-mgmt/portdowngrade, but I'm not sure I understand what I need to do with it.

Thanks,

FS
 

Tomek_

Cadet
Joined
Nov 13, 2020
Messages
6
The aborche repo looks to have been updated to support qemu 5.0.1, so I'm assuming the portdowngrade step is no longer necessary. Do you have the ports tree installed in your jail? If not, do
Code:
portsnap fetch
and
Code:
portsnap extract
before attempting to build the agent.
 

fsociety3765

Explorer
Joined
Feb 2, 2021
Messages
61
The aborche repo looks to have been updated to support qemu 5.0.1, so I'm assuming the portdowngrade step is no longer necessary. Do you have the ports tree installed in your jail? If not, do
Code:
portsnap fetch
and
Code:
portsnap extract
before attempting to build the agent.
OK. That's done.

The make completed successfully. I ran make install, but I'm assuming that actually needs to be run on TrueNAS itself? Do I need to copy some files to my jail mount point to get this last bit done?

Feels like I'm very close now. Thank you.

FS
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,702
also edited the loader.conf to load the module on boot
You need to create a loader tunable, not edit the file directly, if you want it to keep working after a reboot.
Type: LOADER
Variable: virtio_console_load
Value: YES
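
For what it's worth, that tunable amounts to the following loader setting (shown only to illustrate what gets set; create it through the Tunables UI as described above rather than editing loader.conf by hand, since direct edits don't survive TrueNAS updates):

```shell
# Illustrative only -- the LOADER tunable corresponds to this line
# in the boot loader configuration:
virtio_console_load="YES"
```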
 