Is it difficult to use the OSS4 sound drivers in TrueNAS?

Cabanel

Dabbler
Joined
Dec 12, 2022
Messages
15
Hello, I've been using FreeBSD as my daily driver on a desktop for several years. I'm comfortable with OSS4 and the sysctl options for 'bitperfect mode' and 'real-time audio' in FreeBSD, and I can configure musicpd. I also have extensive experience with Linux systems and with ALSA, pavucontrol, PulseAudio and PipeWire.
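To give an idea of what I mean: the knobs I use are the sound(4) sysctls, roughly like this (the pcm device number is specific to my machine, so take this as a sketch):

```shell
# Feed the DAC untouched samples: bitperfect mode bypasses the
# kernel mixing/resampling for this pcm device.
# Check `cat /dev/sndstat` for your own device number.
sysctl dev.pcm.0.bitperfect=1

# Lower buffering for lower latency (0 = lowest, 10 = highest).
sysctl hw.snd.latency=2

# Make the analog output I actually use the default unit.
sysctl hw.snd.default_unit=0
```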

I will be purchasing the following hardware for the NAS:

I wonder whether one of the TrueNAS versions can use the audio drivers from FreeBSD (or Linux)?

As you can see, the hardware I'm going to use has connections for analog audio and it's obviously a huge asset if I can play music and movies directly from the NAS. Is this possible on TrueNAS? If this is not possible or very difficult, I might be better off installing FreeBSD directly and manually configuring it as a NAS.

Regards!
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
I wonder whether one of the TrueNAS versions can use the audio drivers from FreeBSD (or Linux)?

TrueNAS is an appliance OS. It is not meant to be modified or have drivers added to it, and it is missing lots of the tools one would need to manipulate the system this way: no compilers, no linkers, no include files. If you do try to force the issue, expect your NAS updates/upgrades to break your changes, and the changes may also break your ability to do those updates.

Instead, you need to do something along the lines of a jail or virtual machine where you then make the host hardware available to the jail/virtual machine. Inside those environments, you have a lot more freedom to do as you wish.
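As a rough sketch of the jail route for the audio part (untested here, and the ruleset number is arbitrary): devfs hides the host's sound devices from jails by default, so you unhide them with a devfs ruleset and point the jail at it.

```shell
# Append a devfs ruleset that exposes the host's sound devices
# to the jail (the ruleset number 100 is arbitrary).
cat >> /etc/devfs.rules <<'EOF'
[devfsrules_jail_audio=100]
add include $devfsrules_hide_all
add include $devfsrules_unhide_basic
add include $devfsrules_unhide_login
add path 'dsp*' unhide
add path 'mixer*' unhide
EOF

# Reference the ruleset from the jail's entry in /etc/jail.conf:
#   devfs_ruleset = 100;
service devfs restart
```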
 

Cabanel

Dabbler
Joined
Dec 12, 2022
Messages
15
Instead, you need to do something along the lines of a jail or virtual machine where you then make the host hardware available to the jail/virtual machine. Inside those environments, you have a lot more freedom to do as you wish.
Wouldn't it be easier to do it the other way around? Use FreeBSD as the host, then virtualize TrueNAS in VirtualBox or bhyve (on FreeBSD). That seems easier than what you describe.

I've looked around a bit and it does indeed seem to be the case that for certain use cases it's better to use a full distro instead of something that focuses on NAS: https://forum.kodi.tv/showthread.php?tid=350766

I recommend the custom route with an actual linux server os. Ubuntu for me. I have also used a pure Debian base. They give me flexibility. I can do all kinds of other things with my "nas" that one can't with a proper nas device. It does take a bit more to get running right, but the benefits outweigh the added work. Once it's setup though it's something even Ron Popeil would be proud of. Set it and forget it.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Wouldn't it be easier to do it the other way around? Use FreeBSD as the host, then virtualize TrueNAS in VirtualBox or bhyve (on FreeBSD). That seems easier than what you describe.

It might seem like it. But you are talking to the person who wrote the book on virtualizing TrueNAS.


Neither bhyve nor VirtualBox are known to be stable or safe for virtualizing TrueNAS. TrueNAS is not just some random crap-arse webserver VM or other trite virtualization workload; you really need top notch virtualization support to have even a CHANCE of having it work correctly. That limits you to Proxmox (experimental) or ESXi (known to be solid on appropriate server hardware).
 

Cabanel

Dabbler
Joined
Dec 12, 2022
Messages
15
Neither bhyve nor VirtualBox are known to be stable or safe for virtualizing TrueNAS. TrueNAS is not just some random crap-arse webserver VM or other trite virtualization workload; you really need top notch virtualization support to have even a CHANCE of having it work correctly. That limits you to Proxmox (experimental) or ESXi (known to be solid on appropriate server hardware).
Aren't you exaggerating a bit with that statement? I've tried three different Linux distros (Alpine Linux, MX Linux and Clear Linux) in VirtualBox on FreeBSD, and I noticed something very unexpected: after more than 21 hours of use I have not had a single bug or hiccup. Normally, in the first hours of installing and using a bare-bones Linux distro, I notice several bugs. That was not the case with three completely different distros on VirtualBox + FreeBSD. I think you're unfairly critical of the stability of VirtualBox on FreeBSD 12.3.

On the other hand, you may be right that pure FreeBSD as a NAS will be even more stable, and you do indeed want 100% stability instead of 99.5%. But don't you think that VirtualBox + FreeBSD will get close enough to 100% stability for certain things? What I think I'll do is install TrueNAS in VirtualBox on my desktop first and see how stable it is with long-term use. If I don't run into any issues, I guess I could try using it on the new hardware? I read your link, but I don't think the situations mentioned there are realistic in my case.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Aren't you exaggerating a bit with that statement?

No, I'm not. Proper virtualization of TrueNAS requires the passthrough of your storage controller (SATA AHCI, PCH SCU, LSI HBA, etc.) which is known to be nonexistent, flaky, or poorly implemented on many hypervisor implementations. The demands of the I/O loads presented via PCIe passthru to the storage controller require an implementation of passthru that doesn't have issues or randomly crap out. Not 99.5%. It really does need to be 100%. VMware, with its technical staff of over a thousand developers, has invested a huge amount of time in perfecting these features, including plain PCIe passthru, SR-IOV, and VF support, so that their large enterprise customers who are paying for the high performance hundred thousand dollar quad CPU hypervisors decked out with all the SAN I/O and flash storage and 100Gbps ethernet are happy. ESXi had mastered this a decade ago when I wrote the virtualization articles. In the meantime, Proxmox has introduced "experimental" (their word, not mine) support for PCIe passthru back in 2018, which seems to work reasonably well on many, but not all, platforms.

The next ones I expect to catch up are Xen/XCP-ng, due to its maturity and heavy use in the cloud hosting industry, and then also Hyper-V, which was hobbled for a long time by a lack of support for FreeBSD.

The desktop "Type 2" hypervisors are generally not suitable because they usually don't support PCIe passthru.

I noticed something very unexpected: after more than 21 hours of use I have not had a single bug or hiccup.

Why's that unexpected? You have common operating systems operating on an idealized hardware platform. This means that you're not going to have to cope with the ins and outs of Realtek ethernet chipsets performing poorly, flaky PC architecture AHCI controllers that aren't quite up to spec, etc. There's a lot of value in the idealized hardware platform offered in the virtual environment. It sands off a lot of the rough edges of the PC world.

But don't you think that VirtualBox + FreeBSD will get close enough to 100% stability for certain things?

Of course it will. But the goal in these forums is not "for certain things". You COULD build a TrueNAS VM for the exclusive purpose of hosting jails or containers; this trivializes the workload back down to what I previously referred to as a "crap-arse webserver VM or other trite workload". We don't expect you to be doing that. There are better solutions for doing those things, operating systems designed specifically for those use models. Here in the TrueNAS Forums, we expect you to be trying to build a TrueNAS network attached storage. It is assumed that you are going to have a bunch of disks, requiring lots of disk and network I/O. We expect that your NAS will be a significant workload for a hypervisor, and, more specifically, it will be really heavy and demanding on the I/O, which means that certain lightweight hypervisors need not apply.

I read your link, but I don't think the situations mentioned there are realistic in my case.

Fine. I lean little-l libertarian (not the way-out crazy sort), which means that I respect your right to be incorrect and make your own mistakes, and I will not and know I cannot force you to listen to the things I'm saying. But it might be worth noting that I've been sitting here for a decade working with people on virtualizing their FreeNAS hosts, and in that time, I've taken the time to document what's been found to work, as well as what has been found not to work. Your choice.
 

Cabanel

Dabbler
Joined
Dec 12, 2022
Messages
15
The demands of the I/O loads presented via PCIe passthru to the storage controller require an implementation of passthru that doesn't have issues or randomly crap out. Not 99.5%. It really does need to be 100%.
There is no such thing as 100%. I agree with you that you have to try your best to get as close as possible to it. I also agree that TrueNAS barebones with the right hardware is extremely close to that 100% data security, maybe closer than any other solution.

bhyve/NVMe isn’t just faster than bhyve/VirtIO—it’s faster than KVM/VirtIO as well.
It's also stable now, so what's your problem with bhyve/NVMe? Its I/O throughput is enormous.

VMware has the problem of a very large code base and is actually slower than bhyve in (many) workloads. With millions of lines of code, the issue is that simplicity is the best security; you can assume VMware has plenty of 'zero days' that hackers may or may not be aware of. They recently had another security issue on Windows with the highest 'vulnerability severity rating' I've ever seen.

I'm not going to use this NAS in a life-critical context. A family member is a photographer and wants to be able to backup the photos she saves on her MacBook Pro. That's the main reason for the NAS, but I think it would also be nice if it could play audio and video directly from the NAS hardware. That's why FreeBSD + TrueNAS in a VM seems so perfect to me. I just installed TrueNAS in a VM on FreeBSD and so far it works flawlessly and relatively fast for my slow hardware: https://i.ibb.co/48sHs6k/pekwm-screenshot-20221213-T163140-1920x1080.png

She will be using it mainly for simple backups via e.g. rsync, and (maybe) things like Nextcloud, Plex Media Server, Transmission, Sonarr, .. What is your opinion of using it this way?
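For the backup part I'm thinking of nothing fancier than an rsync one-liner run from her MacBook; the user, hostname and paths here are just placeholders:

```shell
# Mirror the photo library to a dataset on the NAS over SSH.
# -a preserves permissions and timestamps, -z compresses in transit,
# --delete makes the NAS side an exact mirror (drop it if deleted
# local files should survive on the NAS).
rsync -az --delete ~/Pictures/ backup@nas.local:/mnt/tank/photos/
```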
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
There is no such thing as 100%.

When thousands of systems operate 24/7 for years and years, which at this point has been established, and the rough edges are known, then the difference between 99.999999999% and 100% is effectively meaningless. We know ESXi's hardware passthru to be completely stable on certain platforms with the cards that are needed for TrueNAS. This is essentially the same thing I mean when talking about HBA's in the RAID controller sticky. There are billions of aggregate problem-free run-hours on the LSI HBA's and we understand this to mean "100% or so close as not to matter."

It's also stable now, so what's your problem with bhyve/NVMe? Its I/O throughput is enormous.

I didn't say I had a problem with that. So, if I understand what you're saying, PCIe passthru works swimmingly well for NVMe devices on bhyve. There aren't a lot of people who care, because even iXsystems has admitted that bhyve has lots of other problems, and has given up using it internally. The problem is that high performance NVMe would need to be closely coupled with very high speed networking, which bhyve doesn't seem to be particularly competent at. I say that, pained, because I would love to toss VMware into the fiery depths of hell and instead have a great hypervisor based on BSD, which I've been using since the '80's, and am intimately familiar with. The problem is that the use case for hypervisor-based TrueNAS installs is primarily lower performance hard disk based NAS units, which -- as you may have guessed from the community documents -- I've been running successfully for more than a decade now.

My own experiences with bhyve are more along the lines of the iXsystems experience. I have a build farm that does massively parallel builds of FreeBSD, but it requires a crushing amount of workload on the cluster. To reduce this, I also have a bhyve based system that iterates over the image list and sequentially updates any that are out of date. The really cool thing, the thing I desperately wanted, was for this to be able to build its images inside a BSD RAM disk, reducing wear on local flash resources and theoretically being faster, being in RAM and all. The ability to fully script stuff from within the shell environment was amazing. Unfortunately, it would periodically lock up.

VMware has the problem of a very large code base and is actually slower than bhyve in (many) workloads. With millions of lines of code, the issue is that simplicity is the best security; you can assume VMware has plenty of 'zero days' that hackers may or may not be aware of. They recently had another security issue on Windows with the highest 'vulnerability severity rating' I've ever seen.

That's as may be, and it is certainly inconvenient to do your patching of VMware hypervisors if you want 100% NAS uptime. However, both TrueNAS and ESXi require occasional patching, and FreeBSD has the occasional security notice as well. I'm not seeing anything compelling here. ESXi is basically the Cadillac of hypervisors and is much more likely to be compatible with random workloads. However, a lot of the hazards in the VMware ecosystem have to do with software from other projects. A year ago, we listened with some horror over lunch as VMware's Bob Plankers gave insights into Log4j in the very early days of that issue; it turns out that it is helpful to have a company with a large engineering department able to remediate those issues.

What is your opinion of using it this way?

I don't know that you've given me a specific use case. In any case, it doesn't really matter. As I said before, I respect your right to make your own mistakes and take responsibility for what you do. I would personally be hesitant to commit anything important to a virtualized NAS on a platform I don't have confidence in, with an unproven design. There's no guarantee that it must fail, and there are certainly a few alternative paths to success. But I would rather take the well-tested path, because I have better things to do than argue about how many angels can fit on the head of a pin, or discover the hard way why a given idea is the wrong way to do it.
 

Cabanel

Dabbler
Joined
Dec 12, 2022
Messages
15
Thanks for all the info.

Something I've found is people saying they've been running a recent version of Windows Server on bhyve for over a year without a single blue screen, so not one crash. And that was three years ago.

I'm surprised to hear that you don't like bhyve's network performance. In VirtualBox it is by no means a significant bottleneck on my system, the network itself is the biggest bottleneck.

bhyve is much faster in networking than VirtualBox: https://people.freebsd.org/~novel/misc/vboxbhyvebench_oct2016/iperf.png
For my usage scenario bhyve is many times faster at networking than what is needed.
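(For anyone who wants to reproduce that kind of measurement, an iperf3 run between the guest and another machine on the LAN is the usual yardstick; the hostname is a placeholder:)

```shell
# On the VM guest: start an iperf3 server.
iperf3 -s

# On another LAN host: a 30-second throughput test,
# then the reverse direction with -R.
iperf3 -c truenas-vm.local -t 30
iperf3 -c truenas-vm.local -t 30 -R
```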

I'm going to test both VirtualBox and bhyve extensively and make a choice based on that experience which of the two it will eventually be.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Something I've found is people saying they've been running a recent version of Windows Server on bhyve for over a year without a single blue screen, so not one crash. And that was three years ago.

During the same time that iX was busy ripping it out of their network? Interesting. Kinda vaguely. Oh, wait, I've got VM's that are up for hundreds of days too. That's not really a difficult trick. On ESXi. On bhyve, I seem to get lockups every few days.

I'm surprised to hear that you don't like bhyve's network performance.

Why? It's been known to be problematic for years, long enough that the Foundation finally threw some funding at it for FreeBSD 13. Large environments are probably still impractical; some of us run hundreds of VM's per system, which isn't something that is known to work well on bhyve.

bhyve is much faster in networking than VirtualBox:

Nobody uses VirtualBox for any serious virtualization.
 

Patrick M. Hausen

Hall of Famer
Joined
Nov 25, 2013
Messages
7,776
Something I've found is people saying they've been running a recent version of Windows Server on bhyve for over a year without a single blue screen, so not one crash. And that was three years ago.

I'm surprised to hear that you don't like bhyve's network performance. In VirtualBox it is by no means a significant bottleneck on my system, the network itself is the biggest bottleneck.
During the same time that iX was busy ripping it out of their network? Interesting. Kinda vaguely. Oh, wait, I've got VM's that are up for hundreds of days too. That's not really a difficult trick. On ESXi. On bhyve, I seem to get lockups every few days.

I am one of those running Windows Server 2016 on TrueNAS. Without a single hiccup. I might have to add that there is virtually no load on these two Windows servers. We are an open source company. All servers performing actual work - disk, network, computing - are FreeBSD or Ubuntu.

We have two Windows servers to run Active Directory. Only Active Directory.

@Cabanel what you are possibly missing from @jgreco's posts is the assumption that you want to put load on a NAS server: reading and writing gigabytes, if not terabytes, of data daily. At least that's what we build NAS systems for. And no, a VM inside VirtualBox is not up to that task. Not even the hypervisor is.

HTH,
Patrick
 

Cabanel

Dabbler
Joined
Dec 12, 2022
Messages
15
Why? It's been known to be problematic for years, long enough that the Foundation finally threw some funding at it for FreeBSD 13. Large environments are probably still impractical; some of us run hundreds of VM's per system, which isn't something that is known to work well on bhyve.
That is from the context of a large company that virtualizes huge amounts of data. I have informed you that it is a NAS for home users. The desktop computer I am currently using is less powerful than the hardware of the new NAS build, yet TrueNAS in a VM on my desktop is responsive for me over the network, and the performance is more than high enough when transferring data. It might also help you to know that the network we're talking about won't go faster than 140 Mb/s in nPerf and may never go over 40 Mb/s in real-life situations.

All your advice is always about enterprise situations, and I understand why: that is what TrueNAS is most used for. But hopefully you also realize that it is equally useful for home users. The build you saw in the link costs less than $300 and is more than powerful enough to run TrueNAS fast, stable and secure (even in a VM).

I know TrueNAS recommends 8GB RAM, but I've been using FreeBSD with ZFS on my desktop for over 4 years with 4GB RAM. I have never had a full system crash or a single corrupted file in that time, so 4GB RAM seems very safe for TrueNAS (for home users, I mean). On Windows/macOS/Linux it would be a miracle to go 4 years without a full system crash, considering how actively I use my desktop; I have used all three systems and they crash easily. What I mean is that FreeBSD, which is less widely tested than Windows/macOS/Linux, is much more stable and robust. How frequently something is used in companies does not always determine how good and stable it is.

To use bhyve in a gigantic company in a production environment, that is of course 'risky'. Which doesn't change the fact that it can be more stable than VMware or KVM in specific situations. But for smaller companies, lab environments and home use I think there are few if any real downsides to mention with bhyve. There are even important advantages over the best virtualization tech:

When configured properly, bhyve guests perform similarly to VMware and KVM guests, and in some cases outperform them.

In terms of security, there are two advantages that bhyve has:
- The more compact codebase makes it easier for security experts to audit, and reduces the chance of bugs and possible exploits
- bhyve is used less and is therefore not an attractive target for hackers to invest time and resources in

By the way, what I frequently read on the FreeBSD reddit is the following:
I have run Windows in bhyve with no issues.

The NAS is for home use, and I often see that NAS devices for home use have only a limited number of USB/LAN ports and, in exceptional cases, one or two HDMI ports. Ultimately, what people do with them in home-use situations 99% of the time is similar to what you can achieve with a cheap USB drive or a cheap (micro)SD card. I feel that the NAS systems now being sold to consumers are far too expensive for the limited number of simple tasks most home users do with them.

My setup has high-definition audio/mic connections on the front and back, 7.1 audio, fast PCI Express, lots of fast storage, D-sub, plenty of USB connections, connections for mouse and keyboard, HDMI, and so on. With FreeBSD + TrueNAS in a VM I can simply do everything a standard home-user NAS does, with high performance and much more besides, and the entire system consumes very little power too. This setup gives me 10x more flexibility and options than a standard home NAS, and my build is cheaper than many of the ready-made solutions.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
All your advice is always about enterprise situations, and I understand why,

I should have stopped reading at this line. You're saying that to the guy who has written so much material aimed at making complicated topics accessible to SOHO and hobbyist users; I virtually never talk about enterprise situations, because iXsystems enterprise support typically handles those (I've seen enterprise customers in this forum maybe a dozen times over the years). But foolishly I didn't stop, and instead I pressed on...

I know TrueNAS recommends 8GB RAM, but I've been using FreeBSD with ZFS on my desktop for over 4 years with 4GB RAM.

You misspelt 16GB, and you're saying this to the person that raised the minimum required to 8GB many years ago. Which turned out to be sufficiently insufficient within the last year that they raised it to 16GB. Your desktop experience with FreeBSD is not relevant. There are people who use FreeBSD+ZFS on sub-1GB systems, but the fact is that TrueNAS really does need at *least* 8GB for stable operation. The need to support ARC for two pools (boot and main), the rather bloated size of the middleware (IMO), the need for network buffers (mbufs) for expanded network receive and transmit queues, etc., etc., are all very different than your desktop use model.

When configured properly, bhyve guests perform similarly to VMware and KVM guests, and in some cases outperform them.

Basically irrelevant. I can go out to Bonneville and hop in a rocket car, and wow it outperforms my SUV by a lot, but at the end of the day I need a street legal vehicle that is also fuel efficient, capable of carrying cargo, pleasant to drive on long trips, compatible with passengers, easily refueled now and then, etc.

In that same way, guest performance isn't really a huge factor as long as it doesn't suck, but an inability to keep VM's functional for more than a day or two is a critical factor in an environment with hundreds of VM's.

I have run Windows in bhyve with no issues.

That is a crappy and meaningless yardstick. Who cares about running Windows? Some of us want to be able to run real workloads with stability. Even if you're just a hobbyist with five or six VM's, you don't want to have to be restarting various VM's many times per week just because bhyve locked up.
 

Etorix

Wizard
Joined
Dec 30, 2020
Messages
2,134
That is from the context of a large company that virtualizes huge amounts of data. I have informed you that it is a NAS for home users.
Wow! You sure know how to write.

Now you need to learn about reading answers to your posts, and could do with improved skills in assessing situations.
From a grand total of six posts, all in this thread, you're picking an argument with the forum's ever-helpful, well-beloved and revered Resident Grinch, who has close to 17k posts in over eleven years. We're here to help, but you're setting yourself up for a rather cold welcome.

ZFS in general and TrueNAS in particular are quite resource hungry. 16 GB is the minimal recommended RAM for a basic setup. This also applies to basic home setups. Virtualising TrueNAS is dangerous. Also, and perhaps especially, in home settings.
If you do want to proceed with virtualisation, you've been pointed to guidelines to follow. The hypervisor will need more than 16 GB RAM. ESXi with proper HBA passthrough is the sole recommended option. You're welcome to be a guinea pig with HBA passthrough in Proxmox. Anything else is likely to fail and lose all your data. (I suspect that the photographer does not keep her whole archive on her MacBook and would not appreciate losing it all to a crash of your little home NAS.)

If you do want to have an audio-enabled NAS with a small footprint, your best option is to start from a regular FreeBSD or Linux distribution and set it up for server duties for your specifications. There will be no GUI niceties but you can trim the system down to save resources.

In terms of security, there are two advantages that bhyve has:
- The more compact codebase makes it easier for security experts to audit, and reduces the chance of bugs and possible exploits
- bhyve is used less and is therefore not an attractive target for hackers to invest time and resources in
Security by obscurity… That's an option, but probably not the best one.
 
Last edited:

ChrisRJ

Wizard
Joined
Oct 23, 2020
Messages
1,919
@Cabanel , I will probably not change your thinking, but will still try. And if I don't succeed, perhaps it is helpful to others.

There is in my view one fundamental flaw in your reasoning, and it is one we see relatively frequently here (so you are certainly not the only one). It is the approach to "test" something on a single system (not 100, 1000 or 10000) under normal conditions for a short time (not 5+ years), see no obvious errors, and then conclude that the solution works. In reality, though, here is what has really happened:
  • Irrespective of all further points, the results are anecdotal. If they are positive, I am really happy for you! But to conclude anything from that at a generalized level is wrong. It would be like me saying "It is not dangerous for any person in the world to be stung by a wasp", because I am lucky enough not to have an allergy to it. But my mother will die from the same thing happening to her unless she gets ER treatment in a hospital in less than 30 minutes. (And that is with her carrying an injection kit 24/7 to extend the 10 minutes she would otherwise "last".) And what if I have developed an allergy since the last time I was stung?
  • The results of your test are a relatively superficial observation (at least that's how I read it) of some casual playing around. No failure scenarios (hardware, software, electricity issues, thermal issues, etc.) were applied. It is the equivalent of an address form in an online shop that was tested only with ZIP codes containing purely numbers. As soon as the first UK resident wants to order something, there is going to be a problem.
  • The real test is how a system handles adverse conditions. I do software development for business-critical back-end systems as my day job. What I have learned over the decades is that in a reliable and resilient system, only about 20-30% of the code does the work that the business is interested in. The rest is about checking input data, checking things like disk space and network connections, and loads of logging in case you missed a check somewhere. (Plus fun things like performance and security, of course.) You only survive (more or less) with an immense amount of paranoia. It is like putting on your seat belt when driving a car: I have never needed mine, but that doesn't make me think it is unnecessary.
  • You are arguing that your use case is somewhat lightweight, in terms of load and criticality of the data. The first part I agree with, although I draw other conclusions; more on that in a second. The second part (about criticality) is in conflict with your description of one user being a photographer. The latter always means the data are business-critical, even if no money is attached; I have yet to meet a person who would be relaxed about losing their photos. As to the workload, a low one will reduce stress on components in most situations. But there are moments, especially if there is a problem with a hardware component, where there is no room for things not working perfectly.
What all the people I know expect from a NAS (as opposed to a single USB disk) is to "guarantee" data safety above all else. Additional functionality is by definition a lower priority. Even if you require, e.g., the video surveillance part of Synology, that will still be priority 2, even if it is mandatory. Those are different things, although they are commonly not distinguished properly.

I wanted to write something about your number of posts vs. that of @jgreco , but @Etorix beat me to that.
 

Cabanel

Dabbler
Joined
Dec 12, 2022
Messages
15
Thanks for the replies, there are still a few things I'd like to get a little more clarity on.

ZFS in general and TrueNAS in particular are quite resource hungry. 16 GB is the minimal recommended RAM for a basic setup. This also applies to basic home setups. Virtualising TrueNAS is dangerous. Also, and perhaps especially, in home settings.

-I want to believe it, but I don't see why it would need so much RAM. I virtualized TrueNAS yesterday (and today) with 1.97 GB RAM allocated to TrueNAS. That's not because I think 1.97 GB of RAM is enough; I am using a desktop that simply has only 4 GB RAM in total for the whole system, and you know it's a test setup. TrueNAS on its own has never crashed and shows no sign of ever crashing (with only 1.97 GB RAM). What did happen: when installing Nextcloud, the installer crashed halfway through and reported that no RAM was available. In the end this wasn't a huge problem either, as when I tried to install it again it resumed from the point where it had crashed and completed the entire installation. The intention is to run Nextcloud and two or three similar services, and 4.5 GB of RAM allocated to the VM seems sufficient for that to me. Do you agree that nothing strange will happen with TrueNAS if it has access to 4.5 GB RAM and runs a maximum of 4 services similar to Nextcloud?

My FreeBSD system with the pekwm window manager has about 60 MB of active RAM usage when idle. All the rest of the RAM can go to other applications/processes if they need it.

The hypervisor will need more than 16 GB RAM. ESXi with proper HBA passthrough is the sole recommended option. You're welcome to be a guinea pig with HBA passthrough in Proxmox.
-Have any large-scale tests been done that show statistically that ESXi or Proxmox is really more stable than VirtualBox on FreeBSD? VirtualBox on FreeBSD is very stable for me, and always has been. I've also used VMware on openSUSE in the past to virtualize macOS, and I have often virtualized Windows on various virtualization software. In my experience, 'what is commonly thought' is not always correct. Statistics are also not always easy to interpret correctly, if mistakes were not already made when designing the sample; but statistics are better than nothing. So my question: are there many independent statistics on the stability and security of the various virtualization packages out there? (Preferably tested on different operating systems such as FreeBSD.)

Anything else is likely to fail and lose all your data.
-If you use AHCI as a controller for two different .vhd volumes on two different hard drives, can you have a safe RAID configuration? It is already unlikely that the VM will ever crash. Because it seems completely stable. Suppose it were to crash, that doesn't mean there would be data loss at all, would it? I haven't encountered a corrupt file in over 4 years on FreeBSD. And even if data were corrupted somewhere during a crash, I would still have the RAID protection. I don't see why it would be unsafe.

If you do want to have an audio-enabled NAS with a small footprint, your best option is to start from a regular FreeBSD or Linux distribution and set it up for server duties for your specifications. There will be no GUI niceties but you can trim the system down to save resources.
-Agreed, but that may make it too complicated for the NAS user to configure new things. For me this would work. But for the person who would use the NAS, it is useful to be able to use the (TrueNAS) GUI.

Apart from these things, I would like to report how well (or badly) it currently works. The stability of the VM doesn't seem to be an issue; I don't think it's ever going to crash, though I know I haven't tested it long enough to be sure. The CPU usage of my entire FreeBSD system + the TrueNAS VirtualBox VM + the Nextcloud plugin hovers around 1.4% on average, with a very short spike to about 30% roughly every 15 seconds that disappears immediately (not a fixed pattern, just how it behaves on average). So it seems efficient enough to me to use as a real NAS setup in practice.

Logging into the TrueNAS GUI from a cell phone or laptop via Wi-Fi is a rather snappy experience with the current test setup; I don't see any problems there. It changes when I use Nextcloud: logging in is rather slow, but fast enough, and switching between menus in Nextcloud is not snappy, but doable. Uploading and downloading large files in the GUI is fast enough 'on the network', though there's room for improvement when I do it 'on the host'. About the network, I think I have a network bottleneck, which means it cannot really go faster on systems connected via Wi-Fi. For the download/upload performance in Nextcloud I don't see any significant problems, except that I would expect it to go faster on the host itself.

One glitch I have noticed in Nextcloud: if I play multiple audio/video files in succession, Nextcloud's audio/video player no longer starts new files. I'm only talking about starting the audio/video; it has no problems while playing. I can easily work around this by switching to another menu in Nextcloud, after which the player works again. I won't go into detail here because this is probably a Nextcloud glitch that has nothing to do with TrueNAS. There are no real showstoppers in using Nextcloud on my current test setup that allocates 2 GB RAM to TrueNAS.

My VirtualBox .vdi is installed on the SSD. As an experiment I also moved the .vhd volume for the Nextcloud zone to a slow HDD, but I see no performance difference when I switch between SSD and HDD for that volume. I don't know where the bottleneck for upload/download performance in Nextcloud is when I'm on the host (FreeBSD). Maybe it's VirtualBox; maybe bhyve is faster.
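To pin down whether the limit is the Wi-Fi network or the virtualization layer, a quick host-to-VM throughput test could help. This is a sketch using iperf3; the address is an example, substitute your VM's, and the package may need installing on the host:

```shell
# Inside the TrueNAS VM, start an iperf3 server first:
#   iperf3 -s
# From the FreeBSD host (or from a Wi-Fi client), run for 10 seconds:
iperf3 -c 192.168.1.50 -t 10   # 192.168.1.50 = example VM address
```

If the host-to-VM number is far above ~23 MB/s (~184 Mbit/s) while Wi-Fi clients top out around there, the network is the bottleneck; if the host-to-VM number is also low, the VirtualBox network or disk path is the more likely suspect.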

My general conclusion at the moment is that it is doable via VirtualBox. In terms of stability I don't see a problem, so the only potential issue I've seen is performance: the upload/download bottleneck in Nextcloud. Although this does not play a role on my current network, it would of course be better to resolve it for the situations where I use Nextcloud in a browser on the host itself. CPU usage could also be improved, although it is very acceptable. So I think bhyve can potentially lower the CPU usage and improve the upload/download bottleneck in Nextcloud.

One last thing I would like to mention, normally for TrueNAS you always need a disk to install the OS on and then one or more disks for storage. Suppose you have a large SSD, then I can run everything on FreeBSD + VB on one SSD. You do have the risk that that one disk can fail, but my point is that it is nice to have this feature.
 

Arwen

MVP
Joined
May 17, 2014
Messages
3,611
Thanks for the replies, there are still a few things I'd like to get a little more clarity on.

...

One last thing I would like to mention, normally for TrueNAS you always need a disk to install the OS on and then one or more disks for storage. Suppose you have a large SSD, then I can run everything on FreeBSD + VB on one SSD. You do have the risk that that one disk can fail, but my point is that it is nice to have this feature.
This clarifies TrueNAS and its separate boot disk;
I have no comments on the rest, (it was a bit long).
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Have any large-scale tests been done that show statistically that ESXi or Proxmox are really more stable than what you get with VirtualBox on FreeBSD?

No, of course not. There is no magic lab of a thousand ESXi machines, a thousand Proxmox machines, and a thousand VirtualBox machines, with someone keeping crash statistics.

Well, wait, there sort of is. And it's me. The way this really works is that I've been working with people here on the forums for many years. A long time ago, we had problems with ESXi users who had dodgy setups with RDMs or full-disk VMDKs, and we had failures of these setups. The configuration that seemed not to fail in catastrophic ways was to pass through a PCIe disk controller directly to FreeNAS. After noticing the pattern, we identified that if you did PCIe passthrough on certain Supermicro systems newer than certain generations, it was rock stable and no one was experiencing problems with those setups.

In a similar way, I noticed that we had a lot of AMD APU users coming in with 4GB of RAM who had tried ZFS and ended up with catastrophic pool corruption. We never did get to the specific issue there, since it was infrequent, and the users generally did not have the skills to debug, but it became clear that it was memory related and not APU related, so I set the minimum memory requirement for the FreeNAS project to 8GB for ZFS users. This is well documented.

I have been busy documenting what works, what might not work, and what doesn't work for more than a decade. Obviously I do not have a lab of thousands of machines, but I do have access to thousands of users. I am conservative and cautious because I gather people are using ZFS for data storage reliability.

You are welcome to do whatever you want though.

Do you agree that no strange things will happen with TrueNAS if it has access to 4.5 GB RAM and runs a maximum of 4 services similar to Nextcloud?

I doubt anyone agrees with that. I wouldn't expect it to work with zero services. You will be stressing the hell out of the ARC, the middleware will force processes to be swapped out, and performance will be terrible. The minimum memory requirement is 16GB, and adding services like Nextcloud or Plex increases this number. It appears that certain Samba configurations are a contributor towards that 16GB, so if you don't use SMB, perhaps you can still get away with 8GB.
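One way to see the memory pressure described here for yourself: on a TrueNAS CORE (FreeBSD) shell you can read the ARC counters directly. A small sketch, using the standard FreeBSD ZFS sysctl names:

```shell
# Current ARC size and its target maximum, in bytes:
sysctl kstat.zfs.misc.arcstats.size kstat.zfs.misc.arcstats.c_max
# FreeBSD's top also prints an "ARC:" summary line near the header:
top -b -d1 | head -n 12
```

On a RAM-starved VM you would expect the ARC to be squeezed to a small fraction of c_max, which is exactly the "stressing the hell out of the ARC" scenario.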

Either way, it is worth noting that iXsystems is not targeting the "small memory" crowd. They're writing the software to run on their TrueNAS Enterprise systems, which used to be sold with a minimum of 64GB RAM, but I believe is now a higher number.
 

Cabanel

Dabbler
Joined
Dec 12, 2022
Messages
15
The minimum memory requirement is 16GB, and adding services like Nextcloud or Plex increases this number. It appears that certain Samba configurations are a contributor towards that 16GB, so if you don't use SMB, perhaps you can still get away with 8GB.

I have (now) allocated the VM some more RAM: 2400MB. When I monitor RAM and SWAP usage it seems that this is enough to boot TrueNAS (and it is set to boot Nextcloud as well). In any case, no SWAP memory is used after a full startup of TrueNAS + Nextcloud. Maybe 4500MB would indeed be a bit tight, I'll allocate 5600MB RAM to the 'real build' just to be sure and then I can check if it's sufficient for all the tasks.
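For the record, the RAM and swap monitoring mentioned above can be done from the TrueNAS CORE shell with stock FreeBSD tools; a minimal sketch:

```shell
swapinfo -h      # swap devices and how much of each is in use
vmstat 5         # free memory plus paging activity, sampled every 5 s
```

Non-zero and growing page-out numbers in vmstat while the services are actually being used would be the early warning that the allocation is too tight, even if swapinfo looks fine right after boot.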

I've been monitoring the performance in Nextcloud a bit more. When uploading, the performance fluctuates rapidly between 14 MB/s and 23 MB/s. Downloading large files over 500 MB is surprisingly stable and always stays above 23 MB/s. I mentioned that Nextcloud's menus were slow: some menus are slow the first time you open them, e.g. 'files' and 'photos' ('photos' especially was very slow), but when you open them again later it goes much faster, around 5 seconds to open the menu completely. Other menus such as 'activities' are always snappy.

In the end I think this performance could be better, but it's OK. I'm losing a bit of performance with VirtualBox. And Nextcloud itself is not as fast as it could be: much of the front end is JavaScript and the server side is PHP, neither of which is known for raw speed. And MySQL is also generally slower than PostgreSQL.

I've read that Nextcloud and Samba don't work well together, so I wasn't planning on that.

I only have two questions at the moment:
1. The IDE controller in VirtualBox had 'Use Host I/O Cache' enabled automatically. With the AHCI controller it was automatically disabled, and I ticked it on. Should this be on or off for the .vhd device on the AHCI controller?
2. Do you know if you can virtualize TrueNAS via bhyve? I assumed this would be possible but I'm not sure.
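On question 1, the same setting can also be flipped from the FreeBSD host with VBoxManage; the VM and controller names below are examples, so check yours first:

```shell
# List the VM's storage controllers and their current settings:
VBoxManage showvminfo "truenas" | grep -i storage
# Disable Host I/O Cache on the (example) AHCI controller named "SATA":
VBoxManage storagectl "truenas" --name "SATA" --hostiocache off
```

For a guest running ZFS, host-side write caching is the riskier choice, since it is one more layer that can acknowledge writes that never reached the disk; turning it off does not make virtual disks safe for ZFS, it just removes one lie in the chain.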

Can I use the bhyve instructions for FreeBSD and simply replace the FreeBSD .iso with the TrueNAS .iso, as in the example below?
Code:
# sh /usr/share/examples/bhyve/vmrun.sh -c 1 -m 1024M -t tap0 -d guest.img -i -I TrueNAS-13.0-U3.1.iso guestname
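On question 2: TrueNAS CORE is a FreeBSD-based guest, so the FreeBSD bhyve instructions broadly apply. Besides the raw vmrun.sh invocation, the vm-bhyve front end may be more convenient; a sketch, assuming a pool named zroot (names, paths and sizes are examples):

```shell
pkg install vm-bhyve
sysrc vm_enable="YES" vm_dir="zfs:zroot/vm"
vm init
# Copy the sample guest templates into the datastore:
cp /usr/local/share/examples/vm-bhyve/* /zroot/vm/.templates/
# Place the installer ISO in the datastore's .iso directory
# (path depends on where zroot/vm is mounted, commonly /zroot/vm):
cp TrueNAS-13.0-U3.1.iso /zroot/vm/.iso/
vm create -t freebsd -s 32G truenas
vm install truenas TrueNAS-13.0-U3.1.iso
vm console truenas
```

Note that the warnings elsewhere in this thread still apply: this gives TrueNAS a virtual disk, which is fine for testing but not for real data unless you pass through a real controller.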
 

Etorix

Wizard
Joined
Dec 30, 2020
Messages
2,134
-I want to believe it but I don't see/understand why it would need so much RAM usage. I virtualized TrueNAS yesterday (and today) with 1.97 GB RAM allocated to TrueNAS.
I don't know exactly where the RAM usage goes in TrueNAS (ARC, SMB, middleware, whatever). The recommended minimum is 16 GB for CORE 13 and 8 GB for SCALE, without any VM, container or other extra. @jgreco explained how these figures have been validated, and we generally trust the recommendations.
A test setup with little test data may have been temporarily happy with 2 GB. Do NOT stretch your luck with a real amount of real data.

The intention is to run Nextcloud and two or three similar services. 4.5 GB of RAM allocated to the VM seems sufficient for this to me.
If so, this figure comes on top of the basic 8-16 GB recommendation, not within it.

Do you agree that no strange things will happen with TrueNAS if it has access to 4.5 GB RAM and runs a maximum of 4 services similar to Nextcloud?
I agree that "no strange thing" will happen: the system will badly lack memory and accordingly behave badly.
Just like crossing the crocodile pond on a paper bridge will end badly in an entirely predictable manner.

-If you use AHCI as a controller for two different .vhd volumes on two different hard drives, can you have a safe RAID configuration? It is already unlikely that the VM will ever crash. Because it seems completely stable. Suppose it were to crash, that doesn't mean there would be data loss at all, would it?
Oh yes it will. ZFS assumes actual physical control of the drives and requires certainty as to when a piece of data is firmly committed to stable storage. Virtual drives take this certainty away.
Virtual drives are fine for tests and learning to play with the GUI. No real data should ever be trusted to virtual disks. Never ever.
It only takes one crash of the hypervisor with critical data in flight (data that ZFS trusted was committed to disk but was actually held in the hypervisor cache) to lose the pool. ZFS constantly generates and rewrites pool-critical metadata, and there is no zfsck. This is not a case of "bad things may happen"; this is "bad things WILL happen". 100%, and some more.

If you do want to have an audio-enabled NAS with a small footprint, your best option is to start from a regular FreeBSD or Linux distribution and set it up for server duties for your specifications. There will be no GUI niceties but you can trim the system down to save resources.
-Agreed, but that may make it too complicated for the NAS user to configure new things. For me this would work. But for the person who would use the NAS, it is useful to be able to use the (TrueNAS) GUI.
Fair enough. But for your own sake, make your user-friendly NAS setup out of OpenMediaVault or any other non-ZFS solution. In an unsupported configuration, ZFS's heavy use of metadata and the lack of repair/recovery tools will actively harm you.

One last thing I would like to mention, normally for TrueNAS you always need a disk to install the OS on and then one or more disks for storage. Suppose you have a large SSD, then I can run everything on FreeBSD + VB on one SSD. You do have the risk that that one disk can fail, but my point is that it is nice to have this feature.
If you were to do it the other way around, with TrueNAS on bare metal and the audio player and other services in a VM, I'd be open to hacking your way to a single drive holding both a system partition and a VM/container partition, on the basis that the valuable data is in a safe HDD pool which TrueNAS has full physical control over, that a failure of the non-standard boot+VM drive will "only" require the pain of a reinstall, and that you take full responsibility for the non-standard setup and the self-inflicted complexity it involves.
ZFS storage on virtual disks is just asking to lose your data. Full stop.
 