Is it difficult to use the OSS4 sound drivers in TrueNAS?

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
The usual answer to that question back then was that XFS and ext crashed randomly all the time, and that you were far more likely to lose data during a power cut with XFS and ext than with ZFS on FreeBSD.

That's true, but a random Quora discussion does not make ZFS the "default recommended" filesystem.

I think ZFS has been the default choice for the FreeBSD installer for quite some time now. It is correct to say that ZFS is now the default.

I just checked on FreeBSD 13.0. Both UFS and ZFS are offered as options by the installer. There is no "default." It would be incorrect to call something a default where it is not a default. It has been this way for some time now.

There are also the performance differences, which can sometimes be significant depending on the workload. In general benchmarks, ZFS comes out faster than UFS in most areas.

ZFS can certainly be faster where there are memory and CPU resources available. However, ZFS has (had?) trouble working in the sub-1G RAM department; maybe this is no longer the case, perhaps @Patrick M. Hausen is familiar with the current state here. For those of us who run services on small virtual machines (128MB, 256MB) ZFS is simply not viable.
 

Cabanel

Dabbler
Joined
Dec 12, 2022
Messages
15
One of the problems ZFS used to have was that, at various points in the past, only a handful of people on the FreeBSD side were working on it; the FreeBSD version is now based on OpenZFS (formerly ZFS on Linux), which has a far larger pool of contributors. That is more likely to result in better stability.
More people working on something does not always yield a better result. Think, for example, of Haskell, which was developed by large teams from the brightest academic circles. Dr Mark Tarver wrote Qi all on his own, and Qi is extremely similar to Haskell. Qi is objectively superior to Haskell in several key areas.

Another example: Think of all the developers who use an unproductive programming language like JavaScript. A team of 500 JS developers is going to be less productive than a team of 100 developers using Rebol.

FreeBSD also often benefits from work moving to Linux. Think, for example, of the LifeCam HD-3000, one of the most popular webcams Microsoft has ever produced. It no longer works in Windows 10/11, but it works fine in FreeBSD via webcamd, even though FreeBSD's development team is surely smaller than Microsoft's.

Productivity also plays a big role. There are plenty of large teams that accomplish little more than drinking a cup of coffee every day and making small talk.
 

Cabanel

Dabbler
Joined
Dec 12, 2022
Messages
15
Both UFS and ZFS are offered as options by the installer. There is no "default." It would be incorrect to call something a default where it is not a default. It has been this way for some time now.
It may be a subtle change that not everyone notices, but the installer used to look like this: https://i.stack.imgur.com/esLn2.png

Now the installation looks like this: https://www.youtube.com/watch?v=j7hUHqjwyZc&t=230s

This is not an alphabetical ranking. My question to you is: why do you think the FreeBSD team decided to change this to the current layout?

That statement seems to ignore the many situations in which the installer for FreeBSD defaults to offering ZFS.

Yes, it is about a zillion times better than UFS. Switching to ZFS should be a complete no-brainer with anything that has 4GB of RAM or better. I'd still go for it even with 2GB boxes; I had nothing but pain with UFS for years. Garbage filesystem.
 

Patrick M. Hausen

Hall of Famer
Joined
Nov 25, 2013
Messages
7,776
ZFS can certainly be faster where there are memory and CPU resources available. However, ZFS has (had?) trouble working in the sub-1G RAM department; maybe this is no longer the case, perhaps @Patrick M. Hausen is familiar with the current state here. For those of us who run services on small virtual machines (128MB, 256MB) ZFS is simply not viable.
No, sorry. I don't run anything with less than 2G anymore. I do discourage ZFS for VMs, though, when colleagues ask me. The reason is that thin provisioning of disk images, whether on VMFS or as zvols on ZFS, does not stay thin, due to the copy-on-write nature of ZFS: every write inside the guest lands on a fresh block, so even the lightest write load will eventually fill your entire image.

I use Ext4 or UFS2 inside VMs and snapshot/backup from outside.
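The copy-on-write effect described above can be illustrated with a toy model (a simplification added for illustration, not how ZFS actually allocates): an in-place filesystem reuses the same physical block for repeated overwrites, while a CoW filesystem writes each update to a fresh block, and without TRIM/UNMAP the host-side image never learns that the old blocks are free, so the image grows toward the full volume size.

```python
def image_blocks_used(num_overwrites: int, volsize_blocks: int, cow: bool) -> int:
    """Toy model: how many host blocks a thin-provisioned image occupies
    after the guest overwrites the *same* logical block num_overwrites times.

    Hypothetical simplification: no TRIM/UNMAP, so the host never reclaims
    blocks the guest filesystem has stopped using.
    """
    touched = set()  # physical blocks the host had to allocate
    next_free = 0
    for _ in range(num_overwrites):
        if cow:
            # copy-on-write: every overwrite lands on a fresh physical block
            touched.add(next_free)
            next_free += 1
        else:
            # overwrite-in-place: the same physical block is reused
            touched.add(0)
    # the image can never occupy more blocks than the volume actually has
    return min(len(touched), volsize_blocks)
```

With 1000 overwrites of a single logical block on a 100-block volume, the in-place model stays at 1 allocated block while the CoW model saturates all 100: the "guest fills your entire image" effect.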
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
This is not an alphabetical ranking. My question to you is: why do you think the FreeBSD team decided to change this to the current layout?

So your inference is that they "changed" what they intended to be the "default", but couldn't be arsed to change what the documentation says in several places?

It could be that they just wanted to encourage more people to use ZFS and test things out. I know, having lived through the era, that lots of people were hesitant back in the 7.0 days to use ZFS, for example.

Hey, look, I just booted up my installer image and my mouse cursor is hovering over the line that says "UFS". Surely this was intentional signalling by the FreeBSD developers that UFS is the default.

There is no evidence here of an intentional change of "default", and you seem to need to torture the concept a bit to read into a menu that offers two options a "default" which isn't there. Surely they could tack "(Default)" onto it if they wanted it to be seen as a default. They didn't.

That statement seems to ignore the many situations in which the installer for FreeBSD defaults to offering ZFS.

That just seems to be someone who subscribes to your weird logic inferring default.

Yes, it is about a zillion times better than UFS. Switching to ZFS should be a complete no-brainer with anything that has 4GB of RAM or better. I'd still go for it even with 2GB boxes; I had nothing but pain with UFS for years. Garbage filesystem.

What a load of tripe. I've got several thousand UFS filesystems under management here, and it is super-rare to have any sort of issue with them. I am far more terrified of ZFS, in that there is no recovery toolset if there happens to be corruption or a problem with the disk; once an error is introduced into a ZFS pool, it is very hard to expunge it short of dumping the pool and reloading it. That's really ugly.
 
Joined
Jun 15, 2022
Messages
674
Missing from this thread: I want to run TrueNAS on my Xbox.
:tongue:
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Missing from this thread: I want to run TrueNAS on my Xbox.

Don't you mean, I want to run Xbox on my TrueNAS CORE using PCIe passthru to an emulator running in Kubernetes on SCALE?

We don't halfarse our ridiculousness around here, at least not usually.
 
Joined
Jun 15, 2022
Messages
674
Don't you mean, I want to run Xbox on my TrueNAS CORE using PCIe passthru to an emulator running in Kubernetes on SCALE?

We don't halfarse our ridiculousness around here, at least not usually.
I'm new; I didn't want people to take me seriously. :wink:
 

ChrisRJ

Wizard
Joined
Oct 23, 2020
Messages
1,919
Could I please have MS Multiplan 1.06 on CP/M 3.0 on my Commodore 128? Or Turbo Pascal 3.0? SCNR
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Damn, I wish I hadn't gotten rid of all the Commodore gear. It would be real fun to see if I could get ZFS up and running on the pair of 8250's and 4040 I had, on the SuperPET 9000. I had an eclectic mix of hardware back in the day, including a Commodore 16, Ohio Scientific C4P, the Fortune 32:16, etc. Now so many years later, I actually have the space to store stuff but I foolishly got rid of it years ago. :-/
 

Arwen

MVP
Joined
May 17, 2014
Messages
3,611
Weirdly enough, Solaris 11 DOES use ZFS in virtual machines, called Logical Domains. That's through special processor support first introduced in the SPARC T1 processor.

Of course, Solaris 11 LDOMs, (aka Logical Domains), tend to use SAN LUNs directly passed from the I/O Domain, (or all in one Control Domain), to the LDOM(s).

And yes, Solaris ZFS works perfectly fine on SAN LUNs, even with their own RAID of whatever flavor. Those SAN devices are, of course, enterprise grade (EMC, Hitachi, etc.). The only time we saw pool corruption it was our own fault (clustered Control Domains attempting to bring up an LDOM on more than one Control Domain... as I said, our fault).
 
Joined
Jun 15, 2022
Messages
674
Damn, I wish I hadn't gotten rid of all the Commodore gear. It would be real fun to see if I could get ZFS up and running on the pair of 8250's and 4040 I had, on the SuperPET 9000. I had an eclectic mix of hardware back in the day, including a Commodore 16, Ohio Scientific C4P, the Fortune 32:16, etc. Now so many years later, I actually have the space to store stuff but I foolishly got rid of it years ago. :-/
We must think alike...I was thinking I'd be lucky just to get a Commodore PET on 10Base-T Ethernet.
 