Comments on: Open ZFS vs. Btrfs | and other file systems https://www.truenas.com/blog/open-zfs-vs-btrfs/ Fri, 16 Feb 2024 19:51:21 +0000
By: Matthew https://www.truenas.com/blog/open-zfs-vs-btrfs/#comment-5595 Mon, 20 Jul 2020 03:58:07 +0000 https://www.ixsystems.com/?p=57392#comment-5595 Canonical has provided support for OpenZFS in Ubuntu since (I believe) Ubuntu 16.04. Prior to that, the easiest way to get ZFS support on Ubuntu was to install a third-party ZFS package that would compile and install ZFS for you. A (sometimes painful) problem with that approach: sometimes, after you installed a new Canonical-provided kernel, the third-party ZFS compilation would fail. If you then rebooted into the new kernel, there would be no ZFS module for that kernel, and you would be without ZFS support.
So, Canonical’s ZFS support is convenient and more reliable: every kernel update from Canonical includes a corresponding ZFS kernel module, and both the module and the associated tools come from Canonical-maintained packages in the official Ubuntu repositories.
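For illustration, on a reasonably recent Ubuntu release the whole thing is a single package install (the kernel module itself already ships with Ubuntu’s kernel packages); this is just a sketch, not tied to any particular release:
$ sudo apt install zfsutils-linux    # userland tools from Canonical’s repositories
$ sudo modprobe zfs                  # module is prebuilt for the running Ubuntu kernel
$ modinfo zfs | grep -i ^version     # confirm which ZFS version matches this kernel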
There are various opinions as to whether and how the ZFS license (CDDL) conflicts with the Linux license (GPLv2). I believe Canonical’s position is that everything is okay because they provide ZFS as a kernel module (as opposed to building ZFS directly into the kernel itself). Some GPL proponents claim that Canonical’s approach is (or might be) a violation of the GPL; I don’t believe I have ever heard anyone claim it is a violation of the CDDL.
As for ZFS support in the official Ubuntu installer, a quick web search indicates that may have arrived in Ubuntu 19.10.

By: Bob Son of Bob https://www.truenas.com/blog/open-zfs-vs-btrfs/#comment-5594 Sun, 15 Mar 2020 23:24:44 +0000 https://www.ixsystems.com/?p=57392#comment-5594 In reply to Chris.

Can’t I make the same monolithic argument about NetworkManager (in implementation) and systemd (in principle)?

By: Dan https://www.truenas.com/blog/open-zfs-vs-btrfs/#comment-5593 Fri, 21 Jun 2019 21:57:19 +0000 https://www.ixsystems.com/?p=57392#comment-5593 In reply to GreyGeek.

“Broken by design” doesn’t mean that the brokenness is deliberate (which would indeed be absurd to suggest without some pretty strong evidence), just that fundamental design decisions result in an irredeemably broken product. Two examples of this in btrfs deal with parity RAID: (1) parity isn’t checksummed, and (2) there’s a write hole.
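For context, this is why the usual advice for btrfs parity RAID is to keep metadata out of the raid5/6 profiles entirely and only use parity for data; a hypothetical three-disk layout (device names are made up):
$ sudo mkfs.btrfs -f -d raid5 -m raid1 /dev/sdb /dev/sdc /dev/sdd   # data on raid5, metadata mirrored
$ sudo mount /dev/sdb /mnt/pool
$ sudo btrfs filesystem df /mnt/pool   # shows the Data and Metadata profiles actually in use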

By: david https://www.truenas.com/blog/open-zfs-vs-btrfs/#comment-5592 Sun, 18 Nov 2018 21:36:29 +0000 https://www.ixsystems.com/?p=57392#comment-5592 In reply to Evi1M4chine.

Yes, I realize I am posting to an old article, but I have to ask: did you even read the article at all?
The article:
I have moved OpenZFS-formatted multi-terabyte USB drives from my FreeNAS system to a Raspberry Pi 3 running FreeBSD and run my backup routine without issue
Your post:
Worse even on small single-board computers, like those ARM devices that are so popular nowadays. (E.g. Raspberry PI.) Those would make very nice NAS solutions
Do let me know how my 16 GB RAM server has mounted 53 TB of storage and still functions as a small app server.
According to your post, wouldn’t this machine need at least 53 GB of RAM for ZFS?
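For reference, the ZFS read cache (the ARC) can simply be capped if RAM is tight; a minimal sketch, with a made-up 4 GiB limit:
$ echo "options zfs zfs_arc_max=4294967296" | sudo tee /etc/modprobe.d/zfs.conf   # ZFS on Linux: applied when the module loads (value in bytes)
$ echo 4294967296 | sudo tee /sys/module/zfs/parameters/zfs_arc_max               # or change it on a running system
$ sysctl vfs.zfs.arc_max                                                          # FreeBSD/FreeNAS exposes the same limit as a tunable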

By: Mike "Donnie Baker" S. https://www.truenas.com/blog/open-zfs-vs-btrfs/#comment-5591 Sun, 18 Nov 2018 13:12:25 +0000 https://www.ixsystems.com/?p=57392#comment-5591 Opinions are good, bad and ugly. Sometimes all at once.
Thanks for letting us know one nerd’s take on OpenZFS.
The only thing standing in ZFS’s way (to first-tier Linux support) is a company springing up and heading the charge. I’m about 75 percent sure this will eventually happen with ZFS on Linux, sometime within the next decade.
However, another measure is money. Synology is huge and seems to think Btrfs is good enough for business, and they compete with iXsystems directly. We could go all day and into 2020 with points and counterpoints. The licensing issue is a weak objection, but it will eventually be moot with enough money and time behind the right project. I’m not in any position to judge that last point for certain; I’m just offering my IT pro/business take on it.
To be fair, BSD-ish-ness has a role to play on the back end, and it is bulletproof and battle-tested for the right mission. But the same can be said for Linux too. The last point I’ll make is that today I can run the Linux kernel natively on my Microsoft Windows 10 machine. No one would have seen that coming in 2000. Do BSD or ZFS have a similar anecdotal hero epic, where the enemy capitulated so thoroughly?

By: Divyank Shukla https://www.truenas.com/blog/open-zfs-vs-btrfs/#comment-5590 Fri, 06 Jul 2018 12:19:17 +0000 https://www.ixsystems.com/?p=57392#comment-5590 https://www.diva-portal.org/smash/get/diva2:822493/FULLTEXT01.pdf
For ZFS and btrfs comparisons.

By: kamtaot https://www.truenas.com/blog/open-zfs-vs-btrfs/#comment-5589 Tue, 29 May 2018 10:27:31 +0000 https://www.ixsystems.com/?p=57392#comment-5589 Very nice article! I learned some new things from it. I have been using FreeNAS since September 2017 and have not faced any issues so far. I do not have any experience with Btrfs (except that I installed openSUSE Leap 15 a couple of days back). That is when I decided to check the difference between these file systems and found this article.
I do not know many of the technical details of ZFS, but I am a highly satisfied FreeNAS user, and ZFS is probably one of the reasons why.

By: c3d https://www.truenas.com/blog/open-zfs-vs-btrfs/#comment-5588 Tue, 15 May 2018 16:32:47 +0000 https://www.ixsystems.com/?p=57392#comment-5588 Thanks for the good writeup. Indeed, having Red Hat “pull the plug” on btrfs is a bit disappointing.
Now, to be fair, btrfs may be a bit unstable in some cases. Within my first week of using btrfs on Fedora, I lost data to a filesystem bug, which had not happened to me in eons, except with zfs (see below). The initial problem was apparently that I had not done what GreyGeek suggests regarding qcow2 storage files for KVM virtual machines, and I ended up with a file containing a few million extents, something that btrfs apparently had some trouble with. The second problem, compounding the first, was that btrfsck just died looking at the disk, which is not cool.
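(For anyone hitting the same thing: the usual mitigation is to mark the VM image directory NOCOW so newly created qcow2 files don’t balloon into millions of extents; a sketch with a hypothetical path:)
$ sudo chattr +C /var/lib/libvirt/images   # only affects files created after the flag is set
$ lsattr -d /var/lib/libvirt/images        # the 'C' attribute should now be listed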
That being said, I had been using btrfs for a while on a Synology NAS with nary a peep, so it’s not like you lose data every day when you use btrfs. But there is some truth to the fact that it’s not completely mature yet… The bug with btrfsck was relatively basic. I sent a fix to the mailing list, so at least now it does not die, though it still does not know how to repair the disk.
So was my experience with ZFS any better? Well, as I hinted above, not quite. It’s annoying that you have to add it manually on Linux because of license conflicts. That means it’s not an option for my Synology NAS, for example. It also has some administrative traps. For example, the first time I used it on a Mac external USB disk, I forgot to “export” before unmounting the disk, tried to attach it to another computer, and could not find my way out of that (it may be simple, it’s just that having to “export” when you unmount a disk is a rather unusual step, and how to recover after you did that is apparently non-obvious, if at all possible). So I lost about 350G of pictures to ZFS that day, though fortunately that was only data copied for a quick test, so no real harm done. Frankly, I almost gave up on ZFS that day.
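(For the record, the intended workflow looks roughly like the sketch below; the pool name is hypothetical, and whether the forced import would have rescued my disk that day I honestly can’t say.)
$ sudo zpool export usbpool      # do this before unplugging the disk
$ sudo zpool import              # on the other machine: list pools visible on attached disks
$ sudo zpool import usbpool      # import a cleanly exported pool
$ sudo zpool import -f usbpool   # force the import if the pool was never exported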
I returned to ZFS recently for my largest disk array (12T in RAID5, 5 disks). I did it mostly because it’s portable, unlike btrfs, so I can read my disk both from Linux and macOS (as long as I export it ;-). And I do agree that from an admin point of view, it’s quite nice. But as I said, not without quirks.
For instance, another somewhat surprising thing is that when things go south, ZFS by default suspends I/Os on a pool. I can guess the rationale, but if your pool is a single disk, you might as well return an error immediately. Right now, I have a bad disk which I used to test ZFS corner cases. Created a sparse image on it. It failed. Good, ZFS refuses to corrupt my backup; bonus points for that. But then, things went from bad to worse. First, the pool is in “UNAVAIL” state because of “too many I/O errors”, and the disk is in “FAULTED” state. That makes sense, but as I wrote earlier, if it’s a single-disk pool with no redundancy, what does ZFS hope to achieve by suspending I/Os instead of returning an error? Magic-based restoration of the lost bits?
What is even more frustrating is that I cannot unmount it (probably because there are some I/Os pending). I cannot even see the I/O errors, because it tells me “errors: List of errors unavailable (insufficient privileges)” (as root). So now I have this bad disk which ZFS won’t release no matter what I try, but that I cannot write to. Hmmm, looking in my dictionary, “bug” is the correct way to describe that behaviour.
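(Digging around afterwards, this behaviour seems to be governed by the pool’s failmode property; a sketch with a hypothetical pool name:)
$ zpool get failmode scratch                 # 'wait' (the default) suspends I/O when all paths to a device fail
$ sudo zpool set failmode=continue scratch   # return EIO to new writes instead of hanging
$ sudo zpool clear scratch                   # once the device is back, resume a suspended pool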
So ZFS is not all bad, but frankly, if it’s “really as good as people say it is” to quote your article, I hope the people in question are not me, because after losing some data to a good disk due to an easy-to-make mistake, and then having to wait for a system reboot to be able to get rid of a bad disk, I’d say there is still some room for improvement 🙂 [In my face: I’m a developer, I love open source, I should just stop complaining and fix it, right?]
Still, I’m a bit puzzled why Apple gave up on ZFS and decided to build their own APFS instead. I suspect it was mostly a strategic decision related to their need to have it work well on small devices such as the Apple Watch. But it’s a bit of a shame. At least, you can use ZFS for data disks and I suspect once I get a good enough disk, it will prove quite usable.

By: John Judenrein https://www.truenas.com/blog/open-zfs-vs-btrfs/#comment-5587 Wed, 09 May 2018 11:33:26 +0000 https://www.ixsystems.com/?p=57392#comment-5587 > I invite you to start that journey with a simple question: “Can you verify without a doubt that your data has not suffered from bit rot?” I look forward to your answer.
Yes.
$ sudo btrfs scrub status /
scrub status for
scrub started at Wed May 9 12:27:17 2018 and finished after 00:03:44
total bytes scrubbed: GiB with 0 errors
You have very low standards if you think being able to do that is somehow impressive.
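For what it’s worth, the ZFS equivalent is exactly as unremarkable (pool name is hypothetical):
$ sudo zpool scrub tank
$ zpool status tank   # shows scrub progress/completion and any checksum errors found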

By: Corrodias https://www.truenas.com/blog/open-zfs-vs-btrfs/#comment-5586 Fri, 10 Nov 2017 23:37:56 +0000 https://www.ixsystems.com/?p=57392#comment-5586 In reply to Evi1M4chine.

Please tell me more about the limits of ZFS. As far as I’m aware, as long as you’re not using de-duplication, the memory requirements for ZFS are pretty tame: maybe 1 GB. It wouldn’t be able to cache much, but it doesn’t need to cache much on a desktop, any more than any other OS needs to cache lots of disk I/O.
Async writes would simply mean the last second or two of writes didn’t get saved, which isn’t usually the end of the world. A sudden power loss will accomplish the same thing whether or not your application *believed* that it finished writing your videogame save just before the power went out. But I think you could run it with all sync writes if you really wanted to.
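And yes, forcing synchronous semantics is just a dataset property; a sketch with a hypothetical dataset name:
$ sudo zfs set sync=always tank/data     # treat every write as synchronous
$ sudo zfs set sync=standard tank/data   # back to the default (honour what applications request)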
