
Why I prefer ZFS over btrfs


TRS-80


I am not a big fan of btrfs in general.  But rather than shit up other people's threads where they mention it, I decided to create my own thread here where I can lay out my arguments; then I can link to it instead, and people can read it if they want.

 

Some people are not going to like to hear this, so I will back it up by saying I have done a fair amount of research into these advanced filesystems over the last several years.  It is a topic of great interest to me.  At the same time, I view all knowledge as provisional, and maybe some things have changed in the meantime, so discussion and factual corrections are welcomed.

 

I will also try and spare the reader my typical walls of text by breaking this up into some sections.  :D

 

Why write this now?

 

We are finally getting some hardware (thanks to Kobol, and others) which makes this feasible (on ARM, which I feel is preferable to x86 for a NAS), so I am seeing an influx of people into the Kobol Club, some of them mentioning that they are using btrfs.  So I thought maybe now is a good time to write this stuff down.

 

License

 

Anyone who knows me can tell you I am quite a proponent of the GPL (I am much more of a "Free Software" than an "Open Source" person, for example).  The reasons for that are in the links, but I mention this because btrfs could be considered the "Free Software" preferred choice, as it is licensed under the GPL, while ZFS has a lot of questions around the CDDL license.  And yet I still prefer ZFS.  I am just trying to highlight that I don't "hate" btrfs; in fact I root for it to succeed.  But there are just too many technical and other reasons why I prefer ZFS.  So I will try to lay out the rest of those now.

 

Data Loss

 

Probably my biggest argument against btrfs.  I just read way too many reports of data loss on btrfs, which is totally unacceptable IMO in one of these "advanced filesystems" that are supposed to prevent exactly that.

 

In fairness, I have heard btrfs proponents say that this only applies to certain RAID modes (5?), and that if you mirror it's not a problem.  However, any mention at all of data loss has just totally put me off btrfs.

 

Development Resources

 

Another very strong argument.  Lawrence Livermore National Laboratory has received massive federal funding to develop ZFS on Linux for many years now (the project nowadays is officially re-branded as "OpenZFS", but in truth almost all the development went into ZoL in the past, and still continues that way, regardless of the name change).

 

When Oracle bought Sun, many of the original people jumped ship, including ZFS co-author Matt Ahrens and others, and they have kept driving open ZFS development ever since.  Meanwhile LLNL, which uses ZFS for its supercomputers, has been funding and developing ZoL, and they have been improving it for many years now!

 

It's just going to be very hard for btrfs to compete with that sort of development resources, IMO.  I realized this some years ago already, even when ZoL was considered much less stable than it is today (back then the conventional wisdom was to run ZFS only on Solaris-like OSes, but this is no longer the case).  But I saw the trajectory and the resources behind it, and a few years later here we are, and I was exactly right.  And I see no change in any relevant circumstances (the feds still funding LLNL, etc.) that would alter this trajectory in any way.

 

In Closing

 

I guess I did not have as many arguments as I thought.  But I think those are enough.  I may come back and edit this later if more things come to me.  In the meantime, I welcome discussion.

 

Cheers!  :beer:

 

 

Edited by TRS-80
clarify license wording, thanks to feedback from Macer in IRC

17 hours ago, TRS-80 said:

I just read way too many reports of data loss on btrfs

 

Guess why you hear so often about data loss with btrfs?

 

Three main reasons:

 

1) Shooting the messenger: Putting btrfs on top of hardware RAID, mdraid or LVM is as 'great' an idea as doing the same with ZFS, but 'Linux experts' and NAS vendors did it and still do it. Data corruption at the mdraid layer results in a broken btrfs on top, and guess who gets blamed? https://github.com/openmediavault/openmediavault/issues/101#issuecomment-473920806 (see the first sketch after this list)

 

2) btrfs lives inside the kernel, and as such you need a recent kernel version to escape old and well-known bugs. Now look at the kernel versions in popular distros like Debian and you see the problem. With ZFS that's different: the most recent ZoL versions still run well with horribly outdated kernels. Hobbyists look into the btrfs wiki, see that a certain issue has been fixed recently, and go ahead and use it, forgetting that the 'everything outdated as hell' distro (AKA Debian) they're using is still on an ancient 4.9 kernel or even worse.

 

3) Choice of hardware. ZFS is most of the time used on more reliable hardware than btrfs, and that's mostly due to some BS spread by certain FreeNAS/TrueNAS forum members who created the two urban myths that 'running ZFS without ECC RAM will kill your data' (nope) and 'you need at least 8 GB of RAM for ZFS' (also BS, I'm running lots of VMs with ZFS with as little as 1 GB; see the second sketch below). While both myths are technically irrelevant, a lot of people believe in them and therefore invest in better hardware. Check which mainboards feature ECC memory and you'll realize that exactly those mainboards are the ones with proper onboard storage controllers. If you believe you need at least 8 GB of RAM, this also rules out a lot of crappy hardware (like e.g. the vast majority of SBCs).
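
As for the layering point in 1): here is a minimal sketch of what 'letting the filesystem own the disks' looks like in practice. The device names are placeholders and the commands assume stock zfsutils / btrfs-progs; treat it as an illustration, not a recipe.

```python
import subprocess

# Hypothetical whole-disk devices; adjust for your system.
DISKS = ["/dev/sda", "/dev/sdb"]

def run(cmd):
    """Echo and run a command, failing loudly on error."""
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# Give the checksumming filesystem the raw disks so it can detect AND
# repair corruption from its own redundancy:
run(["zpool", "create", "tank", "mirror"] + DISKS)            # ZFS mirror
# or, equivalently, btrfs native RAID1 for data and metadata:
# run(["mkfs.btrfs", "-d", "raid1", "-m", "raid1"] + DISKS)

# What the post above warns against is roughly the opposite: building
# an mdraid array first (`mdadm --create ...`) and then running
# `mkfs.btrfs /dev/md0` on top of it. Corruption below the md layer
# then surfaces as a "broken btrfs", and the filesystem gets the blame.
```

And on the RAM myth in 3): ZFS on Linux lets you cap the ARC via a module parameter, which is how it gets by on small boards. A sketch assuming ZoL's `zfs_arc_max` parameter; the 256 MiB figure is purely illustrative.

```python
# Cap the ZFS ARC on a low-RAM machine via a modprobe option.
# Assumes ZFS on Linux; the value is an illustration, not a recommendation.
ARC_MAX_BYTES = 256 * 1024 * 1024  # 256 MiB

# Note: this overwrites any existing zfs.conf; a real setup would merge.
with open("/etc/modprobe.d/zfs.conf", "w") as f:
    f.write(f"options zfs zfs_arc_max={ARC_MAX_BYTES}\n")

# Takes effect when the zfs module is (re)loaded; the live value can be
# inspected at /sys/module/zfs/parameters/zfs_arc_max.
```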

 

The main technical challenge for the majority of modern filesystems (even for journaled ext4) is correct write-barrier semantics (more on this in the GitHub issue referenced above). Without them, both ZFS and btrfs will fail in the same way. So using an SBC with a flaky USB host controller combined with a crappy USB-to-SATA controller in an external enclosure is a recipe for disaster with both ZFS and btrfs. ZFS is more mature than btrfs, but guess what happened when ZFS was rather new, over a decade ago: people lost their pools due to broken write-barrier semantics caused by erratic HBAs and drive cache behavior (reporting back 'yeah, I've written the sector to spinning rust' while keeping the data in some internal cache, and after a power loss the pool was gone).
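
One crude way to get a feel for whether a drive/bridge combination is lying about durability (short of pulling the plug mid-write) is to measure how fast it can complete fsyncs. The sketch below is only a heuristic with a placeholder path: an implausibly high rate on spinning rust is a warning sign, not proof.

```python
import os
import time

# Rough fsync-rate probe. A spinning disk that honors cache flushes
# cannot complete thousands of fsyncs per second; suspiciously high
# numbers suggest some layer (drive cache, USB-to-SATA bridge) is
# acknowledging writes it has not actually made durable.
PATH = "/mnt/testdisk/fsync-probe.bin"   # placeholder: file on the disk under test
N = 200

fd = os.open(PATH, os.O_WRONLY | os.O_CREAT, 0o600)
start = time.monotonic()
for _ in range(N):
    os.pwrite(fd, b"x" * 4096, 0)
    os.fsync(fd)
elapsed = time.monotonic() - start
os.close(fd)
os.unlink(PATH)

print(f"{N / elapsed:.0f} fsyncs/second")
```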

 

The last time I wasted time with ZFS on ARM it was a true disaster. Over a year ago I tried to run several ODROID HC2s as Znapzend targets for a bunch of fileservers, but I always ran into stability issues (freezes) after a couple of days/weeks. I switched to btrfs just to check whether all the HC2s might have a hardware problem; a year later I realized by accident that the HC2s in question all had an uptime of 300 days or above (I stopped doing kernel updates on isolated Armbian machines since Igor has already bricked way too many of my boards over the past years).

 

Do I prefer btrfs over ZFS? Nope. Being a real ZFS fanboi, I base almost all the storage implementations here and at customers on ZFS. On x86 hardware that is due to relying not only on the OpenZFS team but also on the guys at Canonical, who care about 'ZFS correctly built and fully working with the kernel in question' (on Debian installations I pull in the Proxmox kernel, which is the latest Ubuntu kernel with additional tweaks/fixes).

 

And that's the main difference compared to the situation with Armbian. There is no such team that takes care of you being able to access your ZFS pools flawlessly after the next kernel update. Which is the reason why the choice of filesystem (btrfs or ZFS) at my place depends on x86 (ZFS) or ARM (btrfs). And that's not because of the CPU architecture, but solely because on x86 I can rely on professionals who take care of the updates they roll out, vs. the situation on ARM with a team of hobbyists who try harder and harder to provide OS images for as many different SBCs as possible, so it's absolutely impossible to properly test/support all of them. Playground mode.


Always great to see you coming back around and giving your input, tkaiser.  A lot of what attracted me to Armbian in the first place was studying all those old threads where you shared all your research, and knowing that a lot of it went into Armbian.

 

I read through your reply quickly, but I will need to study it in more detail so I can return and give it the reply it deserves.

 

Until then, cheers, and happy Friday mate!  :beer:


I tried to install ZFS on my Helios 4. Gave me some 32-bit kernel error. So btrfs it is.

 

Anecdote: my Helios 4 has survived probably over 100 kernel freezes (because of broken DFS patches) and btrfs scrub finds no problems. A BitTorrent verify also shows no problems.
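
(For anyone who has not run one: a scrub like the one mentioned above is just a couple of commands. A minimal sketch with a placeholder mount point follows; the ZFS equivalent would be `zpool scrub` / `zpool status`.)

```python
import subprocess

# Kick off a btrfs scrub and print its status. The mount point is a
# placeholder; on a large array a scrub runs for hours, so start it
# and check back later.
MOUNT = "/srv/data"

subprocess.run(["btrfs", "scrub", "start", MOUNT], check=True)
subprocess.run(["btrfs", "scrub", "status", MOUNT], check=True)
```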

 

Overall, I'm satisfied with btrfs. Now as a user of OMV, I would love OMV 6 with btrfs support :).

 

edit: I will also note this is with RAID5, which is strongly recommended against.


15 hours ago, tkaiser said:

situation on ARM

It depends on which level we are talking about. I assume AWS Graviton servers have a big team of paid staff to take care of both the hardware and the software, for example.

 

Now, if we are talking about $40 ARM SBCs, I think "hobbyists" and people in "playground mode" is a good way to describe the main audience for those devices. If we also add "students", I think we have a complete description of the kind of use you can reasonably expect for these boards, and the kind of use the manufacturers make them for.  And, in particular, we describe the kind of people who willingly contribute to a project aimed at making an OS for these boards.

 

On the other hand, I consider it absolutely unrealistic to expect the same level of support from such a project as you would expect from one backed by a big company with a big staff. As unrealistic as expecting a group of volunteers, who spend their free time on the project out of good will and because they like it, to focus on a reduced number of boards they are not interested in, and only on the NAS use case, "just because I say so".

 

And it is not only unrealistic, but IMO also unnecessary. I find it hard to imagine a system administrator pulling an OPi Zero out of his pocket in front of the corporate board and saying "Hey, this is going to be our new storage for the crucial data. Don't worry, it is going to run Armbian off a SanDisk Extreme Pro, which has been proven to be the most reliable SD card in many forum posts".

 

----------

 

Now, focusing on our topic: I tried btrfs a couple of years ago, attracted mainly by the snapshots feature. But when I was trying to compile some software I needed and discovered it was not even mature enough to allow the creation of swapfiles on it, I just decided to go back to good ol' ext4. I haven't tried again since; maybe I should :)


1 hour ago, JMCC said:

I find it hard to imagine a system administrator pulling an OPi Zero out of his pocket in front of the corporate board and saying "Hey, this is going to be our new storage for the crucial data. Don't worry, it is going to run Armbian off a SanDisk Extreme Pro, which has been proven to be the most reliable SD card in many forum posts".

 

Who's talking about system administrators? It's about Armbian users now being encouraged to rely on ZFS, based on proposals in the top post of this thread and e.g. support for ZFS in Armbian's bloated motd output.

 

Are these users aware of the two main problems putting their ZFS pools or the data inside at risk?

  • Are they aware of the 'correct write barrier semantics' issue, especially with crappy storage hardware, which is definitely more the rule than the exception with SBCs (some random USB-to-SATA bridge combined with freezes or power losses can be a recipe for disaster with both ZFS and btrfs)?
  • Are they aware that if they install ZFS via DKMS they might end up with something like this: a kernel update gets rushed out for certain boards, building the zfs or spl modules fails in certain combinations, and affected users only realize after the next reboot (when the kernel update takes effect) that their pools are unavailable? (One way to check before rebooting is sketched below this list.) How much testing does each and every kernel update receive? You know the answer just like me. And it's easy to understand the difference from Ubuntu or Debian on amd64, where there are only a few kernel variants that can be tested thoroughly with reasonable effort.
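
To make the DKMS point concrete, here is the kind of sanity check that would catch a missing module before the reboot rather than after. It is a sketch that assumes the usual /lib/modules layout and an optional DKMS install; adjust paths for your distro.

```python
import glob
import os
import subprocess

# For every installed kernel, check whether a zfs module was built for it.
for moddir in sorted(glob.glob("/lib/modules/*")):
    kver = os.path.basename(moddir)
    has_zfs = bool(glob.glob(f"{moddir}/**/zfs.ko*", recursive=True))
    print(f"{kver}: {'zfs module present' if has_zfs else 'NO zfs module'}")

# When DKMS is in use, `dkms status` usually reveals whether (and for
# which kernel) the zfs/spl build failed:
subprocess.run(["dkms", "status"], check=False)
```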

And then there's another issue called 'confusing redundancy with backup' which affects an awful lot of home users. They think that if they do some sort of RAID their data will be safe. In situations where crappy storage hardware with old filesystems would just result in some unnoticed silent data corruption, using advanced approaches like ZFS or btrfs can result in the whole pool/filesystem becoming inaccessible. While professionals then restore from backup, the average home user will just cry.
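
To spell out that difference: redundancy keeps a pool online when a disk dies, while a backup is a second, independent copy of the data. Below is a sketch of the latter using ZFS snapshots and send/receive (btrfs has an equivalent send/receive pair); the dataset, host and target names are placeholders.

```python
import subprocess
from datetime import datetime, timezone

# Snapshot a dataset and replicate it to another machine. A mirror in
# the same box does not survive accidental deletion, a bad update or a
# fried PSU; an independent copy elsewhere can.
DATASET = "tank/data"              # placeholder source dataset
TARGET_HOST = "backuphost"         # placeholder, reachable via ssh
TARGET_DATASET = "backup/data"     # placeholder dataset on the target

snap = f"{DATASET}@{datetime.now(timezone.utc):%Y%m%d-%H%M%S}"
subprocess.run(["zfs", "snapshot", snap], check=True)

send = subprocess.Popen(["zfs", "send", snap], stdout=subprocess.PIPE)
subprocess.run(
    ["ssh", TARGET_HOST, "zfs", "receive", "-F", TARGET_DATASET],
    stdin=send.stdout,
    check=True,
)
send.stdout.close()
send.wait()
```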

 

If users are encouraged to rely on a certain FS implementation based on FUD (data losses with btrfs) or theoretical superiority ('the guys at Lawrence Livermore National Laboratory', for example), that's a bit unfair, since they're affected by the practical limitations of the hardware Armbian is mostly used on and the practical limitations of another group of guys. Btrfs, on the other hand, is built right into the kernel and receives a lot more testing than the ZFS situation at Armbian allows for (a project already dealing with 30+ kernel branches and constantly complaining about having no resources for years).

 

BTW: thank you, I edited my post above and exchanged ARM with Armbian. 


19 minutes ago, tkaiser said:

users now encouraged to rely on ZFS

How can a casual discussion in the "General chit-chat" subforum and a check in motd be turned into officially "encouraging users to rely on ZFS"?

 

22 minutes ago, tkaiser said:

Are these users aware of the two main problems putting their ZFS pools or the data inside at risk?

Now they are, thank you for your contribution :thumbup:


 

20 hours ago, tkaiser said:

btrfs lives inside the kernel

 

So does ZFS, essentially (the low-level parts).  The only reason it's built as out-of-tree modules is the license incompatibility (CDDL) and the fact that Linux kernel developers don't trust Oracle.

 

20 hours ago, tkaiser said:

ZFS is most of the time used on more reliable hardware than btrfs and that's mostly due to [...]

 

Check which mainboards feature ECC memory and you'll realize that exactly those mainboards are the ones with proper onboard storage controllers. If you believe you need at least 8 GB of RAM, this also rules out a lot of crappy hardware (like e.g. the vast majority of SBCs)

 

This is interesting.  I am well aware of the common urban myths you speak of, and I agree with you about them.[0]

 

However, are such myths the most important factor leading to using better hardware for ZFS?  Or is ZFS documentation somehow better?  Or...?  I don't know.

 

But this is an interesting point, maybe you are on to something here.

 

20 hours ago, tkaiser said:

you need a recent kernel version

 

20 hours ago, tkaiser said:

Putting a btrfs on top of hardware raid, mdraid or lvm is as 'great' as doing the same with ZFS

 

20 hours ago, tkaiser said:

Choice of hardware.

 

3 hours ago, tkaiser said:

correct write barrier semantics [...] might be recipe for disaster with both ZFS and btrfs

 

3 hours ago, tkaiser said:

confusing redundancy with backup

 

None of these things applies uniquely to either filesystem; in fact they apply equally to both (as you even state yourself in many cases).

 

3 hours ago, tkaiser said:

based on FUD (data losses with btrfs) or theoretical superiority ('the guys at Lawrence Livermore National Laboratory', for example)

 

I am trying to argue in good faith here.  And I have read a lot of reports about data loss on btrfs.  Therefore I would not characterize that as FUD.  I have no reason to make that up.  I (personally) have not seen the same sort of reports from ZFS users (even after quite a lot of reading on the topic).

My point (which you seem to have completely missed) was that they have a lot of development resources behind them.  I like how you also conveniently skipped over the fact that the original ZFS authors from Sun are still driving the project.  Bringing up LLNL was just connecting the dots for anyone who may be unaware of that (and making the point that there is a lot of (government) funding behind it).
 

21 hours ago, tkaiser said:

Which is the reason why the choice of filesystem (btrfs or ZFS) at my place depends on x86 (ZFS) or ARM (btrfs). And that's not because of the CPU architecture, but solely because on x86 I can rely on professionals who take care of the updates they roll out, vs. the situation on ARM with a team of hobbyists

3 hours ago, tkaiser said:

Btrfs [...] receives a lot more testing than the ZFS situation at Armbian allows for 

 

Well, unlike you, I don't feel the need to get my digs in on a handful of people whom I know personally, who are trying to make things better by improving the situation in whatever way they can, working with very limited resources.  But, manners aside, the facts are essentially that you are making the same argument I made above about resources going into development.  In the case of ZFS on x86, that's quite a lot.  And much less so on ARM currently (as far as I am aware; I am not actually sure who is working on this specifically, nor how much of a priority it is for the OpenZFS project, so maybe I will start following that more closely again now, as the time seems right for it).

 

I agree it's very early days for ZFS on ARM.  Which is why, I suppose, for all my interest and arguments in favor of ZFS (in general), I personally am still not running it on any ARM devices either.  Because I suppose that, deep down, my gut feeling has been that we are not quite there yet.  And I guess, now that I think about it, it's for the same reasons you point out (resources going into development).  In fact, the situation on ARM is the flip side of the (development resources) coin argument that I made in favor of ZFS (in general).

 

[0] Lots of RAM is only needed for de-duplication, and you will still be better off with ZFS and no ECC than with a traditional filesystem anyway, as at least then you will be aware of data corruption more often.  ECC only brings this to even higher levels of reliability, but then we are talking about the difference between (~) very^4 reliable and very^5 reliable (or whatever).


4 hours ago, JMCC said:

How can a casual discussion in the "General chit-chat" subforum and a check in motd be turned into officially "encouraging users to rely on ZFS"?

 

Maybe that's the perception because I am a Moderator?  And/or considered part of "the team" even though I only work on Docs, forum, writing, etc.?

 

And since it was apparently not already clear, opinions in this thread are my own, and do not represent any official Armbian project position.

 

The MOTD part, well, I can't speak for @Igor, however it looks to me like he is just trying to give people what they want, as there has been a lot of clamoring about it from users (look in the Kobol Club, as there have been a number of threads and posts about it lately).

 

Anyway, it's always easy to stand on the sidelines and throw rocks.


5 hours ago, tkaiser said:

It's about Armbian users now encouraged to rely on ZFS based on proposals at the top post in this thread


We are not encouraging anyone to use ZFS or any other FS. Especially not on crappy hardware. I am well aware of the problems you are exposing.

If you peek into the Helios forums, you will see many people are trying hard to bring ZFS to it. I just help them a bit to get to the Debian / Ubuntu "professional" level of ZFS support. Also, ZFS bloatware in the motd is present only if the user uses it.

 

5 hours ago, tkaiser said:

Are they aware of the 'correct write barrier semantics' issue especially with crappy storage hardware


No, why should they be? And why should this be our problem? Most people will remain uneducated, and most of them will choose ZFS because they heard it's cool but don't know anything about it. There are lots of RPi (and similar junk USB-based NAS) users out there. Why would we worry about that either?

 

5 hours ago, tkaiser said:

How much testing does each and every kernel update receive?


Much, much more than back in the day, and we are improving it all the time. Compare Armbian with Linaro, not with the simple x86/RPi problems Canonical has, and remember where the upper-level stuff comes from.

 

5 hours ago, tkaiser said:

Btrfs, on the other hand, is built right into the kernel and receives a lot more testing than the ZFS situation at Armbian allows for (a project already dealing with 30+ kernel branches and constantly complaining about having no resources for years).

 

Things have changed since the early days. We are converging kernels towards a united arm64/32 binary, but we stay on separate builds for practical reasons. Upper-level stuff like ZFS or btrfs is identical in meson64, rockchip64 and sunxi64.

We also still have somewhat more recent kernels than upstream (we have to), so it should be better.


This discussion is interesting because I can relate to both sides.

As an end user, having multiple filesystems to choose from is great; choosing the right one is where I need to spend my time, picking the right tool for the task. Having a ZFS DKMS is great, but if after an update my compiler is missing a kernel header, I won't be able to reach my data. Shipping a KMOD (with the OS release) might give me access to my data after each reboot/upgrade. The ECC/8GB RAM myth is getting old, but it has garnered enough attention that most newbies won't read or search beyond the posts with the most views.

 

Armbian as a whole is always improving - a few weeks ago iSCSI was introduced for most boards (and helped me replace two x86_64 servers with one Helios64) - and an implementation of ZFS that works out of the box will be added soon enough. That being said, this is a community project - if a user wants 24x7 incident support, then maybe ARM-based SBCs + Armbian are not the right choice for them. Having a polite discussion with good arguments (like this thread) is what gets things moving forward - we don't have to agree on all the points, but we all want stable and reliable systems as our end goal.

