ZFS on Linux/CentOS support resources?

Status
Not open for further replies.

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
I have another server that's going to be running under CentOS 7 and needs a good dose of ZFS-y goodness. Red Hat is unfortunately allergic to ZFS, so most of the work needs to be done by hand. That's OK as far as it goes, and I've found some guides (e.g., here and here) that look pretty thorough, but they still leave some questions unanswered. My goal is to document the process (as I've started here) in a way that's reasonably reliable, but I'm seeing some inconsistent behavior that I don't quite understand.

There are CentOS forums, but the prevailing attitude there seems to be "if you install ZFS and it breaks, you get to keep the pieces." There don't appear to be any dedicated ZFS on Linux forums, though there is the zfs-discuss mailing list, which is very low-volume at this point.

So, what support resources are out there for this combination? Or aren't there any (beyond "use the source, Luke"), which would make the whole undertaking pretty dicey?
 

rogerh

Guru
Joined
Apr 18, 2014
Messages
1,111
If you just want the benefits of a ZFS filesystem, as opposed to the fun of installing the root filesystem on ZFS, why not do a standard install of CentOS and run ZFS on some extra disks? That route is fully packaged and works cleanly with automated updates and the like.

https://github.com/zfsonlinux/zfs/wiki/RHEL-and-CentOS
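
The packaged route from that page boils down to roughly the following (just a sketch; the exact zfs-release RPM for your point release and the disk names are placeholders, so take them from the wiki):

Code:
# Add the ZFS on Linux repo and install the packages
# (the repo RPM below is an example for 7.4; check the wiki for the current one)
yum install http://download.zfsonlinux.org/epel/zfs-release.el7_4.noarch.rpm
yum install zfs

# Load the module and create a pool on the spare disks (device names are placeholders)
modprobe zfs
zpool create tank mirror /dev/disk/by-id/ata-DISK1 /dev/disk/by-id/ata-DISK2
zfs create -o mountpoint=/srv/data tank/data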

The zfs-discuss list may be low volume, but you do tend to get rapid answers from people who know what they're talking about.
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
Well, it's not exactly that I want to do root-on-ZFS for the fun of it, but Neth (like SME, from which it was forked) really expects to run on a single filesystem. My gut says that just mounting a pool on /home, for example, would end up being more disruptive than installing the whole system on ZFS, though if upgrades break things, that calculus could change. And the apparent requirement (noted on the page you linked to) that upgrading between CentOS minor versions (e.g., 7.3 to 7.4) means removing the zfs packages, installing an updated version of the ZoL repo, and then reinstalling the packages sounds both disruptive and dangerous; on a root-on-ZFS system, it seems like that would leave you unable to access your root pool in the middle of the upgrade.
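
For reference, the minor-version dance described there is roughly this (a sketch; the package list is abbreviated and the repo RPM name is an example, so check the wiki for the authoritative versions):

Code:
# Remove the old zfs packages and repo definition
yum remove zfs zfs-release

# Install the repo for the new point release, then reinstall zfs
yum install http://download.zfsonlinux.org/epel/zfs-release.el7_4.noarch.rpm
yum install zfs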

One source of uncertainty is that, while the write-ups I'm looking at are pretty recent, they're dealing with older versions (zfs 0.6.5 vs. the current 0.7.2, grub2-2.02-0.17.0.1 vs. grub2-2.02-0.65), and it's unclear what effect, if any, that would have. (Edit: judging from the grub2 changelog, there don't appear to be any ZFS-relevant changes between those two versions.)
 

rogerh

Guru
Joined
Apr 18, 2014
Messages
1,111
I'm not claiming any expertise, just that it's simple to follow the standard setup, whereas it looks as though what you're proposing would need a lot of manual intervention to keep working.
 

Arwen

MVP
Joined
May 17, 2014
Messages
3,611
I run ZFS on Linux with the Gentoo distribution, and it's been rock stable for me. All of my machines use a full ZFS root, plus additional space in a different ZFS pool:

Desktop - 64GB SATA DOM mirrored to a 500GB spinner for root; the extra space on the spinner is general storage (un-mirrored).
Media server - 1TB mSATA & 2TB spinner, with 24GB on each mirrored for root; the remainder is striped for my media (weird, striping flash and spinner...).
Laptop - 500GB mSATA, with 2 x 30GB partitions mirrored for root; the remainder is un-mirrored general storage.

Here are some links (the first is not all that useful for someone already familiar with the ZFS command line):

https://wiki.gentoo.org/wiki/ZFS
https://wiki.gentoo.org/wiki/User:Fearedbliss/Installing_Gentoo_Linux_On_ZFS
https://www.funtoo.org/ZFS_Install_Guide

Note that it's easier if you are going to mirror root: you can install as normal using EXT4 on one of the mirror disks, create the root pool on the other, and dual boot until you have tamed GRUB2.
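
In very rough terms, and with placeholder device and dataset names (a sketch of the approach, not a tested recipe):

Code:
# After a normal EXT4 install on the first disk, build the root pool on the second
zpool create -o ashift=12 -O mountpoint=none rpool /dev/disk/by-id/ata-DISK2-part2
zfs create -o mountpoint=none rpool/root
zfs create -o mountpoint=legacy rpool/root/initial

# Copy the running system into the new root dataset
# (-x stays on the root filesystem, skipping /proc, /sys, /dev, and /mnt itself)
mount -t zfs rpool/root/initial /mnt
rsync -aAXHx / /mnt/

# Once GRUB2 reliably boots the ZFS side, wipe the EXT4 disk and attach it
# to complete the mirror
zpool attach rpool /dev/disk/by-id/ata-DISK2-part2 /dev/disk/by-id/ata-DISK1-part2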

I also use alternate boot environments for my software upgrades. That way, if something goes south, I have a chance that the prior BE will still be good. (Just don't upgrade the pool features until all boot environments support those features.) Mine looks something like this:

Code:
root@laptop:~# zfs list -t all -r rpool/root
NAME                    USED  AVAIL  REFER  MOUNTPOINT
rpool/root             9.05G  9.44G    96K  legacy
rpool/root/20170831     680K  9.44G  5.28G  legacy
rpool/root/20170923     528K  9.44G  5.96G  legacy
rpool/root/20170929    5.01M  9.44G  5.97G  legacy
rpool/root/20171028     892K  9.44G  5.52G  legacy
rpool/root/20171123    9.05G  9.44G  5.73G  legacy
rpool/root/20171123@1   859M      -  5.28G  -
rpool/root/20171123@2  31.2M      -  5.96G  -
rpool/root/20171123@3  27.9M      -  5.97G  -
rpool/root/20171123@4   401M      -  5.52G  -
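
Creating a new BE is just snapshot-and-clone under the hood. Roughly (dataset names follow my layout above; this is a sketch, not my exact procedure, and how you point the bootloader at a BE depends on your GRUB2 setup):

Code:
# Snapshot the current boot environment and clone it as a new one
zfs snapshot rpool/root/20171123@pre-upgrade
zfs clone -o mountpoint=legacy rpool/root/20171123@pre-upgrade rpool/root/20171201

# Upgrade inside the new BE (chroot into it, or reboot into it), then tell
# the bootloader which BE to use (e.g., via the pool's bootfs property)
zpool set bootfs=rpool/root/20171201 rpool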

Let me know if you have any specific questions.
 