Linux filesystems

InQuize

Explorer
Joined
May 9, 2015
Messages
81
I just tried TrueNAS-SCALE-20.10-ALPHA and it confirmed my expectation: no support for any filesystem other than ZFS, again...

This has been frustrating and very limiting since the first version of FreeNAS I used. It is basically the only major disadvantage that leaves questions about an otherwise perfect solution. While highly inconvenient, the whole situation kind of, sort of made sense on FreeBSD, but IMO on a Debian-based OS it would be ridiculous to ignore the native filesystems and enforce ZFS as the only option for a storage server.

I understand that:
- ZFS is the main focus of the OS;
- upstream support for other filesystems in FreeBSD might be limited;
- licensing might be an issue;
- since TrueNAS strives to be a cross-platform GUI/API "framework", it becomes much harder for the devs to maintain feature sets in both the Unix and Linux flavors.
What I do not understand is:
- why do multi-pool setups with filesystems suited to the use case seem completely unconsidered and unsupported?
(my goal, apart from having a redundant ZFS pool with snapshots and all the other perks, is to have separate non-redundant (probably single-disk) mounts that I can still share via the same ecosystem, while keeping additional hardware requirements and costs down)
- do you think it is OK that I cannot even work around this in the CLI?
- will this always be that way? Now seems like the best time to discuss it, before SCALE grows into something hardcoded, inflexible and labor-intensive to modify.

Please, if you feel anything like this, share your thoughts.
Other opinions are, of course, welcome too.

[Attachment: scale-share.png]

[Attachment: scale-share2.png]
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,702
why do multi-pool setups with filesystems suited to the use case seem completely unconsidered and unsupported?
Because one of the cornerstones of this storage appliance product is ZFS. If you don't want to use ZFS for everything, then FreeNAS/TrueNAS isn't for you.
Also, a single EXT4 volume isn't a pool, so it can't be part of a multi-pool setup in your version of how it should be. For example, when you want to set up a replication job (which uses zfs send | recv in the background), it can't see that volume as source or target... I guess it would take a lot of work to find a solution that does block-level replication with snapshots and incrementals from ZFS to whatever... and if they don't do that, you'll probably complain about how the replication system is inconsistent and can't deliver what you want for all of your "pools".
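To illustrate (dataset, snapshot and host names here are made up), a replication task boils down to something like the following, which simply has no EXT4 equivalent:
Code:
zfs snapshot tank/data@auto-2020-11-01
# incremental send of everything since the previous snapshot
zfs send -i tank/data@auto-2020-10-31 tank/data@auto-2020-11-01 | ssh backuphost zfs recv backup/data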

since TrueNAS strives to be a cross-platform GUI/API "framework", it becomes much harder for the devs to maintain feature sets in both the Unix and Linux flavors
This is why it's only coming up now: OpenZFS 2.0 is available on both Linux and FreeBSD, with enough commonality that the devs can rely on it.

do you think it is OK that I cannot even work around this in the CLI?
It's an appliance, so, yes.

will this always be that way?
Probably. But there is a litany of options available to you for publishing block storage with a Linux filesystem on top if you use KVM or Docker in SCALE.
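For example (names and size are illustrative), a zvol handed to a KVM guest can be formatted with whatever filesystem the guest likes:
Code:
zfs create -V 200G tank/vm-disk
# shows up on the host as /dev/zvol/tank/vm-disk; attach it to the VM as a raw disk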

What exactly is your objection to using ZFS? And if it's such a problem, why not just use Debian on your own... or OMV, Rockstor or one of the others if you must have a GUI? Or XigmaNAS if you really want ZFS and some other filesystems too?

ZFS can present a single-drive pool to deliver what it seems you're asking for, separately from the "ZFS pool" which you say you do want (although if you spec your system for the ZFS pool, there won't be a lot of room for the cost-cutting you think the other filesystem will bring anyway).
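Creating such a pool is a one-liner (the device name is just an example):
Code:
zpool create media /dev/disk/by-id/ata-EXAMPLE_DISK_SERIAL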
 
Joined
Jan 18, 2017
Messages
524
What I do not understand is:
- why do multi-pool setups with filesystems suited to the use case seem completely unconsidered and unsupported?
(my goal, apart from having a redundant ZFS pool with snapshots and all the other perks, is to have separate non-redundant (probably single-disk) mounts that I can still share via the same ecosystem, while keeping additional hardware requirements and costs down)

As @sretalla stated, ZFS can present a single-disk pool, which you can extend in the future to increase its size.

I would like to know which use cases you believe ZFS is unsuitable for.
 

InQuize

Explorer
Joined
May 9, 2015
Messages
81
I did just that.
If you would like insight into my use case, it is described in great detail in my signature. But please try not to sidetrack this thread from its subject. As you can see, I have virtualized quite heavily, and the more I try to improve it, the more I believe I have headroom for optimizing density. Think of this setup as a wildly functional lab and recreational exercise.
My delta between average and maximum CPU utilization is very high, so my limiting factors are (by priority) RAM, PCIe lanes, and storage space & ports. SATA ports are limited so as not to increase power consumption too much (250 W already hurts). The HDDs themselves are very old and meant to stay that way, as it does not yet make economic sense to upgrade to denser ones given the data stored on them. Getting more PCIe lanes or 64 GB+ of RAM means either stupidly expensive modern hardware or awfully power-hungry multi-CPU socket platforms like LGA1366/LGA2011 or similar. Even finally moving to 64 GB on the LGA1151 system that I'm working on means investing ~$360.
By utilizing my drive space the way I did, I hit my perfect balance of redundancy vs. space, while keeping costs low and the usable-to-raw ratio as high as possible (77%):
Code:
(0.8 × 1.82 TB + 4 × 0.965 × 3.64 TB) / (3 × 1.82 TB + 4 × 3.64 TB)
= (1.46 TB + 14.05 TB) / (5.46 TB + 14.56 TB) ≈ 0.77

By using e.g. ext4 instead of ZFS on these non-redundant drives, I could further lower my RAM footprint. I have already set `primarycache=metadata` for the datasets on these drives. I could benefit from some tunables, but they are system-wide and I simply cannot afford to have them affect the redundant pool. So there are at least two inconveniences that make me want to use something else.
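For reference, the per-dataset setting I mean is just this (pool/dataset names are examples):
Code:
zfs set primarycache=metadata tank/media
zfs get primarycache tank/media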

why not just use Debian on your own... or OMV, Rockstor or one of the others if you must have a GUI? Or XigmaNAS
Apart from that inconvenience, I like TrueNAS and would very much like to stay and see it improve.
I could use something in addition to TrueNAS, but that is 180 degrees from the optimization I am aiming for. Even if I put a neighbor VM on the same host, it would introduce a lot of overhead (much more RAM for an additional OS than an additional filesystem would take; also interconnect latencies and throughput limitations). That is apart from the fact that running multiple storage solutions is ridiculous too, when one could suffice.

a single EXT4 volume isn't a pool, so it can't be part of a multi-pool setup
It's getting messy with terminology. You can't use just a vdev, can you? So, by "multi-pool" I mean: instead of making one giant ZFS pool, use separate, similar entities with different filesystems as different tools, each best suited to its task.


for example, when you want to set up a replication job (which uses zfs send | recv in the background), it can't see that volume as source or target... I guess it would take a lot of work to find a solution that does block-level replication with snapshots and incrementals from ZFS to whatever...
That is straight out of the question. I'm talking only about basic NAS functionality, like file sharing over common protocols from these filesystems with the same GUI tools. That's all: not supporting every feature, not mixing and matching one type with another, nothing crazy like that.
 

morganL

Captain Morgan
Administrator
Moderator
iXsystems
Joined
Mar 10, 2018
Messages
2,691
Multi-node scale-out, and hence multi-pool, systems will be supported in the 20.12 release... but everything is ZFS based.

The use of ZFS is very intentional. It provides a huge number of features, but more importantly it is very safe, scalable, and supportable. Yes, other filesystems may require less RAM, but if we can't support a user or customer and data is lost, we end up paying a heavy price as a company. Copy-on-write technology is a godsend for our support team.

By using ZFS, we also have built-in replication tools for moving data between systems efficiently and incrementally. SCALE can replicate, migrate, and import pools from existing FreeNAS, TrueNAS CORE, and TrueNAS Enterprise systems.

Could another filesystem be used in the future? Yes. It will be based on demand... add a suggestion to our bug tracker and explain the benefits of what you want, but also think about the reliability and supportability of the feature being requested. If others also want the feature, we will review it. Any developer who wants to contribute code is welcome.
 

brando56894

Wizard
Joined
Feb 15, 2014
Messages
1,537
Pretty much what I'm getting out of this is that you don't want to spend the money to upgrade your system, and you have spent time figuring out how to optimize your filesystems for your older hardware, but you like the TrueNAS GUI.

Take it from someone who has been in a similar position: you either spend the money and go all in with ZFS and TrueNAS... or you find another solution. It's not worth trying to make it fit your use case.

I'm just hopping back on board the TrueNAS train after years of back and forth. I wasn't that fond of BSD and bhyve, so I rolled my own storage box with Arch, ZFS on Linux, Netdata, and Cockpit. It's not nearly as nice, but it gets the job done until SCALE is stable.
 

jsclayton

Dabbler
Joined
Aug 27, 2020
Messages
15
I can throw in an additional use case for this. I use ZFS for everything except media storage. It's simply more economical for me to use MergerFS + SnapRAID and be able to add drives whenever Easystores go on sale. It works great, but I'd love for my NAS to be more of an appliance than a raw Ubuntu system for everything else: Docker, VMs, monitoring and alerting, backups, etc.

With the recent ALPHA update to the developer notes, it's clear that Docker and Kubernetes are the future of plugins. I can certainly make a Docker container that mounts the MergerFS array, and with the right settings it should be possible to expose it to the host and other containers. Since all the apps that use the media array are Docker-based, I think I could figure out a way to expose it to those other containers, but that prevents it from being accessed via the network.
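Roughly what I mean by mounting the array into a container (paths and image name are placeholders):
Code:
docker run -d \
  --name media-app \
  -v /mnt/storage:/media \
  example/media-image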

And that's where I've also wanted the exact same use case as @InQuize: expose a share to the MergerFS array mounted in /mnt.

Perhaps there's functionality, or services, that could be exposed to or via the "plugins" to let them notify the system of a path that's available for sharing?
 

brando56894

Wizard
Joined
Feb 15, 2014
Messages
1,537
It is what you make it, after all; just don't expect support for your Frankenstein-like creations, haha.

I've been using ZFS to store my media for years.
 
Joined
Jan 18, 2017
Messages
524
It's simply more economical for me to use MergerFS + SnapRAID and be able to add drives whenever Easystores go on sale.

Thank you; after looking those up, I agree with your statement and better understand why the initial question came up.
 

InQuize

Explorer
Joined
May 9, 2015
Messages
81
The use of ZFS is very intentional. It provides a huge number of features, but more importantly it is very safe, scalable, and supportable.
By using ZFS, we also have built-in replication tools for moving data between systems efficiently and incrementally.
Maybe I'm not making myself clear. I've been using FreeNAS for over 5 years and am very well aware of all its advantages and unique properties; hell, they are the reason I've been using ZFS all this time and never once switched to another NAS solution (and I'm very grateful for this product). So I'm not at all trying to debate whether ZFS "is the way", and you do not need to advertise it to me; I was sold on it a long time ago.
But there is a need to use something else in addition to ZFS, and right now TrueNAS is very inflexible in that direction. I cannot stress enough that not all data is equal. By limiting ourselves exclusively to ZFS, we neglect the fact that there are applications where tons of TBs do not benefit from any of ZFS's traits, like the media storage @jsclayton mentioned above. It is often easily recoverable data that does not need snapshotting, backups, replication, or even CoW. Caching is very debatable and subjective. And coincidentally, the least important data may take up a much greater amount of space, as in this example with recoverable media. So using ZFS for such data makes zero sense; it is just a waste of RAM. Again, I am saying all this not against ZFS (I do not see myself using anything else for actually valuable data in the near future), but in defense of using something else for auxiliary mounts.

Yes, other filesystems may require less RAM, but if we can't support a user or customer and data is lost, we end up paying a heavy price as a company.
RAM is not the issue. This exact attitude wins iXsystems nothing but broken expectations from new users: they come in hearing that TrueNAS is the great appliance that comes up in any discussion of NAS/SAN solutions, only to find out about all the limitations ZFS dictates, that they are not ready to jump on that train from the get-go, and that there is no flexibility whatsoever to make the transition more painless and the learning curve less overwhelming. I believe exactly this is the actual reason for many disappointments. Fortunately this was not my story, but most people might not be prepared.

I strongly believe that you should not make decisions like that for the customer, but instead educate them on best practices, set clear expectations about levels of reliability, and state right up front which filesystem has the most support and that everything else is just diversity. You are not expected to bear responsibility for someone's negligence. As a joke, I'd say the whole situation is similar to a theoretical case where the system would simply refuse to boot if RAM error-correction capabilities could not be verified.

ESXi is another popular appliance with a similar concept. It is so stripped down and locked down that, after realizing that fact, I concluded it was built to provide a fool-proof product suitable even for the dumbest corporate customers. It seemed like a Windows-style "Next"-pusher ecosystem; I hated how non-admin-friendly it was, and I switched soooo fast, even though it is surely a very reliable and mature solution with undeniable advantages. Yet one's subjective opinion always makes up the verdict. And make no mistake in thinking it is justifiable because VMware was successful with this approach; that would be the same as praising MS and the Windows plague.

Pretty much what I'm getting out of this is that you don't want to spend the money to upgrade your system, and you have spent time figuring out how to optimize your filesystems for your older hardware, but you like the TrueNAS GUI.
I want many things; upgrading would be splendid, but what I can afford for my lab, and whether it actually makes sense, is a completely different story. For sure, I am not the only one, and among businesses not everyone is a moneybag ready to participate in a diminishing-returns race for the absolute best-practice solution. As much as I am prone to perfectionism, I realize how important it is to understand the principles of "perfect is the enemy of good" and "do your best with what you've got".
And it is not a caprice to want to manage the suggested functionality exclusively in the GUI (yes, I could bodge something together using standard Linux tools), but rather confusion that the situation forces one to make said bodge, and that the strengths of an otherwise great solution turn into weaknesses that fast. Forgive me if I sound overly dramatic.
 

morganL

Captain Morgan
Administrator
Moderator
iXsystems
Joined
Mar 10, 2018
Messages
2,691
Thanks InQuize, this explanation helped. We are open to suggestions on what else to add to TrueNAS SCALE. Our plate is pretty full through the 1st quarter with getting SCALE completed as envisaged. If you have specific suggestions, we'd love to get them via the bug tracker and see if we get some confirmation from other members of the community.

We do try not to restrict open source users in what they do with the product, but we do try to warn or protect users from dangerous activities. Our reputation depends on having fewer data-loss stories floating around. Data loss is much worse than an application crash... I agree there's a conflict here.
 

InQuize

Explorer
Joined
May 9, 2015
Messages
81
Our plate is pretty full through the 1st quarter with getting SCALE completed as envisaged. If you have specific suggestions, we'd love to get them via the bug tracker and see if we get some confirmation from other members of the community.
And I did not expect any less; I'm sure you guys are getting a great deal of work done behind the scenes. My intention is merely to shed some light on the subject so developers can keep it in mind while building the rest of the solution.
It is clear that making a proper feature request would produce far more results than forum discussions. The purpose of this thread is to get perspectives clearer before submitting any request, and possibly to engage others in defining where the community stands on this. Possibly this thread could then serve as a more in-depth clarification of the reasoning behind the actual feature request.

We do try not to restrict open source users in what they do with the product, but we do try to warn or protect users from dangerous activities. Our reputation depends on having fewer data-loss stories floating around. Data loss is much worse than an application crash... I agree there's a conflict here.
It is only commendable that you care so much about minimizing the risks for us, but as I said, what causes data loss is as important as the actual percentage of catastrophic failures. If you are transparent about your product's warranties, if it is not some black box of voodoo alchemy, and your customers understand the consequences of their usage patterns, no one would consider blaming your company for their own lack of forethought. Even if they do, out of false judgment, it does not damage your reputation, as others see it objectively.
 

jsclayton

Dabbler
Joined
Aug 27, 2020
Messages
15
If you have specific suggestions, we'd love to get them via the bug tracker and see if we get some confirmation from other members of the community.

@InQuize would you agree that a good summary of a feature request would be to allow non-pool paths to be used as shares? I think that, along with Docker/k8s to mount the file system, would satisfy my use case.
 

morganL

Captain Morgan
Administrator
Moderator
iXsystems
Joined
Mar 10, 2018
Messages
2,691
When thinking through a request, I'd recommend you provide a good top-down view of the needs. For example:
Top level: Efficient (80%) expandable storage for media files
Mid-level: Filesystem that supports adding single drives (doesn't need to support snapshots); can be a separate pool
Technical recommendation: Allow MergerFS etc. on non-ZFS pool drives and share via SMB

That way, if we can find a good solution for the top-level need that is different but more easily supported, we can propose it. Please remember that whenever we do a feature, we have to think about how to test it to a very solid level and work out how we can support users.
 
Joined
Jan 18, 2017
Messages
524
I'm pretty sure a guiding principle of product design is to make it as hard as possible for users to hurt themselves. The reason is that if something bad can happen, it will, and people will blame or sue anyone they can to avoid responsibility. I do have to praise VMware, as they have made a product that works very well for the vast majority of users, and though I hate to say it, I even have to respect MS from time to time: their products (Office, Windows, and Server) are sometimes severely flawed (updates in particular), but they also work most of the time ( :eek: ) and command market dominance (server not as much, I believe).

I'm curious whether a solution other than adding official support for other filesystems could be accepted: perhaps a lightweight Docker container/VM controlling the additional filesystem/pool? Something similar to a plugin that can be installed in one click and possibly configured in a GUI at another IP?
 

morganL

Captain Morgan
Administrator
Moderator
iXsystems
Joined
Mar 10, 2018
Messages
2,691
The ask seems to be not only that the "plugin" manage the additional drives and file system, but that the filesystem also be shareable via the standard protocols.
 

InQuize

Explorer
Joined
May 9, 2015
Messages
81
@InQuize would you agree that a good summary of a feature request would be to allow non-pool paths to be used as shares? I think that, along with Docker/k8s to mount the file system, would satisfy my use case.
Initially, I thought it would be much easier to put into words, but given how much effort it took to explain here, we really should think it through. Honestly, I have never had occasion to file a feature request, so this is not trivial for me.
I think the gist of the request should say something like: "Allow standard file shares (NFS/SMB) pointing to non-ZFS mounts".
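In plain Linux terms, at least for NFS, that would amount to nothing more exotic than an ordinary export (path and subnet are just examples):
Code:
# /etc/exports
/mnt/ext4-data  192.168.1.0/24(rw,sync,no_subtree_check)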

As I see it (which probably needs fact-checking), when we mount a filesystem it becomes a transparent directory for the OS, so it should be fairly universal as to which actual filesystems we might be able to use this way. At least with Linux filesystems and NFS shares it should work, as they use Unix-like permissions. But I suspect that SMB with its ACLs could introduce a lot of headaches; I have no idea how that permission translation works. Maybe @anodos could give us an idea of how much effort it would require to implement this. That might affect what is achievable and what is too much to ask.
WebDAV is probably easy. AFP I do not use and do not care much about, so I won't even speculate. iSCSI should probably be excluded from the request altogether; I agree that block-level access to non-ZFS storage could do more harm than good.
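On the surface, the Samba side looks just as trivial; a minimal stanza like the one below (the path is a placeholder), though the ACL mapping I mentioned is the real unknown:
Code:
[media]
    path = /mnt/ext4-data
    read only = no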

Also, I would not rush it right now. It turned out that I upgraded to TrueNAS 12 relatively early (even though I waited for initial reviews); at that time there was not much going on in the "Installation and upgrade" section apart from minor issues with GUI themes and charts, but looking at it now, as @morganL said:
Our plate is pretty full
and not with SCALE development.


I'm curious whether a solution other than adding official support for other filesystems could be accepted: perhaps a lightweight Docker container/VM controlling the additional filesystem/pool? Something similar to a plugin that can be installed in one click and possibly configured in a GUI at another IP?
Sorry, I don't follow, especially why there should be a plug-in for it. Right now there's a simple backend restriction in the middleware that prevents what I'm proposing.


@InQuize along with Docker/k8s
a lightweight Docker container/VM
Any further integration with VMs and CTs should be a separate step, IMO, breaking the problem down into smaller ones.
 

anodos

Sambassador
iXsystems
Joined
Mar 6, 2014
Messages
9,545
Supported filesystems for file-sharing services will always be limited to some subset of what is available on Linux. Typically a service needs to be properly configured for the underlying filesystem (filesystems often have different capabilities). I have not researched MergerFS, but past attempts to introduce tiering have failed due to issues with data integrity or loss of information (xattrs and ACLs).

Please file a feature request on Jira and it will be given due consideration, but we typically lean toward what is actually supportable in an enterprise environment. This is the correct process, and there can be back-and-forth discussion in the Jira ticket. I am not a decision-maker regarding what goes into the product; Jira is the correct way to make sure it's raised with the appropriate members of our team.
 

InQuize

Explorer
Joined
May 9, 2015
Messages
81
OK

I am not a decision-maker regarding what goes into the product
I was only asking for your technical expertise, not for you to speak for the team.

@jsclayton
I did a bit more research. I was interested in MergerFS myself (not in this context), though I never gave it a try. As mentioned on GitHub, it is a FUSE filesystem (it runs in userspace). It appears to be a whole different difficulty level to implement than the vanilla filesystems that I am proposing, which already have Debian support.
For example, NFSv4 export of a FUSE FS may be possible, but v2/v3 already gives trouble. Also, a quote from `man exports`:
exports(5) said:
fsid=num|root|uuid
NFS needs to be able to identify each filesystem that it exports. Normally it will use a UUID for the filesystem (if the filesystem has such a thing) or the device number of the device holding the filesystem (if the filesystem is stored on the device).
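In practice, that seems to mean a FUSE export needs an fsid assigned by hand in /etc/exports, something like (untested):
Code:
/mnt/mergerfs  *(rw,fsid=101,no_subtree_check)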
So even NFS needs to be aware of mount details; I can't even imagine how complex it would be in the case of SMB.
That's why I limited the feature request to kernel-space FS types. Otherwise, it would be too ambitious.
 

InQuize

Explorer
Joined
May 9, 2015
Messages
81
Revisiting this thread,
I made tests in the latest SCALE nightly and left a comment in the Jira ticket, which is otherwise pretty stale:
Just tried it, mounting ext4 like so:
/mnt/<pool>/ext4-mount
and like that, the GUI allows creating shares.
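The mount itself was something like this (device and pool names are examples):
Code:
mkdir -p /mnt/tank/ext4-mount
mount -t ext4 /dev/sdf1 /mnt/tank/ext4-mount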

NFSv3 works as expected; v4 I could not enable because of a service-settings issue.
SMB dumps core when I access this share.

TrueNAS-SCALE-21.06-MASTER-20210617-212918
 