FreeNAS & TrueNAS Plans - 2020 and Beyond!

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,994
Would it be considered safe, or a good idea by any stretch, to stay with FreeNAS until TrueNAS SCALE is stable and production ready?
There is no reason you can't stay with FreeNAS 11.3-U5 or TrueNAS 12.0-U1 (and successive versions) until SCALE becomes stable. I think most of us want to switch over to a Linux based system eventually.
 
Last edited:

Jailer

Not strong, but bad
Joined
Sep 12, 2014
Messages
4,977
I think most of us want to switch over to a Linux based system eventually.
Speak for yourself. :tongue: I happen to very much like FreeBSD and would like to stay with it if possible. Although I do see TrueNAS switching to a Linux base at some point in the future, as predicted in my post on the first page of this thread. The enterprise support for FreeBSD just isn't there, unfortunately.
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,994
Speak for yourself. :tongue: I happen to very much like FreeBSD and would like to stay with it if possible. Although I do see TrueNAS switching to a Linux base at some point in the future, as predicted in my post on the first page of this thread. The enterprise support for FreeBSD just isn't there, unfortunately.
My apologies, @Jailer! I knew I'd get some folks who actually like FreeBSD. It does work and I can use it fine, but I have noticed that Linux applications are updated often and typically have to be ported to FreeBSD before native FreeBSD users can take advantage of them, which sometimes takes a long time.
 

morganL

Captain Morgan
Administrator
Moderator
iXsystems
Joined
Mar 10, 2018
Messages
2,694
One last question: would it be considered safe, or a good idea by any stretch, to stay with FreeNAS until TrueNAS SCALE is stable and production ready? I mean, I will continue testing, but eventually I will likely be switching over to SCALE anyway, so waiting saves me a step. As far as pool features and data go, I don't need to import pools: I have enough free resources that I can always just rsync data to an upgraded pool. In other words, upgrades don't need to be done "in place", so to speak. And I may just purchase them anyway, as it seems it may be feasible considering the density that could be available.

FreeNAS 11.3-U5 is very good and stable. The advantage of TrueNAS 12.0 is that it uses OpenZFS 2.0, which is common to FreeBSD and Linux. From TrueNAS 12.0-U2 we plan to have a tested and supported migration path to TrueNAS SCALE. However, you could wait on FreeNAS 11.3-U5 and do the upgrades when you are ready.
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
If you want to recommend two or three specific PCIe card models, I'd be happy to order them and test all three in the Dell 14th Gen of your choice, and report my findings, of course.
For now, I'm just curious about which specific model you're using.
 

ChrisRJ

Wizard
Joined
Oct 23, 2020
Messages
1,919
[..] I think most of us want to switch over to a Linux based system eventually.
Although I have been a Linux person since 1995, I am not sure I welcome this change that much. I can understand that iXsystems wants to make this switch simply for the available know-how. Plus the hardware vendor support is obviously broader, and the community also does its part in testing. But Linux has become kind of the JavaScript framework of *nix systems. What I mean by that is that I personally perceive the rate at which things are redone as too high for my liking (just like every year multiple JavaScript frameworks appear that do the same thing as twenty others, just differently).

While there is merit to improving things, stability is often more important. And stability not only means that things work as expected; the rate of change is also a factor. If a new framework saves me 20% development time, that sounds great. But in the enterprise, evolution, and with it investment protection, is typically what gets you the much better ROI, because the 20% development improvement is more than eaten up by effort in other areas (especially operations).
 
Last edited:

Patrick M. Hausen

Hall of Famer
Joined
Nov 25, 2013
Messages
7,776
Same here - Linux does not have jails. Thanks but no thanks.
 

ornias

Wizard
Joined
Mar 6, 2020
Messages
1,458
I can understand that iXsystems wants to make this switch simply for the available know-how.
That's (mostly) not part of the reasoning behind it.
Support for industry-standard containers and solid (production-ready) Gluster support were the main reasons behind it.

But Linux has become kind of the JavaScript framework of *nix systems. What I mean by that is that I personally perceive the rate at which things are redone as too high for my liking (just like every year multiple JavaScript frameworks appear that do the same thing as twenty others, just differently).
I don't really get this; it's not like the Linux kernel introduces a fork of itself every year.

While there is merit to improving things, stability is often more important. And stability not only means that things work as expected; the rate of change is also a factor. If a new framework saves me 20% development time, that sounds great. But in the enterprise, evolution, and with it investment protection, is typically what gets you the much better ROI, because the 20% development improvement is more than eaten up by effort in other areas (especially operations).
You completely ignore the fact that most FreeBSD features (drivers and software) are "just" Linux ports.


What I get from your comments is a poor attempt at finding reasoning behind your beliefs.
Beliefs are fine, but don't sell them as facts.

In stark contrast with @Patrick M. Hausen, who just gave a solid reason not to switch, based on facts:
Jails are great.
 

inman.turbo

Contributor
Joined
Aug 27, 2019
Messages
149
For now, I'm just curious about which specific model you're using.

It looks like all the servers I've tried so far have the same card: LSI 9211-8i. I see a couple of LSI 9207-4i4e here, but we haven't tried them yet.
So that's a Dell PowerEdge R440 w/ LSI 9211-8i and WD Red SA500 SSDs, and a PowerEdge R540 w/ LSI 9211-8i and Intel DC S3700s. All SSDs so far. Haven't tried a big pool yet. Will try soon; those are LSI 9207-4i4e w/ WD Red 10 TB, 5400 RPM, 256 MB cache, CMR.

I have tried swapping cards, drives and cables in all cases, but always with identical parts. In other words, I haven't tried changing the card model or different types or brands of drives or cables, just more of the same.

I will also test power outputs today if I get time, b/c @ornias told me 12 happens to be more aggressive with power. Grasping at straws a bit, I know, but I have had power issues in this location before ...
 
Last edited:

inman.turbo

Contributor
Joined
Aug 27, 2019
Messages
149
On a side note, I tried SCALE out of curiosity on one of the affected R440s, first on bare metal for a few hours, then virtualized, letting writes run all night. No issues or errors. Nothing in depth in terms of testing here. Just a basic mirrored pool, and (random) writes were done over NFS on 10GbE RJ45.
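For anyone who wants to throw a similar overnight random-write load at an NFS mount, a basic fio job along these lines would do it (the mount point and job parameters are just an example, not exactly what I ran):

# sustained random writes against an NFS-mounted dataset; adjust --directory and --runtime to taste
fio --name=nfs-randwrite \
    --directory=/mnt/nfs_test \
    --rw=randwrite --bs=128k --size=4g \
    --numjobs=4 \
    --time_based --runtime=28800 \
    --group_reporting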
 

inman.turbo

Contributor
Joined
Aug 27, 2019
Messages
149
OK, I don't want to hijack this thread, but on the same server I ran SCALE on overnight, I just installed TrueNAS CORE. The plan was to get some error logs to post; however, now I am getting NO ERRORS. I even have SMART and CAM access and everything looks good?? Going to try running some random I/O for a while.

Now I'm thinking this must be power related, but there are multiple machines in the same cabinet, 2 UPSes + 2 PSUs in the rack, and obviously each machine has its own power supply, some two. The common factor is the circuit breaker panel. Perhaps the circuits were overloaded yesterday and output was reduced, but not enough to trip the UPSes? Hope I'm not sending you guys a red herring, but on the other hand it would be a nice and easy fix if this is just a power issue. It's really odd. The UPSes have had plenty of output during disaster testing to run everything for quite some time. But in this case no outages have been logged.

Power looks good ATM. Going to start logging temperature for a day or so.
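Something as crude as this should be enough for a day of readings (the sensor types, disk device and log file below are placeholders for whatever your box actually has):

# example only -- adjust sensors, device names and interval to your own hardware
while true; do
    date >> /var/tmp/env.log
    ipmitool sdr type Temperature >> /var/tmp/env.log                  # chassis/CPU/inlet sensors
    ipmitool sdr type "Power Supply" >> /var/tmp/env.log               # PSU readings, since power is the suspect
    smartctl -A /dev/da0 | grep -i temperature >> /var/tmp/env.log     # one disk as an example
    sleep 300
done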

Now running SCALE and CORE side by side on identical servers, to see if the errors start again and if so whether they occur on both and can be attributed in some way to OpenZFS.
 
Last edited:

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
What's holding you? If you prefer Linux, that's fine, but you are in the wrong place.
No, since TrueNAS SCALE brings the requested features (and is based on Debian Linux), that's not right.
 

ramib

Dabbler
Joined
Dec 31, 2019
Messages
41
No, since TrueNAS SCALE brings the requested features (and is based on Debian Linux), that's not right.
I am on Arch with ZFS pools after migrating the data, just for testing. I love Arch, and if not Arch there will be no Linux here. I miss the jails already, since Docker is shitty and I feel like I have less control, and it looks like the load average is a bit higher than what I had in FBSD.
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
Docker is a lot less irritating when using the ZFS storage driver, which, as I understand it, supposedly only works with Ubuntu (though it does work well there). Still, it has enough rough edges that I'd happily trade it for jails if I could.
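For anyone who wants to try it, getting the zfs driver going is roughly this (the dataset name and layout are just an example; check your own distro's setup):

# /var/lib/docker has to sit on a ZFS dataset before the daemon first starts
zfs create -o mountpoint=/var/lib/docker tank/docker    # "tank/docker" is a placeholder name

# /etc/docker/daemon.json
{
    "storage-driver": "zfs"
}

# then restart the daemon and verify
systemctl restart docker
docker info | grep -i "storage driver"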
 

ramib

Dabbler
Joined
Dec 31, 2019
Messages
41
Docker is a lot less irritating when using the ZFS storage driver, which, as I understand it, supposedly only works with Ubuntu (though it does work well there). Still, it has enough rough edges that I'd happily trade it for jails if I could.
Technically it should work on any Linux; it works very well on my Arch here (kernel 5.10.9), and the import was just like in FBSD: just zpool import (with or without -f, depending on whether you exported the pool from TrueNAS first).
Docker is also working fine; all the containers are on the ZFS pool.
But I still miss the BSD :). First time running Linux on my NAS in many years.
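For anyone trying the same move, the commands really are that short (the pool name "tank" below is just a placeholder):

# on the TrueNAS side, export the pool first if you can -- avoids needing -f later
zpool export tank

# on the Linux side
zpool import            # list pools visible on the attached disks
zpool import tank       # clean import after an export
zpool import -f tank    # force it if the pool was last used on another host and never exported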
 
Last edited: