Looking for suggestions on a complicated nas4free to TrueNAS upgrade

OneOldAdmin

Cadet
Joined
Jun 15, 2023
Messages
3
I'm going to start by requesting that you hold your laughter. I laugh and cry about this enough lately :)

I need to upgrade my NAS in-place. Having to do a restore from backup is possible as an emergency last resort, but otherwise it would require significantly more downtime than I want.

The complication is two-fold.
1. It's about 10 years old.
2. The system is hacked to meet needs that FreeNAS couldn't meet at the time and/or to work around bugs in that version.

The original changes we made under the hood were:
* Manual network config: LAGG with multiple VLAN tags (rc.conf sketch below).
* Manually partitioned disks, manually added to config.xml.
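
For reference, the network piece lives in /etc/rc.conf and looks roughly like this (interface names, VLAN IDs, and addresses changed for illustration; I'm going from memory):

    # aggregate two NICs with LACP, then hang tagged VLANs off the lagg
    ifconfig_igb0="up"
    ifconfig_igb1="up"
    cloned_interfaces="lagg0 vlan10 vlan20"
    ifconfig_lagg0="laggproto lacp laggport igb0 laggport igb1"
    ifconfig_vlan10="vlan 10 vlandev lagg0 inet 10.0.10.5/24"
    ifconfig_vlan20="vlan 20 vlandev lagg0 inet 10.0.20.5/24"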

We had two physical SSDs for slog and cache. We wanted to carve them up into partitions so we had more control over how space was allocated for each pool. This change required manually partitioning the disks and manually updating the config file so the FreeNAS UI would see them. It also comes with accepting that we forfeit the ability to modify the disk configuration in the UI (or at least must proceed with caution).
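
From memory, the disk carve-up was along these lines (device names, slice sizes, and the mirrored-slog arrangement are illustrative, not the exact originals):

    # carve each SSD (da22/da23 here) into slog + cache slices, one pair per pool
    gpart create -s gpt da22
    gpart add -t freebsd-zfs -s 8G da22     # da22p1: slog slice for pool1
    gpart add -t freebsd-zfs -s 8G da22     # da22p2: slog slice for pool2
    gpart add -t freebsd-zfs -s 60G da22    # da22p3: cache slice for pool1
    gpart add -t freebsd-zfs -s 60G da22    # da22p4: cache slice for pool2
    # ... identical layout on the second SSD, da23 ...

    # each pool then gets a slog and a cache slice from each SSD
    zpool add pool1 log mirror da22p1 da23p1
    zpool add pool1 cache da22p3 da23p3
    zpool add pool2 log mirror da22p2 da23p2
    zpool add pool2 cache da22p4 da23p4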

For the last 10 years, the only changes we have made in the UI are adding/removing/resizing datasets and NFS exports. On rare occasion we run into a dataset that cannot be resized in the UI. In those cases the new sizing is not actually applied to the dataset, and we simply make the required update manually to match the UI.
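
(The manual fix is just setting the size from the shell to match what the UI shows; the dataset name here is made up, and whether it maps to quota or refquota depends on what the UI writes:)

    # make the dataset's actual sizing match what the UI claims
    zfs set quota=2T pool1/projects
    zfs get quota,refquota pool1/projects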


Looking for suggestions on a sane path to take. Two considerations so far:
1. Boot into a FreeBSD 13 live CD and verify that the pools can be accessed properly (sketched below). If all is good, install FreeBSD 13 on disk to replace FreeNAS and manually maintain ZFS and NFS moving forward.
2. The same as above, but on TrueNAS. I don't see a live CD in the download section. I assume configuring will be extremely painful unless it can automatically detect what we have going on with our data disks.
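
The verification I have in mind for option 1 would be roughly this from the live environment (pool names are ours; flags straight from zpool(8)):

    # scan for importable pools without importing anything
    zpool import
    # import read-only under an alternate root so nothing on disk changes
    zpool import -o readonly=on -R /mnt pool1
    zpool status -v pool1
    zfs list -r pool1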

FreeBSD 9.3
Nas4Free 1.7 (found in /conf/config.xml)

OS installed on LSI hardware raid 1
2x SSDs
- each has several partitions for logs
- each has several partitions for cache
- each zpool uses 1 cache and 1 log partition from each SSD
11x disks for pool1
5x disks for pool2
2x spare disks (available to both pools)
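
In zpool status terms, each pool looks something like this (device names illustrative; data vdev layout elided):

    pool1                ONLINE
      (11 data disks)    ONLINE
    logs
      mirror-1           ONLINE
        da22p1           ONLINE
        da23p1           ONLINE
    cache
      da22p3             ONLINE
      da23p3             ONLINE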
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
config.xml sounds like we might be talking about FreeNAS 0.7, which is pre-iXsystems and is a completely different product. There is no code carryover from FreeNAS 0.7.

There is no "live CD" for TrueNAS because the concept makes no sense. It's an installable appliance, and while you can boot to singleuser and do basic maintenance in a crisis, it is only functional if installed.

TrueNAS brings with it a lot of great functionality to keep your system running smoothly, but if you've partitioned your disks, you are probably hosed. TrueNAS is designed for storage servers with a bunch of bays where you put a bunch of hopefully equally sized disks and set them up as a storage (data) pool, while you use an SSD or two for a boot pool to store the appliance OS. While you may be able to butcher it to work differently, you will inherently lose capabilities like automatic disk replacement when you have a failure, and a variety of other things will not work (quite?) correctly either.

You may be better off with FreeBSD, but you'll also lose out on a lot of neat functionality. If your system is really that old, perhaps look at acquiring an inexpensive used server and loading it up with your data. We have hardware suggestions that are highly TrueNAS compatible over in the Resources section. You can then use your existing server as a backup/replication target or some other cool thing.
 

OneOldAdmin

Cadet
Joined
Jun 15, 2023
Messages
3
Yes, this is ancient. Never mind TrueNAS; this is from prior to it becoming FreeNAS. (IIRC, nas4free was rebranded as FreeNAS. I could be mistaken.)

I hear you on your statement about running a NAS on a live CD. Don't fully agree "anymore", but totally understand the reasoning. It just is what it is; I'll wipe the tears and get over it :)
My use case was just to do a quick test without modifying the installed system. Being able to confirm that FreeBSD under the latest TrueNAS is making proper use of my hardware and zpools (without any weird I/O or stability issues) would provide a lot of assurance before upgrading. If TrueNAS doesn't work out for some reason, I can plan to move to native FreeBSD instead.

Just to be clear, it would be a new install of TrueNAS or FreeBSD, but keeping my existing zpools. You couldn't pay me to attempt using the upgrade button :). TrueNAS is preferred as I will start using iSCSI and S3 in the near future. At the same time, I want to get people off of the CLI.

As for the hacks:
I think I remember seeing that the project added support for LAGG + VLAN tags years ago, so that's one hack I don't need to carry over.

Disk partitions... I found a post from 2021 confirming that it still was not supported, and I'm guessing that's likely still the case. I couldn't find another approach. I was over budget already and had to use the disks we had already purchased. The goal was that each pool has an L2ARC and a slog, and in the event we lose an SSD, the pools still have an L2ARC and a slog. I haven't used FreeBSD since 5.1, nor have I used ZFS elsewhere, so I'm a bit outside my area of expertise. I'm not sure what the best approach on 2 disks would be. All I can say is we did pull tests and did not notice any issues.

Our current setup is maxed out at 24 disks. The OS is installed on hardware RAID. The cache and log hack job spans the two SSDs. All remaining disks are the same 1.8T model. We did yank each type of disk after installation to confirm that we could hot-swap everything.
We only use NFS. All changes in the last 10 years have been to adjust datasets and export them via NFS. For that reason, it wouldn't be the end of the world if we had to go to FreeBSD instead of TrueNAS.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
IIRC, nas4free was rebranded as FreeNAS. I could be mistaken.

Exactly backwards, in fact. The FreeNAS project sold its trademark to iXsystems and then became nas4free.

Don't fully agree "anymore", but totally understand the reasoning.

Feel free to explain what sort of meaningful "live CD" you could run. From a certain perspective, it already is a live CD in that it is a specific image that runs (similar to a live CD), but it runs from the boot pool. However, certain bits do get overwritten as part of the appliance configuration. To have a true live CD, you'd actually have to be able to get meaningful functionality running, and that's just not tenable.

My use case was just to do a quick test without modifying the installed system.

That's mostly meaningless. TrueNAS doesn't have a hardware validation tool, which means it has been left up to us here in the forums to provide guidance.

I can plan to move to native FreeBSD instead.

If it's not stable under TrueNAS, there's a good chance it won't be under FreeBSD either. TrueNAS is about 99.9% standard FreeBSD, just with an appliance framework overlaid.

You couldn't pay me to attempt using the upgrade button :).

Upgrades are only supported from older versions of FreeNAS or TrueNAS, and that doesn't include the pre-iXsystems FreeNAS. An "upgrade" is just an overwrite of the existing firmware with the new firmware, and then it runs a database conversion on the configuration database. That doesn't work if you don't have the iXsystems-designed configuration database.

I think I remember seeing that the project added support for LAGG + VLAN tags years ago, so that's one hack I don't need to carry over.

All basic networking is supported. More sophisticated stuff, such as CARP, OSPF, or BGP, is not.

Disk partitions... I found a post from 2021 confirming that it still was not supported, and I'm guessing that's likely still the case.

Right. So in the early days we saw a bunch of people who had stuff like two 3TB drives and a new 6TB drive and desperately wanted to turn it into a RAIDZ2 by putting two partitions on the 6TB. The problem is, this breaks stuff like automated drive replacement, because if the 6TB drive breaks, how do you replace it in code?

Sun designed ZFS to be used on large disk arrays and expected full disks to be components. iXsystems adopted that model because it makes a lot of sense from a lot of perspectives, but it is inconvenient to hobbyists who are looking for cheap ZFS hacks.

Avoid disk partitions; the system definitely won't help you create them, but it also won't stop you. If you force the issue, you break certain things, which may be acceptable to you, but accept responsibility for your own choices if you go that route. It is not GOING to be supported. Splitting devices for a SLOG may make sense if you have a super fast device like an Optane.

TrueNAS is preferred as I will start using iSCSI and S3 in the near future

Presumably you've read the block storage guide.


The OS is installed on hardware RAID.

Not supported. ZFS does not want hardware RAID. See the HBA sticky.


The boot pool is ZFS, so the same rule applies to the boot pool. There is a minor window of exception in that you can use an IR mode HBA to get redundant booting; see my resource at


This is an exception because the IR firmware is known to work just fine and uses the same drivers as IT mode.
 

OneOldAdmin

Cadet
Joined
Jun 15, 2023
Messages
3
Feel free to explain what sort of meaningful "live CD" you could run. From a certain perspective, it already is a live CD in that it is a specific image that runs (similar to a live CD), but it runs from the boot pool. However, certain bits do get overwritten as part of the appliance configuration. To have a true live CD, you'd actually have to be able to get meaningful functionality running, and that's just not tenable.

...

That's mostly meaningless. TrueNAS doesn't have a hardware validation tool, which means it has been left up to us here in the forums to provide guidance.
No. Being able to preview an OS temporarily is priceless. Alternatively, I could swap out the disks with the OS and try it on new disks, but that's a real pain and requires physical access (twice). An ISO image can be managed easily over BMC. Anything beyond that point I would agree is meaningless.

If it's not stable under TrueNAS, there's a good chance it won't be under FreeBSD either. TrueNAS is about 99.9% standard FreeBSD, just with an appliance framework overlaid.
No, not really. I'd be pretty confident that native FreeBSD will run fine. Given the age of my installation and the hacks done to it, I would be very surprised if I could simply jump to the latest TrueNAS without a battle. It's not a question of stability. The question is: do I jump to native FreeBSD (simple), or to TrueNAS, which requires a lot more work? I'm not about to go through 10 years of release notes.

My thought was a new install of TrueNAS, just importing my existing pools, and then evaluating what will be needed.

A lot of this could be discovered while running off an ISO for an hour or two, and most of it could be tested in a VM.
and then it runs a database conversion on the configuration database.
Right, that's where my concern is.

Avoid disk partitions; the system definitely won't help you create them, but it also won't stop you. If you force the issue, you break certain things, which may be acceptable to you, but accept responsibility for your own choices if you go that route. It is not GOING to be supported. Splitting devices for a SLOG may make sense if you have a super fast device like an Optane.
Sorry, I'm still not following what the problem is.
A pool has 2 cache devices: one partition from disk A, and one from disk B.
Same with the slog: one from A, one from B.

Losing disk A means that both pools suffer the loss of a cache and slog device. However, both pools still have a cache and slog available from disk B.
If I instead used disk A for cache on pool1 and disk B for slog on pool1, then losing either disk would cause a total loss of cache or slog for pool1, and pool2 wouldn't have any cache/slog configured at all.

As for being supported, I think that ship sailed a very long time ago. Nobody is expecting this to be supported; I'm just exploring options here.


I looked briefly at the link you sent for block storage, but I didn't see the connection to S3 (object storage). Looking around, I see MinIO requires a block device. Is this what you are referring to?

Not supported. ZFS does not want hardware RAID. See the HBA sticky.
...

The boot pool is ZFS, so the same rule applies to the boot pool. There is a minor window of exception in that you can use an IR mode HBA to get redundant booting; see my resource at
I haven't read the HBA sticky, but I understand what you're referring to, and I don't think it applies. All my data disks (and the cache/slog disks) are passed through. Disks are presented to the OS natively, without interference from the controller.

The OS is running on UFS. Maybe ZFS on root wasn't mature enough at that time.


Appreciate the clarification on the FreeNAS / nas4free history.
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
I haven't read the HBA sticky, but I understand what you're referring to, and I don't think it applies. All my data disks (and the cache/slog disks) are passed through. Disks are presented to the OS natively, without interference from the controller.
You need to read the sticky, which will explain that what you said is not right. It has to be the whole controller (and it must be an HBA), or it's set up for failure.

appreciate the clarification on the freenas / nas4free.
Now called XigmaNAS, in case you want to give that a go instead.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
No. Being able to preview an OS temporarily is priceless. Alternatively, I could swap out the disks with the OS and try it on new disks, but that's a real pain and requires physical access (twice). An ISO image can be managed easily over BMC. Anything beyond that point I would agree is meaningless.

If you have BMC access, use a virtual thumb drive.

Right, that's where my concern is.

You're not eligible because you don't have a configuration database.

I haven't read the HBA sticky, but I understand what you're referring to, and I don't think it applies. All my data disks (and the cache/slog disks) are passed through. Disks are presented to the OS natively, without interference from the controller.

That's not sufficient.

The OS is running on UFS. Maybe ZFS on root wasn't mature enough at that time.

ZFS on root didn't exist at the time.
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
My thought was a new install of TrueNAS, just importing my existing pools, and then evaluating what will be needed.
Given the nature of your existing pools, that would be my suggestion - possibly even a command-line import using the -o readonly=on flag if you want to be exceptionally cautious about not writing anything until you're sure it works, although that could impact your ability to do functional tests.
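
Something along these lines from the shell (pool name yours):

    # cautious first import: read-only, mounted under an alternate root
    zpool import -o readonly=on -R /mnt pool1
    # readonly is an import-time property; export and re-import to go writable
    zpool export pool1
    zpool import -R /mnt pool1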

As for being supported, I think that ship sailed a very long time ago. Nobody is expecting [split partitions for L2ARC/SLOG] to be supported; I'm just exploring options here.

It's "supported" from an OpenZFS perspective to do this, so you can expect to be able to import the pool - but because splitting the device isn't something that's "supported" in the TrueNAS middleware, the webUI will likely display duplicate device names (eg: da12 will show up for both a cache and log device) and performing things like a drive replacement or offline through the webUI may not function correctly (eg: you try to take the cache partition da12p1 offline, TrueNAS expects it to be a full disk, it gives you an error stating the device is in use because da12p2 is attached to the pool as a log dev)

As mentioned by @jgreco, if your BMC/IPMI allows it, you could install TrueNAS onto a virtual thumb drive.
 