
From FreeNAS 11.3 to TrueNAS Core 12 - Upgrade Stories

Stilez

Senior Member
Joined
Apr 8, 2016
Messages
465
A heck of a reply - thanks for this.
I'm ... curious whether I should upgrade to the beta after all.
Is it possible to change my train (yes) and switch to beta, then on next reboot, switch to the old version, if problems occur?

I recall changing trains being a real problem - does it lock out previous boot instances?
  1. Yes boot environment choice should be fine, even with trains changed
  2. It's ZFS. Run "zfs snap -r <SYSTEM_POOL_NAME>@BeforeTrialUpgrade" and you'll always have your existing setup to revert to from single-user mode, regardless of what the upgrade process does.
  3. You can always export the pre-upgrade config, clean install old version, reimport
  4. The config is a standard sqlite3 file anyway, so plenty of software on every platform can read and rewrite your more complex settings if required, at any time - which you should never need. That could be useful if you make functional changes in 12 and later want to backport them to, say, 11.x for some reason, as AFAIK there isn't a way to import a later config into an earlier version if the config structure has changed.
If you want to upgrade, the safest way is do all those things.
  1. Back up the existing config + secrets file off the server, snapshot your existing system, switch trains and upgrade, and don't upgrade pool features.
  2. If you have spare disks, then an alternative is to mirror and split the system pool, or clean install and import your old config, and again don't upgrade pool features, leaving your current boot/system disks intact and unchanged. Any issues, swap boot disks back.
Any of those, or any combination, should be 100% safe.
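As a sketch of those safety steps from the shell - the config database path (/data/freenas-v1.db), the secret seed (/data/pwenc_secret) and the boot pool name (freenas-boot) are the usual FreeNAS 11.x defaults, so verify them on your own box first:

```shell
# Run from another machine: copy the config DB and encryption secret
# off the server before touching anything (paths are the 11.x defaults)
scp root@mynas:/data/freenas-v1.db root@mynas:/data/pwenc_secret ~/nas-backup/

# Run on the server: recursive snapshot of the boot/system pool,
# so single-user mode can always roll back to it
zfs snapshot -r freenas-boot@BeforeTrialUpgrade

# Confirm the snapshots actually exist before upgrading
zfs list -t snapshot -r freenas-boot
```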

I would say that at this point, unless you have a pressing need, I'd wait for RC1. It's due out fairly soon, and a few annoyances are believed fixed in it, so if you haven't already moved it's maybe not worth moving to beta2 just to upgrade to RC1 two weeks later. I upgraded early because there was a feature I needed badly enough for my pool to actually be reliable, and that outweighed the annoyances. If that's not you, I'd suggest giving it the extra couple of weeks till the RC. But if you want the new fun things enough, yes, it's pretty safe.

Upgrade works, I did that initially with beta1. When beta2 happened I grabbed spare disks, downloaded the ISO, and clean installed and imported my backed up config. I figured after a few years, and changes in ZFS, let's give the system pool a clean start too. Both ways worked fine.
 
Last edited:

diskdiddler

Dedicated Sage
Joined
Jul 9, 2014
Messages
2,169
I suspect RC1 may contain the ZSTD support for volumes (via the GUI), right? So perhaps it would be prudent.
 

Stilez

Senior Member
Joined
Apr 8, 2016
Messages
465

One other reason for waiting for RC1 - some very specific issues I reported, affecting pool replication, look likely to be fixed in it.

They could be relevant if you replicate the pool to get the benefits of ZSTD (as your post suggests you might intend):

1. Middleware front/back-end disconnections, which mean that when you try to do some things in the web UI it fails the first time or takes a while - like looking up available trains and updates.

2. ZFS replication (send/recv) refused to run if the pool or dataset stream being sent had more than about 1,000 snaps, so it had to be done piecemeal.
(Technically, an internal limit on holds (nvlist size) was created but set much too small; there was no such limit in the older ZFS used in <= 11.x. ZFS uses these holds internally to prevent snaps being destroyed while a send is going to access them, so the 12 betas couldn't create holds for more than about 1,000 snaps at a time.)
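As a rough illustration (pool/dataset names are placeholders, not from the thread), you can count the snapshots a recursive send would reference, and chunk the send if it's near the limit:

```shell
# Count the snapshots a recursive send of tank/data would touch
zfs list -H -t snapshot -r tank/data | wc -l

# If that's over ~1000 on an affected beta, send piecemeal:
# a full send up to an early snapshot, then incremental chunks
zfs send -R tank/data@snap0100 | zfs recv -F backup/data
zfs send -R -I @snap0100 tank/data@snap0500 | zfs recv backup/data
```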

3. Scrub is now efficient enough that, when scrubbing a fast deduped pool, it can actually drive the system into CPU starvation calculating hashes. There's never been a need to throttle checksumming/hashing on read before, because RAM and disk IO (especially on writes) have always been the limiting factors.

The devs say the first 2 are fixed for RC1, and the 3rd is reproduced and being worked on at the moment.

Like I said, the betas have some annoyances, but nothing actually damaging.
 
Last edited:

diskdiddler

Dedicated Sage
Joined
Jul 9, 2014
Messages
2,169
They are working MUCH faster than I ever expected them to, it's kind of amazing.
I don't know how they're doing it, to be honest - the alphas/betas are coming out super quick.
 

adrianwi

Neophyte Sage
Joined
Oct 15, 2013
Messages
1,119
Hats off to all you brave testers!

Has anyone upgraded to 12.0 and then tried to upgrade the version of FreeBSD within a jail?
 

Peter Brille

Member
Joined
Mar 6, 2015
Messages
204
I already upgraded my home (but production) system to beta2.
I backed up ALL my personal data to external disks, exported my jails with iocage export, and did a zfs send for my bhyve guests.
Then I did a fresh install and created new ZFS volumes with the new dataset encryption. It's actually an improvement.
Then, on the new beta, I copied all my data back over, which took a couple of days.
iocage import did the job quite well for the jails, and zfs receive for the bhyve guests.
I also made a video about the migration, but it's in German only, sorry :smile:
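For anyone following the same route, the commands behind that kind of migration look roughly like this (jail, guest and pool names are examples, not Peter's actual setup):

```shell
# Export each jail to a portable archive (lands under the iocage dataset)
iocage export myjail

# Snapshot and send each bhyve guest zvol to the external/backup pool
zfs snapshot tank/vm/guest1@migrate
zfs send tank/vm/guest1@migrate | zfs recv backupdisk/vm/guest1

# After the fresh install: bring the jail back and restore the guest
iocage import myjail
zfs send backupdisk/vm/guest1@migrate | zfs recv tank/vm/guest1
```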
 
Last edited:

Peter Brille

Member
Joined
Mar 6, 2015
Messages
204
Hats off to all you brave testers!

Has anyone upgraded to 12.0 and then tried to upgrade the version of FreeBSD within a jail?
Yes, I did, with iocage upgrade - from the outside (host side), not from inside the jail (freebsd-update).
It worked just fine.
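For reference, a minimal sketch of that host-side approach (the jail name and target release are examples):

```shell
# Upgrade the jail's FreeBSD userland from the host via iocage,
# instead of running freebsd-update inside the jail itself
iocage upgrade -r 12.1-RELEASE myjail
```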
 

deafen

Member
Joined
Jan 11, 2014
Messages
71
My apologies if this has been asked and answered elsewhere - I was not able to find the right magic keywords to search for it. Am I correct in assuming that I will need to upgrade my 11.2-U8 system to 11.3 before upgrading to TN12C?
 

Ericloewe

Not-very-passive-but-aggressive
Moderator
Joined
Feb 15, 2014
Messages
16,769
Not necessarily, but what's holding back the update to 11.3?
 

deafen

Member
Joined
Jan 11, 2014
Messages
71
Not necessarily, but what's holding back the update to 11.3?
Things have been working fine, and even though it's a home server, it's also a production datastore for my wife's photography business, so I tend to be conservative with changes.

The dedup performance improvements and hybrid zpools are really compelling features, though, so I'm planning to upgrade at RC1. I just wanted to make sure I understood how I'm supposed to go about that.

Thanks!
 

Ericloewe

Not-very-passive-but-aggressive
Moderator
Joined
Feb 15, 2014
Messages
16,769
If it's being used for work, definitely wait until release, at least.
 

Stilez

Senior Member
Joined
Apr 8, 2016
Messages
465
The dedup performance improvements and hybrid zpools are really compelling features, though, so I'm planning to upgrade at RC1. I just wanted to make sure I understood how I'm supposed to go about that.
When you want to upgrade, it'll appear as a normal upgrade for 12-RELEASE; or change trains on the update page to beta and upgrade to 12-BETA2.1 or RC1, and in due course those will be upgradable to 12-RELEASE. So that's easy enough. Eric's given a good word of caution. I've found beta 2.1 rock solid for data handling and SMB, but if it's your business you can always go slow. It depends how urgently you *need* any feature. 12 shouldn't lose you data, put it that way - it seems safe in that sense. But it's your risk if you do.

Dedup is superb, but it relies on hybrid pools (i.e. adding special vdevs, whether they're used for metadata, small files, dedup data, or any combination). You won't see dedup improvements if you aren't using special vdevs for the dedup tables (DDT) - and if you are, choose your SSDs with great care; see my resources on choosing SSDs for modern versions of TrueNAS like 12.
You also won't see much (or the best) change if you just add the new special vdevs to an existing pool. The existing data will stay where it is, on the existing disks. For best results, you'll need to replicate the pool to a new pool with the special vdevs defined from the start. That way everything is written to the right devices.
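A sketch of what that replication looks like, assuming the OpenZFS 2.0 syntax shipped in TrueNAS 12 (device names and vdev layout are examples only, not a recommendation):

```shell
# Build the new pool with special and dedup vdevs from day one,
# so metadata and DDT blocks land on the SSDs as they're written
zpool create newtank \
  mirror da0 da1 da2 \
  special mirror nvd0 nvd1 \
  dedup mirror nvd2 nvd3

# Replicate everything across; all data gets rewritten through
# the new allocation classes
zfs snapshot -r tank@move
zfs send -R tank@move | zfs recv -F newtank
```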
 

Yorick

Dedicated Sage
Joined
Nov 4, 2018
Messages
1,700
Dedup seems like a really expensive idea. Two times 480gb Optane is around usd 1200, plus memory, I’m assuming 256gb, that buys a lot of storage.

Not to say it can’t be worth it - but the space savings have to be above the roughly 60ish TiB you could add to your pool with that.
 

Stilez

Senior Member
Joined
Apr 8, 2016
Messages
465
Dedup seems like a really expensive idea. Two times 480gb Optane is around usd 1200, plus memory, I’m assuming 256gb, that buys a lot of storage.

Not to say it can’t be worth it - but the space savings have to be above the roughly 60ish TiB you could add to your pool with that.
A fair idea, but flawed. There are 2 questions and these are their tl;dr answers, elaborated a bit below:
  1. SSD quality - is Optane really needed?

     Dunno, but it seems plausible *in my case*, given I'm seeing 1/4M IOPS on each in routine use - but quite likely "YMMV" for many other people's servers. Note that my backup server is happy with Samsung Pro or even EVO - but it's also not under the same complex mixed in-use demands, and if an SSD dies I don't care, as it's unlikely my main server will die as well in the 36 hours it'll take to fit a new cheap SSD and re-replicate.

  2. Cost effectiveness - is it better simply to buy HDDs and not use dedup?

     No. Flat out impractical, for many reasons: cost, feasibility, and plain jawdropping disk count. Your estimate of the equivalent disk count is an entire order of magnitude out; see below. So dedup's a given here, unfortunately. The only question is the cheapest way to make it work fast, not whether to use it.

SSD QUALITY

I've said elsewhere that it may be that a good SSD is enough. I'm running a replication to a test pool with Samsung Pros instead, and it's not showing any signs of stalling.

What I don't know is what's actually needed to be *sure* it'll keep up in actual use, where there might be heavy reads and writes in parallel (a 2TB file transfer against a background scrub or a second big read?). I'm not a tech lab or a ZFS dev. I just like it to work well, and to be sure what's in there isn't a problem under load. Would the usual enthusiast SSDs be okay? They might well be fine. Nobody's actually got data either way, so when I bought, I played safe. There's no data to say anything else.

Overkill or not is unknown at this point. The advantages are clear, but are they advantages a pool *needs*? Enough to be worth the extra price tag? I wish I could say; I don't have a clue yet. I'm not even sure how to test it, except by time and circumstance, because I want it to work for my workload, and if there are tests of Optane vs. enthusiast SSDs for workloads like mine (deduped, fast, nearline, mirrored ZFS with periodic parallel use), I'm not sure what they are or how to construct them.


COST EFFECTIVENESS - DEDUP+SSD+RAM VS MORE HDD?

Your estimate is really badly out. Like, massively - an entire order of magnitude. Because (to use your figure) you aren't just adding 60TB of storage space. You're adding 60TB of *redundant* *used* space on a *single server*.

Let's look at that. To add 1TB of usable pool space with 3-way mirrors (the only sensible choice if one is avoiding RaidZ parity calcs and other RaidZ disadvantages) takes 3TB of raw space. But ZFS likes to run at most 50-60% full (it slows down past about 60-70%; there are stats on that, even with big pools), so really you need to add 5-6TB of raw space to get that 1TB of extra capacity. Now double it, because you take backups too, and of course the backup server is similarly redundant. So there's a 10-12x multiplier going on: add 1TB of actual pool capacity in use = add 10-12TB of raw disk space. Plus HBA capacity, power use, and of course HDDs have an ongoing replacement cycle too.

Your "just add 60TB and don't dedup" has just added about 600-700TB of raw storage, a ton of support hardware (PSUs/HBAs/backplanes) - and a commitment to buy that much HDD space again every 5 years or so, when the HDDs wear out and the warranty has expired. Even that frankly jawdropping figure ignores the fact that new data dedups against old, which is probably another 30-50% reduction.

I posted the real-world calculation for my own pool in another thread a while back.

Dedup's use cases are *extreme* data size reduction. In my case, for example, the maths goes like this (roughly):
  • 40TB of data now; say 80TB in a while. ZFS likes to run with quite a lot of free space, so that 80TB should still ideally only be about 60% full: about 133TB of raw pool capacity. I like to run 3-way mirrors, so that's about 400TB of raw disk space. Double it because of the backup server: 800TB of raw capacity in theory between them. 0.8 PB for a home server and backup? Ridiculous. Using, say, enterprise 8TB disks, that's 100 disks at £200 each. And connectors/backplanes. And power costs.
  • Enable dedup? Now it's 13TB, a third of the size. But my future writes will also dedup more (they're more likely to already have copies in the existing 40TB), so my deduped size won't double in 5 years. It might go up by 50%, say to 18TB deduped. At 60% full and 3-way mirrors, that's 90TB, or 11 disks - at most 22 disks including a 2nd backup server.
  • This is when and why one uses dedup. Not to just shave off a few tens of percent. When storage cost and scale is actually prohibitive otherwise.
  • Other use cases are limited disk size, or limited bandwidth (historically one could cut data sent by a huge amount if replicating or backing up/restoring)
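A quick sanity check of those figures, as a shell sketch (integer arithmetic, deliberately rough):

```shell
# Without dedup: 40TB doubling to 80TB, kept ~60% full,
# 3-way mirrors, times 2 for the backup server
data_now=40; growth=2
raw_no_dedup=$(( data_now * growth * 10 / 6 * 3 * 2 ))
echo "without dedup: ~${raw_no_dedup} TB raw"        # ~800 TB

# With dedup: ~13TB now, say 18TB after growth, same layout per server
dedup_later=18
raw_dedup=$(( dedup_later * 10 / 6 * 3 ))
echo "with dedup: ~${raw_dedup} TB raw per server"   # 90 TB
```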
The point is, this is inherent in the data size, the choice of mirroring with 2-failure tolerance in a set as protection, and the fact that disks die/fail, so redundancy is needed across 2 servers in order to never have to worry about it. Short of switching to RaidZ (a Bad Idea for performance and resilver speed), there's little one can do to avoid a lot of disks for that level of safety across 2 servers.

Running dedup is the only way I know to make it practical, and still get 250-500 MB/sec to the server when I'm moving 1TB directories around. If it's not clear what quality of SSDs is needed, that's a secondary problem, not a primary one. But Optane also adds a redundant ultra-low-latency SLOG and guarantees the lowest latency on all metadata and all DDT. Given that they're pulling 1/4 *million* IOs each at times, I'd say it was a decent call compared to another 80-100 HDDs and their HBAs/PSUs.
 
Last edited:

Yorick

Dedicated Sage
Joined
Nov 4, 2018
Messages
1,700
That rabbit hole went deep :).

I think we are saying the same thing: Unless your needs are of the “3-way mirror on dual servers” variety, dedup is more trouble than someone bargained for.

For home use, raidz2 is completely reasonable. Backing it up to a second server might not happen: the only thing taking up that much space is the BD collection, and worst case, it'll have to be digitized again.

And home users, for better or worse, start looking at dedup, and then wonder why it doesn’t perform well for them.

Two raidz2 vdevs of 6 drives each are right around that upper limit of cost for dedup, and that’s 2x40 TiB of storage, to be used however one might see fit. More than even an ardent BD collector will use, likely.

Point taken that an 860 evo will work. Which changes the math. And: We are back to “3-way mirrors because raidz doesn’t perform”. Someone in that situation hopefully knows why they’re de-duping, and more power to them.

Your garden variety home TrueNAS Core user is using a single raidz2 vdev over Gig Ethernet. Don’t grow that storage with dedup. Just grow that storage, is what I am saying: I think we are saying the same thing, there.
 

Stilez

Senior Member
Joined
Apr 8, 2016
Messages
465
That rabbit hole went deep :).

I think we are saying the same thing: Unless your needs are of the “3-way mirror on dual servers” variety, dedup is more trouble than someone bargained for.

For home use, raidz-2 is completely reasonable. Backing it up to a second server might not happen: The only thing taking up that much space is the BD collection, and in the case of a case, it’ll have to be digitized again.

And home users, for better or worse, start looking at dedup, and then wonder why it doesn’t perform well for them.

Two raidz2 vdevs of 6 drives each are right around that upper limit of cost for dedup, and that’s 2x40 TiB of storage, to be used however one might see fit. More than even an ardent BD collector will use, likely.

Point taken that an 860 evo will work. Which changes the math. And: We are back to “3-way mirrors because raidz doesn’t perform”. Someone in that situation hopefully knows why they’re de-duping, and more power to them.

Your garden variety home TrueNAS Core user is using a single raidz2 vdev over Gig Ethernet. Don’t grow that storage with dedup. Just grow that storage, is what I am saying: I think we are saying the same thing, there.
Yeah, there's a wide range in home usage, and to take your examples, there's a huge difference between digitising a BD collection and some PC backups, using it to download or stream media in the home, and handling/moving around a 40TB collection of virtual machines at 500GB to 1TB each. More so if also using iSCSI to replace PCs' local disks with ZFS-snapshotted ones for data protection. At that point IO speed, and resilver speed, really is a factor, and mirror vs. RaidZ is a big deal even if it costs.

(A less-discussed mirror benefit: when rebuilding the pool, I can strip down in stages from 3-way to 2-way to 1-way, and add the freed disks to the copy. So a copy can be started as 1- or 2-way, and disks from the old pool added to the new pool to thicken up the mirrors in the background once initially replicated. I've saved a lot of disk purchases that way. I can also "borrow" a disk or 2 for a desktop PC emergency and resilver again after - 2-way is probably safe enough short term.)
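That thinning trick can be sketched like this (pool and device names are examples; triple-check which disk is which before detaching anything):

```shell
# Drop a 3-way mirror to 2-way, freeing one disk
zpool detach tank da2

# Attach the freed disk as an extra mirror side in the new pool
# (da10 here is an existing member of the target vdev)
zpool attach newtank da10 da2

# Watch the resilver until it completes
zpool status newtank
```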

Like you say, every user should consider their own use case. Not so easy when so many aren't that familiar with the "ins and outs" of it all.
 
Last edited:

ornias

Senior Member
Joined
Mar 6, 2020
Messages
473
Dedup seems like a really expensive idea. Two times 480gb Optane is around usd 1200, plus memory, I’m assuming 256gb, that buys a lot of storage.

Not to say it can’t be worth it - but the space savings have to be above the roughly 60ish TiB you could add to your pool with that.
$1,200 is peanuts for corporate systems.

If you run a node with 24 mirrored SSDs serving lots of VMs, $1,200 to get deduplication might be very interesting.
 

diskdiddler

Dedicated Sage
Joined
Jul 9, 2014
Messages
2,169
How is the testing going? Have many of you guys come across any real killers?
I'm thinking about jumping on the next release.
 

ThreeDee

Senior Member
Joined
Jun 13, 2013
Messages
357
How is the testing going? Many of you guys come across any real killers?
I'm thinking about jumping on the next release.
I'm not doing anything complicated .. just some SMB shares and a couple jails .. but Beta2 has been rock solid for me
 