12.0-BETA2 now available

velocity08

Dabbler
Joined
Nov 29, 2019
Messages
33
I don't know if this is the issue, but if everything else is OK it's *possible*.

One of the bugs in BETA-1 (fixed in BETA-2) caused the middleware front end and back end to drop connections frequently. That meant commands the web UI issued to the server backend didn't always complete properly, and responses from the backend that should have updated the web UI sometimes never arrived. One side effect was that checking for updates, and executing them, was affected - because that also involves the front and back ends communicating.

The web UI didn't reliably communicate the request, or didn't get back the response that an update was available. It failed many times for me.

Check /var/log/messages and /var/log/middleware.log for entries that suggest connections are dropping or failing to establish fully.
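A quick way to sift those logs is to grep for connection-failure keywords. The log lines below are made up for illustration - the exact middleware messages vary by build - so this sketch runs the search against a sample file; on a live system you would point the grep at /var/log/middleware.log and /var/log/messages instead.

```shell
# Sample log snippet (hypothetical messages, for illustration only):
cat > /tmp/middleware.sample.log <<'EOF'
Aug 17 18:40:01 middlewared[1234]: websocket connection closed unexpectedly
Aug 17 18:40:05 middlewared[1234]: Failed to connect to backend
Aug 17 18:41:12 middlewared[1234]: job update.check completed
EOF

# Case-insensitive search for common signs of dropped/failed connections.
# On a live system, replace the sample path with /var/log/middleware.log.
grep -Ei 'disconnect|closed|failed to connect|timeout' /tmp/middleware.sample.log
```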

If that's it, the two choices are: keep trying (it worked eventually for me!), or back up your config, do a clean install, and then import your config again.

I just used the ISO method, booting via USB; it was a quick and painless exercise.

Unfortunately, even after the upgrade to BETA2, the update function in the GUI and the option to change trains are non-existent.

See the screenshot from BETA2 below.

Would this be considered a bug? Should I log a bug report?

Cheers
G

Screenshot from 2020-08-17 18-57-48.png
 

Stilez

Guru
Joined
Apr 8, 2016
Messages
529
Great to hear it's working so well!
A quarter-million IOPS is a lot. Are you using Optane SSDs as special vdevs?
What workload are you testing your pool with - protocol, read/write mix, I/O size?
Thanks! There are teething issues with OpenZFS 2.0 and the betas. So far I've filed 2 kernel panic issues and a good couple of bugs that left me thinking "how have these not been noticed until now?!" compared to 11.3's FreeBSD/Illumos ZFS. But considering it's an entirely new port, and OpenZFS 2.0 itself is still being polished around the edges, I'm hopeful these are just teething issues and not a sign that ZoL had weaker quality-control standards than the original FreeBSD ZFS. And of course betas should be expected to have bugs! Either way, no regrets about migrating my pool to 12 early; the software will surely be fixed up over time. Even with these issues it's still such a net gain that I wouldn't go back.

Yes, the special vdev is indeed Optane: mirrored 905P 480 GB drives, because of their near-guaranteed ideal handling of mixed loads and sync loads (ultra-low latency even under mixed read/write). No other SSD comes close with that assurance. I think the 0.25M IOPS workload was during the local migration of my old 40 TB pool to the new devices via zfs send -R | zfs recv. That took 8 days on 11.2/11.3 last year; 18 hours on 12. But it's been awesome seeing it "just cope" with scrubs, resilvers, and local + SMB bulk file moves. Those are my main protocols. So far I'm just tidying up from migrating, so nothing more advanced. With the pool freshly replicated, I/O sizes and mixes are pretty much 4k for DDT + spacemaps + metadata, and 128k for file contents and everything else. Read/write mixes vary depending on what I'm doing in parallel.
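The migration described above - replicating a whole pool locally with a recursive send - follows a standard ZFS pattern. Pool and snapshot names below are hypothetical, and since send/recv of a real pool is a high-stakes operation, this sketch only prints the commands rather than running them:

```shell
# Hypothetical names; adjust to your pools before actually running anything.
SRC=oldpool
DST=newpool
SNAP=migrate-$(date +%Y%m%d)

# -R sends the full recursive tree: child datasets, snapshots, and properties.
# -F on the receive side rolls the target back to match; -u avoids auto-mounting.
# Printed rather than executed, because recv -F is destructive on the target.
echo "zfs snapshot -r ${SRC}@${SNAP}"
echo "zfs send -R ${SRC}@${SNAP} | zfs recv -Fu ${DST}"
```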

Special vdev + sequential scrub/resilver is incredible compared to 11.3. Like, 1-1.5 days down to hours.

Large-scale snapshot destroy -r (~15k snaps, run as a script of thousands of destroy -r NAME commands): several per second, rather than on the order of 5-30 seconds each.
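A script like the one described - thousands of destroy commands generated from a snapshot list - can be sketched as below. The pool and snapshot names are made up; on a live system the list would come from `zfs list -H -t snapshot -o name`, and you would review the generated file before piping it to sh:

```shell
# Sample snapshot list (hypothetical names, standing in for `zfs list` output):
printf '%s\n' \
  'tank/data@auto-2019-01-01' \
  'tank/data@auto-2019-01-02' \
  'tank/media@auto-2019-01-01' > /tmp/snaps.txt

# Emit one recursive destroy per snapshot into a reviewable script.
awk '{ print "zfs destroy -r " $1 }' /tmp/snaps.txt > /tmp/destroy.sh

# Inspect before executing - destroy -r is irreversible.
cat /tmp/destroy.sh
```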

Also, I should add a new data point from last night's work: deleting large directories from Windows via SMB on a deduped pool, I'm now getting 600-1000 files/second deleted (!!!) versus stalling ("calculating...") behaviour on 11.3.

Seriously, benchmark that stuff. It's amazing when one's been used to dedup even on good mirrored SAS3 HDDs on 11.3.

Sorry if this sounds bubbly and fanboi-ish, but like I said, if you'd been where I've been, trying to get decent dedup previously, you'd be the same.....
 
Last edited:

morganL

Captain Morgan
Administrator
Moderator
iXsystems
Joined
Mar 10, 2018
Messages
2,691
I just used the ISO method, booting via USB; it was a quick and painless exercise.

Unfortunately, even after the upgrade to BETA2, the update function in the GUI and the option to change trains are non-existent.

See the screenshot from BETA2 below.

Would this be considered a bug? Should I log a bug report?

Cheers
G

View attachment 40856

There are no updates past BETA2... so this would be expected behaviour.
 

morganL

Captain Morgan
Administrator
Moderator
iXsystems
Joined
Mar 10, 2018
Messages
2,691
Yes, the special vdev is indeed Optane: mirrored 905P 480 GB drives, because of their near-guaranteed ideal handling of mixed loads and sync loads (ultra-low latency even under mixed read/write). No other SSD comes close with that assurance. I think the 0.25M IOPS workload was during the local migration of my old 40 TB pool to the new devices via zfs send -R | zfs recv. That took 8 days on 11.2/11.3 last year; 18 hours on 12. But it's been awesome seeing it "just cope" with scrubs, resilvers, and local + SMB bulk file moves. Those are my main protocols. So far I'm just tidying up from migrating, so nothing more advanced. With the pool freshly replicated, I/O sizes and mixes are pretty much 4k for DDT + spacemaps + metadata, and 128k for file contents and everything else. Read/write mixes vary depending on what I'm doing in parallel.

Special vdev + sequential scrub/resilver is incredible compared to 11.3. Like, 1-1.5 days down to hours.

Large-scale snapshot destroy -r (~15k snaps, run as a script of thousands of destroy -r NAME commands): several per second, rather than on the order of 5-30 seconds each.

Also, I should add a new data point from last night's work: deleting large directories from Windows via SMB on a deduped pool, I'm now getting 600-1000 files/second deleted (!!!) versus stalling ("calculating...") behaviour on 11.3.

Thanks for the data. Dedupe is complex to benchmark since the performance varies based on dedupe ratios, but these results are promising. Looks like 10X faster is a reasonable estimate.

If you do see more kernel panics, please provide any context about what the system was doing at the time.
 

Stilez

Guru
Joined
Apr 8, 2016
Messages
529
Thanks for the data. Dedupe is complex to benchmark since the performance varies based on dedupe ratios, but these results are promising. Looks like 10X faster is a reasonable estimate.

If you do see more kernel panics, please provide any context about what the system was doing at the time.
Dedup ratio is ~3.1: 13 TB of actual allocated data from 40 TB of original data.
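Those two sizes are consistent with the quoted ratio: logical (original) data divided by allocated (deduped) data. A one-liner sanity check, using the figures from this thread (on a live pool, `zpool get dedupratio <poolname>` reports the ratio directly):

```shell
# 40 TB referenced / 13 TB allocated ~= 3.08, matching the ~3.1 figure above.
awk 'BEGIN { printf "%.2f\n", 40 / 13 }'
```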

Also, 10x understates it. Seriously, this is a lot more than 10x across the board. Describing it as 50x (or, if being cautious, "10x to 50x") would be fairer - for my pool, at least. Not everyone will see that gain, I guess, but moving a pool that's deduped for good reason, with appropriate hardware and good HDDs, to 12.0 with decent special vdevs should get a ton more than 10x. At the very least, the dropouts and latency issues should resolve, and that alone is worth quite a few multiples.
 
Last edited:
Joined
Jan 4, 2014
Messages
1,644
I had TrueNAS 12-BETA1 set up on a test server and recently updated to 12-BETA2. The first thing I noticed was sluggishness on the test server. I also noticed that TrueCommand was reporting consistently high CPU use. I switched back to 11.3-U4.1, back up to 12-BETA2, and then down to 12-BETA1. The results of these actions are reflected in the TrueCommand graph below.

screenshot.502.png


What this reveals is that idle CPU use increases slightly when switching from 11.3-U4.1 to 12-BETA1, but then there's a massive jump in CPU use going from 12-BETA1 to 12-BETA2. Note that I started with a fresh install of 11.3-U4.1. There are no jails running; it's a bare-bones system.
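To narrow down which process is behind the extra idle load (rather than relying on the TrueCommand graph alone), a quick portable check is to sort the process table by CPU share. This is a generic sketch, not a TrueNAS-specific tool; on FreeBSD-based builds `top -S` (showing system processes) gives a live view of the same thing:

```shell
# List the five busiest processes by %CPU (column 3 of `ps aux` output).
# Header row sorts out of the way since it has no numeric %CPU value.
ps aux | sort -rnk3 | head -5
```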
 
Last edited: