9.3.1 -> 9.10 Success


fracai

Guru
Joined
Aug 22, 2012
Messages
1,212
I finally had the opportunity to make the jump to 9.10. tl;dr: it's great with minimal, insignificant issues

I did have to install twice; after the first install, the reboot had issues: it couldn't find libedit.so.7 and therefore couldn't actually run anything. It was also reporting "getty repeating too quickly". I'm assuming the install failed to install some files somehow, but I didn't take the time to investigate much.

I rebooted back into my last 9.3.1 environment, deleted the bad 9.10 env, and installed a second time. I didn't do anything noticeably different between the two attempts (I used 'freenas-update -T FreeNAS-9.10-STABLE update' for both), but the reboot was fine this time.

Well, I did see the "database locked" notice, and when I realized I couldn't add any new tunables (for bhyve), I rebooted and the database came back fine.

My jails are functioning normally. The only issue is that smartctl can't access the drives from the jail. Even when supplying the device type I get back:
Smartctl open device: /dev/ada0 [SAT] failed: Inappropriate ioctl for device
But the same command works fine in the new 9.10 jail that I created. That's right: I now have a variety of different jail environments, and without destroying my jail dataset.
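For reference, the invocation was roughly along these lines (the drive and the SAT device type are just what my system uses):
Code:
# run from inside the old jail; this is what returns the ioctl error above
smartctl -a -d sat /dev/ada0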

I think the trick is destroying the template dataset prior to creating the new jail. The template is created as a dataset and the jails are then cloned from it. Normally you can't delete an ancestor dataset, but you can promote the children and then delete it, since it's no longer a dependency; there's a rough sketch of the idea below. Maybe there will be some issue that I haven't encountered yet, but so far all the jails are performing just fine. I'd certainly be interested in hearing more about what the perceived issues were with the design decisions that supposedly led to being unable to create 9.10 jails without dumping the entire jail dataset.
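Roughly, assuming a pool named tank and a jail cloned from the standard warden template (names here are placeholders; the exact commands I ran are further down the thread):
Code:
# fails at first and reports the clones that still depend on the template
zfs destroy -r tank/jails/.warden-template-standard

# promote each dependent jail so the clone relationship is reversed
zfs promote tank/jails/myjail

# with no dependents left, the template (and its snapshots) can be destroyed
zfs destroy -r tank/jails/.warden-template-standard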

I upgraded all installed packages prior to upgrading to 9.10 so I'm not sure if there's an issue there yet, but it seems others have resolved those issues. I haven't seen any issues so far with "pkg upgrade", but I don't have any outdated packages either.

I'm also seeing the issue where cron runs on UTC time, but that's not a huge problem, and I think I've seen reports that rebooting may resolve it as well.

Oddly enough, I also haven't had an issue with CrashPlan. It seems like the Linux compatibility requirement was properly detected and the kernel module loaded correctly. I plan on installing it in a bhyve VM as soon as possible regardless.

Oh, and while I haven't had a chance to experiment much yet, iohyve has already created the /iohyve symlink for me.

And I swear, the system is using 5% less CPU overall.

All in all I'm really liking this release.
Cheers!
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
I'm having fatal problems with 9.10. My main system is still on 9.3.1. Talked to a dev and we're "hoping" the issue I'm having is fixed in the next FreeNAS release. So here I am, waiting patiently for the next FreeNAS release. :)

Please share how you cleaned up your templates though. I believe I "need" to do this to resolve one of my problems with the 9.10 build. :)
 

fracai

Guru
Joined
Aug 22, 2012
Messages
1,212
The sequence I remember basically started with "zfs destroy <template>". That reported the datasets that were dependent on the template. I then ran "zfs promote" on those and tried "destroy" again. I can check my "zpool history" output this evening. There may have been some stray warden files or directories that I removed as well, but I think those were limited to old jails that had already been destroyed.

It also turns out I misspoke regarding the cron timezone issue. I must have misread the timestamp, as everything is being run at the correct times. One more issue down!
 
Joined
Apr 9, 2015
Messages
1,258
The last update started giving me "repeating too quickly" errors; I rolled back to the 4-18 update and everything was fine. This may have been due to setting an IPv6 address for the GUI, but after rolling back and updating again the same problem came back, so I'm not sure. I noticed the other day that my jails installed under 9.3 are actually showing 10.3 versioning now, so at some point the jails may have upgraded. I'm guessing, however, that running "uname -mrs" inside a jail should give the jail's version rather than the overall system version.

The initial upgrade has been OK so far. I waited until a couple of versions were out before making the leap, but I'm not running anything extravagant.
 

fracai

Guru
Joined
Aug 22, 2012
Messages
1,212
I don't think there's a way to upgrade a jail; I think your assumption about "uname -mrs" is incorrect. I think 9.3 jails should have a version of uname that supports the "uname -U" flag for reporting the userland version. All my jails are from older templates, yet uname reports 10.3 even in those old jails, so it seems to follow the host rather than the jail. I'm not sure if there's another reliable way to tell which template a jail came from.
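Something like this from inside a jail should make the difference clear (output values are just illustrative):
Code:
# the kernel version follows the host, so this shows 10.3 even in an old jail
uname -mrs

# the userland version reflects what the jail was actually built from
uname -U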
 
Joined
Apr 9, 2015
Messages
1,258
You are correct; "uname -U" shows 903000 in the jail, and the main system shows 1003000.
 

fracai

Guru
Joined
Aug 22, 2012
Messages
1,212
Here are the results of my "zpool history". First of all, I thought this was a perpetual log, but it looks like it is limited to some degree. I see the first command that created my pool, but then it jumps to earlier this year. I'm automating snapshot destruction, so there are a ton of those entries and I suppose they're pushing out the others.

Anyway:
Code:
2016-04-12.21:30:00 zfs destroy tardis/jails/.warden-template-standard
2016-04-12.21:30:26 zfs destroy -r tardis/jails/.warden-template-standard
2016-04-12.21:30:53 zfs promote tardis/jails/image-server-93
2016-04-12.21:30:59 zfs destroy -r tardis/jails/.warden-template-standard
2016-04-12.21:31:30 zfs promote tardis/jails/tardis-vm
2016-04-12.21:31:35 zfs destroy -r tardis/jails/.warden-template-VirtualBox-4.3.12

So you can see where I tried to destroy the template (from memory, those attempts failed and reported the dependent clones), then used "-r" to get rid of the snapshots (be careful not to use "-R", which would destroy the clones as well), then promoted the dependent clone(s), then destroyed the dataset again. And then I did the same with the VirtualBox template and jail.
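To spell out the flag difference (dataset name taken from my history above):
Code:
# -r recursively destroys the template's own snapshots, which is what we want
zfs destroy -r tardis/jails/.warden-template-standard

# -R would also destroy every dependent clone, i.e. the jails themselves -- don't use it here
# zfs destroy -R tardis/jails/.warden-template-standard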

I also just upgraded some packages in the new and older jails without issue.
 

Jailer

Not strong, but bad
Joined
Sep 12, 2014
Messages
4,977
So if I'm understanding this correctly, you would have to individually promote each jail that you created from the 9.3 template, using "zfs promote [jail dataset path]", before you destroy the 9.3 jail template, since each jail is a clone of the template, correct?
 

fracai

Guru
Joined
Aug 22, 2012
Messages
1,212
Hmm, maybe. It's possible that promoting just one of the jails could leave the template as a leaf. Certainly promoting all of the jails should remove any dependencies on the templates.
 

Jailer

Not strong, but bad
Joined
Sep 12, 2014
Messages
4,977
So you only had to promote one jail to be able to remove the template?
 

fracai

Guru
Joined
Aug 22, 2012
Messages
1,212
I don't remember, to be honest. I know I've promoted multiple jails, but I've also deleted multiple templates. I'm just saying it's possible that promoting a single dataset could restructure the dependencies enough that the template becomes a leaf. If you want to check on your own system, something like the sketch below should show what still depends on the template.
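A quick way to check, assuming your jails live under tank/jails (names are placeholders):
Code:
# 'origin' shows which snapshot each dataset was cloned from;
# once nothing lists the template's snapshot as its origin, the template can be destroyed
zfs list -r -o name,origin tank/jails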
 