Announcing FreeNAS 9.2.1.3-RELEASE


jkh

Guest
Hey folks!

In our never(?)-ending quest to continue to add polish to the 9.2.1-BRANCH of FreeNAS, we are very pleased to announce 9.2.1.3-RELEASE! It's now up on http://download.freenas.org - come and get it!
This point release for 9.2.1 adds ZFS replication status and fixes various issues found in 9.2.1.2 in CIFS, AFP, FTP, serial console support, and other areas.
A list of all bugs fixed in 9.2.1.3-RELEASE can be found here.
High-level features for 9.2.1.3:
* Samba (SMB/CIFS support) upgraded to version 4.1.6
* Netatalk (AFP support) upgraded to version 3.1.1
* ZFS replication status is now provided in the ZFS Replication UI
* The bug preventing FTP from starting when logging to the system dataset has been fixed.
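
To sanity-check the upgraded services after updating, something along these lines works from a shell on the box. This is only a rough sketch in Python 2.7 (what ships on FreeNAS 9.2.x); it assumes smbd and afpd are in the PATH and that /etc/version is present, so adjust as needed:

Code:
# post_upgrade_check.py -- quick sanity check after updating (sketch, not an official tool)
import subprocess

def run(cmd):
    """Run a command and return its output, even if it exits non-zero."""
    try:
        return subprocess.check_output(cmd, stderr=subprocess.STDOUT).strip()
    except subprocess.CalledProcessError as exc:
        return exc.output.strip()
    except OSError as exc:
        return "not found: %s" % exc

print "Build:   ", open("/etc/version").read().strip()
print "Samba:   ", run(["smbd", "--version"])                      # should report 4.1.6
print "Netatalk:", (run(["afpd", "-V"]).splitlines() or ["?"])[0]  # should report 3.1.1; the flag may vary by build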
We've worked very hard to nail all sorts of issues in this series of 9.2.1.x point releases, hopefully without doing anything destabilizing at the same time, and are confident that we've managed to polish this branch to a pretty high gloss (which is what you want in a NAS!). We certainly could not have done so without all of your testing and feedback over the last couple of months, so thanks!

- The FreeNAS Engineering Team
 

bagnose

Cadet
Joined
Jan 16, 2014
Messages
4
Previous GUI upgrades have been fine, but this time (9.2.1.2 -> 9.2.1.3) I get:

Code:
Environment:

Software Version: FreeNAS-9.2.1.2-RELEASE-x64 (ce022f0)
Request Method: POST
Request URL: http://cube/system/firmwizard/?X-Progress-ID=f778c7c4-1305-442e-8ce5-f78e3b12aac7

Traceback:
File "/usr/local/lib/python2.7/site-packages/django/core/handlers/base.py" in get_response
  107. response = middleware_method(request, callback, callback_args, callback_kwargs)
File "/usr/local/www/freenasUI/../freenasUI/freeadmin/middleware.py" in process_view
  158. return login_required(view_func)(request, *view_args, **view_kwargs)
File "/usr/local/lib/python2.7/site-packages/django/contrib/auth/decorators.py" in _wrapped_view
  22. return view_func(request, *args, **kwargs)
File "/usr/local/lib/python2.7/site-packages/django/views/generic/base.py" in view
  69. return self.dispatch(request, *args, **kwargs)
File "/usr/local/lib/python2.7/site-packages/django/contrib/formtools/wizard/views.py" in dispatch
  236. response = super(WizardView, self).dispatch(request, *args, **kwargs)
File "/usr/local/lib/python2.7/site-packages/django/views/generic/base.py" in dispatch
  87. return handler(request, *args, **kwargs)
File "/usr/local/lib/python2.7/site-packages/django/contrib/formtools/wizard/views.py" in post
  291. self.storage.set_step_data(self.steps.current, self.process_step(form))
File "/usr/local/www/freenasUI/../freenasUI/system/forms.py" in process_step
  125. wizard=self)

Exception Type: TypeError at /system/firmwizard/
Exception Value: done() got an unexpected keyword argument 'form_list'


:(
 

jkh

Guest
Read the README file for 9.2.1.2-RELEASE very carefully. :) Yes, I do mean 9.2.1.2.
 

bagnose

Cadet
Joined
Jan 16, 2014
Messages
4
Thanks Jordan. I fixed it. Appreciate all the hard work and fabulous support!
 

ajohnson

Dabbler
Joined
Feb 25, 2013
Messages
18
Did 4600 get fixed? It's a showstopper and still listed as screened, which is why I ask. Thanks.
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,994
Thanks to the team. I will test this one out this weekend unless 4600 isn't resolved. Is there really a 9.2.1.4-Beta coming out to address this issue? I don't use replication, so it shouldn't impact me; however, I don't want to do two upgrades in a row. I like to do one and evaluate it over a period of time.
 

ser_rhaegar

Patron
Joined
Feb 2, 2014
Messages
358
Thanks guys
 

Sir.Robin

Guru
Joined
Apr 14, 2012
Messages
554
ZFS Replication Status in GUI??? Sweet!! Now I want a ZFS Replication trigger button as well :p

Thank you all! :)
 

ser_rhaegar

Patron
Joined
Feb 2, 2014
Messages
358
ZFS Replication Status in GUI??? Sweet!! :p
This feature alone is making me jump to the new release immediately. Guinea pig here I come.
 

jkh

Guest
Yes, yes it was, though it's still not clear what the impact of that scenario is (this is when "ssh dies" during the replication). We'll look into that further for 9.2.1.4, but in the meantime, I have replications working well in 9.2.1.3 and would deem it "safe to use" (unless your link between replicated machines is somehow really bad, perhaps - not clear what's triggering the problem for william at all).
 

ser_rhaegar

Patron
Joined
Feb 2, 2014
Messages
358
Yes, yes it was, though it's still not clear what the impact of that scenario is (this is when "ssh dies" during the replication). We'll look into that further for 9.2.1.4, but in the meantime, I have replications working well in 9.2.1.3 and would deem it "safe to use" (unless your link between replicated machines is somehow really bad, perhaps - not clear what's triggering the problem for william at all).
Replication is broken for me as well. 1GbE link between boxes. Bummer.
 

jkh

Guest
Replication is broken for me as well. 1GbE link between boxes. Bummer.

More details please! Broken how? What are the symptoms? What are you doing? I can replicate between these two 9.2.1.3 machines using a 1GbE link all day long now. What's different with your setup?

Please folks, don't just say "it's broken!" since that's like going to your mechanic and saying "my car doesn't work" ("doesn't work HOW?" "just doesn't work!" "What are you trying to do!?" "Drive my car!" "And??" "It doesn't work!"); those exchanges are just frustrating for everyone. :)
 

ser_rhaegar

Patron
Joined
Feb 2, 2014
Messages
358
Apologies... you're quite right. Here are the details.

I set up a new replication between two 9.2.1.3 boxes. PULL was upgraded from 9.2. PUSH was upgraded from 9.2.1.2.

Snapshots are set up on pool "alpha" as recursive, every 2 hours.
Replication is set up to replicate from "alpha" on PUSH to "beta" on PULL.

Both pools are encrypted. After upgrading PUSH to 9.2.1.3, I unlocked the pool "alpha". The PULL box was already booted and unlocked. The replication immediately failed, though the log implies replication was attempted before the unlock had finished.

Log:
Code:
Mar 21 10:35:35 ironthrone kernel: GEOM_ELI: Device gptid/98aaaec1-ae02-11e3-8d8f-005056b01bec.eli created.
Mar 21 10:35:35 ironthrone kernel: GEOM_ELI: Encryption: AES-XTS 128
Mar 21 10:35:35 ironthrone kernel: GEOM_ELI:    Crypto: hardware
Mar 21 10:35:44 ironthrone kernel: GEOM_ELI: Device gptid/994bae4c-ae02-11e3-8d8f-005056b01bec.eli created.
Mar 21 10:35:44 ironthrone kernel: GEOM_ELI: Encryption: AES-XTS 128
Mar 21 10:35:44 ironthrone kernel: GEOM_ELI:    Crypto: hardware
Mar 21 10:35:54 ironthrone kernel: GEOM_ELI: Device gptid/9a917149-ae02-11e3-8d8f-005056b01bec.eli created.
Mar 21 10:35:54 ironthrone kernel: GEOM_ELI: Encryption: AES-XTS 128
Mar 21 10:35:54 ironthrone kernel: GEOM_ELI:    Crypto: hardware
Mar 21 10:36:01 ironthrone autosnap.py: [tools.autosnap:58] Popen()ing: /sbin/zfs snapshot -r -o freenas:state=NEW alpha@auto-20140321.1036-2m
Mar 21 10:36:01 ironthrone autosnap.py: [tools.autosnap:234] Failed to create snapshot 'alpha@auto-20140321.1036-2m': cannot open 'alpha': dataset does not exist usage:    snapshot|snap [-r] [-o property=value] ... <filesystem|volume>@<snap> ...  For the property list, run: zfs set|get  For the delegated permission list, run: zfs allow|unallow
Mar 21 10:36:02 ironthrone kernel: GEOM_ELI: Device gptid/9c39455f-ae02-11e3-8d8f-005056b01bec.eli created.
Mar 21 10:36:02 ironthrone kernel: GEOM_ELI: Encryption: AES-XTS 128
Mar 21 10:36:02 ironthrone kernel: GEOM_ELI:    Crypto: hardware
Mar 21 10:36:02 ironthrone autorepl.py: [tools.autorepl:195] Could not determine last available snapshot for dataset alpha: cannot open 'alpha': dataset does not exist
Mar 21 10:36:11 ironthrone kernel: GEOM_ELI: Device gptid/9dc8ee6f-ae02-11e3-8d8f-005056b01bec.eli created.
Mar 21 10:36:11 ironthrone kernel: GEOM_ELI: Encryption: AES-XTS 128
Mar 21 10:36:11 ironthrone kernel: GEOM_ELI:    Crypto: hardware
Mar 21 10:36:20 ironthrone kernel: GEOM_ELI: Device gptid/9f5da50b-ae02-11e3-8d8f-005056b01bec.eli created.
Mar 21 10:36:20 ironthrone kernel: GEOM_ELI: Encryption: AES-XTS 128
Mar 21 10:36:20 ironthrone kernel: GEOM_ELI:    Crypto: hardware
Mar 21 10:36:34 ironthrone notifier: Stopping collectd.
Mar 21 10:36:35 ironthrone notifier: Waiting for PIDS: 3059.
Mar 21 10:36:35 ironthrone notifier: Starting collectd.


Snapshots have run since the unlock, but no replication attempts have been logged. I disabled and then re-enabled replication to try to get it to attempt again, but nothing shows up in the log as a replication attempt.

The reporting graphs do show PUSH TXing small amounts of data over the network and PULL RXing small amounts, but nothing explains what this traffic is. It has to be related to replication, because if I turn replication off, the network activity dies. The TX and RX on the graphs happen about once a minute and then stop; it looks like picket fencing across the graph.

This is the same setup I had when both boxes were on 9.2.0, and replication worked then.

This is the web UI status:
Code:
CRITICAL: Replication alpha -> 172.21.14.7 failed: None
OK: The volume lab01 (ZFS) status is HEALTHY
OK: The volume alpha (ZFS) status is HEALTHY
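
For what it's worth, the "dataset does not exist" lines in the log above fit the pool still being locked (the GELI providers were still attaching) when the snapshot task fired. Roughly the kind of guard involved, as a sketch only; this is not the actual autosnap.py code, and the dataset and snapshot names are just the ones from this post:

Code:
# snapshot_guard.py -- skip the periodic snapshot while the pool is still locked (sketch, Python 2.7)
import subprocess

def dataset_exists(name):
    """True if zfs can see the dataset, i.e. the pool is imported and unlocked."""
    with open("/dev/null", "w") as devnull:
        return subprocess.call(["zfs", "list", "-H", "-o", "name", name],
                               stdout=devnull, stderr=devnull) == 0

def take_snapshot(dataset, snapname):
    """Recursive snapshot, matching the zfs invocation shown in the log."""
    subprocess.check_call(["zfs", "snapshot", "-r", "-o", "freenas:state=NEW",
                           "%s@%s" % (dataset, snapname)])

if dataset_exists("alpha"):
    take_snapshot("alpha", "auto-manual")   # snapshot name is illustrative only
else:
    print "pool 'alpha' not available yet (still locked?); skipping this run"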
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
More details please! Broken how? What are the symptoms? What are you doing? I can replicate between these two 9.2.1.3 machines using a 1GbE link all day long now. What's different with your setup?

Please folks, don't just say "it's broken!" since that's like going to your mechanic and saying "my car doesn't work" ("doesn't work HOW?" "just doesn't work!" "What are you trying to do!?" "Drive my car!" "And??" "It doesn't work!"); those exchanges are just frustrating for everyone. :)

Hey... welcome to what I see day in and day out!
 

ser_rhaegar

Patron
Joined
Feb 2, 2014
Messages
358
OK, I cloned the source and pulled up the replication script to look into the issue further. I found that replication creates a log in /tmp/, so I checked that folder and found hundreds of logs, about one per minute, which lines up with the network activity. I checked their contents and found the answer:

(from PUSH)
Code:
cannot unmount '/mnt/beta/.system/syslog': Device busy


OK, easy enough. I created a new dataset under beta on PULL ('beta/alpha'), then updated the replication on PUSH to use this dataset so it can wipe it out and do whatever it needs without worrying about the .system dataset in the root of beta on PULL.
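
For anyone following along, the fix boils down to giving replication its own target dataset instead of the root of beta, so the receive never has to unmount beta/.system. On PULL that amounts to roughly the following (a sketch only; the dataset name is the one used above, and the replication task on PUSH is then repointed at it in the UI):

Code:
# make_repl_target.py -- create a dedicated child dataset on PULL for replication (sketch, Python 2.7)
import subprocess

TARGET = "beta/alpha"   # child of the existing pool "beta"

with open("/dev/null", "w") as devnull:
    exists = subprocess.call(["zfs", "list", "-H", "-o", "name", TARGET],
                             stdout=devnull, stderr=devnull) == 0

if not exists:
    subprocess.check_call(["zfs", "create", TARGET])
    print "created %s -- now point the PUSH replication task at it" % TARGET
else:
    print "%s already exists" % TARGET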

Now network traffic is increasing steadily, only one new tmp log file has been created (still empty, 6 minutes later), and things look to be replicating.
 

jkh

Guest
Did 4600 get fixed? It's a showstopper and still listed as screened, which is why I ask. Thanks.


Just to be clear - YES, the show-stopper was fixed. That bug has morphed to cover an entirely different problem (bugs do that sometimes), which is non-fatal and only happens if your ssh connection dies; that's pretty rare unless your network is supremely flaky or people like to power off your NAS without warning. Even so, the next replication will pick up and update the status, so it's an entirely recoverable problem and will be fixed in 9.2.1.4 in any case.
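
To spell out why a dropped ssh session is recoverable: ZFS replication is incremental from the newest snapshot both sides already share, so the next run simply resends from that point. A rough sketch of that underlying send/receive pattern (this is not the autorepl.py implementation; it assumes password-less ssh to the PULL box and reuses the dataset names from earlier in this thread):

Code:
# resume_repl.py -- resend incrementally from the newest common snapshot (sketch, Python 2.7)
import subprocess

PUSH_DS, PULL_HOST, PULL_DS = "alpha", "172.21.14.7", "beta/alpha"

def snapshots(cmd):
    """Return snapshot names (the part after '@'), oldest first."""
    return [line.split("@", 1)[1]
            for line in subprocess.check_output(cmd).splitlines() if "@" in line]

local = snapshots(["zfs", "list", "-H", "-t", "snapshot", "-o", "name",
                   "-s", "creation", "-d", "1", PUSH_DS])
remote = set(snapshots(["ssh", PULL_HOST, "zfs", "list", "-H", "-t", "snapshot",
                        "-o", "name", "-s", "creation", "-d", "1", PULL_DS]))

common = [s for s in local if s in remote]
if common and common[-1] != local[-1]:
    base, latest = common[-1], local[-1]
    # Incremental send from the last common snapshot; if ssh died mid-transfer
    # last time, this same command simply runs again on the next pass.
    send = subprocess.Popen(["zfs", "send", "-i",
                             "%s@%s" % (PUSH_DS, base), "%s@%s" % (PUSH_DS, latest)],
                            stdout=subprocess.PIPE)
    subprocess.check_call(["ssh", PULL_HOST, "zfs", "receive", "-F", PULL_DS],
                          stdin=send.stdout)
    send.stdout.close()
    send.wait()
elif not common:
    print "no common snapshot; a full (non-incremental) send would be needed"
else:
    print "PULL already has the newest snapshot; nothing to send"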
 