Looking for dedicated, slightly masochistic BETA testers!


jkh

Guest
Hi folks,

So, the first “blessed” nightly for testing (though most of them have been pretty good) is the 20140513 build (build hash: 4014d52). We still don’t have an official test plan ready for everyone yet (it’s hard work and still in progress), but for those of you who wish to simply do ad-hoc testing on it, we’d really appreciate your impressions and opinions!

As folks can see from the MANIFEST file, the git repos/hashes which correspond to this build are:

git@gitserver.ixsystems.com:/git/repos/freenas-build/freenas.git 4014d52023f395c7925711a46a51449e6b36d21a
git@gitserver:/git/repos/freenas-build/trueos.git 93134bf3014b8dfa3cdf27d649c55599beeef504
git@gitserver:/git/repos/freenas-build/ports.git ba29abffb2b9cfa40af44cddcb8ea70432ac12e3

What this translates to on github, for those following along in the external repo, is:

https://github.com/freenas/freenas.git 4014d52023f395c7925711a46a51449e6b36d21a
https://github.com/trueos/trueos.git 93134bf3014b8dfa3cdf27d649c55599beeef504
https://github.com/freenas/ports.git ba29abffb2b9cfa40af44cddcb8ea70432ac12e3

A fairly straightforward mapping, in other words.

Now, for those of you who don’t really speak code or git log, don’t worry - that’s not a prerequisite for membership / participation in this list! It’s simply one good way of finding out everything that has changed between the 9.2.1.5-RELEASE tag and the tip of 9.2.1-BRANCH, which is what the nightly 9.2.1.6-BETA builds are rolled from!
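
For those who do want to poke at the history themselves, here’s a minimal sketch against the github mirror (the tag and branch names below are as described above; adjust if they differ in your clone):

  git clone https://github.com/freenas/freenas.git
  cd freenas
  # everything that landed between the release tag and the branch tip
  git log --oneline 9.2.1.5-RELEASE..9.2.1-BRANCH
  # or inspect the exact commit this nightly was rolled from
  git show 4014d52023f395c7925711a46a51449e6b36d21a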

The most salient points to pay attention to with this build are:
- Samba updated to 4.1.7
- More control over the system dataset (especially now that RRD information can be stored there). See Settings->System Dataset screen
- Support for multithreaded iSCSI (see the quick sketch after this list)
- Pool import times substantially improved
- CIFS service disable bug fixed
- Various AD/LDAP bugs fixed
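
For the curious, judging by the commit log appended below, the multithreaded iSCSI switch boils down to passing -m 2 to istgt via a new istgt_args rc knob. Roughly what the GUI checkbox ends up doing (the exact wiring here is my sketch, not gospel):

  # hypothetical sketch: the GUI manages this for you, don't set it by hand
  istgt_args="-m 2"        # in /etc/rc.conf: run istgt in multithreaded mode
  service istgt restart    # pick up the new flag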

Appended is a lightly formatted (and edited for length and content) list of git log commit messages cross-referenced to tickets addressed. Thanks for your testing!

Ability to rename warden templates added
Groups for samba
- use groupmap for classic mode
- add group to domain for domain controller mode
- import users in domain mode
Ticket: #4771
Ticket: #4181

Only save rrds to tar file when not using the rrd dataset
Catch up reporting module with rrd dataset
Ticket: #4698

Nuke the previous system dataset when migrating

Don't crash if netmask is not configured correctly
Ticket: #4265

If create isn't checked, don't create the directory
Ticket: #4958

The system dataset is all grown up now and has moved into its own house

Delete plugins if on volume or dataset that is being destroyed
Ticket: #4879

Custom interface status for LACP LAGG
Ticket: #4855

Handle case of not available CAP in zpool list
Ticket: #4968

Use simple text input for FTP passive ports
Ticket: #4956

Only move the rrd directory if it's archived

Use system dataset for rrd graphics
Ticket: #4634

Use -m 2 for istgt in case multithread is enabled
Ticket: #4935

Add a GUI field to enable istgt in multithreaded mode
Defaults to off.
Ticket: #4935

Fix sync_disk to get the enabled disk first
Ticket: #4430

Add FREENAS:STATE to snapshot info in freenas-debug output.
This can be useful in diagnosing snapshot replication problems.
Ticket: #4809

Nuke freenas and freenas.local from /etc/hosts
- This is causing issues with AD joining domains... gethostbyname() is
returning names from these 127.0.0.1 entries and setting the wrong
servicePrincipalName.

Set the From address according to what is in Settings -> email
Ticket: #4566

Remove "delete_child" from default acl for everyone
Ticket: #4910

Fix the plugin update button problem
Ticket: #4590

Remove trailing slash
Ticket: #4669

Enable the virtualbox networking.
Ticket: #4814

Allow re-configuration of a samba4 domain
Ticket: #4595

Yet another acl setting program, much faster than python version.

fix jails shutdown sequence
Ticket: #4851

Remaining deprecated usage of xml nodes
Ticket: #4869

Deprecated test of xml element
Ticket: #4868

Update django to 1.6.4

Add istgt_args to istgt rc script
Ticket: #4935

Update aria2 port
Ticket: #4348

Hookup with pam
Ticket: #4628
 

jkh

Guest
Hi folks,

Just to follow-up, there are also a couple of additional resources people can use to see what’s been fixed in 9.2.1.6-BETA. First, there’s the list of bugs already closed between 9.2.1.5 and 9.2.1.6-BETA:
https://bugs.freenas.org/projects/freenas/issues?query_id=78
If you have experienced any of the bugs on that list, verifying that they’re truly fixed is always very helpful to us!

Second, there’s the list of bugs that still need to be fixed before 9.2.1.6-RELEASE goes out:
https://bugs.freenas.org/projects/freenas/issues?query_id=59

If you run into a bug in 9.2.1.6-BETA that you feel to be significant (or, even more importantly, a show-stopper) please be sure to report it, also citing the date of the release you’re running (e.g. 9.2.1.6-BETA-20140514) so we can correlate it against our ongoing efforts.

Finally, if you have tested a BETA to any great degree and want to confirm this by sending an email to this list, or if you have any questions / concerns with things you’ve found during your testing, please by all means mail them to freenas-testing@lists.freenas.org. This is a very low-traffic list and I’m sure the subscribers would like to see more feedback about releases that other folks found good / bad / other.

Thanks!

- Jordan
 

reqlez

Explorer
Joined
Mar 15, 2014
Messages
84
Hi Jordan!

Just got a new ESXi lab server so I will definitely start testing. Hey, on another iSCSI note... what is the progress on the kernel iSCSI target? The last time I heard about it was months ago... or is that planned for when FreeNAS upgrades to the 10 branch?
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,525
Just a note reqlez, performance of iSCSI isn't going to increase drastically with kernel iSCSI. The problem isn't that iSCSI is slow. The problem is that ZFS has hefty needs if you want to do random IO with it.
 

jkh

Guest
Hey, on another iSCSI note... what is the progress on the kernel iSCSI target? The last time I heard about it was months ago... or is that planned for when FreeNAS upgrades to the 10 branch?

We've actually merged the kernel iSCSI code into the freenas master branch with the idea of making it an option in 9.2.2. The hardest part is simply reworking all the middleware pieces to write out kernel iSCSI configuration files instead of istgt configuration files, and also, of course, adding a switch to the GUI so that you can choose which one you want. The biggest advantage of ICT isn't performance, since iSCSI is actually pretty simple; it's the additional features. We'll see how it progresses in 9.2.2!
 

reqlez

Explorer
Joined
Mar 15, 2014
Messages
84
Hi guys... thanks for the reply. I guess I was confused regarding why kernel iSCSI was better :) But does it have anything to do with sync writes? I heard that iSCSI on FreeNAS currently does not do "proper" sync writes even if you set it on the datastore property. I guess I'm trying to choose whether I should go with NFS instead... I'll have to install some VMs and see how they perform, I guess. I was more concerned about stability versus performance, cyberjock.

But hey... there are a few things that need improvement, like the replication code, etc... but if I were the project manager I would focus on the samba fallout first lol

Keep up the good work, guys! Ever since I started using FreeNAS I became a BSD freak... now I don't want to install CentOS VMs anymore, only FreeBSD :P
 

jkh

Guest
Hi guys... thanks for the reply. I guess I was confused regarding why kernel iSCSI was better :) But does it have anything to do with sync writes? I heard that iSCSI on FreeNAS currently does not do "proper" sync writes even if you set it on the datastore property.

You heard wrong. Sync writes work fine with either kernel iSCSI or istgt. A lot of things would break if that didn't work!
 

reqlez

Explorer
Joined
Mar 15, 2014
Messages
84
You heard wrong. Sync writes work fine with either kernel iSCSI or istgt. A lot of things would break if that didn't work!

So I guess the myth going around the forums, that NFS does "safer" sync writes than iSCSI with sync enabled on the dataset, is busted?
 

jkh

Guest
So I guess the myth going around the forums, that NFS does "safer" sync writes than iSCSI with sync enabled on the dataset, is busted?


I don't know who started that myth, but it's not even vaguely supported by the evidence. Most VMware shops use iSCSI. It's less complicated (as a protocol) and lower overhead than NFS, and NFS file locking and cache flushing semantics are certainly more complicated since it's a file-level rather than a block-level protocol.
 

reqlez

Explorer
Joined
Mar 15, 2014
Messages
84
I don't know who started that myth, but it's not even vaguely supported by the evidence. Most VMware shops use iSCSI. It's less complicated (as a protocol) and lower overhead than NFS, and NFS file locking and cache flushing semantics are certainly more complicated since it's a file-level rather than a block-level protocol.

Interesting... well, since TrueNAS is VMware certified now... I guess you can't say that the BSD iSCSI target implementation is "unsafe" or anything... Maybe the myth is busted after all :)
 

jkh

Guest
Interesting... well, since TrueNAS is VMware certified now... I guess you can't say that the BSD iSCSI target implementation is "unsafe" or anything... Maybe the myth is busted after all :)

I can say from direct experience that passing the VMware certification for iSCSI involved testing it in configurations even we had never thought of. It's been severely beaten up now, and it passed with flying colors. The FreeNAS and TrueNAS iSCSI implementations are also identical.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,525
There is confusion... but I'll let jgreco explain.

@jgreco - Explain this shiz yo!
 

reqlez

Explorer
Joined
Mar 15, 2014
Messages
84
On another note, I got this in the VMware logs; after 4 seconds it reconnects... maybe a bad cable :P Sorry, not trying to hijack the thread...

"Lost access to volume
5356dd39-96da891c-1fbc-f8bc123b14f8
(iscsids01) due to connectivity issues. Recovery
attempt is in progress and outcome will be
reported shortly.
info
16/05/2014 1:42:39 PM
iscsids01
"
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Um I'm not sure what I'm supposed to be commenting on.

1) Kernel iSCSI would be vaguely preferable to userland due to lower latency (no user/kernel boundary crossings) but I would expect not a huge deal with modern gear

2) iSCSI by default lacks sync write; this is bad for VM consistency. I do not advocate using iSCSI without sync=always unless your VM's are not valuable. If your VM's are not valuable then by all means have at it.

3) NFS is a preferable protocol for VMware, not because it is simpler, but because it doesn't lock your VM's into ESXi VMFS format, which means you can do maintenance on them even from the FreeNAS CLI if you want/need. Imagine shutting down a VM to recover its state from yesterday's snapshot. Easy to do from the FreeNAS CLI with NFS. Near impossible with iSCSI unless maybe you only had one VM on the datastore...

4) NFS sucks for sync write without a SLOG or setting sync=disabled. It used to be that we didn't have sync=disabled and used a systemwide flag to disable the ZIL, which was Extremely Dangerous To Your Pool. That no longer exists. I've yet to hear the final word on how safe sync=disabled is but it appears to be safe/r/ than the old hack. Of course you still would only disable this if you didn't value your VM's.

So that being said, I am a little leery of telling people to set sync=disabled on NFS, while we know iSCSI with sync=standard to be safe.

As far as writes in general goes, I am not aware of any changes that would revisit the general advice:

Avoid RAIDZn for small block write style datastores (iSCSI, NFS VM storage, etc). IOPS and stripe size considerations can burn excessive space and cause fragmentation pain.
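
To make point 2 concrete, forcing sync writes on an iSCSI-backed zvol is a one-liner from the FreeNAS shell (the dataset name here is made up):

  # 'tank/iscsi-extent' is a hypothetical zvol backing an iSCSI extent
  zfs set sync=always tank/iscsi-extent
  zfs get sync tank/iscsi-extent    # verify it now reports 'always'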
 

reqlez

Explorer
Joined
Mar 15, 2014
Messages
84
Um I'm not sure what I'm supposed to be commenting on.

1) Kernel iSCSI would be vaguely preferable to userland due to lower latency (no user/kernel boundary crossings) but I would expect not a huge deal with modern gear

2) iSCSI by default lacks sync write; this is bad for VM consistency. I do not advocate using iSCSI without sync=always unless your VM's are not valuable. If your VM's are not valuable then by all means have at it.

3) NFS is a preferable protocol for VMware, not because it is simpler, but because it doesn't lock your VM's into ESXi VMFS format, which means you can do maintenance on them even from the FreeNAS CLI if you want/need. Imagine shutting down a VM to recover its state from yesterday's snapshot. Easy to do from the FreeNAS CLI with NFS. Near impossible with iSCSI unless maybe you only had one VM on the datastore...

4) NFS sucks for sync write without a SLOG or setting sync=disabled. It used to be that we didn't have sync=disabled and used a systemwide flag to disable the ZIL, which was Extremely Dangerous To Your Pool. That no longer exists. I've yet to hear the final word on how safe sync=disabled is but it appears to be safe/r/ than the old hack. Of course you still would only disable this if you didn't value your VM's.

So that being said, I am a little leery of telling people to set sync=disabled on NFS, while we know iSCSI with sync=standard to be safe.

As far as writes in general goes, I am not aware of any changes that would revisit the general advice:

Avoid RAIDZn for small block write style datastores (iSCSI, NFS VM storage, etc). IOPS and stripe size considerations can burn excessive space and cause fragmentation pain.

Hi.

I agree that by using NFS you are not locked into VMFS, but... when you mention a snapshot, which snapshot are you talking about? In VMware you can just use "Revert to previous snapshot" in the GUI. Or are you talking about a ZFS snapshot? Then I guess I would agree, because if you have more than one VM in there, the snapshot will restore all VMs to yesterday, and you cannot selectively restore... unless you mount the ZFS snapshot, then share it as iSCSI and use the VMware tools to extract the VMDK from that ZFS snapshot.

But then again, I should never restore a ZFS snapshot of a VM unless I had no other choice... since the ZFS snapshot is not going to be consistent unless your VM is off.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
VMware's snapshot functionality is funky at best, and prevents you from doing certain things like migrating VM's under certain circumstances.

The VMware snapshot is not really going to be any more consistent than the ZFS one would be, the difference is that you don't need to muck around to get access to it. Basically a snapshot of a live VM disk is always going to have a risk of consistency problems due to the fact that you are effectively just "turning off" the VM at that instant in time.

In general, this is a point in favor of NFS.

As too is the fact that you can do other manipulations on your VM files without being totally dependent on a VMware hypervisor.
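
To illustrate (the paths here are hypothetical), recovering one VM's files from yesterday's snapshot on an NFS datastore is just a copy out of the snapshot directory on the FreeNAS side:

  # ZFS exposes read-only snapshots under the dataset's .zfs directory
  ls /mnt/tank/vmstore/.zfs/snapshot/
  # pull a single VM's directory back from yesterday's snapshot
  cp -Rp /mnt/tank/vmstore/.zfs/snapshot/auto-20140515/myvm /mnt/tank/vmstore/myvm-recovered

No VMFS in the way, and no hypervisor needed.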
 

reqlez

Explorer
Joined
Mar 15, 2014
Messages
84
Well... VMware does provide an option to quiesce the file system during a snapshot; it uses the Windows API to make sure that all programs that support this feature flush their data to disk before the snapshot, if I'm not mistaken. One drawback of VMware snapshots is that you cannot keep them around for too long; you have to delete them or your performance drops severely... So I just use them before doing some software updates, for example, and then delete them. Backup tools that are designed for VMware use the VMware backup API to see which blocks have changed since the previous snapshot to do an "incremental" backup.

I wonder if ZFS has a similar performance penalty if you keep too many snapshots for too long? Anybody with experience?

Wow, I just realized the potential for the following project: figure out how to interface ZFS with the VMware API to have VMware quiesce the filesystem during a ZFS snapshot!
 