
TinTIn

Contributor
Joined
Feb 9, 2015
Messages
142
Has anyone seen this error before?

It's occurring when Apple users connect to FreeNAS via SMB. They can transfer data, but the error sometimes pops up at random, stops the transfer, and even unmounts the volume.

Searching online (see link below), people are suggesting it's a result of FreeNAS not supporting encrypted passwords (is that true?). Also, if this were the case, why would users sometimes be fine but not at other times?

http://prowiki.isc.upenn.edu/wiki/M..._displays_when_connecting_to_a_Windows_server

Appreciate any advice on the matter.

Cheers
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,525
That sounds like a bunch of crap. Once authenticated, you are authenticated. Passwords aren't required after you've authenticated, cached or not.

No clue what is going wrong, but I'm betting Apple's implementation is broken (who'd have thunk it) and they are somehow screwing up the Samba service, which says "hey, you are really broken and talking crap to me, let's redo this authentication thing" and then it fails because it's not expected behavior.

That's just my feel for the problem. It is well documented that Apple's implementation, which was bought from a 3rd party a year or two ago, is total crap.
 

anodos

Sambassador
iXsystems
Joined
Mar 6, 2014
Messages
9,553
Has anyone seen this error before?

It's occurring when Apple users connect to FreeNAS via SMB. They can transfer data, but the error sometimes pops up at random, stops the transfer, and even unmounts the volume.

Searching online (see link below), people are suggesting it's a result of FreeNAS not supporting encrypted passwords (is that true?). Also, if this were the case, why would users sometimes be fine but not at other times?

http://prowiki.isc.upenn.edu/wiki/M..._displays_when_connecting_to_a_Windows_server

Appreciate any advice on the matter.

Cheers
I'd increase logging verbosity in your CIFS config and grab a bottle of ibuprofen (for the inevitable migraines associated with debugging SMB). Do it during 'off hours' so that you can easily isolate the OSX client. OSX has changed SMB clients since that wiki was written, so it's guaranteed not to be helpful.

A few other thoughts:
  • Post your smb4.conf file.
  • Try to reproduce the problem while navigating to the server via "cifs://<ip of server>/share" rather than "smb://<ip of server>/share". This will downgrade your connection to SMB1, which is generally more stable on OSX (but lacks nice features).
  • A cursory review of complaints online indicates that the majority of instances of this error are related to how OSX handles metadata, writing it as a "dotfile" (._). I believe you can change a setting in OSX to prevent it from creating .DS_Store metadata on network shares (see the sketch after this list).
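
For reference, a rough sketch of both of those, assuming a FreeNAS 9.x layout (the exact log path and the behaviour of the OSX setting are worth verifying on your versions):

  # On the OSX client: stop Finder writing .DS_Store files to network shares
  # (log out or reboot for it to take effect)
  defaults write com.apple.desktopservices DSDontWriteNetworkStores -bool true

  # On FreeNAS: add to the CIFS auxiliary parameters (these end up in smb4.conf)
  #   log level = 3
  #   max log size = 10240
  # then watch the Samba log while reproducing the failure:
  tail -f /var/log/samba4/log.smbd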
 

TinTIn

Contributor
Joined
Feb 9, 2015
Messages
142
Thanks all. I've done some more testing, and although it's not 100% certain, it looks like it's an issue with some files not being happy landing on a gzip-9 dataset. One user had a 50GB file that would fail when the dataset was set to gzip-9, but after he disconnected and I set it back to the default LZ4 it went over fine. I put the dataset back to gzip-9, he tried again (after deleting the file, obviously), and again it failed. I'll do some more work on it tomorrow, but it looks promising. It's worth noting that a couple of users were transferring about 1TB each today and most of the data went across fine, but they both seemed to have a file or two that just wasn't happy.


 

TinTIn

Contributor
Joined
Feb 9, 2015
Messages
142
I was running iostat and watching the transfers. When the dataset was set to gzip-9, iostat would record an initial transfer of about 70Mbps, which is about normal, and then it would fall off to about 2Mbps. It would sit there for about 1 or 2 minutes and then fail with the error on the user's side. When set to LZ4 it hovered between 60Mbps and 100Mbps the whole time and went across fine.
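
If it helps narrow things down, these are the sort of things worth watching server-side while a transfer is stalling ('tank' below is just a placeholder for the real pool name):

  # per-vdev write throughput, refreshed every 5 seconds
  zpool iostat -v tank 5

  # CPU usage including kernel threads; if compression is the bottleneck,
  # the ZFS worker threads should show up as busy during the stall
  top -SH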


 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,525
I was running iostat and watching the transfers. When the dataset was set to gzip-9, iostat would record an initial transfer of about 70Mbps, which is about normal, and then it would fall off to about 2Mbps. It would sit there for about 1 or 2 minutes and then fail with the error on the user's side. When set to LZ4 it hovered between 60Mbps and 100Mbps the whole time and went across fine.

Sounds like the ZFS write cache got filled, and because of the compression that caused ZFS to stop so it could flush the write cache, and Samba timed out before ZFS caught up on the writes.
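
One way to test that theory would be to take Samba out of the picture entirely: copy a big, already-compressed file onto the gzip-9 dataset locally on the server and see whether write throughput collapses the same way (the paths below are just placeholders):

  # copy a large file from an LZ4 dataset onto the gzip-9 dataset, locally
  cp /mnt/tank/lz4share/bigfile.mov /mnt/tank/gzip9share/ &

  # watch pool throughput while the copy runs
  zpool iostat tank 5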
 

TinTIn

Contributor
Joined
Feb 9, 2015
Messages
142
Do you think I would benefit from a SLOG?


 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,525
Do you think I would benefit from a SLOG?

Nope. SLOG won't fix that problem. Your only options are to not use gzip-9 compression, add a more powerful CPU that can handle the gzip-9 compression, or deliberately force your write cache to some arbitrarily low number (200MB, for example) so that you can ensure that on a write cache flush it *will* finish before Samba times out.

If this is for home file storage and nothing like VMs or anything, setting the write cache limit isn't a bad option. It may slightly affect throughput of your zpool during periods of high writes.
 

TinTIn

Contributor
Joined
Feb 9, 2015
Messages
142
Thanks cyberjock. How do I change the write cache size, and what is it set to by default? Many thanks for your advice.


 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,525
The default is set based on testing ZFS does during pool mounting. No clue how to change the cache size off the top of my head. This wouldn't normally be something I'd recommend, so I didn't bother trying to remember it when I experimented with it a while back.

Honestly, you're probably better off just not using gzip-9. Do you *really* need that slightly improved compression ratio?
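
If you do want to compare, something like this shows what each setting is actually buying you (dataset names are placeholders, and note that changing the compression property only affects newly written data):

  # current compression setting and the achieved ratio for the dataset
  zfs get compression,compressratio tank/share

  # switch the dataset back to LZ4 (existing data stays compressed as it was written)
  zfs set compression=lz4 tank/share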
 

TinTIn

Contributor
Joined
Feb 9, 2015
Messages
142
In testing I've found that on sample data it doubles the capacity, and as I've got 240TB I'd like to start with it if I can.


 

TinTIn

Contributor
Joined
Feb 9, 2015
Messages
142
Thanks for all your advice.


 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,525
ZOMG.

What are the specs on that system? I'd expect any E5 system to be able to do gzip-9 without problems, and I wouldn't build a system that big without 64GB of RAM minimum...
 

TinTIn

Contributor
Joined
Feb 9, 2015
Messages
142
It's got 256GB RAM and 2 x Intel Xeon E5-2640 processors.


 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,525
Hmm. I'd expect that to be plenty of power, but it could be that the cache is so big because you've got so many vdevs and so much RAM.

You'll have to google how to force the cache to something smaller. You'll probably have to experiment to figure out what a good value is. I'd guess between 1 and 2GB is probably going to be the place to be. But I've never done what you are trying to do before, so my guess could be horribly off.
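
If it's the knob I'm thinking of, on newer FreeBSD/ZFS the dirty-data limit is what corresponds to the 'write cache' here; that's an assumption about your version, so check that the sysctl actually exists on your build before relying on it:

  # show the current limit (defaults to a fraction of RAM)
  sysctl vfs.zfs.dirty_data_max

  # try capping it at 2GB; add it as a Tunable in the FreeNAS GUI to survive reboots
  sysctl vfs.zfs.dirty_data_max=2147483648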
 

TinTIn

Contributor
Joined
Feb 9, 2015
Messages
142
OK, thanks, I'll have a play. Yes, the vdevs are big (8+2), so perhaps that coupled with the gzip-9 is the issue. Thanks again for the advice.


 

TinTIn

Contributor
Joined
Feb 9, 2015
Messages
142
Update: I double-checked this, and it is definitely the case that having gzip-9 on the dataset causes the problem, and LZ4 is fine. The file is large (50GB), so I assume this coupled with the gzip-9 is the issue. I've also tried this on another test server with much lower specs but with a two-drive mirrored vdev, and it's a similar issue. It doesn't actually crash and dismount, but it tries to copy over painfully slowly, at Kbps. It predicted 6 hours for the transfer of the 50GB, but after 20 minutes I stopped it. That rules out the large vdevs.


 

anodos

Sambassador
iXsystems
Joined
Mar 6, 2014
Messages
9,553
Update: I double-checked this, and it is definitely the case that having gzip-9 on the dataset causes the problem, and LZ4 is fine. The file is large (50GB), so I assume this coupled with the gzip-9 is the issue. I've also tried this on another test server with much lower specs but with a two-drive mirrored vdev, and it's a similar issue. It doesn't actually crash and dismount, but it tries to copy over painfully slowly, at Kbps. It predicted 6 hours for the transfer of the 50GB, but after 20 minutes I stopped it. That rules out the large vdevs.


What if you transfer the file via scp/sftp? Does it also affect Windows clients?
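
E.g., something as simple as this from the OSX client would tell you whether the slowdown follows the dataset rather than the protocol (hostname and paths are placeholders):

  # push the same 50GB file over SSH instead of SMB
  scp ~/Desktop/bigfile.mov root@freenas.local:/mnt/tank/gzip9share/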
 

TinTIn

Contributor
Joined
Feb 9, 2015
Messages
142
I'll try that and let you know. I tried AFP, and it fails too.


 

TinTIn

Contributor
Joined
Feb 9, 2015
Messages
142
Same issue from a Windows machine, i.e. going very slowly, but it doesn't seem to kick you off like it did on the Mac. It's strange, as the client says it's transferring at between 20MB/s and 40MB/s, but iostat says incoming is 2MB/s. It predicts 50 minutes to transfer the 50GB file. I'll let it run and see what happens.

Update: the client speed has dropped to 14MB/s and falling; iostat still says 2MB/s; predicted finish time 'About 1 Hour'.

Another update: it's locked up and looks like it's about to crash. It had transferred 10GB of the file, so perhaps that's the limit for an individual file on a gzip-9 dataset?

Yeah, it crashed and brought the server to its knees.


 