BIG problems with SMB

Status
Not open for further replies.

spitfire

Dabbler
Joined
May 25, 2012
Messages
41
Hi,

I'm having issues with SMB/CIFS performance - from the client end I first see very good speeds (70-90 MB/s), which then drop to ~30 MB/s, and after ~3 GB of transferred data the graph dives to 0, mostly resulting in an error on the Windows side. Whenever I have to copy a big file over SMB, I either limit the transfer on the client side (e.g. in TotalCommander I set a 10 MB/s limit) or pause the transfer (on Windows 8.1, using the pause button in Explorer's copy dialog) - see the attached graph.

On the server side, I'm seeing little to no activity during the file transfer - looking at 'zpool iostat -v <pool> 1', writes (and even reads) happen only sporadically.
When monitoring the ARC ('arcstat.py -f read,hits,miss,hit%,arcsz 1') I see a hit ratio of 99-100% most of the time.
I'm using a RAIDZ2 zpool with 4x1TB drives, on a Core2Duo (E8400) system with 8 GB of RAM.
I have dedup enabled globally (which I have good reason for), and I've enabled autotune, which has set the following tunables:

Code:
kern.ipc.maxsockbuf:2097152
net.inet.tcp.delayed_ack:0
net.inet.tcp.recvbuf_max:2097152
net.inet.tcp.sendbuf_max:2097152
vfs.zfs.arc_max:4936306816
vfs.zfs.l2arc_headroom:2
vfs.zfs.l2arc_noprefetch:0
vfs.zfs.l2arc_norw:0
vfs.zfs.l2arc_write_boost:40000000
vfs.zfs.l2arc_write_max:10000000
vm.kmem_size:10534415360
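
(For reference, these can be double-checked from the shell - a quick sketch, assuming the tunable names exactly as listed above:)

Code:
# print the live values of the tunables autotune set
sysctl kern.ipc.maxsockbuf net.inet.tcp.delayed_ack
sysctl net.inet.tcp.recvbuf_max net.inet.tcp.sendbuf_max
sysctl vfs.zfs.arc_max vm.kmem_size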


I was experiencing the same issues even on a machine with 32 GB of RAM (on http://www.supermicro.com/products/motherboard/Atom/X10/A1SAM-2550F.cfm).

Any ideas how to troubleshoot it/fix it?
 

Attachments

  • file.png (9.5 KB)

SweetAndLow

Sweet'NASty
Joined
Nov 6, 2013
Messages
6,421
What version of FreeNAS? Tell us more about your hardware, and turn off dedup and the tunables. You haven't read the rules or the stickies. Both of those tell you what info to put into a thread to get help, and that deduplication needs lots of memory - like 64-128 GB. You shouldn't be using it anyway, because I doubt your data will dedup very well.
 

spitfire

Dabbler
Joined
May 25, 2012
Messages
41
What version of FreeNAS? Tell us more about your hardware, and turn off dedup and the tunables. You haven't read the rules or the stickies. Both of those tell you what info to put into a thread to get help, and that deduplication needs lots of memory - like 64-128 GB. You shouldn't be using it anyway, because I doubt your data will dedup very well.

Sorry about that - here's the missing info:
FreeNAS-9.3-STABLE-201502050159
Intel Core2Duo E8400
8 GB RAM (DDR2, non-ECC)
1 Gbit NIC (Realtek RTL8169)
4*1TB HDD

The data I already have deduplicates quite well:
Code:
NAME           SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP
strg01        3.62T  1.50T  2.13T         -    31%    41%  1.26x

Right now I'm trying to back up a few similar VMs, so I guess they should dedup very well too (they do on Windows Server using its deduplication).
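
(For what it's worth, the actual dedup table statistics can be dumped with zdb - a sketch, assuming the pool name strg01 as above:)

Code:
# per-pool DDT summary (entries in core / on disk)
zpool status -D strg01
# detailed DDT histogram (on FreeNAS, zdb may need: -U /data/zfs/zpool.cache)
zdb -DD strg01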

I'd be happy with speeds like 10 MB/s, as long as it works (and doesn't break mid-copy).
 

SweetAndLow

Sweet'NASty
Joined
Nov 6, 2013
Messages
6,421
Your networking problems are from your Realtek NIC. They are terrible, and you should have an Intel NIC. Also, like I said before, your hardware isn't good enough to use dedup, so turn it off.
 

spitfire

Dabbler
Joined
May 25, 2012
Messages
41
Your networking problems are from your Realtek NIC. They are terrible, and you should have an Intel NIC. Also, like I said before, your hardware isn't good enough to use dedup, so turn it off.

Like I said before, I had the same performance issue on a machine with 32 GB of RAM (on http://www.supermicro.com/products/motherboard/Atom/X10/A1SAM-2550F.cfm) - BTW, it had 4 Intel NICs, 2 of which were bonded using LACP (with a switch configured for it).

This was happening only with SMB/CIFS - any idea why?


Edit:

BTW, I forgot I have 2 Intel PRO/1000 PCI NICs in a drawer - I'll replace the RTL8169 with one of them.
 
Last edited:

anodos

Sambassador
iXsystems
Joined
Mar 6, 2014
Messages
9,554
Autotune shouldn't be enabled. Your hardware is insufficient for deduplication. Depending on circumstances, 32GB RAM may also be insufficient for deduplication.

Create a dataset. Make sure dedup is turned off. Share it via CIFS and try to recreate your problem. For the sake of completeness, post the following:
  • /usr/local/etc/smb4.conf
  • relevant messages from /var/log/messages and /var/log/samba4/log.smbd
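
(A minimal sketch of that test setup from the shell, assuming the pool name strg01 from above and a hypothetical dataset name smbtest - creating it through the GUI works just as well:)

Code:
# create a test dataset with dedup explicitly off
zfs create strg01/smbtest
zfs set dedup=off strg01/smbtest
# verify - should print 'off'
zfs get dedup strg01/smbtest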
 

spitfire

Dabbler
Joined
May 25, 2012
Messages
41
Autotune shouldn't be enabled. Your hardware is insufficient for deduplication. Depending on circumstances, 32GB RAM may also be insufficient for deduplication.

Create a dataset. Make sure dedup is turned off. Share it via CIFS and try to recreate your problem. For the sake of completeness, post the following:
  • /usr/local/etc/smb4.conf
  • relevant messages from /var/log/messages and /var/log/samba4/log.smbd

Thanks! I will try that later.

I have a few questions for now:
Isn't 32 GB of RAM enough to handle a 3 TB zpool (I was using 4*1TB in RAIDZ1 in that configuration, plus 2 SSDs - one 120 GB and one 60 GB - for L2ARC)?
I was having the same issues on 9.2.1.x and 9.3 on that hardware. Not to mention it wasn't able to handle a single Samba client with nothing else going on.

When shouldn't autotune be used? Only on low-spec systems, or never at all?
 

zambanini

Patron
Joined
Sep 11, 2013
Messages
479
You need muuuuuuuch more RAM for dedup, otherwise your system will suddenly deny access to the zpool.

Just disabling dedup does not work - you need to create a new pool.
 

spitfire

Dabbler
Joined
May 25, 2012
Messages
41
You need muuuuuuuch more RAM for dedup, otherwise your system will suddenly deny access to the zpool.
From what I've read, it depends on the amount of data stored / the size of the pool. How much RAM do I need per 1 TB of pool/deduped data?


Just disabling dedup does not work - you need to create a new pool.
Don't you think just disabling it is enough? If everything is working fine, why do that? If I understand correctly, it's deduplication (at write time) that requires so many resources.
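
(As far as I understand, that's exactly how it behaves - dedup applies at write time only, and previously written blocks stay deduplicated. Assuming the pool name strg01, this is easy to see:)

Code:
# turning dedup off only affects future writes...
zfs set dedup=off strg01
# ...so the pool-wide ratio still reflects the old, deduped blocks
zpool get dedupratio strg01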




Can someone try to answer the questions I've asked in my previous post?
 


zambanini

Patron
Joined
Sep 11, 2013
Messages
479
http://utcc.utoronto.ca/~cks/space/blog/solaris/ZFSDedupMemoryProblem is also a good read.
Have some spare disks around? Test your setup with the same data and no dedup.

48 GB RAM at least; L2ARC only uses up to 25% for metadata (maybe there is a sysctl for that), so you would need a bigger SSD.
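
(The sysctl I mean is probably the ARC metadata limit - an assumption on my part, based on FreeBSD's ZFS port:)

Code:
# current cap on ARC space usable for metadata (bytes)
sysctl vfs.zfs.arc_meta_limit
# how much metadata the ARC holds right now
sysctl kstat.zfs.misc.arcstats.arc_meta_used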

But do yourself a favor: test it without dedup on a NEW pool.


BTW: LACP and CIFS - you set up only failover, didn't you?
 

spitfire

Dabbler
Joined
May 25, 2012
Messages
41
http://utcc.utoronto.ca/~cks/space/blog/solaris/ZFSDedupMemoryProblem is also a good read.
Have some spare disks around? Test your setup with the same data and no dedup.

48 GB RAM at least; L2ARC only uses up to 25% for metadata (maybe there is a sysctl for that), so you would need a bigger SSD.

But do yourself a favor: test it without dedup on a NEW pool.


I was just preparing some smaller disks to test it - I'll try it without dedup first, then I'll see.
I also have a 120 GB SSD somewhere - might as well use both of them.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
I'm using a RAIDZ2 zpool with 4x1TB drives, on a Core2Duo (E8400) system with 8 GB of RAM.
I have dedup enabled globally (which I have good reason for), and I've enabled autotune, which has set the following tunables:

I'm sorry, but there is no reason you could EVER come up with to convince me dedup is a good idea with 8 GB of RAM. Are you aware of dedup's RAM requirements?

Here's the warning against dedup from our manual...

ZFS v28 includes deduplication, which can be enabled at the dataset level. The more data you write to a deduplicated volume, the more memory it requires, and there is no upper bound on this. When the system starts storing the dedup tables on disk because they no longer fit in RAM, performance craters. There is no way to undedup data once it is deduplicated: simply switching dedup off has NO EFFECT on the existing data. Furthermore, importing an unclean pool can require 3-5 GB of RAM per TB of deduped data, and if the system doesn't have the needed RAM it will panic, with the only solution being to add more RAM or recreate the pool. Think carefully before enabling dedup! Then, after thinking about it, use compression instead.

On a serious note, when someone tells me they want to use dedup, they'd BETTER have a cubic buttload of RAM. I'm talking 64 GB of RAM at the lower end. You can literally need up to 800 GB (this is not a typo... I really mean gigabytes) of RAM for just 1 TB of data. The warning is dead serious and you should take it very seriously. The downside is that even if you turn it off, you are still stuck with deduped data, and the only way to fix it is to erase all of the data on your pool (or destroy the pool and recreate it from backups).
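
(For a rough sense of where such numbers come from: a common rule of thumb is ~320 bytes of RAM per DDT entry, and the entry count depends on the block size. The arithmetic below is my own back-of-the-envelope estimate, not a figure from the manual:)

Code:
# DDT RAM estimate: entries = allocated bytes / block size, ~320 bytes per entry
# 1 TiB of 128 KiB blocks -> ~8.4M entries -> ~2.5 GiB of RAM:
echo $(( 1099511627776 / 131072 * 320 ))   # 2684354560 bytes
# 1 TiB of 4 KiB blocks -> ~268M entries -> ~80 GiB of RAM:
echo $(( 1099511627776 / 4096 * 320 ))     # 85899345920 bytes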

You shouldn't even be thinking about an L2ARC until you have MUCH more RAM, and only when you know your workload calls for it.

Not to sound judgmental, but you really need to stop and read up on our documentation, forum stickies, etc. It sounds like you've made quite a few mistakes and need to be more informed about the choices you are making (both with hardware and software).

Quite literally, there are probably 100 places on the planet where the cost of the RAM versus the disk space saved makes dedup worth using. Disk space is so incredibly cheap in comparison to RAM that dedup has no use except in extremely specific situations. I'd bet my life savings you are NOT there with a dedup ratio of just 2. If it were 100:1, then maybe.
 