Multiple instances of CIFS/SMB to manage I/O load

Status
Not open for further replies.

bittenoff

Dabbler
Joined
Nov 26, 2017
Messages
20
A performance problem that I'm observing is that when some PCs are doing large amounts of I/O to FreeNAS over CIFS, the backend smbd processes get swamped and regular file access suffers a lot.

Thus I am looking at ways to mitigate this problem.

If I were doing this with, say, NetApp, I'd create a virtual server with a new instance of CIFS and give it its own IP addresses.

What I would like is to run additional instances of smbd, preferably on their own IP addresses; however, FreeNAS only provides a single instance of CIFS.

How do I solve this problem with FreeNAS? Run FreeNAS or FreeBSD in a jail?

Or do I need to look at another approach, such as using different IP addresses for big-file I/O and using ipfw to limit bandwidth?
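For the ipfw idea, I'm picturing a dummynet sketch roughly like the following (the interface name and client subnet are placeholders, not my real values):

```shell
# Load the traffic shaper (dummynet) if it isn't compiled into the kernel.
kldload dummynet

# Cap the bulk-writer clients at 200 Mbit/s in each direction.
# igb0 and 10.0.0.0/28 are placeholders for the real NIC and the
# addresses of the clients doing large sequential writes.
ipfw pipe 1 config bw 200Mbit/s
ipfw add 100 pipe 1 ip from 10.0.0.0/28 to any in via igb0
ipfw add 110 pipe 1 ip from any to 10.0.0.0/28 out via igb0
```

That would leave headroom on the link for the latency-sensitive clients, at the cost of slowing the bulk writers.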

Advice welcomed.
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
Advice welcomed.
Can you give us a rundown of the hardware you are using, how many clients you are dealing with and what sort of file access it is?
The details of all these things matter when it comes to finding a solution.
 

nojohnny101

Wizard
Joined
Dec 3, 2015
Messages
1,478
Maybe link aggregation?

bittenoff

Dabbler
Joined
Nov 26, 2017
Messages
20
Let me make it simpler for you.

NAS provides 1GE for clients.

Imagine 5 clients writing data to SMB at 180 Mbit/sec (let's assume this runs for 12 hours a day) and 5 clients that want to do random file I/O (24x7) without being delayed more than 100 ms.

All connecting to CIFS. Single instance.

smbd gets flooded handling I/O for the 5 clients writing out large amounts of data.

The 5 clients wanting to do random I/O get starved.

On a NetApp server, it is easy to partition different workloads by creating individual instances of CIFS, each with its own IP address, and managing the I/O and network capabilities of each CIFS instance.

On FreeNAS, what do I do?

Writing "post your hardware configuration" is not helpful.
 

wblock

Documentation Engineer
Joined
Nov 14, 2014
Messages
1,506
Is your network card a decent Intel which offloads a bit of work from the CPU, or a Realtek "I can't believe it's a network card" that adds CPU overhead?

Samba being what it is, this could affect the answer. So could other hardware, which is why we ask. The version of FreeNAS is also important.

I'm curious how multiple SMB servers deal with file locking. Or do you just give clients different shares?
 

BaT

Explorer
Joined
Jun 16, 2017
Messages
62
Let me make it simpler for you.

NAS provides 1GE for clients.

Imagine 5 clients writing data to SMB at 180 Mbit/sec (let's assume this runs for 12 hours a day) and 5 clients that want to do random file I/O (24x7) without being delayed more than 100 ms.

All connecting to CIFS. Single instance.

smbd gets flooded handling I/O for the 5 clients writing out large amounts of data.

Writing "post your hardware configuration" is not helpful.

I'm not sure where your idea of a single smbd instance comes from. Running smbstatus shows a list of the connected users and the PIDs of their individual smbd processes, one per connection. It may not be the best design pattern, but it definitely contradicts the idea of a single smbd instance serving everyone.

If you experience delays while interacting with Samba, it's very likely an issue with the hardware setup (in a broad sense): low link capacity, not enough CPU cores, low memory, a suboptimal disk layout, or the disks themselves. SMB is not the quickest protocol and will lose to FTP, for example, but I'd start with the other parts of the system first.
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
Writing "post your hardware configuration" is not helpful.
Nevertheless, it is part of the rules for a reason. Hardware matters very much with FreeNAS and if you can't give us more details, how can you expect any help?
You could have a suboptimal hardware configuration that is creating the problem that you are blaming on the software.
Also, if you are hitting the system with a highly random workload of small files, you may need a SLOG (separate log device) to absorb synchronous writes before they are flushed to the physical disks in transaction groups.
The information you are giving is not what we need to know. Answer the questions and then you can get answers that might point out the changes you need to make to get the system working.
On a NetApp server
This is NOT a NetApp; it is better.
 

bittenoff

Dabbler
Joined
Nov 26, 2017
Messages
20
Maybe link aggregation?

Already configured.

Also, if you are hitting the system with a high random file workload with small files, you may need a SLOG (Separate Log device) to cache the writes to be written to the physical disks in transaction groups.

Hmm, putting the ZIL on separate devices is something I'm considering, so SLOG too?

http://www.freenas.org/blog/zfs-zil-and-slog-demystified/

and

https://www.cyberciti.biz/faq/how-to-add-zil-write-and-l2arc-read-cache-ssd-devices-in-freenas/

I do need to look into this.
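If I go that route, attaching a log device looks straightforward; a sketch with placeholder pool and partition names:

```shell
# Attach a fast SSD partition as a separate log (SLOG) device.
# "tank" and the gpt labels are placeholders for my actual names.
zpool add tank log gpt/slog0

# A mirrored SLOG is generally recommended, to avoid a single point
# of failure for in-flight synchronous writes:
# zpool add tank log mirror gpt/slog0 gpt/slog1
```

One caveat from the first article above: a SLOG only accelerates synchronous writes, and SMB traffic is largely asynchronous by default, so it may not address this particular bottleneck.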

As for NetApp, there's a LOOOOOONG way for FreeNAS to go to catch up. Failover, clustering, virtual servers, mobile IP addresses, etc. At a very simple level, FreeNAS does what NetApp does, but there is a huge gulf between NetApp cDot and FreeNAS. Huge.

Why do people run multiple instances of CIFS? Multiple authentication domains are the most obvious reason, along with browsability in multi-tenant environments. Security is the big one.

e.g. the CIFS server that HR uses to share data needs to be on VLAN 504, whereas the CIFS server for the lab test environment is on VLAN 344. I really don't want the lab techs to have access to the same instance of CIFS that shares out data for HR. Maybe the CIFS server implementation is secure, but in case it is not, I want as many security barriers as possible between a curious lab tech and payroll.

Ideally there would be a "jail template" that I could run as a CIFS/NFS/CIFS+NFS server for an assigned IP address with a given set of datasets or volumes. With such a template, I can then push all data serving into jails and leave the "root" environment for O&M.
 

bittenoff

Dabbler
Joined
Nov 26, 2017
Messages
20
I'm not sure where your idea of a single smbd instance comes from. smbstatus invocation shows a list of the connected users and PIDs of their INDIVIDUAL smbd processes, one per connection. Actually, it's not the best design pattern, but it definitely contradicts the idea of a single smbd instance for everyone.

I hear what you're saying.

I think what I want to say is that "this CIFS share is high priority" and "that CIFS share is low priority". I don't know if samba is capable of that?
 

garm

Wizard
Joined
Aug 19, 2017
Messages
1,556
Hardware and read/write performance aside, for multiple authentication domains you need multiple smbd instances. As far as I know (and I don't have time to dig through smb.conf), the interface selection is global. Running multiple smbd instances would require hacking FreeNAS: https://wiki.samba.org/index.php/Multiple_Server_Instances
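The recipe on that wiki page boils down to giving each instance its own config file with distinct interfaces and state directories, roughly like this (paths and addresses are illustrative):

```shell
# Second-instance config, e.g. /usr/local/etc/smb-hr.conf:
#   [global]
#       interfaces = 10.50.4.10
#       bind interfaces only = yes
#       pid directory = /var/run/samba-hr
#       lock directory = /var/run/samba-hr
#       private dir = /var/db/samba-hr/private
#
# Launch a second daemon against that config:
smbd -D -s /usr/local/etc/smb-hr.conf
```

FreeNAS's middleware would know nothing about the second instance, so it would have to be managed entirely by hand.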

But we have the magic of jails. Since you have already split the network up into VLANs, you can set up those interfaces in FreeNAS and create a jail for each. Then configure smb to your heart's content in each jail.
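With iocage that could look something like this (release, jail name, VLAN interface, address, dataset paths, and package name are all placeholders):

```shell
# One jail per tenant VLAN, each with its own Samba.
iocage create -n smb-hr -r 11.1-RELEASE ip4_addr="vlan504|10.50.4.10/24"

# Expose the dataset the jail should serve via a nullfs mount.
iocage fstab -a smb-hr /mnt/tank/hr /mnt/hr nullfs rw 0 0

# Install and start Samba inside the jail.
iocage exec smb-hr pkg install -y samba47
iocage exec smb-hr sysrc samba_server_enable=YES
iocage exec smb-hr service samba_server start
```

Each jail then has its own smb.conf, its own IP, and its own authentication setup, which also covers the security separation discussed above.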
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
Interesting as this is, if you can't answer any questions, I am putting you on ignore.

 

wblock

Documentation Engineer
Joined
Nov 14, 2014
Messages
1,506
Hmm, putting the ZIL on separate devices is something I'm considering, so SLOG too?
ZIL is built in, SLOG is the separate device.
 

bittenoff

Dabbler
Joined
Nov 26, 2017
Messages
20
But, we have the magic of jails. Since you have already split the network up in vlans you can set up those interfaces in FreeNAS and create a jail for each. Then configure smb to your hearts content in each jail.

Yes, this is what I'm thinking. Some jail plugins oriented around file serving (CIFS/SMB/NFS/iSCSI) could be interesting.
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
Yes, this is what I'm thinking. Some jail plugins that are orientated around file serving (CIFS/SMB/NFS/iSCSI) could be interesting.
It sounds to me like you are trying to replace a commercial product (NetApp) with a free software package, and you are basically making a feature request, asking for something to be added to the free software to suit the way you have used your commercial software in the past.
I don't think you will get any traction on that. Just my thinking.
The folks who give FreeNAS away are the same folks who develop TrueNAS, the software package that iXsystems puts on the hardware platform they sell. TrueNAS is much more feature-complete and does support the high-availability options you are asking about. If you contact iXsystems and purchase their equipment, they will include a copy of TrueNAS and they will support you, possibly even configuring the software to work the way you want. I just wouldn't expect those features to be made available in the free version.

You also shouldn't expect anyone to help you if you won't answer their questions.
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
I think that prioritizing things in Samba is a bit of a red herring.

If your problem is on the network side, as it sounds (the constant writes are already filling up the pipes), give the priority clients a dedicated link or QoS the others into submission.
If the problem is disk performance, add more vdevs.
If the problem is CPU, add more CPU. But this one is unlikely; it's not particularly hard to saturate even 10GbE with multiple SMB connections.
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
I think that prioritizing things in Samba is a bit of a red herring.

If your problem is on the network side, as it sounds (the constant writes are already filling up the pipes), give the priority clients a dedicated link or QoS the others into submission.
If the problem is disk performance, add more vdevs.
If the problem is CPU, add more CPU. But this one is unlikely; it's not particularly hard to saturate even 10GbE with multiple SMB connections.
OP would not give up any information on the configuration, so no way to know what the problem is.

 