Milestone XProtect / SAMBA performance issues

Status
Not open for further replies.

scottsee

Cadet
Joined
Aug 5, 2016
Messages
8
Good Morning..

We are attempting to migrate away from Windows Storage Spaces to FreeNAS as our Milestone XProtect video archive server, but have run into some performance issues that may or may not be related to SAMBA/CIFS configuration or general interoperability between Microsoft Windows and FreeNAS. I'll do my best to keep this short. Any assistance is appreciated.

We have two 45drives.com Storinator chassis, identical in hardware. One runs Windows 2012 R2 with Storage Spaces, the other FreeNAS 9.10.

FreeNAS: 9.10 Stable
Platform: Intel(R) Core(TM) i3-3240 CPU @ 3.40GHz
Memory: 32711MB ECC
Controllers: 2 x Rocket 750 SATA controllers
Disks: 45 x 4TB WD Red drives - nine raidz groups of 5 disks each, for a total of 121TB of storage.
Network: 2 x Intel X540 dual-port 10Gb NICs / 40Gb LACP lagg0
Reference links:
http://www.45drives.com/products/
https://www.milestonesys.com/our-products/video-management-software/xprotect-expert/

Required Workload:
Reads: 1,600 to 1,800 open file reads at any given time.
Writes: 1.5TB to 2.2TB written to disk every 24 hours
Video files (30%): between 1MB and 16MB in size
Supporting files (70%): 1KB to 50KB in size
Current file count: roughly 10 million
30 day churn rate: 100%
Growth expectations: 20% in the next 180 days.

I installed the most recent version of FreeNAS on one of our Storinators and validated the system in a test environment for about 30 days, but very quickly after the device was brought into production we discovered a very impactful issue with the way the Milestone XProtect software and FreeNAS communicate. Milestone R&D came back with some very good information on how their software moves data during the archive process:

"The performance/behavior observed between Milestone and FreeNAS is related to the way in which Milestone internally handle threading/locking. A normally brief lock is taken on the source database bank during the table moving process, and it seems this lock takes several seconds due to slow IO on the FreeNAS shares resulting in images buffering to memory until the end of that lock."

I have since moved the archive server back to our Windows Storage Spaces server and collected various SMB captures with Wireshark between our Milestone recording servers, Windows Storage Spaces, and FreeNAS to better understand the problem. One thing stands out above the rest: all SMB transmissions to the FreeNAS server have a max TCP window size of 1024, which eventually drops to roughly 500 across all data block transfers regardless of size. I am also observing a large percentage of Zero Window packets when communicating with FreeNAS, a consequence of the small TCP window.
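For reference, these are the Wireshark display filters I'm using to isolate the zero-window events in the captures (the capture filename and the FreeNAS host address below are placeholders, substitute your own):

```shell
# Show zero-window and window-full events involving the FreeNAS host.
tshark -r capture.pcap \
  -Y "ip.addr == 192.168.1.50 && (tcp.analysis.zero_window || tcp.analysis.window_full)"

# Dump the advertised receive window over time for the SMB session (TCP 445),
# useful for graphing how the window collapses during archive moves.
tshark -r capture.pcap -Y "tcp.port == 445 && ip.src == 192.168.1.50" \
  -T fields -e frame.time_relative -e tcp.window_size
```

The same filter strings work in the Wireshark GUI filter bar if you prefer that over tshark.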

It is possible that I am not looking at this from the right angle, please let me know if you would like additional information.
 

anodos

Sambassador
iXsystems
Joined
Mar 6, 2014
Messages
9,554
A few observations:
1) That's not much RAM for the amount of storage you have.
2) Those Rocket 750s are also potentially a problem. I've heard negative comments about their performance under FreeBSD. A much better choice is something based on the LSI SAS2008 or SAS3008 chipset.
3) You may want more CPU

A few suggestions:
1) Try killing case sensitivity by doing the following: (a) create a new dataset with "Case Sensitivity" set to "Insensitive"; (b) create a new Samba share pointing to the new dataset with the auxiliary parameter "case sensitive = true". This basically gets Samba to skip translating UNIX/Windows case sensitivity.
2) Try killing anything that touches extended attributes. Remove "streams_xattr" from the list of VFS Objects, and add the following "auxiliary parameters" to your CIFS config under "services" -> "CIFS".
Code:
ea support = no
store dos attributes = no
map archive = no
map hidden = no
map readonly = no
map system = no

3) Try disabling oplocks by adding auxiliary parameter "oplocks = no" in your share config.
4) Try fine-tuning your send / receive buffers in samba. This is something you'll have to experiment with. See here for more information: https://www.samba.org/samba/docs/man/manpages/smb.conf.5.html#SOCKETOPTIONS
5) Look into VFS_preopen. https://www.samba.org/samba/docs/man/manpages/vfs_preopen.8.html Note that it might interfere with ZFS's own caching mechanisms, but it may be worth looking into.
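To put suggestions 1 through 3 together, here's roughly what I have in mind (the pool/dataset name and share name are placeholders; the share parameters are the same ones listed above, shown as they would land in smb.conf):

```shell
# (1) casesensitivity can only be set at dataset creation time, so make a new one:
zfs create -o casesensitivity=insensitive tank/xprotect

# (2)+(3) The auxiliary parameters, as they'd appear in the generated smb.conf:
# [xprotect]
#     path = /mnt/tank/xprotect
#     case sensitive = true
#     oplocks = no
#     ea support = no
#     store dos attributes = no
#     map archive = no
#     map hidden = no
#     map readonly = no
#     map system = no
```

In the FreeNAS GUI you'd enter the bracketed parameters in the share's "Auxiliary Parameters" box rather than editing smb.conf by hand.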

Ultimately, you may end up bottlenecking on CPU (which is what I've seen happen when having lots of simultaneous 16KB writes to a samba share). I'd say at least part of the problem is that your hardware is somewhat insufficient for the scale you're dealing with. I'd want to have an E5-1650 with 128GB+ RAM for that workload.
 

Nick2253

Wizard
Joined
Apr 21, 2014
Messages
1,633
I'll echo what @anodos said: you have insufficient memory, and likely insufficient CPU.

When it comes to ZFS, you really have to throw out everything you know about "sufficient hardware" and "best practices". ZFS uses a crap-ton of memory (to say the least), and can be fairly demanding in other ways as well, what with calculating checksums for every piece of data in the system. That's not to say you get nothing for this demand: ZFS is one of, if not the, best filesystems out there for data security.

In your case, if you really want to use ZFS for your data, it's best to go back to the drawing board. You'll want 100+GB of memory, and you'll probably benefit from a cache drive or two. However, ZFS may not be the right fit for your organization, especially if the data you're storing on this array isn't super critical.
 

scottsee

Cadet
Joined
Aug 5, 2016
Messages
8
Good morning and thank you for your responses.

I'm in agreement with your observations: the hardware is not capable of supporting the full workload without noticeable performance degradation within our security department. This video data, though its churn rate is very quick, is extremely critical to the overall operation of the business. I've begun the process of replacing the FreeNAS hardware with more capable hardware, as well as working on finding a converged solution for scaling based on growth. Breaking our workload up across multiple FreeNAS devices has its benefits.

Do you believe these observed issues are strictly due to a lack of hardware resources on the FreeNAS device, and not specifically connected to SAMBA or CIFS? Can I test this by converting my CIFS shares to iSCSI? I would love to validate the capabilities of memory, CPU, and disk, for science.

@anodos, I'll make your suggested changes on Monday morning.
 

anodos

Sambassador
iXsystems
Joined
Mar 6, 2014
Messages
9,554
iSCSI will not have the same overhead that Samba has, but it brings its own issues to the table. Once you're using iSCSI you'll start seeing significant performance degradation after your pool is 50% full. So it'll be an apples/oranges comparison.
 

scottsee

Cadet
Joined
Aug 5, 2016
Messages
8

Will iSCSI have the same hardware requirements as CIFS? 50% may be acceptable considering the inexpensive nature of this type of JBOD/ZFS solution. Thinking out loud: dual-proc E5 servers with 256GB of memory, quad 10Gb iSCSI cards, and 45 WD 4TB Gold drives are considerably less expensive than adding additional shelves to our enterprise storage, even if quotas are enabled at 50% of pool size. Roughly speaking, that would equate to 60TB of block-level capacity per enclosure in their current pool configuration. Each recording server only has a 30TB requirement. I could use one FreeNAS for each and buy 2 more for replication.
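Back-of-the-envelope math on that 60TB figure, assuming our nine 5-disk groups are raidz1 (one parity disk per vdev; the original build notes don't state the raidz level):

```python
# Rough capacity sketch; raidz1 is an assumption, not confirmed in this thread.
vdevs = 9            # raidz groups
disks_per_vdev = 5
parity_per_vdev = 1  # assuming raidz1
disk_tb = 4          # decimal TB per WD drive

raw_usable_tb = vdevs * (disks_per_vdev - parity_per_vdev) * disk_tb
print(raw_usable_tb)             # 144 decimal TB before ZFS overhead

usable_tib = raw_usable_tb * 10**12 / 2**40
print(round(usable_tib))         # ~131 TiB, in the ballpark of the 121TB FreeNAS reports

block_capacity_tb = 121 * 0.50   # usable block storage at the 50% iSCSI guideline
print(block_capacity_tb)         # 60.5, matching the ~60TB estimate above
```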
 