Filecopy speed falls off a cliff after a point (around RAM size) on multiple hardware platforms tested!

Longreen

Cadet
Joined
Jan 15, 2019
Messages
7
So I have installed FreeNAS several times now, read many guides, and watched instructional videos for hours.

I have a very persistent problem that occurs on different hardware and is exactly the same each time.

I initiate a file copy of some big files (40 GB or even 70 GB in size) and the copy runs at around 1 GB per second (the 100 GbE NIC gives almost the same speed as the 10 GbE NIC). After a point, which seems to match the RAM size of the system (either 64 GB or 128 GB), the copy speed falls off a cliff down to around 80 MB per second, and it stays that way until I reboot the FreeNAS server. After a reboot it again copies at around 1 GB per second until I have copied a total amount of data roughly equal to the server's RAM, at which point it drops back down to around 72-80 MB per second and stays there for all subsequent file copies until another restart of the FreeNAS server.

Tried with ZFS encryption on the pool and without, and it makes no difference; the problem is the same.

Tried moving the system dataset (syslog and/or reporting database) to either the OS boot drive (an SSD) or to the pools themselves; again it makes no difference.
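One more data point I can gather: since the slowdown seems to kick in once I have copied roughly a RAM's worth of data, I can watch the ZFS ARC while a copy runs. This is just a rough diagnostic loop using the FreeBSD sysctls; nothing here is tuned for my specific boxes:

```shell
# Rough diagnostic: watch the ARC while a copy is in progress, to see
# whether the slowdown lines up with the ARC hitting its ceiling.
# Run in an SSH session on the FreeNAS (FreeBSD) box.
while true; do
    sysctl -n kstat.zfs.misc.arcstats.size   # current ARC size in bytes
    sysctl -n vfs.zfs.arc_max                # configured ARC ceiling in bytes
    sleep 5
done
```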


Hardware config 1: Xeon E3-1275 v6, 64 GB RAM. Tried pools with 8 x HGST 1 TB Enterprise drives or an all-SSD pool of 8 x 1.9 TB Samsung Enterprise SSDs.

Hardware config 2: Dell R640, 128 GB RAM (tested with both a Xeon E5-2699 v4 and a Xeon E5-1680 v3).

PC/workstation to copy files to (both a Xeon E5-1680 v3 and an E2175G (E3-class Xeon)).


NICs tested: Intel X550-T2 and Mellanox ConnectX-4 100 GbE dual port (tested both with a direct connection between FreeNAS and the workstation and via 10 GbE and 100 GbE enterprise-class switches).

I am at my wits' end and would highly appreciate some advice and help!

Thank You
 

kdragon75

Wizard
Joined
Aug 7, 2016
Messages
2,457
Help needed : Filecopy speed falls off a cliff after a point (around RAM size) on multiple hardware platforms tested !

Please don't cross post.
 

Longreen

Cadet
Joined
Jan 15, 2019
Messages
7

There are many posts with similar topics but different issues on these forums, and hundreds of people have made Reddit posts etc. with similar FreeNAS issues, again without pointing to a specific root cause and with no resolution. So what do you suggest I do except kindly ask on these official forums? I made a detailed post hoping not to be lambasted by the friendly FreeNAS gurus :smile:
 

kdragon75

Wizard
Joined
Aug 7, 2016
Messages
2,457
Please kindly delete one of your posts so that when someone helps you, others can more easily find the same answers. Cross posting angers the gurus.
 

F!$hp0nd

Dabbler
Joined
Apr 18, 2016
Messages
13
Sounds to me like your zpool is not set up correctly. What I mean is that if you have 64 or 128 GB of RAM, some of it will be dedicated to ZFS operations, which could be the issue. Ideally, with your current hardware, I would set it up like this and see what happens.

First, you can never fully saturate a 10 GbE connection with just spinning disks (unless you stripe everything with no redundancy: great performance, but that's basically RAID 0 = if any drive fails you lose your data).

So set up a test bench like this (it can only store about 4 TB, but it should be pretty fast):

Intel E5 processors
(If you are connecting through a RAID card, use it in HBA mode.)
128 GB RAM
vdevs in mirrors (2 drives per vdev): 4 vdevs with 2 HGST drives each.
Then use your 1.9 TB SSDs as cache drives. (I have found that the choice between a SLOG and L2ARC depends on the data you are storing: if you have a lot of data but each user is accessing their own files, then L2ARC is best to use. If you have multiple users opening a database file and simultaneously using the data, then a ZIL on a mirrored SLOG is best.) From my experience with both FreeNAS and OpenZFS, the easiest way to improve overall performance is to add SSDs as cache (L2ARC) to a zpool; note that L2ARC is a read cache, while writes are absorbed by RAM (the ARC) first and the SLOG only helps synchronous writes.
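For reference, adding either device class is a one-liner; the pool name `tank` and the device names below are hypothetical placeholders, not taken from this thread:

```shell
# Sketch only: pool name "tank" and devices ada4..ada6 are placeholders.

# Add one SSD as an L2ARC (read cache) device:
zpool add tank cache ada4

# Add two SSDs as a mirrored SLOG (separate ZIL) for synchronous writes:
zpool add tank log mirror ada5 ada6
```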

If you add SSDs as vdevs to a zpool that also contains spinning-disk vdevs, it won't do much. ZFS is not a tiered storage platform, so your pool will only operate at roughly the speed of the average vdev.

Second, jumbo frames have a lot to do with the transfer rate. A good 10 GbE switch will make all the difference.

Open up the terminal and run zpool iostat -v while doing a data transfer and see how each vdev takes data. You will notice your cache takes most of it, and once the transfer is done it will take some time to flush out to the pool.
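As a concrete example (the pool name `tank` is a placeholder), adding an interval argument makes the per-vdev distribution easy to watch live during a copy:

```shell
# Print per-vdev I/O statistics every 5 seconds while the copy runs.
# "tank" is a placeholder pool name; omit it to show all pools.
zpool iostat -v tank 5
```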

Also, the way in which you create vdevs matters.

A single raidz vdev will improve read speed but will have roughly the write speed of a single drive.
2 raidz vdevs will improve read speed two-fold, but write speed will only improve a little.
3 raidz vdevs will improve read speed two-fold and write speed two-fold.

The fastest vdev layout for spinning disks is always mirrors (but you lose half your raw storage because it mirrors the drives = RAID 10).

vdevs in mirrors (2 drives per vdev) will scale more linearly as you add more mirrored vdevs.
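To make the mirrored layout concrete, a striped-mirror pool built from the eight HGST drives might look like this (the pool name and device names are hypothetical, not taken from the thread):

```shell
# Sketch of a 4 x 2-way striped-mirror ("RAID 10") pool.
# Pool name "tank" and devices da0..da7 are placeholders.
zpool create tank \
    mirror da0 da1 \
    mirror da2 da3 \
    mirror da4 da5 \
    mirror da6 da7
```

Each additional mirror vdev adds both read and write throughput, which is why this layout scales more linearly than stacking raidz vdevs.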
 

kdragon75

Wizard
Joined
Aug 7, 2016
Messages
2,457
Let's step back from the numbers... HOW are you transferring the files? SMB? AFP? iSCSI?

Is any other data on the pool? What is the client: Mac, PC, or something else? There are an insane number of variables at play here. We could talk about the pool layout, storage interface type (SAS, NVMe, SATA), network protocol, network interface and driver types, thermals, and a lot more. We need to know exactly how your setup looks on both the client and server side.
 