Transfer speeds drop off severely during file transfers


drwoodcomb

Explorer
Joined
Sep 15, 2016
Messages
74
I set up a 10 Gigabit SFP+ connection between my Windows 10 PC and my FreeNAS server. I can't figure out why the transfer speed starts out really fast and then drops off dramatically.

In this test I am transferring a 24GB folder containing 50 movies from my FreeNAS iSCSI target (10x Western Digital Se 3TB in RAIDZ2) to my Windows 10 machine (Samsung 960 Evo 1TB NVMe M.2 drive).


https://1drv.ms/i/s!Alu7MuObVBQMsUfv1mCCYnQVbGnR

The transfer is pretty good in this scenario. My issue is with the opposite direction.

In this next test I am transferring a 24GB folder containing 50 movies from my Windows 10 machine (Samsung 960 Evo 1TB NVMe M.2 drive) to my FreeNAS iSCSI target (10x Western Digital Se 3TB in RAIDZ2).

https://1drv.ms/i/s!Alu7MuObVBQMsUU5-MJd_JcFG94i


The transfer starts out very fast and then drops off about halfway through. I am wondering if anyone has any idea what could be causing this. Are there some settings on the SFP+ card that could be responsible? Any help would be greatly appreciated.


FreeNAS SPECS:

Operating System: FreeNAS 11.1

Motherboard:
Supermicro X9SCM-iiF
CPU: Intel Xeon E3-1240 v2 (4c/8t)
RAM: 24GB DDR3 ECC UDIMMs

Storage HBA: IBM ServeRAID M1015 (LSI SAS9220-8i) SAS/SATA PCIe HBA, flashed to IT mode
Storage Pool: 12X WD SE 3TB Hard Drive WD3000F9YZ in RAIDZ2
Redundant Boot Device: 2x Samsung BAR 32GB USB 3.0 flash drives (MUF-32BA/AM)

Ethernet Interface: Mellanox ConnectX-2 MNPA19-XTR PCIe x8 10GbE SFP+ network card
SFP+ Transceiver: Finisar FTLX8571D3BCL 10GBASE-SR/SW SFP+ transceiver
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
How full is the pool? And why, oh why, are you using iSCSI for this?

https://forums.freenas.org/index.ph...res-more-resources-for-the-same-result.28178/

My guess would be that it is a combination of RAIDZ2, which you should not use with iSCSI, and possibly some fragmentation on the storage that's causing it to slow down. Because ZFS has no visibility into what you are actually trying to accomplish, you're really shooting yourself in the foot.

Even if you had used Samba, the recommended amount of RAM for 12x 3TB HDDs is 36GB, and for iSCSI it may be substantially more.
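
If you want to check, "zpool list" shows both of those things at a glance; the pool name "tank" below is just a placeholder for whatever you called your volume:

# CAP is how full the pool is, FRAG is free-space fragmentation
zpool list tank

# per-dataset breakdown of where the space is actually going
zfs list -o space tank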
 

drwoodcomb

Explorer
Joined
Sep 15, 2016
Messages
74
I wanted to make a correction. I accidentally wrote 12x 3TB HDDs, but it's actually 10 HDDs. I realize that I technically still have less RAM than recommended, since for 10 drives I should have 30GB of RAM and I only have 24GB.

Thank you for sharing the informative link on iSCSI. I chose iSCSI for two reasons:

1) I wanted to be able to run video games directly from the FreeNAS server, and for whatever reason a lot of games won't run from an SMB share but will run from an iSCSI device.

2) I had previously read (wrongly, it seems) that iSCSI had less overhead because it accesses data at the block level.

As for how full my pool is: to be honest, I find it really confusing to read what FreeNAS is trying to tell me. For example, in Windows my iSCSI drive says 4.65TB free of 5.99TB, but in FreeNAS it says 8.6TB used and 12.2TB available.

Here is a screen capture from Storage>Volumes on FreeNAS. Maybe you can make heads or tails of it:


https://1drv.ms/i/s!Alu7MuObVBQMsUzBKqYw0ZRCTw_R

I also wanted to mention that I tried testing transfer speeds between my client and one of my FreeNAS SMB shares, and it showed improved results. Do you know if there is a way to optimize iSCSI? You said RAIDZ2 could be contributing. What configuration would you recommend on a 10-HDD array for maximum performance? The only reason I've been using iSCSI is to get games to run from the server. For everything else I am using SMB shares.

Here are the transfer images:

SMB to Client:


https://1drv.ms/f/s!Alu7MuObVBQMsU7tlhjWrLs3yV-Y

Client to SMB:


https://1drv.ms/f/s!Alu7MuObVBQMsU_3NhH8p7CNHVro
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
It feels like I had this discussion like four hours ago.

https://forums.freenas.org/index.php?threads/iscsi-using-twice-the-space.61123/#post-434657

Close 'nuff. No happy answers here. You can throw more resources at this and make it work faster, I suspect. If you want fast, use mirrors and leave lots of free space. And by free space I don't mean the "80%" that is commonly quoted for file-based ZFS storage. If you only fill your mirror pool 10-25%, and have an appropriately sized ARC, you will get a very fast and very responsive system on the write side of things.
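
For reference, the layout I mean is striped mirrors. FreeNAS would normally build this through the volume manager GUI, but the equivalent command-line form, with placeholder device names (da0 through da9), is roughly:

# ten disks as five 2-way mirror vdevs striped together
# (about 15TB of usable capacity with 3TB drives, of which you'd only
#  want to fill 10-25% for iSCSI)
zpool create tank \
  mirror da0 da1 \
  mirror da2 da3 \
  mirror da4 da5 \
  mirror da6 da7 \
  mirror da8 da9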
 

drwoodcomb

Explorer
Joined
Sep 15, 2016
Messages
74
I think it’s safe to say we’re all very grateful that there are people on here like you willing to help us dummies out.

I will definitely look into ARC. In the iSCSI article that you linked above, you wrote that iSCSI is passing requests for certain blocks over the network, to read or write them. From what you wrote, I get the impression that the FreeNAS operating system is somewhat less aware of what is happening, since the client is controlling the filesystem.

If that's the case, will ARC be beneficial for iSCSI? Also, I think I read somewhere that ARC can only be as big as the amount of system RAM. So in my case, since I have 24GB of RAM, the SSD I use for ARC can only be 24GB? Is that accurate? Sorry for all the questions.
 

gpsguy

Active Member
Joined
Jan 22, 2012
Messages
4,472
btw, I've been meaning to ask whether you still have your 24x2TB iSCSI target, with 8 striped vdevs of 3-way mirrors? How much RAM do/did you have in that system? I've been meaning to use it as an example of iSCSI done correctly.

If you want fast, use mirrors and leave lots of free space
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
btw, I've been meaning to ask whether you still have your 24x2TB iSCSI target, with 8 striped vdevs of 3-way mirrors? How much RAM do/did you have in that system? I've been meaning to use it as an example of iSCSI done correctly.

Not in production. Basically it was too capex-intensive, and to do it right I'd need a pair, because the only decent way to handle software upgrades is to evacuate a datastore and then shut it down, update, bring it up, evacuate the other unit, shut that down, update, bring it up. So if you start doing the math, if you want to make sure that you don't cross the 60% capacity mark, that means that each filer needs to be kept at 30% or less.

The big limiting factor was the 2TB disk size. That worked out to a 16TB pool, but if you can only use half of that, that's ~8TB, and if you have a pair of them then that's around ~5TB each, or ~10TB total usable between two filers. That's around where I'd like to be, but the cost to get there is pretty high, and once you've purchased the storage, you're kind of locked in.

Seagate is now making 3, 4, and 5TB disks in the 2.5"/15mm form factor, and the 5TB was going for $240. The current VM usage in the shared iSCSI inventory here is already ~6.5TB, and that doesn't count several TB of stuff on local DAS RAID1 that ought to be on shared storage. Going forward I'd probably want a solution capable of handling 12-14TB, which probably means two filers with about a 24TB pool each, so that during an evacuation I only get to around 60% utilization on either unit.

That works out to probably being workable on the 3TB drives, but really the cost difference isn't that great for the 4's. Some people have been shucking them from external USB enclosures, but those drives appear to be crippled. Apparently the 5TB's are also available that way, allegedly crippled as well. So if we limit this to the ~$180 for a legitimate 4TB, that's $4,300 per host just for drives. Plus another host. So it'd probably be around $14K to get to where I'd like to be with FreeNAS for VM storage.
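
Spelling that math out, assuming the same 24-bay, 8 x 3-way-mirror layout as the old box:

24 drives x $180 (legit 4TB) ≈ $4,300 per host in drives alone
8 vdevs x 3TB ≈ 24TB pool per filer (≈32TB with the 4TB drives)
14TB of VMs parked on one 24TB filer during an evacuation ≈ 58% utilization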

So there is 128GB of RAM and 2 x 512GB NVMe SSDs in there for 1TB of L2ARC. The really cool thing is that almost all actively used data is read from ARC or L2ARC, meaning that if you sit there and run zpool iostat, you often see long runs of zeroes in the read column. It's pretty crazy, and very fast. Storage nirvana, maybe. :smile: I'd love to have another one and 4TB disks, but I just can't justify the expense.
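
If you want to watch that yourself, the command is just (pool name is a placeholder):

# print pool-wide I/O stats every 5 seconds
zpool iostat tank 5

On this box the read operations and read bandwidth columns sit at or near zero most of the time, because nearly everything is served out of ARC/L2ARC.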
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
I think it’s safe to say we’re all very grateful that there are people on here like you willing to help us dummies out.

I will definitely look into ARC. In the iSCSI article that you linked above, you wrote that iSCSI is passing requests for certain blocks over the network, to read or write them. From what you wrote, I get the impression that the FreeNAS operating system is somewhat less aware of what is happening, since the client is controlling the filesystem.

If that's the case, will ARC be beneficial for iSCSI? Also, I think I read somewhere that ARC can only be as big as the amount of system RAM. So in my case, since I have 24GB of RAM, the SSD I use for ARC can only be 24GB? Is that accurate? Sorry for all the questions.

ARC refers to the system memory used to cache for ZFS. It will use as much of it as it can, because every time you pull something from memory it is 1000x faster than pulling it from disk.

L2ARC is ARC-on-SSD, where the system takes ARC entries and pushes the less heavily used ones out to SSD. The idea is that if something is being used semi-frequently, it might be faster to pull it from SSD than to go out to the pool and pull it from HDD again.

L2ARC is largely pointless in most cases if you don't have a system that is busy, because the average person will not notice the difference between a block being retrieved from RAM, SSD, or HDD on a system that isn't busy. They're all pretty fast.

L2ARC isn't recommended until you get up to at least 64GB RAM, or at least max out your system RAM (older X9 systems with 32GB cap).

iSCSI tends to cause a lot of fragmentation over time, as writes update existing data. This slows down reads of data, even things you think "should be" sequential, and ZFS administrators mitigate this through use of gobs of ARC/L2ARC. See immediately preceding post.
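
If you want to see how much the ARC is actually absorbing on your box, FreeNAS exposes the FreeBSD ZFS sysctls; and an L2ARC device, if you ever add one down the road, is just a cache vdev. Pool and device names below are placeholders:

# current ARC size in bytes, plus hit/miss counters
sysctl kstat.zfs.misc.arcstats.size
sysctl kstat.zfs.misc.arcstats.hits kstat.zfs.misc.arcstats.misses

# attach an SSD as L2ARC (only worth doing once you have a lot more RAM)
zpool add tank cache nvd0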
 

drwoodcomb

Explorer
Joined
Sep 15, 2016
Messages
74
That makes perfect sense. Thanks for all your help!
 