iSCSI ESXi Transfers dying out with FreeNAS


dambrosioj

Dabbler
Joined
Apr 1, 2015
Messages
23
So I have been using FreeNAS for a couple of years and everything has been great, but I recently bought a Dell R710 server to run ESXi on. I got everything set up, and I have an iSCSI datastore for the VMs, but the transfer speed with FreeNAS is really strange. I am on 10GbE, and the transfer of a large file will start out around 600 MB/s, which is great, but about halfway through the transfer it always dies and transfers the rest at around 70 MB/s. I have no idea why this is happening.

I wasn't sure whether I needed to add more RAM or create a SLOG. Before I did anything, I figured I would ask the experts. Below is my configuration:

FreeNAS:
i3
16GB ECC RAM
6x 7200 RPM Seagate drives in RAIDZ2
Two Mellanox 10GbE cards (one for iSCSI, one for CIFS/NFS transfers), both connected to ESXi
MTU set to 9000

ESXi:
Dell R710
Two Mellanox 10GbE cards directly connected to FreeNAS
MTU set to 9000
64GB RAM
Two 6-core Xeons

I have my VMs running off the iSCSI datastore. When I transfer something over SMB or NFS to a share on FreeNAS, it runs at a steady 300 MB/s. But when I transfer from an SMB or NFS share to the VM, that is where the issue happens: it starts around 600 MB/s and then just dies to 70 MB/s.

Can anyone point me to what I am doing wrong? I read that since I'm not maxed out on RAM, an L2ARC would probably just hurt things.
 

Spearfoot

He of the long foot
Moderator
Joined
May 13, 2015
Messages
2,478
I wasn't sure whether I needed to add more RAM or create a SLOG. Before I did anything, I figured I would ask the experts. Below is my configuration:

FreeNAS:
i3
16GB ECC RAM
6x 7200 RPM Seagate drives in RAIDZ2
Two Mellanox 10GbE cards (one for iSCSI, one for CIFS/NFS transfers), both connected to ESXi
MTU set to 9000
From the FreeNAS Hardware Recommendations:
To use iSCSI, install at least 16GB of RAM if performance is not critical, or at least 32GB of RAM if good performance is a requirement.
Basically, FreeNAS loves RAM... so the more RAM the better. You currently don't have enough to deliver good iSCSI performance. So add more RAM...

The safe way to implement VM storage on FreeNAS is to use a ZIL SLOG device and turn synchronous writes ON for the VM dataset. In a nutshell, not just any SSD will work well as a SLOG device; you want one with low latency, supercapacitor/battery backup, fast writes, and high durability. I mention several suitable devices (the NVMe-based Intel DC P3700 & 750 and the lower-performance SATA-based Intel DC S3700) in this thread:

https://forums.freenas.org/index.ph...csi-vs-nfs-performance-testing-results.46553/
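
If it helps, here's the general shape of it from the FreeNAS shell. The pool name (tank), dataset name (tank/vm-storage), and SSD device name (nvd0) below are just placeholders; substitute whatever your system actually uses:

    # Attach the SSD as a dedicated SLOG device for the pool
    # (assumes pool 'tank' and an NVMe SSD at nvd0 -- check with 'nvmecontrol devlist')
    zpool add tank log nvd0

    # Force synchronous writes on the zvol/dataset backing the iSCSI extent
    zfs set sync=always tank/vm-storage

    # Verify
    zpool status tank
    zfs get sync tank/vm-storage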

Using jumbo packets can be problematic; I would drop back to the default MTU size until you get everything else (RAM and SLOG device) figured out.
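
Dropping back is quick to test from the command line on both ends. The interface, vSwitch, and vmkernel names below are examples; yours will likely differ:

    # On FreeNAS (FreeBSD) -- Mellanox interfaces typically show up as mlxen0:
    ifconfig mlxen0 mtu 1500

    # On the ESXi host, reset both the vSwitch and the vmkernel port:
    esxcli network vswitch standard set --vswitch-name=vSwitch1 --mtu=1500
    esxcli network ip interface set --interface-name=vmk1 --mtu=1500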

Good luck!
 

dambrosioj

Dabbler
Joined
Apr 1, 2015
Messages
23
From the FreeNAS Hardware Recommendations:
Basically, FreeNAS loves RAM... so the more RAM the better. You currently don't have enough to deliver good iSCSI performance. So add more RAM...

The safe way to implement VM storage on FreeNAS is to use a ZIL SLOG device and turn synchronous writes ON for the VM dataset. In a nutshell, not just any SSD will work well as a SLOG device; you want one with low latency, supercapacitor/battery backup, fast writes, and high durability. I mention several suitable devices (the NVMe-based Intel DC P3700 & 750 and the lower-performance SATA-based Intel DC S3700) in this thread:

https://forums.freenas.org/index.ph...csi-vs-nfs-performance-testing-results.46553/

Using jumbo packets can be problematic; I would drop back to the default MTU size until you get everything else (RAM and SLOG device) figured out.

Good luck!

OK, so I did some more testing this week, and things got even stranger.

I now have 32GB of RAM installed and haven't really noticed any difference. I set all network connections back to the default MTU of 1500, started a transfer, and boom! 500 MB/s.

I thought I had solved it; then I made another transfer and it dropped right back down to around 100 MB/s max. I thought, well, that is strange, so I disabled the network connection in the VM and re-enabled it. The first transfer: again, boom! 500 MB/s, no problem and steady. The second, third, and fourth transfers just got worse and worse until they were at about 50 MB/s and fluctuating.

Why does this happen?
 

Spearfoot

He of the long foot
Moderator
Joined
May 13, 2015
Messages
2,478
OK, so I did some more testing this week, and things got even stranger.

I now have 32GB of RAM installed and haven't really noticed any difference. I set all network connections back to the default MTU of 1500, started a transfer, and boom! 500 MB/s.

I thought I had solved it; then I made another transfer and it dropped right back down to around 100 MB/s max. I thought, well, that is strange, so I disabled the network connection in the VM and re-enabled it. The first transfer: again, boom! 500 MB/s, no problem and steady. The second, third, and fourth transfers just got worse and worse until they were at about 50 MB/s and fluctuating.

Why does this happen?
Hmmm... I dunno. It could be that you're getting high transfer rates until the cache is exhausted, after which they plummet.
How are you measuring transfer rates? iperf? ATTO Disk Benchmark? CrystalDiskMark?
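
If you want to rule the network in or out, iperf between the two boxes takes the disks out of the picture entirely (the IP below is just an example; use your FreeNAS iSCSI address):

    # On FreeNAS (server side):
    iperf -s

    # On a VM or another host (client side):
    iperf -c 10.0.0.10 -t 30 -i 5

A clean 10GbE link should report somewhere in the 9+ Gbits/sec range; if iperf looks fine but file copies still collapse, the problem is in the storage stack rather than the wire.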
 

dambrosioj

Dabbler
Joined
Apr 1, 2015
Messages
23
Hmmm... I dunno. It could be that you're getting high transfer rates until the cache is exhausted, after which they plummet.
How are you measuring transfer rates? iperf? ATTO Disk Benchmark? CrystalDiskMark?

Sounds like the cache is getting exhausted. I am guessing there is no way to improve that without a ZIL SLOG device?

I was just transferring files to measure speed, for real-world results.

When I use a measuring tool like Parkdale I get good speeds... at least I think so, since that is with about 10 VMs running. Just to note, though: I run the "real-world" file transfer tests with no VMs powered up.
 

Attachments

  • Speed.png (267.7 KB)

Spearfoot

He of the long foot
Moderator
Joined
May 13, 2015
Messages
2,478
Sounds like the cache is getting exhausted. I am guessing there is no way to improve that without a ZIL SLOG device?

I was just transferring files to measure speed, for real-world results.

When I use a measuring tool like Parkdale I get good speeds... at least I think so, since that is with about 10 VMs running. Just to note, though: I run the "real-world" file transfer tests with no VMs powered up.
I was talking about the ARC. You can go to Reporting->ZFS in the FreeNAS GUI and see how large your ARC is and the hit ratio, among other things. Ideally, your Hit Ratio will always be >= 90%. You may need yet more RAM... or it could be that your system is performing just dandy and we're obsessing over benchmarks, which don't always accurately reflect real-world performance.
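If you'd rather watch the raw counters during a transfer, these are the standard FreeBSD ZFS sysctls that FreeNAS exposes; the hit ratio is just hits / (hits + misses):

    # Current ARC size and ceiling, in bytes:
    sysctl kstat.zfs.misc.arcstats.size
    sysctl kstat.zfs.misc.arcstats.c_max

    # Cumulative hits and misses:
    sysctl kstat.zfs.misc.arcstats.hits
    sysctl kstat.zfs.misc.arcstats.misses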
 

dambrosioj

Dabbler
Joined
Apr 1, 2015
Messages
23
I was talking about the ARC. You can go to Reporting->ZFS in the FreeNAS GUI and see how large your ARC is and the hit ratio, among other things. Ideally, your Hit Ratio will always be >= 90%. You may need yet more RAM... or it could be that your system is performing just dandy and we're obsessing over benchmarks, which don't always accurately reflect real-world performance.

My ARC hit ratio is above 90%. The problem is that the real-world tests are the ones giving me inconsistent transfer speeds.

The benchmarks all report back fine, with what I would expect. I guess maybe in real-world tests I am expecting too much?
 

Spearfoot

He of the long foot
Moderator
Joined
May 13, 2015
Messages
2,478
My ARC hit ratio is above 90%. The problem is that the real-world tests are the ones giving me inconsistent transfer speeds.

The benchmarks all report back fine, with what I would expect. I guess maybe in real-world tests I am expecting too much?
Could be.
 

depasseg

FreeNAS Replicant
Joined
Sep 16, 2014
Messages
2,874
You could test setting sync=disabled on the datastore to see if that has any impact.
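
Something like this, assuming the extent lives on a dataset called tank/vm-storage (substitute your own name) -- just remember sync=disabled is unsafe for VM storage, so put it back once you've measured:

    # Temporarily disable synchronous writes on the dataset backing the extent:
    zfs set sync=disabled tank/vm-storage

    # ...run the transfer tests, then restore the default:
    zfs set sync=standard tank/vm-storage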
 
