Quick Summary: ESXi NFS sequential writes seem to be capped at 250 MB/s when talking to a beefy NFS pool over 10GbE.
NOTE: This isn't the usual "why are sync writes slow" question. At least I don't think so.
Box #1
VMware ESXi 6.0 U2
Supermicro X9SRE-F, E5-1650V2, 16GB ECC memory
Intel X710-DA4 10GbE
1 local SSD with 1 test VM VMDK
Box #2
FreeNAS 9.10.2 U2
Supermicro X9SRE-F, E5-1650V2, 64GB ECC memory
Intel X710-DA4 10GbE
1 pool:
- 3 x Intel DC S3710 200GB striped SLOG
- 3 x Samsung SSD 850 Pro 1TB striped
- atime=off, sync=always
- recordsize: I've tried 16K and 128K
- Local tests show this pool can sustain sync writes at 700 MB/s
Both boxes are connected via DAC. iperf testing shows that the boxes can talk at nearly 10 Gb/s in either direction. I've tried with and without jumbo frames.
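For completeness, this is roughly how I ran the iperf checks (hostnames are placeholders). Comparing a single stream against parallel streams is worth doing here, because ESXi's NFSv3 client uses one TCP connection per datastore, so single-stream throughput is what actually matters:

```shell
# On the FreeNAS box (server side):
iperf -s

# On the client side, over the same DAC link:
iperf -c freenas.example.lan -t 30        # single TCP stream
iperf -c freenas.example.lan -t 30 -P 4   # four parallel streams

# If a single stream tops out well below the -P 4 aggregate, the
# per-connection throughput of the path itself is the limit -- and
# the NFS datastore traffic rides exactly one such connection.
```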
On the test VM, I added a 40 GB thin-provisioned disk that's served from the FreeNAS box via NFS. When I run various disk tests (Iometer, CrystalDiskMark), the sequential write tests on this disk seem to be capped at around 250 MB/s even though the FreeNAS filesystem can easily exceed that.
When I check the FreeNAS graphs, I don't see any CPU, memory, or disk bottlenecks. Running zpool iostat -v 1 on the FreeNAS box, I see the writes spread evenly across the striped SLOG and the striped data disks.
The most curious thing is that the network graphs on ESXi and FreeNAS during the write tests show a nearly flat ceiling at 2 Gb/s.
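Note that the ~250 MB/s disk-test cap and the ~2 Gb/s network ceiling are the same number in different units, so this looks like one limit observed in two places rather than two separate bottlenecks. Quick sanity check of the arithmetic:

```python
# The ~250 MB/s sequential-write cap seen in the guest and the
# ~2 Gb/s ceiling on the network graphs are the same limit.
cap_mb_per_s = 250                        # observed write throughput, MB/s
cap_gbit_per_s = cap_mb_per_s * 8 / 1000  # MB/s -> Gb/s (decimal units)
print(f"{cap_mb_per_s} MB/s = {cap_gbit_per_s} Gb/s")  # → 250 MB/s = 2.0 Gb/s
```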
Is there something in ESXi or FreeNAS that limits/caps the NFS write performance? Maybe a setting that defaults to a single stream/thread/connection that needs to be increased?
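One thing I've started looking at on the ESXi side (setting name from memory; please verify it exists on your build before trusting this): NFSv3 in ESXi has no multi-connection knob that I can find, but the per-datastore queue depth is exposed as an advanced setting:

```shell
# Show the current per-datastore NFS queue depth (ESXi 6.x
# advanced setting; name may differ on other releases):
esxcli system settings advanced list -o /NFS/MaxQueueDepth

# The commonly cited tweak is to LOWER this value for arrays that
# get congested, e.g.:
esxcli system settings advanced set -o /NFS/MaxQueueDepth -i 64
# -- but that addresses congestion, not a throughput ceiling, and
# as far as I can tell nothing here adds extra TCP connections.
```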