Eds89
Contributor · Joined Sep 16, 2017 · Messages: 122
Hi all,
I have built an ESXi/FreeNAS all-in-one (AIO) box for home use, and while synthetic tests within my VMs using tools such as CrystalDiskMark show very reasonable read and write speeds, when I start copying actual files around, things take a dump and drop to sub-1Gbps speeds.
For info, the machine is:
- Supermicro X9SRL-F motherboard with a 6-core Xeon E5-2618L v2
- 64GB DDR3 ECC RAM
- 24-bay chassis with LSI 9207-8i controller and Intel 24-port SAS expander
- Samsung 960 Evo 250GB NVMe SSD as a local datastore
- Samsung SM953 NVMe PCIe SSD as SLOG
- Intel quad-port gigabit Ethernet adapter
- ESXi 6.5 free, latest build, installed on and booting from a USB drive
- 960 Evo SSD set as the datastore hosting the FreeNAS VM (and test VMs)
- FreeNAS VM given 4 vCPUs and 32GB RAM, with one LAN vNIC for SMB clients and one for ESXi loopback iSCSI storage (both VMXNET3, so 10Gbps capable)
- FreeNAS and ESXi connected to the same virtual switch, with no physical adapters attached, for the storage network
- FreeNAS has the LSI controller passed through as a PCIe device, as well as the entire SM953 NVMe SSD used as SLOG
- FreeNAS has a pool of 2 mirrored vdevs consisting of 4x 7200rpm 2TB Hitachi drives, with the SM953 SSD as SLOG and a 64GB vmdk stored on the 960 Evo as L2ARC
- One 1TB zvol created on this pool with sync=always, and iSCSI configured to allow ESXi to connect back to this VM/zvol and create a datastore (see the sketch after this list)
- Other VMs then stored on this iSCSI datastore
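In case it helps with diagnosis, this is roughly how I've been sanity-checking the layout from the FreeNAS side. A minimal sketch; "tank" and "tank/vmstore" are placeholder names for illustration, not my actual pool/zvol:

```python
#!/usr/bin/env python3
"""Minimal sanity checks for the pool layout described above.
'tank' and 'tank/vmstore' are placeholder names -- substitute your own."""
import subprocess

def run(cmd):
    # Run a command and return its stdout as text.
    return subprocess.run(cmd, capture_output=True, text=True, check=True).stdout

# Pool topology: should list two mirror vdevs, plus the log (SLOG)
# and cache (L2ARC) devices.
print(run(["zpool", "status", "tank"]))

# Confirm the zvol really has sync=always set.
print(run(["zfs", "get", "sync", "tank/vmstore"]))

# Per-vdev throughput in one-second samples, run while a copy is going.
subprocess.run(["zpool", "iostat", "-v", "tank", "1", "10"])
```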
If I copy an approx. 1GB file over the virtual switch from a test VM stored on the 960 Evo SSD, the transfer completes within a couple of seconds, and in Task Manager I see a spike up to about 5Gbps receive on the virtual NIC of the destination VM.
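For the copy tests I'm just timing a straight file copy rather than trusting the Task Manager graph; something along these lines (a rough sketch, and the paths are made up for illustration):

```python
import os
import shutil
import time

SRC = r"C:\temp\testfile.bin"              # hypothetical source path
DST = r"\\fileserver\share\testfile.bin"   # hypothetical SMB destination

size = os.path.getsize(SRC)
start = time.perf_counter()
shutil.copyfile(SRC, DST)
elapsed = time.perf_counter() - start

# Note: this measures until copyfile returns, which may be before the
# destination has flushed everything to stable storage.
print(f"{size / 2**20:.0f} MiB in {elapsed:.1f}s = {size / elapsed / 2**20:.0f} MiB/s")
```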
If I run the same test with an 8GB file size, reads come in at around 710MB/s, with writes at about 350MB/s.
If I do the same copy over the network with an approx. 9GB file for comparison, throughput spikes at the beginning, then sits at around 60MB/s, then spikes again towards the end of the copy (see attachment).
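To see where in the transfer it slows down, rather than just the average, I can log per-second throughput during the copy with something like this (a sketch; the chunk size and paths are arbitrary):

```python
import time

SRC = "big.bin"                 # hypothetical ~9GB test file
DST = "/mnt/share/big.bin"      # hypothetical destination on the share
CHUNK = 4 * 1024 * 1024         # copy in 4 MiB chunks

with open(SRC, "rb") as src, open(DST, "wb") as dst:
    window_start = time.perf_counter()
    window_bytes = 0
    while True:
        buf = src.read(CHUNK)
        if not buf:
            break
        dst.write(buf)
        window_bytes += len(buf)
        now = time.perf_counter()
        if now - window_start >= 1.0:
            # Per-second throughput; writes may still land in the OS
            # cache first, so parts of the run can look faster than the disks.
            print(f"{window_bytes / (now - window_start) / 2**20:.0f} MiB/s")
            window_start, window_bytes = now, 0
```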
I cannot figure out this behaviour. Even if the underlying pool of disks becomes the limit, it is capable of much more than 60MB/s!
When doing the 1GB file copy, I can see my SLOG hitting about 600MB/s writes, and with 32GB of RAM, surely it can cope with a 9GB file much better than this? It fits in RAM, it fits on the SLOG, so why do speeds drop so badly with larger files like this?
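The only theory I've come up with so far (and I may well be misreading the tunables, so treat this as an assumption) is that ZFS doesn't buffer incoming writes in all of RAM: as I understand it, dirty data is capped by vfs.zfs.dirty_data_max, which defaults to 10% of RAM and is itself capped at 4GiB. Back-of-envelope:

```python
# Rough arithmetic on how much of a large copy ZFS will buffer in RAM.
# Assumption: dirty data is capped at min(10% of RAM, 4 GiB) by default
# (vfs.zfs.dirty_data_max / vfs.zfs.dirty_data_max_max) -- worth verifying.
ram = 32 * 2**30                        # 32 GiB given to the FreeNAS VM
dirty_max = min(ram // 10, 4 * 2**30)   # default dirty data cap
file_size = 9 * 2**30                   # the ~9 GB test file

print(f"dirty buffer: {dirty_max / 2**30:.1f} GiB")             # ~3.2 GiB
print(f"remainder: {(file_size - dirty_max) / 2**30:.1f} GiB")  # ~5.8 GiB
# If that's right, once the buffer fills, sustained speed is whatever
# the mirrored vdevs can absorb, not RAM or SLOG speed.
```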
Cheers
Eds