websmith
Dabbler
- Joined: Sep 20, 2018
- Messages: 38
Hi,
I have a weird problem.
I have upgraded my FreeNAS server from a hard-drive pool to an SSD-only pool.
I only use the pool for my VMs via ESXi.
The FreeNAS box is a SuperMicro X10SRi-F with a Xeon E5-1620 v3, 128GB DDR4 ECC RAM, and a Mellanox ConnectX-3 running Ethernet, connected to a Mellanox 40/56Gbit switch.
The ESXi server has dual Xeon E5-2650L v3 CPUs, 256GB RAM, and a Mellanox ConnectX-3, also connected to the Mellanox switch.
The SSDs are attached to a SAS3 Avago 9400 controller, since I wanted to add a few NVMe drives as well.
The SSDs are all 960GB Intel DC D3-S4510 drives, which have read/write speeds of around 400-500MB/s according to Intel ARK.
I have partitioned an Intel Optane 900P 280GB, with one chunk for a SLOG and the remainder for an L2ARC.
My pool is laid out as follows:
Code:
  pool: vms
 state: ONLINE
  scan: scrub repaired 0 in 0 days 00:15:23 with 0 errors on Sun Sep 1 00:15:23 2019
config:

        NAME                                            STATE     READ WRITE CKSUM
        vms                                             ONLINE       0     0     0
          mirror-0                                      ONLINE       0     0     0
            gptid/b209b095-77b2-11e9-a7ed-a0369f09f4e8  ONLINE       0     0     0
            gptid/b27d8dae-77b2-11e9-a7ed-a0369f09f4e8  ONLINE       0     0     0
          mirror-1                                      ONLINE       0     0     0
            gptid/b331bac9-77b2-11e9-a7ed-a0369f09f4e8  ONLINE       0     0     0
            gptid/b39c11c0-77b2-11e9-a7ed-a0369f09f4e8  ONLINE       0     0     0
          mirror-2                                      ONLINE       0     0     0
            gptid/b40ecb77-77b2-11e9-a7ed-a0369f09f4e8  ONLINE       0     0     0
            gptid/b466218f-77b2-11e9-a7ed-a0369f09f4e8  ONLINE       0     0     0
          mirror-4                                      ONLINE       0     0     0
            gptid/14f69fa3-d94b-11e9-b4eb-00110a6c4808  ONLINE       0     0     0
            gptid/15306c6f-d94b-11e9-b4eb-00110a6c4808  ONLINE       0     0     0
        logs
          gptid/fe943ab6-a541-11e9-b39f-00110a6c4808    ONLINE       0     0     0
        cache
          gptid/0c30960d-a542-11e9-b39f-00110a6c4808    ONLINE       0     0     0

errors: No known data errors
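The log and cache vdevs above are the two partitions carved out of the Optane 900P. For reference, a sketch of how such a split can be created on FreeBSD; the device name and the 16G SLOG size are assumptions, not my exact commands:

```shell
# Assumed device name; check with `nvmecontrol devlist` or `geom disk list`.
# 16G is a commonly suggested SLOG size; the rest goes to L2ARC.
gpart create -s gpt nvd0
gpart add -t freebsd-zfs -a 1m -s 16G -l slog0 nvd0   # SLOG partition
gpart add -t freebsd-zfs -a 1m -l l2arc0 nvd0         # remainder for L2ARC
zpool add vms log gpt/slog0
zpool add vms cache gpt/l2arc0
```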
Zfs properties
Code:
root@vmnas:/mnt/vms/esxi # zfs get all vms
NAME  PROPERTY                 VALUE                  SOURCE
vms   type                     filesystem             -
vms   creation                 Thu May 16 10:17 2019  -
vms   used                     529G                   -
vms   available                2.84T                  -
vms   referenced               96K                    -
vms   compressratio            1.97x                  -
vms   mounted                  yes                    -
vms   quota                    none                   local
vms   reservation              none                   local
vms   recordsize               128K                   local
vms   mountpoint               /mnt/vms               local
vms   sharenfs                 off                    default
vms   checksum                 on                     default
vms   compression              lz4                    local
vms   atime                    off                    local
vms   devices                  on                     default
vms   exec                     on                     default
vms   setuid                   on                     default
vms   readonly                 off                    default
vms   jailed                   off                    default
vms   snapdir                  hidden                 default
vms   aclmode                  passthrough            local
vms   aclinherit               passthrough            local
vms   canmount                 on                     default
vms   xattr                    off                    temporary
vms   copies                   1                      default
vms   version                  5                      -
vms   utf8only                 off                    -
vms   normalization            none                   -
vms   casesensitivity          sensitive              -
vms   vscan                    off                    default
vms   nbmand                   off                    default
vms   sharesmb                 off                    default
vms   refquota                 none                   local
vms   refreservation           none                   local
vms   primarycache             all                    default
vms   secondarycache           all                    default
vms   usedbysnapshots          0                      -
vms   usedbydataset            96K                    -
vms   usedbychildren           529G                   -
vms   usedbyrefreservation     0                      -
vms   logbias                  latency                default
vms   dedup                    off                    default
vms   mlslabel                                        -
vms   sync                     standard               local
vms   refcompressratio         1.00x                  -
vms   written                  96K                    -
vms   logicalused              956G                   -
vms   logicalreferenced        13.5K                  -
vms   volmode                  default                default
vms   filesystem_limit         none                   default
vms   snapshot_limit           none                   default
vms   filesystem_count         none                   default
vms   snapshot_count           none                   default
vms   redundant_metadata       all                    default
vms   org.freenas:description                         local
Tunables
Code:
root@vmnas:/mnt/vms/esxi # zfs get all vms
NAME  PROPERTY                 VALUE                  SOURCE
vms   type                     filesystem             -
vms   creation                 Thu May 16 10:17 2019  -
vms   used                     529G                   -
vms   available                2.84T                  -
vms   referenced               96K                    -
vms   compressratio            1.97x                  -
vms   mounted                  yes                    -
vms   quota                    none                   local
vms   reservation              none                   local
vms   recordsize               128K                   local
vms   mountpoint               /mnt/vms               local
vms   sharenfs                 off                    default
vms   checksum                 on                     default
vms   compression              lz4                    local
vms   atime                    off                    local
vms   devices                  on                     default
vms   exec                     on                     default
vms   setuid                   on                     default
vms   readonly                 off                    default
vms   jailed                   off                    default
vms   snapdir                  hidden                 default
vms   aclmode                  passthrough            local
vms   aclinherit               passthrough            local
vms   canmount                 on                     default
vms   xattr                    off                    temporary
vms   copies                   1                      default
vms   version                  5                      -
vms   utf8only                 off                    -
vms   normalization            none                   -
vms   casesensitivity          sensitive              -
vms   vscan                    off                    default
vms   nbmand                   off                    default
vms   sharesmb                 off                    default
vms   refquota                 none                   local
vms   refreservation           none                   local
vms   primarycache             all                    default
vms   secondarycache           all                    default
vms   usedbysnapshots          0                      -
vms   usedbydataset            96K                    -
vms   usedbychildren           529G                   -
vms   usedbyrefreservation     0                      -
vms   logbias                  latency                default
vms   dedup                    off                    default
vms   mlslabel                                        -
vms   sync                     always                 local
vms   refcompressratio         1.00x                  -
vms   written                  96K                    -
vms   logicalused              956G                   -
vms   logicalreferenced        13.5K                  -
vms   volmode                  default                default
vms   filesystem_limit         none                   default
vms   snapshot_limit           none                   default
vms   filesystem_count         none                   default
vms   snapshot_count           none                   default
vms   redundant_metadata       all                    default
vms   org.freenas:description                         local
I have recently upgraded to a 40Gbit/s network.
Network speed between ESXi -> FreeNAS:
Code:
root@vmnas:/mnt/vms/esxi # iperf -s
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 4.00 MByte (default)
------------------------------------------------------------
[  4] local 10.10.10.201 port 5001 connected with 10.10.10.182 port 32904
[  5] local 10.10.10.201 port 5001 connected with 10.10.10.182 port 60258
[  6] local 10.10.10.201 port 5001 connected with 10.10.10.182 port 35163
[  7] local 10.10.10.201 port 5001 connected with 10.10.10.182 port 42069
[  8] local 10.10.10.201 port 5001 connected with 10.10.10.182 port 47079
[  9] local 10.10.10.201 port 5001 connected with 10.10.10.182 port 35295
[ 10] local 10.10.10.201 port 5001 connected with 10.10.10.182 port 42450
[ 11] local 10.10.10.201 port 5001 connected with 10.10.10.182 port 31983
[ ID] Interval       Transfer     Bandwidth
[  4]  0.0-10.0 sec  4.88 GBytes  4.19 Gbits/sec
[ 10]  0.0-10.0 sec  4.36 GBytes  3.74 Gbits/sec
[  9]  0.0-10.0 sec  4.53 GBytes  3.89 Gbits/sec
[  5]  0.0-10.0 sec  3.36 GBytes  2.87 Gbits/sec
[  6]  0.0-10.0 sec  4.11 GBytes  3.51 Gbits/sec
[  8]  0.0-10.1 sec  5.11 GBytes  4.35 Gbits/sec
[ 11]  0.0-10.1 sec  4.07 GBytes  3.45 Gbits/sec
[  7]  0.0-10.3 sec  4.07 GBytes  3.39 Gbits/sec
[SUM]  0.0-10.3 sec  34.5 GBytes  28.7 Gbits/sec
Which is not 40Gbit/s, but hopefully good enough to handle the speed of my pool.
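Each stream above tops out around 4Gbit/s, so one thing I could test is whether a single TCP connection goes faster with a larger window and a longer run (iperf2 flags; the window size is a guess, not something I have tuned yet):

```shell
# From the ESXi side against the FreeNAS box (same IP as the test above).
# -w bumps the TCP window, -t 30 runs longer for a steadier average.
iperf -c 10.10.10.201 -w 2M -t 30
```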
When I write locally using sync writes I get:
Code:
root@vmnas:/mnt/vms/esxi # dd if=/dev/zero of=test2.bin bs=1M count=16k conv=sync
16384+0 records in
16384+0 records out
17179869184 bytes transferred in 29.987023 secs (572910125 bytes/sec)
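To rule out the SLOG device itself, FreeBSD's diskinfo has a synchronous-write latency test. Note that it writes to the target, so it should only be pointed at a spare partition; the device path here is an assumption:

```shell
# -w allows writes, -S runs the synchronous write latency test.
# Point this at an UNUSED partition, not the in-use SLOG partition.
diskinfo -wS /dev/nvd0p3
```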
If I remove the SLOG I get:
Code:
root@vmnas:/mnt/vms/esxi # dd if=/dev/zero of=test~32.bin bs=1M count=16k conv=sync
16384+0 records in
16384+0 records out
17179869184 bytes transferred in 5.910511 secs (2906664182 bytes/sec)
Which is more in the ballpark of what I expected for 4 SSD mirrors, considering that I am writing all zeros to a compressed pool.
But when doing it via NFS it is just damn slow in comparison (373 MB/sec):
Code:
[root@nasexsi:/vmfs/volumes/c871d057-89ed8108] time dd if=/dev/zero of=test4.bin bs=1M count=16k
16384+0 records in
16384+0 records out
real    0m 46.05s
user    0m 24.61s
sys     0m 0.00s
[root@nasexsi:/vmfs/volumes/c871d057-89ed8108]
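The 373 MB/sec figure is just the 16 GiB payload divided by the elapsed wall time:

```shell
# 17179869184 bytes (16 GiB) transferred in 46.05 s of wall time, in MB/s
awk 'BEGIN { printf "%.0f\n", 17179869184 / 46.05 / 1e6 }'
# -> 373
```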
Turning off compression on the pool and testing again locally, it drops to almost the speed of a single disk:
Code:
root@vmnas:/mnt/vms/esxi # dd if=/dev/zero of=test5.bin bs=1M count=16k conv=sync
16384+0 records in
16384+0 records out
17179869184 bytes transferred in 35.156742 secs (488664989 bytes/sec)
I am a bit baffled that my SSD-only pool performs this "relatively" poorly compared to my spinner pool, which with just 6 disks could give me 350MB/s via NFS, no problem.
Why is my SSD pool so slow with sync forced (sync=always), even locally?
And why is the NFS speed even slower? Is the limit in the NFS implementation in either ESXi or FreeNAS?
When I write via NFS with a single dd, I never see the network utilization on the FreeNAS box go above 4Gbit/s, which fits with the approximate speed I get.
It's not that I want to write heaps of data very fast; I am aware that my pool is great for VMs (lots of IOPS, low latency, etc.), but it is sad that a single thread cannot push data any faster when needed.
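One thing I could still test is whether the limit is per-stream: running several dd writers in parallel on the ESXi datastore (file names hypothetical) should show whether the aggregate NFS throughput scales past what one thread gets:

```shell
# Four concurrent 4 GiB writers on the NFS datastore; if aggregate
# throughput scales well past 373 MB/s, the bottleneck is per-stream.
for i in 1 2 3 4; do
  dd if=/dev/zero of=par$i.bin bs=1M count=4k &
done
wait
```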
Any good ideas of what to try to make it run faster?
Or do I just have to be happy with what I have?
Thanks in advance
Bjørn