Slow write speed

Status
Not open for further replies.

Ekhaskel

Cadet
Joined
Mar 28, 2017
Messages
6
Hi
I am trying to use FreeNAS 9.10 in ESXi 5.5 environment.
To check FreeNAS performance, I installed three virtual machines on the same ESXi host: a Windows server (for tests) and two servers as iSCSI targets - a Windows 2016 storage server and FreeNAS 9.10.
I configured iSCSI targets on the 2016 storage server and on FreeNAS, connected both to the same ESXi host via iSCSI, created two ESXi datastores, and created two disks for the Windows server, one on each datastore/iSCSI target.

I found that FreeNAS shows great read performance and poor write performance.

Windows 2016 storage server:

Sequential Read (Q= 32,T=32) : 1226.038 MB/s
Sequential Write (Q= 32,T=32) : 1486.867 MB/s
Random Read 4KiB (Q= 32,T=32) : 317.606 MB/s [ 77540.5 IOPS]
Random Write 4KiB (Q= 32,T=32) : 336.897 MB/s [ 82250.2 IOPS]
Sequential Read (T= 1) : 980.413 MB/s
Sequential Write (T= 1) : 592.062 MB/s
Random Read 4KiB (Q= 1,T= 1) : 17.581 MB/s [ 4292.2 IOPS]
Random Write 4KiB (Q= 1,T= 1) : 11.352 MB/s [ 2771.5 IOPS]

FreeNAS server:

Sequential Read (Q= 32,T=32) : 1835.275 MB/s
Sequential Write (Q= 32,T=32) : 494.319 MB/s
Random Read 4KiB (Q= 32,T=32) : 305.985 MB/s [ 74703.4 IOPS]
Random Write 4KiB (Q= 32,T=32) : 72.096 MB/s [ 17601.6 IOPS]
Sequential Read (T= 1) : 1039.625 MB/s
Sequential Write (T= 1) : 122.260 MB/s
Random Read 4KiB (Q= 1,T= 1) : 17.148 MB/s [ 4186.5 IOPS]
Random Write 4KiB (Q= 1,T= 1) : 10.962 MB/s [ 2676.3 IOPS]

On exactly the same virtual hardware, FreeNAS provided much better read performance than Windows iSCSI (1.8 GB/s vs 1.2 GB/s) but much worse write performance (494 MB/s vs 1.48 GB/s).

BTW - a dd-based test shows great read/write speed on the HDD inside the FreeNAS virtual machine:

dd if=/dev/zero of=test bs=1G count=8
8+0 records in
8+0 records out
8589934592 bytes transferred in 4.833425 secs (1777194125 bytes/sec)

dd of=/dev/null if=test bs=1G count=8
8+0 records in
8+0 records out
8589934592 bytes transferred in 3.223010 secs (2665190125 bytes/sec)
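One caveat worth noting about these numbers: /dev/zero is almost perfectly compressible, so on a dataset with lz4 compression enabled (the FreeNAS 9.10 default) a dd from /dev/zero largely measures CPU and ARC speed rather than disk speed. A hedged sketch of a more honest variant, using incompressible data (paths here are illustrative; on the pool you would write into the dataset mountpoint, e.g. /mnt/FreeNAS_5TB, rather than /tmp):

```shell
# Generate an incompressible source file first (256 MiB of random data),
# then time a copy of it; compression cannot inflate this result.
dd if=/dev/urandom of=/tmp/rand.bin bs=1M count=256    # incompressible source
dd if=/tmp/rand.bin of=/tmp/ddtest.bin bs=1M           # timed write of random data
```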

Any ideas where the problem is?

Thanks

E.
 

Ekhaskel

Cadet
Joined
Mar 28, 2017
Messages
6
Still no clue. Every test, with any target and disk configuration, shows the same consistent result - FreeNAS has great read speed and poor write speed. 8-(
Any ideas?
 

darkwarrior

Patron
Joined
Mar 29, 2015
Messages
336
Hello there,

let's start with the basics:
What are your system specs?
What is your current pool layout?
Can you post the output of zpool status -v?
 

Ekhaskel

Cadet
Joined
Mar 28, 2017
Messages
6
Hi
ESXi 5.5 on Dual E5-2660/64GB RAM/RAID 10 Adaptec 72450 22 HDD SATA 5TB Seagate Enterprise / FusionIO PCIe 1.2TB
3 virtual machines on the same ESXi: a Windows server (for tests), and 2 servers as iSCSI targets - a Windows 2016 storage server and FreeNAS 9.10.
Regarding pool layout - I tested a few different configurations with the same result. The first thing I have to note is that because the system is virtual, the disks are virtual too. I am not passing the Adaptec controller and/or HDDs through to FreeNAS; I just created virtual disks on the RAID10 array and on the SSD and connected them to the FreeNAS virtual machine, exactly as for the StarWind machine.
- 5TB disk is on RAID10
- logs and cache disks are on the FusionIO PCIe SSD

I attached screenshots from the volume manager. I created 2 disks - one to share via NFS, the second to share via iSCSI.

da1 - on SSD for cache
da2 and da3 - on SSD for logs
da4 - RAID10 on HDD

[root@freenas ~]# zpool status -v
  pool: FreeNAS_5TB
 state: ONLINE
  scan: none requested
config:

        NAME                                            STATE     READ WRITE CKSUM
        FreeNAS_5TB                                     ONLINE       0     0     0
          gptid/35e80e68-14c3-11e7-8349-005056be56c5    ONLINE       0     0     0
        logs
          mirror-1                                      ONLINE       0     0     0
            gptid/367cd5fe-14c3-11e7-8349-005056be56c5  ONLINE       0     0     0
            gptid/36c3dfbc-14c3-11e7-8349-005056be56c5  ONLINE       0     0     0
        cache
          gptid/3630fded-14c3-11e7-8349-005056be56c5    ONLINE       0     0     0

errors: No known data errors

  pool: freenas-boot
 state: ONLINE
  scan: none requested
config:

        NAME          STATE     READ WRITE CKSUM
        freenas-boot  ONLINE       0     0     0
          da0p2       ONLINE       0     0     0

errors: No known data errors
[root@freenas ~]#

Thanks for any ideas.

E.
 

Attachments

  • freenas.png (16.3 KB)
  • freenas2.png (20.6 KB)

darkwarrior

Patron
Joined
Mar 29, 2015
Messages
336
OK ...
So here we go:
You currently have the worst possible setup if you want to use FreeNAS & ZFS and you're after performance ...
Do yourself a favor and get a proper HBA (or at least pass the disks through in JBOD mode, if possible) to unleash the full write potential of ZFS on your machine.
Create 3 striped mirror VDEVs, for example, and you will see how fast it will fly ... ;)
ZFS wants full access to and control over the disks, without any additional layer in between that will (not might ...) create problems.
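For illustration, the striped-mirror layout suggested above might be created like this. This is only a sketch: the pool name and device names are hypothetical, and it assumes the disks are visible to FreeNAS directly via an HBA:

```shell
# Hypothetical example: six whole disks arranged as three mirrored
# vdevs; ZFS stripes writes across all three mirrors automatically,
# so write throughput scales with the number of vdevs.
zpool create tank \
  mirror da1 da2 \
  mirror da3 da4 \
  mirror da5 da6
```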

Additionally, it's counterproductive to put the SLOG and L2ARC onto the same device, because of how these devices work.
I presume you have created VMDKs for these devices as well, which adds yet more administrative overhead and slows everything down ...

BTW: You have a well-working iSCSI initiator in Windows. Why on earth are you using StarWind?
Nothing will give you better performance than a well-integrated kernel iSCSI driver (which StarWind is not ...)

Try to read up on these topics in the forum :)
 

Ekhaskel

Cadet
Joined
Mar 28, 2017
Messages
6
First of all - thanks for quick answers, I really appreciate your help.
Regarding StarWind - I performed a few tests and found that StarWind provides better performance than pure Windows; in addition, it has RAM/SSD cache functionality that the pure Windows 2016 storage server iSCSI target doesn't have - and I need it for better random access.
Regarding my current setup: I know that FreeNAS is designed to use software RAID and manage disks directly. In any case, I am trying to run it under ESXi and expect to see at least close to pure hardware performance, with some fee :cool:. As I see it, reads work properly, but writes are at about 20-30% of real hardware performance (I believe a 70-80% fee is too much :cool:).
Do you have any benchmarks of FreeNAS over a 10Gb LAN? What numbers can I expect?
Any clue how to tune the existing setup (with virtual disks via ESXi) to get better write performance?
BTW - I tested the system without the SLOG on SSD and got the same write performance.
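If the SLOG makes no measurable difference, one hedged way to check whether sync writes are in play at all is to watch the log vdev during a benchmark run; if the writes column in the "logs" section stays at zero, the workload isn't issuing sync writes and the SLOG can't help. Pool name taken from the zpool status output above:

```shell
# Print per-vdev I/O statistics once per second while the benchmark
# runs; interrupt with Ctrl-C when done.
zpool iostat -v FreeNAS_5TB 1
```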
Thanks a lot.
E.
 

snaptec

Guru
Joined
Nov 30, 2015
Messages
502
The question is not if you'll get problems, it's only WHEN.
FN really wants an HBA. HW RAID is neither supported nor recommended. I think you won't get much advice with this setup.
Are you using iSCSI sync writes?
If so, every write goes from FN to the ESXi layer, to the VMDK, to the HW RAID controller and back, as a sync. A long way to go.
You can test with sync disabled. But you won't run that live ^^
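As a sketch of that test (dataset name taken from the earlier zpool status output; sync=disabled is for benchmarking only, since it risks data loss on power failure):

```shell
# Temporarily disable sync writes to see whether ZIL latency is the
# write bottleneck:
zfs set sync=disabled FreeNAS_5TB
# ... run the write benchmark, then restore the safe default:
zfs set sync=standard FreeNAS_5TB
```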


Sent from iPhone using Tapatalk
 

darkwarrior

Patron
Joined
Mar 29, 2015
Messages
336
Windows 2016 has Storage Spaces with built-in storage tiering, so that beats StarWind as well. But I'm not a fan anyway.
That being said, it has nothing to do with the initial setup.

With your current choices and configurations, you're stuck and you will not be able to see the real benefits of ZFS.
The SLOG in your config is probably useless, because you're running everything on ASYNC anyway ...
Not a good thing if you care about your data ...
Additionally, you're even breaking bitrot detection on your pool ...
 

Ekhaskel

Cadet
Joined
Mar 28, 2017
Messages
6
Thanks. I will try to set up FN without HW RAID later.
Have you ever seen benchmarks of FN iSCSI over a 10Gb LAN? What would count as good results?
E.
 

snaptec

Guru
Joined
Nov 30, 2015
Messages
502
That depends on the use case, load, hardware (CPU, RAM), and pool setup (how many disks? mirrors? RAIDZ2? how many vdevs?) ...
 