9.2.1.6 - iSCSI & ESXi 5.5 Slow Performance


ndboost

Explorer
Alright, so here is my layout:

FreeNAS 9.2.1.6-RC
Intel G2030
32GB DDR3-1600 ECC Kingston
Supermicro X9SCM-F
Intel PRO/1000 PT quad-port gigabit NIC
Dual onboard gigabit NICs

A single ESXi host with a dedicated vDS switch for iSCSI traffic and two separate VMkernel ports on different subnets; on the FreeNAS side, a single portal with two IPs on matching separate subnets.

I am getting horrendous write performance on larger files and I can't figure out why.

stg01 has dual gigabit LACP-bonded NICs on VLAN 20 (192.168.20.12) for NFS.
It also has two gigabit NICs on VLAN 40 (192.168.40.98 & 192.168.41.99) dedicated to iSCSI traffic.

All VMs are on VLAN 20 (192.168.20.x); ESXi is configured to use 192.168.40.100 and 192.168.41.100 for VMkernel iSCSI traffic.

Local stg01 write to /mnt/vmfs, 10GB file: ~846 Mbit/s (3x WD Raptor 3000HLHS SATA2, RAIDZ1)
Local stg01 write to /mnt/vault, 10GB file: ~1663 Mbit/s (5x WD Red 2TB SATA3, RAIDZ1)

NFS write from an Ubuntu VM on VLAN 20 to /exports/downloads (/mnt/vault/downloads), 10GB file: ~568 Mbit/s
iSCSI write inside a VM to /tmp (VMDK mounted via iSCSI in ESXi with two paths), 10GB file: ~56 Mbit/s

After configuring iSCSI to be on separate subnets and its own VLAN 40, there was a slight improvement:
iSCSI write inside a VM to /tmp (VMDK mounted via iSCSI), 50MB file: ~424 Mbit/s
iSCSI write inside a VM to /tmp (VMDK mounted via iSCSI), 500MB file: ~76.8 Mbit/s

I don't think I'm CPU-bound, but that very well may be it; at this point I'm about ready to start throwing parts at it and hoping something sticks, lol. I have a 30GB SSD coming for a ZIL, as a few people have recommended such a thing for NFS writes and iSCSI writes. I realize I should ideally have at least 64GB of memory, but this board caps at 32GB. We're talking about a 6TB pool here, nothing too major.

Edit: here are a bunch of images of my configs and the ESXi 5.5 performance reporting for that iSCSI storage device:

http://i.imgur.com/neqQkkT.png
http://i.imgur.com/TjkCaLm.png
http://i.imgur.com/uAf0eDu.png
http://i.imgur.com/xLtpZbU.png
http://i.imgur.com/zI3LvZB.png
http://i.imgur.com/LUS7gWA.png
http://i.imgur.com/Hj15YtD.png
 

ndboost

Explorer
Forgot to mention MTU is 1500. I can try setting it to something higher to see if that might help?
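
A minimal sketch of checking and temporarily bumping the MTU from the FreeNAS shell (the interface names em0/em1 are examples, not from this thread; a persistent change would go through the FreeNAS GUI's network interface options rather than the shell):

Code:
# Check the current MTU on the iSCSI-facing interfaces ("em0"/"em1" are example names)
ifconfig em0
ifconfig em1

# Temporarily raise the MTU to 9000 for testing (does not persist across reboots)
ifconfig em0 mtu 9000
ifconfig em1 mtu 9000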
 

cyberjock

Inactive Account
I don't think I'm CPU-bound, but that very well may be it; at this point I'm about ready to start throwing parts at it and hoping something sticks, lol. I have a 30GB SSD coming for a ZIL, as a few people have recommended such a thing for NFS writes and iSCSI writes. I realize I should ideally have at least 64GB of memory, but this board caps at 32GB. We're talking about a 6TB pool here, nothing too major.

I hate to say it, but you *are* going to need a ZIL for NFS. But the pool size isn't your problem here. The problem is the workload. VMs are horrendous to use on ZFS without some damn powerful hardware. As someone that owns that motherboard, I can tell you that it is not a particularly good board for running VMs from, because of the 32GB limit. VMs are literally a random read and random write workload. That's the worst conceivable workload for ZFS you can possibly come up with. Your options on how to get more performance are between zero and none. I used to run a single VM from my box, and I did it only for testing. I knew better than to expect it to perform, and it didn't perform well in the slightest.

I can appreciate your disappointment and desire not to buy more hardware, but if the ZIL doesn't bring things up, you're going to be back here after you get it in service, and the only other good option is to drop some cash on a platform that is appropriate for VMs.

Keep in mind that a ZIL is useless for iSCSI. So if you were wanting to go with iSCSI and looking for performance wins, you are SOL, bro. Sorry.
 

jgreco

Resident Grinch
I hate to say it, but you *are* going to need a ZIL for NFS.

The fortunate bit is that you always have a ZIL... every pool has one. The unfortunate bit is that you meant that he needs a SLOG device to offload the ZIL onto.

pedantry, pedantry...
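
As a minimal sketch of the distinction, attaching a dedicated SLOG device to an existing pool looks roughly like this from the shell (the pool name "tank" and disk "da6" are example values, not from this thread; FreeNAS would normally do this through the GUI volume manager):

Code:
# Attach a dedicated log (SLOG) device so the pool's ZIL lives on it
# ("tank" and "da6" are example names)
zpool add tank log da6

# Verify the log vdev appears under the pool
zpool status tank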
 

cyberjock

Inactive Account
Yeah, yeah....
 

ndboost

Explorer
Alright, that's what you and I were talking about on IRC a week or so ago, cyberjock. I'll give the ZIL a shot to boost NFS performance and concede that iSCSI might not get any better. Looks like I know what I want for Christmas :) lol...
 

newlink

Dabbler
You have to enable jumbo frames (in the virtual switches, LANs, and in FreeNAS as well) to reach a decent speed. If you have round robin enabled, you also need to change the IOPS limit to 1 (1000 is the default).
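
A rough sketch of what that looks like on the ESXi 5.5 side (the vSwitch name and the device ID naa.xxxx are placeholders; the real device ID comes from the device list):

Code:
# Raise the MTU on the iSCSI vSwitch to enable jumbo frames ("vSwitch1" is an example)
esxcli network vswitch standard set --vswitch-name=vSwitch1 --mtu=9000

# List devices to find the iSCSI LUN's naa. identifier
esxcli storage nmp device list

# Set Round Robin and drop the IOPS limit from the default 1000 to 1
# ("naa.xxxx" is a placeholder for the real device ID)
esxcli storage nmp device set --device=naa.xxxx --psp=VMW_PSP_RR
esxcli storage nmp psp roundrobin deviceconfig set --device=naa.xxxx --type=iops --iops=1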
 

ndboost

Explorer
You have to enable jumbo frames (in the virtual switches, LANs, and in FreeNAS as well) to reach a decent speed. If you have round robin enabled, you also need to change the IOPS limit to 1 (1000 is the default).
How do I enable jumbo frames in FreeNAS? I've done it in the various parts of ESXi.
 

cyberjock

Inactive Account
You have to enable jumbo frames (in the virtual switches, LANs, and in FreeNAS as well) to reach a decent speed. If you have round robin enabled, you also need to change the IOPS limit to 1 (1000 is the default).

That's not overly helpful advice. He's not even saturating one link, so adding more isn't going to solve his problem. ;)
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Alright, that's what you and I were talking about on IRC a week or so ago, cyberjock. I'll give the ZIL a shot to boost NFS performance and concede that iSCSI might not get any better. Looks like I know what I want for Christmas :) lol...

If the iSCSI resources are separate zvols/datasets, you could manually set sync=always and they'd make use of the SLOG then.

Which model is the 30GB SSD? Certain ones make better or worse devices for SLOG. Ideally you want SLC NAND or battery-backed RAM, but quality MLC (e.g. Intel DC series) with overprovisioning can work well too.
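
A minimal sketch of the sync=always suggestion, assuming a pool/zvol named tank/iscsi-vol (the names are examples, not from this thread):

Code:
# Force every write to the iSCSI zvol through the ZIL (and the SLOG, if one is attached)
zfs set sync=always tank/iscsi-vol

# Confirm the property took effect
zfs get sync tank/iscsi-vol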
 

ndboost

Explorer
Actually, after upgrading to 9.2.1.8-RELEASE and enabling the experimental kernel iSCSI target, I'm getting way better writes now. These numbers seem pretty high, though.

Code:
sudo dd if=/dev/zero of=1g.img bs=1M count=500    # 116 MB/s  (~928 Mbit/s)
sudo dd if=/dev/zero of=1g.img bs=1M count=2000   # 76.1 MB/s (~696 Mbit/s)

I can't test anything larger yet, as none of my VMs are provisioned with big enough disks. I am provisioning a new VM now with a 40GB disk; I'll be able to test once that's done.

If the iSCSI resources are separate zvols/datasets, you could manually set sync=always and they'd make use of the SLOG then.

Which model is the 30GB SSD? Certain ones make better or worse devices for SLOG. Ideally you want SLC NAND or battery-backed RAM, but quality MLC (e.g. Intel DC series) with overprovisioning can work well too.
It's some Kingston desktop-class SSD I had lying around. Not sure really, I haven't installed it yet. I have a dedicated pool and a single zvol for the iSCSI interface to my ESXi host.

I'd like to use the ZIL for both the NFS shares (which are on a different pool) and the iSCSI vol as well, if possible.
 

bestboy

Contributor
I cannot really contribute to a solution, but here are some of my thoughts:

a) Did you try the new CTL iSCSI implementation in FreeNAS? The old istgt implementation has performance issues; IIRC something about threading and userland limitations.
b) Are reads handled as bad as writes?
c) Regarding jumbo frames: be advised that back in the day, the generator polynomial for the CRC32 checksum in the Ethernet frame trailer was designed for payloads of 1500 bytes.
 

cyberjock

Inactive Account
If the iSCSI resources are separate zvols/datasets, you could manually set sync=always and they'd make use of the SLOG then.

Which model is the 30GB SSD? Certain ones make better or worse devices for SLOG. Ideally you want SLC NAND or battery-backed RAM, but quality MLC (e.g. Intel DC series) with overprovisioning can work well too.

He's trying to improve performance. sync=standard with iSCSI will always, always, always be faster than sync=always, unless you can show me an SSD that is infinitely fast. Literally, sync=standard makes async writes infinitely fast, and the bottleneck is the network latency to respond, process that info, etc.
 

ndboost

Explorer
Looks like a large write of 10GB came out to 23.4 MB/s (~187.2 Mbit/s).
 

cyberjock

Inactive Account
Looks like a large write of 10GB came out to 23.4 MB/s (~187.2 Mbit/s).

Yeah, those are pretty disappointing speeds. :(

Do these things in this order; do not skip any steps:

1. Get rid of your damn release candidate. That's unsat and you should be spanked for using an RC that is that old. Upgrade to 9.2.1.8-RELEASE.
2. Afterwards, enable the experimental kernel iSCSI.
3. Create a zvol for your iSCSI target.
4. Set up the zvol and use it.
5. Redo your tests.

That's going to be "the" way to use iSCSI in 9.3+. The "experimental" isn't really experimental, as it's what is used in FreeBSD 10. It's just 'experimental' because it hasn't been extensively tested for TrueNAS users, but it does work and it should be the fastest choice you have.
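
A rough sketch of step 3 from the shell, assuming a pool named tank (the zvol name and size are example values; in FreeNAS this would normally be done in the GUI under Storage):

Code:
# Create a 500GB zvol to back the iSCSI extent ("tank" and the size are examples)
zfs create -V 500G tank/iscsi-vol

# Confirm the zvol exists
zfs list -t volume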
 

ndboost

Explorer
Yeah, those are pretty disappointing speeds. :(

Do these things in this order; do not skip any steps:

1. Get rid of your damn release candidate. That's unsat and you should be spanked for using an RC that is that old. Upgrade to 9.2.1.8-RELEASE.
2. Afterwards, enable the experimental kernel iSCSI.
3. Create a zvol for your iSCSI target.
4. Set up the zvol and use it.
5. Redo your tests.

That's going to be "the" way to use iSCSI in 9.3+. The "experimental" isn't really experimental, as it's what is used in FreeBSD 10. It's just 'experimental' because it hasn't been extensively tested for TrueNAS users, but it does work and it should be the fastest choice you have.
All of the above were already done as of the last two write tests :P
 

cyberjock

Inactive Account
Well poop. Guess you're out of options there. :/

Go get a system with more RAM and an L2ARC. ;)
 

ndboost

Explorer
Alright, dropped the 30GB ZIL in. I realized it can only attach to a single pool, so I attached it to my main pool. It seems it either hurt performance or I have something else hogging disk bandwidth.

Writing to the pool which has the ZIL attached, it looks to me like performance increased by ~400 Mbit/s:

Code:
[root@stg01] /mnt/vault/downloads# dd if=/dev/zero of=1g.img bs=1M count=10000
10000+0 records in
10000+0 records out
10485760000 bytes transferred in 41.223373 secs (254364436 bytes/sec)


This next one is writing to an NFS share across a gig line, not to my VMDK iSCSI target zvols. I'm still working on moving a VM over to test that out again.

Code:
dd if=/dev/zero of=1g.img bs=1M count=10000
10485760000 bytes (10 GB) copied, 188.823 s, 55.5 MB/s


Reading that same file (I didn't benchmark reads before adding the ZIL):
Code:
dd if=1g.img of=/dev/null bs=16k
1048576000 bytes (1.0 GB) copied, 9.38472 s, 112 MB/s


Code:
[root@stg01] ~# zilstat -M
      N-MB     N-MB/s N-Max-Rate       B-MB     B-MB/s B-Max-Rate    ops  <=4kB 4-32kB >=32kB
         0          0          0          0          0          0      0      0      0      0
         0          0          0          0          0          0      0      0      0      0
         0          0          0          0          0          0      0      0      0      0
        72         72         72        143        143        143   1095      0      0   1095
        21         21         21         31         31         31    238      0      0    238
        50         50         50         74         74         74    565      0      0    565
        80         80         80        119        119        119    912      0      0    912
        30         30         30         44         44         44    338      0      0    338
        73         73         73        108        108        108    824      0      0    824
        63         63         63         95         95         95    732      0      0    732
        63         63         63         93         93         93    713      0      0    713
        71         71         71        110        110        110    847      1      0    846
        61         61         61         90         90         90    694      0      0    694
        61         61         61         90         90         90    694      0      0    694
        43         43         43         63         63         63    488      0      0    488
        67         67         67         99         99         99    756      0      0    756
        61         61         61         90         90         90    693      0      0    693
        61         61         61         90         90         90    694      0      0    694
        73         73         73        107        107        107    822      0      0    822
        52         52         52         78         78         78    597      0      0    597
        46         46         46         68         68         68    519      0      0    519
         0          0          0          0          0          0      0      0      0      0
         0          0          0          0          0          0      0      0      0      0
         0          0          0          0          0          0      0      0      0      0
^C
         0          0          0          0          0          0      0      0      0      0
 

cyberjock

Inactive Account
Well, you kind of failed your benchmarking exam. Doing tests from /dev/zero when you didn't disable compression is pointless; performance will be higher because those zeros compress very well.

You also never told me what the ZIL drive is, but I'm almost willing to bet the cost of a nice ZIL-appropriate drive that your drive isn't a good fit for a ZIL, regardless of whatever numbers you have. ;)

There's a bunch of other comments I'd make, but to be honest, I'm not really interested in explaining it much tonight. I'm tired from work, etc., etc. I have no doubt that once you try using this, you'll realize this isn't working out. So I'll let you keep doing it.
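
For reference, a minimal sketch of a dd test that sidesteps the compression problem (the dataset name is an example, not from this thread): write to a throwaway dataset with compression disabled, so the zeros are actually committed to disk instead of compressing away.

Code:
# Create a scratch dataset with compression disabled ("vault/benchtest" is an example name)
zfs create -o compression=off vault/benchtest

# With compression off, this measures real write throughput
dd if=/dev/zero of=/mnt/vault/benchtest/10g.img bs=1M count=10000

# Clean up when done
zfs destroy vault/benchtest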
 