With SSD ZIL and L2ARC, would NFS provide better throughput than iSCSI?

Status
Not open for further replies.

datnus

Contributor
Joined
Jan 25, 2013
Messages
102
Hi everyone,
I have tried dd on the local ZFS pool and could reach 250 MB/s.
However, if I dd from VMware ESXi to FreeNAS over iSCSI, I only get about 60 MB/s over a 2x1Gb link.

Btw: 32 GB RAM, 30 GB ZIL, 260 GB L2ARC.

With the SSD ZIL, I have tested NFS and seen that its latency is a bit higher than iSCSI in sync=always mode.

Has anyone here tested the throughput of NFS with ZIL and L2ARC compared to iSCSI?
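A minimal sketch of the kind of local dd throughput test described above. The target path is an assumption; point it at a file on the pool you actually want to measure (e.g. a dataset under /mnt):

```shell
# Hypothetical target; set TESTFILE to a path on the pool under test,
# e.g. /mnt/tank/ddtest. Defaults to /tmp so the sketch runs anywhere.
TESTFILE=${TESTFILE:-/tmp/ddtest}

# Stream 64 x 1 MiB sequential writes (FreeBSD dd spells the block size
# bs=1m). Use a total size larger than RAM if you want to defeat ARC caching.
dd if=/dev/zero of="$TESTFILE" bs=1M count=64

# Clean up the test file
rm -f "$TESTFILE"
```

Note that dd from /dev/zero measures best-case sequential throughput only, and the data compresses to nothing if compression is enabled on the dataset.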
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
You can't just do a comparison like you are asking. I mean, you 'can', but it won't add much value. Your pool's condition plays a part in performance too: how much free space the pool has and how that free space is distributed, which usually depends on how the pool has been used over its lifetime.
 

pbucher

Contributor
Joined
Oct 15, 2012
Messages
180
Yes, I've compared NFS with ESXi vs iSCSI. With iSCSI & sync=always vs NFS, I found NFS to be just a hair faster. That, combined with the convenience of having file-level access to the ESXi files on the NFS share, pretty much sold me on NFS.

Without knowing more about your system, I'd say your numbers are about right. The local dd is doing async writes; if you set sync=always on your dataset and rerun the local dd test, you can see what overhead NFS & ESXi add on top. On my system, with a very low-latency SLOG device and a 10Gb link to ESXi, I get about 100 MB/s.
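The apples-to-apples local test suggested above could be sketched like this (the dataset name tank/vmstore and its mountpoint are assumptions):

```shell
# Force every write on the dataset through the ZIL, the same path
# ESXi's sync writes over NFS would take
zfs set sync=always tank/vmstore

# Rerun the same local dd test against the dataset's mountpoint
dd if=/dev/zero of=/mnt/tank/vmstore/ddtest bs=1M count=4096

# Restore the inherited default (normally sync=standard) when done
zfs inherit sync tank/vmstore
rm /mnt/tank/vmstore/ddtest
```

The gap between this run and the plain async run is the cost of sync writes on your pool; whatever NFS/ESXi lose beyond that is protocol and network overhead.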
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
And a lot of people I've seen have found iSCSI to be a better choice than NFS, in particular because there are no sync writes with iSCSI. That's why I said that a direct NFS-to-iSCSI comparison isn't really fair: a lot of factors beyond just hardware specs affect the performance of both NFS and iSCSI.
 

pbucher

Contributor
Joined
Oct 15, 2012
Messages
180
And a lot of people I've seen have found iSCSI to be a better choice than NFS, in particular because there are no sync writes with iSCSI.

Well, that's the issue: protection-wise, you are really only a hair better off with default iSCSI vs NFS with sync=disabled for ESXi. It's six of one, half a dozen of the other; the risk is pretty much the same if you really dig into it. In other words, a fair comparison is default iSCSI vs NFS with sync=disabled, or iSCSI with sync=always vs NFS default. The fact that iSCSI does async by default makes it a bad choice to recommend to the average Joe on this forum.

Think about it: with default iSCSI, ESXi is pushing writes to the iSCSI device, which get buffered in RAM and flushed to disk periodically according to ZFS's own internal workings. If the OS kernel panics, or someone pulls the wrong power cord, you have just lost a bunch of data that ESXi thought was committed to disk. We often compare FreeNAS/ZFS to a hardware RAID card, so it's like running ESXi against a hardware RAID card that has a big cache but no battery backup for that cache, which is a very well-documented way to kill your VMFS volumes in ESXi. So I look at iSCSI with sync=standard as exactly that situation; iSCSI with sync=always plus a SLOG device is like adding the battery backup.

That's why I said that a direct NFS-to-iSCSI comparison isn't really fair: a lot of factors beyond just hardware specs affect the performance of both NFS and iSCSI.

Agreed, there are other factors to weigh in the comparison.
 

datnus

Contributor
Joined
Jan 25, 2013
Messages
102
NFS vs. iSCSI with sync=always, in terms of throughput.
The ZIL is there to help NFS match iSCSI on latency.
 

pbucher

Contributor
Joined
Oct 15, 2012
Messages
180
NFS vs. iSCSI with sync=always, in terms of throughput.
The ZIL is there to help NFS match iSCSI on latency.

A SLOG device for your ZIL is there to cut down the latency of anything that must be written to disk before an ack can be returned (NFS or otherwise). iSCSI with sync=always will use the ZIL; NFS will use it for sync writes (which is all ESXi issues).

NFS > iSCSI with sync=always for throughput from an ESXi Linux VM doing a ton of writes; not by much, but faster.
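To see which datasets will actually route writes through the ZIL, you can inspect the sync property; the pool and dataset names here are assumptions:

```shell
# sync=standard honors client sync requests, sync=always forces every
# write through the ZIL, sync=disabled ignores sync requests entirely
zfs get sync tank/nfs-vms tank/iscsi-extent

# Watch the log vdev absorb sync writes while a test runs
# (per-vdev stats, one-second intervals)
zpool iostat -v tank 1
```

If the log vdev shows no write activity during a sync-heavy test, the workload is not actually issuing sync writes on that dataset.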
 

ror

Dabbler
Joined
Sep 18, 2013
Messages
11
Well that's the issue, you are really only a hair better off(protection wise) with default iSCSI vs NFS /w sync=disabled for ESXi. It's really 6 or a half dozen of the other, the risk is pretty much the same if you really dig deep into it. In other words a fair comparison is default iSCSI vs NFS sync=disabled or iSCSI sync=always vs NFS default. Just because iSCSI by default is doing async makes it a bad choice to recommend to the average Joe on this forum.

Think about it with default iSCSI, ESXi is pushing writes to the iSCSI device which are getting buffered in RAM and dumped periodically by ZFS to disk according to ZFS's own internal workings, if the OS kernel panics, someone pulls the wrong power cord, you just lost a bunch of data that ESXi thought was commited to disk. We often compare FreeNAS/ZFS to a hardware RAID card, so it's like running your ESXi with a hardware RAID card with a big cache and not battery backup for the cache and that's a very well documented way to kill your VMFS volumes in ESXi. So I look at using iSCSI /w sync=standard as the same thing, iSCSI /w sync=always and a SLOG device(which is like the battery backup).

Then this would imply that unaware users running FreeNAS with iSCSI and ESXi are at great risk! Shouldn't iXsystems change FreeNAS's default iSCSI sync setting to always, to protect the data?

Now, I read on the BSD forum that the problem is with the istgt (the iSCSI daemon) implementation on FreeBSD. If you use OmniOS or OpenIndiana, the COMSTAR iSCSI implementation follows the standard VMware ESXi behavior of confirming writes.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Shouldn't iXSystems change the default FreeNAS configuration of iSCSI in terms of sync setting to always enable to protect the data?

Why? How would they even do that? You can put a file extent wherever you wish. Are you proposing that all ZFS datasets be created with sync=always?

As with most technology, it is the responsibility of the administrator to have some knowledge and understanding of the technology, and to read the fine manual, and to configure the system appropriately.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
Why? How would they even do that? You can put a file extent wherever you wish. Are you proposing that all ZFS datasets be created with sync=always?

As with most technology, it is the responsibility of the administrator to have some knowledge and understanding of the technology, and to read the fine manual, and to configure the system appropriately.

+1. You can't fix stupid. As the saying goes: "Engineer something so that a stupid person can use it, and someone will find someone stupid enough to not be able to use it."
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
While simultaneously locking out clever uses. (Hi Apple iOS closed ecosystem)
 

ror

Dabbler
Joined
Sep 18, 2013
Messages
11
Why? How would they even do that? You can put a file extent wherever you wish. Are you proposing that all ZFS datasets be created with sync=always?

As with most technology, it is the responsibility of the administrator to have some knowledge and understanding of the technology, and to read the fine manual, and to configure the system appropriately.

Well, by default NFS has sync enabled but iSCSI does not. So yes, I am proposing that once you enable the iSCSI service, the default becomes sync=always everywhere, because unaware users running iSCSI will be in a world of hurt after a power loss.

Why is the behavior of iSCSI with sync=standard different from NFS? Why can't it work just like NFS, where if the client asks for sync (as ESXi does), the sync write is honored? I read on the BSD forum that the issue is the istgt (iSCSI daemon) implementation in BSD not following the iSCSI standard. I think I read somewhere that with the COMSTAR iSCSI implementation on Solaris-based OSes, sync writes work as expected.

As an analogy: when a user installs Windows 7, the firewall is enabled by default to protect the user, but he/she can turn it off. The default config should always strive to protect the data and the system.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Well, by default NFS has sync enabled but iSCSI does not. So yes, I am proposing that once you enable the iSCSI service, the default becomes sync=always everywhere, because unaware users running iSCSI will be in a world of hurt after a power loss.

And so you propose to hurt other users in a substantially worse way: you want their overall performance to tank for normal fileserver operations if, and only if, they happen to configure iSCSI? Talk about a violation of POLA.

Why is the behavior of iSCSI with sync=standard different from NFS? Why can't it work just like NFS, where if the client asks for sync (as ESXi does), the sync write is honored? I read on the BSD forum that the issue is the istgt (iSCSI daemon) implementation in BSD not following the iSCSI standard. I think I read somewhere that with the COMSTAR iSCSI implementation on Solaris-based OSes, sync writes work as expected.

My guess would be that the design goals behind istgt aren't entirely compatible with your goals. Remember that FreeNAS didn't write the implementation; it is a careful arrangement of existing capabilities that allows non-UNIX power users to take advantage of a mature UNIX OS.

As an analogy: when a user installs Windows 7, the firewall is enabled by default to protect the user, but he/she can turn it off. The default config should always strive to protect the data and the system.

This isn't really a FreeNAS issue. The way I read this, it's a "you'd like a set of imperfect defaults to be made better for your unusual case, but worse for everyone else's, so that you don't have to take on the responsibility of reading the manual" issue.

People do all sorts of stupid things. They don't put a UPS on their system. Don't expect anyone to ship them a free UPS in order to strive to protect their data and system.

And the sync issue is hardly limited to FreeNAS. Pretty much every NAS has a similar issue, and the advice from several of them has been to "disable sync to increase performance."

It will be mildly interesting to see if the new in-kernel iSCSI target implementation addresses any of this in any way.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
ror,

I'm thinking the reason the defaults aren't as conservative as you might expect is that ZFS's own default is sync=standard, which honors sync write requests while allowing maximum performance for non-sync writes.

My guess is that this is one of those cases where two different software packages aren't optimized for each other out of the box. From past dealings with Unix servers, this is one of those situations where you expect the Unix server admin you are paying to maintain the server to get it right.

Not that I'm defending either "side" of the argument, just explaining why it might be like it is.
 

ror

Dabbler
Joined
Sep 18, 2013
Messages
11
Well, I don't know the inner workings of FreeNAS software development and the like, but maybe there should be a BIG sticky warning that using the iSCSI default config carries a serious risk of data loss and corruption on power failure.

I'm not here to argue with you FreeNAS gurus, but to point out the obvious with regard to data protection.
 

ror

Dabbler
Joined
Sep 18, 2013
Messages
11
Ok, I see your point. ;) Maybe if you put an "iSCSI data corruption WARNING!!!" in your signature, that would also help.

I have to admit that I never saw the iSCSI sync=standard warning in any stickies; it isn't "in my face" anywhere, and I only found it through further testing, after running into the sync-write issues myself.
 

pbucher

Contributor
Joined
Oct 15, 2012
Messages
180
It's not in any stickies, and I personally don't plan to add anything. At some point people have to learn for themselves, and I've spent so much time trying to get people to "RTFM" that I just give up. If they don't want to do the homework themselves, that's totally on them. Notice that I haven't updated my guide despite there being a new FreeNAS release; that's a first since I started writing the guide back at 8.0.4.

Agreed. If you are wading into the deep end of the pool by using ZFS & ESXi, you are big enough to read some documentation.

The only bad thing is that too many posts have been made saying "don't use NFS with ESXi, just use iSCSI instead." So the sheep just follow suit.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
It's funny: I did some looking a year or so ago, and more than one NAS product advised disabling sync with ESXi for performance reasons, with no discussion of the fallout... then, when the NAS product became VMware Certified, the manual was revised to delete that advice. Presumably to become VMware-safe. :smile:
 

ror

Dabbler
Joined
Sep 18, 2013
Messages
11
What about OpenSolaris, OmniOS, OpenIndiana, and Nexenta, which use COMSTAR for iSCSI instead of istgt? On those systems, where I assume sync=standard is the default config, would sync writes be honored? In other words, is it safe to use iSCSI there, compared to the danger on FreeNAS or FreeBSD?
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
What "danger"? Danger that someone fails to read the docs?

I don't really care. I think it is fine to expect people to do some configuration when running a service as complex as iSCSI.
 