iscsi -> nfs = win (for me)

Status
Not open for further replies.

alpaca

Dabbler
Joined
Jul 24, 2014
Messages
24
I was running a bunch (25ish) of production XenServer VMs across 4 hosts on a FreeNAS mirrored storage pool, served over iSCSI with pretty standard settings and sync=always. We started to get an influx of users complaining about "sluggishness" when interacting with various services. Very brief benchmarking proved nothing, and the pool is nowhere near 50% capacity. Just as an experiment, I set up a new dataset and NFS share, sync=always, for Xen.
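Roughly what the new dataset looked like from the CLI (pool/dataset names changed, and the NFS export itself was actually configured through the GUI, so treat this as a sketch):

zfs create tank/xen-nfs
zfs set sync=always tank/xen-nfs    # same sync policy as the old iSCSI setup
zfs set sharenfs=on tank/xen-nfs    # or add the export via the Sharing section of the GUI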

After migrating a few VMs over to the NFS storage, those users (without being told) sent "Thanks!" emails. Ooook, so I ran a few more benchmarks with VMs split across the new NFS and existing iSCSI storage, and again nothing glaringly obvious explained the difference. Performance was very similar with the quick-and-dirty (and hated/useless) benchmarks as well as with actual file transfers and MariaDB slave testing. Interestingly enough, my own SSH sessions to some of the NFS-migrated VMs did feel "snappier".

I have read a few posts on here suggesting that NFS is perhaps better suited to Xen while iSCSI suits VMware, so maybe there is a bit of that at play here. An unintended (stupid me) bonus of the now total move to NFS-backed Xen: backups are MUCH easier. We don't have anything invested in "true" HA storage; most redundancy is at the VM/app level. But being able to simply mount the backup (ZFS-replicated) datasets over NFS, with 15-minute snapshot intervals, is awesome!
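The backup side boils down to FreeNAS periodic snapshot and replication tasks doing roughly this every 15 minutes (host, dataset, and snapshot names below are placeholders):

zfs snapshot -r tank/xen-nfs@auto-1215                                                 # recursive snapshot of the VM dataset
zfs send -R -i @auto-1200 tank/xen-nfs@auto-1215 | ssh backupnas zfs receive -F backup/xen-nfs

The replicated copy on the backup box is what we mount over NFS when we need to pull something back.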

Curious if any other Xen users out there have experienced something similar?
 

kspare

Guru
Joined
Feb 19, 2015
Messages
508
I found the same thing with ESXi 6. I was running round-robin load-balanced iSCSI connections prior.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
I've heard this several times but I haven't actually tried NFS for VM storage recently. I'm guessing there's something to it, but I don't know exactly what.
 

kspare

Guru
Joined
Feb 19, 2015
Messages
508
One opinion I've read is that iSCSI isn't exactly a "standard".

NFS has evolved but is very much standardized across *nix, and since ESXi and FreeNAS are both *nix-based they just play nice together.

Take what you want from that, but it kind of makes sense.
 

Pointeo13

Explorer
Joined
Apr 18, 2014
Messages
86

kspare

Guru
Joined
Feb 19, 2015
Messages
508
I don't even come close to saturating a single link during normal use; unless I'm doing something major like evacuating my drives, I get about 80% use. For me a single 10Gb link works fine.

I did have that set up, but again, I didn't use that much bandwidth. If you require, and can actually pull, more than 10Gb of bandwidth, iSCSI may be your only way.
 
Joined
Dec 29, 2015
Messages
7
I went to use iSCSI connections so I could archive retired VMs to deep storage. I blew a couple of days messing with settings and following guides. Apparently our enterprise network and IP setup just do not play right with iSCSI. I set up NFS in 5 minutes and it works fantastically. VMware 5.5 was happy with it.
 

kspare

Guru
Joined
Feb 19, 2015
Messages
508
Another advantage of NFS over iSCSI is space reclamation. As your iSCSI image grows it doesn't shrink, which doesn't help with ZFS fragmentation.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Another advantage of NFS over iSCSI is space reclamation. As your iSCSI image grows it doesn't shrink, which doesn't help with ZFS fragmentation.

Actually, that'd more likely be a downside of NFS. With iSCSI-based zvols, TRIM is supported, so, yes, the iSCSI image can shrink.
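Easy way to see it for yourself (pool/zvol names made up, and it assumes the hypervisor actually passes UNMAP through to the LUN): trim inside the guest, then watch the zvol's 'used' drop while the reservation stays put.

fstrim -v /                                                   # inside a Linux guest on the iSCSI datastore
zfs get used,referenced,refreservation,volsize tank/vm-zvol   # on the FreeNAS side, before and after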
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
So let me provide some insight into this. Some may be things you already know, but I'll cover them again so others will understand.

- Starting with FreeNAS 9.3, iSCSI has a certain inherent advantage because it is kernel-based.
- ESXi treats its iSCSI client as top tier, with NFS as a second-class citizen.
- Xen treats its NFS client as top tier, with iSCSI as a second-class citizen.
- iSCSI provides certain benefits with regards to VAAI support.
- With iSCSI you should create a zvol, which means pre-reserving your disk space. It also sticks you with a block size of 8KB by default, which can create its own problems (and its own benefits). Smaller blocks also require more disk space for metadata (which has its own problems).
- NFS has specific limitations. You can't do "multipath" with NFS like you can with iSCSI. It also defaults to a maximum block (record) size of 128KB, which can create its own problems (and its own benefits). (See the sketch after this list for what the two layouts look like.)
- Smaller blocks can improve performance in some situations, while larger blocks improve performance in others (like a teeter-totter).
- NFS can't "trim" free space, but iSCSI supports TRIM. (Do not confuse the 'used space' on a zvol with space that is 'reserved'. They are not the same thing, and it seems that most people do not recognize or understand that difference.)
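To make the block-size point concrete, the two layouts look roughly like this from the CLI (names and sizes are just examples):

zfs create -V 4T -o volblocksize=8K tank/iscsi-extent1   # zvol for an iSCSI extent; space is reserved up front unless you add -s
zfs create -o recordsize=128K tank/nfs-vms               # plain dataset for NFS; recordsize is only an upper bound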

Now at the protocol level itself:

- NFS has file limits and other buffers that can create problems when you exceed their designed limits. So long as you don't hit those limits, things won't come crashing down (and believe me, if you do start getting NFS throttled, things will come crashing down).
- iSCSI has limits on how much queued I/O you can have per target. So if you plan to run dozens of VMs, you are better off having multiple targets (which means multiple datastores and multiple zvols/extents). (Hint: you'd be amazed at how much this can matter.) I've heard that somewhere around 4-5TB is what many people have found to be the 'sweet spot' for running VMs. So if you plan to run lots of VMs, store them in multiple datastores that don't exceed 4-5TB if possible.

But that's just a rule of thumb. If you plan to do things like run an Exchange server, a MySQL database, or anything else that will potentially generate a LOT of small I/O, you may want to give that VM its own datastore running on its own target. Likewise, you could have a 50TB target running dozens of VMs, and so long as you don't run out of I/O queue, you'll never care.
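In practice that just means carving the pool into several mid-sized zvols and giving each one its own extent/target (and its own datastore) in the GUI instead of one giant LUN; something along these lines, with made-up names and sizes:

zfs create -s -V 4T tank/vm-target1   # sparse zvols, roughly 4TB each, one per iSCSI extent/target
zfs create -s -V 4T tank/vm-target2
zfs create -s -V 4T tank/vm-target3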

The NFS versus iSCSI debate is pretty fierce. Each brings its own advantages and disadvantages to the table. A properly designed iSCSI service can outperform NFS in ways that NFS cannot match, most notably because iSCSI on 9.3+ is kernel-based. Most people on the forum are not aware of all of the little intricacies involved with iSCSI, and therefore cannot easily judge what they need to do to improve performance. So when you've hit some barrier you can't identify with the CLI tools in FreeNAS, diagnosing it is often not possible without experience identifying problems with iSCSI and/or NFS. But if you are using iSCSI and being bottlenecked by something iSCSI-related, switching to NFS will seem to resolve the issue. Likewise, there are issues NFS has that are rectified by switching to iSCSI.
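For the record, the usual starting points on the FreeNAS side when you're trying to work out which layer is the bottleneck are just the stock FreeBSD tools (nothing iSCSI- or NFS-specific about them):

zpool iostat -v tank 1   # per-vdev throughput and IOPS at 1-second intervals
gstat -p                 # per-disk busy percentage and latency
top -SH                  # CPU time used by kernel threads (e.g. the nfsd and iSCSI/CTL threads)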

NFS provides a crapton of sysctls if you really want to tweak it, just as iSCSI has its own (much smaller) set of sysctls. It's a science all on its own, and I firmly believe that people like the OP can convert from one to the other (in this case iSCSI to NFS) and see a notable performance gain as a result. There's no need to question whether the OP really is seeing better performance.
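If anyone wants to go spelunking, those knobs are all visible from the shell; the exact OIDs vary by FreeBSD/FreeNAS version, so treat these prefixes as a starting point rather than a tuning guide:

sysctl vfs.nfsd      # NFS server knobs (thread counts and friends)
sysctl vfs.nfs       # NFS client side
sysctl kern.cam.ctl  # the kernel iSCSI target (CTL), a much shorter list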

But at the end of the day, you are likely to be much happier sticking with iSCSI for ESXi and NFS for Xen. In every case I've seen while working at iX, if you follow that rule of thumb and still have poor performance, there's probably something either misconfigured or not configured appropriately for your workload on the ESXi host/Xen server or on the FreeNAS box. I'm not necessarily talking about things like sysctls and such. Even small things like "don't make iSCSI targets bigger than 5TB" can make a major difference in performance.

Personally, the only four things I consider when deciding between NFS and iSCSI are:

1. Do I need multipath or will LACP do? If I need multipath then I must use iSCSI.
2. Do my workloads demand that I have the ability to handle trim? If so then I must use iSCSI.
3. Do I want to have access to the individual VM files in the event that the ESXi/Xen server goes down? If so, then I should stick to NFS.
4. Do I have a need to not reserve 50% (or more) of my storage for a zvol? If so then I should use NFS.

Notice none of those directly inquire about performance? ;)

To conclude my (already too long) post, trading out iSCSI for NFS (or vice versa) in the name of performance is usually a straw-man argument. There are specific scenarios where you are forced not to use NFS because of a performance barrier (and the same is true for iSCSI), but if you have that level of knowledge you don't need to ask yourself which one to use. You'll already know. A properly configured iSCSI or NFS setup can generally perform as well as you'll need it to, so long as you do your homework. That last part, "so long as you do your homework", doesn't come easy though. Sometimes it's easier to simply switch and claim victory if it performs better. ;)

P.S. - No offense intended to anyone in this thread. It's important that you get the job done when there's a problem. If performance is slow, it's important to get it fixed, regardless of whatever secret sauce you use. :)
 

kspare

Guru
Joined
Feb 19, 2015
Messages
508
Very well written!

At the end of the day I switched to NFS for the ZFS sync functions and disaster recovery. Not so much a performance thing!
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
The NFS versus iSCSI debate is pretty fierce. Each brings its own advantages and disadvantages to the table. A properly designed iSCSI service can outperform NFS in ways that NFS cannot match, most notably because iSCSI on 9.3+ is kernel-based.

And that differs from NFS ... how? A properly designed NFS service can kick iSCSI's fsckin' arse in ways that cannot be matched with iSCSI.

Hint: BSD NFS service is provided by the kernel too. Being "kernel based" isn't a new concept, except perhaps for iSCSI. The Guelph code was introduced sometime before 4.3-Reno, which means that it's been kernel resident for at least a quarter of a century. You can see some discussion of it in the 4.4BSD SMM in chapter 6. I'm too lazy to actually dig into the archives to see just when it was introduced.

But it is nice to have iSCSI finally getting a little modern.

Just havin' a little fun pokin' the n00b in the eye :smile:
 