So let me provide some insight into this. Some of it may be things you already know, but I'll cover it again so everyone can follow.
- Starting with FreeNAS 9.3, iSCSI has an inherent advantage because it is kernel-based.
- ESXi treats its iSCSI client as a first-class citizen, with NFS being a second-class citizen.
- Xen treats its NFS client as a first-class citizen, with iSCSI being a second-class citizen.
- iSCSI provides certain benefits with regard to VAAI support.
- With iSCSI you should create a zvol, which means pre-reserving your disk space. It also defaults to a block size (volblocksize) of 8KB, which can create its own problems (and its own benefits). Smaller blocks also require more disk space for metadata (which has its own problems). See the sketch just after this list.
- NFS has its own limitations. You can't do "multipath" with NFS like you can with iSCSI. It also uses a maximum block size (recordsize) of 128KB by default, which can create its own problems (and its own benefits).
- Smaller blocks can improve performance in some situations, while larger blocks improve performance in others (like a teeter-totter).
- NFS can't "trim" free space, but iSCSI supports TRIM. (Do not confuse the 'used space' on a zvol with space that is 'reserved'. They are not the same thing, and it seems most people do not recognize or understand that difference. The sketch below shows where to look.)
Now at the protocol level itself:
- NFS has file limits and other buffers that can create problems when you exceed their design limits. So long as you don't hit those limits, things won't come crashing down (and believe me, if you start getting NFS throttled, things will come crashing down).
- iSCSI has limits on how much queued I/O you can have per target. So if you plan to run dozens of VMs, you are better off having multiple targets (which means multiple datastores and multiple zvols/extents). (Hint: you'd be amazed at how much this can matter.) I've heard that somewhere around 4-5TB is what many people have found to be the 'sweet spot' for running VMs. So if you plan to run lots of VMs, store them in multiple datastores that don't exceed 4-5TB if possible; see the sketch after this list.
But that's just a rule of thumb. If you plan to run things like an Exchange server, a MySQL database, or anything else that will potentially generate a LOT of small I/O, you may want to give that VM its own datastore running on its own target. Likewise, you could have a 50TB target running dozens of VMs, and so long as you don't run out of I/O queue slots, you'll never care.
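As a rough illustration of that rule of thumb (names here are hypothetical, and on FreeNAS you'd still attach each zvol as an extent/target through the GUI), splitting one huge zvol into several smaller ones gives each datastore its own target and its own queue:

```sh
# Instead of one huge zvol behind a single target...
#   zfs create -V 20T tank/vms

# ...back several smaller datastores, each with its own iSCSI target:
for n in 1 2 3 4; do
    zfs create -V 4T -o volblocksize=8K tank/vmstore0$n
done

# FreeNAS 9.3+ uses FreeBSD's CTL for the kernel iSCSI target;
# you can list the LUNs it is exporting with:
ctladm devlist
```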
The NFS versus iSCSI debate is pretty fierce. Each brings its own advantages and disadvantages to the table. A properly designed iSCSI service can outperform NFS in ways NFS cannot match, most notably because iSCSI on 9.3+ is kernel-based. Most people in the forum are not aware of all the little intricacies involved with iSCSI, and therefore cannot easily judge what they need to do to improve performance. So when you've hit some barrier you can't identify via the CLI tools in FreeNAS, actually diagnosing it is often not possible without experience troubleshooting iSCSI and/or NFS. But if you are using iSCSI and being bottlenecked by something iSCSI-related, switching to NFS will seem to resolve the issue. Likewise, there are issues NFS has that are rectified by switching to iSCSI.
NFS provides a crapton of sysctls if you really want to tweak it, just like iSCSI has its own (much smaller) set of sysctls. It's a science all on its own, and I firmly believe that people like the OP can convert from one to the other (in this case iSCSI to NFS) and may see a notable performance gain as a result. There's no reason to doubt the OP when he says performance is faster.
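If you want to go spelunking yourself, here's a quick way to see what's available on a FreeNAS 9.3 (FreeBSD-based) box; exact OID names vary by release, so treat this as a starting point:

```sh
# NFS server/client tunables and counters:
sysctl -a | grep -E '^vfs\.(nfsd|nfs)\.'
nfsstat -s    # server-side NFS statistics

# The kernel iSCSI target (CTL) exposes a much smaller set:
sysctl -a | grep '^kern.cam.ctl'
```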
But at the end of the day, you are likely to be much happier sticking to iSCSI for ESXi and NFS for Xen. In every case I've seen while working at iX, if you follow that rule of thumb and still have poor performance, there's probably something either misconfigured or not configured appropriately for your workload on the ESXi host/Xen server or on the FreeNAS box. I'm not necessarily talking about things like sysctls and such. Even small things like "don't make iSCSI targets bigger than 5TB" can make a major difference in performance.
Personally, the only four things I consider when trying to decide for/against NFS/iSCSI are:
1. Do I need multipath, or will LACP do? If I need multipath, then I must use iSCSI.
2. Do my workloads demand the ability to handle TRIM? If so, then I must use iSCSI.
3. Do I want access to the individual VM files in the event that the ESXi/Xen server goes down? If so, then I should stick to NFS.
4. Do I need to not reserve 50% (or more) of my storage for a zvol? If so, then I should use NFS.
Notice none of those directly inquire about performance? ;)
To conclude my (already too long) post: trading out iSCSI for NFS (or vice versa) in the name of performance is usually a straw-man argument. There are specific scenarios where you are forced not to use NFS because of a performance barrier (and the same is true for iSCSI), but if you have that level of knowledge, you don't need to ask yourself which one to use. You'll already know. A properly configured iSCSI/NFS setup can generally perform as well as you'll need it to, so long as you do your homework. That last part... "so long as you do your homework"... doesn't come easy, though. Sometimes it's easier to simply switch and claim victory if it performs better. ;)
P.S. - No offense intended to anyone in this thread. It's important that you get the job done when there's a problem. If performance is slow, it's important to get it fixed, regardless of whatever secret sauce you use. :)