FreeNAS 9.3 iSCSI fails to be recognized by ESXi 5.5

Joined
Aug 25, 2014
Messages
89
I have an R&D project using a new Supermicro JBOD chassis with twenty-four 3.5" drive slots, no RAID card, and 128GB of ECC RAM. Currently I am testing with eight 4TB SAS drives and three 120GB SATA drives.

I posted this issue before without finding an answer, and I have gathered far more details since then.

A FreeNAS 9.3 iSCSI share fails to be recognized by ESXi 5.5, and the answer may be inside one of the ESXi log files, although I have not found the correct log or I have misinterpreted what I am reading. Does anyone know which log file(s) I should be looking at to see why ESXi 5.5 won't format the iSCSI share? The ESXi 5.5 host's storage adapter sees the iSCSI share, and it appears under Storage, but when I try Add Storage from the Configuration tab I get right up to the step where the iSCSI share should be formatted, and it hangs and quits after 10 minutes.
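For reference, the logs I have been digging through are /var/log/vmkernel.log and /var/log/vobd.log on the host. Here is a rough Python sketch for filtering a copied log down to the iSCSI-related lines; the local filename and keyword list are just my guesses at what matters:

```python
#!/usr/bin/env python
# Minimal sketch: filter an ESXi 5.x vmkernel log for iSCSI/VMFS entries.
# Assumes the log was copied off the host first (e.g. with scp); the
# local filename below is hypothetical.

import re

LOG_FILE = "vmkernel.log"  # copied from /var/log/vmkernel.log on the host

# Keywords that tend to surround failed datastore-creation attempts.
PATTERN = re.compile(r"iscsi|vmfs|scsi|sense", re.IGNORECASE)

with open(LOG_FILE) as log:
    for line in log:
        if PATTERN.search(line):
            print(line.rstrip())
```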

Update: I installed an older copy of FreeNAS 9.2.1.9, and within an hour I had a 13.8TB RAIDZ2 iSCSI share up and formatted by my ESXi 5.5 host. It works great, by the way.

I am considering upgrading to FreeNAS 9.3 and am wondering whether my iSCSI share will make the transition.

Any and all help or suggestions are appreciated.

Thanks :)
 
Joined
Oct 2, 2014
Messages
925
I can tell you that iSCSI works with 9.3, as LONG as you update; on my original install of 9.3, iSCSI and CIFS didn't work, lol. As for whether your iSCSI share will transition, I would assume so, but don't take my word on it.
 
Joined
Aug 25, 2014
Messages
89
I found out what my problem was in FreeNAS 9.3. I took the four-day course from Linda Kateley, and when we went over iSCSI shares, the next-to-last step was building an extent. Linda had said that if you have SSDs in your pool you should choose 4096 for the logical block size, so I kept choosing 4096 even though 512 is the default. After I upgraded my functioning FreeNAS 9.2.1.9 to 9.3, the iSCSI shares failed again, and after going over my notes I realized the only variable I had not played with was the logical block size in the extent.

I made a new extent for a 100GB share using a 512-byte logical block size, and it was my first iSCSI share from FreeNAS 9.3 to work with ESXi.

Then I made a new 150GB share with its extent set to 1024, and it failed.

Using the failed 150GB iSCSI share, I made a new extent at 512 and everything worked.

I now have a 100GB, a 150GB, and a 3TB iSCSI share, all with the logical block size set to 512, and they all run great.

By the way, when I built my first 40 iSCSI shares using a 4096-byte logical block size, they worked just fine with my Windows 7 PC, a 2008R2 server, and my MacBook Pro laptop, so it is not a FreeNAS problem; it is a restriction coming from ESXi.
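Those other initiators evidently accept a 4096-byte logical sector, while ESXi does not. If you want to confirm what an extent is actually reporting, a Linux initiator exposes it through sysfs; a minimal sketch, assuming the LUN shows up as /dev/sdb (the device name is a placeholder):

```python
#!/usr/bin/env python
# Minimal sketch: print the sector sizes an attached iSCSI LUN reports,
# as seen by a Linux initiator. The device name is hypothetical.

DEVICE = "sdb"  # adjust to whatever the LUN attached as

for attr in ("logical_block_size", "physical_block_size"):
    path = "/sys/block/{0}/queue/{1}".format(DEVICE, attr)
    with open(path) as f:
        print("{0}: {1} bytes".format(attr, f.read().strip()))
```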
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
Linda had said that if you have SSDs in your pool you should choose 4096 for the logical block size, so I kept choosing 4096 even though 512 is the default.

This is a 100% incorrect setting and should not be in a course. VMware does not support any iSCSI logical block size other than 512 bytes.
 
L

Guest
If you have an SSD, ZFS will choose an ashift of 12 for a 4K block.
 

zambanini

Patron
Joined
Sep 11, 2013
Messages
479
Even the part about ashift is wrong... omg. I really hope iXsystems will stop this.
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
If you have an SSD, ZFS will choose an ashift of 12 for a 4K block.

That is for the ZFS block size, not the iSCSI extent logical block size, which must always be 512 bytes when serving ESXi, until VMware says otherwise.
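To make the distinction concrete: the ZVOL's volblocksize is a ZFS property, while the extent's logical block size is an iSCSI setting configured per extent in the WebUI. A minimal sketch for checking the ZFS side from the FreeNAS shell (the zvol path is a placeholder):

```python
#!/usr/bin/env python
# Minimal sketch: read the ZVOL allocation block size on the FreeNAS box.
# The pool/zvol path is hypothetical. The iSCSI extent's logical block
# size is a separate setting in the WebUI and is what ESXi actually sees.

import subprocess

ZVOL = "tank/iscsi-vol1"  # hypothetical zvol path

# volblocksize is fixed at zvol creation time; 16K is the 9.3 default.
print(subprocess.check_output(
    ["zfs", "get", "-H", "volblocksize", ZVOL]).decode().strip())
```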
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
Wow.. confusion is everywhere. /sigh
 
L

Guest
How is the ashift wrong? It is a FreeBSD thing to set ashift to 12 by default. You have to set it yourself in other OpenZFS distros. I have run performance tests on several distros with an ashift of 9, and performance can as much as double with 12 on Advanced Format disks. I haven't tried setting it to 9 on FreeNAS.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
How is the ashift wrong? It is a FreeBSD thing to set ashift to 12 by default. You have to set it yourself in other OpenZFS distros. I have run performance tests on several distros with an ashift of 9, and performance can as much as double with 12 on Advanced Format disks. I haven't tried setting it to 9 on FreeNAS.

Except FreeNAS is NOT FreeBSD. FreeNAS 9.1 and 9.1.1 created pools with an ashift of 9, no matter what. Even in 9.2.0, it would sometimes screw up and create pools with ashift 9 despite the drives having 4K sectors, and vice versa (it was supposed to be smart about it, but it didn't work). Only in 9.2.1+ was ashift=12 actually forced as the default. Even from 8.0.4 through the end of 8.x, you ended up with an ashift of 9 unless you checked the checkbox on pool creation to use 4K sector sizes.

There are actually several stickies that discuss the ashift fiasco, as FreeNAS was not really doing things "properly" until 9.2.1 (in my opinion). Prior to 9.x you could set it manually and make sure it was appropriate, but in 9.2.0 and earlier it was decided for you automatically, and it was wrong more often than not. If you have a pool that was created prior to 9.2.1, there's a good chance you're getting a bunch of warnings in your "zpool status" output.
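If you want to check what ashift your existing pool actually got, zdb will print it per vdev. A minimal sketch, assuming a pool named "tank" (FreeNAS keeps its cache file at /data/zfs/zpool.cache, and zdb has to be pointed at it):

```python
#!/usr/bin/env python
# Minimal sketch: print the ashift of each vdev in a FreeNAS pool.
# ashift: 9 means 512-byte sectors, ashift: 12 means 4K sectors.
# The pool name is hypothetical.

import subprocess

POOL = "tank"  # hypothetical pool name
CACHE = "/data/zfs/zpool.cache"  # FreeNAS-specific cache location

output = subprocess.check_output(["zdb", "-U", CACHE, POOL]).decode()
for line in output.splitlines():
    if "ashift" in line:
        print(line.strip())
```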
 

mav@

iXsystems
iXsystems
Joined
Sep 29, 2011
Messages
1,428
Increasing the logical block size is a way to strictly force the initiator to align all requests in a way preferable for ZFS. It would be great, but in practice it is so strict that VMware does not even support it. On the other hand, 4KB there would not help that much anyway, because FreeNAS 9.3 by default creates ZVOLs with a block size of 16KB or even more, and any smaller access is misaligned. I am not sure how many initiators could handle a logical block size that large, which is what that idea would require.

The most realistic practice now (much less strict, but unfortunately much less efficient too) is to follow the example of Advanced Format disks: report a logical block size of 512 bytes and a physical block size equal to the ZVOL block size (such as 16KB). That is what the new iSCSI target in FreeNAS 9.3 does by default. I don't know how many initiators make real use of that, but at least the FreeBSD initiator uses the information to properly align newly created partitions on iSCSI shares. Microsoft SQL Server, on the other hand, does not like physical sectors above 4KB, so there is a checkbox in the WebUI to hide that value from it.
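A quick worked example of the arithmetic, with illustrative numbers only: with a 512-byte logical sector and a 16KB ZVOL block, only one logical sector in 32 starts on a ZVOL block boundary, and any I/O that is unaligned or smaller than the ZVOL block turns into a read-modify-write of the whole 16KB block.

```python
#!/usr/bin/env python
# Illustrative sketch of the alignment math: 512-byte logical sectors
# served from a zvol with a 16KB block size. Numbers are examples only.

LOGICAL = 512          # sector size the LUN reports to the initiator
VOLBLOCK = 16 * 1024   # zvol block size (FreeNAS 9.3 default ballpark)

def aligned(lba, length):
    """True if an I/O at this LBA covers whole zvol blocks exactly."""
    return (lba * LOGICAL) % VOLBLOCK == 0 and length % VOLBLOCK == 0

print(aligned(0, 16384))   # True: starts at byte 0, one full block
print(aligned(8, 16384))   # False: starts at byte 4096, straddles blocks
print(aligned(32, 4096))   # False: aligned start, but partial-block write
```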
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
How is the ashift wrong? It is a FreeBSD thing to set ashift to 12 by default. You have to set it yourself in other OpenZFS distros. I have run performance tests on several distros with an ashift of 9, and performance can as much as double with 12 on Advanced Format disks. I haven't tried setting it to 9 on FreeNAS.

ZFS block size/ashift has absolutely nothing to do with it. VMware simply does not support anything other than a 512-byte logical block size, and if you want the LUN to be usable by ESXi you need to present it as such.

http://kb.vmware.com/selfservice/mi...nguage=en_US&cmd=displayKC&externalId=2091600
 
Joined
Aug 25, 2014
Messages
89
Thanks to everyone for posting all the great information here. I will take 75% of the blame for using a 4096-byte logical block size instead of the 512-byte default in FreeNAS; it was my desire for speed. Once Linda Kateley said in her class that a 4096-byte logical block size would run faster, I was determined to build the fastest SAN I could.

As an offshoot of this discussion, I would like to report that I copied a fairly large VM from one of my hosts on an Open Filer SAN to my PC, then copied it to my test ESXi server connected to my new FreeNAS SAN, and it runs great. I am testing on a different brand of host than my data center uses, so the only gotcha was that the VM's NIC was unusable. While the copied VM was up and running, I edited its settings and added a NIC, and we were off to the races. Now that I have several iSCSI shares and everything is running, I am loving my FreeNAS SAN. After some bandwidth testing and some adding and removing of drives, I believe I will be ready to move FreeNAS to my data center.

Thanks again to all the FreeNAS forum users who posted on my problem. I am sure Linda Kateley learned as much as I have from this issue, and all future students will benefit from the experience. In closing, I would like to recommend Linda's class for anyone with an immediate need to come up to speed on FreeNAS, as 99% of what I learned appears to be golden.
 

zambanini

Patron
Joined
Sep 11, 2013
Messages
479
When the reputation is gone... all the best for your system.
 