TrueNAS iSCSI - XCP-ng

jcpt928

Cadet
Joined
Jan 16, 2023
Messages
4
This one is going to be pretty self-explanatory. TrueNAS SCALE is running rock-solid as a VM; I have tested NFS, SMB, etc. After working through some [senseless] issues getting TrueNAS to actually let XCP-ng connect to it (there are clearly some standards not being adhered to in TrueNAS), I am running into this oddity.

[Attached screenshot: 1673924760200.png]


Only a single host in the pool is connecting to the iSCSI LUN; neither of the others will. I can confirm that both "unplugged" hosts can see the portal, the targets, and the LUN, but they still will not connect. I can confirm TrueNAS is receiving the login requests when they try; and I am seeing this interesting tidbit...

"Jan 16 19:08:21 truenas kernel: [14964]: iscsi-scst: Negotiated parameters: InitialR2T No, ImmediateData Yes, MaxConnections 1, MaxRecvDataSegmentLength 1048576, MaxXmitDataSegmentLength 262144,"

Is that "MaxConnections 1" what I think it is? If so, how do I modify it?
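For what it's worth, the negotiated values can be pulled out of that kernel log line programmatically; a minimal sketch in Python (the log line is the one quoted above; the "Name Value" pair format is an assumption based on that single sample):

```python
import re

# Kernel log line quoted above (iscsi-scst negotiated parameters)
line = ("Jan 16 19:08:21 truenas kernel: [14964]: iscsi-scst: "
        "Negotiated parameters: InitialR2T No, ImmediateData Yes, "
        "MaxConnections 1, MaxRecvDataSegmentLength 1048576, "
        "MaxXmitDataSegmentLength 262144,")

def negotiated_params(logline):
    """Return a dict of the 'Name Value' pairs after 'Negotiated parameters:'."""
    _, _, tail = logline.partition("Negotiated parameters:")
    return dict(re.findall(r"(\w+) (\w+)", tail))

params = negotiated_params(line)
print(params["MaxConnections"])  # -> 1
```

Handy for grepping a day's worth of dmesg to see whether every session is negotiating the same limits.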
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
It is fairly common for targets to include a default setting limiting to one initiator, because otherwise Windows users will tend to try to hook multiple Windows machines up to an iSCSI "share" holding an NTFS filesystem, which doesn't work.

> Is that "MaxConnections 1" what I think it is?

Probably.

> If so, how do I modify it?

Don't know. Why are you trying to use SCALE for iSCSI, though? iXsystems made it clear a while back that they were focusing SCALE development on their scale-out stuff, and that iSCSI was not a focus on SCALE. This makes sense given that iXsystems sponsored development of a high-performance iSCSI subsystem on CORE, while SCALE most likely uses whatever (possibly substandard) crap is available for Linux. I'm reasonably certain that iX did NOT write the Linux iSCSI stuff.
 

jcpt928

Cadet
Joined
Jan 16, 2023
Messages
4
> It is fairly common for targets to include a default setting limiting to one initiator, because otherwise Windows users will tend to try to hook multiple Windows machines up to an iSCSI "share" holding an NTFS filesystem, which doesn't work.
>
> Probably.
>
> Don't know. Why are you trying to use SCALE for iSCSI, though? […]
I actually ran into even MORE issues with CORE - namely that CORE wants to ping the initiator as a prerequisite to allowing "login", and I could not, for the life of me, get around that issue (despite confirming that ICMP did actually work between target and initiator). If you've got any ideas on that one, that would be helpful.
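When login fails like this, one way to separate plain network reachability from whatever check the target is doing is to probe the portal's TCP port directly from the initiator host; a minimal sketch (3260 is the standard iSCSI portal port; the address below is a placeholder, not anything from this thread):

```python
import socket

def portal_reachable(host, port=3260, timeout=3.0):
    """Return True if a plain TCP connection to the iSCSI portal succeeds.

    This tests TCP reachability only; it says nothing about CHAP,
    authorized-initiator/network lists, or any ICMP-based checks
    the target may perform before permitting login.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

Usage would be something like `portal_reachable("192.0.2.10")` from each XCP-ng host; if that returns True but login is still refused, the problem is at the iSCSI/authorization layer rather than the network.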

I was unaware of any such news from iX on the difference in focus.
 

jcpt928

Cadet
Joined
Jan 16, 2023
Messages
4
Here is a thread explaining the same thing that I ran into with CORE.

 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
> I actually ran into even MORE issues with CORE - namely that CORE wants to ping the initiator as a prerequisite to allowing "login" […]
>
> I was unaware of any such news from iX on the difference in focus.

You've misunderstood what's going on, most likely. iSCSI has its own connection status protocol, also called "ping" (NOP-Out/NOP-In), and when a connection is unresponsive for more than five seconds, it may be dropped and reset in order to restore operations. A busted, nonperformant, or otherwise janky-arse iSCSI setup will generally result in connection drops, and is a sign that it isn't going to work out well. If you:

1) Have less than 64GB ARC available to dedicate to iSCSI, or
2) Are using RAIDZ for your block storage, or
3) Are using 1GbE ethernet, or
4) Have allocated more than 40% of your pool

you are substantially increasing the chances of running into performance problems. These are mostly the things listed in the Block Storage resource.
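The four rules of thumb above can be encoded as a quick checklist; the thresholds here are the ones stated in the post (community guidance, not anything TrueNAS enforces):

```python
def block_storage_warnings(arc_gb, raidz, link_gbe, pool_used_fraction):
    """Return risk warnings for an iSCSI setup, per the rules of thumb above."""
    warnings = []
    if arc_gb < 64:
        warnings.append("less than 64GB ARC available to dedicate to iSCSI")
    if raidz:
        warnings.append("RAIDZ used for block storage")
    if link_gbe <= 1:
        warnings.append("1GbE ethernet")
    if pool_used_fraction > 0.40:
        warnings.append("more than 40% of the pool allocated")
    return warnings

# Example: small ARC, RAIDZ, fast network, half-full pool -> three warnings
print(block_storage_warnings(arc_gb=32, raidz=True, link_gbe=10,
                             pool_used_fraction=0.5))
```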


> I was unaware of any such news from iX on the difference in focus.

iX is focused on building the scale-out support for SCALE.

> Here is a thread explaining the same thing that I ran into with CORE.

That shows an iSCSI ping timeout.
 

jcpt928

Cadet
Joined
Jan 16, 2023
Messages
4
> You've misunderstood what's going on, most likely. iSCSI has its own connection status protocol, also called "ping", and when a connection is unresponsive for more than five seconds, it may be dropped and reset in order to restore operations. […]
>
> iX is focused on building the scale-out support for SCALE.
>
> That shows an iSCSI ping timeout.
I currently have 2 other iSCSI solutions in place in this environment, and have used at least 2 more in the past, with no performance issues whatsoever - all running over more-than-capable switching fabric (multi-gig, and definitely NOT Ubiquiti). The issue in that thread occurs BEFORE any data transfer happens [while merely trying to connect to a target/LUN from an initiator], so it is not correlated with your comments regarding capacity, bandwidth, etc. - I would agree with you otherwise. You are correct that iSCSI itself has a status protocol; but it is quite obviously not failing on the other solutions currently in place.

You are hitting the nail on the head in one respect, however: there is something amiss with TrueNAS - because it's the only solution I continue to have issues even getting up and running.

I do know what I'm doing here, which is why I have turned to the forums after having exhausted my own diagnostic efforts.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
> You are correct that iSCSI itself has a status protocol; but it is quite obviously not failing on the other solutions currently in place.

Regardless, that is what the console messages you quoted show.

> You are hitting the nail on the head, in a way, however, that there is something amiss with TrueNAS - because it's the only one I continue to have issues even getting up and running.

You are free to submit a Jira bug report via the link at the top of the screen. This carries with it a very good chance that it will actually be looked at by mav@, the fellow who wrote the new FreeBSD iSCSI target stuff. In my experience, iXsystems is very interested in making sure even obscure stuff like iSCSI works well.

> I do know what I'm doing here, which is why I have turned to the forums after having exhausted my own diagnostic efforts.

I can only read between the lines based on what you say.
 