iSCSI block size reporting issues on 22.12?

m9x3mos (Dabbler, joined May 13, 2021, 34 messages):
I'm wondering if someone can provide some ideas.
I am testing a new server running TrueNAS SCALE 22.12 as a replacement for my old server running legacy FreeNAS.
When setting up an iSCSI target to be used for MS SQL storage, I am running into an issue with the reported block size.
On the extent, the block size is set to 512, and I have tested with the "Disable Physical Block Size Reporting" checkbox both checked and unchecked, with no difference.
[screenshot: 1676156346568.png]


The problem is that when I go to restore a database to the new drive in Windows 11, after mounting it with the iSCSI Initiator, I get this error:
[screenshot: 1676157957282.png]


If I manually create the zvol behind the target via the command line, forcing the volblocksize down to 512 on the zvol directly, it works fine. The problem is that this takes up a lot more space than what is allocated to the target disk.
When I created this zvol I used this command, as 512 wasn't an option in the UI:
[screenshot: 1676156459568.png]
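(For readers who can't see the attachment: creating a zvol with a 512-byte volblocksize from the TrueNAS shell looks roughly like this. The pool name, zvol name, and size below are placeholders, not the poster's actual values.)

```shell
# Placeholder names and size; volblocksize can only be set at creation time.
zfs create -V 200G -o volblocksize=512 tank/mssql-zvol

# Confirm the property took effect:
zfs get volblocksize tank/mssql-zvol
```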


Any help would be greatly appreciated. I'm going to do some other tests to see if I can figure it out, but I might need to submit this as a bug.
 

m9x3mos:
On my old FreeNAS install, which works, you can see that the block size being used on the zvol is higher as well:
[screenshot: 1676157865661.png]


And on the new one that I most recently tried
[screenshot: 1676157872079.png]

Which shows me this error in SQL Server when trying to restore.
[screenshot: 1676157936601.png]
 

m9x3mos:
I was also able to find the local config for the associated extent, which also shows the blocksize set to 512.
When I toggled the physical block size reporting setting, I didn't see this config file change in /etc/scst.conf, which I wasn't expecting.
[screenshot: 1676167030470.png]
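(For anyone wanting to check the generated SCST config themselves: something like the following, run from the SCALE shell, prints the DEVICE block for an extent. The extent name is a placeholder.)

```shell
# Extent/device name is a placeholder; prints the matching DEVICE entry
# plus a few lines of context from the generated SCST config.
grep -A 8 "DEVICE my-extent" /etc/scst.conf
```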
 

m9x3mos:
I tested this today on the latest Core release and it wasn't a problem.
There was no change testing on SCALE nightlies either.
 

m9x3mos:
Does anyone have any ideas? So far this is the only thing preventing me from replacing my old FreeNAS setup.
For MS SQL storage, should I maybe try setting up NFS shares instead?
 

morganL (Captain Morgan, Administrator, Moderator, iXsystems, joined Mar 10, 2018, 2,694 messages):
m9x3mos said:
> Does anyone have any ideas? So far this is the only thing preventing me from replacing my old FreeNAS setup.
> For MS SQL storage, should I maybe try setting up NFS shares instead?

Could be a bug... thanks for reporting.

Any chance you could test with Angelfish? It may have been introduced in 22.12.0; I'd be surprised if it wasn't caught earlier.
 

m9x3mos:
morganL said:
> Could be a bug... thanks for reporting.
> Any chance you could test with Angelfish? It may have been introduced in 22.12.0. I'd be surprised if it wasn't caught earlier.
Hello @morganL,
I updated the ticket with the same information but will put it here too if it helps anyone else.

I can confirm that this is a problem on TrueNAS-SCALE-22.02.4.

I set up a new test server on that version and followed the same steps: I set up the zvol with a 32 KiB block size, left the block size on the extent at 512, and checked "Disable Physical Block Size Reporting".

After adding that disk in Windows and attempting to restore a SQL DB to it, I got this error:
[screenshot: 1676908202734.png]

When I use fsutil on it for the sector information, I see that the physical sector size for performance is 32 KiB, coming from the zvol:

[screenshot: 1676908216006.png]
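(For anyone reproducing this on the Windows side: the sector query is fsutil's sectorinfo subcommand, run from an elevated prompt. Among other fields it prints PhysicalBytesPerSectorForPerformance, which is where the 32 KiB value shows up. The drive letter below is a placeholder.)

```shell
fsutil fsinfo sectorinfo D:
```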



However, the same setup on Core gives me this
[screenshot: 1676908234408.png]

I uploaded a debug report from the test I just did, if that helps.

So it does appear that the way the iSCSI target presents itself on Angelfish is the same as on Bluefin but different from Core, and it does not allow SQL DBs to be restored to it.
 

m9x3mos:
I added this to the ticket too, but I just noticed that even though the zvol was created as thick provisioned (Sparse not checked), it is reporting in Windows as thin provisioned.
This might have something to do with it.
[screenshot: 1676937677007.png]
 

William Luke (Cadet, joined Sep 5, 2016, 4 messages):
I've applied the fix from here: https://ixsystems.atlassian.net/jira/software/c/projects/NAS/issues/NAS-120303

And confirmed that my scst.conf now shows "lb_per_pb_exp 0" for the relevant extents:

Code:
DEVICE lun60-esxi-sql-c03-dbdata {
    filename /dev/zvol/Greenlight-Archive/lun60-esxi-sql-c03-dbdata
    blocksize 512
    lb_per_pb_exp 0
    read_only 0
    usn
    naa_id 0x6589cfc000000e96b4575a3e67a77d02
    prod_id "iSCSI Disk"
    rotational 0
    t10_vend_id TrueNAS
    t10_dev_id
}
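(Context for other readers, based on my reading of the parameter name rather than the SCST docs verbatim: lb_per_pb_exp appears to be the power-of-two exponent relating physical to logical block size, i.e. physical = logical × 2^exp, so a value of 0 forces the reported physical size to equal the 512-byte logical size. A quick sketch of that relationship:)

```python
def lb_per_pb_exp(logical: int, physical: int) -> int:
    """Exponent n such that physical == logical * 2**n."""
    ratio = physical // logical
    n = ratio.bit_length() - 1
    # Block sizes must be power-of-two multiples of each other.
    assert logical * 2 ** n == physical, "not a power-of-two multiple"
    return n

print(lb_per_pb_exp(512, 512))    # 0 -> drive presents as 512n
print(lb_per_pb_exp(512, 32768))  # 6 -> the 32 KiB case seen in this thread
```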


However, despite restarting iSCSI on TrueNAS, re-creating the zvols and extents, and rebooting the ESXi hosts and Windows servers, I'm still seeing the incorrect physical sector size:

Code:
Number LogicalSectorSize PhysicalSectorSize         Size
------ ----------------- ------------------         ----
     1               512              32768 268435488768


Did anyone else manage to get this fix to work, and if so, is there something I'm missing?

Both SQL Server and Failover Cluster disk replication fail to work if the physical size is set like this.
 

William Luke:
Ahhh, never mind. I did a little more digging, and I believe the lb_per_pb_exp option is not available in the SCST version that ships in 22.12.1.

I don't suppose anyone has any workarounds to make iSCSI extents honour the "Disable Physical Block Size Reporting" setting in 22.12.1?
 

William Luke:
Ahh, perfect, that's not too far away!

I noticed that the changes seem to set the reported physical block size to 4K. I'm pretty sure earlier versions reported 512; would that be right, and is there a way to override it?

The reason I ask is that although MS SQL is happy with either 512 or 4K, other things such as MS Failover Cluster disk replication only work if the sector sizes are the same at source and destination. 512 is what a lot of SANs report, so to use iSCSI on TrueNAS as a target for the replication, it would need to report 512 as well.
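(To make the two constraints in this post concrete, using only the claims stated here: SQL Server accepts a 512 or 4K physical sector size, while Failover Cluster disk replication needs source and destination sector sizes to match exactly. A toy check:)

```python
# Per the post above: SQL Server accepts these physical sector sizes.
SQL_OK = {512, 4096}

def sql_server_ok(physical: int) -> bool:
    return physical in SQL_OK

def replication_ok(src_physical: int, dst_physical: int) -> bool:
    # Failover Cluster disk replication requires identical sector sizes.
    return src_physical == dst_physical

print(sql_server_ok(4096))        # True
print(sql_server_ok(32768))       # False: the value reported by the bug
print(replication_ok(512, 4096))  # False: why 4K reporting still hurts
```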
 

HoneyBadger (actually does care, Administrator, Moderator, iXsystems, joined Feb 6, 2014, 5,112 messages):
William Luke said:
> I noticed that the changes seem to set the physical block size to be reported as 4k, I'm pretty sure in earlier versions it reported this at 512, would that be right, and is there a way to override it?
I had it showing as 512 physical block size in a test release:

Code:
[root@esxi01:~] esxcli storage core device capacity list
Device                                                                    Physical Blocksize  Logical Blocksize  Logical Block Count        Size  Format Type
------------------------------------------------------------------------  ------------------  -----------------  -------------------  ----------  -----------
mpx.vmhba0:C0:T4:L0                                                                      512                512                    0       0 MiB  512n
naa.6589cfc000000f9ac6827a522cd3e646                                                   16384                512              4194336    2048 MiB  Unknown
t10.ATA_____INTEL_SSDSC2BB480G4_____________________redacted**********__                4096                512            937703088  457862 MiB  512e
t10.ATA_____INTEL_SSDSC2BB480G4_____________________redacted**********__                 512                512            937703088  457862 MiB  512n
naa.6589cfc000000ffb07b8cdec1d0531c5                                                     512                512              6291488    3072 MiB  512n


The 2 GB e646 LUN has the new setting disabled and exposes the 16K volblocksize; the 3 GB 31c5 LUN has it enabled and gets called out as 512n by VMware. RDMing it to Windows also reported 512 across the board, although it does identify the underlying 16K in the SlabSize:

[screenshot: 1679585227977.png]

Hopefully .2 fixes things for you.
 

morganL:
Hi @William Luke !

I thought I recognized that Jira ticket number; the fix required upstream changes in the SCST software, and should be landing in 22.12.2 - scheduled release date is March 28th.

Apologies, but that is now April 11... a few bug fixes have delayed it.
 