Slow QLogic FC speed with p2p connection

luqasz

Dabbler
Joined
Sep 25, 2018
Messages
10
Hi

I know that this is a FreeNAS forum. I am using FreeBSD 11.2-RELEASE-p3 with a kernel compiled for target mode.
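
For context, the target-mode kernel is nothing exotic; the relevant additions to the kernel config look roughly like this (a sketch, the exact lines may differ from my build):
Code:
# QLogic HBA driver, its firmware, FC target mode support and CTL
device		isp
device		ispfw
options		ISP_TARGET_MODE
device		ctl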

ctladm portlist
Code:
Port Online Frontend Name	 pp vp
0	NO	 camsim   camsim   0  0  naa.50000006cc960301
1	YES	ioctl	ioctl	0  0
2	YES	tpc	  tpc	  0  0
3	YES	camtgt   isp0	 0  0  naa.2100001b3214e3e8


sysctl -a (relevant entries; the first two device lines are from my kernel config)
Code:
device	isp
device	ispfw
net.isr.dispatch: direct
dev.vgapci.0.%desc: VGA-compatible display
dev.isp.0.wake: 0
dev.isp.0.use_gff_id: 1
dev.isp.0.use_gft_id: 1
dev.isp.0.topo: 0
dev.isp.0.loopstate: 10
dev.isp.0.fwstate: 3
dev.isp.0.linkstate: 1
dev.isp.0.speed: 4
dev.isp.0.role: 1
dev.isp.0.gone_device_time: 30
dev.isp.0.loop_down_limit: 60
dev.isp.0.wwpn: 2377900720055968744
dev.isp.0.wwnn: 2377900720064984758
dev.isp.0.%parent: pci2
dev.isp.0.%pnpinfo: vendor=0x1077 device=0x2432 subvendor=0x1077 subdevice=0x0137 class=0x0c0400
dev.isp.0.%location: slot=0 function=0 dbsf=pci0:2:0:0 handle=\_SB_.PCI0.RP01.PXSX
dev.isp.0.%driver: isp
dev.isp.0.%desc: Qlogic ISP 2432 PCI FC-AL Adapter
dev.isp.%parent:


I've exported a zvol with sync=disabled to my Debian 9 box, created a filesystem on it, and mounted it. I see ~200 MB/s transfers, no more, even though the link is negotiated at 4 Gb (which should be good for roughly 400 MB/s). Both cards are the same QLogic 2460 with the same firmware.
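
On the FreeBSD side, the negotiated speed and topology can be read straight from the isp(4) sysctls (the same entries shown in the dump above), e.g.:
Code:
# negotiated link speed (Gb), topology, role and link state of the first port
sysctl dev.isp.0.speed dev.isp.0.topo dev.isp.0.role dev.isp.0.linkstate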

I've noticed that some people have had the same issue:
https://forums.freenas.org/index.ph...ibre-channel-target-mode-das-san.27653/page-7
https://forums.freenas.org/index.ph...e-is-fine-read-stuck-at-200mb-s-4gb-fc.40307/

Even when I use a ramdisk backend to exclude any I/O issues, I still get ~200 MB/s.

I also posted a thread on the FreeBSD forums:
https://forums.freebsd.org/threads/zfs-errors-while-using-ctld-with-fc-card-iscsi.67663/
With sync=disabled, the SYNCHRONIZE CACHE(10) errors go away.
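
For reference, I set that per zvol with something like:
Code:
# disable synchronous writes on the exported zvol (trades safety for speed)
zfs set sync=disabled zroot/debian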
 

kdragon75

Wizard
Joined
Aug 7, 2016
Messages
2,457
This should be moved to the off-topic section.
From your FreeBSD post: "Same problems using same LUNs on iSCSI but without errors in dmesg."
Is this ZFS backed? If so, what does the pool config look like? What's the zvol volblocksize?
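
Something like this should show what I'm after (adjust the dataset name; zroot/debian here is just a placeholder for your zvol):
Code:
# pool layout and health
zpool list -v
zpool status
# zvol block size and the properties that matter for a LUN
zfs get volblocksize,volsize,compression,sync zroot/debian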
 

luqasz

Dabbler
Joined
Sep 25, 2018
Messages
10
Code:
[lkostka@ibox ~]$ zfs get all zroot/debian
NAME		  PROPERTY			  VALUE				  SOURCE
zroot/debian  type				  volume				 -
zroot/debian  creation			  Sun Sep 23 22:17 2018  -
zroot/debian  used				  15.5G				  -
zroot/debian  available			 1.01T				  -
zroot/debian  referenced			15.5G				  -
zroot/debian  compressratio		 1.00x				  -
zroot/debian  reservation		   none				   default
zroot/debian  volsize			   20G					local
zroot/debian  volblocksize		  8K					 default
zroot/debian  checksum			  on					 default
zroot/debian  compression		   lz4					inherited from zroot
zroot/debian  readonly			  off					default
zroot/debian  copies				1					  default
zroot/debian  refreservation		none				   default
zroot/debian  primarycache		  all					default
zroot/debian  secondarycache		all					default
zroot/debian  usedbysnapshots	   0					  -
zroot/debian  usedbydataset		 15.5G				  -
zroot/debian  usedbychildren		0					  -
zroot/debian  usedbyrefreservation  0					  -
zroot/debian  logbias			   latency				default
zroot/debian  dedup				 off					default
zroot/debian  mlslabel									 -
zroot/debian  sync				  disabled			   local
zroot/debian  refcompressratio	  1.00x				  -
zroot/debian  written			   15.5G				  -
zroot/debian  logicalused		   15.5G				  -
zroot/debian  logicalreferenced	 15.5G				  -
zroot/debian  volmode			   default				default
zroot/debian  snapshot_limit		none				   default
zroot/debian  snapshot_count		none				   default
zroot/debian  redundant_metadata	most				   inherited from zroot

All LUNs are zvol backed.

sysctl.conf
Code:
vfs.zfs.txg.timeout=1

vfs.zfs.prefetch_disable=0
vfs.zfs.scrub_delay=0
vfs.zfs.top_maxinflight=128
vfs.zfs.resilver_min_time_ms=5000
vfs.zfs.resilver_delay=0
vfs.zfs.l2arc_noprefetch=0


loader.conf
Code:
kern.geom.label.disk_ident.enable="0"
kern.geom.label.gptid.enable="0"
coretemp_load="YES"
isp_load="YES"
ispfw_load="YES"
zfs_load="YES"

vfs.zfs.prefetch_disable=0
vfs.zfs.min_auto_ashift=12
vfs.zfs.arc_max="16G"
vfs.zfs.vdev.cache.size="512M"
 

kdragon75

Wizard
Joined
Aug 7, 2016
Messages
2,457
OK, but what about the pool? zpool list -v
 

luqasz

Dabbler
Joined
Sep 25, 2018
Messages
10
OK, but what about the pool? zpool list -v
Code:
NAME		 SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG	CAP  DEDUP  HEALTH  ALTROOT
zroot	   1.81T   767G  1.06T		-		 -	13%	41%  1.00x  ONLINE  -
  mirror	 928G   409G   519G		-		 -	16%	44%
	ada0		-	  -	  -		-		 -	  -	  -
	ada3		-	  -	  -		-		 -	  -	  -
  mirror	 928G   358G   570G		-		 -	10%	38%
	ada1		-	  -	  -		-		 -	  -	  -
	ada2		-	  -	  -		-		 -	  -	  -
cache		   -	  -	  -		 -	  -	  -
  ada4	   168G  17.9G   150G		-		 -	 0%	10%
 

kdragon75

Wizard
Joined
Aug 7, 2016
Messages
2,457
You should have a volblocksize of 64K, not 8K, for hosting OS images.
 

kdragon75

Wizard
Joined
Aug 7, 2016
Messages
2,457
Does this apply to all LUNs exported over iSCSI / FC?
This is set during the creation of a zvol or dataset. It's a ZFS property, not a ctl one.
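
Since volblocksize is fixed at creation time, switching to 64K means creating a new zvol and copying the data over, roughly like this (names here are just examples):
Code:
# create a replacement zvol with 64K blocks
zfs create -V 20G -o volblocksize=64K zroot/debian64k
# copy the old LUN's contents, then point ctl.conf at the new zvol
dd if=/dev/zvol/zroot/debian of=/dev/zvol/zroot/debian64k bs=1M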
 

kdragon75

Wizard
Joined
Aug 7, 2016
Messages
2,457
OK. I just wanted to ask whether setting the block size to 64K is good practice, or whether "it depends on your use case"?
It depends on your use case. If all of your files are smaller than 64K, you can read a file in one read operation. It's a game of averages. If I had a database that only worked in 8K chunks, 8K would make sense. For a media streaming server you may benefit from a larger block.

If you dig into Oracle's documentation on configuring zvol-backed LUNs for storing OS images, 64K is the sweet spot. You may consider reporting the LUNs as 512 or 4K for compatibility reasons. For example, if this is not set, ctl will report the sector size as 64K, and ESXi only works with 512 or 4K.
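
In ctl.conf that would look something like the sketch below; blocksize sets the logical sector size the initiator sees, and (if I'm reading ctl.conf(5) right) option pblocksize can advertise the larger physical block for alignment. The lun name here is just an example:
Code:
lun example {
	path /dev/zvol/zroot/debian
	blocksize 512
	option pblocksize 65536
	device-id example
}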
 

kdragon75

Wizard
Joined
Aug 7, 2016
Messages
2,457
Don't take my word as gospel, as I'm still sorting a lot of this out myself. Big O's (Oracle's) documents and the FreeBSD Handbook have been two great resources to dig through.
 

luqasz

Dabbler
Joined
Sep 25, 2018
Messages
10
Thanks for all your replies and information. I did a simple test with dd and a ramdisk-backed LUN.
Code:
dd if=/dev/sdc of=/dev/null

Reading and writing both give me ~180 MB/s.

Tests with XFS created on the LUN give the same results.
I just don't get it.
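
For the record, dd without a bs option reads in 512-byte chunks, which can itself cap throughput; a fairer sequential test would be something like:
Code:
# 1 MiB reads, bypassing the Linux page cache on the initiator
dd if=/dev/sdc of=/dev/null bs=1M count=4096 iflag=direct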
 

kdragon75

Wizard
Joined
Aug 7, 2016
Messages
2,457
Well, the only common point is ctl. To be clear, you had the same result with two filesystems, Fibre Channel, iSCSI, a physical disk, and a RAM disk, so it's not the ZFS pool, the network, Fibre Channel itself, or the disks. I am assuming that local dd tests yield the speeds you're expecting. Time to look at ctl tuning.
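
A quick way to see what there is to tune and how the LUNs/ports are set up (nothing fancy, just the stock tools):
Code:
# CTL tunables and counters
sysctl kern.cam.ctl
# LUNs with their backend options
ctladm devlist -v
# frontend ports (camtgt/isp0 should be online)
ctladm portlist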
 

kdragon75

Wizard
Joined
Aug 7, 2016
Messages
2,457
Here are my sysctl settings for ctl that may or may not be relevant.
Code:
kern.cam.ctl.block.num_threads: 14
kern.cam.ctl.lun_map_size: 1024
kern.cam.ctl.debug: 0
kern.cam.ctl.worker_threads: 3
 

luqasz

Dabbler
Joined
Sep 25, 2018
Messages
10
Code:
kern.coredump_devctl: 0
kern.racct.rctl.throttle_pct2: 4294967295
kern.racct.rctl.throttle_pct: 4294967295
kern.racct.rctl.throttle_max: 4294967295
kern.racct.rctl.throttle_min: 4294967295
kern.racct.rctl.devctl_rate_limit: 10
kern.racct.rctl.log_rate_limit: 10
kern.racct.rctl.maxbufsize: 16777216
kern.features.rctl: 1
kern.cam.ctl2cam.max_sense: 252
kern.cam.ctl.iscsi.maxtags: 256
kern.cam.ctl.iscsi.login_timeout: 60
kern.cam.ctl.iscsi.ping_timeout: 5
kern.cam.ctl.iscsi.debug: 1
kern.cam.ctl.ha_role: 0
kern.cam.ctl.ha_link: 0
kern.cam.ctl.ha_id: 0
kern.cam.ctl.ha_mode: 0
kern.cam.ctl.block.num_threads: 14
kern.cam.ctl.max_ports: 256
kern.cam.ctl.max_luns: 1024
kern.cam.ctl.time_io_secs: 90
kern.cam.ctl.lun_map_size: 1024
kern.cam.ctl.debug: 0
kern.cam.ctl.worker_threads: 2
vfs.zfs.version.ioctl: 7
debug.fail_point.status_sysctl_running: off
debug.fail_point.sysctl_running: off
hw.bus.devctl_queue: 1000
hw.bus.devctl_disable: 0
hw.usb.xhci.ctlstep: 0
 

luqasz

Dabbler
Joined
Sep 25, 2018
Messages
10
Well, the only common point is ctl. To be clear, you had the same result with two filesystems, Fibre Channel, iSCSI, a physical disk, and a RAM disk, so it's not the ZFS pool, the network, Fibre Channel itself, or the disks. I am assuming that local dd tests yield the speeds you're expecting. Time to look at ctl tuning.

I do not have a 10G network yet. I will test that as soon as I receive my card and will post my results here.

I will also do tests with Debian instead of FreeBSD to exclude any FC-related issues.
 

luqasz

Dabbler
Joined
Sep 25, 2018
Messages
10
I got my 10G Mellanox card. With iSCSI I do not get any errors and I can saturate 10G. With FC I still get ~200 MB/s.

ctl.conf
Code:
lun win_rest {
	path /mnt/vms/windows_d.img
	blocksize 4096
	option unmap on
	size 500G
	device-id win_rest
}

lun win_sys {
	path /dev/zvol/zroot/win_sys
	blocksize 512
	option unmap on
	size 120G
	device-id win_sys
}

portal-group pg0 {
	discovery-auth-group no-authentication
	listen 172.31.2.1
}

target iqn.2018-03.pl.netng:pc-target {
	auth-group no-authentication
	portal-group pg0
	lun 0 win_sys
	lun 1 win_rest
}

target naa.2100001b329e76b6 {
	auth-group no-authentication
	port isp0
	lun 0 win_sys
	lun 1 win_rest
}
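
After changing ctl.conf I restart ctld and check that the FC port and LUNs came back, roughly:
Code:
# re-read /etc/ctl.conf
service ctld restart
# the camtgt/isp0 port should be online and both LUNs listed
ctladm portlist
ctladm devlist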
 

FastCode

Cadet
Joined
Apr 8, 2019
Messages
1
I got my 10G Mellanox card. With iSCSI I do not get any errors and I can saturate 10G. With FC I still get ~200 MB/s.

Were you able to fix the problem with FC?
 