Hello,
I'm doing some experiments in my lab with iSCSI multipath.
I have a server running FreeNAS 8.3-RC1 (installed today); the server is a Dell 2940 with two Xeons, 12 GB of memory, and two 320 GB SAS disks.
I want to do some speed tests, so I configured the disks as RAID 0.
A simple dd on the FreeNAS server shows that the array can transfer about 300 MB/s:
Code:
[root@freenas] /mnt/Raid0# ls -lh Disk00
-rw-r--r--  1 root  wheel    80G Mar 20 23:01 Disk00
[root@freenas] /mnt/Raid0# dd if=test of=/dev/null bs=32k count=524288
80000+0 records in
80000+0 records out
2621440000 bytes transferred in 8.653268 secs (302942191 bytes/sec)
[root@freenas] /mnt/Raid0#
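One caveat I'm aware of: the server has 12 GB of RAM and the test file is only 2.5 GB, so some of this read could be served from cache rather than from the disks. If it matters, I can repeat the test with a file larger than RAM, along these lines (the file name and size are just an example):

Code:
[root@freenas] /mnt/Raid0# dd if=/dev/zero of=bigtest bs=1m count=16384
[root@freenas] /mnt/Raid0# dd if=bigtest of=/dev/null bs=32k

A 16 GB file cannot fit entirely in 12 GB of memory, so the second dd has to touch the disks for at least part of the data.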
Then I created an iSCSI target with two interfaces on different subnets (192.168.0.35/24 and 192.168.10.1/24).
On another server running Proxmox I configured an iSCSI disk using multipath; the configuration seems correct, and it shows both paths as active:
Code:
root@server:~# multipath -ll
330000000eab5e18d dm-17 FreeBSD,iSCSI Disk
size=80G features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=2 status=active
  |- 7:0:0:0 sde 8:64 active ready running
  `- 8:0:0:0 sdf 8:80 active ready running
This is my /etc/multipath.conf:
Code:
blacklist {
        wwid *
}
blacklist_exceptions {
        wwid "330000000eab5e18d"
}
defaults {
        polling_interval 2
        selector "round-robin 0"
        path_grouping_policy multibus
        getuid_callout "/lib/udev/scsi_id -g -u -d /dev/%n"
        rr_min_io 1000
        failback immediate
        no_path_retry fail
}
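For reference, as I understand it rr_min_io 1000 tells the multipath layer to send 1000 consecutive I/Os down one path before switching to the next; with 32 KB requests that is roughly 32 MB on a single path between switches, so a single sequential dd may spend nearly all of its time on one link. If that is part of the problem, a smaller value (the 100 below is just an example) should spread a sequential stream across both paths more often:

Code:
defaults {
        # example only: switch paths every 100 I/Os instead of every 1000
        rr_min_io 100
}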
I tried to measure the read speed with dd on the mounted disk and got about 116 MB/s:
Code:
root@server:~# dd if=/dev/dm-17 of=/dev/null bs=32k count=524288
524288+0 records in
524288+0 records out
17179869184 bytes (17 GB) copied, 148.239 s, 116 MB/s
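That figure is suspiciously close to what a single gigabit link can carry: the 941 Mbit/s that iperf measures (below), divided by 8 bits per byte, is about 117 MB/s, so it looks as if only one path is moving data at any given moment. A quick sanity check of the arithmetic:

```shell
# convert the iperf result from Mbit/s to MB/s (8 bits per byte)
echo $((941 / 8))
```

which prints 117, essentially the 116 MB/s that dd reports.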
If I disable one of the interfaces, I get about the same speed:
Code:
root@server:~# ifconfig eth1 down
root@server:~# multipath -ll
330000000eab5e18d dm-17 FreeBSD,iSCSI Disk
size=80G features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
  |- 7:0:0:0 sde 8:64 failed faulty running
  `- 8:0:0:0 sdf 8:80 active ready running
root@server:~# dd if=/dev/dm-17 of=/dev/null bs=32k count=524288
524288+0 records in
524288+0 records out
17179869184 bytes (17 GB) copied, 149.271 s, 115 MB/s
root@server:~#
Here is the iperf output for the two interfaces:
Code:
root@server:~# iperf -c 192.168.0.35
------------------------------------------------------------
Client connecting to 192.168.0.35, TCP port 5001
TCP window size: 23.8 KByte (default)
------------------------------------------------------------
[  3] local 192.168.0.252 port 55732 connected with 192.168.0.35 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec  1.10 GBytes   941 Mbits/sec
root@server:~# iperf -c 192.168.10.1
------------------------------------------------------------
Client connecting to 192.168.10.1, TCP port 5001
TCP window size: 23.8 KByte (default)
------------------------------------------------------------
[  3] local 192.168.10.2 port 60092 connected with 192.168.10.1 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec  1.10 GBytes   942 Mbits/sec
Is this normal behaviour? From what I've read, multipath should increase performance by using more than one connection, but that does not seem to happen in my case.