Peppermint
Hi,
I am working on network storage for our ESXi cluster, based on FreeNAS.
The cluster consists of three HP DL360 G8 servers, each with an Intel X540-T2 card. At first I tried to export the share via NFS, but the performance was very bad, as I already described in this post. Now I have switched to iSCSI. The three ESXi servers are connected to the FreeNAS box via a managed Netgear switch as follows:
freenas port 1 10.1.0.1/24 VLAN 11
freenas port 2 10.2.0.1/24 VLAN 12
esxi1 port 1 10.1.0.2/24 VLAN 11
esxi1 port 2 10.2.0.2/24 VLAN 12
esxi2 port 1 10.1.0.3/24 VLAN 11
esxi2 port 2 10.2.0.3/24 VLAN 12
esxi3 port 1 10.1.0.4/24 VLAN 11
esxi3 port 2 10.2.0.4/24 VLAN 12
There is no routing between the VLANs, and the switch is used only for iSCSI traffic. Everything runs at 10GbE full duplex.
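For what it's worth, each path can be verified separately with vmkping (vmk1/vmk2 are placeholders for the two iSCSI VMkernel ports; adjust to match your setup):

# On each ESXi host, ping the FreeNAS portal through each iSCSI vmk separately
vmkping -I vmk1 10.1.0.1
vmkping -I vmk2 10.2.0.1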
FreeNAS box:
2x Xeon E5-2609 @ 2.5 GHz
128 GB RAM
Intel X540-T2 dual-port 10GbE card
LSI 9300-8i 12Gb/s SAS HBA with 12 HGST Ultrastar 15K600 drives (6 of them in a software RAID 10)
iSCSI extent:
Extent: 2 TB zvol on six disks configured as RAID 10
Portal: 10.1.0.1 port 3260, 10.2.0.1 port 3260
iSCSI settings on ESXi:
Two VMkernel ports for iSCSI, each bound to exactly one physical NIC
Discovery: dynamic
Path selection policy: Round Robin
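One thing I still want to rule out is the Round Robin IOPS limit: by default ESXi sends 1000 I/Os down a path before switching, which can keep a single stream effectively on one link. A sketch of how to lower it (naa.xxx is a placeholder for the actual device identifier):

# Show the iSCSI devices and their current path selection settings
esxcli storage nmp device list
# Switch paths after every single I/O instead of every 1000
esxcli storage nmp psp roundrobin deviceconfig set --type=iops --iops=1 --device=naa.xxx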
Performance:
Running dd (bs=2M count=7500) in a single VM located on the iSCSI storage, I see a peak of about 1.2 Gbit/s on both interfaces. Running dd in two VMs simultaneously, the peak stays at 1.2 Gbit/s. The pool can do 2.3 GB/s write and about 5 GB/s read (measured with iozone and dd), so the disks should not be the bottleneck…
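For reference, the test inside the guest was essentially the following (assuming a Linux guest; the output path is just an example, and conv=fdatasync keeps the guest's page cache from inflating the number):

# Write ~15 GB and flush to disk before dd reports throughput
dd if=/dev/zero of=/root/ddtest bs=2M count=7500 conv=fdatasync

Note that /dev/zero compresses perfectly, so if compression is enabled on the zvol this will overstate real throughput.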
Because performance is still not what it should be, I assume this is affected by the Intel ixgbe driver issue…
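In case it helps the discussion, these are the kinds of FreeBSD tunables I plan to experiment with on the FreeNAS box; the values below are only starting points I have seen suggested for 10GbE, not verified fixes:

# /boot/loader.conf.local (takes effect after reboot)
kern.ipc.nmbclusters="262144"      # more mbuf clusters for 10GbE traffic
hw.ix.rx_process_limit="512"       # let the ixgbe driver process more packets per poll
# /etc/sysctl.conf
kern.ipc.maxsockbuf=16777216       # allow larger socket buffers
net.inet.tcp.sendbuf_max=16777216  # raise TCP send buffer ceiling
net.inet.tcp.recvbuf_max=16777216  # raise TCP receive buffer ceiling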
It would be great if anyone could confirm this, or even has a hint on how to improve performance =)
Thanks,
Peppermint