Slow speed with Fibre Channel Disks on FreeNAS 11

Status
Not open for further replies.

IceBoosteR

Guru
Joined
Sep 27, 2016
Messages
503
Hello guys,

maybe you can help me with my problem.
At the moment I have a third FreeNAS server, which is running on an ESXi host (alone for now).
The host has 2x Xeon L5630 (4c/8t each), 56 GB ECC RAM and a QLogic dual-port 4Gb FC card which is passed through to the FreeNAS VM; the VM has 8 vCPUs and 40 GB RAM. Via FC I have connected a NetApp DS14MK4 with dual ESH4 controllers and 14x 450GB 15,000 RPM disks.
I have only one port connected, and I get really slow read and write speeds. While copying data to that machine, the transfer drops to 0 KB/s every minute for a few seconds and peaks at about 72 MB/s. This is way too slow. Everything is connected with gigabit Ethernet and the disks are in one RAIDZ3 (yeah, I know the risks of that vdev size; it does not matter here).
I did a previous test: one FC cable, all disks in RAID0 under Windows, and I saw nearly 400 MB/s read and 130 MB/s write.
I know there is an impact because FreeNAS also has to write parity, but is there a way to speed things up? I can also see 100% disk busy, which I did not expect from 15k disks...
Maybe the FC ESH controller is the bottleneck; would multipathing be an option here to speed things up?
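For reference, I could run a local test like this from the FreeNAS shell (a rough sketch; "tank" stands in for my actual pool name) to separate the disks from the Ethernet path, since it writes straight to the pool and watches the vdevs at the same time:

    # dataset with compression off, so the zeros from dd are really written to disk
    zfs create -o compression=off tank/speedtest
    # sequential write of ~16 GiB, large enough to get past RAM caching
    dd if=/dev/zero of=/mnt/tank/speedtest/testfile bs=1m count=16384
    # in a second SSH session: per-vdev throughput, refreshed every second
    zpool iostat -v tank 1

If the local numbers are as bad as the network copy, the problem sits in the disk/FC path and not in the share or the NIC.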

I cannot believe that the disks are sooo slow.

Regards
IceBoosteR
 

Attachments

  • speed.png (1.2 MB)
  • FNNetApp Disks.png (186.1 KB)

IceBoosteR

Guru
Joined
Sep 27, 2016
Messages
503
No one? :(
 

IceBoosteR

Guru
Joined
Sep 27, 2016
Messages
503
Got an update on that.

I ordered another FC cable. Plugged it in and, BOOM:
FreeNAS now detected 28 drives instead of 14, and GEOM created partitions like da15p1. While importing the pool, the system crashed and rebooted.
The system got confused by all those disks.
After that I unplugged the new cable, started the system and tried to import the pool. After one hour the GUI was still trying to import it, /var/log/messages only reported some VDEV state changes and nothing more, so I rebooted the machine. The next try went the same way. After half an hour I clicked the "X" button on the import dialog and went to SSH. With "zpool status" I saw that the pool was imported, but I could not access the data or start/stop any service. With that action I literally destroyed the pool.
Well, it was only for backup, so I detached the pool and rebooted, but nothing changed for the disks: under "View Disks" I only saw 14, while under /dev/da* there were 28. So never mind; I wiped all the disks and rebooted. After that I could see "View Multipath" in the GUI and gmultipath worked as well.
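(For anyone hitting the same thing: checking from the CLI instead of the GUI is roughly what I would do next time, sketched from memory, with "backup" as a placeholder pool name:

    # how CAM and GEOM see the shelf after recabling
    camcontrol devlist
    gmultipath status
    glabel status
    # import/export the pool by hand instead of through the GUI
    zpool import -f backup
    zpool export backup
)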
But performance sucked again. I found out that multipath was in Active/Passive mode, so I changed it manually to Active/Active, created a new pool and tested again. Same performance.
Destroyed it again and made a mirror for more throughput; no change in performance.
So in theory: either multipath does not work properly for me (the disks themselves should be fast enough; it is a mirror and, hey, these are 15k drives), or the controllers are bad (even though I now have A/A multipathing), and I have no other ideas...
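The switch from Active/Passive to Active/Active itself was just gmultipath, roughly like this, with "disk1" standing in for each multipath device name:

    gmultipath status                # shows the members and their ACTIVE/PASSIVE state
    gmultipath configure -A disk1    # put the named device into Active/Active mode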
Maybe the ethernet driver?

But that does not explain the 100% disk busy...
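(If it were the NIC, a plain iperf run between the client and the FreeNAS VM should show it; something like the following, assuming iperf3 is available on both ends:

    # on FreeNAS
    iperf3 -s
    # on the machine sending the data, 30-second test
    iperf3 -c <freenas-ip> -t 30

A clean ~940 Mbit/s there would rule out the Ethernet side, but as said, it would not explain the disks sitting at 100% busy.)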
 

Matt83k

Cadet
Joined
Dec 16, 2017
Messages
9
Hello,

I'm sorry for bumping this post after such a long time, but when I read it I remembered one thing for IceBoosteR: if you don't connect port 1 to port 1, the system sees double the number of your disks.
I had the same surprise the first time between an HP disk shelf and an HP ProLiant DL360... I didn't pay attention and the system saw 24 drives.

I have a question: you did the test between a 2008 server and a FreeNAS, but that link is an Ethernet cable? And the connection between the FreeNAS and the NetApp is an FC cable?
 

IceBoosteR

Guru
Joined
Sep 27, 2016
Messages
503
Hi,
If I plug only one FC cable into one controller, I see 14 disks. Two cables, one to each controller, showed me 28 disks. That is what let me enable multipathing. But I suspect the I/O limit or bottleneck comes from the ESH4 NetApp controllers, as they might be really slow when it comes to writes...

The tests I did were:
1: A Windows Server VM with the QLogic card passed through, a RAID0/stripe set up in Windows, and then the speed test.
2: A FreeNAS VM, also on ESXi, with the HBA passed through and RAIDZ1, mirror, whatever. Stuck at slow write speeds. Data came to the VMs at gigabit speed.
So the connection was:
Data -> Ethernet -> Host -> VM -> FC disks
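To pin down whether the ESH4s really are the limit, watching the individual disks during a transfer should tell: if every da device sits near 100% busy with high ms/w at low throughput, the shelf/controller path is the suspect; if only a few are busy, it is more likely a pool layout issue. Roughly:

    # per-disk queue length, ops/s, ms per read/write and %busy, refreshed every second
    gstat -p -I 1s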
 