9.3.1 is... slow?

Status: Not open for further replies.

kspare · Guru · Joined: Feb 19, 2015 · Messages: 508
Only two things have changed: I updated to 9.3.1 (the latest as of today), and I also updated the firmware on my LSI cards to the recommended version.

Now instead of getting 6-8GB/s per link I'm getting around 500-600MB/s.

I'm trying to reset all my autotune values tonight to see if that helps at all, but I'm close to reverting back to 9.3, as it was working pretty damn perfectly. We really did the update just to get to the latest LSI firmware.
 

kspare · Guru · Joined: Feb 19, 2015 · Messages: 508
I should also mention I'm using iSCSI with 12 mirrored vdevs, 64GB RAM, and an Intel 750 for L2ARC.
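Just to make that layout concrete, "12 mirrored vdevs" means the pool is a stripe of twelve 2-way mirrors (24 data disks total). A quick sketch of what that looks like, with purely hypothetical disk and pool names:

```python
# Sketch only: builds the zpool create command line for a stripe of twelve
# 2-way mirrors. Disk names (da0..da23) and the pool name are placeholders.
disks = ["da%d" % i for i in range(24)]
cmd = ["zpool", "create", "tank"]
for a, b in zip(disks[0::2], disks[1::2]):
    cmd += ["mirror", a, b]
print(" ".join(cmd))
# -> zpool create tank mirror da0 da1 mirror da2 da3 ... mirror da22 da23
```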
 

kspare · Guru · Joined: Feb 19, 2015 · Messages: 508
I'll simplify.

I have dual 10GB links and I can hit 60-80% data rates. Call it whatever you want, but the point is I'm down to 8%.
 

jgreco · Resident Grinch · Joined: May 29, 2011 · Messages: 18,680
I wasn't aware that there were any 100 gigabit adapters supported, and you have two of them? That's amazing.
 

kspare · Guru · Joined: Feb 19, 2015 · Messages: 508
Are you done? Can we move on and deal with the issue of the update being slow?
 

depasseg · FreeNAS Replicant · Joined: Sep 16, 2014 · Messages: 2,874
What make/model are your 100GbE NICs? Maybe there is a driver issue.

Have you run iperf and dd (both locally and remotely) to determine whether it is network related, datastore related, or both?
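Something along these lines would separate the two (a rough sketch only; the hostname, dataset path, and sizes are placeholders, it assumes iperf is already running in server mode on the FreeNAS box, and the dd number is only meaningful if compression is off on the test dataset):

```python
#!/usr/bin/env python
# Rough sketch of the two tests: iperf for the raw network path,
# dd for sequential pool throughput (run the dd part on the NAS itself).
import subprocess
import time

FREENAS_HOST = "192.168.1.50"       # placeholder address of the NAS
TEST_FILE = "/mnt/tank/ddtest.bin"  # placeholder dataset path
SIZE_MB = 8192                      # 8 GiB test file

def network_test():
    """Measure raw TCP throughput with classic iperf (-f m = report in Mbits/s)."""
    out = subprocess.check_output(
        ["iperf", "-c", FREENAS_HOST, "-t", "10", "-f", "m"])
    print(out.decode() if isinstance(out, bytes) else out)

def pool_write_test():
    """Measure sequential write speed to the pool with dd."""
    start = time.time()
    subprocess.check_call(
        ["dd", "if=/dev/zero", "of=" + TEST_FILE,
         "bs=1m", "count=%d" % SIZE_MB])  # lowercase 'm' works on FreeBSD dd
    print("pool write: %.0f MB/s" % (SIZE_MB / (time.time() - start)))

if __name__ == "__main__":
    network_test()
    pool_write_test()
```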
 

kspare · Guru · Joined: Feb 19, 2015 · Messages: 508
Chelsio S320E-LP-CR

I haven't gotten a chance to dig that far into it; I was more curious whether others have had similar issues.

I have noticed that removing all the tunables and re-adding them seems to help the problem.
 

cyberjock · Inactive Account · Joined: Mar 25, 2012 · Messages: 19,525
Are you done? Can we move on and deal with the issue of the update being slow?

Sure... but you've confused the hell out of me too! Then you also said this:
I'll simplify.

I have dual 10GB links and I can hit 60-80% data rates. Call it whatever you want, but the point is I'm down to 8%.

But above you said:
Now instead of getting 6-8GB/s per link I'm getting around 500-600MB/s.

So can you explain what your network topology is, what your throughput used to be (with proper units), and what your throughput is now (with proper units)? Because it sounds like you think you have 100Gb/sec links, were getting those speeds, but are now getting only 10Gb, and think that's 8% of 100Gb. I don't even understand what I'm reading...

It would also be appreciated if you'd be a little less pedantic about us telling you to use proper units. If you can't use proper units then we have a nearly impossible time figuring out *if* you even have a problem. Thanks.
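For what it's worth, the unit arithmetic is easy to sanity-check (nothing here is specific to this setup):

```python
# A "10Gb" network link is 10 gigaBITS per second, not gigabytes.
def gbit_to_mbyte(gbit_per_s):
    """Convert gigabits/s to megabytes/s (8 bits per byte, 1000-based units)."""
    return gbit_per_s * 1000 / 8.0

link = 10.0                                   # one 10 Gb/s link
print(gbit_to_mbyte(link))                    # ~1250 MB/s theoretical ceiling
print(gbit_to_mbyte(0.6 * link),
      gbit_to_mbyte(0.8 * link))              # 60-80% of a link = 750-1000 MB/s
print(100 * 550 / gbit_to_mbyte(link))        # what fraction of a link 550 MB/s is
```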
 

kspare · Guru · Joined: Feb 19, 2015 · Messages: 508
The adapter I have is listed above.

I have a Cisco Nexus 5020 10GB switch; my ESXi 6 hosts have Intel 520DA dual 10gb adapters.
I'm using VMware to load balance the links with iSCSI, so no LACP or anything like that.

We keep our VMs in a thick-provisioned state. Usually when moving from local disks on the host to the iSCSI storage we would see rates of about 600 mbit/s (around 60% usage on each link); right now we're seeing about 50-100 mbit per link.

I did have some values for L2ARC turned up to try to utilize the Intel 750 PCIe card that we have, but I've found that if I reset the tunables to the defaults it appears to work faster... not sure. I have been slammed with work the past few days and obviously haven't been able to put in the proper testing.

My problem is that all I did was update to 9.3.1 and upgrade the firmware on my LSI adapters to be compliant.

Sounds like it's just me having the problem, so I'll have to migrate the data off one of the servers and do some testing.
 

cyberjock · Inactive Account · Joined: Mar 25, 2012 · Messages: 19,525
Okay.

What are the specs of this system?

After a reboot it takes time for the L2ARC to really warm up and become useful. Depending on various factors it can range from a few minutes to a few days.
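If you want to actually watch it warm up, polling the L2ARC counters that FreeBSD exposes through sysctl works; a rough sketch (run it on the FreeNAS box itself, using the standard arcstats kstat names):

```python
#!/usr/bin/env python
# Poll L2ARC counters once a minute to see whether the cache is filling
# up and actually serving reads.
import subprocess
import time

STATS = [
    "kstat.zfs.misc.arcstats.l2_size",    # bytes of data currently in L2ARC
    "kstat.zfs.misc.arcstats.l2_hits",    # reads served from the L2ARC
    "kstat.zfs.misc.arcstats.l2_misses",  # reads that fell through to the pool
]

def read_stat(name):
    out = subprocess.check_output(["sysctl", "-n", name])
    return int(out.decode().strip())

while True:
    v = dict((s.split(".")[-1], read_stat(s)) for s in STATS)
    total = v["l2_hits"] + v["l2_misses"]
    hit_rate = 100.0 * v["l2_hits"] / total if total else 0.0
    print("l2arc size: %.1f GiB, hit rate: %.1f%%"
          % (v["l2_size"] / 2.0 ** 30, hit_rate))
    time.sleep(60)
```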

Can you post a debug file of your system?
 

Mlovelace · Guru · Joined: Aug 19, 2014 · Messages: 1,111
The adapter I have is listed above.

I have a Cisco Nexus 5020 10GB switch; my ESXi 6 hosts have Intel 520DA dual 10gb adapters.
I'm using VMware to load balance the links with iSCSI, so no LACP or anything like that.

We keep our VMs in a thick-provisioned state. Usually when moving from local disks on the host to the iSCSI storage we would see rates of about 600 mbit/s (around 60% usage on each link); right now we're seeing about 50-100 mbit per link.

I did have some values for L2ARC turned up to try to utilize the Intel 750 PCIe card that we have, but I've found that if I reset the tunables to the defaults it appears to work faster... not sure. I have been slammed with work the past few days and obviously haven't been able to put in the proper testing.

My problem is that all I did was update to 9.3.1 and upgrade the firmware on my LSI adapters to be compliant.

Sounds like it's just me having the problem, so I'll have to migrate the data off one of the servers and do some testing.

Let me fix this for you since you didn't read the link from above.

The adapter I have is listed above.

I have a Cisco Nexus 5020 10Gb switch; my ESXi 6 hosts have Intel 520DA dual 10Gb adapters.
I'm using VMware to load balance the links with iSCSI, so no LACP or anything like that.

We keep our VMs in a thick-provisioned state. Usually when moving from local disks on the host to the iSCSI storage we would see rates of about 600 megabytes/s (around 60% usage on each link); right now we're seeing about 50-100 megabytes/s per link.

I did have some values for L2ARC turned up to try to utilize the Intel 750 PCIe card that we have, but I've found that if I reset the tunables to the defaults it appears to work faster... not sure. I have been slammed with work the past few days and obviously haven't been able to put in the proper testing.

My problem is that all I did was update to 9.3.1 and upgrade the firmware on my LSI adapters to be compliant.

Sounds like it's just me having the problem, so I'll have to migrate the data off one of the servers and do some testing.
 

wreedps · Patron · Joined: Jul 22, 2015 · Messages: 225
Did you get this fixed?
 

kspare · Guru · Joined: Feb 19, 2015 · Messages: 508
No, I've just been too busy. We've started to build out a VMware vSAN option as well, and we've moved all of our file servers and terminal servers onto that storage for now, until I can get this figured out.

vSAN works great, but FreeNAS always makes our terminal servers appear to run faster. For now I just don't have the time to put into this.
 

kspare · Guru · Joined: Feb 19, 2015 · Messages: 508
Interesting: I took my other Intel 750 SSD from my other server to use as a ZIL and to test NFS. Everything works great. Speed is great!

What we are doing, since we don't really want to use LACP, is balancing VMs between the two 10Gb links we have on the server.

NFS, ZIL, and L2ARC for the win?
 