New hardware, slow transfer speeds


arvdsn

Dabbler
Joined
Jul 25, 2016
Messages
11
Hi,

I've got an HP ProLiant ML350 Gen9 server with a JBOD attached to it through an IT-flashed LSI 9211-8i card. FreeNAS is running as a VM (with 8 cores of an E5-2609 v4 and 50GB of RAM, obviously ECC RDIMMs) on ESXi 6 with HBA passthrough. Shares to ESXi are NFS (I am, however, experimenting with iSCSI).

Back in the day I was running mirrored vdevs (internal transfer speeds were about 120MB/s), but later I reformatted everything to RAIDZ1 because I wanted more storage space, and while there was a decrease, it was still OK (around 70-80MB/s or so). Recently I upgraded my hardware, and now speeds are down to around 30MB/s and sometimes as low as 8-10MB/s. Transfers start out faster and then slow down, which makes me look at the ARC. And this happens on both RAIDZ1 and mirrored vdevs (more details below).

ARC stats:
  • Mainly documents and media
  • 2:21PM up 5:01, 2 users, load averages: 11.69, 15.15, 16.20
  • 610MiB / 11.9GiB (freenas-boot)
  • 16.2TiB / 29TiB (tank, RAIDZ1)
  • 1.78TiB / 3.62TiB (vol1, mirrored vdevs)
  • 44.48GiB (MRU: 29.55GiB, MFU: 14.93GiB) / 56.00GiB
  • Hit ratio -> 83.57% (higher is better)
  • Prefetch -> 23.43% (higher is better)
  • Hit MFU:MRU -> 66.64%:26.73% (higher ratio is better)
  • Hit MRU Ghost -> 0.34% (lower is better)
  • Hit MFU Ghost -> 0.07% (lower is better)

Some dd stats:

tank - RAIDZ1
Code:
[root@zfs] /mnt/tank# dd if=/dev/zero of=tmp.dat bs=2048k count=50k
51200+0 records in
51200+0 records out
107374182400 bytes transferred in 93.866476 secs (1143903414 bytes/sec) ~1144MB/s


vol1 - mirrored vdev
Code:
[root@zfs] /mnt/vol1# dd if=/dev/zero of=tmp.dat bs=2048k count=50k
51200+0 records in
51200+0 records out
107374182400 bytes transferred in 104.745244 secs (1025098403 bytes/sec) ~1025MB/s

^ Not sure I trust that test; wouldn't mirrored vdevs give better speeds with dd than RAIDZ1? And surely not numbers that high either way?
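One thing I suspect is that dd from /dev/zero mostly measures lz4 compressing the zeros away rather than the disks themselves. If it helps, this is roughly how I'd redo the test on a scratch dataset with compression turned off (the dataset name is just an example):

Code:
# create a throwaway dataset with compression disabled
zfs create -o compression=off tank/ddtest
# same write test as above, now without lz4 shrinking the zeros
dd if=/dev/zero of=/mnt/tank/ddtest/tmp.dat bs=2048k count=50k
# clean up afterwards
zfs destroy tank/ddtest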

Some iperf stats (Windows 10 box to the FreeNAS box):

Code:
[root@zfs] /mnt/vol1# iperf -s
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 64.0 KByte (default)
------------------------------------------------------------
[ ID] Interval       Transfer     Bandwidth
[  5] local 10.0.0.10 port 5001 connected with 10.0.0.51 port 50389
[  5]  0.0-10.1 sec   461 MBytes   383 Mbits/sec
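
For completeness, the matching client side on the Windows box was roughly this (the IP is my FreeNAS VM; exact output will vary with the iperf build):

Code:
C:\> iperf -c 10.0.0.10 -t 10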


I've read a lot about increasing the RAM to push up the ARC hit ratio, and this is something I will try. There's around 60GB more RAM available in the server right now, and another 64GB to be added soon. Further down the road, the CPU will be replaced with an E5-2650 v4.
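
In the meantime I've been checking what the ARC is actually allowed to grow to inside the VM versus what it currently holds; a quick sketch of the sysctls I'm looking at (just the commands, no output):

Code:
# maximum size the ARC is allowed to grow to
sysctl vfs.zfs.arc_max
# current ARC size
sysctl kstat.zfs.misc.arcstats.size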

The server is essential to my business, so I can't afford to shut it down too often, but I do plan on moving all storage from the JBOD into the server and adding an HP SAS expander when I get everything I need (not because of the speed issue, but rather temperature and noise).

So I'm clueless as to why it's not performing better than it is. I have a few other VMs running (which is why I haven't given more RAM to FreeNAS), but nothing that should impact the speed, at least I don't think so.

Is there anything else I'm missing that I can do in addition to increasing the RAM? Worth mentioning: I'm running a fresh installation of FreeNAS (9.10, the latest as of writing). I appreciate any help I can get.

PS: I just now realized this may be better suited to the Storage sub-forum. I apologize for that; please move it at your discretion.
 

arvdsn

Dabbler
Joined
Jul 25, 2016
Messages
11
Did you make any headway with this?

Nothing concrete, but it looks like the speeds drop severely when the ARC is "overloaded" (excuse my phrasing; I don't really know what it's doing, to be honest). Thus, increasing the RAM should help.
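
What I've been doing to "see" that is simply re-running these a few times while a big copy is going and watching how the ARC target and size move (a rough sketch; output omitted):

Code:
# target ARC size the system is currently aiming for
sysctl -n kstat.zfs.misc.arcstats.c
# current ARC size
sysctl -n kstat.zfs.misc.arcstats.size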

I haven't restarted anything yet due to travel. In about a week and a half I'll be back home and should have received most of the things I'm waiting for, so I'll report back then.
 

arvdsn

Dabbler
Joined
Jul 25, 2016
Messages
11
OK, so an update. The system now has an E5-2650 v4 (full CPU allocation) and 140GB of RAM. New ARC stats:

  • Mainly documents and media
  • 12:46AM up 1 day, 59 mins, 1 user, load averages: 0.71, 0.46, 0.37
  • 630MiB / 11.9GiB (freenas-boot)
  • 18.0TiB / 29TiB (tank)
  • 1.98TiB / 3.62TiB (vol1)
  • 126.82GiB (MRU: 75.94GiB, MFU: 50.88GiB) / 160.00GiB
  • Hit ratio -> 81.59% (higher is better)
  • Prefetch -> 67.30% (higher is better)
  • Hit MFU:MRU -> 50.19%:20.20% (higher ratio is better)
  • Hit MRU Ghost -> 0.81% (lower is better)
  • Hit MFU Ghost -> 0.63% (lower is better)

An rsync of a big file within the same zpool gives me ~60-65MB/s (sustained) on both zpools, but vol1 sometimes dips down to 30-40MB/s (the disks there are 4+ years old; I'm replacing them as they fail).
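
For reference, the test was roughly along these lines (the paths are just examples from my layout), with --progress showing the throughput as it goes:

Code:
# copy a large file to another dataset on the same pool and watch the rate
rsync -ah --progress /mnt/tank/media/bigfile.mkv /mnt/tank/scratch/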

I guess it's better now. And it works; the system uptime isn't that high yet. I'll do some testing in a few weeks, but it does seem like it should be quicker considering the hardware. I've also enabled autotune, but I won't reboot the server just yet.

Any ideas?
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,367
What's the inactive RAM reading? And the swap used?
 

arvdsn

Dabbler
Joined
Jul 25, 2016
Messages
11
What's the inactive RAM reading? And the swap used?
[Screenshot attached showing current memory usage, including inactive RAM and swap.]
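
In case the graph is hard to read, this is roughly how the same numbers can be pulled from the shell (just the commands, no output):

Code:
# one batch-mode snapshot of top; the Mem:, ARC: and Swap: lines show inactive RAM and swap
top -b | head -n 10
# swap devices and how much of each is in use
swapinfo -h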


Edit: sorry for the retina image.
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,367
Not the FreeBSD bug that causes inactive RAM to grow during rsync to iSCSI devices, thus shrinking the ARC and causing pathological memory searches before finally swapping, then ;)
 

SweetAndLow

Sweet'NASty
Joined
Nov 6, 2013
Messages
6,421
You're getting 1/3 of gigabit network speed. You should fix that first, since it probably isn't related to any of your hardware, unless you tell us you are running Realtek NICs.

 

Dice

Wizard
Joined
Dec 11, 2015
Messages
1,410
An rsync of a big file within the same zpool gives me ~60-65MB/s (sustained) on both zpools, but vol1 sometimes dips down to 30-40MB/s (the disks there are 4+ years old; I'm replacing them as they fail).

I guess it's better now. And it works; the system uptime isn't that high yet. I'll do some testing in a few weeks, but it does seem like it should be quicker considering the hardware. I've also enabled autotune, but I won't reboot the server just yet.

Any ideas?

This is not too far off the speeds I was used to seeing in my pool with similar types of data (smaller media files).
I kind of grew used to not getting better speeds.
The difference between 5-year-old WD Greens and brand-new WD Reds was... still within the same range. From a user perspective, I'd never notice the difference.
 

arvdsn

Dabbler
Joined
Jul 25, 2016
Messages
11
Not the FreeBSD bug that causes inactive RAM to grow during rsync to iSCSI devices, thus shrinking the ARC and causing pathological memory searches before finally swapping, then ;)

Well, look at that. I had no idea :)

You're getting 1/3 of gigabit network speed. You should fix that first, since it probably isn't related to any of your hardware, unless you tell us you are running Realtek NICs.

But the copy is internal and not over the network, so while you're right that it should be looked into, it shouldn't matter in this case.

Either way, I'm not sure what I was doing back then, but running it again now gives me near gigabit, and ESXi reports the NIC as a NetXtreme BCM5719 Gigabit Ethernet (an integrated 4 x 1Gbit NIC from HP).

Code:
------------------------------------------------------------
Client connecting to 10.0.0.51, TCP port 5001
TCP window size: 32.5 KByte (default)
------------------------------------------------------------
[  3] local 10.0.0.10 port 36166 connected with 10.0.0.51 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec  1.10 GBytes   941 Mbits/sec

This is not too far off the speeds I was used to seeing in my pool with similar types of data (smaller media files).
I kind of grew used to not getting better speeds.
The difference between 5-year-old WD Greens and brand-new WD Reds was... still within the same range. From a user perspective, I'd never notice the difference.

Yeah, I figured as much about the disks. But they are failing left and right, and considering how Seagate suffered a lot of failures with the old 2TB line of disks, I can't say I put much faith in the ones I have left (even though they are .12 and not .11)...

Maybe I just want more than I can get :)

As a side note, ARC Hit ratio is down to 77.42% and Hit MFU:MRU is 55.30%:27.10% now.
 

Dice

Wizard
Joined
Dec 11, 2015
Messages
1,410
That's pretty close to mine at the time. The statistics did improve some when tripling the RAM (the system no longer completely seized up during transfers), but there was no real read/write performance bump.

  • 8:45AM up 28 days, 12:55, 1 user, load averages: 0.00, 0.00, 0.00
  • 551MiB / 29.8GiB (freenas-boot)
  • 22.0TiB / 38TiB (wd60efrx)
  • 12.15GiB (MRU: 11.39GiB, MFU: 777.87MiB) / 16.00GiB
  • Hit ratio -> 75.41% (higher is better)
  • Prefetch -> 1.48% (higher is better)
  • Hit MFU:MRU -> 53.09%:43.33% (higher ratio is better)
  • Hit MRU Ghost -> 0.70% (lower is better)
  • Hit MFU Ghost -> 2.93% (lower is better)
 