arvdsn
Dabbler
- Joined: Jul 25, 2016
- Messages: 11
Hi,
I've got an HP ProLiant ML350 Gen9 server with a JBOD attached through an IT-flashed LSI 9211-8i card. FreeNAS runs as a VM (8 cores of an E5-2609 v4 and 50GB RAM, obviously ECC RDIMM) on ESXi 6 with HBA passthrough. Shares to ESXi are NFS (though I am experimenting with iSCSI).
Back in the day I ran mirrored vdevs (internal transfer speeds were about 120MB/s), but later I rebuilt everything as RAIDZ1 because I wanted more storage space; while there was a decrease, speeds were still OK (around 70-80MB/s). Recently I upgraded my hardware and now speeds are down to around 30MB/s, sometimes as low as 8-10MB/s. Transfers start higher and then drop off, which makes me look at the ARC. This happens on both RAIDZ1 and mirrored vdevs (more details below).
ARC stats:
- Mainly documents and media
- 2:21PM up 5:01, 2 users, load averages: 11.69, 15.15, 16.20
- 610MiB / 11.9GiB (freenas-boot)
- 16.2TiB / 29TiB (tank, RAIDZ1)
- 1.78TiB / 3.62TiB (vol1, mirrored vdevs)
- 44.48GiB (MRU: 29.55GiB, MFU: 14.93GiB) / 56.00GiB
- Hit ratio -> 83.57% (higher is better)
- Prefetch -> 23.43% (higher is better)
- Hit MFU:MRU -> 66.64%:26.73% (higher ratio is better)
- Hit MRU Ghost -> 0.34% (lower is better)
- Hit MFU Ghost -> 0.07% (lower is better)
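In case it helps anyone reading, here's the arithmetic behind the hit ratio figure above. The raw counters below are made-up illustrative numbers, not from my box (FreeNAS exposes the real ones via `sysctl kstat.zfs.misc.arcstats`):

```python
# Hypothetical ARC counters; on FreeNAS the real values come from
# `sysctl kstat.zfs.misc.arcstats.hits` and `...arcstats.misses`.
arc_hits = 8_357_000    # illustrative only
arc_misses = 1_643_000  # illustrative only

# Hit ratio = hits / total lookups
hit_ratio = arc_hits / (arc_hits + arc_misses) * 100
print(f"ARC hit ratio: {hit_ratio:.2f}%")  # ARC hit ratio: 83.57%
```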
Some dd-stats:
tank - RAIDZ1
Code:
[root@zfs] /mnt/tank# dd if=/dev/zero of=tmp.dat bs=2048k count=50k
51200+0 records in
51200+0 records out
107374182400 bytes transferred in 93.866476 secs (1143903414 bytes/sec)
~1144MB/s
vol1 - mirrored vdev
Code:
[root@zfs] /mnt/vol1# dd if=/dev/zero of=tmp.dat bs=2048k count=50k
51200+0 records in
51200+0 records out
107374182400 bytes transferred in 104.745244 secs (1025098403 bytes/sec)
~1025MB/s
^ Not sure I trust that test; wouldn't mirrored vdevs give better dd speeds than RAIDZ1? And surely not numbers that high?
Some iperf stats (Windows 10 client to the FreeNAS box):
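For reference, the ~1144MB/s figure comes straight from dd's summary line; here's a quick sanity check of the arithmetic. (Worth noting: /dev/zero output is fully compressible, so if compression is enabled on the pool these runs may be measuring CPU and RAM more than the disks, which could also explain why the numbers look implausibly high.)

```python
# Recomputing the throughput from dd's reported figures (tank run above).
bytes_transferred = 107_374_182_400  # 100 GiB written
elapsed_secs = 93.866476

mb_per_sec = bytes_transferred / elapsed_secs / 1_000_000
print(f"~{mb_per_sec:.0f} MB/s")  # ~1144 MB/s
```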
Code:
[root@zfs] /mnt/vol1# iperf -s
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 64.0 KByte (default)
------------------------------------------------------------
[ ID] Interval       Transfer     Bandwidth
[  5] local 10.0.0.10 port 5001 connected with 10.0.0.51 port 50389
[  5]  0.0-10.1 sec   461 MBytes   383 Mbits/sec
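Converting that iperf result to the units I've been using elsewhere in this post (simple unit arithmetic, nothing measured):

```python
# iperf reports bandwidth in megabits per second; 8 bits per byte.
iperf_mbits_per_sec = 383

mbytes_per_sec = iperf_mbits_per_sec / 8
print(f"~{mbytes_per_sec:.1f} MB/s")  # ~47.9 MB/s
```

So the wire tops out around 48MB/s, which isn't far above the ~30MB/s I'm seeing on transfers; I wonder if the network is part of the problem.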
I've read a lot about increasing RAM to push up the ARC hit ratio, and that's something I will try. There's around 60GB more RAM available in the server right now, and another 64GB to be added soon. Further down the road, the CPU will be replaced with an E5-2650 v4.
The server is essential to my business, so I can't afford to shut it down often, but I do plan to move all storage from the JBOD into the server and add an HP SAS Expander once I have everything I need (not because of the speed issue, but for temperature and noise).
So I'm clueless as to why it's not performing better than it is. I have a few other VMs running (which is why I haven't given more RAM to FreeNAS), but nothing that should impact speed. At least I don't think so.
Is there anything else I'm missing that I can do in addition to increasing the RAM? Worth mentioning: this is a fresh installation of FreeNAS (9.10, the latest as of writing). I appreciate any help I can get.
PS. I just realized this may be better suited to the Storage sub-forum. Apologies for that; please move it at your discretion.