Tuning Hyper-V VM Hosting FreeNAS Box (10Gb, Well Performing)

Status
Not open for further replies.

Stikc

Cadet
Joined
Jan 13, 2017
Messages
8
So I know I followed the FreeNAS recommended hardware post to build this, except for the 750 NVMe being used as both SLOG and cache, so please don't get mad :) In a Win8 VM running off an iSCSI zvol-based extent I was getting about 150MB/s write and 400MB/s read before upgrading to the 750. I was using a 120GB SandForce for SLOG (I know) and a 250GB Samsung 850 Pro for cache. Setting sync=disabled to see if that was my issue shot writes to around 900MB/s, with reads still at 400MB/s. After installing the 750 and going back to sync=always I am now at about 700MB/s write and 600MB/s read. Tested using HD Tune Pro with a 10,000MB file size and 512KB block size. All testing is done in a production environment with all VMs running.

While I know this is great performance, I can't help but wonder why it's not better. My disks hardly get hit at all, my ARC hit ratio is around 90%, and my network doesn't go above 6Gb/s. I am thrilled with my writes, but I feel my reads could be much better. My L2ARC hardly gets touched, so that's why I don't care too much about sharing it with the stupid-fast 750 NVMe. I have tried some tunables with my network card and maybe got it a bit faster, but now I'm at a loss.

The main reason I want faster reads is to decrease backup times. The servers basically just idle at all times but have a large amount of storage on them that takes forever to back up using Veeam incremental backups. Any tips would be greatly appreciated!
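For reference, this is roughly how I have been checking and flipping the sync setting on the zvol from the shell (tank/vm-zvol is just a placeholder for my actual pool/zvol name):
# zfs get sync tank/vm-zvol              # show the current setting
# zfs set sync=always tank/vm-zvol       # force sync writes (goes through the SLOG)
# zfs set sync=disabled tank/vm-zvol     # testing only - everything async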
 

tvsjr

Guru
Joined
Aug 29, 2015
Messages
959
I fear I repeat myself, but... you need to start at the FreeNAS box and work your way out. Locally, what are your read and write speeds (with compression disabled)? Once you've confirmed things are good locally, move on to the network piece. Have you used iperf to confirm that you can actually get 10Gbps across that network?

Assuming you don't have a bottleneck with the storage or the network, I suspect you're limited from an IOPS perspective. You say you have another 10 VMs running while you're doing this testing... do things improve if you shut those VMs down and then test?
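If you haven't run iperf yet, a plain single-stream test like this is enough to start with (the IP is a placeholder for your FreeNAS box's address):
# iperf -s                       # run on the FreeNAS box
# iperf -c 192.168.1.10 -t 10    # run on the Hyper-V host; 192.168.1.10 = FreeNAS IP (placeholder)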
 

Stikc

Cadet
Joined
Jan 13, 2017
Messages
8
With only the 2 main VMs running and at idle (I can't turn them off, as this is production), there is no impact on the original numbers. iperf shows 0.0-10.0 sec, Transfer 9.36 GBytes, Bandwidth 8.03 Gbits/sec, so I don't think the read ceiling is a network issue. As for testing internally on the box, what is the best way to do that without messing up my production unit? I also thought it might be IOPS, but I can load it up with requests from VMs and it stays the same as well. It hardly ever hits the disks.
 

Stikc

Cadet
Joined
Jan 13, 2017
Messages
8
OK, I found a way. I built a new dataset with no compression, mounted it, and ran
# dd if=/dev/zero of=testfile bs=1M count=50000
which gave 611MB/s, then
# dd if=testfile of=/dev/zero bs=1M count=50000
which gave 1877MB/s.

This was with all VMs operational.
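For anyone following along, the throwaway dataset was created with something along these lines (tank/testds is a placeholder for the real pool/dataset name):
# zfs create -o compression=off tank/testds
# cd /mnt/tank/testds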
 

Stikc

Cadet
Joined
Jan 13, 2017
Messages
8
I just realized I forgot to run iperf the other way, and sure enough you were right: in that direction I only get 3.92Gbits/sec. Now to play around and try to figure out why.
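For anyone else hitting this, the "other way" just means swapping the iperf roles - server on the Hyper-V host, client run from the FreeNAS shell (the IP is a placeholder):
# iperf -s                       # now on the Hyper-V host
# iperf -c 192.168.1.20 -t 10    # run from the FreeNAS shell; 192.168.1.20 = Hyper-V host IP (placeholder)
iperf's -r option will also run both directions as a tradeoff test in one go.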
 

Stikc

Cadet
Joined
Jan 13, 2017
Messages
8
OK, so I have checked the duplex on everything and it all looks fine. I have played with the tunables for the 10Gb card and now get a pretty consistent 8Gb/s in both directions. I'm quite happy with this setup, so thanks very much for pointing me in the right direction on all of this!
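For anyone who finds this thread later, these are the kind of FreeBSD network sysctls I mean - the values below are examples only, not a recipe, and should be sized to your own NIC and RAM:
# sysctl kern.ipc.maxsockbuf=16777216        # example value - max socket buffer size
# sysctl net.inet.tcp.sendbuf_max=16777216   # example value - TCP send buffer ceiling
# sysctl net.inet.tcp.recvbuf_max=16777216   # example value - TCP receive buffer ceiling
In FreeNAS these normally go in under System -> Tunables so they persist across reboots.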
 