Hi guys,
Would appreciate some insights into some of the information I see on my NAS server.
My NAS setup is used for a VMware lab for about 10 employees with around 30 active VMs.
We have 2 ESXi servers, connected via a dedicated VLAN to a dedicated 1 Gbps NIC on the NAS.
The NAS server itself is built on an ASRock 2750D4I board with 16 GB of RAM, with 4× 1 TB (7200 RPM) disks in a RAID 10 that serves as the main datastore for the VMs via NFS, and another single 2 TB disk for just a general CIFS share.
As of now, I have no separate ZIL device set up, and sync writes are disabled. I know the risks of data loss and I understand and accept them, but that doesn't mean I wouldn't love to fix this in the future.
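For reference, this is roughly how I have it configured (the pool/dataset name below is just a placeholder, not my actual layout):

```shell
# Check the current sync setting on the VM datastore dataset
# (tank/vmstore is a placeholder -- substitute the real pool/dataset)
zfs get sync tank/vmstore

# What I'm running now, accepting the data-loss risk on power failure:
zfs set sync=disabled tank/vmstore

# What I'd switch back to once I add a proper SLOG device:
zfs set sync=standard tank/vmstore
```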
Also, there is no need for real-time performance. I just need the lab to be operational and OK. I would be more than happy if I could just saturate my current 1 Gbps link.
Anyway, now back to the issues :). While checking the reporting tab, I see numbers that I can't seem to understand, and this is where I hope you guys can come in and help out:
1) When looking at the network tab, I usually don't go over 500 Mbit/s at peaks, and stay at around 300 Mbit/s under a regular workload. (igb0 is storage, igb1 is the CIFS)
I thought this might be an issue with my networking, but running iperf from one of the ESXi hosts via the storage network shows that everything is normal.
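This is roughly the test I ran, in case I did something wrong (the IP below is a placeholder for the NAS's storage-VLAN address):

```shell
# On the NAS, start an iperf3 server:
iperf3 -s

# From the ESXi host, push traffic over the storage VLAN for 30 seconds
# (10.0.10.5 is a placeholder -- substitute the NAS's storage IP):
iperf3 -c 10.0.10.5 -t 30
```

That consistently reported close to line rate, so the raw network path itself looks fine.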
So I guess the first question is: is this normal? Shouldn't I see spikes that fully saturate my link when I have a lot of VMs running and doing stuff?
2) The second issue is the performance of my RAID. It seems quite low given the link utilization I mentioned in (1). I say the RAID specifically because when I ran a test against a temporary NFS share I created on my additional single 2 TB disk, I got far better performance. From the same VM I tested against both a disk on the RAID and my secondary disk, and these are the results:
RAID:
single disk:
So we can obviously see that I can easily saturate the link with my single disk, which for some reason isn't even close on my RAID. Even accounting for the ~300 Mbit/s of regular usage on it, I should still see something in the region of 50-60 MB/s, right? Am I missing something here?
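To sanity-check my own math (assuming the reporting graphs are in Mbit/s, and a rough allowance for protocol overhead):

```python
# Ballpark headroom estimate on a 1 Gbps link, assuming the reporting
# graphs are in megabits per second (Mbit/s).
link_mbps = 1000                  # nominal 1 Gbps link
usable_mbps = link_mbps * 0.94    # rough allowance for TCP/NFS overhead
background_mbps = 300             # regular workload seen on igb0

headroom_mbps = usable_mbps - background_mbps
headroom_mb_per_s = headroom_mbps / 8   # 8 bits per byte

print(round(headroom_mb_per_s))   # prints 80
```

So even with the background load, there should be roughly 80 MB/s of headroom, which is why I'd expect at least 50-60 MB/s from the RAID.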
So any thoughts and suggestions would be more than welcome. I'm really trying to figure out what is missing :(.
Thanks!