I'm quite new to both FreeNAS and ESXi, but I wanted to know whether some of the results I am seeing are normal, or whether I should dig into tuning (if that will even help).
My FreeNAS setup is as follows:
Code:
OS Version: FreeBSD 8.2-RELEASE-p2
Platform: Pentium(R) Dual-Core CPU E6300 @ 2.80GHz
Memory: 3301MB
System Time: Sat Jun 18 02:51:21 EDT 2011
Uptime: 2:51AM up 2:37, 0 users
Load Average: 0.00, 0.01, 0.10
FreeNAS Build: FreeNAS-8.0.1-BETA2-i386
The FreeNAS system has six 1 TB drives: three on the motherboard SATA controller and three on a SiI3124-based add-in card.
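In case the controller split matters, this is roughly how I check which controller each drive landed on (just a sketch; the device names here are placeholders, not my actual output):
Code:
# List each disk and the controller/channel it is attached to
camcontrol devlist
# Raw-throughput sanity check on one drive (swap ada0 for the real device)
diskinfo -t /dev/ada0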
There is 4 GB of RAM on the FreeNAS motherboard, but I am guessing it shows 3301 MB because that's what is available to the filer portion of the system.
I have a 1 Gb switch that connects the FreeNAS box to a Dell T110 II server running ESXi. The ESXi box has a 250 GB internal hard drive as one datastore and the NFS share from FreeNAS as another.
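For what it's worth, the NFS share was added as a datastore from the ESXi shell roughly like this (the IP and export path below are placeholders for my actual values):
Code:
# Attach the FreeNAS NFS export as a datastore
esxcfg-nas -a -o 192.168.1.10 -s /mnt/tank/vmstore freenas-nfs
# Confirm it mounted
esxcfg-nas -l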
My question is about the latency I should expect to see, from a datastore standpoint, on the ESXi box. I have been using a CUCM (Cisco CallManager) install to test real-world performance of the datastores and different configs, and while throughput is good, the latency is a bit troubling.
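In case it matters how I'm reading the latency: I watch it during the install with esxtop (a sketch of what I run; I mostly watch the GAVG/cmd column):
Code:
# In the ESXi shell while the install is running:
esxtop
# press 'd' for the disk adapter view (or 'v' for per-VM disk stats)
# DAVG/cmd = device latency, KAVG/cmd = kernel time, GAVG/cmd = total latency the guest sees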
During the install, from the ESXi side I see the following average write latencies:
Installing CUCM to the local datastore: 15 ms
Installing CUCM to FreeNAS (all six drives striped ZFS): 389 ms
Installing CUCM to FreeNAS (pool of two three-drive raidz vdevs): 2893 ms
Installing CUCM to FreeNAS (all six drives in one raidz vdev): 4732 ms
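For reference, the three pool layouts I tested map to roughly these commands (a sketch; device names and the pool name "tank" are placeholders):
Code:
# All six drives striped (no redundancy)
zpool create tank ada0 ada1 ada2 ada3 ada4 ada5
# Pool of two three-drive raidz vdevs
zpool create tank raidz ada0 ada1 ada2 raidz ada3 ada4 ada5
# All six drives in a single raidz vdev
zpool create tank raidz ada0 ada1 ada2 ada3 ada4 ada5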
The CPU utilization on the FreeNAS box never gets above 50%, so I'm wondering if this is just to be expected with a NAS. I know ZFS needs a good bit of RAM for good performance, so is that probably my bottleneck? Unfortunately the motherboard in the FreeNAS box only supports 4 GB, so I can't upgrade that.
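The only check I know of to see whether RAM/ARC is the limit is the arcstats sysctls, plus watching the pool while the install runs (assuming I'm reading these right; "tank" is again a placeholder pool name):
Code:
# Current ARC size and its cap, in bytes
sysctl kstat.zfs.misc.arcstats.size
sysctl kstat.zfs.misc.arcstats.c_max
# Per-vdev activity every 5 seconds during the install
zpool iostat -v tank 5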
Any suggestions to tweak the system?