Steven Sedory
Explorer
- Joined: Apr 7, 2014
- Messages: 96
Hey there,
I'm reaching out to other virtualization and storage engineers for suggestions on configuring my new hardware for the best Hyper-V VM storage setup. I have two nodes set up in a failover cluster. I have no problem setting up the cluster/storage, but I'm sure someone out there has dialed in better performance than I have.
Note: this cluster is for hosting local VMs (servers) on-site for a client. There will be roughly 15 Server 2012 R2 VMs running on this cluster, each running various basic services (AD, DNS, DHCP, small SQL DBs, Exchange, etc.)
What I have for the SAN:
-Dell R720xd
-2x six-core CPUs
-128GB RAM
-24x 15k 600GB SAS drives
-2x 10GbE NICs
What I have for the nodes (two of the following):
-Dell R620
-2x eight-core CPUs
-64GB RAM
-8x 10k 300GB SAS drives
-2x 10GbE NICs
Right now, I have the cluster set up with each node accessing an iSCSI target/extent presented by FreeNAS (running on the 720xd).
-The iSCSI target is accessed via a single portal (single IP)
-That single IP is on a LAGG interface set up with LACP on the two 10GbE NICs (with a virtual port channel on the two Nexus switches they are plugged into)
-no authentication or any advanced iSCSI settings set
-using a zvol, not file based extent
-set a 4K block size in both places where prompted (since NTFS uses 4K clusters by default)
-dedup is on
-compression is off
-Single LUN from all this is 4.5TB (I made the zvol small enough to respect the "80% ZFS rule"; if I remember correctly, I could have made it as large as 5.8TB)
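For reference, the zvol properties described above could be reproduced from the FreeNAS shell roughly as follows. This is just a sketch to make the property set explicit; the pool name `tank` and zvol name `vmstore` are placeholders, not names from my setup, and in practice the FreeNAS web UI sets the same properties:

```shell
# Create a ~4.5TB zvol to back the iSCSI extent (names are illustrative)
# volblocksize=4K matches the NTFS default cluster size mentioned above
zfs create -V 4.5T -o volblocksize=4K tank/vmstore

# The property combination described in the post: dedup on, compression off
zfs set dedup=on tank/vmstore
zfs set compression=off tank/vmstore

# Confirm what the zvol is actually using
zfs get volblocksize,dedup,compression tank/vmstore
```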
I'm writing this because performance is subpar. I was copying some VMs to the CSV (the cluster shared volume established on top of the iSCSI LUN), and once the ARC cache filled up, the transfer dropped from around 1Gbps (1Gb, not 10Gb) to completely halting at times, then resuming for stretches with only small spikes up to a couple hundred Mbps. Note that I did not have jumbo frames enabled at the time (I'm going to reconfigure the LAGG shortly and try again). I'm sure jumbo frames will help, but I don't think they're the solution to the "locking up" of the LUN when transferring large amounts of data.
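If anyone wants numbers, these are the stock FreeBSD/ZFS tools I can run while a copy is stalling to see whether the dedup table is what's choking (standard commands, nothing specific to my box; `tank` is a placeholder pool name):

```shell
# ARC size, hit rate, and how much of it is metadata (which includes
# the dedup table); on some FreeNAS versions this is arc_summary.py
arc_summary

# Dedup table (DDT) size and achieved dedup ratio; a DDT that no
# longer fits in ARC forces extra disk reads on every write
zdb -DD tank
zpool list -o name,size,alloc,dedupratio tank

# Live per-disk latency while the copy runs; disks pegged at 100%
# busy with high ms/w during the stalls point at the pool itself
gstat -p
```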
Your input is much appreciated. I'm open to SMB 3.0 for the CSVs, or whatever other solutions are out there. Thanks in advance, guys.