If you are connected over a gigabit network, you will saturate the network long before those hard drives become the bottleneck.
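For rough numbers, here's a back-of-the-envelope sketch; the per-drive figure is just an assumed ballpark for a 7200 RPM disk, not measured data:

```python
# Rough comparison of a 1 GbE link against a handful of spinning disks.
# The 150 MB/s per-drive number is an assumption for illustration only.
link_mb_s = 1000 / 8            # 1 Gb/s is ~125 MB/s before overhead
usable_mb_s = link_mb_s * 0.9   # roughly what iSCSI delivers in practice
hdd_mb_s = 150                  # assumed sequential throughput of one HDD

print(f"usable link: ~{usable_mb_s:.0f} MB/s")
print(f"drives needed to fill it: ~{usable_mb_s / hdd_mb_s:.1f}")
```

Even a single drive doing sequential work can fill the pipe, so the pool is rarely the limit on 1 GbE.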
Couple points here:
1. You're going to need way more memory than 8GB, and probably want more than 16GB, if you want more than "homelab" level performance in VMware.
2. VMFS + parity RAID (RAIDZ) = poor performance. You want mirrored vdevs, or your write latencies will suck.
3. LACP and iSCSI don't mix; use MPIO instead.
At this stage you'll want to look at SSDs for SLOG ("write cache") only, especially with that little RAM. Check the stickied thread at the top of this subforum about "insights into SLOG/ZIL".
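For a rough sense of scale (the sticky covers the real details), the SLOG only has to absorb a few seconds of incoming sync writes; the transaction group interval and safety factor below are assumptions for illustration:

```python
# Back-of-the-envelope SLOG sizing: the SLOG is not really a write cache, it
# only holds the sync writes ZFS accepts between transaction group flushes.
# txg_seconds and safety_factor are assumed values, not tuning advice.
def slog_size_gb(link_gbit, txg_seconds=5, safety_factor=3):
    return link_gbit / 8 * txg_seconds * safety_factor

for link in (1, 10):
    print(f"{link} GbE link -> ~{slog_size_gb(link):.0f} GB of SLOG is plenty")
```

The takeaway is that even a small SSD is more than big enough capacity-wise; what matters is its latency and power-loss protection.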
I am using LACP and MPIO. My VMware hosts each have 2 iSCSI NICs and the VMFS datastore is set to Round Robin.
Should I still get rid of LACP?
I will probably put 32GB of RAM in the production box.
Yes, get rid of it. Link aggregation will make those two interfaces share a single IP address, meaning the iSCSI traffic will only ever go across one path. Separate them so they can't route to each other (separate subnets at minimum, VLANs are better, two physical switches are best) and add both IPs to the portal.
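A minimal sketch of the kind of addressing layout that keeps the two paths independent; every address here is a made-up example, not from your setup:

```python
# Example multipath addressing plan: each iSCSI path gets its own subnet, so
# neither side can push a session's traffic down the "wrong" NIC.
# All addresses are illustrative only.
from ipaddress import ip_interface

paths = {
    "path-a": {"esxi_vmk": "10.10.1.20/24", "freenas_portal": "10.10.1.10/24"},
    "path-b": {"esxi_vmk": "10.10.2.20/24", "freenas_portal": "10.10.2.10/24"},
}

# Sanity check: if the paths share a network, traffic can collapse back onto
# a single interface, which is the same problem LACP gives you.
networks = [ip_interface(p["esxi_vmk"]).network for p in paths.values()]
assert len(set(networks)) == len(networks), "iSCSI paths share a subnet"

for name, p in paths.items():
    print(f"{name}: vmk {p['esxi_vmk']} -> portal {p['freenas_portal']}")
```

With both portal IPs added in FreeNAS and both vmkernel ports bound on the ESXi side, Round Robin will actually spread I/O across both links.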
Much better; what kind of workload will the VMs have and how many of them are you planning to run?
The SLOG should be an SSD, correct? If so, I have several SSDs ready for deployment. How do I size it? I'll read up if you point me to articles.
10GbE NICs will be here Friday.
You mirror them in FreeNAS.
Any other tips you can provide me?
How do I put my drives in mirrors but still present them as one to ESX?