Super Micro Motherboard

Status
Not open for further replies.

jerryjharrison

Explorer
Joined
Jan 15, 2014
Messages
99
I have reviewed the hardware guidelines and have built four FreeNAS servers on Supermicro X10SRL-F motherboards, but all of them are for personal use, and while I treat them as if they were mission-critical production systems, they really aren't.

I am about to start a genuinely mission-critical build for my company's infrastructure, which will serve as the primary storage for 10 or 11 VMware virtual machines that are VERY mission critical. After doing some research, I believe the ideal motherboard to build on is the:

SuperMicro https://www.supermicro.com/products/motherboard/Xeon/C600/X10SRH-CLN4F.cfm

This motherboard is offered in a pre-built rack-mount server from Supermicro that includes redundant power supplies and 256GB of RAM. I will load it with two mirrored 32GB SATA DOMs for boot and about six 8TB drives in a RAIDZ3 pool.

My questions are:
1. Anyone have any experience with this motherboard?
2. My previous builds have used the onboard SATA controllers instead of a dedicated LSI controller. Is this an acceptable practice?
3. Any concerns with the Intel® i350-AM4 Quad port GbE LAN?

Thanks.
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,456
My previous builds have used the onboard SATA controllers instead of a dedicated LSI controller. Is this an acceptable practice?
I'm not sure which you're concerned about, but both using onboard SATA and using a SAS HBA are perfectly acceptable.
Any concerns with the Intel® i350-AM4 Quad port GbE LAN?
None at all; the i350 works very well.
 

SweetAndLow

Sweet'NASty
Joined
Nov 6, 2013
Messages
6,421
I already see red flags. You should probably call iXsystems and get a quote from them. You mention using RAIDZ3 with virtual machines, which will give you very slow performance, and you don't even mention a SLOG. For VMs you want mirror vdevs and a good SLOG.
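For reference, here is roughly what that layout looks like at the command line. This is only a sketch with placeholder device names (da0-da5 for the data disks, nvd0 for an NVMe SLOG); on FreeNAS you would normally build the pool from the web UI rather than the shell.

Code:
# Sketch only: six disks as three mirrored vdevs, plus a separate NVMe log device (SLOG).
# Device names are placeholders, not taken from any real system.
zpool create tank \
  mirror da0 da1 \
  mirror da2 da3 \
  mirror da4 da5 \
  log nvd0

# Confirm the layout: three mirror vdevs and one log device.
zpool status tank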
 

jerryjharrison

Explorer
Joined
Jan 15, 2014
Messages
99
Thanks, Dan.
SweetAndLow, I intend to set up the pool as an iSCSI connection to the VMware server. I have not begun to research the exact configuration for the best underlying structure, so the RAIDZ3 was included simply to communicate that I was not expecting all of the drives to result in a pool of 48TB...
 

SweetAndLow

Sweet'NASty
Joined
Nov 6, 2013
Messages
6,421
Thanks, Dan.
SweetAndLow, I intend to set up the pool as an iSCSI connection to the VMware server. I have not begun to research the exact configuration for the best underlying structure, so the RAIDZ3 was included simply to communicate that I was not expecting all of the drives to result in a pool of 48TB...
Except they will all be in the same pool... How else would you do it?
 

jerryjharrison

Explorer
Joined
Jan 15, 2014
Messages
99
They will absolutely be in the same pool, but some form of redundancy (anything from mirrors to RAIDZ3) will reduce the usable capacity...
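Roughly, for six 8TB drives (ignoring filesystem overhead and free-space headroom), the usable capacity works out the same for RAIDZ3 and for three 2-way mirrors; the difference between them, as noted above, is performance rather than space:

Code:
# Back-of-the-envelope usable space, in TB:
echo $(( 6 * 8 ))         # raw: 48
echo $(( (6 - 3) * 8 ))   # RAIDZ3 (three parity drives): 24
echo $(( 6 / 2 * 8 ))     # three 2-way mirrors: 24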

I am new to iSCSI and to configuring FreeNAS for use by a VM host, so all of that will be researched before I configure anything. If you have suggestions, please let me know.
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
They will absolutely be in the same pool, but some form of redundancy (anything from mirrors to RAIDZ3) will reduce the usable capacity...

I am new to iSCSI and to configuring FreeNAS for use by a VM host, so all of that will be researched before I configure anything. If you have suggestions, please let me know.
I will say first that what @SweetAndLow said is absolutely true; I am only trying to give you more detail to help grow your understanding. Mirrored vdevs are the way to go for virtualization.
To me, the first question should be how much storage you need, not today, but over the next 3 to 5 years, because you want to buy a chassis that can support that amount of storage, and you should provision the drives in the system at the start so the data can be spread across all the mirror sets.

Here is why: for performance reasons, when doing iSCSI for VMs, you want as many spindles (disks) as possible, because each disk can only deliver a finite amount of throughput and IOPS, and you get faster performance by having more disks. You also need to consider the number of virtual machines you will be running, because each additional VM wants more I/O bandwidth. Most of the time, using a small number of large-capacity (8TB) drives is a bad solution, because you only have a few disks.

To give you an example, we have a storage system where I work that is dedicated to virtualization, and it uses 48 x 600GB SAS drives to get the highest IOPS possible without using SSD storage. If I recall correctly, that only gives them about 14TB of storage for the virtual machines in that cluster, but it is fast.
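A rough sketch of the arithmetic behind that box, assuming the 48 drives are arranged as 2-way mirrors (an assumption; the point is that usable space is traded away for vdev count, and random I/O scales roughly with the number of vdevs):

Code:
# Assuming 48 x 600GB drives in 2-way mirrors (layout assumed for illustration):
echo "$(( 48 / 2 )) mirror vdevs"                     # 24 vdevs
echo "$(( 48 / 2 * 600 / 1000 )) TB usable, roughly"  # ~14TB before overhead
# More vdevs means more parallel random I/O, which is why many small drives
# beat a handful of large ones for VM storage.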
Here is some info about the type of drives:
https://www.hgst.com/company/media-room/press-releases/hgst-ships-fastest-highest-capacity-15k-rpm
I'm not saying you need 15k SAS drives, but the design of a system backing iSCSI storage for VMs is not the same as bulk storage for a regular file server, where quantity of data matters more than speed. For virtualization, the speed of the storage is crucial to the responsiveness of the VMs.
Another question to consider is the type of interface you will need between the compute cluster and the storage host. I would think a minimal configuration would be dual 10Gb network links, so I am not sure why you are concerning yourself with the quad-port 1Gb networking on the system board. What are you running for the VM host (not just VMware), and how many VMs are you running?

PS. That system with the 15k RPM drives, if it were built today, would probably use SSDs instead.
 

jerryjharrison

Explorer
Joined
Jan 15, 2014
Messages
99
Thanks, Chris. We currently have an end-of-life EMC SAN providing the storage. It is 6TB in size and is connected to the two VM servers (Dell R730s) via 1Gb Ethernet connections. The VM servers each host 5 virtual machines, but I anticipate adding another 2 VMs to each machine: one will be a SQL database and the other an application server. The original 5 will eventually be phased out as the 2 new VMs become the production application. This is a small setup, but looking down the road, the FreeNAS server would benefit the company significantly by providing shared file access as well as hosting a Nextcloud instance. I was trying to cover all of those storage needs in my planning.
 

tvsjr

Guru
Joined
Aug 29, 2015
Messages
959
If you are doing VERY MISSION CRITICAL work (the kind where a major failure could end your business), you need to call iX and get a quote for a real system. iX will get you good hardware but, most importantly, it will get you a "throat to choke" at 2am when the box goes Tango Uniform.

That said, if you're intent on doing this yourself, a few thoughts:
- Consider two separate pools: one pool of striped mirrors for VM performance, and one pool of RAIDZ2 drives for bulk storage (your Nextcloud data, etc.). I do exactly this and support ~40 VMs (none of which are insanely busy, but I do run a database cluster, a log management system, etc.). The VM pool will need an ultra-fast SLOG and, with 256GB of RAM, you would likely see benefit from an L2ARC; NVMe is where it's at (there's a rough command-line sketch after this list). If you're going to add substantial NVMe resources, you might consider a different motherboard that supports more PCIe lanes (a dual-processor E5 board would double the number of lanes available).
- Potentially consider SSDs for your super-fast stuff. This would be a third pool.
- You should run 10G connectivity to your VM hosts: either one network with a switch or, since you've only got two hosts, a dual-port card in the FreeNAS box with each port directly connected to one host (more cost-effective if you don't plan to scale massively).
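As a rough command-line sketch of that two-pool split (placeholder device names again, and FreeNAS would normally do all of this from the web UI):

Code:
# Assume "tank" is the striped-mirror VM pool with an NVMe SLOG, as sketched earlier in the thread.
zpool add tank cache nvd1    # second NVMe device as L2ARC; worthwhile with 256GB of RAM
# Separate bulk pool: RAIDZ2 across six more (placeholder) disks for Nextcloud data, file shares, etc.
zpool create bulk raidz2 da6 da7 da8 da9 da10 da11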

If you're going to do this yourself, I would build a very detailed design doc and post it here. You've got gaping holes in what you've posted so far.
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
Another item to consider, along with what we already threw out there, is that iXsystems has units with redundant controllers so that if a controller did go offline for some reason, it would not take your production network down. It may cost a bit more than buying parts and putting it together yourself, but for a critical part of the infrastructure, it could be well worth the investment.
 