Yeah, that was my 2014 hypervisor design, which I was still buying even at the end of 2015, because the Xeon D stuff just wasn't quite ready for prime time.
The X10SRW is a great platform because of its immense flexibility. The board can take either E5-16xx or E5-26xx CPUs, of course, which means you can either do an inexpensive E5-1650 v3 (one of the best bang-for-buck parts in the Xeon lineup) or something crazy on the far side of the E5-269x range like the E5-2697 v4, which gets you approximately 41GHz of aggregate clock, or roughly double the 1650 v3. 128GB (4x32GB) of full-speed memory is only ~$800, and can be doubled to 256GB, though at reduced speed. There's a pricey full-speed 256GB option too, which probably only makes sense with the E5-26xx's.
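To make the "41GHz" figure concrete, here's the back-of-the-envelope arithmetic behind it, as a minimal sketch. It just multiplies cores by base clock (a crude bang-for-buck proxy, not real throughput); the core counts and base clocks are Intel's published figures for the two parts.

```python
# Crude aggregate-clock comparison for the two CPU options mentioned above.
# "Aggregate GHz" here is just cores * base clock -- a rough proxy only.

def aggregate_ghz(cores, base_ghz):
    """Total base-clock cycles across all cores, in GHz."""
    return cores * base_ghz

e5_1650v3 = aggregate_ghz(6, 3.5)    # 6 cores @ 3.5 GHz base
e5_2697v4 = aggregate_ghz(18, 2.3)   # 18 cores @ 2.3 GHz base

print(round(e5_1650v3, 1))           # 21.0
print(round(e5_2697v4, 1))           # 41.4 -- the "approximately 41GHz" above
print(round(e5_2697v4 / e5_1650v3, 2))  # ~1.97, i.e. roughly double
```

Single-threaded work is a different story, of course: the 1650 v3's higher clock wins there, which is part of why it's such good value.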
On the expansion side of things, three PCIe x8 slots in the 1U is pretty good expansion capability, and there are five in the 2U.
I chose to stick a high-quality RAID controller (the AOC-S3108L-H8iR) in there with the LSI supercap option, which gives ESXi a nice write cache. The 2GB of cache RAM is a great Supermicro bonus; the LSI 9361-8i only comes with 1GB. This might be unnecessary for the application, because a lot of the storage is actually SSD.

Part of what you have to remember is that when operating gear at a significant distance (14 hours away in this case), you do things redundantly where possible. So there are three WD Red 2.5" 1TB HDD's: two in RAID1 and a standby. Those are for "slow" bulk storage. Then there are two 480GB SSD datastores made out of five Intel 535 480GB's: two RAID1 mirrors plus a standby. The catch is that the 535's have a relatively low write endurance ... 40GB/day. So the system also has a nonredundant Intel S3710, which is rated for 10 drive writes per day. That's what shoulders the noisy stuff.
I had originally intended to go with Intel S3500's for the SSD RAID1 datastores, but you know me, kinda cheap.
So in the end, what's been happening over the years is that we've been slowly downsizing at our locations. As an example, what used to occupy ~8 racks back in the 1990's was down to ~2 by the mid-2000's, and is now 97 VM's on a handful of hypervisors, though right now we still have some physical machines left as well. Those should go away this year, replaced by a few more VM's and a NAS box.