I've seen dual 10Gb saturated. I don't think I've even worked on a box with quad 10Gb. You're talking serious money, and there are very, very few uses for that. At some point it's smarter to have multiple servers than one really big (read: expensive) server.
Depends what you're doing. Quad 10GbE isn't horribly expensive on the server side of things, maybe $200 per port. It's usually the switch side that kills ya, since your typical small 10GbE switch is still up around $400 per port - and you've got to buy a number of them.
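Back-of-the-envelope, using those per-port figures (illustrative assumptions on my part, not quotes - your vendor will vary):

```python
# Rough cost sketch for wiring servers with quad 10GbE, using the
# per-port figures above ($200/port NIC-side, $400/port switch-side).
# All prices are illustrative assumptions, not quotes.

NIC_COST_PER_PORT = 200      # server-side, e.g. a quad-port 10GbE NIC
SWITCH_COST_PER_PORT = 400   # typical small 10GbE switch

def quad_10gbe_cost(num_servers: int) -> int:
    """Cost to wire num_servers with 4x10GbE each, NIC + switch ports."""
    ports = 4 * num_servers
    return ports * (NIC_COST_PER_PORT + SWITCH_COST_PER_PORT)

for n in (1, 4, 8):
    print(f"{n} servers: ${quad_10gbe_cost(n):,}")
# 1 servers: $2,400
# 4 servers: $9,600
# 8 servers: $19,200
```

The switch side dominates, and it scales with every host you add, which is why the "number of them" bit stings.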
If you haven't worked on a box with quad 10GbE, that's going to change. The technology is here. Early adopters look to having one or two interfaces and then maximizing those, but eventually it turns to topology and convenience, where the point isn't necessarily to have four saturated 10GbE links but rather an intelligent network design that allows maximum performance out of several arbitrary legs at once. This mirrors how 100M and 1GbE were deployed, but in the case of 10GbE there's been years of lag because the technology hasn't gained traction as rapidly - 1GbE actually turns out to be pretty sufficient, and extremely cost-effective, for all sorts of needs. The long lifecycle of 1GbE has come as a bit of a surprise to those of us who date back to 10base5 days and the rapid evolution that took us from 10Mbps to gigabit in just a bit more than a decade.
Anyways, the real point is that 10GbE is less of an annoyance than LACP with n x 1GbE, and with the latest generation of hardware it's just about there.
Second, you need enough CPU and RAM to push that kind of bandwidth just from the ARC.
RAM is easy; RAM is (relatively) cheap and basically infinitely fast for the purposes of this discussion. CPU is the issue. ZFS fundamentally uses the host processor as your RAID controller, and getting that to work without a bottleneck is going to be a problem. If we wanted to, just hypothetically, saturate 4x10GbE, I note that even the basic math says that's 40 times more difficult than saturating 1GbE...
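To put numbers on that "basic math", here's a minimal sketch - line rates only, ignoring protocol overhead:

```python
# Back-of-the-envelope: what "saturating 4x10GbE" means in bytes/sec,
# versus plain gigabit. Line rate only; real payload throughput is
# lower once ethernet/IP/TCP overhead is accounted for.

GBIT = 1e9 / 8  # bytes per second per Gbit/s of line rate

one_gbe  = 1 * GBIT         # ~125 MB/s
quad_10g = 4 * 10 * GBIT    # ~5 GB/s

print(f"1GbE:    {one_gbe  / 1e6:,.0f} MB/s")
print(f"4x10GbE: {quad_10g / 1e6:,.0f} MB/s")
print(f"factor:  {quad_10g / one_gbe:.0f}x")   # the "40 times" above
# And ZFS touches every one of those bytes on the CPU (checksums,
# possibly compression, parity) before they ever hit the wire.
```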
I also note the OP's choice of processor is relatively poor. The 2620v2 is a 6-core 2.1GHz part, while the 2637v2 is a 4-core 3.5GHz part and the 2643v2 is a 6-core 3.5GHz part. The extra memory made available by the second slow part could be very helpful, but will probably not entirely offset the overall effect of having used slower parts.
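For the curious, the clock math (base clocks per Intel's published specs; aggregate GHz is only a crude throughput proxy, and per-core clock is what matters for single-threaded paths like a lone Samba client):

```python
# Comparing the Ivy Bridge options above by core count and base clock.
# Aggregate GHz is a rough proxy for total throughput; per-core GHz
# matters more for single-threaded workloads.

cpus = {
    "E5-2620 v2": (6, 2.1),
    "E5-2637 v2": (4, 3.5),
    "E5-2643 v2": (6, 3.5),
}

for name, (cores, ghz) in cpus.items():
    print(f"{name}: {cores} x {ghz} GHz = {cores * ghz:.1f} GHz aggregate")
# E5-2620 v2: 6 x 2.1 GHz = 12.6 GHz aggregate
# E5-2637 v2: 4 x 3.5 GHz = 14.0 GHz aggregate
# E5-2643 v2: 6 x 3.5 GHz = 21.0 GHz aggregate
```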
Third, you need a dozen or more vdevs (or some other config capable of handling the I/O *and* throughput).
Eh, maybe. It really depends on what sort of I/O load is on it. Random VM data is always going to be the killer, and you've probably got no chance of getting anywhere near there unless you go with SSD vdevs. Probably lots of them. But if you can get (most|all) of the working set into the ARC and you're heavy on reads, you might find that even a relatively crappy pool flies ... right up 'til you try to access data that isn't part of the working set.
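To put rough numbers on the vdev-count question: a sketch assuming a vdev delivers roughly the random IOPS of a single member device (the per-device figures below are ballpark assumptions, not benchmarks):

```python
# Rough vdev-count sizing for random I/O that misses the ARC. A vdev
# behaves roughly like one disk for random work, so the pool scales
# with vdev count. Per-device IOPS figures are ballpark assumptions.

import math

IOPS_PER_VDEV = {
    "7200rpm mirror": 150,   # roughly one spinning disk's worth
    "ssd mirror": 20_000,    # consumer-ish SATA SSD, random reads
}

def vdevs_needed(target_iops: int, kind: str) -> int:
    return math.ceil(target_iops / IOPS_PER_VDEV[kind])

target = 50_000  # random IOPS a busy VM cluster might demand
for kind in IOPS_PER_VDEV:
    print(f"{kind}: {vdevs_needed(target, kind)} vdevs for {target:,} IOPS")
# 7200rpm mirror: 334 vdevs for 50,000 IOPS  <- hence "no chance" on rust
# ssd mirror: 3 vdevs for 50,000 IOPS
```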
As for the memory and performance thing, the only real way to know is to run it. You are well spec'd. You have double the bandwidth of most high-end rigs. RAM is plentiful, with room for more if need be.
orly... I woulda thought something Fortville would be considered high end... it'd be fun to play with an XL710-QDA2, 80Gbps of yummy network goodness... (note to self, ...write...letter...to...Santa...)
But the pool is slow, you have no flash-based storage, and you haven't defined any real workloads beyond the fact that data integrity is secondary to speed. I like to see and make things go fast... so I'll follow along. Plus I like that board, if it can be proven by those who go before. Pretty cost-effective 10Gb setup for a small number of servers, imho. Plus MPIO goodness ;). Fire that thing up and give it a workout.
The big problem with that board is that it is actually designed for one of their 2U ZFS appliance designs, so if you're only dropping a single CPU in there - which is what you're probably doing for a FreeNAS box - then you only get a single PCIe slot. I spent a lot of time wringing my hands over the possibility of making a hypervisor out of those 2U boxes, but their prebuilt only offers the 2x10GbT. I'm guessing you could request that they custom-build you the ones with the 2T2OS6, but I never got as far as the sales inquiry.
Personally, if it is actually a "go fast" rig, I'd have some SSD-based pools. Eight SSDs will make your pool faster than your network. But you could save the trouble by just shoving them in the ESXi boxes on hardware RAID if raw speed is really the deal. More likely you are more interested in additional storage and flexibility than raw power. The other question is, if you don't need the data integrity of ZFS... why bother with its overhead?
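Quick sanity check on the "eight SSDs beat your network" claim, assuming ~500 MB/s sequential per SATA SSD and best-case linear scaling across vdevs (both are assumptions, not measurements):

```python
# Sanity check: eight SSDs vs a 10GbE wire. ~500 MB/s sequential per
# SATA SSD is an assumed figure; striping across vdevs scales
# sequential throughput roughly linearly in the best case.

SSD_SEQ_MBPS = 500                # assumed per-drive sequential, MB/s
TEN_GBE_MBPS = 10e9 / 8 / 1e6     # ~1250 MB/s line rate

drives = 8
pool_mbps = drives * SSD_SEQ_MBPS  # 4000 MB/s striped, best case
print(f"pool:  {pool_mbps:.0f} MB/s")
print(f"10GbE: {TEN_GBE_MBPS:.0f} MB/s -> pool is "
      f"{pool_mbps / TEN_GBE_MBPS:.1f}x the wire")
# Even dual 10GbE (~2500 MB/s) gets outrun, which is the point:
# the network, not the pool, becomes the bottleneck.
```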
Well, the usual issue is that in a hypervisor cluster, you want to be able to migrate VMs. At which point putting the storage on hardware DAS RAID is kind of sucky. You can do HBAs with a shared external RAID shelf (think: HP MSA P2000, etc.) but you're still limited as to how many hosts can attach. iSCSI gets attractive because it leverages ethernet, which means you can always repurpose the gear if it turns out you guessed wrong.