This is sooooooo... new, Supermicro doesn't even have pictures on their website for the chassis you mentioned :)
That's odd, wonder what happened. It'd look like any of the other HH CSE216's.
https://www.supermicro.com/products/chassis/2U/213/SC213A-R740LP.cfm
Speaking of FreeNAS, is FreeNAS faster than the LSI 9261? Same RAID 10, same machine, same drives: which one is faster?
The LSI's probably faster, but it's limited in features/capabilities. ZFS is of course consuming the local CPU to do what it does.
Also, in the context of providing ESXi datastores, the LSI has the advantage of being direct attach storage (DAS), so it should almost always win that hands down. FreeNAS has to be a separate machine over the network. It's very difficult to compete with local disk.
P.S. These RAID controllers have "spike speeds" and it really confuses me when I try to determine how fast they are. Speed jumps very high in the beginning, then slows down; I can't even tell what is actually going on on the transfer side. I did enable write cache, knowing its importance, but everything else is default.
Easy peasy. What's happening is write cache. Depending on the size of your particular controller's on-board cache, what happens is that you start writing to the RAID controller and those writes go in the cache and the controller says "OK written" right away. In the meantime the controller starts poking the hard drive and saying "hey slow sluggish sleepyhead, I need you to write this." And a bunch of this builds up, as long as the RAID controller has write cache free.
So if you start a large operation like, oh, let's say a dd to a VM disk, what'll happen is that you'll experience insane write speeds for the first few seconds while the cache on the RAID controller fills. Then you'll slam into a wall. The wall is actually the speed of the underlying disks. Here's an example from a VM to an 18GB VM disk.
So you see here that the VM starts out for the first two or three seconds writing out at a blistering 400MB/sec, since the controller has 2GB of cache, of which I want to say it uses half for write cache. But by 4 seconds, we see the average speed reported has dropped to 265MB/sec, which is because what really happened is that around 3 seconds in, the write cache was full and speeds suddenly dropped to about 70MB/sec. The overall average reported speed starts dropping but in reality it just goes off a cliff:
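The arithmetic behind that decaying average can be sketched in a few lines. The 1GB write-cache share and the 400/70MB/sec figures below are illustrative assumptions pulled from the numbers in this post, not measurements from any particular controller:

```python
# Back-of-envelope model of the decaying "average" write speed: the
# cumulative average stays high while the cache absorbs writes, then
# slowly sinks toward the sustained disk speed after the cache fills.
CACHE_MB = 1024      # assumed write-cache half of the controller's 2GB
BURST = 400.0        # MB/s while writes land in controller cache
SUSTAINED = 70.0     # MB/s once the cache is full (disk-limited)

def reported_average(t):
    """Cumulative MB written divided by elapsed seconds, like dd reports."""
    fill_time = CACHE_MB / BURST            # seconds until the cache fills
    if t <= fill_time:
        return BURST
    written = CACHE_MB + (t - fill_time) * SUSTAINED
    return written / t

for t in (1, 2, 4, 8, 16, 32):
    print(f"t={t:>2}s  average={reported_average(t):6.1f} MB/s")
```

Run long enough, the reported average creeps down toward the 70MB/sec sustained rate, which is exactly the "starts dropping but really went off a cliff" shape described above.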
See, it runs at a high speed there for about 4 seconds, then suddenly slams into the ~60-70MB/sec wall that is sustainable. Now in theory the underlying drives are able to write at around ~100-110MB/sec, so that's vaguely disappointing. But it shows the behaviour. Reads, on the other hand, are a whole different story: no cache spike at all, the drives just read at a sustained ~60-70MB/sec.
SSD via the LSI, on the other hand...
So the weird thing there is that it writes faster than it reads.
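If you want to sanity-check those numbers yourself, a rough equivalent of the dd test can be scripted. The path and size here are assumptions for a quick demo; point PATH at the datastore or SSD in question and raise TOTAL_MB for a meaningful run:

```python
# Rough write-then-read throughput check, roughly what dd measures.
import os
import time

PATH = "/tmp/throughput_test.bin"   # hypothetical scratch file
CHUNK = 1 << 20                     # 1 MiB per write
TOTAL_MB = 64                       # tiny for a demo; the post used 18GB

buf = b"\0" * CHUNK

# Write pass: fsync pushes the data past the OS page cache (the RAID
# controller's own write cache can still absorb it, which is the spike
# discussed earlier in the thread).
start = time.monotonic()
with open(PATH, "wb") as f:
    for _ in range(TOTAL_MB):
        f.write(buf)
    f.flush()
    os.fsync(f.fileno())
write_mbs = TOTAL_MB / (time.monotonic() - start)

# Read pass. Caveat: without dropping the OS page cache first, a small
# file mostly reads back from RAM, so treat small-run read numbers with
# suspicion.
start = time.monotonic()
total = 0
with open(PATH, "rb") as f:
    while chunk := f.read(CHUNK):
        total += len(chunk)
read_mbs = total / (1 << 20) / (time.monotonic() - start)

os.remove(PATH)
print(f"write ~{write_mbs:.0f} MB/s, read ~{read_mbs:.0f} MB/s")
```

On an SSD behind a write-back cache it's quite possible for the write pass to report higher than the read pass, for the reasons above.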
