Well, we live to learn. I'm even considering a single CPU / MB configuration now. It could be enough for a lot less $. I bet I say that at least once a day :)
Thanks for that insight; I was actually also thinking about the E5-1650 v3. I see that the X10SRL gives you only PCI-E x8 slots, while the X9DRE-TF+ has PCI-E x16, and I guess the latter gives you more actual lanes rather than switching them on and off depending on what hardware is connected, as the X10SRL does.
But I'm not planning to use anything more than 1 HBA (maybe 2 in the distant future) and 1 dual-port 10Gbps Ethernet card, so 2 (or 3 at most) PCI-E x8 slots should be enough.
So my choice is currently based on CPU cores rather than PCI-E availability, but I haven't done enough tests on my test machine to see whether more cores give any benefit. I also see that both TrueNAS and 45drives sell dual-CPU servers (2x2.6 GHz or 2x2.4 GHz, if I remember correctly), so there may be some use for them, but there could be other considerations in their particular setups.
Talking about PCI-E, do you think there's a reason in my case to get a 2308-based HBA (or a MB with an integrated 2308), or is sticking with the M1015 enough? My guess is that 24x SAS HDDs will not saturate the PCI-E v2 x8 link of the M1015, but if I someday add an expander on the second port, the 4 GB/s ceiling (8 x 500 MB/s per v2 lane) could in theory limit the maximum I could get from all the HDDs. Not that it will actually reach those speeds in any real-life scenario...
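Just to put rough numbers on that, here's a quick back-of-the-envelope sketch (the per-drive throughput is an assumption, not a measured figure):

# Rough bandwidth check for 24 SAS HDDs behind a PCIe 2.0 x8 HBA like the M1015.
# Assumed figures: ~500 MB/s usable per PCIe 2.0 lane, ~200 MB/s best-case
# sequential throughput per 7200 rpm HDD.
PCIE2_LANE_MBPS = 500
LANES = 8
HDD_SEQ_MBPS = 200          # assumption; your drives may differ
DRIVES = 24

slot_ceiling = PCIE2_LANE_MBPS * LANES          # 4000 MB/s
aggregate = HDD_SEQ_MBPS * DRIVES               # 4800 MB/s

print(f"Slot ceiling:    {slot_ceiling} MB/s")
print(f"Drive aggregate: {aggregate} MB/s")
print(f"Drives the slot can feed at full speed: {slot_ceiling // HDD_SEQ_MBPS}")

So only an all-sequential, all-24-drives-busy workload would bump into the x8 v2 limit, which matches the "not in any real-life scenario" caveat above.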
None comes to mind :) But the question I have for you is, what sort of hardware would you be putting into your FreeNAS server that has an x16 connector?
X10SRL + E5-1600 doesn't support LRDIMMs. That rules out any modules bigger than 16GB and limits memory to 128GB, correct?
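For what it's worth, the memory math behind that figure, as a quick sketch (the DIMM slot count and the 16GB-per-module cap are assumptions to verify against the board manual):

# Hypothetical memory-ceiling calculation for a single-socket X10SRL-class board.
DIMM_SLOTS = 8              # assumption: 8 DIMM slots on the board
MAX_MODULE_GB = 16          # assumption: largest module usable without LRDIMM support
print(f"Max memory: {DIMM_SLOTS * MAX_MODULE_GB} GB")   # 128 GB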
nvd0: <INTEL SSDPEDMW400G4> NVMe namespace
nvd0: 381554MB (781422768 512 byte sectors)
nvd1: <Samsung SSD 950 PRO 512GB> NVMe namespace
nvd1: 488386MB (1000215216 512 byte sectors)
[root@storage3] /# dd if=/dev/nvd0 bs=1048576 count=32K of=/dev/null
32768+0 records in
32768+0 records out
34359738368 bytes transferred in 14.371266 secs (2390863690 bytes/sec)
[root@storage3] /# dd if=/dev/nvd1 bs=1048576 count=32K of=/dev/null
32768+0 records in
32768+0 records out
34359738368 bytes transferred in 21.282606 secs (1614451658 bytes/sec)
[root@storage3] /#
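Converting those dd numbers into friendlier units (just arithmetic on the figures above):

# Sequential read results from the dd runs above, as (bytes transferred, seconds).
results = {
    "nvd0": (34359738368, 14.371266),
    "nvd1": (34359738368, 21.282606),
}
for dev, (nbytes, secs) in results.items():
    mbps = nbytes / secs / 1_000_000
    print(f"{dev}: {mbps:,.0f} MB/s ({mbps / 1000:.2f} GB/s)")
# nvd0: ~2,391 MB/s (2.39 GB/s); nvd1: ~1,614 MB/s (1.61 GB/s)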
I honestly don't know what that means. I just saw that Supermicro and Intel have those adapters.
The PLX PCI-e switches work 100% like you'd expect them to. And they do all the cool things you'd expect them to do (not that said cool things are useful when running two x4 devices off an x8 interface). The safer choice may be just to go with the PLX. I generally dislike magic hardware because experience says that at some point you may end up needing to swap in something else (motherboard dies, etc.). As far as I know, the PLX should be much more compatible with everything. It won't make it possible to boot off NVMe, but that shouldn't be a concern for FreeNAS users. Do bear in mind that my NVMe experience doesn't actually include the two AOCs under discussion, at least not yet :) I much prefer to cram things into actual PCIe slots because the 2.5" bays are always full!
And they do all the cool things you'd expect them to do
You have odd expectations for PCI-e switches. :D I didn't see "clean the house and make dinner while fixing all my crappy code" in the feature list. Hm.