BUILD Advice on VFX Studio File Server


jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Well, let me tell you, if you want to go single processor, a combination like the X10SRL, an E5-1650v3, and 128GB of RAM (four 32GB DIMMs) should be less than $2,000. The 1650 is likely to be more CPU than would be needed for all but the most demanding NAS. I picked the 1650 because we're standardized on the WIO version of that board, the X10SRW, and are using it for hypervisors.

The problem is you have to be really careful about your PCIe; the way they get seven slots on the X10SRL is through careful reduction of the number of PCIe lanes in each slot. The reason you MIGHT actually want to stick with dual CPUs in a situation where you want massive I/O capability is the additional PCIe lanes the second CPU buys you. In that case the 2637 or maybe the 2643 really is your friend. A dual 2643 is possibly overkill in the same way that the 1650 I selected was overkill; it's a minor cost bump to ensure that the system is uncompromisingly the fastest thing it can be.
 

wuxia

Dabbler
Joined
Jan 7, 2016
Messages
49
Thanks for that insight; I was actually also thinking about the E5-1650v3. I see that the X10SRL gives you only PCIe x8 slots while the X9DRE-TF+ has PCIe x16, and I guess it gives you more actual lanes rather than turning them on and off depending on what hardware is connected, like the X10SRL does. But I'm not planning to use anything more than 1 HBA (maybe 2 in the distant future) and 1 dual-port 10Gbps Ethernet card, so probably 2 (or 3 max) PCIe x8 slots could be enough.

So my choice is currently based on CPU cores and RAM rather than PCIe availability, but I haven't done enough tests on my test machine to see if more cores give you any benefit. I also see that both TrueNAS and 45drives are selling dual-CPU servers (2x2.6 or 2x2.4 if I remember correctly), so there may be some use for them, but there could be other considerations in their particular setups. One more thing: I was planning to use 256GB of memory, but X10SRL + E5-1600 limits you to 128GB, right?

Talking about PCIe, do you think there's a reason in my case to get a 2308-based HBA (or a motherboard with an integrated 2308), or is sticking with the M1015 enough? My guess is that 24 SAS HDDs will not oversaturate the PCIe 2.0 x8 of the M1015, but if I someday add an expander on the second port, the 4GB/s (8 lanes x 500MB/s per PCIe 2.0 lane) could in theory limit the maximum I could get from all the HDDs. Not that it will actually reach those speeds in any real-life scenario, but I still don't know if there is any real benefit in getting a PCIe 3.0 HBA.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Thanks for that insight; I was actually also thinking about the E5-1650v3. I see that the X10SRL gives you only PCIe x8 slots while the X9DRE-TF+ has PCIe x16, and I guess it gives you more actual lanes rather than turning them on and off depending on what hardware is connected, like the X10SRL does.

A sixteen port HBA using PCIe 2.0 x8 has the potential to move 4000 MBytes/sec if you look strictly at the PCIe bus. The average modern drive peaks at 200MBytes/sec. Not a problem.

A Chelsio dual 40GbE T580-CR uses PCIe 3.0 x8. Due to the upgrade to PCIe 3, it has nearly twice the bus bandwidth: 985MBytes/sec per lane, or about 7.9GBytes/sec across the x8 slot. Now admittedly there's some potential for contention there...
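If you want to sanity-check the arithmetic yourself, here's a rough back-of-the-envelope version (plain sh; the per-lane numbers are nominal and ignore protocol overhead):

Code:
# nominal per-lane throughput: PCIe 2.0 ~500 MB/s, PCIe 3.0 ~985 MB/s
echo "PCIe 2.0 x8: $((8 * 500)) MB/s"      # 4000 MB/s -- plenty for 16 drives at ~200 MB/s each
echo "PCIe 3.0 x8: $((8 * 985)) MB/s"      # 7880 MB/s
echo "2 x 40GbE:   $((2 * 40000 / 8)) MB/s"  # ~10000 MB/s raw, hence the potential contention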

But the question I have for you is, what sort of hardware would you be putting into your FreeNAS server that has an x16 connector?

But I'm not planning to use anything more than 1 HBA (maybe 2 in the distant future) and 1 dual-port 10Gbps Ethernet card, so probably 2 (or 3 max) PCIe x8 slots could be enough.

s/could/would/

So my choice is currently based on CPU cores rather than PCIe availability, but I haven't done enough tests on my test machine to see if more cores give you any benefit. I also see that both TrueNAS and 45drives are selling dual-CPU servers (2x2.6 or 2x2.4 if I remember correctly), so there may be some use for them, but there could be other considerations in their particular setups.

I think the general problem that the hardware vendors run into is that they want to provide something that's good enough while still being expandable in various directions without all the gotchas. See, I can easily sit here and tailor a solution to a problem because it's what I've done for years; my knowledge is not something I have to pay someone else for. However, if you're a sales engineer for a storage company, knowing all the ins and outs of all this stuff gets rather complicated as the matrix of possibilities expands. For example, I eschewed the obvious choice of the X10SRL when I was building a storage system here because we already had other X10SRW based systems and I wanted the option to maintain a smaller number of SKUs in spare inventory. It had some design implications but they were acceptable.

Talking about PCIe, do you think there's a reason in my case to get a 2308-based HBA (or a motherboard with an integrated 2308), or is sticking with the M1015 enough? My guess is that 24 SAS HDDs will not oversaturate the PCIe 2.0 x8 of the M1015, but if I someday add an expander on the second port, the 4GB/s (8 lanes x 500MB/s per PCIe 2.0 lane) could in theory limit the maximum I could get from all the HDDs. Not that it will actually reach those speeds in any real-life scenario...

I think you've got a sufficient handle on that to answer your own question.
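But just to spell out the back-of-the-envelope math you're already doing (same nominal per-lane numbers as above, and a generous ~200 MB/s per spinning disk):

Code:
# M1015: PCIe 2.0 x8, ~4000 MB/s of bus
echo "24 drives: $((24 * 200)) MB/s"   # 4800 MB/s -- over the ceiling only if all 24 stream flat out at once
# a 2308-based HBA (PCIe 3.0 x8, ~7880 MB/s) removes even that theoretical ceiling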
 

wuxia

Dabbler
Joined
Jan 7, 2016
Messages
49
But the question I have for you is, what sort of hardware would you be putting into your FreeNAS server that has an x16 connector?
None comes to mind :)

What about memory, then? I read on the Supermicro site that the X10SRL + E5-1600 doesn't support LR-DIMMs. That leaves out any modules bigger than 16GB, or limits memory to 128GB, correct? I was planning to start with 256GB, which leaves me two choices: X10SRL + an E5-2600 CPU or a dual-CPU board.
(btw I edited my previous post but you were just too fast with the answer :) )
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
X10SRL + E5-1600 doesn't support LR-DIMMs. That leaves out any modules bigger than 16GB, or limits memory to 128GB, correct?

The E5-16xx lacks LR-DIMM support, that much is correct. The rest, I have no idea where you came up with that. Intel claims 768GB, and even the older E5-1650v1 was 256GB.

I can tell you I've got a bunch of machines with 4 x M393A4K40BB0-CPB in them (4 x 32GB = 128GB) and that works fine. Hynix makes the HMA84GR7MFR4N-TF which should also be fine, but the Samy 32G stuff is cheaper (currently sub-$200 a stick).

I am pretty sure we did run 256GB of it in one of the boxes. It won't run at 2133 in that config, but rather 1866. I know for certain we ran that experiment with 8 of the 16GB modules when we first dabbled with the 1650v3, back when it first came out.

I haven't tried the 64GB Samy M393A8G40D40-CRB because I just can't get my boss (that's also me) to crack open the wallet for $10K in memory. :smile: At that point, it potentially gets more attractive to shell out a little extra for the dual configuration, in order to get the second set of RAM slots.

But I can tell you I've been cramming 128GB into lots of stuff lately and it makes life nice, especially when you do it as 32's so you have the option to cram more in.
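If you ever want to see what speed the DIMMs actually trained at after populating the slots (2133 vs 1866), dmidecode will tell you, assuming it's available on your box; run as root:

Code:
# look for the configured clock speed of each populated DIMM
dmidecode -t memory | grep -i speed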
 

wuxia

Dabbler
Joined
Jan 7, 2016
Messages
49
I guess things have changed since the last time I checked, or maybe I remembered it wrong. Thank you very much for your time and the excellent in-depth information. It was very helpful to me.
 

wuxia

Dabbler
Joined
Jan 7, 2016
Messages
49
I was given the green light to get some NVMe storage, so I was thinking of getting two Samsung 950 PRO 512GB NVMe drives or two Intel SSD 750 400GB drives for L2ARC. The decision came after analysing our work pattern. We mostly work on 1 project at a time and our projects are about 1-2 TB, but in a typical day we access maybe ~half a TB. In addition to that, we have all our software deployed centrally on the server, so being able to cache as much as possible would give a very nice boost. I suspect this setup would need more RAM, though. I'm kind of leaning toward the Samsungs because of the bigger capacity and supposedly better endurance. I've seen that these drives are rated at 2.5GB/s, so even if that speed isn't reached in real-life usage, striping two should in theory give a very good L2ARC. I'm not sure which PCIe adapter to get, as I think there shouldn't be any difference performance-wise, so sharing any experience is welcome.

What do you think?
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Having worked with both, the 950 Pro is nicer from a space perspective, while the 750 looks like a more solid bit of gear, with the heatsink and the supercaps that allow it to function as a SLOG.

I've currently got a 512G 950 Pro as L2ARC and a 400G 750 as SLOG, with plans to get another 950 Pro as need be. Let me continue this from a different machine...
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Okay. So, here, I can cut and paste. These are both wicked fast but not to the same extent. The Samsung is slower.

Code:
nvd0: <INTEL SSDPEDMW400G4> NVMe namespace
nvd0: 381554MB (781422768 512 byte sectors)
nvd1: <Samsung SSD 950 PRO 512GB> NVMe namespace
nvd1: 488386MB (1000215216 512 byte sectors)


Code:
[root@storage3] /# dd if=/dev/nvd0 bs=1048576 count=32K of=/dev/null
32768+0 records in
32768+0 records out
34359738368 bytes transferred in 14.371266 secs (2390863690 bytes/sec)
[root@storage3] /# ^0^1
dd if=/dev/nvd1 bs=1048576 count=32K of=/dev/null
32768+0 records in
32768+0 records out
34359738368 bytes transferred in 21.282606 secs (1614451658 bytes/sec)
[root@storage3] /#


Okay, so here's the thing. The Intel unit is faster and has a heatsink, which means that under load it is probably going to win out in a head-to-head competition. However, my thinking for L2ARC was that the larger device was slightly more desirable, plus the fact that it isn't constantly getting written to probably means that the lack of a heatsink isn't a big deal. Plus it's cheaper. Go take a look at

https://forums.freenas.org/index.ph...spx4-m-2-to-pcie-converter.39735/#post-247276

because that kind of option might also allow the use of some additional M.2 SSD.
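For reference, the end state I'm describing looks like this from the shell; the FreeNAS GUI does the same thing for you under the hood, and "tank" plus the nvdX device names are just placeholders for whatever your pool and drives turn out to be:

Code:
# 950 Pro as L2ARC (cache), 750 as SLOG (log)
zpool add tank cache nvd1
zpool add tank log nvd0
zpool status tank    # the cache and log vdevs should now show up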
 

wuxia

Dabbler
Joined
Jan 7, 2016
Messages
49
That seems like a nice adapter. BTW, it turns out I have an Intel 750 here, so I'll put it in my test system to see how good it is. And while we're on the card version, I see that Intel also has a 2.5-inch version with an NVMe interface. That would let you use 2 of those while taking up a single PCIe x8 slot with an adapter.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Careful, you probably need PCIe bifurcation for that to be a possibility.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
I honestly don't know what that means. I just saw that Supermicro and Intel have those adapters.

Most motherboards are not set up to support two or more PCIe devices on the same slot; their BIOS implicitly knows the slot-to-PCIe-lane configuration of the product. PCIe bifurcation support means that the BIOS has been made aware that a PCIe slot might have multiple devices connected to it, and that the hardware has been designed to allow it.

For example, the AOC-SLG3-2E4R has two ports directly connected to the PCIe x8 slot. But to the average system with the average BIOS, this presents a quagmire: there appear to be two separate PCIe x4 devices, and the BIOS hasn't configured the slot for that. Bifurcation support basically means the BIOS is set up to consider that possibility and look around before provisioning PCIe lanes.

The alternative is to build in a PCIe PLX switch, as is done with the AOC-SLG3-2E4. That looks like a single device on the PCIe bus with additional devices behind it, so it isn't confusing to the average BIOS. But it's more expensive and burns more watts.
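Either way, once the card and drives are in, it's easy to check whether both M.2 devices actually enumerated; a quick sketch using FreeBSD's stock NVMe tooling:

Code:
# each SSD should appear as its own controller with a namespace
nvmecontrol devlist
# or look for the nvme devices on the PCIe bus directly
pciconf -l | grep ^nvme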
 

wuxia

Dabbler
Joined
Jan 7, 2016
Messages
49
So I need to check whether the motherboard supports PCIe bifurcation and get the AOC-SLG3-2E4R, or just play it safe and get the AOC-SLG3-2E4 with the PLX switch. Got it.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
The safer choice may be just to go with the PLX. I usually have a dislike of magic hardware, because experience says that at some point you may end up needing to swap in something else (motherboard dies, etc.). As far as I know, the PLX should be much more compatible with everything. It won't make it possible to boot off NVMe, but that shouldn't be a concern for FreeNAS users. But do bear in mind that my NVMe experience doesn't actually include the two AOCs under discussion, at least not yet :smile: I much prefer to cram things into actual PCIe slots because the 2.5" bays are always full!
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
The safer choice may be just to go with the PLX. I usually have a dislike of magic hardware, because experience says that at some point you may end up needing to swap in something else (motherboard dies, etc.). As far as I know, the PLX should be much more compatible with everything. It won't make it possible to boot off NVMe, but that shouldn't be a concern for FreeNAS users. But do bear in mind that my NVMe experience doesn't actually include the two AOCs under discussion, at least not yet :) I much prefer to cram things into actual PCIe slots because the 2.5" bays are always full!
The PLX PCI-e switches work 100% like you'd expect them to. And they do all the cool things you'd expect them to do (not that said cool things are useful when running two x4 devices off a x8 interface).

Since there's no proprietary magic at work and PCI-e is set up to allow complex topologies, there should be no issues whatsoever.
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
I didn't see "clean the house and make dinner while fixing all my crappy code" in the feature list. Hm.
You have odd expectations for PCI-e switches. :D

From memory, PLX's stuff does very nice things like dynamic lane allocation (useful for GPUs) and direct PCI-e device to device communication.
 