New Build Help and Advice

shulemmosko

Dabbler
Joined
Feb 5, 2024
Messages
14
Hello all,

I have a video editing team that needs to be able to work off a shared drive editing huge video files.

I have put together a build and would like some more insight and advice since I'm new to TrueNAS :)

Here are the specs I'm looking at...

SERVER

Processor: 2x 3.60 GHz Xeon Gold 5122 quad-core - 8 cores total
RAM: 6x 32 GB PC4-25600-R (3200 MHz) - 192 GB total
Power Supply: 2x 1100 W Platinum
Storage: 2x 400 GB SAS SSD 2.5" 12 Gb/s (boot drives)
6x Micron 9300 Pro 15.36 TB NVMe U.2 enterprise SSD (data drives)
Network Daughter Card: Dual-Port 10 GbE SFP+ + Dual-Port 1 GbE RJ-45 - DELL_0C63DV
PCIe NIC/Drive Slots: Dual-Port 25 GbE SFP28 - DELL_000M95

SWITCH

UniFi USW-Pro-Aggregation: 28x 10G SFP+ ports and 4x 25G SFP28 ports

Workstations

iMacs connected to the UniFi switch via the following adapter:

Sonnet Technologies Solo 10G Thunderbolt 3 to 10GBASE-T Ethernet Fanless Adapter

------------

Any input, advice, etc. is appreciated.

Thank you!!
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
You need to narrow down "editing" - is this a "copy things to the workstations and back once they're done" or a "read directly from the NAS while editing" sort of scenario?
 

shulemmosko

Dabbler
Joined
Feb 5, 2024
Messages
14
You need to narrow down "editing" - is this a "copy things to the workstations and back once they're done" or a "read directly from the NAS while editing" sort of scenario?
Ideally, they would like to read directly from the NAS while editing.

Thanks!
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
Ok, so I guess the question is down to CPU. 8 cores feels a bit wimpy for that, especially spread out over two sockets. My baseline would probably be 16 cores, but single-CPU. I guess you're buying used, hence the R640/R740-ish system? I would probably aim toward an R6515 or R7515, to get more PCIe lanes for the SSDs.
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
Well, yes and no. SMB benefits from fast cores, but you do want more cores to deal with everything going on with multiple users, especially if you want to take advantage of low latency.
 

shulemmosko

Dabbler
Joined
Feb 5, 2024
Messages
14
Well, yes and no. SMB benefits from fast cores, but you do want more cores to deal with everything going on with multiple users, especially if you want to take advantage of low latency.
Got it, so 1x 3.00 GHz Gold 6154 18-core would be better?
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
Yes and no. The problem with the Xeon Scalable v1/v2 platforms is that PCIe lanes are very scarce in single-socket systems, with just 48 total from the CPU. Six SSDs take half of that; another 8 for the HBA (even if not in use) and another 8 for the NIC makes 40. So you have 8 lanes left over in the end, which is not conducive to future expansion, and that's already a best case I arrived at without reviewing how lanes are really assigned.
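That lane tally works out as follows - a back-of-the-envelope sketch, where the x4/x8 link widths are typical values for these device classes, not board-verified slot assignments:

```python
# Rough PCIe lane budget for one Xeon Scalable v1/v2 CPU (48 lanes total).
# Link widths below are typical for these devices, not board-specific.
CPU_LANES = 48

consumers = {
    "6x U.2 NVMe SSDs (x4 each)": 6 * 4,
    "HBA (x8, even if idle)": 8,
    "25GbE NIC (x8)": 8,
}

used = sum(consumers.values())
print(f"used     = {used}")             # 40
print(f"leftover = {CPU_LANES - used}")  # 8
```

Doubling to 12 NVMe drives would add another 24 lanes of demand, which is why the expansion plan doesn't fit this platform.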
 

shulemmosko

Dabbler
Joined
Feb 5, 2024
Messages
14
Yes and no. The problem with the Xeon Scalable v1/v2 platforms is that PCIe lanes are very scarce in single-socket systems, with just 48 total from the CPU. Six SSDs take half of that; another 8 for the HBA (even if not in use) and another 8 for the NIC makes 40. So you have 8 lanes left over in the end, which is not conducive to future expansion, and that's already a best case I arrived at without reviewing how lanes are really assigned.
So if I throw in a second Gold 6154 18-Core CPU, will that be better?
 

shulemmosko

Dabbler
Joined
Feb 5, 2024
Messages
14
I'm starting with 6x Micron 9300 Pro 15.36 TB NVMe U.2 enterprise SSDs (for the data pool) and want to be able to add another 6.
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
Does an R740 have a suitable backplane for that? I think that 8 U.2 bays are available, but I don't know if there's the option for 12.
 

NugentS

MVP
Joined
Apr 16, 2020
Messages
2,947
I think @Ericloewe has the right of it. I have no experience with that many NVMe drives, but you are going to need a lot of fast CPU and a lot of RAM - I suspect a lot more than your specified 192 GB. You will also need a lot of lanes. Maybe an Epyc-based solution?
At least with RAM you can add more quite easily - so make sure you have lots of RAM expansion capability.

Other threads on this forum imply that this isn't as easy as it sounds. Maybe this is one for iXsystems?
 

shulemmosko

Dabbler
Joined
Feb 5, 2024
Messages
14
Hi @Ericloewe, will the following be better than the Xeon-based build?

Processor: 2x 2.30 GHz EPYC 7451 24-core - 48 cores total
RAM: 6x 32 GB PC4-21300-R (2666 MHz) - 192 GB total
Power Supply: 2x 1100 W Platinum
RAID Controller: HBA330 Pass-Through SAS Non-RAID 12 Gb/s
Storage:
2x 400 GB SAS SSD 2.5" 12 Gb/s
6x new Micron 15.36 TB NVMe SSD
Network Daughter Card: Dual-Port 25 GbE SFP28 - DELL_0R887V
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
So that would be like an R6245?
RAM: 6x 32 GB PC4-21300-R (2666 MHz) - 192 GB total
Six DIMMs is very little for a dual-Epyc system, and the memory controllers in those things are not at all optimized to operate with only three DIMMs per CPU. This really feels like single-socket Epyc territory, say an R6515, with eight DIMMs.
RAID Controller: HBA330 Pass-Through SAS Non-RAID 12 Gb/s
Storage:
2x 400 GB SAS SSD 2.5" 12 Gb/s
Consider dropping the HBA and using small NVMe SSDs as the boot devices. SATA would also be an option, but you may or may not be able to cable up the bays appropriately, depending on the server.
Network Daughter Card: Dual-Port 25 GbE SFP28 - DELL_0R887V
Not a good choice at all to use with Core. More acceptable with Scale. You would want whatever Intel NIC meets your needs (XXV710 or E810 for 25GbE, presumably). On Gen 14 systems, to have basic networking (for setup, maintenance, etc.) add something like a 0C63DV rNDC for dual I350 1GbE and dual X520 SFP+ 10GbE or a 099GTM for 10GBase-T instead of SFP+. On Gen15, you always have onboard 1GbE NICs (even though they're crappy Broadcoms). Probably skip the Dell OCP NIC 2.0-in-disguise Broadcom NICs on Gen 15.
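As a rough sketch of the memory-channel point above - assuming 8 DDR4 channels per EPYC socket (standard for the EPYC 7001/7002 generations), one DIMM per channel, and an even spread across sockets:

```python
# How many of each EPYC socket's DDR4 memory channels actually carry a
# DIMM, assuming one DIMM per channel and an even split across sockets.
CHANNELS_PER_SOCKET = 8  # standard for EPYC 7001/7002

def channels_populated(total_dimms: int, sockets: int) -> int:
    """DIMM-carrying channels per socket for an evenly spread population."""
    return min(total_dimms // sockets, CHANNELS_PER_SOCKET)

# Proposed dual-EPYC build: 6 DIMMs over 2 sockets -> only 3 of 8 channels
print(channels_populated(6, 2))  # 3
# Suggested single-socket R6515 with 8 DIMMs -> all 8 channels populated
print(channels_populated(8, 1))  # 8
```

With only three of eight channels populated per socket, memory bandwidth drops well below what the platform can deliver, which matters for a NAS pushing 25 GbE.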
 

shulemmosko

Dabbler
Joined
Feb 5, 2024
Messages
14
@Ericloewe, the R6515 only has 10 bays; I need to be able to expand to at least 12 bays of 15.36 TB each.

Should I lower the GB per DIMM and have 16 DIMMs or more?
 

shulemmosko

Dabbler
Joined
Feb 5, 2024
Messages
14
@Ericloewe, how about the following?

Processor: 2x 2.30 GHz EPYC 7451 24-core - 48 cores total
RAM: 24x 8 GB PC4-21300-R (2666 MHz) - 192 GB total
Network Daughter Card: Dual-Port 10 GbE RJ-45 + Dual-Port 1 GbE RJ-45 - DELL_099GTM
PCIe NIC/Drive Slots: Dual-Port 25 GbE SFP28 - MELLANOX_MCX4121A-ACAT
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
@Ericloewe, the R6515 only has 10 bays; I need to be able to expand to at least 12 bays of 15.36 TB each.

Should I lower the GB per DIMM and have 16 DIMMs or more?
So an R7515, 16-bay.
@Ericloewe, how about the following?

Processor: 2x 2.30 GHz EPYC 7451 24-core - 48 cores total
RAM: 24x 8 GB PC4-21300-R (2666 MHz) - 192 GB total
Network Daughter Card: Dual-Port 10 GbE RJ-45 + Dual-Port 1 GbE RJ-45 - DELL_099GTM
PCIe NIC/Drive Slots: Dual-Port 25 GbE SFP28 - MELLANOX_MCX4121A-ACAT
You need to be looking at multiples of 16 DIMMs; odd memory configurations are not great. That means either 128 GB or 256 GB - take your pick. And the ConnectX-4 is still a problem.
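The capacity options fall out of populating all 16 slots (assuming one DIMM per channel across two 8-channel sockets):

```python
# Total RAM when all 16 DIMM slots of a dual-EPYC system are populated
# (2 sockets x 8 channels, one DIMM per channel).
DIMM_SLOTS = 2 * 8

for size_gb in (8, 16):
    print(f"{DIMM_SLOTS} x {size_gb} GB = {DIMM_SLOTS * size_gb} GB")
# 16 x 8 GB = 128 GB
# 16 x 16 GB = 256 GB
```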
 