Hardware tips for mid-high end home media server

Etorix

Wizard
Joined
Dec 30, 2020
Messages
2,134
All this for a home video server?
Fair point. I suggested RDIMM based on the amount of RAM, and this has drifted to dual Xeon Gold servers. For "one game server" and "nothing incredibly intensive", one CPU may well do, but that's for @dirtynas to decide.

Fair point, I have opted for builds with fewer cores but higher thread speed. Of the following, I am leaning towards the Dell R740XD, as it has IPMI and larger power supplies, with some seemingly better networking.
Unless the transcoding GPU is very powerful, 920W should do. Nearly all Supermicro boards have IPMI (all dual CPU, and all single CPU with a trailing "F"). I defer to @jgreco to rule which is the best choice, or the lesser evil, between Intel 10 GbE-on-copper and Mellanox "QSFP+" for 10 GbE (something weird here)…
In any case, an extra SFP+ card would not be so expensive that networking should be decisive.

- 512GB - (16x)32GB DDR4 Registered DIMM memory module -> Maybe not RDIMM
"Registered" or "REG" is RDIMM. All fine here.

- 192GB DDR4 RAM Installed (12 x 16GB PC4-2133P) -> Maybe not ECC or RDIMM
I don't know what that "P" means, but I've only seen it on RDIMM modules, and never heard of RDIMM that is not also ECC.
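If you want to double-check the ECC part once the hardware is in hand, something along these lines should tell you. This is only a sketch, assuming a Linux host with dmidecode installed and root access; the exact field wording varies between BIOS vendors.

```python
# Sketch: confirm the installed memory reports ECC support via dmidecode.
# Assumes a Linux host with dmidecode available, run as root; field wording
# can differ slightly between BIOS vendors.
import subprocess

def memory_ecc_type() -> str:
    """Return the 'Error Correction Type' reported by the physical memory array."""
    out = subprocess.run(
        ["dmidecode", "-t", "memory"],
        capture_output=True, text=True, check=True,
    ).stdout
    for line in out.splitlines():
        line = line.strip()
        if line.startswith("Error Correction Type:"):
            return line.split(":", 1)[1].strip()
    return "unknown"

if __name__ == "__main__":
    # Expect something like "Multi-bit ECC" on RDIMM/ECC systems,
    # and "None" on plain unbuffered desktop memory.
    print(memory_ecc_type())
```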

To complete the above builds I also need the following hardware. Am I missing anything? Will the HBA below work?
In all cases, you'll need to add a GPU (check there's a suitable slot!) and replace the RAID controller by a plain HBA (LSI 9300). Preferably one with the right kind of connector in the right place so as not to have to re-do the cabling. -8i is enough if there are SAS expanders on the backplane, as appears to be the case for the Supermicro offers.
Flashing the HBA with the IT firmware is a nice finishing touch, but not strictly required. Getting rid of RAID hardware is necessary for secure ZFS operation.
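For what it's worth, once you have the card you can check what firmware it is running from the OS. A rough sketch, assuming Broadcom's sas3flash utility (the one used for the 9300 series) is installed; the exact output format differs between versions, so the string matching here is only a guess:

```python
# Rough check that an LSI/Broadcom SAS3 HBA (e.g. 9300-8i) reports IT firmware.
# Assumes the vendor's sas3flash tool is installed and on PATH; the output
# format varies between versions, so the "IT" match below is a best guess.
import subprocess

def hba_firmware_report() -> str:
    """Return the raw controller listing from sas3flash."""
    result = subprocess.run(["sas3flash", "-list"], capture_output=True, text=True)
    return result.stdout

if __name__ == "__main__":
    report = hba_firmware_report()
    print(report)
    # Look for an "IT" marker on the firmware/product lines; adjust this once
    # you see what your sas3flash version actually prints.
    it_hint = any(
        "Product ID" in line and "IT" in line
        for line in report.splitlines()
    )
    print("Looks like IT firmware." if it_hint else "Could not confirm IT firmware.")
```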

any decent 2.5 inch ssd as a boot drive (seems the Dell build is missing a SFF slot though, so I suppose some adapter is necessary)
Or an M.2 NVMe. Or a SATADOM.
 
Last edited:

dirtynas

Cadet
Joined
Sep 22, 2022
Messages
9
Thanks everyone for your help so far.

I have narrowed down the following as the base of the build unless someone has a strong reason to consider otherwise.
- 512GB DDR4 RAM Installed (16 x 32GB PC4-2133P)
- 2x Intel Xeon Gold 6140 24.75MB 140W LGA3647 (36-Cores Total)
- X11DPi-NT
- 12 bays

I plan to add the following to complete the build:
NOTE: I consider this to be an upgradable part, but I have one of these lying around already.
Not sure if this is overkill or underkill really, but I guess I thought these would be cheaper

Is there anything missing from this build? I would expect to add some ethernet cables and maybe a power cable and be good to go, with the only remaining hurdles being software-related: getting TrueNAS SCALE set up, i.e. the OS and the vdev configuration and whatever else goes into that.

There needs to be some VPN configuration, but I expect I can set that up as a separate project which is not limited by the hardware of this build.

EDIT: added power cables; now trying to figure out what cables are needed to power the HDDs
EDIT2: after rereading, I feel I am very confused on how the HBA connects with the backplane and 12 HDD. Should I expect that the backplane handles all the power and data connections for the 12 hdd and just that the HBA needs to be connected to the backplane by the 2x Mini SAS to Mini SAS cables I linked above?
 
Last edited:

dirtynas

Cadet
Joined
Sep 22, 2022
Messages
9
replace the RAID controller by a plain HBA (LSI 9300). Preferably one with the right kind of connector in the right place so as not to have to re-do the cabling. -8i is enough if there are SAS expanders on the backplane, as appears to be the case for the Supermicro offers.
Can you please elaborate on 'Preferably one with the right kind of connector in the right place'. I have been watching some videos and reading some docs but I am struggling to foresee what this might mean or what the HBA config should be.
 

Redcoat

MVP
Joined
Feb 18, 2014
Messages
2,925
Can you please elaborate on 'Preferably one with the right kind of connector in the right place'. I have been watching some videos and reading some docs but I am struggling to foresee what this might mean or what the HBA config should be.
Studying pictures of HBAs will show you that some have connectors facing the top edge of the card near the bracket, and some at the rear edge of the card facing horizontally. Your "chosen" card has the latter.
 

Etorix

Wizard
Joined
Dec 30, 2020
Messages
2,134
Thanks everyone for your help so far.

I have narrowed down the following as the base of the build unless someone has a strong reason to consider otherwise.
- 512GB DDR4 RAM Installed (16 x 32GB PC4-2133P)
- 2x Intel Xeon Gold 6140 24.75MB 140W LGA3647 (36-Cores Total)
- X11DPi-NT
- 12 bays
General comment: I have no doubt that this system is a very strong server. Just make sure that it is not way over-specced and way overpriced for your needs. Unless there will be more apps/VMs than the first post suggests, a single CPU may do it.

Specific comments: As pictured in the eBay listing, the chassis exposes all PCIe slots… for half-height cards. That's going to be an issue for a GPU. To fit a full-height, dual-slot GPU you need a chassis with a riser to hold the card in a horizontal position, which will block some slots and raise further questions about the number and distribution of expansion cards.

Samsung 970 EVO Plus SSD 1TB NVMe M.2
As boot drive? (Too large)
Or as (non-redundant!) pool for your apps/VMs?
It is not supported, and not recommended, to use it for both purposes.

Is there anything missing from this build? I would expect to add some ethernet cables
Since you're adding a 10 GbE SFP+ NIC to a system which already has 10 GbE from onboard Base-T ports, some optical cabling is obviously in order.

EDIT2: after rereading, I feel I am very confused on how the HBA connects with the backplane and 12 HDD. Should I expect that the backplane handles all the power and data connections for the 12 hdd and just that the HBA needs to be connected to the backplane by the 2x Mini SAS to Mini SAS cables I linked above?
A backplane handles power and signal. A complete system should come with its own cables and everything wired. SAS connectors come in "MiniSAS" (SFF-8087, rectangular) and "MiniSAS HD" (SFF-8643, square) varieties, and can sit at different positions on the card (back edge, or middle/front facing the top).
Which is why I would suggest waiting until you get hold of the system, seeing what is there, and then ordering a plain HBA which has "the right kind of connector in the right place" so everything just fits by swapping the card, with no extra cable needed. The picture shows MiniSAS HD on the back edge, which would just fit the HBA you picked; but this is likely a stock picture rather than the actual system, which may or may not be configured exactly this way.
 


DigitalMinimalist

Contributor
Joined
Jul 24, 2022
Messages
162
Also make sure your graphics card fits in the server.
It will only work with a riser in 2U... the selected server is not really a good fit for your needs, imho...

It's just a media server with transcoding -> Quicksync is great for transcoding -> Alder Lake iGPU, or even Intel ARC dGPU.
I don't see the need for 36 XEON cores - I would prefer the much faster Single Core Performance of Alder Lake - just take a 13700K, or 13900K if you want loads of cores.
Apart from that: 128GB RAM will be totally sufficient, a 10GbE NIC is fine, plus an 8- or 16-port HBA and a 4U case with sufficient space for the hard drives, e.g. Inter-Tech 4U-4416 or equivalent

just my 2 cents

 

Etorix

Wizard
Joined
Dec 30, 2020
Messages
2,134
It's just a media server with transcoding -> Quicksync is great for transcoding -> Alder Lake iGPU, or even Intel ARC dGPU.
I don't see the need for 36 XEON cores - I would prefer the much faster Single Core Performance of Alder Lake - just take a 13700K, or 13900K if you want loads of cores.
I agree that QuickSync would be good for transcoding, but IMHO Alder Lake is "too new" to be recommended now; or else go for a "non-K" CPU without the hybrid architecture. Raptor Lake 13xxx CPUs are not even released!

Apart from that: 128GB RAM will be totally sufficient,
That's the concern… The design request is for 12-16 drives, starting now with 18 TB drives with a view to adding more vdevs. That's a lot of storage (72 TB usable for a 6-wide raidz2, 144 TB for 2 vdevs), and with some services/VMs on top of it, the 128 GB RAM limit of a Core platform is going to be a minimum to run TrueNAS.
I'm all for pricing alternatives though.
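For reference, the napkin math behind those capacity figures (assuming 6-wide raidz2 vdevs of 18 TB drives; this ignores ZFS metadata overhead, slop space and the TB/TiB difference, so treat the results as ballpark only):

```python
# Ballpark usable capacity of raidz2 vdevs. Ignores ZFS metadata overhead,
# slop space and TB/TiB conversion, so results are estimates only.
def raidz2_usable_tb(drive_tb: float, width: int, vdevs: int = 1) -> float:
    data_drives = width - 2          # raidz2 spends two drives per vdev on parity
    return drive_tb * data_drives * vdevs

if __name__ == "__main__":
    print(raidz2_usable_tb(18, 6, vdevs=1))   # 72.0  -> one 6-wide vdev
    print(raidz2_usable_tb(18, 6, vdevs=2))   # 144.0 -> after adding a second vdev
```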
 

elorimer

Contributor
Joined
Aug 26, 2019
Messages
194
It's just a media server with transcoding
OP hasn't said anything about transcoding or for that matter what is being served to what and where. 36 Xeon cores!! The biggest part of the budget is the electric bill and the A/C load.
 

DigitalMinimalist

Contributor
Joined
Jul 24, 2022
Messages
162
OP hasn't said anything about transcoding or for that matter what is being served to what and where. 36 Xeon cores!! The biggest part of the budget is the electric bill and the A/C load.

See the initial post - to me at least, it was very clear that we're dealing with high-resolution video and transcoding... and not so much with high-core-count workloads...

Workloads:
  • plex for high quality video (VR 8k)
  • high quality video transcoding
  • hosting 1 (or more?) game server (Minecraft, 'ARK: Survival Evolved')
  • docker/container support/loads (nothing incredibly intensive)
 

elorimer

Contributor
Joined
Aug 26, 2019
Messages
194
See the initial post - to me at least, it was very clear that we're dealing with high-resolution video and transcoding
Yes, quite right. I got the 8K but missed the transcoding. Still, if Plex is serving 8K video to an 8K TV, there is no transcoding, and if transcoding down to 1080p, maybe background transcoding is the way to go. No hardware transcoding in this mix either.
 
Last edited:


jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
EDIT2: after rereading, I feel I am very confused on how the HBA connects with the backplane and 12 HDD. Should I expect that the backplane handles all the power and data connections for the 12 hdd and just that the HBA needs to be connected to the backplane by the 2x Mini SAS to Mini SAS cables I linked above?

Suggest a read-over of


Which doesn't exactly cover your question, but does cover the meta-issues of what SAS is and how SAS expanders work. Backplanes either have a SAS expander on them, in which case you really need nothing more than a single SFF8087/8643 cable from the HBA to the backplane, or no expander, in which case you need 4 HBA lanes (one SFF8087/8643 cable) for every 4 drives, which tends to be a PITA. Feel free to come back with further questions.
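To put rough numbers on that, a small sketch of the cable count (assuming each SFF-8087/8643 connector carries 4 SAS lanes, and that an expander backplane hangs all its bays off a single connector, as described above):

```python
# Rule-of-thumb cable count from HBA to backplane.
# Assumes each SFF-8087/SFF-8643 connector carries 4 SAS lanes, and that an
# expander backplane can serve all of its bays through one connector.
import math

def cables_needed(num_drives: int, backplane_has_expander: bool) -> int:
    if backplane_has_expander:
        return 1                       # the expander fans one x4 link out to every bay
    return math.ceil(num_drives / 4)   # direct-attach: one x4 cable per 4 drives

if __name__ == "__main__":
    print(cables_needed(12, backplane_has_expander=True))    # 1 -> an -8i HBA is plenty
    print(cables_needed(12, backplane_has_expander=False))   # 3 -> 12 lanes, more than an -8i has
```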
 

dirtynas

Cadet
Joined
Sep 22, 2022
Messages
9
Thanks all for your comments about whether the build meets my needs. I appreciate the overall patience here, as I understand my needs were not well defined. I have collected some related comments:

General comment: I have no doubt that this system is a very strong server. Just make sure that it is not way over-specced and way overpriced for your needs. Unless there will be more apps/VMs than the first post suggests, a single CPU may do it.
All this for a home video server?
I don't see the need for 36 XEON cores - I would prefer the much faster Single Core Performance of ...
36 Xeon cores!!
I appreciate these comments and they seem reasonable. I generally agree/expect that I need fewer CPUs/cores/threads and instead more speed per thread. However, I was struggling to find prebuilt rack servers in the 12-24 LFF bay range that have anything comparing favorably to the 'Xeon 6140', which seems to be a huge bargain nowadays relative to its performance. If you know a processor/build that compares favorably in the used marketplace, please let me know.

To be more specific, glancing over eBay listings for 'server 12 lff' is not so promising. It comes across as though higher-speed CPUs are too uncommon/new for me to find builds in a reasonable price range, but I could be overlooking something. Higher-speed CPUs are also typically not incorporated in builds supporting so many HDDs (at least on eBay).

The biggest part of the budget is the electric bill and the A/C load.
I glossed over power consumption, but I guessed this would not be so big as to shape the build. I saw one (admittedly questionable) source mention a $25-150 yearly cost per 'Xeon 6140' depending on usage.

As for further defining my build needs:
I am a software developer with 5 years of experience; I like writing crappy automation scripts and executing inefficient brute-force solutions that rely on my hardware to overcompensate. When I mentioned transcoding, I was referring to scripts that batch-locate and compress files. Additionally, files will be batch-transcoded with different settings and compared in terms of visual artifacts. There will also be batch upsampling in some cases. AFAIK, the only time transcoding needs to happen in 'real time' is for down-transcoding (i.e. 8K -> 4K), but I am really not sure (I lack the knowledge/foresight).
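In case it helps frame the workload, the batch jobs are roughly of this shape. This is only a sketch for illustration; the paths, codec and CRF value are placeholders, not settings I've actually settled on:

```python
# Sketch of the batch re-encode loop described above. Paths, codec and CRF
# are illustrative placeholders, not final settings.
import pathlib
import subprocess

SOURCE_DIR = pathlib.Path("/mnt/tank/media/incoming")   # hypothetical dataset paths
OUTPUT_DIR = pathlib.Path("/mnt/tank/media/encoded")

def transcode(src: pathlib.Path, dst: pathlib.Path, crf: int = 22) -> None:
    """Re-encode one file with ffmpeg (software x265 here; a GPU encoder would differ)."""
    dst.parent.mkdir(parents=True, exist_ok=True)
    subprocess.run(
        ["ffmpeg", "-y", "-i", str(src),
         "-c:v", "libx265", "-crf", str(crf),
         "-c:a", "copy", str(dst)],
        check=True,
    )

if __name__ == "__main__":
    for src in sorted(SOURCE_DIR.glob("*.mkv")):
        transcode(src, OUTPUT_DIR / src.name)
```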
 
Last edited:

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
I appreciate these comments and they seem reasonable. I generally agree/expect that I need fewer CPUs/cores/threads and instead more speed per thread. However, I was struggling to find prebuilt rack servers in the 12-24 LFF bay range that have anything comparing favorably to the 'Xeon 6140', which seems to be a huge bargain nowadays relative to its performance. If you know a processor/build that compares favorably in the used marketplace, please let me know.

This is difficult to find because of the market dynamics. The used server market is dominated by gear that has come out of corporate data centers or cloud computing facilities, which are where the majority of Xeons tend to come from. Both market segments tend to value high-density CPUs, because the cost of a server platform is added to the cost of the CPU. For example, ten years ago, a high-end CPU like the E5-2697v2 was perhaps $2200 for 12 cores, so a complete basic server might have cost about $7000 for 24 cores, or about $300 per core. When you start looking at the typical price to buy an E3-1230v2 system, this ends up being very competitive, because of two factors --

One, add-on cards like RAID controllers cost big bucks. So even though you could build a credible E3-1230v2 system for ~$1200 without RAID and 10G, adding those cards to a much smaller server is costly. It costs a lot more to outfit a quartet of E3-1230v2s with a full set of cards than it does a single E5-2697v2.

Two, load balancing and capacity expansion is much easier on bigger systems.

This means that there are driving forces that encourage big hosting shops to look at the high-core-count CPUs even if they are lower-clocked parts.

Typically in the older X10 gear, I've recommended people look at stuff like the single-socket E5-1650v3 or v4, because it is a workstation-grade part that is optimized towards core speed. Comparing it to the E5-2643v3 or v4, the equivalent dual-socket parts at similar clock speeds, is instructive once you look at the MSRP, which for the dual-socket part is about 3x that of the single-socket part. This means that you just don't see a lot of the high-core-speed E5-26xx parts on the used market, and those that do show up end up commanding a bit of a price premium because they are more valued by home hobbyists and some of us IT "refurb" guys.

The lowest prices will tend to be for the CPUs that aren't in great demand: the ones flooding out of data centers in great quantity but that have nothing particularly remarkable about them. They're the generic Honda Accords of this market: relatively inexpensive, suitable for many tasks, easily available.
 

Etorix

Wizard
Joined
Dec 30, 2020
Messages
2,134
Thanks all for your comments about whether the build meets my needs. I appreciate the overall patience here, as I understand my needs were not well defined.
You're welcome. Take your time to think it over—especially if you're inclined to spend around $3000 on it.
There is still no precise estimate of your needs for services/apps/VMs, i.e. an estimate of the number of cores and the amount of RAM you'd need on top of what ZFS may require for the storage part (6-wide Z2 with 18 TB drives = 72 TB to begin with, double that with a second vdev).

I appreciate these comments and they seem reasonable. I generally agree/expect that I need fewer CPUs/cores/threads and instead more speed per thread. However, I was struggling to find prebuilt rack servers in the 12-24 LFF bay range that have anything comparing favorably to the 'Xeon 6140', which seems to be a huge bargain nowadays relative to its performance. If you know a processor/build that compares favorably in the used marketplace, please let me know.
I'm not very familiar myself with rackmount hardware because I don't use these: There's no place I could put a noisy rack in my house!
@jgreco has given valuable insights on the market, which explain why you're mostly finding dual-Xeon systems in this 12-24 LFF size. From a quick look at Dell's offerings, it seems that the single-socket servers in this size are EPYC (R7000 series), and the model I looked at as an example would not take double-slot expansion cards.
Options:
  • Keep searching for a 2U 12 LFF server (or at least a chassis) which would take an RTX 2080.
  • Find a GPU which would fit in existing 2U 12 LFF servers (single-slot and/or half-length, so rather a server-style GPU with a blower fan, which would probably fit the airflow model of the chassis better than a gamer-type triple fan GPU).
  • Go for 4U. For instance (no recommendation, I just browsed the store of one of the sellers you had selected), this Supermicro 4U 24 LFF chassis could take any (E-)ATX motherboard and the RTX 2080 GPU:
 
Last edited:

Redemption

Dabbler
Joined
Aug 3, 2022
Messages
32
If you have an Azure P1 license, you can use the application proxy to access all your on-premises applications. No need to set up a VPN! You can then protect your login with MFA etc. I access Home Assistant and other apps this way. The only reason I prefer it over a VPN is that I can use any computer to access my applications.
 

ChrisRJ

Wizard
Joined
Oct 23, 2020
Messages
1,919
I glossed over power consumption, but I guessed this would not be so big as to shape the build. I saw one (admittedly questionable) source mention a $25-150 yearly cost per 'Xeon 6140' depending on usage.
Well, that very much depends on what you pay per kWh. In large parts of Europe, 50 ct/kWh is considered a bargain these days.

The Xeon 6140 has a TDP of 140 W. On the other hand, if we take the 50 ct/kWh and the 150 USD per year, that would mean 300 kWh per year. Makes 0.82 kWh per day, which would roughly equate to 35 W average power consumption. For 25 USD per year this would mean around 6 W on average.
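The same back-of-the-envelope calculation as a tiny script (same assumptions: 24/7 operation at a flat 50 ct/kWh; average draw in watts is the input):

```python
# Back-of-the-envelope: average draw in watts -> yearly energy cost.
# Assumes continuous 24/7 operation and a flat price per kWh.
def yearly_cost(avg_watts: float, price_per_kwh: float = 0.50) -> float:
    kwh_per_year = avg_watts / 1000 * 24 * 365
    return kwh_per_year * price_per_kwh

if __name__ == "__main__":
    print(round(yearly_cost(35)))    # ~153 -> roughly the "$150 per year" figure
    print(round(yearly_cost(6)))     # ~26  -> roughly the "$25 per year" figure
    print(round(yearly_cost(140)))   # ~613 if a CPU really ran at its full 140 W TDP all year
```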
 

elorimer

Contributor
Joined
Aug 26, 2019
Messages
194
With six drives, 2x 10GbE and two 6140s, maybe an average of 200-250 W total, on the low side? (I have a 95 W CPU with 5 drives that idles at 146 W and draws 175 W under load.) Running all the time at 50 ct/kWh, that's roughly 900-1200 € per year. Take off something because you don't need as much heat in the winter. Having invested 4000 for the first year, a backup would also be, um, advisable, but it wouldn't need to be such a beast.

If this is being spec'd out for background batch processing rather than file storage/serving, a lot depends on how much over how long, and with what software. It sounds like this is more for experimenting than accumulating five versions of the same file for Plex to choose from. It might be better to spec a TrueNAS file server running 24/7 and then a beast running occasionally to do the batch processing and save the files back. Might be worth experimenting with the software and the processing loads before making the leap.
 
Last edited: