Lower-power CPU with lots of PCIe lanes?

oguruma

Patron
Joined
Jan 2, 2016
Messages
226
Can anybody recommend a server-grade CPU/motherboard combo that has a lot of PCIe lanes, is relatively affordable (new), has low power consumption, and of course works well with TrueNAS SCALE?

Looking for something that can support a few apps and maybe a couple of smaller VMs, but that has a lot of PCIe (PCIe 3 or later) lanes to support NVMe drives.

I'd like to build an all-NVMe NAS with 8-12 NVMe drives
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
Yeah, what you're looking for does not exist.
> relatively affordable (new)

Forget about that.

> server-grade CPU/motherboard combo that has a lot of PCIe lanes
> I'd like to build an all-NVMe NAS with 8-12 NVMe drives

8-12 drives directly connected means 32-48 lanes, which is not chump change. That would need at least a dual Xeon E5, dual Xeon Scalable v1/v2, single Xeon Scalable v3/v4, or single Epyc system. That gets expensive and power hungry quickly.

You can keep things under control by using a PCIe switch or two instead of directly connecting all disks.
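To put rough numbers on both options, here's a minimal sketch (the x16 switch uplink width is an assumption about a typical switch card, not a recommendation):

```python
# Back-of-the-envelope lane math for the direct-attach vs. switched options.
# Assumptions: every drive uses a x4 link; the x16 switch uplink is hypothetical.

LANES_PER_DRIVE = 4

def direct_attach_lanes(drives: int) -> int:
    """CPU lanes needed if every drive is wired straight to the CPU."""
    return drives * LANES_PER_DRIVE

def oversubscription(drives: int, uplink_lanes: int) -> float:
    """Downstream-to-uplink ratio when all drives sit behind one switch."""
    return direct_attach_lanes(drives) / uplink_lanes

for n in (8, 12):
    print(f"{n} drives direct-attached: {direct_attach_lanes(n)} CPU lanes")
    print(f"{n} drives behind a x16 switch: {oversubscription(n, 16):.1f}:1 oversubscribed")
# Matches the 32-48 lane figure above; a switch trades lanes for shared bandwidth.
```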
 

Arwen

MVP
Joined
May 17, 2014
Messages
3,611
There are weird PCIe cards that take lots, (and I mean LOTS), of NVMe drives. Like 8 or more. They do share bandwidth, but only when the drives are being accessed at the same time. This would reduce or eliminate the system board's need for lots of PCIe slots, NVMe slots, or PCIe expansion connectors.
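As a rough illustration of that sharing behaviour, a sketch (the x16 Gen3 slot and 8-drive card are assumptions; real cards vary):

```python
# Rough per-drive bandwidth when several NVMe drives share one PCIe slot.
# Assumptions: 8-drive card in a x16 PCIe 3.0 slot; each drive has its own x4 link.

GEN3_GBPS_PER_LANE = 0.985  # ~usable GB/s per Gen3 lane (8 GT/s, 128b/130b encoding)

def per_drive_bandwidth(slot_lanes: int, active_drives: int, drive_lanes: int = 4) -> float:
    """GB/s per busy drive: an equal share of the uplink, capped by the drive's own link."""
    share = slot_lanes * GEN3_GBPS_PER_LANE / active_drives
    cap = drive_lanes * GEN3_GBPS_PER_LANE
    return min(share, cap)

for busy in (1, 4, 8):
    print(f"{busy} of 8 drives active on a x16 Gen3 card: "
          f"~{per_drive_bandwidth(16, busy):.1f} GB/s each")
# 1 or 4 active drives still run at full x4 speed; only beyond 4 do they contend.
```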

But, I do agree with @Ericloewe, there are no serious options for low-power, direct control of 8-12 NVMe drives that are also affordable new.

For practical purposes, NVMe is still cutting edge when it comes to more than a few drives, (aka 1 to 3). AMD's Epyc is really massive when it comes to PCIe lanes, but anything Epyc is costly because it is designed as a higher-end server chip. And any board that supports Epyc needs to be well thought out, (like making all PCIe slots 16 lanes electrical).
 

Jakub1

Cadet
Joined
Jan 11, 2024
Messages
7
> That would need at least a dual Xeon E5, dual Xeon Scalable v1/v2, single Xeon Scalable v3/v4, or single Epyc system. That gets expensive and power hungry quickly.

Wouldn't Siena Epyc fit that bill? It's just very expensive at the moment.
 

Etorix

Wizard
Joined
Dec 30, 2020
Messages
2,134
"Just" very expensive, not to hit the second hand market for quite some time, and not that "low power" (except by contrast with >400 W server CPUs). Also note that M.2 drives typically power at rest but are limited in capacity while U.2 can be capacitous (even much more than spinning drives) but use significant power even at idle (6-8 W).

It's possible to fit about ten drives with a reasonably priced single-socket LGA3647 or LGA2066 build (there are some well-priced bundles on eBay for the X11SRM-VF, which exposes all 48 lanes, and a matching Xeon W-2135/2145). But this won't be "new", and it's not quite "low power" either.
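For what it's worth, the 48-lane budget works out to about ten drives (reserving a x8 for a NIC or boot device is an assumed layout on my part, not something from the board manual):

```python
# Fitting x4 NVMe drives into the 48 CPU lanes of a single Xeon W-21xx.
# Reserving a x8 for a NIC/boot device is an assumed layout, not from the board manual.

TOTAL_CPU_LANES = 48
LANES_PER_DRIVE = 4
RESERVED_LANES = 8  # assumption: keep a x8 free for networking or boot

max_drives = (TOTAL_CPU_LANES - RESERVED_LANES) // LANES_PER_DRIVE
print(f"{max_drives} drives at x4 each, with x{RESERVED_LANES} to spare")  # -> 10 drives
```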
 