First build, fast, quiet and cool... too many choices, so many unknowns.

serfnoob

Dabbler
Joined
Jan 4, 2022
Messages
23
Hello wonderful people,
I have never "built a PC" and there is so much to learn that I'm not sure I can catch up. Every time I think I have a good combo worked out, I find a post about an issue with some component or driver and start looking for a new combo. Is there a perfect solution? I doubt it, or there would be a sticky at the top saying BUILD THIS.

Now, after lurking the forums and coming across two problems for every one solution I find, I am turning to you for advice, recommendations and wisdom.

What I am after:

1: Fast and big ZFS storage. I'm looking to direct-connect 10GbE (jumbo frames) from the NAS to my main computer, then have the NAS also sit on the router (1GbE or WiFi) for slower operations like backing up the other computers in the house, Macs via Time Machine & a PC. Really it's just fast and big SMB with one real client.

2: Has to be quiet and cool. I don’t have a server room, a cellar or anywhere I can forget about noise and heat. I don't want to hear this thing unless the HDD array is being hit. My house is hot enough (Australia) already without running a small heater in a small room 24/7.

3: I would also like to run Jellyfin, Pi-hole & pfSense, for reasons.

4: Reliability now and, if possible, future-proofing towards a 40GbE NIC when that kind of gear becomes affordable.

I have on hand:
1 x Sonnet 1-Port Solo 10G Ethernet PCIe 3.0 card, currently in the old, soon-to-be-replaced Mac Pro. It can be repurposed if needed.

And, due to a very bad experience with QNAP, the following storage too:
6 x Seagate IronWolf Pro 8TB
2 x Samsung 980 Pro 500GB M.2

With the above in mind I have looked at build sizes from mini-ITX through to E-ATX; sorry if I get my terms all muddled, but I've only been researching the PC world for about 10 days.

I looked at the off-the-shelf offerings from TrueNAS: very expensive here in Oz with no local supplier. After the horrible experience I've just had with QNAP, I would hate to be returning gear internationally.

So I started with components that sort of matched the specs of the Mini XL, and I got super carried away putting combinations together... sorry.

My first 4 builds would use the same base configuration below.

RAM: Micron 32GB DDR4 ECC 2666MHz RDIMM $329
PSU: be quiet! SFX-L Power 500W 80+ $139
Case: SilverStone DS380, 12 drive bays total $249
Cooler: Noctua NH-U9B SE2 $170
Fans: Noctua NF-S12A PWM x 3 $107
$994 AUD + Shipping and all the little extras
Some builds might need an HBA:
OEM LSI 9201-16i 6Gbps 16-port SAS HBA? $286


Build A:
Board: ASMB-260T2-22A1 4 channel 8 Core $1204

Pros: 2 x onboard 10GbE + 1GbE (RJ45s), 8 x SATA, ECC, SFF, low TDP
Cons: only 1 x M.2 so no mirrored system drive? Non-upgradable CPU, PCIe x4 too slow for a future 40GbE?


Build B:
Board: Supermicro A2SDI-H-TF-O $1202 AUD shipped
is this what they use in the Mini XL +?

Pros: 2 x onboard 10GbE + 1GbE (RJ45s), 12 x SATA, ECC, SFF, low TDP
Cons: US board supplier (warranty issues), only 1 x M.2 so no mirrored system drive? Non-upgradable CPU, PCIe x4 too slow for a future 40GbE? 8 of the SATA channels are from a SoC controller that I have read might have issues with TrueNAS?


Build C:
Board: ASRock Rack Intel Xeon D1541 SoC $1865

Pros: 1 x PCIe 3.0 x16 slot, 1 x PCIe 3.0 x8 slot, 2 x M.2, ECC, SFF, low TDP
Cons: 45W CPU, 2 x 10G fiber SFP+ (need adapters), needs an HBA to run enough drives


Build D:
Board: ASRock X470D4U2-2T Micro $765
CPU: AMD Athlon 3000G $129

Total $894

Pros: 2 x onboard 10GbE + 1GbE (RJ45s), two PCIe 3.0 x8 slots, 2 x M.2, ECC, SFF, 35W TDP CPU, future 40GbE NIC possibility?
Cons: US board supplier (warranty issues?), only 2 cores on the CPU, will need an HBA to run enough drives



Build E:
Board: Z490D4U-2L2T $988
CPU: Intel Core i5-10500T $300 AUD approx.
Cooling: be quiet! Dark Rock 4 CPU Cooler $119
RAM: G.Skill 32G (2x16G) F4-3200 $179

Total $1586

Pros: 2 x onboard 10GbE + 1GbE (RJ45s), 1 x PCIe 3.0 x16 + 1 x PCIe 3.0 x8 + 1 x PCIe 3.0 x1, 2 x M.2, SFF, 35W TDP CPU, future 40GbE NIC
Cons: Z490 chipset already stuck at 10th gen CPU? Non-ECC, hard-to-find CPU, will need an HBA to run enough drives


Leaving the micro-ATX idea behind and thinking “big” opens up possibilities but limits desk space :(

Looking at 3 cases

1# be quiet SILENT BASE 802 $219
Pros: Looks easy to build in, USB C, 7 x 3.5 bays & 3 x 2.5 bays + includes ok fans
Cons: need to purchase 4 x be quiet! HDD Cage $19 each to fill it up


2# be quiet! Dark Base Pro 900 v2 $399
Pros: Looks easy to build in, USB C, 7 x 3.5 bays & 2 x 2.5 bays + includes good fans; bonus 2 x 5.25" ODD bays that can pack another 3 x 3.5 via a SilverStone FS303B 2 x 5.25" to 3 x 3.5" cage $109
Cons: need to purchase 4 x be quiet! HDD Cage $19 each to fill it up


3# Fractal Design Define R5 Black ATX $189
Pros: 8 x 3.5 bays & 2 x 2.5 bays; bonus 2 x 5.25" ODD bays that can pack another 3 x 3.5 via a SilverStone FS303B 2 x 5.25" to 3 x 3.5" cage $109
Cons: Only 2 average fans, no USB C



Build F:
Board: Supermicro MBD-C9Z590-CG $1200 AUD approx.
CPU: Intel Core i5-10500T $300 AUD approx.
Cooling: be quiet! Dark Rock 4 CPU Cooler $119
RAM: G.Skill 32G (2x16G) F4-3200 $179

Total $1586

Pros: Z590 chipset, PCIe 4.0, 1 x PCIe x16 + 1 x PCIe x8 + 1 x PCIe 3.0 x1, 3 x M.2?
Cons: 1 x 10GbE Marvell AQC113C, non-ECC, only 4 SATA, hard-to-find CPU, will need an HBA to run enough drives


Build G:
Board: GA-Z590-AORUS-MASTER $299
CPU: Intel Core i5-10500T $300 AUD approx.
Cooling: Noctua NH-U9B SE2 $107
RAM: G.Skill 32G (2x16G) F4-3200 $179

Total $885

Pros: Z590 chipset, PCIe 4.0, 1 x PCIe x16 + 1 x PCIe x8 + 1 x PCIe 3.0 x4, 2 x M.2 PCIe 3.0 + 1 x M.2 PCIe 4.0, WiFi 6, USB C
Cons: 1 x 10GbE Marvell AQC113C, non-ECC, only 6 SATA, hard-to-find CPU, will need an HBA to run enough drives



My noobness knows no bounds! Perhaps as I researched I started to put together more sensible systems, and just maybe the last one, "G", is the best bang for buck with semi future-proofing.
Having no idea what goes into the "building" part: what extra cabling, thermal paste and such might I need to physically assemble this box, as I have really never done this? Are there other bits I need to order at the same time?



Kind regards,
Owen.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680

These are incompatible, unless you're only looking to half-fill the 12-bays or something like that. Please run your choices through:


Failure to properly size your PSU is bad.
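
To give you a feel for the arithmetic, here's a back-of-the-envelope sketch. Every per-component wattage in it is an illustrative assumption, not a measured figure; the sizing guidance is the real reference.

```python
# Back-of-the-envelope PSU headroom check. All per-component wattages are
# assumptions for illustration only -- check the datasheets and the PSU
# sizing guidance before buying anything.

HDD_SPINUP_W    = 30   # assumed worst-case per-drive draw during spin-up (12V + 5V)
HDD_ACTIVE_W    = 10   # assumed per-drive draw once spinning
CPU_PEAK_W      = 65   # assumed package power under load
BOARD_NIC_HBA_W = 50   # assumed board + RDIMMs + 10G NIC + HBA
FANS_MISC_W     = 15   # assumed fans, boot SSDs, odds and ends

for drives in (6, 12):  # the six IronWolfs on hand vs. a fully loaded DS380
    spinup = drives * HDD_SPINUP_W + CPU_PEAK_W + BOARD_NIC_HBA_W + FANS_MISC_W
    steady = drives * HDD_ACTIVE_W + CPU_PEAK_W + BOARD_NIC_HBA_W + FANS_MISC_W
    print(f"{drives:2d} drives: spin-up peak ~{spinup}W, steady ~{steady}W (vs. a 500W unit)")
```

With those assumed numbers, a half-filled case leaves the 500W unit some headroom, but a fully loaded one puts the spin-up peak right at its rating. That's the problem.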

2 x onboard 10GbE + 1GbE (RJ45s)

You have mislabeled this "Pro". In general, 10GBase-T is a "Con". The Intel X550 series copper stuff may be workable, most other copper isn't. The network switches for copper are pricey and noisy because they burn watts like energy is free. The good choices for 10G are based on Intel and Chelsio SFP+, with a few honorable mentions such as Solarflare and Mellanox cards. See the 10 Gig Networking Primer.


Is there a perfect solution? I doubt it, or there would be a sticky at the top saying BUILD THIS.

Huh. Yeah, you'd think someone would have written that, eh. Start out with


This post talks about the generalities of it all, which you should read if you want to understand the WHY. It also has a link in it to the current Hardware Recommendations guide; there's also the Quick Hardware Guide. We strongly recommend going with a "server grade" board for reasons outlined in my original Hardware Suggestions post. There's also some post rattling around that talks about specific things to look for when shopping for prebuilt eBay used servers. I don't have a link handy but it exists.

Cons: Z490 chipset already stuck at 10th gen CPU?

Reading in between the lines, you seem to be a little concerned about slightly older hardware? Really, the biggest thing that would have me guide someone away from Sandy and Ivy Bridge based E3 systems (a full decade plus old) is that they're limited to 32GB RAM. Even old systems hit the point of sufficiency for basic fileserving. It is totally possible to burn lots of cash getting the "latest and greatest", from which you will see approximately zero benefit.
 

Etorix

Wizard
Joined
Dec 30, 2020
Messages
2,134
The build cannot be quieter than six spinning hard drives.

The DS-380 is horrible to work with and will have trouble keeping drives cool (especially if it gets hot in your part of Australia…). If six drives is all you'll ever need, the Fractal Design Node 304 and a mini-ITX board with everything included will do it fine—but that's a significant "if" and a hard constraint on the motherboard.

@jgreco doesn't like copper 10G Base-T; he's certainly right in that, but for a home use case, with just one client on 10G, I'd argue that an onboard Intel X550 NIC in the NAS and an Asus XG-U2008 switch (2*10G + 8*1G, unmanaged, passive and silent) will do it. Forget about 40G—the HDDs will not even saturate 10G.
The Presto Solo 10G is based on the AQC107 and is not appropriate for TrueNAS; it will be fine for the client workstation.

Mirroring the boot drive is of little use for home.

Build B is absolutely fine (see my signature), including the SoC. Don't forget a Noctua NF-A6x25 on the CPU to keep it cool: the passive heatsink assumes server-type airflow, which it doesn't get in a consumer case. Motherboard A is more of an unknown, but if ASMB does it right it should be an equivalent alternative to Supermicro.
The ASRock Rack D1541D4U-2O8R (C) has an onboard LSI HBA; you only need the breakout cables if they are not supplied. It's micro-ATX, so it would not fit in the DS-380 anyway ;) But it's quite expensive. I suppose it's hard to find a second-hand Xeon D-1500 motherboard (Supermicro X10SDV models) down under… (It already takes some patience in Europe.)
(D) is also micro-ATX. It has 6 SATA ports, so it needs no HBA, but you'd want a slightly beefier CPU (Ryzen 3000?) and ECC UDIMM, not RDIMM.
Forget Z490 and Z590 (E-G) for a NAS. The use case doesn't need the latest and greatest, and these consumer platforms do not support ECC. If you can still find a C242/C246 motherboard and a matching Core i3-8000/9000, the platform would be fine with ECC UDIMM.

There's no perfect solution, but I hope the above splattering of comments can help narrow down what's fine for you.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
doesn't like copper 10G Base-T; he's certainly right in that, but for a home use case,

Okay so I just gotta take the bait here and troll you right back, especially when you suggest a phenomenally overpriced, underfeatured, dumb, COPPER switch from a company not known for switchgear. You're seriously going to suggest the
Asus XG-U2008
the famous 5 year old pile of poo from Asus that managed to rack up a whole two reviews on NewEgg, both one-star?

Ditch the copper and go get something like a MikroTik CSS610-8G-2S-IN for only a hundred bucks. You are literally better off getting the MikroTik, a managed switch, inexpensive, competent switching features like VLAN's, and hell, even if you MUST go copper, get the MikroTik anyways, put a copper SFP+ in it, and you STILL get a better solution than that silver Asus PoS.

I hope @Etorix can take this in good humor. ;-) ;-)

I don't like copper 10Gbase-T because it's a fundamentally stupid technology that hasn't seen the uptake we were all hoping for. The industry is now sabotaging it further with incompatible 2.5G and 5G incremental improvements in a pathetic bid to return to the glory days we had 20 years ago where 1G enterprise gear sold for good money. That's mostly driven by WiFi6 AP issues. As far as I can see, 10Gbase-T is a dead-end technology. @Spearfoot actually made me think about that and concede that point over in this thread, see post #6. I had basically been playing a waiting game for many years but there's a point at which something obviously just isn't going to happen.

So you have to ask yourself, is investing in a dead end technology really a good idea? Or should you just go with the less expensive and technically better option?
 

Etorix

Wizard
Joined
Dec 30, 2020
Messages
2,134
I hope @Etorix can take this in good humor. ;-) ;-)
I certainly do, especially if I get to learn some insights as a result (and hopefully the original poster as well).

I don't like copper 10Gbase-T because it's a fundamentally stupid technology that hasn't seen the uptake we were all hoping for. The industry is now sabotaging it further with incompatible 2.5G and 5G incremental improvements in a pathetic bid to return to the glory days we had 20 years ago where 1G enterprise gear sold for good money. That's mostly driven by WiFi6 AP issues.
Thanks to your earlier posts, I'm aware that 10GBase-T is as far as copper wires can go—and perhaps a little too far. That it failed to become mainstream. That Nbase-T is a poor attempt to go faster than 1G on copper without scaring consumers away. (Incidentally, the most obvious achievement of the 2.5G NICs that are invading the latest ranges of consumer motherboards is that they make Aquantia 10G NICs look high end by comparison. Should that be regarded as a success for 10GbE, only about ten years too late, or as a further disaster?)

As far as I can see, 10Gbase-T is a dead-end technology. @Spearfoot actually made me think about that and concede that point over in this thread, see post #6. I had basically been playing a waiting game for many years but there's a point at which something obviously just isn't going to happen.

So you have to ask yourself, is investing in a dead end technology really a good idea? Or should you just go with the less expensive and technically better option?
A dead end, 10Gbase-T certainly is. But it is, almost, drop-in compatible with legacy Gigabit Ethernet: Just throw in a compatible cheap switch and a pair of better cables where it matters. On-board SFP+ is still a rarity, even with server motherboards.

OP's inquiry is about a home network with one NAS, one privileged client on 10G and the rest on legacy Gigabit (possibly with some appliances not really doing better than 100M, as I see with my satellite STB). For this setting, I do think that a dead-end upgrade to a legacy infrastructure is a reasonable option. And I'm not convinced that going optical is (a) less expensive and (b) worth the hassle.

Okay so I just gotta take the bait here and troll you right back, especially when you suggest a phenomenally overpriced, underfeatured, dumb, COPPER switch from a company not known for switchgear. You're seriously going to suggest the

the famous 5 year old pile of poo from Asus that managed to rack up a whole two reviews on NewEgg, both one-star?
Anecdotal report for anecdotal report, my XG-U2008 has still not blown any port and does the advertised job of silently passing packets at faster-than-1GbE-speed between two privileged devices through 1 metre copper cables. So, yes, I was suggesting the device.
It might not work for 30m, or 100m, runs. But on or around a desk it does the job—I even suspect that Cat7 cables are overkill in this setting.

At €202 new in my local market (VAT included), I was not sure what it is overpriced compared to. Definitely not rackmount 10GbE switches. (I do not challenge that Asus is not a reputable supplier of data centre switches. But that's not the market here.) There are not many 10GbE switches with consumer-friendly price tags. The QNAP M408 switches are more capable (manageable, possible SFP+/Base-T combo ports), but more expensive (and OP apparently had a bad experience with a QNAP NAS).

Ditch the copper and go get something like a MikroTik CSS610-8G-2S-IN for only a hundred bucks. You are literally better off getting the MikroTik, a managed switch, inexpensive, competent switching features like VLAN's, and hell, even if you MUST go copper, get the MikroTik anyways, put a copper SFP+ in it, and you STILL get a better solution than that silver Asus PoS.
Lovely tip, thanks! At €90 it's even cheaper than the CRS305-1G-4S+IN I have (on a ServeTheHome.com recommendation), and better suited to the use case.
Still, it only makes sense if at least one device has SFP+, not if both are Base-T.

Sub-question: How do you rate DAC for SFP+ with respect to 10G over copper?
I already know from your previous posts that both are "crap". But is one even crappier than the other? Because a user who's not a network engineer and looks for cables for a pair of SFP+ ports is bound to end up either with a DAC cable (crap, but ready-to-use) or with some optical fibre without the proper termination (worse than crap if one can't use it at all…).
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
I get to learn some insights as a result (and hopefully the original poster as well).

Even I learn stuff sometimes. It's always a good day when I do!

Sub-question: How do you rate DAC for SFP+ with respect to 10G over copper?
I already know from your previous posts that both are "crap". But is one even crappier than the other? Because a user who's not a network engineer and looks for cables for a pair of SFP+ ports is bound to end up either with a DAC cable (crap, but ready-to-use) or with some optical fibre without the proper termination (worse than crap if one can't use it at all…).

DAC is lovely for limited use cases.

1) It is fixed-length.

I find my racks are already busy and like to make (for copper RJ45) or buy (for fiber) to-the-centimeter custom cabling to reduce the amount of slack. Vendor-supplied DAC cables are very expensive and typically available only in meter increments; some companies like FS.COM will sell somewhat better lengths. Once you invest $80 on a Dell 1M SFP+ DAC cable, and you discover you want to reorganize your rack and need a longer one, that's a hell of a disincentive to make changes. By way of comparison, a 1M fiber is only a few bucks. You do have to buy the SFP+'s for fiber of course. Prices are better if you can go with used/generic though.

2) It is vendor-coded the same on each end.

This is a problem if you want to go from a vendor-locked card to a vendor-locked switch. When you buy the proper SFP+ optics and stick them in your gear, and you do this as a "permanent addition" to the card, you guarantee yourself the ability to use that card with ANY other device you wish.

3) Damaged DAC cables are a total loss.

Sometimes bad stuff happens in racks. A crimped or bent fiber is likely to break, and to be a total loss. But at only a handful of dollars, who gives a damn. Just grab another and go. However, a damaged DAC cable is ALSO a total loss, and you are less likely to have the correct spare DAC cable on hand. I keep a kit of spares for every SFP+ optic in use at each data center, which all fits in a nice little 3D printed box. This and two or three spare fiber is sufficient for any cabling emergency. What's the sparing situation like if you have vendor-locked DAC cable requirements? How much does it cost to maintain complete sparing?

4) DAC limitations

Passive DAC is only really available at "patch cable" lengths. Both passive and active DAC cables have to run from an endpoint to another endpoint. Unlike fiber, you cannot have patch panels or significant distance. This makes stuff like runs between racks rather dicey.

5) DAC strengths

In general, 10G SFP+ optics are dirt cheap, either on the used market, or new generics. This is largely a function of their relative maturity, and that they are not in as high a demand as they once were, with the advent of 25G/40G/100G. However, passive DAC is likely to be less expensive than optics for speeds greater than 10G. It is worth pricing both out.

6) Latency and power issues

In theory, less translation circuitry should favor passive copper DAC slightly, and also lower power consumption. This is probably negated with active DAC.

So to answer your question:

10GBASE-T is crap because it is a crap technology, as outlined in previous posts.

10G DAC is roughly equivalent to 10G SFP+ from a technological point of view, with some constraints.

10G DAC is often less practical than 10G SFP+ from an ease-of-use point of view.

DAC can occasionally make sense. It depends.
 

serfnoob

Dabbler
Joined
Jan 4, 2022
Messages
23
Thanks for the input @jgreco & @Etorix.

I get to learn some insights as a result (and hopefully the original poster as well).
Even I learn stuff sometimes. It's always a good day when I do!
It seems I get to have the most fun, as I'm learning more than I can/want to every day.

Forget Z490 and Z590 (E-G) for a NAS. The use case doesn't need the latest and greatest, and these consumer platforms do not support ECC
I knew ECC was preferred, but now, reading more on here, I'm not going to fight for non-ECC.

You have mislabeled this "Pro". In general, 10GBase-T is a "Con". The Intel X550 series copper stuff may be workable, most other copper isn't. The network switches for copper are pricey and noisy because they burn watts like energy is free. The good choices for 10G are based on Intel and Chelsio SFP+, with a few honorable mentions such as Solarflare and Mellanox cards. See the 10 Gig Networking Primer.
Up until now I have been basing my build around RJ45 because I thought I could forgo a switch and directly attach one client (the edit suite, which already has 10GbE RJ45) to the TrueNAS box in its own IP range (jumbo frames for video), and also connect the NAS to the LAN via 1GbE or WiFi with a different IP range for the other computers that don't need the speed.

Is this possible and practical with TrueNAS?…

I don't like copper 10Gbase-T because it's a fundamentally stupid technology that hasn't seen the uptake we were all hoping for
SFP+ has many advantages, but I'm not sure any of them would be realised in my setup.
I only need a single 10GbE connection with a max run of 10 metres, and thought I might not need a switch for the 10GbE as I mentioned above: direct connect. My house is old and already built, so running fibre through it is not an option.
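
To make sure I'm describing the plan properly, here's how I picture the addressing. The ranges are made-up examples, and I've only used Python's ipaddress module to check that the two networks don't overlap:

```python
# Sketch of the two-network plan, with made-up example subnets.
# The 10G link is a point-to-point cable to the edit suite (jumbo frames);
# the 1G link goes to the house router, and only that side gets a gateway.
import ipaddress

direct_10g = ipaddress.ip_network("10.10.10.0/30")   # assumed point-to-point range
house_lan  = ipaddress.ip_network("192.168.1.0/24")  # assumed router LAN

plan = {
    "NAS 10G port (MTU 9000)":  ipaddress.ip_interface("10.10.10.1/30"),
    "Edit suite 10G port":      ipaddress.ip_interface("10.10.10.2/30"),
    "NAS 1G port (default gw)": ipaddress.ip_interface("192.168.1.10/24"),
}

# The only hard requirement: the two subnets must not overlap, so the NAS
# always knows which interface to answer on.
assert not direct_10g.overlaps(house_lan)
for name, iface in plan.items():
    print(f"{name:28s} {iface}")
```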

Reading in between the lines, you seem to be a little concerned about slightly older hardware? Really, the biggest thing that would have me guide someone away from Sandy and Ivy Bridge based E3 systems (a full decade plus old) is that they're limited to 32GB RAM. Even old systems hit the point of sufficiency for basic fileserving. It is totally possible to burn lots of cash getting the "latest and greatest", from which you will see approximately zero benefit.
I'm not against old tech at all, I was just trying to put a low-power system together with parts that ARE available… not what might pop up on eBay. If I didn't mind running my own coal-fired power plant I could turn my old Mac Pro with dual X5690s into a server...

The QNAP M408 switches are more capable
From memory, I don't think these do jumbo frames.

Forget about 40G—the HDDs will not even saturate 10G.
I would never manage to saturate a 40GbE connection with the number of spinning disks I can house. A small pool of M.2 drives would get close?
40GbE is a down-the-road idea… just trying to build once with the future in mind.

Build B is absolutely fine (see my signature)
It seems to tick all the boxes except the two PCIe slots.



Maybe it's easier to work this from another angle. What are the best available board/CPU options for low TDP and onboard 10GbE (RJ45 or SFP+), with PCIe 3.0 x8 & x4 slots available, plus 2 x M.2?

Another round of searching has left me with the options below.

ASRock Rack C3758D4U-2TP: I think this is almost perfect, a bit tight on PCIe lanes (US supplier though) $1111
ASRock Rack D1541D4U-2O8R: ticks all the boxes but a little heavy on power consumption & price $1865
Needs
SFP+ to RJ45: Intel E10GSFPT-compatible 10GBASE-T SFP+ copper RJ-45 30m module $93
or
ASRock Rack D1541D4I-2L2T: not enough PCIe slots unless you can bifurcate? $1782


Thanks again for all your help.
I have the funds now and would like to buy before the supply chain makes this impossible...
 

Etorix

Wizard
Joined
Dec 30, 2020
Messages
2,134
A NAS CPU stays idle most of the time, only sustaining 100% load when doing a scrub. So "low TDP" probably does not matter as much as you think.

You do not appear to have a use case for an L2ARC and/or a SLOG. 2 x M.2 for a two-way mirror pool to host VMs is fine. Mirrored M.2 boot drives are overkill for a home NAS.
With an onboard 10GbE NIC and enough SATA ports, I'm unsure about the requirement for two PCIe slots.
Pending careful checks about lane sharing, or the Marvell 9172 controller of the C3758D4U-2TP, all these embedded boards look fine. It's down to supply chain and/or warranty issues—and which features you really need.
 

mihies

Dabbler
Joined
Jan 6, 2022
Messages
32
It'd be really nice if there were CPU idle power consumption specs listed alongside the TDP.
 

serfnoob

Dabbler
Joined
Jan 4, 2022
Messages
23
Even when my 5,1 Mac Pro with dual X5690s sleeps, it still raises the ambient temp in that room a few degrees C; sitting idle, even more… slapping the Jesus out of it rendering in C4D or AE turns it into a blast furnace.
I’ve read lots of threads about the money saved via your power bill with low TDP CPUs being negated by the much lower initial cost of regular CPUs.
The heat though, the heat makes the fans work and the fans make noise.

I want quiet, cool, fast and future-proof. I'm happy to spend now so I don't have to do this for at least another 5 years :)

The last board I'll suggest is very hard to find from retailers:

D1541D4U-2T8R

Looks like it will do it all.

Any red flags with this setup?
 

Etorix

Wizard
Joined
Dec 30, 2020
Messages
2,134
No red flags from me. 45W is "low TDP" as I understand it, but note that the LSI HBA and 10G NIC under their passive heatsinks are hot spots. Pay attention to airflow when putting this motherboard in a consumer case.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
It'd be really nice if there were CPU idle power consumption specs listed alongside the TDP.
I’ve read lots of threads about the money saved via your power bill with low TDP CPUs being negated by the much lower initial cost of regular CPUs.

Just so you guys are aware, low TDP CPU's have the same watt-burn at idle as the non low-TDP versions. They are not energy saving parts. They are simply designed for environments where you cannot dissipate large amounts of heat, such as microservers, embedded applications, etc.

Low TDP CPU's tend to burn more energy and watts in many cases, because instead of a quick burst of energy resulting in a large amount of work done quickly, the low TDP part has to run at its max capabilities for a much longer period of time in order to complete the workload.

If you have a very bursty workload that isn't consuming lots of CPU, then low TDP and normal parts perform the same and burn the same power. That's your typical NAS use case.
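
Here's some crude arithmetic with made-up numbers, purely to illustrate the point: a NAS spends almost the whole day idle, so the idle term swamps whatever small difference the busy term makes.

```python
# Toy arithmetic for the "mostly idle NAS" case. All numbers are made up
# for illustration -- they are not measurements of any real CPU.

IDLE_W   = 20.0   # assumed: both parts idle at roughly the same draw
WINDOW_H = 24.0   # one day of wall-clock time
BUSY_H   = 0.5    # assumed: ~30 min/day of real CPU work for the fast part

parts = {
    "65W 'normal' part": 65.0,   # assumed package power while busy
    "35W low-TDP part":  35.0,   # same idea, capped clocks
}

for name, busy_w in parts.items():
    # Crude assumption: the same work takes proportionally longer at lower
    # package power, so the busy-energy term comes out identical by design.
    # The point is that the idle term dominates either way.
    busy_h = BUSY_H * (65.0 / busy_w)
    idle_h = WINDOW_H - busy_h
    wh_per_day = busy_w * busy_h + IDLE_W * idle_h
    print(f"{name}: busy {busy_h * 60:.0f} min/day, ~{wh_per_day:.0f} Wh/day")
```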
 

mihies

Dabbler
Joined
Jan 6, 2022
Messages
32
Yeah, I guess it all boils down to how those CPUs throttle down when idle, and lower TDP might not be relevant. Hence it'd be nice to have that info. Not sure if the theory of low-TDP parts consuming more energy because they are slower is true, though. I guess it all boils down to their energy efficiency, which could very well be better with higher TDPs, but I assume it is mostly related to the CPU generation and the process node they were built on.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Not sure if the theory of low-TDP parts consuming more energy because they are slower is true, though.

Well of course you're not sure. If you were sure, we wouldn't be having this discussion.

Some people argue for lower TDP CPU's on the basis that they "use less power".

However, the CPU cores between a low TDP CPU and a full blast CPU primarily differ in the clock regulation, and so if they end up idling at the same amount of power consumption, you get an unexpected result: the low TDP CPU may have to run for a much longer time at its crippled clock speed to match the same amount of work that the normal CPU might have gotten done in half the time. The normal CPU guns the engine and burns somewhat more power for a much shorter period of time before returning to idle, quite often saving power in the process.

The lower TDP parts never "use less power", except in one special case: if you have an unbounded workload, such as, let's say, computing SETI@Home blocks, where the CPU will just keep going forever regardless of what you do. This result is only because the low-TDP CPU ends up doing less work. Less work, less power consumed.

The rest of the time, the best win available on CPU's has been to let the CPU ramp up and complete the workload quickly. Running a high intensity workload on a low performance core tends to burn more energy.

If you hadn't noticed recent trends, Intel Alder Lake and Apple's M1 have both introduced Performance and Efficiency cores, which reflects that reality. Instead of just stuffing the die with a bunch of efficiency cores and hoping for the best, what you really want is to be able to turn off the performance cores when you don't need them, and shuffle sleepy tasks over onto efficiency cores.

Historically, that wasn't possible, and it wasn't always sufficient to trust the OS to do the throttling, so TDP-limited CPU's have been a thing.
 

mihies

Dabbler
Joined
Jan 6, 2022
Messages
32
Well, do you have any data to prove the point? I'm curious, not saying it isn't true.
You can think of it like this as well: with higher CPU clocks, power consumption is not linear; it climbs more steeply the higher the frequency goes. If you have a specific job, perhaps a low-power CPU might consume less energy due to that fact. Yes, the other one would do it faster, but it would also consume more energy in doing so.
So, if you have two identical CPUs which differ only by boost frequencies, I'd say the slower one still wins overall on power consumption. I could very well be wrong on this :smile:
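
To make my hand-waving concrete, this is the toy model I have in mind: standard dynamic-power scaling with made-up constants, so purely illustrative. Interestingly, once you add a fixed platform draw for as long as the job runs, the energy per job has a sweet spot rather than "slower always wins":

```python
# Toy DVFS model: textbook scaling with made-up constants. Dynamic power
# goes roughly as C*V^2*f with V tracking f, i.e. ~f^3; a fixed job takes
# time ~1/f; and the platform (uncore, RAM, VRMs, fans) burns roughly
# constant power for as long as the job runs.

K_DYN      = 8.0     # assumed dynamic-power constant (W at normalized f = 1.0)
P_PLATFORM = 30.0    # assumed constant platform draw while busy (W)
WORK       = 100.0   # arbitrary units of work

for f in (0.4, 0.8, 1.2, 1.6):   # normalized clock frequency
    seconds = WORK / f
    joules  = (K_DYN * f**3 + P_PLATFORM) * seconds
    print(f"f = {f:.1f}: {seconds:6.1f} s per job, {joules:7.0f} J per job")
```

Where that sweet spot lands depends entirely on the constants, which I guess is where real measurements come in.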
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Well, do you have any data to prove the point?

Yes, we sell and refurbish server gear here, and have observed such behaviour when comparing CPU performance. It's a bit tricky because you need to measure total power consumption over a period of time during which a given workload is presented for processing. The results aren't entirely consistent, and the factors are a bit complicated. I've been balancing TDP since the days of the 486; the modern era started back in the mid 2000's, I'd say, and back then we had bought some Opteron 240EE's (30W TDP) in order to avoid setting the server room on fire. I've also had a bunch of opportunities to play with these in more recent gear. Overall I'm not impressed. There's a technical argument for TDP engineering and it has to do with the ability to dissipate heat, often passively.

If you're looking for me to go and produce a graph to prove my point, you're out of luck. We do that kind of work for real money, and generally I don't have to prove my points to clients who've come here for engineering advice anyways.

Yes, the other one would do it faster, but it would also consume more energy in doing so.

This doesn't seem to apply cleanly to reality, however. I've seen numerous cases where you get near-max power consumption out of a CPU by running full load on only some cores. You can get really screwy things to happen if you look for the pain points, and that's definitely worth doing --- it's also easier to measure! --- but measuring actual power consumption for a real workload over a given period of time tends to favor the faster CPU.

I'd say the slower one still wins overall on power consumption.

Even were that to be true, though, do you want to pay the premium price for such a CPU, and are you willing to live forevermore with the hard-capped speed? Wouldn't it be better to just turn off some of the turbo settings in BIOS or cap the speed from the FreeNAS command line? Same result, cheaper price, more flexible in the future if you decide you made a mistake.
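
For the command-line route, something along these lines is what I mean. It's only a sketch: it assumes a FreeBSD-based TrueNAS CORE box where the cpufreq(4) dev.cpu sysctls are exposed, the exact knobs vary by platform, powerd(8) can override it, and the BIOS turbo/power-limit settings remain the cleaner place to do this.

```python
# Sketch of capping the CPU clock from the command line on a FreeBSD-based
# TrueNAS CORE box. Assumes cpufreq(4) exposes dev.cpu.0.freq and
# dev.cpu.0.freq_levels; treat as illustration, not a recipe.
import subprocess

def sysctl(name: str) -> str:
    out = subprocess.run(["sysctl", "-n", name],
                         capture_output=True, text=True, check=True)
    return out.stdout.strip()

# Available steps look like "3400/65000 2800/48000 1600/21000 ...":
# frequency in MHz, then an estimated power figure.
levels = sysctl("dev.cpu.0.freq_levels")
steps = [int(entry.split("/")[0]) for entry in levels.split()]
print("Available MHz steps:", steps)

# Pick the lowest step that is still at or above an arbitrary 1600 MHz cap...
cap = min(step for step in steps if step >= 1600)

# ...and apply it (needs root; powerd will fight this unless tuned/disabled).
subprocess.run(["sysctl", f"dev.cpu.0.freq={cap}"], check=True)
print(f"Clock capped at ~{cap} MHz")
```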
 

mihies

Dabbler
Joined
Jan 6, 2022
Messages
32
OK, so you are speaking from experience, fair enough. Assuming that there is really no significant difference in idle states (and that's probably mostly true unless the CPU has some special cores like the M1 mentioned above), there are really no compelling reasons to invest in low TDP, I agree.
I still wish for idle power consumption figures in CPU specs, though. :)
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
idle power consumption figures in CPU specs, though.

That's a great point. I can tell you that this is not a constant; I know our E5-2697v2's idle at a higher watt burn than our E5-2620v2's, for example. I had some really rather detailed notes on this at one point because I had an opportunity to do some significant side-by-side comparisons of several different CPU's, but I'm not sure where those notes are buried away. I've talked about this before;



so you can maybe get some vague sense of the baseline idle differences between CPU's. Anyways, yes, I feel your pain, it'd be nice for this to be documented somewhere. Unfortunately, it would be extremely costly for anyone but Intel to undertake such a project.
 

serfnoob

Dabbler
Joined
Jan 4, 2022
Messages
23
I feel like I'm hijacking my own thread here, but I'm still trying to build my very quiet, very cool, 10GbE NAS.
In the last few weeks parts have just disappeared; the shelves are going bare here in Oz.
So plans are changing.

I have found a local supplier of the
ASRock Rack D1541D4U-2T8R uATX server motherboard $1900
8 cores / 16 threads, 2.1GHz - 2.7GHz, 45W
2 x PCIe 3.0 x8, 4 x 288-pin ECC DIMM slots, 6 x onboard SATA
8 x SATA via LSI 3008
2 x M.2 via Marvell 9172
2 x 10GbE RJ45 via Intel X540
Do the above three controllers all work with TrueNAS, even if they run a bit hot?

This board seems to have it all, except onboard GFX... how do I install without a graphics card?


Fractal Design Define R5 $199.00
up to 8/11 x 3.5 + 2 x 2.5
Short on fan locations


be quiet! Dark Rock 4 Cooler $119.00
The specs say it's very quiet.
Can it be fitted to the D1541?

Noctua 140mm NF-P14S Redux Edition x 3 $39.00/each
The case has fans already, but I want this as quiet as possible.


Micron 16GB DDR4 ECC 2666MHz RDIMM x 2 $199.00/each
There seem to be 3 different versions of this RAM.
Is this compatible and a good option?



be quiet! Pure Power 11 Gold Modular 550W $119.00
Quiet, fully modular and just a little more than the build needs.
or the
be quiet! Pure Power 11 Gold Modular 750W $169
to be safe, but then would the oversized unit just sit low on its efficiency curve? (rough load numbers below the total)


Total $2860 ~ with shipping
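
Trying to sanity-check the PSU question myself with very rough guesses (both draw figures are my assumptions, not measurements):

```python
# Rough guess at where this build would sit on each PSU's load curve.
STEADY_W = 160   # assumed: six spinning drives + D-1541 board + fans
SPINUP_W = 320   # assumed: brief peak while all six drives spin up

for psu_w in (550, 750):
    print(f"{psu_w}W unit: steady ~{STEADY_W / psu_w:.0%} load, "
          f"spin-up peak ~{SPINUP_W / psu_w:.0%} load")
```

If I've got that roughly right, either unit sits in a band where a Gold-rated supply should stay near its best efficiency, so it seems to come down to price and spin-up headroom.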

About $2000 less than a weaker off-the-shelf QNAP with only 8GB RAM and no 10GbE.
Will this be much quieter and cooler?


This is all still working under the assumption that I can direct-attach the edit suite to the NAS via 10GbE
and connect the NAS to the router via 1GbE for the less important computers.
I'm not investing in any old copper switches here.

Wouldn't it be better to just turn off some of the turbo settings in BIOS or cap the speed from the FreeNAS command line? Same result

I don't know how to do such magic...
Would love to get your ideas on a build for my requirements doing things your way, as you clearly have decades more experience than me.
I'm a month in and only feeling more confused about sockets, chipsets and BIOS.

Seems there should be a way to do this with an i3 and save coin... but I don't know how to turn down clocks and lower the heat produced.


Thanks again for all the information and patience.

Kind regards,
Owen.
 