suggestions for a low power system to replace that old dual xeon macpro

John Doe

Guru
Joined
Aug 16, 2011
Messages
635
While you two are still figuring out who has the longest, may I drop a white paper issued by Intel into the ring of us 40+ year olds?


Formula to calculate TDP:

P = C·V²·f

where C is capacitance, f is frequency, and V is voltage.
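
As a quick sanity check of the formula, here is a minimal sketch (the capacitance figure is made up for illustration, not a datasheet value for any real chip):

```python
# Dynamic (switching) power: P = C * V^2 * f.
# The capacitance below is an invented illustrative value.

def dynamic_power(capacitance_f, voltage_v, frequency_hz):
    """Dynamic switching power in watts."""
    return capacitance_f * voltage_v ** 2 * frequency_hz

p_full = dynamic_power(1e-9, 1.2, 3.0e9)  # 1 nF switched at 1.2 V, 3 GHz
p_half = dynamic_power(1e-9, 0.6, 3.0e9)  # same clock, half the voltage

# Voltage enters squared, so halving V quarters the power:
print(round(p_full / p_half, 6))  # -> 4.0
```

This is why voltage/frequency scaling is such an effective power lever: the quadratic voltage term dominates.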


"...The Intel power specification for components is known as Thermal Design Power, or TDP. The TDP value, along with the maximum junction temperature, defines Intel’s recommended design point for thermal solution capability. ..."

Source (page 5):


So what do we see here: the main inputs to TDP are voltage and frequency. Intel stipulates very clearly that it is for thermal solution capability.
If we could differentiate this from actual work done, i.e. bits computed, that would be a very interesting story and worth digging into.
 

Constantin

Vampire Pig
Joined
May 19, 2017
Messages
1,829
Coming back to the OP, I still happen to think the mini-ITX or Flex ATX version of the D-1508 is the one to get. I'd run the VM on an SSD, do everything else on spinners, and call it a night.

However, if you need a lot of reading material to make the decision-making process more convoluted (i.e. integrate even more data), then head over to ServeTheHome, a great web site on SOHO servers with a lot of actual power data. (Head for the low power server section.)

That's where I realized how small the power consumption delta between the Atom C3xxx and D-15xx CPUs I was considering really is. With 2.5" spinners, I doubt the CPU will ever be your bottleneck. It's going to be disk I/O, esp. if you run the VM on a decent SSD (or two in a mirror, depending on how mission-critical the thing is).

The larger Flex ATX board would allow you to fit a bifurcated riser card (like the AOC-SLG3-2M2), host the VM on two NVMe M.2 cards at PCIe 3.0 x4 speeds, and still have a spare PCIe 3.0 x8 slot left over.

But if the VM can run on a single NVMe SSD and you don't need a SLOG, then the Flex ATX board already has an M.2 NVMe slot at PCIe 3.0 x4 built in. That's where I run my SLOG at the moment, and the other mSATA slot holds an SSD for my metadata L2ARC.

This flexibility is why I prefer the Flex ATX board, even though finding a case for it is more difficult. I currently have everything running off just the board, no PCIe cards (yet). But I'm ready in case that ever comes up.
 

hotdog

Dabbler
Joined
Apr 14, 2014
Messages
44
No; 10GBase-T has some backward compatibility, with caveats.

1. A minimum of Cat 6A cable. A quality Cat 6 cable should work at shorter distances, but it's better to go with Cat 6A. Cat 5 is not recommended.
2. Check your end device. While gigabit networks are often compatible with 10/100 devices, that is not always the case with 10GBase-T. 10/100/1000 devices should be fine; the problem may be with a 10/100-only device.
By "end device" do you mean the router/switch or the workstations?
 

hotdog

Dabbler
Joined
Apr 14, 2014
Messages
44
Coming back to the OP, I still happen to think the mini-ITX or Flex ATX version of the D-1508 is the one to get. I'd run the VM on an SSD, do everything else on spinners, and call it a night.

However, if you need a lot of reading material to make the decision-making process more convoluted (i.e. integrate even more data), then head over to ServeTheHome, a great web site on SOHO servers with a lot of actual power data. (Head for the low power server section.)

That's where I realized how small the power consumption delta between the Atom C3xxx and D-15xx CPUs I was considering really is. With 2.5" spinners, I doubt the CPU will ever be your bottleneck. It's going to be disk I/O, esp. if you run the VM on a decent SSD (or two in a mirror, depending on how mission-critical the thing is).

The larger Flex ATX board would allow you to fit a bifurcated riser card (like the AOC-SLG3-2M2), host the VM on two NVMe M.2 cards at PCIe 3.0 x4 speeds, and still have a spare PCIe 3.0 x8 slot left over.

But if the VM can run on a single NVMe SSD and you don't need a SLOG, then the Flex ATX board already has an M.2 NVMe slot at PCIe 3.0 x4 built in. That's where I run my SLOG at the moment, and the other mSATA slot holds an SSD for my metadata L2ARC.

This flexibility is why I prefer the Flex ATX board, even though finding a case for it is more difficult. I currently have everything running off just the board, no PCIe cards (yet). But I'm ready in case that ever comes up.

2 questions:
- Flex ATX is a subset of micro ATX, so any mATX case with good airflow should work? I am thinking of the Node 804.
- Given I will use mirrored SATA DOMs for the system, the VMs can't live there, too?

And thanks again to all the experts here, including the 40+ ones! I learned a lot in a short time because you spent yours.
 

Constantin

Vampire Pig
Joined
May 19, 2017
Messages
1,829
Verify the screw locations. IIRC, mATX is the more common designation, and any Flex ATX board should fit in an mATX-capable case.

Call me overly cautious, but I'd use the SATA DOM slots just for booting. Proper SATA DOM drives are very expensive relative to how much they store; you're much better off using the mSATA or NVMe slot for a less expensive stick to hold the VM data set.

I would also not want to deal with the complexity of a boot volume going kaplooie because the VM expanded too much or a system upgrade ate all the free space. That's why I use 64GB SATA DOMs and they're only 25% full.
 
Last edited:

hotdog

Dabbler
Joined
Apr 14, 2014
Messages
44
Verify the screw locations. IIRC, mATX is the more common designation, and any Flex ATX board should fit in an mATX-capable case.

Call me overly cautious, but I'd use the SATA DOM slots just for booting. Proper SATA DOM drives are very expensive relative to how much they store; you're much better off using the mSATA or NVMe slot for a less expensive stick to hold the VM data set.
The board seems to fit:
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
While you two are still figuring out who has the longest

I was thinking more along the lines of fullest...

, may I drop a white paper issued by Intel into the ring of us 40+ year olds?

Please! :smile:


Formula to calculate TDP:

P = C·V²·f

where C is capacitance, f is frequency, and V is voltage.


"...The Intel power specification for components is known as Thermal Design Power, or TDP. The TDP value, along with the maximum junction temperature, defines Intel’s recommended design point for thermal solution capability. ..."

Source (page 5):


So what do we see here: the main inputs to TDP are voltage and frequency. Intel stipulates very clearly that it is for thermal solution capability.
If we could differentiate this from actual work done, i.e. bits computed, that would be a very interesting story and worth digging into.

TDP is clearly related to voltage and frequency. If it were computed from voltage and amps, and we were to introduce a time period, that'd get us joules, which could be sufficient to make this a stand-in for energy consumed.

Unfortunately, it isn't. And Intel says so. (Page 8, definition of TDP): "TDP is not the maximum power that the processor can dissipate."

Worse, in the intervening 40 years, the complexity has increased substantially. Back in the early days of microprocessors, a µP would consume a relatively constant amount of power, regardless of whether or not it was doing any useful work. Its clock was driven by an external crystal and the complexity level was relatively low, a few thousand transistors total for the likes of the 6502/8080. This led to relatively even and predictable power consumption.

Modern CPU's do extensive power and frequency management to minimize power consumption, and one of the side effects of this is that it is very difficult to generate a workload that fully stresses a CPU. Modern CPU's have a lot of subsystems, as I previously noted. The interesting thing about the "L" CPU's is that they aren't magically more efficient, it's just that a different set of rules is applied to them to cause them to live within a tighter dissipation budget.

But the real question here was related to "suggestions for a low power system." My point is that you don't get an "L" CPU to create a low power system. That isn't what the "L" CPU's are about. You put less workload on ANY of the E5 CPU's and the power consumed will be lower. The "L" just causes the L CPU to do less work per unit time, and that's about thermal design.
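
To put toy numbers on that point, here's a minimal sketch (all figures invented; it assumes performance scales linearly with the power budget, which real silicon only approximates):

```python
# Toy model of "TDP cap vs. energy used": a lower power cap limits
# instantaneous dissipation, not the energy a fixed job needs.
# All numbers are invented for illustration.

def run_job(work_units, power_w, units_per_joule):
    """Return (seconds, joules) to finish a fixed job at a given power cap."""
    rate = power_w * units_per_joule       # work units per second
    seconds = work_units / rate
    return seconds, seconds * power_w

t_std, e_std = run_job(1200.0, 120.0, 1.0)  # regular SKU power budget
t_low, e_low = run_job(1200.0, 60.0, 1.0)   # hypothetical SKU with half the cap

# The capped CPU takes twice as long but burns the same energy in this
# model: the tighter rating is about thermals, not efficiency.
print(t_low / t_std, e_std == e_low)  # -> 2.0 True
```

Real chips complicate this with idle power, voltage scaling, and race-to-idle effects, but the basic point stands: the cap shapes power over time, and the workload determines the energy.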
 

hotdog

Dabbler
Joined
Apr 14, 2014
Messages
44
But the real question here was related to "suggestions for a low power system." My point is that you don't get an "L" CPU to create a low power system. That isn't what the "L" CPU's are about. You put less workload on ANY of the E5 CPU's and the power consumed will be lower. The "L" just causes the L CPU to do less work per unit time, and that's about thermal design.
So what is your opinion on a Pentium D1508, like Constantin suggested?
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
By "end device" do you mean the router/switch or the workstations?

With modern networking, basically any endpoint. That includes router ports, switch ports, workstations, anything else you plug a cable into.

When gigabit networking was standardized 20 years ago, it was coming out of a bad era of networking, where things had gone from 10base-5 ("thicknet") and 10base-2 ("thinnet"), which were the first widely deployed ethernet technologies, both of which were 10Mbps half duplex, to 10base-T, which evolved from AUI. AUI involved taking an RX/TX pair up to a tap on the coax cable. 10base-T modified that strategy to take the RX/TX pair all the way to a central location ("hub"). Then along came 100Mbps, meaning networks had to pick one speed or the other. This was also a horrible era of crossover cables because there was no auto MDI/MDI-X.

Gigabit ethernet brought sanity. With the advent of switching came the ability for much higher network throughput, and the ability to support 10Mbps, 100Mbps, and 1G all on the same network. At the same time, the standard mandated auto-MDI/MDI-X, eliminating crossover cables. The era of modern ethernet came to be. While 802.3ab doesn't actually mandate interoperability with 10Mbps or 100Mbps devices, the practical realities of the day back in the late '90's forced the issue, so basically every gigabit ethernet chipset supported 10/100 as well, just in case it was being plugged into an "older" network, and every switch chipset supported 10/100 because so many of those devices existed back in the day.

Now this seems like a pointless regurgitation of history, but there's a point I'm about to make.

It's taken around 20 years for 10Gbase-T to become a "thing". In that time, almost all 10Mbps devices have been retired, and even 100Mbps devices are not so common. Due to the complexity of maintaining multiple families of standards on a chipset, controllers like the X540 have pretty much universally jettisoned 10Mbps support, and I believe some have also jettisoned 100Mbps.
 

hotdog

Dabbler
Joined
Apr 14, 2014
Messages
44
OK, I am thinking about the drives, luckily being on the lower side of space requirements:

5x 2TB SSDs in a RAIDZ1 config
cost: I'll wait for a deal to get them for $200 each -> $1000
vs.
6x 4TB 2.5" HDDs in two striped triple-mirrored vdevs
cost: at the moment $152 each for me -> $912

- both will give me about the same usable drive space
- but the RAIDZ1 only tolerates one drive failure; how likely is it that the resilvering stress will cause a second failure with SSDs?
- the mirrors would give me much better performance, I guess?
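
A back-of-envelope check of the two layouts (raw decimal TB, ignoring ZFS overhead, slop space, and the TB/TiB distinction):

```python
# Usable-capacity comparison of the two candidate pool layouts.

def raidz_usable_tb(n_drives, drive_tb, parity=1):
    """Usable capacity of a single RAIDZ vdev: parity drives are overhead."""
    return (n_drives - parity) * drive_tb

def striped_mirrors_usable_tb(n_vdevs, drive_tb):
    """Usable capacity of striped mirror vdevs: one drive's worth per vdev."""
    return n_vdevs * drive_tb

ssd_raidz1  = raidz_usable_tb(5, 2.0, parity=1)   # 5x 2TB SSD in RAIDZ1
hdd_mirrors = striped_mirrors_usable_tb(2, 4.0)   # 2x 3-way mirrors of 4TB HDDs
print(ssd_raidz1, hdd_mirrors)  # -> 8.0 8.0

# Redundancy differs: RAIDZ1 tolerates any single failure, while each 3-way
# mirror tolerates two failures -- but losing any whole vdev loses the pool.
```

So the capacities match; the real trade is redundancy and rebuild behavior versus the SSDs' performance.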
 
Last edited:

John Doe

Guru
Joined
Aug 16, 2011
Messages
635
I think it is more likely than you might think.

For instance:
You decide to go with RAIDZ1 and 6 drives (just example figures). On day X one disk tells you: hey, I am broken, please replace me.
Since you are on this forum, you know you should get a replacement fast, because as soon as another disk goes bad, everything might be gone.

After a day or two you will have a similar replacement drive. You go to your NAS, open the case, and search for the right disk to take out.
Once you have double-checked everything x times, you power on your FreeNAS. FreeNAS tells you: hey cool, you finally got a disk, now I need to copy everything to the new disk.
FreeNAS will then keep resilvering the new disk for "days".

Usually that resilvering is a very intense stress test for the remaining disks, and supposedly it is quite likely that another disk will fail during the resilvering.

Maybe wait for additional feedback on this topic, though, as I haven't had a single issue in all these years.

By the way, in case you need an HBA, RAM, CPU, or a 3TB WD Red, leave me a note. I am selling some parts on tutti.ch.
 

hotdog

Dabbler
Joined
Apr 14, 2014
Messages
44
I think it is more likely than you might think.

For instance:
You decide to go with RAIDZ1 and 6 drives (just example figures). On day X one disk tells you: hey, I am broken, please replace me.
Since you are on this forum, you know you should get a replacement fast, because as soon as another disk goes bad, everything might be gone.

After a day or two you will have a similar replacement drive. You go to your NAS, open the case, and search for the right disk to take out.
Once you have double-checked everything x times, you power on your FreeNAS. FreeNAS tells you: hey cool, you finally got a disk, now I need to copy everything to the new disk.
FreeNAS will then keep resilvering the new disk for "days".

Usually that resilvering is a very intense stress test for the remaining disks, and supposedly it is quite likely that another disk will fail during the resilvering.

I know everybody warns about this, and it makes sense, but does it also hold for SSDs? After asking my question here, I found one opinion here:

By the way, in case you need an HBA, RAM, CPU, or a 3TB WD Red, leave me a note. I am selling some parts on tutti.ch.
Ahh, a fellow countryman :)
If I go with the X10SDV-2C-7TP4F, the only thing I can think of that I could use from you is RAM.
 

joeinaz

Contributor
Joined
Mar 17, 2016
Messages
188
By "end device" do you mean the router/switch or the workstations?
Yes; for example, there are many home routers, switches, and docking stations that still use 10/100 ports. Ideally with FreeNAS, you want at least gigabit speed throughout your infrastructure.
 

hotdog

Dabbler
Joined
Apr 14, 2014
Messages
44
Yes; for example, there are many home routers, switches, and docking stations that still use 10/100 ports. Ideally with FreeNAS, you want at least gigabit speed throughout your infrastructure.
Yeah, that's what I've upgraded to with my MacPro "build". I am not sure I could choose a platform with only 10Gb ports without buying a new switch, too.
Our network is quite simple: a Fritzbox 7490 (firewall/router/switch) to two gigabit switches, and from there to workstations, printers, servers, and an IP phone box. Max 6 people working.
(I know this is not really enterprise level, but it runs stable with no problems.)
 

Constantin

Vampire Pig
Joined
May 19, 2017
Messages
1,829
Time to resilver is a function of how much data has to be resilvered, bus speed, CPU, etc. For the OP, resilvering would likely be a matter of hours, not days.

As for the switch, MikroTik is offering a whole bunch now with SFP+ ports and 8-24 gigabit ports. That's what I'd go for: thick trunk to the NAS, fanless, inexpensive.
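
A rough sketch of that estimate (the sustained throughput figure is an assumption, not a measurement; ZFS resilvers allocated data, not raw disk capacity):

```python
# Rough resilver-time estimate: time scales with allocated data over
# sustained rewrite throughput. The 100 MB/s figure below is assumed.

def resilver_hours(used_tb, sustained_mb_per_s):
    """Hours to rewrite `used_tb` of allocated data at a sustained rate."""
    seconds = used_tb * 1e6 / sustained_mb_per_s  # 1 TB = 1e6 MB (decimal)
    return seconds / 3600.0

# e.g. 2 TB of allocated data at a sustained 100 MB/s:
print(round(resilver_hours(2.0, 100.0), 1))  # -> 5.6
```

Fragmentation, pool activity, and small-record workloads can push the real figure well above this, but it shows why a lightly filled pool resilvers in hours, not days.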
 

hotdog

Dabbler
Joined
Apr 14, 2014
Messages
44
Constantin, you seem to have 32GB memory sticks in your system; do you happen to know which ones you are using?
Is starting with only one 32GB stick not recommended because it would be unbalanced?
 

joeinaz

Contributor
Joined
Mar 17, 2016
Messages
188
yeah that's what I've upgraded to with my macpro "build". I am not sure if I could choose a platform with 10Gb ports only without buying a new switch, too.
our network is quite simple: Fritzbox 7490 (firewall/router/switch) to two gigabit switches, from there to workstations, printers, servers, IP phone box. max 6 people working.
(I know this is not really enterprise level, but it's running stable with no problems)

I was considering the MikroTik CRS305-1G-4S for a low-cost 10Gb home switch, but it almost seems too good to be true. Has anyone had any experience with this switch?

 

Constantin

Vampire Pig
Joined
May 19, 2017
Messages
1,829
Constantin, you seem to have 32GB memory sticks in your system; do you happen to know which ones you are using? Is starting with only one 32GB stick not recommended because it would be unbalanced?
When in doubt, check the manual. SM recommends the use of memory sticks in pairs (see page 2-7), but single sticks are OK (though with an unspecified performance hit, see page 2-8). See my memory saga; my board was VERY picky re: its memory, so I'd stick to the "Tested Memory List"! I'll dig up my sources later; I used eBay as well as other vendors.
 

hotdog

Dabbler
Joined
Apr 14, 2014
Messages
44
When in doubt, check the manual. SM recommends the use of memory sticks in pairs (see page 2-7), but single sticks are OK (though with an unspecified performance hit, see page 2-8). See my memory saga; my board was VERY picky re: its memory, so I'd stick to the "Tested Memory List"! I'll dig up my sources later; I used eBay as well as other vendors.
OK, thanks, I'll go with the Samsung 2x16GB UDIMMs from the tested memory list, I guess. By the time I might upgrade, prices will hopefully have fallen.
edit: oh, the slower 32GB registered sticks are the same price on eBay. I know, more power consumption again for registered.
 
Last edited: