Build Report: Norco RPC-4224, SuperMicro X10SRi-F, Xeon E5-1650v4

Joined
Dec 2, 2015
Messages
730
Have you measured the power used by your system at idle? If so, how many hard drives were installed when you made that measurement?

I'm pondering which motherboard and CPU to use, and wonder how much power your system is using. My current system uses about 62W at idle, with six 4TB Reds, and I wonder how much more I would burn with a more serious system like yours.
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,358
Been meaning to. Will get to it.
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
Have you measured the power used by your system at idle? If so, how many hard drives were installed when you made that measurement?

I'm pondering which motherboard and CPU to use, and wonder how much power your system is using. My current system uses about 62W at idle, with six 4TB Reds, and I wonder how much more I would burn with a more serious system like yours.
I know that you were asking someone else, but I have some information that might help you.
Each of my FreeNAS systems pulls about 160 watts. The big numbers are mostly down to the number of drives. The drives I have are rated at around 5.4 watts at idle and 8 watts in normal operation. That works out to 77 watts for the drives, and you would think that indicates the rest of the system (CPU, RAM, system board, SAS controller, etc.) draws the other 83-ish watts. The thing is, there is also some loss from the power conversion in the power supply (its efficiency rating), and the fans add up too; I have 4 just on the drives, to keep them cool.
The more drives you have, the more power it takes and the more heat it makes.
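That arithmetic can be sketched out; note the drive count and PSU efficiency below are hypothetical placeholders for illustration, not measured values from this build:

```python
# Back-of-the-envelope power budget using the figures quoted above.
# n_drives and psu_efficiency are assumptions for illustration only.
def power_budget(wall_watts, drive_idle_w, n_drives, psu_efficiency):
    """Split a wall-plug measurement into drive draw and everything else."""
    drives = drive_idle_w * n_drives
    # The wall reading includes conversion loss; the DC side sees less.
    dc_side = wall_watts * psu_efficiency
    return drives, dc_side - drives

drives, rest = power_budget(wall_watts=160, drive_idle_w=5.4,
                            n_drives=14, psu_efficiency=0.9)
```

With those assumptions the drives account for roughly 76 W and the rest of the DC-side system about 68 W, which is why a naive "160 minus drives" figure overstates what the board, CPU, and controller actually draw.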
 
Last edited:
Joined
Dec 2, 2015
Messages
730
I know that you were asking someone else, but I have some information that might help you.
Each of my FreeNAS systems pulls about 160 watts.
That is useful, and the power draw at idle is not too frightening. Thanks.
 

Dice

Wizard
Joined
Dec 11, 2015
Messages
1,410
@Kevin Horton - You could probably shave off a few watts between X9, X10 and X11 E3 systems too.
IIRC @SweetAndLow posted numbers somewhere, related to power consumption on E5 systems. I think @depasseg may have some to contribute too.

My guesstimate of the 'power premium' between an E3-1230 and an E5-1620 system would probably be around 20-40 watts. But then there are the 'extras' you hinted at in your other thread.
When taking larger systems into account there is a hefty premium for HBAs and backplanes. My LSI 9201-16i sits at 16 W. Guesstimate another 10-15 W for a backplane. A couple of additional fans compared to a simple Fractal Design build would add another 5-10 W. 10GbE setups are not energy efficient at all as far as I've seen; a semi-wild guess is approx 10 W per NIC.
That'd be a decent guesstimate of around 40-50 W added on top of the E5 premium.
The upside is that you'd be settled in terms of RAM requirements for any foreseeable future, with PCIe lanes to feed enough HBAs to set up multiple 'JBOD boxes' or NVMe devices.
In other words, you won't be pressed to upgrade due to lack of performance or capability of the box. On the other hand is the power premium and investment cost that must be outweighed.
 
Joined
Dec 2, 2015
Messages
730
@Kevin Horton - You could probably shave off a few watts between X9, X10 and X11 E3 systems too.
IIRC @SweetAndLow posted numbers somewhere, related to power consumption on E5 systems. I think @depasseg may have some to contribute too.

My guesstimate of the 'power premium' between an E3-1230 and an E5-1620 system would probably be around 20-40 watts. But then there are the 'extras' you hinted at in your other thread.
When taking larger systems into account there is a hefty premium for HBAs and backplanes. My LSI 9201-16i sits at 16 W. Guesstimate another 10-15 W for a backplane. A couple of additional fans compared to a simple Fractal Design build would add another 5-10 W. 10GbE setups are not energy efficient at all as far as I've seen; a semi-wild guess is approx 10 W per NIC.
That'd be a decent guesstimate of around 40-50 W added on top of the E5 premium.
The upside is that you'd be settled in terms of RAM requirements for any foreseeable future, with PCIe lanes to feed enough HBAs to set up multiple 'JBOD boxes' or NVMe devices.
In other words, you won't be pressed to upgrade due to lack of performance or capability of the box. On the other hand is the power premium and investment cost that must be outweighed.
Thanks @Dice for the SWAG on power consumption. I've got some thinking to do, and some sums on cost of power over a five year lifetime. There is something to be said, and probably some money to be saved, by ensuring that I head down a path that allows me to upgrade to higher storage capacity when needed without having to upgrade the motherboard, CPU and chassis.
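For anyone doing the same sums, the five-year cost is straightforward to sketch. The 45 W premium and $0.12/kWh rate below are assumptions; plug in your own numbers:

```python
# Lifetime electricity cost of an always-on power premium.
def lifetime_cost(extra_watts, years, rate_per_kwh):
    hours = years * 365 * 24          # ignore leap days for a rough estimate
    kwh = extra_watts * hours / 1000
    return kwh * rate_per_kwh

# 45 W premium, 5 years, $0.12/kWh -> 1971 kWh, about $237
cost = lifetime_cost(extra_watts=45, years=5, rate_per_kwh=0.12)
```

A few hundred dollars over the lifetime, so the premium matters, but it may well be outweighed by avoiding a mid-life motherboard/chassis upgrade.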
 

Dice

Wizard
Joined
Dec 11, 2015
Messages
1,410
I've got some thinking to do, and some sums on cost of power over a five year lifetime.
I hope you share any calculations down the road; this is an interesting topic (which I'm considering including in a resource on long-term planning). Tag me plz ;)
 
Joined
Dec 2, 2015
Messages
730
I hope you share any calculations down the road; this is an interesting topic (which I'm considering including in a resource on long-term planning). Tag me plz ;)
@Dice - see the results of my back-of-the-envelope calculations here.
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,358
These are the fan threshold commands I used for my fans. If you have the same fans on the same headers (which is probably unlikely), you could configure your fans the same as mine.

NF-F12 PWM Specs
1500RPM +-10%
Min 300RPM +-20%

Originally I tried ipmitool sensor thresh "FANA" lower 100 200 300, but ended up getting false critical assertions in Optimal mode. So I tried ipmitool sensor thresh "FANA" lower 000 000 300. This worked okay, but with 300 as the threshold, a low speed reading (i.e. 300) was sensed as a "non-critical" event when it would be better if it were "ok", so I use 200 now.

ipmitool sensor thresh "FANA" lower 000 000 200
ipmitool sensor thresh "FANA" upper 1600 1700 1800

NF-B9 PWM Specs
1600RPM +-10%
Min 300RPM +-20%

I found it was possible for the fans to go to 1800/1900. When this happens, the IPMI spins all the fans up and makes a log entry. I noticed that I had solved this on FANA by setting the upper critical threshold above the fan's real maximum speed. So the solution is to use 2000 as the upper non-critical.


ipmitool sensor thresh "FAN1" lower 100 200 300
ipmitool sensor thresh "FAN2" lower 100 200 300
ipmitool sensor thresh "FAN1" upper 2000 2200 2400
ipmitool sensor thresh "FAN2" upper 2000 2200 2400

NF-A8 PWM Specs
2200RPM +-10%
Min 450RPM +-20%

ipmitool sensor thresh "FAN3" lower 200 300 400
ipmitool sensor thresh "FAN3" upper 2400 2500 2600
ipmitool sensor thresh "FAN4" lower 200 300 400
ipmitool sensor thresh "FAN4" upper 2400 2500 2600
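Since a BMC reset wipes these thresholds, it can be handy to keep them in a table and regenerate the commands. A minimal sketch (the table mirrors the values above; adapt it to your own fans and headers before running anything):

```python
# Regenerate the ipmitool threshold commands above from a table, so they
# can be replayed after a BMC reset or firmware update wipes them.
FAN_THRESHOLDS = {
    "FANA": {"lower": (0, 0, 200),     "upper": (1600, 1700, 1800)},
    "FAN1": {"lower": (100, 200, 300), "upper": (2000, 2200, 2400)},
    "FAN2": {"lower": (100, 200, 300), "upper": (2000, 2200, 2400)},
    "FAN3": {"lower": (200, 300, 400), "upper": (2400, 2500, 2600)},
    "FAN4": {"lower": (200, 300, 400), "upper": (2400, 2500, 2600)},
}

def ipmitool_commands(table):
    cmds = []
    for fan, sides in table.items():
        for side in ("lower", "upper"):
            vals = " ".join(str(v) for v in sides[side])
            cmds.append(f'ipmitool sensor thresh "{fan}" {side} {vals}')
    return cmds

cmds = ipmitool_commands(FAN_THRESHOLDS)
```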
 
Joined
Dec 2, 2015
Messages
730
It was quite noticeable that all the incoming air was bypassing the block of HDs at the top of the bay area and instead coming in through the remaining holes below the bay area... you could feel it with your hand.

Norco provides some packaging material with their chassis. It just so happens that it's a nice density and just slightly thicker than an HD.

View attachment 13854
(Norco packing foam is just the perfect thickness to make drive tray dummy blocks)

So, I cut some strips slightly wider than the drive trays, then cut those into smaller blocks. They seemed to fit perfectly into the drive trays... and in went my custom designed dummy drive trays. The fit is very good, and the foam is easily removed with the friction fit.

Thank you, Norco.

...

There are some extremely large 'holes' in the fan wall for cabling to be routed through; there are also some smaller holes. I figured airflow was taking the easy path and just circulating from the motherboard side back to the HD side through the fan wall.

View attachment 13855
(HD packing foam seems like the perfect draft stopper...)

So I cut some strips of foam and plugged the drafts...

View attachment 13856
(The HD packing foam nicely seals the drafts around the various cables/wires which transit through the holes in the fan wall.)

Thanks for this post. I've got a system based on a Norco RPC-4224 chassis in badblocks testing now, with 8 drives testing and 4 drives idle. At the start, with the empty HD trays filled with foam, but nothing blocking the bulkhead holes, the three HD fans were running at about 65% duty cycle to keep the average HD temperature at 36.5C. I filled the holes in the fan bulkhead, and now 45% duty cycle is enough to keep the HDs at the same average temperature. It'll be interesting to see what duty cycle I need later in January when I've got 16 drives in the system.
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,358
Update with 16 drives.

I installed another 8 drives (8x Seagate IronWolf NAS 4TB), and have been running burnin/stress testing.

(badblocks on new drives and solnet_array_tester on existing pool drives.)

Temperatures seem fine, even overnight without the a/c on.

Code:
2017-05-23 16:15:28: Drives are warm, going to 80%
2017-05-23 16:18:28: Maximum HD Temperature: 36
2017-05-23 16:18:28: Drives are warming, going to 50%
2017-05-23 16:21:29: Maximum HD Temperature: 36
2017-05-23 16:24:30: Maximum HD Temperature: 36
2017-05-23 16:27:31: Maximum HD Temperature: 36
2017-05-23 16:30:32: Maximum HD Temperature: 36
2017-05-23 16:33:33: Maximum HD Temperature: 37
2017-05-23 16:33:33: Drives are warm, going to 80%
2017-05-23 16:36:34: Maximum HD Temperature: 36
2017-05-23 16:36:34: Drives are warming, going to 50%
2017-05-23 16:39:35: Maximum HD Temperature: 36
2017-05-23 16:42:36: Maximum HD Temperature: 36
2017-05-23 16:45:37: Maximum HD Temperature: 36
2017-05-23 16:48:38: Maximum HD Temperature: 37
2017-05-23 16:48:38: Drives are warm, going to 80%
2017-05-23 16:51:40: Maximum HD Temperature: 36
2017-05-23 16:51:40: Drives are warming, going to 50%
2017-05-23 16:54:40: Maximum HD Temperature: 36
2017-05-23 16:57:41: Maximum HD Temperature: 36
2017-05-23 17:00:43: Maximum HD Temperature: 36
2017-05-23 17:03:44: Maximum HD Temperature: 36
2017-05-23 17:06:45: Maximum HD Temperature: 36
2017-05-23 17:09:45: Maximum HD Temperature: 36
2017-05-23 17:12:46: Maximum HD Temperature: 37
2017-05-23 17:12:46: Drives are warm, going to 80%


As a reminder, my fan script checks the drive temperatures every minute. The translation of the above: generally the fans run at 50%, keeping all of the drives at or below 36C; every 5 minutes or so the fans kick up to 80% because a drive hits 37C, and a minute later they drop back to 50%.

I can't hear the variance, so I don't care.

Anyway, there you go. Nothing unusual really. When the drives are idle I imagine they'll alternate between 30/50% instead of 50/80%.
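The rule implied by that log is a simple temperature-to-duty mapping. A sketch (the setpoints come from the log above; the structure is an illustration, not the actual script):

```python
# HD fan duty cycle as a function of the hottest drive temperature,
# matching the 30/50/80% levels and 36/37 C setpoints in the log.
def hd_fan_duty(max_hd_temp_c):
    if max_hd_temp_c >= 37:
        return 80          # "Drives are warm, going to 80%"
    if max_hd_temp_c >= 36:
        return 50          # "Drives are warming, going to 50%"
    return 30              # "Drives are cool enough, going to 30%"
```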

Next test is to get some mprime loading going.

...

So, this is running mprime flatout, with small-ffts and all threads...

Code:
2017-05-23 17:19:29: CPU Temp: 64.0 >= 55, CPU Fan going high.
2017-05-23 17:19:29: CPU Temp: 64.0 >= 62, Overiding HD fan zone to 100%,
2017-05-23 17:22:00: Maximum HD Temperature: 35
2017-05-23 17:22:00: Drives are cool enough, going to 30%
2017-05-23 17:25:02: Maximum HD Temperature: 34


This shows the CPU immediately jumps to 64C, so the CPU fan goes to 100%, and the HD fans are also ramped to 100% to assist with the CPU cooling. The novel thing is that the HDs actually cool down further now (of course), and so the HD fans would dial back to 30% at this temperature... if the CPU wasn't demanding more cooling (this behaviour is configurable).

PS: With all fans at 100% I couldn't actually detect the sound of the server in an office without putting my ear within 1" of the front of the case. Switches/Drobos/other pcs were far louder.

And for completeness, when I kill mprime

Code:
2017-05-23 17:26:06: CPU Temp: 49.0 >= 45, CPU Fan going med.
2017-05-23 17:26:06: Restoring HD fan zone to 30%
2017-05-23 17:27:54: Maximum HD Temperature: 34


The CPU temperature immediately drops to 49C, and the fan zone is restored to a 30% duty cycle since the maximum HD temperature is 34C.
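The CPU-override behaviour can be sketched the same way (thresholds taken from the log lines above; an illustration of the logic, not the real script):

```python
# When the CPU is hot enough, the HD fan zone is forced to 100%
# regardless of drive temperatures, then restored once it cools.
def fan_zones(cpu_temp_c, max_hd_temp_c):
    hd = 80 if max_hd_temp_c >= 37 else 50 if max_hd_temp_c >= 36 else 30
    if cpu_temp_c >= 62:
        return {"cpu": "high", "hd": 100}   # override HD zone for CPU cooling
    cpu = "high" if cpu_temp_c >= 55 else "med" if cpu_temp_c >= 45 else "low"
    return {"cpu": cpu, "hd": hd}
```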
 
Last edited:

Stux

MVP
Joined
Jun 2, 2016
Messages
4,358
These are the fan threshold commands I used for my fans. If you have the same fans on the same headers (which is probably unlikely), you could configure your fans the same as mine.

NF-F12 PWM Specs
1500RPM +-10%
Min 300RPM +-20%

Originally I tried ipmitool sensor thresh "FANA" lower 100 200 300, but ended up getting false critical assertions in Optimal mode. So I tried ipmitool sensor thresh "FANA" lower 000 000 300. This worked okay, but with 300 as the threshold, a low speed reading (i.e. 300) was sensed as a "non-critical" event when it would be better if it were "ok", so I use 200 now.

ipmitool sensor thresh "FANA" lower 000 000 200
ipmitool sensor thresh "FANA" upper 1600 1700 1800

NF-B9 PWM Specs
1600RPM +-10%
Min 300RPM +-20%

I found it was possible for the fans to go to 1800/1900. When this happens, the IPMI spins all the fans up and makes a log entry. I noticed that I had solved this on FANA by setting the upper critical threshold above the fan's real maximum speed. So the solution is to use 2000 as the upper non-critical.


ipmitool sensor thresh "FAN1" lower 100 200 300
ipmitool sensor thresh "FAN2" lower 100 200 300
ipmitool sensor thresh "FAN1" upper 2000 2200 2400
ipmitool sensor thresh "FAN2" upper 2000 2200 2400

NF-A8 PWM Specs
2200RPM +-10%
Min 450RPM +-20%

ipmitool sensor thresh "FAN3" lower 200 300 400
ipmitool sensor thresh "FAN3" upper 2400 2500 2600
ipmitool sensor thresh "FAN4" lower 200 300 400
ipmitool sensor thresh "FAN4" upper 2400 2500 2600

I updated my IPMI BMC firmware to 3.50 today, in order to get the HTML5 KVM. As part of this, my fan thresholds were lost. This caused the fan controller script to keep resetting the BMC as it was surging the fans. All correct behaviour, really.

The good thing was I was able to fix it quite easily since I had copied my settings here ;)

In the interest of further documenting things, I thought it might be useful to have a copy of the defaults:

Code:
FAN1			 | 1800.000   | RPM		| ok	| 300.000   | 500.000   | 700.000   | 25300.000 | 25400.000 | 25500.000
FAN2			 | 1700.000   | RPM		| ok	| 300.000   | 500.000   | 700.000   | 25300.000 | 25400.000 | 25500.000
FAN3			 | 2200.000   | RPM		| ok	| 300.000   | 500.000   | 700.000   | 25300.000 | 25400.000 | 25500.000
FAN4			 | 2300.000   | RPM		| ok	| 300.000   | 500.000   | 700.000   | 25300.000 | 25400.000 | 25500.000
FAN5			 | na		 |			| na	| na		| na		| na		| na		| na		| na		
FANA			 | 1400.000   | RPM		| ok	| 300.000   | 500.000   | 700.000   | 25300.000 | 25400.000 | 25500.000


PS: Do not use the above as your settings!

You can see why having fans which spin at 300 and below RPM causes issues with the default IPMI thresholds.
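One way to avoid re-typing everything after the next firmware update is to parse the `ipmitool sensor` dump and save the thresholds first. A sketch, assuming the pipe-separated layout shown above:

```python
# Parse one line of `ipmitool sensor` output into name, reading, and the
# six threshold values; "na" fields become None.
def parse_sensor_line(line):
    fields = [f.strip() for f in line.split("|")]
    def num(s):
        return None if s == "na" else float(s)
    return {
        "name": fields[0],
        "reading": num(fields[1]),
        "thresholds": [num(f) for f in fields[4:10]],
    }

row = parse_sensor_line(
    "FAN1 | 1800.000 | RPM | ok | 300.000 | 500.000 | 700.000 "
    "| 25300.000 | 25400.000 | 25500.000"
)
```

Run over the full dump, this gives a record per fan that can be written to a file and replayed with `ipmitool sensor thresh` after the BMC comes back up.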
 
Joined
Dec 2, 2015
Messages
730
I updated my IPMI BMC firmware to 3.50 today, in order to get the HTML5 KVM.
I'd love to get the HTML5 KVM so I can ditch Java. How well is it working? Which OS are you using on the client to access KVM?

Does Supermicro make Release Notes and Known Issues docs available, so users can figure out which IPMI version for their board has the HTML5 KVM, and what the known issues are?

Thanks for your build threads. They were very helpful when I was planning and assembling my Norco RPC-4224 server.
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,358
I'd love to get the HTML5 KVM so I can ditch Java. How well is it working? Which OS are you using on the client to access KVM?
It works from Chrome on Mac OS X.

So that's a massive improvement ;)

The upgrade from the web GUI actually didn't work for me (lost IPMI). I ended up making a Rufus USB with FreeDOS, then copying the DOS exe and the .bin to that.

(after finding a monitor!)

That worked very well. They stress to reset the settings, so the command is

AlUpdate.exe -f <ipmi firmware.bin> -i kcs -r n

The '-r n' is to reset, hence needing to reconfigure my thresholds, passwords, etc. (default is ADMIN/ADMIN)

I installed REDFISH_3.50

Does Supermicro make Release Notes and Known Issues docs available, so users can figure out which IPMI version for their board has the HTML5 KVM, and what the known issues are?

No. Wish they did.


HTML5 KVM is in REDFISH 3.xx I believe.
 
Last edited:
Joined
Dec 2, 2015
Messages
730
The upgrade from the web GUI actually didn't work for me (lost IPMI). I ended up making a Rufus USB with FreeDOS, then copying the DOS exe and the .bin to that.

(after finding a monitor!)
Ouch. I'll hold off a few weeks on the upgrade. I don't own a suitable monitor, so it would be a major PITA if I ended up needing one to complete the update. Life should slow down a bit later this summer after we move, and I'll be closer to friends from whom I could borrow a monitor if needed.

Thanks for the info.
 
Last edited:

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
Ouch. I'll hold off a few weeks on the upgrade. I don't own a suitable monitor, so it would be a major PITA if I ended up needing one to complete the update. Life should slow down a bit later this summer after we move, and I'll be closer to friends from whom I could borrow a monitor if needed.

Thanks for the info.
You should always keep an old monitor around (stashed in the back of your closet) so you can pull it out when you need a VGA connection. All three of my SuperMicro boards had to be locally connected to a keyboard and monitor for the BIOS (UEFI) and IPMI firmware updates. The only way you can do it remotely is if you have a registration code from SuperMicro that you can put into the management console to unlock the feature. It is an extra-cost option, and I would have bought it if I could have figured out how to buy it. Admittedly, I gave up after a couple of days of sending emails and making calls. SuperMicro basically only sells it to clients that are big enough to buy directly from them. If someone knows another way, please share the details.
 

SweetAndLow

Sweet'NASty
Joined
Nov 6, 2013
Messages
6,421
You should always keep an old monitor around (stashed in the back of your closet) so you can pull it out when you need a VGA connection. All three of my SuperMicro boards had to be locally connected to a keyboard and monitor for the BIOS (UEFI) and IPMI firmware updates. The only way you can do it remotely is if you have a registration code from SuperMicro that you can put into the management console to unlock the feature. It is an extra-cost option, and I would have bought it if I could have figured out how to buy it. Admittedly, I gave up after a couple of days of sending emails and making calls. SuperMicro basically only sells it to clients that are big enough to buy directly from them. If someone knows another way, please share the details.
No, you can update SuperMicro boards via IPMI; you just have to load the firmware on a USB key and it will show up in the IPMI update.

Sent from my Nexus 5X using Tapatalk
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
No, you can update SuperMicro boards via IPMI; you just have to load the firmware on a USB key and it will show up in the IPMI update.

Sent from my Nexus 5X using Tapatalk
So, you are saying that you can push the firmware update remotely through the network, and all you have to do is have it on a USB stick?
Because if you are saying that you put a USB stick in the system and then walk back to your desk to do the update, that isn't the same thing. I am talking about doing the update completely remotely from my desk, without having to go downstairs to where the server is.
 

SweetAndLow

Sweet'NASty
Joined
Nov 6, 2013
Messages
6,421
So, you are saying that you can push the firmware update remotely through the network, and all you have to do is have it on a USB stick?
Because if you are saying that you put a USB stick in the system and then walk back to your desk to do the update, that isn't the same thing. I am talking about doing the update completely remotely from my desk, without having to go downstairs to where the server is.
I was describing the put-USB-in-and-walk-back-to-desk scenario.

Sent from my Nexus 5X using Tapatalk
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,358
So, here's an interesting thing: I just received a batch of six 8TB Seagate IronWolfs, and decided to burn them in in this system, in the top 2 rows.

At idle, they're running 10-12C hotter than the other 4TB drives already installed. I guess this makes sense when you consider that their idle power usage is 50% greater than the other drives, but I found it surprising.

Code:
2017-07-03 17:41:11: /dev/da0: 28
2017-07-03 17:41:12: /dev/da1: 28
2017-07-03 17:41:12: /dev/da2: 27
2017-07-03 17:41:12: /dev/da3: 29
2017-07-03 17:41:12: /dev/da4: 30
2017-07-03 17:41:12: /dev/da5: 30
2017-07-03 17:41:12: /dev/da6: 28
2017-07-03 17:41:12: /dev/da7: 29
2017-07-03 17:41:12: /dev/da8: 29
2017-07-03 17:41:12: /dev/da9: 29
2017-07-03 17:41:12: /dev/da10: 29
2017-07-03 17:41:12: /dev/da11: 29
2017-07-03 17:41:12: /dev/da12: 28
2017-07-03 17:41:13: /dev/da13: 29
2017-07-03 17:41:13: /dev/da14: 28
2017-07-03 17:41:13: /dev/ada3: 40
2017-07-03 17:41:13: /dev/ada4: 41
2017-07-03 17:41:13: /dev/ada2: 40
2017-07-03 17:41:13: /dev/ada5: 39
2017-07-03 17:41:13: /dev/ada1: 42
2017-07-03 17:41:13: /dev/ada0: 41


The new drives are ada*

This is with all fans at 100%. I had tried a burnin, but cancelled it when 3 of the new drives hit 45/46C.

I've now got some Noctua NF-F12 iPPC 3000 RPM fans on order, and will try the system with those fans installed.

Will also try relocating the drives to the bottom rows, which seem to have better thermal characteristics.
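For what it's worth, the gap is easy to quantify straight from the log. A quick sketch that parses the dump above and averages the two drive groups:

```python
import re

# Average drive temperatures per device group (da* = existing drives,
# ada* = the new 8TB IronWolfs), parsed from log lines like
# "2017-07-03 17:41:11: /dev/da0: 28".
def group_averages(log_text):
    temps = {"da": [], "ada": []}
    for m in re.finditer(r"/dev/(ada|da)\d+:\s+(\d+)", log_text):
        temps[m.group(1)].append(int(m.group(2)))
    return {k: sum(v) / len(v) for k, v in temps.items() if v}

sample = """2017-07-03 17:41:11: /dev/da0: 28
2017-07-03 17:41:13: /dev/ada3: 40
2017-07-03 17:41:13: /dev/ada1: 42"""
averages = group_averages(sample)
```

On the full dump above, the da* group averages around 28-29C while the ada* group sits at about 40-41C, which matches the 10-12C gap described.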
 