Build Report: Norco RPC-4224, SuperMicro X10-SRi-F, Xeon E5-1650v4

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
Will also try relocating the drives to the bottom rows, which seem to have better thermal characteristics.

Just been experimenting with the positioning of the 8TB drives in the bays...
Code:
2017-07-03 20:50:15: /dev/da0: 29
2017-07-03 20:50:15: /dev/da1: 31
2017-07-03 20:50:15: /dev/da2: 30
2017-07-03 20:50:15: /dev/da3: 31
2017-07-03 20:50:16: /dev/da4: 29
2017-07-03 20:50:16: /dev/da5: 30
2017-07-03 20:50:16: /dev/da6: 28
2017-07-03 20:50:16: /dev/da7: 28
2017-07-03 20:50:16: /dev/da8: 32
2017-07-03 20:50:16: /dev/da9: 31
2017-07-03 20:50:16: /dev/da10: 41
2017-07-03 20:50:16: /dev/da11: 41
2017-07-03 20:50:16: /dev/da12: 41
2017-07-03 20:50:16: /dev/da13: 41
2017-07-03 20:50:16: /dev/da14: 39
2017-07-03 20:50:16: /dev/da15: 38
2017-07-03 20:50:16: /dev/ada0: 27
2017-07-03 20:50:16: /dev/ada1: 28
2017-07-03 20:50:17: /dev/ada2: 28
2017-07-03 20:50:17: /dev/ada3: 27
2017-07-03 20:50:17: /dev/ada4: 26
2017-07-03 20:50:17: Maximum HD Temperature: 41


da10..da15 are the 8TB drives... still quite hot at the bottom.

da10..da13 are in the bottom left 2x2 bays
da14..da15 are in the bottom right 2 bays.

I think it's quite interesting that the 2x2 block of hot drives seems to run a few degrees hotter than the two drives which have some other cooler drives to act as a heat sink. Also, da15 has no drive to its right... as it's at the edge of the chassis.

I think I'll try mixing the drives up a bit...
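
For anyone curious how a temperature report like the one above is gathered: below is a minimal Python sketch that polls each drive with smartctl and prints the same style of log. It assumes smartmontools is installed and that the drives expose SMART attribute 194; the device list and output format are purely illustrative, and the actual script used in this build may differ.

Code:
#!/usr/bin/env python3
# Sketch only: gather drive temperatures the way a log like the one above could
# be produced. Assumes smartmontools' smartctl is installed and the drives report
# SMART attribute 194 (Temperature_Celsius); device list and format are illustrative.
import subprocess
from datetime import datetime

DEVICES = [f"/dev/da{i}" for i in range(16)] + [f"/dev/ada{i}" for i in range(5)]

def drive_temp(dev):
    """Return the temperature smartctl reports for dev, or None if unavailable."""
    out = subprocess.run(["smartctl", "-A", dev], capture_output=True, text=True).stdout
    for line in out.splitlines():
        fields = line.split()
        if len(fields) >= 10 and fields[0] == "194":
            return int(fields[9])            # RAW_VALUE column of attribute 194
    return None

if __name__ == "__main__":
    max_temp = 0
    for dev in DEVICES:
        temp = drive_temp(dev)
        if temp is not None:
            print(f"{datetime.now():%Y-%m-%d %H:%M:%S}: {dev}: {temp}")
            max_temp = max(max_temp, temp)
    print(f"{datetime.now():%Y-%m-%d %H:%M:%S}: Maximum HD Temperature: {max_temp}")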
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
So I found out something today: 1) all Seagate IronWolf drives larger than 4TB spin at 7200 RPM, and 2) there are now two models of the 8TB IronWolf.

http://www.seagate.com/www-content/.../en-us/docs/ironwolf-hdd-ds-1904-8-1703us.pdf

ST8000VN0022 and ST8000VN0004

The 0004 model has power characteristics similar to the 4TB models. The 0022 model uses nearly double the power. Guess which version I have.

Every watt in is heat... so if you're getting 8TB IronWolf drives, do try to get the (I assume newer) 0004 models.
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
One must be a helium drive, the other an air-filled drive. The latter is significantly cheaper, so I expect that to be the new one, even at the cost of additional power.
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
Another update:

So I tested spreading the 8TB drives out to the edges of the array. Interesting results: they seem to be 2-3°C cooler.

Screen Shot 2017-07-04 at 4.38.55 PM.png


Red drives are the 8TB drives. On the edges of the array, with cooler drives adjacent, they tend to run cooler. The bottom drives are consistently 1°C cooler.

(and let this put to rest the assumption that drive device IDs tell you anything about where the drives are physically located ;))

Code:
2017-07-04 16:32:38: /dev/da0: 31
2017-07-04 16:32:38: /dev/da1: 31
2017-07-04 16:32:38: /dev/da2: 32
2017-07-04 16:32:39: /dev/da3: 32
2017-07-04 16:32:39: /dev/da4: 31
2017-07-04 16:32:39: /dev/da5: 29
2017-07-04 16:32:39: /dev/da6: 38
2017-07-04 16:32:39: /dev/da7: 38
2017-07-04 16:32:39: /dev/da8: 37
2017-07-04 16:32:39: /dev/da9: 37
2017-07-04 16:32:39: /dev/da10: 31
2017-07-04 16:32:39: /dev/da11: 29
2017-07-04 16:32:39: /dev/da12: 30
2017-07-04 16:32:39: /dev/da13: 31
2017-07-04 16:32:39: /dev/da14: 29
2017-07-04 16:32:39: /dev/da15: 29
2017-07-04 16:32:39: /dev/ada0: 38
2017-07-04 16:32:40: /dev/ada1: 30
2017-07-04 16:32:40: /dev/ada2: 31
2017-07-04 16:32:40: /dev/ada3: 38
2017-07-04 16:32:40: /dev/ada4: 30
2017-07-04 16:32:40: Maximum HD Temperature: 38
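
Since the device IDs say nothing about physical location, one way to keep track is to map them onto the bay grid by hand and print the temperatures in that layout. A rough Python sketch of the idea follows; the bay-to-device mapping in it is invented for illustration and is not this build's real mapping.

Code:
# Sketch: print drive temperatures laid out on the physical bay grid (shown here
# as 6 rows of 4 bays; adjust to match your chassis). The bay-to-device mapping
# below is invented for illustration -- it is NOT this build's real mapping.
LAYOUT = [
    ["da8",  "da9",  "ada0", "ada3"],
    ["da0",  "da1",  "da2",  "da3"],
    ["da4",  "da5",  "ada1", "ada2"],
    ["da6",  "da7",  "ada4", None  ],
    ["da10", "da11", "da12", "da13"],
    ["da14", "da15", None,   None  ],
]

def print_grid(temps):
    """temps: dict like {'da0': 31, ...}; prints one row of bays per line."""
    for row in LAYOUT:
        cells = [f"{dev or '----':>5}:{temps.get(dev, '--'):>3}" for dev in row]
        print("  ".join(cells))

# Example with a few of the 16:32 readings from the log above:
print_grid({"da0": 31, "da1": 31, "da6": 38, "da7": 38, "da10": 31, "da14": 29})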
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
Been running overnight idle tests. Results are in: having the hot drives spread out as above results in peak temperatures dropping from 42°C to 39°C.

Which I find interesting. Will try an alternate arrangement now.
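
For reference, the overnight peaks are easy to pull out of the temperature log itself. A small Python sketch, assuming a log of lines in the format shown earlier and a hypothetical log path, might look like this:

Code:
# Sketch: summarise an overnight run by reporting each drive's peak temperature
# from a log of lines like "2017-07-04 16:32:39: /dev/da6: 38". The log path is
# hypothetical; point it at wherever your logging script writes.
import re
from collections import defaultdict

peaks = defaultdict(int)
with open("/var/log/hd_temps.log") as log:          # hypothetical path
    for line in log:
        m = re.match(r".*: (/dev/\w+): (\d+)\s*$", line)
        if m:
            dev, temp = m.group(1), int(m.group(2))
            peaks[dev] = max(peaks[dev], temp)

for dev, temp in sorted(peaks.items()):
    print(f"{dev}: peak {temp}C")
if peaks:
    print(f"Overall peak: {max(peaks.values())}C")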
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
I'm revising this build a bit over the following weeks/months. The plan is basically to convert it to an ESXi-based FreeNAS AIO, very similar to my baby-version pilot system.

I'll be upgrading the RAM and fans, adding a PCIe SSD for SLOG, adding some SATA SSDs for boot/L2ARC etc., adding 10GbE, and adding a 36-port SAS expander.

I'm using the SAS expander because I need to be able to pass through all drives/controllers into FreeNAS, and that means I can't really use the on-board SATA anymore. I could add another HBA, but then I'd be out of PCIe slots, or I'd be bottlenecking various devices... and I have plans for the x16 slot already.

The 36-port expander means I can dual-link up to 28 bays off a single HBA, reclaim a PCIe 3.0 x8 slot, and reclaim an HBA too... and I have other things to do with a spare HBA :)

And the 36-port expander was pretty much the same price as two HBAs... so it's even.
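
For anyone wondering whether 24-28 spinners behind one dual-linked SAS2 HBA becomes a bottleneck, a back-of-the-envelope check (using assumed figures: 6 Gb/s per lane, roughly 20% protocol overhead, and about 200 MB/s sustained per mechanical drive) suggests it is roughly a wash even in the worst case:

Code:
# Back-of-the-envelope check: is one dual-linked SAS2 HBA a bottleneck for 24 drives?
# All figures are assumptions, not measurements: 6 Gb/s per SAS2 lane, ~20% protocol
# overhead, ~200 MB/s sustained per mechanical drive.
LANES          = 8        # dual-link = 2x SFF-8087 = 8 lanes to the expander
GBPS_PER_LANE  = 6.0
EFFICIENCY     = 0.8
DRIVES         = 24
MB_S_PER_DRIVE = 200

uplink_mb_s = LANES * GBPS_PER_LANE * 1000 / 8 * EFFICIENCY   # ~4800 MB/s
demand_mb_s = DRIVES * MB_S_PER_DRIVE                          # ~4800 MB/s

print(f"Uplink: ~{uplink_mb_s:.0f} MB/s, worst-case demand: ~{demand_mb_s:.0f} MB/s")
# Roughly a wash even with every drive streaming sequentially at once, which real
# pool workloads essentially never do -- so not a practical bottleneck.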

All the parts have now arrived :)

I've already upgraded the memory to 128GB using 4x 32GB RDIMMs, and already have the Intel X550-T2 10GbE adapter installed and tested. I was delayed by a few weeks due to an RMA on the 10GbE card.

The Intel RES2CV360 arrived recently, so I wanted to test its functionality and begin to map out where it will be installed...

IMG_3829.jpg


I think I will mount it around here, between the motherboard and the fan bulkhead. I've already pulled one of the Molex connectors back out from the drive bay section, and it will reach the power input on the SAS expander nicely. The heat sink gets hot, so being in the airflow path is not a bad thing. Unfortunately, the short cables that come with the expander won't be much good.

Haven't really worked out how I'm going to secure the expander. I think some plastic insulator sheet of some sort might be in my future.

While testing/setting up, I found that this section from the RES2CV360 manual

RES2CV360 Ports.jpg


Cable Routing using a x8 wide-port capable 6 Gb SAS/SAS RAID Controller
To ensure contiguous drive mapping when using x8 wide-port capable 6 Gb SAS/SAS RAID Controller with a SAS expander card, the system must be cabled as follows:

  • Cables from the SAS Expander to the hot swap backplane must be connected in order: A–D for 16-drive configurations, and A–F for 24-drive configurations.
  • The cables from the SAS controller can be attached to any of the remaining connectors on the SAS expander card.

is fairly important. If you don't cable the drive bays A->F (and presumably G), then the slot # in the sas2ircu <#> display gets confused.

If you do, then you get a nice display with the SAS expander and connected drives, listed with increasing slot and enclosure numbers (each HBA is an enclosure, and each expander seems to be an "enclosure" too): 0 and 1 for my two HBAs, and then the expander is enclosure 2, connected to enclosure 1.
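
If you want to sanity-check that mapping programmatically, a rough Python sketch for summarising sas2ircu <#> display output into enclosure/slot rows is below. The field labels it looks for ("Enclosure #", "Slot #", "Serial No") are assumptions about the sas2ircu output format, so verify them against your own output before relying on it.

Code:
# Rough sketch: summarise "sas2ircu <controller> display" into enclosure/slot rows.
# The field labels parsed here ("Enclosure #", "Slot #", "Serial No") are an
# assumption about sas2ircu's output format -- check them against your own output.
import subprocess

def list_drives(controller="0"):
    out = subprocess.run(["sas2ircu", controller, "display"],
                         capture_output=True, text=True).stdout
    drives, current = [], {}
    for line in out.splitlines():
        key, _, value = line.partition(":")
        key, value = key.strip(), value.strip()
        if key == "Enclosure #":
            current = {"enclosure": value}
        elif key == "Slot #":
            current["slot"] = value
        elif key == "Serial No":
            current["serial"] = value
            drives.append(current)
    return drives

for d in sorted(list_drives("0"),
                key=lambda d: (int(d.get("enclosure", 0)), int(d.get("slot", 0)))):
    print(f"Enclosure {d.get('enclosure')}  Slot {d.get('slot', '?'):>2}  Serial {d.get('serial')}")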

So, the next thing to do is test if I can set a drive to boot off the expander...

Which apparently works fine. You need to ensure the HBA has its option ROM installed and that option ROMs are enabled for the PCIe slot it's installed in; you can then configure the HBA boot order in the Avago BIOS utility, and then you can select any of the devices attached to the entire SAS topology (apparently) from the SM BIOS' boot screen, under HD disk priority.

Sweet. This means I'll be able to fall back to booting directly off the HBA, or should be able to simply pass it in to boot with ESXi. Won't even need a VMDK for FreeNAS. I think.
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
Nice. Looking forward to more details. I LOVE the photos.
 

Nerull

Cadet
Joined
Apr 19, 2018
Messages
2
I'm revising this build a bit over the following weeks/months. The plan is basically to convert it to an ESXi-based FreeNAS AIO, very similar to my baby-version pilot system.
...

I'm looking to do a similar build to this and just have a quick question. If you aren't going to be using ESXi and need passthrough, then all you need to do to power the 6 backplanes is connect them to the 6 onboard SATA ports? Will this lead to any bottlenecks with mechanical HDDs?
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
I'm looking to do a similar build to this and just have a quick question. If you aren't going to be using ESXi and need passthrough, then all you need to do to power the 6 backplanes is connect them to the 6 onboard SATA ports? Will this lead to any bottlenecks with mechanical HDDs?

Each of the six backplanes needs to connect to 4 SATA/SAS ports.

I’m confused by your question :)
 

Nerull

Cadet
Joined
Apr 19, 2018
Messages
2
Each of the six backplanes needs to connect to 4 SATA/SAS ports.

I’m confused by your question :)

Yeah, I was confused about it all back then too, lol. Spent the day Googling and figured it all out. Thanks for the reply though! Norco case incoming for my build.

What was confusing me was how you could go from the SATA ports to connect 4 drives; I didn't realize each backplane was connecting to 4 SATA ports. Once I found that out, it made things a lot clearer.
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
What was confusing me was how you could go from the SATA ports to connect 4 drives; I didn't realize each backplane was connecting to 4 SATA ports. Once I found that out, it made things a lot clearer.
Or you can use a SAS controller. You still need a 4-lane SAS cable to each of the backplanes. I imagine that is why the SAS expander was added.
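
To make the port arithmetic concrete (the counts come straight from the discussion above; the onboard SATA figure is an assumption for an X10SRi-F class board):

Code:
# Port-count arithmetic for a 24-bay RPC-4224 (numbers come from the discussion
# above; the onboard SATA count is an assumption, check your board's spec).
BACKPLANES    = 6
LANES_PER_BP  = 4        # one SFF-8087 (4x SATA/SAS lanes) per backplane
ONBOARD_SATA  = 10       # typical for an X10SRi-F class board (assumption)
LANES_PER_HBA = 8        # e.g. a typical -8i HBA

lanes_needed = BACKPLANES * LANES_PER_BP                              # 24
print(f"Lanes needed: {lanes_needed}")
print(f"Short by {lanes_needed - ONBOARD_SATA} lanes on onboard SATA alone")
print(f"HBAs needed without an expander: {-(-lanes_needed // LANES_PER_HBA)}")  # ceil = 3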
 
Kevin Horton

Joined
Dec 2, 2015
Messages
730
I've now got some Noctua NF12 iPPC 3000rpm fans on order, and will try the system with those fans installed.
Thanks for mentioning these higher-speed fans. I built my RPC-4224 server with 1500 RPM Noctua NF12 hard drive cooling fans, and it worked quite well for many months. But after we moved to a different house, the server is living in a slightly warmer environment, and the HD temperatures are a bit warmer than I would like when they are heavily loaded for a long period (the warmest HD would hit 41°C on rare occasions).

I recently switched out the 1500 RPM fans for the 3000 RPM iPPC fans and found that they make a huge difference. It took a couple of days to optimize the PID gains for the new fans, but now everything is working perfectly. The server is just as quiet as before the majority of the time, when less than 1500 RPM is needed, but it now has the cooling capacity to handle the infrequent cases when it is working hard, without having the hard drive temperatures edge up to 40°C.
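
For anyone wondering what "PID gains" refers to here: the fan script compares the hottest drive temperature against a setpoint and nudges the fan duty cycle accordingly. Below is a toy Python sketch of that control loop, not the actual script from this thread; the gains, setpoint, device list and the ipmitool raw bytes are all placeholders (the raw command shown is the one commonly cited for Supermicro X9/X10 fan zones, so verify it for your own board first).

Code:
# Toy PID fan-control loop (NOT the actual script from this thread). Gains,
# setpoint, device list and polling interval are placeholders, and the ipmitool
# raw command is the one commonly cited for Supermicro X9/X10 fan zones --
# verify it against your own board before using anything like this.
import subprocess, time

DEVICES  = [f"/dev/da{i}" for i in range(16)]     # illustrative device list
SETPOINT = 37.0                                   # target max HD temp, deg C
KP, KI, KD = 4.0, 0.2, 6.0                        # made-up gains; tune for your system
INTERVAL = 60                                     # seconds between adjustments

def max_hd_temp():
    temps = []
    for dev in DEVICES:
        out = subprocess.run(["smartctl", "-A", dev], capture_output=True, text=True).stdout
        for line in out.splitlines():
            f = line.split()
            if len(f) >= 10 and f[0] == "194":    # SMART attribute 194
                temps.append(int(f[9]))
    return max(temps) if temps else SETPOINT

def set_fan_duty(duty):
    duty = max(20, min(100, int(duty)))           # clamp to a sane range
    # Zone 0x01 is the peripheral fan zone on many Supermicro boards (assumption).
    subprocess.run(["ipmitool", "raw", "0x30", "0x70", "0x66", "0x01",
                    "0x01", str(duty)], check=False)

integral = prev_error = 0.0
while True:
    error = max_hd_temp() - SETPOINT
    integral = max(-100.0, min(100.0, integral + error * INTERVAL))   # basic anti-windup
    derivative = (error - prev_error) / INTERVAL
    prev_error = error
    set_fan_duty(50 + KP * error + KI * integral + KD * derivative)
    time.sleep(INTERVAL)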
 

Benny_rt2

Cadet
Joined
Dec 28, 2018
Messages
1
Hi,

I've enjoyed reading through this thread even though it is a few years old. I may try your IPMI fan script, as I just purchased a 2018 Norco 4224 and am using an SM mobo too. One thing I noticed is that Norco (or IPC Direct, Norco's reseller) has two versions of the drive caddies for sale. The newer version is similar to what I have in my new 4224 and looks the same as in your photos. These newer versions do not have the airflow cutoff slide.

I think Norco still sells their older drive caddy separately, which I think (I've not confirmed yet) does have the sliding vent, as seen in this vid from 2013:
https://www.youtube.com/watch?v=FDImEf-IA-4&t=2s

http://www.ipcdirect.net/hard-drive...0-rpc-3216-rpc-3116-rpc-2208-rpc-1204-ss-500/
compared to the newer version here without the sliding vent:
http://www.ipcdirect.net/hard-drive...4-ss-500-ds-24e-ds-24d-ds-12e-ds-12d-ds-1500/

Anyway, at $5 USD per tray it doesn't sound bad, but buying 24 of them isn't exactly cheap either. Also, IPC Direct charges huge amounts for USPS standard shipping, which kind of negates the sale. Thought I'd mention it though, in case anyone wanted to swap out a few empty caddies for the older design, which has a sliding vent they can close.

Benny


...
It was quite noticeable that all the incoming air was bypassing the block of HDs at the top of the bay area and instead coming in through the remaining holes below the bay area... you could feel it with your hand.

Norco provides some packaging material with their chassis. It just so happens that it's a nice density and just slightly thicker than an HD.

View attachment 13854
(Norco packing foam is just the perfect thickness to make drive tray dummy blocks)

So, I cut some strips slightly wider than the drive trays, then cut those into smaller blocks. They seemed to fit perfectly into the drive trays... and in went my custom designed dummy drive trays. The fit is very good, and the foam is easily removed with the friction fit.

Thank you, Norco.
...
 

rivey

Contributor
Joined
Sep 20, 2017
Messages
123
I too have enjoyed reading this thread. I just received a new RPC-4224 from UPS this afternoon. Both Stux and Kevin Horton mentioned changing their HD fans to the Noctua NF12 iPPC 3000 RPM fans, and I am curious how they are working. I am using a Supermicro X9SRL-F motherboard in my system and am about to transfer to this new case. I will also be ordering a rack to go under my desk, plus rack rails. Have there been any further PID changes to make the fans work better? Also, can you go into more detail on how to install the scripts for this to work? I am still new to these server systems, so any help on the scripts would be greatly appreciated. One other question: there were no instructions in the box for the case. Obviously they weren't really needed, but it would have been nice to know what a couple of the metal pieces in the parts box were. I don't need them, but it is always helpful to know. I know that one of them is to be used with redundant power supplies, but I sure don't see what the extra fan mounting plate is for. Thanks, Bob Ivey
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
Yes. New fans work well.

I was disappointed not to receive a scrap of paper with simple instructions too. Never did work out what all the metal pieces are for. One is for holding the different PSU types.

There are two fan wall styles, 3x 120mm and 4x 80mm (IIRC). They are interchangeable.

Is this what you are referring to?

120mm fans are bigger and thus tend to run quieter.
 

rivey

Contributor
Joined
Sep 20, 2017
Messages
123
Yes. New fans work well.

I was disappointed not to receive a scrap of paper with simple instructions too. Never did work out what all the metal pieces are for. One is for holding the different PSU types.

There are two fan wall styles, 3x 120mm and 4x 80mm (IIRC). They are interchangeable.

Is this what you are referring to?

120mm fans are bigger and thus tend to run quieter.

I just wanted to verify the exact model of Noctua fans you are using. I am assuming that you are using the 3000 RPM fans at this point. My case did come with the 3x 120mm fan wall as well as the dual OS SSD mount.

I was also curious what changes you have made to the PID script since yours has been in service.

I am now also back to ordering additional parts, as my current CPU cooler is a Noctua cooler with a 120mm fan, so I will be getting the one you specified in the thread. Add that to the rack and rails, and it feels like I am spending just as much on the case, rack, and associated parts as I spent on the X9 MB and all those parts.
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
I just wanted to verify the exact model of Noctua fans you are using. I am assuming that you are using the 3000 RPM fans at this point. My case did come with the 3x 120mm fan wall as well as the dual OS SSD mount.

I was also curious what changes you have made to the PID script since yours has been in service.

I am now also back to ordering additional parts, as my current CPU cooler is a Noctua cooler with a 120mm fan, so I will be getting the one you specified in the thread. Add that to the rack and rails, and it feels like I am spending just as much on the case, rack, and associated parts as I spent on the X9 MB and all those parts.

I use the same script, which is posted here.

I would probably suggest looking at the PIDified version.
 

rivey

Contributor
Joined
Sep 20, 2017
Messages
123
I use the same script, which is posted here.

I would probably suggest looking at the PIDified version.

Forgive me for being dense here, but what is the PIDified version? I did a search for the term and found nothing. Also, as a noob as far as FreeNAS is concerned, can you point me in the direction of a tutorial on how to use the scripts? I have spent hours trying to find some direction on how to use them. Thanks, Bob
 