Reverted to build #1...


rvassar

I've been mostly happy with FreeNAS 11u5 on my little Dell Optiplex 790. But as I related to @Chris Moore, the more use I make of it, the more obvious it becomes that I'm doing another build at some point. The major issues are:

1 - Non-server-grade hardware: no ECC, no IPMI, limited disk expansion options, PCIe lanes bundled uselessly into x4 & x16 slots.
2 - Lack of CPU/memory capacity to support jails & VMs, or larger ZFS pools.
3 - USB boot devices, which have now bitten me repeatedly.

With that in mind I started poking around with the idea of picking up a 2+U rackmount server of recent vintage. Something with at least 6 - 8 DDR3 slots, and 12 drive bays. But my original motivation for the Optiplex chassis was to keep the noise & heat down. Moving to a rack server would require me to address noise & power density issues. Some of my thinking has been summed up in the home racks discussion here:
https://forums.freenas.org/index.php?threads/19-racks-for-home.64450/
and I am still pursuing building some kind of office rack with an eye toward noise management. I need to find out how much power & cooling capacity I actually have here in the home office, and just how much air I need to move to keep things cool.

There's nothing like a good experiment!

With that in mind, I decided to revert to the old system I keep around for test purposes, the machine I initially evaluated FreeNAS on: an 11-year-old Dell SC1430. I installed a second X5355 CPU, bringing it to 8 cores and 240 W of combined TDP, plus a SAS HBA to get modern SATA speeds and a small SSD boot device. It already had 16 GB of ECC memory (DDR2).

So far my office is roughly 1.5 °C hotter, and I'm pulling almost 100 watts more power, so it costs about 20 cents a day more to run. I am actually wondering whether I really need the second socket occupied. On the noise front, it is louder, but tolerable. I did place it in the tower cabinet in my desk, but I have to keep the door propped open to hold the drive temps down (smartctl is reporting 39-43 °C). If I close the cabinet door, the fans spin up a little, which negates any sound-dampening benefit, and the drives warm up to 46 °C. I'm guessing I need at least 2 x 120 mm fans to keep this in check in a fully enclosed cabinet.
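For anyone checking the math: 100 watts around the clock is 2.4 kWh/day, so 20 cents a day implies an electric rate somewhere around $0.083/kWh. A quick sanity check in Python:

```python
# Daily cost of the extra ~100 W the second CPU pulls.
extra_watts = 100
rate_per_kwh = 0.083                    # implied residential rate in $/kWh
kwh_per_day = extra_watts / 1000 * 24   # 2.4 kWh/day
print(f"${kwh_per_day * rate_per_kwh:.2f} per day")   # -> $0.20
```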
 

cdiddy

Hi! Your description intrigued me, as I deal with hot equipment in enclosed spaces in my line of work on a daily basis. Generally speaking, the assumption is that if your cabinet is wood (or some kind of faux-wood plastic-y stuff, particle board with laminate, etc.), then it is a natural insulator. No matter how many fans you put in the PC case to exhaust heat into the enclosed cabinet, eventually you are going to see that heat transfer back into the PC case because it cannot escape into the world fast enough. If you have to (or just really want to) keep your box in another box, find some way of venting the cabinet... cut a hole in the back of the desk and mount a really good fan on that, or better yet, do what we do at work when someone "needs to" keep hot electronics in a wooden credenza or cabinet... install a vent in the wall right up into the attic, with a push fan in the cabinet feeding the vent and a pull fan in the attic.

I keep my FreeNAS at home (in my study or whatever you call it) on the desk, in the corner. It's a pretty big U-shaped desk, so it's out of the way enough for me. And it's dead silent. Rather than go with server-grade cases and fans, I chose to put my server-grade hardware in a slick gamer case, because gamers are OCD about super-powerful PCs in next-to-silent enclosures. Water-cooled CPU heatsink, the self-contained kind from Cooler Master, and 8 x 140 mm super-quiet fans... temps max out at 37-39 °C and I can't even hear it 4 feet from my head over the ceiling fan and the computer I'm using right now... I can obviously hear it in a silent room, but I have kids and I live in Texas (non-stop AC and ceiling fans), so there's no such thing as a silent room.
 

joeinaz

Let me echo what cdiddy said: home deployments can be cool and quiet with a quality gamer case. I have three such cases; two of them hold twelve 3.5" drives and the third holds ten. Two of the three could also house my 5.25" DDS-4 tape drive. All are very quiet.
 

rvassar

cdiddy said:
"Hi! Your description intrigued me, as I deal with hot equipment in enclosed spaces in my line of work on a daily basis. Generally speaking, the assumption is that if your cabinet is wood (or some kind of faux-wood plastic-y stuff, particle board with laminate, etc.), then it is a natural insulator. No matter how many fans you put in the PC case to exhaust heat into the enclosed cabinet, eventually you are going to see that heat transfer back into the PC case because it cannot escape into the world fast enough."

Yes, I understand this. I am also in Texas, and have similar conditions, except my house is new construction, EnergyStar 2016 rated. My attic is actually almost the same temp as the rest of the house, thanks to expanded-foam-insulated eaves. It's basically built like a giant Yeti cooler.

But... The major writing oversight in my post was this: the 120 mm fans I mentioned would be on the cabinet, not in any individual system case. I'm looking at pulling air in at the bottom and ducting it up the front, letting rack and/or tower cases pull clean, cold air front to back, then ejecting the hot air through an upper duct in the back with powered fans. I'd like to line the ducting with noise-absorbing material (carpet? sub-floor padding? acoustic mat?) to help with the noise. I'm actually considering using a Raspberry Pi to manage the environment; a rough sketch of what I mean is below. The goal is a capable FreeNAS box with 4-8 drives, and a capable ESXi box for my work & development projects, using second-hand COTS server equipment. I have to manage the heat and the noise such that I can concentrate and/or conduct phone calls, etc...
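Something along these lines, assuming a DS18B20 1-wire temp sensor and duct fans driven off GPIO 18 through a simple transistor driver (all placeholder hardware choices; nothing is picked out yet):

```python
# Rough cabinet fan controller sketch for a Raspberry Pi.
# The DS18B20 1-wire sensor and the fans PWM'd on GPIO 18 are placeholders.
import glob
import time

import RPi.GPIO as GPIO

FAN_PIN = 18        # hypothetical pin driving the duct fans
TARGET_C = 35.0     # keep cabinet air below this

# the DS18B20 shows up under the 1-wire bus in sysfs
sensor = glob.glob("/sys/bus/w1/devices/28-*/w1_slave")[0]

def read_temp_c():
    # the sensor file ends with "t=23125"-style millidegrees C
    with open(sensor) as f:
        return int(f.read().rsplit("t=", 1)[1]) / 1000.0

GPIO.setmode(GPIO.BCM)
GPIO.setup(FAN_PIN, GPIO.OUT)
# software PWM at 100 Hz: chopping fan power through a transistor,
# not driving a 4-pin fan's 25 kHz control line
fan = GPIO.PWM(FAN_PIN, 100)
fan.start(0)

try:
    while True:
        # crude proportional response: full speed at 10 C over target
        duty = max(0.0, min(100.0, (read_temp_c() - TARGET_C) * 10.0))
        fan.ChangeDutyCycle(duty)
        time.sleep(5)
finally:
    fan.stop()
    GPIO.cleanup()
```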

One of my concerns was just how much heat I can generate before the room becomes uncomfortable, or before the case temps get out of control. This SC1430 with the second quad-core appears to be pushing the limit in the built-in cabinet of my desk. But I suspect the second socket's heatsink has lost the vacuum in its heat pipes, as it appears to be running fully 10 °C hotter than the first.
 

joeinaz

The biggest challenge with your solution is the use of older X5xxx processors. You can match or greatly exceed your current performance with newer CPUs. For example, the CPU charts show a single 4-core X5355 at 3250 units of performance. With multiprocessor overhead, expect about 5200 units of performance from the pair, at 240 W of power. Compare that to a single 8-core E5-2650L that gets you almost 8700 units of performance from a SINGLE 70 W CPU. Or consider an E5-2650, which affords 8 cores and over 10000 units of performance in a single 95 W CPU. In my case, the twelve 7200 RPM disks are the heat makers; with 4 to 8 drives, you will have less of an issue.
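Reduced to performance per watt, using just the figures above, the gap is stark:

```python
# Perf-per-watt from the benchmark figures quoted above.
cpus = {
    "2x X5355 (w/ SMP overhead)": (5200, 240),   # (units, total watts)
    "1x E5-2650L":                (8700, 70),
    "1x E5-2650":                 (10000, 95),
}
for name, (units, watts) in cpus.items():
    print(f"{name}: {units / watts:.0f} units/watt")
# -> roughly 22 vs. 124 vs. 105 units/watt
```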

With the right tower server, there are easy ways to get both the cooling and quiet you need. I have one system with 12 disks, an E5-2650 CPU, 32GB of RAM, three 140mm fans, and one 10" fan to keep it cool...
 

rvassar

joeinaz said:
"The biggest challenge with your solution is the use of older X5xxx processors."

This is no solution! It's an experiment...

I did want to get away from the non-ECC Optiplex chassis and the USB boot devices, so I'll be keeping the SC1430 for a few months while I cobble together some bits. But the dual X5355s are not staying. It's not just cabinet temps and room comfort; I'm having trouble keeping some of the drives cool. The upper-bay drives are hitting 46-48 °C, and that will shorten their lives rather quickly.
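Something like this would do for keeping an eye on them (FreeBSD-style ada device names assumed; adjust the list to your layout):

```python
# Dump drive temps via smartctl.
import subprocess

drives = [f"/dev/ada{i}" for i in range(4)]   # placeholder device list

for dev in drives:
    out = subprocess.run(["smartctl", "-A", dev],
                         capture_output=True, text=True).stdout
    for line in out.splitlines():
        if "Temperature_Celsius" in line:
            # the raw value is the 10th column of the attribute table
            print(f"{dev}: {line.split()[9]} C")
```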

One of the nice things about experimenting with this chassis is that I have 4 or 5 different CPUs lying around in the junk box. I think I even have a dual-core 65 W Woodcrest chip. Might be interesting to see how that performs, maybe do some scrub time comparisons, etc... My use case is mostly NFS, not transcoding, but I would like to have enough capacity to support a 10GbE interface at some point. (Not in this chassis... PCIe x4 only...) I would also eventually like to host some network-services VMs, DNS & DHCP, etc., but nothing particularly CPU intensive. If I decide the COTS servers are not going to work, I'll build something with a Supermicro MB, and perhaps go the water-cooling route.
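If I do run those scrub comparisons, a quick-and-dirty timer along these lines would do ("tank" standing in for the real pool name):

```python
# Time a scrub from start to finish.
import subprocess
import time

POOL = "tank"   # placeholder pool name

subprocess.run(["zpool", "scrub", POOL], check=True)
start = time.time()
while "scrub in progress" in subprocess.run(
        ["zpool", "status", POOL],
        capture_output=True, text=True).stdout:
    time.sleep(60)
print(f"scrub took {(time.time() - start) / 3600:.2f} hours")
```

(zpool status prints the elapsed time itself once a scrub completes, so the loop is really just there to notice when it's done.)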
 