Build Report: Node 304 + X10SDV-TLN4F [ESXi/FreeNAS AIO]

Stux

MVP
Joined
Jun 2, 2016
Messages
4,358
The goal of this build was firstly to provide a strong 6-bay NAS, Plex server and offsite backup target for my Primary NAS. It also needs to run ESXi in a performant manner, hosting Windows, OSX and Linux guests as well as a pfSense gateway. The ESXi installation was the pilot project for an ESXi upgrade I am planning for the Primary NAS.

The Supermicro X10SDV-TLN4F is a Xeon D-1541 Mini-ITX motherboard. The Xeon D is an 8-core Broadwell CPU which supports 128GB of dual-channel ECC memory across 4 slots. The board features dual 10GbE and dual gigabit Ethernet ports, 6 SATA ports, an M.2 2280 PCIe NVMe slot, a fully bifurcatable x16 PCIe slot (meaning it can support up to another 4 M.2 drives via a carrier card), and IPMI.

The Node 304 is a stylish, relatively small 6-bay chassis with high WAF, and is thus suitable for use in a living room... with suitable fan control modifications.

Parts List:
Case: Fractal Design Node 304
PSU: Corsair RM550x
Motherboard: Supermicro X10SDV-TLN4F
Memory: 2x 16GB Crucial DDR4 RDIMM
USBs: 2x Sandisk Cruzer Fit 16GB USB2
HDs: 6x Seagate IronWolf 8TB
HD Fans: 2x Noctua 92mm NF-A9 PWM
Exhaust Fan: Noctua 140mm NF-A14 PWM
ESXi Boot device: Samsung 960 Evo 250GB M.2 PCIe NVMe SSD
SLOG: Intel P3700 400GB HHHL AIC PCIe NVMe SSD
Misc: A PC piezo buzzer and a USB3 to USB2 header adapter from eBay

IMG_1058.jpg

Node 304 in all its glory

First things first: install the motherboard, RAM, PSU, boot USBs, FreeNAS and HDs, then test the memory and burn in the HDs. Pity I don't have a photo of the Medusa that was all the SATA cables before I cable-managed them.
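If you're wondering what a burn-in involves, the usual recipe around these forums is a SMART long test plus a destructive badblocks pass on each drive. A minimal sketch, assuming the drives appear as ada0 through ada5 (adjust the device names to suit your system):

Code:
# SMART long self-test (non-destructive)
smartctl -t long /dev/ada0

# destructive write+verify pass over the whole drive -- only BEFORE the pool exists!
badblocks -ws /dev/ada0

# review the SMART attributes afterwards for reallocated/pending sectors
smartctl -a /dev/ada0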

PS: It's easier to upgrade the HD fans *before* you install the PSU, so you may want to do that first.

IMG_1054.jpg

The Corsair RM550x PSU has more than enough power for this use case, and its fan stays at 0 RPM below roughly 50% load, so it runs silent here. It fits in the case and has a 10-year warranty. It works well to route the PSU power cable along the case edge and back around.


IMG_1264.jpg

One of the tricks is wiring up the front panel connectors. Hopefully this picture helps anyone following in my footsteps.


IMG_1279.jpg

The HD Audio cable is tucked under the motherboard, and the USB3 front panel cable is temporarily looped around while I await an adapter; eventually I'll run it between the PSU and the motherboard edge to the other side. A PC buzzer speaker has been added. The M.2 and SATA cables are already in place, because I've already tested the system.


IMG_1076.jpg

Also, the Node 304 has USB3 ports on its front panel, but the X10SDV only has USB2 headers on the motherboard (the rear ports are USB3). To connect the front ports to the motherboard you will need a USB3 to USB2 motherboard header adapter, available for a few bucks off eBay. I routed the USB3 cable from the front panel between the PSU and the edge of the motherboard, then to the adapter, and on to the motherboard. The main ATX cable is also looped around.

Fans!

After all the hardware was tested, I began performing thermal tests. Long story short: the CPU fan is obnoxious at full speed, and the Fractal fans aren't PWM-controllable, so to keep temperatures satisfactory I had to run them at full speed. That was too noisy, so I replaced them with Noctua PWM fans.

Now, because I'm using 4 PWM fans (2x HD intake, 1x exhaust and 1x CPU), I'm going to need a fan control script. Luckily, I've already written a dual-zone fan control script. Later on I will provide my tuned settings for this case/motherboard.

Screen Shot 2017-08-19 at 2.09.45 AM.png

The X10SDV v2.0 board supports 4 fans across 2 fan zones. Fans 1-3 are Fan Zone 0 and Fan 4 is Fan Zone 1. From the factory, the CPU fan is connected to FAN1. This means all your HD and exhaust fans would need to be ganged off FAN4. Alternatively, you can extend the CPU fan to FAN4 with one of the Noctua fan extension leads, and then connect the HD fans to FAN1 and FAN3 and the exhaust fan to FAN2.
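If you want to poke at the zones by hand before the script is set up, the X10-generation boards accept raw IPMI fan-duty commands. A hedged sketch (these raw codes are what the fan-control scripts floating around the forums use on X9/X10 boards; verify against your own board before relying on them, and note the duty cycle is given in hex):

Code:
# read the current duty cycle of zone 0 (FAN1-3)
ipmitool raw 0x30 0x70 0x66 0x00 0x00

# set zone 0 to 50% duty (0x32 = 50 decimal)
ipmitool raw 0x30 0x70 0x66 0x01 0x00 0x32

# set zone 1 (FAN4) to 100% duty (0x64 = 100 decimal)
ipmitool raw 0x30 0x70 0x66 0x01 0x01 0x64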

Screen Shot 2017-08-19 at 2.09.31 AM.png


This means you also have the benefit of being able to see the RPM of *ALL* of your fans.

IMG_1268.jpg


It's easier to install the Noctua fans with their rubber mounts if you start in the centre and then do the outsides. Have the tails on the bottom outside edges so they can reach the motherboard headers without extensions.

IMG_1269.jpg

This fan's cable routes below the corner of the HD to the FAN1 connector. It just makes it without being too tight.
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,358
IMG_1271.jpg

The exhaust fan was installed with its tail near the DIMM slots, then routed above the DIMMs to FAN2. The CPU fan needs an extension added, and connects to FAN4.

IMG_1274.jpg

It's hard to tell, but here you can see the CPU fan cable. I routed the extension cable from FAN4 along the edge of the motherboard, and back along the PSU/ATX cable with a cable tie to stop it dangling around. The cable running parallel with the DIMMs is the exhaust fan's.

IMG_1305.jpg

The CPU fan extension is plugged into FAN4, which is the header on the left; the other header is FAN3, connected to the remaining HD fan. It's a good fit.

IMG_1310.jpg

Worth noting: once you install a PCIe card, you can't get at these headers anymore...

IMG_1282.jpg


The built-in Fractal fan controller, with all its dangling bits, was now redundant and annoying... so I unplugged and removed its wiring harness.

IMG_1275.jpg


So, here's the thing: the serials on these HDs can't be seen without removing them from the case... which is relatively hard... and it's very important that you verify the serials before you pull a drive. The solution is to simply label the drives.

IMG_1276.jpg


Only the last 4 digits of the serials vary on these drives... so I don't need to bother with the full serial. The stickers are lined up with the bottom of the SATA power connector... this means they'll be visible when installed.
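And if you want to double-check a label against what FreeNAS sees before pulling a drive, something like this works from the shell (the device names are examples; yours may differ):

Code:
# print a device -> serial mapping for all SATA disks
for d in /dev/ada?; do
  echo -n "$d: "
  smartctl -i $d | grep 'Serial Number'
done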

Taming the HD rats nest!

IMG_1285.jpg


The bottom SATA power run is cable managed and connected... (sorry about the blur)

IMG_1287.jpg


Then the bottom 3 SATA connectors...

IMG_1288.jpg


The next set of SATA connectors...
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,358
IMG_1289.jpg

And the final top row of SATA power...

IMG_1298.jpg

(oh, look, a PCIe SSD...)
Now, the SATA cables are looped like so... and the SATA power cable thus... trying not to run power parallel with the signals...

You might wonder why I didn't just do something like this...

IMG_1069.jpg


The problem is, if you do that, the case won't close, because its mesh side panel interferes. Sigh.

IMG_1283.jpg

Also, when doing up the HD retainer thumbscrews, I *strongly* advise holding your hand under them. They have a bad habit of shedding metal filings, which would otherwise sprinkle onto your motherboard and do bad, bad things. You can actually make out some metal fragments in this picture, on the last joint of my middle finger!

IMG_1279.jpg


Now, if you're going to run ESXi, you need a datastore to store, at a minimum, your FreeNAS VM configuration. You might as well store your FreeNAS virtual disk on it too... and boot off it as well... so I used a Samsung 960 Evo. You could also use a 32GB Optane. You can also use this datastore for your L2ARC and swap, but not for your SLOG.

And if you plan to run VMs on ESXi, then you'll be needing a SLOG, unless you like 5MB/s.
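(That 5MB/s figure comes down to ESXi issuing its NFS writes synchronously, so every write waits on the ZIL. If you want to see the effect for yourself, you can flip sync on the VM dataset; a sketch assuming a hypothetical dataset named tank/vmware, and note that sync=disabled is unsafe for real VM data:)

Code:
# what ESXi effectively gets over NFS
zfs set sync=always tank/vmware

# UNSAFE: shows what speed looks like with sync (and the SLOG) out of the picture
zfs set sync=disabled tank/vmware

# back to default behaviour
zfs set sync=standard tank/vmware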

IMG_1260.jpg


A SLOG ideally needs to be fast at writing. It needs Power Loss Protection (PLP). It should have high endurance. And it has to fit in the case and on this board... which pretty much makes this the absolute king of SLOGs, unless you want to pay ludicrous money as opposed to merely crazy. You could probably get away with an Intel 750 400GB, but beware its endurance rating. This one is actually destined for another server... but I get to play with it in the meantime. It can write at about 1.2GB/s continuously, and is rated for 4TB/day for 5 years! Pretty soon there should be some M.2 drives with PLP, but as they will most likely be in the 22110 form factor, they probably won't fit in the motherboard slot, so you'd need a PCIe carrier card... at least then you could load up 2 or 4 drives, depending on the carrier. Alternatively, you could use a SATA card, I guess... but I wanted to save the PCIe slot for NVMe drives: one initially, and eventually 4 :)
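(If you do the arithmetic on that rating: 4TB/day x 365 x 5 ≈ 7.3PB written, which lines up with Intel's quoted ~7.3 PBW endurance for the 400GB P3700, if I'm reading the spec sheet right.)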

IMG_1295.jpg


Installed.

IMG_1299.jpg


Looking down at the beastly Xeon D ;)

IMG_1300.jpg


Bit hard to take glamour shots... but I think this is the best I could do :)
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,358
Fan Control

WARNING:
Until you have the fan control script implemented, set your IPMI Fan Mode to Full. In fact... just set your fan mode to full. The script will take it from there.
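If you'd rather set it from a shell than from the IPMI web UI, the commonly used raw commands on these boards are as follows (a hedged sketch; double-check against your board and firmware):

Code:
# set fan mode to Full (0x01); 0x00 = Standard, 0x02 = Optimal, 0x04 = HeavyIO
ipmitool raw 0x30 0x45 0x01 0x01

# read back the current mode; should print 01
ipmitool raw 0x30 0x45 0x00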

I use my Hybrid Fan Control script, which is here:
https://forums.freenas.org/index.php?threads/script-hybrid-cpu-hd-fan-zone-controller.46159/

With the following settings:

Code:
###############################################################################################
## CONFIGURATION
################

## DEBUG LEVEL
## 0 means no debugging. 1,2,3,4 provide more verbosity
## You should run this script at debug level 1 or higher to verify it's working correctly on your system
$debug = 1;

## CPU THRESHOLD TEMPS
## A modern CPU can heat up from 35C to 60C in a second or two. The fan duty cycle is set based on this
$high_cpu_temp = 70;			# fans will go HIGH when we hit this temp
$med_cpu_temp = 60;			 # fans will go MEDIUM when we hit this temp, or drop below it again
$low_cpu_temp = 50;			 # fans will go LOW when we fall below this temp again

## HD THRESHOLD TEMPS
## HDs change temperature slowly.
## This is the temperature that we regard as being uncomfortable. The higher this is the
## more silent your system.
## Note, it is possible for your HDs to go above this... but if your cooling is good, they shouldn't.
$hd_max_allowed_temp = 40;	  # celsius. You will hit 100% duty cycle when your HDs hit this temp.

## CPU TEMP TO OVERRIDE HD FANS
## when the CPU climbs above this temperature, the HD fans will be overridden
## this prevents the HD fans from spinning up when the CPU fans are capable of providing
## sufficient cooling.
$cpu_hd_override_temp = 75;

## CPU/HD SHARED COOLING
## If your HD fans contribute to the cooling of your CPU you should set this value.
## It will mean when your CPU heats up, your HD fans will be turned up to help cool the
## case/cpu. This would only not apply if your HDs and fans are in a separate thermal compartment.
$hd_fans_cool_cpu = 1;		  # 1 if the hd fans should spin up to cool the cpu, 0 otherwise


#######################
## FAN CONFIGURATION
####################

## FAN SPEEDS
## You need to determine the actual max fan speeds that are achieved by the fans
## Connected to the cpu_fan_header and the hd_fan_header.
## These values are used to verify high/low fan speeds and trigger a BMC reset if necessary.
$cpu_max_fan_speed	  = 6500;
$hd_max_fan_speed	   = 1800;


## CPU FAN DUTY LEVELS
## These levels are used to control the CPU fans
$fan_duty_high  = 100;		  # percentage on, ie 100% is full speed.
$fan_duty_med   = 70;
$fan_duty_low   = 40;

## HD FAN DUTY LEVELS
## These levels are used to control the HD fans
$hd_fan_duty_high	   = 100;  # percentage on, ie 100% is full speed.
$hd_fan_duty_med_high   = 80;
$hd_fan_duty_med_low	= 50;
$hd_fan_duty_low		= 30;   # some 120mm fans stall below 30.


## FAN ZONES
# Your CPU/case fans should probably be connected to the main fan sockets, which are in fan zone zero
# Your HD fans should be connected to FANA which is in Zone 1
# You could switch the CPU/HD fans around, as long as you change the zones and fan header configurations.
#
# 0 = FAN1..5
# 1 = FANA
$cpu_fan_zone = 1;
$hd_fan_zone = 0;


## FAN HEADERS
## these are the fan headers which are used to verify the fan zone is high. FAN1+ are all in Zone 0, FANA is Zone 1.
## cpu_fan_header should be in the cpu_fan_zone
## hd_fan_header should be in the hd_fan_zone
$cpu_fan_header = "FAN4";	
$hd_fan_header = "FAN1";


Now, if you will be using ESXi, the script (running inside the FreeNAS VM) won't have direct access to the IPMI, and thus you need to change this line

Code:
$ipmitool = "/usr/local/bin/ipmitool";


to

Code:
$ipmitool = "/usr/local/bin/ipmitool -I lanplus -H <IPMI IP ADDRESS -U ADMIN -P <PASSWORD> ";


Where <IPMI IP ADDRESS> is the IP you use to log in to your IPMI, and <PASSWORD> is the password you use. Note there is a space after the password, before the closing quote.
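A quick sanity check that the remote IPMI connection works before wiring it into the script, using a made-up example IP (substitute your own details):

Code:
ipmitool -I lanplus -H 192.168.1.100 -U ADMIN -P <PASSWORD> sensor | grep FAN

If that prints the fan table, the script's remote calls should work too.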

The other thing is that when running inside ESXi, FreeNAS does not have direct access to the CPU core temperatures, so you need to change the script to read the CPU temperature from IPMI.

So change the following code:
Code:
#	   my $cpu_temp = get_cpu_temp_ipmi();	 # no longer used, because sysctl is better, and more compatible.
		 my $cpu_temp = get_cpu_temp_sysctl();


to:
Code:
	   my $cpu_temp = get_cpu_temp_ipmi();	 # used under ESXi; inside a VM, sysctl can't see the host CPU temps
#	 my $cpu_temp = get_cpu_temp_sysctl();


And the script will now read the CPU temp via IPMI instead of sysctl.
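You can preview what the BMC reports for the CPU before making the switch, e.g. (remote form shown, with the same made-up example IP as above):

Code:
ipmitool -I lanplus -H 192.168.1.100 -U ADMIN -P <PASSWORD> sdr type Temperature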

Follow the instructions in the hybrid fan control thread to install, configure as above, reboot, and it should just work.

You can test it at the various debug levels, and you can watch the log with:

Code:
tail -f /path/to/fan_control_log

NOTE: If you are using the Noctua fans, you will need to adjust the IPMI fan thresholds to prevent fan surging; see this post for my recommended lower threshold settings.

NOTE: Sometimes the fan control script can fail to read the CPU temp due to spurious timeouts; see this post for a potential solution.
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,358
I plan to follow up with my instructions for setting up ESXi, etc with this configuration... but not tonight. :)
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
Very nice.
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
I plan to follow up with my instructions for setting up ESXi, etc with this configuration... but not tonight. :)
WHAT? You think you need sleep or something? :)
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,358

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,175
A PC piezo buzzer
Doesn't the board have one integrated? All other Supermicro boards I'm familiar with have one, IIRC.
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,358
Doesn't the board have one integrated? All other Supermicro boards I'm familiar with have one, IIRC.

Don't believe so, and neither does my X10SRi-F, nor any other recent PC mobo I've seen :)

(I bought a five pack... figure they should last me a while :) )
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,175

Stux

MVP
Joined
Jun 2, 2016
Messages
4,358
You're right. Must've been mistaken. Either that or I've got two buzzers in there ;)
 

Dog

Cadet
Joined
Aug 19, 2017
Messages
6
Thanks for this. I'm planning on basing my first build off of yours. Looking forward to the rest of the report.
 

MrToddsFriends

Documentation Browser
Joined
Jan 12, 2015
Messages
1,338
The bottom SATA power run is cable managed and connected... Then the bottom 3 SATA connectors... The next set of SATA connectors... And the final top row of SATA power...

Exemplary cable routing!

Also, when doing up the HD retainer thumbscrews I *strongly* advise to hold your hand under them. They have a bad habit of producing metal filings, which would then sprinkle onto your motherboard, and produce bad bad things.

The same is/was true with my sample of the Node 304, at least during the first few screwdriving acts for each thumbscrew. Very good advice!
 

LIGISTX

Guru
Joined
Apr 12, 2015
Messages
525
Ok, most of this is way over my head, BUT I am curious about ESXi and all it can do. I have only a very rudimentary idea of what a hypervisor does, and to say I'm a noob is an understatement. I know a few key words lol.

Point is, I do run a pfSense router. Pretty noob with that as well, but I had a bunch of spare parts (i3, 8 gigs of RAM, an SSD, you know, just your modest home router LOL) and would be somewhat interested in changing up my build plans to try and put them both into one server. Is that a thing? Is it even worth it? Is it not an issue to no longer have the router on bare metal? And let's be honest, if I die in Counter-Strike or PUBG because my server/router/NAS hurt my latency due to this, ima be pissed lol.

Point is: just leave good enough alone and let my dead-silent but stupid-powerful pfSense box live on, or try and merge the two and once again have an i3 and some RAM I have no use for..? Also, I assume I would need to go with a powerful Xeon vs the i3 6100 I was planning for this new FreeNAS build.


 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,358
With this board, I can use the two gigabit ports as pfSense LAN/WAN.

That leaves the other two 10GbE ports: one for SAN traffic and one for normal network traffic.
 

LIGISTX

Guru
Joined
Apr 12, 2015
Messages
525
With this board, I can use the two gigabit ports as pfSense LAN/WAN.

That leaves the other two 10GbE ports: one for SAN traffic and one for normal network traffic.

So you end up with them both virtualized and that isn't an issue at all?

How much work is it to make this type of setup actually work, and what spec changes would I need to make it not slow and suck?


 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,358
So, when the fan script spins the fans down, it can actually trigger an IPMI error alert. When this happens, the BMC spins all the fans up to full, and then the hybrid_fan_control script resets the BMC, because the fans are at full instead of minimum ;)

Screen Shot 2017-08-21 at 12.09.18 PM.png


This is because the 140mm fan has a lowest speed of 300 RPM ±20% (i.e. as low as 240 RPM), and 240 RPM is below the default lower threshold of 300 RPM.

ipmitool sensor | grep FAN
Code:
FAN1			 | 1800.000   | RPM		| ok	| 300.000   | 500.000   | 700.000   | 25300.000 | 25400.000 | 25500.000
FAN2			 | 1500.000   | RPM		| ok	| 300.000   | 500.000   | 700.000   | 25300.000 | 25400.000 | 25500.000
FAN3			 | 1800.000   | RPM		| ok	| 300.000   | 500.000   | 700.000   | 25300.000 | 25400.000 | 25500.000
FAN4			 | 6600.000   | RPM		| ok	| 300.000   | 500.000   | 700.000   | 25300.000 | 25400.000 | 25500.000


The solution is quite simple.

We'll adjust the 92mm fans so the lower non-critical threshold is 300, with lower critical at 200 and lower non-recoverable at 100, and the 140mm fan to 200, 100 and 0 respectively.

Code:
ipmitool sensor thresh "FAN1" lower 100 200 300
ipmitool sensor thresh "FAN2" lower 0 100 200
ipmitool sensor thresh "FAN3" lower 100 200 300

FAN1 and FAN3 are the 92mm HD intake fans, and FAN2 is the 140mm exhaust fan. FAN4 is the CPU fan, which is the stock Supermicro unit, and should be fine where it is.

This will need to be redone every time you update the IPMI firmware, which sort of sucks, but there ya go.
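One way to soften that: keep the three commands in a little script you can re-run after a firmware update (or hook into your boot-time init). A sketch, with a made-up path:

Code:
#!/bin/sh
# /root/fix_fan_thresholds.sh -- re-apply Noctua-friendly lower thresholds (lnr lcr lnc)
ipmitool sensor thresh "FAN1" lower 100 200 300
ipmitool sensor thresh "FAN2" lower 0 100 200
ipmitool sensor thresh "FAN3" lower 100 200 300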
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,358
This will need to be redone every time you update the IPMI firmware, which sort of sucks, but there ya go.

Which reminds me, it's a good idea to start your build by updating the IPMI firmware (which can be done through the web interface). And the BIOS.

The BIOS can be upgraded via an ISO through the IPMI virtual media, I believe... but I may be wrong on that... I forget. If that doesn't work, then just burn the ISO to a USB stick and physically connect it.

https://tinkertry.com/supermicro-superserver-bios-12a-and-ipmi-358-released-summer-2017

While doing the IPMI update, when it says to wait and power off... just wait... it will eventually do its thing and come back. If you power off too soon, you'll need to do a DOS-based recovery... which is unpleasant. Guess how I know.
 