Can on board X9DR3-F Supermicro mini-sata (x4) be used for Truenas?

ZiggyGT

Contributor
Joined
Sep 25, 2017
Messages
125
I am looking to purchase a supermicro chassis with a X9DR3-F mother board. It has dual 4 channel mini-SAS connectors. Documentation shows this supports 8 SAS drives using these ports. The manual describes different raid configuration but not HBA (unless that is RAID 0). Does anyone have experience with these on-board ports with Truenas? is there any special HBA firmware for these ports? I'd like to reduce the power and heat load as these ports are already on the motherboard. attached is a photo of the area of the MB. The motherboard has limited expansion slots so if I do not need a separate HBA/Raid it would be good.
I would appreciate any insight someone might have on this configuration. Here is the info on this motherboard it is an LGA 2011v2 a step up from the 1366 cpu's in my current setup.

dm_x9drw-3f-b_1 motherrboard zoom.jpg
 

artlessknave

Wizard
Joined
Oct 29, 2016
Messages
1,506
short answer: should be plug and pray play as normal.

long answer:
this should be an intel controller from the chipset.
you can check in the BIOS and look for a setting for it (something like SCU) and make sure it's AHCI, or at least not RAID.
otherwise, it should just work; there will be no raid until you configure raid. as this is intel, it works differently than the LSI IT/IR firmware split acrobatics.
this controller will be SAS1, but should work fine for spinners and backplanes. it will not work as well with SSDs, being slower (300MB/s) and likely lacking SSD tricks like TRIM; it should still write to them though.

6 PCIe slots is....not something I associate with "limited", but wanting to use the ports makes sense.

you will want to check for firmware/bios upgrades though. there is a bug on a number of the x9 board bios that crashes the system if the date goes over 2020 or something.

 

ZiggyGT

Contributor
Joined
Sep 25, 2017
Messages
125

Thanks for the update.
Note: if only one CPU is populated the board has only 3 operating PCIe slots. I only want one 100-watt heater in the server.
I did some reading on the Intel C606 chipset and it does seem to have a non-RAID setting. The version I am looking at uses a riser that has limited slots.
From a DELL site I found:

"HDD ports 0-4 on this board WILL activate if the PERC H310 card is removed and a device is connected to at least one of the HDD 0-4 ports.
The storage controller activated appears to be the integrated Intel C600 chipset 4 port SAS/SATA controller, with PCI ID 8086:1d6b.
This chipset is documented to support SATA III speeds and TRIM but both are untested at this time. (this does not seem to be true based on the link and photo below)
When two or more drives are connected, the BIOS option ROM screen is enabled for "Intel(R) Rapid Storage Technology enterprise", with configuration utility available by typing Ctrl-I during boot. SCU Option ROM is version 3.2.0.1022.
When one drive is connected, the option ROM setup will not appear, but the controller still functions normally.
When no drives are connected at startup, the controller will not be activated.
All ports 0-4 are activated and usable simultaneously in this configuration."

This Intel data seems to indicate that only 2 dedicated SATA ports are SATA III. The SCU is mentioned in the Intel chip info. Based on this I doubt the Dell thread. Looks like it should work with TrueNAS. I am not planning on anything but spinners for a while.
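For what it's worth, the PCI ID from the Dell quote is easy to check from a shell once the OS is booted (on a Linux-based TrueNAS SCALE install). A minimal sketch - the `lspci` line below is a made-up sample capture, not output from this exact board:

```shell
# On a live system you would run:  lspci -nn | grep -i '1d6b'
# Here we grep a sample captured line to show what a match looks like.
sample='00:11.4 Serial Attached SCSI controller [0107]: Intel Corporation C600 chipset 4-Port SATA Storage Control Unit [8086:1d6b] (rev 06)'
echo "$sample" | grep -o '8086:1d6b'
```

If that ID shows up, the SCU is active and the OS can see it as a plain HBA.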

Thanks for the help.

C600.png

 

artlessknave

Wizard
Joined
Oct 29, 2016
Messages
1,506
HDD ports 0-4 on this board WILL activate if the PERC H310 card is removed and a device is connected to at least one of the HDD 0-4 ports.
I'm confused why you are referencing a dell card here. installing a PERC H310 would have no impact on the onboard controller.
If only one CPU is populated the board has only 3 operating PCIe slots. I only want one 100watt heater in the server.
ahh. only 1 CPU will indeed make it "limited", although these CPUs are not quite "heater" realm as the previous generations were. a 100W TDP will idle down to around 50W, maybe less, and you can control that somewhat by picking the CPU. for example, 2x E5-2630L v2 would max at 120W (60W TDP each) and spend most of their time around 20W-30W, while giving acceptable single-thread performance and netting you 24 threads plus all your PCIe slots.
This intel data seems to indicate that only 2 dedicated SATA port are SATAIII
yes. this board has 2x SATA III, 4x SATA II, and 8x SAS1/SATA II. if you are only running spinners and SSD boot drives, it won't matter much. the 2 SATA III ports are usually the first ports in the boot order, so using them for boot can be less annoying.
 

ZiggyGT

Contributor
Joined
Sep 25, 2017
Messages
125
Some chatter on the Intel C606 was on a Dell site. Apparently the mini-SAS ports won't work if there is a RAID card plugged in. I left that note in because I wanted to make sure I don't try that.

I checked on the E5-2630L. I have a Z400 with an X5670 in my server now. The X5670 is 95 watts and it does not idle down; the system draws about 200 watts. The X9DR3-F still takes DDR3 unbuffered RAM up to 128GB so I can reuse all my RAM. The chips benchmark about the same. I don't do any transcoding, so I don't care too much about the GPU.
https://www.cpubenchmark.net/compare/Intel-Xeon-X5670-vs-Intel-Xeon-E5-2630/1307vs1215

The difference in power cost will more than pay for the $20 for the low-power CPU. :smile: The power savings are likely worth more than $60. Thanks for the advice.
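Rough math on that, as a sketch - the 30W average savings and $0.15/kWh rate below are assumptions for illustration, not measurements:

```shell
# Annual cost of a continuous wattage difference:
#   watts / 1000 * 24 h * 365 d * $/kWh
watts_saved=30   # assumed average draw difference vs the old CPU
rate=0.15        # assumed electricity price in $/kWh
awk -v w="$watts_saved" -v r="$rate" 'BEGIN { printf "%.2f\n", w/1000*24*365*r }'
```

At those assumed numbers the savings come to about $39/year, so a $20 CPU pays for itself well inside a year.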
 

artlessknave

Wizard
Joined
Oct 29, 2016
Messages
1,506
Apparently the mini-SATA ports won't work if there is a Raid card plugged in.
that sounds more likely to be a damaged board or raid card, as such a connection should not exist.
yup. that would be a space heater. they are barely mediocre on efficiency if they are at load all the time but they idle like 2W lower than their TDP...
unrelatedly, an L5630 is better on power and heat. (just put 2 in an X8DTH I intend to use for experimentation/testing cards/testing drives - it had 2x X5650s in it :eek:)
 

ChrisRJ

Wizard
Joined
Oct 23, 2020
Messages
1,919
I checked on the e5-2630L.
[..]
The difference in power cost will more than pay for the $20 for the low power CPU. :smile: Less power likely >$60 Thanks for the advice.
The lower TDP will not result in a huge difference in power consumption. In contrast to your existing CPU, the TDP is really just that: a thermal thing. Do not expect a significant difference in power draw when those CPUs are idle. And when they are not idle, which will be rare in a home lab scenario, it will not make a big difference over time.
 

ZiggyGT

Contributor
Joined
Sep 25, 2017
Messages
125
That's disappointing. Looks like 2x E5-2650L CPUs are $16. Perhaps if the thermal load is less, the fan noise will be less. The board comes with E5-2609 v2 CPUs, which are only 4 cores/4 threads, while the 2650L is 8/16. I talked myself into it. eBay here I come.

Thanks all for the interesting discussion
 

artlessknave

Wizard
Joined
Oct 29, 2016
Messages
1,506
i just noticed that your title says X9DRW-3F but your post says X9DR3-F.
it's good to be consistent, though it looks like they have the exact same chipset and controller, so luckily it doesn't change anything.
 

artlessknave

Wizard
Joined
Oct 29, 2016
Messages
1,506
The lower TDP will not result in a huge difference in power consumption.
this is not fully true. the TDP is the max the CPU will pull. a CPU with a 100W TDP has the potential to pull 100W at load; a 40W TDP CPU will never pull more than 40W. this gives you a predictable thermal load.

additionally, the CPUs after the 1366 generation have reduced idle power usage by significant amounts with each iteration, while also greatly increasing raw processing power per watt, meaning they run at full load for much less time. the faster the CPU can complete work and go idle, the less heat and power. this can also mean that a higher-power CPU with good idle-down can actually be more power efficient than a low-power CPU.

I did a methodologically shaky measurement on a bunch of the hardware I have a while ago. (shaky, because some were full systems while others were not - the E3-1230 v5 in particular runs my primary NAS so I wasn't ripping it apart)
| CPU | CPUmark | iter/sec [H] | TDP (W) | off (W) | idle (W) | load (W) | diff (W) | % of TDP |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| e3-1220l v2 | 2452 | 102.3 | 17 | 13 | 51 | 70 | 35 | 205.88% |
| E3-1230 v2 | 6199 | 208.09 | 69 | - | 50 | 105 | 55 | 79.71% |
| E3-1230 v5 | 7996 | 351.51 | 80 | - | 134 | 200 | 66 | 82.50% |
| E5-2430L v2 x2 | 5903 | 527.61 | 60 | - | 86 | 236 | 150 | 250.00% |
| E5-2620 v2 | 6260 | 273.9 | 95 | 11 | 130 | 240 | 110 | 115.79% |
| x5650 x2 | 5722 | - | 80 | 10 | 250 | 386 | 136 | 170.00% |
 

ZiggyGT

Contributor
Joined
Sep 25, 2017
Messages
125
Fixed the post description. Had to check to see what I bought. :smile:

Had a big spreadsheet with all the eBay options trying to figure out the best way to approach the upgrade. Must have copied from the wrong cell. I decided to go with a chassis a little larger than I needed, rather than a 1U (those fans are too loud), but now I have a place to expand. https://www.ebay.com/itm/374125421396 I added this to my current server. It looks a lot like this but driven by an HP Z400 with an X5670 CPU.
 

ChrisRJ

Wizard
Joined
Oct 23, 2020
Messages
1,919
this is not fully true. a lower TDP is the max the CPU will pull. a CPU with 100W TDP has the potential to pull 100W at load. a 40W TDP CPU will never pull more than 40W. this gives you a predictable thermal load.
What I meant to say is that a low-TDP CPU will not save a lot of electricity in a typical home lab scenario, or anywhere the machine is mostly idle. Most people assume a low-TDP CPU will save them a corresponding amount of energy, which is not the case.
 

artlessknave

Wizard
Joined
Oct 29, 2016
Messages
1,506
What I meant to say is that a low TDP version CPU will not save a lot of electricity in a typical home lab scenario, or any where the machine is mostly idle. The reason being that most people assume a low TDP CPU will save them a corresponding amount of energy.
it still can, but you have to understand the numbers. JUST using the TDP? no, that will not give the whole picture.
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
Sure, the argument applies to processors of the same family. It's fairly obvious that between a C3558 doing nothing and a pair of Xeon E5 v4s doing nothing, the C3558 is by far more efficient at doing nothing.
This also holds decently well across similar CPUs, but it should be looked at on a case-by-case basis.
 

artlessknave

Wizard
Joined
Oct 29, 2016
Messages
1,506
Sure, the argument applies to processors of the same family. It's fairly obvious that between a C3558 doing nothing and a pair of Xeon E5 v4s doing nothing, the C3558 is by far more efficient at doing nothing.
This also holds decently well across similar CPUs, but it should be looked at on a case-by-case basis.
yes, that's true. that is part of the "Whole picture" I meant.
 

artlessknave

Wizard
Joined
Oct 29, 2016
Messages
1,506
note that raidz1 is not recommended for drives over 2TB (didn't spot this earlier because, frankly, I find the single line painful to parse)
 

ZiggyGT

Contributor
Joined
Sep 25, 2017
Messages
125
note that raidz1 is not recommended for drives over 2TB (didn't spot this earlier because, frankly, I find the single line painful to parse)
I fixed my signature file. You can see that the LGA 2011 CPU will be an upgrade; I can retire the Z400 LGA 1366 and reuse the DDR3 UDIMM RAM. The newer system will be less complicated, with fewer RAID controllers.
The 3x6TB system was built before I read enough about ZFS/FreeNAS (a few years ago). I moved stuff onto it from a bunch of toaster NAS toys. Once built, I did not have a good way to reconfigure/rebuild it. It has been running for a long time and is now my offline backup system. I did a transfer to the newer build; it was a lot faster over the 10Gb link than from those appliances. :smile: I read the logic and thought I understood it well enough to reconstruct my newer build. It has 3 pools: one 8x3TB Z2 and two that are 4x3TB Z1 (I thought the rebuild loss risk was manageable). I planned to replicate Pool2 to Pool3 on a schedule but that is not working.

The 3x6TB Z1 is likely a time bomb. I can now rebuild it as a Z2 with a 4th 6TB drive (they are a lot cheaper now) since the data is now on my newer build. I am learning.
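Once the data is safely elsewhere, the raidz1-to-raidz2 rebuild is a single `zpool create`. A sketch only - the pool name and device paths are placeholders, and the command is printed rather than executed so nothing runs by accident:

```shell
# Dry run: print the zpool command instead of running it.
# "backup6" and the /dev/daN device names are placeholders for the
# 4x 6TB drives; raidz2 survives any two disk failures.
echo "zpool create backup6 raidz2 /dev/da0 /dev/da1 /dev/da2 /dev/da3"
```

With 4 drives in raidz2 the usable space ends up the same as 3 drives in raidz1, but resilvering one failed disk no longer leaves the pool with zero redundancy.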
 

artlessknave

Wizard
Joined
Oct 29, 2016
Messages
1,506
The 3 x 6TB z1 is likely a time bomb.
eh, you DO have backups, at least. "avoid raidz1" is kind of a reflex at this point, because what happens is people think raid is a backup and only use raidz1. there was a poster earlier in exactly that position: 1 drive died, another died shortly after, and poof! no data. they are trying to figure out if it's the drives or the controllers or the cables... not a fun time.

as long as you are aware either could go poof before you can resilver them, it's your risk to manage. which, really applies to every disk and array in the world, soooo.

port replicator? being a supermicro chassis, i assume you mean SAS expander? port replicators are usually eWASTE.

a little bit curious why your backup NAS needs 6x 10GbE ports..... but to each their own!
 

ZiggyGT

Contributor
Joined
Sep 25, 2017
Messages
125
Backups... I had been ping-ponging the stuff I care most about between servers. I have a Win 10 Intel 5420 server with a hardware RAID and 5x 3TB SAS drives that I use to back up parts of the server. It is not a good plan, but I try to protect the most important data with redundant locations. I am having trouble with replication between the two smaller pools. It worked well to move content from one server to another, but I cannot get it to work from one pool to another on the same system. What is your backup strategy?

I am using the Supermicro SAS Expander not the toy stuff. I don't see much negative posted about that.

At first, I did not have a 10Gb switch, so I was planning on using the server as a switch via the bridge feature. It worked OK, but when I updated the version of FreeNAS I had some issues. To resolve that I bought an Aruba switch with 4x 10Gb SFP+ ports and sold the Quanta 2-port switch I had. I have 6 devices I want to connect, so eventually I will try again to get the bridge to work (the bridge works but the config does not survive a reboot). I have recently found that TrueNAS does not support multiple ports on the Solarflare or Intel cards I have but works well with the Mellanox ConnectX-3 cards. I have seen no info on this. The ConnectX-3 card I have does not work in the HP Z400 but the ConnectX-2 cards do. There is some rough plan; when I run into an obstacle I try another path.
 

artlessknave

Wizard
Joined
Oct 29, 2016
Messages
1,506
but I try to protect the most important data with redundant locations.
this is the soul of a backup.
one pool to another on the same system
this is built in and should just work. probably the only way I could say what might be wrong is to see a screenshot.
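if the GUI replication task keeps failing, the same thing can be done by hand with zfs send/recv to narrow down where it breaks. a sketch with made-up dataset names (Pool2/data, Pool3/data), printed as a dry run so nothing runs by accident:

```shell
# Build the commands as strings first so they can be reviewed.
# Pool2/data and Pool3/data are made-up dataset names; -R sends the
# snapshot and its children recursively, -F lets recv roll the
# target back to match before receiving.
snap="Pool2/data@manual-$(date +%Y%m%d)"
echo "zfs snapshot -r $snap"
echo "zfs send -R $snap | zfs recv -F Pool3/data"
```

if the manual send works but the GUI task doesn't, the problem is in the task config rather than the pools.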

my backup strategy is to copy everything. I don't have an offsite copy. if my house explodes I will have bigger concerns to worry about.
 