BUILD [UPDATE] X10SRL-F with 8/16 HDD -> Please review the result

Status
Not open for further replies.

Hobbel

Contributor
Joined
Feb 17, 2015
Messages
111
The AOC-S2308L-L8E and the LSI 9207-8i are based on the same chipset. IBM and Dell use custom firmware on their cards, whereas the Supermicro cards don't need to be *cross*-flashed, just flashed to the right version. The 836BA and 836A are built for SATA and SAS drives alike, since SAS bays are always compatible with SATA HDDs.

10GbE is the future -> rather plan for that instead of using LAGG/MPIO. A single faster interface is easier to handle and implement than multiple smaller ones.

thx for the clarification. Edited post #1; it should now reflect the system that I'm going to build.

I use iSCSI MPIO. The customer only has 1-Gigabit switches, so 10 GbE is not an option for now. I will work with the dual onboard 1 GbE ports.
The Supermicro SAS controller is PCIe 2.0 and the LSI is PCIe 3.0. I don't think this will impact performance that much, but that's 100 bucks saved on the board and spent on the LSI.
 

marbus90

Guru
Joined
Aug 2, 2014
Messages
818
The LSI 2308-based cards are always PCIe 3.0 x8 -> http://www.supermicro.com/products/nfo/storage_cards.cfm

Probably something linked to that USAS08 card or whatever it was called; that was the first hit for me as well when I googled AOC-S2308L-L8e.

Also, you can start without any SAS controllers, using just 2x 4x SATA -> SFF-8087 reverse breakout cables. The full 16 bays with SAS capability can be added afterwards by adding two of the said AOC-S2308L-L8e cards with 4x SFF-8087-to-SFF-8087 cables.

And I would start with 2x 16GB DIMMs -> Samsung M393A2G40DB0-CPB, because you might want to upgrade to 64+GB quite fast. You could go up to 128GB total using these DIMMs.
 
Last edited:

Hobbel

Contributor
Joined
Feb 17, 2015
Messages
111
The LSI 2308-based cards are always PCIe 3.0 x8 -> http://www.supermicro.com/products/nfo/storage_cards.cfm

Probably something linked to that USAS08 card or whatever it was called; that was the first hit for me as well when I googled AOC-S2308L-L8e.

Unfortunately my distributors also state that it is a PCIe 2.0 card. These conflicting specs are driving me nuts... either the product description is wrong or the product is listed under the wrong name (on the distributor's site).

In the end I will buy whichever controller is available with under 3 weeks of delivery :p
 

Hobbel

Contributor
Joined
Feb 17, 2015
Messages
111
And I would start with 2x 16GB DIMMs -> Samsung M393A2G40DB0-CPB, because you might want to upgrade to 64+GB quite fast. You could go up to 128GB total using these DIMMs.

Ah, sorry, my mistake. The "kit of 4" came from the Kingston modules; the Samsung part is a single module, so 2x 16GB is better than 1x 32GB. BUT:
different parts, same game -> the distributor says the 16GB modules are registered ECC and the 32GB are not.

wtf? The 16GB modules are listed by Supermicro as compatible, so that's what I'll go with. o_O
 

marbus90

Guru
Joined
Aug 2, 2014
Messages
818
The 32GB are LRDIMMs (not exactly RDIMMs, but they work nonetheless when on the tested memory list). The markup is acceptable if you think you will need 256GB RAM and still want to stick with the very same system.
 

Hobbel

Contributor
Joined
Feb 17, 2015
Messages
111
Post #1 updated. Thanks to all for your comments, especially @marbus90
Changed chassis, motherboard, RAM and SAS controller :D

Just a question out of my own interest: does anybody have experience with the X10SRL-F and ESXi 6?
 

Hobbel

Contributor
Joined
Feb 17, 2015
Messages
111
Here we go... the hardware has almost arrived. Just waiting for the Xeon, 2x Supermicro CBL-0281L (SFF-8087) and the second SATA-DOM...

But after reading so much about LSI HBAs and firmware versions... do I need to flash a specific firmware version on the Supermicro AOC-S2308L-L8E?
 

Hobbel

Contributor
Joined
Feb 17, 2015
Messages
111
OK... I think I got it... I will download this package: ftp://ftp.supermicro.com/driver/SAS/LSI/2308/Firmware/IT/Previous Releases/PH16.0.1-IT.zip, erase the card with megarec in DOS and then flash it via UEFI using the ".nsh" script included in the firmware bundle.

Let me correct myself:
I download the firmware bundle from: ftp://ftp.supermicro.com/driver/SAS/LSI/2308/Firmware/IT/Previous Releases/PH16.0.1-IT.zip

Boot into the UEFI shell
Erase the firmware like in the ".nsh" script: sas2flash.efi -o -e 7
Downgrade the firmware to v16: sas2flash.efi -f 2308IT16.ROM
Add the SAS address: sas2flash.efi -o -sasaddhi <serial>
Done
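
In full, the UEFI shell session I have in mind would look roughly like this (just a sketch of the steps above; the fs0: mapping and <serial> are placeholders for the USB stick with the unpacked bundle and the address sticker on the card):

fs0:                                 # switch to the USB stick holding the firmware bundle
sas2flash.efi -listall               # confirm the controller is detected, note the current version
sas2flash.efi -o -e 7                # advanced mode, erase the flash (as in the .nsh script)
sas2flash.efi -f 2308IT16.ROM        # flash the P16 IT firmware
sas2flash.efi -o -sasaddhi <serial>  # restore the SAS address
sas2flash.efi -listall               # verify firmware version and SAS address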

So there is no need to flash the "mptsas2.rom" and the "x64sas2.rom"?
Or do I really need to flash an "empty.bin" firmware first and then flash the Supermicro P16?

Please correct me if I'm wrong! I didn't find any explicit info on downgrading Supermicro's HBAs.
 
Last edited:

Hobbel

Contributor
Joined
Feb 17, 2015
Messages
111
Back from building and installing the setup... here are the results. Hopefully you can give me some more advice on what else I need to do.

First off, I started to marry the hardware:
(photo: WP_20150429_002-1.jpg)

Everything is connected and in place: PSU, 8x 2TB WD RE4 drives, 2x SATA-DOMs (only one shown in the pic), front-panel connectors and 3 of the 5 fans (still waiting for the power cord extensions).
After that, I looked at the backplane of the chassis. Because the fans are connected directly to the mainboard, I removed the jumpers for the backplane fan connectors (so I don't get any red LEDs) and connected all the 4-pin power connectors on the backplane.

As this is my first build of this kind: are there any other things I should/could connect? (backplane to MB or something like that)

Software:
Installed FreeNAS from a physically connected USB stick, remotely controlled via IPMI. Nothing special to say here. Mirrored the boot pool to the second SATA-DOM afterwards in the webGUI.

Config:
I think the main thing to mention is the system fans. Currently I run them on "Full Speed", because the power cord extensions for one "backplane" fan and one at the back of the chassis are not available yet. A bit loud, but the HDDs are all at 34-37 °C while running badblocks.
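
(For anyone curious how I read the temperatures: I just poll SMART from the console; da0-da7 are my data disks, adjust the device names to yours.)

for i in 0 1 2 3 4 5 6 7; do smartctl -A /dev/da$i | grep -i temperature; done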

"burn in":
As stated before, this is my first "big" build. (Yeah, some of you guys think this setup is tiny and cute ;))
Followed the How-To by qwertymodo.

badblocks #1:
First run, first round, I got about 50k errors after 7h on 6 drives. WTF? :confused:
Yes, you are right... my fault... I had tested some things and created a pool with 3 mirrored vdevs and 2 spares on those disks, so those errors were bound to be thrown.

badblocks #2:
First run, first round took about 7h to get to 40% of the write pass. WTF? :confused:
Yes, you are right again... it was past 3 o'clock at night and I unfortunately used "-ns" instead of "-ws".

badblocks #3:
Currently running the 0x55 pattern, reading and comparing at 80%. No errors so far. :cool:
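
For reference, the destructive write test from the how-to boils down to one command per drive (warning: it wipes the disk; da0 is an example, I run one instance per disk):

badblocks -ws /dev/da0
# -w writes and verifies the patterns 0xaa, 0x55, 0xff, 0x00; -s shows progress
# -n instead of -w would be the (much slower) non-destructive test I grabbed at 3 o'clock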


These are my experiences so far. I hope to get the missing power cord extensions for the system fans as soon as possible and will report back on the HDD temps.


Your turn:
If you have any recommendations, suggestions or questions, please let me know. :)


Just for information:
I have a second X10SRL-F with an E5-1620 v3 running ESXi 6. Fully compatible and amazingly fast.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Do not use bigger

No, DO use bigger, if bigger is reasonably available. Bigger disks mean your pool is less full. Less full means less fragmentation. Less fragmentation means faster. You just need to get your head around the idea of not actually using the extra space, which of course is very annoying.
 

marbus90

Guru
Joined
Aug 2, 2014
Messages
818
No, DO use bigger, if bigger is reasonably available. Bigger disks mean your pool is less full. Less full means less fragmentation. Less fragmentation means faster. You just need to get your head around the idea of not actually using the extra space, which of course is very annoying.
Don't rip single sentences out of context.

No, more and faster spindles are needed for a VM workload. Do not use bigger, slower/fewer disks.
And then it all makes sense.
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
Don't rip single sentences out of context.


And then it all makes sense.

I think jgreco means "do both" for fun and profit [better performance].
 

Hobbel

Contributor
Joined
Feb 17, 2015
Messages
111
The chassis has 8 more HDD trays. The plan is to expand the pool by 4 more mirrored vdevs, step by step; for now there are 3 vdevs with 2 spares.
But the big question is: did I forget any connections for monitoring the hardware? Is there any connection between the motherboard and the backplane for monitoring something?
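
(On the expansion itself: FreeNAS handles this through the volume manager in the GUI, but the underlying ZFS operation per step is roughly the following; pool and device names are just examples:)

zpool add tank mirror da8 da9   # append one more mirrored vdev to the pool
zpool status tank               # check the layout before adding the next pair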
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
The chassis manual should list any additional connections it might have. They're typically not strictly necessary, but usually integrate with IPMI (PMBus for the PSU, for instance).
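
If you want to see what the BMC already monitors, ipmitool from any machine on the network gives a quick overview (the IP and credentials are placeholders for your IPMI settings):

ipmitool -I lanplus -H 192.168.0.100 -U ADMIN -P ADMIN sdr list
# lists every sensor the BMC knows about, including anything the PSU/backplane expose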
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
I think jgreco means "do both" for fun and profit [better performance].

Correct. Free space increases write performance. I ripped exactly what I meant to, and I emphasized and explained my reasoning.
 

Hobbel

Contributor
Joined
Feb 17, 2015
Messages
111
All fans and drives are running smoothly. Fan mode is "HeavyIO" and all disks are at 37 °C or below: some drives are at 34 °C, the lowest ones at 37 °C.

Amazingly fast storage :)
 

DearestDreamer

Dabbler
Joined
Nov 28, 2015
Messages
42
I just saw that your hardware is nearly identical to mine: we both have the same chassis, motherboard, CPU and hard disks. Our HBA manufacturer is different and I'm going with less RAM, but I'm really glad your build has worked out, since that means mine probably will too! I hope you'll check out my thread here in a few weeks once I get the parts and assemble it.

I heard the 920W PSU is much quieter than the 900W one; how is the noise level of your server? Also, I know we only have two 2.5" bays in the rear of this chassis, but is there any way to install four 2.5" drives instead of just two? I won't know until my chassis arrives, so I'd love to hear if you think that might be possible.
 

Hobbel

Contributor
Joined
Feb 17, 2015
Messages
111
Noise level is acceptable. I have had smaller servers that made much more noise...
Regarding more than 2x 2.5" drives: I would have a look at some adapters for the hot-swap bays. Or, if you don't mind, you could mount them anywhere inside the chassis (without screws) using proper material - not just duct tape (I don't have the right word atm for what I mean ;) ).

What I've learned with this build: RAM - more RAM - there is nothing you will want more of. :D (btw: thx @cyberjock and all the other FreeNAS pros)

SATA-DOMs might be expensive compared to SSDs, but you can plug them into the internal SATA-DOM ports, so your (mirrored) OS drive doesn't occupy any HDD/SSD bay.
Currently I'm thinking about the next upgrade, to 96GB or 128GB. I'm also looking at SSDs for ZIL/L2ARC, but I have to do some more research to understand what the better choice is.
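
From what I've read so far, both can be added to a live pool at any time; FreeNAS exposes this in the GUI, and underneath it boils down to roughly this (pool and device names are just examples):

zpool add tank log mirror ada1 ada2   # dedicated ZIL (SLOG): small SSDs with power-loss protection, ideally mirrored
zpool add tank cache ada3             # L2ARC: plain read cache, no redundancy needed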
 

TXAG26

Patron
Joined
Sep 20, 2013
Messages
310
Speaking of SATA-DOMs, I just picked up a Supermicro SSD-DM064-PHI to boot ESXi from. I attempted to mirror ESXi across two USB thumb drives, but didn't have any success with that. I figure that, reliability-wise, the SATA-DOM will be much more resilient than a USB thumb drive. FWIW, only ESXi boots from the SATA-DOM; all of the VMs (including FreeNAS) boot from a separate SSD.
 

Fuganater

Patron
Joined
Sep 28, 2015
Messages
477
Just a note: make sure all your fans are connected to the mobo and not the backplane.
 