New X11SCL-IF Build

op48no1

Cadet
Joined
Aug 11, 2020
Messages
4
I'd like to report my success with the following build. It's nothing special, but perhaps someone will find the information useful:

Motherboard: Supermicro X11SCL-IF
CPU: Intel Core i3-9100F
Memory: Micron 16GB DDR4 2666MHz ECC UDIMM
Boot Drive: Western Digital Blue SN550 NVMe M.2 2280 250GB PCI-Express 3.0 x4
Case: Supermicro CSE-721TQ-250B

IMG_2718.jpeg IMG_2719.jpeg IMG_2724.jpeg

I'm running with some old drives for testing right now until I have the courage to ask my wife for more money.

Never having used a server motherboard before, I was nervous about configuration, but it was straightforward. Notes:

1) I selected AMI Native Support in setup for NVMe Firmware Source.
2) I installed FreeNAS with BIOS boot (as directed by the installer) and it worked fine. Later I reinstalled with UEFI boot and that was fine too.
3) By default, IPMI is shared with the main ethernet port. I used a LAN scanner to find the address, but IIRC it's also displayed on the console at boot.
4) For IPMI login, use user ID 'ADMIN' and the IPMI password printed on the sticker on the CPU socket cover.
5) Default IPMI settings are fine.
6) With the stock case fan, there's a lot of airflow through the drive bay. I've only got one disk in there right now but I think ventilation will be fine.
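As an alternative to scanning the LAN for the shared IPMI address (note 3 above), the BMC's network settings can be read from the OS itself with ipmitool. A sketch, assuming ipmitool is installed, the IPMI kernel driver is loaded, and the BMC's LAN interface is on channel 1 (typical for Supermicro boards, but the channel number can differ):

```shell
# Query the BMC's LAN configuration from the host OS. Channel 1 is
# typical on Supermicro boards; try other channel numbers if this one
# returns nothing useful.
ipmitool lan print 1 | grep -E 'IP Address|MAC Address'
```

This avoids guessing which DHCP lease on the network belongs to the BMC.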

I have to say, security considerations aside, IPMI is very cool. It's great not having to connect a monitor.

Thanks to the community for build resources.
 

op48no1

Cadet
Joined
Aug 11, 2020
Messages
4
FWIW, on Sunday I bought four 4 TB IronWolf drives and installed them in my server. I picked them up in person at Micro Center.

I intended to buy WD Reds. Micro Center has a lot of them in stock, part number WDBMMA0040HNCNR. The price is $139.99, 40% more than the current WD store price, phooey. I couldn't find definitive info on the web about this part number, but I'd be curious if anyone knows. My hunch is that they're old drives that have been sitting around with old price tags on them. This is what the boxes look like:

WD Red.jpeg


So I bought Seagate. I read everything I could find about IronWolf vs. Red and decided there was no real objective data to recommend one over the other. I may be sorry, but if so, I will have learned something. I had no issues at all installing and setting up a RAIDZ2 pool.
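For reference, the equivalent pool creation from the shell looks something like the sketch below. The FreeNAS web UI does all of this for you (and keeps its middleware aware of the pool, so the UI route is preferred in practice); the pool name and device names here are hypothetical:

```shell
# Create a RAIDZ2 pool named "tank" striped across four disks
# (hypothetical device names; da0-da3 are typical on FreeBSD/FreeNAS).
zpool create tank raidz2 da0 da1 da2 da3

# Verify the vdev layout and pool health.
zpool status tank
```

A RAIDZ2 vdev survives the loss of any two member disks; with four 4 TB drives, usable capacity is roughly 8 TB before filesystem overhead.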

I can hear the drives operating if I turn off the air conditioner in the room, but they're more than adequately quiet. Ambient air temperature in the room is 73F and the drives idle at 28-29C, going up to 30-32C under sustained load. I've put about 1.5 TB of data onto the pool so far without any drama. Transfer speed to the box is around 115 MB/s over the gigabit LAN port on the motherboard, so the drives do not seem to be slowing me down.

I will post a follow-up if anything blows up.

As always, YMMV.
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
1) I selected AMI Native Support in setup for NVMe Firmware Source.
What options did you have? A driver provided by Supermicro instead of AMI?
3) By default, IPMI is shared with the main ethernet port. I used a LAN scanner to find the address, but IIRC it's also displayed on the console at boot.
You have a dedicated port, though.
 

op48no1

Cadet
Joined
Aug 11, 2020
Messages
4
What options did you have? A driver provided by Supermicro instead of AMI?

Here's what it says in the manual:

Messages Image(2687344274).png


I can guess roughly what this means, but I inferred (correctly, I assume, since it boots) that I wanted to change the option from the default to AMI Native Support.

You have a dedicated port, though.

Yup. I was actually surprised when IPMI turned up on the main LAN port. I only have one network at home, so it's actually much more convenient this way. I can get to both the IPMI console and the main FreeNAS console at the same time from the same browser.
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
I can guess roughly what this means, but I inferred (correctly, I assume, since it boots) that I wanted to change the option from the default to AMI Native Support.
So it's an option to use UEFI extension ROMs provided by the SSD. I've never heard of an NVMe drive having one, since that's part of the point of having an NVMe standard, but it's conceptually possible.
Yup. I was actually surprised when IPMI turned up on the main LAN port. I only have one network at home, so it's actually much more convenient this way. I can get to both the IPMI console and the main FreeNAS console at the same time from the same browser.
You could have them both on the same network. The gains are positively minimal, but it's possible. I do it because I have extra switch ports I'm not using otherwise, but when I drag a server out for maintenance, I typically only plug in the shared port.
 

op48no1

Cadet
Joined
Aug 11, 2020
Messages
4
So it's an option to use UEFI extension ROMs provided by the SSD. I've never heard of an NVMe drive having one, since that's part of the point in having an NVMe standard, but it's conceptually possible.

Hmmm. I had to look that up. Makes sense. My two biggest concerns with the server-class board were that I would have trouble getting it to boot from NVMe, and that I would go down a rabbit hole of unfamiliar settings trying to configure it. I was pleased and relieved that it didn't give me any trouble at all.

EFI boot is one of those things I know I ought to study in my copious free time, but so far I've been able to avoid getting into it in depth.

You could have them both on the same network. The gains are positively minimal, but it's possible. I do it because I have extra switch ports I'm not using otherwise, but when I drag a server out for maintenance, I typically only plug in the shared port.

That's interesting. I can see how there might be a small advantage to separating them. I'm using Google Wifi and have a puck in my home office with the NAS plugged into one port and a PC plugged into the other. I've been digitizing a pile of videotapes, and uploads are much faster over the wire than via wifi, of course. Maybe when I'm done I'll try separating out the IPMI port, just for grins.
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
EFI boot is one of those things I know I ought to study in my copious free time, but so far I've been able to avoid getting into it in depth
All you really need to know is that instead of an '80s-style fixed offset on a block device, you have a FAT32 partition from which an EFI executable is run. Other than that, not much changes for the user. Administration is supposed to be easier, but if you're stuck with AMI firmware, as most of us are, the bugs are more plentiful than ever.
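For the curious, that FAT32 partition (the EFI System Partition) is easy to poke at from a FreeBSD/FreeNAS shell. A sketch, assuming a GPT-partitioned boot disk at ada0 with the ESP as the first partition (your device names may well differ):

```shell
# Show the partition table; the ESP shows up with type "efi".
gpart show ada0

# Mount the ESP (it's plain FAT32) and look at the executable the
# firmware runs by default.
mount_msdosfs /dev/ada0p1 /mnt
ls /mnt/EFI/BOOT        # the fallback loader is typically BOOTX64.EFI
umount /mnt
```

The \EFI\BOOT\BOOTX64.EFI path is the spec's removable-media fallback; installed OSes usually also register their own loader paths in firmware boot variables.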
 

Yorick

Wizard
Joined
Nov 4, 2018
Messages
1,912
It may also be good to know that Intel announced they'd be dropping CSM (old-school BIOS boot) with chipsets introduced in late 2020. As far as I know that hasn't happened yet, but the writing is on the wall: vendors may provide UEFI-only reference code.

I haven't had any issues running with CSM off and UEFI only, on either Supermicro or consumer PC builds.
 