All your X11SSH-F questions answered here!

Status
Not open for further replies.
Joined Dec 22, 2017, 3 messages
My motherboard gave up last week, so as a Christmas present to myself, I'm planning to purchase a Supermicro X11SSH-F (around €200). Is it still good enough today, or is there an "upgraded version" already?
I plan to run some jails (Plex, ownCloud) and maybe a VM or two for testing.
An Intel Xeon E3-1220 v5 (3 GHz) plus 16 GB of Kingston ECC RAM is the starting point.
 

joeschmuck (Old Man, Moderator), joined May 28, 2011, 10,996 messages
I'm planning to purchase a Supermicro X11SSH-F (around €200). Is it still good enough today, or is there an "upgraded version" already?
This is absolutely good enough for today and tomorrow, though the X11SSM-F is the better unit to purchase. The X11 series is the current generation being manufactured. Check out this link.

Correction: I read it in my mind as X11SSM-F; the "H" eluded me. Thanks @Ericloewe for correcting me.
 

Ericloewe (Server Wrangler, Moderator), joined Feb 15, 2014, 20,194 messages
As I've said many times, the X11SSH-F is a poor choice in almost all cases. The X11SSM-F is the better board.
 

Ericloewe (Server Wrangler, Moderator), joined Feb 15, 2014, 20,194 messages
The X11SSH-F has two wasted PCIe lanes, which would be used by the extra NICs in the -LN4F. The M.2 slot is crippled and has only two lanes.

This package is nearly always more expensive than an X11SSM-F plus an M.2 adapter.
 

mpryandds (Cadet), joined Apr 18, 2018, 3 messages
Hello!
I just got the X11SSH-F. I'm trying to install Server 2016 in UEFI mode with RAID, using a Rufus-made USB (MBR for either, NTFS, S Essentials ISO). The motherboard settings are: SATA mode set to RAID, Legacy ROM for RAID, Boot Mode = UEFI, boot order set to [UEFI USB Key]. I had an Optane M.2 installed but removed it; I have a Supermicro DOM in place, TPM 2.0 enabled, and the BMC 'working'. Maybe I should turn off the TPM and BMC? Setup doesn't find a drive or a RAID driver on the 1.13 driver CD; it gets that far and no further. Do you have any suggestions on how to accomplish this? Thanks very much if you can help!
 

Ericloewe (Server Wrangler, Moderator), joined Feb 15, 2014, 20,194 messages
You do realize that this forum has nothing to do with Windows Server, right?
 

mpryandds (Cadet), joined Apr 18, 2018, 3 messages
I do. I found the site because Supermicro X11SSH-F users are here. I have 15 hard drives spanning PATA, SATA, SATA II, and SATA III, and even an old Compaq server SCSI card with two Compaq drives. I'll get to the NAS right after I get Server installed; I had never heard of iX or FreeNAS otherwise. It doesn't have to run standalone, it can run in Hyper-V rather than being installed directly, right?
 

Ericloewe (Server Wrangler, Moderator), joined Feb 15, 2014, 20,194 messages
My point is that Windows Server experience is going to be sparse here. I do suggest using Microsoft's own USB installer creation tool; it has always worked with Windows installers for me.
 

mpryandds (Cadet), joined Apr 18, 2018, 3 messages
I finally got it to install under UEFI using the proper Rufus partitioning settings. X11SSH-F, 16 GB ECC 2666 (though the E3-1230 v6 processor limits it to 2400), BIOS v2.1a, Intel 335 SSDs in RAID 1, Optane M.2 not installed (the X11SSH is not 'Optane ready'). I will load Hyper-V and look at FreeNAS. Thanks for your forbearance.
 

LimeCrusher (Explorer), joined Nov 25, 2018, 87 messages
That CPU doesn't support ECC, as is explicitly mentioned in the Hardware Recommendations Guide, in the Resources section.
You were referring to the Core i3-7100T CPU and I was puzzled by your answer because the datasheet says it does support ECC RAM. Did I get something wrong?

The X11SSH-F has two wasted PCIe lanes, which would be used by the extra NICs in the -LN4F. The M.2 slot is crippled and has only two lanes.
I considered the X11SSH-F very interesting, but I had not paid attention to this! The crippled M.2 interface (PCI-E 3.0 x2) hurts :-/
It led me to check out the number of PCIe lanes the embedded Intel C236 can manage: 20 according to the datasheet. For instance, the X11SSM-F has: 1 PCI-E 3.0 x8 (in x16), 1 PCI-E 3.0 x8 and 2 PCI-E 3.0 x4 (in x8). That's a total of 24 lanes, plus there should be other lanes for NIC and so on. I understand the CPU has a PCIe controller and manages certain lanes (up to x16 for recent Intel CPUs). Is there any way to know which lanes are managed by the CPU and which lanes are managed by the chipset?
I would guess that the two x8 slots are wired to the CPU because the chipset seems to only support up to x4?
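My lane tally above can be double-checked with a quick sketch. This is only a rough sanity check under the figures quoted in this thread (20 PCH lanes, 16 CPU lanes), not a wiring diagram; the actual slot-to-CPU/PCH mapping is in the board manual:

```python
# X11SSM-F physical slot widths, as listed in the spec summary above.
slots = {
    "PCI-E 3.0 x8 (in x16)": 8,
    "PCI-E 3.0 x8": 8,
    "PCI-E 3.0 x4 (in x8) #1": 4,
    "PCI-E 3.0 x4 (in x8) #2": 4,
}
cpu_lanes = 16   # PCIe 3.0 lanes on the LGA1151 CPU
pch_lanes = 20   # PCIe 3.0 lanes on the C236 PCH, shared with SATA/USB/NIC duties

slot_lanes = sum(slots.values())
print(f"slot lanes: {slot_lanes}")  # 24

# The CPU alone cannot feed every slot, so some must hang off the PCH:
assert slot_lanes > cpu_lanes
# But the platform as a whole has lanes to spare:
assert slot_lanes <= cpu_lanes + pch_lanes
```

So the slots necessarily split between the CPU and the chipset, which is exactly the question below.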

This package is nearly always more expensive than an X11SSM-F plus an M.2 adapter.
I was not aware of the existence of those adapters. Thanks for the tip.
 

Ericloewe (Server Wrangler, Moderator), joined Feb 15, 2014, 20,194 messages
You were referring to the Core i3-7100T CPU and I was puzzled by your answer because the datasheet says it does support ECC RAM. Did I get something wrong?
Oof, a post from nearly two years ago. I'd actually forgotten about that mess. As usual, blame Intel:
Initially, the i3-7xxx were a fairly delayed, fairly minor launch with lots of confusion, as Tick-Tock fell apart at that time. They're essentially the same parts as the 6xxx models, down to the die stepping. At first, ark.intel.com said they supported ECC. Then they changed it to no (and I think someone may have gotten a statement from Intel saying so). Then they changed it back a few months later.

What do I make of this? 7xxx models are probably fine, but they offer little to nothing over 6xxx models.

Is there any way to know which lanes are managed by the CPU and which lanes are managed by the chipset?
This is specified in the manual. For the X11SSM-F and similar boards, it's the two x8 slots. On boards with SAS3 onboard, it's the SAS controller and the x8 slot.

I would guess that the two x8 slots are wired to the CPU because the chipset seems to only support up to x4?
It would be a very stupid thing to do, as the PCH only has a x4 uplink to the CPU. They'd just be wasting lanes they could use for literally anything useful, if it's even possible, which I suspect it isn't.
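To put rough numbers on that, here's a back-of-the-envelope sketch. It assumes PCIe 3.0's 8 GT/s per lane with 128b/130b encoding, treats DMI 3.0 as roughly equivalent to a PCIe 3.0 x4 link, and ignores packet/protocol overhead:

```python
# Approximate one-direction PCIe 3.0 bandwidth per lane:
# 8 GT/s, 128b/130b line encoding, 8 bits per byte.
lane_gbps = 8.0 * (128 / 130) / 8  # ~0.985 GB/s per lane

def link_bandwidth(lanes: int) -> float:
    """Approximate one-direction bandwidth of a PCIe 3.0 link, in GB/s."""
    return lanes * lane_gbps

x8_slot = link_bandwidth(8)     # ~7.88 GB/s
dmi_uplink = link_bandwidth(4)  # ~3.94 GB/s, the PCH-to-CPU bottleneck

# An x8 slot hung off the PCH could never move more than the DMI uplink allows:
effective = min(x8_slot, dmi_uplink)
print(f"x8 slot: {x8_slot:.2f} GB/s, effective behind the PCH: {effective:.2f} GB/s")
```

Half the slot's bandwidth would be thrown away before even counting the SATA, USB, and NIC traffic that also shares the same DMI uplink.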
 

LimeCrusher (Explorer), joined Nov 25, 2018, 87 messages
Oof, a post from nearly two years ago. I'd actually forgotten about that mess.
Man, like any newbie, I'm reading this goddamn forum :D Thanks for fixing that confusion.

Indeed, I found that information in the motherboard manual.

It would be a very stupid thing to do, as the PCH only has a x4 uplink to the CPU. They'd just be wasting lanes they could use for literally anything useful. If it's even possible, which I suspect isn't.
You mean that wiring the x8 PCIe slots to the PCH instead of the CPU would be very stupid, right? I'm aware that the PCH and CPU communicate over DMI, which is essentially a x4 PCIe link. Also, modern Intel CPU datasheets often state support for 16 lanes configured as one x16, two x8, or x8 plus two x4, while PCH datasheets mention only x1, x2, and x4 ports.
So I assume what you meant was: it's pointless to wire an x8 (or x16) slot to the PCH, because the PCH could never push that slot's traffic through its x4 link to the CPU. Rii... ggght?
 

Ericloewe (Server Wrangler, Moderator), joined Feb 15, 2014, 20,194 messages
You mean, wiring the x8 PCIe slots to the PCH instead than the CPU would be very stupid right?
Right, the extra lanes would go to waste.
 