Supermicro X11 board

Status
Not open for further replies.

Halk

Dabbler
Joined
Aug 22, 2018
Messages
38
So I've just moved over to FreeNAS, and it looks like my data isn't going to be lost (I was on Debian with a failing Z2 array and, rather than replace drives like-for-like, upgraded the drive size)...

I'm now looking at getting ECC RAM which means upgrading to a compatible board/CPU. There's lots I need to consider but one of them is the motherboard.

I've been reading the very helpful breakdown on here and looking at the differences between the boards.

At present I have an LSI HBA which is compatible and has 2 SAS ports and 8 drives connected.

The two questions I need to answer about this are...

1) Is there any advantage to going up to higher-end boards with more ports if all I'll ever use is one for the boot drive, and maybe a few more for backup years down the line?

2) This board seems limited to 64GB of memory. I'm running 8x12TB drives, and it's really a media server, although I may also run some VMs on it. Is that 64GB a hard limit, or is that just with 16GB DIMMs, and if I used 32GB DIMMs I could have 128GB?
 

joeinaz

Contributor
Joined
Mar 17, 2016
Messages
188
Which specific board are you considering? My X8 board goes to 192GB of RAM; my X9, 512GB; and my X10, 32GB. The type of memory also matters: DDR4 is more expensive than DDR3, and UDIMMs are more expensive than RDIMMs. Right now, the X9 boards offer the best deal for those who need a large amount of relatively inexpensive memory and cost-effective CPUs.
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
This board seems limited to 64GB of memory. I'm running 8x12TB drives, and it's really a media server, although I may also run some VMs on it. Is that 64GB a hard limit, or is that just with 16GB DIMMs, and if I used 32GB DIMMs I could have 128GB?
You didn't specify a particular board. Supermicro makes a whole range of X11 series boards.
You can save significantly on the cost of memory by using an older generation board with Registered DDR3 memory and still be able to upgrade to a higher total amount of memory.
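For a ballpark on whether 64GB is "enough", the old community rule of thumb is roughly 1GB of RAM per TB of raw storage on top of a small baseline. A minimal sketch, where the `suggested_ram_gb` helper and its 8GB baseline are illustrative assumptions, not a spec:

```python
# Rough sanity check using the community "~1 GB of RAM per TB of raw
# storage" rule of thumb (a guideline, not a hard requirement).
# 'base_gb' is an assumed baseline for the OS itself, not a spec.
def suggested_ram_gb(drive_count: int, drive_tb: int, base_gb: int = 8) -> int:
    """Baseline system RAM plus ~1 GB per TB of raw pool capacity."""
    return base_gb + drive_count * drive_tb

# The 8 x 12 TB pool from the original post:
print(suggested_ram_gb(8, 12))  # 104 -- comfortably over a 64GB board cap
```

By that guideline an 8x12TB pool would already sit above the 64GB ceiling, which is one more argument for a platform with headroom.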
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
At present I have an LSI HBA which is compatible and has 2 SAS ports and 8 drives connected.
You didn't specify the model here either, but you should have it flashed to the IT-mode firmware regardless. Depending on the model, some LSI controllers can connect between 128 and 256 drives through SAS expanders. You'll have to give more details for your questions to get useful answers.
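For a typical SAS2008-based card (for example a 9211-8i), the flashing sequence looks something like the sketch below. The `2118it.bin` and `mptsas2.rom` file names are for that specific model and are assumptions here; they must match your exact card, and you should record the SAS address first so it can be restored.

```shell
# Sketch only for a SAS2008-based card -- firmware file names must match
# your exact model; do not run these blindly.
sas2flash -listall                               # list adapters, note index and SAS address
sas2flash -o -e 6 -c 0                           # advanced mode: erase flash on adapter 0
                                                 # (do NOT reboot between erase and flash)
sas2flash -o -f 2118it.bin -b mptsas2.rom -c 0   # write IT firmware + optional BIOS ROM
# If the SAS address was lost: sas2flash -o -sasadd <original address> -c 0
```

This is a firmware-modifying command sequence, so treat it as a reference fragment, not a script to paste.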
 

Halk

Dabbler
Joined
Aug 22, 2018
Messages
38
I was looking at the entry level board, the X11SSL-F.

Once my data is on the new array I'll be flashing the LSI card. At the moment I'm trying to do nothing but get the data off. 2 of the drives chuck out errors frequently so I don't want to do anything more than get the data safe to start with. I'm not likely to use expanders or need more than the 8 drives the card can currently take.

I'm also aware it might be a good idea to go hunting on eBay for second-hand server gear... however, I wanted to spec up an off-the-shelf system so that I have a frame of reference for when I do go looking.

As for the tip about the X9 then I'll look into that as well. The goal with the X11 was to find a system I could buy the parts from retailers online and build a competent server with ECC RAM in the most cost effective way.
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
I'm trying to do nothing but get the data off. 2 of the drives chuck out errors frequently so I don't want to do anything more than get the data safe to start with
Sounds entirely reasonable.
I was looking at the entry level board, the X11SSL-F.
Nothing wrong with that system board. The chipset and CPUs that are compatible with that board will not address more than 64GB of memory. It is a hard limit for the architecture.
however I wanted to spec up an off the shelf system so that I have a frame of reference for when I do go looking
You may find that the big selling point of the older hardware is the price of memory, depending on how much memory you go with. My NAS was on a previous-generation board with a hard limit of 32GB, and when I went shopping I knew I needed a new system board and processor to address more memory, but the price of DDR4 memory is crazy high. Anyhow, I built my "new" system around this board:
https://www.ebay.com/itm/Supermicro...2011-IPMI-w-Heat-Sink-I-O-Shield/113216257183
It uses a Xeon E5 processor and accepts Registered ECC DDR3 memory, which is very economical right now. Building a system with 64GB of memory, I saved over $200, and this board can still take more memory if I want it later.
 

Halk

Dabbler
Joined
Aug 22, 2018
Messages
38
I've just astounded myself with the price of memory. I've built loads of PCs in the past, and the memory was always something you just picked to suit the rest of the hardware.

It looks like this time around it'll be the polar opposite. I'm likely to find some second-hand ECC DDR3 and then find hardware that works with it. I didn't expect the difference to be so stark, and I'm now thinking it's not even worth the time to finish the spec I had started; I should work backwards from the memory instead.

I'd noticed that there are boards with 8 slots, so I've had a look at eBay, and 64GB of DDR3 ECC is £220 or so - steering clear of 8x8GB and going for 4x16GB (and not the 16x4GB I nearly looked at!).

That easily opens the door to 128GB of RAM if I see it cheap, which would mean I could run all the VMs I need too.

Quick look says not all E5 CPUs take DDR4? Or is this board a way to make that happen....

In any case back to the drawing board and it'll start with RAM.

Thanks!
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,995
So, to discuss the SATA expansion question: the X11SSL-F only has 6 SATA ports; maybe look at the X11SSM-F, which has 8 SATA ports, assuming you haven't purchased the motherboard already. How you add more drives will depend on whether you want to run an LSI card in IT mode or just add a normal HBA card like I have been doing. To be honest, it all comes down to what you want to do and whether your case can remove heat well.

If you are planning on running VMs, are you talking about running VMs on FreeNAS, or running FreeNAS in a VM? I run ESXi on my X11 system with FreeNAS on top of that; it works great, but maybe that is not what you want to do. If you want to build a large system that does a lot of things, I recommend you write down what you want from it and then design it properly.
 

Halk

Dabbler
Joined
Aug 22, 2018
Messages
38
It's likely to be ESXi with FreeNAS running in a VM. The only barrier I can think of, really, would be making sure I can plug USB MiFi devices into the server and have them presented to each VM. I can do that with the Windows server running VMware at the moment, but it's been the plan for a while now to move to ESXi and I'm not sure if that will stop it.

At the moment I have 2 servers, both basically re-purposed desktop equipment. I could likely sell much of that to make a large dent in the server costs - although that's not essential. One server is for VMs, and the other is now my NAS, running FreeNAS (it was Debian up to a few days ago).

I don't think there's a rush to get the gear so I'm already planning on doing what you suggest and working out what I want and what I can get and seeing if the two can tie up.

I'm really asking whether the LSI HBA is going to be the best solution for the job, or whether there's some benefit in having a motherboard with SATA ports on it. All I can really think of is PCIe lanes, but the last time I looked at that was before Windows 7 came out, and I suspect that if I looked now the bandwidth would be sufficient either way. I think I'd kick myself if I got a slightly cheaper motherboard with fewer SATA ports or a lesser SATA controller, stuck with the LSI card, and later found out it wasn't ideal.
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,995
I'm more asking if the LSI HBA is going to be the best solution for the job or if there's going to be some kind of benefit in having a motherboard with sata ports on it.
I believe you said you were going to use this primarily as a media server, which means relatively slow transfer rates (keeping things in perspective), so SATA ports on the motherboard are fine. HOWEVER, if you are planning to use ESXi, then you need to plan how to pass through either the entire SATA controller (the best way) or individual drives using RDM (not the best way, but it works). You will need to understand datastores for ESXi. With all that said, you should be looking for an HBA to pass your hard drives through to the FreeNAS VM. You can use an LSI card in IT mode, or an HBA like the one I use. Again, you need to figure out the use case for your system.

Let's say you want to run a few VMs; how much storage will you need for each VM? Let me provide an example:
1) FreeNAS VM = 16GB Boot Drive
2) Windows 10 VM = 100GB Drive
3) Ubuntu VM = 80GB Drive
4) Redhat VM = 80GB Drive
That's a total of 276GB, so you'll want at least a 300GB drive to store your VMs on and for ESXi to boot from. You can of course add hard drives on your motherboard's SATA ports for more datastores to keep things simple and clean. If you plan to use SSDs, then you could buy a single 1TB SSD and have a great time. Add a few more SSDs and even more joy! While your FreeNAS VM is running, you would be able to store all your extra data there. When using the VMXNET3 NIC within ESXi, you will notice ultra-fast data transfers. It's pretty nice to play with.
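The sizing above is just addition plus headroom; as a trivial sketch (the VM names and figures are the example's, not a recommendation):

```python
# Size an ESXi datastore by summing per-VM virtual disk allocations.
# Figures are the example above; leave headroom on top for ESXi itself,
# VM swap files, and snapshots.
vm_disks_gb = {
    "FreeNAS boot": 16,
    "Windows 10": 100,
    "Ubuntu": 80,
    "Redhat": 80,
}

total_gb = sum(vm_disks_gb.values())
print(total_gb)  # 276 -- hence a ~300GB datastore with headroom
```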

EDIT: You might want to read a thread I started a few years back called "My Dream System I Think". Just search for it and my name and you should locate it. It has a lot of great data in it, and a lot of off-topic chit-chat.
 

Halk

Dabbler
Joined
Aug 22, 2018
Messages
38
Yes, it's to replace the current media server - or in fact it already has! If I'm replacing the motherboard/CPU/RAM on the media server, then it may be a good idea to amalgamate the other server, which is just for VMs, into it as well.

Sounds like if I didn't already have the LSI HBA I should be considering buying one anyway!

I'd be running a single drive, perhaps a 512GB SSD, distributed between all of the non-FreeNAS VMs. At the moment they get 30GB each and it's fine. The FreeNAS VM would get its own drive and the LSI HBA with 8 drives on it. I could add a 60GB drive for the ESXi boot drive if needed too.

If I did dump the servers I have, then I'd have several spare 120GB/60GB/30GB SSDs and a spare Polaris 512GB M.2. I'm sure I could get things working between all that. I don't imagine whatever motherboard I bought would have an M.2 slot on it, but I could always buy a PCIe adapter card for the M.2 and it'd be plenty fast. Possibly a spare 1TB SSD and another 512GB M.2 too, if I upgrade my main desktop to the 2TB M.2 I've been thinking about.

Oh and if anyone does want to read the thread Joe mentioned, here it is.
 

pro lamer

Guru
Joined
Feb 16, 2018
Messages
626
Quick look says not all E5 CPUs take DDR4?
Of course not all :) Sandy Bridge and Ivy Bridge take DDR3

Sent from my mobile phone
 

Halk

Dabbler
Joined
Aug 22, 2018
Messages
38
OK - that makes sense; I think the memory controller is integrated into the CPU these days.
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,995
The FreeNAS VM would get its own drive
That really isn't needed but it is up to you.

if I upgrade my main desktop to a 2TB m.2 which I have been thinking about too.
I bought a 1TB SSD almost a year ago and installed Windoze 10 on it. It sits in my main computer, not powered on, and instead I'm running Windoze 7 on my 512GB SSD. I do need to bring the Win10 install online and grab some updates.
 

Halk

Dabbler
Joined
Aug 22, 2018
Messages
38
I buy games on Steam and never get around to playing them, so I do need somewhere to store them - though I could of course not download them and they'd be exactly as much use to me....

The FreeNAS VM with its own drive would just be tidier overall, and it would use one of the old SSDs that are lying around!
 

joeinaz

Contributor
Joined
Mar 17, 2016
Messages
188
I have three primary systems I am evaluating to determine which I will keep as my primary FreeNAS system. I am currently testing a system with an X8DTE motherboard, running FreeNAS under ESXi with a single 6-core E5640 CPU, 48GB of RAM, and 12 disks. The nice thing about this particular motherboard is that with two CPU sockets and 12 DIMM slots, I can cost-effectively expand CPU and memory if needed. My current configuration has 48GB of RAM using only half the memory slots. Intel 5600-series CPUs go for less than $10 on eBay, and the 8GB DIMMs are around $25 each.

My second system is an X9SRL-F that takes an Intel E5-2600 series CPU (V1 or V2). The motherboard has 8 DIMM slots but will support 64GB DIMMs for a total of 512GB of RAM. While it is a single-socket system, the E5-2600 socket will support up to 12 cores in a single CPU. The motherboard uses the same inexpensive memory the X8 uses, and CPUs start at $30.

My third board is an X10SAE, which has an Intel 1150 socket, meaning it can use both server and desktop CPUs. It uses UDIMM memory, which is more expensive than RDIMM memory, and the motherboard is limited to 32GB of RAM. This system was my primary NAS, but now that I am looking to incorporate an LTO tape drive into my storage appliance, the larger memory in my older systems gives me more options with virtualization.
 