Anyone using the Xeon E-2300 series? Motherboard confusion here...

paulinventome

Explorer
Joined
May 18, 2015
Messages
62
So, as per the title: there's a nice Xeon E-2324G out there. It has ECC support and onboard graphics, it's reasonably priced, and it doesn't have a huge power draw. Looks ideal for a NAS build.

But I am really struggling to find a suitable motherboard here in the UK. Ideally I need 10GbE on board. I want to use this for a high-capacity spinning-disc RAID 6 pool and also a PCIe NVMe-based volume for shared video and film post-production work. That volume would be backed up all the time, or maybe it could be RAID 5, but I've not got that far yet.

There are myriad non-server boards and a handful of server boards, but when it comes to finding something suitable in the UK that is actually purchasable, there's very little.

So, from a real-world perspective, can I get ECC out of a different chipset board, and is running a non-server board very much frowned upon? I have been running several Synology NASes for 8+ years and now it's time to change. Once the NAS is bedded in, I would just like to leave it running.

So, any recommendations for LGA1200-socket chipsets and boards? What I'm really looking for is:

- 8 SATA connections
- M.2 onboard for boot
- 10GbE ideally (or a PCIe card if it has to be)
- PCIe slot for a 4x NVMe card, maybe two at some point
- Wake on LAN
- Reasonable power consumption

Would really appreciate some thoughts and advice...

thanks
Paul
 

Nick2253

Wizard
Joined
Apr 21, 2014
Messages
1,633
In my experience, it's almost always cheaper to get an add-in 10GbE NIC than to buy a motherboard with one. Also, I'd recommend SFP+ over copper if you don't already have the infrastructure: SFP+ is less expensive, better supported, and less power-hungry.

ECC is really up to you and your risk tolerance. However, I'd strongly recommend it. ECC is such low-hanging fruit for protecting your data while it's in memory that I'd question the desire to go with ZFS if you aren't willing to go with ECC. ZFS without ECC seems backwards to me.
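
If you want to confirm ECC is actually active once the box is built (not just "supported" on a spec sheet), dmidecode reports the memory controller's error-correction mode. A quick sketch, assuming root and that dmidecode is available on your install:

import subprocess

# DMI type 16 ("Physical Memory Array") includes the error-correction mode.
out = subprocess.run(["dmidecode", "--type", "16"],
                     capture_output=True, text=True, check=True).stdout
for line in out.splitlines():
    if "Error Correction Type" in line:
        # Expect "Single-bit ECC" or "Multi-bit ECC"; "None" means the
        # RAM is running without ECC despite what the parts promise.
        print(line.strip())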

M.2 is probably overkill for a boot device, so I wouldn't hold on to that as a hard-and-fast requirement. A motherboard with an extra SATA port will work just as well, and you can always go the USB boot route (though I don't recommend USB unless you are using extremely high-end USB drives).

Running multiple NVMe drives at full speed from a single slot requires PCIe bifurcation, and not all motherboards support it. Otherwise, you need a PCIe card with a PCIe switch (multiplexer) on it.
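
Once bifurcation is set in the BIOS, it's worth confirming that each NVMe drive actually negotiated its full link width. A rough sketch that parses lspci -vv on Linux (run as root; output format can vary between lspci versions):

import re
import subprocess

out = subprocess.run(["lspci", "-vv"], capture_output=True, text=True).stdout

# Devices are separated by blank lines; NVMe drives show up as
# "Non-Volatile memory controller". LnkSta is the negotiated link state.
for block in out.split("\n\n"):
    if "Non-Volatile memory controller" in block:
        name = block.splitlines()[0]
        link = re.search(r"LnkSta:\s*Speed\s*([^,]+),\s*Width\s*(x\d+)", block)
        if link:
            # Behind a correctly bifurcated x16 slot, each drive should be x4.
            print(name)
            print("  negotiated:", link.group(1).strip() + ",", link.group(2))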

Wake on LAN is supported on pretty much every motherboard and NIC out there these days.
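
For what it's worth, WoL is just a UDP broadcast of a "magic packet": 6 bytes of 0xFF followed by the target's MAC address repeated 16 times. A minimal sketch for waking the NAS from another machine on the LAN (the MAC below is a placeholder):

import socket

def send_magic_packet(mac: str, broadcast: str = "255.255.255.255", port: int = 9) -> None:
    # Magic packet: 6 x 0xFF, then the MAC (as raw bytes) repeated 16 times.
    mac_hex = mac.replace(":", "").replace("-", "")
    payload = bytes.fromhex("ff" * 6 + mac_hex * 16)
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(payload, (broadcast, port))

send_magic_packet("3c:ec:ef:01:23:45")  # placeholder MAC of the NAS's NIC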
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
So, from a real-world perspective, can I get ECC out of a different chipset board
'fraid not. Thanks Intel!
and is running a non-server board very much frowned upon?
"Frowned upon" is relative, but typically the sort of non-server boards that would work decently are just as expensive or more. These days, availability is a mess for basically everything, which throws a wrench into the typical equation.

That said, for LGA1200, the standard choice would be the Supermicro X12STH-F and variants.
 

paulinventome

Explorer
Joined
May 18, 2015
Messages
62
In my experience, it's almost always cheaper to get an add-in 10GbE NIC than to buy a motherboard with one. Also, I'd recommend SFP+ over copper if you don't already have the infrastructure: SFP+ is less expensive, better supported, and less power-hungry.

ECC is really up to you and your risk tolerance. However, I'd strongly recommend it. ECC is such low-hanging fruit for protecting your data while it's in memory that I'd question the desire to go with ZFS if you aren't willing to go with ECC. ZFS without ECC seems backwards to me.

M.2 is probably overkill for a boot device, so I wouldn't hold on to that as a hard-and-fast requirement. A motherboard with an extra SATA port will work just as well, and you can always go the USB boot route (though I don't recommend USB unless you are using extremely high-end USB drives).

Running multiple NVMe drives at full speed from a single slot requires PCIe bifurcation, and not all motherboards support it. Otherwise, you need a PCIe card with a PCIe switch (multiplexer) on it.

Wake on LAN is supported on pretty much every motherboard and NIC out there these days.
Thanks for the reply.

I do have some 10GbE cards, so that's fine. I already have a copper 10GbE network and run that to the Synology now.

ECC is a must, I feel, for all the reasons you say, but getting support for it is complicated. I'd considered the AMD route, as apparently the Ryzens have ECC support, but the documentation is a bit murky. I wanted onboard graphics, and the Ryzen G variants *don't* support ECC.

Thanks for the comment about bifurcation; that will let me go down another rat hole of working out what it is and how it works!

I'm hoping that with TrueNAS I'll have better control over drives spinning down: my large storage pool can spend most of its time spun down, being more power-efficient, while the NVMe pool is more active. With power costs as they are, over the years this will make a real difference.

thanks again
Paul
 

paulinventome

Explorer
Joined
May 18, 2015
Messages
62
'fraid not. Thanks Intel!

"Frowned upon" is relative, but typically the sort of non-server boards that would work decently are just as expensive or more. These days, availability is a mess for basically everything, which throws a wrench into the typical equation.

That said, for LGA1200, the standard choice would be the Supermicro X12STH-F and variants.
Thanks, yes, I am noticing that the Xeons themselves are scarce, let alone the motherboards...

A consideration is going back a generation or two, but then I'm dubious about the second-hand market in terms of robustness. I've done the whole eBay thing many times, but you really don't know how hard something has been driven, or what the life expectancy of a motherboard/CPU is.

But in terms of actual availability, that could be an option.

Kindest
Paul
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
The X11SC* series should still be available new, and it's not very different from the X12ST* stuff. More details here: https://www.truenas.com/community/resources/so-you’ve-decided-to-buy-a-supermicro-x11-xeon-e-coffee-lake-board.107/
 

paulinventome

Explorer
Joined
May 18, 2015
Messages
62
The X11SC* series should still be available new, and it's not very different from the X12ST* stuff. More details here: https://www.truenas.com/community/resources/so-you’ve-decided-to-buy-a-supermicro-x11-xeon-e-coffee-lake-board.107/
Thanks again.

I am struggling to find a combination of Xeon and Supermicro in stock here in the UK. I may have to readjust my plans!

I originally was going down the Ryzen route because, well, PCIe lanes and TDP. What stopped me was reading that only the Threadrippers had ECC, but it seems that all the Ryzens apart from the ones with onboard graphics have ECC support.

So are people using Ryzen with ECC happily with TrueNAS, or am I missing something?

thanks
Paul
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
There are a few caveats with Ryzen:
  • The low-end server platform is non-existent with AMD; everything is rather ad hoc, which means you're at the mercy of motherboard vendors, or worse...
  • FreeBSD support for the platform is still not as mature as desired, but TrueNAS 13 should fix this.
  • Because of the above, there's not much of a community around them like there is for Intel parts.
 

Nick2253

Wizard
Joined
Apr 21, 2014
Messages
1,633
A consideration is going back a generation or two, but then I'm dubious about the second-hand market in terms of robustness. I've done the whole eBay thing many times, but you really don't know how hard something has been driven, or what the life expectancy of a motherboard/CPU is.
In my experience, it's unlikely that you'd run into a hardware problem with second-hand hardware. Silicon is extremely robust, and unless you do crazy things like over-volt it (nearly impossible with server-class hardware), its life expectancy is on the order of decades.

If you do run into problems, it's usually with PSUs going bad, though that's a component that can usually be found pretty easily if needed. Other things with moving parts, like fans or drives, can also be a source of problems (but again, you can usually find those components if needed).

I would generally stay away from second-hand HDDs.

With all that being said, for a homelab system the risks of second-hand hardware are usually worth the savings. Some downtime here or there is relatively inconsequential next to what you save on hardware cost. In a business setting, however, that downtime is usually *highly* consequential, so the cost of brand-new kit is worth it.
 

ChrisRJ

Wizard
Joined
Oct 23, 2020
Messages
1,919
I'm hoping that with TrueNAS I'll have better control over drives spinning down: my large storage pool can spend most of its time spun down, being more power-efficient, while the NVMe pool is more active. With power costs as they are, over the years this will make a real difference.
This is not something that TrueNAS is particularly good at, the reason being that it is made for the enterprise, where spinning down disks is not a requirement. Some people have done it, though, and you will find plenty of threads on the subject.

You should also check this against your disks' specifications; data-center disks are not good here. I remember that Synology, about two years ago, explicitly mentioned that spin-down is not supported with e.g. the Seagate Exos models.
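
If you do experiment with it anyway, the usual mechanism on TrueNAS CORE (FreeBSD) is the drive's standby timer, set via the GUI's HDD Standby option or directly with camcontrol. A rough sketch, with placeholder device names (check yours with "camcontrol devlist" first):

import subprocess

HDDS = ["ada0", "ada1", "ada2"]  # placeholder SATA device names

for disk in HDDS:
    # Ask each drive to spin down after ~30 minutes of inactivity.
    # camcontrol is FreeBSD-specific; on Linux (SCALE) hdparm -S is the analogue.
    subprocess.run(["camcontrol", "standby", disk, "-t", "1800"], check=True)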
 

paulinventome

Explorer
Joined
May 18, 2015
Messages
62
There are a few caveats with Ryzen:
  • The low-end server platform is non-existent with AMD; everything is rather ad hoc, which means you're at the mercy of motherboard vendors, or worse...
  • FreeBSD support for the platform is still not as mature as desired, but TrueNAS 13 should fix this.
  • Because of the above, there's not much of a community around them like there is for Intel parts.
Some good points here. I have no need to be on any bleeding edge with this, so I will refocus on the Xeon platform.

thanks for saving me headaches
Paul
 

paulinventome

Explorer
Joined
May 18, 2015
Messages
62
In my experience, it's unlikely that you'd run into a hardware problem with second-hand hardware. Silicon is extremely robust, and unless you do crazy things like over-volt it (nearly impossible with server-class hardware), its life expectancy is on the order of decades.

If you do run into problems, it's usually with PSUs going bad, though that's a component that can usually be found pretty easily if needed. Other things with moving parts, like fans or drives, can also be a source of problems (but again, you can usually find those components if needed).

I would generally stay away from second-hand HDDs.

With all that being said, for a homelab system the risks of second-hand hardware are usually worth the savings. Some downtime here or there is relatively inconsequential next to what you save on hardware cost. In a business setting, however, that downtime is usually *highly* consequential, so the cost of brand-new kit is worth it.
I've had issues with both CPUs and motherboards off eBay in the past. But it also has to be said that usually they either work or they don't, rather than failing later.

I think I've found an open-box X99 board, and as I'm looking for a slightly older Xeon, I will just have to try my luck again.

Fair point on the PSU, and I would never use a second-hand hard drive!

Whilst this is a little more than a homelab, it's not bleeding edge either. It's a case of building a NAS that is specifically what I need to work with. The Synologies have been great, but the SSD/NVMe side is not where I want it to be.

thanks
Paul
 

paulinventome

Explorer
Joined
May 18, 2015
Messages
62
This is not something that TrueNAS is particularly good at, the reason being that it is made for the enterprise, where spinning down disks is not a requirement. Some people have done it, though, and you will find plenty of threads on the subject.

You should also check this against your disks' specifications; data-center disks are not good here. I remember that Synology, about two years ago, explicitly mentioned that spin-down is not supported with e.g. the Seagate Exos models.
Okay, so that is interesting.

The Synologies I have, the 3612xs and the 1811, will both spin down the drives overnight, for example, and they've been doing this for 8+ years without any issues I am aware of. These are NAS drives, WD Reds mostly.

My plan was that the large spinning-disc pool would really be storage, backed by my LTO-8 for archive. The spinning-disc pool would spin down overnight and when not in use.

The day-to-day work NAS would instead be SSD and NVMe based. I do a lot of video work: some of it is single large files, but in post-production and VFX work it tends to be single-frame file sequences. I figured solid state would allow me to saturate my 10GbE network, and seeking would have a lot less latency this way when it's file by file. Plus, as the whole lot is in my office, it would be near silent...

So thanks for this heads-up; I will now start hunting those threads before I embark on anything too costly!

thanks
Paul
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
I think you might be in for a lot of disappointment based on a misunderstanding of ZFS and caching/tiering.
The day-to-day work NAS would instead be SSD and NVMe based ... I figured solid state would allow me to saturate my 10GbE network, and seeking would have a lot less latency this way when it's file by file. Plus, as the whole lot is in my office, it would be near silent...
If what you're talking about here is that you will use a pool of NVMe/SSD to do the bulk of the work (then using snapshots and replication to have that on the HDDs), that's fine and all is well.

If you're expecting some kind of magic caching/automatic tiering on top of an HDD pool that will mean that the disks never spin up, that's totally not going to happen.
 

NugentS

MVP
Joined
Apr 16, 2020
Messages
2,947
Thanks again.

I am struggling to find a combination of Xeon and Supermicro in stock here in the UK. I may have to readjust my plans!

I originally was going down the Ryzen route because, well, PCIe lanes and TDP. What stopped me was reading that only the Threadrippers had ECC, but it seems that all the Ryzens apart from the ones with onboard graphics have ECC support.

So are people using Ryzen with ECC happily with TrueNAS, or am I missing something?

thanks
Paul
Try bargainhardware.com
 

paulinventome

Explorer
Joined
May 18, 2015
Messages
62
I think you might be in for a lot of disappointment based on a misunderstanding of ZFS and caching/tiering.

If what you're talking about here is that you will use a pool of NVMe/SSD to do the bulk of the work (then using snapshots and replication to have that on the HDDs), that's fine and all is well.

If you're expecting some kind of magic caching/automatic tiering on top of an HDD pool that will mean that the disks never spin up, that's totally not going to happen.
No, I'm expecting this to be a manual process. I hope I am not expecting magic!

One pool of spinning discs and a pool of NVMe, both separate. If there is a way to snapshot that to a section of spinning disc - awesome.

I'm aiming for around 96TB of spinning storage and 8TB of NVMe to start with. I would like a few extra SSDs for a home media pool as well, but I'm not so concerned with that. The 96TB would be under RAID 6; I am not yet sure about the NVMe. The thing is, NVMe is actually pretty robust: most individual NVMe drives have ECC and even degrees of redundancy onboard. In all honesty I don't even know if the NVMe would benefit from RAID - the performance of these would saturate a 10GbE link *anyway*. So is there any benefit to RAID or a parity drive for NVMe?

thanks
Paul
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
If there is a way to snapshot that to a section of spinning disc - awesome
There is... Replication tasks.
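
Under the hood, a replication task is just a zfs send of a snapshot piped into zfs recv. If you ever want to drive it manually, it's roughly this (pool and dataset names are placeholders for your layout):

import subprocess

SNAP = "fastpool/work@nightly"  # placeholder: snapshot on the NVMe pool
DEST = "tank/backup/work"       # placeholder: dataset on the HDD pool

subprocess.run(["zfs", "snapshot", SNAP], check=True)

# First run is a full send; subsequent runs would use "zfs send -i <prev>"
# so only the blocks changed since the previous snapshot get shipped.
send = subprocess.Popen(["zfs", "send", SNAP], stdout=subprocess.PIPE)
subprocess.run(["zfs", "recv", "-F", DEST], stdin=send.stdout, check=True)
send.stdout.close()
send.wait()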

In all honesty I don't even know if the NVMe would benefit from RAID - the performance of these would saturate a 10GbE link *anyway*. So is there any benefit to RAID or a parity drive for NVMe?
Yes... NVMe drives, and SSDs in general, have much higher IOPS capability per drive, so the usual concern with RAIDZ in IOPS-heavy use cases (a RAIDZ vdev delivers the IOPS of one single disk, which means around 300 per HDD, hence per vdev) is mitigated a bit, but not entirely, by the more capable drives...

If you're just using that pool for stuff that is mostly sequential reads and writes of large files (maybe that's how it will work for you, depending on how your editing software does caching and writing), and it's all async, then all will be OK on RAIDZ.

If you use sync writes and/or the IOPS requirement is higher due to things like scrubbing through videos or lots of partial writes that aren't sequential in the file, you may want to pay a bit more attention to the pool geometry (and consider including a SLOG to reduce how much it will suck).
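
To make the geometry point concrete, here are the two usual layouts for four NVMe drives, sketched as the underlying zpool commands (device names are placeholders; CORE/FreeBSD would name them nvd0..nvd3):

import subprocess

DISKS = ["nvd0", "nvd1", "nvd2", "nvd3"]  # placeholder device names

# Option A: one RAIDZ1 vdev. Best usable capacity (3 of 4 drives),
# but the vdev delivers roughly the IOPS of a single drive.
subprocess.run(["zpool", "create", "fastpool", "raidz1", *DISKS], check=True)

# Option B: two mirror vdevs. Half the usable space, but IOPS scale
# with the number of vdevs, which helps sync/random workloads.
# subprocess.run(["zpool", "create", "fastpool",
#                 "mirror", DISKS[0], DISKS[1],
#                 "mirror", DISKS[2], DISKS[3]], check=True)

# If sync writes hurt, a SLOG is just a dedicated log vdev:
# subprocess.run(["zpool", "add", "fastpool", "log", "nvd4"], check=True)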
 