SOLVED Server upgrade for ECC RAM

chuck32

Guru
Joined
Jan 14, 2023
Messages
623
Hello all,

When I purchased the server I had a different use case in mind (mainly tinkering with a few VMs and Docker, and, as many YouTube videos suggested, I thought I could also run TrueNAS as a VM). I then learned that a bare-metal install is better, and later I also upgraded from 32 GB to 64 GB RAM. So I deployed the machine in January this year:

OS Version: TrueNAS-SCALE-22.12.3.3
Product: B560M-ITX/ac
Model: Intel(R) Core(TM) i3-10100 CPU @ 3.60GHz
Memory: 62 GiB

Since then it has been bugging me that I'm not able to use ECC RAM, which from what I gathered is strongly recommended. I don't want to spend too much cash since the current server is only nine months old (my wife will probably inherit it, and her old office PC will retire / get sold for a few bucks).

So mainly my two questions would be:

1) Should I upgrade?

2) Would the following hardware be a good upgrade?

I'm planning to run Home Assistant and 2-3 Ubuntu VMs, and I use the machine for storage of our photos, documents etc. Currently I'm using 4x 4 TB HDDs plus 1x 4 TB HDD as a hot spare (maybe I will migrate to RAIDZ3 when I swap servers and rely on my B2 backup / local backup on my Windows machine during the migration; I know that RAIDZ2 plus one hot spare isn't the best config), plus a boot drive and 2 SSDs (mirrored, for the VMs). Current power draw is around 50 W (mainly idle, the VMs don't do much computing); if I could stay under 100 W after the upgrade, that'd be great.
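To sanity-check that sub-100 W hope I threw together a very rough idle budget. This is a sketch only: every per-component wattage here is an assumption (typical idle values I've seen quoted, not measurements of my hardware):

```python
# Very rough idle power budget for the planned build.
# Every figure is an assumed "typical idle" value, not a measurement.

components = {
    "CPU + board (idle)":  35,     # assumption for an older Xeon platform at idle
    '5x 3.5" HDD (idle)':  5 * 5,  # ~5 W per spinning drive, assumed
    "3x SSD":              3 * 1,  # ~1 W each at idle, assumed
    "4x RDIMM":            8,      # assumed
    "Fans / misc":         8,      # assumed
}

total = sum(components.values())
for name, watts in components.items():
    print(f"{name:<22} {watts:>3} W")
print(f"{'Total (DC estimate)':<22} {total:>3} W  (PSU losses come on top at the wall)")
```

With those assumptions it lands just under 100 W, but I'd treat that as a ballpark rather than a promise.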

I'm overwhelmed by the choices of mainboards and their names. This is what I came up with:


Mainboard: X10SRL-F (160 Eur used)
CPU: Intel Xeon E5-2686 v4 (80 Eur used) or Intel Xeon E5-2680 v4 (40 Eur used)
Cooler: still to be chosen
RAM: Micron 32 GB DDR4 ECC PC4-2400T MTA36ASF4G72PZ (2x 34 Eur used) or HP 32 GB DDR4 PC4-2133P ECC LRDIMM 4R (2x 39 Eur used)
Power supply: 500 W Seasonic Core GM Modular 80+ Gold (80 Eur new, or something comparable, probably the cheapest Seasonic modular PSU I can find)
Case: Fractal Define XL, but open to suggestions; 8x 3.5" bays would be nice (~100 Eur)

Total: 450-500 Eur (plus cooler, SSDs)
Edit: some notes on the selection:
Is this platform too old? Should I spend a bit more and get a newer generation?
My current machine reports around 93% idle over the past 6 months, so I wouldn't need the many cores / threads of the 2686, but for just 30 Eur more than the 2680 I'd take them. I think 8 threads may be enough for me anyway. The only CPU-intensive task currently is when paperless consumes new PDFs and does its OCR.

Any suggestions on the selected hardware would be appreciated, and also whether this is a worthwhile upgrade path to follow. I'm hoping to get at least 5 years of use out of that hardware. I know nobody can predict a failure, but I would assume the hardware isn't too old to last me some time. In my experience hardware either broke within 2 years or eventually got discarded due to age / performance.

While waiting for responses I did some further reading and came up with a more recent alternative:

Mainboard: X11SSM-F (160 Eur used)
CPU: Xeon E3-1240 v6 or Xeon E3-1230 v6 (80-130 Eur used)
Cooler: Scythe Mugen 2 (recycled from my old Sandy Bridge PC)
RAM: MEM-DR416L-CL01-EU21 (4x 55 Eur new)
Power supply: 500 W Seasonic Core GM Modular 80+ Gold (80 Eur new, or something comparable, probably the cheapest Seasonic modular PSU I can find)
Case: Fractal Define XL, but open to suggestions; 8x 3.5" bays would be nice (~100 Eur)

Total: 640-700 Eur (plus cooler, SSDs)
I run SCALE over CORE because when I initially set everything up I read about SCALE being better for VM deployment (so far no problems). This does however pose the memory-management problem that by default only half of the RAM is available for the ZFS cache (ARC). Since I upgraded to 64 GB I've had no problems so far (although occasionally swap gets used); usually 7-10 GB free, with 20 GB for services and 31 GB for cache. So while I would be limited to 64 GB RAM with this build, I doubt that this will be a problem in the future. I don't really plan to extend the storage on this system (maybe swap the 4 TB drives for 6 TB drives as the space requirements grow over time, but no additional pool or anything like that).
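For reference, this is roughly how I keep an eye on what the ARC is actually allowed to use on SCALE. A minimal sketch only: it assumes a Linux/OpenZFS system that exposes /proc/spl/kstat/zfs/arcstats, which is a kernel stats file rather than an official API:

```python
# Rough ARC usage check on a Linux/OpenZFS system (e.g. TrueNAS SCALE).
# Assumes /proc/spl/kstat/zfs/arcstats exists; field names come from OpenZFS kstats.

def read_arcstats(path="/proc/spl/kstat/zfs/arcstats"):
    stats = {}
    with open(path) as f:
        for line in f.readlines()[2:]:        # skip the two header lines
            parts = line.split()
            if len(parts) == 3:
                name, _type, value = parts
                stats[name] = int(value)
    return stats

if __name__ == "__main__":
    s = read_arcstats()
    gib = 1024 ** 3
    print(f"ARC size:  {s['size'] / gib:.1f} GiB")
    print(f"ARC limit: {s['c_max'] / gib:.1f} GiB")  # roughly half of RAM by default on SCALE 22.x
```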

If you recommend leaning towards option 2 I can adjust my budget accordingly. If you need any additional information I'll happily provide it.

Best regards!
 
Last edited:

chuck32

Guru
Joined
Jan 14, 2023
Messages
623
Not to be impatient, but if there is any information lacking, or anything else I should provide to get meaningful answers, I'm happy to do so :)
 

NugentS

MVP
Joined
Apr 16, 2020
Messages
2,947
I would (personally) get a bigger PSU. Spending money on the PSU is good for long-term stability (though I have no issues with Seasonic). Consider a larger Platinum unit.
A NAS does not need the latest and greatest hardware.
SMB tends to be single-threaded, so a CPU with decent clock speed is a bit better.
I (again personally) would prefer the more cores of the X10 to the fewer, faster cores of the X11 if I had VMs etc. in my use case. I also much prefer the additional memory that you could add to the X10 over the limited capacity of the X11 you mention.
 

Etorix

Wizard
Joined
Dec 30, 2020
Messages
2,134
Xeon E5 v4 is old but capable and can take large amounts of cheap RDIMM, while E3 v6 would limit you to smaller amounts of more expensive ECC UDIMM. As noted, the PSU is undersized for going up to 8 drives, and Platinum is better than Gold at the typical NAS power draw.
For the CPU, the sweet spot is probably the x630 or x640 parts: fewer cores (how many you need depends on your VMs) but higher clocks (good for SMB) than the top x680 parts.
 

chuck32

Guru
Joined
Jan 14, 2023
Messages
623
Thank you for your replies!

So basically you'd advise me to go the X10 route? I just checked, and either I got results different from my search term or the board was already sold out; the only affordable (sub 350 Eur) board I could find was an X10SRi-F, which for my applications shouldn't really matter versus the X10SRL-F. Since the board I found was the last one, I already ordered it just in case. I haven't heard back from the vendor if the latest BIOS is flashed or if they can flash it before shipping. On the other hand a v3 CPU is super cheap to get and I could flash it myself as a last resort.
I don't want to rush this build, but I did read the hardware recommendations guide before posting and already feel that I shouldn't be too far off with my configuration.

I agree with you, the RAM limit was concerning me a bit too. I don't need more currently but it would be nice to be able to upgrade. And yes, that UDIMM was ridiculously expensive.

I had a look at the PSU thread; currently I'm powering all the drives mentioned below with a 550 W PSU (be quiet!) without any problems.

[attached screenshot: current drive list]


So basically I would be looking at 600 W instead of 500 W. However, the ~100 W mark (calculated at 20% load) is hopefully still above my idle power draw. As for the debate whether to buy Gold or above, I'm not sure, depending on the initial cost, whether I'd really save money in the long run, if that's the only concern. I'm leaning towards the 750 W GX; maybe I'll throw in a 10 GbE NIC or an HBA for additional drives down the road. Would be a shame if I needed to swap PSUs then.
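On the Gold vs. Platinum question, this is the back-of-the-envelope calculation I'm basing my doubt on. A sketch only: the efficiency numbers are the nominal 80 PLUS (115 V) spec points at 20% load, and the average draw and electricity price are assumptions:

```python
# Rough estimate of how much a Platinum PSU saves over Gold at NAS-like loads.
# Efficiency numbers are the nominal 80 PLUS (115 V) spec points at 20% load;
# the actual curves, my idle draw and the electricity price are assumptions.

DC_LOAD_W = 60           # assumed average DC-side draw of the NAS
EFF_GOLD = 0.87          # 80 PLUS Gold @ 20% load (115 V spec)
EFF_PLATINUM = 0.90      # 80 PLUS Platinum @ 20% load (115 V spec)
PRICE_EUR_PER_KWH = 0.35
HOURS_PER_YEAR = 24 * 365

def wall_power(dc_load_w: float, efficiency: float) -> float:
    """AC power drawn at the wall for a given DC load and PSU efficiency."""
    return dc_load_w / efficiency

gold_kwh = wall_power(DC_LOAD_W, EFF_GOLD) * HOURS_PER_YEAR / 1000
plat_kwh = wall_power(DC_LOAD_W, EFF_PLATINUM) * HOURS_PER_YEAR / 1000

print(f"Gold:     {gold_kwh:.0f} kWh/year")
print(f"Platinum: {plat_kwh:.0f} kWh/year")
print(f"Saving:   {(gold_kwh - plat_kwh) * PRICE_EUR_PER_KWH:.2f} Eur/year")
```

At those assumed numbers the Platinum premium would take quite a few years to pay back, so for me the headroom argument weighs more than the efficiency one.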

So I'm looking at the following configuration:

Mainboard: X10SRL-F (155 Eur used)
CPU: E5-2640 v4 (10 Eur used)
Cooler: be quiet! Pure Rock 2 (33 Eur new)
Memory: Samsung 32 GB 2Rx4 PC4-2400T-R DDR4 registered ECC RDIMM* (60 Eur used per module; I'm unsure whether I won't just splurge on 4x 32 GB right off the bat)
PSU, one of:
- 650 W Seasonic Focus PX Modular 80+ Platinum (106 Eur new)
- 650 W Seasonic Focus GX Modular 80+ Gold (130 Eur new)
- 750 W Seasonic Focus GX Modular 80+ Gold (116 Eur new)
- 750 W Seasonic Focus PX Modular 80+ Platinum (can't source at the moment)
Case: Fractal Design Define R5 (115 Eur new)
HDD: 5x WD Red Plus 4 TB (already owned)
SSD: 1x 500 GB (already owned)
SSD: 2x 240 GB (already owned)

* corresponds to M393A4K40CB1-CRC, which is listed as tested memory on the Supermicro site

Edit: For future visitors: the X10SRL-F is a narrow ILM board, so the cooler listed above was not compatible. I went for the Noctua NH-U12DX i4 instead.

The associated costs (case, PSU, etc.) really add up, and memory still isn't too cheap if you want to go large. All in all I feel that this configuration should work for my purposes.

If you don't have any additional points I should consider, I think I'll move forward in the coming days.
 
Last edited:

NugentS

MVP
Joined
Apr 16, 2020
Messages
2,947
My "spare/test" NAS uses an X10SRi-f, with 128GB RAM in 16GB Dimms and a E5-2660 v3 which is a very similar CPU to what you are proposing there. Its got plenty of slots and 10 SATA3 Ports. The only issue I had was not enough PCIe lanes which is why I switched to a dual socket board and couldn't find a sensibly priced Epyc at the time.

Good board, fair expansion, good SATA, IPMI - that will do nicely. You will need an IPMI license for it to be really useful - but they are cheap. Shame there is no M.2 - but that can be fixed with a PCIe card, and you have a few useful slots for one.

Same case as my "HairyNAS" - don't ask. Its a good case BUT the default fans are a bit weak. When I built it - I then had to shut down as things were a bit toasty (the HDD's). I had to replace the fans with high static pressure fans on an adjustable fan controller (to keep the noise down) and tweak until things were right. I replaced all the fans, with 2 at the front and 1 at the rear. I haven't yet put any at the top.
 

chuck32

Guru
Joined
Jan 14, 2023
Messages
623
My "spare/test" NAS uses an X10SRi-f, with 128GB RAM in 16GB Dimms and a E5-2660 v3 which is a very similar CPU to what you are proposing there. Its got plenty of slots and 10 SATA3 Ports. The only issue I had was not enough PCIe lanes which is why I switched to a dual socket board and couldn't find a sensibly priced Epyc at the time.
Except for my cheap HBA, which currently only serves the VM pool and the hot spare, I didn't need any PCIe slots in my current build, so I will have plenty with that upgrade.

You will need an IPMI license for it to be really useful - but they are cheap.
I'm not sure I will even need that feature. I haven't connected the server to a keyboard / monitor since I set it up; I manage everything over the web GUI. But since it has IPMI and a VGA port, I will only need a VGA-to-HDMI adapter for the initial setup?

Shame there is no M.2 - but that can be fixed with a PCIe card, and you have a few useful slots for one.
10 SATA ports are enough for me currently. I have a 500 GB M.2 in my current build as the boot pool, since I did not realize I couldn't use that space for anything else. My plan would be to give that M.2 to my wife, take her old 240 GB SSD and pair it with the 256 GB SSD I already own for a mirrored boot pool. Then I take my remaining 500 GB SSD, maybe purchase another 500 GB SSD for cheap, and mirror those as my VM pool.

Same case as my "HairyNAS" - don't ask. Its a good case BUT the default fans are a bit weak. When I built it - I then had to shut down as things were a bit toasty (the HDD's). I had to replace the fans with high static pressure fans on an adjustable fan controller (to keep the noise down) and tweak until things were right. I replaced all the fans, with 2 at the front and 1 at the rear. I haven't yet put any at the top.
Currently the server lives in our office, which is also used most of the time as a bedroom. This is however only temporary, until our kid decides that nights can be slept through ;) We also have a spare room, not part of the apartment but part of our network via powerline adapters. I'll probably move the server there if it's too loud, until the office is purely an office again. I have tons of fans lying around, so hopefully airflow won't be a problem.

I'm still slightly concerned about the reliability of the old / used hardware, but ECC should beat new consumer hardware for long-term reliability.
 

Etorix

Wizard
Joined
Dec 30, 2020
Messages
2,134
I haven't heard back from the vendor if the latest BIOS is flashed or if they can flash it before shipping. On the other hand a v3 CPU is super cheap to get and I could flash it myself as a last resort.
Flashing the BIOS is done through the BMC. You don't even need a CPU for that. :wink:
 

sfatula

Guru
Joined
Jul 5, 2022
Messages
608
I'm not sure I will even need that feature. I haven't connected the server to a keyboard / monitor since I set it up, I manage everything over the web gui. But since it has IPMI and a VGA port I will only need a VGA to HDMI adapter for the initial setup?
Well, you might not need it, but you might appreciate it. Imagine you are on a trip and the server dies, who knows why, or you need to start it, or whatever - any reason that might mean you have to mess with it while you are not where the NAS is. With IPMI, you can do all that and more.

Or imagine it doesn't boot for some reason, etc. Maybe you want to change BIOS settings. It's nice to have, for sure.
 
Last edited:

NugentS

MVP
Joined
Apr 16, 2020
Messages
2,947
IPMI is surprisingly useful
 

PhilD13

Patron
Joined
Sep 18, 2020
Messages
203
I picked up an X10DRH-CT motherboard, which is a good motherboard, together with a Supermicro SuperStorage 6028R-E1CR16T 2U server chassis with redundant power supplies, for relatively cheap. I have filled it with 16x 8 TB drives and 2x 800 GB SSDs in the rear 2 slots, and it has been working flawlessly hardware-wise. The X10 board is a dual-processor one, but you can use just one processor if you don't need all the PCIe expansion slots. It's the system listed in my signature.

I agree IPMI is very useful, so if there is a motherboard that has it over one that does not, I suggest getting the one that does; it can be really helpful. TrueNAS SCALE will link to the IPMI web interface if there is one (look at the bottom of the page under SCALE networking), so no monitor is needed. IPMI also has its own address, so you can bring up the web interface any time you need it - no TrueNAS or other operating system needed.

I also have a much older 36-bay Supermicro system with a 24-slot expansion chassis attached. I forget what board it has offhand (AMD Opteron era), but for its age it runs TrueNAS SCALE or CORE well. I do have to use IPMI to install software, but I think that's down to the age of the machine.
 

chuck32

Guru
Joined
Jan 14, 2023
Messages
623
Thank you all!

I went ahead and started ordering parts :) The mainboard seller confirmed it's already flashed with the latest BIOS.

Seems like I got some reading to do on IPMI.

I have two remaining questions:

1) PSU - should I go for the 750W over 650W?

2) memory: when I follow the link from Supermicro to their store for compatible RAM I find the part number mentioned above. I could also get Samsung 32GB 2Rx4 PC4-2400T RA1-11-DC0 Server RAM ECC HP M393A4K40CB1-CRC0Q for way cheaper (29 Eur per 32 GB), but it has 0Q appended after CRC. I tried to find out the differences but couldn't find any useful information. Did Supermicro maybe just specify CRCXX, where XX can be any combination?

At 29 Eur per module I'd definitely go 128 GB from the start.

Do I also need to run memtest on the ECC memory before moving my disks to the new server? Any stress testing recommended before moving my pool?
 

NugentS

MVP
Joined
Apr 16, 2020
Messages
2,947
Stress testing required:
1. Memtest for at least 24 hours beforehand
2. Disk stress test - I suppose not needed, since I assume these are disks you have owned and run previously.

Go for the 750 W, preferably Platinum, for something running 24x7x365 for 10 years.
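If you want to script the disk part anyway, something along these lines kicks off SMART extended self-tests on a set of drives. A sketch only: it assumes smartmontools is installed, root privileges, and placeholder device names you'd adjust to your system:

```python
# Start SMART extended (long) self-tests on a list of drives before trusting them.
# Assumes smartmontools is installed and the script runs with root privileges.
# Device names below are placeholders - adjust to your system.

import subprocess

DRIVES = ["/dev/sda", "/dev/sdb", "/dev/sdc", "/dev/sdd", "/dev/sde"]

for dev in DRIVES:
    # '-t long' starts the extended self-test; it runs in the background on the drive
    subprocess.run(["smartctl", "-t", "long", dev], check=True)
    print(f"Started long self-test on {dev}")

print("Check progress/results later with: smartctl -a <device>")
```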
 
Last edited:

chuck32

Guru
Joined
Jan 14, 2023
Messages
623
Thank you!

The disks were burned in (incl. the hot spare, which was a used drive from my PC; I don't know why I purchased NAS drives for that) when I originally purchased them in January.

Memtest is what I figured, especially since these are used modules.

For the PSU I'll go 750W then, but probably gold, since the platinum is 50 Eur more expensive.
 

Etorix

Wizard
Joined
Dec 30, 2020
Messages
2,134
1) PSU - should I go for the 750W over 650W?
Yes. Don't size "just at the edge".
2) memory:
QVLs are always frustratingly short, and finding a specific reference from ten years back is an exercise in desperation.
I've never had any issue running RDIMM modules from Micron/Samsung/SK Hynix in any motherboard, and would just go for any manufacturer-branded module without bothering about the QVL.
Do I also need to run memtest on the ECC memory before moving my disks to the new server?
It cannot hurt, and should ease concerns about second-hand parts.
 

NugentS

MVP
Joined
Apr 16, 2020
Messages
2,947
Be glad you did buy the NAS drives. If you had bought WD Red (non-Pro) then there is a fair chance I would tell you to put them in landfill.
 

chuck32

Guru
Joined
Jan 14, 2023
Messages
623
I purchased 128 GB of the CRC0Q then :)

@NugentS I bought them for my personal computer; for the server I checked that I use the correct drives. I then became suspicious after I set up the server, when I recognized the model numbers in Task Manager on my PC. I kept one as another backup location and put the other one in as a hot spare. But I will probably either copy the files from my PC again or pull from B2, and recreate the pool as 5x 4 TB RAIDZ3 then.
Yeah, it was definitely necessary to pay attention to not end up with SMR drives.

Edit: ah what the.. I went for the platinum PSU.
 
Last edited:

NugentS

MVP
Joined
Apr 16, 2020
Messages
2,947
5x 4 TB in Z3 - that's, err, 2 data disks and 3 parity disks - seems a little wasteful. Maybe Z2, especially as you won't be able to change the 5-wide layout, at least in the near or medium term.

Now if you had 8 disks in total then Z3 would be a more sensible setup - and the R5 has 8 bays in the front and 2x 2.5" in the rear (for boot maybe).

I just consider >50% parity to be a little excessive.
 

PhilD13

Patron
Joined
Sep 18, 2020
Messages
203
TrueNAS will recommend a RAIDZ level when you select the disks for the pool. I would go with the default recommendation for the drives you have, which is probably (without checking) Z2; in the case of 4 TB drives I might even use Z1. You can play with drive configurations with the calculator here: https://wintelguy.com/zfs-calc.pl
It's the one I found the TrueNAS forums recommend.
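If you just want a quick first-order comparison before opening the calculator, a small sketch like this gets you in the ballpark (it ignores ZFS metadata/padding overhead and the TB-vs-TiB difference, so treat the numbers as rough):

```python
# First-order usable-capacity comparison for a handful of layouts with 4 TB drives.
# Ignores ZFS metadata/padding overhead and TB vs TiB, so the numbers are rough.

DRIVE_TB = 4

# (data-bearing drives in the vdev, drives' worth of redundancy)
layouts = {
    "5-wide RAIDZ1":            (5, 1),
    "5-wide RAIDZ2":            (5, 2),
    "5-wide RAIDZ3":            (5, 3),
    "4-wide RAIDZ2 + spare":    (4, 2),  # the spare contributes no capacity
    "2x 2-way mirrors + spare": (4, 2),  # striped mirrors: half of 4 drives usable
}

for name, (drives, redundancy) in layouts.items():
    usable = (drives - redundancy) * DRIVE_TB
    print(f"{name:<26} {usable:>2} TB usable of {drives * DRIVE_TB} TB raw")
```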
 

sfatula

Guru
Joined
Jul 5, 2022
Messages
608
I purchased 128 GB of the CRC0Q then :)

@NugentS I bought them for my personal computer; for the server I checked that I use the correct drives. I then became suspicious after I set up the server, when I recognized the model numbers in Task Manager on my PC. I kept one as another backup location and put the other one in as a hot spare. But I will probably either copy the files from my PC again or pull from B2, and recreate the pool as 5x 4 TB RAIDZ3 then.
Yeah, it was definitely necessary to pay attention to not end up with SMR drives.

Edit: ah what the.. I went for the platinum PSU.
5 disks in Z3 - well, if the idea is that it means you won't need backups, that's not so. Z3 is excessive for 5 disks, and it doesn't mean no backups are needed. That's why Z2 or striped mirrors shouldn't be thrown out. Striped mirrors allow you to add 2 disks at a time later should you wish to expand; a 5-disk Z2 means you have to add another 5 disks to add more capacity, but it has sufficient safety. Backups are for pool failures, restoring data, and other problems like fire, theft, etc.
 