Sanity check my hardware before I order

Boerny41

Cadet
Joined
Jul 15, 2023
Messages
5
Hi there,

I've been running TrueNAS virtualised on old hardware for a year now and have finally decided to build something new/decent. Use case: I don't want to use it just as a NAS, but also to host some VMs/containers. Mostly the usual stuff like Home Assistant, a Docker instance, OPNsense in the future, a Windows VM and maybe Jellyfin.


I've picked out some hardware that I'd like you to have a look at before I actually order it:
  • Ryzen 5 PRO 4650G
    • 6C/12T, ECC, iGPU for 110€
  • Kingston Server DIMM DDR4-3200 CL22 - 2x 32GB
    • cheap ECC memory
  • ASUS Prime B550-Plus
    • cheap motherboard with ECC support and enough PCIe lanes for an HBA, 2.5G Ethernet, and possibly a GPU
  • be quiet! Pure Power 11 400W
    • good efficiency in the 0-15W range for 50€
  • LSI SAS 9302-8i
    • cheap and recommended in this forum
  • WD Red 4TB - x2 (already in use)
  • 2x SATA SSDs as a mirrored boot pool

Some additional questions:

As stated above, I'm already using TrueNAS with the 2 WD drives. Right now they are just mirrored. Since I'll probably have to add storage in the future, I'd like to know how I should set it up. I'd like to be able to add a few drives at a time; being forced to add 4 drives at once is not an option for me. With this limitation I think running multiple mirrored 2-drive vdevs is the only option, right?

If so, will I be able to change from 3 mirrored vdevs with 2 drives each to a RAIDZ2 vdev with a total of 6 drives in the future, without having to export the data to an external drive?


Notes:
  1. I know that a server board is usually recommended, but they are quite large and I've read that the idle power consumption is relatively high, so I chose a consumer MB.
  2. Some people claim that the Ryzen 4000 APUs are inefficient at idle; however, I've found a few guides where people get the system under 10W idle, and others have been able to replicate it.

Do you have any suggestions for improvement or would you change anything?
 

mcmxvi

Cadet
Joined
Jul 15, 2023
Messages
5
Hello,

I can relate to your setup and preferences, and depending on your budget there may be another route worth considering.

I migrated from a 5-bay Synology to a Proxmox server with an AMD Epyc processor, 256GB ECC RAM, and reused my old hard drives by passing them through to TrueNAS Scale with an LSI HBA, following recommendations.

For a cost-effective solution, I suggest looking at Epyc CPU/motherboard/RAM combos on eBay. The prices of 1st and 2nd generation Epycs have dropped, as have 2133/2400 DDR4 ECC memory modules. With a 16/32 core/thread CPU, a dual NIC Supermicro board, and 64-128GB RAM, you can expect to spend around 300-500 USD. Add a Fractal Design Meshify 2 for generous 3.5" storage expansion, and you'll be set for years to come.

Here are some options:
- An 8-core/16-thread with 64GB RAM for 399 USD
- Or a 32-core/64-thread with 64GB RAM for 484 USD

Added bonus: Message the seller, mentioning that you come from the servethehome.com community, and they will provide you with free expedited FedEx shipping.

Best of luck with your new server build!
 

Boerny41

Cadet
Joined
Jul 15, 2023
Messages
5
I have thought about this a lot. While it would be great to have real server hardware, the reality is that I don't need an Epyc. The 4650G is already overkill. The longevity of server hardware would be nice, but considering it's used, I think it's a gamble if it really lasts longer than new consumer hardware.

What finally made me go the consumer route is the idle draw of Epyc systems. It's often quoted at around 80W. Even assuming a more realistic 30W idle for a 4650G system, that 50W gap costs roughly 166€ a year at my electricity rate, just to cover the difference in idle consumption.
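For anyone who wants to check that number, here is the back-of-the-envelope math as a small Python sketch (assuming the 80W/30W idle figures above and my rate of €0.38/kWh; your price will differ):

Code:
# Yearly cost of the idle-power gap between an Epyc and a 4650G build.
# Assumptions: constant idle draw, an electricity rate of 0.38 EUR/kWh.
PRICE_EUR_PER_KWH = 0.38
HOURS_PER_YEAR = 24 * 365  # 8760

def yearly_cost_eur(watts):
    """EUR cost of drawing `watts` continuously for one year."""
    return watts * HOURS_PER_YEAR / 1000 * PRICE_EUR_PER_KWH

epyc_idle_w, ryzen_idle_w = 80, 30  # the idle figures quoted above
print(f"{yearly_cost_eur(epyc_idle_w - ryzen_idle_w):.0f} EUR/year")  # -> 166 EUR/year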
 

mcmxvi

Cadet
Joined
Jul 15, 2023
Messages
5
I have thought about this a lot. While it would be great to have real server hardware, the reality is that I don't need an Epyc. The 4650G is already overkill. The longevity of server hardware would be nice, but considering it's used, I think it's a gamble if it really lasts longer than new consumer hardware.

What finally made me go the consumer route is the idle draw of Epyc systems. It's often quoted at around 80W. Even assuming a more realistic 30W idle for a 4650G system, that 50W gap costs roughly 166€ a year at my electricity rate, just to cover the difference in idle consumption.

That was my thinking as well. I had a system with an i7 12700K, 64GB non-ECC memory, etc. I had some concerns about running TrueNAS without ECC memory, even though some say it's no big deal. However, I have my whole digital life from 25 years back stored on it, so even if it turns out not to matter in the end, it gives me peace of mind at least! Another "nice to have" is, of course, seven full-size x16 PCIe slots, which I've already populated.

When it comes to power draw, I really should put a power meter between the outlet and the server to keep an eye on consumption. It does get noticeably hotter in the storage closet where the server resides compared to when I was running the 12700K. I've yet to notice any drastic difference on my power bill, though. But I'm in Norway, and prices fluctuate greatly with the way our new export cables work.

During fall, winter, and spring, the added heat from the server would just offset the need for other heating sources (in my case). Not sure if that would apply in your situation (depends on your geolocation and where you place your server, I guess).

We all have different needs. If you don't see yourself needing or wanting the additional resources an Epyc system provides, then you should stick to your plan of going with consumer-grade parts. Just keep in mind that if you do see yourself being bitten by the homelab bug somewhere down the road, it could get more expensive to switch it all out again.

[Attachments: Screenshot 2023-07-16 124751.png, Screenshot 2023-07-16 131009.png]
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,994
Kingston Server DIMM DDR4-3200 CL22 - 2x 32Gb
Is this on the QVL for the motherboard?

WD Red 4tb - x2 (already in use)
Are these CMR or SMR drives? SMR is bad, CMR is good.

x2 SATA SSD as mirrored boot
Are you planning to use a Type 1 Hypervisor on bare metal, such as ESXi, or will you run TrueNAS on bare metal and virtualise other software on top of it? If you are using a Type 1 Hypervisor, then you only need a single boot drive. TrueNAS also does not need two mirrored boot drives provided you maintain a copy of your configuration file; recovery is then fairly fast. But if you want two boot drives, no one will tell you not to.

If so, will I be able to change from 3 mirrored vdevs with 2 drives each to a raidz2 vdev with a total of 6 drives in the future without having to export the data to an external drive?
Nope, you have to back up all your data, destroy your mirrors and recreate a RAIDZ2 vdev, then create datasets and restore your data.
Asus Prime B550 Plus
LSI SAS 9302-8i
If you are using TrueNAS on bare metal:
This MB has six SATA ports for your hard drives plus an M.2 PCIe slot. I'd purchase a single M.2 PCIe boot drive; remember that it can be slow, since fast just means more heat and the boot drive does not need to be fast. This would eliminate the need for the LSI card, which draws quite a bit of power. For your use case the LSI card would not enhance your system.

If you are running a Type 1 Hypervisor (ESXi for example), the LSI card makes it easy to pass through the controller, and in this situation I would recommend an LSI controller. If it's all TrueNAS, drop the LSI controller. You can always add it at a later date if needed.

I know that a server board is usually recommended, but they are quite large and I've read that the idle power consumption is relatively high, so I chose a consumer MB.
That depends on what board you purchase. Look at my system build, for example: that board does not pull a lot of power; the CPU and the drives do, but the motherboard itself does not. An LSI card will eat power. A server MB will generally cost you a little more (or a lot more, depending on where you live). Also, while some people may claim high power efficiency, it is not that easy to achieve in reality. A few watts of power should not be the goal unless the goal is simply to build a low-power unit, but then you sacrifice the ability to run VMs at a reasonable speed. And current CPUs do have good low power draw these days; they can idle at very low consumption and switch into high gear, sucking down power as needed.

Good luck on your build.
 

Boerny41

Cadet
Joined
Jul 15, 2023
Messages
5
@mcmxvi
I definitely want the extra resources and the premium features you get with server parts, but I think overall it is not worth it in my case. I have struggled to plan everything with the limited number of PCIe lanes the B550 offers, but if I go with SATA boot SSDs I can use the lanes occupied by the M.2 slot, and it should work out fine.

Thanks for the screenshots!

----------------------------------------------------

@joeschmuck
Is this on the QVL for the motherboard?
It is not. In fact, I'm not sure if they tested any ECC memory at all, but both the CPU and MB officially support ECC.
The QVL can't be filtered by ECC, so you'd have to look up each stick on Google to find out. I have checked some of the 32GB modules tested, but they are all non-ECC.

I don't think it matters, but if Samsung modules are generally better supported, there is a similar one for the same price as the Kingston.

Are these CMR or SMR drives? SMR is bad, CMR is good.
They are CMR, ~1.5 years old, bought new back then.

Are you planning to use a Type 1 Hypervisor on bare metal
Yes, I am. TBH, I'm not quite sure if I'll continue with Proxmox and virtualise TrueNAS again, or install TrueNAS on bare metal. If the latter, then TrueNAS will run the VMs and "apps".

I know RAID 1 boot disks aren't necessary in either case, but it's relatively cheap and makes things easier if one drive fails.

If it's all TrueNAS, drop the LSI controller. You can always add it at a later date if needed.
You are absolutely right, for bare metal TrueNAS the LSI card would be wasted.

I'd purchase a single M.2 PCIe boot drive
Right now NVMe SSDs are actually cheaper than 2.5" SATA SSDs, but I need the PCIe lanes from the M.2 slot for a network card if I want to use a GPU in the future (AV1 transcoding). Do you know of any way to connect an NVMe drive via the onboard SATA ports? All I could find were SAS -> NVMe or SATA -> M.2 SATA adapters, but not a single SATA -> NVMe one.


A few watts of power should not be the goal unless the goal is to simply build a low power unit
It sort of is, or at least one of the goals. I pay €0.38 per kWh, and one watt of continuous draw adds up to 8.76 kWh per year, so each watt costs me about €3.33 per year. At that price, idle power consumption becomes important. I wasn't able to find any source that claims <70W for a system with an Epyc 7302P and a Supermicro H11SSL. 70W idle ≈ 233€/year.
I honestly don't know much about Intel server CPUs, but from what I've seen, newer ones are ungodly expensive, while older ones are basically free; the motherboards, however, are rare and cost 600€+.

If you have a tip for a decent combo of a power-efficient Intel CPU and a "normally" priced motherboard (<200€), I'd be thankful.



----------------------------------------------------

From my current, limited understanding of ZFS, it should work just fine to add two drives at a time, each as its own mirrored vdev, right?
e.g.

vdev0: 4TB + 4TB mirror \
vdev1: 2TB + 2TB mirror | = one pool
vdev2: 6TB + 6TB mirror /

In this case each vdev would tolerate one drive failure. It seems like RAIDZ2 makes more sense once you reach 6 drives, but at the current time I just don't need that much storage.
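To double-check myself, here is the usable-capacity math as a quick Python sketch (simplified: it ignores ZFS metadata and slop-space overhead, so real numbers will be slightly lower):

Code:
# Usable capacity of a pool built from 2-way mirrors, added a pair at a time.
def mirror_pool_tb(pairs):
    # Each 2-way mirror vdev contributes its smaller drive's capacity.
    return sum(min(a, b) for a, b in pairs)

# The layout above: 4 + 2 + 6 = 12 TB usable, one drive failure per vdev.
print(mirror_pool_tb([(4, 4), (2, 2), (6, 6)]))  # -> 12

For comparison, six 4TB drives in RAIDZ2 would give (6 - 2) x 4 = 16TB usable and survive any two drive failures, which is why RAIDZ2 looks more attractive at six drives.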
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,994
What I'd like to say is: buy the good stuff now. If you cut a corner, you will wish you hadn't at a later date.

How much storage do you need/want?

If you were to have a total of five hard drives (spinning rust) and created a RAIDZ2 with them, then you could later replace those five drives with higher-capacity ones and the pool capacity increases. This is one of the easiest ways to increase your capacity at lower cost. Purchase three more 4TB CMR drives, copy all your data off your current pool, destroy it, add the three drives, recreate as RAIDZ2, and restore your data. Make the change before you have too much data to easily back up.
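Rough numbers for that route, as a small Python sketch (the 8TB replacement size is just an example, and real usable space will come out a little lower due to ZFS overhead):

Code:
# RAIDZ2 usable capacity before and after replacing every member drive with
# a larger one (replace one at a time and let each resilver finish).
def raidz2_usable_tb(n_drives, drive_tb):
    return (n_drives - 2) * drive_tb  # two drives' worth of parity

print(raidz2_usable_tb(5, 4))  # -> 12, five 4TB drives
print(raidz2_usable_tb(5, 8))  # -> 24, after all five are swapped to 8TB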

but I need the PCIe slots from the M.2 slot for a network card if I want to use a GPU in the future (AV1 transcoding).
Well, I don't think that is a true statement. Do you feel that your boot drive would be in high use while you need to transcode? Nope, I don't think so, since the boot drive is rarely active after the system bootstraps. But read the User Guide for the motherboard and see what it says. Sharing lanes is not the same as being the sole user of the lanes, and typically the User Guide will tell you if you cannot use the M.2 slot together with a PCIe card or a SATA port. Buy a small M.2 PCIe 3 drive and test it out if there is no such language in the User Guide.

I pay €0.38 per kWh
Wow, that is a lot, very sorry to hear that. Why can't power just be cheap? I pay 0.12, well, in USD.
 

Davvo

MVP
Joined
Jul 12, 2022
Messages
3,222
From my current, limited understanding of ZFS
The following resources should help you.


You can find more in my signature.
 
