Looking for feedback on my first build

cbergdev

Cadet
Joined
Dec 9, 2023
Messages
4
Hi.

I am currently doing research and preparing a shopping list for my first NAS build, and I would like to get some feedback before I pull the trigger.
I have watched a lot of videos and read some articles about ZFS and TrueNAS, including the SCALE Hardware Guide and the Community Hardware Guide, to identify the components below, but there are still a few details to iron out.

Let me start with my use cases.
1) I plan to upgrade my homelab with two Proxmox servers and build a Kubernetes cluster. Those VMs and containers need somewhere to store their data, since the servers themselves are too small (I use mini PCs). I would also like to offload and back up data from my main PC.
2) My Dad has about 10TB of photo and video footage which is currently spread across several external drives.

I want to build two separate systems: the first one at my house, serving my homelab and doubling as a learning experience, and the second one for my dad. I also plan to use each system as the off-site backup for the other.
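The rough plan for the mutual off-site backups is periodic ZFS snapshot replication between the two boxes, something along these lines (pool, dataset, and host names are just placeholders, and TrueNAS has built-in Replication Tasks that do the same thing from the UI):

# First run: snapshot the dataset on NAS 1 and send the full stream to NAS 2
zfs snapshot -r tank/data@offsite-1
zfs send -R tank/data@offsite-1 | ssh nas2 zfs recv -Fu backup/nas1

# Later runs only send the increment between the last two snapshots
zfs snapshot -r tank/data@offsite-2
zfs send -R -i tank/data@offsite-1 tank/data@offsite-2 | ssh nas2 zfs recv -Fu backup/nas1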

Here are the parts I selected so far.
Motherboard: Supermicro MBD-X11SCH-F-O
CPU: Intel Core i3-8300
RAM: 32GB (2 * 16GB Kingston Server Premier ECC DDR4 2666)
Storage: 4 * Seagate Exos 12TB as a Raidz1
Case and PSU: I have an old PC case with a 350W PSU which I plan on using for now to save some cost. Eventually I am thinking about moving to the U-NAS NSC-810A.

And here are my questions:
A) For the OS I was thinking of getting two M.2 NVMe drives, but since I do not need much space for the OS, should I partition them and create separate mirror vdevs from those partitions (one partition from each NVMe), putting the OS on one and a SLOG on another?
B) Do I even need a SLOG? And how big?
C) What about an L2ARC? Maybe on a third vdev on those NVMes? And what size?
D) What do you think about those components?
E) Is there anything else I am forgetting?

Thanks in advance.
Carsten
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
A) For the OS I was thinking of getting two M.2 NVMe drives
Massive overkill for anything short of a serious professional environment.
B) Do I even need a SLOG? And how big?
From the sound of it, you're not looking to use block storage, so the answer is most likely no.
C) What about a L2ARC?
Maybe, but more RAM first would be better. And that's only if your ARC hit rate is low, the ghost list hit rate is high, and performance is less than you'd like. Test first, buy later if needed.
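If you want numbers before deciding, the ZFS tools will show them from the shell (exact command names vary a bit between versions, e.g. arc_summary vs. arc_summary.py):

# Overall ARC statistics, including the cache hit ratio and the MRU/MFU ghost hits
arc_summary | less

# Rolling counters every 5 seconds: reads, misses, miss percentage, current ARC size
arcstat 5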
D) What do you think about those components?
I'd stick to DRAM listed by Supermicro as much as possible; it makes compatibility a bit more certain. The PSU sounds like the dodgiest component; 350 W screams Super China Happy Sun Power and Shower Curtains Company Limited and Co. Definitely worth a closer look.
 

cbergdev

Cadet
Joined
Dec 9, 2023
Messages
4
Thanks for your quick reply. It made me read (and reread) some more.

For the OS I decided to go with the Samsung SSD 870 EVO 250GB. It's bigger than necessary but cheaper than smaller ones. Instead of mirroring I will just create regular backups.

From the sound of it, you're not looking to use block storage, so the answer is most likely no.
If I understand correctly, block storage means iSCSI and zvols, which I might use at some point. However, if it is possible to add a SLOG device later(?), I will skip it for now and see how the performance turns out.
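From what I read, a log vdev can be added to (and removed from) an existing pool at any time, roughly like this (pool and device names are just placeholders; in TrueNAS this would be done from the pool's vdev screen rather than the shell):

# Add a dedicated log (SLOG) device to an existing pool
zpool add tank log /dev/nvme0n1

# It can be removed again later without touching the data
zpool remove tank /dev/nvme0n1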

Speaking of performance, I decided to go with two separate vdevs. One will be a mirror of two of those 12TB disks, used for active data (storage for the VMs and containers, network drives for my workstation, and the footage in my Dad's case). The second will be a 3*12TB RAIDZ1 for the off-site backups and cold storage. Just to make sure I get this right: I need to create a separate zpool for each of the vdevs for that to work as intended?
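If I understand it correctly, that means two independent pools rather than two vdevs in one pool (ZFS would stripe data across vdevs within a single pool), roughly like this (pool and device names are just placeholders):

# Pool 1: mirror of two 12TB disks for the active data
zpool create fastpool mirror /dev/ada0 /dev/ada1

# Pool 2: RAIDZ1 of three 12TB disks for backups and cold storage
zpool create coldpool raidz1 /dev/ada2 /dev/ada3 /dev/ada4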

I'd stick to DRAM listed by Supermicro as much as possible; it makes compatibility a bit more certain.
Unfortunately I could only find a small list on their site. All of the entries linked to their shop, and all were sold out.

The PSU sounds like the dodgiest component
I double checked and it is actually a 430W unit, but I will put it on the short list. Since I plan on getting a new case in the upcoming months I will need a new PSU anyway.
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
Unfortunately I could only find a small list on their site. All of the entries linked to their shop, and all were sold out.
They used to list the OEM model numbers in the table, but you can still get the info by going to the store pages and checking the OEM and part number.
I double checked and it is actually a 430W unit, but I will put it on the short list. Since I plan on getting a new case in the upcoming months I will need a new PSU anyway.
Oof, yeah, that does not look like a good unit.
 

Davvo

MVP
Joined
Jul 12, 2022
Messages
3,222
Storage: 4 * Seagate Exos 12TB as a Raidz1
That's risky; RAIDZ1 is advised against with large drives, because resilvering a 12TB disk takes a long time and a second drive failure during that window would cost you the pool.

A) For the OS I was thinking of getting two M.2 NVMe drives, but since I do not need much space for the OS, should I partition them and create separate mirror vdevs from those partitions (one partition from each NVMe), putting the OS on one and a SLOG on another?
Absolutely not.

If you want a redundant boot pool, please read the following resource; otherwise just use the cheapest drive you can find and keep a backup of your configuration.

In my signature you can find links to a few resources that would be useful reading.

EDIT:
For the OS I decided to go with the Samsung SSD 870 EVO 250GB. It's bigger than necessary but cheaper than smaller ones.
A sensible choice.
 

cbergdev

Cadet
Joined
Dec 9, 2023
Messages
4
In my signature you can find links to a few resources that would be useful reading.
That's a lot to read. Haven't had the time to get through all of it, but I already made some changes to my plan.

Since I would love to work on this over the holidays, I already ordered the parts I am confident about:
Board: Supermicro MBD-X11SCH-F-O
CPU: Intel Core i3-8300
RAM: 2 * Samsung 32GB DDR4 ECC UDIMM 2666 (that's more than I initially planned and maybe overkill, but the €/GB is ~30% lower than for the cheapest 16GB sticks I found, so I basically get 16GB for free)
Boot: Samsung SSD 870 EVO 250GB

As you can see I am still missing the most important part: the storage. And there are two reasons for that.
1) It seems HDDs above 8TB always spin at 7200+ rpm, and from what I read ~5400-5600 rpm is better because of lower heat, noise, and power consumption. On the other hand, bigger disks have a lower price per TB. Any advice on that?
2) I am not 100% sure how many disks I need. My latest idea is one zpool with a single two-disk mirror vdev for active data: Kubernetes PVs and VM storage, shared network drives, and maybe an iSCSI drive for my Dad so he can access the footage on the NAS directly from Photoshop. A second zpool with a five-disk RAIDZ2 vdev would hold cold storage for local users and the backups of the other NAS. Does this make sense? Maybe I should start with the two-disk mirror and add the other pool later.

Also, after reading the intro to the resources list, I think I should use CORE instead of SCALE. I have no experience with BSD, but I guess I will not interact with it much and the little I need I should be able to figure out.
 

Davvo

MVP
Joined
Jul 12, 2022
Messages
3,222
As you can see I am still missing the most important part: the storage. And there are two reasons for that.
1) It seems HDDs above 8TB always spin at 7200+ rpm, and from what I read ~5400-5600 rpm is better because of lower heat, noise, and power consumption. On the other hand, bigger disks have a lower price per TB. Any advice on that?
2) I am not 100% sure how many disks I need. My latest idea is one zpool with a single two-disk mirror vdev for active data: Kubernetes PVs and VM storage, shared network drives, and maybe an iSCSI drive for my Dad so he can access the footage on the NAS directly from Photoshop. A second zpool with a five-disk RAIDZ2 vdev would hold cold storage for local users and the backups of the other NAS. Does this make sense? Maybe I should start with the two-disk mirror and add the other pool later.
It all depends on how much storage space you need for your VMs... the brainless way is to go with a single pool of 18TB drives in a 3-way mirror, but it's expensive.
I would suggest a 2-way mirror of SSDs (NVMe or SATA) for the VM storage and a second RAIDZ2 HDD pool for your backup needs (your PC and your dad's photo archive).

Where do you plan to keep this system? If it will live in a rack or a tech room, I would suggest going for the best €/TB instead of looking for a quieter solution.
Also, I assume your dad lives with you... setting up access from outside your LAN is not the easiest thing to do.

Also, after reading the intro to the resources list, I think I should use CORE instead of SCALE. I have no experience with BSD, but I guess I will not interact with it much and the little I need I should be able to figure out.
Might be a sensible choice, since at the moment CORE has a few fewer bugs (but also a few fewer features).
 

cbergdev

Cadet
Joined
Dec 9, 2023
Messages
4
It all depends on how much storage space you need for your VMs... the brainless way is to go with a single pool of 18TB drives in a 3-way mirror, but it's expensive.
I would suggest a 2-way mirror of SSDs (NVMe or SATA) for the VM storage and a second RAIDZ2 HDD pool for your backup needs (your PC and your dad's photo archive).
I ordered five Seagate Exos X16 16TB drives for a RAIDZ2 pool, and I will also use it for the VMs. If I find that I need more performance, I can add a faster mirror pool later.
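If my math is right, a 5-wide RAIDZ2 loses two drives' worth of space to parity, so that should give me roughly (5 - 2) * 16 TB = 48 TB (a bit under 44 TiB) of usable space, before ZFS overhead and the usual advice to keep some free headroom.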

Now it's time to wait for all the parts. Thanks for all your help; I will let you know how it goes.

Where do you plan to keep this system?
My initial plan is to have it in my office. If the noise gets annoying, however, I will clear some space in my storage room.

Also, I assume your dad lives with you
No, he does not. My plan is to set up a site-to-site VPN and connect his LAN with mine. That way he can also benefit from other services I run in my homelab, like Pi-hole and Unbound.
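I have not settled on the VPN software yet; one candidate is WireGuard, where a site-to-site link boils down to one [Peer] section per side, roughly like this (keys, hostnames, and subnets below are just placeholders):

# wg0.conf on my end; his end mirrors it with the roles swapped
[Interface]
Address = 10.10.0.1/24
PrivateKey = <my-private-key>
ListenPort = 51820

[Peer]
PublicKey = <dads-public-key>
Endpoint = dads-home.example.net:51820
# Route his tunnel address and his LAN subnet through the tunnel
AllowedIPs = 10.10.0.2/32, 192.168.2.0/24
PersistentKeepalive = 25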
 