Well, after trying SCALE 22.12 and running into some challenges, I have decided I should probably look at TrueNAS CORE. No data was lost, just some oddities with my use case. I am posting this after reading the hardware guide, since I realize my setup may not be ideal. I am looking for constructive criticism.
So here is the hardware I am running:
Motherboard: Gigabyte B550M DS3H (Rev 1.3), firmware F15
CPU: Ryzen 5 5600G (with the stock cooler)
RAM: 32GB of DDR4 (non-ECC; yes, I know the risks, and I plan on swapping it out soon)
Storage:
- Boot: 2x 256GB M.2 SATA SSDs by Silicon Power, in a QNAP 2.5" SATA "RAID" adapter, in a RAID1 mirrored config
- Pool: 4x 512GB Patriot P300 NVMe drives (2 of which are on a PCIe x8 riser board, 2 are directly on the motherboard)
- ~1TB of usable pool storage in a RAID-Z2 pool
Network: Intel X520-DA2 10Gb SFP+, connected via OM3 fiber to a MikroTik CRS310 switch (onboard 1GbE Realtek for OOB management, or I may disable it entirely)
Power: 300W TFX (the only one I could find to fit the SFF Lenovo case it all lives in)
Add-in cards: the NIC, and an x16 NVMe riser card for the NVMe drives. No GPU, of course.
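For anyone double-checking the pool math above, here is a minimal sketch of the RAID-Z2 capacity estimate (the function name and the simplification are mine; it ignores ZFS metadata and slop-space overhead, so real-world usable space will come in a bit lower):

```python
def raidz_usable_gb(num_drives: int, drive_gb: int, parity: int = 2) -> int:
    """Rough usable capacity of a single RAID-Z vdev.

    A RAID-Z(n) vdev dedicates roughly `parity` drives' worth of space
    to parity, leaving the rest for data. This ignores ZFS metadata
    and slop-space reservations.
    """
    if num_drives <= parity:
        raise ValueError("need more drives than parity disks")
    return (num_drives - parity) * drive_gb

# 4x 512GB drives in RAID-Z2 -> two drives' worth of usable space
print(raidz_usable_gb(4, 512))  # 1024 GB, i.e. the ~1TB quoted above
```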
My use case: I host around 64 LXD system containers (sometimes referred to as "Linux containers") that run various web apps, email, APIs, and hosted WebDAV storage, along with an Ansible/cloud-init lab that I use to test playbooks and configuration scripts that eventually end up in production. I run them in Debian 11 VMs hosted on XCP-ng 8.2, on hosts with the same Intel X520 NICs. The VM hosts have 24GB of RAM and boot off local NVMe storage. So I don't sling a ton of data around; I just need it to be fairly performant and stable for the most part. For my shared storage today, I run off an Ubuntu 20.04 LTS server hosting NFS and SMB shares on ext4 file systems, with another Debian box that I use for online backup. That box is mirrored to S3 object storage for off-site backups, over a 1G uplink to the edge router.
Is the hardware I am using going to work for TrueNAS? Or should I toss it all aside and build something more robust? I have access to other hardware, but most of it is older enterprise-surplus gear. I felt it would be better to start with all-new hardware for a more solid system, and it was what I could find with all the so-called "shortages" of late. But I am not opposed to stepping up; I want to build something that will last a while. I am on the search for ECC RAM, but so far that is the only big no-no I see in the mix.
My goal is to have a solid base for shared storage, to host my groupware collaboration platform and email, and to host the few VMs and containers that I run 24/7. This is all in a home lab, so I try to keep total power draw under 200W, and this server pool is fairly quiet. All hosts and the NAS will be connected to the core switch with trunked ports, so everything will be L2 segmented via VLANs. If I missed any detail, feel free to ask.
Thanks in advance.