Future Home Lab

Jamberry

Contributor
Joined
May 3, 2017
Messages
106
Hi guys!
I have read a lot about SLOG and special vdevs and think it is fascinating technology. Thank you again @HoneyBadger for explaining it so well.
Anyway, I wanted to write a post about my future home lab and hoped you guys could give me some advice. Maybe someone here has already built a setup like this, or maybe it helps someone else planning a home lab. This is going to touch on a lot of topics and will probably be complicated to discuss, so I have numbered my questions and statements, hoping that helps structure the discussion.
Of course you are also very welcome to critique other points of my setup. My main reason for upgrading is noise! Icing on the cake would be bringing down the 250W base load; at next year's prices that will cost me $650 per year.

To start with, it is probably best to look at my current use case and setup, and why I want to change both the hardware and the architecture. Most of my hardware was free and the setup has grown over time...

OPNsense: An old Intel NUC running OPNsense as a "router-on-a-stick". 4GB RAM, single SSD. Works fine. The only downsides are the missing IPMI and SFP+. My ISP offers a free upgrade to 10Gbit, so yeah, SFP+ would be nice.

TrueNAS: Supermicro board, Pentium G4560, 64GB ECC. The case is a Rosewill 4500. I hate this case; it is the cheapest case I ever owned, with sharp corners. Two old SSDs form the boot mirror, and there are two 5-bay hot-swap caddies. My data pool consists of three mirror vdevs: 4x 8TB disks and 2x 16TB disks. I have three datasets with NFS shares: one for Nextcloud data, one for Proxmox backups, and one for family videos and raw footage of my recordings. Rsync backs up the Proxmox and Nextcloud data to an old off-site QNAP NAS. The hot-swap caddies turned out to be less useful than I thought they would be. Only one drive has died on me, and even then I could have handled the downtime by shutting down TrueNAS. I mostly use them to wipe disks from old devices I recycle.
The biggest problem I have with this setup is the noise. Neither the caddies nor the case itself have any sound dampening.
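For reference, the off-site backup is basically a nightly rsync push along these lines (pool, dataset, and host names here are just placeholders, not my real layout):

```sh
#!/bin/sh
# Push the Proxmox backup and Nextcloud datasets to the old QNAP over SSH.
# -a preserves permissions/times, -H keeps hard links, --delete mirrors deletions.
rsync -aH --delete /mnt/tank/proxmox-backups/ backup@qnap.example.lan:/share/proxmox-backups/
rsync -aH --delete /mnt/tank/nextcloud/       backup@qnap.example.lan:/share/nextcloud/
```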

Gaming PC: Has a 500GB SSD and a 2TB HDD for games that don't need fast storage. It would be cool to remove the HDD and use iSCSI instead.

Windows PC: An old Lenovo SFF. Has Plex installed, because some family members have not made the switch to Jellyfin yet. It is my Windows client to play around with and test things. It also has an 8TB disk that runs Storj, just for fun.

Ubuntu PC: An old Lenovo SFF. Has Jellyfin installed and can hardware transcode thanks to Intel Quick Sync. I wanted to migrate this host to Proxmox but never managed to pass through the GPU. By now all devices can direct play, so transcoding is no longer needed.

Proxmox: Supermicro board, 32GB ECC, two NVMe SSDs in a mirror for boot and VM storage. There are 10 VMs running self-hosted services. They don't need much CPU power and also not a lot of disk IO. For example, one VM hosts Mumble, a TeamSpeak alternative. That VM is mostly idling; the sharpest IO increase is during updates. An old single 4TB HDD is for unimportant stuff, like a Linux game server and qBittorrent. I did not put the torrents on ZFS because someone on the forums said that this will badly fragment ZFS pools.
1: Is that still true if you use a SLOG? Or does a SLOG not help with fragmentation at all?
2: Why don't I use Docker? When this whole setup started 5 years ago, I heard a lot of people talk negatively about Docker. Some said it is not secure (not that important to me, also I think these issues are gone nowadays). Performance was also said to be not great, and it was called bloated. I think a lot of that has changed over the years, and many of the self-hosted services I use offer Docker support; for some it has even become the recommended way to install. On the other hand, with VM templates and cloud-init (roughly the workflow sketched below), I am pretty happy with the way Proxmox gives me different VMs for different use cases. Would you recommend looking deeper into Docker and Kubernetes and making the switch? Maybe I have become a lazy old man and should move to containers? Can you blue-pill me on containers? ;)
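For context, the cloud-init template workflow I mean is roughly this (the VM ID, storage name, and image file are only examples):

```sh
# Build a cloud-init enabled template from a cloud image, then clone it per service.
qm create 9000 --name debian-template --memory 2048 --cores 2 --net0 virtio,bridge=vmbr0
qm importdisk 9000 debian-12-genericcloud-amd64.qcow2 local-zfs
qm set 9000 --scsihw virtio-scsi-pci --scsi0 local-zfs:vm-9000-disk-0
qm set 9000 --ide2 local-zfs:cloudinit --boot order=scsi0 --serial0 socket
qm template 9000

# A new VM for a service is just a clone plus a couple of cloud-init settings.
qm clone 9000 101 --name mumble --full
qm set 101 --ciuser admin --ipconfig0 ip=dhcp --sshkeys ~/.ssh/id_ed25519.pub
qm start 101
```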

Hardware-wise, I am currently looking into buying a Fractal Define 7 XL. The main reason is that I think it will bring down the HDD noise.
3: The sound-dampened case and the rubber mounts on the disk trays will hopefully help with that.

So now I am thinking about where to go next.

4: Go all in on TrueNAS! Replace all devices with one single TrueNAS system. Replace most of my services with docker and use a few VMs for stuff that does not work in docker (like OPNsense). It would be so cool to have only one system. On the other hand, passing NICs through to OPNsense sounds like a complexity nightmare :)
5: Still have multiple devices: TrueNAS, a firewall, and a hypervisor, but the firewall and hypervisor would use iSCSI for their boot disks and an NFS share for the VMs. I would only have to manage a single storage system.
6: The boring option I currently have: fast local SSDs for the hypervisor and its VMs, and TrueNAS for "slow" storage.

If I decide to go for 4 or 5, how bad will the performance penalty be? I haven't run an OS from HDDs in over 7 years :) Also, I assume that:
7: With a SLOG and three mirrored vdevs, performance would still be decent?
8: Read performance should be pretty good if I have 128GB RAM for ARC?
9: How about using one fast NVMe SSD for SLOG, and two SSDs each split into two partitions, one for the boot mirror and the other for a data vdev? (Rough sketch of what I mean after question 10.)
10: Percentage of used storage will sooner or later be 80%. A friend will back up his TrueNAS to mine, so I will not be able to follow the "only 50% full for VMs" rule. Will using 80% have a big impact?
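Regarding 9, this is roughly what I have in mind; device names, sizes, and the pool name are placeholders, and I would double-check everything before touching real disks:

```sh
# Split each of the two SATA SSDs into a small boot partition and a larger data partition.
sgdisk -n1:0:+64G -n2:0:0 /dev/sda
sgdisk -n1:0:+64G -n2:0:0 /dev/sdb
# -> sda1/sdb1 become the boot mirror, sda2/sdb2 a mirrored SSD vdev.

# Dedicate the fast NVMe drive (or a small partition of it) to the SLOG.
zpool add tank log /dev/nvme0n1

# Sanity checks afterwards.
zpool status tank
arc_summary | head -n 40   # rough look at ARC once the box has 128GB of RAM
```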
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
1: Is that still true if you use a SLOG? Or does a SLOG not help with fragmentation at all?

Correct.

Some said it is not secure (not that important to me, also I think these issues are gone nowadays).

The talking about it is gone, yes. It's still a bit of a security dumpster fire.

Go all in on TrueNAS! Replace all devices with one single TrueNAS system. Replace most of my services with docker and use a few VMs for stuff that does not work in docker (like OPNsense).

As much as I like TrueNAS, it isn't a great hypervisor platform. For a good hyperconverged platform, my suggestion is ESXi. You virtualize the TrueNAS according to one of my guides, using it for data storage. You have a separate hardware RAID controller for ESXi; almost any of the LSI 12G SAS ones are good options. You can run VM's just fine on ESXi. You can run Docker on a Linux VM. It's a great homelab platform and well supported here on the forums.
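The "Docker on a Linux VM" part is as simple as it sounds; for example, on any Debian/Ubuntu guest (image name and paths below are just illustrations, not a recommendation):

```sh
# Install Docker inside an ordinary Linux VM, then run services as containers.
apt install docker.io

docker run -d --name jellyfin --restart unless-stopped \
  -p 8096:8096 \
  -v /srv/jellyfin/config:/config \
  -v /mnt/media:/media:ro \
  jellyfin/jellyfin
```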

10: Percentage of used storage will sooner or later be 80%. A friend will back up his TrueNAS to mine, so I will not be able to follow the "only 50% full for VMs" rule. Will using 80% have a big impact?

ZFS block storage? It really depends. Fragmentation will be driven up based on how often ${factors}, such as how often you do OS updates or reinstalls, whether you're using thin provisioning, etc. ZFS is a resource hog doing block storage, which is one of the reasons I suggest using a conventional ESXi-compatible RAID controller and just letting ESXi do storage on that. Get two big SSD's and put them in RAID1 and call it a day. Otherwise you're looking at ZFS and fragmentation. Using 80% or even 90% THE FIRST TIME will have zero impact. It will be lightning fast. But the problem is that you immediately start fragmenting as blocks are freed. The idea behind 30-50% utilization is that you tend to get larger clumpings of freed blocks, and that property tends to deplete much more slowly. You might need ten, twenty, fifty cycles of overwrites to get to a bad fragmentation state.
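If you do end up doing block storage on ZFS anyway, at least keep an eye on it as the pool ages; for example (pool name is just an example):

```sh
# Watch fill level and free-space fragmentation over time.
zpool list -o name,size,allocated,free,capacity,fragmentation tank
```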
 

Jamberry

Contributor
Joined
May 3, 2017
Messages
106
The talking about it is gone, yes. It's still a bit of a security dumpster fire.
:grin:
As much as I like TrueNAS, it isn't a great hypervisor platform. For a good hyperconverged platform, my suggestion is ESXi.
I am pretty familiar and happy with Proxmox; is there a reason to make the switch?
You virtualize the TrueNAS according to one of my guides, using it for data storage. You have a separate hardware RAID controller for ESXi; almost any of the LSI 12G SAS ones are good options. You can run VM's just fine on ESXi. You can run Docker on a Linux VM. It's a great homelab platform and well supported here on the forums.
I think I would stick with my Proxmox ZFS mirror for that. It is pretty cheap and fast software RAID.
ZFS block storage? It really depends. Fragmentation will be driven up based on how often ${factors}, such as how often you do OS updates or reinstalls, whether you're using thin provisioning, etc. ZFS is a resource hog doing block storage, which is one of the reasons I suggest using a conventional ESXi-compatible RAID controller and just letting ESXi do storage on that. Get two big SSD's and put them in RAID1 and call it a day. Otherwise you're looking at ZFS and fragmentation. Using 80% or even 90% THE FIRST TIME will have zero impact. It will be lightning fast. But the problem is that you immediately start fragmenting as blocks are freed. The idea behind 30-50% utilization is that you tend to get larger clumpings of freed blocks, and that property tends to deplete much more slowly. You might need ten, twenty, fifty cycles of overwrites to get to a bad fragmentation state.
Damn... that is what I guessed would be the case. It seems like I will go down the boring route:
- moving my TrueNAS into a new case to get the noise down
- migrating OPNsense to my Proxmox host and adding an SFP+ card (quick passthrough sanity check below)
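Before I commit to that, I will check on the Proxmox host that the SFP+ NIC ends up in a sane IOMMU group for passthrough; roughly like this (a sketch, assuming an Intel box with VT-d enabled in the BIOS and intel_iommu=on on the kernel command line):

```sh
# Confirm the IOMMU is actually active.
dmesg | grep -i -e DMAR -e IOMMU

# List PCI devices by IOMMU group; the SFP+ NIC should not share a group
# with anything the host itself still needs.
for d in /sys/kernel/iommu_groups/*/devices/*; do
    group=${d#*/iommu_groups/}; group=${group%%/*}
    printf 'group %s: ' "$group"
    lspci -nns "${d##*/}"
done
```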

I was hoping to get an eierlegende Wollmilchsau, an egg-laying animal that also gives wool, milk, and bacon; the German saying for a jack of all trades :)
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
I am pretty familiar and happy with Proxmox; is there a reason to make the switch?

Well, Proxmox doesn't work for everyone. Even its authors describe its PCIe passthru capabilities as "experimental", and this is a critical bit of technology that needs to work correctly in order for TrueNAS to be successfully virtualized.

I have no great love for VMware but I'm also pragmatic -- I recognize what works. That said, if you're willing to put in the time and effort to validate your setup under Proxmox, I believe that most of the failures people have experienced tend to show up very quickly, and also on older hardware. If you can get a system that runs stable for a month, my guess would be you could be good to go.

You should still follow the "Absolutely must virtualize" document, or at least work to understand why stuff in there is there and might not apply to you. A decade ago, we had a steady stream of people coming in here with virtualization dumpster fires when they thought themselves overly clever and exempt. I've worked hard to identify the pain points for everyone and this has mostly worked out well.
 

Jamberry

Contributor
Joined
May 3, 2017
Messages
106
Ohh sorry, there is a misunderstanding here. I am not planning on virtualizing TrueNAS! Never!
Some of my data is the most important thing I own, way more important than any VM I am running. I get an uneasy feeling in my stomach virtualizing OPNsense, let alone TrueNAS! I am a bare-metal fanboy :tongue:
 

Jamberry

Contributor
Joined
May 3, 2017
Messages
106
I am still looking for a new motherboard, but I am not under any time pressure.
Today I found a brand new X11SDV-4C-TP8F for $770. That thing supports RDIMMs, has two SFP+ ports, 12 SATA ports, and even an M.2 slot.
Is there any catch to this board?
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
I've never seen that particular board but have used its predecessors in the X10SDV lineup. Generally very featureful. A bit difficult in some ways, such as needing to contact Supermicro for a custom firmware update tool for the integrated ethernet controller. An impressive amount of I/O in a very small package.
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
Idle power on the X11SDVs is also said to be a bit on the insane side, especially when compared to X10SDVs.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Insane as in Intel forgot the reason people were interested in paying top dollar for these was for their ability to be efficient and low opex as edge nodes? I did notice the much larger TDP's spec'd for a bunch of these X11SD boards. That'd be too bad, as it switches the value proposition around so that it is mostly just handing Intel extra money for the privilege of a soldered-on CPU.
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
Something along those lines. They seem to have dropped much of the low-power aspect of the platform while chasing higher performance. My reading of the situation is that they pushed hard for applications with constant workloads (cell towers being the typical example) and had to compromise. I think they're still efficient, but only if they're chewing through work, not if they end up idling a lot.
This may also have been Supermicro dropping the ball somewhat, since they have meaningful influence on the end product's idle power, possibly because they were going to keep X10SDV around for nearly a decade anyway.
C3000 also moved up into some of Xeon D's territory, which removes some of the need for Xeon D to be ultra-low power, since that is something C3000 can accomplish more easily.
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
Is there any catch to this board?
I'd normally make a quip about "the price?" but by the important "Gbps and PCIe lanes per square inch" metric, that's actually a reasonable cost.

Regular x16 and x8 slots, an x4 and an x2 M.2, a bonus x1 mPCIe, and HD ports that can break out into yet more PCIe lanes as U.2 - plus gobs of networking on top of that - make this an impressive baseline board for a build with the potential to be screaming fast.
 

Jamberry

Contributor
Joined
May 3, 2017
Messages
106
Idle power on the X11SDVs is also said to be a bit on the insane side, especially when compared to X10SDVs.
According to ServeTheHome, the X11SDV-4C-TP8F uses around 51W at idle, while X10SDV boards are more in the 23W to 30W range. It is not great, but I guess adding a SAS add-on card puts them in the same ballpark, doesn't it? Maybe I can save some watts by disabling 10Gbase-T in the BIOS.
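Once the board is here I can just measure it; assuming the BMC supports DCMI power readings, something like this should give a rough idea (BMC address and user are placeholders):

```sh
# Read the current power draw straight from the BMC.
ipmitool -I lanplus -H bmc.example.lan -U admin dcmi power reading
```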
 