First Time With TrueNAS

nomb

Cadet
Joined
Mar 15, 2022
Messages
8
Hello everyone,

Firstly, I just wanted to say hello and it's nice to meet everyone.

Secondly, as I am new to TrueNAS and NAS in general, I wanted to give a summary of where I'm at, why I made some of the choices I did, and get some general feedback/comments.

So I have a Dell R710 with two 6-core CPUs and 188GB of memory. It gets really loud, and my girlfriend got tired of looking at it and having it in the room it had to be in, so I decided to build a newer, more modern server. I am a heavy virtualization user (ESXi/Proxmox/etc.) and use it for my day job. I knew I wanted the new build to handle virtualization really well, and I knew I wanted some way to have modern NAS-like functionality.

I ended up building this guy: https://pcpartpicker.com/list/3VrcY9
I had never done RGB before either, and figured it'd make the girlfriend enjoy looking at it more, so I gave it a shot.

So at this point I spent days going over how I wanted to set it up: Proxmox bare metal and virtualize TrueNAS, TrueNAS bare metal and use its virtualization, TrueNAS bare metal and virtualize Proxmox, etc.

I decided to go with Proxmox bare metal and virtualize TrueNAS, the reason being that I have a lot of Terraform scripts I use for spinning up networks in Proxmox, plus a few other little things. My plan was to get a RAID card and pass that through to the VM; however, after adding the GPU, all of my PCIe slots were blocked, since I'm using a micro-ATX case. Sigh. So I ended up attaching the disks directly to the VM and passing them through that way. I know this isn't ideal.
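Concretely, I attached them by stable device ID on the Proxmox host, something along these lines (the VM ID and disk serials here are just placeholders):

    # Find the stable by-id paths for the four pool disks
    ls -l /dev/disk/by-id/ | grep -v part

    # Hand each whole disk to the TrueNAS VM (VM ID 100 is an example)
    qm set 100 -scsi1 /dev/disk/by-id/ata-DISK_SERIAL_1
    qm set 100 -scsi2 /dev/disk/by-id/ata-DISK_SERIAL_2
    qm set 100 -scsi3 /dev/disk/by-id/ata-DISK_SERIAL_3
    qm set 100 -scsi4 /dev/disk/by-id/ata-DISK_SERIAL_4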

I installed TrueNAS SCALE onto a small virtual disk on the gen 3 NVMe drive, added the four disks for the pool, and added a 256GB virtual disk on the gen 4 NVMe as cache for the pool. I don't think I've seen the cache actually be used yet, though. I set up the four disks in a RAIDZ2 with the cache disk. I also added one NIC hooked to the bridge for internet access and another NIC hooked to an internal bridge for VM-only communication.
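If I understand the ZFS side right, the layout TrueNAS built through the UI is roughly equivalent to this (pool name and device names are just placeholders):

    zpool create tank raidz2 sda sdb sdc sdd cache nvme0n1
    zpool status tank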

So far it has been running decently, I think. I only have a gigabit network at home, and transferring an ISO to the pool gets me around 91 MB/s. I figured with all the layers that was probably OK; I'm not really sure, though. I have been confused with how the permission stuff works, but I'm slowly figuring that out. I have had lots of issues getting Syncthing running on it. I finally got it running, but it doesn't want to see any of the devices on the local LAN. I may do a new post for Syncthing.
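For reference, gigabit tops out at 125 MB/s raw, and real-world SMB transfers usually land around 110 MB/s, so 91 MB/s is roughly 80% of the practical ceiling. I figure something like this would tell me whether the gap is network or disk (assuming iperf3 is available on both ends; the address is a placeholder):

    # On the TrueNAS VM
    iperf3 -s

    # On the client machine; substitute the VM's actual address
    iperf3 -c 192.168.1.50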

Anyway, thanks for taking the time to read. I did a decent amount of research and know I'm not running it in the best way possible, but I welcome any thoughts, suggestions, or new ideas. :D
 

Nick2253

Wizard
Joined
Apr 21, 2014
Messages
1,633
Your hardware choices are... interesting... to say the least. I have to say, I never thought I'd see someone intentionally go for RGB lights on a server. Especially with the cost of the new "server," it would have been cheaper to get a nice cabinet and a few quieter fans.

Are you using this as your main desktop as well? Like, GPU passthrough to a desktop VM, kind of thing? If not, what are you using the GPU for?

You're doing a lot of crazy things with your disks. First off, since you are using Proxmox, why not do ZFS at the Proxmox layer, and then run a simple SMB or NFS server in an LXC container? You obviously understand that your solution is highly undesirable. Why do that to yourself?
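A rough sketch of what I mean, with placeholder pool, dataset, and container ID (the Samba/NFS setup inside the container is the usual distro affair):

    # On the Proxmox host: pool and a dataset to share
    zpool create tank raidz2 sda sdb sdc sdd
    zfs create tank/share

    # Bind-mount the dataset into an existing LXC container (ID 101 as an example)
    pct set 101 -mp0 /tank/share,mp=/srv/share

    # Inside the container, point Samba or NFS at /srv/share as usual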

Also, the cache drive is pretty much useless for your setup. The L2ARC basically takes the most frequently read data that no longer fits in RAM (the ARC) and caches it on the SSD so your system can access it more quickly. However, managing the L2ARC itself costs RAM, and it is only really effective once the system has a picture of what data you actually re-read. So unless you're regularly re-reading more than (I'm guessing) 30GB or so of the same data (approximately the size of the ARC you can run on your server), you will see no improvement from an L2ARC device.
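You can check for yourself whether the cache device is doing anything from a shell on the TrueNAS VM (pool name is a placeholder):

    # Cache device activity shows up under the "cache" section
    zpool iostat -v tank 5

    # ARC and L2ARC sizes and hit rates
    arc_summary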

Depending on your risk tolerance, running the 4 disks as striped mirrors will give you much better random I/O than RAIDZ2. However, striped mirrors do have an increased risk of failure: once the first drive dies, one of the remaining three drives is its mirror partner, so 1/3 of the time that you have 2 drives fail, the entire pool will fail.
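For comparison, the striped-mirror layout would look like this (placeholder device names again):

    zpool create tank mirror sda sdb mirror sdc sdd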

Also, I find it strange that you went for SCALE over CORE. Unless you plan on playing with the hyper-converged storage benefits of SCALE, I'd strongly recommend staying with the tried-and-true CORE.

In your shoes, I would suggest the following:
  • Ditch the GPU, get a proper HBA, and do TrueNAS the "Right Way"™.
  • Or, ditch TrueNAS, use ZFS through ProxMox, and run something like OMV5 in an LXC container to manage your NAS.
 

melloa

Wizard
Joined
May 22, 2016
Messages
1,749
Been there, done that - I used ESXi. When it was time to upgrade my ESXi server, I went with an X10 1U, so I got all the NASes running bare metal. The first thing I noticed was that my network performance was 10x better than with TrueNAS as a VM (maybe my fault ;) )
I do have 2 GPUs on my ESXi server, but those are for my Windows workstation and my Plex.
I also don't understand why you'd need a GPU for TN.
Anyway, just wanted to throw in my $0.02 regarding virtualizing it and doing it the wrong way. Mine had the HBAs in passthrough and I ran it without any issues for years.
 

nomb

Cadet
Joined
Mar 15, 2022
Messages
8
melloa said:
Been there, done that - I used ESXi. When it was time to upgrade my ESXi server, I went with an X10 1U, so I got all the NASes running bare metal. The first thing I noticed was that my network performance was 10x better than with TrueNAS as a VM (maybe my fault ;) )
I do have 2 GPUs on my ESXi server, but those are for my Windows workstation and my Plex.
I also don't understand why you'd need a GPU for TN.
Anyway, just wanted to throw in my $0.02 regarding virtualizing it and doing it the wrong way. Mine had the HBAs in passthrough and I ran it without any issues for years.
I definitely wish I could have gotten an HBA card in there.
The GPU isn't for my TrueNAS. It's for my gaming/CAD/Plex VMs. :) I can only fit the one, though.
 

melloa

Wizard
Joined
May 22, 2016
Messages
1,749
nomb said:
I definitely wish I could have gotten an HBA card in there.
The GPU isn't for my TrueNAS. It's for my gaming/CAD/Plex VMs. :) I can only fit the one, though.
What I see on your list is a $5,000 invoice. Maybe with that kind of money you could get a couple of servers and a good gaming desktop, but like I say to my son: it needs to match the wife's decoration ;)
Good luck.
 

nomb

Cadet
Joined
Mar 15, 2022
Messages
8
melloa said:
What I see on your list is a $5,000 invoice. Maybe with that kind of money you could get a couple of servers and a good gaming desktop, but like I say to my son: it needs to match the wife's decoration ;)
Good luck.
It was definitely a price of convenience, but honestly I've had no issues with performance. The gaming VM flies, and I like doing a linked clone for each game from a single template. If I didn't have the goal of keeping it as small as possible, I would have done a full-size case and put two 3080s or 3090s in it, plus an HBA to pass through. It is a cool case though: business in the back, party in the front.

Thank you for your response.
 