MGYVR
Dabbler · Joined Mar 14, 2021 · Messages: 16
Background/build info:
Dell R730xd (12x 3.5" front bays, 2x 2.5" rear bays)
-Dual Xeon 12-core CPUs (w/ HT), 48 logical cores
-64GB ECC RAM
Boot pool
-2x 120GB SATA SSDs mirrored, connected to HBA330 via rear flex-bays
-System dataset stored on this pool
Fast pool
-2x 1TB NVMe SSDs mirrored, connected to a 4x M.2 NVMe PCIe x16 expansion card, bifurcated x4/x4/x4/x4
-Syncthing plugin/jail with access to the bulk pool.
-Unifi Controller plugin/jail.
-2-3x Windows VMs, one always on, and 1-2 more that don't need to be running all the time (probably 4GB RAM for each VM)
Bulk pool
-4x Seagate Exos 16TB SAS drives in a single RAIDZ2 vdev, connected via HBA330 front bays
-1x 256GB NVMe SSD as an L2ARC cache device (on the PCIe expansion card detailed above)
-Basically just a giant SMB share
Essentially this will host a large SMB share (on the bulk pool) and 2-3x Windows VMs (on the fast pool). The SMB share will not get a ton of activity: only a handful of users working with it, maybe copying in/out less than 10GB/day total. One of the VMs will access the SMB share and run PDF search indexing on the relevant files. I will also be running Syncthing and a Unifi Controller as plugins/jails on the fast pool; Syncthing will have access to the large SMB share for syncing off site. The focus of this build is maintaining responsiveness for the Windows machines accessing/reading/writing the SMB share. The server will have 4x 1Gbps connections LAG'd into a Unifi switch with plenty of switching capacity to make use of them, and I expect it will rarely have more than four users reading/writing data to the SMB share concurrently. We built this server with the expectation that we will eventually upgrade the network to give it a faster connection, but that upgrade isn't happening immediately.
I have basically all of this set up and working already, but after extensive testing and reading a ton of documentation I still have a few questions:
1. Should I enable write caching on the 16TB drives in the system BIOS? It defaulted to off when I first installed them, and I have read conflicting reports about the use of write caching with TrueNAS.
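For what it's worth, before flipping anything in the BIOS I've been checking what the drives themselves report. On TrueNAS CORE (FreeBSD) the caching mode page of a SAS drive can be inspected with camcontrol; `da0` below is just a placeholder for one of the Exos drives, not necessarily the right device name on this box:

```shell
# List attached drives to find the right device name
camcontrol devlist

# Show the SCSI caching mode page (page 8) for a SAS drive.
# WCE: 1 means the drive's volatile write cache is enabled.
camcontrol modepage da0 -m 8

# Opens the same mode page in $EDITOR so WCE can be toggled
# (commented out here; device-specific and persistent on most drives):
# camcontrol modepage da0 -m 8 -e
```

This only reads/sets what the drive firmware reports, so it's a way to verify whether the BIOS setting actually took effect on each disk.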
2a. I am a little confused about RAM use reporting in TrueNAS. The dashboard tile says I have ~10GB free, but the text at the top of the VMs screen says I only have 0.10 bytes free, and it won't let me boot a VM that is already configured. What's up with that?
2b. One or two of the VMs won't need to be launched at boot, but I need to be able to launch them on demand periodically. I know the ZFS paradigm that free RAM is wasted RAM, but I'm willing to "waste" ~12GB of RAM for this. I am aware of the tunable to limit the ARC size; would that be the best way to solve this dilemma, and are there recommendations on size given the system specs (looking at 4GB per VM)? Does this system need more RAM in general?
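In case it helps frame the question, here's the back-of-envelope math I've been using to pick a value for the `vfs.zfs.arc_max` tunable. The reservation numbers are my own rough guesses for this box, not measured figures:

```python
# Rough ARC ceiling sizing for a 64GB box: reserve RAM for VMs, jails,
# and the OS, and give whatever is left to the ARC. All reservations
# below are assumptions, not measurements.
GIB = 1024 ** 3

total_ram      = 64 * GIB      # installed ECC RAM
vm_reservation = 3 * 4 * GIB   # 3 Windows VMs at ~4 GiB each
jail_overhead  = 2 * GIB       # Syncthing + Unifi jails (guess)
os_overhead    = 4 * GIB       # middleware, Samba, general headroom

arc_max = total_ram - vm_reservation - jail_overhead - os_overhead
print(f"suggested vfs.zfs.arc_max = {arc_max} bytes ({arc_max // GIB} GiB)")
```

That works out to ~46 GiB, which would then go into System → Tunables as a `vfs.zfs.arc_max` sysctl. Whether capping the ARC is actually the right fix here is exactly what I'm asking.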
3. I have one additional 256GB NVMe SSD that I'm not presently using for anything. Any suggestions? The two 256GB SSDs were actually acquired due to a purchasing mistake, but the expense was small enough that they don't need to be returned if a use for them can be found.
Thanks for any responses to my questions, and any input toward a general sanity check on this build!