SuperWhisk
Dabbler
- Joined
- Jan 14, 2022
- Messages
- 19
I have tried to be thorough in my research, but without directly testing all the permutations I'm not sure which option would be best for my use case.
Hardware
Dell R630 server with 2 CPUs, 64GB ECC RAM (32 per CPU), 8x 1TB SATA HDDs (raidz2), 4x 500GB NVMe SSDs, and 2x 120GB SATA SSD boot drives (mirror)
SATA HDDs are connected via an LSI SAS 3008 based HBA card.
Boot SSDs are using the chipset SATA controller.
NVMe drives are on a PCIe 3.0 16x carrier card, with the slot bifurcated 4x4 so each is directly connected to CPU 1. They are Samsung 970 Evo Plus drives with very little writes on them. The server is on a UPS so I am not concerned about the lack of power loss protection on the SSDs.
Goals
My plan is to run SCALE and create a Linux VM pinned to the second CPU and its associated memory to run various applications such as NextCloud and Unifi Controller under Docker or Kubernetes (I want more control than the current plugin/apps interface will allow).
The HDDs in this server will be my primary NAS storage pool hosting all my data from my other computers over NFS, as well as data in NextCloud, also via NFS.
This pool will be regularly replicated to a TrueNAS CORE machine for backup purposes.
With all that background, what would be my best use of these NVMe drives?
From what I have read, there are four primary options (in no particular order):
- SLOG. It seems like 500GB would be way too big for SLOG unless I was regularly doing 100+GB writes at a speed much greater than the SATA drives could handle. I only have a gigabit network with no imminent plans to upgrade it, so I don't think I would be able to saturate the SATA drives on sustained writes to the pool, since the IO is spread across multiple drives.
- 500GB could work for L2ARC at the cost of about 10GB of RAM but I don’t know how much benefit it would provide for similar reasons (or if these consumer drives would just die a quick death in that role).
- I could create a second pool with some or all of the SSDs and use that for the VM boot drive to improve application disk performance, especially random IO. I could connect this pool to the VM over local iSCSI, but once again, I am not sure if I would actually see any real benefit from that (if I could even make it work).
- Finally, I could use some or all of the SSDs as a ZFS metadata device for the HDD pool, to store metadata and smaller files. This seems like it would be most likely to have an impact, by greatly increasing random IO performance on the pool for all use cases, but 500GB also seems rather large for this purpose on an 8x1TB raidz2 pool.
- I could also do some combination of the above, as I have 4 SSDs to work with here. 1x500GB L2ARC, 3x500GB special vdev mirror with hot spare?
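For what it's worth, the sizing claims in the first two bullets can be sanity-checked with some quick arithmetic. This is only a back-of-envelope sketch under stated assumptions: gigabit line rate, ZFS's default 5-second transaction-group flush interval, and a worst-case 4 KiB L2ARC record with roughly 80 bytes of in-RAM header per record (the exact header size varies by OpenZFS version):

```python
# Back-of-envelope sizing for the SLOG and L2ARC options above.

GBIT_BYTES_PER_SEC = 125_000_000   # 1 Gbit/s ≈ 125 MB/s of ingest
TXG_TIMEOUT_SEC = 5                # default zfs_txg_timeout
TXGS_IN_FLIGHT = 3                 # open + quiescing + syncing txgs

# Maximum sync-write data a SLOG could ever hold before it is flushed
# to the pool, at full gigabit ingest:
slog_needed_bytes = GBIT_BYTES_PER_SEC * TXG_TIMEOUT_SEC * TXGS_IN_FLIGHT
print(f"SLOG needed: ~{slog_needed_bytes / 1e9:.2f} GB")

# RAM consumed by per-record headers for a fully populated 500 GB L2ARC,
# assuming a pessimistic 4 KiB average record size (larger records cost
# proportionally less RAM):
L2ARC_BYTES = 500_000_000_000
RECORD_SIZE = 4096                 # worst-case assumption
HEADER_BYTES = 80                  # approximate per-record header
l2arc_ram_bytes = L2ARC_BYTES // RECORD_SIZE * HEADER_BYTES
print(f"L2ARC header RAM: ~{l2arc_ram_bytes / 1e9:.2f} GB")
```

On those assumptions the SLOG only ever holds a couple of GB at gigabit speeds (which is why 500GB seems oversized for that role), and a fully warm 500GB L2ARC with small records lands in the ~10GB-of-RAM ballpark mentioned above.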