Hardware Validation Question

McFly1800

Cadet
Joined
Apr 25, 2023
Messages
1
I'm doing the RAM calculations for a Dell R740xd being repurposed for TrueNAS SCALE.

Specs are:
  • 2 x Intel Xeon Silver 4110 2.1GHz 8C Processors
  • 64GB RAM - Already planning to take this to 256GB or more, ECC.
  • 2 x 400GB Read Intensive 6G SATA 2.5" SSDs (FlexBay) - Boot
  • 2 x SSD (can define sizing) for L2ARC or SLOG - This is going to be for small files written, millions of them, and Veeam-style backups.
  • 12 x 12TB 7.2K 6G 3.5" SATA Hard Drives - don't need to use PERC presumably. Should I swap to something else?
  • 10/25Gbps network
I was calculating deduplication requirements for the bulk drives, and based on the guidance it sounds like I could easily need 600GB+ of RAM if I enable dedup on a huge volume. Is this correct? If so, I may just skip dedup - but I wanted to validate my numbers against the SCALE hardware planning document.

Most of the existing questions cover smaller use cases, so I wanted to ask this as more of an enterprise/larger-volume planning discussion.

I'd also like to be able to mirror these with snapshots - would large amounts of memory also assist with this?
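For reference, here is my rough math as a sketch - the usable capacity, the assumed RAIDZ2 layout, and the per-entry DDT size are all assumptions, and real usage depends on dedup ratio and actual block sizes:

```shell
# Back-of-envelope dedup table (DDT) RAM estimate. Assumes ~320 bytes of
# in-core DDT per unique block (a commonly cited figure) and the default
# 128 KiB recordsize.
USABLE_TIB=120          # assumed usable space from 12x12TB in RAIDZ2
RECORDSIZE_KIB=128      # ZFS default recordsize
DDT_ENTRY_BYTES=320     # assumed in-core size per unique block

# Number of unique blocks if the pool fills with 128K records
BLOCKS=$(( USABLE_TIB * 1024 * 1024 * 1024 / RECORDSIZE_KIB ))
DDT_GIB=$(( BLOCKS * DDT_ENTRY_BYTES / 1024 / 1024 / 1024 ))
echo "~${DDT_GIB} GiB of DDT for ${USABLE_TIB} TiB at ${RECORDSIZE_KIB}K records"
```

At half that recordsize (64K, closer to a small-file/backup workload), the estimate doubles to roughly 600 GiB, which is where my 600GB+ figure comes from.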
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
L2ARC or SLOG - This is going to be for small files written, millions of them, and Veeam-style backups.

You may want to consider a special purpose vdev. You can offload metadata and small files onto a dedicated vdev made of SSD, but this needs to be redundant (loss of vdev == dead pool).
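Something along these lines - pool name "tank" and the device paths are placeholders, not a tested recipe:

```shell
# Add a mirrored special vdev for metadata (and optionally small files).
# It MUST be redundant - losing this vdev loses the whole pool.
zpool add tank special mirror /dev/disk/by-id/ssd-A /dev/disk/by-id/ssd-B
```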

12 x 12TB 7.2K 6G 3.5" SATA Hard Drives - don't need to use PERC presumably. Should I swap to something else?

PERC is fine, RAID is not. You can convert PERC H200 or H310 to HBA mode, and the PERC HBA330 is apparently fine as well (haven't tested firsthand).

I was calculating deduplication requirements for the bulk drives, and based on the guidance it sounds like I could easily need 600GB+ of RAM if I enable dedup on a huge volume. Is this correct? If so, I may just skip dedup - but I wanted to validate my numbers against the SCALE hardware planning document.

Dedup is usually a bad idea, but at least you're ballpark-ish. Please take some time to check out the article by Stilez in the Resources section that discusses a practical system sized for dedup.
 

NugentS

MVP
Joined
Apr 16, 2020
Messages
2,947
Caveat - I do not use dedupe, never have.

Dedupe requires vast amounts of RAM. To mitigate this, you can use a metadata / dedupe special vdev to hold the tables instead. Make sure that this vdev is redundant, has more than enough space to hold the tables, and is quick. Something like Optane (I do love an Optane) is good in this position - but they may need to be big ones, i.e. really not cheap, and that's assuming you can still get hold of them nowadays. Failing that, good (i.e. enterprise-grade) NVMe drives would be most appropriate.

That's my understanding of things - happy to be corrected.

I know you can create two types of special vdev in the TrueNAS GUI: one seems to be dedup-specific, the other is called Metadata. Given it's all metadata, I'm unsure where the differences lie. However, the dedupe guidance recommends X GiB for each TiB of general storage, and I would personally add a large expansion factor on top of that, as going too small would probably have major ramifications for pool speed.

The Metadata vdev also allows small files to be stored on the SSDs. However, this requires manual tuning (setting record sizes and small block sizes on datasets) and is something of a ballache, as by default it will not store any small files, just metadata. Also, I believe either vdev type needs to be present BEFORE you start loading data onto the data vdevs, as they only hold metadata that is written after they are added.

I think. Others may have additional comments
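The manual tuning mentioned above looks something like this - a sketch only, with "tank/smallfiles" as a placeholder dataset and the thresholds as illustrative values you would tune per workload:

```shell
# Large records for the bulk data; blocks at or below the
# special_small_blocks threshold are stored on the special vdev instead
# of the spinning disks. The threshold must be below recordsize, or
# effectively everything lands on the SSDs.
zfs set recordsize=1M tank/smallfiles
zfs set special_small_blocks=128K tank/smallfiles

# Verify what took effect
zfs get recordsize,special_small_blocks tank/smallfiles
```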
