What to look for when buying SSDs?

Joined: Dec 26, 2023 | Messages: 17
I'm currently building a new TrueNAS Scale Server (256GB DDR4, Epyc 16Core, 40G Network). It'll have multiple pools. The planned layouts for the beginning are (it will be scaled in the future, once more space is needed):

- 3 x MIRROR | 2 wide | 18TB SATA HDD
- 1 x RaidZ2 | 6 wide | 20TB SATA HDD

Now I want to add some SSDs to the server as cache. I personally don't think L2ARC is needed, but log & metadata vdevs might be good.
However I'm confused about the best SSD to buy and a good layout.

Layout-wise I'm thinking about 3-way mirrors to protect those vdevs. Is that needed?

As for Hardware, I would like to keep the initial cost reasonable, so I'm not planning to go with enterprise SSDs, and going used is risky for SSDs.
Looking at SSD options, there are multiple metrics to consider, which should I focus on?
  • Read/write MB/s
  • 4K IOPS
  • TBW
  • DWPD
  • MTBF
  • MTTF
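(For what it's worth, TBW and DWPD describe the same endurance budget, just normalized differently, so you really only need to compare one of them. A minimal sketch of the conversion, using made-up numbers for a hypothetical drive:)

```python
# Hypothetical numbers for illustration: a 1 TB drive with a 5-year
# warranty and a rated endurance of 600 TBW.
capacity_tb = 1.0
warranty_years = 5
tbw = 600.0

# DWPD = total rated writes divided by (capacity * days of warranty).
dwpd = tbw / (capacity_tb * warranty_years * 365)
print(f"DWPD: {dwpd:.2f}")  # ~0.33 drive writes per day
```

Roughly speaking, consumer drives tend to land around 0.3 DWPD, while write-oriented enterprise drives are rated at 1-3 DWPD or more.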
 

joeschmuck (Old Man, Moderator)
Joined: May 28, 2011 | Messages: 10,994
I would like to keep the initial cost reasonable, so I'm not planning to go with enterprise SSDs
Then it doesn't matter what you buy. Also, you didn't provide your Use Case.
 
Joined: Dec 26, 2023 | Messages: 17
I think the use case is not super relevant for the "write" cache & metadata vdev question. But anyway, it'll be a hybrid/mixed workload; that's the reason for the multiple pool layouts: VM storage, small files (documents), larger files (media), maybe a database.
 
Joined: Dec 26, 2023 | Messages: 17
Then it doesn't matter what you buy.
Well, even in the consumer SSD market there are multiple options for roughly the same price, so I think it does matter. For the caching and metadata vdevs, should the focus be on performance (MB/s, IOPS), TBW, or MTBF/MTTF (given that those vdevs will be mirrored, if that makes sense)?
 

joeschmuck (Old Man, Moderator)
Well, even in the consumer SSD market there are multiple options for roughly the same price, so I think it does matter. For the caching and metadata vdevs, should the focus be on performance (MB/s, IOPS), TBW, or MTBF/MTTF (given that those vdevs will be mirrored, if that makes sense)?
I'm sure someone will have a different opinion, but myself, I stick with the brand I prefer, which also means I do some current research to make sure my brand is doing okay and doesn't have any problems. It is completely a personal preference. If this were an office/company server, I'd buy enterprise SSDs, no matter where they're used in the NAS. Again, just my opinion, and you will find many opinions here.

There are likely a dozen threads on SSDs on the TrueNAS forum; have you searched for those at all? Or tried a Google search for 'truenas ssd metadata' or something like that?
 
Joined: Dec 26, 2023 | Messages: 17
Yeah, I searched for them and found some information, e.g. to use SSDs with supercapacitors for power loss protection, or specific models.
However, I'm hoping to get a more general idea with this question. For me, having a range for TBW, MTBF, MTTF, MB/s, and IOPS is much more valuable than specific SSD models. That way I can compare the values of available models in my price range against those "ideal" ranges, and the answer would still be valuable in a few months when the market changes. I haven't found such general recommendations or value ranges yet using the forum search or Google.
 

Davvo (MVP)
Joined: Jul 12, 2022 | Messages: 3,222
A SLOG requires excellent performance in mixed use (reads and writes) as well as strong endurance and low latency.
A bit dated but still useful read:
 
Joined: Dec 26, 2023 | Messages: 17
Great article, I'll read it.

SLOG requires excellent performance [...] as well as strong endurance and low latency.
This is also good information. Can we get some numbers here (obviously I don't expect exact numbers, but what range are we talking about)? And how is latency noted in technical data?
  • TBW: 100TB, 500TB, 1PB, 50PB?
  • MTBF: <1M hrs, 2M hrs, >3M?
  • IOPS: 50k, 100k, 150k?
  • MB/s: 500, 2000, 6000?
 

Davvo (MVP)
And how is latency noted in technical data?
Buy an M.2 SSD; you also want PLP (power loss protection). Look in the discussion thread of the resource I linked; generally, Intel's Optanes were the best (but they are no longer produced).
 

joeschmuck (Old Man, Moderator)
Great article, I'll read it.


This is also good information. Can we get some numbers here (obviously I don't expect exact numbers, but what range are we talking about)? And how is latency noted in technical data?
  • TBW: 100TB, 500TB, 1PB, 50PB?
  • MTBF: <1M hrs, 2M hrs, >3M?
  • IOPS: 50k, 100k, 150k?
  • MB/s: 500, 2000, 6000?
You will be able to find that kind of data on the internet, notably on the manufacturers' websites and in the white papers they generate. All of these things you are asking for and more are part of the basic specifications of a device. If you put together a nice group of data to rank the various SSD/NVMe drives, please post it; I'm sure several people will find it handy. The only thing I know that will transfer 14,000 MB/s is PCIe Gen 5; Gen 4 is 8,000 MB/s. Of course that is the theoretical maximum; you will never see that in the real world.
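(For context, the link-level maxima can be derived from the per-lane signaling rate. A quick sketch; the 8,000 and 14,000 MB/s figures quoted above are typical drive specs, which sit a bit below these raw x4 link numbers:)

```python
# Theoretical PCIe throughput for an x4 NVMe link.
# Gen 4 runs at 16 GT/s per lane, Gen 5 at 32 GT/s, both with
# 128b/130b encoding (128 payload bits per 130 transferred bits).
def x4_bandwidth_mbps(gt_per_s: float) -> float:
    per_lane_gbps = gt_per_s * 128 / 130  # usable gigabits/s per lane
    return per_lane_gbps * 4 / 8 * 1000   # 4 lanes, bits -> megabytes

print(f"Gen 4 x4: {x4_bandwidth_mbps(16):.0f} MB/s")  # ~7877
print(f"Gen 5 x4: {x4_bandwidth_mbps(32):.0f} MB/s")  # ~15754
```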
 
Joined: Dec 26, 2023 | Messages: 17
All of these things you are asking for and more are part of the basic specifications of a device
Yes, if I make that table, I'll share it.

But my question also includes "What is a good value for a given data point?".

Take TBW as an example; the same question applies to all those metrics.
  • What TBW value is considered good?
  • What TBW value would be too low?
  • What TBW value would be overkill, because the drive would most likely fail before reaching it?
 

DigitalMinimalist (Contributor)
Joined: Jul 24, 2022 | Messages: 162
Don’t make it too complicated…

Take enterprise SSDs with PLP (if buying used, with >1PB of rated endurance left) with your desired connector (NVMe or SATA) and your desired capacity.

I've never had an issue with SSDs so far, so a mirrored SSD is imho sufficient.

As NVMe drives (M.2 & U.2) are similarly priced to SATA, I would go with NVMe (PCIe 3.0 or 4.0, depending on your mainboard).

I use an ASUS Hyper M.2 with PCIe 3.0 and 4x Micron 7300 1TB: 2x mirrored as a VM pool and 2x mirrored as a special vdev for my HDD pool.

L2ARC & SLOG: only consider when your RAM is maxed out and you still need more performance
 

Davvo (MVP)
NVMe or SATA
Generally you want the SLOG to be on NVMe due to the low latency required; L2ARC as well, but there the performance impact is less noteworthy.

SLOG: only consider when your RAM is maxed out and you still need more performance
Incorrect; RAM has no direct correlation with SLOG. A SLOG does not increase performance: the better it is, the less awful your sync writes' performance is.
 
Joined: Dec 26, 2023 | Messages: 17
I thought RAM is only "used" when reading, so a SLOG makes sense for writes.

And if those values/metrics are not that important as long as the drive has power-loss protection, that also helps. Still, personally I think there is a difference between, e.g., a TBW of 100TB and one of 2PB. If there are no known best practices, I think my best bet would be to calculate the write volume to see how long the SSDs will live in the system, correct?
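(That calculation is straightforward. A sketch with purely illustrative numbers; the TBW rating, daily write volume, and write amplification factor are all assumptions to replace with your own:)

```python
# Rough lifetime estimate from rated endurance and expected writes.
# All figures here are illustrative assumptions, not recommendations.
tbw_rating_tb = 600.0        # rated endurance of the drive (TBW)
daily_writes_gb = 200.0      # estimated writes hitting the drive per day
write_amplification = 2.0    # pessimistic guess; real WA varies

years = tbw_rating_tb * 1000 / (daily_writes_gb * write_amplification * 365)
print(f"Estimated endurance lifetime: {years:.1f} years")  # ~4.1 years
```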
 

Davvo (MVP)
I thought RAM is only "used" when reading, so a SLOG makes sense for writes.
No, a SLOG makes sense only if you have the dataset's sync property set to always; please read the following resource.

And if those values/metrics are not that important as long as the drive has Power-Loss Protection, that also helps.
No one said so. The ideal SLOG has:
  1. PLP, otherwise it's useless;
  2. the highest possible performance in mixed use (concurrent reads and writes), otherwise the pool will hog during writes;
  3. a high TBW, otherwise you will have to change it frequently (over/underprovisioning will help a lot here);
It also ought to be a PCIe device (so either M.2 or U.2; U.3 is an ugly chimera) in order to have the lowest possible latency: for the love of all that's solid state, do not use spinning rust.

Reading the linked resource is not optional.
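(On the over/underprovisioning point: a common rule of thumb is that a SLOG only ever holds a couple of transaction groups' worth of writes, so the usable capacity needed is tiny and the rest of the drive can be left unpartitioned for endurance. A back-of-the-envelope sketch, assuming the 40G link from the original post and the default 5 s txg timeout:)

```python
# Back-of-the-envelope SLOG sizing, based on the common rule of thumb
# that a SLOG only needs to buffer a few transaction groups (ZFS
# flushes a txg every ~5 s by default). All figures are assumptions.
link_gbit_per_s = 40          # the 40G network from the original post
txg_seconds = 5               # default zfs_txg_timeout
txgs_buffered = 2             # keep roughly two txgs in flight

max_ingest_gb_per_s = link_gbit_per_s / 8
slog_gb = max_ingest_gb_per_s * txg_seconds * txgs_buffered
print(f"SLOG capacity actually used: ~{slog_gb:.0f} GB")  # ~50 GB
```

Everything beyond that can stay unallocated; the controller uses the free space for wear leveling, which stretches the effective endurance.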
 