ZFS on Consumer-SSD pool

thomas-hn

Explorer
Joined
Aug 2, 2020
Messages
82
Can ZFS be used on a pool made of customer (non-enterprise) SSDs like the Samsung 860 Pro or will there be problems, for example, because of a lot of write accesses by ZFS and, therefore, a low lifetime of the SSDs?

Will TrueNAS automatically use TRIM for SSDs or does this have to be enabled separately somehow?
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
I think you're looking for the word "consumer".

You may use consumer SSD's if your workload is compatible. Expect that resilvers will have some impact on overall endurance. Better-quality products, such as Samsung TLC, are advisable. Some people have tried the Samsung QLC with bad results.
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
Will TrueNAS automatically use TRIM for SSDs or does this have to be enabled separately somehow?
Enabled by default.
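If you want to check or change it on a given pool, autotrim is a pool property (the pool name here is just an example):

zpool get autotrim tank
zpool set autotrim=on tank

and a one-off pass can be kicked off with zpool trim tank.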

a lot of write accesses by ZFS and, therefore, a low lifetime of the SSDs?
As already said above, writes depend on what you're doing with the pool... storing media files for access is something that can work really well, but if those files are being constantly edited, maybe not so great.

Don't waste your money using consumer SSDs as L2ARC or SLOG... they will burn out very quickly.
 

ChrisRJ

Wizard
Joined
Oct 23, 2020
Messages
1,919
The Samsung 860 Pro is even MLC. So short of going enterprise-grade, it is probably one of the best choices you can make.
 

thomas-hn

Explorer
Joined
Aug 2, 2020
Messages
82
I think you're looking for the word "consumer".

You may use consumer SSD's if your workload is compatible. Expect that resilvers will have some impact on overall endurance. Better-quality products, such as Samsung TLC, are advisable. Some people have tried the Samsung QLC with bad results.
Oh, thanks.....my fault...shame on me. I corrected the thread title, but left my mistake in my question above, so that your comment still matches.
 

thomas-hn

Explorer
Joined
Aug 2, 2020
Messages
82
Enabled by default.


As already said above, writes depend on what you're doing with the pool... storing media files for access is something that can work really well, but if those files are being constantly edited, maybe not so great.

Don't waste your money using consumer SSDs as L2ARC or SLOG... they will burn out very quickly.
I'm thinking about using ZFS on a mirror with two Samsung 860 Pro for storing Proxmox-VMs.
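Roughly what I have in mind, as a sketch (pool and device names are just placeholders, and on TrueNAS I would create this through the GUI anyway):

zpool create -o ashift=12 vmpool mirror /dev/ada0 /dev/ada1
zfs create vmpool/vms

Would a plain two-way mirror like that be reasonable for VM storage on these drives?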
 

NugentS

MVP
Joined
Apr 16, 2020
Messages
2,949
That would work. I used 6 MX500's for VM storage (mirrored vdevs) and lost about 20% of their rated endurance a year - so a projected lifespan of 4-5 years, which I thought was just fine.
As previously stated, do not use Samsung QVO (which self-destructed after a couple of months).
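The wear figures come straight from SMART; something like this is how I tracked it (device name is an example, and the attribute labels vary by vendor and smartctl's drive database):

smartctl -a /dev/ada0 | egrep -i 'wear|lifetime|lbas_written'

On the MX500s the interesting lines are Percent_Lifetime_Remain and Total_LBAs_Written, which is where an estimate like ~20% per year comes from.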
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
I used 6 MX500's for VM storage (mirrored vdevs) and lost about 20% of their rated endurance a year - so a projected lifespan of 4-5 years, which I thought was just fine.
As previously stated, do not use Samsung QVO (which self-destructed after a couple of months).
Whereas if you were using QVO as a replication target for a once-a-day snapshot, probably fine (but perhaps a waste of SSD unless you're seeking speed to recover from it).
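As a sketch of what that kind of once-a-day replication amounts to (pool, dataset and snapshot names are made up; on TrueNAS you'd normally set this up as a replication task in the GUI):

zfs snapshot tank/vms@daily-0601
zfs send -i tank/vms@daily-0531 tank/vms@daily-0601 | zfs recv backup/vms

The QVO on the receiving end then only sees one mostly-sequential burst of writes per day, which is a very different load from hosting live VMs.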
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Oh, thanks.....my fault...shame on me. I corrected the thread title, but left my mistake in my question above, so that your comment still matches.

No worries. It's important that we be clear when we are discussing these things, because there is definitely such a thing as a "customer SSD pool" for stuff like hosting companies. Customers tend to have unreasonable expectations about being able to abuse SSD storage and think it means "unlimited endurance." This is QUITE different than your question about "consumer" SSD, where we are discussing intelligently deploying inexpensive SSD based on careful consideration of endurance. That's why I wanted to make certain the clarification was obvious to all.

Bearing in mind that I am NOT talking about ZFS here but rather just SSD behind hardware RAID controllers:

I've been running a large fleet of hypervisors on mostly consumer-grade SSD for many years now. This dates all the way back to 2011-2012, when we began to integrate cheap Black Friday 60GB and 120GB SSD's for boot storage and development VM's, and took a significant upswing in 2015 with the $150 Intel 535's on Black Friday. Since then, we've deployed many hundreds of mostly Samsung 850/860/870 EVO and 970 EVO Plus/980 PRO's, and have had, I think, one failure among the Samsungs.

We're able to get away with this because on most of these hypervisors we have a reasonable workload. Most of our VM's are based on a specialized appliance build of FreeBSD that does not generate a lot of superfluous writes, which includes little things such as disabling atime updates.
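As a concrete illustration of the sort of thing I mean (a generic example, not our actual build): in a FreeBSD guest you can mount the filesystem noatime in /etc/fstab,

/dev/gpt/rootfs  /  ufs  rw,noatime  1  1

and if the guest storage were ZFS instead, the equivalent knob would be zfs set atime=off pool/dataset.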

The Intel 535's are mostly dead. They were deployed to production gear with somewhat demanding endurance requirements, and with their tepid 40GB/day endurance rating, we knew in advance that we really needed more like 100-150GB/day for some of them, and those did mostly fail over time. We bought them because it seemed like SSD's were getting cheaper and better, and that we'd be replacing them over time anyways, so it did work out. And Intel replaced a bunch of them with 545s's anyways, so, yay, win.

If you are intending to use your SSD's for things like Kubernetes development, where there is a high amount of create/deploy/destroy cycling on a daily basis, you are likely to burn through your endurance fairly quickly. It is better to buy the high endurance stuff for that.
 

ByteMan

Dabbler
Joined
Nov 10, 2021
Messages
32
I have four SSDs which I would like to set up in raidz1 (SSD-only pool)
- 2x - 3.84TB Ironwolf 125 Pro
- 2x - 4TB WD Red SA500
My understanding is that both types are higher-end consumer SSDs.
Both report a 512 byte logical + physical sector size (for what it's worth).

Is there anything special that should be considered (deviating from default settings) with these drives in said configuration?
Should ashift=13 be considered?
Also, my assumption is that the slight capacity difference will not cause an issue?
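For what it's worth, this is roughly how I'd expect to force and then check the value at creation time (pool and device names are placeholders, and I'm genuinely unsure whether 13 is warranted, hence the question):

zpool create -o ashift=13 flash raidz1 /dev/ada0 /dev/ada1 /dev/ada2 /dev/ada3
zpool get ashift flash

On TrueNAS I'd of course build the pool in the GUI, which I believe uses ashift=12 by default.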


camcontrol identify adaX

Ironwolf 125 Pro:
device model Seagate IronWolfPro ZA3840NX10001-2ZH104
protocol ACS-3 ATA SATA 3.x
firmware revision SU4SC01B
cylinders 16383
heads 16
sectors/track 63
sector size logical 512, physical 512, offset 0
LBA supported 268435455 sectors
LBA48 supported 7501476528 sectors
PIO supported PIO4
DMA supported WDMA2 UDMA6
media RPM non-rotating
Zoned-Device Commands no

WD RED:
device model WDC WDS400T1R0A-68A4W0
protocol ACS-4 ATA SATA 3.x
firmware revision 411000WR
cylinders 16383
heads 16
sectors/track 63
sector size logical 512, physical 512, offset 0
LBA supported 268435455 sectors
LBA48 supported 7814037168 sectors
PIO supported PIO4
DMA supported WDMA2 UDMA6
media RPM non-rotating
Zoned-Device Commands no
 