Consumer SSDs for System Install

SeaWolfX

Explorer
Joined
Mar 14, 2018
Messages
65
I am setting up a new server that will function as a file/media server. It will only have a few users and should not see heavy load.

I have 2 x Samsung 250GB 870 EVO SATA SSDs and 2 x Samsung 512GB 970 PRO NVMe M.2 SSDs that I wanted to use for the OS/system/VMs, maybe using each set of disks to create two ZFS mirrored pools: one for the OS install and one for other system-related stuff that is not actually content storage (for that I have a separate set of HDDs).

However, I have read in some places that consumer/prosumer SSDs will wear out quickly when used with ZFS. I cannot afford expensive enterprise SSDs, so what are my options?
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
consumer/prosumer SSDs will wear out quickly when used with ZFS
That's more true of SLOG and/or L2ARC, but far less so for the boot pool and for data or VM pools. Using those Samsung SSDs that you have will be fine for the stated purposes.

I think what you're saying is use the first 2 as the boot pool (and system dataset perhaps) and the other 2 for VMs and other non-content related storage.
 

SeaWolfX

Explorer
Joined
Mar 14, 2018
Messages
65
That's more true of SLOG and/or L2ARC, but far less so for the boot pool and for data or VM pools. Using those Samsung SSDs that you have will be fine for the stated purposes.

That is good to hear. I have read in some places (particularly on the Proxmox forum) that consumer SSDs like the Samsung drives I have bought will wear out in a few months if run with ZFS.

I think what you're saying is use the first 2 as the boot pool (and system dataset perhaps) and the other 2 for VMs and other non-content related storage.

Yes, correct, that was my intention. Perhaps using the SATA SSDs for the boot pool and the NVMe drives for the other pool, as the latter will probably see more load.
 

Jailer

Not strong, but bad
Joined
Sep 12, 2014
Messages
4,977
I have read in some places (particularly on the Proxmox forum) that consumer SSDs like the Samsung drives I have bought will wear out in a few months if run with ZFS.
That's a perfect example of why you shouldn't believe everything you read on the internet. My little 16GB SATA DOM is going on 6 years and is nowhere near being worn out.
 

SeaWolfX

Explorer
Joined
Mar 14, 2018
Messages
65
That's a perfect example of why you shouldn't believe everything you read on the internet. My little 16GB SATA DOM is going on 6 years and is nowhere near being worn out.

Nice :) Are you running two with ZFS?
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
I have read in some places (particularly on the Proxmox forum) that consumer SSDs like the Samsung drives I have bought will wear out in a few months if run with ZFS.
Actually, that may be in reference to running ZFS on Ceph... I have seen warnings about the write churn created by Ceph's replication in conjunction with ZFS, which should be avoided on consumer drives... anyway, not applicable in our case.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
That is good to hear. I have read in some places (particularly on the Proxmox forum) that consumer SSDs like the Samsung drives I have bought will wear out in a few months if run with ZFS.

The Internet is full of people who think they know things.

I'm a professional. I do this stuff professionally. I own my companies, so unlike many engineers, who just buy some stupid party line that you must use "data center" or "enterprise" SSD's in servers, my jerk boss (that's me) won't sign off on PO's for stupid things that aren't needed, because his theory is that it is better to be smart, which allows company resources to go further. We've been using consumer grade SSD's for more than a decade in many demanding roles, such as in ESXi hypervisors. We've had some failures, yes, but these failures have almost always cleanly traced back to exceeding workload write limits, sometimes by design, sometimes due to an unexpected writer.

COULD you POSSIBLY wear them out? Hell yes, of course, it's totally possible. But ZFS isn't some magic SSD-eating beast. You actually have to be writing craptons of stuff to them to kill them. Plus, putting two consumer SSD's in RAID1 (or mirrors in ZFS) is actually going to fail gracefully compared to a single "data center" grade SSD, and probably cost less.

So there are definitely things that you can do that will wear out an SSD. These mostly come down to frequent writes: failing to disable atime updates, doing frequent updates of large chunks of software, or frequently creating and destroying VM's. If you're the sort that thinks it is fun to have a Terraform workflow in Git that spins up three VM's on command to check the status of your coffeemaker, and then destroys the VM's when the query is done, well, yes, that could wear through endurance quickly...
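For what it's worth, the atime piece is a one-liner per dataset; the pool/dataset name below is just an example:

    # Check whether atime updates are currently enabled (pool/dataset name is an example)
    zfs get atime tank/vms
    # Turn them off so plain reads stop generating metadata writes
    zfs set atime=off tank/vms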

On the other hand, some of us get consolidation ratios of several dozen VM's on an SSD datastore. Granted, many of them aren't super-active on the I/O front.

The other things? You've got a five year warranty, and prices fall over time. There's value in getting your stuff done, and even if one of them fails in a year or two, they are not that expensive to replace, you RMA the bad one, and then you end up with a spare.
 

Jailer

Not strong, but bad
Joined
Sep 12, 2014
Messages
4,977
Nice :) Are you running two with ZFS?
Yes to ZFS, and no to two. ZFS is the only option for TrueNAS. It's my personal opinion that 2 SSDs in a mirror for a boot device are overkill for most home users.
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
The write workload on the boot devices (even if holding the system dataset) is completely negligible for a consumer SSD. Consumer USB thumb drives, on the other hand, seem to have gotten worse in a race to the bottom.
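If anyone wants to sanity-check the wear on their own drives, smartmontools will show it; the device paths and attribute names below are examples and vary by model:

    # SATA (e.g. 870 EVO) - Samsung reports wear and lifetime writes in these attributes
    smartctl -A /dev/ada0 | grep -E "Wear_Leveling_Count|Total_LBAs_Written"
    # NVMe (e.g. 970 PRO) - the health log shows "Percentage Used" and "Data Units Written"
    smartctl -a /dev/nvme0 | grep -E "Percentage Used|Data Units Written"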
 

SeaWolfX

Explorer
Joined
Mar 14, 2018
Messages
65
The Internet is full of people who think they know things.


Thank you for taking the time to write such a thorough answer. What you say makes sense, and I feel a bit more confident about my planned SSD setup.
 

ChrisRJ

Wizard
Joined
Oct 23, 2020
Messages
1,919
Nice :) Are you running two with ZFS?
I assume the rationale behind this question is the thought that ZFS would cause the drive to die sooner than other file systems would. If that is correct, the question reflects a misunderstanding of what is relevant. The primary driver, as @jgreco described above, is the workload. So instead of thinking in terms of ZFS vs. non-ZFS, you need to think in terms of what read and write activities will likely happen. If you take that approach and put the statement from the Proxmox forum next to it, you will see that the latter is simply irrelevant for you. It was made in a totally different context (running virtualization workloads vs. booting the OS), and so would be comparing apples with pears.

If you go through life with the mindset of "context is important", you will soon start seeing that many people make broad statements (i.e. without providing the context) about all sorts of things. All too often I have heard managers say "spare me the details", and every time I thought "wow, are you stupid". Context is everything, and this is also the reason why I so often just reply to questions on this forum with "what is your use-case?".
 

thomas-hn

Explorer
Joined
Aug 2, 2020
Messages
82
COULD you POSSIBLY wear them out? Hell yes, of course, it's totally possible. But ZFS isn't some magic SSD-eating beast. You actually have to be writing craptons of stuff to them to kill them. Plus, putting two consumer SSD's in RAID1 (or mirrors in ZFS) is actually going to fail gracefully compared to a single "data center" grade SSD, and probably cost less.
Could you please explain why an SSD mirror would fail?
 

Patrick M. Hausen

Hall of Famer
Joined
Nov 25, 2013
Messages
7,776
Could you please explain why an SSD mirror would fail?
It's going to fail gracefully. Meaning one of the SSDs might fail sooner or later but your system will most probably keep chuggin' along. Then you replace the failed one.
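Roughly what that replacement looks like at the ZFS level (pool and device names are just examples; for the TrueNAS boot pool you would normally do it through the GUI so the boot partitions get recreated):

    zpool status boot-pool                  # identify the FAULTED/UNAVAIL member
    zpool replace boot-pool ada0p2 ada2p2   # swap the failed member for the new one
    zpool status boot-pool                  # watch the resilver finish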

So to summarize:

1. Consumer-grade SSDs are perfectly fine for the boot pool.
2. They are orders of magnitude better than the USB flash drives ("sticks") that people used in the past.
3. Yes, ZFS being copy-on-write is prone to wearing out cheap SSDs,
4. but that does not happen on the boot pool, if ...
5. you set your system dataset to the storage pool - which is the default.
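If you want to confirm where the system dataset ended up and how much each pool actually writes, something like this works (the pool name is an example):

    # The system dataset shows up as <pool>/.system
    zfs list -r -o name,used tank | grep "\.system"
    # Watch per-pool write activity in 10-second samples
    zpool iostat -v 10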

Don't worry - be happy.
 