Samsung QVO longevity?

apl

Dabbler
Joined
Jul 11, 2021
Messages
15
Hi all,

my old WD Reds are starting to fail, and I don't feel like replacing them with spinning disks anymore. So I'm thinking of 3x 8TB Samsung 870 QVO SSDs in RAID-Z1. My workload is mostly file storage with 10-100GB written per day:
  • Time Machine backups for two Macs
  • ZFS snapshots from my work SSD (mostly VMs)
  • Some tarball backups
  • Media files
I don't have particularly high performance requirements, so QVO is fine in that sense, but how is the longevity? I've read up on ZFS-on-SSD and QVO longevity from a bunch of sources, but there's nothing too definitive. https://www.cgdirector.com/samsung-evo-vs-qvo-ssds/ says 100GB/day would only cause degradation after ten years, but ZFS's own write overhead needs to be factored in too.
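As a back-of-envelope sanity check (taking Samsung's ~2.88 PB TBW rating for the 8TB QVO at face value and guessing at a pessimistic 3x write amplification from ZFS): 2,880 TB ÷ (0.1 TB/day × 3) ≈ 9,600 days, or roughly 26 years per drive. So the raw daily volume alone doesn't look like the limiting factor, but I'd like to hear from people actually running them.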

Thanks,

AP
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,702
For your use case, the QVO should be a reasonable compromise of cost vs performance.

The TBW rating for those drives is low, but not ridiculously so for the relatively static files you're proposing they will host.

In your position, I would feel OK to do what you're proposing.
 

apl

Dabbler
Joined
Jul 11, 2021
Messages
15
Thanks for the response.

I should also mention that all access except the ZFS snapshots goes through SMB and shouldn't involve many sync writes, which should allow for optimized write patterns if I understand correctly.
 

QonoS

Explorer
Joined
Apr 1, 2021
Messages
87
Remember to configure "autotrim=on". This will help longevity and performance in general.
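
On the pool itself that would be something like this (assuming your pool is called tank, adjust to your own pool name):

$ zpool set autotrim=on tank
$ zpool get autotrim tank

The second command just confirms the property is active.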
 

ChrisRJ

Wizard
Joined
Oct 23, 2020
Messages
1,904
Not too long ago someone mentioned that SMB from Macs is always doing sync writes.
 

NugentS

MVP
Joined
Apr 16, 2020
Messages
2,944
It's not conclusive, but a friend of mine tried using QVOs on a Synology in a RAID 5 setup as an iSCSI store. The servers weren't busy, just some infrastructure-type servers and a couple of web servers.

They lasted about two months (possibly three).
 

apl

Dabbler
Joined
Jul 11, 2021
Messages
15
Thanks for the replies. I now have 3x 8TB QVOs running in RAID-Z1. Three of my old WD Reds are still alive, so I'll make a backup pool out of them.

Is there a way to see how many sync write requests a pool is getting?
 

QonoS

Explorer
Joined
Apr 1, 2021
Messages
87
Start with, for example:

$ zpool iostat -ry your-pool 10

That will list sync/async write request histograms over 10-second intervals. For more options, have a look at the manual:

$ man zpool-iostat


In your case "sync=disabled" could be an option too, if your VMs/data are not mission critical and you know what you are doing.
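
A minimal sketch of what that looks like, assuming the dataset is tank/backups:

$ zfs set sync=disabled tank/backups
$ zfs get sync tank/backups

and to return to the default behaviour later:

$ zfs set sync=standard tank/backups

Keep in mind that with sync=disabled, a crash or power loss can lose the last few seconds of writes that applications were told had already reached stable storage.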
 

apl

Dabbler
Joined
Jul 11, 2021
Messages
15
My VMs are on a different pool that is only replicated to the QVO pool via snapshots. Running Time Machine from my Mac caused up to about 60 sync writes per second, which doesn't sound like much; async writes were far fewer.
 

PiepsC

Cadet
Joined
Feb 8, 2022
Messages
4
I know of several people who strongly recommend against QVOs where repeated writes are in question. Not personal experience, however; just the opinions of people I talked to when assembling my own build.
 

Whiskey

Dabbler
Joined
Jul 10, 2021
Messages
29
@apl sorry to resurrect this thread, but I'm leaning toward these drives as well. Are you still happy with them?
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,702
The key with these drives is to avoid using them for things like SLOG and L2ARC or other high-churn workloads due to their relatively poor TBW rating.

Otherwise, they perform well enough for things like a file share or media storage.
 

KennyPE

Cadet
Joined
Sep 22, 2022
Messages
6
I'm not understanding why some of you state that the Samsung QVO 8TB drive has a poor TBW rating. The warranty is 2.88PB. As SSDs go, that's not poor. We are not talking about enterprise or data center drives; these are consumer drives, and for consumer drives that's great. Whether it will actually make it to 2.8+PB is another question. I'm also not understanding the issue that someone had with SSD failures after 3 months in RAID 5. It makes no sense to me unless they had some very serious write amplification going on. NugentS - do you know why they failed? Can your friend provide a SMART report? I'm very, very curious.
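
If your friend can run it, something along these lines should show the relevant wear counters on a Samsung SATA SSD (the device name here is just an example, adjust to the actual disk):

$ smartctl -a /dev/ada0

The attributes I'd look at are Wear_Leveling_Count, Total_LBAs_Written and the reallocated/uncorrectable counts; together they'd tell us whether the drives genuinely wore out or just dropped dead.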

I'm in the process of building an 8-bay, all-Samsung-8TB-QVO TrueNAS machine, and before I spend all that money on the drives, I'd like to get some good information on why other consumer SSDs are failing and what I need to do to keep writes to a minimum. 95% of this unit's workload will be reads, so I don't see the problem.
 

NugentS

MVP
Joined
Apr 16, 2020
Messages
2,944
I don't know why they failed - sorry. They were in a friend's NAS, acting as an iSCSI store for some light-use VMs. They just died. Since then he has been using Red SSDs without any issue.

My suspicion is that using them in something approaching a WORM fashion shouldn't be a big problem. Just don't expect to be able to thrash them and get a reliable response.
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,175
I'd like to get some good information on why other consumer SSDs are failing
90% crap firmware, 9% manufacturing issues, 1% natural wear is my estimate.

I am dealing with dying 870 EVOs at the moment, due to some sort of known issue - either crap flash or crap firmware - that makes them wear out way more prematurely than specified.
 

KennyPE

Cadet
Joined
Sep 22, 2022
Messages
6
I don't know why they failed - sorry. They were in a friend's NAS, acting as an iSCSI store for some light-use VMs. They just died. Since then he has been using Red SSDs without any issue.

My suspicion is that using them in something approaching a WORM fashion shouldn't be a big problem. Just don't expect to be able to thrash them and get a reliable response.
That's a bit disappointing. If you could ask, I'd appreciate it.

I've been testing various NAND for the better part of nine months now. This all started when I discovered an old CF card in my "drive bag" that hadn't seen power in 18 years (old camera CF), bought a CF reader, plugged it in, and found it still contained every photo I left on it at that time (as far as I can tell). Since there are no clear answers in the various white papers, product sheets, or relevant real-world testing (not since ~2012), I've set up a test system that basically does drive writes and erases continuously. While this doesn't represent a RAID, and certainly not all workload requirements, I felt that a combination of large and small file writes could be representative of real-world use.

What I've found so far is that QLC NAND is far more resilient than most people believe. The first QLC I tested was a cheap ADATA SU630 drive with a rating of 100TBW. It sat at 600 DWs/300TBW before I pulled the plug on it... no errors, all spare NAND intact, and no media errors. The only reason I stopped the test is that it had been ~three months of continuous drive writes and I needed to move on. Right now I'm torturing a TLC drive rated at 260TBW that has long passed the programmable "drive life" parameter and is fast approaching its TBW warranty with no errors and 100% of its spare blocks intact.

Most manufacturers create their warranties with a great deal of technological distrust and/or a mentality of "I want to sell more." At this point in my testing, it looks like most NAND will far exceed what manufacturers guarantee. I'm retired so I have lots of time, but it seems that the answer is "nobody really knows for certain." I realize that I won't find the answer, either. I'll keep testing nonetheless. After all, these are all electronic devices subject to electrical failure just like any other device. Perhaps your friend was just really, really unlucky.

I'm aware of the Samsung issue with their EVO drives, and if there were another choice for 8TB SSDs, I probably wouldn't be buying theirs. Samsung has likely corrected whatever manufacturing defect caused that issue, but the damage has already been done to consumer confidence (as in your case), and far too many people really hate Samsung. As it stands, they are the only option available. If Crucial made 8TB SSDs, I'd be buying theirs for sure. That said, Samsung does seem to make good NVMe drives, and I do have several (MLC and TLC) that don't show any issues, but maybe I was just lucky.

What I'm building my custom NAS for represents my needs (store data to read, and don't fail). I understand there are lots of folks out there who run constantly updating databases, VMs, and other datasets that frequently change. If that were my case, I'd probably stick with HDDs and replace them every three years regardless of status, as those are certainly not workloads that suit NAND drives right now.

My assumption based on my testing up to this point is that NAND drives will be just fine and outlast HDDs for use cases that are mostly read.

Samsung 8TB QVO drives have a 2.88PB/~360DW guarantee, but I'm reasonably certain they will reliably last ~600DWs or longer (provided that they have actually corrected their secret issues).

If I'm wrong, I really hope that someone convinces me before I drop more than $5500 for drives.
 

KennyPE

Cadet
Joined
Sep 22, 2022
Messages
6
90% crap firmware, 9% manufacturing issues, 1% natural wear is my estimate.

I am dealing with dying 870 EVOs at the moment, due to some sort of known issue - either crap flash or crap firmware - that makes them wear out way more prematurely than specified.
That's an issue, right? Firmware. It's not like they publish exactly what it's doing and how it does it. It's super-secret-eyes-only-secret-sauce crap that makes it difficult for all of us to really trust what's going on in the background. I, for one, would really love to see the code.
 