Samsung QVO longevity?

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
If Crucial made 8TB SSDs
Well, the MX500 also has a long-standing bug that causes premature wear and is apparently triggered by TRIM. Not fun; I've been bitten by that one, too.
 

nabsltd

Contributor
Joined
Jul 1, 2022
Messages
133
I'm not understanding why some of you state that the Samsung QVO 8TB drive has a poor TBW rating. The warranty is 2.88PB.
Because that's just 360TBW per TB of storage. The Samsung EVO 4TB is rated at 2400TBW, which is 600TBW per TB of storage.

In most cases, SSDs are used in a NAS because you need faster writes, and you are writing a lot, so endurance is one of the more important factors. If your data is write-once, read-mostly, you don't need endurance, but you also probably don't need SSD speed.

As you said, the endurance rating might be far lower than what the drive can really handle, but if you have a choice and prices are similar, I'd go with the drive with the higher manufacturer endurance rating. Unfortunately, at 8TB per 2.5" bay, you pay a lot if you want more than consumer endurance. In the 4TB/bay range, you can easily get drives rated for 600-800x their capacity in endurance, and if you hunt for used enterprise drives, you can find ratings of 2000-5000x capacity.
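
For what it's worth, here's the normalization I'm doing, as a small sketch (the figures are the warranty numbers quoted above; the "x capacity" multiple is the same number behind the 600-800x and 2000-5000x ranges):

```python
# Normalizing TBW warranties by capacity, using the figures quoted above.
drives = {
    "Samsung QVO 8TB": {"capacity_tb": 8, "endurance_tbw": 2880},
    "Samsung EVO 4TB": {"capacity_tb": 4, "endurance_tbw": 2400},
}

for name, d in drives.items():
    ratio = d["endurance_tbw"] / d["capacity_tb"]
    print(f"{name}: {ratio:.0f} TBW per TB of capacity ({ratio:.0f}x capacity)")

# Samsung QVO 8TB: 360 TBW per TB of capacity (360x capacity)
# Samsung EVO 4TB: 600 TBW per TB of capacity (600x capacity)
```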
 
Last edited:

KennyPE

Cadet
Joined
Sep 22, 2022
Messages
6
With all due respect, I need and want SSD speed, and anyone with a 2.5GbE+ home network probably does as well. My next stop is 10GbE or 25GbE, but that's another topic. Those running databases/VMs/AI and other rapidly changing data sets should stick to HDDs and/or enterprise solutions.

360TBW / 3 years is an arbitrary value set by Samsung, a reflection of a business model that non-technical people at Samsung can understand. I have absolutely no idea how it is derived, nor do I know how accurate it really is. In my opinion, it's simply something that makes business sense to them with respect to risk (an important concept that their EVO line taught them).

260TBW is the warranty that Sabrent offers on their Micron-QLC Rocket Q drives. It's likely conservative, as I have been torturing a Sabrent Rocket Q that is just shy of 260TBW with no bad blocks, no use of reserve blocks, and no read or write errors. However, the remaining-life SMART parameter has read 0% since ~120TBW. I will not stop torturing this drive until I see an error or a spare block used. 12 DWPD and still going.
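
For context, this is roughly how a DWPD number falls out of a torture test; it's just the write rate normalized by capacity. The capacity and duration below are hypothetical placeholders, not the actual drive under test:

```python
# DWPD (drive writes per day) averaged over a test window:
#   dwpd = total_tb_written / (capacity_tb * days_of_testing)
def dwpd(total_tb_written: float, capacity_tb: float, days: float) -> float:
    return total_tb_written / (capacity_tb * days)

# Hypothetical example: ~256TB written to a 2TB drive over 11 days of
# continuous torture works out to roughly 12 drive writes per day.
print(f"{dwpd(256, 2.0, 11):.1f} DWPD")  # -> 11.6 DWPD
```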

100TBW is the warranty that ADATA offers for the SU630 256GB drive with QLC NAND. I stopped writing to one that had passed the 300TBW mark, just because I was anxious to move on to another test and drive writes were far too slow. Again, no errors and 100% of the spare blocks untouched.

Speaking of slow writes from large data: yes, QLC NAND can be really bad. A lot of the DRAM-less stuff I have seems to fluctuate between 80MB/s and 120MB/s; if it weren't for the noise, I'd stick with HDDs. TLC seems to fare better at 120-250MB/s. I have not tested my MLC stuff, as I don't see the point right now.

When I manage to see errors on the Sabrent Q drive, I'll move on to a Crucial MX500 (maybe).

Charge-trap flash has come a long way in the last several years, and there really isn't a lot of public data to support the stated warranties. They're arbitrary, part of whatever business-model-versus-risk calculation each manufacturer comes up with. It's highly probable that they perform their own in-house write-to-fail testing; if they do, they are very shy about sharing the data. I read an article once where a Toshiba engineer stated that modern CTF memory will hold data for a minimum of 10 years unpowered. JEDEC wouldn't hold him to that, but how would you test it? There are general specifications for program/erase cycles on CTF, and for QLC that figure is around 1,000. My personal feeling at this point is that you should absolutely be able to get 1,000 full drive writes (i.e., 1000TBW per TB) out of QLC NAND before you start having issues.
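
As a back-of-the-envelope sketch of how that P/E figure maps to endurance (the write-amplification factor here is purely an assumption; real WAF depends heavily on workload and firmware):

```python
# Naive endurance estimate from rated P/E cycles:
#   raw TBW ~ capacity * P/E cycles / write-amplification factor (WAF)
# WAF = 2.0 below is an assumption; pure sequential workloads approach 1.0.
def estimated_tbw(capacity_tb: float, pe_cycles: int, waf: float) -> float:
    return capacity_tb * pe_cycles / waf

print(estimated_tbw(1.0, 1000, waf=1.0))  # -> 1000.0 TBW for 1TB of QLC
print(estimated_tbw(8.0, 1000, waf=2.0))  # -> 4000.0 TBW for an 8TB drive
```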

I said that endurance may be far lower than the drive specs, leading to a warranty replacement; however, it's my firm belief that real endurance is far, far above what companies rate their products at.

You say that one should hunt for used enterprise drives (I assume SLC or MLC). Perhaps true, but risky and very, very expensive. You will need to buy/build something that can take 15mm NVMe (U.2) or 15mm SAS drives. Not something most are willing to do, IMHO. Racks are generally very, very noisy, and for most people who want to use SSDs, the noise is exactly what they want to get rid of. My focus is on consumer-grade stuff that will fit in an off-the-shelf NAS or something home-grown, and that means SATA. I'm sure there are plenty of people who have the space and time for an enterprise-grade rack solution, but that's not me. Most people are not running a data center and do not need the level of TBW offered by enterprise drives (not to mention the power and up-front costs). I just want to save my data quietly and with as little power as possible... and that means SATA SSDs. Samsung is the only 8TB option right now.

Going with 4TB drives just means you use twice the power and need 2x the drives for the same storage. You don't save (much) money by buying generic brands, and any reliability difference is likely negligible. You will spend more on 4TB drives from known brands to reach the same total capacity.

4TB TLC Crucial SSDs are more expensive and come with a 1PB endurance warranty (less than Samsung's 1.44PB for 4TB QLC). Why? They are using TLC in that drive, which should be good for around 3PBW! It's arbitrary. I'll stick with 8TB for now. Unfortunately, that means Samsung.

I think PLC NAND will come out by 2Q 2023. Won't that be fun!

All of this is my opinion, based on various articles on NAND flash memory published over the last 10 years (and my limited testing).

If I'm missing something or you feel I simply don't understand, then by all means, please educate me. I love to learn. I've nearly finished my "Frankenstein" 8-bay NAS; I just need 4 more 8TB SSDs...
 
Last edited:

awasb

Patron
Joined
Jan 11, 2021
Messages
415
With all due respect, I need and want SSD speed [...]

You say that one should hunt for used enterprise drives (I assume SLC or MLC). Perhaps true, but risky and very, very expensive. [...]

If I'm missing something or you feel I simply don't understand, then by all means, please educate me. I love to learn. I've nearly finished my "Frankenstein" 8-bay NAS; I just need 4 more 8TB SSDs...

First of all: I don't want to sound rude, but at the same time I think you're not "getting it". If you want to achieve optimized network throughput and need the corresponding data rates from your file server(s), you need to use good hardware. As Monty Python taught us, it is daft to build a castle in a swamp. Cheap hardware will cause nothing but misery, sooner or later. If you don't want to accept this, you'll need to shift your expectations (and live with somewhat lowered standards):
  • Low/"affordable" price
  • SSD performance
  • Write endurance/general reliability

Pick two.
 
Last edited:

nabsltd

Contributor
Joined
Jul 1, 2022
Messages
133
With all due respect, I need and want SSD speed, and anyone with a 2.5GbE+ home network probably does as well. My next stop is 10GbE or 25GbE, but that's another topic.
Although I have not yet moved to TrueNAS, my current iSCSI setup over 10Gbps copper gives me 400MB/sec writes to a RAID-5 array of 6x 4TB Seagate SAS drives. I have no SSD cache on the server...it's all spinning rust. So, I could maybe use SSD to get better speed, but if I only had 2.5Gbps networking, I would be just fine without any SSD, as the drives write far faster than 250MB/sec.

So, no, until you hit 10Gbps networking, you really don't need SSD for speed, as long as you have modern hard drives designed for NAS or "Enterprise" use. And, even at 10Gbps, you really only need a smallish SSD cache, because a well-designed array of SSDs can easily support 400MB/sec per drive, which means just 3 SSDs can handle everything a 10Gbps network can send.
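
The arithmetic behind that, as a rough sketch (the 90% efficiency factor is an assumption for protocol overhead; the 400MB/sec per-SSD figure is the one quoted above):

```python
# Approximate payload ceiling of a network link, and how many SSDs
# (at an assumed sustained write speed) it takes to saturate it.
def link_mb_per_s(gbps: float, efficiency: float = 0.9) -> float:
    # efficiency is a rough allowance for TCP/IP and protocol overhead
    return gbps * 1000 / 8 * efficiency

for gbps in (2.5, 10, 25):
    ceiling = link_mb_per_s(gbps)
    ssds = -(-ceiling // 400)  # ceiling division
    print(f"{gbps} Gbps -> ~{ceiling:.0f} MB/s -> {ssds:.0f} SSD(s) at 400 MB/s")

# 2.5 Gbps -> ~281 MB/s -> 1 SSD(s) at 400 MB/s
# 10 Gbps -> ~1125 MB/s -> 3 SSD(s) at 400 MB/s
# 25 Gbps -> ~2812 MB/s -> 8 SSD(s) at 400 MB/s
```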

Then, once you move to 25Gbps (or faster) networking, you'll find that while in theory you might very well benefit from a large array of SSDs, in reality an SSD cache would still be good enough to handle the vast majority of workloads, as you won't be writing 2.5GB/sec for very long.

You say that one should hunt for used enterprise drives (I assume SLC or MLC). Perhaps true, but risky and very, very expensive. You will need to buy/build something that can take 15mm NVMe (U.2) or 15mm SAS drives.
I don't think less than $200 for a used 4TB enterprise SATA drive is "very, very expensive", considering that they will have PLP (required if you care about your data) and at least 1 DWPD with 5-year warranties. As for risk, most of these used drives have more than 95% of their life remaining. And even if the listed SMART remaining life is lower, as you said, drive manufacturers are quite conservative compared to actual real-world endurance. Even at 50% remaining, that's roughly 4000TB of manufacturer-rated endurance left on a 4TB drive.
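
Putting numbers on that (a sketch using the 1 DWPD / 5-year figures from this post):

```python
# Rated endurance implied by a DWPD warranty, scaled by SMART life remaining:
#   rated_tbw = dwpd * capacity_tb * 365 * warranty_years
def remaining_tbw(capacity_tb: float, dwpd: float,
                  warranty_years: float, pct_remaining: float) -> float:
    rated = dwpd * capacity_tb * 365 * warranty_years
    return rated * pct_remaining / 100

# 4TB drive, 1 DWPD, 5-year warranty, 50% SMART life remaining:
print(remaining_tbw(4, 1, 5, 50))  # -> 3650.0 TB, i.e. roughly the 4000TB above
```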
 
Last edited:

KennyPE

Cadet
Joined
Sep 22, 2022
Messages
6
Although I have not yet moved to TrueNAS, my current iSCSI setup over 10Gbps copper gives me 400MB/sec writes to a RAID-5 array of 6x 4TB Seagate SAS drives. I have no SSD cache on the server...it's all spinning rust. So, I could maybe use SSD to get better speed, but if I only had 2.5Gbps networking, I would be just fine without any SSD, as the drives write far faster than 250MB/sec
I'm not interested in iSCSI (which is DAS in my book), and SAS isn't exactly consumer. It would have been easier for me to just get a rack unit and fill it with enterprise drives, but I have no place to put such a beast. Even if I could DIY a SAS unit, that was never my goal; it would only have come down to price. I live in NZ, and I've never, ever seen used enterprise-grade SSDs go for less than what I can get 4TB Crucial or 8TB Samsung drives for, and you always take a risk when purchasing used equipment.
So, no, until you hit 10Gbps networking, you really don't need SSD for speed, as long as you have modern hard drives designed for NAS or "Enterprise" use.
Maybe I don't NEED SSD speed, but I want it. I use 16TB WD Red Pros in my RAID-5 QNAP NAS. Most modern consumer-grade NAS drives are SATA. I'm sure it's nice to have SAS, but I don't.

My house network is a lowly 2.5GbE, and the RAID-5 HDD NAS I have tops out at about ~100MB/s writes and ~200MB/s reads, hence my SSD aspirations. I can do it cheaper than SAS, and most importantly, it will be very quiet.

I did purchase the Samsung SSDs, but only six of them in spite of my plan. Reads are fine, but writes top out at ~150MB/s (sustained, after cache). So I'm in the process of experimenting with 4TB Crucial drives that do saturate my 2.5GbE network (since they are capable of ~300MB/s sustained drive writes after cache), and then syncing to the Samsung drives after my copy. It's not optimal; Crucial would have saved me a lot of trouble by just making TLC 8TB drives, but they didn't. Micron makes a 7.68TB drive but decided to price themselves out of this project.

In a few years, I may upgrade my network to 10GbE or, more likely, 25GbE. At that point, I'll be scrapping most of my NAS units and upgrading. For now, 2.5GbE serves my needs.
 

rvassar

Guru
Joined
May 2, 2018
Messages
972
That's an issue, right? Firmware. It's not like they publish exactly what it's doing and how it does it. It's super-secret, eyes-only, secret-sauce crap that makes it difficult for all of us to really trust what's going on in the background. I, for one, would really love to see the code.

Keep in mind the majority of the R&D effort in the SSD space these days is in enterprise NVMe. SATA as an interface is a dead end. There will likely never be another significant update to the SATA spec, so all you're going to get going forward is the "SATA controller grafted on <cheapest solution>" for legacy support.

But you are spot on about the firmware. That's really the key. A drive is just a pile of memory and a controller presenting it as storage; its behavior is all software. Most of us don't think very much about the firmware on a spinning-rust drive. Manufacturers do publish updates, but I'd wager most consumer drives run the firmware they were first deployed with. Even enterprise spinners only get updates for infrequent issues like lock sequencing and odd SI corner cases discovered during new platform qualifications. Enterprise spinners have decade-long firmware lifespans, and they have to implement very robust qualification plans. Most sysadmins will be quite hesitant to roll out drive firmware updates without good cause, and only after their own deployment testing.

NVMe SSDs are newer technology, and the details are more fluid; the specifications are under very active development. But I don't see a lot of people discussing firmware updates and the vetting and qualification needed to deploy them. I suspect we need to pay more attention to SSD updates and not assume the same robust qualification regime has carried over from HDD to SSD. But if you really want to see the details, the specs are all quite publicly available. It's 400+ pages of light reading. :smile:

 

ChrisRJ

Wizard
Joined
Oct 23, 2020
Messages
1,919
Off-topic: @rvassar is (in a rather subtle way :wink: ) making a very interesting argument about the relevance of firmware. To most techies, firmware is what IT is to the business: something that surely can't be as complicated as those guys over there keep claiming just to save their pathetic butts.

I bet the really interesting days for firmware are still to come. Hopefully not in the shape of a big disaster. Either way, something worth spending a few thoughts on.

Have a great weekend, everybody!
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
I bet the really interesting days for firmware are still to come. Hopefully not in the shape of a big disaster.

While I won't speculate on the disaster likelihood, firmware is definitely playing a much more involved role with new technologies being introduced in both the HDD and SSD space, especially where zoned block devices are concerned (SMR HDDs, QLC SSDs).

At present, there's a whole lot of firmware-based "middleware" that lets these devices interface with conventional filesystems (DM-SMR), and we have little to no visibility into what's going on under the hood, which results in the awful performance pathologies we see now. ZFS could actually be a good fit for ZBDs in the future, because its copy-on-write nature means you're able to write in an append-only fashion to a zone and then tag it for "reshingle and overwrite" later, when it reaches a certain threshold of "dirtiness."
 

nabsltd

Contributor
Joined
Jul 1, 2022
Messages
133
RAID-5 HDD NAS I have tops out at about ~100MB/s writes and ~200MB/s reads
If this is over the network, then I suggest you upgrade the network before thinking about SSDs. Since you haven't told us how many hard drives you have in the NAS, we know nothing about whether that speed is good or not. For example, if you have 8x 16TB WD Red drives, then you are completely constrained by the network, as that array should easily handle 500MB/sec of sustained sequential writes if all the other parts of the NAS (CPU, RAM, HBA, etc.) can keep up. OTOH, if that's with just 4 drives, then you are likely close to the max speed you could get.
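
Roughly, the ceiling I'm assuming there (a sketch; 150MB/sec sustained per drive is an assumption, and real arrays lose some of this to CPU, parity computation, and other overhead):

```python
# Crude sequential-write ceiling for a striped-with-parity array:
# only the data drives contribute, at an assumed per-drive write speed.
def array_write_mb_s(total_drives: int, parity_drives: int,
                     per_drive_mb_s: float = 150.0) -> float:
    return (total_drives - parity_drives) * per_drive_mb_s

print(array_write_mb_s(8, 1))  # -> 1050.0 MB/s ceiling for 8 drives
print(array_write_mb_s(4, 1))  # ->  450.0 MB/s ceiling for 4 drives
```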

On the other hand, I'd worry a bit about the overall hardware, since your first post said that your "WD Reds are failing". If you have 16TB drives, those are relatively new and good drives, and unless you keep your NAS in an oven, they probably aren't running too hot.

I'm not interested in iSCSI (which is DAS in my book)
I said iSCSI so that you would know that my configuration is limited by the transport. RAID-5 (i.e., striped with parity) is not recommended for block storage, yet I still get decent speeds. With something like NFS or SMB where I could use async writes, I might get closer to 600MB/sec, at least until my 64GB of RAM ran out.

SAS isn't exactly consumer.
6Gbps SATA and 6Gbps SAS perform at exactly the same speed. I have a Windows machine with 8x 2TB SATA drives that can sustain 300MB/sec writes and burst to nearly 700MB/sec for the first 10GB or so.
 
Last edited:

KennyPE

Cadet
Joined
Sep 22, 2022
Messages
6
I see your point. I'm uncertain what my issues are. My RAID-5 is a 4-drive QNAP unit where 2.5GbE only sometimes works (the QNAP USB dongle constantly fails), so I'm stuck at 1GbE most of the time.

I don't have any issue with my WD Reds (that's someone else's post). I've only been using this unit for a year, and I bought 5 drives so I'd have a spare ready to go.

I'm still experimenting with TrueNAS and the 8 SSDs I bought for the project. Expensive, but it's nearly silent and really fast, which was my goal.

Thanks to all who contributed. It was very helpful!
 
Last edited: