Large QLC SSDs - will they FreeNAS?

Stoatwblr

Cadet
Joined
Jan 5, 2020
Messages
2
More precisely, "Will they last as long as their mechanical counterparts?"

My home deployment (32TB, mostly static files, in 16+3 RaidZ3) is stuffed full of 2TB WD Reds and one of them has decided to turn up its toes - just outside the warranty period(*)

I'm seeing some oddities in the other Reds and looking wistfully at the Samsung 860 QVO as a replacement - yes, they're 3 times the price of a Red (or twice that of a Red enterprise drive), but the lower power draw alone is worthwhile, as long as they'll last 6 years in service (the warranty is three).

For obvious reasons, if I do change out drives I'll be doing it piecemeal, but I'm a little worried about the durability of the remaining Reds (yes, it's backed up, but restoring that much is best avoided, even off LTO6 tapes).

The question is: has anyone tested these out and come to conclusions about endurance vs normal ZFS home NAS write patterns? Samsung are quoting 1440TBW on the 4TB (roughly 360 times the drive's capacity) and 720TBW on the 2TB, which seems a bit on the low side.
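For a rough sanity check on those TBW numbers, here's the back-of-envelope I'm working from - the daily write volume and write-amplification factor below are just my guesses for a mostly-static pool, not anything Samsung publishes:

[CODE]
# Back-of-envelope: years until a TBW rating is consumed at an assumed write rate.
# The 50GB/day and 2x write-amplification figures are assumptions, not measurements.

def years_until_tbw_exhausted(tbw_rating_tb, daily_writes_gb, write_amplification=2.0):
    """Years until the rated TBW is used up, given host writes per day."""
    nand_tb_per_year = daily_writes_gb * write_amplification * 365 / 1000
    return tbw_rating_tb / nand_tb_per_year

# 860 QVO ratings: 720TBW (2TB model), 1440TBW (4TB model)
for capacity_tb, tbw in [(2, 720), (4, 1440)]:
    years = years_until_tbw_exhausted(tbw, daily_writes_gb=50)
    print(f"{capacity_tb}TB QVO at ~50GB/day: ~{years:.0f} years of rated endurance")
[/CODE]

On those assumptions the rating outlasts the 6-year target comfortably; the open question is whether the flash and controller actually do.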

Other than in the early years and with the cheapest drives (especially PATA drop-in replacements), I've generally had pretty good success with SSDs, but the QLC stuff is a new ballgame.

(*) It spins up when powered, then immediately spins down and goes offline/unresponsive - unless you send it a SCSI INQUIRY during spin-up and catch it at the right moment.
Attempting to read from it causes it to go offline. Only 12 reallocated sectors (3 actual disk sectors) had shown up, which is well within spec for reallocated or pending counts - but the count has since jumped from 12 to 49, and with that kind of jump I've pulled it anyway.


(Happy owner of a nice Z30 setup at workplace and looking to replace that with another TrueNAS as it's hitting EOL)
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Not speaking specifically to the QVO here.

The basic issue with SSD's is endurance. In general, you can beat the heck outta them for reads, but you have to be aware of the write level you are actually putting out. This isn't a ZFS thing or a home NAS thing; it really depends on what *you* are doing! The answers here are going to be very different if you are doing "static files" - as you suggest you are - than if you are using the ZFS as a repository for all your local backups.

Back in 2011, SSDs started to come into the realm of what I was willing to pay to put some in hypervisor hosts here. A lot of people had been doing "data center" grade SSD's for years, but when you're the guy signing off on the P.O.'s because it's your company, and you understand the idea of wasting money unnecessarily, well, maybe you don't jump on the "data center" SSD bandwagon. I bought a bunch of consumer-grade 120GB SSD's and deployed them in RAID1 on hardware RAID controllers. These mostly survived - 80-90% survival rate to this day. OCZ, Kingston, a few other varieties.

Back in 2015, we saw a major flash price crash and suddenly consumer grade 500GB SSD's were in the sub-$200 range. At this point, I made a bit of a choice. If I could increase IOPS at the data center, I could substantially reduce rack space opex. So I bought a bunch of mostly Intel 535 480GB's and put them out in the data center. These were only rated for 40GB/day write endurance over a 5 year period. With it sorta looking like prices would keep falling, I decided it was fine if I killed them earlier. The workload I was putting on them was 120-150GB/day. Now, yes, I could definitely have put in some nice S3710's and never had any trouble. But the 400GB version of the 3710 was over $500 at the time.
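To put numbers on that choice (same figures as above; the arithmetic is just illustrative):

[CODE]
# Sketch: how fast a 40GB/day-for-5-years drive wears out at 120-150GB/day.
rated_total_tb = 40 * 365 * 5 / 1000        # ~73TB of rated endurance

for actual_gb_per_day in (120, 150):
    years = rated_total_tb * 1000 / (actual_gb_per_day * 365)
    print(f"At {actual_gb_per_day}GB/day the rating is gone in ~{years:.1f} years")
# ~1.7 years at 120GB/day, ~1.3 years at 150GB/day of rated life -
# an early, but planned-for, death.
[/CODE]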

So we've lost probably at least half those drives in the last 5 years, and we've replaced them with 850EVO/860EVO/WD-BLUE as they've failed.

The factors to consider seem to be:

1) Will the flash last over time? The answer seems like it is probably a resounding "yes". We start getting a bit twitchy about HDDs at maybe 50K hours, which is nearly 6 years. I've had drives last a lot longer, but not consistently. Well-treated flash seems to last almost indefinitely.

2) What happens if it dies? Flash prices seem to be in free fall again these last 12 months, with a 2TB SSD now in the sub-$200 range -- I got two SanDisk Ultra 3D 2TB (basically the WD Blue with a different label) for $180/ea around Black Friday. If you can find an excellent deal on the QVO, like say $160, yeah. That's 4x the size in 4 years for around the same price.

If the price doesn't bother you, it seems quite doable for what you've described.
 

Stoatwblr

Cadet
Joined
Jan 5, 2020
Messages
2
"We start getting a bit twitchy about HDD with maybe 50K hours which is nearly 6 years"

Exactly this. I do the same at home and at work, and I really prefer to get them out of the datacentre at around 40k hours, but the failure modes are invariably down to mechanical wear-and-tear issues rather than electrical ones - and the datacentre NAS systems are thrashed _hard_, versus keeping terabytes of mostly archival stuff (old TV shows, etc.) only accessed by 2-3 people.

The 2TB QVOs are currently floating around here for about $270+tax (£250 including tax), which isn't wonderfully priced, but I've always worked on the rule of thumb that 4-5 times the cost of spinning media is the justifiable inflexion point, and 2TB WD Reds are sitting at £90-100 inc. tax (or £120-140 for Red Pros).

4TB models are £420 / £130 / £170 respectively (SSD / Red / Red Pro).
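Running those prices against my own rule of thumb (midpoints of the ranges above; illustrative only):

[CODE]
# SSD-to-HDD price ratio vs the "4-5x is the justifiable inflexion point" rule.
# Prices are the rough GBP-inc-tax figures quoted above (midpoints of the ranges).
prices = {
    "2TB": {"ssd": 250, "red": 95, "red_pro": 130},
    "4TB": {"ssd": 420, "red": 130, "red_pro": 170},
}
for size, p in prices.items():
    print(f"{size}: QVO is {p['ssd'] / p['red']:.1f}x a Red, "
          f"{p['ssd'] / p['red_pro']:.1f}x a Red Pro")
# ~2.6x / 1.9x at 2TB and ~3.2x / 2.5x at 4TB - inside the 4-5x threshold either way.
[/CODE]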

I know there are cheaper SSDs than Samsung, but I really don't want to keep rewarding WD or Seagate, given their appalling treatment of the HDD-buying market post-2011 (doubling drive prices and slashing consumer warranties from 5 years to 12 months, etc.), plus the quite noticeable drops in reliability over the last 6-8 years (especially the Seagate Barracuda and Constellation ranges, but WD Blue/Black don't get any love here either).

I could go on a very long rant about the attitude of both companies to honouring their warranties when claims start piling up but I'll refrain.

Decisions, decisions.....
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
I know there are cheaper SSDs than Samsung, but I really don't want to keep rewarding WD or Seagate, given their appalling treatment of the HDD-buying market post-2011 (doubling drive prices and slashing consumer warranties from 5 years to 12 months, etc.), plus the quite noticeable drops in reliability over the last 6-8 years (especially the Seagate Barracuda and Constellation ranges, but WD Blue/Black don't get any love here either).

Might be a little over the top there. :smile:

Drive reliability has always been a cyclical thing. IBM Deathstars became the ultra-reliable HGST drives. Seagate had horrible problems with the Barracuda 32550N's and other drives around that timeframe, came back with some very good drives for a while, then tanked again about a decade ago with their 1.5/3TB drives, and came back. WD had a horrible reputation for a long time, but a lot of their current drives are pretty good. Picking the current winners and losers has always been difficult.

Prices went sideways due to manufacturing issues. At a bad moment in the evolution of the market, the manufacturing base had shrunk, most manufacturing had been moved to Thailand, and then there were the Thailand floods. Some of the best HDD manufacturing gear on the planet was ruined. It probably didn't get replaced with equivalent gear... The vendors could see the writing on the wall: SSD's were creaming their profitable high end 10K/15K RPM drives, the profits from which had been the driving force for R&D that kept the capacity wars moving forward. If all your profitable products are clearly going to be going away within a handful of years, you probably start to get really paranoid about long term obligations such as warranties, and you probably start trying to control manufacturing costs.

"Appalling treatment" basically comes off as a bit ridiculous. If you look at it from a business point of view, it makes total sense. Major industry realities changed. If a business does not or cannot adapt to a new reality, they go bankrupt.

Both WD and Seagate took some time to come up with a business plan for the post-HDD era. Because that's coming.
 

Algwyn

Dabbler
Joined
Sep 16, 2016
Messages
16
Coming back to this topic, would a QLC SSD be a good choice for a media library?
In this use case, media files are written once, then read multiple times. Files, once written, are not modified, and may just be deleted eventually.

Micron now have some large QLC SSDs which are more affordable: Micron 5210 ION SSD | MTFDDAK7T6QDE | 7.68TB ($883 in the US, €773-€850 excl. VAT in Europe).

Worth considering for a ZFS pool?
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
For a media pool? Sure, but why not just use HDD? HDD in the 8TB range is still $130 (shucked Easystore) or around 1/7th the price of that Micron SSD.

Capacity SSD's are overpriced anyways.

Look at 1TB: WD Blue (TLC, 400TBW, $109) or Crucial BX500 (TLC, 360TBW, $99).

So obviously the TLC silicon is available at that price point.
 

Algwyn

Dabbler
Joined
Sep 16, 2016
Messages
16
Well, my current pool is based on WD Red Pros, and I'm looking at options to make my server quieter...
Living in an apartment, the seek noise is quite noticeable, even in a quiet case (25-30dB).

Moving to SSDs would be the best option, but I would need to replace 6 x 8TB HDDs with SSDs... hence my checking out the large SSDs.
I cannot fit 26-30 1TB drives to replace the 8TB ones...

But I'm also looking at options to improve the noise damping of the case; if damping can get the noise below 20dB, that would be much cheaper than SSDs.
 

AdrianB1

Dabbler
Joined
Feb 28, 2017
Messages
29
Data on an SSD does not last a long time when powered down, especially at higher temperatures. If you plan to turn off an SSD-based NAS when you go on vacation with the air conditioning off, and the temperature rises in a hot summer, you can return to find the data corrupted big time.
Other than that, except for the price and the limited write capability, I don't see a big problem with SSDs in a NAS. If you have a 10 Gbps network or faster, benchmarking will make you smile versus the same number and configuration of HDDs, but for real-life situations I would still run a home NAS on HDDs.
 

NugentS

MVP
Joined
Apr 16, 2020
Messages
2,949
Based on personal experience, I would not use the QVOs. I ran them for a few weeks in a RAID 5 array (yes, RAID 5 - not ZFS) and then they crapped out, taking all the VMs with them. Lost one, then a day later lost two more (or was it one more?) - either way, the whole array was lost.

Test & Development setup - so no data loss - but recovery took a while due to issues with the backups not wanting to restore to the new drives. [Bug in the backup software]
 

Constantin

Vampire Pig
Joined
May 19, 2017
Messages
1,829
SSDs will be an interesting addition to help speed up fusion pools in FreeNAS 12 and up. Ditto persistent L2ARCs.

The combination of a 3-way SSD mirror for small files and a persistent L2ARC should make for a very responsive pool even if the bulk of the storage capacity is based on rusty spinners.
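One sizing consideration worth sketching for that setup - the ~70 bytes-per-record header figure below is an assumption (it varies by OpenZFS version), and the record sizes are made up:

[CODE]
# Rough ARC RAM cost of L2ARC headers. Header size (~70 bytes/record) is an
# assumption that varies by OpenZFS version; record sizes are illustrative.
def l2arc_header_ram_gib(l2arc_gb, avg_record_kb, header_bytes=70):
    records = l2arc_gb * 1024 * 1024 / avg_record_kb
    return records * header_bytes / (1024 ** 3)

for rec_kb in (16, 128):
    print(f"1TB L2ARC of {rec_kb}K records: ~{l2arc_header_ram_gib(1024, rec_kb):.1f} GiB of ARC")
# Small records cost far more header RAM than big media blocks - worth keeping in
# mind when sizing a persistent L2ARC for a mostly-spinner pool.
[/CODE]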
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
QLC NAND has the challenge of managing page and block sizes that are much larger than MLC (64K program pages, 16M erase blocks), so think of running your vdevs with ashift=16 and a 4K write being amplified to 64K. For big writes it's no problem - they can go straight to QLC pages - but anything smaller gets written to a largely empty page and ends up getting re-written into a full page later by the controller's garbage collection/housekeeping.
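To make that amplification concrete, a toy model (page size per the numbers above; the write mix is invented):

[CODE]
# Toy model: NAND bytes programmed when every host write occupies whole 64K pages.
PAGE = 64 * 1024

def nand_bytes(write_sizes):
    # each host write is rounded up to a whole number of program pages
    return sum(-(-size // PAGE) * PAGE for size in write_sizes)

small = [4 * 1024] * 1000                 # 1000 x 4K host writes
big   = [1024 * 1024] * 4                 # 4 x 1MiB host writes
print(nand_bytes(small) / sum(small))     # 16.0 - 4K writes blow up 16x before GC
print(nand_bytes(big) / sum(big))         # 1.0  - large writes fill pages exactly
[/CODE]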

A small-file (special) vdev with a threshold of <64K, plus a metadata-only vdev and/or a fast SLOG to absorb sync writes, would limit those small writes the most, but that's a lot of extra moving parts needed to optimize for "low cost" QLC.
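The routing idea behind the small-file vdev, in sketch form (the threshold mirrors the <64K figure above and is analogous to ZFS's special_small_blocks property; the block sizes are just examples):

[CODE]
# Sketch: blocks below the threshold land on the fast special vdev,
# everything larger goes to the QLC data vdevs.
THRESHOLD = 64 * 1024   # mirrors the <64K threshold mentioned above

def route(block_size):
    return "special vdev (SSD)" if block_size < THRESHOLD else "QLC data vdev"

for size in (4096, 16384, 32768, 131072, 1048576):
    print(f"{size // 1024:>5}K block -> {route(size)}")
[/CODE]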
 