Are we on the cusp of SSDs making sense for most people?

oguruma

Patron
Joined
Jan 2, 2016
Messages
226
I've noticed that 4TB consumer-grade SSDs (Samsung Evo, most commonly) are starting to pop up for < $200, and brands like Kioxia and Solidigm have large datacenter-grade NVMe drives available.

Within the next couple of years, do you think the cost of flash storage will get cheap to the point that SSDs will make sense for a lot more people?
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
It wasn't that long ago that a 4TB HDD was $350. Today, a 4TB 870 Evo can be had from Best Buy for as low as $219 if you have a little patience. What exactly do you mean by "within the next couple of years"? It is certainly reasonable TODAY. The world is benefitting from Samsung's financial troubles, but that's just going to press prices even lower.

SSD is going to remain at a price premium over HDD for a while yet, but as of a few years ago, using SSDs instead of HDDs was perfectly reasonable unless your only concern was cost per TB. It mostly has to do with whether or not you can get your head wrapped around the economics: whether you need or would really like SSD, and whether you can cope with paying the prices you would have paid for HDD about a decade ago.

Me, I'm waiting for that 870 Evo 8TB....
 

Constantin

Vampire Pig
Joined
May 19, 2017
Messages
1,829
The other question is longevity. Intel used to be the standard-bearer for datacenter-quality drives. Now who do we turn to for reliable stats on their SSDs?
 
Joined
Jun 15, 2022
Messages
674
That, and SSDs "forget" when powered off for some amount of time (how long depends on the drive). That would be a shocker for a home user. Can you imagine the TrueNAS Community answering that question: "All your data is irretrievably gone, you do have 3-2-1 backups, right?"
 
Joined
Jan 18, 2017
Messages
525
"All your data is irretrievably gone, you do have 3-2-1 backups, right?"
As a matter of fact I can - it sounds just like the RAID controller conversations, lol.

I'm definitely grabbing SSDs to replace the drives in the scratch pool when they fail: not only will they be faster, but at the rate prices have dropped over the last couple of years it will also be a size upgrade. The 30TB SAS SSDs have come down a lot too (still too rich for my blood); I have been very interested to see how they develop.
 

somethingweird

Contributor
Joined
Jan 27, 2022
Messages
183
That, and SSDs "forget" when powered off for some amount of time (how long depends on the drive). That would be a shocker for a home user.

Learned something new today - with SSDs, leave them powered on... forever... till they die. I can't imagine why a home user would turn off their NAS for a long period of time (other than the electric bill).
 
Joined
Jun 15, 2022
Messages
674
@somethingweird : In some countries (Germany) a 24W-or-less system is optimal due to ever-increasing energy costs. If someone needs a video editing storage server, it may make sense to power off 200W+ of hardware when not in use. In this example there's a need for fast storage (SSD and 10Gbit Ethernet), and the owner may take 1+ months of holiday (vacation) to visit other countries.
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
do you think the cost of flash storage will get cheap to the point that SSDs will make sense for a lot more people?
There's enough wiggle room in the question that it's hard not to answer it in the affirmative--for many people, the cost of flash is already low enough that it's a viable alternative to spinners, and I don't doubt that share is going to grow. But I don't even want to think what it would cost to convert my server to flash--100+ TB of flash would cost a pretty penny.
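To put a very rough number on that, here's a back-of-the-envelope sketch; the $/TB figures in it are assumptions (roughly the street prices being thrown around at the moment), not quotes:

Code:
# Purely illustrative: what a large pool costs on flash vs spinners.
# The price-per-TB values are assumptions; plug in whatever you can actually buy at.
pool_tb = 100
assumed_ssd_per_tb = 30   # rough SATA SSD street price (assumption)
assumed_hdd_per_tb = 18   # rough HDD street price (assumption)

print(f"~{pool_tb} TB on flash:    ${pool_tb * assumed_ssd_per_tb:,}")
print(f"~{pool_tb} TB on spinners: ${pool_tb * assumed_hdd_per_tb:,}")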
 
Joined
Jun 15, 2022
Messages
674
There's enough wiggle room in the question that it's hard not to answer it in the affirmative--for many people, the cost of flash is already low enough that it's a viable alternative to spinners...
I can see it now..."I bought 14 GoodFeelingsStore 4TB SSDs on eBay and put them in a RAID2 configuration as YouTuber Linustisiez recommended. They work great but I noticed yesterday two drives are colored red in the TrueNAS interface. Is this a problem and how many months/years do I have before I have to replace them? I might have seen the word FAILED somewhere but not sure as the drives are new as of last month and the gaming motherboard is high-end with blazing fast 2.5Gig Realteck Ethernet and InzaneFasstMemorieInk memory modules."
 

Davvo

MVP
Joined
Jul 12, 2022
Messages
3,222
I can see it now..."I bought 14 GoodFeelingsStore 4TB SSDs on eBay and put them in a RAID2 configuration as YouTuber Linustisiez recommended. They work great but I noticed yesterday two drives are colored red in the TrueNAS interface. Is this a problem and how many months/years do I have before I have to replace them? I might have seen the word FAILED somewhere but not sure as the drives are new as of last month and the gaming motherboard is high-end with blazing fast 2.5Gig Realteck Ethernet and InzaneFasstMemorieInk memory modules."
The culprit is the port multiplier card used to connect all those drives to your gaming motherboard; your drives are safe.
 

somethingweird

Contributor
Joined
Jan 27, 2022
Messages
183
@somethingweird : In some countries (Germany) a 24W-or-less system is optimal due to ever-increasing energy costs. If someone needs a video editing storage server, it may make sense to power off 200W+ of hardware when not in use. In this example there's a need for fast storage (SSD and 10Gbit Ethernet), and the owner may take 1+ months of holiday (vacation) to visit other countries. With the virus-that-must-not-be-named (which to me reeks of political manipulation of the public's natural right to freedom) travel between countries may be frozen at any time, which may mean months of off-time.
Makes sense - for those taking 1+ month of vacation or called up for duty. I've *never* taken 1+ month of off-time/vacation (wouldn't even be on my mind - it's hard enough to take 4 days off for vacation...)
 

bonox

Dabbler
Joined
May 2, 2021
Messages
17
The JEDEC standards call for 12 months of data retention, and results in practice point to minimum retention times between 2 and 5 years with a quick Google. That's for consumer drives. Enterprise drives are rated shorter, but my understanding is that there's no difference in the underlying behaviour of the flash itself; they just play with the temperature criteria, so store the two units in the same conditions and their retention periods will be about the same.

I think you'd need to be aware of the limitations for long-term cold storage, but that's not really the role of a NAS or a desktop, at least as far as these home users you're mentioning; you also get a bonus from the redundancy in a NAS pool, which normally doesn't exist for desktop use. Endurance in terms of write cycles is also not much of an issue if you pick a drive that reverts to a read-only mode when it exhausts its spare cell capacity, rather than bargain-bin stuff that just falls apart and loses everything. If anything, SSDs are far more forgiving in their failure modes than hard disks. But backups aren't just for enterprise, right grandma!?

I know a lot of people have seen issues with ancient USB flash drives left in drawers for a decade, so the concept shouldn't be too hard to grasp, but as usual, education is king.
 

NickF

Guru
Joined
Jun 12, 2014
Messages
763
I'm going to quote myself from another thread.

Love this conversation.
If you are doing what a lot of folks are doing - just dumping some movie files, personal documents, and pictures, and later reading those files back throughout the day - then buy some QLC NAND instead of spinning rust. I kid. I kid. It's not that easy yet.

In fairness to hard drives, a brand new 2TB hard drive in 2023 only exists to service niche legacy enterprise systems, with some trickle down into retail. According to my good friend, Mr Edward Betts, https://edwardbetts.com/price_per_tb/
The going rate for the best price per TB on hard drives is about $15.00 right now, which is interesting given the mix of sizes and SKUs that follow it up to about $18.

My same dear friend reports on pricing for SSDs as well, so we can compare apples-to-apples for how pricing was sourced.
The going rate for a good deal on SATA SSDs? $30 a TB. NVMe? $32 a TB.

So based on pricing and the relative performance-to-size ratio, we can pretty much conclude that SATA SSDs for this market are basically a waste of money relative to NVMe pricing. Obviously platform restrictions apply, good reasons exist, caveat emptor.

But we can also conclude that HDD is still king in price/TB. It's up to you whether relative performance is a factor: price/TB/insert_relative_performance_rating?

In other words, the price per TB is approximately 2x right now, which is actually very good if we consider how much faster NVMe is. It's not that long ago that "Ryan's Law" of $0.10/GB was the standard to be judged against.

$65 / 2048 GB = $0.03173828125/GB

So the cost per GB is well under $0.10/GB, which means we have indeed met Ryan's Law at current NVMe pricing. Thanks, PCPer.
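If anyone wants to plug in their own numbers, here's a rough sketch of the same math; the $15 / $30 / $32 / $65 figures are just the snapshot above and will drift:

Code:
# Re-running the price-per-TB / Ryan's Law math with the snapshot prices quoted above.
prices_per_tb = {
    "HDD (best deal)": 15.00,
    "SATA SSD": 30.00,
    "NVMe SSD": 32.00,
}
RYANS_LAW = 0.10  # $/GB

for name, per_tb in prices_per_tb.items():
    per_gb = per_tb / 1000  # decimal TB -> GB, close enough for a sanity check
    verdict = "meets" if per_gb <= RYANS_LAW else "misses"
    print(f"{name}: ${per_tb:.2f}/TB = ${per_gb:.3f}/GB ({verdict} Ryan's Law)")

# The $65 2TB NVMe example, using binary GB as in the original math:
print(f"$65 / 2048 GB = ${65 / 2048:.4f}/GB")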
 

sfatula

Guru
Joined
Jul 5, 2022
Messages
608
On the cusp, I guess. It's a ways off, though; it certainly isn't today. My Seagate Exos X18 14TB drives were $140 each. Three of them (one is a spare) is $420. So if 4TB is $200, I need 10 of them to be equivalent. That's $2,000 instead of $420. And then where do I mount 10 of them? I guess I need more hardware, etc. It depends how much space you need. I'll be adding another vdev soon, too.
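Here's the same comparison as a quick sketch, mirroring the numbers above; rounding up to whole drives it lands on 11 SSDs rather than 10, but the conclusion doesn't change:

Code:
import math

# HDD side quoted above: three 14 TB Exos drives (one as a spare) at $140 each.
hdd_tb, hdd_price, hdd_count = 14, 140, 3
hdd_raw_tb = hdd_tb * hdd_count              # 42 TB raw
hdd_cost = hdd_price * hdd_count             # $420

# SSD side: 4 TB consumer drives at roughly $200 each.
ssd_tb, ssd_price = 4, 200
ssd_count = math.ceil(hdd_raw_tb / ssd_tb)   # drives needed to match raw capacity
ssd_cost = ssd_count * ssd_price

print(f"HDD: {hdd_count} x {hdd_tb} TB = {hdd_raw_tb} TB raw for ${hdd_cost}")
print(f"SSD: {ssd_count} x {ssd_tb} TB = {ssd_count * ssd_tb} TB raw for ${ssd_cost}")
print(f"That's {ssd_cost / hdd_cost:.1f}x the cost, before extra ports/HBAs/mounting")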
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
That's $2,000 instead of $420

Yes, if you use that sort of justification, it will be a long time 'til you can go flash.

And then where do I mount 10 of them?

Seeing as the 2.5" form factor is substantially smaller than 3.5", this is also a bit of a red herring. Space-wise, 4 x 3.5" takes up 1/3rd of a common 3.5" 2U chassis while 10 x 2.5" takes up 5/12ths of a common 2.5" 2U chassis, but that arrangement exists strictly to maintain front access. 2.5" SSDs are nowhere near as deep, and it is pretty easy to stack three or four with brackets and then just tape them somewhere convenient. This works great in 1U servers. If you go over to SATA M.2, it is quite feasible to mount these things at relatively high density using something like Silverstone SDP-11 trays. You buy three of them, throw away the metal from two, and stack the PCBs using spacers (ISTR these should be 5/8ths inch). You end up able to fit 12 M.2 SSDs in the space of one 3.5" full-height HDD.

It's just a matter of wanting to do it. There are lots of reasons you might not want to: cost, endurance, inconvenience, etc.
 

sfatula

Guru
Joined
Jul 5, 2022
Messages
608
Which is what I said: I would need to buy more hardware, not just the SSDs, so it's additional cost. And would my motherboard accept 10 connections to SSDs instead of 3 drives? Or my HBA? The answer is no. I do like the Silverstones though, as noted; it's an additional $58 per 4 drives.

I am not against SSDs by any means! I am just saying that today (for many people with larger storage needs, and possibly others) we are not at a point where SSDs can easily replace HDDs. Cost, additional hardware, and the maximum size of SSDs are barriers the way I see it; this may change eventually. You simply need too many of them for larger storage requirements. And mine are puny; many TrueNAS users need vastly more storage than me. Once I add my second vdev, I'd need around 17 of them instead of 5 drives. And if I added a 3rd vdev, even more. Very cost prohibitive today.
 
Joined
Jun 15, 2022
Messages
674
The JEDEC standards call for 12 months of data retention, and results in practice point to minimum retention times between 2 and 5 years with a quick Google. That's for consumer drives. Enterprise drives are rated shorter, but my understanding is that there's no difference in the underlying behaviour of the flash itself; they just play with the temperature criteria, so store the two units in the same conditions and their retention periods will be about the same.

I think you'd need to be aware of the limitations for long-term cold storage, but that's not really the role of a NAS or a desktop, at least as far as these home users you're mentioning; you also get a bonus from the redundancy in a NAS pool, which normally doesn't exist for desktop use. Endurance in terms of write cycles is also not much of an issue if you pick a drive that reverts to a read-only mode when it exhausts its spare cell capacity, rather than bargain-bin stuff that just falls apart and loses everything. If anything, SSDs are far more forgiving in their failure modes than hard disks. But backups aren't just for enterprise, right grandma!?

I know a lot of people have seen issues with ancient USB flash drives left in drawers for a decade, so the concept shouldn't be too hard to grasp, but as usual, education is king.
It depends on the drive. The top-tier Enterprise Solid-State Drives I use have a Product Specification (for that exact drive) stating:
Data Retention*: 3 months power-off retention once SSD reaches rated write endurance at 40 °C
*The time period for retaining data in the NAND at maximum rated endurance.
These fall outside the JEDEC standard for client drives you indirectly refer to, due to the extreme speed and write endurance, as the intended use is enterprise servers. Client data retention is 1 year as you state; enterprise is 3 months (kind of). What I do know is that if I don't back up the boot array before off-lining a server, I may not have one in six months.

JEDEC standards also take time, temperature, cycling frequency, cycle count, <other stuff>, and, very importantly, application class into consideration, meaning the client (consumer) drive can be powered off for 1 year under normal and consistent storage conditions.
Because my servers run at about 32°C, I'm not so lucky. :eek:
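For the curious, the temperature dependence behind those JEDEC figures is usually modelled with an Arrhenius acceleration factor. A rough sketch below; the ~1.1 eV activation energy is a commonly cited assumption for NAND retention, not a figure from any particular drive's datasheet:

Code:
import math

# Illustrative Arrhenius estimate of how much faster NAND charge leaks when a
# powered-off drive sits warmer. Ea = 1.1 eV is an assumed, commonly cited
# activation energy; treat the output as a ballpark, not a spec.
K_BOLTZMANN_EV = 8.617e-5  # Boltzmann constant in eV/K

def retention_acceleration(t_hot_c, t_ref_c, ea_ev=1.1):
    t_hot = t_hot_c + 273.15
    t_ref = t_ref_c + 273.15
    return math.exp((ea_ev / K_BOLTZMANN_EV) * (1 / t_ref - 1 / t_hot))

# Example: power-off storage at 40 degC vs a 30 degC reference point.
print(f"~{retention_acceleration(40, 30):.1f}x faster charge loss at 40C vs 30C")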

 

Etorix

Wizard
Joined
Dec 30, 2020
Messages
2,134
You have to learn to cook your SSDs when in use… :tongue:
 

oguruma

Patron
Joined
Jan 2, 2016
Messages
226
Does anybody have any real-world data on power/heat consumption for flash vs rust pools of equivalent size? For locations where noise and heat are a consideration (home labs, small offices), I'd be curious whether the noise and heat differences are a worthwhile parameter to weigh when deciding between flash and rust.
 