SSD vs HDD life expectancy and reliability?

otpi

Contributor
Joined
Feb 23, 2017
Messages
117
I'm considering a case with low capacity (TB) requirements, but life expectancy and reliability are important, i.e. I can probably make do with 1 TB or less. There will be some r/w activity, but not close to the rated TBW of a standard consumer 2.5" SSD. I can always add a higher-capacity HDD for non-essential data (camera feeds etc).

The environment is less than optimal: vibrations are expected, temperature cannot be guaranteed to stay well below 40 deg C, and power outages are likely to occur (although a UPS for safe shutdown is included). Barring complete HW failure or changed requirements, either of which would mean replacing the entire server, the system is expected to last for 10 years. Maintenance is possible, but costly.

I'm thinking SSD > HDD for this case? Or will they die before HDDs anyway?

Which pool layout would be better for SSDs? Mirrors? RAIDZ2?
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
SSDs are certainly the way to go with the vibration and temperature situation you described. SSDs should easily last ten years if you get decent quality drives. I would suggest either the Samsung Pro or Intel DC series drives. I have some that are ten years old already and are still working perfectly.

I would suggest a mirrored pair of small-capacity SSDs for the boot pool and a RAIDZ2 pool of SSDs for the storage pool. You could go with mirrors for the storage pool, but you can save a little money by buying less expensive (lower capacity) drives and building them into an array. You said 1TB would be enough, so you could make your storage pool with six of these:
https://www.amazon.com/Samsung-850-PRO-2-5-Inch-MZ-7KE512BW/dp/B00LF10KTO
It would give you about 2TB of usable storage (four data drives out of six) that could survive the failure of two drives. It would likely last the entire ten years without needing any service at all, and even if a drive fails, it would not quit working.
For the boot pool, I would suggest a pair of these; it is about the lowest-capacity SSD you can buy new that is actually good quality:
https://www.amazon.com/Samsung-Electronics-Internal-Version-MZ-7KE128BW/dp/B00LF10L02
You only need about 16GB of capacity for the boot pool, so you could go with something less than this, if budget or size is a restriction.
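For reference, this is roughly what those two pools look like from the command line (device names are placeholders; in practice you would build the pools through the FreeNAS GUI so partitioning and swap are handled for you):
Code:
# sketch only -- names and devices are examples, use the GUI on a real system
zpool create boot mirror da0 da1                  # two small SSDs, mirrored boot pool
zpool create tank raidz2 da2 da3 da4 da5 da6 da7  # six 512GB SSDs, survives two failures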

Is there a size or power constraint for this?
 
Joined
Feb 2, 2016
Messages
574
I mostly agree with Chris but I'm still going to throw out one more option...

Samsung 850 Pro @ 512GB ~ 300 TBW = $260
Samsung 860 EVO @ 1TB ~ 600 TBW = $160

Six of the 850s in RAIDZ2, as Chris suggests, would cost $1,560.

A triple mirror of the 860 and two hot spares (unused until activated so as not to wear down) would cost $800. Or you could go RAIDZ3 with five 860s and still be better off in terms of cost, endurance and power usage.
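
Some back-of-the-envelope endurance math shows why either rating is probably fine here (the 50 GB/day write load is an assumption; plug in your own number):
Code:
# assumed sustained writes of 50 GB/day over the 10-year life
echo "scale=1; 50 * 365 * 10 / 1000" | bc
# => 182.5 TB written: under the 300 TBW of the 512GB Pro, well under the 600 TBW of the 1TB EVO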

On a project such as this, reliability is far more important than cost, of course. I'd still rather have the larger EVOs over the smaller Pros.

In terms of write endurance, a pair of 2TB 860 EVOs (with 1,200 TBW each), over-provisioned by effectively only writing 1TB, for $660 might be the most reliable. Pick up a third as a hot spare (so it isn't wearing out over the ten years) and you're still under $1,000 for drives.
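
A rough sketch of that over-provisioning approach is to partition each 2TB drive down to 1TB and leave the remainder unallocated for the controller (device names and labels below are examples, not tested commands):
Code:
# example only: expose 1TB of each 2TB EVO to ZFS, leave the rest untouched
gpart create -s gpt ada1
gpart add -t freebsd-zfs -s 1T -l ssd0 ada1
gpart create -s gpt ada2
gpart add -t freebsd-zfs -s 1T -l ssd1 ada2
zpool create tank mirror gpt/ssd0 gpt/ssd1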

Cheers,
Matt
 

otpi

Contributor
Joined
Feb 23, 2017
Messages
117
No, no size or power constraints that would matter for a server, but waste not... It's for a mobile office. Backups will be in the cloud, and accumulated data can be offloaded from local storage, but connectivity/bandwidth can be an issue at times. Snapshots and the ability to roll back from a local copy are why I'm considering this: less downtime.
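
(For the record, the rollback workflow I have in mind is just this, with a made-up dataset name:)
Code:
# hypothetical dataset; snapshots are instant and a local rollback needs no bandwidth
zfs snapshot tank/office@before-change
zfs rollback tank/office@before-change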

I like the idea of a mirrored pair + hot spare, just in case one drive goes bad. It also keeps the number of drives down (3 vs 6, + boot). I will have to do the TBW numbers, but I don't think it even needs to be over-provisioned. It's also nice that I can add another mirrored pair in the future if the pool is outgrown. Looking at the DL20 Gen10: 6 drive bays + 2 internal SATA and 1 M.2.
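
(Growing the pool later should just be one command per extra vdev, something like this with placeholder device names:)
Code:
# attach a second mirrored pair to the existing pool; devices are hypothetical
zpool add tank mirror da4 da5
zpool add tank spare da6   # register the hot spare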

Weighing the pros and cons of VMs for server+NAS with optional hot standby versus 2x the hardware for bare-metal installations, down the rabbit hole I go... (I wonder if that S100i is good for passthrough?)
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
Would you be interested in suggestions for quality used gear or do you need to get new?
 

diedrichg

Wizard
Joined
Dec 4, 2012
Messages
1,319
Wait, what? FreeNAS has TRIM? No need to over-provision if it does.

Mlovelace

Guru
Joined
Aug 19, 2014
Messages
1,111
Wait, what? FreeNAS has TRIM? No need to over-provision if it does.
FreeBSD's ZFS added TRIM support back in 2012. I believe FreeNAS has supported TRIM since version 9.10... may have been earlier.
Code:
root@CLNAS02:~ # sysctl -a | grep vfs.zfs.trim
vfs.zfs.trim.enabled: 1
vfs.zfs.trim.max_interval: 1
vfs.zfs.trim.timeout: 30
vfs.zfs.trim.txg_delay: 32
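If you want to verify that TRIM is actually reaching the drives, there are counters for that as well (I'm quoting the sysctl tree from memory, so treat the exact names as an assumption):
Code:
# TRIM statistics; a climbing "unsupported" counter means a device is ignoring TRIM
sysctl kstat.zfs.misc.zio_trim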
 

otpi

Contributor
Joined
Feb 23, 2017
Messages
117
Would you be interested in suggestions for quality used gear or do you need to get new?
Thanks, but no. This would be new HW only, ~20 systems.
 

SamuelFlores

Cadet
Joined
Apr 25, 2022
Messages
1
SSD reliability is several times higher: the average failure rate is almost 20 times lower than for HDDs. The difference shrinks once you account for drive age, though it is still large. Note that in the source data the average SSD is only 12.7 months old, while the average HDD is already 49.6 months old; the oldest SSD is about 30 months old and the youngest HDD is 24 months old. So if you ignore age, SSDs look far more reliable than HDDs, but until the fleets are comparable in age that conclusion is very conditional. I still consult with specialists before buying a drive: salvagedata.com
 