HDD's or SSD's for 24-bay 2.5", and which models?

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Think it's still a matter of opinion. I have seen highly unexpected URE's on SSD's, and controllers that lock up and drop a drive offline. If you wish to call me paranoid, I'm fine with that.
 

Phlesher

Dabbler
Joined
Jan 9, 2022
Messages
16
Given the costs of going pure SSD here, and knowing that cost will basically cap me around 2 TB per drive at the moment, I am starting to be tempted to get 2.5" 1 TB 5400 RPM HDD's (WD Red Plus?) and run them as three 6-wide raidz2 vdev's for my bulk storage pool, giving me a starting 12 TB. This is admittedly much smaller than I had originally hoped to achieve in my first pass, but the cost would be very light in comparison. (Note that I would still do a 2x2 mirror of SSD's for the VM pool.)

The plan would be that, once I fill that up, there's a good chance SSD prices will have fallen somewhat, and it will make more economic sense to go full flash and increase storage capacity at the same time. The question I have is: does TrueNAS make it reasonable for me to complete this kind of migration: take the pool offline, snapshot it, delete the pool, replace all drives, form new empty raidz1 vdev's in a new pool, and restore the snapshot to the new pool? Is there any other consideration I'm not making here?
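For what it's worth, the steps above map fairly directly onto zfs send/receive. One wrinkle: a snapshot lives inside its pool, so it has to be sent somewhere else before the pool is destroyed. A rough sketch, with hypothetical pool, dataset, host, and device names (TrueNAS wraps the send/receive part in its replication UI, so this could also be driven from the GUI):

```shell
# 1. Snapshot everything in the old pool recursively.
zfs snapshot -r tank@migrate

# 2. Send the snapshot tree to a holding area (another machine here,
#    but a temporary local pool on spare disks works too).
zfs send -R tank@migrate | ssh backup-host zfs receive -F backup/tank

# 3. Destroy the old pool and physically swap in the new drives.
zpool destroy tank

# 4. Create the new all-flash pool (device names are examples).
zpool create tank raidz1 da0 da1 da2 da3 da4

# 5. Restore from the holding copy.
ssh backup-host zfs send -R backup/tank@migrate | zfs receive -F tank
```

The main practical consideration is that you need enough scratch capacity somewhere to hold the full send stream while the pool is rebuilt.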
 

Phlesher

Dabbler
Joined
Jan 9, 2022
Messages
16
Or, different thought: I could get 4 TB Samsung 870 EVO's, put 5 of them in a RAID-Z1, and that gives me a 16 TB starting pool. I can then expand later by adding additional vdev's to the same pool. While this is more expensive initially, it isn't absurdly so (roughly 60% more), and it leaves the obvious upgrade path open as SSD costs likely drop. And, bonus, I get the lower power/heat and higher performance... feels like a winner?

This would be assuming the recommendation earlier in the thread that the 870 EVO's are plenty reliable in a RAID-Z1, and under the assumption that a hot spare is adequate to cover the vast majority of failure scenarios.
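For reference, that layout can be built in one step from the CLI, with the spare attached to the pool from the start (device names are hypothetical; on TrueNAS you would normally do this through the web UI):

```shell
# Five-wide RAID-Z1 of the 4 TB drives, plus one hot spare.
zpool create storage raidz1 da0 da1 da2 da3 da4 spare da5

# Verify the layout and that the spare shows as AVAIL.
zpool status storage
```

On TrueNAS the fault-management daemon handles kicking the spare in when a member drive faults.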
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Two caveats:

1) Your write endurance requirements NEED to be reasonable. If you are planning crushing levels of writes, you are designing a situation more likely to tease out failures.

2) You need to make sure you have backups of any important data on the pool.

SSD's are less prone to "bad sectors" than HDD's, but more susceptible to just going *poof* in a magic disappearing act.

I am a big fan of abusing SSD's in ways similar to what you're talking about, and it is interesting that when I've planned for failure on the basis of future SSD costs dropping, it did in fact work out that way...
 

Phlesher

Dabbler
Joined
Jan 9, 2022
Messages
16
jgreco said:

Two caveats:

1) Your write endurance requirements NEED to be reasonable. If you are planning crushing levels of writes, you are designing a situation more likely to tease out failures.

2) You need to make sure you have backups of any important data on the pool.

SSD's are less prone to "bad sectors" than HDD's, but more susceptible to just going *poof* in a magic disappearing act.

I am a big fan of abusing SSD's in ways similar to what you're talking about, and it is interesting that when I've planned for failure on the basis of future SSD costs dropping, it did in fact work out that way...

Good points. I think the level of writes I am talking about is marginal. The storage pool will primarily be for longer-term storage, and the VM pool is for VM's that are not supposed to be super active on disk writes.

So my starting setup will be:
  • Boot: 1x 2-way mirror @ 256 GB (yields 256 GB total) = 2 drives @ 256 GB each (rear bays)
  • VM pool: 2x 2-way mirror @ 1 TB each (yields 2 TB total) + 1 hot spare = 5 drives @ 1 TB each
  • Storage pool: 1x 5-wide RAID-Z1 @ 4 TB each (yields 16 TB total, ~12.8 TB useful) + 1 hot spare = 6 drives @ 4 TB each
Future expansion possibilities include:
  • Storage pool expansion option 1: up to 2x additional 5-wide vdev's (yields 48 TB total, ~38.4 TB useful) + 1 hot spare = 11 drives @ 4 TB each
    • My understanding is that this is possible, but for best data balancing would require the existing vdev to have its data striped across the new vdev's. I am not yet clear how this process works (entirely automatic?).
  • Storage pool expansion option 2: reconfigure entirely on the basis of cheaper/higher capacity drives at that time (3x 5-wide or 2x 9-wide)
    • This assumes there is a straightforward way to offline the pool and perform an upgrade to a different set of vdev's. I am not yet clear if this is possible, and if possible, whether it's easy.
This feels like a good strategy for me unless somebody has a reasonable objection, which I would be happy to hear. :)
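On the rebalancing question in expansion option 1: adding a vdev is a single command, and ZFS will stripe new writes across all vdev's automatically, but it does not rebalance existing data on its own. Existing data only spreads onto the new vdev when it is rewritten. A sketch, with hypothetical pool, dataset, and device names:

```shell
# Grow the pool by one more 5-wide RAID-Z1 vdev.
zpool add storage raidz1 da6 da7 da8 da9 da10

# New writes now stripe across both vdev's. To rebalance existing data,
# it has to be rewritten, e.g. by replicating a dataset within the pool:
zfs snapshot -r storage/media@rebalance
zfs send -R storage/media@rebalance | zfs receive storage/media-new
# ...then verify, and swap the datasets with zfs rename once satisfied.
```

So it is not entirely automatic, but it is also not required: an unbalanced pool works fine, just with new data skewed toward the emptier vdev.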
 