Really?
diskdiddler said:
I'm just curious if I ran 6 SSDs with 2 disk redundancy, just how risky it is. My assumption, based on posts on these forums and replies like yours, is that it would currently be more risky than running with 6 regular hard drives.
Cyberjock is a bit of a pessimist who sees the world through some harshly black and white glasses.
Sure looks like someone asked a question comparing SSDs to regular disks, and the first word of the answer had my name in it...
Yes. Someone asked a question comparing SSDs to regular disks, and you came out with this jaw-dropping response that included all sorts of FUD:
Most people aren't going to buy a 1/2 dozen of 5 or 6 different SSDs, test them in a bunch of different environments to figure out what hardware works with ZFS and what doesn't, and then pick from one of the few that are actually acceptable.
"what hardware works with ZFS"....?!?!? Any quality SSD is likely to be fine. I can't think of a contemporary SSD that doesn't "work with ZFS". This is FUD of the worst sort.
So unless you're ready to go that route, you're simply better off buying hardware from a company like iXsystems that has already spent gobs of money on testing and validating to ensure your SSDs can handle ZFS well.
"Handle ZFS well." It's a NAS. Even for those of us with 10G ports, "handle ZFS well" is kind of a very strange thing to say, because a NAS is limited by nature to some rather modest limits. You're not talking to a bunch of corporate types who are looking for a hot shit software SAN that can run iSCSI on a pair of 40G uplinks to a cluster of ESXi running heavy transactional data; for that, I'd agree, see iXsystems. You're talking to a forum with what's largely a bunch of home hobbyists who are looking to store mostly long term archival data on predominantly 1G networks. There's no need to suggest that somehow there are SSDs that can't "handle ZFS well." Any quality modern SSD should be able to perform NAS duties. It doesn't have to be the best choice or the highest performing choice, it merely has to work well enough as a NAS drive at a low price point. More FUD.
It really boils down to either building it right by doing your own R&D (very expensive this way), buying something from a company like iXsystems that did the R&D for you (probably more than you want to pay), taking some serious chances that things could go badly for you with no warning (probably more risk than you want to accept), or simply not doing an all-SSD zpool (Why do this? Are you not a nerd that has a need for all-SSD for the e-penis factor?).
That's such a narrow view of the world. There are other benefits of an all-SSD pool. Lower noise. Less sensitive to heat. More responsiveness. There are other use models for FreeNAS than just storing massive amounts of data. Consider something like storing office files. You don't need a lot of space, but high reliability and good responsiveness are nice qualities.
I've got a bunch of older SSDs lying around here. I've been considering making an all-SSD zpool of them in my main box. It would be more of an experiment than anything else since it would be different models (but all the same brand.. Intel) and I'd be surprised if the darn thing lasted a month.
Perhaps you should actually try that little experiment. Why the hell would you be "surprised if the darn thing lasted a month"? I mean, sure, you can probably make it burn through its write endurance in that period if you try (that's not really that hard), but part of being a storage admin is understanding your workloads and balancing competing interests to arrive at something that suits your needs.
For our new hypervisors out in Ashburn, each with an LSI 3108 RAID controller, I've been putting in Intel 535s. Why? Because I did the math and discovered that our write levels were in the ballpark of the 40GB/day writes that they're spec'd for. So I deployed five in each server: two RAID1s and a spare. The numbers suggest the busier of the two datastores will last the full five years, perhaps just barely. But I don't really care even if it were only to last two, because by that time the cost to replace will be very modest, and in the meantime the cost differential between those $150 SSDs and the next step up (the Intel DC S3500 480GB at $320) is so great, and the rate at which SSD prices are dropping so extreme, that this is actually the smart thing to do.
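For anyone who wants to repeat that math against their own workload, here's a minimal sketch. The rated figure matches the 40GB/day spec mentioned above; the measured write rate is a hypothetical placeholder you'd replace with what your own monitoring reports:

```python
# Back-of-the-envelope SSD endurance estimate.
RATED_GB_PER_DAY = 40        # vendor endurance spec (Intel 535 class, per the post above)
WARRANTY_YEARS = 5
rated_tbw = RATED_GB_PER_DAY * 365 * WARRANTY_YEARS / 1000   # ~73 TB written over the warranty period

measured_gb_per_day = 35     # hypothetical: what your monitoring says the datastore actually writes

expected_years = (rated_tbw * 1000) / (measured_gb_per_day * 365)
print(f"Rated endurance: ~{rated_tbw:.0f} TBW")
print(f"Expected life at {measured_gb_per_day} GB/day: ~{expected_years:.1f} years")
```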
I too would love a near-silent, low power, cool NAS with little or no moving parts. The problem is that without all of the validation and verification of the hardware, you're quickly on an island where something goes wrong that nobody else has ever seen, and the only good solution is to rebuild and restore from backup. *I*, personally, am against scenarios where you are expecting to have to restore from backup in a short period of time. ;)
That's just more FUD. It's 2016, dude. SSD technology is well understood, we have things like media wearout indicators, and SSDs have been deployed in a huge range of storage roles. Lots of users here on the forums have been deploying SSDs for their jails, which is arguably a more challenging environment than mere pool storage, and we're not hearing about daily failures.
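If you want to see where a drive actually stands, the wear level is right there in SMART. A minimal sketch, assuming smartmontools is installed and an Intel-style drive that exposes a Media_Wearout_Indicator attribute (the device path is a placeholder):

```python
import subprocess

DEVICE = "/dev/ada1"   # placeholder device node; adjust for your system

# Dump the SMART attribute table and look for the wear indicator.
out = subprocess.run(["smartctl", "-A", DEVICE],
                     capture_output=True, text=True).stdout

for line in out.splitlines():
    if "Media_Wearout_Indicator" in line:
        # The normalized value starts at 100 and counts down as the flash wears.
        print(line)
        break
else:
    print("No wearout attribute found; inspect the smartctl -A output manually.")
```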
The only SSD zpool I've worked with that wasn't made by iXsystems or pre-built by someone else is a little 30GB I have that uses SLC memory. Good for at least 100k writes for each memory cell, and has 99% of the life remaining per SMART. Not overly concerned about it up and dying. Of course, you can't buy 4TB SLC-based SSDs at any price. They simply don't exist.
We already understand that SLC isn't actually practical or even necessarily better than MLC, and manufacturers have been shipping enterprise-grade MLC since... 2010? Like literally "before FreeNAS"? Those amazing DC S3710 SSDs with multiple-petabyte write endurance? Those use MLC. SLC vs. MLC is a settled thing, and it doesn't even matter which is "better".
Basically, to make a long story short, no one gives a crap about SLC anymore. I don't know how you managed to miss the last half-decade, but amazing strides have been made. We've got 512GB SD cards out there. The price of MLC has plummeted while performance and endurance have slowly increased at the low end, and at the top end it's an unrecognizable market.
It's one of those things that either works well for the long term or quickly turns into a nightmare because you're spending so much time trying to figure out why the heck stuff doesn't work right, play nice, etc.
That might have been the case 5 years ago. That's not the case today. All the major issues are known. As with HDDs, there may well be things to learn, such as WDIDLE or other tweaks of the sort we suggest for HDDs, but you need to remember that this community is perfectly capable of dealing with that sort of thing; in fact, asking some questions along those lines was the opener to this thread.
So, here's the non-FUD answer to the question the poster asked.
The six hard drives would have better endurance characteristics than the six SSDs, but the SSDs' endurance could still be fine for many scenarios.
That's a perfect assessment of the realities.