Full Flash (SSD) FreeNAS

diskdiddler

Wizard
Joined
Jul 9, 2014
Messages
2,360
All-flash is a special kind of beast. It requires good quality hardware and adequate resources to work well.

The real 'limitations' (if you want to call them that) are with ZFS's design (it wasn't designed for flash). That being said, it works well when properly managed by someone who knows ZFS well. So naturally, FreeNAS 10, FreeNAS 11, etc. will not be more "all-SSD friendly", as the issues are upstream in ZFS itself.
I can't see FreeNAS moving away from ZFS, but I can see SSDs becoming more and more common until they are the de facto standard.
Will there be changes in FreeNAS10 or 11 which may resolve some of the SSD issues you're talking about?
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,681
No, but the technology will be resolving itself anyway. What we've seen over the past decade of ZFS development is maybe a tenfold increase in the performance of a "typical" server, combined with a much larger increase in the amount of regular memory a typical machine can hold.

ZFS has a fundamental issue in that the designers recognized that CPU was a lot faster than spinny-rust storage, and they assumed that the combination of CPU plus RAM could be leveraged to "make disk go faster". That's conditionally true, and the history of disk drives over the last several decades shows no sign that the hardware will ever resolve the trend of seek times dominating I/O. No matter what, ZFS is going to remain a kick-ass way to manage the stuff you're putting on spinning platters, because the resource investment to run ZFS is rapidly getting less expensive while the significant HDD limitations remain almost exactly the same.

But that turns into a handicap when you look at SSD. ZFS is, overall, a big massive software package that is highly dependent on disk drives being slow in order to make itself look good. As CPU and memory continue to improve, flash storage is improving at a similar rate; SSDs can go very fast and ZFS can't keep up. There are some mitigating factors that I believe will be critical in the coming decade:

1) We saw a rapid progression from 10Mbps->100Mbps->1000Mbps (1993, 1996, 1999) networking, but it's now 2015 and 10GbE isn't really "here" yet. The growth of baseline networking speeds has slowed to a crawl. Without substantially faster networks, system speeds are already sufficient to saturate multiple 1Gbps Ethernet in many scenarios. You can already buy a CPU that'll do that for 10GbE too.

2) Flash devices continue to get bigger; I saw SanDisk 960GB units being sold for $199 on Black Friday. As these units get bigger, the amount of time it takes to empty and fill them increases, which tends to be a factor that favors a somewhat slower but much smarter storage system.

3) CPU and RAM will continue to improve. Ten years ago, a fast CPU had maybe 1/20th the performance of a decent CPU today, and 8GB of RAM was $2000. Today, 256GB of RAM is $2000. Over the next decade, it's reasonable to expect similar improvements.

The improvements in CPU and RAM work fairly effectively against the other two factors to help favor ZFS over time. There are also things that can be done to make ZFS more SSD-friendly, but the features that would be most useful, such as automatic tiering, aren't likely to make it into ZFS without some large corporate sponsor (and probably block pointer rewrite). Fortunately, it does look like we'll be getting some similar benefits from things such as persistent L2ARC, which is in the works.
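Attaching an L2ARC to a pool is already trivial today, for what it's worth; here's a minimal sketch in Python (the pool name "tank" and the device path are hypothetical placeholders, not from any setup discussed here):

```python
import subprocess

# Attach an SSD as an L2ARC (cache) device to an existing pool.
# Pool name "tank" and device "/dev/da6" are hypothetical - use your own.
subprocess.run(["zpool", "add", "tank", "cache", "/dev/da6"], check=True)

# Verify the cache vdev now shows up under the pool.
subprocess.run(["zpool", "status", "tank"], check=True)
```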
 

diskdiddler

Wizard
Joined
Jul 9, 2014
Messages
2,360
Personally, I see (and saw) FreeNAS and ZFS as being about storage reliability first and performance second.
If I replaced my existing 6x5TB FreeNAS disks with 6x5TB SSDs, sure, I'd expect it to be faster - but my goal would continue to be primarily reliability, not speed.
I'd expect maybe 300MB/s instead of the 50 to 100MB/s I get now.

I just read multiple articles claiming that TLC NAND can lose data simply by being powered off for too long. This stuff sounds legitimately quite volatile (so I can't wait to see what someone like Cyberjock says about it, considering he takes hyperbole and paranoia to a factor of 12).

There are going to need to be some fairly strict guidelines about how to do an all-SSD FreeNAS machine, because data integrity could be a real issue.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,681
so I can't wait to see what someone like Cyberjock says about it, considering he takes hyperbole and paranoia to a factor of 12

Well, you got his number for sure. But the flip side of this picture is that most NAS storage systems are always-on affairs, so the TLC NAND volatility issue probably isn't a big deal.
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,155
But the flip side of this picture is that most NAS storage systems are always-on affairs, so the TLC NAND volatility issue probably isn't a big deal.
Sounds exactly like a job for regular scrubs - which is, in fact, what Samsung's updated firmware now does internally.
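In ZFS terms that's just `zpool scrub` on a schedule. FreeNAS drives this from its own scheduler, but a minimal sketch of the idea in Python (the pool name is hypothetical):

```python
import subprocess
import time

POOL = "tank"  # hypothetical pool name

def scrub_if_idle(pool: str) -> None:
    """Kick off a scrub unless one is already running."""
    status = subprocess.run(
        ["zpool", "status", pool], capture_output=True, text=True, check=True
    ).stdout
    if "scrub in progress" not in status:
        subprocess.run(["zpool", "scrub", pool], check=True)

# Rescrub roughly every two weeks. FreeNAS schedules this via its GUI/cron;
# this loop just makes the idea concrete.
while True:
    scrub_if_idle(POOL)
    time.sleep(14 * 24 * 3600)
```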

A problem I see is that flash is presented with some layers of abstraction that get in ZFS' way. CoW can be easily extended (at the cost of some RAM and CPU time) to handle wear leveling as well - but there's no way of doing that externally to the drive. In fact, once you think about it, typical NAND flash controllers are quite similar to HW RAID controllers: they're presenting a large number of NAND die as a single, coherent and fast piece of storage.
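To make that concrete, here's a toy model of why CoW allocation implies a kind of wear leveling: a CoW "device" never overwrites a live block in place, so rewriting the same logical block keeps landing on different physical blocks. Purely illustrative - not how ZFS or a real flash translation layer actually allocates:

```python
class CowDevice:
    def __init__(self, num_blocks: int):
        self.free = list(range(num_blocks))   # physical blocks available
        self.mapping = {}                     # logical block -> physical block
        self.writes = [0] * num_blocks        # per-physical-block write counts

    def write(self, logical: int, data: str) -> None:
        phys = self.free.pop(0)               # always allocate a fresh block
        old = self.mapping.get(logical)
        if old is not None:
            self.free.append(old)             # old copy becomes free space
        self.mapping[logical] = phys
        self.writes[phys] += 1

dev = CowDevice(num_blocks=8)
for _ in range(100):
    dev.write(0, "same logical block, rewritten")  # hammer one logical block

print(dev.writes)  # writes end up spread across blocks, not piled on one
```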
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,681
typical NAND flash controllers are quite similar to HW RAID controllers: they're presenting a large number of NAND die as a single, coherent and fast piece of storage.

Which, of course, brings us to the normal usage model for a real RAID controller with ZFS, and which will no doubt bring Cyberjock screaming in rage when I start discussing that. ;-)
 

David E

Contributor
Joined
Nov 1, 2013
Messages
119
@zmi What did you end up doing? I'm interested in a similar build from prosumer SSDs - I'd like to get an order of magnitude or two improvement in random reads/writes over what my platters are giving me now. I'm also curious if anyone has thoughts on what kind of ZIL, if any, would make sense in front of a flash array, or if all the devices in the array really should just have supercaps to prevent data loss themselves.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,681
No, having supercap-based SSDs for the pool is pointless. The write paths for pool data and for the SLOG are two almost entirely different beasts from a performance point of view. If you omit the SLOG, optimistically thinking "but my pool's fast", what you'll find is that the in-pool ZIL still has allocation issues and must be written out according to whatever data protection strategy is in use on the pool; an in-pool ZIL on an eleven-SSD RAIDZ3 would really suck big-time, for example.

If you need the consistency guarantees that a SLOG provides on a conventional storage array, then you probably also want it for an SSD-based array - but you want a damn fast one.
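For reference, attaching a fast SLOG is a single operation; a minimal sketch (pool and device names are hypothetical, and whether to mirror the log device is a judgment call - it guards against losing in-flight sync writes if one SLOG dies):

```python
import subprocess

# Attach a fast, power-loss-protected SLOG to the pool as a mirrored pair.
# Pool name "tank" and NVMe device names are hypothetical placeholders.
subprocess.run(
    ["zpool", "add", "tank", "log", "mirror", "/dev/nvd0", "/dev/nvd1"],
    check=True,
)
```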
 

wreedps

Patron
Joined
Jul 22, 2015
Messages
225
It's been frustrating, but it's working well now. The problems ended up being other things. For months I battled VMware's NFS bug and kept thinking it was the FreeNAS server. Moving to iSCSI file extents for VMware seems to have solved that.

We have a head unit and then an external JBOD with the 24 SSDs.
Intel(R) Xeon(R) CPU E5-2609 0 @ 2.40GHz
114634MB RAM.
We had 256GB of RAM and it acted all wonky. RAM tests were run for about two weeks and passed just fine, but we backed it down and it's been solid ever since. Reboots take forever for some reason, probably because of RAM tests at startup. We also had a STEC ZeusRAM, but figured out we didn't need it and put it to use elsewhere. We have the drives set up in mirrored pairs.
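For reference, a 24-drive layout of mirrored pairs comes out to twelve 2-way mirror vdevs; a sketch of building it (pool and device names are hypothetical - adjust to your hardware):

```python
import subprocess

# Build a pool of 12 mirrored pairs from 24 SSDs (da0..da23, hypothetical).
devices = [f"/dev/da{i}" for i in range(24)]

cmd = ["zpool", "create", "tank"]
for a, b in zip(devices[0::2], devices[1::2]):
    cmd += ["mirror", a, b]

print(" ".join(cmd))        # inspect the command before running it
subprocess.run(cmd, check=True)
```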

We also had problems with the Intel 10GbE NICs when we were on FreeNAS 9.2.x. We moved to the Chelsio 10GbE NICs and they work great.

The Intel NICs work fine in 9.3.x on some other boxes.

As for benchmarks, I haven't run any. That's a rabbit hole. But I'd be happy to run any benchmarks you'd like. Just let me know.

I do know it's bananas faster than 7200 RPM or 15k SAS drives. Stuff is fast and responsive. That's what really matters.

Also, iXsystems does not offer support for hardware not purchased from them; they will redirect you to a consultant.


How many VMs are you running on it?
 

diskdiddler

Wizard
Joined
Jul 9, 2014
Messages
2,360
Been nearly 3 years since this thread and little progress in the SSD space.

I recall posting in it, back when news articles were thick and fast about SSD prices dropping very, very soon and a long, long way. We'd all have 20TB SSDs by 2018...

Oh boy, what a shame, huh? I feel like this thread is, again, 3 years away from users here considering all-flash NAS.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,681
SSD prices *are* finally falling again.

On Black Friday 2015, the price for a low-end 500GB SSD was ~$130, with an Intel consumer drive around ~$150, and IIRC the Samsungs were ~$180-200.

Then came problems in 2016, including decreased fab output, increased demand from smartphones, etc.

This year, prices are again on a downward trend, with a low-end 500GB SSD at $70, a WD Blue 500GB at $90, and the Samsungs at $100.

That link is pretty good overall. There's an oversupply, and prices should continue downwards for a while, helped in part by a new Toshiba fab coming online. We have a long way to go to reach price parity with HDD, of course. If you can wait six months, I'm guessing prices will be closer to a less-volatile low price point.
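To put "a long way to go" into rough numbers (the HDD figure below is a ballpark assumption, not a price quoted above):

```python
# Rough $/GB comparison from the prices above. The HDD figure is an
# assumed ballpark ($100 for a 4TB drive), not from this thread.
ssd_low_end  = 70 / 500       # $0.14/GB for the low-end 500GB SSD today
ssd_2015     = 130 / 500      # $0.26/GB on Black Friday 2015
hdd_ballpark = 100 / 4000     # ~$0.025/GB, assumed

print(f"low-end SSD:  ${ssd_low_end:.3f}/GB")
print(f"2015 SSD:     ${ssd_2015:.3f}/GB")
print(f"HDD estimate: ${hdd_ballpark:.3f}/GB")
print(f"SSD is ~{ssd_low_end / hdd_ballpark:.0f}x the cost of HDD per GB")
```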
 

diskdiddler

Wizard
Joined
Jul 9, 2014
Messages
2,360
I think I paid about US$85 each for 2x480GB drives two weeks ago (in Aus), so now I've got 2 SSDs in my new rig too.

Still, I'm not sure FreeNAS is actually ready for a full SSD only machine.
 

buffalosolja42

Dabbler
Joined
Sep 28, 2017
Messages
16
Did a full SAS SSD build with 8x 1.92TB Dell "Hitachi" drives; it outperforms my SAN... #allworkitems.

 

buffalosolja42

Dabbler
Joined
Sep 28, 2017
Messages
16
I do MSP work on the side and recently jumped into FreeNAS. I really tried to get TrueNAS into our environment. I did 2 all-flash FreeNAS builds, as in my previous post, as a proof of concept with hardware we had, and they perform great. Unfortunately I'm not as well versed as I should be, but I can get the major protocols working. I want to understand the tuning and plugins more, so I am building one at home. For the life of me I couldn't get Backblaze running on 11.1-U6, so I updated to 11.2-RC1 and now it's not booting, lol. Thank goodness I am anal about config backups - I'm making one now in the lab and will plug it in when I leave the office. I really miss the Linux arena; this has been a great ride this last year with FreeNAS.

 