Full Flash (SSD) FreeNAS

Status
Not open for further replies.

zmi

Cadet
Joined
Jul 8, 2015
Messages
9
Hi, I've looked through the forums but didn't find anyone talking about full-flash FreeNAS. We've been thinking about a Supermicro case with 24x 2.5" slots, filling it with 1TB SSD drives in a 12x2-way mirror (RAID-10-like) config, with 64-128GB RAM and 2x 10Gb NICs.
Has someone experience with such a machine?
Would a separate ZIL on a very fast PCIe SSD still make sense with that?
How many IOPS could we expect? Each SSD realistically delivers 40k IOPS, so 12x40k would be 480k IOPS. Or are there limits before that? At 4K I/Os, that would be 1875 MB/s, close to saturating two 10Gb/s NICs.
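
For reference, here is the back-of-the-envelope math behind those numbers as a quick Python sketch (the 40k IOPS per SSD is our assumption, and real pools rarely scale linearly):

    # Back-of-the-envelope sketch of the planned pool's theoretical limits.
    # Assumes 40k IOPS per SSD and perfectly linear scaling, which a real
    # system will not reach.
    mirror_vdevs = 12            # 24 SSDs as 12 two-way mirrors (RAID-10-like)
    iops_per_ssd = 40_000        # assumed realistic 4K IOPS per SSD
    io_size = 4 * 1024           # 4 KiB per I/O

    write_iops = mirror_vdevs * iops_per_ssd           # writes: one drive's IOPS per mirror vdev
    throughput_mib = write_iops * io_size / 2**20      # MiB/s at 4K I/O
    throughput_gbit = write_iops * io_size * 8 / 1e9   # rough Gb/s on the wire

    print(f"{write_iops:,} IOPS, {throughput_mib:.0f} MiB/s, ~{throughput_gbit:.1f} Gb/s")
    # -> 480,000 IOPS, 1875 MiB/s, ~15.7 Gb/s: close to, but not quite,
    #    saturating 2x 10Gb/s NICs; mirrored reads could in theory go ~2x higher.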
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
Well, not everyone has the money for a full-flash system. ;)

There are probably two or three people on this forum who have done very small all-flash systems, nothing nearly as big as you are considering though. I think it's fair to say that if someone wants to spend the money on a full-flash system, they're probably better off buying something that is known to work (like iXsystems' TrueNAS Z50) than embarking on a very expensive "project" that may or may not work well. The bigger issue is that if you do run into problems, you won't find anyone here with the experience to help you much. There are unique things to consider when going all-flash that I don't want to go into detail on here, but it can be a roller-coaster ride if you aren't familiar with what you are doing with full-flash on ZFS.
 

mav@

iXsystems
iXsystems
Joined
Sep 29, 2011
Messages
1,428
I can say that in my lab tests on a FreeBSD HEAD system with 20 SSDs and 40 logical CPU cores I've reached up to 1.2M IOPS and up to 60Gb/s of bandwidth over iSCSI on ZFS: http://www.bsdcan.org/2015/schedule/events/537.en.html Those tests were highly synthetic, though, and real-life performance will depend on many factors. Also, FreeNAS is still mostly based on FreeBSD 9.3, which has a less scalable block device layer, so I would expect peak performance to be a few times lower than that, at least until FreeNAS 10 is out. As cyberjock said, this is still quite a new area, so when starting such an expensive project it would be good to bring in some expertise.
 

zmi

Cadet
Joined
Jul 8, 2015
Messages
9
Thanks for the responses. As this is new to us, I guess we will start tests with a few SSDs, maybe 3-4 of them in RAID-Z1. Is there a reference for good hardware parts? I saw some threads about network cards and SAS controllers, but it would be nice to have it all in one place. Maybe someone has already compiled one.
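
For what it's worth, a minimal sketch of why results from the small RAID-Z1 test pool won't extrapolate to the mirror layout (the per-SSD IOPS figure is an assumption, and RAID-Z vdev IOPS is approximated as a single drive's):

    # A RAID-Z vdev delivers roughly the random IOPS of a single member
    # drive, while each additional mirror vdev adds a full drive's worth.
    ssd_iops = 40_000                 # assumed random 4K IOPS per SSD
    raidz1_test_pool = 1 * ssd_iops   # 4 SSDs in one RAID-Z1 vdev -> ~40k
    mirror_pool = 12 * ssd_iops       # 24 SSDs in 12 mirrors -> ~480k
    print(raidz1_test_pool, mirror_pool)   # 40000 480000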

As for iXsystems: thanks for the hint about the Z50, it sounds nice; I hadn't seen it before your post. I guess those guys know how to build a fast and stable system. My fear is support: we're in Vienna, Austria, and when we have special hardware it takes a long time to replace it. That's why we formerly had "4h time to repair" contracts with vendors. Now we build servers from parts and simply keep spares lying around in case of problems. Way cheaper.
Also, having the knowledge in-house has proved to be valuable. Maybe they offer support contracts; I'll look into it.
 

zmi

Cadet
Joined
Jul 8, 2015
Messages
9
Great, any experiences you'd like to share? What's the rest of the hardware, how much throughput/IOPS do you get, what works well, and what doesn't?
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
iXsystems definitely sells support contracts. They also offer 24x7 support for those that really want 24x7 coverage when things go wrong.

Be careful about experimenting with a small system and expecting the big one to "just work". Things behave differently at scale, so there's a good chance it will work on the small scale, but as soon as you drop all the cash for the SSDs and build the larger setup, it may not do what you need it to. It really is the definition of jumping into the pool feet first.

If reliability is important, there is the option of "high availability". Two nodes run in active/passive mode to ensure maximum uptime, even if you are on the other side of the world.
 

jamiejunk

Contributor
Joined
Jan 13, 2013
Messages
134
Great, any experiences you'd like to share? What's the rest of the hardware, how much throughput/IOPS do you get, what works well, and what doesn't?

It's been frustrating, but it's working well now. The problems ended up being other things. For months I battled VMware's NFS bug and kept thinking it was the FreeNAS server. Moving to iSCSI file extents for VMware seems to have solved that.

We have a head unit and then an external JBOD with the 24 SSDs.
Intel(R) Xeon(R) CPU E5-2609 0 @ 2.40GHz
114634MB RAM.
We originally had 256GB of RAM and it acted all wonky. RAM tests were run for about two weeks and passed just fine, but we backed it down and it's been solid ever since. Reboots take forever for some reason, probably because of RAM tests at startup. We also had a STEC ZeusRAM, but figured out we didn't need it and put it to use elsewhere. We have the drives set up in mirrored pairs.

We also had problems with the Intel 10GbE NICs when we were on FreeNAS 9.2.x. We moved to the Chelsio 10GbE NICs and they work great.

The Intel NICs work fine in 9.3.x on some other boxes.

As for benchmarks, I haven't run any. That's a rabbit hole. But I'd be happy to run any benchmarks you'd like. Just let me know.

I do know it's bananas faster than 7200 RPM or 15k SAS drives. Stuff is fast and responsive. That's what really matters.

Also, iXsystems does not offer support for hardware not purchased from them; they will redirect you to a consultant.
 

gpsguy

Active Member
Joined
Jan 22, 2012
Messages
4,472
I know who you work for, but are you sure that one can still buy support for FreeNAS?

I got the impression that it changed about a month ago. See - https://bugs.freenas.org/issues/10178

iXsystems definitely sells support contracts. They also offer 24x7 support for those that really want 24x7 coverage when things go wrong.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
I asked one of our salespeople about this. There are four categories that a given user will fall under:

1. Self-built - iXsystems does not support these at all. Instead we redirect you to a third party who will provide assistance in resolving the issue.
2. FreeNAS Certified Systems - These are systems made by iXsystems that use components we have tested and found to work properly. iXsystems used to sell support contracts for these, but we discontinued that about a month ago. Anyone with a support contract is still on support, but contracts will not be renewed as they expire. If you are off-contract, you will be redirected to a third party for resolution.
3. FreeNAS Mini - Basically the same as self-built, except that we first need to rule out the hardware, since the Mini comes with a 1-year hardware warranty. We did offer support contracts, but these have since been discontinued as well. If you do not have a contract with us, and we have ruled out the hardware as the problem, you will be redirected to a third party for additional assistance.
4. TrueNAS - iXsystems' proprietary systems, which offer features that don't exist in FreeNAS (such as High Availability) and have optimizations and features designed specifically for the hardware iXsystems uses. Obviously you cannot build this yourself even if you wanted to.

There may be more, but that's my understanding of the current situation. As always, YMMV. I am not here in the forums to speak on behalf of iXsystems; this is simply my understanding of the situation.
 

brando56894

Wizard
Joined
Feb 15, 2014
Messages
1,537
I'm assuming you're already well aware of this, but make sure you have a lot of replacement SSDs handy! I don't know what the wear leveling will look like on 24 SSDs in RAID-10, but using a single Crucial 120GB M4 SSD as my system/jail drive apparently isn't good for the drive, as it reached its write limit about 3 weeks ago. I previously had the drive in my desktop as my system drive for about a year, and after I got a larger Samsung drive I decided to put the Crucial in the NAS, but it only lasted about 6 months before it died.
 

zmi

Cadet
Joined
Jul 8, 2015
Messages
9
This SSD wear leveling thing is what makes me nervous. Consumer SSDs are cheap, but, for example, a 1TB Crucial MX200 is only rated for 160 TBW (TB written). So if you filled the drive once a day, you'd need 3 drives per year. We plan to use only 80% of the capacity and then fill the disks only 50%, which may keep them alive longer. And that's why we want to start small, say 6-8 SSDs, and add more SSDs every few weeks, so they will not all die at the same time. It might also be clever to mix 2 SSDs from different vendors per mirror, to keep the data safe.
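
Here is the endurance math spelled out (a sketch; the 160 TBW figure is Crucial's rating, and one full fill per day is a hypothetical worst case, not a measurement):

    # How long a 160 TBW consumer SSD survives heavy writes.
    tbw_rating_tb = 160          # vendor endurance rating, TB written
    writes_per_day_tb = 1.0      # hypothetical: one full 1TB fill per day

    days_per_drive = tbw_rating_tb / writes_per_day_tb   # 160 days
    drives_per_year = 365 / days_per_drive               # ~2.3, so buy 3

    print(f"{days_per_drive:.0f} days per drive, {drives_per_year:.1f} drives per year")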

@jamiejunk: The SSD 840 doesn't have PLP (power-loss protection). I hope you know you're risking your data.

BTW, does anyone have experience with SSD lifetimes, PLP status, etc.? Any SSDs with a higher TBW rating that have reasonable price/performance and PLP?
 

mav@

iXsystems
iXsystems
Joined
Sep 29, 2011
Messages
1,428
And that's why we want to start small, say 6-8 SSDs, and add more SSDs every few weeks, so they will not all die at the same time.

You only need to worry about wear leveling within each top-level vdev (mirror); just inserting new pairs won't help much. If both SSDs of some old mirror pair die at the same time, it won't matter that all the other pairs are brand new. Sure, there can be a more complicated schedule, like this: after some time, break all the original pairs, replacing one SSD of each pair with a new one, and reinsert each removed SSD paired with another new one. That way requires more manual work, though, with more room for error.
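
If it helps, a minimal sketch of that schedule (drive names are made up; each tuple is one mirror vdev, and note the pool gains one vdev per broken pair):

    # Break equally-worn mirror pairs so no vdev has two SSDs with
    # identical wear. Drive names here are purely illustrative.
    original_pairs = [("A1", "A2"), ("B1", "B2")]   # equal wear inside each pair
    new_drives = iter(["N1", "N2", "N3", "N4"])

    staggered = []
    for worn_left, worn_right in original_pairs:
        staggered.append((worn_left, next(new_drives)))   # worn SSD + fresh SSD
        staggered.append((worn_right, next(new_drives)))  # worn SSD + fresh SSD

    print(staggered)
    # [('A1', 'N1'), ('A2', 'N2'), ('B1', 'N3'), ('B2', 'N4')]
    # Each mirror now pairs one worn and one fresh SSD, so both sides are
    # unlikely to fail at the same time.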
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
@zmi,

Your concern about wear leveling is absolutely valid. iXsystems has contacts with Samsung, Intel, and other brands and has done its own testing on what works and what doesn't. That's one of a long, long list of reasons why I recommend you just buy from iXsystems. This can go very badly for you without the support and experience that iXsystems has with all-flash zpools.
 

Peter Jakab

Dabbler
Joined
Jun 18, 2015
Messages
37
Hi All,

What about this news related to this topic?
http://hothardware.com/news/report-...y-with-hdds-in-2016?google_editors_picks=true
Could a 4TB SSD cost the same as a 4TB HDD in 2016? I don't think so right now, but let's see in the near future.

I'm also worried about the wear factor. On a single home SSD you can make sure you never fill it beyond 80%, keeping free space for sector remapping. But what about big pools, where some disks could fill up while others stay less full? I don't think ZFS copy-on-write helps here either, so the FreeNAS team may have to solve SSD wear protection somehow in the future.
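
Since ZFS spreads allocations across the top-level vdevs, maybe the fill could be capped for the whole pool instead of per disk. A rough sketch of that idea (sizes are hypothetical; the result would be applied via ZFS's normal quota property on the top dataset):

    # Compute a dataset quota that keeps the whole pool below an 80% fill
    # ceiling, leaving slack for wear leveling and copy-on-write churn.
    pool_capacity_tb = 12.0    # hypothetical: 12x 1TB mirror vdevs
    fill_ceiling = 0.80        # never allocate beyond 80% of capacity

    quota_tb = pool_capacity_tb * fill_ceiling
    print(f"set the top-level dataset quota to ~{quota_tb:.1f} TB")   # 9.6 TB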

Bye,
Jackson, IT engineer
 

diskdiddler

Wizard
Joined
Jul 9, 2014
Messages
2,360
Cyberjock: Are any of the concerns with all-flash FreeNAS builds due to software limitations, hardware limitations, or software configuration difficulty?

Will FreeNAS 10 be more "all-SSD friendly"?
In the next 18 months, I expect I may build my first all-SSD system.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
All-flash is a special kind of beast. It requires good quality hardware and adequate resources to work well.

The real 'limitations' (if you want to call them that) are in the ZFS design (it wasn't designed for flash). That being said, it works well when properly managed by someone who knows ZFS well. So no, FreeNAS 10, FreeNAS 11, etc. will not be more "all-SSD friendly", as the issues are upstream in ZFS itself.
 

mav@

iXsystems
iXsystems
Joined
Sep 29, 2011
Messages
1,428
I can second cyberjock's point that ZFS can be quite CPU-hungry when pushed to SSD-class speeds, and especially SSD-class IOPS. At the same time, FreeBSD 10, used as the base for FreeNAS 10, has significant improvements in block storage layer performance that should improve the SMP scalability of SSD-only systems, so bigger systems should really be faster than smaller ones. How much of that benefit can actually be obtained in practice still depends on the integrator's skill.
 

BERKUT

Explorer
Joined
Sep 22, 2015
Messages
70
For full flash (more than 2 vdevs), does FreeNAS need a SLOG/ZIL device, or is sync=always enough?
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
For full flash (more than 2 vdevs), does FreeNAS need a SLOG/ZIL device, or is sync=always enough?

That depends on many aspects... it's not a yes/no answer.
 