High Performance All SSD Array?

Status
Not open for further replies.

Cheese

Dabbler
Joined
Feb 11, 2014
Messages
17
Curious to know if anyone has built an all-SSD FreeNAS array?

Unrelated to my other thread about the Xserve array... One of the challenges I have is feeding high-I/O, high-bandwidth test and development database instances, some of which are virtualized on VMware. I can't justify the spend on any more Fusion-io devices, so I'm kicking around the idea of a box of cheap SSDs. I'm looking at around 1 TB usable.

Is this more of a pain in the butt than it's worth on FreeNAS? Or should I look at another option, like a cheap Linux box with OS striping across a dozen drives?
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
Since you're talking about a development world here, I'm assuming you can afford to risk the data a little bit more. You should be able to pick up 240GB Crucial M500s for about $150-175 each. Buy a dozen or so and underprovision them to 192GB to give them more stable performance and longer write endurance. Get a 12-drive chassis with a 1:1 backplane, stuff it with RAM, and build six two-way mirrors out of those twelve drives. Just over a TB of usable SSD.
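
From the command line, that layout would look something like the sketch below. The device names (da0 through da11) and the pool name are just placeholders; in practice the FreeNAS volume manager builds the equivalent for you.

Code:
# Six two-way mirror vdevs striped together: roughly 6 x 192GB usable.
zpool create ssdpool \
  mirror da0 da1 \
  mirror da2 da3 \
  mirror da4 da5 \
  mirror da6 da7 \
  mirror da8 da9 \
  mirror da10 da11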
 

Cheese

Dabbler
Joined
Feb 11, 2014
Messages
17
Yes, can afford data risk. Pretty much what I'm thinking. Just questioning whether FreeNAS (and the RAM it wants) is the right tool for this task.

If I get off my butt and run my backups more often, I might stripe the whole thing.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,525
Metroid FTW!

Anyway, I did work with someone who had an all-SSD system.

First, don't do RAID0. It's not worth it. Even with SSDs and regular backups.

Second, it works decently. Depending on the drives you buy, their performance under heavy load, and their endurance, it could work out pretty well. Don't go with any of those TLC-based SSDs; their lifespan is just too short IMO.

Intel S3500s (or whatever those higher-end drives are called) seem to work very well with ZFS. A couple of other brands had problems (in particular, the Samsungs and Corsairs that the customer tested). They got horrendously slow after just a few hours of use, and even when the system was idle overnight they still performed like crap. Not sure why, but they went back to Newegg.
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
Intel S3500s (or whatever those higher-end drives are called) seem to work very well with ZFS. A couple of other brands had problems (in particular, the Samsungs and Corsairs that the customer tested). They got horrendously slow after just a few hours of use, and even when the system was idle overnight they still performed like crap. Not sure why, but they went back to Newegg.

The DC S3700 is their higher-end drive; the S3500 is the "value" drive and doesn't have nearly the endurance or throughput. It gets trounced in both metrics by the Seagate 600 Pro, so if you can't swing the cost of the S3700s and you want something more reliable/"enterprise-lite" than the Crucials, go with the Seagates. If you do, make sure you get the 100/200/400GB models that are underprovisioned from the factory.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,525
That's them. The S3700s! Those worked very well for the person I worked with.

Keep in mind that you can self-underprovision Intel SSDs if you desire. ;)
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
That's them. The S3700s! Those worked very well for the person I worked with.

Keep in mind that you can self-underprovision Intel SSDs if you desire. ;)

Yep, they even outline the process in complete detail (PDF warning) in their manual for the 320s using the standard ATA methods. They suggest 20% as the recommended value; of course, if you're underprovisioning them for SLOG duty you can chop off waaaaay more than that.
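
Roughly what the ATA-level resize looks like, from memory; treat Intel's document as the authoritative reference and check camcontrol(8) for the exact flags on your version. The device name and sector counts below are just placeholders, and you'd want to do this before putting any data on the drive.

Code:
# Trim an 80GB 320 down by the suggested 20%, leaving ~64GB visible.
# 64,000,000,000 bytes / 512 bytes per sector = 125000000 sectors.
camcontrol identify ada1                  # note the current native max sectors first
camcontrol hpa ada1 -s 125000000 -P -y    # -P persists the new limit, -y skips the prompt
# From a Linux live CD the equivalent is: hdparm -N p125000000 /dev/sdX
# For SLOG duty you'd set the ceiling far lower still.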

I attached a mirrored SLOG (2x8GB on 80GB Intel 320s) to my little server for giggles and saw sync writes go from ~5MB/s to ~50MB/s. Love it.
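
The log attach itself is a one-liner if anyone wants to replicate it. The pool name and GPT labels below are made up; FreeNAS does this from the GUI, this is just the underlying command.

Code:
# Add a mirrored log vdev to an existing pool called "tank".
# gpt/slog0 and gpt/slog1 stand in for the two small Intel 320 partitions.
zpool add tank log mirror gpt/slog0 gpt/slog1
zpool status tank    # the pair should show up under a "logs" section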
 

Cheese

Dabbler
Joined
Feb 11, 2014
Messages
17
After looking at this, I think I need to consider moving to Windows Pro and consumer-level stuff. It's going to cost too much to get the I/O out of the storage system :/
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
After looking at this, I think I need to consider moving to Windows Pro and consumer-level stuff. It's going to cost too much to get the I/O out of the storage system :/

Not sure why you'd say it's going to be too expensive ... twelve 240GB SSDs and a board with 64GB or more of RAM will probably still be less than a single Fusion-io ioDrive. ;)
 

joelmusicman

Patron
Joined
Feb 20, 2014
Messages
249
I attached a mirrored SLOG (2x8GB on 80GB Intel 320s) to my little server for giggles and saw sync writes go from ~5MB/s to ~50MB/s. Love it.

Question to the OP: Does the ENTIRE dataset need to be blistering fast? If not, you could do exactly what HoneyBadger did and pair the SSD SLOG with a 4x2TB spinning array in a fully mirrored config.

Not sure why you'd say it's going to be too expensive ... twelve 240GB SSDs and a board with 64GB or more of RAM will probably still be less than a single Fusion-io ioDrive. ;)

He's referring to the NIC hardware; getting that speed somewhere else would require InfiniBand hardware or at least 10G Ethernet (which would still be the bottleneck on an SSD array). It's funny how easy it is for us to spend other people's money around here, though! :)
 

Cheese

Dabbler
Joined
Feb 11, 2014
Messages
17
Pretty much external communications (good catch, joel). On my dev desktop, I put a couple of cheap-o PCIe SSDs in a stripe and get 1.5GB/s while crunching numbers (OLTP and OLAP databases). They will pay for themselves in short order. About 200 gigs get pushed, prodded, poked, and generally shaken up during data runs. While I'm working to optimize the batches, they still take hours on the actual test box (which uses 6x SATA SSDs with carefully balanced loads).

Any money I save by building a cheap standalone box gets squished once I try to get the host to talk to it, unless I put both FreeNAS and the database server on the same ESXi host (and avoid getting shot for even mentioning FreeNAS on ESXi :P ).

It was just an inquiry for thought; not going to chase it any more.
 

joelmusicman

Patron
Joined
Feb 20, 2014
Messages
249
Actually, now that you mention it, colocating with ESXi is workable in your case. The stern warnings against it are for "production" use, whereas you're using it for development and have already said you can afford the data risk.

If you go that way, you'd definitely want sufficient RAM and a fast CPU to handle all the I/O concurrently with the other VMs' needs.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,525
You can do FreeNAS on ESXi. If you are okay with the fact that one day you might reboot the VM and the pool is gone with no chance of recovery, then go for it! That's literally what has happened to many people. That's why we basically add you to our "idiot" list if you talk about it. 99.99% of people just don't seem to get that, despite it being in black and white all over the forums.
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
He's referring to the NIC hardware; getting that speed somewhere else would require InfiniBand hardware or at least 10G Ethernet (which would still be the bottleneck on an SSD array). It's funny how easy it is for us to spend other people's money around here, though! :)

Cheap bandwidth? I've got two words for that - fibre channel. Install 9.2.1.1, get a couple of QLogic cards, connect them in PTP mode, follow the directions to enable target mode here and unleash hell.

Or as cyberjock says - if it's floating data, do FreeNAS on ESXi and accept that you're playing Russian Roulette with it. ;)

*cough* Or commit forum heresy and install a Solaris derivative *cough*
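
For a rough idea of what "enable target mode" means in practice, it comes down to bringing the isp(4) ports up in the target role, loading CTL, and exporting a zvol as a LUN. The lines below are from memory, with placeholder unit numbers and zvol path, so follow the linked directions rather than trusting them verbatim.

Code:
# /boot/loader.conf
ispfw_load="YES"       # QLogic HBA firmware
ctl_load="YES"         # CAM Target Layer
hint.isp.0.role="1"    # 1 = target role for the first QLogic port

# After rebooting, export a zvol over FC:
ctladm create -b block -o file=/dev/zvol/tank/fc-lun0
ctladm port -o on      # switch on the target ports
ctladm devlist         # confirm the LUN shows up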
 

joelmusicman

Patron
Joined
Feb 20, 2014
Messages
249
Cheap bandwidth? I've got two words for that - fibre channel. Install 9.2.1.1, get a couple of QLogic cards, connect them in PTP mode, follow the directions to enable target mode here and unleash hell.

Or as cyberjock says - if it's floating data, do FreeNAS on ESXi and accept that you're playing Russian Roulette with it. ;)

*cough* Or commit forum heresy and install a Solaris derivative *cough*

Holy crap that's so much cheaper than 10G hardware! Thank you orphan technology! I might just do that for my desktop to get the full DD speeds. :)
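
(For what it's worth, my usual dd sanity check looks something like the sketch below; the dataset and paths are made up, and since zeroes compress I turn compression off on the scratch dataset first.)

Code:
# Sequential write to a scratch dataset with compression disabled:
zfs create -o compression=off tank/bench
dd if=/dev/zero of=/mnt/tank/bench/testfile bs=1m count=8192

# Read-back; use a file larger than RAM (or export/import the pool)
# so the ARC doesn't hand you a cached number:
dd if=/mnt/tank/bench/testfile of=/dev/null bs=1m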
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
Holy crap that's so much cheaper than 10G hardware! Thank you orphan technology! I might just do that for my desktop to get the full DD speeds. :)

I've been hearing "X will be the year FC dies" about as long as "X will be the year of the Linux desktop." ;) There's a reason it's still around despite being ancient.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,525
AFAIK FC isn't officially supported on FreeNAS, so "good luck" if you want to use it. It's one of those technologies where, if you hadn't heard of it until now, your chances of getting it working are just about... zero.
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
Well, we're talking about an all-SSD pool built from underprovisioned consumer-grade drives holding test/development data. I think we're well into the stage where "Crazy Ideas" are allowed already.

But yes, I'll give you that FC is not newbie-friendly. For the cost of the older 4Gbps gear, though, it's worth playing with if you enjoy the world of storage.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,525
Well, we're talking about an all-SSD pool built from underprovisioned consumer-grade drives holding test/development data. I think we're well into the stage where "Crazy Ideas" are allowed already.

But yes, I'll give you that FC is not newbie-friendly. For the cost of the older 4Gbps gear, though, it's worth playing with if you enjoy the world of storage.

Why go with FC (which is virtually impossible if someone has to ask how hard it is to implement) when you can buy 10Gb LAN cards for $120 each on eBay? I run a direct 10Gb link from my main desktop to my server, and the total cost (if I had paid for it) would have been about $270 including cabling.
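
If you do go the direct 10Gb route, sanity-check the link with iperf before blaming the pool for slow transfers. Something like the below, with made-up addresses on a private subnet between the two cards:

Code:
# On the server end:
iperf -s

# On the desktop, four parallel streams for 30 seconds:
iperf -c 10.10.10.1 -P 4 -t 30
# A healthy direct link should report somewhere around 9 Gbit/s;
# much less usually points to MTU, driver, or PCIe slot issues.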
 