BUILD My options

Status
Not open for further replies.

Bigdata992

Cadet
Joined
Apr 23, 2015
Messages
4
So I purchased a server with the intent of making it a FreeNAS NFS server for my cluster of ESXi 5.5 hosts. I bought a Dell R510 and stacked in 12 Samsung 1TB SSDs, plus a pair of RAID 1 mirrors inside for the OS.

On the PERC6 I set each drive up as its own single-disk RAID 0 (and no, I hadn't read cyberjock's awesome write-up; thanks, cyberjock, for the fantastic knowledge that I have now). Anyway, I am stuck with this server: a PERC with 12 solo 1TB drives, all mapped as individual datastores on the ESXi side. Six nodes are connected to them all and I basically let DRS move it all around to its heart's content. I am happy with the performance; I have about 200 virtual servers running and everything is running great. I am using the onboard NICs with link aggregation and getting great results.
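For what it's worth, the link aggregation FreeNAS sets up boils down to a FreeBSD lagg interface. A rough sketch of the equivalent rc.conf (interface names and the address are placeholders; on FreeNAS itself you'd configure this through the GUI, not by hand):

```shell
# Hypothetical FreeBSD-style LACP aggregation of two onboard NICs.
# igb0/igb1 and the IP are made-up examples, not the actual setup.
ifconfig_igb0="up"
ifconfig_igb1="up"
cloned_interfaces="lagg0"
ifconfig_lagg0="laggproto lacp laggport igb0 laggport igb1 192.168.1.50/24"
```

Note that LACP balances per-flow, not per-packet, so a single NFS connection still tops out at one link's bandwidth.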

I read cyberjock's statement about what not to get, after I had already got it all. I have read through the forums and am throwing myself on my sword for my stupidity. Short of throwing it all away and starting over, what modifications would you seers of FreeNAS recommend? Basically I am taking my startup website live in 3 weeks and I want something that can scale massively if needed. I have built an elastic framework of nginx web servers, Redis servers, and a few IIS web API servers that I spin up and down with Python. My goal is to slap in as many used Dell R610s with 96 GB of RAM, coming out of the cloud providers, as I need, and scale out as load increases through our marketing efforts.
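The spin-up/spin-down logic for a setup like that can be as simple as a threshold rule. A minimal sketch in Python (every capacity number and limit here is made up for illustration, not taken from the actual deployment):

```python
# Elastic scale-out decision: how many web-tier VMs to run for a
# given request load. Assumes a fixed per-node capacity, a warm
# minimum behind the load balancer, and a cap at the rack size.

CAPACITY_PER_NODE = 500   # assumed req/s one nginx VM can handle
MIN_NODES = 2             # always keep a warm pair running
MAX_NODES = 24            # cap at the number of hosts in the rack

def desired_node_count(current_rps: int) -> int:
    """Return the number of web-tier VMs that should be running."""
    needed = -(-current_rps // CAPACITY_PER_NODE)  # ceiling division
    return max(MIN_NODES, min(MAX_NODES, needed))

if __name__ == "__main__":
    for rps in (0, 400, 1200, 50_000):
        print(rps, desired_node_count(rps))
```

A real controller would also add hysteresis (scale down more slowly than up) so a bursty load doesn't thrash VMs.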

I am considering adding these Intel cards with this switch between the hosts and the NAS. If you were going to build a rocket ship to support a huge wave of user adoption, what would you throw into the mix? Amaze me with your wisdom and PM me, and I might offer you a contract position to configure everything after I install it.

Oh yeah I have a spare 80 GB FusionIO drive in my desk drawer. How would you throw that into the mix?

Thanks in advance for everyone's opinion.
Chad
CTO Lifespeed.io
 

Bigdata992

Cadet
Joined
Apr 23, 2015
Messages
4
Oh yeah BTW, I don't have a problem tearing it all down and putting in different cards and reconfiguring the drives.
 

marbus90

Guru
Joined
Aug 2, 2014
Messages
818
Slap in LSI 9207-8i HBAs flashed to the P16 IT-mode firmware. You'll also probably want to up the RAM on the FreeNAS box to 64+GB. Striped mirrors are the way to go. Aren't energy costs a concern for you? What's the power density in your colocation?
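For 12 drives, a striped-mirror layout means six 2-way mirrors striped together. Roughly this, as a sketch (pool name and device names are placeholders; on FreeNAS you'd build this through the volume manager rather than running zpool by hand):

```shell
# Hypothetical pool of six striped 2-way mirrors across 12 SSDs.
zpool create tank \
  mirror da0 da1 \
  mirror da2 da3 \
  mirror da4 da5 \
  mirror da6 da7 \
  mirror da8 da9 \
  mirror da10 da11
```

You give up half the raw capacity, but for VM storage the IOPS and resilver behavior of mirrors beat RAIDZ.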

What's your plan for storage redundancy? If the FreeNAS on that used server goes down, all VMs die. Also, the SSDs are consumer-grade without power-loss protection, meaning in the case of an unexpected power loss you'll lose everything in the on-SSD write cache, and that can be gigabytes.

The switches and NICs are ... acceptable. Although you might want to consider a TrueNAS with Chelsio NICs and 10GbE SFP+ networking; that can be important if you're going to scale and run many little transactions across the internal network. The TrueNAS is dual-controller capable, so there's no disruption if one controller derps up, and it comes with different service levels up to 24x7 with 4h response.
 

Bigdata992

Cadet
Joined
Apr 23, 2015
Messages
4
Cool, thanks for the heads-up on the HBA and RAM. I will research the striped mirrors like you say... Electricity isn't really a concern; the data center has a 100% uptime SLA and can deliver up to 22kW directly into my rack in less than 48 hours (for a fee, of course). I take 100% SLA's with a grain or two, and I have dual 3000W APCs I picked up at fire-sale prices when I bought my Cisco ASAs; I use those to "condition" the line. I got caught by the Equinix outage a few years back on their 100% SLA, so I stack in a few batteries and hope I never need them.

I have put a lot of time and effort into the DevOps side of this thing and am going to drop in a second FreeNAS for every 6 hosts, basically cookie-cuttering the entire rack. Each layer is load-balanced with HAProxy. The front end (a single-page app in HTML and JavaScript) I couldn't care less about; it is static and accelerated with CloudFront. Behind that sits the web API (a thin veneer over a microservices tier) and then the Redis data tier. Each Redis cluster has a node on rack 1 and a slave on rack 2, with a tertiary droplet at DigitalOcean.
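Each load-balanced tier amounts to an HAProxy frontend/backend pair along these lines (a minimal sketch; the names, ports, addresses, and the /health endpoint are all illustrative, not the actual config):

```haproxy
# Hypothetical web API tier behind HAProxy.
frontend web_api
    mode http
    bind *:80
    default_backend api_nodes

backend api_nodes
    mode http
    balance roundrobin
    option httpchk GET /health
    server api1 10.0.1.11:8080 check
    server api2 10.0.1.12:8080 check
```

The health checks are what make the elastic part work: a node that gets spun down simply fails its check and drops out of rotation.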

I have all of the tiers sitting idle on AWS, and if I threw a gallon of gas on the FreeNAS I believe I could have the cloud DR running and switch the traffic to it before the lights stopped blinking on the R510. There is no real data lost on the VMs; their sole purpose is to give my engineers virtual machines with DEV, QA, UAT, and OPS pipelines, plus all of the support servers that I need (Hadoop, etc.). The only thing I would miss is the CI. Setting up the CI was tough, but I am currently backing up the critical machines to Backblaze.com, and of course my source code is on GitHub. Anything important I back up to an R610 with 6 x 6TB drives in RAID 10 that syncs to box.com.

Thank you for the awesome advice; I have a quote request in to TrueNAS. Let's see what they come back with. You think the Chelsios are better than the Intels, I take it. What switch would you recommend?
 

Robert Trevellyan

Pony Wrangler
Joined
May 16, 2014
Messages
3,778
I take 100% SLA's with a grain or two
I don't know what your SLA says, but usually a 100% uptime guarantee doesn't mean "uptime will be 100%" (realistically, how could it?). Instead, it usually means "we will compensate you if we fall below 100% uptime." I think a lot of people fail to grasp this, then get very upset when the uptime percentage doesn't match the guarantee.
 