Store free or die hard (drive)


April 1, 2013

This was originally posted as part of the Spotlight on IT series.

Back in 2010, my company had a single server dedicated to each task in the office. It sounded great in concept, but we seemed to be constantly fighting downtime from failed components and slow performance from the aging equipment. While weighing our options for the necessary upgrades, we bit the bullet and purchased the VMware Essentials Plus bundle. It was on sale and offered the failover and HA on the compute clusters that we needed. We did not, however, go for the full “Standard” bundle, which would have given us Storage vMotion. As a small business, we must carefully balance cost against the perks of any given package, and Storage vMotion just didn’t make the cut. But by building out our storage platform on FreeNAS, we have been able to largely replace the need for that purchase.

Our storage network is as follows:

  • 2x Dell 2950 Gen2 — 16GB RAM (boots FreeNAS from flash)

  • 32GB SSD in the first server for SSD write caching in front of 8x 1TB external 7200 RPM SATA HDDs

  • 6x 2TB 7200 RPM SATA in RAID5 on a PERC 6/i controller in the second server

  • 2x Cisco 2960 24-port gigabit switches

  • 1x dual-port Intel PRO/1000 gigabit NIC per server (PCIe add-on card)

By using Intel add-on cards purchased on the used market, we gained redundancy at the network layer: one port from the Intel card and one port from the on-board Broadcom are routed to each switch, with iSCSI and MPIO providing the speed boost. Each server therefore has two routes, with two NICs per route. If any switch or port fails, a backup port is already transmitting.

We have not yet made use of the HA option that FreeNAS offers. When we are in a position to purchase replacement servers for the main compute nodes, I will push the current CPU nodes down to the storage level and configure two for HA and the third for backups. Until then, we have a single primary iSCSI LUN that boots all of the servers, and a second server that currently hosts only an NFS share to receive the backups.

When we tried to figure out the IOPS and the kind of performance we needed from our storage, it was not nearly as straightforward as we expected. Based on our pre-VMware configuration, we estimated 400 IOPS, so the first configuration had 5x 2TB 7200 RPM drives in a hardware RAID5. We quickly found this to be insufficient, and it became a numbers game balancing the cost of drives against the performance we wanted/needed to serve our staff and our customers. The current iteration uses FreeNAS with 8x 1TB HDDs and a 32GB SSD as a ZIL/log device. With the SSD in the mix, we can cache writes to the drives while, for the most part, giving instant access to reads. Adding the SSD alone resulted in a significant improvement in the performance of our storage platform. I couldn’t recommend anyone go without an SSD, knowing how cheap they are to purchase now.
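To give a feel for the numbers game above, here is a back-of-the-envelope estimate of array IOPS. The per-drive figure (~75 IOPS for a 7200 RPM SATA disk) and the RAID5 write penalty of 4 are common rules of thumb, and the 70/30 read/write mix is my assumption; the results are illustrative, not measurements:

```python
# Rough array IOPS estimate under common rule-of-thumb assumptions.
def array_iops(drives, iops_per_drive=75, write_penalty=4, read_fraction=0.7):
    """Effective IOPS for a RAID array under a given read/write mix.

    write_penalty: back-end I/Os performed per logical write
    (4 for RAID5: read data, read parity, write data, write parity).
    """
    raw = drives * iops_per_drive
    # Each logical write costs `write_penalty` back-end I/Os.
    return raw / (read_fraction + (1 - read_fraction) * write_penalty)

# Original 5-drive RAID5 vs. the 8-drive FreeNAS pool (spindles only,
# before any SSD caching).
print(round(array_iops(5)))  # 197
print(round(array_iops(8)))  # 316
```

Numbers like these make it clear why a 5-drive RAID5 fell short of a 400 IOPS target, and why the SSD log device mattered as much as the extra spindles.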

Looking back at the notion of using a vendor-supplied solution, I compared our setup to a Dell PowerVault iSCSI SAN, since that is as close as I can get to a comparable system. The Dell costs $5,379 according to Dell’s website. Below is a breakdown of where our money went on our setup.

  • Cisco 2960 switches — $1,100 per (two total)

  • Dell 2950 Gen2 — repurposed (free for this project)

  • 1TB HDDs — $60 per (eight total)

  • 32 GB SSD — repurposed after a failed desktop SSD upgrade project (that’s another story…)

  • 2TB HDDs — $150 per (six total)

  • 5x Intel PRO/1000 NICs — $35 per
  • SGI/Rackable SE3016 — $150

Total cost: $3,905
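As a quick sanity check on the itemized prices (repurposed gear counted at $0):

```python
# Parts list as itemized above; repurposed gear costs $0 for this project.
parts = {
    "Cisco 2960 switches": 2 * 1100,
    "Dell 2950 Gen2 servers (repurposed)": 0,
    "1TB HDDs": 8 * 60,
    "32GB SSD (repurposed)": 0,
    "2TB HDDs": 6 * 150,
    "Intel PRO/1000 NICs": 5 * 35,
    "SGI/Rackable SE3016": 150,
}
total = sum(parts.values())
print(f"${total:,}")                  # $3,905
print(f"${5379 - total:,} under the Dell PowerVault quote")  # $1,474 under ...
```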

So for well over $1,000 less than the cost of one piece of hardware without any drives, we were able to purchase two switches, a drive enclosure, 14 HDDs, and five dual-port gigabit NICs. We got lucky in that we had recently decommissioned the last of our remaining physical servers and could repurpose them for the storage platform, which cut a bit from the cost of the project, though not as much as you might expect.

At the moment, I can find Supermicro servers for around $300 with 2x quad-core 2.6GHz Xeons, 16GB RAM, and a 2U case with six drive bays. Assuming you don’t have any spare servers, add another $600 to the price and you have a total for your setup.

One note for those thinking of running FreeNAS with ZFS: make sure you have enough RAM. FreeNAS has detailed recommendations on its wiki, and they are not something I suggest you ignore. (Trust me, I tried on my home rig before loading it at the office.)
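The sizing rule of thumb often quoted for ZFS at the time was a base amount of RAM plus roughly 1GB per TB of raw pool storage. The exact base and per-TB figures below are my assumptions for illustration; check the FreeNAS wiki for its actual recommendations:

```python
# Oft-quoted ZFS sizing rule of thumb: base RAM plus ~1GB per TB of
# raw pool storage. Illustrative only -- the FreeNAS wiki is the
# authoritative source for real recommendations.
def zfs_ram_estimate_gb(raw_tb, base_gb=8, gb_per_tb=1):
    return base_gb + raw_tb * gb_per_tb

print(zfs_ram_estimate_gb(8))   # 8x 1TB pool  -> 16
print(zfs_ram_estimate_gb(12))  # 6x 2TB array -> 20
```

By this yardstick, the 16GB in our Dell 2950s is a comfortable fit for the 8x 1TB pool, with little to spare.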

Assuming you were to populate it with 12 (out of 24 bays) of the 2TB drives specified on the vendor’s web page, you would shell out $419 per drive, to the tune of $5,028 for 24TB of raw storage. Compare that to the Newegg price of $109.99 per drive, or $1,319.88 total, and you have a very significant saving without a hit to realistic performance, especially if you use a small SSD as a ZIL device. If you find you still want a bit more, you can always add a larger SSD as a read cache, at which point your perceived performance would be largely independent of the spinning drives.
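The drive-price arithmetic works out like this (prices as quoted above):

```python
# Vendor-listed vs. retail pricing for 12x 2TB drives, as quoted above.
drives = 12
vendor_total = drives * 419.00   # vendor-listed 2TB drive price
retail_total = drives * 109.99   # Newegg price for a comparable drive

print(f"vendor: ${vendor_total:,.2f}")                 # vendor: $5,028.00
print(f"retail: ${retail_total:,.2f}")                 # retail: $1,319.88
print(f"saved:  ${vendor_total - retail_total:,.2f}")  # saved:  $3,708.12
```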

At this point you can truly customize your solution using FreeNAS (or similar products) and get exactly what you need out of your box, without any of the extra “fluff” that comes with vendor-provided solutions and drives up the cost of what could very well turn out to be an inferior product. I have played with a number of the solutions offered by the open-source community, and I prefer FreeNAS (as you might have guessed) for its ease of setup and overall interface. It is simple and easy to understand.

Those of you who are well versed in BSD will note that by using a wrapped product, you lose out on system updates until FreeNAS pulls them into its releases. But in the year and a half we have been using open-source solutions, I have run both direct and packaged distributions, and I have not come across one I would recommend the way I do FreeNAS.

As you look out on the landscape of open source vs. closed source and new vs. used, please remember that it takes millions upon millions of dollars to market a product from one of the top vendors, and that cost is always wrapped back into their products. If Google can use consumer-grade equipment in its datacenters in place of enterprise-grade equipment and spend the savings on redundant circuits, why can’t we? I would personally rather have three redundant servers that die once a year than one server that dies once every two years and takes my entire business to its knees. In this day and age, we simply can’t afford to be down, and our customers can’t afford it either. That does not mean, however, that we have to shell out $50,000 to build a cluster of three servers with redundant datastores, redundant switches, and a bunch of features we don’t want or need, just to tell management that we bought something with a warranty.

In a small business, we are the first and last stop on the blame train, so why not take the time to learn some new tech and put those new skills to work making your company a more redundant environment?

Rob Fauls
IT Director
Southern Freight, Inc.
