Build or Buy?

Status
Not open for further replies.

PrivateSnuffy

Dabbler
Joined
Oct 19, 2012
Messages
12
Hi all. New to the FreeNAS community; I haven't used it beyond smaller units for folks doing simple sharing, but something about it recently intrigued me. I'm a net admin at my job, and our aging Equallogic 4000E is soon to be out of warranty. We are moving most of our production to a datacenter, but we will keep Dev/DR at our location to run certain legacy systems. The cost to keep the SAN in warranty for another two years is close to $4500, and I'd really hate to throw $4500 down a toilet, as I think once it's over, the unit will be unsupported and we will need a new one anyway. Our IOPS is very low according to our DPAK tool (~300 IOPS across all our VMs, which is about 25 of them).

I'm scoping out some cheaper units, because after our production is moved out, the IOPS/space needed to get things going will be even more limited. I've pretty much done VMware with Equallogic exclusively, so it's a bit out of my comfort zone, especially since the whole ZFS/RAIDZ approach is a new way of thinking. I have a good bit of Linux knowledge from years of setting up LAMP stacks and administering our ERP system (xtuple), which was all Linux based at my old job. Currently I'm looking at some units from EMC, NetApp, and HP, but they all seem a bit too much for what I really need to accomplish.

I know that, done properly, FreeNAS functions very well with VMware and ESXi, since it supports both iSCSI and NFS. I think the biggest challenge is picking out the best equipment and getting the best performance out of the device. Again, I'm not trying to run highly critical stuff, but I am trying to match or beat the Equallogic we have currently (an 8TB model 4000E with 7.2k drives). I've put together some lists from Newegg to build one, or to just buy an off-the-shelf Dell 720 and shove drives in it (aftermarket, of course; $400 more than Newegg prices for a 1TB drive is highway robbery!).

Some questions I have are:
1. Will I have to have SSDs in RAID 1 for ZIL?
2. What's the best configuration of storage for a 12 x 1TB SAS 6.0 array?
3. Should I load more than 16GB of memory in it?
4. Dell forces me to get an H310; can that work with ZFS if I keep the drives out of a hardware RAID config?
5. Are the Supermicro barebones a good alternative?
6. Any of you guys have some tips when working with VMware?

I don't wanna be that guy; I've searched a bit on the forums already and have definitely gotten up to speed. Just looking for some opinions from fellow net admins.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
1) SSD's in RAID1 for ZIL are not necessary, unless you're using FreeNAS 8.2, where the older ZFS freaks out if it loses its ZIL. That said, mirrored SSD's are the best practice for avoiding data loss, so you do WANT a mirror if you choose to do ZIL.

2) For maximum redundancy at the cost of some performance? Probably two sets of six drives each in RAIDZ2. ZFS likes powers of two for the number of data drives (four data drives plus two for RAIDZ2 redundancy).

3) Have at it. More memory is fun, except beware it may require some tuning.

4) Dunno.

5) Absolutely.

6) Characterize your traffic. What are you doing? You can optimize for it. For example, around here, we build VM's that are designed to avoid "petty writes", which significantly reduces the write load. If you had a bunch of VM's needing random access to small bits of data, but pretty consistently the same small bits of data, enough RAM to hold those in ARC (or an SSD for L2ARC) is a real win. You have a mix of types of VM's? Consider whether maybe you could use more than one ZFS pool. Or maybe two smaller FreeNAS servers. I've been really sorely tempted to go build a Supermicro Atom D525 (dual Intel NIC, 1U, low power) with just SSD and probably will once the prices get down to like 50c/GB for the ~500GB SSD's.
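The layout math in #2 is easy to sanity-check. A minimal capacity sketch (the function name is mine, and it ignores ZFS metadata and slop-space overhead, so real-world numbers come in a bit lower):

```python
def usable_tb(disks_per_vdev, drive_tb, parity, vdevs=1):
    """Raw usable capacity of a pool of identical RAIDZ vdevs.

    parity=1 for RAIDZ1, 2 for RAIDZ2. Ignores metadata/slop overhead.
    """
    return (disks_per_vdev - parity) * drive_tb * vdevs

# Two 6-disk RAIDZ2 vdevs of 1 TB drives, as suggested above:
print(usable_tb(6, 1, parity=2, vdevs=2))  # 8 TB usable, 4 TB to parity
```

Versus a single 12-disk RAIDZ2, which would give 10 TB usable but with only the same two-drive failure tolerance spread across all twelve disks.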
 

PrivateSnuffy

Dabbler
Joined
Oct 19, 2012
Messages
12
Jgreco, thanks for the knowledge!

As for the type of VM's, it's a mix. Most of them will be a mirrored environment of our ERP system, so they are primarily read based (they're just the utility servers; they forward info to the main server/SQL setup). Others will be stuff like low-duty file servers, test machines for the IT staff, and other various equipment. Some of these machines, such as our Orion server, will have a local instance of SQL, but they are very tiny databases. To keep this in perspective, this ALL currently runs on our 16x500GB 7.2k Equallogic. We even have Exchange on there (which is soon going away)!

If I'm doing ZFS, it's pretty much all going to be software RAID? I mean, RAIDZ2 seems a lot like RAID6 to me; what worries me is that it is software RAID, not hardware. I guess since you can put the OS on a flash drive/USB disk it really doesn't matter, but it just seems like it would be slower to me. I dunno, that's my IT knowledge of the past conflicting.

If I had 12 disks, 6 per pool, couldn't they be different types? Like 6x 1TB 7.2k "Enterprise" SAS drives in one and 6x 300GB 15k SAS drives in the other?

Everything I've read regarding the ZIL is essentially what you've said: they recommend SLC SSD's for it, and MLC SSD's for L2ARC. I may go that route, since it will be a mixed bag of VM's and I'd rather have one beefy system than two slower ones, or one fast and one slow. If I get a 16-bay server, I'd probably populate it like so:

Bay 1-2: SLC SSD (2)- 8 or 16GB mirrored in Raid1
Bay 3: 256GB MLC SSD for L2ARC
Bay 4: 500 GB SATA for Freenas OS (came with the base system on Dell)
Bay 5-10: 1TB Seagate Constellation SATA - RaidZ2
Bay 11-16: 1TB Seagate Constellation SATA - RaidZ2

Seem about right? Do I need to (or should I) mirror the L2ARC or the OS drive?

I found a really good article about ZFS, ZIL, ARC, and L2ARC, and some of the suggestions are fairly straightforward and describe best practices well for folks like me. So basically, I understand it all as:

ZIL - Write Cache (pending)
ARC - Commonly used data held in RAM (read)
L2ARC - Commonly used data held on SSD's (read) if ARC is full
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
You can have different pools, yes. Around here, we find that storage tends to fall into one of several types of buckets. For example, there are files we access infrequently (~1-10 years) but want to have available on-demand. This includes document archiving, etc. For this, a 4x4TB in RAIDZ2 is more sensible than a 10x1TB RAIDZ2. Other stuff, need it fast, need it now, mirrored SSD. Then there's the middle-land. You can have different pools if that makes sense. Downside to multiple pools: ZIL and L2ARC devices are paired with a specific pool.

ZFS is, by definition, "software RAID". Sun recognized years ago that hardware might be faster but increases in CPU came at a lower cost, and at a point about ten years ago, maybe, it became really feasible to use general purpose silicon cycles. Now, it's a relatively trivial burden on modern processors to handle a small storage array. If it makes you feel any better, you should know that if you tore down some of these hardware RAID units, you'd find that they are software RAID with hardware assist to do the XOR computations anyways.
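That XOR point is easy to see in miniature. A toy single-parity demo (this is not how ZFS actually lays out stripes, just the principle that parity is cheap arithmetic):

```python
def xor_parity(blocks):
    """XOR equal-length blocks together. With single parity, any one
    lost block equals the XOR of the surviving blocks plus parity."""
    out = bytes(len(blocks[0]))
    for b in blocks:
        out = bytes(x ^ y for x, y in zip(out, b))
    return out

data = [b"AAAA", b"BBBB", b"CCCC"]
p = xor_parity(data)            # the parity block a RAID card would store
# "Lose" the middle block, then rebuild it from the survivors + parity:
rebuilt = xor_parity([data[0], data[2], p])
assert rebuilt == b"BBBB"
```

A modern CPU chews through this at memory speed, which is why dedicating silicon to it stopped making sense.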

You don't want to burn a SATA/SAS slot on an OS drive. Use a cheap USB flash. If you want to spend money, get a system that'll accept PCI-express SSD or one of the other such options.

You do not want to mirror L2ARC. It will fill and manage itself as needed, and loss is harmless. If you have two SSD's for it, rather than mirroring, add both for double speed fun.

ZIL is only loosely "write cache". It's the traditional concept that it is closest to.

ARC should always be approximately full, except right after a system boot.

L2ARC is what ARC gets swapped out to, if the system can manage it. That means that anything that's been accessed has a good chance of making it out to L2ARC, at least, and as such, any further accesses (until it gets pushed out of L2ARC) will be super fast.
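The ARC-to-L2ARC flow described above can be sketched as a two-tier cache. This is a deliberately dumb LRU model (the class and its names are mine; the real ARC is considerably smarter about balancing recency vs. frequency), but the demotion idea survives:

```python
from collections import OrderedDict

# Toy two-tier read cache in the spirit of ARC -> L2ARC: evictions from
# the small RAM tier fall into a larger, slower SSD tier instead of
# being discarded outright.
class TwoTierCache:
    def __init__(self, arc_size, l2_size):
        self.arc = OrderedDict()  # fast tier (RAM), kept in LRU order
        self.l2 = OrderedDict()   # slower tier (SSD)
        self.arc_size = arc_size
        self.l2_size = l2_size

    def get(self, key):
        if key in self.arc:
            self.arc.move_to_end(key)      # refresh LRU position
            return self.arc[key], "arc"
        if key in self.l2:
            val = self.l2.pop(key)         # promote back into ARC on a hit
            self.put(key, val)
            return val, "l2arc"
        return None, "miss"

    def put(self, key, val):
        self.arc[key] = val
        self.arc.move_to_end(key)
        if len(self.arc) > self.arc_size:
            k, v = self.arc.popitem(last=False)  # evict coldest from RAM...
            self.l2[k] = v                       # ...demote, don't discard
            if len(self.l2) > self.l2_size:
                self.l2.popitem(last=False)      # now it's gone for real
```

With `TwoTierCache(2, 4)`, three puts push the oldest key into the second tier, and a later get of that key comes back tagged "l2arc" rather than "miss". That promotion-on-hit is the "any further accesses will be super fast" part.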
 

PrivateSnuffy

Dabbler
Joined
Oct 19, 2012
Messages
12
So if you have two pools, you need two ZIL mirrors and two L2ARC drives? I see what you mean now about separate smaller units; that may be the route I take rather than building one super system. The other issue with one big system, from the networking side, is that even if you have the fastest box in the world with 10Gb Ethernet, it's still gonna get bogged down, probably at multiple levels.

I can make a small 4-6 bay SSD unit and a 6-8 bay 2TB array fairly cheap, and split the VM drives across the two units (large data and secondary drives on the big storage; OS data and high-perf stuff like SQL or heavily used crap on the SSD's, since those are typically smaller, especially if thin provisioned). The good thing is that the heavily used stuff, like you said, is kept in either RAM (ARC) or high-performance SSD (L2ARC).

Here's a few more questions:
1. Have you hit bottlenecks with SSD arrays on the built-in SATA?
2. What type of CPUs are recommended? I was speccing LGA 2011 builds.
3. Do you HAVE to use SLC for ZIL? MLC is more readily available and has made a lot of leaps in performance to match SLC.
4. That PCI-e idea ain't bad; does FreeNAS support them? Because I know the OCZ ones don't work with Linux, at least natively (mine doesn't).
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
You might want to take some time to ponder multiple units, at least. 10G isn't really a magic bullet. It's expensive and just introduces a more expensive single point of failure.

We're not at all married to the idea of ZFS for VM storage, either, since we're doing VM storage on iSCSI. ZFS doesn't seem to bring much except overhead and resource-pigginess to the table for iSCSI, though it'd be neat for NFS. It actually looks to me like we'll end up with VM's stored on Supermicro Atom 525-based systems, SSD would be fun but we could get by without, probably with UFS and FreeNAS, because it's really convenient.

The thing is, if you have two VM image servers and they're both less than half full, then when it comes time to do maintenance, or when one fails, you have a small, manageable problem, and can overload one with the stuff from the other, or even borrow a desktop. Need more speed? More capacity? Add another cheap unit. When you have a big monolithic 24-port server, options are ... reduced. Really, if you need one big server, then that is just the way it is, but you're at a great point to consider: does one size really fit all, or did we go with the 4000E because it was a compromise?

So anyways, looking forward, we're going to assemble something similar to the SYS-5015A (Supermicro's prebuilt uses the SC502 which is an icky "fan-less" design) like perhaps a CSE-510T-203B. Could be interesting.

To your questions:

1) Haven't done it, so can't say for sure. As with any selection of a server-class motherboard, look at the design and then check the Internet for people who've complained.

2) To ensure the most flexibility, you can pick a big CPU to make it less likely that you'll need to revisit the issue. However, in practical terms, I've yet to see FreeNAS stress out the CPU on an 1155-based Supermicro X9SCi with E3-1230, and in fact we're going to be running VM's alongside it on a 32GB system.

3) You can use MLC, it's just not as-preferred.

4) FreeNAS mostly supports basic storage types that FreeBSD does. I don't really keep track of all the possibilities, but there's disk-on-module solutions (which eat SATA ports), internal USB solutions (which don't), etc.
 

PrivateSnuffy

Dabbler
Joined
Oct 19, 2012
Messages
12
Looks good. My main concern then is: if I'm using UFS, wouldn't it be better just to do hardware RAID? Or I could go the NFS route, since VMware supports it, and use ZFS.

I did see this item, Might go for it, as it seems to be perfect for what we're doing if we just have multiple small units.
Asus 1U LGA1155 server barebones

Even has a quad nic
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Hardware RAID takes watts. Why bother if you have a bored CPU?

The ASUS 1U looks nice. Let us know how it goes if you go that route.
 

PrivateSnuffy

Dabbler
Joined
Oct 19, 2012
Messages
12

Only issue I have is it only can take 4 drives. Does that mean I wont be able to run ZIL/L2ARC?
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
You can do any configuration you want, as long as you don't want more than 4 drives.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
4 drives will offer whatever redundancy you want. Want to do 4 drives in RAID 10, you can. Want to do 3 drive RAID5, you can. Want to do 4 drive RAID6, you can.

I've never used a ZIL yet. You need to understand what a ZIL does and how it works to understand whether it will help in your situation. For a lot of situations it doesn't do you any good.
 

PrivateSnuffy

Dabbler
Joined
Oct 19, 2012
Messages
12

I read your PowerPoint documentation on the differences, btw; it was very useful!

Here's what I'm looking at:

Unit 1 - OS/Heavy Use Data
Asus 1U Barebones
4 X 480 GB SSD (3 + spare)
16GB ECC DDR3 1333
E3-1220 Quad CPU

Unit 2 - Large Storage
Asus 1U Barebones
4 X 2TB WD RE4 (3+ Spare)
16GB ECC DDR3 1333
E3-1220 Quad CPU

Can get all of that for less than what I'm gonna spend on the warranty renewal for the Equallogic.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
cyberjock said:
You can do any configuration you want, as long as you don't want more than 4 drives.

Well, to be fair, in that chassis, it looks like you could jam an SSD in there for L2ARC, there's a space for an optional DVD drive and some space by the power supply, and the board has two 6G SATA ports, so if that worked out, it'd really be an ideal platform for FreeNAS rack mounted. Compare that (plus processor plus memory) to something like iomega's low end rack offering px4-300r and consider that theirs is vegetarian (Celeron), 2GB RAM, and only has two gigE.

Three posts further down...

My comment on the proposed configuration in #14 would be that if you're looking for 6TB of space with that 4 x 2TB config, you can only tolerate the loss of a single drive. If you have backups elsewhere, that's quite possibly the way to go! But for additional reliability, consider a 4 x 3TB config and RAIDZ2. It'll be lower performance for writes but able to take a loss of two drives. I will note that I've complained long and loud about 4-drive RAIDZ2 configs being slow, but this seems to be a tuning-vs-performance tradeoff issue.
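The tradeoff in numbers, as a quick sketch (function name is mine; raw capacity only, no metadata or slop overhead):

```python
def layout(drives, drive_tb, parity):
    """Raw usable space and failure tolerance of a single RAIDZ vdev."""
    return {"usable_tb": (drives - parity) * drive_tb,
            "drives_you_can_lose": parity}

print(layout(4, 2, parity=1))  # 4 x 2TB RAIDZ1: 6 TB usable, survives 1 loss
print(layout(4, 3, parity=2))  # 4 x 3TB RAIDZ2: 6 TB usable, survives 2
```

Same 6 TB usable either way; the 4 x 3TB RAIDZ2 just buys a second drive's worth of failure tolerance at the cost of write performance.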
 

PrivateSnuffy

Dabbler
Joined
Oct 19, 2012
Messages
12

Loss of a single drive is good. I'm gonna buy an extra of each drive to keep on hand in case of failure, so I can send the failed one in for replacement without sweating bullets. We have a Data Domain system that does VADP-based backups, so we can store them there in case everything in Dev/DR crashes.

To answer the whole-picture question: this is gonna be dev testing / DR. It would really suck to have to reload it, but the chance is extraordinarily slim that anything production would touch it AND be lost. We have plenty of backups on both ends to prevent that from happening.
 

PrivateSnuffy

Dabbler
Joined
Oct 19, 2012
Messages
12
Also, if there is actually space if/when I get this in, I'll exploit it. I'm all about making it look good and functional. If I have to "jam" an SSD in a random location, I'll avoid it :)

I was thinking of putting it near the PSU, using double-sided tape on an SSD bracket to attach it to the case, so it's accessible, looks decent, and I don't have to look at something and say ewww.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Having never seen the case on the bench, it's difficult to know the precise options. However, the AIC RMC1D2-XP, a similar chassis that has the drive bays on the bottom, has sufficient space for three 2.5" devices on top of the 3.5" bays, possibly four or five if you "engineer" some of the screw posts out of the way. Just a matter of mounting. Finding space for a single L2ARC SSD, especially given the performance gains that could be involved, should be relatively easy. Let us know how it goes, eh?
 

PrivateSnuffy

Dabbler
Joined
Oct 19, 2012
Messages
12
Having never seen the case on the bench, it's difficult to know the precise options. However, the AIC RMC1D2-XP, a similar chassis that has the drive bays on the bottom, has sufficient space for three 2.5" devices on top of the 3.5" bays, possibly four or five if you "engineer" some of the screw posts out of the way. Just a matter of mounting. Finding space for a single L2ARC SSD, especially given the performance gains that could be involved, should be relatively easy. Let us know how it goes, eh?

Looks like the area near the PSU will work. I got a 128GB SSD; figure that's about all we'll need for L2ARC.

Here's the parts list
(Attachment: freenas setup.jpg)

Not bad! I'm gonna try this server out first, just to verify it's gonna do what we want. If it works great, then we'll either get another or go to a higher-speed unit for heavy traffic. I'll also order some of the breakables as spares, because I know ASUS's turnaround is horrendous: spare board, memory, PSU, HDD, and SSD.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Not your first rodeo, eh. You're in a good place to be doing this sort of thing, then.
 