Server with a PERC 5 board


jonandermb

Explorer
Joined
Jan 15, 2014
Messages
76
Hi!
I have been following this distro for a long time, and today I finally got the chance to implement it at my company, on an old Dell server.
This server has a PERC 5 board controlling a RAID 5 setup (4 SATA disks), but I guess it's best to avoid that and let FreeNAS deal with every disk as an independent unit, so I can create a software RAID with ZFS. Am I right?

Then I have some configuration issues, but I guess I'll have to look around for some how-tos first. I won't bore you all with those yet...
 

Rand

Guru
Joined
Dec 30, 2013
Messages
906
IIRC you can flash the PERC 5 to IT mode, or, since you have old Dell servers, maybe you can find an HBA instead of a RAID controller; they used to be in practically every other machine...

But yes, it's better to present single drives directly to FreeNAS instead of a preconfigured RAID or dummy RAID 0 drives.
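For what it's worth, once FreeNAS sees the four disks individually, a raidz pool is a one-liner from the shell. A minimal sketch, assuming the drives show up as ada0 through ada3 (device names will vary, and the web UI is the normal way to do this, since it also handles partitioning and swap):

    # one raidz1 vdev across four whole disks; survives a single-drive failure
    zpool create tank raidz1 ada0 ada1 ada2 ada3
    zpool status tank    # verify the vdev layout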
 

jonandermb

Explorer
Joined
Jan 15, 2014
Messages
76
I don't know about that IT firmware: I've been looking around a bit and I still don't have it clear... Is it some kind of firmware that disables the PERC's RAID function altogether? Isn't there a more straightforward method, like disabling the device entirely, or just a setting that lets the disks pass through it?
If it's too difficult, I'm considering leaving it this way, since it's not going to be a very critical network share and it's more of a test... but if anyone can elaborate a bit more, it'll be welcome :)
 

tio

Contributor
Joined
Oct 30, 2013
Messages
119
The only way you can use a PERC card with FreeNAS is to create single-drive RAID 0s. Then FreeNAS will be able to manage them.
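If you go that route, FreeBSD drives the PERC 5 through mfi(4), and you can sanity-check what the card is exposing from the shell. A quick look, assuming mfiutil is available (it ships in the FreeBSD base FreeNAS is built on):

    mfiutil show drives     # physical disks behind the controller
    mfiutil show volumes    # the single-drive RAID 0 volumes, which FreeNAS sees as mfid0, mfid1, ...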
 

jonandermb

Explorer
Joined
Jan 15, 2014
Messages
76
The main problem I see if I set up four RAID 0 devices is: if someday the PERC stops working, can I just detach those disks and attach them to plain SATA ports to recover the data, or does the PERC RAID 0 setup write extra metadata on the disks, making them unusable as single devices?

EDIT: Thinking about it... what are the drawbacks of having one big RAID 5 virtual disk managed by the hardware? I mean, in case of a disk fault, it's less work to just let the PERC handle it and, while everything still runs normally, replace the disk, instead of having to do whatever process ZFS might require.
 

jonandermb

Explorer
Joined
Jan 15, 2014
Messages
76
Thank you, thank you, thank you! I'll wait for that link.
The problem now would be making do with the computer I have, regardless of its hardware, since I took it "recycled"... It was used for image processing for a long time, bought maybe in 2006, but quite high end: each disk is 1 TB, and it's got four of them! Plus a quad-core 2.5 GHz Intel CPU... The RAM might be its weak point: 4 GB @ 667 MHz, but I think that's more than enough for 4 or 5 simultaneous connections via SMB and/or AFP...
Meanwhile, if anyone can shed more light on that IT firmware, I'd be grateful.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Right, well, recycled is nice, but - and this is going to sound horrible, like I'm some sort of new-gear bigot - basically you're being saddled with someone else's 250K-mile used car that "works great." And it probably does "work great" and all that.

Now, honestly, our network still has some gear even older than that in production. So I'm not actually a new-gear bigot. But I *am* pragmatic and look at it from a capex/opex point of view.

So here's basically the thing:

Even some of our 2009-era systems here eat 200 watts to idle, while some of our 2011-era systems have twice the CPU speed and eat 60 watts at idle.

Please do yourself a favor. Go get a kill-a-watt meter. They're usually about $25-$30. Plug in your server and load a FreeBSD Live CD and let it boot up. See what the thing idles at in watts.

Now figure out what a similar system would draw on today's hardware. Be conservative and guess at maybe 80 watts. The math works out in kilowatt-hours: running a 200-watt box instead of an 80-watt box means you're burning an extra 120 watts. Times 24 hours per day, times 365 days per year, that's 1051 kilowatt-hours per year. Around here electric pricing is around 14c/kWh, which works out to $150/year or thereabouts. Figure that you would like a 5-year service life, and the new hardware has probably paid for itself.
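To make that arithmetic easy to redo with your own meter reading, here it is as a shell one-liner (the 120-watt differential and the 14c/kWh rate are the assumed figures from the paragraph above; substitute your own):

    # extra watts * 24 h * 365 d / 1000 = kWh/year; times the rate = $/year
    echo "120 * 24 * 365 / 1000 * 0.14" | bc -l    # ~147 dollars/year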

In the meantime you'll have a modern system that's more responsive and has sufficient memory (because 4GB is not enough to run ZFS reliably under FreeNAS) and is actually designed for the task. Plus you won't be blowing all that heat into your house, which then may cost more money to remove with A/C.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,525
Yeah.... all that stuff jgreco just said. I bet if you do the math you'll find out that new hardware will pay for itself in just a few years.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
And I kind of regard 2006-2007 as the peak of the insane watt-burn era. I mean, it's still quite possible to design a system that eats lots of watts. That big 12-core beast I have idles at 220 watts, which isn't really all that great, but it replaces three smaller systems each burning 60, and it has 12 drives spinning on a 32GB FreeNAS VM on top of that, so if you look at it the right way that could've been 240 watts idle...

But when I was trying to design efficient systems in 2005-2006, carefully picking and choosing low-power parts like the super-expensive $500+ Opteron OSB240EE (30 watts), the platform still wound up eating 100 watts at idle with 4 drives. Lots of systems simply guzzled watts, with little to no thought given to efficiency. The parts weren't designed for low power. And in that era, 4GB *was* a pretty decent amount of RAM. And CPUs were significantly slower. So every time I hear someone say "I got this great deal on an old server", I'm thinking to myself... "no you didn't."

Call me cynical, 'cuz I am!
 

jonandermb

Explorer
Joined
Jan 15, 2014
Messages
76
I agree with you, but I'm setting this up at my company, so the basic thinking is: use this or there is no NAS server, 'cuz you ain't getting a new one...
Times are hard, and more so for a small-sized company.
And let me heat the debate up: Linux, or any *nix system, claims to be able to run on low-end machines and old hardware. I myself am an open source enthusiast and use Arch Linux on my media center and on my laptop back home (2006-2007 equipment, upgraded a little through the years), Android phones and even a Raspberry Pi. Those numbers you ran are still no reason to buy hardware: $150 a year over 5 years is $750, not enough to buy a new (decent) server. And consider that if I replaced it with a new one, I'd still consume electricity; say with a new computer I would spend $60 a year on electricity. The difference in electricity expenses after five years would then be less: $450, which does not pay for a new (decent) server... IMHO.
TL;DR: I'm stuck with this server, and if I want to build a NAS, this is what I must use. If not: pull the plug, throw it away and forget FreeNAS ever existed.
:)
Appreciate your opinion, though....
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
I agree with you, but I'm setting this up at my company, so the basic thinking is: use this or there is no NAS server, 'cuz you ain't getting a new one...
Times are hard, and more so for a small-sized company.

Times are hard all over, my friend. I'm one of the few people here with a business ownership background. Most of these guys are just looking for reliable storage for their home needs.

And let me heat the debate up: Linux, or any *nix system, claims to be able to run on low-end machines and old hardware:

That's a load of horsehooey. While it is true that many *nix systems can be made to run on low-end machines or older hardware, your argument is inverted: just because SOME *nix systems can run on small hardware does not mean that ALL of them MUST be able to.

Please note that I have some significant history in this arena, as I was the guy who wrote the original FreeBSD-on-a-floppy that resulted in PicoBSD and then inspired NanoBSD, which is what FreeNAS is based on. I do understand and appreciate the ability to recycle old hardware, because doing exactly that, years ago, is what led to that work.

But here's the other end of the equation: one of my early fileservers was a 68020-based system with 4MB (not a typo) of RAM in it. Nowadays you'd have difficulty finding a *BSD or Linux variant that could boot in that, right? So you can also start from the assumption that resources are cheap and readily available, and are becoming more so. That's what Sun did with ZFS: it's a filesystem that ASSUMES the system has at least a gig of RAM. FreeNAS bumps that up a bit so that it works correctly "out of the box" on the largest variety of systems. In 2006, when ZFS was new and 4GB of RAM was a lot of money, I felt ZFS was impractical for many uses. Now I can easily drop 32GB on a board for much less...

So if you want a NAS that can operate on a shoestring RAM budget, they're out there ... it's just not FreeNAS.

I myself am an open source enthusiast and use Arch Linux on my media center and on my laptop back home (2006-2007 equipment, upgraded a little through the years), Android phones and even a Raspberry Pi. Those numbers you ran are still no reason to buy hardware: $150 a year over 5 years is $750, not enough to buy a new (decent) server. And consider that if I replaced it with a new one, I'd still consume electricity; say with a new computer I would spend $60 a year on electricity. The difference in electricity expenses after five years would then be less: $450, which does not pay for a new (decent) server... IMHO.

Well, in this case your HO is in error, because I already took the differential: the $150 a year IS the differential, so it really is $750 saved. But you're really encouraged to pull out a power meter and measure your existing server. It's very likely to be higher than 200 watts if it was intended as a high-performance server. Some of those things waste 100 watts just on the fans to cool them.
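Spelled out with the same assumed figures (200 watts for the old box, 80 for a replacement, 14c/kWh):

    echo "200 * 24 * 365 / 1000 * 0.14" | bc -l    # old box: ~245 $/year
    echo " 80 * 24 * 365 / 1000 * 0.14" | bc -l    # new box:  ~98 $/year
    # the ~$147/year gap is already old-minus-new; over 5 years that's ~$735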
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,525
I got a new motherboard, CPU and RAM (all server-grade stuff) for less than $750. So claiming it's a price thing is wrong, in my opinion.

However, what is absolutely wrong is trying to do RAID 0s of your disks. That, in my opinion, is just asking for trouble. There's a reason why we use HBAs, and there's a reason why you don't do hardware RAID. I talked to a guy in December who lost his pool because he did RAID 0s on a PERC 5/i. You need to realize (especially for business) that you either do it right or you are doing it very wrong. There is no "middle ground". ZFS wasn't intended to let you build a server in whatever way is most economical. And if you start putting economics ahead of doing it right, you're going to be back here in a few months asking how to recover your pool.
 

jonandermb

Explorer
Joined
Jan 15, 2014
Messages
76
OK, so one big hardware RAID 5 it is... If, in the future, I can convince the costs and purchasing department to buy me a new toy, I'll build something more suitable.
When you say you bought server hardware, do you mean buying through Dell or HP, or just buying high-end hardware at a local store or online?
I'm used to buying via Dell because of the customer service, but nowadays I'd rather save some bucks and buy through other channels.
BTW, I didn't say it yet, and I don't mean to patronize, but I really like the look of FreeNAS (what I have been able to test so far): the link aggregation setup is so easy, and the web UI... (only missing a file explorer)
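For reference, the GUI appears to ride on FreeBSD's lagg(4) underneath; what it sets up seems roughly equivalent to doing this by hand (em0/em1 and the address are placeholders for my interfaces, and the switch has to speak LACP too):

    ifconfig lagg0 create
    ifconfig lagg0 up laggproto lacp laggport em0 laggport em1
    ifconfig lagg0 inet 192.168.1.10/24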
jgreco, so you are Andrzej Bialecki?

EDIT: I just got a random reboot... come on... let the "I told you so" parade commence :D
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
jgreco, so you are Andrzej Bialecki

No, I'm the guy who made what may well be the first FreeBSD "appliance" by cramming a FreeBSD kernel and a custom userland (including a custom init) onto a floppy so that a PC with a floppy and no hard drive could do something useful.

Andrzej got all excited about the implications and in the finest free software tradition made something fabulous out of it.... PicoBSD.

I spent a lot of time in the early '90s doing that sort of thing. I created an appliance-ized version of Solaris for hospital operating-room use by a medical monitoring device, with creepy similarities to NanoBSD, such as dual boot partitions for live upgrade and rollback, read-only partitions to ensure no fsck issues, etc. Oh, and it had to go from power-on to a big GUI app up and running in less than a minute... at a time when it took Solaris more than 3 minutes just to get to a text-mode login prompt. Sun said what we wanted wasn't possible. It was, though.

That was the expertise I applied to FreeBSD to make an appliance on a floppy.
 

jonandermb

Explorer
Joined
Jan 15, 2014
Messages
76
I must say, I've been using this setup for months now with total reliability. One thing, though: you cannot overload the system with jails and things like that; that's when it runs out of memory and things go south...
As long as I have two active datasets, no dedup, ECC memory, one Samba share, another AFP share and one machine using iSCSI, the server doesn't get loaded at all, and RAM usage sits at about 50%, barely going up. No more random reboots or weird behaviour.
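If anyone wants to watch memory the same way: most of the RAM FreeNAS "uses" is the ZFS ARC, and you can check its size from the shell (one quick way, not the only one):

    sysctl kstat.zfs.misc.arcstats.size    # ARC size in bytes
    # top(1) on FreeBSD also prints an "ARC:" summary line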
I'm expecting a new 16 GB toy, though, for other purposes. Both systems will be in production (this one already is).
This system is awesome and the community around it, even better.

Thanks!
 