FreeNAS isn't what I expected?


Trif55

Dabbler
Joined
Aug 27, 2014
Messages
25
Having read many other things online while trialing FreeNAS, I've just read the very informative "FreeNAS guide presentation by CyberJock", and I'm concerned FreeNAS is not the low-cost solution for making the most of spare old HDDs that I thought it was.

This seems to stem from ZFS's fragility regarding RAM issues, combined with FreeNAS/ZFS's limitations around vdevs, which are rather restrictive for being the granular building block of a pool.

As such, I guess a spare (old) AMD-based tower with non-ECC RAM (GA-K8NSC-939 + 2 GB RAM), combined with some PCI SATA card from eBay and hard disks which have already had years of use, will likely make you all angry :-(

And I can see why, as whatever redundancy I build into the final zpool could be entirely negated by bad memory during a scrub, the lack of a UPS, or a myriad of other things!

Now I'm wondering whether this is just a perception I've developed because FreeNAS is new to me and I'm not comfortable with it, and the presentation is a (probably very necessary) stark warning of the risks. Am I just as likely to have something go wrong with files shared from standalone drives in Windows 7 PCs, and I've just been lucky enough to avoid it? Or is FreeNAS really that sensitive?

~

A little history of my experiences with drives etc:

I've never actually lost a drive (by sheer luck), but I learnt enough to keep two copies of everything I hold dear (or even just useful, or slow to download).

I once had the drive in my main PC go down with everything on it; after some research I discovered it was a known fault with that drive's firmware, which Seagate(?) fixed for free in a few days (at their miracle factory somewhere in Europe)!

I also never really trust myself when copying drives/moving PCs, so I keep a copy around etc. I'd generally consider myself careful with my data - I have my OS drives imaged to a USB disk and all my documents synced every few days to another USB disk.

~

I hope that shows I'm not the type to throw caution to the wind! I've been planning to build two ATX-tower-based NAS boxes, either both running FreeNAS or possibly FreeNAS and a different storage OS (for added fault-tolerant diversity?).

The second, likely slower and more power-hungry one (more, older disks) would go in a separate building and only be powered up for nightly or weekly backups from the primary 24/7 system. As you say: if you value it, make backups! :)
(I'm possibly also considering backing up my personal data to a third place, less frequently, just to be sure! lol)

~
So,
Plan A:

With all that in mind, is that AMD system up to the task of providing a single large striped CIFS share at, say, 90 MB/s (across, say, 3x 2 TB drives)?

And do you see any problems with a redundant system, probably based on a similar tower + mobo with a bundle of random drives striped together, just being powered up to sync every so often?

Plan B:

Having read your spec suggestion section, and not having a second spare tower handy, I'm wondering about a Pentium G2020, 16 GB of ECC RAM, and ideally a similar Supermicro board (though at half to two-thirds of the price?). I've got a 400 W Corsair CX PSU and I think just about enough other parts to build it for, say, £250, which is probably close to my maximum budget.

Also, while looking into the fault-tolerance side of RAIDZ1 and RAIDZ2, I came across this interesting article: http://nowhereman999.wordpress.com/2009/04/19/zfs-freenas-a-poor-and-very-geeky-man’s-drobo-setup/ which basically suggests partitioning the disks in a system in such a way that you build a BeyondRAID-style array manually. Would this be worth looking at, to allow combinations of random-sized drives to be fault-tolerant of up to two failures? Also, is there another way to shrink an array to compensate for a missing disk that cannot be replaced?

It would obviously be nicer if my primary system wasn't trying to fall over all the time, and to give it some resilience if I'm spending money on it :)


In this situation, would the AMD system mentioned above be worth having as a backup system? Or will it never be stable enough to rely on even for that?

(Sorry that's all rather long, but I've had a lot of thoughts come and go these last few weeks while researching FreeNAS! :) )
 

anodos

Sambassador
iXsystems
Joined
Mar 6, 2014
Messages
9,553
Having read many other things online while trialing FreeNAS, I've just read the very informative "FreeNAS guide presentation by CyberJock", and I'm concerned FreeNAS is not the low-cost solution for making the most of spare old HDDs that I thought it was.

It's not really, but there are other software solutions that may better meet your needs.

As such, I guess a spare (old) AMD-based tower with non-ECC RAM (GA-K8NSC-939 + 2 GB RAM), combined with some PCI SATA card from eBay and hard disks which have already had years of use, will likely make you all angry :-(
It doesn't make me angry, but you probably won't even be able to get it to boot. If it does boot, then your kernel may panic due to memory exhaustion.

Now I'm wondering whether this is just a perception I've developed because FreeNAS is new to me and I'm not comfortable with it, and the presentation is a (probably very necessary) stark warning of the risks.
If you are planning to go with the aforementioned hardware, then you shouldn't be comfortable using FreeNAS.

Am I just as likely to have something go wrong with files shared from standalone drives in Windows 7 PCs, and I've just been lucky enough to avoid it? Or is FreeNAS really that sensitive?
That's an apples and oranges comparison.


Plan A:

With all that in mind, is that AMD system up to the task of providing a single large striped CIFS share at, say, 90 MB/s (across, say, 3x 2 TB drives)?

Probably not. BTW, striping is a bad idea. Especially with old drives.

And do you see any problems with a redundant system, probably based on a similar tower + mobo with a bundle of random drives striped together, just being powered up to sync every so often?
Doubling down on a bad idea doesn't make it better. :)

Plan B:

Having read your spec suggestion section, and not having a second spare tower handy, I'm wondering about a Pentium G2020, 16 GB of ECC RAM, and ideally a similar Supermicro board (though at half to two-thirds of the price?). I've got a 400 W Corsair CX PSU and I think just about enough other parts to build it for, say, £250, which is probably close to my maximum budget.

That is much better hardware. Make sure you buy a server motherboard (not a desktop one). Striping your drives is still a bad idea.

Also, while looking into the fault-tolerance side of RAIDZ1 and RAIDZ2, I came across this interesting article: http://nowhereman999.wordpress.com/2009/04/19/zfs-freenas-a-poor-and-very-geeky-man’s-drobo-setup/ which basically suggests partitioning the disks in a system in such a way that you build a BeyondRAID-style array manually. Would this be worth looking at, to allow combinations of random-sized drives to be fault-tolerant of up to two failures?

Don't follow that blog post. Use the FreeNAS documentation / handbook.

It would obviously be nicer if my primary system wasn't trying to fall over all the time, and to give it some resilience if I'm spending money on it :)
FreeNAS / ZFS is resilient.

In this situation, would the AMD system mentioned above be worth having as a backup system? Or will it never be stable enough to rely on even for that?
I suppose you could run a different OS (like CentOS) on it and rsync your data to it, but you have to remember that you really want your backup solution to work.
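For what it's worth, a minimal sketch of what such an rsync job might look like (the hostname and dataset paths here are made up; adjust to your own layout):

Code:
# push the primary's data to the backup box over SSH
# -a preserves permissions/times, -v is verbose, --delete mirrors deletions
rsync -av --delete /mnt/tank/data/ backup-box:/mnt/backup/data/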
 

dikk

Dabbler
Joined
Jan 24, 2013
Messages
11
Trif55, have you considered NAS4Free? It's a continuation of FreeNAS 7 and is better suited to older hardware, as it's less resource-hungry. I've used it fine on older PCs in the past.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,525
The advice of the others is pretty much spot-on.

FreeNAS 7 was for reusing old hardware. FreeNAS 8+ is not. FreeNAS (the name) diverged at FreeNAS 8. The FreeNAS 7 guys took the code and made NAS4Free while FreeNAS 8 was solely iX and was a rewrite from scratch. The current FreeNAS is meant as an enterprise-class file server solution (and is sold as such as TrueNAS). NAS4Free has no commercial side and is more for enthusiasts wanting to reuse old hardware. To be honest, most of the people that reuse old hardware aren't doing themselves justice by going with non-ECC RAM, but I won't start that discussion because it's been discussed to death. ;)
 

panz

Guru
Joined
May 24, 2013
Messages
556
I've always wondered whether NAS4Free, with server-grade hardware, could be as reliable as FreeNAS.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,525
I've always wondered whether NAS4Free, with server-grade hardware, could be as reliable as FreeNAS.

I'd assume so, as long as their code isn't unreliable. The "secret sauce" that goes on behind the WebGUI is going to make or break the reliability of the system, in my book. I deliberately avoided the original FreeNAS 7 project back then because I felt it had major problems with data "just disappearing" and such. When I read about it circa 2008 or so, I saw so many "behind-the-scenes" problems that I wasn't about to trust an OS with "beta" on it with my data.
 
L

Guest
I personally have been running it for over a year on spare parts. There is a tendency for people who have been bitten by bugs, or who have lost data to non-ECC RAM, to give warnings. I have seen hundreds of weird architectures. If you want to build a low-end system, just build one, with the understanding that it can break; the same is true for any low-end NAS. But with FreeNAS you can build in lots of redundancy, so you won't have to worry (as much).

All that being said, FreeNAS has most of the enterprise features of high-end storage like EMC or NetApp, and can be built out of commodity parts. If you don't want to buy the better hardware, beware of the things that can go wrong. The ease of migration is also fantastic: if you want to move to a bigger/better server, you can export the pool and import it there.
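For the curious, a minimal sketch of that export/import round trip from the shell, assuming a pool named "tank" (FreeNAS normally drives this through the GUI):

Code:
# on the old server: cleanly flush and detach the pool
zpool export tank
# on the new server: scan the attached disks and bring the pool back up
zpool import tank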
 

panz

Guru
Joined
May 24, 2013
Messages
556
Apropos of exporting a pool to use it on different hardware: I noticed that you have to export it from the command line (as the manual says), but the instructions are unclear about encrypted pools.
 

Trif55

Dabbler
Joined
Aug 27, 2014
Messages
25
It doesn't make me angry, but you probably won't even be able to get it to boot. If it does boot, then your kernel may panic due to memory exhaustion.
For interest, it does boot quite happily. I messed about with SMART a bit, made a ZFS volume (mirror), shared it, and sent a 4 GB file up. That did take a while; traffic was bursty, going up to 40mbps briefly and then stopping, as if a cache was filling before disk writes occurred or something. Getting the file back down managed a solid 40mbps :) Anyway, I know no one will be interested in troubleshooting this; call it a fun experiment and I'll go look at NAS4Free and other less demanding software :)


That's an apples and oranges comparison.
Why do you say that's apples to oranges? It's still operating systems and spinning disks, isn't it? Or is it like comparing an old diesel pick-up truck (puts up with any old rubbish) to a modern supercar (monthly visits to the main dealer to keep it performing)?


Doubling down on a bad idea doesn't make it better. :)
This made me laugh out loud! :)

If you are planning to go with the aforementioned hardware, then you shouldn't be comfortable using FreeNAS.
Is that because FreeNAS is designed for totally different hardware? Or wouldn't you trust any OS to cope with that hardware?

Probably not. BTW, striping is a bad idea. Especially with old drives.

That is much better hardware. Make sure you buy a server motherboard (not a desktop one). Striping your drives is still a bad idea.

Don't follow that blog post. Use the FreeNAS documentation / handbook.

FreeNAS / ZFS is resilient.
I'm grouping these together under the "I think I've missed something about striping" pile. If I had enough drives in the system, would I see RAIDZx options when making zpools? Wouldn't that tie me down to drives of the same size? I guess this kind of problem applies to any OS I use? Would striping/extending drives together under a RAID 6 (combining smaller drives to match larger ones) be a valid solution, or is that still ropey?

I suppose you could run a different OS (like CentOS) on it and rsync your data to it, but you have to remember that you really want your backup solution to work.
(And partly in reference to your doubling-down comment:) while it may not have a million-hour MTBF any more, I'd expect a system like this to hang together for a while, especially if it was only powered on once a week. Sure, it might go down, but not at the same time as whatever it's backing up, and even then the drives would still be intact to recover from. (I take your point about striping though; it'd be nice to have 1-2 drive redundancy ;-) )



The advice of the others is pretty much spot-on.

FreeNAS 7 was for reusing old hardware. FreeNAS 8+ is not. FreeNAS (the name) diverged at FreeNAS 8. The FreeNAS 7 guys took the code and made NAS4Free while FreeNAS 8 was solely iX and was a rewrite from scratch. The current FreeNAS is meant as an enterprise-class file server solution (and is sold as such as TrueNAS). NAS4Free has no commercial side and is more for enthusiasts wanting to reuse old hardware. To be honest, most of the people that reuse old hardware aren't doing themselves justice by going with non-ECC RAM, but I won't start that discussion because it's been discussed to death. ;)

The topics along the lines of "storage OS for a cheap file server" that pointed me to FreeNAS may well have been old, or contained comments from people who were thinking of FreeNAS 7.

Why does RAM suddenly become such an issue with these systems compared to Windows? Or is that just perception again?

I'd assume so, as long as their code isn't unreliable. The "secret sauce" that goes on behind the WebGUI is going to make or break the reliability of the system, in my book. I deliberately avoided the original FreeNAS 7 project back then because I felt it had major problems with data "just disappearing" and such. When I read about it circa 2008 or so, I saw so many "behind-the-scenes" problems that I wasn't about to trust an OS with "beta" on it with my data.

This concerns me about NAS4Free. Do you recommend any other storage OSes with better reliability that'll be OK with old hardware?
 

Trif55

Dabbler
Joined
Aug 27, 2014
Messages
25
Correction: the disk that isn't a bit faulty reads and writes at 40mbps over CIFS; the other disk doesn't like being written to (it used to drop out of Windows every now and again). I ran SMART on it, but I hadn't worked out how to actually see the logs, so I just hoped for the best while testing, lol :)
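(In case anyone else gets stuck at the same point, a minimal sketch of reading SMART data from the shell with smartctl; the device name is an example, substitute your own:)

Code:
# full SMART report: health, attributes, and the error/self-test logs
smartctl -a /dev/ada0
# kick off a short self-test; the result appears in the -a output afterwards
smartctl -t short /dev/ada0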
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
Striped volumes are dangerous to your data because if a single disk fails, all data on that volume is lost. This is true whether you're using ZFS under FreeNAS (or some other BSD project), or RAID0 under Windows, Linux, or some other OS. RAIDZ and mirrors give you redundancy, such that one or more disks may fail (depending on the specifics of your setup) without harming your data. Of course, you'll sacrifice some capacity to give you that redundancy.
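To make the difference concrete, here's a rough command-line sketch of the layouts (FreeNAS normally builds pools through the GUI; pool and device names here are placeholders):

Code:
zpool create tank ada0 ada1 ada2               # stripe: lose any one disk, lose everything
zpool create tank raidz1 ada0 ada1 ada2        # RAIDZ1: survives one disk failure
zpool create tank raidz2 ada0 ada1 ada2 ada3   # RAIDZ2: survives two disk failures
zpool create tank mirror ada0 ada1             # mirror: survives loss of either disk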

ZFS, at least as implemented in FreeNAS, requires lots of RAM, mainly to implement its cache. FreeNAS calls for a minimum of 8 GB; if you try to get by with less than that (certainly 1/4 of that) and bad things happen, you'll be pretty much on your own.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
I'm concerned FreeNAS is not the low-cost solution for making the most of spare old HDDs that I thought it was.

As long as you have the ability to pick up new information, that is at least correctable.

This seems to stem from ZFS's fragility regarding RAM issues

ZFS's fragility? Nope. ZFS is more resilient than most. The difference is that your typical NTFS or UFS system kind of expects for there to be an occasional failure, and people are conditioned to accept an occasional corruption or even a total filesystem loss as being just a cost of computing. So if you have a non-ECC UFS system and it blats out a bunch of badness to the drive, you're screwed and it quite possibly doesn't even get detected by the filesystem.

If you take a single disk system without ECC, your data is at approximately the same level of risk with ZFS as UFS. But if you build your system to work with ZFS, your data becomes rapidly much safer.

Now I'm wondering whether this is just a perception I've developed because FreeNAS is new to me and I'm not comfortable with it, and the presentation is a (probably very necessary) stark warning of the risks. Am I just as likely to have something go wrong with files shared from standalone drives in Windows 7 PCs, and I've just been lucky enough to avoid it? Or is FreeNAS really that sensitive?

FreeNAS is really that sensitive. So is every other filesystem. That's why professionals have used high quality components, ECC memory, and RAID hard drive configurations for many years. We know failures can happen. The difference is that ZFS is much more capable of coping with recovery than your typical RAID controller, which cannot usually detect bitrot on the hard disk, for example.

Windows people see corruption on their PCs on a semi-regular basis and kind of shrug it off. If we see it here in the FreeNAS world, it is accompanied by screams of WHAT THE FRELL!!!

I hope that shows I'm not the type to throw caution to the wind! I've been planning to build two ATX-tower-based NAS boxes, either both running FreeNAS or possibly FreeNAS and a different storage OS (for added fault-tolerant diversity?).

Definitely approve of that.
 

dikk

Dabbler
Joined
Jan 24, 2013
Messages
11
If you take a single disk system without ECC, your data is at approximately the same level of risk with ZFS as UFS. But if you build your system to work with ZFS, your data becomes rapidly much safer.

Perfectly summed up, jgreco
 

Trif55

Dabbler
Joined
Aug 27, 2014
Messages
25
~
Edit: oops, I wrote this earlier and forgot to hit send, then did hit send before seeing there were more replies. Busy day! :)
~
~in reply to danb35:

Yeah, when I thought about it I decided I didn't really like the idea of striping across a big collection of random old drives; far too risky a combination! :)

How would you suggest combining different-sized drives in RAIDZ1/Z2? The nicest example I've seen to explain this is Drobo's little demo on their site; I guess their magic box does a lot of work to spread data around?


And having read all I've read, I understand the "if you don't have enough RAM we don't really want to know" sentiment; troubleshooting would be like fighting with one arm tied behind your back!

However, I feel like I'm back to the apples/oranges thing: why would a RAM shortage be so catastrophic? Ditto ECC RAM: isn't there some other means by which serious write-back errors could be avoided?

As long as you have the ability to pick up new information, that is at least correctable.
Do you mean learn enough to change my mind?

With regard to the rest of your post, that confirms what I've been reading today about the "next-gen" file systems ZFS and Btrfs and how they prevent bitrot etc. They are more resilient, and the people that use them are far more acutely aware of the faults in "the norm" of non-ECC RAM and NTFS/UFS.
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
Unfortunately, there really isn't a good way to put disks of different capacities together into a RAIDZ environment. If you have, say, a 1TB, a 2TB, and a 3TB drive, and create a RAIDZ1 volume with them, its net capacity will be around 2TB (slightly less, actually), because it's based on the capacity of the smallest disk in the vdev. In the future, if you replaced your 1TB disk with a 3TB disk, your capacity would increase to a little under 4TB, because the 2TB disk is now the smallest in the pool. If you then replaced the 2TB disk with another 3TB disk, your capacity would increase to 6TB. But until all disks in the vdev are the same size, the smallest one will determine the capacity of the vdev.
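As a rule of thumb (my own shorthand, not an official formula):

Code:
usable ≈ (number of disks − parity disks) × smallest disk in the vdev
RAIDZ1 with 1TB + 2TB + 3TB:  (3 − 1) × 1TB = 2TB (slightly less in practice)
swap the 1TB for a 3TB:       (3 − 1) × 2TB = 4TB
swap the 2TB for a 3TB:       (3 − 1) × 3TB = 6TB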
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
However, I feel like I'm back to the apples/oranges thing: why would a RAM shortage be so catastrophic?

Because computing systems are complicated things. If you design a system that assumes there'll be lots of resources available, then try to run it in a starvation environment, then what?

My first UNIX box had 512KB of RAM (I think, might have been 256). That's KB. Not MB or GB. UFS was designed to go look up files efficiently without consuming lots of system memory. UFS continues to be able to do that reasonably efficiently in a small memory environment, but it is only so efficient. To get more efficient, you need to cache. Hardware manufacturers spent megabucks creating custom solutions that allowed you to have a hardware controller that did relatively dumb caching. It didn't really know what to cache but took a guess.

Sun recognized that the filesystem itself had much better knowledge of what to cache. And for this and other reasons, they decided to throw out the "small" model of UFS and instead see what they could do on a system with plenty of resources. So ZFS caches LOTS. And does lots of other neat stuff. But it was predicated on the idea that it was replacing a hardware RAID controller and cache memory, and that it could use lots of system memory instead.

We have seen various failures when people fail to respect the minimum memory sizing for FreeNAS and ZFS. Don't know why. I'm not paid to develop ZFS and I'm not going to risk a production pool to try to reproduce a catastrophe.

Ditto ECC RAM: isn't there some other means by which serious write-back errors could be avoided?

Of course. But since ECC is the standard way to implement this sort of protection, would you rather pay a lot more money for a system that implemented some custom proprietary patented whiz-bang solution? ECC is easy, well-understood technology. Why reinvent the wheel?

Do you mean learn enough to change my mind?

You're free to change your mind. I point at things in order to help you see what I see. It doesn't make me right, of course, and it doesn't mean you must change your mind...

With regard to the rest of your post, that confirms what I've been reading today about the "next-gen" file systems ZFS and Btrfs and how they prevent bitrot etc. They are more resilient, and the people that use them are far more acutely aware of the faults in "the norm" of non-ECC RAM and NTFS/UFS.

We basically understand that there are certain things that can be done to improve storage reliability. It is always a set of trade-offs and there usually isn't one absolutely correct answer for every circumstance. But there are many things that are highly disrecommended for good reason.
 

Trif55

Dabbler
Joined
Aug 27, 2014
Messages
25
Sun recognized that the filesystem itself had much better knowledge of what to cache. And for this and other reasons, they decided to throw out the "small" model of UFS and instead see what they could do on a system with plenty of resources. So ZFS caches LOTS. And does lots of other neat stuff. But it was predicated on the idea that it was replacing a hardware RAID controller and cache memory, and that it could use lots of system memory instead.

We have seen various failures when people fail to respect the minimum memory sizing for FreeNAS and ZFS. Don't know why. I'm not paid to develop ZFS and I'm not going to risk a production pool to try to reproduce a catastrophe.
So there's no way to scale back the cache requirements? It's just that on a home file server, where generally large media files are being written and read at random, I can't imagine there'd be a large performance benefit to having a big cache?

Of course. But since ECC is the standard way to implement this sort of protection, would you rather pay a lot more money for a system that implemented some custom proprietary patented whiz-bang solution? ECC is easy, well-understood technology. Why reinvent the wheel?
As I was writing "means by which serious write-back errors could be avoided?" I did start to think: hmm, I guess it'd be nice to do that in the hardware... like ECC RAM... :)

You're free to change your mind. I point at things in order to help you see what I see. It doesn't make me right, of course, and it doesn't mean you must change your mind...
I'm definitely understanding a lot more, and in turn that's likely to lead to changes in direction :) - the ECC RAM thing is starting to alter my thinking about other builds etc. too. I assume random little bugs in RAM writes/reads cause a fair chunk of OS crashes?

Drobo is rubbish.
I don't doubt it; I'd never buy a proprietary black box like that to do anything! However, the demo of random-sized drives being added and it sorting out the most space available is rather impressive :)

On that note, I happened across a thread mentioning:
Code:
zfs set copies=3 pool

That apparently spreads a file to three places in the pool, on different disks.
I don't suppose that would prevent a zpool going down if a single vdev dropped? Even though all the files in that vdev would also be elsewhere?

If not that, then are there any other ways to deal with mixed disks?
 

SirMaster

Patron
Joined
Mar 19, 2014
Messages
241
The best way I know to deal with mixed disk sizes would be SnapRAID on Windows or Linux.

The other option, for real-time RAID, would be to combine LVM and MDADM on Linux. This is how Synology, for example, achieves the SHR (Hybrid RAID) in their NAS units that allows you to mix different disk sizes.

The last option is unRAID, but I don't like its scalability, as it only supports one parity drive. SnapRAID supports up to six parity drives, and MDADM supports two in RAID6 mode.
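For reference, a minimal sketch of the MDADM RAID6 route on Linux (device names are placeholders; Synology's SHR additionally layers LVM on top to use mixed sizes more fully):

Code:
# build a RAID6 array from four disks (two-disk redundancy)
mdadm --create /dev/md0 --level=6 --raid-devices=4 /dev/sdb /dev/sdc /dev/sdd /dev/sde
# put a filesystem on it and mount it
mkfs.ext4 /dev/md0
mount /dev/md0 /mnt/array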
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
You can also cope with mixed disk sizes by creating mirror vdevs out of similarly sized disks, and then adding those to your pool. You lose more space from one point of view, but if you have 2 x 500G, and 2 x 1T, and 2 x 2T, with RAIDZ2 the best you get is 2TB usable, while with mirrored vdevs you get 3.5TB usable.
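A sketch of that layout from the shell (device names assumed):

Code:
zpool create tank mirror da0 da1   # 2 x 500G -> 500G usable
zpool add tank mirror da2 da3      # 2 x 1T   -> +1T
zpool add tank mirror da4 da5      # 2 x 2T   -> +2T, 3.5T usable in total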
 