FreeNAS isn't what I expected?

Status
Not open for further replies.

Trif55

Dabbler
Joined
Aug 27, 2014
Messages
25
Could I not partition everything down to 500GB and make multiple RAIDZ2 vdevs from that, using partitions from different drives, until I run out of partition sets spanning at least 4 different disks?

Or stripe the 2x1TB into 2TB and make a RAIDZ2 out of the 2TB results?

What I have in reality, I think, is at least:
4x500GB
2x1TB
4x2TB
and in future I'd probably buy some 3TB or 4TB drives.

So either striping then RAIDZ2, or mini RAIDZ2s, gives 8TB with 2-disk redundancy from a raw 12TB :)

But I can't really work out how I'd expand anything once I got the 4TB drives?
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
Yes, in theory, you could do something like that with ZFS. However, it's a bad idea, and it isn't something FreeNAS supports. Here's why: Suppose you have 2x500GB and 2x2TB. If you put those together into a RAIDZ2 vdev, you're going to have a net capacity of 1 TB, with another 1 TB used as parity, and 3 TB wasted. So, you partition those 2TB disks into 4 x 500 GB partitions, and create a RAIDZ2 vdev of 10x500GB. Now you have 4 TB of net capacity, which is great. However, each of your 2 TB disks holds four of the "disks" that constitute that RAIDZ2. When one of those 2TB disks fails, your pool is gone.
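The arithmetic above can be checked with a quick sketch. This is purely illustrative (the `raidz2_net` helper and the disk lists are mine, not anything FreeNAS exposes), under the assumption that a RAIDZ2 vdev sizes every member to its smallest disk and spends two members' worth of space on parity:

```python
# Back-of-the-envelope check of the capacities described above (sizes in TB).
# raidz2_net and the disk lists are illustrative, not a FreeNAS/ZFS API.

def raidz2_net(member_size_tb, members):
    """Net capacity of a RAIDZ2 vdev: every member counts as the smallest
    one, and two members' worth of space goes to parity."""
    return member_size_tb * (members - 2)

# Mixed RAIDZ2 of 2x500GB + 2x2TB: each member is sized to the smallest disk.
disks = [0.5, 0.5, 2.0, 2.0]
net_mixed = raidz2_net(min(disks), len(disks))   # 1.0 TB net
wasted = sum(disks) - min(disks) * len(disks)    # 3.0 TB unusable

# Partitioned scheme: each 2TB disk cut into 4x500GB "disks" -> 10 members.
net_partitioned = raidz2_net(0.5, 10)            # 4.0 TB net

print(net_mixed, wasted, net_partitioned)        # 1.0 3.0 4.0
```

The numbers match the post: 1 TB net plus 3 TB wasted for the mixed vdev, versus 4 TB net for the partitioned one — which is exactly why the partitioned layout looks tempting until you consider the shared failure domains.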
 

Trif55

Dabbler
Joined
Aug 27, 2014
Messages
25
I can see why FreeNAS doesn't support the bad-idea version where some disks hold more than one "disk's" worth, but why can't it do the sensible version?

e.g.
D1 - 500GB
D2 - 500GB
D3 - 500GB
D4 - 500GB
D5 - 1000GB (split into partitions A and B)
D6 - 1000GB (split into partitions A and B)
D7 - 2000GB (split into partitions A, B, C and D)
D8 - 2000GB (split into partitions A, B, C and D)
D9 - 2000GB (split into partitions A, B, C and D)
D10 - 2000GB (split into partitions A, B, C and D)

so vdev1 (A) would be (RAIDZ2):
D1, D2, D3, D4, D5A, D6A, D7A, D8A, D9A, D10A
vdev2 (B) (RAIDZ2):
D5B, D6B, D7B, D8B, D9B, D10B
vdev3 (C) (RAIDZ2):
D7C, D8C, D9C, D10C
vdev4 (D) (RAIDZ2):
D7D, D8D, D9D, D10D

If any one disk failed, no vdev would be lost, so the pool would be OK.

What I can't work out is how I'd add a 4TB disk :(

The other, simpler option would be:

subvdev1 (stripe):
D1, D2, D3, D4 = 2TB
subvdev2 (stripe):
D5, D6 = 2TB

vdev1 (RAIDZ2):
subvdev1, subvdev2, D7, D8, D9, D10

This is even more fault-tolerant than the first example, as disks 1-6 could all fail and the vdev would still be OK.

It might actually be easier to increase its size as well, though it would take some time to realise the increase. Say, with 4TB disks, if the vdev "allow expansion" thing was enabled:
Replace D10 with D11 (4TB)
Resilver
Remove D9, make subvdev3 (stripe) from D9 + D10 = 4TB
Add subvdev3 to replace D9
Resilver
Rinse and repeat for the other positions with two more 4TB drives and ta-da! vdev1 will grow :)

Sadly there's a trade-off in RAIDZ2 between storage efficiency and the cost to expand: making vdev1 out of more, smaller components gives better efficiency but a larger cost to expand.
Say with the 6 x 2TB above:
66.7% space efficiency and only 3x4TB to expand.
8 x 2TB would give:
75% space efficiency but 4x4TB to expand.
Having said that, 6 components in RAIDZ2 looks like kind of the sweet spot of that trade-off :)
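The trade-off figures above can be sketched in a few lines. Both helpers are hypothetical: `raidz2_efficiency` is just the (n - 2)/n ratio, and `drives_to_expand` describes the replace-then-pair scheme from this post (each new 4TB drive replaces one 2TB member, and freed 2TB drives are restriped in pairs), which is not a ZFS feature.

```python
# RAIDZ2 space efficiency for an n-wide vdev is (n - 2) / n. Under the
# hypothetical replace-then-pair expansion, n / 2 new 4TB drives are needed.

def raidz2_efficiency(n):
    return (n - 2) / n

def drives_to_expand(n):
    return n // 2

for n in (4, 6, 8):
    print(n, f"{raidz2_efficiency(n):.1%}", drives_to_expand(n))
# 4 50.0% 2
# 6 66.7% 3
# 8 75.0% 4
```

Which reproduces the post's numbers: 66.7% and 3 drives at width 6, 75% and 4 drives at width 8.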


Edit: Awww, tedious :( I just read elsewhere that you can't nest vdevs with FreeNAS :(
 

Trif55

Dabbler
Joined
Aug 27, 2014
Messages
25
There's no sensible version. It's an appliance. If it can't reliably do the right thing then it shouldn't be doing it at all.
But so many other things are left to the admin to be sensible about. For example, if you had a sweet 5-drive RAIDZ2 vdev in your zpool, you could happily add a single 4TB drive to the pool, putting the whole thing at risk, without so much as an "are you sure?" :)


Also, when you say it doesn't support this: could you use the command line to do it, then just add the partitions through the normal interface?
On that note, could a hardware RAID card be used to make the striped "subvdevs", which FreeNAS would then have to treat as single disks?
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
You do actually get an are you sure. And you can probably do any dumbfool thing you want with the CLI.

The appliance can't save you from every form of dumb, of course, but it does try to guide towards sane choices that are compatible with automation.
 

Trif55

Dabbler
Joined
Aug 27, 2014
Messages
25
I'd have thought it was a very common requirement to want to combine drives of varying sizes - it certainly seems that way looking at the number of times it's been asked! :)

Is there any system that has all of the following:
  1. Dual (and triple) disk redundancy
  2. All the benefits of the "Next Gen" file systems (ZFS)
  3. Able to add/remove disks as required?
 

Trif55

Dabbler
Joined
Aug 27, 2014
Messages
25
I just don't get how a filesystem that is so aware of the underlying disks that it can do things like "copies = 3", and recover from bitrot by reading a redundant copy, still has such a thing as RAID(Z)!?

Surely it would just need a setting like "copies = x" where you set the level of fault tolerance you require - say "protection from up to 3 disk failures", which would be like copies = 4 / RAIDZ3 - and then it goes off, looks at the disks it has and shuffles the files around accordingly (I guess the metadata would also need 4 copies?).

Then it comes back and says "right, here you go, this 3TB has the required fault tolerance", and since it's so smart it could probably also present the remainder of the space listed by fault tolerance. So if you had a serious drive imbalance - say while freeing up drives to add to the ZFS storage in the first place - you'd get 2TB with 2-disk tolerance, 1TB with 1-disk tolerance and 3TB with no redundancy.

This way you could use some of that space to hold the contents of a drive in a "lower than requested" state until you add that drive, at which point it has another little shuffle and suddenly, "there we go": 5TB with triple redundancy.

On the other hand, if a drive did fail there would be no reason for the pool to just sit there in a degraded state minding its own business; it could instantly remake the lost copies of its files onto the free space on the other drives, basically self-regenerating until you have the time and money to order a replacement, which would then expand the "triple protected" free space again :)
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
I'd have thought it was a very common requirement to want to combine drives of varying sizes - it certainly seems that way looking at the number of times it's been asked! :)

Is there any system that has all of the following:
  1. Dual (and triple) disk redundancy
  2. All the benefits of the "Next Gen" file systems (ZFS)
  3. Able to add/remove disks as required?

No. Not even close.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
I just don't get how a filesystem that is so aware of the underlying disks that it can do things like "copies = 3", and recover from bitrot by reading a redundant copy, still has such a thing as RAID(Z)!?

Surely it would just need a setting like "copies = x" where you set the level of fault tolerance you require - say "protection from up to 3 disk failures", which would be like copies = 4 / RAIDZ3 - and then it goes off, looks at the disks it has and shuffles the files around accordingly (I guess the metadata would also need 4 copies?).

Then it comes back and says "right, here you go, this 3TB has the required fault tolerance", and since it's so smart it could probably also present the remainder of the space listed by fault tolerance. So if you had a serious drive imbalance - say while freeing up drives to add to the ZFS storage in the first place - you'd get 2TB with 2-disk tolerance, 1TB with 1-disk tolerance and 3TB with no redundancy.

This way you could use some of that space to hold the contents of a drive in a "lower than requested" state until you add that drive, at which point it has another little shuffle and suddenly, "there we go": 5TB with triple redundancy.

On the other hand, if a drive did fail there would be no reason for the pool to just sit there in a degraded state minding its own business; it could instantly remake the lost copies of its files onto the free space on the other drives, basically self-regenerating until you have the time and money to order a replacement, which would then expand the "triple protected" free space again :)

In fantasyNASland, anything is possible. Here in the cold cruel walls of the real world data center, though, there are things such as practicality and performance to consider.

Honestly, there's nothing wrong with vision, but you need to remember that ZFS isn't trying to be a Drobo-like "throw a few random shitty drives in a 4-bay and try to make it work" product. ZFS was intended to replace extremely expensive RAID controller hardware, letting Sun's massive CPU and memory resources step in and take control of 48 disks in a big Sun server.

For the most part nobody had ever really done such an ambitious project - possibly excepting a few things like NetApp's WAFL - making a combined storage manager and filesystem on such a large scale. You have to remember that Sun's goal was to come up with something absolutely solid that admins would trust their data to.

Recall that RAIDZ was a rather new and unusual idea - we sometimes talk about it as though it were merely RAID5 but it certainly isn't. There were more than enough moving parts to implement in something like ZFS as it is.
 

SirMaster

Patron
Joined
Mar 19, 2014
Messages
241
The closest thing to your list is BTRFS. And eventually it will be able to do all those things reliably.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
The closest thing to your list is BTRFS. And eventually it will be able to do all those things reliably.

.... right around the same time that ZFS gains BP rewrite... maybe... except btrfs doesn't actually have feature parity with ZFS, so, no, not really.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
I'd like not to enter a flame war

So don't. ;-)

The basic problem with btrfs is that it is immature technology and there isn't a huge amount of work being put into improving it. As an earlier poster put it, "And eventually it will be able to do all those things reliably."

ZFS has the advantage of being a commercially-developed filesystem that continues to be aggressively developed by a coalition of vendors and hackers, with companies like Nexenta and iXsystems selling storage appliances based on it. I'm not seeing people deploy btrfs based storage systems. I've seen a number of production ZFS deployments though.

There are a bunch of pros and cons to both products, of course, and I'd be lying if I said I was entirely satisfied with ZFS.
 

Trif55

Dabbler
Joined
Aug 27, 2014
Messages
25
.... right around the same time that ZFS gains BP rewrite... maybe... except btrfs doesn't actually have feature parity with ZFS, so, no, not really.

Ahh, vaporware :( - is the timescale for btrfs getting those features actually any closer than for ZFS?

Read this from CyberJock elsewhere:
"Enterprises NEVER add single disks. They add large bunches of disks at the same time, which is exactly what a vdev is."

Guess that's the clincher: with RAIDZx vdevs that can expand via complete drive replacement, or the option to add a second vdev to a pool, why would you (the enterprise) need anything else :(
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
In an enterprise setting you're usually adding like a shelf of disks. If you're more of a cheapskate enterprise you might be doing four, six, or twelve drives at a shot. But basically ZFS was not really designed for the model where you're adding a disk at a time in random sizes.
 

Trif55

Dabbler
Joined
Aug 27, 2014
Messages
25
In an enterprise setting you're usually adding like a shelf of disks. If you're more of a cheapskate enterprise you might be doing four, six, or twelve drives at a shot. But basically ZFS was not really designed for the model where you're adding a disk at a time in random sizes.
Yeah, that's what I was getting at: it's not how enterprise does it, ZFS is targeted at enterprise, so why would anyone bother putting in a major feature that enterprise would never need :(

So btrfs may or may not do most of those things, but the general feeling is that it's not as mature as ZFS. And considering ZFS itself isn't that mature - I've seen a number of rather vague statements on this forum that ZFS does strange things if you don't stick to a very straight and narrow usage, and no one really knows why - it does make me wonder if I should trust my files to btrfs yet.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
As with anything, you should use a product being mindful of how it was intended to be used. ZFS is pretty much the freight train of filesystems. Trains have lots in common with trucks: diesel, wheels, containers, etc. But unlike a truck, a train can haul a LOT more stuff, while on the other hand, it is restricted to tracks, and it has to be driven in a certain manner. When this is done, it is a relatively unstoppable and highly reliable mode of transit.

We know exactly why ZFS does strange things if you don't stick to the straight and narrow. Don't drive it on the street.
 

mjws00

Guru
Joined
Jul 25, 2014
Messages
798
Trif55,

You have to decide if you want more flexibility, or the reliability and subsequent requirements of the ZFS freight train. There are a number of "raid like" ways of adding parity and flexibility to your storage.

SnapRAID: mentioned in this thread; incredibly flexible, with low hardware requirements, multi-disk parity, the ability to mix and match drives at will, bit-rot repair, and recovery from disk failures beyond the number of parity drives. The trade-offs are the interface, speed, a smaller user base, no commercial support, and a design aimed at reasonably static data (media). In my testing it has done well, but I am paranoid, so I'm still not ready to commit.

XPEnology: The Synology software is open source; this is the x86 port. Minimal hardware requirements. It is polished and has a commercial feel: very simple, with good plug-ins. Supports pretty flexible drive configurations and allows adding disks on the fly. Uses ext4 and Linux RAID components, so recovery via a live CD is possible. The install process isn't great, but bearable. Doesn't deal with bit-rot. You don't own their hardware, so commercial support will be limited. The upside is a HUGE user base and a TON of data stored on these. Runs flawlessly in my environment. Basically you can roll your own Synology box with as much hardware as you want to throw at it.

Linux, BSD, Solaris, etc.: Lots of easily configurable "server" appliance distributions are showing up - Zentyal, SMEServer, Turnkey. This space keeps getting better and better.

The others: Napp-it, OpenMediaVault, NAS4Free, unRAID, FlexRAID, disParity, Btrfs, Storage Spaces. All have a different set of trade-offs and requirements. Obviously this is just a sampling, but these are the ones that made the cut for me to at least test.

I could learn to love SnapRAID for a large home media collection demanding flexibility, with reproducible data. XPEnology is a Synology box on better hardware, so a known quantity. FreeNAS is killer if you want a freight train. Lots of ways to skin this cat, but match the solution to the hardware and your needs. Don't try to force a freight train down a dirt road.

Have fun,
 