Reading Solaris ZFS volumes?

Joined: Jul 13, 2013 | Messages: 286
I've got a ZFS version 22 pool created and running under Solaris snv_134 (i.e. fairly old). Assuming (and of course I'll check before really trying this) that the hardware will run FreeNAS, should I be able to get FreeNAS to import that pool successfully? Everything I know about ZFS says "yes", but so far my searches haven't come up with much saying it really has worked for people, so I wanted to ask, and see if there are any known gotchas.

(And can I import the pool readonly at first for testing? I just looked at the screens and I don't see a readonly checkbox, though I kind of remember reading about one.)

Also, why are people giving such high memory requirements? Are people running dedup? If I don't need dedup can I run a 2TB (or maybe 4TB) pool happily with 4GB of memory? I've certainly been doing so under Solaris (I think, actually, that 2GB of memory was entirely adequate).
 

cyberjock

Inactive Account
Joined: Mar 25, 2012 | Messages: 19,526
I've got a ZFS version 22 pool created and running under Solaris snv_134 (i.e. fairly old). Assuming (and of course I'll check before really trying this) that the hardware will run FreeNAS, should I be able to get FreeNAS to import that pool successfully? Everything I know about ZFS says "yes", but so far my searches haven't come up with much saying it really has worked for people, so I wanted to ask, and see if there are any known gotchas.

(And can I import the pool readonly at first for testing? I just looked at the screens and I don't see a readonly checkbox, though I kind of remember reading about one.)
Yep. It should work.

You can't mount a zpool as read-only from the GUI, but you can from the command line with zpool import -o readonly=on yourzpoolname. Keep in mind you will NOT be able to share out the drive or do much of anything, since you are circumventing the GUI. The GUI does some sanity checks, and it doesn't like it when you try to share out a zpool that, as far as it's concerned, doesn't exist.
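For reference, a minimal read-only test from the shell might look something like this (yourzpoolname is just a placeholder; the -f flag is only needed if the pool was last imported on another machine, which a pool coming over from a Solaris box will be):

# see which pools are visible to the system but not yet imported
zpool import
# import it read-only; -f forces import of a pool last used on another host
zpool import -f -o readonly=on yourzpoolname
# check health, poke around, then detach cleanly when done
zpool status yourzpoolname
zpool export yourzpoolname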

Also, why are people giving such high memory requirements? Are people running dedup? If I don't need dedup can I run a 2TB (or maybe 4TB) pool happily with 4GB of memory? I've certainly been doing so under Solaris (I think, actually, that 2GB of memory was entirely adequate).
FreeNAS uses a lot of RAM internally on top of ZFS. Even if you use UFS, the typical recommendation is 2GB of RAM minimum. Very few people run dedup. The extra RAM is needed to handle the FreeNAS stuff itself.

I've never played with Solaris, but don't try to make comparisons between Solaris and FreeBSD to justify ignoring recommendations from the manual and forums. Plenty of people argue that XYZ GB of RAM is far too much and justify it by comparing it to some other OS, but FreeNAS is its own beast. FreeNAS does need a lot of RAM. Often you can get by with less than the manual recommends, but going below 6GB can result in a system that panics regularly. As some users have learned at the cost of their zpool, all it takes is that "one" kernel panic in the middle of a write to the zpool to result in a zpool that is unmountable and corrupted beyond recovery.

Typically if you have 8GB of RAM your system won't panic if it needs more RAM, but you'll be unhappy with the zpool performance and want to add more RAM. The manual recommends 6GB minimum plus 1GB of RAM per TB of disk space, and I think it's a very reasonable recommendation. On my 30TB zpool, performance was horrible with 12GB, but after an upgrade to 20GB of RAM, zpool performance was excellent.

Personally, when I build systems for friends and family I always stick with 8GB sticks of RAM. Your standard 4-slot motherboards max out at 32GB of RAM, which should be plenty for just about any FreeNAS system they'd want now as well as for the next 3-5 years.
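Applying that rule to the sizes mentioned in this thread, just as arithmetic: a 2TB pool works out to about 6 + 2 = 8GB, a 4TB pool to 6 + 4 = 10GB, and a 30TB pool to 6 + 30 = 36GB.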

If buying (or installing) more RAM is an issue, I'd recommend you look at NAS4Free. NAS4Free seems not to panic with less RAM, but I believe their recommendation is a flat 1GB of RAM per TB of disk storage: if you have 5TB of hard drives, you should have 5GB of RAM. Don't quote me on this though, as I've only heard it from other people in the forum, and I only played with NAS4Free for a few hours in a virtual machine to see what it was like compared to FreeNAS.

Often enough, if you can't put at least 8GB of RAM in the system, the hardware is old enough that it would perform so poorly you wouldn't want to run ZFS on it anyway.

If you really want to jump into FreeNAS and you value your data, take the recommendations in the manual to heart. If you follow those and don't try to reinvent the wheel because you think you know better, you'll be much happier and better off than more than 75% of the people that try FreeNAS. The manual and stickies provide excellent recommendations, and you can't go wrong if you follow them.

And keep good backups... more than 75% of people seem to be bad at doing backups. You might be glad you had them someday.


Welcome to FreeNAS!
 
Joined: Jul 13, 2013 | Messages: 286
Yep. It should work.

You can't mount a zpool as read-only from the GUI, but you can from the command line with zpool import -o readonly=on yourzpoolname. Keep in mind you will NOT be able to share out the drive or do much of anything, since you are circumventing the GUI. The GUI does some sanity checks, and it doesn't like it when you try to share out a zpool that, as far as it's concerned, doesn't exist.

Thanks. Even just what one can do from the command line would be at least a start on testing that it really likes the pool, before committing to allowing writes.
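As a sketch of that kind of dry run (pool name is a placeholder again), checking version compatibility first and then just browsing read-only seems like the safe order:

# list the legacy pool versions this release understands; v22 should be among them
zpool upgrade -v
# after a read-only import, confirm the on-disk version and walk the datasets
zpool get version yourzpoolname
zfs list -r yourzpoolname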

FreeNAS uses a lot of RAM internally on top of ZFS. Even if you use UFS, the typical recommendation is 2GB of RAM minimum. Very few people run dedup. The extra RAM is needed to handle the FreeNAS stuff itself.

Okay. Thanks for pointing out that difference. Yeah, it'd be stupid to count on FreeNAS behaving like Solaris in any broad sense -- at least once somebody who knows what's what has told me definitely that it doesn't in this case!

As such, if you choose to use less than 6-8GB of RAM with ZFS, don't expect much sympathy if you show up crying about your lost zpool, how you have no backups, your wife is threatening to leave you, your dog died, etc. It happens often enough, and those people are ignored often enough too. So consider this your warning and don't try to skimp on your system. ;)

I've got the backup religion; three backup sets, two living locally in a rated fire safe, the third rotating off-site (just to a friend's house). AND a weekly scrub of the main data pool. Haven't lost anything from this server since it went live in 2006.

If buying (or installing) more RAM is an issue, I'd recommend you look at NAS4Free. NAS4Free seems not to panic with less RAM, but I believe their recommendation is a flat 1GB of RAM per TB of disk storage: if you have 5TB of hard drives, you should have 5GB of RAM. Don't quote me on this though, as I've only heard it from other people in the forum, and I only played with NAS4Free for a few hours in a virtual machine to see what it was like compared to FreeNAS.

I haven't checked yet, but getting to 8GB at least on the current hardware should be perfectly possible. Probably at least twice that is easy (but, 2006 motherboard). Anyway, knowing what's sane to attempt is useful. (My current pool is under 2TB, so actually 8GB RAM should be fine through at least another expansion.)

Except...the suggestion (got lost out of the quotes, sorry) that a memory exhaustion crash is dangerous to the pool is terrifying! ZFS is supposed to be a transactional filesystem, and it's supposed to be able to roll back to the last consistent point in that sort of situation. The Solaris version does so. Yeah, I could lose data being written at the time of the crash, and that's bad enough, but it shouldn't put the entire pool at risk. Does it, really? Significant risk?

I would certainly intend to use FreeNAS within its specified parameters (or at least would consider myself a test-pilot, flying at my own risk, if I went significantly beyond them).
 

cyberjock

Inactive Account
Joined: Mar 25, 2012 | Messages: 19,526
Except...the suggestion (got lost out of the quotes, sorry) that a memory exhaustion crash is dangerous to the pool is terrifying! ZFS is supposed to be a transactional filesystem, and it's supposed to be able to roll back to the last consistent point in that sort of situation. The Solaris version does so. Yeah, I could lose data being written at the time of the crash, and that's bad enough, but it shouldn't put the entire pool at risk. Does it, really? Significant risk?

There have been a few people that have had problems. I have seen a few situations where a UPS didn't activate and the server went off. I can't vouch for the exact reason, as it was based on that user's observations along with the understanding of many of the more experienced users on the forum, but it seems to be a risk that should be considered nonetheless.
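For what it's worth, the ZFS shipped with FreeNAS does have a recovery import that can rewind a damaged pool to an earlier transaction group, though I'd treat it as a last resort rather than a substitute for backups. A rough sketch (pool name is a placeholder):

# dry run: report whether discarding the last few transactions would make the pool importable
zpool import -Fn yourzpoolname
# actually attempt the recovery import
zpool import -F yourzpoolname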
 