Playing with ZFS [8.3.0-BETA3]

Status
Not open for further replies.

Nix46

Cadet
Joined
Oct 3, 2012
Messages
2
I have been using FreeNAS 0.7 with mirrored UFS disks for 3 years. Last month my NAS motherboard died. That's what you get when you buy a cheap barebone.
Bored with UFS, fsck, and the time it takes to resync a poor full 1 TB disk, I'm wondering whether I should build a new NAS with FreeNAS 8 and ZFS, or not.
I heard about snapshots, shadow copies, fast resilvering, extendable pools... so much cool stuff that I decided to play with it in a VM before spending too much money.

But I ran into some weird things. I'm not sure if a released version or a real machine would change anything, so that's why I'm here.

I started my test by creating a mirrored pool with differently sized VHDs, in order to see how extendable ZFS is.
I created small 8, 10 and 16 GiB disks to test, then mirrored the 8 and the 10 and got a 5.9 GiB pool. Well, maybe it needs 2 GiB to work.
Then I removed the 8 GiB disk. I bet on a 7.9 GiB pool with the remaining 10 GiB disk, but I got 5.9 GiB. Well, maybe it is waiting for a bigger replacement disk before extending the capacity.
So I put in the 16 GiB disk; I bet on 5.9 and I won.
Then I replaced them with 2x 2 TiB VHDs: df -h -> 5.9 GiB.
I don't get it. How do I make it auto-extendable from the GUI?
I'm also wondering if resilvering is fast; I haven't tested that yet.
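For what it's worth, from the shell this behavior is controlled by the pool's autoexpand property, which is off by default: a mirror's capacity follows its smallest member, and the pool only grows once every member has been replaced with a larger disk. A minimal sketch (the pool and device names tank, ada0 and ada2 are just placeholders for this example):

```shell
# A mirror's capacity follows its SMALLEST member, and the pool does not
# grow automatically unless autoexpand is enabled:
zpool set autoexpand=on tank

# Replace the smaller disk with the bigger one...
zpool replace tank ada0 ada2

# ...and, if the pool still shows the old size after resilvering,
# expand the vdev manually:
zpool online -e tank ada2
```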

Then I played with datasets, and it's funny.
Creating a new dataset in the pool, with the name of an existing directory in the pool, makes the directory's contents disappear without warning.
Deleting the dataset prompts that data will be lost, yet the directory and its contents come back.
Don't get it.
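What's likely happening is that the new dataset is mounted on top of the existing directory, hiding (not erasing) its contents; destroying the dataset unmounts it and the directory reappears. A quick way to see this from the shell (assuming, for this example, a pool called tank mounted at /mnt/tank):

```shell
# Create a plain directory with a file in it
mkdir /mnt/tank/docs
touch /mnt/tank/docs/file.txt

# Create a dataset with the same name: it mounts over the directory,
# so the directory's contents are hidden, not deleted
zfs create tank/docs
ls /mnt/tank/docs            # appears empty

# Destroy the dataset: the mount goes away and the original file is back
zfs destroy tank/docs
ls /mnt/tank/docs            # file.txt is visible again
```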

But it's totally worth it; I am seduced by snapshots and shadow copies.
For me it's still not efficient, but it's a nice feature (shadow copies for CIFS homes are missing, but I guess that's in progress).
First, manual snapshots aren't available for it; removing the "auto" or "manual" part of the snapshot name (it's not a big deal) could, I guess, make them work with shadow copy.
Second, creating two periodic snapshot tasks with the same periodicity for different datasets within the same minute will produce errors, because the snapshots get the same name.
Maybe adding seconds to the name could avoid it.

Code:
Oct  4 02:20:04 freenas autosnap.py: [tools.autosnap:42] Popen()ing: /sbin/zfs snapshot -r tank/data@auto-20121004.0220-2w
Oct  4 02:20:05 freenas autosnap.py: [tools.autosnap:42] Popen()ing: /sbin/zfs snapshot -r tank@auto-20121004.0220-2w
Oct  4 02:20:05 freenas autosnap.py: [tools.autosnap:247] Failed to create snapshot 'tank@auto-20121004.0220-2w': cannot create snapshot 'tank@auto-20121004.0220-2w': dataset already exists no snapshots were created
Oct  4 02:21:02 freenas autosnap.py: [tools.autosnap:42] Popen()ing: /sbin/zfs snapshot -r tank@auto-20121004.0221-2w


Good point: it creates a new one just after. Bad point: from the GUI it also creates a new one for the other dataset.
By the way, from the GUI only the latest snapshot can be restored. And I'm still searching for a way to remove several snapshots at once...
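From the shell, one way to remove a batch of snapshots is to list them and pipe the names into zfs destroy. A sketch (the dataset tank/data and the auto- prefix match the log above; adjust the grep pattern to your own naming, and remove the echo to actually destroy anything):

```shell
# List all snapshots of tank/data (and its children) whose name contains
# "@auto-", then destroy them one by one. The echo makes this a dry run;
# drop it once the printed commands look right.
zfs list -H -t snapshot -o name -r tank/data \
  | grep '@auto-' \
  | xargs -n1 echo zfs destroy
```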

There's other stuff I tested, but that's all for the moment.

Nico.


I don't always post in forums, but when I do, it's as long as a novel.
 

William Grzybowski

Wizard
iXsystems
Joined
May 27, 2011
Messages
1,754
Did you create 2 periodic snapshot tasks?

Why would you do that if you have chosen a recursive snapshot at an upper level?

Forgive me if I got it wrong; I am tired and did not read the whole post. As you said, it's a novel :)
 

Nix46

Cadet
Joined
Oct 3, 2012
Messages
2
Did you create 2 periodic snapshot tasks?
Yes I did.

Why would you do that if you have chosen a recursive snapshot at an upper level?
That was just a test, as the caveat is that there must be a one-to-one mapping.
Two CIFS shares, one on data and one hidden at the root, to see if my home share gets shadow copies from an upper level.
But that doesn't matter; the error would be the same with any two or more datasets (mapped to two or more CIFS shares).

Forgive me if I got it wrong; I am tired and did not read the whole post. As you said, it's a novel :)
Thank you for your reply.
 