Switch dataset to casesensitivity=mixed

Status
Not open for further replies.

Pete248

Dabbler
Joined
Sep 6, 2012
Messages
16
I know you can set this only when you create a dataset.

So I created a new dataset:
zfs create -o casesensitivity=mixed pool2/data

Then I transferred a snapshot of the original dataset, which has casesensitivity=sensitive:
zfs send pool1/data@auto-xxx | zfs receive -F pool2/data

And boom, pool2/data now has casesensitivity=sensitive.

I assume "zfs receive -F" sets the properties of the dataset to those of the sent stream, and that was casesensitivity=sensitive.
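For reference, this is how I checked the property on both datasets (and zfs set refuses to change it afterwards, since it can only be set at creation time):
Code:
zfs get casesensitivity pool1/data pool2/data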

Is there a way to send a casesensitivity=sensitive snapshot to a casesensitivity=mixed dataset?

If that is not possible, are there other ways to accomplish the task of migrating a dataset from casesensitivity=sensitive to casesensitivity=mixed?
 

fracai

Guru
Joined
Aug 22, 2012
Messages
1,212
You could transfer the files with rsync or mv.
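Something along these lines (untested sketch, assuming the usual /mnt mountpoints and the dataset names from your post; the trailing slash on the source matters):
Code:
# copy the live files into the new mixed-sensitivity dataset
rsync -a /mnt/pool1/data/ /mnt/pool2/data/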
 

Pete248

Dabbler
Joined
Sep 6, 2012
Messages
16
Thank you.

Can I still "preserve" the pool1 periodic snapshots somehow in the new pool2 without having the data twice in pool2 (snapshots + independent dataset)?
 

fracai

Guru
Joined
Aug 22, 2012
Messages
1,212
Not that I'm aware of. Well, I'm sure you could make use of the .zfs/snapshot folder and rsync --link-dest with a snapshot taken after each incremental transfer, but I'm not sure it'd be worth the time.

And after rereading your question this isn't what you're after anyway.

I think you need to create the new dataset, cp / rsync the files, destroy the old dataset.
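Roughly like this, untested and using your pool names; double-check that everything arrived before the destroy step:
Code:
# new dataset with the desired case sensitivity
zfs create -o casesensitivity=mixed pool2/data

# copy the current files across
rsync -a /mnt/pool1/data/ /mnt/pool2/data/

# only once you're happy with the copy (this also removes the old snapshots)
zfs destroy -r pool1/data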
 

Pete248

Dabbler
Joined
Sep 6, 2012
Messages
16
Again thank you for your help.

Actually I want to accomplish 2 things:

1.) Move an existing zpool with all its periodic snapshots to a new zpool located on a new set of bigger drives. That's easy to do with zfs send -R (plus -I for incremental catch-up runs) piped into zfs receive -F; a rough sketch follows right after this list.

2.) The old zpool was set up with casesensitivity=sensitive, as this was the default in FreeNAS. Because the box is exclusively used for file sharing to newer Mac clients via AFP, casesensitivity=insensitive or mixed would have been better. Since you can set case sensitivity only when you create a filesystem, I wanted to switch case sensitivity together with the transfer of the zpool.
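The replication part, roughly (with a made-up snapshot name):
Code:
# recursive snapshot of the whole pool, replicated with all existing snapshots
zfs snapshot -r pool1@migrate
zfs send -R pool1@migrate | zfs receive -F pool2

# later, catch up on anything that changed in between
zfs snapshot -r pool1@migrate2
zfs send -R -I pool1@migrate pool1@migrate2 | zfs receive -F pool2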

I wasn't aware of the rsync --link-dest option. Could give it a try.
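For my own notes, the general shape of --link-dest seems to be (made-up paths, nothing ZFS-specific):
Code:
# copy "day2", hard-linking anything unchanged against the earlier "day1" copy
rsync -a --link-dest=/backup/day1 /source/ /backup/day2/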

A few questions arise:

Can I really write to the .zfs/snapshot folder on the destination zpool? Both the snapshot and the enclosing .zfs folder are dr-xr-xr-x 2 root wheel.

Does FreeNAS automatically recognize new folders in the .zfs/snapshot folder on the destination zpool as snapshots of the dataset?

rsync --link-dest would have to be run for each snapshot with the previous snapshot as the --link-dest parameter, right?

After the last rsync, the dataset has to be rolled back to the latest snapshot, right?
 

fracai

Guru
Joined
Aug 22, 2012
Messages
1,212
OK, I wouldn't recommend writing to the .zfs directories. I don't even think you can, but CyberJock has suggested they shouldn't even be read due to a rare bug (though I haven't seen any evidence that it actually occurs).

Anyway, the link-dest suggestion probably isn't even necessary.

What you could do is rsync from each snapshot directory, in order, to the new dataset. Make a new snapshot after each session completes. This would be easily scripted. I can't write it right now, but I'll take a shot later tonight if you haven't.

You'd need to include the --delete option so files that have been removed from one snapshot to the next don't stick around on your new dataset.
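A single pass would look roughly like this (untested, using your pool names and the auto-xxx placeholder from above):
Code:
# copy one source snapshot's contents into the new dataset, removing anything
# that no longer exists in that snapshot, then snapshot the destination
rsync -a --delete /mnt/pool1/data/.zfs/snapshot/auto-xxx/ /mnt/pool2/data/
zfs snapshot pool2/data@auto-xxx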
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,525
I'm only going to say this once because if you don't listen you'll learn the hard way....

If you go into the .zfs directory and start doing ANYTHING you risk serious problems.

There's a reason the .zfs directory is not visible. PERIOD. Even I don't go in that directory, EVER.

I don't even want to know what your thought process is with rsync and ZFS snapshots. The two should never meet. So whatever you are doing is almost certainly a stupendous way to watch your data (and your pool) go up in flames.

Rsync is a tech from the 80s and is file system independent. ZFS snapshots and replication are for ZFS. Rsync has no clue what ZFS is just like it has no clue what other file systems are.
 

fracai

Guru
Joined
Aug 22, 2012
Messages
1,212
I just have to disagree with you here.

It's not visible because the "snapdir" property is set to "hidden" by default. Oracle includes an example of turning this on and off here.
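For example, with a made-up dataset name:
Code:
# show the .zfs snapshot directory in listings, or hide it again
zfs set snapdir=visible tank/dataset
zfs set snapdir=hidden tank/dataset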

I have never seen any sort of indication that viewing, entering, or copying data from this directory is dangerous. The ZFS documentation itself indicates how to view data from here. I agree that it would not be a good idea to write data to this location, but as indicated above, it isn't writeable.

I copy files out of snapshots like this all the time because it's far faster than cloning, copying, and then destroying. Heck, it's a great way to track file history too. I'll often issue commands like ls -la /mnt/tank/.zfs/snapshot/auto-201501*/blah/blah/blah/file to see on what date a file has been changed. I can then grab the copy I want quickly. I've never seen an issue with this and it seems to be well documented as being a feature.
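Grabbing that copy is then just an ordinary cp out of the snapshot directory; a quick sketch with a made-up snapshot name and path:
Code:
# restore one file from a snapshot, no cloning required
cp /mnt/tank/.zfs/snapshot/auto-20150101.0000-2w/blah/blah/blah/file /mnt/tank/blah/blah/blah/file.restored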

If there is any actual evidence that this behavior is dangerous I'd love to see it. Even an anecdote would be interesting. It would likely point to a bug that could be fixed.

My position is that if .zfs is actually so dangerous as to never be accessed, it would only be accessible via zdb.


As for rsync: its initial release was 1996. I'm not sure what your point is there. ZFS is from 2005, with initial development starting in 2001. Is either of these too new to be considered stable? Too old to be considered trustworthy under modern conditions?

Rsync being file system agnostic is a feature. Yes, it has no knowledge of file system specifics (I'd actually claim that isn't true, but whatever) and it works fine on other file systems. And from my experience, it works fine with ZFS.

I've used rsync to copy out of the .zfs directory many times without issue. If there's a genuine issue here I'd love to hear it.


Now back to my suggested use of .zfs/snapshot and rsync.
My first crack at a script, without review or testing. I'm most unsure about grabbing the list of snapshots and then converting that to just the name as I don't recall the format that is returned by "zfs list". I'll revisit this in a few hours and refine what is going on. At this point I wouldn't trust it myself.
Code:
# ZFS dataset names without a leading slash, so they work both as ZFS
# names and under the /mnt mountpoint (hypothetical names)
OLD_DATASET=tank/old_dataset
NEW_DATASET=tank/new_dataset

# list the old dataset's snapshots, oldest first
zfs list -t snapshot -H -o name -s creation -r -d 1 "$OLD_DATASET" | while read SNAPSHOT
do
    # strip "dataset@" to keep just the snapshot name
    SNAP="$(echo "$SNAPSHOT" | sed "s/.*@//")"
    rsync --partial --progress -a /mnt/"$OLD_DATASET"/.zfs/snapshot/"$SNAP"/ /mnt/"$NEW_DATASET"/
done
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,525
I'm really not going to get into this argument again. It's hard for me to explain everything and every time I've tried to in the past it hasn't ended well for anyone. Only a direct phone call or mumble chat has really allowed me to explain this well.

Also, your ZFS documentation applies to the Oracle code. It does not apply to FreeBSD code, and therefore shouldn't be the defining document to say whether something is safe or not for the ZFS that FreeBSD is using. I've known 2 people that did stuff in .zfs. Both times they crashed the box a split second after running their copy command, and in both cases the pools were damaged beyond recovery. I was talking to a ZFS developer the other day on the phone and he referred to v28 pools as "a totally different beast from what we have today". There's lots of structures added to ZFS, expectations and dependencies that just don't exist in Oracle's implementation.

Yes, you can copy files out of the .zfs directory, but anything else is dangerous and basically stupid to do. The problem is exacerbated if you try to go to the .zfs directory in an OS like Windows because windows *will* try to read and write files all over the place, and that would be pretty fugly for you and your pool. The big problem is that people go in the .zfs directory and accidentally do a chmod, chown, or copy to the .zfs directory on accident instead of the reverse. It's like walking up to a cliff. You can take a step forward or a step backwards. One is safe and the other is pretty deadly. Make that mistake and step forward when you meant to step backwards and the consequences are pretty devastating. The best advice is just never walk up to the cliff. ;)
 

fracai

Guru
Joined
Aug 22, 2012
Messages
1,212
Only a direct phone call or mumble chat has really allowed me to explain this well.
Ya know, I'd actually be up for that. Or even a long drawn out post or PM. Understanding the time commitment that would require for pretty much zero benefit on your part.

Also, your ZFS documentation applies to the Oracle code. It does not apply to FreeBSD code, and therefore shouldn't be the defining document to say whether something is safe or not for the ZFS that FreeBSD is using.
Wouldn't that disqualify pretty much all the Oracle documentation? In any case, here's one FreeBSD link that mentions the .zfs/snapshot directory.

I've known 2 people that did stuff in .zfs. Both times they crashed the box a split second after running their copy command, and in both cases the pools were damaged beyond recovery.
Now you know at least three, and every time I've listed, copied, rsynced, etc. from those directories I've had no issue. Unless these stories are part of a ticket filed against FreeBSD's ZFS, I'd be inclined to attribute those instances to coincidence and bad hardware.

I was talking to a ZFS developer the other day on the phone and he referred to v28 pools as "a totally different beast from what we have today". There's lots of structures added to ZFS, expectations and dependencies that just don't exist in Oracle's implementation.
This doesn't exactly speak well of the stewardship of ZFS post-v28.

Yes, you can copy files out of the .zfs directory, but anything else is dangerous and basically stupid to do.
Victory! But, no, really. That's all that I use it for. And all that I'd think would be supported.

The problem is exacerbated if you try to go to the .zfs directory in an OS like Windows because windows *will* try to read and write files all over the place, and that would be pretty fugly for you and your pool. The big problem is that people go in the .zfs directory and accidentally do a chmod, chown, or copy to the .zfs directory on accident instead of the reverse. It's like walking up to a cliff. You can take a step forward or a step backwards. One is safe and the other is pretty deadly. Make that mistake and step forward when you meant to step backwards and the consequences are pretty devastating. The best advice is just never walk up to the cliff. ;)
I can totally see how chmod, chown, writing, etc. would be a bad idea within the .zfs directory. I'm also highly skeptical that it would actually work. If ZFS allowed any of those changes, it should be filed as a ticket and worked on.
 

fracai

Guru
Joined
Aug 22, 2012
Messages
1,212
Back to the task at hand. I haven't tested the script that I'm posting below, but it should work. I'll see if I can get together a sample dataset to test with this weekend. But, no promises.

Code:
#!/bin/sh
# plain sh is enough here; FreeNAS/FreeBSD does not ship bash at /bin/bash

# ZFS dataset names without a leading slash, so they work both as ZFS
# names and under the /mnt mountpoint (hypothetical names)
OLD_DATASET=tank/old_dataset
NEW_DATASET=tank/new_dataset

# list the old dataset's snapshots, oldest first
zfs list -t snapshot -H -o name -s creation -r -d 1 "$OLD_DATASET" | while read SNAPSHOT
do
    # strip "dataset@" to keep just the snapshot name
    SNAP="$(echo "$SNAPSHOT" | sed "s/.*@//")"
    # copy that snapshot's contents into the new dataset, deleting
    # anything that no longer exists in this snapshot
    rsync --progress --partial --delete -a /mnt/"$OLD_DATASET"/.zfs/snapshot/"$SNAP"/ /mnt/"$NEW_DATASET"/
    # recreate the snapshot on the new dataset under the same name
    zfs snap "$NEW_DATASET"@"$SNAP"
done


Basically:
"zfs list" gets together all the available snapshots for the dataset.
"while read SNAPSHOT" loops over each snapshot name
"SNAP=…" strips off the dataset name from the snapshot
"rsync …" does the actual file copying. It pulls everything out of .zfs/snapshot/"$SNAP" and puts it in the new dataset. Anything that is in the new dataset that shouldn't be there is deleted. Anything that is already there and hasn't changed is left in place.
"zfs snap" creates a new snapshot in the new dataset with the same name as the old snapshot.

There is one issue that I can see: the creation time of each new snapshot will not match that of the old snapshot. This is probably not a big deal, but if you're using anything that refers to the creation time (like my ZFS Rollup script) you might get unexpected results. Automated expiration should continue to function; the last time I checked, the snapshot scripts parse the snapshot name to determine the creation date and calculate when it should expire, rather than using the ZFS property.
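If you want to compare afterwards, something like this shows name and creation time side by side (using the pool names from this thread):
Code:
zfs list -t snapshot -o name,creation -r -d 1 pool1/data
zfs list -t snapshot -o name,creation -r -d 1 pool2/data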

So, this "should work just fine", but there is always the chance that something goes wrong. Make sure you have a backup of your data and tread carefully. I have no reason to think that this could actually destroy your data or pool, but I'd be remiss to leave out the warning.
 

fracai

Guru
Joined
Aug 22, 2012
Messages
1,212
Oh, the "rsync" line includes the "--progress" argument. Feel free to remove this if you don't need to see the files fly by as they are transferred and checked.

You could also add a line right after the "do" to print out the name of each snapshot as it is handled:
Code:
echo "$SNAPSHOT"
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,525
Ya know, I'd actually be up for that. Or even a long drawn out post or PM. Understanding the time commitment that would require for pretty much zero benefit on your part.

Last time I literally spent about 10 hours writing stuff on that topic. No way you could pay me enough to type that long for a single topic again. :P

If you want to chat with me, drop in IRC one evening. ;)

Wouldn't that disqualify pretty much all the Oracle documentation? In any case, here's one FreeBSD link that mentions the .zfs/snapshot directory.

In some ways yes. The expectation is that you are so freakin' badass at ZFS that you can identify what is and isn't applicable. Raise of hands of who can do that around here...

/hears crickets

Well... crap.

Even I don't trust the Oracle documentation much aside from basic CLI stuff like adding and replacing disks. :/
 

fracai

Guru
Joined
Aug 22, 2012
Messages
1,212
Still not tested, but here's a version that doesn't use the hidden directories. Instead, it clones the snapshot to a temporary dataset, rsyncs from there, and destroys the temp dataset. It's all still automated in the script so you don't have to deal with the "pain" of cloning and destroying just to recover files.

Code:
#!/bin/sh
# plain sh is enough here; FreeNAS/FreeBSD does not ship bash at /bin/bash

# ZFS dataset names without a leading slash, so they work both as ZFS
# names and under the /mnt mountpoint (hypothetical names)
OLD_DATASET=tank/old_dataset
NEW_DATASET=tank/new_dataset

# list the old dataset's snapshots, oldest first
zfs list -t snapshot -H -o name -s creation -r -d 1 "$OLD_DATASET" | while read SNAPSHOT
do
    # strip "dataset@" to keep just the snapshot name
    SNAP="$(echo "$SNAPSHOT" | sed "s/.*@//")"
    # clone the snapshot to a temporary dataset next to the original
    zfs clone "$OLD_DATASET"@"$SNAP" "$OLD_DATASET"_"$SNAP"
    # copy the clone's contents into the new dataset, deleting anything
    # that no longer exists in this snapshot
    rsync --progress --partial --delete -a /mnt/"$OLD_DATASET"_"$SNAP"/ /mnt/"$NEW_DATASET"/
    # recreate the snapshot on the new dataset, then drop the temporary clone
    zfs snap "$NEW_DATASET"@"$SNAP"
    zfs destroy "$OLD_DATASET"_"$SNAP"
done


If you want to chat with me, drop in IRC one evening. ;)
I'll see you there :smile:
 

Pete248

Dabbler
Joined
Sep 6, 2012
Messages
16
Wow, I'm blown away by the amount of help I get. Never expected someone would write a script to solve my problem. Many, many thanks!

I can't contribute much to the discussion about whether it is safe to access the .zfs directory. All I can say is that I made it visible and use it in a similar way to fracai. In fact I even publish it as a read-only share, so my file-sharing users can easily grab older versions of a file or directory without having to ask the administrator for help. It is rarely accessed, but so far I have not seen any issues.

Concerning your scripts:

The last one sounds like the "cleaner" way as it does not access the .zfs directory directly. But it should be much slower due to the additional cloning step. Would you agree? So I assume the last but one is probably your favorite, while only the last one would get a blessing from cyberjock.
 

fracai

Guru
Joined
Aug 22, 2012
Messages
1,212
Cloning and destroying is going to take some time, but it shouldn't be prohibitive.

And I enjoy puzzles like this; I'm happy to help.
 