Splitting an existing ZFS file system into several, keeping all data

Status
Not open for further replies.

n3mmr

Explorer
Joined
Jan 7, 2015
Messages
82
I have a rather mindlessly conceived file system layout, with a single file system for all media files.

Now I think I need to split this file system into several smaller ones, so that I can choose more freely which protocol to use for read-write sharing of each collection. I wonder what would be the most effective and safest way to do that.

I have the ZFS file system Tank/Media containing the directories Video, Music, ISOs and Backups. Tank/Media is mounted as /mnt/Media.

Originally these were all managed from a single host, a UNIX machine, so I had /mnt/Media exported read-write using NFS, and read-only using CIFS for the benefit of some network media players with CIFS capability.

I have now come to the conclusion that I need to manage the Music and ISOs subtrees over CIFS (since Windows has no usable NFS client in the Home editions), and Video and Backups from UNIX. The reason is that I need to run video transcoding against files inside the Video tree and DLNA distribution of video in a UNIX environment, while ripping, tagging and editing of music is best done in Windows, given the availability of good ripping and tagging software for classical music.

I gather that if you need to export a file system over both NFS and CIFS, only one of the two shares should be read-write, and the other needs to be read-only.

So I need to split Tank/Media up into separately shareable file systems.

I have a lot of data in these subdirectories: Music is about 1 TB, Video about 1.8 TB, and ISOs and Backups around 200 GB each. And I would like to minimize copying if at all possible.

How should I do this to minimize downtime?

I have about 8TB to spare in the zpool.
 

Jailer

Not strong, but bad
Joined
Sep 12, 2014
Messages
4,977
Create the new datasets you want in the GUI and then use an SSH client such as PuTTY to move the files from the CLI. You can use the shell prompt in the GUI, but it is less than optimal. The command to move the files would be:

mv /source /destination
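
For the layout described in the first post, that might look something like this (a sketch only; Music2 is a hypothetical name for a newly created dataset mounted under /mnt/Media):

mv /mnt/Media/Music/* /mnt/Media/Music2/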
 

toadman

Guru
Joined
Jun 4, 2013
Messages
619
I agree with Jailer. Note, though, that when you create the datasets (presumably at the top level of the pool), each one will be mounted at a directory with the same name as the dataset. So if you want to keep the directory names you currently have, I would rename all of those directories first, maybe to [directory].orig or something.

Then create the datasets via the GUI, and then move the files/subdirectories out from under the .orig directories, as sketched below. I'm not sure what "mv /mnt/Media/Music.orig /mnt/Media/Music" would actually do; I would issue "mv /mnt/Media/Music.orig/* /mnt/Media/Music" instead.
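
Put together, the whole sequence for one subtree might look like this (a sketch; it assumes Tank/Media/Music inherits its mountpoint from Tank/Media, and shows zfs create on the CLI purely for illustration, where the GUI is the recommended route):

mv /mnt/Media/Music /mnt/Media/Music.orig     # move the existing data aside
zfs create Tank/Media/Music                   # normally done via the GUI
mv /mnt/Media/Music.orig/* /mnt/Media/Music   # note: the * glob skips dot-files
rmdir /mnt/Media/Music.orig                   # remove the emptied directory

Since the last mv crosses a dataset boundary, it copies the data rather than just renaming it, so expect it to take a while for the larger subtrees.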
 

n3mmr

Explorer
Joined
Jan 7, 2015
Messages
82
mv doesn't really work between datasets (file systems, where I come from...); across a file system boundary it actually does the equivalent of a cp -pRP source targetdir followed by removal of the source.

The proper incantation should be

mv Music oldMusic    # renames the original directory, and doesn't actually move any data
zfs create <options> Tank/Media/Music
...various finishing touches...
cd /mnt/Media/oldMusic
cp -pRP . ../Music

I was just hoping there might be some GUI magic to save me the trouble of thinking.
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
zfs create <options> Tank/Media/Music
No, this is not part of the "proper incantation". The "proper" way to handle this would be to create the dataset via the GUI.

And while you're right that mv does a cp behind the scenes, it still accomplishes what you want to do, albeit taking much longer than a true mv would.
 

n3mmr

Explorer
Joined
Jan 7, 2015
Messages
82
The proper incantation of cp, of course. The zfs create can be handled by the GUI.

A better way than cp is actually

cd <source_path_top>
find . -depth | cpio -pdvm <new_path>

as long as there are no ACLs of importance or extended attributes. (The paths fed to cpio should be relative, hence the cd first; -m preserves modification times, and <new_path> must then be given as an absolute path.)
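
Applied to the renamed directory from earlier in the thread, that would be (assuming the new Music dataset has already been created and mounted):

cd /mnt/Media/oldMusic
find . -depth | cpio -pdvm /mnt/Media/Music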

As I said, I was mostly hoping this was common enough that there might be some sort of special thingie in the GUI.
 