issue with dataset

William Bravin

Contributor
Joined
Mar 16, 2016
Messages
194
Hello all

Please be patient and understanding.

Due to a power outage I had to rebuild the bootable USB to install 11.3 r5. As you can see from my signature, I have 2 FreeNAS servers.

I installed the second server early last year, and since then I have tried to use rsync to replicate all the data from one server to the other.

I have 4 datasets that I need to replicate. They are:
Documents
Movies
Music
tv shows

Since then I have been able to run the rsync tasks for the first 3 datasets with no issues. The tv shows dataset never worked. I have attached the log I get when running the rsync task.

The ACLs for each dataset on each server are identical

I installed the Emby server in its jail and I can load movies and music with no issues. I cannot load tv shows.

I can access all the data in all the folders in all the datasets from Win 10, Win 7 and Linux Mint machines with no issue.

I can run a copy and paste process (either with Windows or TeraCopy) to copy the tv shows dataset from one server to the other from any machine with no issues. However, this takes over 4 days to complete. This is what I have been doing for the last year.

The only issue I can think of is that the tv shows dataset name has a space in it.

If this is the case, is the solution to rename the dataset?

If so, this would mean destroying the tv shows dataset on one server, creating a dataset called tvshows, and copying from one server to the other.


Is my reasoning correct? Or is there a better way to do this?

I thank you in advance for your time in reading this and for your response.
 

Attachments

  • Freenas log.txt
    163 bytes · Views: 287

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,700
The only issue I can think of is that the tv shows dataset name has a space in it.

If this is the case, is the solution to rename the dataset?
It seems from the error in the attached file that a space in the name wouldn't be the cause of that (the rsync connection is being refused, which is a little odd if it worked for other directories, but there's not enough detail there to understand what's going on)... although it may separately be causing you more trouble... I prefer to avoid spaces in names (at least for directories) where possible.
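For what it's worth, if you do end up feeding a path with a space to rsync by hand, the remote side usually needs the space escaped as well, because the remote shell splits the argument a second time. A rough sketch, with placeholder hosts and paths rather than your actual setup:
Code:
# local path only needs normal quoting; the remote path needs the space escaped too,
# since the remote shell re-parses it (most rsync versions also offer -s / --protect-args for this)
rsync -av "/mnt/tank/TV Shows/" "serverB:/mnt/tank/TV\ Shows/"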

is there a better way to do this
You can get the data across using zfs send | zfs recv, which would get you to a block level matching copy from the source to the target.
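Conceptually it's a single pipeline, along these lines (the names here are placeholders, not your actual pool and dataset):
Code:
# snapshot the source, then stream that snapshot to the other box over SSH
zfs snapshot "pool/dataset@copy1"
zfs send "pool/dataset@copy1" | ssh otherhost zfs recv -d pool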
 

William Bravin

Contributor
Joined
Mar 16, 2016
Messages
194
It seems from the error in the attached file that a space in the name wouldn't be the cause of that (the rsync connection is being refused, which is a little odd if it worked for other directories, but there's not enough detail there to understand what's going on)... although it may separately be causing you more trouble... I prefer to avoid spaces in names (at least for directories) where possible.


You can get the data across using zfs send | zfs recv, which would get you to a block level matching copy from the source to the target.
Wow! Thank you @sretalla for the very quick response.

Although I did in the past load the tv shows dataset into Emby with no issues, it seems to me that the off-and-on problem encountered by Emby could be resolved.

I fear that the issue in rsync will still remain.

I apologize, I do not know what you mean by zfs send and zfs recv. How would this command work?
Would I proceed with this after I destroy the tv shows dataset?
And then use the command to move from one server to the other?
Would this be a more expeditious way of doing what I suggested?

On another note, I see from your signature that you migrated to TrueNAS. As I am only using my solution for my home media environment, would you suggest that I too should migrate to TrueNAS? Or would I only be adding additional complexities and features I do not need?

Once again, thank you.
 

Patrick M. Hausen

Hall of Famer
Joined
Nov 25, 2013
Messages
7,776
FreeNAS/TrueNAS provides local and remote synchronisation out of the box. You did not write your own rsync script, did you? There's an entry in the Tasks menu for that.
And right there is also "Replication Tasks". Faster than rsync - plus snapshots, so you have an entire backup history ...
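Once a replication task has run for a while, you can see that history on the target box with something like this (the dataset name is just an example):
Code:
# list every replicated snapshot under the dataset
zfs list -t snapshot -r tank/tv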
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,700
OK, if we have serverA and serverB, with the dataset on serverA called tank/TV Shows which you want to put on serverB in the tank pool (which we'll assume is named the same as the pool on serverA).

The commands might look like this:

zfs snapshot -r "tank/TV Shows@sending"

zfs send -R "tank/TV Shows@sending" | pv | ssh serverB zfs recv tank

You may need to replace the name of serverB with its IP address in the second command

Or you can use a replication task and kill it after it finishes (as suggested by @Patrick M. Hausen).
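If you ever want to top that copy up later instead of starting from scratch, an incremental send from the earlier snapshot would look roughly like this (same placeholder names as above, and it assumes the @sending snapshot still exists on both sides):
Code:
zfs snapshot -r "tank/TV Shows@sending2"
# -i @sending sends only the changes between the two snapshots; add -F to the recv if the target was modified
zfs send -R -i @sending "tank/TV Shows@sending2" | pv | ssh serverB zfs recv -d tank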
 

William Bravin

Contributor
Joined
Mar 16, 2016
Messages
194
OK, if we have serverA and serverB, with the dataset on serverA called tank/TV Shows which you want to put on serverB in the tank pool (which we'll assume is named the same as the pool on serverA).

The commands might look like this:

zfs snapshot -r "tank/TV Shows@sending"

zfs send -R "tank/TV Shows@sending" | pv | ssh serverB zfs recv tank

You may need to replace the name of serverB with its IP address in the second command

Or you can use a replication task and kill it after it finishes (as suggested by @Patrick M. Hausen).
Hello

Thank you for your reply and patience with me

Your description is exactly what I intend to recreate.

Here where I live we have a very weak power grid.

My objective is that, in the event of a server (call it my primary server) going down and something nasty occurring to it, I would be up and running with server B in less than 1 hour.

I would have my media up. This would leave me with the time to enjoy my media whilst the server gets repaired, replaced or rebuilt.

I do this replication every week (I was using rsync for this) and then once a month I do a backup of all my media on a third server (Buffalo NAS) using novastore. Luckily I have never had to retrieve from it.

I am not a computer-savvy kind of guy. I'm like the guy who goes to the flour mill and gets his trousers filled with flour just by standing there.
So I am a bit apprehensive about stuff like SSH.

I do not know what replication does precisely; however, I tried to create a task and run it for the first time, unsuccessfully, and I got this log.

I tried it a second time and it returned completed in a matter of seconds (this did not work, because the dataset is 2.7 TB).

Sorry for the delay in responding, I had my regular nap.

Thank you once again
 

Attachments

  • sreplicationlog.txt
    761 bytes · Views: 272

Patrick M. Hausen

Hall of Famer
Joined
Nov 25, 2013
Messages
7,776
Why don't you replicate the data hourly or at least daily? ZFS replication is fast, cheap and rather simple to set up with FreeNAS/TrueNAS.

That being said - the task is warning you that you are overwriting an active dataset with data in it. You need to start with an empty target for ZFS replication. Once the first snapshot is replicated, it is incremental for future runs.

So create a new empty dataset, e.g. tank/backup and set up the job to replicate tank/tv into tank/backup/tv - the target will be created on the first run.

Once you have changed all your replication jobs to the new system successfully, you can rename things so the backup server looks identical to the primary one. E.g. on the backup server:
Code:
zfs create tank/old-rsync
zfs rename tank/tv tank/old-rsync/tv
zfs rename tank/backup/tv tank/tv
Of course after that you need to adjust your replication task, but since the entire replication history will be included in the renaming, that won't pose a problem.

HTH,
Patrick
 

William Bravin

Contributor
Joined
Mar 16, 2016
Messages
194
Why don't you replicate the data hourly or at least daily? ZFS replication is fast, cheap and rather simple to set up with FreeNAS/TrueNAS.

That being said - the task is warning you that you are overwriting an active dataset with data in it. You need to start with an empty target for ZFS replication. Once the first snapshot is replicated, it is incremental for future runs.

So create a new empty dataset, e.g. tank/backup and set up the job to replicate tank/tv into tank/backup/tv - the target will be created on the first run.

Once you have changed all your replication jobs to the new system successfully, you can rename things so the backup server looks identical to the primary one. E.g. on the backup server:
Code:
zfs create tank/old-rsync
zfs rename tank/tv tank/old-rsync/tv
zfs rename tank/backup/tv tank/tv


Of course after that you need to adjust your replication task, but since the entire replication history will be included in the renaming, that won't pose a problem.

HTH,
Patrick


Thank you @Patrick M. Hausen for your reply

I am reading up on replication and will probably try again tomorrow.

I seem to understand your code; however, I fail to see how you are sending it to the other server. I will think about it and then try to explain to you what I understand.

Once again, thank you for your patience.
 

Patrick M. Hausen

Hall of Famer
Joined
Nov 25, 2013
Messages
7,776
I wrote:
Once you have changed all your replication jobs to the new system successfully, you can rename things so the backup server looks identical to the primary one.

The commands are only to get rid of the tank/backup intermediate dataset after you have changed everything successfully. On the backup system, not on the primary one!

Step 1: set up replication primary:tank/something --> backup:tank/backup/something
Step 2: once this is completed, rename tank/backup/something to tank/something and adjust the replication task
 

William Bravin

Contributor
Joined
Mar 16, 2016
Messages
194
Hello all

My problems are all solved, thank you so much for all your help.

I resolved the issues by going back through the settings in the pool, the shares and the mount points (in jails), where I found out that the issue with TV shows was caused by mixed upper and lower case naming of the datasets, and by ensuring that the ACLs were identical on both servers.

It took a week of trial and error, but it resulted in success.

What I learned from this issue is that I will document the procedure (with screenshots) to use as an installation/configuration manual for my systems (in addition to the usual backups).

Once again, thank you all.
 