SOLVED Preserve AD ACLs from one NAS to another

Status
Not open for further replies.

ArcticRevrus

Cadet
Joined
Mar 26, 2014
Messages
8
Hello,

When attempting to migrate data to our new storage system running 9.10 from another FreeNAS box running 9.3, I cannot find any way to preserve the ACLs of files owned by AD users over any transfer protocol. Both systems are joined to the same domain, but when I attempt to rsync the files as root, I receive the error

rsync: mkstemp "/mnt/raid/Shares/Public/Public/IT/2_0_0_71/.duo-win-login-2.0.0.71.exe.ndsHa9" failed: Operation not permitted (1)

Files that have no AD users on them transfer perfectly fine, and files also transfer properly if I do not use -A. The source system has very detailed ACLs, so manually fixing the permissions is extremely undesirable. I have also tried mounting the source file system on the destination box via NFSv4 and transferring with cp, mounting the CIFS share on a Windows box and using robocopy, and copying the files with scp from the shell, all to no avail.
 

Nick2253

Wizard
Joined
Apr 21, 2014
Messages
1,633
Is there a reason you are not replicating the data using ZFS?
 

ArcticRevrus

Cadet
Joined
Mar 26, 2014
Messages
8
Is there a reason you are not replicating the data using ZFS?
When attempting to use zfs send, the ACLs are preserved; however, the groups do not resolve to the AD groups.

Sending host:

[root@san01] ~# getfacl /mnt/raid/test/test
# file: /mnt/raid/test/test
# owner: root
# group: wheel
group:domain admins:rwxpDdaARWcCo-:fd----:allow
everyone@:rwxpD-a-R-c---:------:allow
group@:rwxpD-a-R-c---:------:allow
owner@:rwxpD-aARWcCo-:------:allow

Receiving host:

[root@newsan01] ~# getfacl /mnt/raid/test/test
# file: /mnt/raid/test/test
# owner: root
# group: wheel
group:10512:rwxpDdaARWcCo-:fd-----:allow
everyone@:rwxpD-a-R-c---:-------:allow
group@:rwxpD-a-R-c---:-------:allow
owner@:rwxpD-aARWcCo-:-------:allow
 

anodos

Sambassador
iXsystems
Joined
Mar 6, 2014
Messages
9,554
When attempting to use zfs send, the ACLs are preserved; however, the groups do not resolve to the AD groups.

Sending host:

[root@san01] ~# getfacl /mnt/raid/test/test
# file: /mnt/raid/test/test
# owner: root
# group: wheel
group:domain admins:rwxpDdaARWcCo-:fd----:allow
everyone@:rwxpD-a-R-c---:------:allow
group@:rwxpD-a-R-c---:------:allow
owner@:rwxpD-aARWcCo-:------:allow

Receiving host:

[root@newsan01] ~# getfacl /mnt/raid/test/test
# file: /mnt/raid/test/test
# owner: root
# group: wheel
group:10512:rwxpDdaARWcCo-:fd-----:allow
everyone@:rwxpD-a-R-c---:-------:allow
group@:rwxpD-a-R-c---:-------:allow
owner@:rwxpD-aARWcCo-:-------:allow
Are both servers joined to the domain?

If you join both to the domain and have the same idmap settings on both servers, then the users/groups on both servers will be identical.
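For comparison, the idmap settings end up in the generated Samba configuration. A sketch of the relevant lines as they might appear on FreeNAS 9.x (YOURDOM and the exact ranges are placeholders for illustration, not values confirmed in this thread):

```ini
[global]
    workgroup = YOURDOM
    security = ads
    # Fallback mappings for entities outside the domain
    idmap config *: backend = tdb
    idmap config *: range = 90000001-100000000
    # Deterministic RID-based mapping for the domain; this range,
    # especially its low end, must match on both servers for the
    # same AD user or group to get the same Unix ID everywhere
    idmap config YOURDOM: backend = rid
    idmap config YOURDOM: range = 10000-90000000
```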
 

ArcticRevrus

Cadet
Joined
Mar 26, 2014
Messages
8
Both are joined to the domain, and both are set to rid. The source NAS did have an issue at one point where I had massive trouble getting it on the domain and had to join it from the shell, so it is possible that the setting is not actually in sync with the web UI database. Is there a command to check the idmap setting from the shell?
 
Last edited:

anodos

Sambassador
iXsystems
Joined
Mar 6, 2014
Messages
9,554
Both are joined to the domain, and both are set to rid. The source NAS did have an issue at one point where I had massive trouble getting it on the domain and had to join it from the shell, so it is possible that the setting is not actually in sync with the web UI database. Is there a command to check the idmap setting from the shell?
You can check the following:
  • "wbinfo -g" and "wbinfo -u" - These should show whether you are pulling domain users and groups
  • "getent group" should also show these groups
  • "cat /usr/local/etc/smb4.conf" - This will show your idmap settings.
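The idmap check in the last bullet can be narrowed to just the relevant lines. A sketch (the sample config written to a temp file here is purely illustrative; on a real FreeNAS 9.x box you would point grep at /usr/local/etc/smb4.conf directly):

```shell
# Write a sample smb4.conf to a temp file purely to illustrate the filter;
# the YOURDOM name and range are placeholders.
conf=$(mktemp)
cat > "$conf" <<'EOF'
[global]
    workgroup = YOURDOM
    idmap config YOURDOM: backend = rid
    idmap config YOURDOM: range = 10000-90000000
EOF

# Keep only the idmap lines; these are what must match between the servers.
idmap_lines=$(grep -i 'idmap config' "$conf")
echo "$idmap_lines"
rm -f "$conf"
```

Running the same filter on both servers and diffing the output makes any backend or range mismatch obvious at a glance.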
 

ArcticRevrus

Cadet
Joined
Mar 26, 2014
Messages
8
Turns out the idmap range was set to 20000-90000000 on the new box and 10000-90000000 on the old one. Changing the RID range to match the old system worked like a charm. Taking a look at smb4.conf made me notice it, so thanks for the help.
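That range mismatch also explains the numeric GID seen earlier: with the rid backend, Samba maps an AD RID to a Unix ID by adding it to the low end of the range, so the old box (base 10000) mapped Domain Admins (well-known RID 512) to GID 10512. zfs send carries that number over verbatim, but the new box (base 20000) expected 20512 for the same group, so 10512 resolved to no name. A quick sketch of the arithmetic:

```shell
# With the rid idmap backend, the Unix ID is LOW_RANGE + RID.
rid=512          # well-known RID of the Domain Admins group
old_base=10000   # start of the idmap range on the old box
new_base=20000   # start of the idmap range on the new box

old_gid=$((old_base + rid))
new_gid=$((new_base + rid))
echo "old box GID: $old_gid"   # 10512, the number zfs send carried over
echo "new box GID: $new_gid"   # 20512, what the new box expected instead
```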

Marking thread fixed.
 

anodos

Sambassador
iXsystems
Joined
Mar 6, 2014
Messages
9,554
Turns out the idmap range was set to 20000-90000000 on the new box and 10000-90000000 on the old one. Changing the RID range to match the old system worked like a charm. Taking a look at smb4.conf made me notice it, so thanks for the help.

Marking thread fixed.
Glad to hear.
 