NFS not showing snapshot rollback changes

hydrian

Dabbler
Joined
Dec 22, 2015
Messages
12
I recently started using snapshots for a versioned YUM repository.

While testing one of my update scripts for my yum repositories, the script accidentally deleted some of the directories I needed to keep. No problem: I had a snapshot of the previous version that I could easily roll back to.

The NFS share is FREENAS1:/mnt/VM_Vol1/GGVA-Yum-Repos/

So I rolled back the snapshot and checked the file structure locally on FreeNAS to confirm the rollback had taken effect. It looked like it had worked without issue.
Code:
root@FREENAS1:/mnt/VM_Vol1/GGVA-Yum-Repos/CURRENT # ls -l /mnt/VM_Vol1/GGVA-Yum-Repos/CURRENT
total 2
drwxr-xr-x  3 root  wheel   3 May 17  2019 epel
drwxrwxr-x  3 496   493     3 Nov 29  2018 extras
lrwxr-xr-x  1 root  wheel  25 Nov 11 13:03 os -> ../../CENTOS/7.6.1810/os/
drwxrwxr-x  3 496   493     3 Nov 29  2018 updates
root@FREENAS1:/mnt/VM_Vol1/GGVA-Yum-Repos/CURRENT # 
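
For reference, the rollback itself was just a plain zfs rollback of the most recent snapshot, something along these lines (the snapshot name here is only a placeholder, not the real one):
Code:
zfs rollback VM_Vol1/GGVA-Yum-Repos/CURRENT@pre-update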


My NFS mount is defined as:
Code:
/mnt/VM_Vol1/GGVA-Yum-Repos -alldirs -maproot="root":"wheel"


So I went to check it on my NFS client, where the YUM update scripts run and the YUM HTTP repositories are hosted. The export is mounted as /mnt/GGVA-Yum-Repos on my CentOS 7 NFS client:
Code:
ls -l /mnt/GGVA-Yum-Repos/CURRENT/
total 1
drwxrwxr-x. 3 496 493 3 Sep 14 07:26 updates

On the client, the mount looks like this:
Code:
freenas1.stc.int:/mnt/VM_Vol1/YUM_Repos on /mnt/data/yum_repos type nfs (rw,noatime,vers=3,rsize=131072,wsize=131072,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=10.1.1.226,mountvers=3,mountport=892,mountproto=udp,local_lock=none,addr=10.1.1.226)
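
That mount entry is actually for the YUM_Repos export. Assuming the GGVA-Yum-Repos export is mounted with the same options, the problem mount would have been created roughly like this (the option list is illustrative, not copied from my fstab):
Code:
mount -t nfs -o vers=3,proto=tcp,hard,noatime,timeo=600,retrans=2 \
    freenas1.stc.int:/mnt/VM_Vol1/GGVA-Yum-Repos /mnt/GGVA-Yum-Repos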


At first I thought this was a client caching issue, but it is not. I rebooted the client machine and saw the same issue. I also tried mounting the share from a new NFS client machine and hit the same issue there.

Nothing noteworthy is coming up in the client or server logs.

Does anybody have any idea what I should throw at this thing?
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
What is your dataset structure?
An output of zfs list might help.
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
lrwxr-xr-x 1 root wheel 25 Nov 11 13:03 os -> ../../CENTOS/7.6.1810/os/
This also looks like a symbolic link to somewhere that is possibly outside your snapshot dataset... What did you restore the snapshot of? The whole pool?
 

hydrian

Dabbler
Joined
Dec 22, 2015
Messages
12
This also looks like a symbolic link to somewhere that is possibly outside your snapshot dataset... What did you restore the snapshot of? The whole pool?
It is a symlink to outside the dataset, and it is supposed to be that way. Once a snapshot is made, I create a mount_nullfs mount from that dataset's .zfs/snapshot/(snap name) directory to a location under another NFS export. That's not the NFS mount I'm having an issue with.
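For reference, that nullfs mount is along these lines (the snapshot and version names here are just examples):
Code:
mount_nullfs /mnt/VM_Vol1/GGVA-Yum-Repos/CURRENT/.zfs/snapshot/v1.2.3 /mnt/VM_Vol1/YUM_Repos/GGVA/1.2.3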
 

hydrian

Dabbler
Joined
Dec 22, 2015
Messages
12
As you requested:
Code:
root@FREENAS1:~ # zfs list
NAME                                                       USED  AVAIL  REFER  MOUNTPOINT
SSD_Pool1                                                  305G   125G    23K  /mnt/SSD_Pool1
SSD_Pool1/Demo-GGVA60                                      102G   204G  22.8G  -
SSD_Pool1/Staging-GGVA62                                   102G   208G  18.9G  -
SSD_Pool1/Staging-GGVA63                                   102G   222G  4.42G  -
SSD_Pool1/iocage                                          3.62M   125G  3.48M  /mnt/SSD_Pool1/iocage
SSD_Pool1/iocage/download                                   23K   125G    23K  /mnt/SSD_Pool1/iocage/download
SSD_Pool1/iocage/images                                     23K   125G    23K  /mnt/SSD_Pool1/iocage/images
SSD_Pool1/iocage/jails                                      23K   125G    23K  /mnt/SSD_Pool1/iocage/jails
SSD_Pool1/iocage/log                                        23K   125G    23K  /mnt/SSD_Pool1/iocage/log
SSD_Pool1/iocage/releases                                   23K   125G    23K  /mnt/SSD_Pool1/iocage/releases
SSD_Pool1/iocage/templates                                  23K   125G    23K  /mnt/SSD_Pool1/iocage/templates
VM_Vol1                                                   1.17T   436G    96K  /mnt/VM_Vol1
VM_Vol1/.system                                            301M   436G   104K  legacy
VM_Vol1/.system/configs-e2eccb3703ad46d2b19f2e4809443384   129M   436G   128M  legacy
VM_Vol1/.system/cores                                     4.51M   436G  2.09M  legacy
VM_Vol1/.system/rrd-e2eccb3703ad46d2b19f2e4809443384       158M   436G  62.2M  legacy
VM_Vol1/.system/samba4                                    1.22M   436G   604K  legacy
VM_Vol1/.system/syslog-e2eccb3703ad46d2b19f2e4809443384   7.25M   436G  6.16M  legacy
VM_Vol1/.system/webui                                       88K   436G    88K  legacy
VM_Vol1/Buildbox7-Data1_2                                  136G   527G  44.7G  -
VM_Vol1/Demo-ArrayAG-2                                    80.8G   497G  19.4G  -
VM_Vol1/Demo-Gradiator2                                   81.3G   495G  22.3G  -
VM_Vol1/Demo-Lieberman                                    91.4G   498G  29.4G  -
VM_Vol1/Demo-NetScaler11                                  30.5G   453G  13.1G  -
VM_Vol1/Demo-Stinger                                      66.0G   487G  15.5G  -
VM_Vol1/Devel-Shrike                                      25.4G   448G  13.1G  -
VM_Vol1/Devel-jmeter2                                     81.3G   494G  22.9G  -
VM_Vol1/GGVA-Yum-Repos                                    87.3G   436G  5.34G  /mnt/VM_Vol1/GGVA-Yum-Repos
VM_Vol1/GGVA-Yum-Repos/CURRENT                            82.0G   436G  25.3G  /mnt/VM_Vol1/GGVA-Yum-Repos/CURRENT
VM_Vol1/ISOs                                               874M  99.1G   874M  /mnt/VM_Vol1/ISOs
VM_Vol1/Isolated_Client                                   30.5G   449G  17.7G  -
VM_Vol1/Linux-Admin                                       69.9G   486G  15.2G  -
VM_Vol1/Prime                                             96.0G   512G  18.3G  -
VM_Vol1/Prod-Thunderbolt                                   139G   563G  12.2G  -
VM_Vol1/Staging-PulseSecure                               50.8G   474G  12.3G  -
VM_Vol1/YUM_Repos                                         81.3G   119G  81.3G  /mnt/VM_Vol1/YUM_Repos
VM_Vol1/demo-PulseSecure92                                50.8G   473G  14.2G  -
freenas-boot                                              3.55G  24.1G    64K  none
freenas-boot/ROOT                                         3.53G  24.1G    29K  none
freenas-boot/ROOT/11.1-U7                                  344K  24.1G   741M  /
freenas-boot/ROOT/11.2-U7                                 2.28G  24.1G   760M  /
freenas-boot/ROOT/9.10.2-U6                               1.25G  24.1G   638M  /
freenas-boot/ROOT/Initial-Install                            1K  24.1G   635M  legacy
freenas-boot/ROOT/default                                  133K  24.1G   636M  legacy
freenas-boot/ROOT/default-20180702-233827                  328K  24.1G   838M  legacy
freenas-boot/grub                                         6.96M  24.1G  6.96M  legacy
 

hydrian

Dabbler
Joined
Dec 22, 2015
Messages
12
I'll try to explain the crazy system I have here.

I have mirrors of the CentOS 7 major releases outside the dataset in question. These are downloaded once and don't change.
Then, over the /mnt/VM_Vol1/GGVA-Yum-Repos NFS export, I do the following:
  • I create a symlink named 'os' in /mnt/VM_Vol1/GGVA-Yum-Repos/CURRENT pointing at the version of CentOS I need to work with.
  • Then I run a script that rsync-mirrors the CentOS updates, extras, and EPEL YUM repositories under the /mnt/VM_Vol1/GGVA-Yum-Repos/CURRENT dataset.
I then create a snapshot of VM_Vol1/GGVA-Yum-Repos/CURRENT. After that, I mount_nullfs VM_Vol1/GGVA-Yum-Repos/CURRENT/.zfs/snapshot/{SNAPSHOT NAME} to /mnt/VM_Vol1/YUM_Repos/GGVA/{My Version Number}. A rough sketch of the whole cycle is below.
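
Putting that together, one cycle looks roughly like the following. This is only a sketch: the version number, snapshot name, and rsync source URLs are placeholders, and in practice the symlink and rsync steps run from the NFS client over the GGVA-Yum-Repos export rather than locally on FreeNAS.
Code:
#!/bin/sh
# Rough sketch of one versioning cycle; version number, snapshot name,
# and rsync source URLs are placeholders, not my real ones.
REPO_VER="1.2.3"
CUR=/mnt/VM_Vol1/GGVA-Yum-Repos/CURRENT

# 1. Create the 'os' symlink to the static CentOS release mirror
#    (it lives outside this dataset; assumes the link does not already exist).
ln -s ../../CENTOS/7.6.1810/os/ "$CUR/os"

# 2. Mirror the repos that actually change into CURRENT.
rsync -a rsync://mirror.example.com/centos/7/updates/x86_64/ "$CUR/updates/"
rsync -a rsync://mirror.example.com/centos/7/extras/x86_64/  "$CUR/extras/"
rsync -a rsync://mirror.example.com/epel/7/x86_64/           "$CUR/epel/"

# 3. Freeze this state as a snapshot.
zfs snapshot VM_Vol1/GGVA-Yum-Repos/CURRENT@v"$REPO_VER"

# 4. Publish the frozen snapshot under the versioned YUM_Repos export.
mkdir -p /mnt/VM_Vol1/YUM_Repos/GGVA/"$REPO_VER"
mount_nullfs "$CUR/.zfs/snapshot/v$REPO_VER" /mnt/VM_Vol1/YUM_Repos/GGVA/"$REPO_VER"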

My NFS client mounts to the /mnt/VM_Vol1/YUM_Repos/ and /mnt/VM_Vol1/GGVA-Yum-Repos NFS exports.

My issue started when my script messed up my /mnt/VM_Vol1/GGVA-Yum-Repos/CURRENT directory. I rolled back to my most recent snapshot of VM_Vol1/GGVA-Yum-Repos/CURRENT. That's when the crap hit the fan. I checked the local files for that directory and they seemed fine. But if any machine looks at it via the NFS export, it looks like this:
Code:
/mnt/GGVA-Yum-Repos/CURRENT
[root@buildbox7 CURRENT]# ls -l
total 1
drwxrwxr-x. 3 496 493 3 Sep 14 07:26 updates

And not the listing that is actually there locally on FreeNAS:
Code:
root@FREENAS1:/mnt/VM_Vol1/GGVA-Yum-Repos/CURRENT # ls -l
total 2
drwxr-xr-x 3 root wheel 3 May 17 2019 epel
drwxrwxr-x 3 496 493 3 Nov 29 2018 extras
lrwxr-xr-x 1 root wheel 25 Nov 11 13:03 os -> ../../CENTOS/7.6.1810/os/
drwxrwxr-x 3 496 493 3 Nov 29 2018 updates
 