Here's the TLDR;
So IIUC, the steps for someone starting from scratch would be:
Code:
zfs create freenas-boot/CUSTOM
zfs set mountpoint=/CUSTOM freenas-boot/CUSTOM
zfs set canmount=on freenas-boot/CUSTOM
zfs mount freenas-boot/CUSTOM
The dataset is now mounted at /CUSTOM and ready for use.
Obviously it makes good sense to replicate to the storage pool or another system for backup (especially before running an update), but this strategy appears to be a very viable way to keep custom scripts available even if the storage pool is offline.
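The backup step mentioned above could be sketched roughly like this (a hedged example only; the destination "tank/backup/CUSTOM" and the snapshot name are placeholders I made up, not names from this system):

```shell
# Sketch: snapshot the boot-pool dataset and replicate it to the main
# storage pool before running an update. Requires a live ZFS system;
# "tank/backup/CUSTOM" is a hypothetical destination dataset.
zfs snapshot freenas-boot/CUSTOM@pre-update
zfs send freenas-boot/CUSTOM@pre-update | zfs recv tank/backup/CUSTOM
```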
================================================================
I know this isn't considered a best practice/supported action, but for maintenance I want a dataset "CUSTOM" on the boot pool so that my maintenance scripts are available if for some reason my main pool is offline. This has been working well, but now I need to learn how to manage an upgrade. (I've got lots of backup copies if I need to destroy the pool and recreate it.)
I just went through the process of upgrading from 11.1-RELEASE to 11.1-U4, and I need a little bit of guidance with migrating my CUSTOM dataset.
Before the upgrade, things looked like this:
Code:
#>zfs list -t all -r freenas-boot/ROOT/11.1-RELEASE/CUSTOM
NAME                                                             USED  AVAIL  REFER  MOUNTPOINT
freenas-boot/ROOT/11.1-RELEASE/CUSTOM                           82.3M   103G  81.7M  /CUSTOM
freenas-boot/ROOT/11.1-RELEASE/CUSTOM@CUSTOM_2018-04-15_232104   104K      -  81.6M  -
freenas-boot/ROOT/11.1-RELEASE/CUSTOM@CUSTOM_2018-04-16_054026    72K      -  81.6M  -
freenas-boot/ROOT/11.1-RELEASE/CUSTOM@CUSTOM_2018-04-18_064920   112K      -  81.7M  -
freenas-boot/ROOT/11.1-RELEASE/CUSTOM@CUSTOM_2018-04-19_054729   112K      -  81.7M  -
freenas-boot/ROOT/11.1-RELEASE/CUSTOM@CUSTOM_2018-04-21_233201   148K      -  81.7M  -
freenas-boot/ROOT/11.1-RELEASE/CUSTOM@CUSTOM_2018-04-23_204519      0      -  81.7M  -
freenas-boot/ROOT/11.1-RELEASE/CUSTOM@CUSTOM_2018-05-03_004248      0      -  81.7M  -
After the upgrade, they look like this:
Code:
freenas-boot                                                   5.24G   102G   176K  none
freenas-boot/.system                                            110M   102G  4.34M  legacy
freenas-boot/.system/configs-66c5ca0d9b594eb08a0e7191ec86e4a6  19.6M   102G  19.6M  legacy
freenas-boot/.system/configs-9d613bc4d69d4caa9ab03b2439285b53   136K   102G   136K  legacy
freenas-boot/.system/cores                                     16.1M   102G  16.1M  legacy
freenas-boot/.system/rrd-66c5ca0d9b594eb08a0e7191ec86e4a6      28.2M   102G  28.2M  legacy
freenas-boot/.system/rrd-9d613bc4d69d4caa9ab03b2439285b53      3.85M   102G  3.85M  legacy
freenas-boot/.system/samba4                                     496K   102G   496K  legacy
freenas-boot/.system/syslog-66c5ca0d9b594eb08a0e7191ec86e4a6   36.8M   102G  36.8M  legacy
freenas-boot/.system/syslog-9d613bc4d69d4caa9ab03b2439285b53    296K   102G   296K  legacy
freenas-boot/ROOT                                              5.11G   102G   136K  none
freenas-boot/ROOT/11.0-U3                                       232K   102G   970M  /
freenas-boot/ROOT/11.0-U4                                       248K   102G   973M  /
freenas-boot/ROOT/11.1-RELEASE                                  432K   102G  1.08G  /
freenas-boot/ROOT/11.1-RELEASE/CUSTOM                             8K   102G  81.7M  /CUSTOM
freenas-boot/ROOT/11.1-U4                                      5.11G   102G  1.09G  /
freenas-boot/ROOT/11.1-U4@2017-09-20-01:21:53                  4.05M      -   980M  -
freenas-boot/ROOT/11.1-U4@2017-09-19-23:36:41                  4.06M      -   980M  -
freenas-boot/ROOT/11.1-U4@2017-09-29-02:04:28                   969M      -   970M  -
freenas-boot/ROOT/11.1-U4@2017-12-15-20:16:16                   971M      -   973M  -
freenas-boot/ROOT/11.1-U4@2018-05-03-01:15:15                  1.08G      -  1.08G  -
freenas-boot/ROOT/11.1-U4/CUSTOM                               82.4M   102G  81.7M  /CUSTOM
freenas-boot/ROOT/11.1-U4/CUSTOM@CUSTOM_2018-04-15_232104       104K      -  81.6M  -
freenas-boot/ROOT/11.1-U4/CUSTOM@CUSTOM_2018-04-16_054026        72K      -  81.6M  -
freenas-boot/ROOT/11.1-U4/CUSTOM@CUSTOM_2018-04-18_064920       112K      -  81.7M  -
freenas-boot/ROOT/11.1-U4/CUSTOM@CUSTOM_2018-04-19_054729       112K      -  81.7M  -
freenas-boot/ROOT/11.1-U4/CUSTOM@CUSTOM_2018-04-21_233201       148K      -  81.7M  -
freenas-boot/ROOT/11.1-U4/CUSTOM@CUSTOM_2018-04-23_204519          0      -  81.7M  -
freenas-boot/ROOT/11.1-U4/CUSTOM@CUSTOM_2018-05-03_004248          0      -  81.7M  -
freenas-boot/ROOT/11.1-U4/CUSTOM@2018-05-03-01:15:15               0      -  81.7M  -
freenas-boot/ROOT/Initial-Install                                 8K   102G   980M  legacy
freenas-boot/ROOT/default                                       232K   102G   980M  legacy
freenas-boot/grub                                              7.80M   102G  7.80M  legacy
When I initially tried to access /CUSTOM/ all I got was an empty directory. I was able to access my data (and the mount was Read/Write) by executing the command:
zfs mount -v freenas-boot/ROOT/11.1-RELEASE/CUSTOM
My sense, however, is that this isn't what I should be doing, since the new boot environment is freenas-boot/ROOT/11.1-U4. I would like to continue forward with freenas-boot/ROOT/11.1-U4/CUSTOM and eventually destroy freenas-boot/ROOT/11.1-RELEASE sometime in the future, once I am certain that I won't need to roll back.
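If the upgrade created freenas-boot/ROOT/11.1-U4/CUSTOM as a clone of a snapshot of the RELEASE dataset (the @2018-05-03-01:15:15 snapshot in the listing hints at that), destroying the old boot environment would first require promoting the clone. A sketch, assuming that clone relationship actually exists:

```shell
# Check whether the U4 dataset is a clone (an origin other than "-" means yes).
zfs get origin freenas-boot/ROOT/11.1-U4/CUSTOM
# If it is, promote it: this reverses the clone/origin dependency so the
# RELEASE dataset can later be destroyed without taking the data with it.
zfs promote freenas-boot/ROOT/11.1-U4/CUSTOM
# Only once certain a rollback will never be needed:
zfs destroy -r freenas-boot/ROOT/11.1-RELEASE
```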
I tried the following:
Code:
#>umount /CUSTOM/
#>ls -la /CUSTOM/
total 17
drwxr-xr-x   2 root  wheel   2 Jan  5 01:49 .
drwxr-xr-x  24 root  wheel  31 May  3 01:33 ..
#>zfs mount -v freenas-boot/ROOT/11.1-U4/CUSTOM
cannot mount 'freenas-boot/ROOT/11.1-U4/CUSTOM': 'canmount' property is set to 'off'
but I was not able to mount freenas-boot/ROOT/11.1-U4/CUSTOM. Why not?
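The error message itself names the cause: the upgrade left canmount=off on the new dataset. Assuming that property is the only obstacle, a minimal fix would presumably be:

```shell
# Confirm the property, then enable and mount the dataset.
zfs get canmount freenas-boot/ROOT/11.1-U4/CUSTOM
zfs set canmount=on freenas-boot/ROOT/11.1-U4/CUSTOM
zfs mount freenas-boot/ROOT/11.1-U4/CUSTOM
```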
I'm also confused as to why both freenas-boot/ROOT/11.1-U4/CUSTOM and freenas-boot/ROOT/11.1-RELEASE/CUSTOM show /CUSTOM as their mountpoint; unless I am missing something, only one dataset can occupy a mount point at a time.
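On that second point, the MOUNTPOINT column in zfs list reports a property (where the dataset would mount), not whether it is actually mounted, so both datasets can carry mountpoint=/CUSTOM while at most one occupies the path. The read-only "mounted" property tells them apart, for example:

```shell
# Show which of the two CUSTOM datasets is actually mounted at /CUSTOM.
zfs list -o name,mountpoint,canmount,mounted -r freenas-boot/ROOT
```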
Should I be exporting the 11.1-RELEASE pool?
I have a symlink to the mountpoint from root - could this be causing the problem?
Code:
#>ls -la /root/CUSTOM
lrwxr-xr-x  1 root  wheel  7 Jan 15 00:10 /root/CUSTOM -> /CUSTOM
I can easily delete/recreate it later if necessary.
For now,
zfs mount -v freenas-boot/ROOT/11.1-RELEASE/CUSTOM
seems to work, but I'm sure it's not what I should be doing. Any hints/suggestions would be much appreciated. This doesn't look like it should be difficult to solve, but there are obviously a couple of fine points that I'm missing.