Hi all,
Two weeks back, I upgraded my box to 11.2-RELEASE. That seemed to work fine, and to clean things up after the upgrade I also reinstalled PLEX, this time using iocage.
All had been running fine since then, until yesterday when I applied the U1 patch release (to fix the AFP security issue). After this install, the PLEX iocage jail was marked as "CORRUPT".
I tried to delete it from the UI, but that didn't work at all: a progress popup flashed for about half a second (impossible to read what it said), then it returned to the UI with the message 'deleted' at the bottom. But the jail still showed up. So I tried deleting (destroying) it via the CLI instead, and that seemed to work. I then deleted the ZFS dataset which held the jail's data.
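For reference, this is roughly what I ran from the CLI (quoting from memory, so treat the exact jail and dataset names as approximate):
Code:
# destroy the corrupt jail, then remove its leftover dataset
iocage destroy plex
zfs destroy -r SSD_pool/iocage/jails/plex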
Now I'm trying to reinstall PLEX, and that's where things aren't working: the GUI tells me "Release 11.2-RELEASE missing, will attempt to fetch it." and then sits like that for hours and hours without any result. When I then refresh the UI, it really gets upset and asks me to "Activate" iocage, which I try to do, but that doesn't work either; only a reboot sets the situation straight.
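I assume the GUI is effectively doing something like the following under the hood (I'm guessing here, I haven't traced it), so perhaps I should try it manually from a shell next:
Code:
# point iocage at the pool, then fetch the release the GUI says is missing
iocage activate SSD_pool
iocage fetch -r 11.2-RELEASE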
I then reverted my boot environment to 11.2-RELEASE (not U1) and tried again: same thing. Then I tried once more and left it running overnight, only to find it still stuck in the same spot this morning.
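For completeness: I reverted the boot environment via the GUI; I believe the CLI equivalent would be something like this (boot environment name taken from my zfs list below):
Code:
# activate the pre-U1 boot environment, then reboot into it
beadm activate 11.2-RELEASE
reboot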
What I find weird (not sure if that is intended) is that my "iocage" dataset, which sits at SSD_pool/iocage in the ZFS pool hierarchy, is mounted directly at /mnt/iocage instead of /mnt/SSD_pool/iocage. This could be normal, but as I don't know, I'm asking about that as well.
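In case it helps with diagnosing, I figure the mountpoint can be checked like this (I assume iocage sets this property itself when a pool is activated, but I haven't confirmed that):
Code:
# show where the mountpoint property comes from (local vs. inherited)
zfs get -r mountpoint SSD_pool/iocage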
This is what my "zfs list" shows (I did not include the other pools, which I think are irrelevant here):
Code:
NAME                                          USED  AVAIL  REFER  MOUNTPOINT
SSD_pool                                     8.48G   205G   112K  /mnt/SSD_pool
SSD_pool/VM_data                             3.44G   205G    88K  /mnt/SSD_pool/VM_data
SSD_pool/VM_data/RancherData                 3.44G   205G  3.15G  /mnt/SSD_pool/VM_data/RancherData
SSD_pool/WebDav                              1.03M   205G   564K  /mnt/SSD_pool/WebDav
SSD_pool/iocage                              4.84G   205G   100K  /mnt/iocage
SSD_pool/iocage/download                      532M   205G    88K  /mnt/iocage/download
SSD_pool/iocage/download/11.1-RELEASE         260M   205G   260M  /mnt/iocage/download/11.1-RELEASE
SSD_pool/iocage/download/11.2-RELEASE         272M   205G   272M  /mnt/iocage/download/11.2-RELEASE
SSD_pool/iocage/images                         88K   205G    88K  /mnt/iocage/images
SSD_pool/iocage/jails                        2.42G   205G    88K  /mnt/iocage/jails
SSD_pool/iocage/jails/unifi                  2.42G   205G    96K  /mnt/iocage/jails/unifi
SSD_pool/iocage/jails/unifi/root             2.42G   205G  3.30G  /mnt/iocage/jails/unifi/root
SSD_pool/iocage/log                           152K   205G    92K  /mnt/iocage/log
SSD_pool/iocage/releases                     1.90G   205G    88K  /mnt/iocage/releases
SSD_pool/iocage/releases/11.1-RELEASE         973M   205G    88K  /mnt/iocage/releases/11.1-RELEASE
SSD_pool/iocage/releases/11.1-RELEASE/root    973M   205G   973M  /mnt/iocage/releases/11.1-RELEASE/root
SSD_pool/iocage/releases/11.2-RELEASE         972M   205G    88K  /mnt/iocage/releases/11.2-RELEASE
SSD_pool/iocage/releases/11.2-RELEASE/root    972M   205G   972M  /mnt/iocage/releases/11.2-RELEASE/root
SSD_pool/iocage/templates                      88K   205G    88K  /mnt/iocage/templates
freenas-boot                                 6.76G  48.9G    64K  none
freenas-boot/ROOT                            6.72G  48.9G    29K  none
freenas-boot/ROOT/11.0-U2                     158K  48.9G   739M  /
freenas-boot/ROOT/11.1-RELEASE                282K  48.9G   828M  /
freenas-boot/ROOT/11.1-U4                     395K  48.9G   838M  /
freenas-boot/ROOT/11.1-U5                     307K  48.9G   840M  /
freenas-boot/ROOT/11.1-U6                     434K  48.9G   840M  /
freenas-boot/ROOT/11.2-RELEASE               5.98G  48.9G   764M  /
freenas-boot/ROOT/11.2-RELEASE-U1             760M  48.9G   763M  /
freenas-boot/ROOT/9.10.2-U6                   137K  48.9G   638M  /
freenas-boot/ROOT/Initial-Install               1K  48.9G   636M  legacy
freenas-boot/ROOT/Wizard-2017-07-30_12-30-44    1K  48.9G   739M  /
freenas-boot/ROOT/default                     218K  48.9G   637M  legacy
freenas-boot/grub                            7.03M  48.9G  7.03M  legacy
And these are my mounts (again, not all pools are included):
Code:
drwxr-xr-x   9 root  wheel  10 Dec 16 12:49 iocage
-rw-r--r--   1 root  wheel   5 Dec 16 12:19 md_size
drwxrwxr-x+  4 root  wheel   6 Dec 30 19:11 SSD_pool
This is my iocage setup:
Code:
iocage list
+-----+-------+-------+--------------+---------------+
| JID | NAME  | STATE |   RELEASE    |      IP4      |
+=====+=======+=======+==============+===============+
| -   | unifi | down  | 11.1-RELEASE | 172.16.10.244 |
+-----+-------+-------+--------------+---------------+

iocage list -r
+---------------+
| Bases fetched |
+===============+
| 11.1-RELEASE  |
+---------------+
| 11.2-RELEASE  |
+---------------+

iocage get -p
SSD_pool
The unifi jail works fine (as long as I manually start it each time; I still need to find out how to make it autostart).
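I suspect autostart is just a jail property, something like the line below, but I haven't verified that yet:
Code:
# mark the jail to start at boot (untested on my end)
iocage set boot=on unifi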
Anybody have any ideas?
Thanks,
B.