Apps still trying to mount old pool after migration

ch36u3v4r4

Apologies in advance if I missed this in the documentation or elsewhere in the forum.
I'm having an issue with some of my apps being unable to deploy.
I began getting errors on the disk containing my application pool (named jails).
I added new disks, created a new pool for my apps (named apps), and used the GUI to "Choose Pool" and "Migrate applications to the new pool."
The apps seemed to run normally on the new pool for a while, until I finally disconnected the old pool; since then, the pre-existing apps have failed to deploy. Apps created after the migration to the new pool deploy and work as expected.
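In case it helps with diagnosis, here is a rough sketch of the shell checks for confirming that the apps service now points at the new pool and that the migrated release datasets exist there (the midclt method name and the dataset layout under "apps" are assumptions on my part):

# Show which pool/dataset the apps (Kubernetes) service is configured to use
# (kubernetes.config is assumed to be the relevant middleware method on SCALE 22.02)
midclt call kubernetes.config
# List the migrated release datasets on the new pool (path is an assumption)
zfs list -r apps/ix-applications/releases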
The Plex server reports the error below; the other affected apps show the same error with their own PVC IDs and paths.
(combined from similar events): MountVolume.SetUp failed for volume "pvc-e547118a-a11c-4131-be37-a22555f03148" : rpc error: code = Internal desc = zfs get mountpoint failed, cannot open 'jails/ix-applications/releases/plexmediaserver/volumes/pvc-e547118a-a11c-4131-be37-a22555f03148': dataset does not exist

Unable to attach or mount volumes: unmounted volumes=[config], unattached volumes=[temp varlogs list-1 config list-0 list-2 shared shm kube-api-access-hgl7d]: timed out waiting for the condition
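
To see where Kubernetes thinks that volume lives versus what actually exists on disk, the PersistentVolume spec can be compared against the ZFS datasets; a rough sketch using the PVC ID from the Plex error above (the field names in the grep are an assumption about the ZFS CSI driver SCALE uses):

# Show which pool and dataset the PersistentVolume still references
k3s kubectl get pv pvc-e547118a-a11c-4131-be37-a22555f03148 -o yaml | grep -iE 'poolname|volumeHandle'
# Check whether that dataset exists under the old pool path or the new one
# (the first command is expected to fail once the old pool is disconnected)
zfs list jails/ix-applications/releases/plexmediaserver/volumes/pvc-e547118a-a11c-4131-be37-a22555f03148
zfs list apps/ix-applications/releases/plexmediaserver/volumes/pvc-e547118a-a11c-4131-be37-a22555f03148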
I'm running SCALE 22.02.4, upgraded in place from CORE.
I'm filing a bug report with iX, but I thought I would post here as well in case someone can help. Thanks!
 