Pool mounted to root after manual recovery

Fogl

Cadet
Joined
Sep 29, 2019
Messages
3
So, I've been running a little FreeNAS-on-ESXi combo under my desk for years.
Tonight I was doing some network maintenance, and instead of pulling on the QSFP handle to unplug the network cable, I accidentally pulled on the SFF-8088 handle and yanked the SAS cable between the controller (H310) and the disk array (an old Rackable) with eight 4TB drives. In that disk array I have one encrypted RAIDZ2 pool, "tank" (yeah, so unique), spanning all eight drives.
After the boot, "tank" was gone; it seemed like my FreeNAS had completely forgotten about it. After a moment of panic, I used my geli key and was able to unlock all eight drives with the "geli attach" command. As soon as all the disks were unlocked, I ran "zpool import tank". That seemed to work just fine and the UI showed the tank pool as present and healthy. To make sure FreeNAS picked it up, I restarted, and the tank pool showed up in the list of pools, this time automatically.
I thought all was good and everything was back, but when I attempted to access my data, there was nothing in the /mnt/tank folder. I was puzzled for a moment and felt my heart sink. But then I checked the mount point of "tank" and behold, that pool got mounted at the root. I quickly checked some data and it seems like everything is in there, just mounted in the wrong place.
I tried "zpool export tank" and "zpool import tank" again, but again, my pool ended up being in the root.
Can anybody help? What am I missing?

Thanks a lot!
 

Fogl

Cadet
Joined
Sep 29, 2019
Messages
3
Meh, I guess I have to use the GUI import to get this working properly. Silly me.
So I did an export, then an import using the GUI, and tank is where it is supposed to be.
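For anyone hitting the same thing: as far as I understand it, the GUI import differs from a plain "zpool import tank" in that the middleware imports the pool with an altroot of /mnt and points it at the system cachefile. The CLI equivalent would be something like this (my rough understanding of the usual FreeNAS layout, not the exact middleware code):

Code:
zpool export tank
# import under /mnt and record the pool in the cachefile FreeNAS reads at boot
zpool import -o altroot=/mnt -o cachefile=/data/zfs/zpool.cache tank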
Now a reboot to make sure everything sticks ... and FreeNAS is hanging at "starting cron"???
Well, it is actually running; I can SSH into it, and when I run top, this is what I get:

Code:
 4945 root          1 101    0 68648K 62560K RUN     3   1:49  90.42% python3.6
 4727 root         11  20    0 90516K 48672K nanslp  3   1:05  64.10% collectd
  275 root         28  52    0   195M   164M usem    1   0:34  50.07% python3.6
 5482 root          1  77    0 24148K 17944K RUN     0   0:03  27.52% testparm
 4399 root          8  20    0 24840K  9980K select  3   0:15  27.08% rrdcached
  385 root          3  78    0   119M   105M RUN     1   0:07  22.78% python3.6
 5446 root          1  79    0 23544K 18204K RUN     2   0:22  13.77% python3.6
 5490 root          1  35    0  8196K  3916K CPU2    2   0:01   4.55% top
 2409 root          1  25    0 11400K  5628K CPU3    3   0:09   4.55% vmtoolsd


What the heck is python doing???
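Checking what a busy process actually is can be done with the PID from top, something like:

Code:
# print the full command line behind the busiest PID from the top output above
ps -ww -o pid,command -p 4945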
 

Fogl

Cadet
Joined
Sep 29, 2019
Messages
3
I guess it was Samba freaking out for some reason. As soon as I turned off the SMB service, all usage dropped back to idle. After restarting SMB, everything still seems to be fine.
I guess time will tell, but a scrub never hurts ...
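Nothing fancy about the scrub itself, just the usual:

Code:
zpool scrub tank
# check progress and any errors it turns up
zpool status -v tank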
 