disk pool throwing alert

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
I think we still haven't run rsync with the right command yet to be sure.

rsync -avn /mnt/tank/media/ /mnt/bunker/media/

Seems I'm not remembering things as well as I used to.. trailing slashes are important here in the paths.
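The trailing-slash rule can be seen on any scratch directory. A minimal sketch (the paths here are throwaway temp dirs, not the pools from this thread):

```shell
# Trailing slash on the SOURCE changes what rsync copies:
#   "src/" -> copy the CONTENTS of src into the destination
#   "src"  -> copy the directory src ITSELF into the destination
tmp=$(mktemp -d)
mkdir -p "$tmp/src"
echo data > "$tmp/src/file.txt"

rsync -a "$tmp/src/" "$tmp/with_slash/"   # creates $tmp/with_slash/file.txt
rsync -a "$tmp/src"  "$tmp/no_slash/"     # creates $tmp/no_slash/src/file.txt

find "$tmp/with_slash" "$tmp/no_slash" -type f
```

Without the slash, the second form nests an extra `src` directory under the destination, which is how the same data can end up copied twice.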
 

georgelza

Patron
Joined
Feb 24, 2021
Messages
417
will try that one...

not to worry, i also tend to write a lot of things down these days...

G
 

georgelza

Patron
Joined
Feb 24, 2021
Messages
417
I see we previously ran:

tmux new-session -d -s rsync 'rsync -anv /mnt/tank/media /mnt/bunker/media'

without the trailing slashes... think this is one of the ones that then filled the diskpool.

G
 

georgelza

Patron
Joined
Feb 24, 2021
Messages
417
@sretalla guess this is good news... the 2 drives that never reported an error on SMART but were throwing alerts: badblocks just completed on them and they're all good.
so the plan is to take them together with the 3 drives from tank and create a new 5-wide RaidZ2 tank now.
G
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
OK, sounds like we're making progress then.
 

georgelza

Patron
Joined
Feb 24, 2021
Messages
417
for now the different directories that were hosted on "tank/media" have been renamed, to see if anything breaks that might still be using them. if all is still good this coming wknd then i'll drop the dataset and diskpool and rebuild, adding the 2 drives to create the new tank that's 5 wide.
in the end i just gave up on trying to do a confirm on the data... tank/media vs bunker/media
going with blind faith, as the act of wanting to be diligent and confirm caused more of a headache.
G
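For what it's worth, comparing two trees doesn't have to be done by eye; `diff -rq` recurses and reports only files that differ or are missing. A sketch on throwaway dirs (the real pool paths from the thread are shown only in comments):

```shell
# Sketch: compare two directory trees and list only mismatches
tmp=$(mktemp -d)
mkdir -p "$tmp/a" "$tmp/b"
echo same  > "$tmp/a/keep.txt"
echo same  > "$tmp/b/keep.txt"
echo extra > "$tmp/b/only_in_b.txt"

# Reports the file that exists only on one side
diff -rq "$tmp/a" "$tmp/b" || true

# On the real pools the equivalent would be something like:
#   diff -rq /mnt/tank/media /mnt/bunker/media
# or a checksum-based rsync dry run (copies nothing, compares contents):
#   rsync -rcn /mnt/tank/media/ /mnt/bunker/media/
```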
 

georgelza

Patron
Joined
Feb 24, 2021
Messages
417
@sretalla
for "tank"
I've renamed all the folders... to break anything that pointed to the folders.
I've also changed permissions on the dataset so that no one has any permissions other than "other" having execute; had to leave something in place.
Is there a way to "offline" the diskpool for a day or 2... trying to make sure nothing is dependent on it...
not even a snapshot or anything :)
before I destroy the tank diskpool and recreate as a 5 wide.
G
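Taking a pool fully offline for a trial period is what `zpool export` does; the commands below use the pool name from this thread and are shown for reference only (on TrueNAS the export is normally done through the GUI so the middleware stays in sync):

```shell
# Export (offline) the pool: its datasets unmount and it disappears from the system
zpool export tank

# If anything turns out to still need it, bring it back with:
zpool import tank
```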
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703

georgelza

Patron
Joined
Feb 24, 2021
Messages
417
I executed it. they are showing as exported under diskpool/disks...

got the following when I executed the command:

root@vaultx[~]# zpool export tank

root@vaultx[~]# 2023 Aug 18 09:43:03 vaultx md/raid1:md125: Disk failure on sde1, disabling device.
md/raid1:md125: Operation continuing on 1 devices.
2023 Aug 18 09:43:04 vaultx md/raid1:md127: Disk failure on sdi1, disabling device.
md/raid1:md127: Operation continuing on 1 devices.
root@vaultx[~]#
root@vaultx[~]#

/dev/sdb, /dev/sdc and /dev/sde are what comprised tank

/dev/sdi and /dev/sdj belong to "app", but interestingly nowhere other than here are any "errors" reported.

Please comment.

G
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
md raid is what's used for SWAP.

it's normal that exporting a pool also results in any of its member disks that were being used for SWAP being removed from the SWAP mirror.

Ignore those messages.
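The md mirrors in those messages can be inspected directly; a quick check (output will vary per system, and the fallback message covers machines with no md devices at all):

```shell
# Show md RAID devices -- on TrueNAS SCALE these typically back swap, not pool data
cat /proc/mdstat 2>/dev/null || echo "no md devices on this system"

# List active swap devices, if any
swapon --show 2>/dev/null || true
```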
 

georgelza

Patron
Joined
Feb 24, 2021
Messages
417
will leave the system as is till Monday; if I don't find anything weird... then i will drop the diskpool

G
 

georgelza

Patron
Joined
Feb 24, 2021
Messages
417
@sretalla FYI, dropped "tank" this past weekend and recreated it as a 5-wide RaidZ2 diskpool.

will look at moving some data back onto it.

thanks for all the help.

G
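For reference, the CLI equivalent of what was done looks like the sketch below; the device names are placeholders, not the actual disks from this thread, and on TrueNAS the pool should really be created through the GUI so the middleware tracks it:

```shell
# Sketch only: a 5-wide RAIDZ2 pool (device names are placeholders)
zpool create tank raidz2 sda sdb sdc sdd sde

# Verify layout and health afterwards
zpool status tank
```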
 