Brandito
Explorer · Joined May 6, 2023 · Messages: 72
If ZFS serves the data to rsync in a corrupt state without raising an error, then (my guess) zfs send and receive would probably not behave any differently. The advantage of rsync is that you can go directory by directory, which you cannot do with zfs send; the latter only works on full datasets.
My expectation is that rsync will transfer the files and, upon encountering a broken one (and hence an I/O error from ZFS), will log something like "skipped" and continue, or abort altogether. Check the manpage for options, if there are any, to control this.
If aborting is the only thing it can do, you can build an exclude list that you add to as you hit broken files. rsync always continues from where it stopped, so it lends itself to an iterative process, as sketched below.
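A minimal sketch of that iterative loop, assuming the damaged pool is mounted at /mnt/Home and the backup at /mnt/Backup (both paths are placeholders, adjust for your datasets). In practice rsync does not abort on a single bad file by default: it reports the read error, skips that file, keeps going, and exits non-zero (typically code 23, partial transfer), so the exclude list mainly keeps re-runs quiet.

Code:
# Exclude list starts empty and grows as broken files turn up
touch /root/broken-files.txt

# -a preserves metadata, -H keeps hard links; --log-file records per-file errors
rsync -aH --partial --log-file=/root/rsync-home.log \
      --exclude-from=/root/broken-files.txt \
      /mnt/Home/media/ /mnt/Backup/media/

# Pull the failing paths out of the log, append them to the exclude list,
# then re-run; rsync only transfers what is still missing
grep -i 'error' /root/rsync-home.log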
Should I be worried about any of this for the purposes of backing up? I'm not sure why that one drive is not part of the vdev; it's connected to the system and shows up in the web UI as a drive that can be added to a pool. The scrub date is weird too: at the TXG I imported the pool at, I know that scrub was no longer running.
I also see that, despite all but one of the drives showing online in zpool status, the web UI shows the 6 drives belonging to the newest vdev as N/A instead of Home.
Here's the info I collected when I first started trying to roll back to a working TXG, and this is the one I mounted today. The timestamp is from the day the pool went down. However, it seems like I actually rolled back to Nov 10th, which I believe is prior to adding the 4th vdev.
Code:
Uberblock[7]
        magic = 0000000000bab10c
        version = 5000
        txg = 3080103
        guid_sum = 17417729159499886982
        timestamp = 1699889223 UTC = Mon Nov 13 09:27:03 2023
        mmp_magic = 00000000a11cea11
        mmp_delay = 0
        mmp_valid = 0
        checkpoint_txg = 0
        labels = 0 1 2 3
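For anyone following along, this is roughly the procedure I used, sketched from memory, so treat it as approximate rather than a recipe (the device path is just one of my pool members as an example): zdb -ul dumps the uberblocks stored in a device's label, and zpool import -T is OpenZFS's hidden rewind-to-TXG import option.

Code:
# List the uberblocks recorded in one pool member's label
zdb -ul /dev/disk/by-partuuid/a7d78b0d-f891-11ed-a2f8-90e2baf17bf0

# Import read-only, rewound to the chosen TXG
zpool import -o readonly=on -f -T 3080103 Home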
I was able to copy some of my media to my other pool just to see if it would work, and the file plays fine. Even with that one drive gone it's still raidz2, so I don't know if it's worth trying to reattach it to the pool. I assume a resilver would ensue, and since the pool is imported read-only I doubt it would work anyhow. That seems like a problem to tackle once the data is backed up; maybe it isn't something to be concerned about until I see what data can be saved.
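Beyond just playing the file, a quick spot check that a copy is bit-identical (the file names here are hypothetical; any copied file works):

Code:
# Both sums should match if the copy is intact
sha256sum /mnt/Home/media/sample.mkv /mnt/Backup/media/sample.mkv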
Thoughts?
Code:
root@truenas[~]# zpool status Home
  pool: Home
 state: DEGRADED
status: One or more devices could not be used because the label is missing or
        invalid. Sufficient replicas exist for the pool to continue
        functioning in a degraded state.
action: Replace the device using 'zpool replace'.
   see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-4J
  scan: scrub in progress since Fri Nov 10 06:01:35 2023
        0B / 135T scanned, 0B / 135T issued
        0B repaired, 0.00% done, no estimated completion time
config:

        NAME                                      STATE     READ WRITE CKSUM
        Home                                      DEGRADED     0     0     0
          raidz2-0                                ONLINE       0     0     0
            a7d78b0d-f891-11ed-a2f8-90e2baf17bf0  ONLINE       0     0     3
            a7b00eef-f891-11ed-a2f8-90e2baf17bf0  ONLINE       0     0     3
            a7d01f81-f891-11ed-a2f8-90e2baf17bf0  ONLINE       0     0     2
            a7c951e3-f891-11ed-a2f8-90e2baf17bf0  ONLINE       0     0     1
            a7bfef1b-f891-11ed-a2f8-90e2baf17bf0  ONLINE       0     0     3
            e4f37ae1-f494-4baf-94e5-07db0c38cb0c  ONLINE       0     0     3
          raidz2-1                                ONLINE       0     0     0
            8cca2c8f-39ee-40a6-88e0-24ddf3485aa0  ONLINE       0     0     2
            74f3cc23-1b32-4faf-89cc-ba0cd72ba308  ONLINE       0     0     5
            4e5f5b16-6c2b-4e6b-a907-3e1b9b1c4886  ONLINE       0     0     5
            cde58bb6-9d8e-4cdc-a1bf-847f459b459b  ONLINE       0     0     5
            58c22778-521b-4e8f-aadd-6d5ad17a8f68  ONLINE       0     0     2
            33633f68-920b-4a40-bd4d-45e30b6872bc  ONLINE       0     0     2
          raidz2-2                                ONLINE       0     0     0
            2a2e5211-d4ea-4da9-8ea5-bdabdc542bdb  ONLINE       0     0     5
            56c07fd7-6cb6-4985-9a20-2b5ff9d42631  ONLINE       0     0     5
            1147286d-8cd8-4025-8e5d-bbf06e2bd795  ONLINE       0     0     6
            7e1fa408-7565-4913-b045-49447ef9253b  ONLINE       0     0     9
            3d56d2fa-d505-4bea-b9a2-80c121e4e559  ONLINE       0     0     9
            a9906b32-2690-4f7b-8d8f-00ca915d8f3d  ONLINE       0     0     8
          raidz2-5                                DEGRADED     0     0     0
            b8c63108-353b-4ed7-a927-ca3df817bd21  ONLINE       0     0     0
            58782264-02f1-41c6-9b91-d07144cb0ccb  ONLINE       0     0     0
            03df98a5-a86d-4bc8-879a-5cf611d4306c  ONLINE       0     0     0
            12991589318322434965                  UNAVAIL      0     0     0  was /dev/disk/by-partuuid/1a865d37-0e03-4dd8-a0f4-96f35e6fcfd3
            a5786a1f-a7ad-4a30-877a-88a03c94a774  ONLINE       0     0     0
            4c59238e-5cbd-428e-8a72-a018d9dae9c2  ONLINE       0     0     0
        logs
          mirror-6                                ONLINE       0     0     0
            5ba1f70b-be51-470f-94ed-777683425477  ONLINE       0     0     0
            f2605776-46a9-4455-a4bc-322d4cf8a688  ONLINE       0     0     0

errors: No known data errors