Eureka! I made some progress:
I found this thread
https://www.truenas.com/community/threads/zfs-gptid-has-disappeared.80797/ which describes the very same issue regarding the GPTIDs (da1 had no GPTID after importing, but it had one while the pool was exported). It offered no solution, but it somehow led me to this:
https://www.reddit.com/r/zfs/comments/f7b3if/mixed_gptid_and_dev_names_in_zpool_status/
As described in the latter post, I exported the pool and then ran:
glabel refresh /dev/da1
glabel refresh /dev/da2
glabel refresh /dev/da3
glabel refresh /dev/da4
and then imported the pool using:
zpool import -d /dev/gptid z2pool
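As far as I understand it, the -d /dev/gptid option makes zpool import look for devices only under /dev/gptid, which is why the vdevs end up recorded by their gptid paths. Before importing, the refreshed labels can be double-checked with something like:

glabel status | grep gptid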
After this, zpool status showed that all hard drives were listed by their GPTID instead of a mix of names, as opposed to what it looked like before:

root@truenas[/var/log]# zpool status z2pool
  pool: z2pool
 state: ONLINE
status: One or more devices is currently being resilvered.  The pool will
    continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
  scan: resilver in progress since Fri Aug 12 19:11:15 2022
    291G scanned at 2.85G/s, 1.75M issued at 17.5K/s, 23.3T total
    0B resilvered, 0.00% done, no estimated completion time
config:
    NAME                                            STATE     READ WRITE CKSUM
    z2pool                                          ONLINE       0     0     0
      raidz2-0                                      ONLINE       0     0     0
        gptid/2affd65a-0bc6-11ed-bd90-000c29077bb3  ONLINE       0     0     0  (awaiting resilver)
        gptid/10046937-6b06-11eb-9b87-000c29077bb3  ONLINE       0     0     0
        gptid/aa7c274d-0bc3-11ed-bd90-000c29077bb3  ONLINE       0     0     0  (awaiting resilver)
        gptid/9ce5247f-331b-3148-9b38-c958d2bd057a  ONLINE       0     0     0
The WebUI did not show the pool at all at this stage, so I exported it again, rebooted, and imported it normally through the WebUI.
The result: they are all listed by their drive number (daX) now, instead of being mixed. I remember this being the 'normal' state for years (until my self-inflicted rampage). Also, the WebUI is able to access da1's options again, which did not work before!
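To see whether the resilver silently restarts again, my plan is simply to keep an eye on the start timestamp in the status output; if that timestamp resets, the resilver has started over:

zpool status z2pool | grep 'in progress since'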
So I now have two options, I think:
1. See if the resilver runs through now without instantly restarting
2. Fix that strange partitioning mismatch on da4 first; as a reminder, it looked like this:

root@truenas[~]# gpart show /dev/da1
=>         40  15628053088  da1  GPT  (7.3T)
           40           88       - free -  (44K)
          128      4194304    1  freebsd-swap  (2.0G)
      4194432  15623858696    2  freebsd-zfs  (7.3T)
root@truenas[~]# gpart show /dev/da2
=>         40  15628053088  da2  GPT  (7.3T)
           40           88       - free -  (44K)
          128      4194304    1  freebsd-swap  (2.0G)
      4194432  15623858696    2  freebsd-zfs  (7.3T)
root@truenas[~]# gpart show /dev/da3
=>         40  15628053088  da3  GPT  (7.3T)
           40           88       - free -  (44K)
          128      4194304    1  freebsd-swap  (2.0G)
      4194432  15623858696    2  freebsd-zfs  (7.3T)
root@truenas[~]# gpart show /dev/da4
=>         34  15628053101  da4  GPT  (7.3T)
           34         2014       - free -  (1.0M)
         2048  15628034048    1  !6a898cc3-1dd2-11b2-99a6-080020736631  (7.3T)
  15628036096        16384    9  !6a945a3b-1dd2-11b2-99a6-080020736631  (8.0M)
  15628052480          655       - free -  (328K)
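If I read this correctly, da1-da3 have the usual TrueNAS layout (a 2 GiB freebsd-swap partition plus a freebsd-zfs partition), while the two raw type GUIDs on da4 look like the whole-disk layout that ZFS on Solaris/Linux creates: 6a898cc3-1dd2-11b2-99a6-080020736631 should be the ZFS data partition type and 6a945a3b-1dd2-11b2-99a6-080020736631 the small reserved partition. The raw types can also be listed with something like:

gpart list da4 | grep -E 'Name|rawtype'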
I don't know how I would fix that, though... would removing, erasing, and then committing a replace operation on da4 fix the block size and partitioning to be in line with the others?
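For the record, this is a rough sketch of how I imagine the manual route for option 2 would look (untested; the angle-bracket placeholders would need the real IDs, and as far as I know the WebUI's disk Replace function partitions the new disk with the standard swap+zfs layout by itself, so that is probably the safer way):

zpool offline z2pool <da4's current vdev, gptid or da4pX>
gpart destroy -F da4                               # wipe da4's old partition table
gpart create -s gpt da4
gpart add -i 1 -b 128 -t freebsd-swap -s 2g da4    # 2 GiB swap, same as da1-da3
gpart add -i 2 -t freebsd-zfs da4                  # rest of the disk for ZFS
gpart list da4 | grep rawuuid                      # note the rawuuid of da4p2
zpool replace z2pool <da4's old vdev> gptid/<rawuuid of da4p2>

I would only try this once the pool is otherwise healthy, since the raidz2 runs with reduced redundancy while da4 is being rebuilt.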