Tasslehoff
Cadet
Joined: Nov 30, 2023
Messages: 1
Hi, recently one of my TrueNAS systems got a faulted NVMe drive used for log and cache, so I swapped the faulted drive for a new one, cloned the partition structure, and ran zpool replace on the faulted device with the corresponding partition on the new drive.
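For reference, the clone-and-replace procedure can be sketched roughly like this. The device names nda0/nda1 are placeholders for the old and new NVMe drives (not from the original post); the gptids are the ones that appear in the zpool status output below. Double-check everything against the actual hardware before running any of it:

```shell
# Placeholder device names: nda0 = faulted NVMe, nda1 = replacement NVMe.
# Copy the GPT partition layout from the old disk onto the new one
# (-F destroys any existing partition table on the target).
gpart backup nda0 | gpart restore -F nda1

# Replace the faulted log-mirror member with the matching partition on
# the new drive; ZFS then starts resilvering onto it.
zpool replace drpool \
    /dev/gptid/16c5daf8-edf9-11eb-80d9-7c10c91fa126 \
    /dev/gptid/b0142897-8d14-11ee-bf44-7c10c91fa126
```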
Code:
root@drsrv01[~]# zpool status drpool
  pool: drpool
 state: DEGRADED
status: One or more devices is currently being resilvered. The pool will
        continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
  scan: resilver in progress since Tue Nov 28 17:44:25 2023
        256G scanned at 1.54M/s, 257G issued at 1.54M/s, 81.2T total
        0B resilvered, 0.31% done, no estimated completion time
config:

        NAME                                              STATE     READ WRITE CKSUM
        drpool                                            DEGRADED     0     0     0
          raidz2-0                                        ONLINE       0     0     0
            gptid/0669279c-eded-11eb-80d9-7c10c91fa126    ONLINE       0     0     0
            gptid/06a65ff5-eded-11eb-80d9-7c10c91fa126    ONLINE       0     0     0
            gptid/06b6a529-eded-11eb-80d9-7c10c91fa126    ONLINE       0     0     0
            gptid/06cdfa15-eded-11eb-80d9-7c10c91fa126    ONLINE       0     0     0
            gptid/06f6faa6-eded-11eb-80d9-7c10c91fa126    ONLINE       0     0     0
            gptid/071217a4-eded-11eb-80d9-7c10c91fa126    ONLINE       0     0     0
            gptid/0709e55c-eded-11eb-80d9-7c10c91fa126    ONLINE       0     0     0
            gptid/07002dfd-eded-11eb-80d9-7c10c91fa126    ONLINE       0     0     0
            gptid/072ebe3b-eded-11eb-80d9-7c10c91fa126    ONLINE       0     0     0
            gptid/073b819f-eded-11eb-80d9-7c10c91fa126    ONLINE       0     0     0
        logs
          mirror-1                                        DEGRADED     0     0     0
            gptid/379f7792-edf9-11eb-80d9-7c10c91fa126    ONLINE       0     0     0
            replacing-1                                   DEGRADED     0     0     0
              435495427710988294                          UNAVAIL      0     0     0  was /dev/gptid/16c5daf8-edf9-11eb-80d9-7c10c91fa126
              gptid/b0142897-8d14-11ee-bf44-7c10c91fa126  ONLINE       0     0     0
        cache
          gptid/44c383dd-edf9-11eb-80d9-7c10c91fa126      UNAVAIL      0     0     0  cannot open
          gptid/4d025c31-edf9-11eb-80d9-7c10c91fa126      ONLINE       0     0     0

errors: No known data errors
As you can see, the pool is pretty big and the resilver is taking forever: in two days it has done only 0.31%.
I was looking for some ZFS tuning parameters to speed things up, and I found something...
Code:
vfs.zfs.scrub_delay=0
vfs.zfs.top_maxinflight=128
vfs.zfs.resilver_min_time_ms=5000
vfs.zfs.resilver_delay=0
...but it seems that vfs.zfs.scrub_delay and vfs.zfs.resilver_delay return an "unknown oid" error.
Please note that I'm running the latest Core version (I updated from 13.0-U4 a few days ago).
Are there any other parameters I can tune to make the resilver a little faster?
Have the vfs.zfs.scrub_delay and vfs.zfs.resilver_delay parameters been replaced with something else?
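One way to see which of these OIDs actually exist on the current OpenZFS-based release, rather than guessing at the old 12.x-era names, is to ask sysctl directly. A sketch, assuming shell access to the box; of the four tunables above, I would expect only vfs.zfs.resilver_min_time_ms to still be present, but the listing will confirm it:

```shell
# Dump every ZFS sysctl related to resilver/scrub/scan activity,
# with its description (-d), to see what this OpenZFS build exposes.
sysctl -a -d | grep -E 'vfs\.zfs\..*(resilver|scrub|scan)'

# Only if it appears in the listing above: raise the minimum time
# per txg spent on resilver I/O (value in milliseconds).
sysctl vfs.zfs.resilver_min_time_ms=5000
```

Note that a plain sysctl set does not survive a reboot; on TrueNAS CORE a persistent value would normally be added as a tunable instead.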
Thanks for any advice.
Tas