kdragon75
Wizard
- Joined: Aug 7, 2016
- Messages: 2,457
I'm in the process of migrating data from one server to another using

zfs send data@snap | nc -w 10 192.168.70.11 8099

on the sending side and

nc -w 120 -l 8099 | zfs recv -F pico

on the receiving side. This is from a set of four mirror vdevs to a set of two mirror vdevs of identical drives over 10Gb switched Ethernet, from FreeNAS 11.1U4 to FreeNAS 11.1U6. It has been saturating the disks, and all drives saw almost identical write performance for the first 2TB, but now it's a bit unbalanced. The target pool pico is fresh and clean, so there is no fragmentation or existing data. zpool iostat -v output:
Code:
                                                  capacity     operations    bandwidth
pool                                            alloc   free   read  write   read  write
----------------------------------------------  -----  -----  -----  -----  -----  -----
pico                                            2.40T  3.04T     56  22.7K   228K   182M
  mirror                                        1.20T  1.52T     17  13.5K  71.9K   106M
    gptid/76abef07-cfd1-11e8-9474-00266cf5eda0      -      -      3    325  16.0K   106M
    gptid/77db6d53-cfd1-11e8-9474-00266cf5eda0      -      -     13    349  55.9K   107M
  mirror                                        1.20T  1.52T     38  9.17K   156K  76.2M
    gptid/78c59732-cfd1-11e8-9474-00266cf5eda0      -      -     18    195  75.9K  75.9M
    gptid/79e97dda-cfd1-11e8-9474-00266cf5eda0      -      -     19    310  79.9K  76.8M
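To put a number on "a bit unbalanced", here is a quick back-of-the-envelope check; the 106 and 76.2 MB/s figures are hard-coded from the write-bandwidth column of the iostat output above, and awk is available in the FreeNAS shell:

```shell
# Rough write-bandwidth ratio between the two mirror vdevs,
# using the 106M and 76.2M figures from the iostat paste above.
awk 'BEGIN {
    m0 = 106.0   # first mirror, write bandwidth in MB/s
    m1 = 76.2    # second mirror, write bandwidth in MB/s
    printf "first mirror is %.2fx faster than second\n", m0 / m1
}'
```

So the first vdev is currently taking writes roughly 1.4 times as fast as the second, which is well outside normal jitter for identical drives on a fresh pool.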
@Chris Moore I know you saw something similar with a bad drive, but I don't recall how you tracked down the specific drive. Please enlighten me if you will!