I've used zpool attach to add a mirror to a single-drive vdev, as part of setting up my new server (I had to do it that way because I'm migrating both the disks and the data; there's a backup of the data on the old server).
What surprises me is that after attaching, the resilvering speed was consistently only about 39-50 MB/s according to zpool status.
Why is it running at 39-50 MB/s for what should, in theory, be a straightforward sequential copy between two disks (whatever is stored on them, it can be mirrored sequentially), even though the disks are capable of about three times that speed and there's no other load or demand on them?
Update: after an hour the figure shown by zpool status suddenly shot up. But the change raises more questions than it answers, and I'm not even sure I'm looking at the right figure. "iostat -x" shows just 20 MB/s (!) of I/O on the individual drives:
Code:
device     r/s    w/s     kr/s     kw/s  qlen  svc_t  %b
ada0     182.9    0.9  20641.9     19.1     2    3.0  24
ada1       0.0  177.2      0.5  19080.4     1    1.0  17
ada5     170.2    1.0  19030.1     19.1     2    3.6  27
ada6       0.0  191.1      0.5  20691.8     1    0.6  12
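Summing the kr/s and kw/s columns makes the per-drive picture concrete. A minimal sketch, with the figures copied from the iostat -x output above (column positions are as in that header):

```shell
# Sum kr/s (column 4) and kw/s (column 5) across the four drives,
# using the figures copied from the iostat -x output above.
awk '{ total += $4 + $5 } END { printf "%.1f MB/s total\n", total / 1024 }' <<'EOF'
ada0 182.9   0.9 20641.9    19.1 2 3.0 24
ada1   0.0 177.2     0.5 19080.4 1 1.0 17
ada5 170.2   1.0 19030.1    19.1 2 3.6 27
ada6   0.0 191.1     0.5 20691.8 1 0.6 12
EOF
```

That comes to roughly 77.6 MB/s across all four drives, about half reads and half writes, so only around 39 MB/s of actual resilver payload at that instant.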
even though zpool status shows a much higher figure:
Code:
zpool status | egrep "to go|done"
  1.06T scanned out of 6.68T at 193M/s, 8h29m to go
  761G resilvered, 15.89% done
In the meantime systat gives a different figure for these disks as well:
Code:
        /0%  /10  /20  /30  /40  /50  /60  /70  /80  /90  /100
ada0  MB/s XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX 115.66
       tps|XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX 1020.80
ada1  MB/s XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX 105.90
       tps|XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX 935.04
ada5  MB/s XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX 106.23
       tps|XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX 932.46
ada6  MB/s XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX 115.63
       tps|XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX 1020.20
What's gone on behind the scenes, what do the numbers mean (and why do they differ so much), and why is it apparently resilvering so much slower than expected?