SOLVED Upgrading from 11 x 1TB to 2TB

Status
Not open for further replies.

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
Still accelerating... 225M/s now...

da0 is a source drive, da7 is the resilvering drive.

Looking at the reports, the slow 20M/s part earlier seemed to coincide with an incoming replication ;)

Anyway, it's the weekend now... so this can resilver in peace.

[Attached graphs: da0.png, da7.png — activity for the source drive and the resilvering drive]


You're welcome.
 

siconic

Explorer
Joined
Oct 12, 2016
Messages
95
So on mine, it appears the disks being resilvered are 100% busy. I saw this with my other two yesterday, throughout the entire resilver. da16 and da17 are the resilvering drives; da14 is one disk in the RAID.

[Attached screenshots: upload_2017-9-15_8-29-5.png, upload_2017-9-15_8-31-26.png, upload_2017-9-15_8-31-56.png — disk busy graphs]


So in my case, the bottleneck IS my new disks.
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
I've noticed that resilvers seem to gallop much faster when there's less redundancy available. I figure it's a feature to avoid degrading performance too much when a resilver is not urgent.

Which would explain why yours are perhaps proceeding at full speed, and mine is not. I only have one degraded disk. And no, I'm not prepared to degrade another to see ;)

(now at 275MB/s and 28hrs to go)
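For what it's worth, on FreeNAS 11-era (legacy FreeBSD) ZFS that throttling behaviour was exposed through sysctl tunables. The names below are from that generation of ZFS and are a sketch only; later OpenZFS releases replaced them with the new scan code, and the values shown are illustrative rather than defaults:

```shell
# Legacy FreeBSD ZFS resilver throttle tunables (FreeNAS 11 era; a sketch,
# these knobs no longer exist under modern OpenZFS). Values are examples.

# How idle the pool must be before scrub/resilver I/O runs unthrottled
sysctl vfs.zfs.scan_idle

# Ticks to delay each resilver I/O when the pool is busy (0 = no throttle)
sysctl vfs.zfs.resilver_delay=0

# Minimum milliseconds per txg spent issuing resilver I/O
sysctl vfs.zfs.resilver_min_time_ms=5000
```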
 

siconic

Explorer
Joined
Oct 12, 2016
Messages
95
Thanks for bothering to resilver a disk for the sake of experimentation! It is a good comparison, and I know there are many factors, but it's good to see a comparison anyway! With only 9.5TB, if I could get 275MB/s, I would be done in no time! I suppose 91MB/s is not too bad.
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
Code:
root@titan:~ # zpool status tank
  pool: tank
 state: ONLINE
status: One or more devices is currently being resilvered.  The pool will
	continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
  scan: resilver in progress since Fri Sep 15 22:06:59 2017
		1.63T scanned out of 27.3T at 306M/s, 24h27m to go
		99.8G resilvered, 5.95% done
config:

	NAME											STATE	 READ WRITE CKSUM
	tank											ONLINE	   0	 0	 0
	  raidz2-0									  ONLINE	   0	 0	 0
		gptid/a61f8f5c-7ef9-11e6-b84d-0cc47aaa44de  ONLINE	   0	 0	 0
		gptid/a6eae950-7ef9-11e6-b84d-0cc47aaa44de  ONLINE	   0	 0	 0
		gptid/a78e5dac-7ef9-11e6-b84d-0cc47aaa44de  ONLINE	   0	 0	 0
		gptid/5ab77fca-806c-11e6-9032-0cc47aaa44de  ONLINE	   0	 0	 0
		gptid/a9102c37-7ef9-11e6-b84d-0cc47aaa44de  ONLINE	   0	 0	 0
		gptid/a9c43104-7ef9-11e6-b84d-0cc47aaa44de  ONLINE	   0	 0	 0
		gptid/aa88c559-7ef9-11e6-b84d-0cc47aaa44de  ONLINE	   0	 0	 0
		gptid/4fcf2538-9a0e-11e7-ad39-0cc47aaa44de  ONLINE	   0	 0	 0  (resilvering)
	  raidz2-1									  ONLINE	   0	 0	 0
		gptid/e23cf23f-413e-11e7-833a-0cc47aaa44de  ONLINE	   0	 0	 0
		gptid/e31e3469-413e-11e7-833a-0cc47aaa44de  ONLINE	   0	 0	 0
		gptid/a46a7f07-687e-11e7-b029-0cc47aaa44de  ONLINE	   0	 0	 0
		gptid/e4ef8f20-413e-11e7-833a-0cc47aaa44de  ONLINE	   0	 0	 0
		gptid/e5f104ef-413e-11e7-833a-0cc47aaa44de  ONLINE	   0	 0	 0
		gptid/e6c77a4c-413e-11e7-833a-0cc47aaa44de  ONLINE	   0	 0	 0
		gptid/e7d03b92-413e-11e7-833a-0cc47aaa44de  ONLINE	   0	 0	 0
		gptid/e8c4ceec-413e-11e7-833a-0cc47aaa44de  ONLINE	   0	 0	 0

errors: No known data errors
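As a sanity check, the ETA on the scan line above can be reproduced from the scanned/total/rate figures. This is a rough sketch that assumes zpool's "T" and "M/s" are binary prefixes (TiB and MiB/s), which is how ZFS reports them:

```python
# Reproduce the zpool status estimate:
# "1.63T scanned out of 27.3T at 306M/s, 24h27m to go"
# Assumes T = TiB and M/s = MiB/s (binary prefixes).

scanned_tib = 1.63
total_tib = 27.3
rate_mib_s = 306

remaining_mib = (total_tib - scanned_tib) * 1024 * 1024  # TiB -> MiB
seconds_left = remaining_mib / rate_mib_s
hours, rem = divmod(seconds_left, 3600)
print(f"~{int(hours)}h{int(rem / 60)}m to go")  # within a minute of 24h27m
```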
 
Last edited:

siconic

Explorer
Joined
Oct 12, 2016
Messages
95
Impressive! Someday, I too will have massive storage space. If I can ever upgrade to 4TB disks, I will be there! That would be 36TB usable. :)

Are you running 3TB disks?
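The 36TB figure follows from RAIDZ2 arithmetic: two disks' worth of each vdev go to parity. A rough estimate for an 11-disk vdev (the layout in this thread), ignoring metadata, padding, and TB-vs-TiB overhead:

```python
# Rough usable capacity of a RAIDZ2 vdev: (disks - 2 parity) * disk size.
# Ignores metadata, allocation padding, and TB-vs-TiB differences.

def raidz2_usable_tb(disks: int, disk_tb: float) -> float:
    return (disks - 2) * disk_tb

print(raidz2_usable_tb(11, 4))  # 11x 4TB RAIDZ2 -> 36.0
print(raidz2_usable_tb(11, 2))  # the 2TB upgrade here -> 18.0
```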
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
16x 4TB disks in that server currently (see sig and the Build Report). I recently added another 8 disks after hitting 80% full a few months back.

Next step will probably be another 8 disks, but perhaps 8TB.
 

siconic

Explorer
Joined
Oct 12, 2016
Messages
95
Duh! Shoulda looked... I like the build report, what a nice setup. I use an actual rack-mount server, which is noisy. If I could build a nice quiet SAN, I would put it in the house. And the ability to have 25 disks instead of 15 is pretty nice! I would likely build that, add another 11-disk vdev, and have some room for standalone disks like I do now. Sweet!
 

siconic

Explorer
Joined
Oct 12, 2016
Messages
95
Well, the upgrade went well! I was able to resilver 2 in place, and offline and replace 2, so 4 at a time! I do have two bad disks, but my JBOD NAS supports SAS 6Gb/s, so I bought 2x 2TB SAS disks off fleabay, and they will be here tomorrow.

Those 2 disks are BADLY degrading the performance of my NAS. I used to be able to max out my Gigabit LAN, getting 100MB/s, but since the degradation, only about 83MB/s. Can't wait to get the new disks in there and be back to full performance, with double the space!

I think I want 8TB HGST hermetically sealed helium disks for my next upgrade!
 