
Status: Not open for further replies.

TinTIn

Contributor
Joined
Feb 9, 2015
Messages
142
Anyone know how to speed up a resilver on FreeNAS?

Thanks


 

Robert Smith

Patron
Joined
May 4, 2014
Messages
270
Use smaller and/or faster drives.

System speed also matters, but I don't know if anyone has done comparison benchmarks specifically for resilvering.

You can also use mirrors instead of RAIDZ; mirrors recover much faster.
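
As a sketch of the difference (hypothetical pool and disk names), the two layouts are built like this:

# Striped mirrors: a resilver only copies one disk's worth of data from its partner.
zpool create tank mirror da0 da1 mirror da2 da3

# RAIDZ2: a resilver has to read from every surviving disk in the vdev.
zpool create tank raidz2 da0 da1 da2 da3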
 

TinTIn

Contributor
Joined
Feb 9, 2015
Messages
142
No can do. 4 TB SAS drives, giving a max raw capacity of about 350 TB.


 

Jailer

Not strong, but bad
Joined
Sep 12, 2014
Messages
4,977
That's the price you pay for using larger drives. It's all about compromise.
 

TinTIn

Contributor
Joined
Feb 9, 2015
Messages
142
Agreed. Out of my control, I'm afraid. Any ideas on increasing I/O to speed up the process, or is that not possible on FreeNAS?


 

TinTIn

Contributor
Joined
Feb 9, 2015
Messages
142
I'm running 90 x 4TB SAS drives across three JBODs in a RAID Z2 8+2 vdev layout.


 

TinTIn

Contributor
Joined
Feb 9, 2015
Messages
142
Not one. Nine vdevs.


 

Bidule0hm

Server Electronics Sorcerer
Joined
Aug 5, 2013
Messages
3,710
"in a RAID Z2 8+2 vdev layout." ;)
 

TinTIn

Contributor
Joined
Feb 9, 2015
Messages
142
So 10 disks (8 data + 2 parity) in each RAIDZ2 vdev, times 9 vdevs.
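
For illustration, with hypothetical device names, that layout would be built along these lines (two of the nine vdevs shown; the other seven follow the same pattern):

zpool create tank \
    raidz2 da0 da1 da2 da3 da4 da5 da6 da7 da8 da9 \
    raidz2 da10 da11 da12 da13 da14 da15 da16 da17 da18 da19

Usable capacity before ZFS overhead works out to 9 vdevs x 8 data disks x 4 TB = 288 TB.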


 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
What's the estimated time to completion?
 

TinTIn

Contributor
Joined
Feb 9, 2015
Messages
142
It hasn't happened yet, so I'm just trying to prepare myself in case it does.


 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
Yeah, I can't provide any advice, because the ways to make it faster depend on what the problem is. And some problems (like pools that are in heavy use non-stop, 24x7) can't really be fixed, except by making them idle so they can resilver. Yes, you can force the resilver to take priority over user data requests, but that makes a mess, because then all of your user requests end up with high latency (sometimes so high that things start timing out).
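
For reference, this is the kind of knob meant here: on the FreeBSD-based FreeNAS of this era, resilver priority is controlled by legacy ZFS sysctls. A sketch only; verify these tunables exist on your build before touching them.

# Delay, in ticks, injected between resilver I/Os when the pool is busy;
# 0 lets the resilver run flat out at the cost of user latency.
sysctl vfs.zfs.resilver_delay=0

# Minimum milliseconds per txg spent on resilver work; raising it favors
# the resilver over user I/O (the usual default is 3000).
sysctl vfs.zfs.resilver_min_time_ms=5000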

So don't worry about it and deal with the problem when/if it actually happens.
 

TinTIn

Contributor
Joined
Feb 9, 2015
Messages
142
Ok thanks for the info.


 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
How does a scrub affect the system? It's the closest thing to a simulation you can easily run, and it's still something you can cancel out of.
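
A minimal run-through, assuming a pool named tank:

# Kick off a scrub.
zpool scrub tank

# Watch progress and the estimated time to completion.
zpool status tank

# Cancel it if production suffers too much.
zpool scrub -s tank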
 

TinTIn

Contributor
Joined
Feb 9, 2015
Messages
142
Thanks jgreco, I'll give it a try and let you know. The kit is arriving in a couple of weeks, so I'm trying to get prepared.


 

GrumpyBear

Contributor
Joined
Jan 28, 2015
Messages
141
I'm doing some research into the gotchas encountered when disks fail while I'm beating on my system before putting it into production.

Came across this ZFS Resilvering Speed Comparison and thought it was interesting, so when I stumbled across this thread I thought I'd share it.
 

titan_rw

Guru
Joined
Sep 1, 2012
Messages
586
GrumpyBear said:
I'm doing some research into the gotchas encountered when disks fail while I'm beating on my system before putting it into production.

Came across this ZFS Resilvering Speed Comparison and thought it was interesting, so when I stumbled across this thread I thought I'd share it.

Awesome, random blogs. He's trying to test the difference between striped mirrors and RAIDZx by writing to the pool with /dev/random as the source? That's hilarious.

Also, for sequential writes (which is what dd does), as long as the CPU is fast enough, a 4-disk Z2 should be equivalent to a 4-disk striped mirror. Random I/O is where the striped mirror would excel, of course.

Not sure why people can't simply disable compression and use /dev/zero. Again, it's a purely sequential test, but better than benchmarking your CPU with /dev/random.
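
Something like the following, assuming a dataset tank/test mounted at /mnt/tank/test:

# Make sure compression can't turn the zeros into a no-op.
zfs set compression=off tank/test

# Purely sequential write test: 100 GiB in 1 MiB blocks.
dd if=/dev/zero of=/mnt/tank/test/ddfile bs=1M count=102400

# And the read back.
dd if=/mnt/tank/test/ddfile of=/dev/null bs=1M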
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
GrumpyBear said:
I'm doing some research into the gotchas encountered when disks fail while I'm beating on my system before putting it into production.

Came across this ZFS Resilvering Speed Comparison and thought it was interesting, so when I stumbled across this thread I thought I'd share it.

That chart is not overly shocking to me. But I would like to point out that if you have a pool that is heavily used, that will affect the time the scrub (or resilver) takes. Additionally, having more vdevs means that I/O is spread out among all of the vdevs, so on a per-vdev basis, for the same workload, more vdevs would make resilvering go faster.

So the chart is a nice indicator for a no-workload condition. But virtually 100% of the time the server is under some kind of workload, and that workload can stretch the resilver/scrub far beyond just a few hours. One server I saw that was heavily loaded and had multiple vdevs was going to take 600 days to complete. Why? Because the resilver was constantly being interrupted by real workload. What happened when we took the server offline? It completed about 30 hours later.

So it's an interesting read, but it doesn't really reflect reality.
 