Replacing non-failed disk without resilver from parity

Status
Not open for further replies.

Arwen

MVP
Joined
May 17, 2014
Messages
3,611
Whatever the command does, it will do it on a non-redundant pool, meaning you HAVE to use the "new_device" option for a striped pool in order to give it a source for its data.
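A minimal sketch of what that looks like on the CLI (pool and device names here are placeholders, not from this thread; check `zpool status` for your actual names before running anything):

```shell
# Hypothetical pool "tank". On a redundant vdev (mirror/raidz) the
# replacement can be rebuilt from parity, so two arguments suffice:
zpool replace tank da3

# On a striped (non-redundant) pool there is no parity to rebuild from,
# so you must name a new_device to copy the old disk's data onto:
zpool replace tank da3 da5
```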

And yes, the FreeNAS GUI should be used for normal purposes. I am just more familiar with the CLI, being a Solaris/Unix sysadmin.
 

Blokmeister

Dabbler
Joined
Aug 28, 2016
Messages
12
Yep, on my system with a 13 TB pool filled with about 3.5 TB of data a scrub takes a bit more than 2 hours.

Right now, my scrub is still running and says it still has 25 hours to go. It just takes crazily long. Could it be that my non-native block size slows down my scrub by this much?
 

Bidule0hm

Server Electronics Sorcerer
Joined
Aug 5, 2013
Messages
3,710
Right now, my scrub is still running and says it still has 25 hours to go. It just takes crazily long. Could it be that my non-native block size slows down my scrub by this much?

Already answered in my previous post but, yeah, it's possible ;)
 

Blokmeister

Dabbler
Joined
Aug 28, 2016
Messages
12
Already answered in my previous post but, yeah, it's possible ;)

Thanks! I just found out I can use a big server at my university to back up my data. I'm going to back up my entire pool, recreate it with the correct block size, and then put the data back.

In this case I also don't need a method to replace the disk without resilvering, since this will do that for me.

Thanks a lot for the help you guys!
 

rs225

Guru
Joined
Jun 28, 2014
Messages
878
Right now, my scrub is still running and says it still has 25 hours to go. It just takes crazily long. Could it be that my non-native block size slows down my scrub by this much?

Just for fun, I'm going to say no, the non-native block size does not slow down your scrub. Non-native block size only slows writes.

Other factors would slow your scrub, such as small recordsize/blocksize datasets, large amounts of data written randomly, poor disk I/O tuning, or an inefficient vdev configuration.
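A few read-only commands can show those values (the pool name "tank" is a placeholder, and the `zdb` invocation is a sketch — the cache file path is FreeNAS-specific and flags vary between versions):

```shell
# Scrub progress, throughput, and estimated time remaining:
zpool status tank

# Recordsize of each dataset (small recordsizes mean more, smaller reads):
zfs get -r recordsize tank

# ashift of each vdev (9 = 512 B sectors, 12 = 4 KiB sectors); FreeNAS
# keeps its pool cache file at /data/zfs/zpool.cache:
zdb -U /data/zfs/zpool.cache -C tank | grep ashift
```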
 

Blokmeister

Dabbler
Joined
Aug 28, 2016
Messages
12
Just for fun, I'm going to say no, the non-native block size does not slow down your scrub. Non-native block size only slows writes.

Other factors would slow your scrub, such as small recordsize/blocksize datasets, large amounts of data written randomly, poor disk I/O tuning, or an inefficient vdev configuration.

How would these slow it down, and how can I check the exact values? I googled a bit but didn't find a nice guide.
 

Bidule0hm

Server Electronics Sorcerer
Joined
Aug 5, 2013
Messages
3,710
Just for fun, I'm going to say no, the non-native block size does not slow down your scrub. Non-native block size only slows writes.

Other factors would slow your scrub, such as small recordsize/blocksize datasets, large amounts of data written randomly, poor disk I/O tuning, or an inefficient vdev configuration.

I'm curious to know why; can you elaborate please?
 

rs225

Guru
Joined
Jun 28, 2014
Messages
878
Assuming you mean the non-native block size, accessing a 4K block on 512-byte boundaries causes no penalty. Reads larger than 512-bytes still happen as a single large read. The only penalty is writes to 4K sectors that do not align on the beginning of a 4K sector and/or end at the end of a 4K sector. In those cases, the drive has to read the pre-existing data that isn't changing into cache, wait for the rotation, and then do the full aligning write.
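The penalty can be made concrete with a little sector arithmetic (the offset and length below are illustrative numbers, not from this thread):

```shell
#!/bin/sh
# Which 4 KiB physical sectors does a write touch, and does the drive
# have to read-modify-write? Misalignment at either end of the write
# forces the drive to read the unchanged part of the sector first.
phys=4096                      # physical sector size (Advanced Format)
offset=512                     # write starts 512 B into the disk
length=1024                    # write length in bytes

start_sector=$(( offset / phys ))
end_sector=$(( (offset + length - 1) / phys ))
echo "physical sectors touched: $(( end_sector - start_sector + 1 ))"

if [ $(( offset % phys )) -ne 0 ] || [ $(( (offset + length) % phys )) -ne 0 ]; then
    echo "misaligned: drive must read-modify-write the partial sector(s)"
else
    echo "aligned: drive writes whole sectors, no extra rotation"
fi
```

With these numbers the write lands entirely inside one 4 KiB sector but starts mid-sector, so the drive pays the read-modify-write cost; shifting the offset to a multiple of 4096 (and the length to a multiple of 4096) makes it a plain overwrite.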
 

Bidule0hm

Server Electronics Sorcerer
Joined
Aug 5, 2013
Messages
3,710
Well, it has to seek 8 times instead of once even on a read, so I think there's some penalty there.
 

rs225

Guru
Joined
Jun 28, 2014
Messages
878
Well, it has to seek 8 times instead of once even on a read, so I think there's some penalty there.

If the reads were 512 bytes, you would get 1 seek and then 7 cache hits. But the reads can be as big as ZFS wants, up to 128 KiB, so there is effectively no penalty. 512 B doesn't mean 512 B is the largest read, just as 4K doesn't mean 4K is the largest read. Likewise, ZFS doesn't break blocks up for no reason; it stores a single ZFS block in one contiguous run, whatever the ashift.
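The seek-count argument in numbers (sizes taken from the post above; a contiguous block costs one seek either way, only the sectors-per-transfer count changes):

```shell
#!/bin/sh
# One contiguous 128 KiB ZFS block = one seek, regardless of sector size.
block=131072                          # 128 KiB ZFS record
echo "512 B sectors: 1 seek, $(( block / 512 )) sectors in one transfer"
echo "4 KiB sectors: 1 seek, $(( block / 4096 )) sectors in one transfer"
```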
 

Bidule0hm

Server Electronics Sorcerer
Joined
Aug 5, 2013
Messages
3,710
Are you talking about the drive's cache or ZFS' cache?
 

Bidule0hm

Server Electronics Sorcerer
Joined
Aug 5, 2013
Messages
3,710
So you know for sure that every drive uses prefetching in its cache?
 

rs225

Guru
Joined
Jun 28, 2014
Messages
878
So you know for sure that every drive uses prefetching in its cache?

In effect, yes. Because the smallest sector is 4K, it has no choice but to read that whole amount in. Only if you turned off the read cache on the drive might it discard the extra data. And since a read cache is so critical to Advanced Format drives, it's possible they ignore a read-cache-off command.
 

Bidule0hm

Server Electronics Sorcerer
Joined
Aug 5, 2013
Messages
3,710
Ok, very interesting ;)
 