Help with Resilvering


Falcon559

Cadet
Joined
Nov 27, 2017
Messages
3
Hello, I am a bit of a noob when it comes to some in-depth FreeNAS knowledge. My issue is that resilvering is taking way too long. Below are the details and the feeble steps I have taken to try to remedy it:

FreeNAS Version: FreeNAS-11.0-U4
Hardware details: Super X10DRi-T motherboard, 2U chassis, Intel Xeon E5-2603 v3 @ 1.60GHz, 16 GB of RAM
Use Case Info: Backup Storage
Overview of the issue: Resilvering is taking way too long. 6 days and only at 73% for one drive failure.
Pool Configuration: RAIDZ2 (raidz2-0), 8 × 6TB HGST drives, Model # HUS726060ALE610
Sharing Protocols: iSCSI
Client OSes: Windows
Steps taken to resolve the issue: Added these tunables (I see no speed increase): vfs.zfs.top_maxinflight=128, vfs.zfs.resilver_min_time_ms=5000, vfs.zfs.resilver_delay=0

I am thinking it is a RAM limitation, but I am not sure. It is pegged for sure. I just wanted to throw this out there to see if I am doing something stupid.
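In case it helps, this is roughly what I have been checking from the shell (the pool name "tank" here is just a placeholder for mine):

# Resilver progress and estimated completion
zpool status tank

# Current values of the tunables I added
sysctl vfs.zfs.top_maxinflight vfs.zfs.resilver_min_time_ms vfs.zfs.resilver_delay

# Rough look at memory pressure
top -b -o res | head -20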
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
Steps taken to resolve the issue: Added these tunables (I see no speed increase): vfs.zfs.top_maxinflight=128, vfs.zfs.resilver_min_time_ms=5000, vfs.zfs.resilver_delay=0

I am thinking it is a RAM limitation, but I am not sure. It is pegged for sure. I just wanted to throw this out there to see if I am doing something stupid.
I wouldn't be mean and say it was stupid, because it really isn't, but what you have is sub-optimal. When you are doing iSCSI, and with as much storage as you appear to have from the information you provided, you should probably have a little more RAM.
Can you give some more information, though? Something that will have a significant impact is the amount of data present on the pool.
I have a storage server at work that uses 6TB drives, and I have had to replace three since the system has been up. The time to resilver is usually about 5 days for me, but my pool is only about half full of data.
If it is still making progress, I would let it finish, but you can certainly tell us more about your configuration and how you are using it so we can give you some pointers. There are many people here with a great deal of experience who would be happy to help.
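If you are comfortable at the shell, output from something like the following would tell us most of what we need (swap your own pool name in for "tank"):

zpool list tank
zfs list -o space -r tank
zpool status tank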
 

Falcon559

Cadet
Joined
Nov 27, 2017
Messages
3
It is 97% used. I use it for dumping backups from Backup Exec, and that is it. 29.3 TB.
 

dsgb

Cadet
Joined
Nov 28, 2017
Messages
1
If this helps, I just finished resilvering (10) 8TB drives and the average time was 27-30 hours per disk.
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
It is 97% used.
If your pool is that full, performance is going to really tank. There is a change in the allocation algorithm when the pool goes over roughly 90% full (ZFS has to work much harder to find free space for each block), and that makes everything, including a resilver, very slow.
You will need to add more storage to the system or reduce the volume of data used, and since it is an iSCSI extent, deleting data on the client doesn't automatically free space on the pool.
Sorry, but the speed is all down to how much data you have.
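A quick way to confirm how bad it is (again, "tank" is a placeholder for your pool name) is to look at the capacity and fragmentation columns directly:

# cap is percent used; frag may show a dash on older pools
zpool list -o name,size,alloc,free,cap,frag,health tank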
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
Steps taken to resolve the issue: Added these tunables - I see no speed increase: vfs.zfs.top_maxinflight=128 vfs.zfs.resilver_min_time_ms=5000 vfs.zfs.resilver_delay=0
You could probably take these out, because they are neither the source of nor the cure for the issue.
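If you want to see what they are set to right now before removing them from the GUI, something like this works from a shell (I believe the stock values are 32, 3000, and 2 respectively, but check your own system):

sysctl vfs.zfs.top_maxinflight vfs.zfs.resilver_min_time_ms vfs.zfs.resilver_delay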
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
It is 97% used.
Ouch. There's your problem. ZFS performance takes a hit at about 80% used (even with normal file workloads; it's worse with block storage workloads), and absolutely tanks at either 90% or 95%.
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
Bear in mind, deleting files using the client machine(s) may not reduce the storage used by the iSCSI extent on your FreeNAS server.
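If the extent is zvol-backed, something like this shows how much space it is actually holding on the pool, regardless of what Windows thinks is free:

# Space referenced by zvols (iSCSI extents backed by volumes)
zfs list -t volume -o name,volsize,used,refer,compressratio

Getting that space back generally means the initiator has to issue UNMAP/TRIM, or you move the data off and recreate the extent smaller.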
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
Ah crud... I didn't know that. Thanks everyone. I will move some data off.
It would take additional hardware, but you can expand the pool with additional drives. If hardware can be thrown at the problem, let us know and we can give you some suggestions.
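For reference, "expand" here means adding a whole new vdev, for example another 8-drive RAIDZ2. From a shell it would look roughly like the line below, though on FreeNAS you would normally do it through the GUI Volume Manager so the middleware stays in sync (device names are placeholders):

# Add a second 8-disk RAIDZ2 vdev to the existing pool
zpool add tank raidz2 da8 da9 da10 da11 da12 da13 da14 da15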
 