Issue with replacing Disk

Middge

Dabbler
Joined
Jun 24, 2016
Messages
19
Good morning!

A couple of days ago I had a disk go bad in one of my pools. No big deal: I offlined the disk, shut down the FreeNAS server, replaced the disk, and booted back up. Then I went into the pool and kicked off the disk replacement like normal.

Unfortunately, something went wrong. The task hung for over a day, and when I checked the usage logs there was no activity on the new disk, so I assumed the task had hung for whatever reason. I don't know how to proceed now: I offlined the new disk and tried reseating it, but now I have two orphaned disks with "REPLACING" status and no idea how to fix this. The pool is still in a degraded state.
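For reference, here is roughly how the stuck state looks from the CLI. This is just a sketch: the pool name Pool01 is taken from the request URL below, and the device names in the example are hypothetical.

```shell
# On the FreeNAS box, the half-finished replaces show up in
#   zpool status Pool01
# as "replacing-N" vdevs, each with two children: the old member and the
# orphaned new disk. A tiny awk filter to pull those children out of the
# status output:
replacing_members() {
    awk '/replacing-[0-9]/ {grab = 2; next} grab > 0 {print $1; grab--}'
}

# Usage on the real system (illustrative):
#   zpool status Pool01 | replacing_members
```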

See the screenshot for reference:
 

Attachments

  • freenas.PNG (974.5 KB)

Middge

Dabbler
Joined
Jun 24, 2016
Messages
19
Some added information:

If I try to initiate a replace task against either of those orphaned drives, I get the following error. I suspect it's because I'm not supposed to do it that way.

Environment:

Software Version: FreeNAS-11.2-U8 (06e1172340)
Request Method: POST
Request URL: https://my.domain.net/legacy/storage/zpool-Pool01/disk/replace/369973771180808386/


Traceback:
File "/usr/local/lib/python3.6/site-packages/django/core/handlers/exception.py" in inner
42. response = get_response(request)
File "/usr/local/lib/python3.6/site-packages/django/core/handlers/base.py" in _legacy_get_response
249. response = self._get_response(request)
File "/usr/local/lib/python3.6/site-packages/django/core/handlers/base.py" in _get_response
178. response = middleware_method(request, callback, callback_args, callback_kwargs)
File "./freenasUI/freeadmin/middleware.py" in process_view
163. return login_required(view_func)(request, *view_args, **view_kwargs)
File "/usr/local/lib/python3.6/site-packages/django/contrib/auth/decorators.py" in _wrapped_view
23. return view_func(request, *args, **kwargs)
File "./freenasUI/storage/views.py" in zpool_disk_replace
800. if form.done():
File "./freenasUI/storage/forms.py" in done
2291. passphrase=passfile
File "./freenasUI/middleware/notifier.py" in zfs_replace_disk
996. self.__gpt_labeldisk(type="freebsd-zfs", devname=to_disk, swapsize=swapsize)
File "./freenasUI/middleware/notifier.py" in __gpt_labeldisk
341. c.call('disk.wipe', devname, 'QUICK', False, job=True)
File "/usr/local/lib/python3.6/site-packages/middlewared/client/client.py" in call
402. raise ClientException(job['error'], trace=job['exception'])

Exception Type: ClientException at /legacy/storage/zpool-Pool01/disk/replace/369973771180808386/
Exception Value: Command '('dd', 'if=/dev/zero', 'of=/dev/da15', 'bs=1m', 'count=32')' returned non-zero exit status 1.
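For what it's worth, the failing command is the middleware's QUICK wipe: zeroing the first 32 MiB of da15 with dd. A non-zero exit there usually means the device node is busy or has disappeared (my assumption: the half-finished replace is still claiming the disk). A safe way to see what the wipe attempts, run against a scratch file instead of /dev/da15:

```shell
# What disk.wipe QUICK does: zero 32 blocks of 1 MiB at the start of the
# target. Demonstrated on a throwaway file rather than the real /dev/da15.
# (FreeBSD dd accepts bs=1m; bs=1048576 is the portable spelling.)
scratch=$(mktemp)
dd if=/dev/zero of="$scratch" bs=1048576 count=32 2>/dev/null
bytes=$(wc -c < "$scratch" | tr -d ' ')
echo "$bytes"     # 33554432 bytes, i.e. 32 MiB
rm -f "$scratch"
```

On a healthy, unclaimed disk the same dd against the device node succeeds; when it returns exit status 1 as in the traceback, the device itself is worth checking before retrying the GUI replace.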
 

Middge

Dabbler
Joined
Jun 24, 2016
Messages
19
Bump. I could really use an assist. I just need a way to safely replace the disk and get the pool healthy again :-(
 

Apollo

Wizard
Joined
Jun 13, 2013
Messages
1,458
If you offlined the disk, it will still retain information about the pool it used to belong to. You could restart your system with the disk plugged in, and FreeNAS should reinsert it and proceed with resilvering.
If not, you would need to wipe the new disk from within the GUI, under the Disks section.
Then you should be able to proceed with the replace procedure.
I don't think there are any logs related to resilvering.
If you want the status, just go into the CLI and run:

zpool status

and you should see the resilvering progress.
It will take some time to resilver, potentially several days. Again, you can follow up with the command above.
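To put a rough number on "several days", here is a back-of-envelope estimate. Both inputs are assumptions (8 TB of allocated data, ~100 MB/s sustained rebuild rate); substitute the real figures that `zpool status` reports while resilvering.

```shell
# Rough resilver-time estimate; both numbers below are assumed, not read
# from this pool.
data_mb=$((8 * 1000 * 1000))   # 8 TB of allocated data, in MB
rate_mb_s=100                  # assumed sustained rebuild rate, MB/s
hours=$(( data_mb / rate_mb_s / 3600 ))
echo "~${hours} hours"         # ~22 hours at these figures; fragmentation
                               # and pool load can stretch this into days
```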
 