Replacing Failing Disk with Spare Fails

Status
Not open for further replies.

Baaaa5

Cadet
Joined
Feb 12, 2018
Messages
2
I have a failing disk in one of my volumes with unreadable/pending sectors. This shouldn't be a problem, though, as I have two hot spares sitting in the machine ready to go; the two hot spares live in a spares volume. When I select the failing disk, da6, choose Replace, and pick one of the two hot spares, the GUI kicks back an error saying that the disk is not clear because partitions or ZFS labels were found on it. Selecting Force fails as well, kicking back the following error:
Environment:

Software Version: FreeNAS-11.0-U4 (54848d13b)
Request Method: POST
Request URL: http://192.168.1.5/storage/zpool-Ma...tid/9b5a8fa4-7a51-11e7-bf9d-0007433897b0.eli/


Traceback:
File "/usr/local/lib/python3.6/site-packages/django/core/handlers/exception.py" in inner
39. response = get_response(request)
File "/usr/local/lib/python3.6/site-packages/django/core/handlers/base.py" in _legacy_get_response
249. response = self._get_response(request)
File "/usr/local/lib/python3.6/site-packages/django/core/handlers/base.py" in _get_response
178. response = middleware_method(request, callback, callback_args, callback_kwargs)
File "./freenasUI/freeadmin/middleware.py" in process_view
162. return login_required(view_func)(request, *view_args, **view_kwargs)
File "/usr/local/lib/python3.6/site-packages/django/contrib/auth/decorators.py" in _wrapped_view
23. return view_func(request, *args, **kwargs)
File "./freenasUI/storage/views.py" in zpool_disk_replace
951. if form.done():
File "./freenasUI/storage/forms.py" in done
2039. passphrase=passfile
File "./freenasUI/middleware/notifier.py" in zfs_replace_disk
1086. self.__gpt_labeldisk(type="freebsd-zfs", devname=to_disk, swapsize=swapsize)
File "./freenasUI/middleware/notifier.py" in __gpt_labeldisk
410. raise MiddlewareError(f'Unable to GPT format the disk "{devname}": {error}')

Exception Type: MiddlewareError at /storage/zpool-MainVolume/disk/replace/gptid/9b5a8fa4-7a51-11e7-bf9d-0007433897b0.eli/
Exception Value: [MiddlewareError: b'Unable to GPT format the disk "da22": gpart: geom \'da22\': File exists\n']

I am currently living in Hawaii and the server is located in Connecticut, so physical access to the machine is not an option for several months. I can remotely access the FreeNAS GUI as well as the command line, and I also have IPMI (iKVM) on this machine, so I can cycle power as necessary.

I appreciate any help I can get. I have searched for a while to see if anyone else has had a similar problem, without any luck. I am not sure why the spare won't let me use it to replace the failing disk.
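A minimal, read-only diagnostic sketch for the error above: "gpart: geom 'da22': File exists" usually means the spare already carries a GPT partition scheme, and any leftover ZFS vdev labels can be listed as well. Both commands below are standard FreeBSD tools available on the FreeNAS shell and do not modify the disk.

```shell
# Show any existing partition table on the spare; a GPT scheme here
# explains why the GUI's replace/format step fails with "File exists".
gpart show da22

# Dump any ZFS vdev labels still present on the raw device (read-only).
zdb -l /dev/da22
```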
 

garm

Wizard
Joined
Aug 19, 2017
Messages
1,556
I’ve never used spares myself, but I’m pretty sure they need to be in the same pool as the failing disk. A spare should also kick in automatically and stay in use until you physically replace the failed disk, at which point it reverts to being a spare (unless promoted).
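A hedged sketch of what "in the same pool" would look like on the command line, assuming the pool name MainVolume from the request URL in the traceback and da22 as the spare. This is the standard ZFS way to attach a hot spare; whether the FreeNAS 11 GUI tolerates it being done by hand is a separate question.

```shell
# Attach da22 to the existing pool as a hot spare (pool name assumed
# from the URL in the OP's traceback).
zpool add MainVolume spare /dev/da22

# The disk should now appear under a "spares" section in the status output.
zpool status MainVolume
```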
 

Evertb1

Guru
Joined
May 31, 2016
Messages
700
You stated that your spare disks are in a spares volume. Maybe a simple thought from my side, but would simply destroying that spares volume free up those disks for use?
 

Baaaa5

Cadet
Joined
Feb 12, 2018
Messages
2
When you set up hot spares on FreeNAS 11, it creates that spares volume.
I can delete them from the volume, but they will still carry the FreeNAS markings, and I won't be able to add them to a new volume unless I am physically there to pull the drive, wipe it, and reinstall it in the server.
I have never been able to erase those markings from inside the FreeNAS environment. I may have been doing it wrong, but I have never managed to remove a disk from one FreeNAS machine and add it to an array on another FreeNAS machine without wiping it on a separate computer.
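For what it's worth, the markings can usually be wiped from the FreeNAS shell over SSH, without physical access. A minimal sketch, assuming da22 is the disk to be cleared; these commands are destructive, so double-check the device name first.

```shell
# DESTRUCTIVE: wipes the old partition table and ZFS labels from da22.
# Run only against the intended spare.
gpart destroy -F da22           # drop the existing GPT scheme
zpool labelclear -f /dev/da22   # clear any remaining ZFS vdev labels

# Belt and braces: zero the start of the disk, where GPT and ZFS
# metadata live (FreeBSD dd accepts the lowercase 1m block size).
dd if=/dev/zero of=/dev/da22 bs=1m count=4
```

After this, the disk should look blank to both the GUI and the replace operation.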
 

Evertb1

Guru
Joined
May 31, 2016
Messages
700
Understood. I have never worked with hot spares and I don't know all the ins and outs. But you did not mention whether you tried to solve your problem with ZFS commands. Maybe I am stating the obvious, but the Oracle docs site lists commands for working with ZFS: https://docs.oracle.com/cd/E53394_01/html/E54801/gpegp.html
Like this command, which removes a hot spare from the pool as long as it is not in use by the pool:
# zpool remove pool spare-device
It's my hope that ZFS will take care of removing those markings for you and free the disk up for replacing the failing disk, either through the GUI or on the command line.
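Combining that idea with the OP's situation, a hypothetical end-to-end sketch might look like this. The pool name SpareVolume is assumed (the actual name of the spares volume isn't given in the thread); MainVolume and the device names come from the traceback and the OP's posts. Note that zpool remove only works if the disk is a spare device; if the spares volume is an ordinary data pool, it would have to be destroyed with zpool destroy instead.

```shell
# Free the disk from the spares pool (pool name "SpareVolume" assumed),
# then use it to replace the failing disk in the main pool.
zpool remove SpareVolume /dev/da22
zpool replace MainVolume da6 /dev/da22

# Watch the resilver progress.
zpool status MainVolume
```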
 