r2p2
Cadet
Joined: Jan 29, 2019
Messages: 8
I might be doing something wrong but can't figure out what. Before going live, I thought it would be a good idea to test things in a virtual machine. The result was a VM containing two disks for the operating system and four in an encrypted raidz2 storage pool. While playing around with virtually pulling and inserting drives, I noticed that it seems impossible to get the same drive running again after it was pulled out. It does work if I replace a drive with a completely new one. After a while I broke it down to the simplest failing scenario, which looks like this:
- Setup everything as described (4 drives in an encrypted raidz2 storage pool)
- Storage > Pool > Pool Operations (Gear) > Status > Offline any of the disks
- Storage > Pool > Pool Operations (Gear) > Status > Online the offlined disk
Code:
Error: Traceback (most recent call last):
  File "/usr/local/lib/python3.6/site-packages/tastypie/resources.py", line 219, in wrapper
    response = callback(request, *args, **kwargs)
  File "./freenasUI/api/resources.py", line 886, in online_disk
    notifier().zfs_online_disk(obj, deserialized.get('label'))
  File "./freenasUI/middleware/notifier.py", line 1064, in zfs_online_disk
    assert volume.vol_encrypt == 0
AssertionError
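Reading the traceback, the failing line is an assertion in the middleware that the volume is not encrypted (vol_encrypt == 0) before it will online a disk, so the GUI online path apparently rejects any encrypted pool outright. Below is a minimal, hypothetical sketch of that check; the Volume class and the body of zfs_online_disk are simplifications I made up for illustration, and only the assertion itself is taken from the traceback:

```python
# Hypothetical reduction of the check that fires in notifier.py.
# Only the `assert volume.vol_encrypt == 0` line comes from the traceback;
# everything else here is illustrative.

class Volume:
    def __init__(self, vol_encrypt):
        # 0 = unencrypted pool, non-zero = GELI-encrypted pool (assumption)
        self.vol_encrypt = vol_encrypt


def zfs_online_disk(volume):
    # The middleware refuses to online a disk unless the pool is unencrypted.
    assert volume.vol_encrypt == 0
    return "onlined"


# An unencrypted pool passes the check:
zfs_online_disk(Volume(vol_encrypt=0))

# An encrypted pool (like my raidz2) trips the assertion:
try:
    zfs_online_disk(Volume(vol_encrypt=1))
except AssertionError:
    print("AssertionError, same as the GUI error above")
```

If that reading is right, the assertion fires before any ZFS command is even attempted, which would explain why the same disk can never be brought back online through the GUI on this pool.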
As far as I know, it should be fine to offline -> online a working drive?
In case it is helpful, I am using FreeNAS-11.2-RELEASE-U1 (Build Date: Dec 20, 2018 22:41) and zpool status looks like:
Code:
root@freenas[~]# zpool status storage
  pool: storage
 state: DEGRADED
status: One or more devices has been taken offline by the administrator.
        Sufficient replicas exist for the pool to continue functioning in a
        degraded state.
action: Online the device using 'zpool online' or replace the device with
        'zpool replace'.
  scan: resilvered 37.8M in 0 days 00:00:39 with 0 errors on Wed Jan 30 00:23:55 2019
config:

        NAME                                                STATE     READ WRITE CKSUM
        storage                                             DEGRADED     0     0     0
          raidz2-0                                          DEGRADED     0     0     0
            gptid/7afeb39f-1e8f-11e9-9423-080027323634.eli  ONLINE       0     0     0
            12034077341320501396                            OFFLINE      0     0     0  was /dev/gptid/d41df718-241c-11e9-8b42-080027323634.eli
            gptid/1351f31c-1fdd-11e9-a5f1-080027323634.eli  ONLINE       0     0     0
            gptid/7ce8fda5-1e8f-11e9-9423-080027323634.eli  ONLINE       0     0     0

errors: No known data errors
Thank you in advance,
Robert