Hi.
Over the last two months or so I've been expanding my pool by replacing all the drives, and I've gotten through more than half of them without issue.
I don't have hot-swap or spare bays in the case, so for each drive I offline it, shut down, pull the old disk, fit the new one, then power up and resilver. The replacements are a mix of 2 TB and 3 TB WD Reds and Greens. At first I was just replacing disks at random, but later decided I'd like to keep all the bigger 3 TB drives in the same vdev so I can expand the pool again easily later on.
Now I'm trying to offline one of the 3 TB disks that was recently resilvered into the pool, so I can replace it with a 2 TB and then later add it back into the other vdev with the rest of the 3 TB drives.
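For context, this is roughly the per-drive cycle I've been doing from the shell (the gptid below is just the one from my own zpool status output; the new-disk label is a placeholder, since the GUI normally handles partitioning and labelling the replacement):

Code:
# Offline the outgoing member before shutting down to swap hardware
zpool offline Main gptid/c871fbe2-d716-11e4-90be-0cc47a00428b

# After booting with the new disk installed, resilver it in place of the old one
# (<new-gptid> is whatever label the freshly partitioned disk gets)
zpool replace Main gptid/c871fbe2-d716-11e4-90be-0cc47a00428b gptid/<new-gptid>

# Watch the resilver progress
zpool status Main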
Here's what happens when I try to offline it:
Code:
Apr 14 19:09:21 freenas notifier: geli: No such device: /dev/da5p1.
Apr 14 19:09:21 freenas manage.py: [middleware.exceptions:38] [MiddlewareError: Disk offline failed: "cannot offline gptid/c871fbe2-d716-11e4-90be-0cc47a00428b: no such device in pool, "]
Apr 14 19:09:21 freenas GEOM_ELI: Device da5p1.eli destroyed.
Apr 14 19:09:21 freenas GEOM_ELI: Detached da5p1.eli on last close.
Even after multiple attempts I get the same message. The pool remains healthy, and I even ran a scrub to confirm everything is fine.
I'm very reluctant to just pull the plug on the disk because my backup isn't complete yet; I'm using the retired disks to back everything up to. Maybe not the best idea, but still better than nothing. Can anyone shed some light on why this is happening and what I can do? The pool is almost full and I really need the extra capacity. I've got all the disks just waiting.
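In case it helps with the diagnosis, these are the standard FreeBSD tools I'd use to cross-check the gptid from the error against the physical device (just a sketch; the grep pattern is the start of the gptid from my error message):

Code:
# Map GPT labels to their underlying da devices; look for the failing gptid
glabel status | grep c871fbe2

# List what's attached to the controllers, to confirm which da device it is
camcontrol devlist

# See whether the geli provider for that partition is still attached
geli list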
System specs are as follows:
FreeNAS 9.3
Xeon E3-1220v2
Supermicro X9SCM-F
32 GB ECC RAM
Dual LSI 9211-8i
Pool configuration:
Code:
NAME          SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
Main         27.2T  21.4T  5.82T         -     2%    78%  1.00x  ONLINE  /mnt
freenas-boot 7.44G   943M  6.52G         -      -    12%  1.00x  ONLINE  -

[root@freenas ~]# zpool status
  pool: Main
 state: ONLINE
  scan: scrub repaired 0 in 10h12m with 0 errors on Tue Apr 14 18:44:18 2015
config:

	NAME                                            STATE     READ WRITE CKSUM
	Main                                            ONLINE       0     0     0
	  raidz2-0                                      ONLINE       0     0     0
	    gptid/b71dbc93-3f06-11e4-b93c-00237dfbe0f0  ONLINE       0     0     0
	    gptid/b79b8b8b-3f06-11e4-b93c-00237dfbe0f0  ONLINE       0     0     0
	    gptid/b875b15e-3f06-11e4-b93c-00237dfbe0f0  ONLINE       0     0     0
	    gptid/219366ec-4543-11e4-93fc-00237dfbe0f0  ONLINE       0     0     0
	    gptid/b95234f5-3f06-11e4-b93c-00237dfbe0f0  ONLINE       0     0     0
	    gptid/c871fbe2-d716-11e4-90be-0cc47a00428b  ONLINE       0     0     0
	    gptid/ba703018-3f06-11e4-b93c-00237dfbe0f0  ONLINE       0     0     0
	    gptid/f09f246a-4936-11e4-86e8-00237dfbe0f0  ONLINE       0     0     0
	    gptid/57474628-d2c9-11e4-b74d-0cc47a00428b  ONLINE       0     0     0
	    gptid/16a216da-ceee-11e4-8764-0cc47a00428b  ONLINE       0     0     0
	  raidz2-1                                      ONLINE       0     0     0
	    gptid/c9acc021-d152-11e4-b719-0cc47a00428b  ONLINE       0     0     0
	    gptid/ce3bc243-80c4-11e4-8289-0cc47a00428b  ONLINE       0     0     0
	    gptid/4f22dafc-d934-11e4-92a2-0cc47a00428b  ONLINE       0     0     0
	    gptid/a4de6b6c-3f07-11e4-b93c-00237dfbe0f0  ONLINE       0     0     0
	    gptid/50f8fb19-d6ad-11e4-a2b4-0cc47a00428b  ONLINE       0     0     0
	    gptid/99e8bdb9-dd20-11e4-b830-0cc47a00428b  ONLINE       0     0     0
	    gptid/6fac4876-be3b-11e4-945d-0cc47a00428b  ONLINE       0     0     0
	    gptid/70b609bf-ddd9-11e4-b4f5-0cc47a00428b  ONLINE       0     0     0
	    gptid/15b3c501-d39f-11e4-9aee-0cc47a00428b  ONLINE       0     0     0
	    gptid/cf505d11-d458-11e4-a3c6-0cc47a00428b  ONLINE       0     0     0

errors: No known data errors