I have the same problem; it seems the instructions in the guide are incomplete.
What is interesting is that I had two drives fail on me in my configuration. I have 4 x 2TB drives (zpool name: Thing2) and 6 x 1TB drives (zpool name: Thing1). For Thing2 I was able to pop in another drive, run the "Replace" command from the GUI, and presto: 5 hours later everything was perfect. But for Thing1 I keep getting middleware errors about the drive being too small, even though it's a 1TB drive. All the drives use 4K sectors. So I'm doing it via the CLI, but there are a lot of dissenting opinions on the "right way" to do things.
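One thing I'm going to check is whether the new drive is actually a few sectors smaller than the old one; nominally identical 1TB drives can differ slightly between manufacturers or firmware revisions, and ZFS refuses a smaller replacement. On FreeBSD, diskinfo -v /dev/ada5 reports the exact size ("mediasize in sectors" line). A minimal sketch of the comparison, with made-up sector counts standing in for the real diskinfo values:

```shell
#!/bin/sh
# Compare exact drive capacities. The sector counts used below are made up;
# on FreeBSD read the real ones from the "mediasize in sectors" line of
#   diskinfo -v /dev/ada5
check_size() {  # usage: check_size OLD_SECTORS NEW_SECTORS
    if [ "$2" -lt "$1" ]; then
        echo "replacement has $(($1 - $2)) sectors too few"
    else
        echo "replacement is big enough"
    fi
}
check_size 1953525168 1953523055
```

If the new drive really is a hair smaller, no amount of GUI or CLI fiddling will make the replace succeed.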
So far I've followed the steps in the 8.0.3 guide (though I'm running 8.0.4-p1; I switched to 8.2.0-BETA3 and it has the same problems). The zpool replace Thing1 <long numerical value of old drive> ada5 works after a very long resilvering process, but after that the CLI detach doesn't work, since there is no "old" device anymore. I can do a detach from the GUI, but I remember reading that mixing and matching CLI and GUI commands is a sure way to break things.
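For reference, the CLI sequence I've pieced together is below. The numeric guid is hypothetical (read the real one from zpool status), and as far as I understand it, zpool replace detaches the old member by itself once the resilver finishes, which would explain why a manual detach afterwards finds nothing to remove. With DRY_RUN=1 (the default here) the script only prints the commands instead of running them:

```shell
#!/bin/sh
# Sketch of the CLI replacement for pool "Thing1", new disk ada5.
# The numeric guid below is hypothetical -- get the real one from
# the UNAVAIL line of `zpool status Thing1`.
# DRY_RUN=1 (default) only prints the commands instead of running them.
DRY_RUN=${DRY_RUN:-1}
run() {
    if [ "$DRY_RUN" = 1 ]; then echo "would run: $*"; else "$@"; fi
}

run zpool status Thing1                 # note the guid of the UNAVAIL member
run zpool replace Thing1 12345678901234567 ada5
# zpool replace drops the old member automatically after the resilver;
# a manual detach should only be needed if the stale entry lingers:
run zpool detach Thing1 12345678901234567
```

Set DRY_RUN=0 only once you are sure the guid and target device are right for your pool.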
I followed the thread with a lot of interest, because this is the most important part of the whole story: maintaining the redundant system over time. You have to accept that disks will break over time, but you cannot accept that the repair process fails.
Just my personal remarks:
Don't wait to learn the workflow until a real problem occurs. It is so easy to set up a mini FreeNAS system with lots of drives in VMware (or another virtualization platform). Simulate a disk problem and try to repair it. And document what you have learned for the day when you have to fix your production unit.
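To make the dry run concrete: ZFS also accepts plain files as vdevs, so the whole failure-and-replace cycle can be rehearsed without any real disks at all. A sketch follows; run it only on a throwaway ZFS-capable test box, never on your NAS. With DO=echo (the default here) it just prints the commands so you can read through the rehearsal first:

```shell
#!/bin/sh
# Rehearse a disk replacement on file-backed vdevs (test box only!).
# DO=echo (default) prints the commands; set DO="" to really run them.
DO=${DO:-echo}

$DO truncate -s 256m /tmp/d0 /tmp/d1 /tmp/d2 /tmp/d3 /tmp/d4
$DO zpool create practice raidz2 /tmp/d0 /tmp/d1 /tmp/d2 /tmp/d3
$DO zpool offline practice /tmp/d2          # simulate the failed disk
$DO zpool replace practice /tmp/d2 /tmp/d4  # resilver onto the spare file
$DO zpool status practice                   # watch until it finishes
$DO zpool destroy practice                  # clean up afterwards
$DO rm /tmp/d0 /tmp/d1 /tmp/d2 /tmp/d3 /tmp/d4
```

Ten minutes with this and the real replacement holds no surprises.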
Speaking of documentation: the FreeNAS guide is not very detailed in describing a disk failure case in the chapter "Replacing a Failed Drive". And the later chapter in the appendix, "How do I replace a bad drive?", is, in my eyes, complete bullshit, as it lists commands which will not work at all, at least not on a FreeNAS device.
But back to your 6-drive zpool: I am running a similar configuration, but with 6 x 3TB WD30EZRX drives. And ZFS is a killer feature; I say this after years of experience with different types and configurations of "traditional" RAID systems. So I decided to set up a RAIDZ2, to have the possibility of replacing a disk without losing redundancy. I purchased the drives (antediluvian, meaning before the big flood) from different dealers. One of the drives had more vibration and was getting warmer than the others. This unit was, after 6 months, accumulating more and more Offline_Uncorrectables and Current_Pending_Sectors. So I ordered a replacement unit (advance replacement by WD). The unit I got was a bit different (WD3009FYPX, obviously a brand-new enterprise-grade 3 TB drive, which was not yet listed on the WD website). But it had exactly the same size in blocks and bytes.
I am using FreeNAS-8.0.4-RELEASE_MULTIMEDIA-p1-x64 (11076)
To get all this information, use for example smartctl -a /dev/ada3 (ada3 was my problem disk).
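The two attributes worth watching can be pulled out with a one-liner. In the sketch below a here-document stands in for real smartctl -a output (the raw values are invented), so the filter can be tried anywhere; on the NAS, pipe the real smartctl output into the same awk:

```shell
#!/bin/sh
# Extract the two "disk is dying" attributes from SMART output.
# The here-doc fakes two lines of `smartctl -a /dev/ada3` (values invented);
# on a real system pipe the real smartctl output into the same awk filter.
smart_excerpt() {
cat <<'EOF'
197 Current_Pending_Sector  0x0032   200   200   000    Old_age   Always       -       8
198 Offline_Uncorrectable   0x0030   200   200   000    Old_age   Offline      -       5
EOF
}
smart_excerpt |
    awk '/Current_Pending_Sector|Offline_Uncorrectable/ { print $2 "=" $NF }'
```

If either raw value keeps climbing from one check to the next, start shopping for the replacement.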
Then I disabled the active services, like CIFS and Appletalk (as my daughter's MacBook is doing TM backups the whole day...).
I shut down the NAS.
I replaced the problem disk against the new one.
I rebooted the NAS again.
In the GUI, I did a View All Volumes (the volume was degraded, as expected) and, for this volume, a View Disks (displaying the removed disk with Serial "unknown"). The other disks, ada0, ada1, ada2, ada4 and ada5, were still displayed with a real Serial.
I selected exactly this "unknown" entry (the last shadow of the dead, removed disk) and did a Replace. It offered as Member Disk "in-place (ada3) 3.0 TB". After this, the new disk appeared in the list as ada3 with the new Serial. The old entry was still there.
I had a look at the zpool in the CLI:
[root@freenas] ~# zpool status
  pool: ZFS6x3T
 state: DEGRADED
status: One or more devices is currently being resilvered.  The pool will
        continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
 scrub: resilver in progress for 0h1m, 0.07% done, 27h9m to go
config:

        NAME                                            STATE     READ WRITE CKSUM
        ZFS6x3T                                         DEGRADED     0     0     0
          raidz2                                        DEGRADED     0     0     0
            gptid/4a6f4bd2-e16b-11e0-8df7-f46d04d89638  ONLINE       0     0     0
            gptid/4b4370a8-e16b-11e0-8df7-f46d04d89638  ONLINE       0     0     0
            gptid/4c19e405-e16b-11e0-8df7-f46d04d89638  ONLINE       0     0     0
            replacing                                   DEGRADED     0     0     0
              12902692225261382052                      UNAVAIL      0     0     0  was /dev/ada3p2/old
              ada3p2                                    ONLINE       0     0     0  1.85G resilvered
            gptid/4dc98083-e16b-11e0-8df7-f46d04d89638  ONLINE       0     0     0
            gptid/4ea2e01f-e16b-11e0-8df7-f46d04d89638  ONLINE       0     0     0

errors: No known data errors
[root@freenas] ~#
Then I did, in the GUI, a "Detach" on the entry of the old disk.
Then I looked at the zpool in the CLI again:
[root@freenas] ~# zpool status
  pool: ZFS6x3T
 state: ONLINE
status: One or more devices is currently being resilvered.  The pool will
        continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
 scrub: resilver in progress for 0h2m, 0.15% done, 24h28m to go
config:

        NAME                                            STATE     READ WRITE CKSUM
        ZFS6x3T                                         ONLINE       0     0     0
          raidz2                                        ONLINE       0     0     0
            gptid/4a6f4bd2-e16b-11e0-8df7-f46d04d89638  ONLINE       0     0     0
            gptid/4b4370a8-e16b-11e0-8df7-f46d04d89638  ONLINE       0     0     0
            gptid/4c19e405-e16b-11e0-8df7-f46d04d89638  ONLINE       0     0     0
            ada3p2                                      ONLINE       0     0     0  3.93G resilvered
            gptid/4dc98083-e16b-11e0-8df7-f46d04d89638  ONLINE       0     0     0
            gptid/4ea2e01f-e16b-11e0-8df7-f46d04d89638  ONLINE       0     0     0

errors: No known data errors
[root@freenas] ~#
Now I have to wait until the resilvering process is finished. The zpool is already online again. The new member will stay with the short name ada3p2 instead of the magic partition ID (gptid), but this is not an issue at all.
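If you don't want to keep re-running zpool status by hand, a tiny check on its output does the waiting for you. In the sketch below, status_cmd is a stub (the scrub line is copied from my output above) so the logic can be tried without a real pool; on the NAS, replace the stub with the real zpool status call:

```shell
#!/bin/sh
# Check whether a resilver is still running. status_cmd is a stub so the
# logic can be tried anywhere; on the NAS replace its body with:
#   zpool status ZFS6x3T
status_cmd() {
cat <<'EOF'
 scrub: resilver in progress for 0h2m, 0.15% done, 24h28m to go
EOF
}

if status_cmd | grep -q 'resilver in progress'; then
    echo "still resilvering"
else
    echo "resilver finished"
fi
# For a real wait loop on the NAS:
#   while zpool status ZFS6x3T | grep -q 'resilver in progress'; do sleep 600; done
```

Once "resilver in progress" disappears from the status output, you can re-enable the services again.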
This took, not counting the resilvering process, no more than 15 minutes for me. What I wanted to say is that the replacement of a bad disk seems to work straightforwardly in the GUI.
erwin