If you are concerned that the GUI isn't reporting correctly (or that you don't understand its output), just run "zpool status" at the command line:
Don't mind the failed disk; just look at the freenas-boot pool and notice the mirror-0 with two drives below it.
Code:
[root@freenas2] ~# zpool status
  pool: backup-tank2
 state: DEGRADED
status: One or more devices has been taken offline by the administrator.
        Sufficient replicas exist for the pool to continue functioning in a
        degraded state.
action: Online the device using 'zpool online' or replace the device with
        'zpool replace'.
  scan: none requested
config:

        NAME                                            STATE     READ WRITE CKSUM
        backup-tank2                                    DEGRADED     0     0     0
          raidz1-0                                      DEGRADED     0     0     0
            gptid/6c9419be-9654-11e5-87db-001f33eaf869  ONLINE       0     0     0
            gptid/6d54648c-9654-11e5-87db-001f33eaf869  ONLINE       0     0     0
            gptid/6e0f6e32-9654-11e5-87db-001f33eaf869  ONLINE       0     0     0
            gptid/6ec82f73-9654-11e5-87db-001f33eaf869  ONLINE       0     0     0
            4440998044591230907                         OFFLINE      0     0     0  was /dev/gptid/9521ba9a-9654-11e5-87db-001f33eaf869
            gptid/9d5448ba-9654-11e5-87db-001f33eaf869  ONLINE       0     0     0

errors: No known data errors

  pool: freenas-boot
 state: ONLINE
  scan: scrub repaired 132K in 0h13m with 0 errors on Mon Nov  9 15:21:00 2015
config:

        NAME                                            STATE     READ WRITE CKSUM
        freenas-boot                                    ONLINE       0     0     0
          mirror-0                                      ONLINE       0     0     0
            gptid/1c4fbb56-c83f-11e4-ab84-001f33eaf869  ONLINE       0     0     0
            gptid/1caf62bd-c83f-11e4-ab84-001f33eaf869  ONLINE       0     0     0

errors: No known data errors
[root@freenas2] ~#
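For reference, the action: line in that output can be followed literally from the same shell. A sketch, assuming the pool name and device GUID shown above; the replacement gptid placeholder is hypothetical and would be the new disk's partition label on your system:

```shell
# If the original disk is healthy and just needs to be brought back:
zpool online backup-tank2 4440998044591230907

# Or, if the disk was physically swapped out, resilver onto the new one
# (<new-gptid> is a placeholder for the replacement partition's gptid):
zpool replace backup-tank2 4440998044591230907 /dev/gptid/<new-gptid>
```

On FreeNAS you would normally do the replace through the GUI so the partition table and swap are set up for you, but the commands above are what it runs under the hood.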