It would be much more useful if you had pasted the whole zpool status output...
I'm guessing a disk is starting to fail, in which case you'd have to replace it.
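If you still have shell access, something like this would pull the full picture (the ada2 device name is just a guess here; adjust it to whatever your pool actually reports):

    # Full pool status, including per-device read/write/checksum error counters
    zpool status -v

    # SMART health report for the suspect drive (needs smartmontools)
    smartctl -a /dev/ada2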
Removing stale files from /var/preserve:
Cleaning out old system announcements:
Backup passwd and group files:
Verifying group file syntax:
/etc/group is fine
Disk status:
Filesystem             Size    Used   Avail Capacity  Mounted on
/dev/ufs/FreeNASs2a    927M    331M    521M    39%    /
devfs                  1.0K    1.0K      0B   100%    /dev
/dev/md0               4.6M    1.8M    2.4M    43%    /etc
/dev/md1               824K    2.0K    756K     0%    /mnt
/dev/md2               149M    7.9M    129M     6%    /var
/dev/ufs/FreeNASs4      20M    746K     17M     4%    /data
Hello                  4.3T     56M    4.3T     0%    /mnt/Hello
Hello/Bernard          5.0T    743G    4.3T    14%    /mnt/Hello/Bernard
Hello/Chris            4.6T    320G    4.3T     7%    /mnt/Hello/Chris
Hello/Frank            4.3T    149K    4.3T     0%    /mnt/Hello/Frank
Hello/Guest            4.3T    149K    4.3T     0%    /mnt/Hello/Guest
Hello/Jennifer         4.3T    155K    4.3T     0%    /mnt/Hello/Jennifer
Last dump(s) done (Dump '>' file systems):
Checking status of zfs pools:
  pool: Hello
 state: ONLINE
status: One or more devices has experienced an unrecoverable error.  An
        attempt was made to correct the error.  Applications are unaffected.
action: Determine if the device needs to be replaced, and clear the errors
        using 'zpool clear' or replace the device with 'zpool replace'.
   see: http://www.sun.com/msg/ZFS-8000-9P
 scrub: none requested
config:
        NAME        STATE     READ WRITE CKSUM
        Hello       ONLINE       0     0     0
          raidz1    ONLINE       0     0     0
            ada0p2  ONLINE       0     0     0
            ada1p2  ONLINE       0     0     0
            ada2p2  ONLINE       0  113K     2
errors: No known data errors
Checking status of ATA raid partitions:
Checking status of gmirror(8) devices:
Checking status of graid3(8) devices:
Checking status of gstripe(8) devices:
Network interface status:
Name    Mtu Network       Address              Ipkts Ierrs Idrop    Opkts Oerrs  Coll
re0    1500 <Link#1>      c8:60:00:e4:2c:d7  7750722     0     0  5935201     0     0
re0    1500 192.168.1.0   192.168.1.150      7679400     -     -  5934751     -     -
lo0   16384 <Link#2>                             923     0     0      923     0     0
lo0   16384 fe80:2::1     fe80:2::1                0     -     -        0     -     -
lo0   16384 localhost     ::1                      0     -     -        0     -     -
lo0   16384 your-net      localhost              895     -     -      923     -     -
Security check:
    (output mailed separately)
Checking status of 3ware RAID controllers:
Alarms (most recent first):
  No new alarms.
-- End of daily output --
Does this help? I hope it isn't a failing disk, because the drives haven't been in use for more than a month.
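For what it's worth, the "action" line in that zpool status maps to commands along these lines (pool and device names are taken from the output above; treat this as a sketch and double-check which disk is which before running anything):

    # If the errors look transient, reset the counters and verify with a scrub
    zpool clear Hello
    zpool scrub Hello
    zpool status -v Hello    # see whether the errors come back after the scrub

    # If the disk really is failing, swap it out. "ada3p2" is a hypothetical
    # replacement device; use whatever the new disk actually shows up as.
    zpool replace Hello ada2p2 ada3p2

Given that ada2p2 shows 113K write errors, a scrub that comes back clean after a clear would point at cabling or a controller glitch rather than the disk itself; errors that return would argue for the replace.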