Hi,
I have set up a large server with 32711 MB of RAM and (I hope) optimized settings, running FreeNAS-8.3.1-RELEASE-x64 (r13452) on an Intel(R) Xeon(R) CPU E5-2609 0 @ 2.40GHz.
I will mainly use it as a backup server.
I have started backing up another FreeNAS device using the documented method of ZFS snapshot replication over SSH.
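For context, the replication amounts to roughly the following (a sketch with placeholder dataset and snapshot names, not my exact commands; the `command -v` guard just makes it a no-op on a box without ZFS):

```shell
#!/bin/sh
# Placeholder names -- adjust to the real source/destination datasets.
SRC=tank/backup                  # dataset on the source FreeNAS box
DST=tank/backup                  # receiving dataset on this server (tide)
SNAP="manual-$(date +%Y%m%d)"    # snapshot name, e.g. manual-20130405

if command -v zfs >/dev/null 2>&1; then
    # Recursive snapshot on the source, then stream it to tide over SSH.
    # -R sends the dataset with all children and properties;
    # -d derives dataset names from the stream, -F forces a rollback
    # of the target to the most recent matching snapshot.
    zfs snapshot -r "${SRC}@${SNAP}"
    zfs send -R "${SRC}@${SNAP}" | ssh root@tide zfs receive -dF "${DST}"
fi
```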
I see no particular errors during the snapshots, but after the snapshot backup has run I observe the following strange behavior:
1. The GUI indicates that the mountpoint where the snapshots are received is healthy, but shows "Error getting available space" and "Error getting total space".
2. When I look at the server from the CLI, I can't reproduce this error: the output of zfs list and zpool status looks fine:
Code:
[root@tide] ~# zpool status
  pool: tank
 state: ONLINE
  scan: none requested
config:

    NAME                                            STATE     READ WRITE CKSUM
    tank                                            ONLINE       0     0     0
      raidz2-0                                      ONLINE       0     0     0
        gptid/86cf8c92-9c9c-11e2-b30e-001e67549fcd  ONLINE       0     0     0
        gptid/873789bc-9c9c-11e2-b30e-001e67549fcd  ONLINE       0     0     0
        gptid/87a09632-9c9c-11e2-b30e-001e67549fcd  ONLINE       0     0     0
        gptid/8813426f-9c9c-11e2-b30e-001e67549fcd  ONLINE       0     0     0
        gptid/887aaba2-9c9c-11e2-b30e-001e67549fcd  ONLINE       0     0     0
        gptid/88e2cf32-9c9c-11e2-b30e-001e67549fcd  ONLINE       0     0     0
    logs
      mirror-1                                      ONLINE       0     0     0
        gptid/89380ddc-9c9c-11e2-b30e-001e67549fcd  ONLINE       0     0     0
        gptid/895d3c99-9c9c-11e2-b30e-001e67549fcd  ONLINE       0     0     0

errors: No known data errors

[root@tide] ~# zfs list
NAME                          USED  AVAIL  REFER  MOUNTPOINT
tank                         80.9G  10.6T   320K  /mnt/tank
tank/backup                  80.8G  3.92T   304K  /mnt/tank/backup
tank/backup/Partage          80.8G  3.92T   288K  /mnt/tank/backup/Partage
tank/backup/Partage/Private  54.6G  3.92T  52.7G  /mnt/tank/backup/Partage/Private
tank/backup/Partage/ToDoo    26.2G  3.92T  22.8G  /mnt/tank/backup/Partage/ToDoo
tank/home                     312K   100G   312K  /mnt/tank/home
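In case it's useful for comparing against what the GUI reports, the same space figures can be queried directly with exact byte counts (a sketch; `tank/backup` is the dataset the GUI complains about):

```shell
#!/bin/sh
DATASET=tank/backup   # the mountpoint the GUI flags

if command -v zfs >/dev/null 2>&1; then
    # -H: no header, -p: exact (parsable) byte counts.
    zfs get -Hp -o property,value used,available "$DATASET"
    # Pool-level totals for comparison.
    zpool list tank
fi
```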
Since this server is intended for production in a fairly critical environment, I just wanted to know whether there is anything unusual in my configuration, or whether this is a bug in the GUI?
Thanks for your help.