I'm getting status mails every morning from my NAS, and since I'm not from the *nix world I don't understand all of it. My biggest concern is that a drive is starting to fail and I won't know. How will I know?
The mail I'm getting contains this, what is what?
Code:
Removing stale files from /var/preserve:
Cleaning out old system announcements:
Backup passwd and group files:
Verifying group file syntax:
/etc/group is fine
Disk status:
Filesystem           Size  Used  Avail  Capacity  Mounted on
/dev/ufs/FreeNASs1a  927M  422M   431M       49%  /
devfs                1.0K  1.0K     0B      100%  /dev
/dev/md0             4.4M  2.4M   1.6M       60%  /etc
/dev/md1             686K  8.0K   624K        1%  /mnt
/dev/md2              75M   16M    53M       23%  /var
/dev/ufs/FreeNASs4    20M  1.0M    17M        5%  /data
panda                5.3T  2.8T   2.5T       53%  /mnt/panda
Last dump(s) done (Dump '>' file systems):
Checking status of zfs pools:
all pools are healthy
Checking status of ATA raid partitions:
Checking status of gmirror(8) devices:
Checking status of graid3(8) devices:
Checking status of gstripe(8) devices:
Network interface status:
Name  Mtu    Network      Address            Ipkts      Ierrs  Idrop  Opkts      Oerrs  Coll
re0   1500   <Link#1>     00:19:db:b3:4e:94  394811953  0      0      665720176  0      0
re0   1500   192.168.0.0  192.168.0.100      394582601  -      -      665704430  -      -
lo0   16384  <Link#2>                        231820     0      0      231820     0      0
lo0   16384  fe80:2::1    fe80:2::1          0          -      -      0          -      -
lo0   16384  localhost    ::1                0          -      -      0          -      -
lo0   16384  your-net     localhost          231774     -      -      231820     -      -
Security check:
(output mailed separately)
Checking for denied zone transfers (AXFR and IXFR):
Scrubbing of zfs pools:
skipping scrubbing of pool 'panda':
last scrubbing is 17 days ago, threshold is set to 30 days
Checking status of 3ware RAID controllers:
Alarms (most recent first):
No new alarms.
-- End of daily output --
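For what it's worth, the "Checking status of zfs pools" line in the mail comes from the nightly periodic(8) scripts running a pool health check. If I understand it right, the same check can be run by hand at any time; a minimal sketch (the guard around `zpool` is my own addition so it's safe to paste on a machine without ZFS):

```shell
#!/bin/sh
# Ask ZFS whether any pool has problems. `zpool status -x` prints
# "all pools are healthy" when nothing is wrong, and details about
# degraded/faulted pools otherwise -- the same summary the daily
# mail shows under "Checking status of zfs pools".
if command -v zpool >/dev/null 2>&1; then
    zpool status -x
else
    # Guard: this machine has no ZFS tools installed.
    echo "zpool not available on this machine"
fi
```

A drive that is actually failing would flip that line from "all pools are healthy" to a DEGRADED or FAULTED pool report, so the daily mail is the place where it would show up first.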