Understanding Daily Run output

Status
Not open for further replies.

Jdb

Dabbler
Joined
Nov 2, 2013
Messages
10
I hope I'm just misreading this.

When I set up the test server I thought I created a RAIDZ2 configuration with four 2TB hard drives. When I look at the disk status it references a UFS file system. I'm guessing by the size that it's for the USB drive. Is that correct? (It's a 16GB USB drive, but the numbers below don't add up to 16.)

Then the status of the ZFS pools shows 7.25T of storage available. Since it's supposed to be RAIDZ2, shouldn't it be closer to 3.85T?

Thanks,

Daily Run Output


Removing stale files from /var/preserve:

Cleaning out old system announcements:

Backup passwd and group files:

Verifying group file syntax:
/etc/group is fine

Backing up mail aliases:

Backing up package db directory:

Disk status:
Filesystem Size Used Avail Capacity Mounted on
/dev/ufs/FreeNASs1a 926M 654M 198M 77% /
devfs 1.0k 1.0k 0B 100% /dev
/dev/md0 4.6M 3.8M 413k 90% /etc
/dev/md1 823k 2.0k 756k 0% /mnt
/dev/md2 149M 23M 114M 17% /var
/dev/ufs/FreeNASs4 19M 1.1M 17M 6% /data
Store1 3.4T 232k 3.4T 0% /mnt/Store1
Store1/Personal 3.4T 221k 3.4T 0% /mnt/Store1/Personal
Store1/Personal/backup_jd 300G 57G 242G 19% /mnt/Store1/Personal/backup_jd
Store1/Phone_Recordings 3.4T 209k 3.4T 0% /mnt/Store1/Phone_Recordings

Checking status of zfs pools:
NAME SIZE ALLOC FREE CAP DEDUP HEALTH ALTROOT
Store1 7.25T 119G 7.13T 1% 1.00x ONLINE /mnt

all pools are healthy

Checking status of ATA raid partitions:

Checking status of gmirror(8) devices:

Checking status of graid3(8) devices:

Checking status of gstripe(8) devices:

Network interface status:
Name Mtu Network Address Ipkts Ierrs Idrop Opkts Oerrs Coll Drop
em0 1500 <Link#1> 00:25:90:d0:6b:d5 2156618 0 0 1888239 0 0 0
em0 1500 192.168.1.0 192.168.1.11 2118174 - - 2297840 - - -
usbus 0 <Link#2> 0 0 0 0 0 0 0
em1 1500 <Link#3> 00:25:90:d0:6b:d4 93970 0 0 394 0 0 0
em1 1500 fe80::225:90f fe80::225:90ff:fe 0 - - 20 - - -
em1 1500 192.168.1.0 192.168.1.12 19588 - - 375 - - -
usbus 0 <Link#4> 0 0 0 0 0 0 0
ipfw0 65536 <Link#5> 0 0 0 0 0 0 0
lo0 16384 <Link#6> 263629 0 0 263627 0 0 0
lo0 16384 localhost ::1 152 - - 152 - - -
lo0 16384 fe80::1%lo0 fe80::1 0 - - 0 - - -
lo0 16384 your-net localhost 263485 - - 263476 - - -

Security check:
(output mailed separately)

Checking status of 3ware RAID controllers:
Alarms (most recent first):
No new alarms.

-- End of daily output --
 

warri

Guru
Joined
Jun 6, 2011
Messages
1,193
Yes, the UFS file system you see is the OS on the USB stick.

The output of zpool list (which I believe is shown in the ZFS status section) needs some interpretation. I think it's actually just summing up the sizes of the child datasets, which results in the 7.x TB. If you want to get a better estimation, look at this line, which you should get with zfs list:

tank1 7.41T 1.14T 943M /mnt/tank1

That's an example from my pool; sum that up and you have your total space.

There have been some discussions in other threads about how ZFS calculates these numbers; I can't recall the details at the moment. You may have more luck searching for those threads.
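As a rough cross-check of the ~3.85T figure from the question, here is a minimal back-of-the-envelope sketch (my own illustration, not from the daily output; it assumes RAIDZ2 reserves two of the four drives' worth of space for parity, and ignores swap and metadata overhead):

```python
drives = 4
drive_bytes = 2 * 10**12            # a "2 TB" drive in bytes (power of 1000)
parity = 2                          # RAIDZ2 stores two drives' worth of parity
usable_bytes = (drives - parity) * drive_bytes
print(round(usable_bytes / 2**40, 2))  # -> 3.64 (TiB, before overhead)
```

So something in the 3.6T range is the right ballpark for usable space, a bit under the 3.85T guess.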
 

Dusan

Guru
Joined
Jan 29, 2013
Messages
1,165
warri said:
The output of zpool list (which I believe is shown in the ZFS status section) needs some interpretation. I think it's actually just summing up the sizes of the child datasets
The size displayed by zpool list is actually very simple -- it's the raw size of all devices (not taking any redundancy into account).
2TB (power of 1000) = 1.819TB (power of 1024)
1.819TB * 4 = 7.276TB
The displayed size of 7.25TB is slightly smaller than 7.276TB due to swap partitions and some other overhead.
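The arithmetic above can be reproduced directly (a sketch assuming a "2 TB" drive is exactly 2 * 10^12 bytes and that zpool list reports sizes in powers of 1024):

```python
# Raw size of four "2 TB" drives, with no redundancy deducted --
# this is what zpool list reports before swap/metadata overhead.
raw_bytes = 4 * 2 * 10**12
tib = raw_bytes / 2**40             # convert bytes to TiB (power of 1024)
print(round(tib, 3))                # -> 7.276
```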
 