Reporting stopped working after reboot

Status
Not open for further replies.

daverdfw

Cadet
Joined
Oct 31, 2012
Messages
5
Running 11.1-RELEASE, and all of my reports are blank after I did a reboot to change a BIOS setting. Any help would be appreciated. Thanks!
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
Running 11.1-RELEASE, and all of my reports are blank after I did a reboot to change a BIOS setting. Any help would be appreciated. Thanks!
What reports are blank? Can you be more specific?
 

daverdfw

Cadet
Joined
Oct 31, 2012
Messages
5
What's the output of ls -lh /var/db/collectd/rrd/?

Code:
root@freenas:~ # ls -lh /var/db/collectd/rrd
lrwxr-xr-x 1 root wheel 51B Jan 12 13:21 /var/db/collectd/rrd -> /var/db/system/rrd-76c11d7f8a944b3d8e42fe35420dbaa3
 
D

dlavigne

Guest
How about ls -lh /var/db/system/rrd-76c11d7f8a944b3d8e42fe35420dbaa3?
 

daverdfw

Cadet
Joined
Oct 31, 2012
Messages
5
How about ls -lh /var/db/system/rrd-76c11d7f8a944b3d8e42fe35420dbaa3?

Code:
lrwxr-xr-x 1 root wheel 9B Jan 12 13:21 freenas.local -> localhost
drwxr-xr-x 59 root wheel 59B Jan 12 11:43 localhost


root@freenas:/var/db/system/rrd-76c11d7f8a944b3d8e42fe35420dbaa3 # ls -lh localhost/
total 173
drwxr-xr-x 2 root wheel 7B Dec 9 17:56 aggregation-cpu-average
drwxr-xr-x 2 root wheel 7B Dec 9 17:56 aggregation-cpu-max
drwxr-xr-x 2 root wheel 7B Dec 9 17:56 aggregation-cpu-min
drwxr-xr-x 2 root wheel 7B Dec 9 17:56 aggregation-cpu-num
drwxr-xr-x 2 root wheel 7B Dec 9 17:56 aggregation-cpu-stddev
drwxrwxrwx 2 root wheel 7B Oct 29 21:36 aggregation-cpu-sum
drwxrwxrwx 2 root wheel 7B Oct 29 21:36 cpu-0
drwxrwxrwx 2 root wheel 7B Oct 29 21:36 cpu-1
drwxr-xr-x 2 root wheel 8B Oct 29 21:36 ctl-ioctl
drwxr-xr-x 2 root wheel 8B Oct 29 21:36 ctl-tpc
drwxrwxrwx 2 root wheel 5B Jan 11 19:03 df-mnt-iocage
drwxrwxrwx 2 root wheel 5B Jan 11 19:03 df-mnt-iocage-download
drwxr-xr-x 2 root wheel 5B Jan 12 11:38 df-mnt-iocage-download-11.1-RELEASE
drwxrwxrwx 2 root wheel 5B Jan 11 19:03 df-mnt-iocage-images
drwxrwxrwx 2 root wheel 5B Jan 11 19:03 df-mnt-iocage-jails
drwxr-xr-x 2 root wheel 5B Jan 12 11:43 df-mnt-iocage-jails-plex
drwxr-xr-x 2 root wheel 5B Jan 12 11:43 df-mnt-iocage-jails-plex-root
drwxrwxrwx 2 root wheel 5B Jan 11 19:03 df-mnt-iocage-log
drwxrwxrwx 2 root wheel 5B Jan 11 19:03 df-mnt-iocage-releases
drwxr-xr-x 2 root wheel 5B Jan 12 11:40 df-mnt-iocage-releases-11.1-RELEASE
drwxr-xr-x 2 root wheel 5B Jan 12 11:40 df-mnt-iocage-releases-11.1-RELEASE-root
drwxrwxrwx 2 root wheel 5B Jan 11 19:03 df-mnt-iocage-templates
drwxrwxrwx 2 root wheel 5B Oct 29 21:36 df-mnt-vmware
drwxr-xr-x 2 root wheel 5B Jan 11 18:38 df-mnt-vmware-.bhyve_containers
drwxr-xr-x 2 root wheel 5B Nov 12 14:35 df-mnt-vmware-jails
drwxr-xr-x 2 root wheel 5B Nov 12 14:38 df-mnt-vmware-jails-.warden-template-pluginjail
drwxr-xr-x 2 root wheel 5B Dec 9 17:55 df-mnt-vmware-jails-.warden-template-pluginjail-11.0-x64
drwxr-xr-x 2 root wheel 5B Jan 12 11:22 df-mnt-vmware-jails-.warden-template-standard
drwxr-xr-x 2 root wheel 5B Jan 12 11:22 df-mnt-vmware-jails-media_server
drwxr-xr-x 2 root wheel 5B Nov 12 14:42 df-mnt-vmware-jails-nextcloud_1
drwxr-xr-x 2 root wheel 5B Jan 12 15:02 df-mnt-vmware-smb
drwxr-xr-x 2 root wheel 5B Oct 29 21:36 df-root
drwxr-xr-x 2 root wheel 6B Oct 29 21:36 disk-ada0
drwxr-xr-x 2 root wheel 6B Nov 4 10:36 disk-ada0p1
drwxrwxrwx 2 root wheel 6B Oct 29 21:36 disk-ada1
drwxrwxrwx 2 root wheel 6B Oct 29 21:36 disk-ada2
drwxrwxrwx 2 root wheel 6B Oct 29 21:36 disk-ada3
drwxrwxrwx 2 root wheel 6B Oct 29 21:36 disk-ada4
drwxrwxrwx 2 root wheel 6B Oct 29 21:36 disk-ada5
drwxrwxrwx 2 root wheel 6B Oct 29 21:36 disk-da0
drwxr-xr-x 2 root wheel 158B Dec 9 17:39 geom_stat
drwxr-xr-x 2 root wheel 5B Nov 12 14:42 interface-bridge0
drwxr-xr-x 2 root wheel 5B Oct 29 21:36 interface-em0
drwxr-xr-x 2 root wheel 5B Nov 12 14:42 interface-epair0a
drwxr-xr-x 2 root wheel 5B Oct 29 21:58 interface-lagg0
drwxr-xr-x 2 root wheel 5B Oct 29 22:09 interface-lagg1
drwxrwxrwx 2 root wheel 5B Oct 29 21:36 interface-re0
drwxr-xr-x 2 root wheel 5B Jan 11 18:45 interface-tap0
drwxrwxrwx 2 root wheel 5B Jan 11 18:50 interface-tap1
drwxr-xr-x 2 root wheel 5B Jan 12 11:43 interface-vnet0:2
drwxrwxrwx 2 root wheel 3B Oct 29 21:36 load
drwxrwxrwx 2 root wheel 7B Oct 29 21:36 memory
drwxrwxrwx 2 root wheel 9B Oct 29 21:36 processes
drwxrwxrwx 2 root wheel 4B Oct 29 21:36 swap
drwxrwxrwx 2 root wheel 3B Oct 29 21:36 uptime
drwxrwxrwx 2 root wheel 39B Oct 29 21:36 zfs_arc
drwxrwxrwx 2 root wheel 97B Oct 29 21:36 zfs_arc_v2
 
D

dlavigne

Guest
Looks like the data is still there. Is this the old UI or the new UI? Also, any errors in /var/log/messages?
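For anyone checking that, one quick way to pull out any reporting-related log entries (assuming collectd is the daemon writing the RRD files, as it normally is on FreeNAS) would be something like:

Code:
# show the most recent collectd/rrd-related entries in the system log, if any
grep -iE 'collectd|rrd' /var/log/messages | tail -n 50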
 

Roman V

Cadet
Joined
Jan 13, 2018
Messages
1
Got the same problem after upgrading from 11 to 11.1: statistics disappear after each reboot. I've looked through the directories and found this:

Code:
# mount -v | grep rr
V1/.system/rrd-d39f69b081d347a7bd3cf8235f592148 on /var/db/system/rrd-d39f69b081d347a7bd3cf8235f592148 (zfs, local, nfsv4acls, fsid 8157ce39de390e0e)
tmpfs on /var/db/collectd/rrd (tmpfs, local, fsid 06ff008787000000)


There's a dedicated rrd-* dataset in .system, but the default collectd directory is mounted as tmpfs, whose contents are not preserved across a reboot.

Is there an easy way to fix it?
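In case it helps while waiting for a proper answer, here is a rough per-reboot workaround sketch: archive the tmpfs-backed RRD files to the pool before a planned reboot and put them back afterwards. The backup path /mnt/tank/backup is just a placeholder, and stopping/starting collectd with service(8) may or may not be needed depending on how the middleware manages it, so treat this as an idea rather than a tested recipe:

Code:
# before the planned reboot: copy the in-memory RRD files to persistent storage
# (/mnt/tank/backup is a placeholder - use a dataset that actually exists)
tar -C /var/db/collectd -cf /mnt/tank/backup/rrd-backup.tar rrd

# after the reboot: stop collectd, restore the files, then start it again
service collectd onestop
tar -C /var/db/collectd -xf /mnt/tank/backup/rrd-backup.tar
service collectd onestart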
 

Uhtred

Cadet
Joined
Oct 9, 2012
Messages
8
I see this as well, and I believe it started when I upgraded from 9.x to 11 (I'm now on 11.1).

I've been rebooting often this week to replace my disks, and each time my graphs are reset.

Code:
# mount -v | grep rr
stardust/.system/rrd-7e2d2fe9f1a944abae444b96dbc71a5e on /var/db/system/rrd-7e2d2fe9f1a944abae444b96dbc71a5e (zfs, local, nfsv4acls, fsid d5edea24dee6431d)
tmpfs on /var/db/collectd/rrd (tmpfs, local, fsid 06ff008787000000)
 

Uhtred

Cadet
Joined
Oct 9, 2012
Messages
8
I've just seen that under the System Dataset tab in System there is a tick box to save the RRD (reporting) database to the system dataset. I have ticked this, and now the output from the command is:

Code:
mount -v | grep rr
stardust/.system/rrd-7e2d2fe9f1a944abae444b96dbc71a5e on /var/db/system/rrd-7e2d2fe9f1a944abae444b96dbc71a5e (zfs, local, nfsv4acls, fsid d5edea24dee6431d)


I'll be rebooting in about 8 hours to replace the next disk, so I will report back on whether the graph data is retained this time.

Edit - Worked perfectly after a reboot this time.
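For anyone else hitting this, a quick way to confirm the setting took effect after a reboot (based on the paths shown earlier in the thread) is to check where the collectd data directory resolves to and that no tmpfs is mounted over it:

Code:
# if the data directory is a symlink (as in the earlier output),
# it should point into the persistent system dataset
readlink -f /var/db/collectd/rrd

# and there should no longer be a tmpfs line for it here
mount -v | grep -E 'collectd|rrd'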
 