
multi_report.sh version for Core and Scale 3.0

dak180

Patron
Joined
Nov 22, 2017
Messages
310
A different thing I could try: verify all the external applications are executable and, if not, toss a warning message and exit. I think FreeNAS Report may do this.
And maybe even some custom behavior depending on which command is missing.
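The idea above could be sketched in BASH roughly like this (a hypothetical sketch, not code from either script; the command list is only an example):

```shell
#!/bin/bash
# Hypothetical sketch of the "verify external applications" idea:
# warn and exit if any required command is not found on the PATH.
check_required() {
    local missing=() cmd
    for cmd in "$@"; do
        command -v "$cmd" >/dev/null 2>&1 || missing+=("$cmd")
    done
    if ((${#missing[@]})); then
        printf 'WARNING: missing required command(s): %s\n' "${missing[*]}" >&2
        return 1
    fi
}

# Example dependency list -- the real script would check its own tools here.
check_required sh sed awk || exit 1
```

Per-command custom behavior could then hang off a `case "$cmd" in ...` inside the loop instead of collecting everything into one warning.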
 
Last edited:

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,994
If anyone is using SCALE 24.x (Dragonfish, currently in Beta) and is encrypting email attachments, 7zip will not automatically install as it previously did, at least in the Beta.1 version. I have an updated script, but it's in my Beta version right now, and 7zip will be installed ONLY IF you are encrypting. Contact me and I will forward you a copy. The next formal release will be v3.0 with a few changes; hopefully things will be easier to set up and the data reported will be more accurate.

Encrypting your email attachments is not really required since the TrueNAS password file is encrypted by TrueNAS. But I know some people prefer to encrypt everything so 7zip is required for my version (I like it better).
 

chuck32

Guru
Joined
Jan 14, 2023
Messages
623
What a great tool! Thank you!

I prefer withholding execute permissions on my datasets in order to mitigate any possible damage caused by a compromised system that has access to the NAS.
The new User Guide now only has instructions for placing the script in the /root directory. Now that was an easy change. Thanks for the perspective.
I have a question regarding the documentation and the above information.

You mean placing the script in / is the better way to go?

In the pdf linked in the latest release the way of creating a separate dataset is still described and I have not seen the part about placing it in the root dir.
Initial Setup: The basic setup for Multi-Report is to install the script into a Dataset within your pool, and preferably a dataset that has an accessible share such as SMB. This will make everything easier to manipulate in the future.

Don't get me wrong, I do not want to nitpick but the guide says:
NOTE: V2.2 or V2.3 steps only apply to the respective installation you are performing.
I assume the V2.3 steps also apply to V2.3 and above, maybe this can be clarified? I renamed the tool to multi_report.sh (V2.5) and it works, so I assume this is the correct setup.
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,994
You mean placing the script in / is the better way to go?
I have been told by several people that placing the script in /root is the better/easier way to go. I personally have never done that (except to test and verify using /root does work), I prefer using my own dataset (/mnt/pool/scripts). The issue is permissions, TrueNAS is not the easiest for non-IT personnel to configure to work as they desire.

In the pdf linked in the latest release the way of creating a separate dataset is still described and I have not seen the part about placing it in the root dir.
True, I have not released the next version yet. Hopefully in the next couple of months, but I've been saying that since November. Learning Python is slow going for me, and I'm working on an easier configuration program that will head off a lot of questions and really help someone use Multi-Report.

I assume the V2.3 steps also apply to V2.3 and above, maybe this can be clarified? I renamed the tool to multi_report.sh (V2.5) and it works, so I assume this is the correct setup.
Yes, the version you are running now will recognize and use whatever name you give it. In V2.3 that was not the case; the script name was hard-coded. All that will be gone: the new User Guide will only cover version 3.0, not earlier versions.

Don't get me wrong, I do not want to nitpick but the guide says:
No problem at all. I do not mind getting feedback, provided it isn't one person just trying to piss me off. That is not you, I've seen you as an active forum member and helping people out. I like feedback, constructive is always best.

Hope I answered all your questions.
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
I have been told by several people that placing the script in /root is the better/easier way to go.
There are pros and cons to everything, of course. A drawback to putting it in /root is that it lives on the boot device, and thus it will be lost if the boot pool fails. And in some circumstances, it might even be lost during an upgrade (I've had that happen a couple of times with things in /root, but not consistently). But, yes, if you put it somewhere else, you'll need to set permissions correctly, and that seems to be a trouble spot for many users.
 

Davvo

MVP
Joined
Jul 12, 2022
Messages
3,222
Tell me if I'm wrong: placing it in /mnt/pool/scripts requires you to give execute permissions to that dataset, which means a potential attack vector, while placing it in /root does not create a vulnerability in the system.
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
Tell me if I'm wrong
I'm afraid you are. With Unix permissions, the "execute" permission on a directory/dataset simply allows the user/group/other to change to that directory. So, yes, that permission would be required, but it doesn't address the execution of code contained in that dataset.
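A quick terminal illustration of the distinction (paths are throwaway placeholders; run it as a regular user, since root bypasses ordinary permission checks):

```shell
#!/bin/sh
# The execute bit on a DIRECTORY gates traversal ("cd"), while the execute
# bit on a FILE gates running it. Note: root bypasses ordinary permission
# checks, so try this as a regular user.
demo=/tmp/permdemo.$$
mkdir -p "$demo"
printf '#!/bin/sh\necho hello\n' > "$demo/hello.sh"
chmod 755 "$demo/hello.sh"     # file: executable
chmod 644 "$demo"              # directory: traverse (execute) bit removed
( cd "$demo" ) 2>/dev/null || echo "cd blocked: directory lacks execute bit"
chmod 755 "$demo"              # restore traversal
"$demo/hello.sh"               # runs: the file's execute bit is what matters
rm -rf "$demo"
```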
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,994
Tell me if I'm wrong: placing it in /mnt/pool/scripts requires you to give execute permissions to that dataset, which means a potential attack vector, while placing it in /root does not create a vulnerability in the system.
Here is my scripts directory:
Code:
drwxrwxrwx 66 root root 195 Feb 11 15:50 scripts
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,994
There are pros and cons to everything, of course. A drawback to putting it in /root is that it lives on the boot device, and thus it will be lost if the boot pool fails. And in some circumstances, it might even be lost during an upgrade (I've had that happen a couple of times with things in /root, but not consistently). But, yes, if you put it somewhere else, you'll need to set permissions correctly, and that seems to be a trouble spot for many users.
Thanks for that enlightening piece of information. I never thought about upgrades with respect to the /root directory. And of course I use my own dataset for the script, for all scripts actually. Time to rethink what the manual will say.
 

Arwen

MVP
Joined
May 17, 2014
Messages
3,611
I don't know how all ZFS installations work, but you CAN enable/disable various attributes of a ZFS Dataset:

exec
atime
devices
setuid
readonly

If the execute permissions on the ZFS DATASET are turned off, then it does not matter whether the script has execute permissions. This is useful for pure data storage, along with disabling devices and SUID.

In regards to the execute permission on a Unix DIRECTORY, that is simply for "change directory". Without execute permission on a Unix directory, a SHELL cannot "cd" into that directory. There are details related to the READ permission on a directory, but I don't remember all of them.
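For reference, those dataset properties are set with `zfs set` (a hedged example that needs a live pool; `tank/data` is a placeholder dataset name):

```shell
zfs set exec=off tank/data      # files inside cannot be executed, regardless of mode bits
zfs set setuid=off tank/data    # SUID/SGID bits are ignored
zfs set devices=off tank/data   # device nodes cannot be opened
zfs set atime=off tank/data     # skip access-time updates (performance, not security)
zfs set readonly=on tank/data   # no writes at all
zfs get exec,setuid,devices,atime,readonly tank/data
```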
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
It's worth noting that the upgrade process does bring over /root on each upgrade, so it would be odd if that stopped happening without some kind of warning from iX.

I know this because I had a large file accidentally in /root for some years and was wondering why my upgrades were taking ages on that system every time... finally figured it out and removed the file... and all the bloated boot environments.

Even if it does get nailed by an upgrade, you always have the previous boot environment(s) to get it back from.
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
It's worth noting that the upgrade process does bring over /root on each upgrade
I've had a couple of times where it didn't--but mostly it has. I've never bothered to dig into what happened, as I don't generally keep much in /root.
 

chuck32

Guru
Joined
Jan 14, 2023
Messages
623
True, I have not released the next version yet.
I assumed this part from December meant it was already in the published guide.
The new User Guide now only has instructions for placing the script in the /root directory. Now that was an easy change. Thanks for the perspective.

Hope I answered all your questions.
Yes, thank you very much. It was relatively straightforward that the V2.3 instructions applied here, but taking it too literally one could argue that the version-specific instructions apply exclusively to that version, hence I was missing instructions for anything other than V2.2 or V2.3. But you cleared that up!

That is not you, I've seen you as an active forum member and helping people out.
Thanks! I received so much help getting started with TrueNAS that I try to give back what little knowledge I have.

Regarding the other replies, really interesting discussion especially the part about
With Unix permissions, the "execute" permission on a directory/dataset simply allows the user/group/other to change to that directory.
I also realized you all meant /root and not /

This script is really great; without it I would've missed my degraded boot pool, and it also means you always have a recent config backup at hand.
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,994
Thanks for the compliments and understanding.
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
By and large, this script is working great on two systems, one with CORE, the other with SCALE. Except...

The CORE system has a two-disk mirrored pool, and the capacity reporting isn't quite right:
[attached screenshot: 1710150050369.png]

9.17 TiB used out of 14.43 would be 64% used, not 42%, and that's what the system shows on its dashboard:
[attached screenshot: 1710150127572.png]

zpool list and zfs list do seem to produce some inconsistent output:
Code:
root@cbnas[~]# zpool list
NAME        SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
boot-pool  29.5G  3.80G  25.7G        -         -     7%    12%  1.00x    ONLINE  -
tank       14.5T  6.11T  8.44T        -         -    30%    42%  1.00x    ONLINE  /mnt
root@cbnas[~]# zfs list tank
NAME   USED  AVAIL     REFER  MOUNTPOINT
tank  9.16T  5.26T     1.26G  /mnt/tank
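The arithmetic behind both figures can be reproduced directly, assuming the standard definitions: `zpool list` computes CAP as ALLOC/SIZE, while the dashboard (and `zfs list`) works from USED/(USED+AVAIL). Plugging in the numbers above:

```shell
# Both percentages from the posted output, computed the way each tool does it:
awk 'BEGIN {
    printf "zpool view (ALLOC/SIZE):      %.0f%%\n", 100 * 6.11 / 14.5
    printf "zfs view (USED/(USED+AVAIL)): %.0f%%\n", 100 * 9.16 / (9.16 + 5.26)
}'
# -> zpool view (ALLOC/SIZE):      42%
# -> zfs view (USED/(USED+AVAIL)): 64%
```

So the 42% vs 64% gap comes down to `zpool` seeing 6.11T allocated while `zfs` sees 9.16T used, not to the denominators.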
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
zpool list and zfs list do seem to produce some inconsistent output:
This would be normal with RAIDZ, since the zpool application is unaware of the space implications of RAIDZ (or DRAID, I guess).

With mirrors, it's definitely weird. I just checked a few machines running mirrors and they all show the same value with zpool and zfs, which matches my understanding of how this should work.
 

Davvo

MVP
Joined
Jul 12, 2022
Messages
3,222
I'm getting inconsistencies as well with mirrors, but my % is being correctly reported by the script; iirc there is an option in the script to use either zfs or zpool for these values.
Code:
root@truenas[~]# zpool list alpha
NAME    SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
alpha  5.44T  3.79T  1.64T        -         -     3%    69%  1.00x    ONLINE  /mnt
root@truenas[~]# zfs list alpha
NAME    USED  AVAIL     REFER  MOUNTPOINT
alpha  3.79T  1.52T      104K  /mnt/alpha
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,994
First of all, thank you for bringing this to our attention. You should know that I'm all about accuracy.

The CORE system has a two-disk mirrored pool, and the capacity reporting isn't quite right:
Is it reporting correctly for SCALE? I assume you have a similar situation there as well.

Additionally is there some function/formula that could be applied here to generate the correct value, preferably for all cases?

AND of course I just found an error in the script. By default I use "zfs" to provide the data; you can change that to "zpool" if you like, which "might" fix this discrepancy. This is where I blundered when writing the config file: I left off part of the variable name. I have just fixed it. Wow, that error has been there for a long time.

In the multi_report_config.txt file, scroll down to about line 62 "General Settings" and you will likely see Pool_Capacity="", but that is incorrect; please change it to Pool_Capacity_Type="zfs" and that will fix that minor issue temporarily. Even with this error the script defaults to "zfs". You can change it to "zpool" to see if that helps, but I think I still get capacity from zpool regardless; I need to dive into the code.

For the specific issue with 'tank', where TrueNAS is using the 'zfs list tank' values, it is a simple division formula: (9.16/(9.16+5.26)) = 0.6352...
Is it really that simple for ZFS? Should I use simple division when Pool_Capacity_Type="zpool", for example? That would be an easy change. The problem: if you have multiple pools of varying types, this change affects all of them.

For argument's sake, let's say it is that simple: how would I determine when this formula should be used and when to use the zpool data instead?
The script currently uses the zfs values for pool capacity (USED+AVAIL)=TOTAL.

I am very open to suggestions, as this is one of those doomed if you do, doomed if you don't situations. It would be nice to have some consistency, and short of that, something that defines when to use either version. I could use 'zpool status' to list which pools have mirrors and use the simple formula there, but what if it were a 3-way mirror? I'm not afraid to make the script more complex, what is a few hundred more lines, but I need to know whether it can be solved in a BASH script, and even whether it is worth it. If I could just grab that data from TrueNAS, it would be so much easier.
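A minimal BASH sketch of what the "simple division" approach might look like, assuming the exact byte counts come from `zfs list -Hp` ("tank" is a placeholder pool name; the function itself is pure integer arithmetic, not code from the script):

```shell
#!/bin/bash
# Pool capacity the way the TrueNAS dashboard shows it:
# USED / (USED + AVAIL), in integer math.
pool_capacity_pct() {
    local used=$1 avail=$2
    echo $(( (used * 100) / (used + avail) ))
}

# On a live system ("tank" is a placeholder pool name):
#   read -r used avail < <(zfs list -Hp -o used,avail tank)
#   pool_capacity_pct "$used" "$avail"

# With danb35's figures scaled to whole numbers (9.16T used, 5.26T avail):
pool_capacity_pct 916 526    # prints 63 (integer math truncates 63.5)
```

The `-Hp` flags give tab-separated, exact-byte output, which sidesteps parsing the human-readable "9.16T"-style units.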

I will wait for a response before making any changes. But I did fix the Pool_Capacity_Type="zfs" issue. If someone would like the latest Beta, PM me. I need to run a few more tests on my second SCALE server just to verify the output looks good.

Here is the temporary changelog (I make notes to myself in the change log until it is no longer a Beta):
### Changelog:
# V2.5.8 Beta (11 March 2024)

# VERIFY ALL EXECUTABLES (fdisk, smartctl, etc) ACTUALLY CAN EXECUTE.
# CHANGE ALL TRUE AND FALSE TO ENABLE/DISABLE?
# CHANGE ALL CONFIG QUESTIONS to 'Y'es or 'N'o or Enter to accept the current value.
# ADD ZPOOL DUMP DATA then UPDATE SIMULATIONS

# CHANGE SO ID 200 IS NOT READ IF A SSD/NVME. DOES THIS IMPACT OTHER VALUES?

# SAS/SCSI POWER ON HOURS CAN BE HAD BY RUNNING A SMART SHORT TEST AND THEN CHECK THE HOURS. --- WORKING ON IT

# - Corrected Pool_Capacity_Type variable missing in config file.
# - Updated to list Drive Idents for NVMe in the Text section.
# - Updated 7zip only being installed if email is encrypted (See line 5 of this script).
# - Updated script for SCALE Dragonfish for installing 7zip if required.
# - Enhanced SCSI/SAS drive recognition and Power_On_Hours collection.
# - Changed Non-Recognized drive power_on_hours from Warning to Caution.
# - Added custom wear level alarm value AFTER 'n' 'r' 'd' and 'i' for Ignore. This makes wearLevel="", non-exist.
# - Added Wait for SMART Short Self-test to complete before completing the report, and added delay value.
# - Added SMART Self-test Failure Recognition for NVMe.
# - Added Email Report ONLY on Alert (any Error Message).
# - Updated NVMe routines to ignore real data gathering while in test mode.
# - Removed un-needed variables (IncluedSSD and IncludeNVM).
# - Fixed Zpool Reporting of Resilvering xx days from not reporting correctly in SCALE.
# - Updated to send attachments when Email_On_Alarm_Only="true" and Email_On_Alarm_Only_And_Attachments="true".

# V2.5.3.1 Beta (31 December 2023)
#
# - Fixed checking NVMe drives for if they support Self-tests.
#
# Look into NVMe Min/Max Temps more using nvme or nvmecontrol, Some report Temp 1 (Max) and Temp 2 (Min) but not all. Smartctl should report this as well.
#
# - Added NVME Short and Long Self-test for smartctl 7.3 and below. Monday through Saturday a Short Test, Sunday a Long Test. You can disable either or both options.
# -- Once TrueNAS can run NVMe SMART Tests this option will go away.
#
# - Updated CORE ability to capture NVMe Last Test Age.
#
# - Adjusted script for multiple LBA reporting on Yucun SSDs.
# - Updated script to work in a directory with a 'space character' in the path. NOT TESTED ON SCALE YET.
 

neofusion

Contributor
Joined
Apr 2, 2022
Messages
159
I'm running the 2.5.3.1 Beta and after updating SCALE from 22.12.4.2 to 23.10.2 multi_report.sh appears to have stopped running daily self tests on NVMe drives. I noticed this because "Last Test Age" kept ticking up until multi_report started showing a warning about it.

My multi_report_config.txt used to be set to the default NVM_Smartmontools_74_Override="disable" and I tried changing that to NVM_Smartmontools_74_Override="enable", but it made no difference.

If I run a self-test manually and then follow up with a run of multi_report.sh, it detects the self-test result and resets the "Last Test Age" parameter as expected.

I am running the default smartmontools included in SCALE 23.10.2: smartmontools release 7.4, dated 2023-08-01 at 10:59:45 UTC.

Am I missing something obvious? If so, please point me in the right direction.
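For anyone hitting the same thing, the manual workaround mentioned above looks roughly like this (hedged example; it needs real hardware, and /dev/nvme0 is a placeholder, so adjust the device path to your system):

```shell
smartctl -t short /dev/nvme0     # start a short NVMe self-test
# ...wait a couple of minutes for the short test to finish...
smartctl -l selftest /dev/nvme0  # confirm the result landed in the self-test log
smartctl -a /dev/nvme0           # full output, including self-test status
```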
 
Last edited:

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
Is it reporting correctly for SCALE? Assuming you have a similar situation there as well.
It is reporting correctly for SCALE, both in my mirrored pools and in my 4 x 6-disk RAIDZ2 pool.
 