
multi_report.sh version for Core and Scale 3.0

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,994
joeschmuck updated multi_report.sh version for Core and Scale with a new update entry:

Update - Now includes a little more NVMe support.

The multi_report_config.txt file will be updated and is backwards compatible.

# V2.5 (25 November 2023)
# - Added Custom Drive option to use 'Normalized' Wear Level.
# - Added customization for Normal, Warning, and Critical Subject Lines.
# - Added quick fix for odd reporting LITEON SSDs.
# - Added nvme power level reporting.
# - Added setting nvme lowest power level option.
# - Updated to use smartmontools 7.4 or greater.
# - Updated to use 'nvme' command in absence of...


I don't think I broke anything but if I did, please contact me and describe the issue.

Enjoy!
 

awasb

Patron
Joined
Jan 11, 2021
Messages
415
Works fine! Thanks, Joe!
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,994
Thank you. Hopefully you are able to take advantage of the new NVMe reporting. It will do a little more once smartmontools v7.4 is included in TrueNAS, or if you install it manually; a manual install will not survive an upgrade, of course. I keep a boot image of the original install, and then I upgraded smartmontools so I could add and test the future features. That should reduce how often an update is required.
 

bermau

Dabbler
Joined
Jul 4, 2017
Messages
28
Hi everybody,
I updated the script to v2.5 and now the report does not contain the information for my 3 SSD disks (it is OK for the other disks and pools).

My system is TrueNAS CORE 13.0-U6, with two pools (one with 4 SATA disks and the other with two SSDs) and a boot-pool with one SSD (7 disks total).

When I run the script, I receive:

Multi-Report v2.5 dtd:2023-11-26 (TrueNAS Core 13.0-U6)
Checking for Updates
Current Version 2.5 -- GitHub Version 2.5
No Update Required
./multi_report.sh: line 3533: 32948486144
32948486144
32948486144
32948486144
32948486144
32948486144
32948486144
32948486144: syntax error in expression (error token is "32948486144
32948486144
32948486144
32948486144
32948486144
32948486144
32948486144")

Line 3533 of the script is:
if [[ "$1" == "NVM" ]] && [[ $tdw != "" ]]; then tdwbytes=$((( $tdw * 512000 ))); else tdwbytes=$((( $ssdblocksize * ssdtdw ))); fi

Can anyone tell me how to solve it?
Thank you so much.

MB
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,994
I updated the script to v.2.5 and now the report does not contain the information for my 3 SSD disks (ok for the other disks and pools).
Sorry to hear that. You said you upgraded; do you recall what version you were running, and was it reporting correctly?

The correct way to solve this is to run the script from the command line with the -dump email switch, which will send me the SMART data for each of your drives; I can then run it through my simulator and generate the fix. I would prefer to do that, because if you have this problem, someone else is bound to as well.

The second way, a half-assed fix, is to reinstall/replace the multi_report.sh file with an earlier version and then turn off checking for updates. This is a poor way to go. If you want to go this route, I can send you a message here with the previous version as an attachment, or you can go to GitHub and download it; I do maintain a few older versions on the GitHub site.

If you use the -dump email switch, I will know your email address. If you have read this thread, you know I've said I will not share it, and you have never seen a posting saying I've violated that trust.

Once I have your data, I can investigate the issue.

What do I think the issue is? I suspect you have an NVMe drive or an SSD reporting an odd value. The problem with not having a standard is that any vendor can do what they want. Or, the last possibility: I just broke the script when I updated it to support NVMe drives. While I hope I didn't break it, there is a real chance I did, and it will affect only a very few people.
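To illustrate that failure mode, here is a hypothetical sketch (not the actual multi_report.sh code, though the variable names echo it): if a drive reports the same attribute several times, a naive capture leaves a newline-separated list in the variable, and shell arithmetic then fails exactly as in the error above. Taking only the first reported value is one defensive fix.

```shell
#!/bin/sh
# Simulate a drive that reports the same value several times.
tdw="32948486144
32948486144
32948486144"

# tdwbytes=$(( tdw * 512000 ))   # fails: "syntax error in expression",
#                                # because $tdw contains embedded newlines.

# Defensive fix: keep only the first reported value before doing math.
tdw_first=$(printf '%s\n' "$tdw" | head -n 1)
tdwbytes=$(( tdw_first * 512000 ))
echo "$tdwbytes"
```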

And thank you for bringing this to my attention. It should take me an hour to identify and fix the problem, and a few hours to run a few tests on both CORE and SCALE. I will send you the updated script to test and validate; if it works, then I can post the update.
 

bermau

Dabbler
Joined
Jul 4, 2017
Messages
28
Hi joeschmuck, thank you for the reply.
I upgraded from "Multi-Report v2.1 dtd:2023-03-29" and yes, it was reporting correctly.

I just sent you the dump.
Thank you so much.

MB
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,994
I will be sending you an updated version to test and verify it works; I will send you a detailed email. I will also test the data against Multi-Report 2.1 to see what it was reporting, and whether it even was reporting correctly. The issue is the Yucun SSD's SMART reporting: it is very non-standard. It reports Logical Sectors Written eight times, plus LBAs Written, and the values are not identical. Very problematic.

Once I've extensively tested and verified the new version doesn't break anything else, I will post it here but I'd rather not push it out to everyone unless more people are affected, and wait a month or so. It will take a few weeks to properly test it on CORE and SCALE and all the simulations.

Edit: If there are no more changes, this will be version 2.5.2. Yes, I did create 2.5.1 to enhance the report a little bit, a good request by @gravelfreeman, and he has that version and is testing it out. It did not fix a problem per se.
 

bermau

Dabbler
Joined
Jul 4, 2017
Messages
28
Hi Joe,
I replied to you by email.
The report is now OK. Sorry for the issues with the Yucun SSDs; I suspected I hadn't done the right thing. I actually think I'll replace them with Kingston.
Thank you so much for your work and your helpfulness.
MB
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,994
I actually think I'll replace them with Kingston.
I would not replace them if they are working for you. They report statistical data incorrectly, but that does not mean they will fail anytime soon. And the script is adjusted for now; but have no fear, there will be another problem from a different drive model.
 

Macaroni323

Explorer
Joined
Oct 8, 2015
Messages
60
I may have missed this in this thread. How do you update Multi-Report v2.1 to v2.5? I have 2.1 running and it seems to work fine.

Sorry, complete noob here.
 

Davvo

MVP
Joined
Jul 12, 2022
Messages
3,222
I may have missed this in this thread. How do you update Multi-Report v2.1 to v2.5? I have 2.1 running and it seems to work fine.
Update to v2.2 and follow the procedure in the manual; I remember doing it. Or just delete the old version and install the new one: a bit of a hassle to reconfigure everything, but maybe it's easier. From then on you can make it auto-update.
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,994
Version 2.5 has a lot of upgrades from v2.1, mostly to be more compatible with drives and NVMe.

The easiest thing is to grab a copy of 2.5 here and replace the old script, leaving the config file in place. The "multi_report_config.txt" file will automatically update when you run it (well, it should).
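The replace-the-script, keep-the-config flow can be sketched like this (a demo in a scratch directory; the download step is a placeholder for fetching v2.5 from the project's GitHub page):

```shell
#!/bin/sh
set -e
cd "$(mktemp -d)"                        # demo in a throwaway directory
echo 'pretend v2.1 script' > multi_report.sh
echo 'pretend config' > multi_report_config.txt

cp multi_report.sh multi_report.sh.v2.1.bak    # keep the old version, just in case
# wget -O multi_report.sh '<v2.5 raw URL on GitHub>'   # placeholder download step
chmod +x multi_report.sh

# multi_report_config.txt is deliberately left alone; the new script
# updates it automatically on its first run.
```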
 

RobPatton

Cadet
Joined
Dec 29, 2023
Messages
4
Is there any sort of best practices? I'm unable to make the script run. I assume it's some type of security thing, which is EVERY issue I've ever had with TrueNAS/FreeNAS over the years.

Multi-Report v2.5 dtd:2023-11-26 (TrueNAS Scale 23.10.1)
Installing 7-Zip...
cp: cannot create regular file '/usr/local/bin/7zzs': Permission denied
ln: failed to create symbolic link '/usr/local/bin/7z': Permission denied
7-Zip Installed
Checking for Updates
rm: remove write-protected regular file '/tmp/Multi-Report/.git/objects/pack/pack-3a391229246db94c88a8d4a125d2e426a6e3c91c.idx'?
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,994
Yup, that is a permissions issue for sure. Are you logged in as 'root'? And when you set up a cron job, the user is root as well.

There is a PDF user guide; take it for a spin and see if that fixes it. I'm actually asking for constructive feedback as well, to make the user guide better. But I hope it works fine for you.
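A quick pre-flight check along those lines (my own sketch, not part of multi_report.sh): verify the effective user before running, since the script writes to /usr/local/bin and /tmp/Multi-Report.

```shell
#!/bin/sh
# need_root takes the uid as an argument so the logic is easy to test.
need_root() {
    if [ "$1" -eq 0 ]; then
        echo "ok"
    else
        echo "run as root (e.g. sudo ./multi_report.sh)" >&2
        return 1
    fi
}

# Real invocation: warn early if not running as root.
need_root "$(id -u)" || echo "not root -- re-run as root" >&2
```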

-Joe
 

RobPatton

Cadet
Joined
Dec 29, 2023
Messages
4
In the immortal words: ACLs will be the death of me.

I managed to monkey my way into making the script run and got an email.

It gives me a LOT of data. Like, a lot. Love that I can get green blocks to tell me everything is chill. I would love to just get a simple "Free space X (Z%)". Is it configurable to do that?

Honestly, I'd love to also include a list of dataset usage reports.

dataset1 - X TB
dataset2 - X TB
and so on? Just so that if dataset 5, Frank's home folder, is now 14 TB, I can go take a look at what Frank did (for instance).
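That per-dataset listing is already close to what stock ZFS tooling gives you. A sketch (not a multi_report feature, and 'tank' is a placeholder pool name):

```shell
#!/bin/sh
# Format "name<TAB>used" pairs as "name - used" lines.
format_usage() {
    awk -F '\t' '{ printf "%s - %s\n", $1, $2 }'
}

# On a live system (placeholder pool name):
#   zfs list -H -o name,used -r tank | format_usage
# Demo with canned data in the same shape zfs list -H emits:
printf 'tank/dataset1\t2.1T\ntank/home/frank\t14.0T\n' | format_usage
```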
 

Davvo

MVP
Joined
Jul 12, 2022
Messages
3,222
My suggestion is to run it from the root folder; no need for "easy" access via a dataset with execution permissions.
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,994
In the immortal words: ACLs will be the death of me.
Believe it or not, I think I have a handle on it now. The new user guide will provide a specific example, for both CORE and SCALE, that a user would be able to use. But the root directory works too, I guess.
 

RobPatton

Cadet
Joined
Dec 29, 2023
Messages
4
Well, I got it all running, cron job and all. That said, I think I like the idea of keeping it in the root, or thereabouts, just to have one less dataset and share.
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,994
Well, I got it all running, cron job and all. That said, I think I like the idea of keeping it in the root, or thereabouts, just to have one less dataset and share.
Thanks for the feedback. I will update the User Guide to include both options.
 