I'm having a hard time with cron job paths.
I created the script at:
/root/scripts/cpu_hdd_temp.sh
and this is the path I entered in the cron job:
/root/scripts/cpu_hdd_temp.sh
The e-mail I get is:
Cron <root@freenas> PATH="/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin:/root/bin" /root/scrips/cpu_hdd_temp.sh > /dev/null
with the body:
/root/scrips/cpu_hdd_temp.sh: not found
Any ideas where I messed up? :(
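One thing that might be worth checking (just an observation from the quoted mail): the cron mail says /root/scrips/… while the script lives in /root/scripts/, so listing both paths should confirm whether the cron entry has a typo:

ls -l /root/scripts/cpu_hdd_temp.sh   # where the script was created
ls -l /root/scrips/cpu_hdd_temp.sh    # the path in the cron mail; "No such file" here points to a typo in the cron entry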
The scripts can (and do) evolve, so even if I can get one script working now, I don't know whether that will still be the case tomorrow.
#!/bin/sh
drives="disk1 disk2 disk3 disk4 disk5 disk6 disk7"

echo ""
echo "+========+============================================+======================+=======+"
echo "| Device | GPTID                                      | Serial               | active|"
echo "+========+============================================+======================+=======+"
for drive in $drives
do
  activedev=`gmultipath getactive ${drive}`
  gptid=`glabel status -s "multipath/${drive}p2" | awk '{print $1}'`
  serial=`smartctl -i /dev/${activedev} | grep "Serial number" | awk '{print $3}'`
  printf "| %-6s | %-42s | %-20s | %-5s |\n" "$drive" "$gptid" "$serial" "$activedev"
  echo "+--------+--------------------------------------------+----------------------+-------+"
done
echo ""
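Run it as root (gmultipath, glabel, and smartctl all need raw device access); saved as, say, /root/scripts/drive_map.sh (a hypothetical name), it produces output like this: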
+========+============================================+======================+=======+
| Device | GPTID                                      | Serial               | active|
+========+============================================+======================+=======+
| disk1  | gptid/c1f2fcc4-b868-11e5-8b7b-f46d0428d010 | XXX1K5xx             | da6   |
+--------+--------------------------------------------+----------------------+-------+
| disk2  | gptid/9067d034-b7fd-11e5-93c1-f46d0428d010 | XXX1WDxx00009438Kxxx | da11  |
+--------+--------------------------------------------+----------------------+-------+
| disk3  | gptid/0ecce300-2424-11e4-8980-f46d0428d010 | XXX1WDxx0000C4394xxx | da7   |
+--------+--------------------------------------------+----------------------+-------+
| disk4  | gptid/0f031df2-2424-11e4-8980-f46d0428d010 | XXX1WDxx00009437Jxxx | da9   |
+--------+--------------------------------------------+----------------------+-------+
| disk5  | gptid/0f3b1967-2424-11e4-8980-f46d0428d010 | XXX1WDxx0000C4396xxx | da3   |
+--------+--------------------------------------------+----------------------+-------+
| disk6  | gptid/0f7472dc-2424-11e4-8980-f46d0428d010 | XXX1WDxx0000C4390xxx | da8   |
+--------+--------------------------------------------+----------------------+-------+
| disk7  | gptid/e4eee36e-b824-11e5-882c-f46d0428d010 | XXX1Wxx1000094393xxx | da12  |
+--------+--------------------------------------------+----------------------+-------+
[root@freenas] /# zpool status myvol1
  pool: myvol1
 state: ONLINE
  scan: resilvered 239G in 1h8m with 0 errors on Mon Jan 11 00:41:58 2016
config:

        NAME                                            STATE     READ WRITE CKSUM
        myvol1                                          ONLINE       0     0     0
          raidz2-0                                      ONLINE       0     0     0
            gptid/e4eee36e-b824-11e5-882c-f46d0428d010  ONLINE       0     0     0
            gptid/9067d034-b7fd-11e5-93c1-f46d0428d010  ONLINE       0     0     0
            gptid/0ecce300-2424-11e4-8980-f46d0428d010  ONLINE       0     0     0
            gptid/0f031df2-2424-11e4-8980-f46d0428d010  ONLINE       0     0     0
            gptid/0f3b1967-2424-11e4-8980-f46d0428d010  ONLINE       0     0     0
            gptid/0f7472dc-2424-11e4-8980-f46d0428d010  ONLINE       0     0     0

errors: No known data errors
This really should be in its own thread. If not, it will escape the notice of most new users once enough new posts push your item off the last page of this thread.

Summary: I wrote this quickly to create a persistent record of SMART status in CSV format that can be opened by Excel (for pivot-table goodness). It currently tries every valid 'da' and 'ada' device, but it's easy to change or add more. There's a hook up top as an easy place to add an scp, email, etc. command to do something useful with the data.
Script: http://pastebin.com/reUEEsgU
Sample output: http://pastebin.com/8sdAG9Yy
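For anyone who doesn't want to click through, here is a rough sketch of the approach described above (the device globs, CSV columns, and output path are illustrative assumptions; the pastebin link has the actual script):

#!/bin/sh
# Sketch only: dump basic SMART info for all da/ada devices to CSV.
outfile=/tmp/smart_status.csv
echo "date,device,serial,temperature" > "$outfile"
for dev in /dev/da[0-9]* /dev/ada[0-9]*
do
  [ -e "$dev" ] || continue    # skip globs that matched nothing
  serial=`smartctl -i "$dev" | awk '/Serial [Nn]umber/ {print $3}'`
  temp=`smartctl -A "$dev" | awk '$2 == "Temperature_Celsius" {print $10}'`
  echo "`date +%Y-%m-%d`,${dev},${serial},${temp}" >> "$outfile"
done
# Hook: add an scp or mail command here to do something useful with $outfile.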
Ok, no problem ;)
It won't be overwritten by updates (from the GUI), but it will be by upgrades (a re-install of the system). I recommend putting them on a dedicated dataset on the safest pool you have (e.g. if you have a mirror and a RAID-Z2, put it on the RAID-Z2); you'll be able to access them under /mnt/your_pool/the_dataset/ ;)
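For example (the pool and dataset names below are placeholders, adjust to your setup):

zfs create tank/scripts                              # dedicated dataset on the safest pool
cp /root/scripts/cpu_hdd_temp.sh /mnt/tank/scripts/  # keep a copy that survives re-installs
chmod +x /mnt/tank/scripts/cpu_hdd_temp.sh
# then point the cron job at /mnt/tank/scripts/cpu_hdd_temp.sh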