Unsure of SATA drive spindown

Status
Not open for further replies.

domax

Cadet
Joined
Dec 25, 2012
Messages
6
hmm, could have seen that myself ;o)
well, the modified query doesn't give any results either.

btw, you hit the nail on the head... I disabled my plugins jail, and now write activity is 0. Now I have to find out which plugin causes this, but you got me a step further!
 

Milhouse

Guru
Joined
Jun 1, 2011
Messages
564
well, the modified query doesn't give any results either.

That's not good (though probably to be expected) - what do you get with the following command:

Code:
/usr/local/bin/sqlite3 /data/freenas-v1.db "SELECT * FROM storage_disk ORDER BY disk_name ASC"
 

domax

Cadet
Joined
Dec 25, 2012
Messages
6
That's not good (though probably to be expected) - what do you get with the following command:

Code:
/usr/local/bin/sqlite3 /data/freenas-v1.db "SELECT * FROM storage_disk ORDER BY disk_name ASC"


That shows more info:
[root@freenas] ~# /usr/local/bin/sqlite3 /data/freenas-v1.db "SELECT * FROM storage_disk ORDER BY disk_name ASC"
0|Disabled|Always On|QM00001||{serial}QM00001|1|Disabled|Auto||||1|ada0
1|Disabled|10|120947400194||{serial}120947400194|1|1|Auto||||2|da0
1|Disabled|10|120947401210||{serial}120947401210|1|1|Auto||||3|da1
1|Disabled|10|W1F0JP2C||{serial}W1F0JP2C|1|1|Auto||||4|da2
1|Disabled|10|W1F0LDKT||{serial}W1F0LDKT|1|1|Auto||||5|da3
1|Disabled|10|W1F0LDLH||{serial}W1F0LDLH|1|1|Auto||||6|da4
1|Disabled|10|W1F0JNSX||{serial}W1F0JNSX|1|1|Auto||||7|da5
1|Disabled|10|W1F0M1BK||{serial}W1F0M1BK|1|1|Auto||||8|da6
1|Disabled|10|W1F0JNWR||{serial}W1F0JNWR|1|1|Auto||||9|da7
 

Milhouse

Guru
Joined
Jun 1, 2011
Messages
564
That shows more info:
[root@freenas] ~# /usr/local/bin/sqlite3 /data/freenas-v1.db "SELECT * FROM storage_disk ORDER BY disk_name ASC"
0|Disabled|Always On|QM00001||{serial}QM00001|1|Disabled|Auto||||1|ada0
1|Disabled|10|120947400194||{serial}120947400194|1|1|Auto||||2|da0
1|Disabled|10|120947401210||{serial}120947401210|1|1|Auto||||3|da1
1|Disabled|10|W1F0JP2C||{serial}W1F0JP2C|1|1|Auto||||4|da2
1|Disabled|10|W1F0LDKT||{serial}W1F0LDKT|1|1|Auto||||5|da3
1|Disabled|10|W1F0LDLH||{serial}W1F0LDLH|1|1|Auto||||6|da4
1|Disabled|10|W1F0JNSX||{serial}W1F0JNSX|1|1|Auto||||7|da5
1|Disabled|10|W1F0M1BK||{serial}W1F0M1BK|1|1|Auto||||8|da6
1|Disabled|10|W1F0JNWR||{serial}W1F0JNWR|1|1|Auto||||9|da7

Ok, there's the problem - your disk_description is blank (that's the field to the right of "Auto|", which should contain a description along the lines of "Member of <poolname> raidz"). If you look in the FreeNAS GUI, on the View Disks tab, your "Description" fields will also appear blank. Since this description is set when volumes are created, I'm guessing you've either imported an existing volume created on another system, or the disk description has somehow been cleared.

Also, "disk_group_id" is not set (the field to the right of disk_description), which should indicate disk groupings (i.e. vdev group). How did you create this volume?

I suppose you could update the description using SQL, which is probably your only option and should get the script working:

Code:
/usr/local/bin/sqlite3 /data/freenas-v1.db "UPDATE storage_disk SET disk_description='Member of zvol0 raidz' WHERE disk_name='daY'"


Change daY for each disk you want to modify (or remove the WHERE clause entirely if you're happy to update all disks).

If you don't want to modify your database, just pass the devices as an argument to the script.
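For a pool with several members, the per-disk UPDATE above can be wrapped in a small loop. This is only a sketch: the pool name "zvol0" and the da0..da7 device list are taken from the listing earlier in the thread, and the SQLITE variable defaults to an echo so you can dry-run it before pointing it at the real binary and database.

```shell
#!/bin/sh
# Sketch: apply the disk_description UPDATE to every pool member.
# SQLITE defaults to echoing the command (dry run); on a real FreeNAS box,
# run with SQLITE=/usr/local/bin/sqlite3 to actually modify the database.
SQLITE="${SQLITE:-echo sqlite3}"
DB="${DB:-/data/freenas-v1.db}"

for d in da0 da1 da2 da3 da4 da5 da6 da7; do
    $SQLITE "$DB" \
        "UPDATE storage_disk SET disk_description='Member of zvol0 raidz' WHERE disk_name='$d'"
done
```

Needless to say, back up /data/freenas-v1.db before running anything against it for real.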
 

OldAbe

Cadet
Joined
Jan 7, 2013
Messages
2
I've got 14 disks in 2 zpools. The disks in my first zpool on the motherboard spin down as they should. My other zpool on the LSI SAS3081E-R does not spin down. I'm running FreeNAS 0.7.2 (8191); is there any chance that Milhouse's fix can work on my system as well?
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526

Milhouse

Guru
Joined
Jun 1, 2011
Messages
564
I've got 14 disks in 2 zpools. The disks in my first zpool on the motherboard spin down as they should. My other zpool on the LSI SAS3081E-R does not spin down. I'm running FreeNAS 0.7.2 (8191); is there any chance that Milhouse's fix can work on my system as well?

Yes, wrong forum - but assuming you knew that, then yes, it might work. Suck it and see, although since this is the FreeNAS 8 forum I can't really offer support.

camcontrol is available in FreeNAS 0.7.2, so stopping your disks should be possible. Obviously you'll need to manually specify your disks as command line arguments as you can't rely on the script automatically detecting devices since there will be no FreeNAS 8 settings database to query. Not sure how you're going to start sasidled on FreeNAS 7, but there's bound to be a way.
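As a sketch of what that manual approach boils down to, the loop below spins down an explicit list of devices with camcontrol. The device names here are placeholders, and DRYRUN=1 (the default) only prints the commands so you can sanity-check the list before running it for real on a FreeBSD system.

```shell
#!/bin/sh
# Sketch: stop an explicit list of disks with camcontrol(8).
# DISKS is a placeholder list; set DRYRUN=0 on a real FreeBSD/FreeNAS system.
DISKS="${DISKS:-da0 da1 da2}"
DRYRUN="${DRYRUN:-1}"

stop_disks() {
    for d in $DISKS; do
        if [ "$DRYRUN" = "1" ]; then
            echo "camcontrol stop $d"   # dry run: show what would be executed
        else
            camcontrol stop "$d"        # actually spin the drive down
        fi
    done
}

stop_disks
```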
 

OldAbe

Cadet
Joined
Jan 7, 2013
Messages
2
Yes, wrong forum - but assuming you knew that, then yes, it might work. Suck it and see, although since this is the FreeNAS 8 forum I can't really offer support.

camcontrol is available in FreeNAS 0.7.2, so stopping your disks should be possible. Obviously you'll need to manually specify your disks as command line arguments as you can't rely on the script automatically detecting devices since there will be no FreeNAS 8 settings database to query. Not sure how you're going to start sasidled on FreeNAS 7, but there's bound to be a way.

Thanks for the fast feedback. I'll give it a try. I'll make a post in the 7.2 forum as well.
 

purduephotog

Explorer
Joined
Jan 14, 2013
Messages
73
Are these issues still present in the current release? I ask because I've got the same issues and it appears to behave the same.
 

glich

Dabbler
Joined
Jun 16, 2011
Messages
20
Hello,
thanks for the really good script.

I have seen one issue:
- If the HDDs are stopped and the machine is shutting down, the mps0 driver reports a few errors because the HDDs are not ready.

I have been searching the internet for a way to have the HDDs running before shutdown, but have not yet found one.

I have thought of these options:
1- catch the shutdown signal in the script and start the HDDs if they are stopped
2- at the FreeBSD shutdown stage, another script that checks the HDD state and spins them up if they are stopped
3- others

I think option 1 will be the best.
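A rough sketch of option 1, purely as an illustration: a POSIX shell daemon can trap the TERM signal it receives at shutdown and spin the disks back up before exiting. The device list is a placeholder, and the camcontrol invocation is only echoed here so the sketch can be dry-run.

```shell
#!/bin/sh
# Sketch of option 1: trap the shutdown signal and restart the disks.
# DISKS is a placeholder; replace the echo with a real `camcontrol start`.
DISKS="da0 da1 da2"

spin_up() {
    for d in $DISKS; do
        echo "camcontrol start $d"   # stand-in for the real spin-up command
    done
    exit 0
}

trap spin_up TERM INT   # run spin_up when shutdown (or Ctrl-C) arrives

# ... the main idle-monitoring loop of the daemon would follow here ...
```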

I am not a FreeBSD expert, so I will appreciate any ideas or help :)

BR
glich
 

Neme

Dabbler
Joined
Feb 23, 2013
Messages
14
Anyone managed to get this working on FreeNAS-9.1.1-RELEASE-x64 (a752d35)??

Had it working flawlessly on FreeNAS-8.3.1-RELEASE-p2-x64, but on 9.1.1, whatever I do, I keep getting the following error:

Code:
Starting sasidled.
eval: /mnt/ZFS1/sasidle: not found
/etc/local/rc.d/sasidled: WARNING: failed to start sasidled


Same error whether I run "sasidled start" manually or at boot through /conf/base/etc/rc.conf.

I have tried placing sasidle in various places (/bin/, /usr/bin/, /mnt/ZFS1/) along with various permissions (755, 777, 555). At this stage I'm not sure why I can't get it to work. Thanks in advance for any ideas, and let me know if I can provide any further useful information.

Some things I guess people might ask for to try and fault find:

Code:
[John@nebula] /# ls -l /conf/base/etc/local/rc.d/sasidled
-r-xr-xr-x  1 root  wheel  624 Oct 22 10:20 /conf/base/etc/local/rc.d/sasidled*


Code:
[John@nebula] /# ls -l /mnt/ZFS1/sasidle
-rwxr-xr-x  1 root  wheel  9678 Oct 22 10:31 /mnt/ZFS1/sasidle*


Code:
[John@nebula] /# tail /conf/base/etc/rc.conf
early_kld_list="geom_stripe geom_raid3 geom_raid5 geom_gate geom_multipath"
 
# A set of kernel modules that can be loaded after mounting local filesystems.
kld_list="dtraceall if_cxgbe"
 
nginx_enable="YES"
 
sasidled_enable=YES
sasidled_cmdpath=/mnt/ZFS1
sasidled_args="-v"
 

Milhouse

Guru
Joined
Jun 1, 2011
Messages
564
I haven't bothered migrating to FreeNAS 9.x yet, but just slapped it on a test N36L to investigate this.

There is a typo (unterminated string) in the sasidle script (line #164), which wasn't a problem in FreeNAS 8.x but is a problem for FreeNAS 9.x. I've updated the sasidle script in post #79 to fix this, so try updating and see if that helps.

I upgraded a FreeNAS 8.3.x system to FreeNAS 9.1.1 to test this, and noted the SQL database had lost the disk descriptions which are part of the SQL query the script uses to find the pool disk members. So if sasidle doesn't detect any devices ("Monitored Devices:" will be blank in /var/log/messages), set the following description on each disk in your pool: "Member of share raidz1", where "share" is your pool name and "raidz1" is your redundancy (or "raidz2" etc.). Just go into "View Disks" in the GUI, edit each disk and paste in the same description for each member of the pool.

Other than that, it all works for me.

If you continue to have a problem, post up your sasidled as that may be the problem - for instance, check it hasn't been saved in Windows format (the same goes for sasidle).
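One quick way to check for that (a sketch using a throwaway demo file in place of your actual sasidle/sasidled scripts): grep for a carriage return and strip it with tr if found. A script with CRLF line endings can produce exactly the kind of "not found" error shown above, because the shell sees an interpreter path ending in \r.

```shell
#!/bin/sh
# Sketch: detect and strip Windows (CRLF) line endings from a shell script.
# The mktemp demo file stands in for the real sasidle/sasidled files.
f=$(mktemp)
printf '#!/bin/sh\r\necho hello\r\n' > "$f"   # demo: a script saved on Windows

CR=$(printf '\r')
if grep -q "$CR" "$f"; then
    echo "CRLF detected, converting"
    tr -d '\r' < "$f" > "$f.unix" && mv "$f.unix" "$f"
fi
grep -q "$CR" "$f" && echo "still CRLF" || echo "clean"
rm -f "$f"
```

Stripped carriage returns would also account for part of the size difference noted below.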

These are my file sizes
Code:
[root@freenas3] /mnt/share/bin# ls -la /etc/local/rc.d/sasidled /mnt/share/bin/sasidle.orig /mnt/share/bin/sasidle
-rwxr-xr-x  1 root  wheel  613 Oct 22 12:54 /etc/local/rc.d/sasidled*
-rwxr-xr-x  1 root  wheel  8825 Oct 22 13:38 /mnt/share/bin/sasidle.orig*  # v0.3.2
-rwxr-xr-x  1 root  wheel  9002 Oct 22 13:39 /mnt/share/bin/sasidle*       # v0.3.3


while yours are noticeably larger, suggesting they've been modified or perhaps include Windows-style line termination:

Code:
[John@nebula] /# ls -l /conf/base/etc/local/rc.d/sasidled
-r-xr-xr-x  1 root  wheel  624 Oct 22 10:20 /conf/base/etc/local/rc.d/sasidled*
 
[John@nebula] /# ls -l /mnt/ZFS1/sasidle
-rwxr-xr-x  1 root  wheel  9678 Oct 22 10:31 /mnt/ZFS1/sasidle*


And to prove it works... :)

Code:
[root@freenas3] /mnt/share/bin# uname -a
FreeBSD freenas3.local 9.1-STABLE FreeBSD 9.1-STABLE #0 r+16f6355: Tue Aug 27 00:38:40 PDT 2013    root@build.ixsystems.com:/tank/home/jkh/src/freenas/os-base/amd64/tank/home/jkh/src/freenas/FreeBSD/src/sys/FREENAS.amd64  amd64
 
[root@freenas3] /mnt/share/bin# tail -3 /etc/rc.conf
sasidled_enable=YES
sasidled_cmdpath=/mnt/share/bin
sasidled_args="-v"
 
[root@freenas3] /mnt/share/bin# service sasidled start
Starting sasidled.
 
[root@freenas3] /mnt/share/bin# service sasidled status
sasidled is running as pid 4866.
 
[root@freenas3] /mnt/share/bin# tail -20 /var/log/messages
Oct 22 12:48:33 freenas3 ntpd[1868]: time reset +0.201201 s
Oct 22 12:48:47 freenas3 avahi-daemon[2324]: WARNING: No NSS support for mDNS detected, consider installing nss-mdns!
Oct 22 12:51:00 freenas3 manage.py: [common.pipesubr:57] Popen()ing: /usr/local/bin/warden list  -v
Oct 22 12:51:06 freenas3 last message repeated 2 times
Oct 22 13:18:27 freenas3 sasidle[4867]: /mnt/share/bin/sasidle starting
Oct 22 13:18:27 freenas3 sasidle[4867]: Identifying devices for pool "share"...
Oct 22 13:18:27 freenas3 sasidle[4867]: ---------------------------------------------------------
Oct 22 13:18:27 freenas3 sasidle[4867]: Monitored Pool:    share
Oct 22 13:18:27 freenas3 sasidle[4867]: Monitored Devices: ada0 ada1 ada2 ada3
Oct 22 13:18:27 freenas3 sasidle[4867]: Polling Interval:  60 seconds
Oct 22 13:18:27 freenas3 sasidle[4867]: Idle Timeout:      30 * 60 seconds
Oct 22 13:18:27 freenas3 sasidle[4867]: ASync Enabled:    Yes
Oct 22 13:18:27 freenas3 sasidle[4867]: Simulated Stop:    No
Oct 22 13:18:27 freenas3 sasidle[4867]: Log Disk Start:    Yes
Oct 22 13:18:27 freenas3 sasidle[4867]: ---------------------------------------------------------
Oct 22 13:18:27 freenas3 sasidle[4867]:
Oct 22 13:18:27 freenas3 sasidle[4867]: 030: capacity    operations    bandwidth
Oct 22 13:18:27 freenas3 sasidle[4867]: 030: pool        alloc  free  read  write  read  write
Oct 22 13:18:27 freenas3 sasidle[4867]: 030: ----------  -----  -----  -----  -----  -----  -----
Oct 22 13:18:27 freenas3 sasidle[4867]: 030: share      2.40T  1.23T      0      2    420  10.6K
 
[root@freenas3] /mnt/share/bin# service sasidled stop
Stopping sasidled.
Waiting for PIDS: 4866.
 

Neme

Dabbler
Joined
Feb 23, 2013
Messages
14
Firstly many thanks for the assistance Milhouse, it's very much appreciated.

I had already spotted and fixed the drive description fields as I had previously encountered this issue under 8.3.1 and found your solution earlier in the thread :D.

With the above and your code fix it is all looking good so far :) (log below). I'll pop back later and hopefully confirm all is working well. As for the file sizes...

Looks like I had a little line-wrap action going on; I must have pasted the code into a non-full-screen console at some point. I deleted and re-created these files a few times, so I'm sure it was right at some point during my investigations :confused:

Code:
Oct 22 14:51:20 nebula sasidle[14219]: /bin/sasidle starting
Oct 22 14:51:20 nebula sasidle[14219]: Identifying devices for pool "INT1"...
Oct 22 14:51:20 nebula sasidle[14227]: /bin/sasidle starting
Oct 22 14:51:20 nebula sasidle[14227]: Identifying devices for pool "ZFS1"...
Oct 22 14:51:20 nebula sasidle[14219]: ---------------------------------------------------------
Oct 22 14:51:20 nebula sasidle[14219]: Monitored Pool:    INT1
Oct 22 14:51:20 nebula sasidle[14219]: Monitored Devices: da1
Oct 22 14:51:20 nebula sasidle[14219]: Polling Interval:  60 seconds
Oct 22 14:51:20 nebula sasidle[14219]: Idle Timeout:      60 * 60 seconds
Oct 22 14:51:20 nebula sasidle[14219]: ASync Enabled:    Yes
Oct 22 14:51:20 nebula sasidle[14219]: Simulated Stop:    No
Oct 22 14:51:20 nebula sasidle[14219]: Log Disk Start:    Yes
Oct 22 14:51:20 nebula sasidle[14219]: ---------------------------------------------------------
Oct 22 14:51:20 nebula sasidle[14219]:
Oct 22 14:51:20 nebula sasidle[14219]: 060: capacity    operations    bandwidth
Oct 22 14:51:20 nebula sasidle[14219]: 060: pool        alloc  free  read  write  read  write
Oct 22 14:51:20 nebula sasidle[14219]: 060: ----------  -----  -----  -----  -----  -----  -----
Oct 22 14:51:20 nebula sasidle[14219]: 060: INT1        1.84G  2.72T      4      4  78.3K  46.9K
Oct 22 14:51:20 nebula sasidle[14227]: ---------------------------------------------------------
Oct 22 14:51:20 nebula sasidle[14227]: Monitored Pool:    ZFS1
Oct 22 14:51:20 nebula sasidle[14227]: Monitored Devices: da10 da11 da12 da2 da3 da4 da5 da6 da7 da8 da9
Oct 22 14:51:20 nebula sasidle[14227]: Polling Interval:  60 seconds
Oct 22 14:51:20 nebula sasidle[14227]: Idle Timeout:      60 * 60 seconds
Oct 22 14:51:20 nebula sasidle[14227]: ASync Enabled:    Yes
Oct 22 14:51:20 nebula sasidle[14227]: Simulated Stop:    No
Oct 22 14:51:20 nebula sasidle[14227]: Log Disk Start:    Yes
Oct 22 14:51:20 nebula sasidle[14227]: ---------------------------------------------------------
Oct 22 14:51:20 nebula sasidle[14227]:
Oct 22 14:51:20 nebula sasidle[14227]: 060: capacity    operations    bandwidth
Oct 22 14:51:20 nebula sasidle[14227]: 060: pool        alloc  free  read  write  read  write
Oct 22 14:51:20 nebula sasidle[14227]: 060: ----------  -----  -----  -----  -----  -----  -----
Oct 22 14:51:20 nebula sasidle[14227]: 060: ZFS1        6.36T  23.4T      1      1  9.65K  7.92K
Oct 22 14:51:25 nebula sasidle[5965]: 054: ZFS1        6.36T  23.4T      0      0      0      0
Oct 22 14:52:20 nebula sasidle[14219]: 059: INT1        1.84G  2.72T      0      0      0      0
Oct 22 14:52:20 nebula sasidle[14227]: 059: ZFS1        6.36T  23.4T      0      0      0      0
 

Milhouse

Guru
Joined
Jun 1, 2011
Messages
564
Glad it's fixed. Regarding the details above, in line 33 it looks like you have another sasidle process running (pid 5965), which you probably want to kill.
 

Neme

Dabbler
Joined
Feb 23, 2013
Messages
14
Well spotted. There were actually more than just that one extra process; I had spawned several while fixing this issue, so I took the easy way out and bounced the server (the joys of home servers) :)

I think that's the final configuration of my 9.1.1 box for now :D
 

Neme

Dabbler
Joined
Feb 23, 2013
Messages
14
Just thought I'd give the final update: all working perfectly again. Thanks again, Milhouse:

Code:
Oct 22 21:37:01 nebula sasidle[2733]: ** Stopping devices in pool "ZFS1" **
Oct 22 21:37:01 nebula sasidle[2733]: camcontrol stop da10
Oct 22 21:37:01 nebula sasidle[2733]: camcontrol stop da11
Oct 22 21:37:01 nebula sasidle[2733]: camcontrol stop da12
Oct 22 21:37:01 nebula sasidle[2733]: camcontrol stop da2
Oct 22 21:37:01 nebula sasidle[2733]: camcontrol stop da3
Oct 22 21:37:01 nebula sasidle[2733]: camcontrol stop da4
Oct 22 21:37:01 nebula sasidle[2733]: camcontrol stop da5
Oct 22 21:37:01 nebula sasidle[2733]: camcontrol stop da6
Oct 22 21:37:01 nebula sasidle[2733]: camcontrol stop da7
Oct 22 21:37:01 nebula sasidle[2733]: camcontrol stop da8
Oct 22 21:37:01 nebula sasidle[2733]: camcontrol stop da9
Oct 22 21:37:02 nebula sasidle[2733]: Unit stopped successfully
Oct 22 21:37:02 nebula last message repeated 10 times


Checked and confirmed via smartctl that /dev/da2-12 are all showing:

Code:
Device State:                        Stand-by (1)
 

HolyK

Ninja Turtle
Moderator
Joined
May 26, 2011
Messages
654
Other than that, it all works for me.
Hi,
If I understand things correctly, the script in post #79 is the latest version, which works with FreeNAS 9.1.1. Anyway, I am a bit confused by the "edit each disk and paste in the same description for each member of the pool." ... Did I miss something important while reading the whole thread?

EDIT
Is there a way to delay the sasidled start after boot? I just realized that I have my scripts stored on an encrypted pool, so the sasidle script will not be accessible until I unlock the pool. A delay of about 15 minutes before starting the daemon would be more than enough.
/EDIT
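One possible approach, as a sketch under assumptions (the script path, retry count and interval are placeholders, and the actual service call is left commented out): a post-init task that polls until the script on the encrypted pool becomes accessible, then starts the daemon.

```shell
#!/bin/sh
# Sketch: wait for the sasidle script on the (initially locked) encrypted
# pool to appear, then start the daemon. SCRIPT/TRIES/DELAY are placeholders.
SCRIPT="${SCRIPT:-/mnt/pool/sasidle}"
TRIES="${TRIES:-30}"    # give up after this many polls
DELAY="${DELAY:-60}"    # seconds between polls

wait_and_start() {
    i=0
    while [ ! -x "$SCRIPT" ]; do
        i=$((i + 1))
        if [ "$i" -ge "$TRIES" ]; then
            echo "pool never unlocked, giving up"
            return 1
        fi
        sleep "$DELAY"
    done
    echo "starting sasidled"
    # service sasidled start   # uncomment on the real system
}
```

Point SCRIPT at the real path on your pool and call wait_and_start from a FreeNAS post-init script.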

Just for info, here is my situation:
- I've replaced my motherboard with a SuperMicro X10SL7-F, which has an on-board SAS2 controller (flashed into IT mode). I have 6x 3TB WD Green drives connected to that SAS controller, all in a RAIDZ2 pool (/dev/da[0-5]). This pool is used as archive/backup storage and is not accessed frequently, so I'd like to spin all six disks down. All disks have Standby set to 60 min, the SMART check frequency set to 120 min, and the "Standby" power mode to avoid spinning the disks up. Smartd is enabled, APM is disabled. This setup worked on the previous motherboard, where the standard onboard controller was used; now it doesn't, because of the SAS2 controller.
- I also have two 2TB WD Red drives in a mirror connected to the standard controller (/dev/ada[0-1]). This pool is used for plugins and is basically always active.

ataidle is not usable for disks connected to SAS:
Code:
[root@HolyNAS] ~# ataidle /dev/da0
Model:
Serial:
Firmware Rev:
ATA revision:          unknown/pre ATA-2
LBA 48:                no
Geometry:              0 cyls, 0 heads, 0 spt
Capacity:              0MB
SMART Supported:        no
Write Cache Supported:  no
APM Supported:          no
AAM Supported:          no


select from storage db:
Code:
sqlite> select * from storage_disk;
1|Disabled|60|||{devicename}da0|1|Disabled|Auto||bay4||1|da0
1|Disabled|60|WD-WMC1XXXXXXXX||{serial}WD-WMC1XXXXXXXX|1|Disabled|Auto||bay3||4|da1
1|Disabled|60|WD-WMC1XXXXXXXX||{serial}WD-WMC1XXXXXXXX|1|Disabled|Auto||bay2||5|da2
1|Disabled|60|WD-WMC1XXXXXXXX||{serial}WD-WMC1XXXXXXXX|1|Disabled|Auto||bay1||6|da3
1|Disabled|60|WD-WMC1XXXXXXXX||{serial}WD-WMC1XXXXXXXX|1|Disabled|Auto||bay6||7|da4
1|Disabled|60|WD-WMC1XXXXXXXX||{serial}WD-WMC1XXXXXXXX|1|Disabled|Auto||bay5||8|da5


No active writes (true for all disks; I did not paste all 6):
Code:
[root@HolyNAS] ~# iostat -d -w1 da0
            da0
  KB/t tps  MB/s
  0.00  0  0.00
  0.00  0  0.00
  0.00  0  0.00
  0.00  0  0.00
  0.00  0  0.00


camcontrol output:
Code:
[root@HolyNAS] ~# camcontrol devlist | grep EZRX
<ATA WDC WD30EZRX-00D 0A80>        at scbus0 target 0 lun 0 (da0,pass0)
<ATA WDC WD30EZRX-00D 0A80>        at scbus0 target 1 lun 0 (da1,pass1)
<ATA WDC WD30EZRX-00D 0A80>        at scbus0 target 2 lun 0 (da2,pass2)
<ATA WDC WD30EZRX-00D 0A80>        at scbus0 target 3 lun 0 (da3,pass3)
<ATA WDC WD30EZRX-00D 0A80>        at scbus0 target 4 lun 0 (da4,pass4)
<ATA WDC WD30EZRX-00D 0A80>        at scbus0 target 5 lun 0 (da5,pass5)


Stopping via camcontrol stop works:
Code:
[root@HolyNAS] ~# camcontrol stop /dev/da0
Unit stopped successfully
[root@HolyNAS] ~# camcontrol stop /dev/da1
Unit stopped successfully
[root@HolyNAS] ~# camcontrol stop /dev/da2
Unit stopped successfully
[root@HolyNAS] ~# camcontrol stop /dev/da3
Unit stopped successfully
[root@HolyNAS] ~# camcontrol stop /dev/da4
Unit stopped successfully
[root@HolyNAS] ~# camcontrol stop /dev/da5
 
[root@HolyNAS] ~# smartctl -a -n standby /dev/da0
Device is in STANDBY mode, exit(2)
[root@HolyNAS] ~# smartctl -a -n standby /dev/da1
Device is in STANDBY mode, exit(2)
[root@HolyNAS] ~# smartctl -a -n standby /dev/da2
Device is in STANDBY mode, exit(2)
[root@HolyNAS] ~# smartctl -a -n standby /dev/da3
Device is in STANDBY mode, exit(2)
[root@HolyNAS] ~# smartctl -a -n standby /dev/da4
Device is in STANDBY mode, exit(2)
[root@HolyNAS] ~# smartctl -a -n standby /dev/da5
Device is in STANDBY mode, exit(2)


And power drain dropped by 20W ^^
 

paleoN

Wizard
Joined
Apr 22, 2012
Messages
1,403
Anyway i am a bit confused with the "edit each disk and paste in the same description for each member of the pool." ... Did i missed something important while reading the whole thread?
Seems straightforward enough.
I upgraded a FreeNAS 8.3.x system to FreeNAS 9.1.1 to test this, and noted the SQL database had lost the disk descriptions which are part of the SQL query the script uses to find the pool disk members,

Lines 185-191 are where it looks at the descriptions.

You could try the -d option (example on line 61) instead and see if that works for you.
 

anickname

Cadet
Joined
Dec 18, 2013
Messages
1
First of all, thank you very much for your script; it worked very well.

My situation is a little complicated because the pool is encrypted. So in order to have the drives stopped, I need to do the following:
  • start the computer in the morning
  • log into the FreeNAS interface
  • unlock the pool
  • run ssh and connect to FreeNAS
  • run the sasidle script manually (if run from init it won't work, because the pool is encrypted)
Until I do all these steps, the drives are all running at full speed.
To simplify the procedure, I've created a small script which uses gstat rather than zpool iostat like your script.
I hope you don't mind if I post it in your thread.

I am running the script as postinit script.

Because the indentation is broken, here is the link to the script:
http://pastebin.com/pP0QqZqb
 