Unsure of SATA drive spindown


Milhouse

Guru
Joined
Jun 1, 2011
Messages
564
For what it's worth, I would avoid setting APM in addition to HDD Standby - as you say, it's one or the other, and in my experience setting HDD Standby and forgetting about APM is the sanest approach. As you have determined, some APM settings are waaay too aggressive and unpredictable (particularly on "Green" drives), so best just leave it off and let HDD Standby do its spin-down thing according to your own schedule.

It is a bit annoying that ATAidle spins down the disks when setting the timeout values, but I think that's the price to be paid since the idle timer has to be set on the drive and the only way to do it seems to result in an immediate spin-down. However, with a regular SATA controller I haven't had any problems with ATAidle - I'm not sure why your disks would be spinning back up again if you have APM disabled and HDD Standby enabled.

For what it's worth, I created the iostat script because the FreeNAS built-in spin-down doesn't work at all with some (all?) add-in HBA controllers, and in my opinion that's down to a problem with the HBA drivers (not supporting SATA spin-down) rather than any deficiency of FreeNAS, and as such I don't see this problem ever being resolved to anyone's satisfaction. Hence this rather hackish script... if the built-in ATAidle isn't working then give the script a go as a (hopefully) more predictable alternative! :)
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,994
It is a bit annoying that ATAidle spins down the disks when setting the timeout values, but I think that's the price to be paid since the idle timer has to be set on the drive and the only way to do it seems to result in an immediate spin-down. However, with a regular SATA controller I haven't had any problems with ATAidle - I'm not sure why your disks would be spinning back up again if you have APM disabled and HDD Standby enabled.

Just one note: ataidle will spin down the drive immediately or not, depending on whether you use the -S or -I parameter - that makes the difference. But I agree with Milhouse that if you have APM disabled and use only the HDD Standby setting, it should be working for you, unless you have a hardware incompatibility issue, which I doubt since you are seeing the drives spin down. I recommend an HDD Standby value that keeps your drive from spinning up more than 3 to 4 times in a 24 hour period. It's only my opinion that you would be causing more wear on the drive if it spins up more often than that in 24 hours - again, my opinion, not based on anything factual. I set mine to 120 minutes and I schedule backups around two specific times a day. My drives spin approx. 6 hours a day on average. That is how I use it.
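If it helps, the difference as I understand it looks like this (device name is just an example):
Code:
# -S sets the standby timer but, as a side effect, puts the drive into standby immediately
ataidle -S 30 /dev/ada0
# -I sets the timer via the ATA idle command, so the drive keeps spinning until the timeout expires
ataidle -I 30 /dev/ada0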

But trust me, there was a lot more frustration in the beginning when we had to troubleshoot the ataidle problems and get in touch with the program originator Bruce. He made some changes which helped.
 

TimeBandit

Cadet
Joined
Jun 7, 2012
Messages
9
Milhouse - we're definitely on the same page. The point I'm making to the FreeNAS user community (and developers) is that ataidle, camcontrol, atacontrol (pick your poison), etc. are simply commands that toggle/access settings within the drives/controllers. They run, set something, then exit. We are then left to assume that a tiny firmware process in the drive will monitor its internal timer and activity and then do what it is supposed to, when it is supposed to. This means we (including the FreeNAS OS) are at the mercy of crossing our fingers and praying the drives obey and do what is expected. That is obviously not the situation. Therefore, I argue that we say the hell with relying on the internal mechanisms of these various drives, and advocate that the FreeNAS team approach this from another direction. Your script, as ad hoc as it may seem, is really the way to go - I'm thinking it's a good start. It wouldn't take much to enhance it to actually query FreeNAS's config (SQLite tables) to discern the pools and drive groups. The FreeNAS developers could then replace the two form fields with a single one that the script would use as its timeout parameter. The only use left for ataidle, etc. would be to send the spin-down command itself (which we all know works fine).

joeschmuck - thanks for the tip on getting the time set without the spin-down.

As for my drives, which are SATA WD Scorpio Blacks, I've tried every combo under the sun: IDE, AHCI, etc. The non-APM method (aka ataidle's "-S") won't spin down the drives on their own after that first commanded spin-down. The ONLY way I've seen self-spindown occur is when setting APM (-P) - but as I said, it's either 8 seconds or none at all.
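For anyone wanting to reproduce the APM route, this is roughly what I've been trying (exact levels from memory; my understanding is that only values below 128 permit spin-down):
Code:
# most aggressive APM level - this is what produced the ~8 second spin-downs for me
ataidle -P 1 /dev/ada0
# least aggressive level that (as I understand it) still allows spin-down
ataidle -P 127 /dev/ada0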

I'll keep playing around, and also play with the sasidle script to see how integrated it could become, etc... If the FreeNAS team sees this, they'd be more inclined to fold it into the project and give up on trusting what these hard drives are supposed to do internally.
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,994
@ TimeBandit

Not sure what is going on in your system that you cannot get a single drive to spin down. Are you certain that there is no hard drive activity going on? If you can issue the ataidle -I command (ataidle -I 30 /dev/ada0) and the drive spins down, but then spins right back up again, then you have activity on the hard drives. Please do not take anything I say as offensive; I sometimes phrase my questions so anyone of any knowledge level can follow them, since I have no idea how knowledgeable you are.

I don't recall you stating what FreeNAS version you are running. Could you describe what features you have enabled on FreeNAS? I'd like to figure out what might be going on with your system. Do you run plugins, have some feature enabled that periodically looks at your hard drives, or have a computer on your network that periodically polls your FreeNAS system? Let's say you only have FreeNAS 8.0.4 installed, drives configured, CIFS set up, and HDD Standby set to 30 minutes. Reboot so the HDD Standby can take effect (you must reboot; changes in this area are not effective until a reboot). Once your system is running normally, unplug the LAN cable from it. Wait 30+ minutes to see if the drives have spun down. You could also just disconnect the LAN cable and issue the ataidle command from the console shell instead of all the rebooting. Maybe ataidle -S 5 /dev/ada0 and see if the drive spins down, or use -I and see how long it takes the drive to spin down.
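For example, from the console shell (assuming smartmontools is on the image; the -n standby flag should stop the check itself from waking the drive):
Code:
# set a short 5 minute standby timer on one drive for testing
ataidle -S 5 /dev/ada0
# wait 5+ minutes with the LAN cable unplugged, then check the power state
smartctl -i -n standby /dev/ada0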

Also, ataidle works very well in FreeBSD 8.x the last time I checked (about 8 months ago), it was the implementation into FreeNAS where problems arise.
 

TimeBandit

Cadet
Joined
Jun 7, 2012
Messages
9
joeschmuck - I do appreciate your willingness to help with my individual problem. I have some new test data using your suggested "-I" technique.... and as a result I'm really confused as to what my results mean. Here's what I have:

HDD Standby set for 5 min, APM not set as you suggested, running plain 8.0.4 booted from a thumb drive, a mounted UFS mirror (ada0 & ada1), and no services or shares configured - my attempt at ensuring nothing is "tapping" on the drives.

The tests were just as inconsistent, although I must admit to witnessing some automatic spin-downs - just not reliable, not consistent, and not near the set timeout (usually double it).
FWIW, iostat showed no HDD activity to explain the inconsistency, so this must be the "problems" you speak of regarding the FreeNAS implementation. I also tried setting ataidle from the console, including the -I switch. My guess is that whatever the drive uses as its internal "timer" and activity "trigger" is not robust, or is being reset by something other than legitimate disk I/O.

So, all that said, at this point I'm still leaning toward an external method, i.e. something like the sasidle script.
 

Milhouse

Guru
Joined
Jun 1, 2011
Messages
564
It wouldn't take much to enhance it to actually query FreeNAS's config (SQLite tables) to discern the pools and drive groups. The FreeNAS developers could then replace the two form fields with a single one that the script would use as its timeout parameter.

The sasidle script already reads the config tables; however, if they give incorrect information you can choose to override them with your own parameters (not ideal, but then I don't understand why the config tables would give out incorrect information). I agree a single timeout per pool would be necessary. I could get the default disk timeouts from the config tables, but since these are at the disk level I don't bother; instead it's left as a user parameter (default 30 minutes).
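For reference, invocation looks roughly like the usage examples in the script header (pool name and devices here are just placeholders):
Code:
zpool iostat tank 60 | sh sasidle --timeout 30
zpool iostat tank 60 | sh sasidle --timeout 30 --devices "/dev/da[2-3]"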

The only use for ataidle, etc... is just to send the spin-down command (which we all know works fine).

Ah, but ataidle doesn't work at all with some HBA controllers (driver issue) which is why this script exists in the first place. Using camcontrol in place of ataidle may have a higher success rate as it should work with both SATA and HBA controllers, or just use ataidle with SATA controllers (/dev/adaX?) and camcontrol with non-SATA controllers.
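In other words, something along these lines (device names are just examples):
Code:
# onboard SATA disks (adaX) respond to ataidle
ataidle -S 30 /dev/ada0
# HBA-attached disks (daX) generally need camcontrol instead, e.g. an immediate SCSI stop unit
camcontrol stop da0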

In addition, the script only works for ZFS and not at all for UFS - the current ataidle method should (in theory) work for all filesystems. Since I don't have any UFS storage I'm not sure how to monitor UFS for inactivity.
 

TimeBandit

Cadet
Joined
Jun 7, 2012
Messages
9
....
Since I don't have any UFS storage I'm not sure how to monitor UFS for inactivity.

Hmmmm... it shouldn't be that hard at all. There is a counterpart to "zpool iostat" for native file systems, called simply "iostat", and it fundamentally works the same way.

i.e.
# iostat -xzdc 999999 -w 60 ada0 ada1

This is a handy method that basically produces stats at 1-minute intervals for just /dev/ada0 & /dev/ada1 (i.e. you'll need to enumerate which raw drives belong to a UFS mirror/stripe, etc.).
The -c 999999 is just a super big number to keep it from exiting - 999999 is almost two years' worth of intervals, and realistically one won't run the NAS longer than that without a reboot - haha.
Moreover, this method suppresses per-device output until there is actual activity, so the scripting is logically easier if you just want to detect whether there was activity or not.

A simple bash script implementation might be:
Code:
#!/bin/bash

# Each non-header line iostat prints (with -z, only devices that saw activity
# are listed) starts with a device name, so treat such lines as disk activity.
/usr/sbin/iostat -xzdc 999999 -w 60 ada0 ada1 | while read LINE
do
	if [[ "${LINE}" == ada* ]]; then
		echo "activity: ${LINE}"   # e.g. reset an idle counter here
	fi
done


Yeah, I must have glossed over the part where your script queries the config database (sorry). I admit I need to take a closer look at it - thx!
 

arryo

Dabbler
Joined
May 5, 2012
Messages
42
I think that for hard drives on the onboard SATA controller, camcontrol stop will not work:

Code:
Jun 21 07:50:57 freenas sasidle[1377]: camcontrol stop ada0
Jun 21 07:50:57 freenas sasidle[1377]: camcontrol stop ada1
Jun 21 07:50:57 freenas sasidle[1377]: Error received from stop unit command
Jun 21 07:50:57 freenas sasidle[1377]: Error received from stop unit command



Use camcontrol idle with a timeout set instead:

Code:
camcontrol idle ada0 -t 900


Since the timeout is already set on the drive, you can just let it run from rc.local for each disk. I used to use that method when the spindown option in FreeNAS was not working.
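Something like this in rc.local, one line per disk (device names are examples; -t is in seconds, so 900 = 15 minutes):
Code:
# set a 15-minute idle timeout on each disk at boot
camcontrol idle ada0 -t 900
camcontrol idle ada1 -t 900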
 
Joined
Jul 5, 2012
Messages
1
Having some problem with spindown

Hi! Hoping for some help here..sorry for my bad English grammar..

I have set up a Nas for home use and everything is working fine except for the disk spindown.

It's a total of 5 sata disks.

ada0 = ST2000DL003-9VT166 CC3C
ada1 = WDC WD20EARS-00MVWB0 51.0AB51
ada2 = WDC WD20EARX-22PASB0 51.0AB51
ada3 = ST2000DL003-9VT166 CC3C
ada4 = WDC WD20EARS-00MVWB0 51.0AB51

My first problem is ada0 and ada3.

When I issue:
Code:
ataidle -S 30 /dev/ada0
ataidle -S 30 /dev/ada3 

They do spin down, but then randomly, after 5 or 25 minutes, they spin up again... (I disconnected the Ethernet cable, so there is no disk activity that I know of.)

I monitor the power consumption via an energy logger with a SD-card.

I confirmed that it is the ST disks using this command:
Code:
camcontrol cmd ada0 -a "E5 00 00 00 00 00 00 00 00 00 00 00" -r -
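(For what it's worth, my understanding of that E5 / CHECK POWER MODE command is that the returned count byte is 0xFF when the drive is active or idle and 0x00 when it is in standby, so it can be polled without waking the drive.)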

The WD disks always stay off when the ST disks spin up.

If I access my CIFS share, all the drives spin up like they should.

My second problem is that after access, the drives spin for longer than they should... all of them.
I have issued ataidle -S 30 for all the drives, but it can take several hours before they spin down again. And when they do, the WD disks spin down nicely but the ST disks often keep running until I issue a new ataidle -S.

/Andreas
 

fungus1487

Dabbler
Joined
Jan 12, 2012
Messages
42
Hi Guys, I have a query about configuring this script for both an on-board mobo sata controller and an IBM M1015 flashed with LSI firmware.

I have a pool called 'live' which contains 10 discs.

ada0, ada1, ada2, ada3 are all connected directly to the mobo
da0,da1,da2,da3,da4,da5 are connected to the M1015

I have configured the script as such...

Code:
sasidled_enable=YES
sasidled_cmdpath=/usr/share
sasidled_args="-v --devices '/dev/da0 /dev/da1 /dev/da2 /dev/da3 /dev/da4 /dev/da5 /dev/ada0 /dev/ada1 /dev/ada2 /dev/ada3'"


However the camcontrol commands do not work directly on the drives ada0, ada1, ada2, ada3 and the script does not issue an ataidle in this case.

Is this intended and should I configure the drives connected directly to the mobo through the FreeNAS power management in the GUI?

Just unsure of how to configure spindown in this environment, thanks.
 

Milhouse

Guru
Joined
Jun 1, 2011
Messages
564
However the camcontrol commands do not work directly on the drives ada0, ada1, ada2, ada3 and the script does not issue an ataidle in this case.

Is this intended and should I configure the drives connected directly to the mobo through the FreeNAS power management in the GUI?

The script was created for non-ATA disks, but there's no reason why you can't add a call to ataidle along with the call to camcontrol - one of the two should then work (or call ataidle if camcontrol returns an error).

Using both this script and the built-in ATA spindown will likely leave you with some disks in your volume spinning, and some not.
 

fungus1487

Dabbler
Joined
Jan 12, 2012
Messages
42
I made a few changes to the script for those wanting to use it with pools which contain both HBA/SAS-connected disks and disks connected directly to the mobo.

This lets me configure the spindown in one place rather than several. Thanks Milhouse for all the great work on the original script; this is just slightly adjusted for my needs. In short, it pattern-matches the disk identifier (e.g. '/dev/ada0') to decide whether the disk should be idled using camcontrol or ataidle.

Code:
#!/bin/sh
#set -xv

#VERSION:  0.3.3
#MODIFIED: 7 Sep 2012
#AUTHOR:   Milhouse
#EDIT:     Craig McNicholas
#
#DESCRIPTION: Created on FreeNAS 8.0.1-BETA4 with LSI 9211-8i HBA
#             Tested on FreeNAS FreeNAS 8.0.2-RELEASE with LSI 9211-8i HBA
#
#  This script will attempt to stop disks in a ZFS pool connected
#  by a SAS HBA controller, using the camcontrol command. In effect,
#  this is a poor man's ATAIDLE.
#
#  The script can also be used to manage pools with both SAS HBA connected
#  disks as well as disks connected directly to the motherboard. In the
#  instance a disk attached to the mobo is detected it will issue an
#  ataidle command instead of camcontrol.
#
#  The script is designed to run as a "daemon" process, running
#  constantly from bootup. The idea is that it will consume standard
#  input, which will itself be the output of "zpool iostat n" where n
#  is a time interval, usually 60 (seconds).
#
#  A count (--timeout) is specified as an argument for the script, and this
#  quantity is decremented each time there is no iostat activity, and reset
#  whenever there is iostat activity. When the count reaches zero, the disks
#  will be stopped using "camcontrol stop" (unless the --test option is
#  specified).
#
#  The amount of time elapsed before disks are stopped is calculated as
#  the product of the timeout and the value n specified for zpool iostat.
#  For example, a timeout of 15 and an interval of 60 gives 15 * 60, or
#  900 seconds (ie. 15 minutes). The default timeout is 30.
#
#  By default, disks will be stopped asynchronously (ie. simultaneously)
#  however if this is a problem, specify the --sync option to stop disks
#  in sequence, one after the other.
#
#  If a system has multiple pools, run individual commands for each pool.
#
#  If the script is unable to automatically detect disks/devices, specific
#  devices can be specified using the --devices argument.
#
#CHANGELOG:
#
#  0.1.0: 2011-08-01  - Initial version
#  0.2.0: 2011-09-05  - Tweak CAMCONTROL detection
#  0.2.1: 2011-10-08  - Rewrite device detection to read devices from SQLITE db
#  0.2.2: 2011-11-11  - Log when new disc activity detected after a stop (disable with --nologstart)
#                       Simplify logging.
#  0.2.3: 2011-11-12  - Minor fix for logging.
#  0.3.0: 2012-06-04  - Add support for multiple pools and pool specific devices
#  0.3.1: 2012-06-05  - Minor coding style corrections, remove trailing comma from default pools
#  0.3.2: 2012-06-17  - Fix regression decoding devices and expansion of args
#  0.3.3: 2012-09-07  - Added ataidle for non hba disks allowing pool spindown in dual mobo/hba setups
#
#
#USAGE:
#  Execute the script with the -h or --help arguments.
#
#EXAMPLES:
#   zpool iostat tank 60 | sh sasidle --timeout 15
#   zpool iostat tank 60 | sh sasidle --timeout 15 --devices "/dev/da[2-3]"
#   zpool iostat tank 60 | sh sasidle --timeout 15 --devices "tank:/dev/da[1-2] music:/dev/da[3-4]" --pool tank,music
#   zpool iostat tank 60 | sh sasidle --timeout 15 --sync
#   zpool iostat tank 60 | sh sasidle --timeout 15 --test --verbose
#

POOL=`zpool list -H -o name | tr -s '\n' ',' | sed 's/,$//g'`
INTERVAL=60
TIMEOUT=30
DEVICES=
ASYNC=Y
LOGSTART=Y
DOSTOP=Y
DEBUG=
INFO=Y

_log() {
	[ ! -z $RC_PID ] && echo "$1" || echo "`date +'%Y/%m/%d %H:%M:%S'` $1"
}

_help() {
	echo "usage: ${0} [-p --pool <pool1,pool2>] [-i --interval #] [-t --timeout #] [-s -sync]
[-d --devices "pool1:/dev/da[0-2] pool2:/dev/da[3-6]"] [--nologstart] [-x --test]
[-q --quiet|-v --verbose] [-h --help]
-p --pool       name of pool(s) to monitor, comma separated list
-i --inteval    time interval between checks, in seconds
-t --timeout    number of intervals to elaps before stopping disks (default 30)
-s --sync       stop disks synchronously (default, async)
-d --devices    override device detection by specifying devices eg. "/dev/da[0-4]",
                or pool specific (pool1:dev1,dev2 pool2:dev3,dev4)
   --nologstart log message when disk activity detected after a prior stop
-x --test       do not stop disks (simulation)
-q --quiet      suppress all informational and debug messages
-v --verbose    output debug messages
-h --help       display this help"
}

# Total hack to avoid consuming arguments (needed for later expansion)
# getopts doesn't support long options in BSD, so this hack will have
# to do.
# In order to kick off the background task, we only need to know --pool
# and --interval so make sure they're first in any arguments, and we'll
# pass them in again to the background job where they can be consumed...
P=1
while [ $P -le $# ]; do
	[ $P -gt 9 ] && break;

	ARG1=`eval echo \$"$P"`
	let P=P+1 >/dev/null
	ARG2=`eval echo \$"$P"`
	let PN=P+1 >/dev/null

	case ${ARG1} in
		"-p" | "--pool")	POOL=${ARG2}; P=$PN;;
		"-i" | "--interval")	INTERVAL=${ARG2}; P=$PN;;
	esac
done

# Being called by rc - fork off a background task
if [ ! -z $RC_PID ]; then
	if [ -z ${PROCMAIN} ]; then
		export PROCMAIN=YES;
		for p in `echo ${POOL} | sed "s/,/ /g"`; do
			zpool list $p >/dev/null
# Pool must exist - if not, ignore it (error message will appear in logs)
			if [ $? -eq 0 ]; then
				zpool iostat ${p} ${INTERVAL} | \
					/bin/sh $0 "$@" --pool ${p} --interval ${INTERVAL} | \
					logger -i -t sasidle &
			fi
		done
		exit 0
	fi
	echo $$ >>/var/run/sasidled.pid
fi

# To have got here, we're either being run manually from the command line, or
# called from rc, so lets validate all arguments - they're safe to consume now.
while [ $# -gt 0 ]; do
	case ${1} in
		"-p" | "--pool")	shift 1; POOL=${1};;
		"-i" | "--interval")	shift 1; INTERVAL=${1};;
		"-t" | "--timeout")	shift 1; TIMEOUT=${1};;
		"-s" | "--sync")	ASYNC=;;
		"-d" | "--devices")     shift 1; DEVICES=${1};;
		       "--nologstart")  LOGSTART=;;
		"-x" | "--test")	DOSTOP=;;
		"-q" | "--quiet")	DEBUG=; INFO=;;
		"-v" | "--verbose")	DEBUG=Y; INFO=Y;;

		"-h" | "--help")	_help; exit;;
		*)			echo "Unrecognised argument: $1"; _help; exit;;
	esac
	shift 1
done

[ "${INFO}" ] && _log "$0 starting"

if [ "${DEVICES}" ]; then
	TEMP_DEV=${DEVICES}
	DEVICES=

	for x in ${TEMP_DEV}; do
		if [ `echo $x | grep "^${POOL}:"` ]; then
			[ "${INFO}" ] && _log "Parsing devices for pool \"${POOL}\"..."
			DEVICES="`echo "$x" | cut -d: -f2- | sed 's/,/ /g'`"
			break
		fi
# Use these unnamed devices if no pool is matched
		[ `echo $x | grep -v ":"` ] && DEVICES="${DEVICES} $x"
	done

# Clean up devices if necessary...
	[ "$DEVICES" ] && DEVICES=`eval echo '$DEVICES' | sed 's/,/ /g' | sed 's/"//g'`
fi

# Determine managed devices if not already known...
if [ ! "${DEVICES}" ]; then
	[ "${INFO}" ] && _log "Identifying devices for pool \"${POOL}\"..."

	OLD_RC_PID=$RC_PID
	. /etc/rc.freenas
	RC_PID=$OLD_RC_PID

	DEVICES=`${FREENAS_SQLITE_CMD} ${FREENAS_CONFIG} \
		"SELECT disk_name FROM storage_disk  \
		WHERE LOWER(disk_description) LIKE LOWER('% ${POOL} %') \
		ORDER BY disk_name ASC" | \
		while read disk_name; do
			echo $disk_name|sed "s/p.$//"
		done|tr -s "\n" " "`
fi

# Strip /dev/ prefix, eliminate any duplicates and stop disks in ascending sequence...
DEVICES=`echo ${DEVICES} | sed "s#/dev/##g" | tr -s " " "\n" | sort -u | tr -s "\n" " "`

# Show config info...
if [ "${INFO}" ]; then
	_log "---------------------------------------------------------"
	_log "Monitored Pool:    ${POOL}"
	_log "Monitored Devices: ${DEVICES}"
	_log "Polling Interval:  ${INTERVAL} seconds"
	_log "Idle Timeout:      ${TIMEOUT} * ${INTERVAL} seconds"
	_log "ASync Enabled:     $([ ${ASYNC} ] && echo "Yes" || echo "No")"
	_log "Simulated Stop:    $([ ${DOSTOP} ] && echo "No" || echo "Yes")"
	_log "Log Disk Start:    $([ ${LOGSTART} ] && echo "Yes" || echo "No")"
	_log "---------------------------------------------------------"
	_log ""
fi

# Skip 3 lines of "zpool iostat" headers...
for H in 1 2 3; do
	read HEADER
	[ "$DEBUG" ] && _log "$(printf "%.3d: %s\n" ${TIMEOUT} "${HEADER}")"
done

COUNT=${TIMEOUT}
STOPPED=

# Main infinite loop...
while [ true ]; do
	read POOL_NAME POOL_USED POOL_AVAIL POOL_OP_READ POOL_OP_WRITE POOL_BW_READ POOL_BW_WRITE

# If no activity, decrement count, else reset it
	if [ ${POOL_OP_READ-1} = 0 -a ${POOL_OP_WRITE-1} = 0 -a \
	     ${POOL_BW_READ-1} = 0 -a ${POOL_BW_WRITE-1} = 0 ]; then
		[ ! ${STOPPED} ] && let COUNT=COUNT-1 >/dev/null
	else
		if [ "${STOPPED}" -a "${LOGSTART}" ]; then
			[ "${INFO}" ] && _log "** Restarting devices in pool \"${POOL_NAME}\" due to activity **"
		fi

		COUNT=${TIMEOUT}
		STOPPED=
	fi

# Optional diagnostic output...
	[ "${DEBUG}" ] && _log "$(printf "%.3d: %-10s  %5s  %5s  %5s  %5s  %5s  %5s\n" \
		${COUNT} ${POOL_NAME} ${POOL_USED} ${POOL_AVAIL} \
		${POOL_OP_READ} ${POOL_OP_WRITE} ${POOL_BW_READ} ${POOL_BW_WRITE})"

# If count reaches zero, stop devices
	if [ ${COUNT} -le 0 -a ! "${STOPPED}" ]; then
		[ "${INFO}" ] && _log "** Stopping devices in pool \"${POOL_NAME}\" **"

		for DISK in ${DEVICES}; do
# Decide if we should use camcontrol or ataidle
			case "${DISK}" in 
				*ada*)
					if [ "${DOSTOP}" ]; then
						[ "${INFO}" ] && _log "ataidle -S 5 ${DISK}"
						[   "${ASYNC}" ] && ataidle -S 5 ${DISK} &
						[ ! "${ASYNC}" ] && ataidle -S 5 ${DISK}
					else
						[ "${INFO}" ] && _log "#ataidle -S 5 ${DISK}"
					fi
				;;
				*)
					if [ "${DOSTOP}" ]; then
						[ "${INFO}" ] && _log "camcontrol stop ${DISK}"
						[   "${ASYNC}" ] && camcontrol stop ${DISK} &
						[ ! "${ASYNC}" ] && camcontrol stop ${DISK}
					else
						[ "${INFO}" ] && _log "#camcontrol stop ${DISK}"
					fi
				;;
			esac
		done

		STOPPED=Y
	fi
done

exit 0
 

Milhouse

Guru
Joined
Jun 1, 2011
Messages
564
Alternatively, you could replace the CASE statement with an if-test on the result of "camcontrol inquiry ${DISK} 1>/dev/null 2>/dev/null" - if the result in $? is 0 (i.e. success) then the disk is under the control of camcontrol; any non-zero result means the disk is most likely under the control of ataidle.

Code:
		for DISK in ${DEVICES}; do
# Decide if we should use camcontrol or ataidle
			camcontrol inquiry ${DISK} 1>/dev/null 2>/dev/null
			if [ $? -ne 0 ]; then
				if [ "${DOSTOP}" ]; then
					[ "${INFO}" ] && _log "ataidle -S 5 ${DISK}"
					[   "${ASYNC}" ] && ataidle -S 5 ${DISK} &
					[ ! "${ASYNC}" ] && ataidle -S 5 ${DISK}
				else
					[ "${INFO}" ] && _log "#ataidle -S 5 ${DISK}"
				fi
			else
				if [ "${DOSTOP}" ]; then
					[ "${INFO}" ] && _log "camcontrol stop ${DISK}"
					[   "${ASYNC}" ] && camcontrol stop ${DISK} &
					[ ! "${ASYNC}" ] && camcontrol stop ${DISK}
				else
					[ "${INFO}" ] && _log "#camcontrol stop ${DISK}"
				fi
			fi
		done


This way you should be able to accurately determine which subsystem actually controls the disk - either camcontrol or ataidle, without making assumptions based on disk identifier.
 

domax

Cadet
Joined
Dec 25, 2012
Messages
6
I'm running v0.3.2 of the script by Milhouse.
It looks like write operations never get below about 20 for me?!?! How do I find out what is causing this? My disks never go into standby because of it. I'm running FreeNAS 8.3 with a RAIDZ2 ZFS pool and 2 SSD cache devices.
Any ideas what could be causing this?

The output of the script is as follows:
Dec 31 14:55:13 freenas sasidle[2380]: /mnt/zvol0/FreeNAS/scripts/sasidle starting
Dec 31 14:55:13 freenas sasidle[2380]: Identifying devices for pool "zvol0"...
Dec 31 14:55:14 freenas sasidle[2380]: ---------------------------------------------------------
Dec 31 14:55:14 freenas sasidle[2380]: Monitored Pool: zvol0
Dec 31 14:55:14 freenas sasidle[2380]: Monitored Devices:
Dec 31 14:55:14 freenas sasidle[2380]: Polling Interval: 60 seconds
Dec 31 14:55:14 freenas sasidle[2380]: Idle Timeout: 30 * 60 seconds
Dec 31 14:55:14 freenas sasidle[2380]: ASync Enabled: Yes
Dec 31 14:55:14 freenas sasidle[2380]: Simulated Stop: No
Dec 31 14:55:14 freenas sasidle[2380]: Log Disk Start: Yes
Dec 31 14:55:14 freenas sasidle[2380]: ---------------------------------------------------------
Dec 31 14:55:14 freenas sasidle[2380]:
Dec 31 14:55:14 freenas sasidle[2380]: 030: capacity operations bandwidth
Dec 31 14:55:14 freenas sasidle[2380]: 030: pool alloc free read write read write
Dec 31 14:55:14 freenas sasidle[2380]: 030: ---------- ----- ----- ----- ----- ----- -----
Dec 31 14:55:14 freenas sasidle[2380]: 030: zvol0 4.12T 12.1T 54 28 272K 176K
Dec 31 14:56:15 freenas sasidle[2380]: 030: zvol0 4.12T 12.1T 201 74 2.05M 484K
Dec 31 14:57:15 freenas sasidle[2380]: 030: zvol0 4.12T 12.1T 0 29 2.80K 168K
Dec 31 14:58:15 freenas sasidle[2380]: 030: zvol0 4.12T 12.1T 0 24 0 118K
Dec 31 14:59:15 freenas sasidle[2380]: 030: zvol0 4.12T 12.1T 0 24 0 124K
Dec 31 15:00:15 freenas sasidle[2380]: 030: zvol0 4.12T 12.1T 0 30 1.47K 177K
Dec 31 15:01:15 freenas sasidle[2380]: 030: zvol0 4.12T 12.1T 0 31 0 185K
Dec 31 15:02:15 freenas sasidle[2380]: 030: zvol0 4.12T 12.1T 0 23 0 117K
Dec 31 15:03:15 freenas sasidle[2380]: 030: zvol0 4.12T 12.1T 0 24 0 124K
Dec 31 15:04:15 freenas sasidle[2380]: 030: zvol0 4.12T 12.1T 0 22 0 101K
Dec 31 15:05:15 freenas sasidle[2380]: 030: zvol0 4.12T 12.1T 0 38 0 258K
Dec 31 15:06:15 freenas sasidle[2380]: 030: zvol0 4.12T 12.1T 0 24 0 127K
Dec 31 15:07:15 freenas sasidle[2380]: 030: zvol0 4.12T 12.1T 0 31 0 194K
Dec 31 15:08:15 freenas sasidle[2380]: 030: zvol0 4.12T 12.1T 2 35 15.7K 193K
Dec 31 15:09:15 freenas sasidle[2380]: 030: zvol0 4.12T 12.1T 0 40 0 249K
Dec 31 15:10:15 freenas sasidle[2380]: 030: zvol0 4.12T 12.1T 0 32 0 166K
Dec 31 15:10:50 freenas ntpd[1828]: kernel time sync status change 2001
Dec 31 15:11:15 freenas sasidle[2380]: 030: zvol0 4.12T 12.1T 0 32 0 174K
Dec 31 15:12:15 freenas sasidle[2380]: 030: zvol0 4.12T 12.1T 0 31 0 164K
Dec 31 15:13:16 freenas sasidle[2380]: 030: zvol0 4.12T 12.1T 0 32 4.20K 245K
Dec 31 15:14:16 freenas sasidle[2380]: 030: zvol0 4.12T 12.1T 0 27 0 129K
Dec 31 15:15:16 freenas sasidle[2380]: 030: zvol0 4.12T 12.1T 0 32 0 166K
Dec 31 15:16:16 freenas sasidle[2380]: 030: zvol0 4.12T 12.1T 0 29 0 142K
Dec 31 15:17:16 freenas sasidle[2380]: 030: zvol0 4.12T 12.1T 0 36 0 216K
Dec 31 15:18:16 freenas sasidle[2380]: 030: zvol0 4.12T 12.1T 0 39 0 236K
Dec 31 15:19:16 freenas sasidle[2380]: 030: zvol0 4.12T 12.1T 0 58 272 483K
Dec 31 15:20:16 freenas sasidle[2380]: 030: zvol0 4.12T 12.1T 0 39 0 243K
Dec 31 15:21:16 freenas sasidle[2380]: 030: zvol0 4.12T 12.1T 0 52 0 368K
Dec 31 15:22:16 freenas sasidle[2380]: 030: zvol0 4.12T 12.1T 0 32 0 167K
Dec 31 15:23:16 freenas sasidle[2380]: 030: zvol0 4.12T 12.1T 0 47 0 313K
Dec 31 15:24:16 freenas sasidle[2380]: 030: zvol0 4.12T 12.1T 0 29 0 146K
Dec 31 15:25:16 freenas sasidle[2380]: 030: zvol0 4.12T 12.1T 0 34 409 263K
Dec 31 15:26:16 freenas sasidle[2380]: 030: zvol0 4.12T 12.1T 0 35 0 200K
Dec 31 15:27:16 freenas sasidle[2380]: 030: zvol0 4.12T 12.1T 0 31 0 148K
Dec 31 15:28:16 freenas sasidle[2380]: 030: zvol0 4.12T 12.1T 0 34 0 182K
Dec 31 15:29:16 freenas sasidle[2380]: 030: zvol0 4.12T 12.1T 0 38 0 230K
Dec 31 15:30:16 freenas sasidle[2380]: 030: zvol0 4.12T 12.1T 0 36 0 207K
Dec 31 15:31:16 freenas sasidle[2380]: 030: zvol0 4.12T 12.1T 0 51 2.93K 416K
Dec 31 15:32:16 freenas sasidle[2380]: 030: zvol0 4.12T 12.1T 0 29 0 150K
Dec 31 15:33:16 freenas sasidle[2380]: 030: zvol0 4.12T 12.1T 0 34 0 186K
Dec 31 15:34:16 freenas sasidle[2380]: 030: zvol0 4.12T 12.1T 0 33 0 186K
Dec 31 15:35:16 freenas sasidle[2380]: 030: zvol0 4.12T 12.1T 0 39 0 236K
Dec 31 15:36:16 freenas sasidle[2380]: 030: zvol0 4.12T 12.1T 0 35 0 264K
Dec 31 15:37:16 freenas sasidle[2380]: 030: zvol0 4.12T 12.1T 0 33 0 179K
Dec 31 15:38:16 freenas sasidle[2380]: 030: zvol0 4.12T 12.1T 0 32 0 169K
Dec 31 15:39:16 freenas sasidle[2380]: 030: zvol0 4.12T 12.1T 0 29 0 146K
Dec 31 15:40:16 freenas sasidle[2380]: 030: zvol0 4.12T 12.1T 5 44 24.1K 251K
Dec 31 15:41:16 freenas sasidle[2380]: 030: zvol0 4.12T 12.1T 0 32 68 172K
Dec 31 15:42:16 freenas sasidle[2380]: 030: zvol0 4.12T 12.1T 0 37 0 296K
Dec 31 15:43:16 freenas sasidle[2380]: 030: zvol0 4.12T 12.1T 0 32 0 169K
Dec 31 15:44:16 freenas sasidle[2380]: 030: zvol0 4.12T 12.1T 0 32 0 167K
Dec 31 15:45:16 freenas sasidle[2380]: 030: zvol0 4.12T 12.1T 0 31 0 158K
Dec 31 15:46:16 freenas sasidle[2380]: 030: zvol0 4.12T 12.1T 0 29 0 142K
Dec 31 15:47:16 freenas sasidle[2380]: 030: zvol0 4.12T 12.1T 0 29 0 146K
Dec 31 15:48:16 freenas sasidle[2380]: 030: zvol0 4.12T 12.1T 0 39 0 311K
Dec 31 15:49:16 freenas sasidle[2380]: 030: zvol0 4.12T 12.1T 0 36 0 209K
Dec 31 15:50:16 freenas sasidle[2380]: 030: zvol0 4.12T 12.1T 0 26 0 109K
Dec 31 15:51:16 freenas sasidle[2380]: 030: zvol0 4.12T 12.1T 0 32 0 169K
Dec 31 15:52:16 freenas sasidle[2380]: 030: zvol0 4.12T 12.1T 0 36 0 213K
Dec 31 15:53:16 freenas sasidle[2380]: 030: zvol0 4.12T 12.1T 0 31 0 156K
Dec 31 15:54:16 freenas sasidle[2380]: 030: zvol0 4.12T 12.1T 0 42 0 337K
Dec 31 15:55:16 freenas sasidle[2380]: 030: zvol0 4.12T 12.1T 0 40 68 238K
Dec 31 15:56:16 freenas sasidle[2380]: 030: zvol0 4.12T 12.1T 0 35 0 215K
Dec 31 15:57:16 freenas sasidle[2380]: 030: zvol0 4.12T 12.1T 0 30 0 152K
Dec 31 15:58:16 freenas sasidle[2380]: 030: zvol0 4.12T 12.1T 0 35 0 199K
Dec 31 15:59:16 freenas sasidle[2380]: 030: zvol0 4.12T 12.1T 0 32 0 173K
Dec 31 16:00:16 freenas sasidle[2380]: 030: zvol0 4.12T 12.1T 0 44 68 352K
Dec 31 16:01:16 freenas sasidle[2380]: 030: zvol0 4.12T 12.1T 0 28 0 129K
Dec 31 16:02:16 freenas sasidle[2380]: 030: zvol0 4.12T 12.1T 0 32 0 173K
Dec 31 16:03:16 freenas sasidle[2380]: 030: zvol0 4.12T 12.1T 0 33 0 179K
Dec 31 16:04:16 freenas sasidle[2380]: 030: zvol0 4.12T 12.1T 0 28 0 131K
Dec 31 16:05:16 freenas sasidle[2380]: 030: zvol0 4.12T 12.1T 0 33 0 183K
Dec 31 16:06:16 freenas sasidle[2380]: 030: zvol0 4.12T 12.1T 0 35 0 282K
Dec 31 16:07:16 freenas sasidle[2380]: 030: zvol0 4.12T 12.1T 0 28 0 160K
Dec 31 16:08:16 freenas sasidle[2380]: 030: zvol0 4.12T 12.1T 0 27 0 143K
Dec 31 16:09:16 freenas sasidle[2380]: 030: zvol0 4.12T 12.1T 0 26 0 135K
Dec 31 16:10:16 freenas sasidle[2380]: 030: zvol0 4.12T 12.1T 0 38 0 242K
Dec 31 16:11:16 freenas sasidle[2380]: 030: zvol0 4.12T 12.1T 14 43 163K 327K
Dec 31 16:12:16 freenas sasidle[2380]: 030: zvol0 4.12T 12.1T 0 31 0 161K
Dec 31 16:13:16 freenas sasidle[2380]: 030: zvol0 4.12T 12.1T 0 30 0 152K
Dec 31 16:14:16 freenas sasidle[2380]: 030: zvol0 4.12T 12.1T 0 29 0 146K
Dec 31 16:15:16 freenas sasidle[2380]: 030: zvol0 4.12T 12.1T 0 33 0 179K
Dec 31 16:16:16 freenas sasidle[2380]: 030: zvol0 4.12T 12.1T 0 30 0 155K
Dec 31 16:17:16 freenas sasidle[2380]: 030: zvol0 4.12T 12.1T 0 31 0 237K
Dec 31 16:18:16 freenas sasidle[2380]: 030: zvol0 4.12T 12.1T 0 30 0 152K
Dec 31 16:19:16 freenas sasidle[2380]: 030: zvol0 4.12T 12.1T 0 39 0 239K
Dec 31 16:20:16 freenas sasidle[2380]: 030: zvol0 4.12T 12.1T 0 36 0 206K
Dec 31 16:21:16 freenas sasidle[2380]: 030: zvol0 4.12T 12.1T 0 32 0 176K
Dec 31 16:22:16 freenas sasidle[2380]: 030: zvol0 4.12T 12.1T 0 30 0 151K
 

Milhouse

Guru
Joined
Jun 1, 2011
Messages
564
I've no idea what's keeping your disks busy (though if you're running a jail with services, e.g. mysql, that would be a good candidate), but I would point out that the script is not correctly identifying the disks in your volume, so even if there were a sufficient period of inactivity it's unlikely the script would spin down any of your disks.

Where you have:
Code:
Dec 31 14:55:14 freenas sasidle[2380]: Monitored Devices:

it should list your disks, eg.
Code:
Dec 23 17:57:31 freenas sasidle[3060]: Monitored Devices: da0 da1 da2 da3 da4 da5 da6 da7


What arguments are you passing to the script (in sasidled_args), and can you paste the result of the following command:
Code:
/usr/local/bin/sqlite3 /data/freenas-v1.db "SELECT disk_name FROM storage_disk WHERE LOWER(disk_description) LIKE LOWER('%zvol0%') ORDER BY disk_name ASC"


I suppose the script should abort if it cannot identify any disks...
 

domax

Cadet
Joined
Dec 25, 2012
Messages
6
I use: sasidled_args="-v"

The SQL query doesn't give any output, so that's probably why the script shows no devices. What could be causing this?
 

Milhouse

Guru
Joined
Jun 1, 2011
Messages
564
Ugh, my mistake - I tested it with my share but forgot to change it to suit your setup. I've modified my post; can you try running it again...
 