SOLVED How to configure TrueNAS Scale as home NAS with HDD spin down

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
Since Scale is Debian based it might be as simple as using an existing solution designed to work on Debian Linux. Here's one for example that can not only spin down the drives but also control fans based on drive temperatures: https://github.com/desbma/hddfancontrol


sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
But I "found" an alternative if someone is looking for a solution.
It uses a language that may not be installed on the host, but it does use a completely different method, so it might be worth trying.
 

HarryMuscle

Contributor
Joined
Nov 15, 2021
Messages
161
Seems that script uses this to spin down drives... I think we already worked out that doesn't help us. (we need to use hdparm -Y to have the desired effect)
Code:
    def spinDown(self) -> None:
        """Spin down a drive, effectively setting it to DriveState.STANDBY state."""
        self.logger.info(f"Spinning down drive {self}")
        cmd = ("hdparm", "-y", self.device_filepath)
According to the hdparm man page, using the -Y parameter requires a soft reset before the drive can be accessed again. It would be strange to have to use that instead of -y, which simply causes the drive to spin down.

Thanks,
Harry
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
According to the hdparm man page, using the -Y parameter requires a soft reset before the drive can be accessed again. It would be strange to have to use that instead of -y, which simply causes the drive to spin down.
Putting that together with the other post, which seems to suggest that the parsing of the iostat output is failing to identify idle disks, maybe you're right: the script should be changed back to the -y option, with some attention paid to the idle-identification code.

Maybe I'll feel like doing that tomorrow.

EDIT: OK, I already found it... FreeBSD uses the "extended device statistics" header to separate the full list from the idle ones in the iostat output; Debian uses two blank lines.

I'll see if I can get something to work with that.
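A hedged sketch of what that Debian-side parsing could look like: treat a device as idle when its tps column in `iostat -d` output is 0. The sample output is inlined so the snippet runs anywhere (on a live box you would pipe from `iostat -d`, and column layout can vary by sysstat version):

```shell
# Flag disks whose tps column is 0 as idle. The sample mimics the
# per-device section of Debian's `iostat -d` output.
sample='Device             tps    kB_read/s    kB_wrtn/s    kB_read    kB_wrtn
sda               0.00         0.00         0.00          0          0
sdb               3.50        12.00        40.00        120        400'

echo "$sample" | awk 'NR > 1 && $2 == 0 {print $1}'    # prints: sda
```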
 
Last edited:

justjasch

Dabbler
Joined
May 8, 2022
Messages
20
[screenshot attachment]


With direct naming it's not reading iostat correctly.

With /dev/sdx it does, so spindown would work, but the doubled /dev/ prefix makes it error out.

wbr Alex
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
With direct naming it's not reading iostat correctly.

With /dev/sdx it does, so spindown would work, but the doubled /dev/ prefix makes it error out.
OK, so it would seem changing the lines for hdparm to just give the name will be enough... I made an edit to the script again.
 

justjasch

Dabbler
Joined
May 8, 2022
Messages
20
[screenshot attachments]


Working.

A nice addition would be to read out the current state before issuing the command, and only issue it if the drive is active/idle.


Regards, Alex
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
A nice addition would be to read out the current state before issuing the command, and only issue it if the drive is active/idle.
I thought that was already in the logic of the script... I'll check it.

And it's right there above the hdparm lines:
Code:
function spindown_drive() {
    if [[ $(drive_is_spinning $1) -eq 1 ]]; then


and "drive_is_spinning" is using:
Code:
        if [[ -z $(hdparm -C /dev/$1 | grep 'standby') ]]; then echo 1; else echo 0; fi


Which is where the problem is... I'll correct that to not prepend the /dev/
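For illustration, a sketch of what the corrected check could look like, assuming `$1` already arrives as a full device path (so nothing prepends another /dev/). The `hdparm` function below is a stub standing in for the real tool so the snippet runs anywhere; the real `hdparm -C` reports either `active/idle` or `standby`:

```shell
# Stub standing in for the real hdparm; `hdparm -C /dev/sdX` prints
# "drive state is:  standby" (or "active/idle") on a real system.
hdparm() { echo "$2: drive state is:  standby"; }

drive_is_spinning() {
    # An empty grep result means "standby" was not reported, i.e. spinning.
    if [[ -z $(hdparm -C "$1" | grep 'standby') ]]; then echo 1; else echo 0; fi
}

drive_is_spinning /dev/sda    # prints 0: the stub reports standby
```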
 

NugentS

MVP
Joined
Apr 16, 2020
Messages
2,947
If it really is days, then it's probably fine. Note the "probably": there's no hard evidence, mostly opinion. Stopping and starting 29 times a day, I think most people would say no to. But days? That's a different ball game.
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,994
Even if my drives are unused for days at times?
Look at the SMART data for the drives and examine ID 4, Start_Stop_Count. This count indicates how many times the drive(s) spin up over a given period. If you believe your drives only spin up once a week, for example, then check this value once a week. If it increments by more than 1, they're spinning up more than you expect.

So if you check the ID 4 raw value and it's "30", and one week later it's "60", your drive spun down and up 30 times over that week. Is that acceptable to you? Only you can say. But if you expect the drives to be down most of the time, this is not indicative of that.

My point: Collect some data to prove to yourself that the script is causing no harm to your drives.
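A hedged sketch of pulling that raw value out: on a live system you would read `smartctl -A /dev/sdX` (smartmontools), but a sample attribute line is inlined here so the snippet runs anywhere. Attribute column layout can vary by drive and firmware:

```shell
# SMART ID 4 (Start_Stop_Count): the raw value is the last column of the
# attribute line. The sample mimics one line of `smartctl -A` output.
line='  4 Start_Stop_Count        0x0012   100   100   020    Old_age   Always       -       30'

echo "$line" | awk '$1 == 4 {print $NF}'    # prints the raw count: 30
```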
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
I think there's woefully insufficient data-driven study of this in the industry, considering how long drives have been around and how many are manufactured.

This post does get to some of it (the "best" available large study seems to be the one Google did), which indicates that in drives older than 3 years, higher start/stop counts contributed to at least 2% more failures.


Sometimes 2% makes a difference, sometimes not.

If you care about your data 100% of the time, pay attention to the additional 2% risk.
 

psarrism

Cadet
Joined
Sep 6, 2023
Messages
2
I have 2 x 8GB RAM. I removed one of them, just to test the power consumption difference. Then I realized that the hard disks could not spin down anymore, because now there was something writing and reading on them every few minutes. I put back the second ram and it stopped, the hard disks can spin down again.
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,994
I have 2 x 8GB RAM. I removed one of them, just to test the power consumption difference. Then I realized that the hard disks could not spin down anymore, because now there was something writing and reading on them every few minutes. I put back the second ram and it stopped, the hard disks can spin down again.
Odds are that's because you ran out of RAM and the system was using swap: it frees up RAM by moving data to the hard drive's swap partition, and when that data is needed again, it moves something else out of RAM so it can copy back what it removed before. It's a bad place to be.
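One quick way to confirm that theory is to watch swap usage (`free -h` or `swapon --show` on a live box). As a self-contained sketch, this parses a sample /proc/meminfo rather than the real file:

```shell
# Swap in use = SwapTotal - SwapFree (values in kB, as in /proc/meminfo).
# Sample data is inlined so the snippet runs anywhere.
meminfo='SwapTotal:       2097148 kB
SwapFree:        1048576 kB'

echo "$meminfo" | awk '/SwapTotal/ {t=$2} /SwapFree/ {f=$2} END {print t - f, "kB of swap in use"}'
```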
 