How to tell if drives are actually going to sleep?

Status
Not open for further replies.

el-John-o

Dabbler
Joined
Jan 26, 2013
Messages
15
Hey all.

My FreeNAS array is a home setup that's not used by more than two people at any time. So to save power/heat I have the hard disks set to sleep after 20 minutes in the WebGUI. Is there any way to determine that they are, indeed, going to sleep?

The reason I ask, is when I open up the NAS, say in the finder on my MacBook Pro, it just pops right up with no delay, I can instantly access my files. Not that I'm complaining, but I would expect a second or two delay before I can see my files as the drives spin up. It behaves as if the drives ARE spun up and running all the time.
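(For anyone else wondering: one way to check on FreeBSD, which FreeNAS is built on, is to issue the ATA CHECK POWER MODE command with camcontrol and read back the drive's reported state. This is only a sketch; the device name ada0 is an assumption, and you should confirm the register output format of `camcontrol cmd -r` on your own system.)

```shell
#!/bin/sh
# Sketch: interpret the sector-count byte returned by the ATA
# CHECK POWER MODE command (0xE5). Per the ATA spec, 0xff means the
# drive is active/idle (spinning) and 0x00 means standby (spun down).
interpret_power_mode() {
    case "$1" in
        ff|FF) echo "active/idle (spinning)" ;;
        00)    echo "standby (spun down)" ;;
        *)     echo "unknown/vendor-specific state: $1" ;;
    esac
}

# On the NAS itself (as root; device name is an assumption):
# regs=$(camcontrol cmd ada0 -a "E5 00 00 00 00 00 00 00 00 00 00 00" -r -)
# ...then pass the sector-count byte from $regs to interpret_power_mode.
```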

Thoughts?


John
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
I'm not sure how to verify they're asleep. But I will say that if your cache holds enough data to keep them from spinning up, you can expect no delay until you try to do something that isn't cached.
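(To see how much is being served from RAM rather than disk, you can look at ZFS's ARC counters. A minimal sketch, assuming the FreeBSD `kstat.zfs.misc.arcstats` sysctl OIDs are present on your FreeNAS build:)

```shell
#!/bin/sh
# Sketch: percentage of reads served from the ZFS ARC (RAM cache)
# instead of the disks. High hit ratios explain why browsing the
# share doesn't spin the drives up.
hit_ratio() {
    hits=$1
    misses=$2
    echo $(( 100 * hits / (hits + misses) ))
}

# On FreeBSD/FreeNAS (assumption: these sysctl names exist on your version):
# hit_ratio "$(sysctl -n kstat.zfs.misc.arcstats.hits)" \
#           "$(sysctl -n kstat.zfs.misc.arcstats.misses)"
```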

Personally, I don't believe hard drives should be put to sleep. It seems to add wear and tear. My drives that run in my server with 24x7 uptime last FAR longer than drives that have been used intermittently. Other senior members have noticed the same thing, and while we don't agree on why this is the case, we agree that the effect is the same: longer drive life.
 

el-John-o

Dabbler
Joined
Jan 26, 2013
Messages
15
cyberjock said:
I'm not sure how to verify they're asleep. But I will say that if your cache holds enough data to keep them from spinning up, you can expect no delay until you try to do something that isn't cached.

Personally, I don't believe hard drives should be put to sleep. It seems to add wear and tear. My drives that run in my server with 24x7 uptime last FAR longer than drives that have been used intermittently. Other senior members have noticed the same thing, and while we don't agree on why this is the case, we agree that the effect is the same: longer drive life.

Interesting thoughts on drive sleep causing additional wear. I wonder just how much it accelerates it? The point here is to reduce cost by cutting electrical load and heat (a Xeon server from '04 isn't exactly the pinnacle of efficiency). But you haven't saved anything if you're replacing drives...
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
I'll put it to you like this: I own 4 machines. Before SSDs, each had a boot drive, and I typically had to replace 1-2 every year; a year never went by where I "got away" with not having to replace one of them. I've now had 24 drives in my server for 3 years straight with only 1 failure. If they were failing at the same rate as my desktop drives, I should have expected 6-8 failures a year!

My personal rule of thumb: if I need to access the drives more than about once a week, I leave them on all the time. ZFS makes the sleep/wake cycle worse by requiring all disks to be awake if any of them needs access. That's not a completely unrealistic design, but in a server environment you can expect far more cycling than in desktop use, because anyone in the house can wake the zpool for almost any reason.

If you were doing weekly backups every Saturday, I would recommend you set up a schedule to power up the drives, do the backup, then shut them down. But for regular server use I see them being cycled a lot. For example: you're on your PC and you want to watch a movie on your HTPC. You shut down your PC, take a shower, then sit down to watch the movie... poof, you just cycled the drives. It's a situation that just isn't worth messing with.
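(The weekly-backup schedule above could be sketched as a root crontab fragment. This is a hypothetical example: the paths, pool name, device names, and times are all assumptions, not anything from an actual FreeNAS config.)

```shell
# Hypothetical root crontab fragment. Every Saturday at 02:00:
# reading the pool wakes the drives implicitly, rsync does the backup,
# then camcontrol spins each drive back down into standby.
0 2 * * 6 rsync -a /mnt/tank/data/ /mnt/backup/data/ && camcontrol standby ada0 && camcontrol standby ada1
```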
 

JaimieV

Guru
Joined
Oct 12, 2012
Messages
742
Agreed with Cyberjock. I'm sure there is a flipover point where occasional use gets *so* occasional that the drives last longer if they're set to sleep, but as far as I know there's no data on which to base that decision. All the longevity studies work with datacentres, where drives are not only kept constantly spinning but also at constant temperature (the Google results document from 2009 suggests 35-45°C was the ideal for the disks in their tests).
 

el-John-o

Dabbler
Joined
Jan 26, 2013
Messages
15
Okay, well that makes sense!

I've had good luck with drives. I've only ever lost one, a 120GB ATA drive from 10 years ago. I had a backup though, and that machine was long since out of service. Guess I've been lucky!

I have a Hewlett-Packard laptop with a 166MHz Pentium III CPU, currently maxed out at 192MB of RAM. Not only does its stock 6.0GB hard drive still work, but the battery still holds a charge and gives over an hour of battery life! Guess I just got lucky.

I disabled the drive sleep. The server has 4 hot-swap bays (and one non-hot-swap) with two large fans behind them, so the temperature should remain pretty constant. I'd like to eventually switch to WD Red drives.
 