
Hacking WD Greens (and Reds) with WDIDLE3.exe

rogerh

Guru
Joined
Apr 18, 2014
Messages
1,100
EDIT:

And still running strong, no reallocated sectors, all tests passing. Just curious, is this a problem?

I should say it is a well run-in and reliable drive, and it is best to stick with it. But it could be a problem in some cases, as it greatly exceeds WD's maximum rated load cycle count.
 

pasiz

Explorer
Joined
Oct 3, 2016
Messages
62
I should say it is a well run-in and reliable drive, and it is best to stick with it. But it could be a problem in some cases, as it greatly exceeds WD's maximum rated load cycle count.
As I stated before, WD has done testing to 300,000 load cycles. That does not mean they are saying, "the drive is failing because you have parked it more than we tested."

I have four of them in the pool with the 8-second timer, just for testing, and I don't buy the claim that drives cannot be parked more than 300,000 times. I have proved that they don't die within a million or two million cycles... WD does not specify a maximum cycle count; all they advertise is that they tested 300,000 cycles (and those tests don't even say whether drives failed during testing)...
 

Tekkie

Patron
Joined
May 31, 2011
Messages
344
Code:
root@shrek:~ # smartctl -a -q noserial /dev/da0 | grep Load_Cycle
  9 Power_On_Hours		  0x0032   100   100   000	Old_age   Always	   -	   45987
225 Load_Cycle_Count		0x0032   001   001   000	Old_age   Always	   -	   1363170

root@shrek:~ # smartctl -a -q noserial /dev/da1 | grep Load_Cycle
  9 Power_On_Hours		  0x0032   026   026   000	Old_age   Always	   -	   54089
193 Load_Cycle_Count		0x0032   001   001   000	Old_age   Always	   -	   3272860

root@shrek:~ # smartctl -a -q noserial /dev/da2 | grep Load_Cycle
  9 Power_On_Hours		  0x0032   044   044   000	Old_age   Always	   -	   41559
193 Load_Cycle_Count		0x0032   001   001   000	Old_age   Always	   -	   2938992

root@shrek:~ # smartctl -a -q noserial /dev/da3 | grep Load_Cycle
  9 Power_On_Hours		  0x0032   073   073   000	Old_age   Always	   -	   20060
193 Load_Cycle_Count		0x0032   001   001   000	Old_age   Always	   -	   955079

root@shrek:~ # smartctl -a -q noserial /dev/da4 | grep Load_Cycle
  9 Power_On_Hours		  0x0032   076   076   000	Old_age   Always	   -	   17543
193 Load_Cycle_Count		0x0032   001   001   000	Old_age   Always	   -	   2213398

root@shrek:~ # smartctl -a -q noserial /dev/da7 | grep Load_Cycle
  9 Power_On_Hours		  0x0032   030   030   000	Old_age   Always	   -	   61585
193 Load_Cycle_Count		0x0032   100   100   000	Old_age   Always	   -	   601


These are my numbers for my 7 year old RAIDZ2 config (4x WD Green, 1x Samsung, 1x Seagate - 2TB).
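For anyone wanting to turn raw smartctl output like the above into a parking rate, here is a minimal parsing sketch. The sample lines are copied from da1's output above; the column layout is the standard `smartctl -a` ATA attribute table (attribute name in the second column, raw value in the last).

```python
# Parse SMART attribute lines (as printed by `smartctl -a`) and estimate
# head parks per power-on hour. Sample lines are da1's from the post above.
sample = """\
  9 Power_On_Hours          0x0032   026   026   000    Old_age   Always       -       54089
193 Load_Cycle_Count        0x0032   001   001   000    Old_age   Always       -       3272860
"""

def attr_values(text):
    # Map attribute name (second column) -> raw value (last column)
    out = {}
    for line in text.splitlines():
        parts = line.split()
        if len(parts) >= 10:
            out[parts[1]] = int(parts[-1])
    return out

vals = attr_values(sample)
parks_per_hour = vals["Load_Cycle_Count"] / vals["Power_On_Hours"]
print(f"{parks_per_hour:.1f} load cycles per power-on hour")  # → 60.5
```

That works out to roughly one park per minute over the drive's whole life, which lines up with the default FreeNAS once-a-minute system-dataset writes discussed later in this thread.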

Edit: Added Power_On_Hours
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
9,348
@Tekkie
Thanks for posting that data. It's good to know that there is some real longevity here.
 

farmerpling2

Patron
Joined
Mar 20, 2017
Messages
224
Generally, the numbers that a manufacturer posts are used to decide whether or not to honor a warranty.

In some cases, a lower number might suggest lower-quality hardware, hence the lower rating. Without opening a drive and inspecting its internal hardware, we are stuck using the published numbers as a possible sign of quality, or it can be just plain marketing to sell their high-end devices.
 

rgoessl

Cadet
Joined
Dec 29, 2017
Messages
3
Just to put this on record for anyone else looking: this process does not work on 2.5 inch Blue drives, model WD10JPVX-22JC3T0. The wdidle program claims that the timer has been changed from 8s to 300s, but the drive still parks after 8 seconds. Oh well.
 

Sonik

Dabbler
Joined
Sep 30, 2014
Messages
21
Just to put this on record for anyone else looking: this process does not work on 2.5 inch Blue drives, model WD10JPVX-22JC3T0. The wdidle program claims that the timer has been changed from 8s to 300s, but the drive still parks after 8 seconds. Oh well.

I don't think 2.5" drives are designed to be changed like that. They're essentially designed as portable drives & to be used in laptops so are supposed to endure being moved around a lot. Parking the heads every 8 seconds seems like the sensible thing to do to avoid damage to the drives. The 3.5" drives aren't really designed to be portable drives.
 

sremick

Patron
Joined
Sep 24, 2014
Messages
319
I don't think 2.5" drives are designed to be changed like that. They're essentially designed as portable drives & to be used in laptops so are supposed to endure being moved around a lot. Parking the heads every 8 seconds seems like the sensible thing to do to avoid damage to the drives. The 3.5" drives aren't really designed to be portable drives.

Actually, a popular niche for 2.5" drives now is high-density servers. They're not just for laptops anymore. In fact, there are many 2.5" form factor drives that are far too thick to be used in most laptops, which makes shopping tricky for people looking to upgrade their laptops.
 

rgoessl

Cadet
Joined
Dec 29, 2017
Messages
3
I don't think 2.5" drives are designed to be changed like that. They're essentially designed as portable drives & to be used in laptops so are supposed to endure being moved around a lot. Parking the heads every 8 seconds seems like the sensible thing to do to avoid damage to the drives. The 3.5" drives aren't really designed to be portable drives.

It seems noticeably too aggressive when using SolidWorks or playing games. Waiting for a 3-second spin-up every time you draw a line gets old really quickly. At the very least they could have let us adjust it.
 

Sonik

Dabbler
Joined
Sep 30, 2014
Messages
21
It seems noticeably too aggressive when using SolidWorks or playing games. Waiting for a 3-second spin-up every time you draw a line gets old really quickly. At the very least they could have let us adjust it.

Don't get me wrong, I'm not saying I agree with it. We should definitely be given the option to change it, as it is far too aggressive. I've always changed it to 300 seconds on all my drives.

I was only suggesting that it might be the reason why some 2.5" drives aren't designed to have it changed. But as Sremick pointed out, some are used in servers, so WD's reasoning for not letting it be changed seems a little behind the times. Or perhaps it's just not compatible with WDIDLE3? Just curious, how old is the drive, btw? I've noticed a few people posting saying they can't change it on some newer drives, but I've never had any issues changing it on any of my recent Blues.
 

rgoessl

Cadet
Joined
Dec 29, 2017
Messages
3
Don't get me wrong, I'm not saying I agree with it. We should definitely be given the option to change it, as it is far too aggressive. I've always changed it to 300 seconds on all my drives.

I was only suggesting that it might be the reason why some 2.5" drives aren't designed to have it changed. But as Sremick pointed out, some are used in servers, so WD's reasoning for not letting it be changed seems a little behind the times. Or perhaps it's just not compatible with WDIDLE3? Just curious, how old is the drive, btw? I've noticed a few people posting saying they can't change it on some newer drives, but I've never had any issues changing it on any of my recent Blues.

I bought the drive around Jan 2017 IIRC.
 

jgreco

Resident Grinch
Moderator
Joined
May 29, 2011
Messages
15,979
I don't think 2.5" drives are designed to be changed like that. They're essentially designed as portable drives & to be used in laptops so are supposed to endure being moved around a lot. Parking the heads every 8 seconds seems like the sensible thing to do to avoid damage to the drives. The 3.5" drives aren't really designed to be portable drives.

This simply isn't true.

First off, 3.5" drives were definitely designed for portable applications. Back in the early '90s, when 3.5" was still relatively new, I was working with a company that was making medical monitoring devices, and the drive manufacturers were definitely touting that their 3.5" drives were much more suitable to being used in battery-backed gear located on a wheeled cart than the old 5.25" drives, banging down the hallways of hospitals. Generally speaking, manufacturers did see the desirability of portable storage, but it took the advent of USB to really make it work out in the form of external enclosures. There were definitely portable ("luggable") computers that used them, as well.

2.5" drives have been a popular format in the data center since the introduction of SAS, which provided a plausible connector format that also allowed the use of SATA drives in those slots. Server use of 2.5" really started to pick up steam about a decade ago, as it was discovered that you could suddenly fit 24 separate spindles into the space that 12 3.5" spindles used to take, potentially increasing IOPS and making more useful RAID5 configurations. It is definitely correct to say that the development of laptop drives helped to drive the form factor, and that there's been a lot of cross-pollination of ideas as the laptop technology improved, but drives like the Fuji MAY2073RC are definitely descendants of enterprise 3.5" drives, and share nothing but a form factor with laptop drives.

Desktop class drives such as the WD Blue 2.5" and NAS class drives such as the WD Red 2.5" pick up a lot of the best engineering from both sides of the equation, but it is worth noting that 2.5" drives that are not spec'ed for laptop use may not use the sophisticated shock sensors and other improvements of recent years intended to prevent damage associated with laptop mishaps in a manner consistent with laptop use. The WD Reds in particular have redesigned these things as vibration sensors to make drives work better in arrays.

With specific reference to the WD Red 2.5 (WD10JFCX), and WDIDLE3, I can tell you that they do support it. Every now and then someone slots a WD10JFCX into a hypervisor here without disabling the spindown, and eventually the LSI RAID controller gets pissed off and drops the disk because it takes too long to spin up. Any sort of delay, parking the heads, TLER/ERC, etc., can piss off the RAID controller and cause a disk to drop, so there can be a real hazard.
 

farmerpling2

Patron
Joined
Mar 20, 2017
Messages
224
The 2.5" drives at 10k or 15k RPM are very popular for certain demanding database servers.

They:
  1. Resync quickly, because of drive speed and lower capacity per drive
  2. Are low power
  3. Produce less heat than a comparable 3.5" drive
  4. Compared to a similarly sized 3.5" drive, generally have higher data density, so less head travel, faster seeks, etc.
I have seen them used in RAID 1 configurations quite a bit. Resync time is quick, reducing the window for multiple failures.

The price per TB is not that great, though.

RAIDZ3 with 12TB or 14TB drives works well, as it takes four failed drives before the pool is lost. When it might take 24+ hours to resync a 14TB drive under user load, this is very important.

This is the Achilles' heel of 8+ TB drives.
 

jgreco

Resident Grinch
Moderator
Joined
May 29, 2011
Messages
15,979
The 2.5" drives at 10k or 15k RPM are very popular for specific driven database servers.[...]The price per TB is not that great, though.

Well, that was true five or more years ago, but even then, SSD was making massive inroads. WD abandoned 15K drives years ago (but picked up a new line of them via HGST), Seagate isn't developing them anymore, etc.

The price on those suckers was never good, because enterprise drives were always the profitable thing for drive manufacturers, even if they didn't sell large numbers of units. Hard drives still win out on overall endurance, but huge strides have been made in storage options with multiple tiers of varying sorts, including hybrid drives, Intel SRT, Apple Fusion, ZFS with L2ARC, LSI CacheCade, etc., along with a serious look at using cheaper drives, SSD, and optimizing for use case.

That last bit, in particular... if you can figure out that your use case might allow you to use cheaper SSDs. I've talked a number of people into going with 850 Evos or Intel consumer SSDs in RAID 1 when they couldn't make a case for actually needing massive write endurance, and this has worked out very well. This has shocked a bunch of people, but I've been cutting corners on pointless expense for most of my professional life.

There are definitely a bunch of people out there who seem to think that it is ten years ago and they need their 15K drives for their database servers, and a few other specific things. In some cases, they're even correct... if you actually need large capacity and good performance across all of it, for example. But the HDD manufacturers were making big bank off the enterprise guys who would just spec buying a whole shelf of the fastest drives and then using that for everything. That particular gravy train has left the station and the smartest people (which likely includes most of the userbase here) are discovering how to exploit low cost storage and make it perform well.

We're going to continue to see SSD evolve to handle the random transactional workloads and HDD's, as you note especially with the large capacities and seek speed issues, have already evolved towards large sequential storage, especially things such as archival storage. Sad to see some of the cooler old technologies go by the wayside.
 

farmerpling2

Patron
Joined
Mar 20, 2017
Messages
224
Well, that was true five or more years ago, but even then, SSD was making massive inroads. WD abandoned 15K drives years ago (but picked up a new line of them via HGST), Seagate isn't developing them anymore, etc.

Points well taken. I do not disagree. If they made 2.5" in 4+ TB, then the 2.5" would hang around longer. I see 2TB 2.5", but nothing bigger.

Maybe helium will get it to 3TB! :rolleyes:

I do not foresee this happening.

SSDs are great for applications with lower disk space needs. Prices of SSDs vs. 2.5" HDs are getting closer to being on par.
 

jgreco

Resident Grinch
Moderator
Joined
May 29, 2011
Messages
15,979
Points well taken. I do not disagree. If they made 2.5" in 4+ TB, then the 2.5" would hang around longer. I see 2TB 2.5", but nothing bigger.

... "try harder?" :smile: ;-)

Seagate Momentus ST3000LM016 15mm 3TB
Seagate Momentus ST4000LM016 15mm 4TB
Seagate Momentus ST5000LM016 15mm 5TB


Seagate Barracuda ST3000LM024 15mm 3TB
Seagate Barracuda ST4000LM024 15mm 4TB
Seagate Barracuda ST5000LM000 15mm 5TB


Unfortunately only Seagate seems to have viable offerings at this point. The only way that they've been hitting these capacities is to go 5400RPM, which is consistent with my view that hard drives are headed towards large file bulk storage while SSD will take over random access and highly active storage tasks.

I wonder if the 2.5" drive form factor will actually survive. I kinda think that if HDD is headed towards bulk storage, we have a problem with density at 2.5", where they've managed to hit 5TB HDD, whereas the 3.5" are at 14TB. Within the same amount of front panel space, that's 3x3.5" HDD => 3*14TB=42TB or 6x2.5" HDD => 6*5TB=30TB. Plus fewer SAS PHYs and connectors and all that. So if you then look at 2.5" SSD, you realize that there's also a format change going on for SSD with M.2 and Samsung's M.3 / NGSFF. Take a look at this: Supermicro has a 1U with 36 NGSFF bays, that's right, THIRTY-SIX.
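The front-panel arithmetic above can be sanity-checked in a couple of lines (capacities as quoted in the post: 14TB per 3.5" drive, 5TB per 2.5" drive, with the usual two 2.5" bays fitting in one 3.5" bay's worth of panel space):

```python
# Capacity per unit of front-panel space, using the figures quoted above:
# a 3.5" bay holds one 14 TB drive; the same panel space holds two
# 2.5" bays at 5 TB each.
bays_35 = 3                    # three 3.5" bays of front-panel space
tb_35 = bays_35 * 14           # 3.5" drives in that space
tb_25 = bays_35 * 2 * 5        # six 2.5" drives in the same space
print(tb_35, tb_25)            # → 42 30
```

So even with twice the spindle count, 2.5" HDD comes out 12TB behind per unit of rack frontage, which is the density problem being described.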

What kind of future does 2.5"/U.2 really have? Hard drive manufacturers are being pressured by increased sales of SSD in end-user devices. HDD continues to win the price-per-TB battle for the time being, despite "price parity approaches" claims from several years ago, mostly due to a slowing of the decline of HDD per-GB pricing and the monster increases in SSD flash chip pricing. But the hard drive manufacturers have to see that the writing is on the wall, which is that eventually flash prices will fall.

It's got to be expensive to develop multiple product lines, which is why we're seeing fewer manufacturers and fewer product lines within each one. Perhaps WD has already made a strategic decision not to develop additional 2.5" high capacity drives, which is the only reason I can think of that they haven't even taken the 2.5" 2TB WD Green and made a Red variant out of it. It suggests to me that 2.5" may actually be a dying format, and that the HDD manufacturers may retreat to 3.5", which offers increased density; dropping 2.5" maybe allows them to squeeze profits out of 3.5" for a few more years until Samsung pulls the rug out from under the HDD market entirely.

HDD manufacturers seem to have figured out that price wars don't make for profit, and that their business model has an expiration date, so it isn't clear that we'll ever again see the glory days of HDD storage prices rapidly falling. But we could see flash prices fall, rapidly even.

Price of SSD vs. 2.5" HD are getting closer to being on par.

A little. Not that much. The WD10JFCX is around $70 while the 850 Evo 1TB is up at $350. That's still 5x, and it hasn't made any real progress in the last two years.
 

sssteeve.a

Cadet
Joined
Mar 1, 2018
Messages
1
Just a quick note. When I first used WDIDLE3 I believe that the notes said that if it was turned off the OS would control when the heads were parked. Is that true?
BTW when I replaced a dozen HDs with WD Greens the temperature in my computer room dropped about 5 degrees and I saved money having to run my AC less. But once the Greens started dropping like flies I was less impressed with them...
 

rogerh

Guru
Joined
Apr 18, 2014
Messages
1,100
Just a quick note. When I first used WDIDLE3 I believe that the notes said that if it was turned off the OS would control when the heads were parked. Is that true?

The OS does not directly cause the disks to park; they simply park when the OS has left them idle for a certain time. If (as on some older disks) it is possible to turn off parking altogether, then they won't park whatever the OS does. But modern WD Green disks can usually only be set to a parking time between 8 and 300 seconds, not to never park; they will park that many seconds after the OS stops accessing them. So you have two options: either keep the OS from accessing them very often, so they stay parked for long stretches rather than cycling in and out of the parked position; or set the parking time reasonably long and let the OS access them *more* often, so they never (or rarely) park.

With FreeNAS, in a home server that is not intensively used (say, never at night and only periodically during the day), you can prevent the disks from being accessed very often by moving the system database off the pool disks and recording it elsewhere, say on a boot disk (though preferably not if the boot device is a USB stick). This may be appropriate for a home server, and it also lets the disks go to standby if that is what you want.

However, with the default recording of the system information database on the pool disks, a write happens every minute. If the parking delay is set to 300 seconds, the disks never park their heads because they are never left idle for more than a minute. This is how the parking time and the default FreeNAS settings interact. Conversely, if the parking time is less than a minute, the disks will park and be woken up every minute, amounting to over a thousand cycles per day.
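The interaction described above can be put into numbers. A rough sketch, assuming a fixed access interval (the once-a-minute system-database write is the FreeNAS default mentioned above) and the drive's idle timer:

```python
SECONDS_PER_DAY = 86_400

def parks_per_day(idle_timer_s, access_interval_s):
    """Estimate head parks per day: the drive parks only if it sits idle
    longer than its timer, i.e. once per access interval when the timer
    is shorter than the gap between accesses."""
    if idle_timer_s >= access_interval_s:
        return 0  # the next access always arrives before the timer fires
    return SECONDS_PER_DAY // access_interval_s

print(parks_per_day(300, 60))  # 300 s timer, accessed every 60 s → 0
print(parks_per_day(8, 60))    # 8 s timer, accessed every 60 s → 1440
```

With the stock 8-second timer and once-a-minute accesses, that is 1,440 load cycles a day — over half a million a year, which is how Greens blow past their 300,000-cycle rating so quickly.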

(BTW, what temperature did your failed disks reach?)
 

Sonik

Dabbler
Joined
Sep 30, 2014
Messages
21
Somebody has posted on a YouTube video saying that the wdidle3 /d command doesn't disable head parking, just the timer, so it actually parks the heads immediately after read/writing.

I'm going to assume that person is completely misinformed? I've had the timers disabled on 4 2TB Blue drives and the load/unload cycle count has barely changed in months. Wouldn't my load/unload cycle count be massive if it were parking immediately?

Pretty sure it's complete nonsense, but just wanted to be sure.
 
Joined
May 10, 2017
Messages
837
Somebody has posted on a YouTube video saying that the wdidle3 /d command doesn't disable head parking, just the timer, so it actually parks the heads immediately after read/writing.

I'm going to assume that person is completely misinformed? I've had the timers disabled on 4 2TB Blue drives and the load/unload cycle count has barely changed in months. Wouldn't my load/unload cycle count be massive if it were parking immediately?

Pretty sure it's complete nonsense, but just wanted to be sure.
That's wrong, or the load/unload cycle count would increase much faster.
 