MAX working temperature of Seagate Ironwolf drives...

rwillett

Dabbler
Joined
Feb 18, 2015
Messages
37
Hi,

I'm slowly burning in my new 6 x 4TB Ironwolf drives. I'm using badblocks and going through that inevitable "I wonder how long before the program finishes" stage; my best guess is 3-4 days, which is fine, as there's nothing else to do with them :)
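
For anyone wondering the same thing, a very rough back-of-the-envelope estimate is possible. This is only a sketch with assumed numbers (a destructive badblocks -w run, which as far as I understand writes and then reads back four patterns, plus a guessed average throughput you'd want to replace with your own figure):

# Very rough badblocks -w runtime estimate; every number here is an assumption.
capacity_bytes = 4e12        # one 4TB drive
throughput_bps = 150e6       # assumed ~150 MB/s average sustained throughput, in bytes/s
patterns = 4                 # badblocks -w writes four patterns (0xaa, 0x55, 0xff, 0x00) by default
full_passes = patterns * 2   # each pattern is written to the whole disk, then read back

seconds = capacity_bytes / throughput_bps * full_passes
print(f"~{seconds / 3600:.0f} hours, ~{seconds / 86400:.1f} days")

With those guesses it works out at roughly two and a half days per drive, and the real average throughput over the whole platter is lower than the headline figure (the inner tracks are slower), which pushes it towards the 3-4 days I'm seeing.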

Anyway, I thought I'd check the temperature of the disks, as they are in a small Dell T110 II case. It is a server case, but still, 6 x 4TB drives is quite a lot.

The temperatures are all around 24C to 33C. The disks sandwiched between other disks are hotter, which is what I'd expect. I have some additional fans to keep things cool but am waiting for badblocks to finish before installing them. The overall temperature range doesn't seem bad, but I thought I'd look on the Seagate website for what the thermal range is defined to be (https://www.seagate.com/www-content/datasheets/pdfs/ironwolf-12tbDS1904-9-1707US-en_US.pdf). I was expecting a top end around 45C but Seagate quote 70C which seems surprising to me. This 70C is across the whole range of their disks.
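
In case it's useful to anyone doing the same check, this is roughly the sort of thing I've been using to read the temperatures. It's only a sketch: it assumes smartmontools is installed, that the disks appear as /dev/ada0 to /dev/ada5 (adjust the list for your own system), and that the drives report SMART attribute 194 (Temperature_Celsius):

#!/usr/bin/env python3
# Quick check of drive temperatures via smartctl (smartmontools).
# Assumes the disks are /dev/ada0..ada5 and expose attribute 194 Temperature_Celsius;
# some drives report 190 Airflow_Temperature_Cel instead, so adjust the pattern if needed.
import re
import subprocess

devices = [f"/dev/ada{i}" for i in range(6)]   # adjust for your system

for dev in devices:
    out = subprocess.run(["smartctl", "-A", dev],
                         capture_output=True, text=True).stdout
    match = re.search(r"Temperature_Celsius.*?(\d+)\s*(?:\(|$)", out, re.MULTILINE)
    print(dev, f"{match.group(1)}C" if match else "no temperature attribute found")

It needs to run as root (or via sudo) so that smartctl can talk to the drives.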

Looking around this website, it appears that anything from 20C to the mid-30s seems OK, so I'm not that bothered about my current temperatures, but the fans will go in once the testing completes next week.

However I'd be interested as to whether anybody has ever run disks at 70C and how long they actually do last. I understand there is a link between heat and lifespan, but 70C seems very hot to me indeed. Part of me is almost tempted to try to heat a disk up to 70C somehow and see how long it lasts, but short of a blowtorch I can't see how. I live in the North of England in the Yorkshire Dales and a hot day here is 21C, a very hot day is 21C without rain :)
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
I was expecting a top end around 45C but Seagate quote 70C which seems surprising to me.
That is a maximum number, kind of like how your car may be able to do 120 miles an hour, but you wouldn't drive it that way all the time. The advice to keep drives below 35°C is about getting the drives to last longer.
In a paper published by Microsoft and the University of Virginia:
https://cheetah.cs.virginia.edu/~gurumurthi/papers/acmtos13.pdf
Authors: Sriram Sankar (Microsoft Corporation), Mark Shaw (Microsoft Corporation), Kushagra Vaid (Microsoft Corporation), Sudhanva Gurumurthi (University of Virginia)
You will find the following quote:
Yang et al [1999] establishes that a 25 C delta in temperature derates the MTTF by a factor of 2 in their study on Quantum hard disk drives. Cole et al [2000] from Seagate, present thermal de-rating models showing that MTTF could degrade by close to 50% when going from operating temperatures of 30C to 42C. Our results agree with the observations made by Cole. Our measured failure rates also exceed the AFR rates that manufacturers mention in their datasheets
I may not have a perfect understanding of all the data presented in their paper, but I have about two decades of experience working in the field, and it has been my observation that drives in a cool, dry environment (not too dry) last longer than drives in a warm, moist environment, and the difference is pretty significant. Also, drives are the highest-failure item. Many times it is possible to run a server through a set or two of hard drives, kind of like replacing tires on your car. The server is still good to go; you just need higher-capacity drives, or newer drives if the ones you have are approaching five years of age, which is when the failure rate starts to go up in the vast majority of situations.
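
Just to put a rough number on the derating described above, here is a small illustration of the "MTTF halves for every 25 C increase" figure attributed to Yang et al. in that quote. It is only an illustration of that rule of thumb, not a model of any particular drive:

# Illustrative only: relative MTTF under the "halves every 25 C" rule of thumb
# quoted above (Yang et al.); real drives will not follow this exactly.
def relative_mttf(temp_c, baseline_c=30.0, halving_delta_c=25.0):
    return 0.5 ** ((temp_c - baseline_c) / halving_delta_c)

for t in (30, 35, 42, 55, 70):
    print(f"{t} C: {relative_mttf(t):.2f} x the expected life at 30 C")

By that rule of thumb, running at 70°C leaves you with roughly a third of the expected life you would get at 30°C, and the Cole figures quoted above suggest the penalty between 30°C and 42°C is steeper than the rule of thumb predicts.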
This 70C is across the whole range of their disks.
No, different models will have different temperature ratings. Best to check, but also best to keep them cool.
However I'd be interested as to whether anybody has ever run disks at 70C and how long they actually do last.
We had one of our supplemental cooling units fail at work and the data center got hot for a while, maybe a month, during which the drives were running close to the limit, but I don't suggest pushing the limit. We had a user on the forum some time back (a couple of years ago, if I recall) who had their server in a building in the desert, somewhere in the Middle East, with no air conditioning. Their drives routinely ran over temperature and were failing somewhere between 1.5 and 2 years of service. No amount of airflow is going to keep the drives cool if the ambient temperature is too high to allow the heat to dissipate. That heat takes its toll on the bearing surfaces in the drive. It's kind of like draining the oil out of your car engine and driving cross-country: it will go for a while, then it won't, and there's no way to know for sure how far you'll get.
I live in the North of England in the Yorkshire Dales and a hot day here is 21C, a very hot day is 21C without rain :)
Then it might be more important to worry about the humidity. Too much ambient humidity can cause the drives to fail prematurely also.
 

rwillett

Dabbler
Joined
Feb 18, 2015
Messages
37
@Chris Moore

Thanks for the reply. I was a little surprised by Seagate's temperature rating; I was aware that disks become less reliable under hot conditions, but I hadn't seen the paper and the calculations.

My aim is to get the temperatures down to 30C for all the drives. I just need badblocks to finish so I can shut the server down and get the fans in. I can live with 32C on one of the disks for a few days; once the fans are in I should hopefully have a sensible internal temperature.

The last comment was actually a joke. Where I live does get quite a lot of rain and is well known for it, but we don't suffer from high humidity. The weather comes across the Atlantic, drops most of its rain on Ireland and saves the rest up for us, as we're the first high ground in the UK after Ireland. Nobody would build a data centre here unless it ran on cow or sheep poop, in which case it would cost nothing to run. I've no doubt somebody has tried it.

Rob
 

pschatz100

Guru
Joined
Mar 30, 2014
Messages
1,184
My home server is in a location where noise is an issue, so I'm limited in the number of fans I can run. My disks run between 32C and 35C, and I have not had what I would consider a premature failure. However, my home server is also on 24/7, so the disks don't start and stop.
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
My home server is in a location where noise is an issue, so I'm limited in the number of fans I can run. My disks run between 32C and 35C, and I have not had what I would consider a premature failure. However, my home server is also on 24/7, so the disks don't start and stop.
That temp range is fine for most NAS-rated drives. At my work we always have a rash of drive failures when the supplemental cooling in the data center goes offline for a few hours and the temperatures spike. Thankfully it doesn't happen very often, but it did happen twice in March and I ended up changing out a dozen drives across all the servers I manage.
In your situation, I wouldn't worry as long as the temperatures are under 35°C.
 

rwillett

Dabbler
Joined
Feb 18, 2015
Messages
37
I put a small fan in my Dell T110 II; it had to be small as there's nowhere to exhaust the hot air :)

I left it running (without badblocks) and the temperatures seem to have stabilised between 26C and 29C, depending on the disk position. I suspect that badblocks was responsible for a good proportion of the temperature rise, so I don't attribute all of the drop to the fan.

I will investigate further options for cooling. My original intention was simply to move the whole server into a much larger case with lots of air movement; however, I then discovered that Dell servers are mirror images of desktop machines (e.g. the PCI slots are at the 'wrong' end) and that you can't easily move the motherboard into another case. I hadn't realised they were so proprietary. I've been in the job for four decades now and hadn't used Dell kit before, so I didn't know they were like this.

Oh well, we live and learn. I'll work out how to stuff another fan or two in and see how that goes :)
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
I hadn't realised they were so proprietary.
Dell, especially in their server gear, is often proprietary in that they design their own chassis and system boards without regard to any industry standard. I have had to deal with their hardware at work since 1999, when I started working at the site where I am now. Before that, I was working at a repair depot where we handled gear from companies like HP, IBM, NEC and Compaq (back when NEC and Compaq were still around), and I can tell you from many years of experience that in the higher-end gear (servers especially) all the big-name companies design custom hardware. If you want to be able to mix and match chassis and main board, your best bet is to deal with Supermicro system boards, although even they make some custom models that are specific to certain chassis.
 

rwillett

Dabbler
Joined
Feb 18, 2015
Messages
37
Chris,

Thanks for the update. I worked at IBM for 15 years before branching out on my own. I used a lot of pSeries servers and occasionally needed to open one up to add a new hardware card. Beautifully made and wholly proprietary, which is as expected for a Power-series CPU.

I had mistakenly thought that Dell was far more standards-based, and my exceptionally limited research hadn't picked up the fact that their motherboards will not fit other cases. Pity really, as the new case I bought is a delight to use and will hold around 12-15 disks. I'll use the T110 for a while and see how it goes, and use the new case for another project.

Rob
 

Stevie_1der

Explorer
Joined
Feb 5, 2019
Messages
80
All the big manufacturers (well, except Supermicro, thankfully) almost always use non-standard stuff, be it Dell, HP, Fujitsu, etc.
I wonder if there is some annual competition for the most stupid ideas to increase their spare-parts sales.
Something like:
"Hey, let's make the mainboard a little smaller, and the chassis also, so no other part would fit."
"Or let's just move the standoffs to some weird positions."
"What about making a custom sized PSU?" - "Oh yeah, and we could use a different power connector also, great!"

The last winner was someone at Apple:
"Hey, wouldn't it be cool to re-purpose HDD pin 11 from activity/staggered spinup to a temperature sensor and some specific firmware? So that the BIOS would scream and cry if someone installs a cheap evil non-Apple HDD."

I wonder if the next winner will be something like "Um, Intel guys, we'd like to order some CPUs, but add some lands to the socket or just rewire the existing pins, so that users cannot install CPUs they didn't buy from us in our mainboards."

:D
 

Stevie_1der

Explorer
Joined
Feb 5, 2019
Messages
80
I can easily imagine that someone has already thought about that, but discarded it due to the immense cost...
I mean, developing custom and highly incompatible stuff is always more expensive than just using existing, well-tested standards, but they wouldn't do it if the extra profit were too small.

But who knows, you'd just need some highly expensive consultants to write up the right buzzword-filled text, and some people stupid enough to actually fall for it.
Both shouldn't be too hard to find... :D
 