SOLVED Would you recommend the Toshiba DT01ACA300 HDD?

Status
Not open for further replies.

Inxsible

Guru
Joined
Aug 14, 2017
Messages
1,123
(I've completely forgotten everything from last time round...)
  1. SMART - short, conveyance (if your drive supports it), long.
  2. badblocks
  3. SMART - long.
  4. Done. Profit !!
Nothing to it.
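The steps above can be sketched as shell commands. This is a rough outline, not gospel: the device name `/dev/ada6` is just an example (it matches the drive discussed later in this thread), so substitute your own, and note that `badblocks -w` destroys everything on the disk.

```shell
# 1. SMART short self-test (a few minutes)
smartctl -t short /dev/ada6

# Conveyance self-test -- skip if your drive reports it as unsupported
smartctl -t conveyance /dev/ada6

# SMART long self-test (several hours on a 3TB drive)
smartctl -t long /dev/ada6

# Check the result after each self-test completes
smartctl -a /dev/ada6

# 2. badblocks: destructive write test, four patterns written and read back.
#    WARNING: -w wipes all data on the drive.
badblocks -ws /dev/ada6

# 3. Final SMART long test to confirm the drive survived the stress
smartctl -t long /dev/ada6
```

The self-tests run on the drive itself in the background; `smartctl -a` shows whether the last test completed without error before you move to the next step.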
 

VladTepes

Patron
Joined
May 18, 2016
Messages
287
short - fine
conveyance - not supported
long - just started, ETA 6 hours!

Not that I should worry, the badblocks later will take a LOT longer than that !

I don't suppose there is any way to 'shut down' the FreeNAS system (so the drives connected to it stop/idle etc.) while still keeping power to the system? It occurs to me that would be a nice way to minimise the chance of additional disk failures during the burn-in process.
 

Inxsible

Guru
Joined
Aug 14, 2017
Messages
1,123
You can always burn in your disks on a different desktop that you might have.
 

farmerpling2

Patron
Joined
Mar 20, 2017
Messages
224
For reference, take a look at the Disk Drive Price/Performance Spreadsheet.

You never talked about some important points of interest:
  • Warranty
  • MTBF
  • Work Load
  • Error Rate
For performance look at:
  • Transfer speed
  • Seek
  • Latency
  • RPM
Environment
  • Power usage per TB
  • Startup/run time watts affect power supply size

There is a lot more to buying a better-than-average disk drive than just straight price... :eek:
 
Last edited:

Inxsible

Guru
Joined
Aug 14, 2017
Messages
1,123
For reference, take a look at the Disk Drive Price/Performance Spreadsheet.

You never talked about some important points of interest:
  • Warranty
  • MTBF
  • Work Load
  • Error Rate
For performance look at:
  • Transfer speed
  • Seek
  • Latency
  • RPM
Environment
  • Power usage per TB
  • Startup/run time watts affect power supply size

There is a lot more to buying a better-than-average disk drive than just straight price... :eek:
While that might be true, for an average home user, the differences in those values tend to be so close that it shouldn't matter too much for home use. Also, if you do way too much research in buying them, you might just end up doing research and not have any time left to buy the damn hardware ;)

But I do see your point where someone needs to get every ounce of performance out of their hardware. Most times, performance and price go hand in hand: the higher the performance required, the higher the price.
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
While that might be true, for an average home user, the differences in those values tend to be so close that it shouldn't matter too much for home use. Also, if you do way too much research in buying them, you might just end up doing research and not have any time left to buy the damn hardware ;)

But I do see your point where someone needs to get every ounce of performance out of their hardware. Most times, performance and price go hand in hand: the higher the performance required, the higher the price.

I think that the comparison is more important for the datacenter that is buying 60 drives at once than for the home user that is buying 1 drive.
I ran the numbers for a server we are looking to buy at work, and we concluded the 8TB helium-filled HGST drives were the way to go.
I don't figure I will ever want drives with that much single drive capacity at home because of the single drive cost.
I am currently looking to increase my drive count in one of my servers to 24 drives to increase IOPS. I would never need 24 drives at 8 TB, I don't even need the capacity of 24 drives at 2TB. I just want the speed for virtual machine storage. I would never even need the capacity of 12 drives at 8TB. So, the 'value' of 8TB drives is wasted on my home system. I was even considering going back to 1TB drives to fill that 24 drive chassis. I just don't need that volume of storage. Everyone has different requirements.
It is a whole different calculation for home than for a business.
 
Last edited:

VladTepes

Patron
Joined
May 18, 2016
Messages
287
Well the long smart test was fine too. :)

badblocks is running now. So I'm waiting.....


Sort of related follow up question: Given the WD Red that I have replaced seems quite flaky - would it be worth keeping it for any reason, or bin it?
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
Well the long smart test was fine too. :)

badblocks is running now. So I'm waiting.....

Sort of related follow up question: Given the WD Red that I have replaced seems quite flaky - would it be worth keeping it for any reason, or bin it?
If it isn't in warranty, you might as well run a program like DBAN against it to clean the data as best as possible. Then you could consider if you want to sell it for parts on eBay or just dump it in the recycle bin. I sold some 2TB drives recently and got as much as $20 plus shipping for them.
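If DBAN isn't handy, a single-pass overwrite with dd is a rough alternative on a FreeBSD/FreeNAS box. Sketch only: the device name is an example, the command is irreversible, and a single zero pass is enough to defeat casual recovery, though DBAN's multiple passes give more assurance.

```shell
# Overwrite the entire drive once with zeros.
# Replace /dev/ada6 with the correct device -- triple-check first,
# because this destroys everything on it.
dd if=/dev/zero of=/dev/ada6 bs=1m
```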
 

MrToddsFriends

Documentation Browser
Joined
Jan 12, 2015
Messages
1,338
badblocks is running now. So I'm waiting.....

Expect that it takes roughly 8 times as long as the SMART long test (4 complete disk writes with different patterns and 4 complete disk reads). So if the latter took 7 hours, badblocks will take roughly 56 hours.
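The arithmetic behind that estimate, as a quick sketch (the 7-hour figure is the hypothetical SMART long duration used above):

```python
# badblocks -w makes 4 full write passes plus 4 full read passes
# over the disk, i.e. 8 complete passes in total. A SMART long test
# is roughly one full read pass, so:
smart_long_hours = 7      # duration of the SMART long test
passes = 4 + 4            # four write passes, four read passes
badblocks_hours = smart_long_hours * passes
print(badblocks_hours)    # -> 56
```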
 

farmerpling2

Patron
Joined
Mar 20, 2017
Messages
224
While that might be true, for an average home user, the differences in those values tend to be so close that it shouldn't matter too much for home use. Also, if you do way too much research in buying them, you might just end up doing research and not have any time left to buy the damn hardware ;)

Actually, the home user will likely need it more than the enterprise. The home system tends to be ignored more than the datacenter. The datacenter has tools that track and watch for errors and will automatically put a spare into play.

Most home users do not have spares running ready to be brought in automatically.

The home user who gets enterprise drives will have far fewer problems than if they buy WD Red NAS drives.

Take a look at the nice tables (Disk Price/Performance Spreadsheet) and you will see the difference in specs. The drives are just plain better made. Compare HGST drives. Error rate on media is 1.000E+14 vs. 1.000E+15, MTBF is 1,000,000 vs. 2,500,000, 3 year warranty vs. 5 year, etc.


My consumer grade drives never last as long as the enterprise ones. Fewer failures and headaches by a long shot! :confused:

He (helium) DRIVES
IMHO, WD bought HGST to get their He technology. WD will be moving more and more to He-only drives in the future. In time, most if not all drives will be He-filled.
 
Last edited:

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
You can always burn in your disks on a different desktop that you might have.
Absolutely. Especially when I am getting replacement disks ready, I burn them in on a completely different system. I have several spares on hand, ready to put in if they are needed, and they have all been fully tested ahead of time. Right now, I have more drives than usual because I just replaced a bunch of 2TB drives with 4TB drives, so where I would normally only have a couple of spares, I have 8.
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
My consumer grade drives never last as long as the enterprise ones. Fewer failures and headaches by a long shot!
I guess I don't think of it that way because I monitor my drives closely, both at home and at work, and have spares on hand to replace any drive that begins to give me errors. I figure it is like a light bulb: if it fails, just replace it and move on. I am not going to spend extra to get the special light bulb that might last longer. Not having to worry too much about a single drive failure is the whole reason I use disk arrays to store my data.
 

Inxsible

Guru
Joined
Aug 14, 2017
Messages
1,123
I think that the comparison is more important for the datacenter that is buying 60 drives at once than for the home user that is buying 1 drive.
I ran the numbers for a server we are looking to buy at work, and we concluded the 8TB helium-filled HGST drives were the way to go.
I don't figure I will ever want drives with that much single drive capacity at home because of the single drive cost.
I am currently looking to increase my drive count in one of my servers to 24 drives to increase IOPS. I would never need 24 drives at 8 TB, I don't even need the capacity of 24 drives at 2TB. I just want the speed for virtual machine storage. I would never even need the capacity of 12 drives at 8TB. So, the 'value' of 8TB drives is wasted on my home system. I was even considering going back to 1TB drives to fill that 24 drive chassis. I just don't need that volume of storage. Everyone has different requirements.
It is a whole different calculation for home than for a business.
So you are agreeing with me?
 

VladTepes

Patron
Joined
May 18, 2016
Messages
287
So I come home, have a look at my screen to check on progress, and am greeted by this.
I have no idea what it all means, so if someone can interpret for me, that'd be great.

Code:
root@Remus:~ # badblocks -ws /dev/ada6
Testing with pattern 0xaa: set_o_direct: Inappropriate ioctl for device
done
Reading and comparing: done
Testing with pattern 0x55: done
Reading and comparing: done
Testing with pattern 0xff: done
Reading and comparing: done
Testing with pattern 0x00: done
Reading and comparing: done



It hasn't been going for anywhere near long enough (I wouldn't think) to complete badblocks.
 

Inxsible

Guru
Joined
Aug 14, 2017
Messages
1,123
So I come home, have a look at my screen to check on progress, and am greeted by this.
I have no idea what it all means, so if someone can interpret for me, that'd be great.

Code:
root@Remus:~ # badblocks -ws /dev/ada6
Testing with pattern 0xaa: set_o_direct: Inappropriate ioctl for device
done
Reading and comparing: done
Testing with pattern 0x55: done
Reading and comparing: done
Testing with pattern 0xff: done
Reading and comparing: done
Testing with pattern 0x00: done
Reading and comparing: done



It hasn't been going for anywhere near long enough (I wouldn't think) to complete badblocks.
Seems like the test completed. Run the SMART long again to find out if everything is still A-OK.
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
Inappropriate ioctl for device

That has me wondering if something didn't do what it was supposed to do.
I use a different utility to test my disks, so I am not familiar with what the badblocks test is supposed to give as output.
 

Inxsible

Guru
Joined
Aug 14, 2017
Messages
1,123
That has me wondering if something didn't do what it was supposed to do.
I use a different utility to test my disks, so I am not familiar with what the badblocks test is supposed to give as output.
Not really. See this thread. Mainly the first 3 posts.
 

VladTepes

Patron
Joined
May 18, 2016
Messages
287
Thanks inxsible. I hadn't seen that thread in a search, somehow.
I am somewhat surprised that badblocks didn't take anywhere near as long as people suggested it might. Anyway, I'm not complaining.

I anticipated the need for your other suggestion and set it to do another long smart test late last night. When I get home from work I will check the result.

If all is still well, I guess it's then time to tell FreeNAS to get its act together and start resilvering.
 

MrToddsFriends

Documentation Browser
Joined
Jan 12, 2015
Messages
1,338
I am somewhat surprised that the badblocks didn't seem to take anywhere near as long as people were suggesting it might. Anyway I'm not complaining.

How long did the badblocks run actually take? Are you able to specify at least an order of magnitude (5 hours, 50 minutes, 5 minutes, ...)?

Edit: Perhaps using Reporting -> Disk from the FreeNAS GUI if you have no other evidence.
 
Last edited:
Status
Not open for further replies.
Top