Hi all, my new hard drives arrived and I was hoping to start a burn-in process for them, something I did in the past.
These are the drives that I bought:
Seagate Ironwolf Pro 18TB
I did this in the past with smaller drives, such as the 3TB WD Red NAS (5400 RPM).
I read the resources burn-in post and also the post about this script.
As I was reading, though, I ran into a potential issue with drives bigger than 16TB (mine being 18TB).
I searched the forum for a clear solution as well, but I don't feel confident enough to proceed. I am sure a lot more people will be interested in this in the future, since 18TB drives have become so much more affordable (I paid $279 each + tax).
At this point, I am running the long SMART self-test, so I have to wait for it to finish, about 24 hours from now:
Code:
smartctl -t long /dev/adaX
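For anyone following along, here is a small sketch of how the long test could be kicked off on all six drives at once and previewed first. This is not from the original burn-in post; the device names `ada0`–`ada5` are assumptions (on some systems the drives show up as `daX` instead), so adjust them for your hardware:

```shell
#!/bin/sh
# Sketch: start a long SMART self-test on each new drive.
# Device names ada0..ada5 are assumptions; adjust for your system
# (e.g. da16-style names on SAS controllers).

DRY_RUN=1   # keep at 1 to preview the commands; set to 0 to run them

start_long_tests() {
    for d in ada0 ada1 ada2 ada3 ada4 ada5; do
        if [ "$DRY_RUN" -eq 1 ]; then
            echo "smartctl -t long /dev/$d"   # preview only
        else
            smartctl -t long "/dev/$d"
        fi
    done
}

start_long_tests
```

Progress can later be checked per drive with something like `smartctl -a /dev/ada0 | grep -A1 "Self-test execution status"`.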
I came across this comment from the main burn-in resource:
Really helpful info - thanks qwertymodo.
- wafliron
- 5.00 star(s)
- Aug 4, 2022
In case it helps anyone else, I just ran into an issue trying to run badblocks on 18TB drives - it throws an error when the number of blocks to test is greater than the max value of an unsigned 32-bit integer (4,294,967,295):
root@delta:~ # badblocks -b 4096 -wsv /dev/da16
badblocks: Value too large to be stored in data type invalid end block (4394582016): must be 32-bit value
Assuming 4K physical blocks, 16TB and lower drives should be fine, but the problem will crop up on any drive 18TB or larger.
It appears there are two possible solutions to this:
1) Run badblocks with a larger block size (still a multiple of the drive's physical block size), e.g. 8192, 16384, etc. with 4K physical blocks. I did, however, read that using a non-native block size can cause false negatives, albeit anecdotally (a few mentions on forums, but I can't find a primary source).
2) Split the badblocks run into chunks of fewer than 4,294,967,295 blocks (i.e. each run targeting only part of the disk), e.g. in my specific case:
badblocks -b 4096 -wsv /dev/da16 2197291008 0
followed by:
badblocks -b 4096 -wsv /dev/da16 4394582016 2197291009
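To sanity-check the numbers in that comment, here is a small shell sketch using only arithmetic (no disk access). The 18,000,000,000,000-byte figure is an assumption based on the nominal 18TB capacity; the quoted end block of 4,394,582,016 comes from the actual drive's reported size, which is slightly larger, so read the real value with `smartctl -i` or `diskinfo` on your own system before splitting:

```shell
#!/bin/sh
# Why badblocks fails at -b 4096 on an 18TB drive, and two workarounds.
# BYTES is the *nominal* 18TB capacity (an assumption); get the real
# figure from `smartctl -i /dev/daX` or `diskinfo daX`.

BYTES=18000000000000          # nominal 18TB
MAX32=4294967295              # 2^32 - 1, badblocks' block-count ceiling

blocks_4k=$((BYTES / 4096))   # ~4.39 billion: exceeds the 32-bit limit
blocks_16k=$((BYTES / 16384)) # ~1.10 billion: fits comfortably

echo "4K blocks:  $blocks_4k (limit $MAX32)"
echo "16K blocks: $blocks_16k"

# Workaround 2: split the 4K run into two halves, each under the limit.
# (/dev/daX is a placeholder device name.)
half=$((blocks_4k / 2))
echo "badblocks -b 4096 -wsv /dev/daX $half 0"
echo "badblocks -b 4096 -wsv /dev/daX $blocks_4k $((half + 1))"
```

Note that badblocks takes the range as `last-block [first-block]`, which is why the second run starts one block after where the first run ended, matching the pattern in the quoted commands above.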
Would you please confirm the recommended way to do this burn-in test?
I am running this test on an R730xd with 6 new drives (this machine has a live pool of another 6 drives, probably not relevant, but worth mentioning). The drives will move into a new server once I receive the parts and put it together.
Thank you in advance