
Hard Drive Burn-In Testing - Discussion Thread

Joined
Oct 22, 2019
Messages
3,641
/incoming appears to be empty on that FTP, do you know somewhere else to get the script?
Don't bother with Chrome or Firefox, even if you fix the URL. They dropped FTP support.

Quickest way to grab the file (directly):
wget ftp://ftp.sol.net/incoming/solnet-array-test-v2.sh

✅ The above command downloads the file, and I even double-checked the contents.
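If wget isn't available, curl should also do the job; it has supported ftp:// URLs for a long time (same file, untested on my end):
curl -O ftp://ftp.sol.net/incoming/solnet-array-test-v2.sh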
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
/incoming appears to be empty on that FTP, do you know somewhere else to get the script?

/incoming is not browsable. The file is there. Also, lots of browsers no longer support ftp:// URLs.

You can probably use "fetch ftp://snarchive.sol.net/incoming/solnet-array-test-v2.sh" from the command line.
 

Apollo

Wizard
Joined
Jun 13, 2013
Messages
1,458
I am currently running a burn-in test on four 18TB drives and I came across the error below:
badblocks -b 4096 -ws /dev/da0
badblocks: Value too large to be stored in data type invalid end block (4394582016): must be 32-bit value
I did some searching and came across another solution which requires splitting the test to cover chunks of physical disk space.

WD 18TB, Badblocks error, value too large?

I simply give badblocks the last and first block of the range to test, keeping each range below 2^32 blocks:

badblocks -b 4096 -ws /dev/da0 2197291008 0
Testing with pattern 0xaa: set_o_direct: Inappropriate ioctl for device

Once the first section of the disk is completed, I will proceed with the remaining part. (The set_o_direct warning above appears to be harmless; it just means badblocks could not enable direct I/O on the raw device.)

badblocks -b 4096 -ws /dev/da0 4394582016 2197291009

The test is already at the 72-hour mark and is expected to run for around 77 hours. At that point I will have to run another 77+ hours to get the entire drive tested.


Seemed too good to be true.
Badblocks doesn't even allow the offset to go past the 32-bit boundary. I was wary about it but had hoped the system would allow it.
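To spell out the arithmetic for anyone following along (my numbers; check them against your own drive's capacity):

Code:
# The last-block argument itself must fit in 32 bits, but this drive's
# final 4k block number does not:
#   18,000,207,937,536 bytes / 4096 = 4,394,582,016 blocks
#   2^32                            = 4,294,967,296
# Blocks 4,294,967,296 through 4,394,582,015 are unreachable at -b 4096,
# no matter how the run is split.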

So what are the options, knowing hard drives are only going to get larger and larger over time (hopefully)?
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
I did some searching and came across another solution which requires splitting the test to cover chunks of physical disk space.

Yeah, because badblocks isn't designed for testing disks.
 

Etorix

Wizard
Joined
Dec 30, 2020
Messages
2,134
So what are the options, knowing hard drives are only going to get larger and larger over time (hopefully)?
Have you tried badblocks -b 8192 -ws /dev/da0?
In the long term the solution is to move away from (mis)using badblocks, though.
 

Apollo

Wizard
Joined
Jun 13, 2013
Messages
1,458
Have you tried badblocks -b 8192 -ws /dev/da0?
In the long term the solution is to move away from (mis)using badblocks, though.
I haven't tried it. The reason is that there may be some subtleties that are not accounted for, which would make the test unreliable.
I just don't have enough experience in this area.
@jgreco mentioned badblocks isn't designed for testing disks, and to some level it both is and is not.
From my understanding, what badblocks is trying to do is validate that the disk can write and read sets of bits without errors. It checks that every bit can be written as both "0" and "1"; the default write-mode patterns (0xaa, 0x55, 0xff, 0x00) cover alternating bits in both phases as well as all ones and all zeros.
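If I read the man page right, the following should be equivalent to a default write-mode run, with the four patterns spelled out explicitly (I haven't verified this form myself):
badblocks -b 8192 -ws -t 0xaa -t 0x55 -t 0xff -t 0x00 /dev/da0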

The drives I am testing are Seagate Exos 18TB units. While trying to understand the setting that migrates them from 512e to 4Kn (I have contacted Seagate support on the matter), I read through the documentation for the SeaChest series of utilities, and it appears there may be a specific series of tests SeaChest can perform to write patterns to the disk.


If Badblocks wasn't designed to test drives, why is it the only recommended solution?

On a second note, if I am able to perform the transition from 512e to 4Kn, then theoretically I would have to run the same stress test again, as I believe the physical location of each block may be slightly offset from where the 512e block resided.


Doesn't that make sense?
 

Etorix

Wizard
Joined
Dec 30, 2020
Messages
2,134
If Badblocks wasn't designed to test drives, why is it the only recommended solution?
People have been (ab)using badblocks to test HDDs for about three decades. I suppose that old habits die hard.

On a second note, if I am able to perform the transition from 512e to 4Kn, then theoretically I would have to run the same stress test again, as I believe the physical location of each block may be slightly offset from where the 512e block resided.
Ultimately it's about testing the ability to write and read back bits, not the bytes or sectors they may belong to, so block size should not matter.
A 4k sector is 8 old-style 512e sectors (or rather the other way around, since I doubt there are 512-byte-native devices in the 10+ TB era…) and an 8k block is two 4k sectors. The only point is that 18 TB comes to fewer than 2^32 8k blocks.
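For this particular drive:

Code:
#   18,000,207,937,536 bytes / 8192 = 2,197,291,008 blocks
#   2^32                            = 4,294,967,296
# The whole 18 TB drive fits in a single badblocks run at -b 8192.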
 

Apollo

Wizard
Joined
Jun 13, 2013
Messages
1,458
I am now running @Spearfoot's burn-in script, which is already set up to use an 8192-byte block size.

Prior to running the script, I managed to change the drives from 512e logical sectors to 4Kn with the following steps:

  • Check the supported sector sizes on the drive:
SeaChest_Format --device /dev/sgX --showSupportedFormats

[screenshot: supported sector formats reported by SeaChest_Format]


  • Run the following command on the corresponding drive to set the "Logical Block Size" to 4096:
SeaChest_Format --device /dev/sgX --setSectorSize 4096 --confirm this-will-erase-data
  • At this point the "Logical Block Size" should be set to 4096, but it will not yet show as updated; the change seems to take effect only after the disk has been power cycled.
Here is what the SMART info output looks like in TrueNAS before and after the conversion:

smartctl -i /dev/dax

Before:
=== START OF INFORMATION SECTION ===
Device Model: ST18000NM000J-2TV103
Serial Number: --------
LU WWN Device Id: --------------------
Firmware Version: SN02
User Capacity: 18,000,207,937,536 bytes [18.0 TB]
Sector Sizes: 512 bytes logical, 4096 bytes physical
Rotation Rate: 7200 rpm
Form Factor: 3.5 inches
Device is: Not in smartctl database [for details use: -P showall]
ATA Version is: ACS-4 (minor revision not indicated)
SATA Version is: SATA 3.3, 6.0 Gb/s (current: 6.0 Gb/s)
Local Time is: Sun Mar 27 11:07:22 2022 PDT
SMART support is: Available - device has SMART capability.
SMART support is: Enabled

After:
=== START OF INFORMATION SECTION ===
Device Model: ST18000NM000J-2TV103
Serial Number: --------
LU WWN Device Id: --------------------
Firmware Version: SN02
User Capacity: 18,000,207,937,536 bytes [18.0 TB]
Sector Size: 4096 bytes logical/physical
Rotation Rate: 7200 rpm
Form Factor: 3.5 inches
Device is: Not in smartctl database [for details use: -P showall]
ATA Version is: ACS-4 (minor revision not indicated)
SATA Version is: SATA 3.3, 6.0 Gb/s (current: 6.0 Gb/s)
Local Time is: Sun Mar 27 11:07:32 2022 PDT
SMART support is: Available - device has SMART capability.
SMART support is: Enabled
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
@jgreco mentioned badblocks isn't designed for testing disks, and to some level it both is and is not.

No, it is 100% "is NOT". badblocks evolved from a family of tools designed to mark blocks as bad through manual methods, prior to the widespread availability of block remapping inside the controller. In the really old days, your typical MFM, RLL, etc., hard disks did not have the intelligence to manage defect lists, and these had to be mapped into unavailable-block lists, for example by creating a fake file that used the "faulty blocks" as its data blocks. It isn't designed to exercise disks for burn-in.

If Badblocks wasn't designed to test drives, why is it the only recommended solution?

It's not only not "the only recommended solution", it isn't even *A* recommended solution.

My recommended solution is listed above. It's nondestructive and pretty good at ferreting out issues with modern drives. However, as with badblocks, it was actually designed to solve a different problem, with the disk stress test happening as a side effect. But it's blessed by the author for the purpose of disk stress testing.
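For anyone landing here from a search: once the file from earlier in the thread is on the box, kicking it off should just be

sh ./solnet-array-test-v2.sh

and it should then prompt for which disks to include in the test.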
 

ChrisRJ

Wizard
Joined
Oct 23, 2020
Messages
1,919
In the really old days, your typical MFM, RLL, etc., hard disks did not have the intelligence to manage defect lists, and these had to be mapped into unavailable-block lists, for example by creating a fake file that used the "faulty blocks" as its data blocks.
My very first hard disk was a Seagate ST251-1 (MFM, 5.25" half-height, 42 MB, about 280 kB/s), and it even had a sticker on top that listed the bad sectors it shipped with from the factory. Unfortunately, quite a few more bad sectors came after those, and I lost some data. Since then I have been paranoid about backups.
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,994
My very first hard disk was a Seagate ST251-1 (MFM, 5.25" half-height, 42 MB, about 280 kB/s), and it even had a sticker on top that listed the bad sectors it shipped with from the factory. Unfortunately, quite a few more bad sectors came after those, and I lost some data. Since then I have been paranoid about backups.
I remember that drive, but I couldn't afford a new one, so I had two used ST-225 20MB drives; I made them hold almost double when I changed the MFM controller to an RLL controller. The throughput I don't recall, but it was faster than a floppy disc. I also recall doing head alignments on them, which saved my data. It helped that I had access to an oscilloscope and had been doing head alignments on IBM disk-pack machines for about 6 years. I also taught the course on repairing the IBM disk drive (like the IBM 1301, but the military version): flip-flops galore. Good times.
 

F_L_A_S_H

Cadet
Joined
Mar 30, 2022
Messages
9
I've tried using this guide, and jgreco's solnet-array-test discussion thread here, without success on either one. I'm having a tough time finding hard drive burn-in commands for SCALE. Can someone tell me what I'm missing?

This Guide:
Code:
root@TrueNAS[~]# sysctl kern.geom.debugflags=0x10
sysctl: cannot stat /proc/sys/kern/geom/debugflags: No such file or directory
root@TrueNAS[~]# badblocks -ws /dev/sdf
badblocks: Value too large for defined data type invalid end block (9766436864): must be 32-bit value
root@TrueNAS[~]# badblocks -b 4096 -ws /def/sdf
badblocks: No such file or directory while trying to determine device size
root@TrueNAS[~]# badblocks -b 8192 -ws /def/sdf
badblocks: No such file or directory while trying to determine device size
root@TrueNAS[~]#
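(Side note on the first error above: kern.geom.debugflags is a FreeBSD sysctl, so it does not exist on SCALE, which is Linux-based. That failure is expected there and is unrelated to the badblocks errors.)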
 

Etorix

Wizard
Joined
Dec 30, 2020
Messages
2,134
You're missing a 'deFice' directory in the file system. Drop the B-movie Nazi accent and use 'device' (/dev) instead. :wink:
 

F_L_A_S_H

Cadet
Joined
Mar 30, 2022
Messages
9
You're missing a 'deFice' directory in the file system. Drop the B-movie Nazi accent and use 'device' (/dev) instead. :wink:
Thank you! Good grief... how stupid of me. It worked after I corrected my mistake.
 

Apollo

Wizard
Joined
Jun 13, 2013
Messages
1,458
I came across this issue with 18TB drives and badblocks as well... unfortunately this link does not work for me: https://www.truenas.com/community/t...n-testing-discussion-thread.21451/post-683231 What could I do next to burn in the drive?
I used the following link:

 

Revolution

Dabbler
Joined
Sep 8, 2015
Messages
39
I used the following link:

Thank you, but the script in the repo just seems to be a wrapper around badblocks as well. I thought it would be the script jgreco mentioned above.
 

Apollo

Wizard
Joined
Jun 13, 2013
Messages
1,458
Thank you, but the script in the repo just seems to be a wrapper around badblocks as well. I thought it would be the script jgreco mentioned above.
It is different from jgreco's script, which I haven't tried, so I can't comment on that one.

I ran the badblocks-based script and tested my four 18TB drives.
It took around 8-9 days to complete, including the long and short SMART tests.
 

phier

Patron
Joined
Dec 4, 2012
Messages
400
@jgreco
Could you please advise how one can burn in an 18TB drive?

There is a long discussion here, but no summary / no clear solution.

Can anyone advise how to execute/achieve a burn-in of 18TB drives?

Thank you!
 
