Are you sure the smart short tests took an hour :/
Yes, here's a screengrab taken whilst the last long test was completing

Are you sure the smart short tests took an hour :/
That says "2 minutes"
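For what it's worth, the drive itself advertises an estimated duration for each self-test, which smartctl can print before you start one (the /dev/ada0 device name is just an example):

    smartctl -c /dev/ada0
    # Look for "Short self-test routine recommended polling time" and
    # "Extended self-test routine recommended polling time" in the output.

A short test finishing in a couple of minutes is normal; it only samples the drive rather than reading every sector.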
To perform raw disk I/O, enable the kernel geometry debug flags
That is not quite correct. The debug flags disable a safety that prevents raw I/O to devices that are in use by the GEOM system.
Please do not use that routinely, or recommend it without a warning that it defeats a safety that protects the system.
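For anyone who genuinely needs it for a one-off task, the setting being discussed is the kern.geom.debugflags sysctl. A minimal sketch, with the safety restored immediately afterwards:

    # 0x10 lets raw writes through to providers that are in use by GEOM.
    # This defeats the safety described above; use only deliberately.
    sysctl kern.geom.debugflags=0x10
    # ... do the one-off raw disk I/O here ...
    sysctl kern.geom.debugflags=0    # put the protection back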
Not a good sign for that HD that it's already lost 8 sectors. But this is precisely the reason you run the badblocks tests followed by the long tests: now that data has been written to every sector, the long test checks whether all of the sectors can actually be read.
In this case, 8 sectors were unable to return their data. The drive is waiting ("pending") for you to decide to re-write them.
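If you want to watch those counters yourself, they are SMART attributes 5 and 197 (the device name is a placeholder for whatever your disk is called):

    # Attribute 5 (Reallocated_Sector_Ct) counts sectors already remapped;
    # attribute 197 (Current_Pending_Sector) counts sectors, like the 8 here,
    # that are waiting to be re-written or reallocated.
    smartctl -A /dev/ada0 | egrep "Reallocated_Sector_Ct|Current_Pending_Sector"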
RMA :)
Link? I've noticed someone on another forum saying that all 8TB drives are bad.
It sounds reasonable. He did not say that the 8TB drives were "bad", just that he had seen some with early failures. That would not surprise me with the first generation of higher-density drives. It often goes that way. Vendor warranties should help tell the story. If the same warranty is offered, then the vendors have no reason to think those drives will have shorter lives. If they turn out to be wrong, the customer gets replacement drives that probably have engineering improvements.
In the meantime, I am still undecided on which RAID layout to implement. I have been looking at the following:
* 4 vdevs of 6 drives - RAID-Z2
* 3 vdevs of 8 drives - RAID-Z2
* 2 vdevs of 12 drives - RAID-Z3
What are people's recommendations? The server will only have music/Blu-ray backups stored on it, serving 2 desktop computers, 2 laptops and 2 HTPCs.
Note that the last two options use the same number of parity disks: six. But the RAID-Z2 layout will get slightly better read and write performance, as it is three vdevs rather than two, each vdev carrying one parity disk fewer.
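To make the options concrete, here is roughly what the 3 x 8-drive RAID-Z2 layout would look like at pool-creation time; the pool name and the da0-da23 device names are placeholders, and on FreeNAS you would normally build this through the GUI instead:

    zpool create tank \
        raidz2 da0  da1  da2  da3  da4  da5  da6  da7  \
        raidz2 da8  da9  da10 da11 da12 da13 da14 da15 \
        raidz2 da16 da17 da18 da19 da20 da21 da22 da23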
Yes, the long test can find defects even before data is ever written, but some defects can't be found until data is first written and an attempt is made to read that data back.

The thing I use to test my drives is DBAN (Darik's Boot and Nuke). It is actually a utility for erasing drives, but one of the settings lets you verify each write pass. I configure it to do what is called a DoD short erase with a verify on each pass. This writes random data to every sector on the drive and attempts to read it back, as a test that the drive is working properly. If there are enough errors, DBAN will say that the drive failed, but a drive can pass DBAN's standards and still have problems, so after I run that I check the drive status with a SMART long test and look at the results. I don't put my data on a drive until it has passed this test with no bad or reallocated sectors. I have been burned by bad drives too many times to take a chance on one that is questionable. Just last month I replaced seven drives in one of my systems because they were getting old and I didn't trust them. I had already had to replace 5 drives in that system in the past year, so I replaced the rest so they are all within a year of the same age.

That is exactly right. What I found unusual is that I had only carried out the SMART short and long tests. I still hadn't started badblocks testing; the errors occurred during the long tests.
I guess the drive was bad from the beginning
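For anyone following along, the write-then-read exercise discussed above is typically a destructive badblocks pass followed by another SMART long test. A minimal sketch, assuming the disk is /dev/da0 and carries no data you need:

    # Destructive: writes four test patterns across every sector and
    # verifies each on read-back (-w write mode, -s progress indicator).
    # -b 4096 also keeps the block count within badblocks' limit on large drives.
    badblocks -ws -b 4096 /dev/da0
    # Then run a SMART extended (long) test and review the attributes.
    smartctl -t long /dev/da0
    smartctl -a /dev/da0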