So I wanted to install two Crucial CT500MX500SSD1 SSDs in my test-but-soon-to-be-real server, to be used as a pool for app data. (For context: the server is a Supermicro A2SDi-H-TF board running TrueNAS-SCALE-23.10.1.3 in a Jonsbo N3 case, with the drives connected directly via the usual SAS-to-SATA breakout cables. Well, OK, they go through the backplane, but that's direct enough I think...)
Initially all the SMART tests went well, so I thought what the heck, it's still a test server, let's create a mirrored pool. After a couple of minutes, hundreds of critical alerts suddenly showed up with various messages about I/O errors. (I can't recall the exact wording and unfortunately I didn't write it down, sorry about that.) The whole process got stuck: the disks' front-panel lights kept blinking, the user interface couldn't even load the disk reporting widgets anymore, etc. I checked server health via the IPMI interface and everything looked OK: normal temperatures, normal voltages, and so on. After waiting quite a while, I had to reset the server. After the reboot, the pool showed up as if nothing had happened. I thought that can't be good, so let's try again, but first wipe the SSDs by writing out all zeros. That succeeded, but now the drives reported 0 GB, so I couldn't create a pool at all...
Then I thought, let's erase the drives first on another computer I had nearby, and there they just showed up as normal drives with their full capacity available. So I erased them, put them back in the test server, and this time I could create the pool normally. Everything seems fine now: SMART tests run fine, scrubbing runs fine, writing and reading a lot of data works fine...
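For reference, something like this is what I'd use from the SCALE shell to confirm the drives report their full capacity and a passing SMART status after the wipe. It's just a rough sketch in Python (wrapping blockdev and smartctl); the /dev/sdX names are placeholders for the two MX500s and would need adjusting to however they actually enumerate here:

```python
import subprocess

# Placeholder device names for the two MX500s -- check `lsblk` for the real ones
drives = ["/dev/sda", "/dev/sdb"]

for dev in drives:
    # Reported capacity in bytes -- should be ~500 GB, not the 0 GB I saw after zero-wiping
    size = subprocess.run(
        ["blockdev", "--getsize64", dev],
        capture_output=True, text=True, check=True,
    ).stdout.strip()
    print(f"{dev}: {int(size) / 1e9:.1f} GB reported")

    # Overall SMART self-assessment (smartctl uses non-zero exit codes for some
    # statuses, so don't raise on them -- just print whatever it reports)
    health = subprocess.run(["smartctl", "-H", dev], capture_output=True, text=True)
    print(health.stdout)
```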
What would be the consensus: was this just a SCALE glitch, or are these simply bad drives? Apologies again for not being able to be more precise about the reported errors...