MichaelGMorgan
Dabbler
- Joined: Jun 7, 2017
- Messages: 13
About 7-8 months ago I put together a FreeNAS box which is currently running 9.10 Stable.
It had 1 SSD for the OS and 7 x 3TB Toshiba P300 HDDs. The HDDs were arranged in two groups: 3 connected directly to the motherboard, and 4 connected to a StarTech RAID card (https://www.amazon.co.uk/StarTech-Port-SATA-Controller-Adapter/dp/B0001Y7PU8).
I had it set up as two groups, with the 3 motherboard-connected disks together and the 4 RAID card disks together.
Everything was great for about 5 months, then disaster struck. Almost out of nowhere, 3 of the RAID card disks started failing. Long story short, I lost all data. After investigating, all three failed drives showed high seek error counts. I've sent those disks off and should be getting replacements soon, as I believe they were faulty.
I rebuilt the system, now with 6 x 3TB Toshiba P300 disks in a RAIDZ3 setup, thinking that it would take 3 HDD failures to lose the data this time. It's using the 3 original HDDs that were connected to the motherboard, the non-failed drive from before, and two new drives. I checked the smartctl figures on all disks and they all looked fine. They're currently connected with 3 on the motherboard and the other three via my RAID card.
This was running for 2 days and I've hit problems again. The 3 drives attached to the motherboard are running fine, but two of the RAID card attached drives have now been detached by FreeNAS and are inaccessible; I can't even run smartctl on them. The third drive connected to the RAID card currently has a "Seek Error Rate" of 9 which is slowly increasing. In other words, that disk is also failing.
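In case it helps anyone reproduce what I'm seeing, this is roughly the loop I've been using to check the SMART figures on each disk (ada0–ada5 are the device names on my box; yours may differ, so check camcontrol devlist first):

```shell
#!/bin/sh
# Rough sketch: print overall SMART health plus the attributes I've been
# watching, for each disk. Device names ada0..ada5 are assumptions for my
# box -- adjust to match `camcontrol devlist` output on yours.
command -v smartctl >/dev/null 2>&1 || { echo "smartmontools not installed"; exit 0; }

for disk in ada0 ada1 ada2 ada3 ada4 ada5; do
    echo "=== /dev/$disk ==="
    # -H: overall health assessment; -A: attribute table (Seek_Error_Rate is ID 7)
    smartctl -H -A "/dev/$disk" \
        | grep -E 'overall-health|Seek_Error_Rate|Reallocated_Sector|Current_Pending' \
        || echo "  (no SMART data for /dev/$disk)"
done
```

The two detached drives just come back with no SMART data at all, which is what makes me suspect the card rather than the disks themselves.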
So in total, 5 failed HDDs, all of which were connected to the RAID card mentioned above.
This can't be a coincidence, so I'm looking to remove that RAID card and replace it. I'll then replace the 3 HDDs one by one when my replacement drives arrive.
I have two initial questions...
- How the heck is that RAID card damaging disks? Is it actually damaging them, or just incorrectly reporting SMART errors?
- I need a replacement card which supports at least 3 x 3TB HDDs, ideally 4. I've been looking at an "IBM LSI ServeRAID M1015 6Gbps PCI-E controller" on Amazon for £50.
And a more general question: it's my understanding that a card like the IBM M1015 is SAS. Am I correct in saying that I'd simply purchase a SAS-to-SATA breakout cable and connect up my drives?
Any help would be very much appreciated!
Thanks!