This spring I bought three of Supermicro's AOC-S3008L-L8e cards, intending to upgrade the three LSI SAS 9210-8i HBAs in my main X9DRi-LN4F+-based FreeNAS server. These should be a 'drop-in' upgrade, right?
Wrong! I got all kinds of errors, like this:
Code:
(da2:mpr0:0:0:0): READ(10). CDB: 28 00 e8 e0 87 80 00 01 00 00
(da2:mpr0:0:0:0): CAM status: SCSI Status Error
(da2:mpr0:0:0:0): SCSI status: Check Condition
(da2:mpr0:0:0:0): SCSI sense: ABORTED COMMAND asc:47,3 (Information unit iuCRC error detected)
(da2:mpr0:0:0:0): Retrying command (per sense data)
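(If you want to see how widespread these iuCRC aborts are across your drives, you can tally them from the console log. Here's a minimal sketch -- the log path and the exact line format are assumptions based on the output above, so adjust for your system:)

```python
import os
import re
from collections import Counter

# Matches CAM peripheral lines such as:
# (da2:mpr0:0:0:0): SCSI sense: ABORTED COMMAND asc:47,3 (Information unit iuCRC error detected)
CAM_LINE = re.compile(r"\((da\d+):(mpr\d+):\d+:\d+:\d+\):\s*(.*)")

def tally_iucrc_errors(lines):
    """Count iuCRC-related CAM errors per disk device (daN)."""
    counts = Counter()
    for line in lines:
        m = CAM_LINE.search(line)
        if m and "iuCRC" in m.group(3):
            counts[m.group(1)] += 1
    return counts

# Assumed log location on FreeNAS/FreeBSD; only read if it exists.
if os.path.exists("/var/log/messages"):
    with open("/var/log/messages") as f:
        for dev, n in sorted(tally_iucrc_errors(f).items()):
            print(f"{dev}: {n} iuCRC errors")
```

(In my case the aborts showed up on multiple da devices, which is part of why I don't suspect a single bad drive or cable.)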
Well... since I am passing through the 3 HBAs to a FreeNAS VM running under ESXi 6.7 -- hey, I know! but this is my home lab! and it's worked flawlessly for years! -- I thought that might be the problem. So I moved my testing over to another system (an X9SRL-F) where I could run both ESXi and bare metal.
Same results either way. So it's not an artifact of passing the cards through to a VM.
The cards came with phase 16.00.01.00 firmware, so I then tried Supermicro's phase 16.00.10.00, and then Broadcom's 16.00.10.00, and then Broadcom's pre-release 16.00.12.00 (courtesy of iXsystems)... all with the same results.
I found this scary thread at bugs.freebsd.org, in which quite a few users report similar problems with LSI HBAs (don't be fooled by the title; the problem isn't just with Seagate drives):
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=224496
I tried a few BIOS tricks:
- Increased the PCI latency timer from 64 to 128
- Changed PCIe Maximum Payload from 'Auto' to the largest value
- Changed PCIe Maximum Read Request from 'Auto' to the largest value
- Disabled Power Technology
I'm using good Supermicro cables, and I have six of them -- so I doubt that's the problem.
Have I simply had the bad luck to stumble across the FreeBSD driver bug? Are SAS3008 cards just incompatible with my X9-series Supermicro systems? It doesn't seem likely that I bought three lemons, but I guess that's possible, too.
Thanks in advance for your thoughts, my friends!
(Crossposted at STH)