Dell PERC 4e/DC Help

Mike Katz

Dabbler
Joined
May 26, 2017
Messages
20
I have a system with a Yotta 16-drive SATA-to-SCSI box with 16 2 TB drives in it. It is set up as JBOD on SCSI address 10.

With my old Adaptec controller, FreeNAS was able to see all 16 drives on the single SCSI address.

I am upgrading my system, and the new system has a Dell PERC 4e/DC dual-port SCSI RAID controller. I have disabled the BIOS on the PERC controller so it is not trying to do any RAID configuration. However, when I go to Storage -> Disks in the TrueNAS web interface, all I see are the boot drives (which are not attached to this controller).

Do I need a special driver for the PERC controller? If yes, where can I find it?

If I don't need special drivers, how do I get TrueNAS to see the 16 drives?

Thank you...
 

Mike Katz

Dabbler
Joined
May 26, 2017
Messages
20
According to the FreeBSD documentation, I should add the line amr_load="YES" to /boot/loader.conf.

I added this but it didn't fix the issue.

I could not find amr.ko, or anything similar, in /boot/modules/ or /boot/kernel/.
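For reference, here is roughly how I'd check for the driver from a shell, assuming a stock FreeBSD/TrueNAS CORE layout (exact paths and output vary by release):

# Is the amr(4) module already loaded, or present on disk?
kldstat | grep amr
ls /boot/kernel/amr.ko /boot/modules/amr.ko
# Try loading it by hand; this fails if the module isn't shipped with this build
kldload amr
# Confirm the PERC shows up on the PCI bus at all
pciconf -lv | grep -B4 -i raid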

Any suggestions would be greatly appreciated.

Thanks
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
I added this but it didn't fix the issue.

And it's not going to. This RAID controller, and every other RAID controller, is unsuitable for use with ZFS and TrueNAS. Please refer to

 

Mike Katz

Dabbler
Joined
May 26, 2017
Messages
20
With the RAID BIOS disabled, this is basically just an HBA with the 16 drives at a single SCSI address.
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
Parallel SCSI? In 2021? Good luck with that, but I wouldn't hold my breath. Gen 1 Serial SCSI has enough gremlins to scare everyone away, I don't even want to imagine what horrors the software stack for this is hiding. If FreeBSD even includes the drivers, which might not be the case anymore. And even then, FreeNAS/TrueNAS may have chosen not to include them.

I mean, your disks are SATA, surely there's a better way of going about this than with parallel SCSI?
 

Mike Katz

Dabbler
Joined
May 26, 2017
Messages
20
My old FreeNAS server was set up as follows:

IBM eServer 340, dual Xeon, w/ PCI-X Ultra320 SCSI controller
Axus Yotta 16-drive SATA-to-SCSI enclosure.
The Axus was set up for JBOD, and the Adaptec controller and FreeNAS saw all of the drives with no problem.

I am upgrading the IBM to a newer server and I was hoping to be able to use the Axus SATA-to-SCSI enclosure.
The new server is a half-height server with a single horizontal PCIe x8 slot.

I bought the Dell controller on eBay for very little.

I have ordered an LSI Ultra320 SCSI HBA to see if that will work.

If that doesn't work then I need a new drive enclosure and a new controller.

I was thinking of a SATA-to-SAS enclosure and just connecting the drives to the server via 1 Gb Ethernet.

Another alternative is to get a PCI-e SAS controller that can handle 16 or more drives and an SAS enclosure and run multiple cables between the server and the enclosure.

Any (relatively) inexpensive solutions would be appreciated.

Thank you.
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
Another alternative is to get a PCI-e SAS controller that can handle 16 or more drives and an SAS enclosure and run multiple cables between the server and the enclosure.
Yes, but you don't need to worry too much about the controller side of things and you definitely want an expander chassis, so just one cable for all disks. Something like this. Add an HBA (e.g. LSI SAS 9211-8e, LSI SAS 9207-8e, or -4i4e variants) and an SFF-8088 to SFF-8088 cable and you're done.
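Once that's cabled up, a quick sanity check from the TrueNAS shell would look something like the following (assuming an LSI HBA in IT mode; nothing here is model-specific):

# Every drive behind the expander should enumerate as a plain da(4) device
camcontrol devlist
# Enclosure/expander view, including which slot each disk sits in
sesutil map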
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
If that doesn't work then I need a new drive enclosure and a new controller.

I was thinking of a SATA-to-SAS enclosure and just connecting the drives to the server via 1 Gb Ethernet.

Another alternative is to get a PCI-e SAS controller that can handle 16 or more drives and an SAS enclosure and run multiple cables between the server and the enclosure.

You sound relatively confused. Please head on over to


and have a read-up.

I was thinking of a SATA-to-SAS enclosure and just connecting the drives to the server via 1 Gb Ethernet.

There's no such thing. SAS is not Ethernet.

PCI-e SAS controller that can handle 16 or more drives and an SAS enclosure and run multiple cables between the server and the enclosure.

This is also basically nonsensical. While you could do it, no one WOULD do it. The expander chassis scenario outlined by @Ericloewe above is the normal way to do this.
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
As a sidenote, how was there ever a market for a 16-bay chassis that takes parallel SCSI and converts that to SATA? How is that even accomplished? SCSI -> Parallel ATA -> SATA for each drive, with all the disks hanging directly off the SCSI bus? As a side-sidenote, I want to plug in an optical drive and see what happens and if ATAPI works over that frankenstein of a setup.

This is also basically nonsensical. While you could do it, no one WOULD do it. The expander chassis scenario outlined by @Ericloewe above is the normal way to do this.
It's not just the ridiculous bundle of cables, it's also a matter of signal integrity. SATA has a very strict 1-meter cable length limit.
 

Mike Katz

Dabbler
Joined
May 26, 2017
Messages
20
You might want to turn down your rhetoric a bit. And maybe do your research before attacking someone.

I am an embedded systems engineer (hardware and software), and I have been building PCs since the '80s and building computers from kits since the mid '70s.

iSCSI is specifically designed for Internet-based storage enclosures.

"In computing, iSCSI is an acronym for Internet Small Computer Systems Interface, an Internet Protocol-based storage networking standard for linking data storage facilities. iSCSI provides block-level access to storage devices by carrying SCSI commands over a TCP/IP network."

SAS controllers can address up to 256 drives on daisy-chained controllers, depending on the controller and enclosure.

Most SAS controllers can use adapter cables that go from 1 SAS connector to 4 SATA connectors. The SAS connector plugs into the controller and the SATA connectors connect to 4 drives. In the 16-drive example in my previous message, that would require 4 SAS-to-SATA cables to connect 16 drives to one SAS HBA controller.

SAS would be faster than iSCSI.

I do need to do more research on how to connect the SAS controller to an SAS enclosure and how many drives a single cable can support.
 

Mike Katz

Dabbler
Joined
May 26, 2017
Messages
20
As a sidenote, how was there ever a market for a 16-bay chassis that takes parallel SCSI and converts that to SATA? How is that even accomplished? SCSI -> Parallel ATA -> SATA for each drive, with all the disks hanging directly off the SCSI bus? As a side-sidenote, I want to plug in an optical drive and see what happens and if ATAPI works over that frankenstein of a setup.


It's not just the ridiculous bundle of cables, it's also a matter of signal integrity. SATA has a very strict 1-meter cable length limit.

The enclosure took 16 SATA drives on its backplane and connected to a controller via SCSI. It could be configured as RAID 0, 1, 5, or 6, or as JBOD.

See here: http://storusint.com/pdf/axusdocs/Yotta_A_Datasheet.pdf
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
how was there ever a market for a 16-bay chassis that takes parallel SCSI and converts that to SATA?

Well, interestingly, way back in the day, one of my customers had purchased 16-bay Infortrend drive arrays attached via SCSI (this would have been around 2003-2004) and wondered why they performed so horribly in their default "striped" configuration. Well, when your stripe size is only 64KB and you're pulling Usenet news articles, you get all the drives involved just to pull a single article.

This was actually the genesis of the very early 24-bay SATA storage servers we built here in the shop, which omitted the pricey $12K Infortrend storage-only chassis entirely in favor of a host-plus-chassis design that only cost about $4K. It was the first time I'm aware of that anyone hooked up 24 SATA disks to a FreeBSD host, and I've still got the bruises to prove it.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
You might want to turn down your rhetoric a bit. And maybe do your research before attacking someone.

I am an embedded systems engineer (hardware and software), and I have been building PCs since the '80s and building computers from kits since the mid '70s.

iSCSI is specifically designed for Internet-based storage enclosures.

"In computing, iSCSI is an acronym for Internet Small Computer Systems Interface, an Internet Protocol-based storage networking standard for linking data storage facilities. iSCSI provides block-level access to storage devices by carrying SCSI commands over a TCP/IP network."

SAS controllers can address up to 256 drives on daisy-chained controllers, depending on the controller and enclosure.

Most SAS controllers can use adapter cables that go from 1 SAS connector to 4 SATA connectors. The SAS connector plugs into the controller and the SATA connectors connect to 4 drives. In the 16-drive example in my previous message, that would require 4 SAS-to-SATA cables to connect 16 drives to one SAS HBA controller.

SAS would be faster than iSCSI.

I do need to do more research on how to connect the SAS controller to an SAS enclosure and how many drives a single cable can support.

Well, holy mackerel, aren't I an idjit. I had no idea. I too am an embedded systems engineer with a background in medical electronics and having architected custom-designed UNIX appliance platforms. I've also been writing in depth about storage with FreeNAS for over a decade, and have written many/most of the forum explainer posts on the topics of networking, SAS, block storage, and many others. My Synertek SYM-1 waves hi at your '70s kits; it's lonely sitting on the shelf here.

It isn't clear what research you feel I should have done, and it isn't clear I was "attacking" you. You appeared to be confused. I usually push confused-sounding people at whatever resources I feel may help un-confuse them.

I didn't say that there was no such thing as iSCSI. Anyone who does even a trite amount of searching on these forums probably finds my name in every iSCSI thread.

However, iSCSI is not a technology that can be used to attach your disks to FreeNAS/TrueNAS; it's simply not supported in that role. ZFS places crushing amounts of I/O towards its storage, and iSCSI wouldn't be practical even at 10Gbps, except maybe for tiny arrays. It appeared you had some confusion as to how all the bits of everything here worked together. I don't know how to make any other sense out of:

If that doesn't work then I need a new drive enclosure and a new controller.

I was thinking of a SATA-to-SAS enclosure and just connecting the drives to the server via 1 Gb Ethernet.

What exactly were you suggesting here?
 
Last edited:

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
iSCSI is specifically designed for Internet-based storage enclosures.

"In computing, iSCSI is an acronym for Internet Small Computer Systems Interface, an Internet Protocol-based storage networking standard for linking data storage facilities. iSCSI provides block-level access to storage devices by carrying SCSI commands over a TCP/IP network."

SAS controllers can address up to 256 drives on daisy-chained controllers, depending on the controller and enclosure.

Most SAS controllers can use adapter cables that go from 1 SAS connector to 4 SATA connectors. The SAS connector plugs into the controller and the SATA connectors connect to 4 drives. In the 16-drive example in my previous message, that would require 4 SAS-to-SATA cables to connect 16 drives to one SAS HBA controller.

SAS would be faster than iSCSI.
It's important to establish a very clear distinction between SAS and iSCSI.
SAS is akin to SATA, but with enterprise features. You'd use it to attach disks to a server.
iSCSI is just a convenient abstraction that was invented to put block storage on a network without having to reinvent the wheel, by repurposing the SCSI command set. You'd use it to provide (typically virtual) disks to multiple clients. There are some degenerate cases of that, typically involving proprietary RAID crap, absurd price tags and ludicrous support contracts, but we don't deal with that stuff here.

The orthodox way of doing things (beyond SATA scales) is to attach disks with SAS. I'm not aware of any hard limits that the spec imposes on the number of supported devices, beyond the address space (64 bits), but typical HBAs support 128ish to 1024ish disks and have 8 lanes arranged in two physical connectors of four lanes each. Although you could directly connect to either SATA or SAS disks, realistically you'd use a SAS expander (think of it like an Ethernet switch). Typical expanders are 24- or 36-port (e.g. 24 lanes for disks, plus 8 for uplink, plus 4 for another downstream expander). Although you can get them separately, they're typically integrated into the chassis backplane.
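To put rough numbers on the single-cable question (assuming SAS2 at 6 Gb/s per lane, which is what the HBAs mentioned above speak): after 8b/10b encoding, each lane carries about 600 MB/s, so one 4-lane SFF-8088 uplink is good for roughly 2.4 GB/s. Sixteen spinning disks at 150-200 MB/s sequential each add up to roughly 2.4-3.2 GB/s in the absolute best case, so a single uplink is only marginal under contrived all-sequential workloads and is a non-issue for typical pool traffic.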
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
iSCSI is just a convenient abstraction that was invented to put block storage on a network without having to reinvent the wheel

Well, it's also more and also less. Unlike traditional SCSI, it has interesting scaling properties (though the use of TCP creates unexpected bottlenecks). It's very powerful because it introduces what could effectively be an arbitrarily large connection matrix, but at the same time, it is impractical for large-scale disk bandwidth. Even just a pair of contemporary SATA SSDs attached via iSCSI would find a 10Gbps Ethernet to be a chokepoint, even if iSCSI TCP could keep up (which seems somewhat unlikely). This is why you usually see iSCSI used to attach storage consumers to a RAID controller, but you don't see it on the backend of the RAID controller to attach disk shelves. The normal access patterns of conventional storage consumers tend to make iSCSI practical up to a certain point, or where overall design requirements make it necessary to attach gobs of disk. It's almost always impractical to attach iSCSI disks to the backend of ZFS because a scrub or resilver will crush the backend network and cause significant performance disruptions.
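To put rough numbers on that (assuming ~550 MB/s per SATA SSD): two of them is about 1.1 GB/s, while 10 GbE tops out at 1.25 GB/s raw, more like 1.1-1.2 GB/s of usable payload once TCP/IP and iSCSI framing take their cut. Two SSDs alone are already at the link's limit, before a scrub or resilver piles anything else on top.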
 