What 12Gb/s controllers are supported, and which should I avoid?

southwow

Contributor
Joined
Jan 18, 2018
Messages
114
Just wondering what everyone is having the best luck with? My throughput is stuck in the dark ages, so I thought maybe I'd pick up a new (new to me, at least) controller and test it out on some enterprise drives.
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,079
Just wondering what everyone is having the best luck with? My throughput is stuck in the dark ages, so I thought maybe I'd pick up a new (new to me, at least) controller and test it out on some enterprise drives.
What kind of controller do you have (full hardware description)? In many cases it is not the disk controller, because even the fastest mechanical drives only manage around 250MB/s while a SAS2 controller can do 600MB/s per lane. So the drive is usually the problem, not the controller. If you can give some details about your hardware, perhaps some suggestions for upgrades can be made.
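If you want to sanity-check the raw drives before blaming the controller, a rough read-only test from the FreeNAS shell would look something like this; da0 is just a placeholder for one of your data disks:
Code:
# Built-in transfer-rate benchmark for a single disk (read-only, safe on a live disk)
diskinfo -t /dev/da0

# Simple large-block sequential read as a second data point (reads about 4GiB)
dd if=/dev/da0 of=/dev/null bs=1m count=4096

If a single drive tops out in the 150-250MB/s range, the controller is not what is holding you back.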
 

southwow

Contributor
Joined
Jan 18, 2018
Messages
114
I have an M1015 and an X10 board with an LSI 2308, both flashed to IT mode. Both are dual-linked to matching SAS expanders.
Drive pool is all 8TB WD Red 5400RPM drives.
32GB ECC RAM
E3-1231 v3 CPU

Gigabit network with Netgear switches. Still seeing some lousy performance with SMB, Plex, and sometimes NFS. The network is isolated and has its own router.
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,079
Gigabit network with Netgear switches.
Why would you think the problem is the SAS controller when you have a bandwidth limit of 1Gb at the switch?
Plex, and sometimes NFS. The network is isolated and has its own router.
If that is a Xeon E3-1231 V3 CPU, it should be plenty fast enough for Plex.
Drive pool is all 8TB WD Red 5400RPM drives.
How many drives in what type of pool configuration?
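Also, to confirm whether the wire itself is the ceiling, an iperf3 run between a client and the server is worth doing; iperf3 ships with recent FreeNAS versions, but treat its availability on your clients as an assumption:
Code:
# On the FreeNAS box (server side)
iperf3 -s

# On a client attached to the same switch (replace the address with your server's IP)
iperf3 -c 192.168.1.10 -t 30

Roughly 940Mbit/s (about 110MB/s) means gigabit is doing all it can; much less points at the network path rather than the disks.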
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,079
If traffic must go across the router to get from one network to another, that could be part of the problem.
 

l@e

Contributor
Joined
Nov 4, 2013
Messages
143
Drive limitation is a factor until you stripe different vdevs for higher IOPS and transfer rates. The Reds at 5,400RPM should be SATA, not SAS, as I remember; that still gives a maximum interface transfer of 600MB/s per drive, but at 5,400RPM the seek speed (random access) suffers because of the lower mechanical speed. Anyway, as Chris said, the 1Gbps network is still the bottleneck at 110-125MB/s. In the worst case (non-sequential reads and writes), the slow drives will impose a very low limit, so you may still be unable to saturate the connection. I use a striped mirror of four WD RE4 drives and was able to saturate 1Gb. Now I have four ports in LACP, and so far I have reached approximately 400MB/s from multiple clients over SMB.
More details about your pool setup will clear things up.
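To take the network out of the picture entirely, a crude local test on the server gives a ballpark for the pool itself; the dataset path below is a placeholder, and the write number only means something if compression is off on that dataset (lz4 will "write" zeros at silly speeds):
Code:
# Sequential write into a test dataset with compression disabled (path is a placeholder)
dd if=/dev/zero of=/mnt/tank/test/bigfile bs=1m count=8192

# Read it back; the ARC may cache part of it, so treat this as a rough number only
dd if=/mnt/tank/test/bigfile of=/dev/null bs=1m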
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,079
I have an M1015 and an X10 board
Something that might be slowing you down is that old M1015 card: it is only PCI-E 2.0, and if you have enough drives connected it could be a bottleneck to your performance. However, there is no possibility that the SAS controller is going to be slower than the 1Gb network.
If you can respond with the number of drives and the pool layout, we can have a discussion about this.
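If you want to see what the M1015 has actually negotiated, the PCIe link width and speed show up in the capability listing; pciconf is standard on FreeBSD/FreeNAS, though the driver name (mps/mpr) and exact output vary by system:
Code:
# Find the HBA entries and their PCIe capabilities; look for the
# "PCI-Express 2" line with "link x8(x8)" and the negotiated speed
pciconf -lvc | grep -E -A 20 'mps|mpr'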
 

southwow

Contributor
Joined
Jan 18, 2018
Messages
114
If traffic must go across the router to get from one network to another, that could be part of the problem.
The network is completely isolated. There are multiple drops in each room; the router on this particular network is an Archer C7 to allow non-wired devices to connect, and the only traffic crossing networks is to get outside. Eliminating the Archer router makes no difference. I'll post pool performance stats this weekend when I have physical access.

The performance of this pool is about the same as an old 7200RPM Ultra SCSI array I had 10 years ago, shared from an old Compaq dual Pentium Pro server running Win2K over 100 megabit.

Performance seems to have dropped as I've added vdevs. I'd planned to rebuild my pool and restore all of the data, but finding time is always difficult.
 

southwow

Contributor
Joined
Jan 18, 2018
Messages
114
Something that might be slowing you down is that old M1015 card: it is only PCI-E 2.0, and if you have enough drives connected it could be a bottleneck to your performance. However, there is no possibility that the SAS controller is going to be slower than the 1Gb network.
If you can respond with the number of drives and the pool layout, we can have a discussion about this.
@Chris Moore, any suggestions on a PCI-E 3.0 replacement? The M1015 is sitting connected to an empty enclosure right now.

The onboard SAS2 controller is definitely faster, but it isn't spectacular.
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,079
The performance of this pool is about the same as an old 7200RPM Ultra SCSI array I had 10 years ago, shared from an old Compaq dual Pentium Pro server running Win2K over 100 megabit.
Funny. I used to work at a Compaq authorized repair center between 1995 and 1999, and I was the only technician at the location certified to repair some of the big Compaq servers shipping back then. Compaq actually made one that had what they called "hot swap" PCI slots. The OS had to have Compaq-provided software installed that allowed the slot to be taken offline, and there was what amounted to a power switch for each individual slot, but you could change out a failed card while the server was running, though only if you replaced it with another card that was exactly the same. I studied for a week to pass the test for that. I think it was Pentium Pro, but my memory isn't what it was. At least it wasn't MCA (Micro Channel Architecture); I hated having to configure those things. Times sure have changed.
Performance seems to have dropped as I've added vdevs. I'd planned to rebuild my pool and restore all of the data, but finding time is always difficult.
If the system is slowing down as you add more vdevs, there may be some architectural optimization to be done. Would you share the pool configuration? For example, the pool layout in my home NAS is like this:
Code:
root@Emily-NAS:~/scripts # zpool list

NAME           SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
Backup        21.8T  16.4T  5.34T         -     0%    75%  1.00x  ONLINE  /mnt
Emily         43.5T  16.6T  26.9T         -     0%    38%  1.00x  ONLINE  /mnt
Irene         43.5T  17.1T  26.4T         -     0%    39%  1.00x  ONLINE  /mnt
freenas-boot  37.2G  10.4G  26.9G         -      -    27%  1.00x  ONLINE  -

root@Emily-NAS:~/scripts # zpool status

  pool: Backup
state: ONLINE
  scan: scrub repaired 0 in 0 days 08:52:28 with 0 errors on Mon Nov 26 08:52:31 2018
config:

        NAME                                            STATE     READ WRITE CKSUM
        Backup                                          ONLINE       0     0     0
          raidz1-0                                      ONLINE       0     0     0
            gptid/181101e2-a35b-11e8-aefa-0cc47a9cd5a4  ONLINE       0     0     0
            gptid/18e924eb-a35b-11e8-aefa-0cc47a9cd5a4  ONLINE       0     0     0
            gptid/19a7111b-a35b-11e8-aefa-0cc47a9cd5a4  ONLINE       0     0     0
            gptid/1a9e1915-a35b-11e8-aefa-0cc47a9cd5a4  ONLINE       0     0     0

errors: No known data errors


  pool: Emily
state: ONLINE
  scan: scrub repaired 0 in 0 days 05:23:05 with 0 errors on Tue Nov 13 05:23:07 2018
config:

        NAME                                            STATE     READ WRITE CKSUM
        Emily                                           ONLINE       0     0     0
          raidz2-0                                      ONLINE       0     0     0
            gptid/af7c42c6-bf05-11e8-b5f3-0cc47a9cd5a4  ONLINE       0     0     0
            gptid/b07bc723-bf05-11e8-b5f3-0cc47a9cd5a4  ONLINE       0     0     0
            gptid/b1893397-bf05-11e8-b5f3-0cc47a9cd5a4  ONLINE       0     0     0
            gptid/b2bfc678-bf05-11e8-b5f3-0cc47a9cd5a4  ONLINE       0     0     0
            gptid/b3c1849e-bf05-11e8-b5f3-0cc47a9cd5a4  ONLINE       0     0     0
            gptid/b4d16ad2-bf05-11e8-b5f3-0cc47a9cd5a4  ONLINE       0     0     0
          raidz2-1                                      ONLINE       0     0     0
            gptid/bc1e50e5-c1fa-11e8-87f0-0cc47a9cd5a4  ONLINE       0     0     0
            gptid/a03dd690-c1fb-11e8-87f0-0cc47a9cd5a4  ONLINE       0     0     0
            gptid/a6ed2ed5-c240-11e8-87f0-0cc47a9cd5a4  ONLINE       0     0     0
            gptid/b9de3232-bf05-11e8-b5f3-0cc47a9cd5a4  ONLINE       0     0     0
            gptid/baf4aba8-bf05-11e8-b5f3-0cc47a9cd5a4  ONLINE       0     0     0
            gptid/bbf26621-bf05-11e8-b5f3-0cc47a9cd5a4  ONLINE       0     0     0
        logs
          gptid/ae487c50-bec3-11e8-b1c8-0cc47a9cd5a4    ONLINE       0     0     0
        cache
          gptid/ae52d59d-bec3-11e8-b1c8-0cc47a9cd5a4    ONLINE       0     0     0

errors: No known data errors


  pool: Irene
state: ONLINE
  scan: scrub repaired 0 in 0 days 03:22:04 with 0 errors on Wed Nov 28 03:22:05 2018
config:

        NAME                                            STATE     READ WRITE CKSUM
        Irene                                           ONLINE       0     0     0
          raidz2-0                                      ONLINE       0     0     0
            gptid/8710385b-becf-11e8-b1c8-0cc47a9cd5a4  ONLINE       0     0     0
            gptid/87e94156-becf-11e8-b1c8-0cc47a9cd5a4  ONLINE       0     0     0
            gptid/88db19ad-becf-11e8-b1c8-0cc47a9cd5a4  ONLINE       0     0     0
            gptid/89addd3b-becf-11e8-b1c8-0cc47a9cd5a4  ONLINE       0     0     0
            gptid/8a865453-becf-11e8-b1c8-0cc47a9cd5a4  ONLINE       0     0     0
            gptid/8b66b1ef-becf-11e8-b1c8-0cc47a9cd5a4  ONLINE       0     0     0
          raidz2-1                                      ONLINE       0     0     0
            gptid/8c69bc72-becf-11e8-b1c8-0cc47a9cd5a4  ONLINE       0     0     0
            gptid/8d48655d-becf-11e8-b1c8-0cc47a9cd5a4  ONLINE       0     0     0
            gptid/8e2b6d1f-becf-11e8-b1c8-0cc47a9cd5a4  ONLINE       0     0     0
            gptid/8efea929-becf-11e8-b1c8-0cc47a9cd5a4  ONLINE       0     0     0
            gptid/8fd4d25c-becf-11e8-b1c8-0cc47a9cd5a4  ONLINE       0     0     0
            gptid/90c2759a-becf-11e8-b1c8-0cc47a9cd5a4  ONLINE       0     0     0

errors: No known data errors


  pool: freenas-boot
state: ONLINE
  scan: scrub repaired 0 in 0 days 00:08:34 with 0 errors on Tue Nov 27 03:53:34 2018
config:

        NAME                                            STATE     READ WRITE CKSUM
        freenas-boot                                    ONLINE       0     0     0
          mirror-0                                      ONLINE       0     0     0
            gptid/f659fd6d-4b12-11e6-a97c-002590aecc79  ONLINE       0     0     0
            gptid/f6a61d33-4b12-11e6-a97c-002590aecc79  ONLINE       0     0     0

errors: No known data errors

root@Emily-NAS:~/scripts #

@Chris Moore, any suggestions on a PCI-E 3.0 replacement? The M1015 is sitting connected to an empty enclosure right now.

The onboard SAS2 controller is definitely faster, but it isn't spectacular.
The SAS controller I use in my main server at home is an LSI card like this:
https://www.ebay.com/itm/LSI-SAS-92...s-9200-9207-PCIE-3-0-ZFS-SAS9200/273681554193

If I recall correctly, the bandwidth limitation on the PCI-E 2.0 card is around 4GB/s from the host to the controller, while PCI-E 3.0 gives you about 7.5GB/s between the host system and the controller card. So, if you have enough disks, it could be worth it to have the faster interface, but even the 12Gb/s SAS cards still connect through an eight-lane PCI-E 3.0 slot, so they are still limited to that roughly 7.5GB/s of bandwidth to the host.
Regardless of which card you choose, the achievable speed is limited by the structure of the pool, the type and number of drives, and even the type of data. For example, if you are dealing with high IOPS (lots of small files), you can sometimes see a little more performance by splitting the storage between two controller cards, even with the same number of disks. The gain is not huge, but it is there. At the same time, if you are doing large sequential IO, it is usually best to use a SAS expander and keep all the drives on a single controller. If you can provide some details, we can talk options.
No matter the speed of the storage pool, the network connectivity is going to be a limit. You might want to think about 10Gb network infrastructure; even if it is just a 10Gb link from the server to the switch, it allows better throughput to the endpoints when multiple systems are hitting the server at the same time.
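One way to see where the ceiling actually is during a transfer is to watch per-vdev activity while a large SMB copy runs; "tank" below is just a placeholder for your pool name:
Code:
# Per-vdev read/write bandwidth in 5-second samples while a large copy is running.
# If the pool is loafing along near ~110MB/s, the gigabit link is the limit;
# if the vdevs are maxed out well below that, the pool layout or drives are.
zpool iostat -v tank 5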
 

southwow

Contributor
Joined
Jan 18, 2018
Messages
114
Thanks,

I'll post up data if I have time this weekend. I know you don't remember my older post, but I initially started with 3-drive vdevs many years ago and have been upgrading drives. Then I scored and shucked a bunch of Seagate drives and added several new vdevs. It honestly feels like the Reds are A LOT slower than the fast-failing Seagate desktop drives, and I have 24 of them now.

All of my data is backed up locally and in cloud storage, but I'm not at all knowledgeable about tuning FreeNAS. I just did what was easy to expand my storage. At this point, I'm ready to do what's correct.

The M1015 is actually connected to a JBOD case I built out of an 846 chassis for this specific purpose, and I have 8-10 Reds laying around to build a new pool and move the data over.
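For reference, since ZFS doesn't rebalance existing data onto newly added vdevs, I'm assuming my older vdevs are much fuller than the newer ones; something like this should show the per-vdev split ("tank" is just a placeholder for the pool name):
Code:
# Per-vdev capacity and fragmentation; badly lopsided ALLOC/CAP between vdevs
# means reads of old data land on only a few (full, slower) vdevs
zpool list -v tank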
 

southwow

Contributor
Joined
Jan 18, 2018
Messages
114
About the Compaq thing, if this was around 1998 I can guarantee we've talked on the phone! LOL. I remember setting up the RAID unit; it was complete hell!
 

southwow

Contributor
Joined
Jan 18, 2018
Messages
114
Just an update to say there is no update. Data coming tonight! I did check the pool via the web UI; I'm closing in on 80% space usage, which may be contributing to the problem.
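For what it's worth, the same numbers can be pulled from the shell ("tank" again being a placeholder for my pool); ZFS write performance generally starts to fall off once a pool runs much past roughly 80% full:
Code:
# Pool capacity and fragmentation properties
zpool get capacity,fragmentation tank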
 

southwow

Contributor
Joined
Jan 18, 2018
Messages
114
@Chris Moore,

I moved about 10TB of data off onto another server and am still seeing the same slowness. The Windows volume is a small SSD mirror for testing Plex performance in a VM, so I'm not concerned about its fragmentation.

[Three screenshots attached]
 

southwow

Contributor
Joined
Jan 18, 2018
Messages
114
Just wanted to add that I also pulled out all of the spares I had in the 846 chassis this system is in. There were five of them; that made no difference either.

I'm going to start shopping around for a decent NIC and see if that helps.

LOL, that resilver was the last of the 8TB Seagate drives failing.
 