Swapping Backplanes

Mugiwara

Dabbler
Joined
Apr 16, 2014
Messages
36
Hey all,

I am on my second generation build of my FreeNAS box, and believe my SAS2 backplane is starting to become a bottleneck. I currently have a Supermicro 4U SC846A-R1200B chassis with its SAS2 backplane connected to two HBAs: an LSI 3008 that's integrated into my X11SSH-CTF motherboard for one 8-drive zpool, and an LSI 2008 PCI Express controller card flashed to IT mode for the other 8-drive zpool.

My question relates to replacing the current 846 backplane with a SAS3 backplane (I'm thinking a BPN-SAS3-846EL1). Does anyone see any issues with swapping the backplane? Since the SAS3 backplane only has a single connector, everything will route through the integrated LSI 3008 controller, so both pools will now be on a single HBA as well. Should all this just work automagically after the part and cable swap and a reboot, or should I expect to do anything to make it all work with the new backplane / HBA configuration? Thanks!
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
It will work as usual. FreeNAS just needs to be able to see the disks through some path; it's not particular about the details of which one.

There's the other question of what you're doing to bottleneck on the existing backplane, since that isn't usually possible unless you have a large number of SSDs attached to it.
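
If you want a quick sanity check after the swap, the usual FreeBSD tools already on the box should do it (just a sketch; adjust to your own pool names):

Code:
# list every disk the HBA can see after the backplane swap
camcontrol devlist

# confirm the pools came back with all members ONLINE
zpool status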
 

Mugiwara

Dabbler
Joined
Apr 16, 2014
Messages
36
Thanks sretalla. I seem to cap out at transfer rates of about 350MB/s, but looking at this RAIDZ2 pool, striping across 16 drives, my theoretical transfer rates should be much higher, right? Maybe I have some unreasonable expectations, but with a 10Gb network I've just been disappointed in some of these transfer rates from fast NVMe storage to the NAS, and even internal copies via the CLI from one NAS folder to another seem to cap out around the same 350MB/s.

If you think that a SAS2 vs SAS3 backplane wouldn't make much of a difference, maybe I will hold off. As always, I appreciate the expertise in here.
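
(For reference, the internal test I mean is roughly the following; the dataset path is just a placeholder, and the number only means anything if compression is off on that dataset.)

Code:
# rough sequential write test inside the NAS; path is a placeholder
dd if=/dev/zero of=/mnt/tank/testfile.bin bs=1m count=20000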
 
Joined
Jan 18, 2017
Messages
525
@sretalla is probably thinking it's your zpool arrangement that is your limitation at the moment; there isn't enough information here to know for sure, though. The output of zpool status and the model of your drives would help.
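
(If it helps, something like this shows the exact drive model; da1 is just an example device node.)

Code:
# print identity info, including the model string, for one of the drives
smartctl -i /dev/da1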
 

Mugiwara

Dabbler
Joined
Apr 16, 2014
Messages
36
Thanks. I'm running 16 of the 8TB WDC Red drives in 2 RAIDZ2 vdevs which make up my pool. See below for status. Thanks!

Code:
root@nas:~ # zpool status
  pool: Jails
state: ONLINE
  scan: scrub repaired 0 in 0 days 00:00:17 with 0 errors on Sun Sep  8 00:00:17 2019
config:

        NAME                                          STATE     READ WRITE CKSUM
        Jails                                         ONLINE       0     0     0
          gptid/5867ebc4-e08c-11e7-91af-000c290e9eb1  ONLINE       0     0     0

errors: No known data errors

  pool: freenas-boot
state: ONLINE
  scan: scrub repaired 0 in 0 days 00:00:15 with 0 errors on Tue Sep 24 03:45:15 2019
config:

        NAME        STATE     READ WRITE CKSUM
        freenas-boot  ONLINE       0     0     0
          da0p2     ONLINE       0     0     0

errors: No known data errors

  pool: wdc8TBRed
state: ONLINE
  scan: scrub repaired 0 in 0 days 05:07:12 with 0 errors on Sun Sep 15 05:07:14 2019
config:

        NAME                                            STATE     READ WRITE CKSUM
        wdc8TBRed                                       ONLINE       0     0     0
          raidz2-0                                      ONLINE       0     0     0
            gptid/da236a18-53c1-11e7-9ee5-000c290e9eb1  ONLINE       0     0     0
            gptid/da9b66da-53c1-11e7-9ee5-000c290e9eb1  ONLINE       0     0     0
            gptid/db17e55d-53c1-11e7-9ee5-000c290e9eb1  ONLINE       0     0     0
            gptid/dba37408-53c1-11e7-9ee5-000c290e9eb1  ONLINE       0     0     0
            gptid/dc215693-53c1-11e7-9ee5-000c290e9eb1  ONLINE       0     0     0
            gptid/dca5a0a8-53c1-11e7-9ee5-000c290e9eb1  ONLINE       0     0     0
            gptid/dd2839bb-53c1-11e7-9ee5-000c290e9eb1  ONLINE       0     0     0
            gptid/ddaddddb-53c1-11e7-9ee5-000c290e9eb1  ONLINE       0     0     0
          raidz2-1                                      ONLINE       0     0     0
            gptid/08fa490b-53d2-11e7-9ee5-000c290e9eb1  ONLINE       0     0     0
            gptid/098d636e-53d2-11e7-9ee5-000c290e9eb1  ONLINE       0     0     0
            gptid/0a1ef707-53d2-11e7-9ee5-000c290e9eb1  ONLINE       0     0     0
            gptid/0aaecd43-53d2-11e7-9ee5-000c290e9eb1  ONLINE       0     0     0
            gptid/0b3d956e-53d2-11e7-9ee5-000c290e9eb1  ONLINE       0     0     0
            gptid/0bd92415-53d2-11e7-9ee5-000c290e9eb1  ONLINE       0     0     0
            gptid/0c6f4c81-53d2-11e7-9ee5-000c290e9eb1  ONLINE       0     0     0
            gptid/0d10b861-53d2-11e7-9ee5-000c290e9eb1  ONLINE       0     0     0

errors: No known data errors


root@nas:~ # uname -a
FreeBSD nas.****.net 11.1-STABLE FreeBSD 11.1-STABLE #0 r321665+79f05c3dd3d(HEAD): Wed Jan 23 12:02:34 EST 2019 root@nemesis.tn.ixsystems.com:/freenas-releng/freenas/_BE/objs/freenas-releng/freenas/_BE/os/sys/FreeNAS.amd64 amd64
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
It's very unlikely that you're maxing out the backplane or the PCIe 2.0 link with that setup.

Much more likely it's the pool layout of only 2 VDEVs (meaning that you are running the equivalent IO of just 2 disks working in a stripe - 350MB/s makes sense here).

If speed is your goal, then mirrored VDEVs, and a lot more of them, is your answer (you need to consider capacity and redundancy when making that change, of course).

I seriously doubt that you will see any improvement in transfer speed if you switch to a PCIe 3.0 card.
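
Purely as an illustration (pool and disk names are made up, and on FreeNAS you'd build this through the GUI anyway), the layout difference looks like this:

Code:
# what you effectively have now: two 8-wide RAIDZ2 VDEVs (IOPS of ~2 disks)
zpool create tank \
  raidz2 da1 da2 da3 da4 da5 da6 da7 da8 \
  raidz2 da9 da10 da11 da12 da13 da14 da15 da16

# the faster layout: eight 2-way mirrors (IOPS of ~8 disks, half the usable capacity)
zpool create tank \
  mirror da1 da2 mirror da3 da4 mirror da5 da6 mirror da7 da8 \
  mirror da9 da10 mirror da11 da12 mirror da13 da14 mirror da15 da16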
 

Ender117

Patron
Joined
Aug 20, 2018
Messages
219
If I understand correctly, a RAIDZ(2) vdev will (roughly) give you the IOPS of a single disk and the throughput of all disks (minus redundancy) combined. Assuming that you are copying large files, your bottlenecked throughput may be the result of fragmentation or simply a pool that's too full.
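
(A quick way to check both, if you haven't already - zpool list reports FRAG and CAP columns for the pool.)

Code:
zpool list wdc8TBRed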
 

Mugiwara

Dabbler
Joined
Apr 16, 2014
Messages
36
Aha, thank you both. I guess I don't know ZFS well enough; I was thinking about the disks in a more traditional RAID manner and expected to get performance that scales per disk, not per vdev. My pool is only about 35% full, so I'm still a bit confused about the overall throughput, but it's not a huge deal. I guess my next step will be to add 8 more drives as another vdev to the existing pool if I really find I need more performance; otherwise I will live with it for another couple of years until I build my third generation. Thanks again for the help and advice.
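
(Noting it down for future me: I assume the eventual expansion would just be adding a third 8-disk RAIDZ2 vdev to the pool, roughly like this with hypothetical device names, though I'd do it through the FreeNAS GUI in practice.)

Code:
# hypothetical: grow the pool with a third 8-disk RAIDZ2 vdev
zpool add wdc8TBRed raidz2 da17 da18 da19 da20 da21 da22 da23 da24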
 