Can’t even hit 2.5Gbps [SOLVED]

Kuro Houou

Contributor
Joined
Jun 17, 2014
Messages
193
SOLVED see post here

So I recently upgraded my two NAS servers (you can see detailed specs in my signature) with Intel X550 NICs so I could link them to my new 2.5Gbps switch and to my PCs, which also have 2.5Gbps NICs.

I have a custom-built backup NAS with a Supermicro mobo and 7 disks in a RAIDZ1 config, and it has no issues hitting 2.5Gbps for both reads and writes; it can do this consistently with no drops in performance, even over massive 40+ GB files.

I also have my main NAS, which is a Supermicro X9DRi-LN4F+, 12-bay 6Gbps SAS2, 96GB RAM and 2x Xeon 2560 v2 processors. It uses an LSI 9211 HBA card flashed to IT mode to control the disks. It has had no issues and performed great with its built-in NICs at 1Gbps.

This is the slot layout of the mobo in the main server:

[Screenshot: X9DRi-LN4F+ PCIe slot layout]


I have the LSI HBA installed in Slot 2 (that's how it came), and I have tried the NIC in both Slot 5 and Slot 3 with the same results...

Below are two snapshots of me doing a copy from the main NAS to my PC. As you can see, one averages only about 200MB/s, and that was a good run; the other is well below 1Gbps speeds... What I have noticed is that the read operations and bandwidth are very erratic compared to my backup NAS, which has no issues and shows very consistent read operations and bandwidth per disk. It also looks like the main server fluctuates between 3k-5k read operations while copying, while the backup server sits at about 8.5k-10k consistently.

The top two are from the Main NAS; the bottom one is from the Backup NAS.

[Screenshots: copy-speed and per-disk I/O graphs — two from the Main NAS, one from the Backup NAS]


Now if I do a small benchmark on the pool itself using the commands below,

Write Test
root@freenas:/mnt/v01/Data/Temp/test # dd if=/dev/zero of=/mnt/v01/Data/Temp/testfile bs=1000M count=50
50+0 records in
50+0 records out
52428800000 bytes transferred in 32.357254 secs (1620310561 bytes/sec) or ~1.6GB/s

and

Read Test
root@freenas:/mnt/v01/Data/Temp/test # dd of=/dev/zero if=/mnt/v01/Data/Temp/testfile bs=1000M count=50
50+0 records in
50+0 records out
52428800000 bytes transferred in 17.239056 secs (3041280179 bytes/sec) or ~3GB/s (I think this has to be reading from RAM...)

So the disks can obviously write and read pretty fast.

So this begs the question: what exactly is the bottleneck here? I would think maybe it's the PCIe bus, but I tried multiple slots, one dedicated to CPU 1 and one dedicated to CPU 2. Also, the PCIe bandwidth should be plenty, I would think... Any other ideas?
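One thing I can at least check is the negotiated PCIe link width/speed on the HBA and the NIC. I believe something like this works on FreeBSD (mps0/ix0 are what I'd expect the LSI and the X550 to show up as, but the names may differ on my box):

Code:
# show PCI capabilities; look for "link xN(xN) speed N.N(N.N)" under the PCI-Express line
pciconf -lvc | grep -A 12 '^mps0'
pciconf -lvc | grep -A 12 '^ix0'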
 

Kuro Houou

Contributor
Joined
Jun 17, 2014
Messages
193
So researching my SAS HBA, it is an LSI 9211-4i, which says it supports 4 ports at 6Gbps... does that mean if I have 6 drives it can only read from 4 of them at a time? Could that be the bottleneck? If the data is spread across 6 drives but it can only read 4 at a time, I could see how that would hurt speeds. It would also explain the erratic behavior when monitoring the pool operations and bandwidth in the above screenshots.
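In the meantime I can at least double-check what the controller reports. I think both of these tools come stock with FreeNAS:

Code:
# controller model plus firmware/BIOS versions
sas2flash -listall
# everything the HBA sees -- the backplane/expander should show up as a ses device
# alongside all 6 disks
camcontrol devlist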
 

Kuro Houou

Contributor
Joined
Jun 17, 2014
Messages
193
So I have pretty much ruled out the NIC as an issue; just looking at pure internal copy tests, speeds are slower than I think they should be.

For my test I simply copied a file to a new file in the same directory to see how fast it could do that.

On the FreeNAS Main server it is much slower, seeing combined read and write speeds of 200-300MB/s total during the operation.
On the FreeNAS Backup server it is much faster, seeing combined read and write speeds of 400-700MB/s total during the operation.

Kicking both off at the same time, the Backup server finished the job almost twice as fast as the Main server.

The test file was 75GB; it took about 6 minutes to copy on the Backup server, or about ~215MB/s on average, vs about ~115MB/s on average on the Main server.
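For reference, the test itself was basically this on each box (the paths are just examples from my main pool):

Code:
cd /mnt/v01/Data/Temp
# duplicate a large file within the same dataset while watching per-disk activity
cp testfile testfile.copy &
zpool iostat -v v01 5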

My backup server has all of its disks plugged into the mobo using SATA, so it has direct communication with all drives all the time.
It seems my Main server is hitting a wall reading from the 6x 14TB drives. I ordered an LSI SAS 9207-8i HBA which I'll try installing once I get it. I assume that way the HBA can communicate directly with the 6 drives, vs the LSI 9211-4i card which only has 4 paths, which is why I am thinking it can only talk to 4 drives at a time and has to shuffle its communication to the other 2 drives, causing a bottleneck.

Anyone think this sounds logical, or has anyone run into the same issue with an LSI 9211-4i card and more than 4 drives?
 

Kuro Houou

Contributor
Joined
Jun 17, 2014
Messages
193
Another series of tests to make certain it's not the NICs or other hardware outside of the server: I ran iperf tests both to and from both NAS servers, and both gave the same results, ~2.26Gbps or about 280MB/s of bandwidth between the two.

I also ran an iperf test between the Main NAS and my PC, both times showing 283MB/s (2.38Gbps). So networking seems 100% OK.
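For anyone repeating it, the runs were just a stock iperf3 test (TrueNAS ships iperf3 as far as I know; the IP below is only an example):

Code:
# on the NAS
iperf3 -s
# on the PC (or the other NAS)
iperf3 -c 192.168.1.50 -t 30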
 

Kuro Houou

Contributor
Joined
Jun 17, 2014
Messages
193
The only thing that's really throwing a wrench in all my thinking is that when I use the 1Gbps NIC I get a solid 113MB/s, yet when I use the 2.5Gbps NIC I sometimes see speeds of ~80MB/s or less... yet the iperf testing clearly shows it's capable of sustaining 283MB/s. It's the inconsistency of the speed over the 2.5Gbps link that is so odd.
 

Kuro Houou

Contributor
Joined
Jun 17, 2014
Messages
193
Well, I might have jumped the gun on using a dual-port HBA with my SAS expander. I looked up my SAS expander and it's this one:

[Screenshot: BPN-SAS2-826EL1 backplane spec sheet]

Looks like only one port can go to the HBA...

I'm not sure the new HBA is going to help now. I thought having more ports plugged into the SAS expander would help, but now that I see I can't run the second cable, I am not sure a newer HBA is going to make any difference...

This one is definitely an enigma: a dual E5-2690 v2 server with 128GB of RAM that can't copy a file internally faster than 115MB/s, let alone over a NIC, is just crazy :(
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,972
Write Test
root@freenas:/mnt/v01/Data/Temp/test # dd if=/dev/zero of=/mnt/v01/Data/Temp/testfile bs=1000M count=50
50+0 records in
50+0 records out
52428800000 bytes transferred in 32.357254 secs (1620310561 bytes/sec) or ~1.6GB/s

and

Read Test
root@freenas:/mnt/v01/Data/Temp/test # dd of=/dev/zero if=/mnt/v01/Data/Temp/testfile bs=1000M count=50
50+0 records in
50+0 records out
52428800000 bytes transferred in 17.239056 secs (3041280179 bytes/sec) or ~3GB/s (I think this has to be reading from RAM...)

So the disks can obviously write and read pretty fast.
Nope, it's not fast, it's false. You are writing and reading a lot of zeros "0", which are highly compressible. Also ensure that your dataset has compression turned off. As for the tests, you need to substitute "if=/dev/zero" with "if=/dev/random" and then wait a long time while your system generates the huge file you asked it to write. This changed my value from ~32 seconds (using zero) to ~300 seconds (using random).
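Something along these lines, reusing your pool name from earlier (the dataset name is just an example):

Code:
# temporary dataset with compression off so the numbers aren't inflated
zfs create -o compression=off v01/ddtest
dd if=/dev/random of=/mnt/v01/ddtest/testfile bs=1000M count=50
dd if=/mnt/v01/ddtest/testfile of=/dev/null bs=1000M
# clean up when done
zfs destroy v01/ddtest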

As for other performance issues, the configuration of your RAID makes a difference, as does how full your pool is; all these things factor in. But without knowing a lot more, I couldn't tell you it isn't a hardware issue. Have you read much about ZFS and performance? There are a few good discussions and a few good presentations to read, one is included below, and I'm not saying it's your problem, but it is possibly a factor. I say this because your faster system is built with RAIDZ1 and 7 hard drives and the slower with RAIDZ2 and 6 hard drives.

Also make sure you delete your file when done.
 

Kuro Houou

Contributor
Joined
Jun 17, 2014
Messages
193
I agree, I don't think those dd benchmarks were accurate. Although I did what you said and got ~185MB/s on the Backup server and ~135MB/s on the Main... I'm not sure I believe those numbers either, as I can clearly read/write faster than that on the Backup server. Still, I don't think I can trust that benchmark, as I know I can get faster speeds than that too... So I am kind of back to square one.

From what I have been reading online and other tests people have done, there is no way I should be seeing the slow speeds I see on my Main server; people tend to get 500-600MB/s in a RAIDZ2 configuration (granted, most tests I saw were 8 disks vs my 6)... however, I still can't buy that my speeds would be 1/3 of that with two fewer disks.

Just to note:
-Both the Backup server and Main server are at about 50% utilization.
-The Main server got 6 brand-new drives installed over the summer.
-When I upgraded, I moved the 6 old drives from the Main server to the Backup server, then bought a 7th disk for extra space on the Backup server to make the two as close as possible in total storage, given that the Main server is RAIDZ2 and the Backup is RAIDZ1.
-My SAS expander is a BPN-SAS2-826EL1 and the HBA is an LSI 9211-4i (IT mode).

So I found another benchmark to try: fio. The tests run were,

Read:
fio --name=seqread --rw=read --direct=0 --iodepth=32 --bs=128k --numjobs=1 --size=128G --group_reporting

Write:
fio --name=seqwrite --rw=write --direct=0 --iodepth=32 --bs=128k --numjobs=1 --size=128G --group_reporting

Results Below:

Backup Server:
Read:
READ: bw=686MiB/s (719MB/s), 686MiB/s-686MiB/s (719MB/s-719MB/s), io=128GiB (137GB), run=191062-191062msec
Write:
WRITE: bw=440MiB/s (462MB/s), 440MiB/s-440MiB/s (462MB/s-462MB/s), io=128GiB (137GB), run=297629-297629msec


Main Server:
Read:
READ: bw=275MiB/s (288MB/s), 275MiB/s-275MiB/s (288MB/s-288MB/s), io=128GiB (137GB), run=477066-477066msec
Write:
WRITE: bw=334MiB/s (351MB/s), 334MiB/s-334MiB/s (351MB/s-351MB/s), io=128GiB (137GB), run=392105-392105msec

Looking at the FreeNAS reports while these tests were being run, I see the data below (this is just a snapshot of one disk, but all disks looked pretty much the same on each server).

The first orange section and the purple section were the Read part of the benchmark (it first writes the data, then reads it back).
The second orange section is the Write benchmark.


Backup Server:

[Screenshot: per-disk I/O on the Backup server during the fio runs]


Main Server:
[Screenshot: per-disk I/O on the Main server during the fio runs]


My thoughts:

1. Note the consistency of the R/W operations on the Backup server, very consistent through the whole process, vs the erratic behavior on the Main server.
2. The read performance is solid and consistent on the Backup server, seemingly over 95MB/s per disk, vs an average of about 47MB/s on the Main server.
3. Write performance between the two is closer, but still lower on the Main server and still more erratic, though not as bad as when reading.
4. Something really seems to be off on the Main server, where it can only read from 4 disks at once (oddly enough, writing happens to all 6 at the same time); see the screenshot below. It's reading from 4 disks at a rate of 50-100MB/s, but two disks are always much lower, under 5MB/s, and it's not always the same two disks. Would RAIDZ2 affect reading from two disks? Does it not use 2 of the disks, effectively meaning my maximum read speed equals the read speed of 4 disks? I think I might be onto something here. I thought RAIDZ2 wrote to all disks and read from all disks at the same time, but maybe I am wrong.

[Screenshot: per-disk reads on the Main server — only 4 of the 6 disks active at a time]
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,972
My results, which reflect the speed of a single hard drive, as expected:
READ: bw=295MiB/s (309MB/s), 295MiB/s-295MiB/s (309MB/s-309MB/s), io=128GiB (137GB), run=444823-444823msec

WRITE: bw=197MiB/s (207MB/s), 197MiB/s-197MiB/s (207MB/s-207MB/s), io=128GiB (137GB), run=664739-664739msec
 

Kuro Houou

Contributor
Joined
Jun 17, 2014
Messages
193
My results, which reflect the speed of a single hard drive, as expected:

What kind of drive is that? Mine are Western Digital 14TB Ultrastar DC HC530 SATA HDDs - 7200 RPM class, SATA 6Gb/s, 512MB cache.
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,972
What kind of drive is that?
Four HGST HDN726060ALE614 6TB Deskstar NAS Hard Drives (RAIDZ2, 8.72TB healthy usable space)

Also remember that fragmentation plays a part in overall throughput speed and the physical location of the data on your hard drive (inner tracks vs outer tracks).

So just a few more data points on my system...
I created a file called testfile in a dataset that was not compressed using the command dd if=/dev/random of=testfile bs=1000M count=50, and I received the following stats averaged over a 10 second period. Then I copied the file using cp testfile testfile2 and received the second set of results, and finally the third set of results came from dd if=testfile of=/dev/null. So you now have a reasonable comparison to a simple 4-drive RAIDZ2 7200 RPM setup with 16GB RAM and only 2 threads of a CPU on ESXi. It's not anything spectacular; multiple vdevs or mirrors would produce better throughput results, but it works for my needs.
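For reference, the stats below came from watching the pool in a second shell while each command ran, roughly like this (10 second samples):

Code:
zpool iostat -v farm2 10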

dd if=/dev/random of=testfile bs=1000M count=50
Code:
Sun Nov  8 09:39:19 EST 2020
                                           capacity     operations    bandwidth
pool                                    alloc   free   read  write   read  write
--------------------------------------  -----  -----  -----  -----  -----  -----
farm2                                   10.8T  11.0T      8  1.61K   112K   194M
  raidz2                                10.8T  11.0T      8  1.61K   112K   194M
    gptid/64a06668-ab84-1-0cc47ab37c5a      -      -      2    341  34.0K  99.7M
    gptid/6528d863-d52f-1-0cc47ab37c5a      -      -      2    381  28.8K  99.9M
    gptid/65b68ce1-d-ab84-0cc47ab37c5a      -      -      2    356  23.2K  99.9M
    gptid/66431f30-d-ab84-0cc47ab37c5a      -      -      2    357  26.4K  99.7M
--------------------------------------  -----  -----  -----  -----  -----  -----
freenas-boot                            1.02G  14.5G      0      0      0      0
  da0p2                                 1.02G  14.5G      0      0      0      0
--------------------------------------  -----  -----  -----  -----  -----  -----


cp testfile testfile2
Code:
Sun Nov  8 09:51:40 EST 2020
                                           capacity     operations    bandwidth
pool                                    alloc   free   read  write   read  write
--------------------------------------  -----  -----  -----  -----  -----  -----
farm2                                   10.9T  10.9T  1.15K    733   147M  87.7M
  raidz2                                10.9T  10.9T  1.15K    733   147M  87.7M
    gptid/64a06668-d-ab84-0cc47ab37c5a      -      -    592    179  36.9M  45.2M
    gptid/6528d863-d-ab84-0cc47ab37c5a      -      -    585    194  36.4M  45.2M
    gptid/65b68ce1-d-ab84-0cc47ab37c5a      -      -    590    192  36.8M  45.3M
    gptid/66431f30-d-ab84-0cc47ab37c5a      -      -    589    194  36.7M  45.1M
--------------------------------------  -----  -----  -----  -----  -----  -----
freenas-boot                            1.02G  14.5G      0      0      0      0
  da0p2                                 1.02G  14.5G      0      0      0      0
--------------------------------------  -----  -----  -----  -----  -----  -----


dd if=testfile of=/dev/null
Code:
Sun Nov  8 09:57:01 EST 2020
                                           capacity     operations    bandwidth
pool                                    alloc   free   read  write   read  write
--------------------------------------  -----  -----  -----  -----  -----  -----
farm2                                   10.9T  10.8T  1.50K     23   192M  99.1K
  raidz2                                10.9T  10.8T  1.50K     23   192M  99.1K
    gptid/64a06668-d-ab84-0cc47ab37c5a      -      -    768      6  48.0M  75.9K
    gptid/6528d863-d-ab84-0cc47ab37c5a      -      -    768      6  48.0M  76.3K
    gptid/65b68ce1-d-ab84-0cc47ab37c5a      -      -    768      7  48.0M  74.7K
    gptid/66431f30-d-ab84-0cc47ab37c5a      -      -    768      6  48.0M  73.5K
--------------------------------------  -----  -----  -----  -----  -----  -----
freenas-boot                            1.02G  14.5G      0      0  5.04K      0
  da0p2                                 1.02G  14.5G      0      0  5.04K      0
--------------------------------------  -----  -----  -----  -----  -----  -----
 

Kuro Houou

Contributor
Joined
Jun 17, 2014
Messages
193
spec shows sustained xfer rate 260 MB/s

Yeah, my drives should be more than fast enough to sustain a solid 260MB/s... Even with RAIDZ2 and 6 drives, I have never seen anyone get less than 300MB/s read/write.

One question I will throw out to the community: anyone else running RAIDZ2 with 6 drives, do you see a consistent read speed across all drives, or do you only see 4 drives being read from at a time? Because that's what I am seeing :(
 

Kuro Houou

Contributor
Joined
Jun 17, 2014
Messages
193
"Also remember that fragmentation plays a part in overall throughput speed and the physical location of the data on your hard drive (inner tracks vs outer tracks)."

I just ran,

zpool get capacity,size,health,fragmentation

Results were,
Code:
root@freenas:~ # zpool get capacity,size,health,fragmentation
NAME          PROPERTY       VALUE     SOURCE
freenas-boot  capacity       68%       -
freenas-boot  size           29.2G     -
freenas-boot  health         ONLINE    -
freenas-boot  fragmentation  -         -
v01           capacity       49%       -
v01           size           76.2T     -
v01           health         ONLINE    -
v01           fragmentation  13%       -


My results based on your test commands are below,

dd if=/dev/random of=testfile bs=1000M count=50
Code:
                                                  capacity     operations     bandwidth
pool                                            alloc   free   read  write   read  write
----------------------------------------------  -----  -----  -----  -----  -----  -----
freenas-boot                                    20.0G  9.27G      0      0      0      0
  mirror                                        20.0G  9.27G      0      0      0      0
    ada0p2                                          -      -      0      0      0      0
    da6p2                                           -      -      0      0      0      0
----------------------------------------------  -----  -----  -----  -----  -----  -----
v01                                             38.1T  38.1T      0  1.58K      0   259M
  raidz2                                        38.1T  38.1T      0  1.58K      0   259M
    gptid/34e89d2d-d8e2-11ea-9e63-002590827b48      -      -      0    260      0  41.6M
    gptid/f4d12f63-d9a0-11ea-9e63-002590827b48      -      -      0    281      0  43.3M
    gptid/9a399e52-da85-11ea-9e63-002590827b48      -      -      0    263      0  42.6M
    gptid/b44b8134-dc00-11ea-9e63-002590827b48      -      -      0    279      0  42.8M
    gptid/2caa7a09-dd55-11ea-9e63-002590827b48      -      -      0    262      0  44.9M
    gptid/4f57d40f-de97-11ea-9e63-002590827b48      -      -      0    270      0  43.3M
----------------------------------------------  -----  -----  -----  -----  -----  -----



cp testfile testfile2
Code:
                                                  capacity     operations     bandwidth
pool                                            alloc   free   read  write   read  write
----------------------------------------------  -----  -----  -----  -----  -----  -----
freenas-boot                                    20.0G  9.27G      0      0      0      0
  mirror                                        20.0G  9.27G      0      0      0      0
    ada0p2                                          -      -      0      0      0      0
    da6p2                                           -      -      0      0      0      0
----------------------------------------------  -----  -----  -----  -----  -----  -----
v01                                             38.0T  38.3T    765  1.15K   154M   174M
  raidz2                                        38.0T  38.3T    765  1.15K   154M   174M
    gptid/34e89d2d-d8e2-11ea-9e63-002590827b48      -      -      4    205  19.9K  29.3M
    gptid/f4d12f63-d9a0-11ea-9e63-002590827b48      -      -      4    207  19.9K  29.0M
    gptid/9a399e52-da85-11ea-9e63-002590827b48      -      -    221    216  39.2M  29.0M
    gptid/b44b8134-dc00-11ea-9e63-002590827b48      -      -    178    187  37.3M  29.0M
    gptid/2caa7a09-dd55-11ea-9e63-002590827b48      -      -    190    199  37.3M  29.0M
    gptid/4f57d40f-de97-11ea-9e63-002590827b48      -      -    166    160  39.8M  29.0M
----------------------------------------------  -----  -----  -----  -----  -----  -----



dd if=testfile of=/dev/null
Code:
                                                  capacity     operations     bandwidth
pool                                            alloc   free   read  write   read  write
----------------------------------------------  -----  -----  -----  -----  -----  -----
freenas-boot                                    20.0G  9.27G      0      0      0      0
  mirror                                        20.0G  9.27G      0      0      0      0
    ada0p2                                          -      -      0      0      0      0
    da6p2                                           -      -      0      0      0      0
----------------------------------------------  -----  -----  -----  -----  -----  -----
v01                                             38.0T  38.2T  2.69K      0   113M      0
  raidz2                                        38.0T  38.2T  2.69K      0   113M      0
    gptid/34e89d2d-d8e2-11ea-9e63-002590827b48      -      -    644      0  28.0M      0
    gptid/f4d12f63-d9a0-11ea-9e63-002590827b48      -      -    687      0  28.9M      0
    gptid/9a399e52-da85-11ea-9e63-002590827b48      -      -    735      0  28.5M      0
    gptid/b44b8134-dc00-11ea-9e63-002590827b48      -      -      0      0      0      0
    gptid/2caa7a09-dd55-11ea-9e63-002590827b48      -      -      0      0      0      0
    gptid/4f57d40f-de97-11ea-9e63-002590827b48      -      -    691      0  27.9M      0
----------------------------------------------  -----  -----  -----  -----  -----  -----



Again, the big thing that sticks out to me, especially in the last test, is that it's not reading from two of the drives...

Here is another snapshot in time from the third test, same operation just a minute later:

Code:
                                                  capacity     operations     bandwidth
pool                                            alloc   free   read  write   read  write
----------------------------------------------  -----  -----  -----  -----  -----  -----
freenas-boot                                    20.0G  9.27G      0      0      0      0
  mirror                                        20.0G  9.27G      0      0      0      0
    ada0p2                                          -      -      0      0      0      0
    da6p2                                           -      -      0      0      0      0
----------------------------------------------  -----  -----  -----  -----  -----  -----
v01                                             38.0T  38.2T  2.54K     11   116M   376K
  raidz2                                        38.0T  38.2T  2.54K     11   116M   376K
    gptid/34e89d2d-d8e2-11ea-9e63-002590827b48      -      -      3      1  15.7K  62.7K
    gptid/f4d12f63-d9a0-11ea-9e63-002590827b48      -      -      3      1  15.7K  62.7K
    gptid/9a399e52-da85-11ea-9e63-002590827b48      -      -    599      1  29.5M  62.7K
    gptid/b44b8134-dc00-11ea-9e63-002590827b48      -      -    706      1  28.3M  62.7K
    gptid/2caa7a09-dd55-11ea-9e63-002590827b48      -      -    653      1  29.2M  62.7K
    gptid/4f57d40f-de97-11ea-9e63-002590827b48      -      -    628      1  29.0M  62.7K
----------------------------------------------  -----  -----  -----  -----  -----  -----


As you can see, the drives that it stopped reading from have switched... so it's not like just two specific drives are bad. But usually when I check, it's either the first two drives or drives 4 and 5... Maybe I can try moving the drives around in my chassis; it supports 12 but I only have 6 in there.
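Next time it happens I am also going to watch the raw disk activity, to see whether the two quiet drives are actually idle or stuck waiting on something. Something like this should work (gstat ships with FreeBSD; the da[0-9] filter is just an example and may need adjusting for my device names):

Code:
# per-disk ops/s, MB/s and %busy, refreshed every second
gstat -p -I 1s -f 'da[0-9]'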
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,972
I will have to agree with you that something is fishy with the way your system is operating and I'm not sure what else you could do to improve it aside from replacing the controller.

What version of FreeNAS/TrueNAS are you running on the troubled system? While I really doubt it's FreeNAS/TrueNAS, if you think it is then you could boot up another OS such as Ubuntu Live, import the pool, and then run the benchmarks again. This is more of an elimination of what is causing it vice locating the problem directly. If the benchmarks stay the same then it's not the OS.
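If you go the Ubuntu Live route, the rough idea would be something like this (assuming the live image's ZFS version supports your pool's feature flags; importing read-only so nothing gets written):

Code:
sudo apt install zfsutils-linux
sudo zpool import                        # should list v01 as importable
sudo zpool import -o readonly=on -f v01
# datasets mount at their saved mountpoints (e.g. /mnt/v01/...); re-run the read benchmarks there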

it uses an LSI 9211 HBA card flashed to IT mode
So you do not really have this card, correct? If you do have it then what version is it flashed to?

Something to try... Because I don't know your hardware configuration, I will still suggest this:
Your motherboard has 6 SATA ports; plug your 6 hard drives into those ports.
Make a bootable FreeNAS USB drive and boot from it.
Test your speeds.
 

Kuro Houou

Contributor
Joined
Jun 17, 2014
Messages
193
I am on TrueNAS Core 12 now. Both Servers are running the same stable chain.

I do have the LSI 9211-4i card installed and have been using it for the past three years. Firmware is 20.00.02.00.

I'll try the new HBA when it comes tomorrow; if that doesn't work I'll see if I can figure out how to use the onboard SATA ports. With the SAS expander in there it will be a bit hard, I might have to take it out. Or I could try swapping the 6 drives into my backup server with a fresh USB stick with FreeNAS. Moving disks around shouldn't affect the data, right? FreeNAS should just detect the zpool. As long as all drives are detected, I think I'll be OK testing all this... I have 25TB of data I don't want to lose ;). I do have an offsite backup if all else fails, but that would be a slow and long process to restore.
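From what I understand, the safe sequence for moving the disks would be roughly this (happy to be corrected):

Code:
# on the main server, before pulling the drives
zpool export v01
# after moving the drives to the other box / fresh install
zpool import          # lists importable pools, should show v01
zpool import v01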

Thanks for your help in running through my thoughts; this does seem very strange.
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,972
Isn't the current IT firmware 20.00.07.00 ? I could be thinking of a different IT firmware. Not sure that would make a difference.

I do have an offsite backup if all else fails but that would be a slow and long process to restore..
I certainly hope it doesn't come to that, and it shouldn't.
 

Kuro Houou

Contributor
Joined
Jun 17, 2014
Messages
193
Isn't the current IT firmware 20.00.07.00 ? I could be thinking of a different IT firmware. Not sure that would make a difference.


I certainly hope it doesn't come to that, and it shouldn't.

It may be; honestly I haven't updated the firmware since I got the card. I usually don't mess with things unless really needed, but it might be worth doing now.
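For reference, the rough process as I understand it, using the sas2flash that ships with FreeNAS (the firmware filenames below are just examples; I'd grab the proper P20 IT package for the 9211-4i from Broadcom first):

Code:
# check current firmware/BIOS versions
sas2flash -listall
# flash the IT firmware (and optionally the boot ROM) -- example filenames only
sas2flash -o -f 2114it.bin -b mptsas2.rom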
 

Kuro Houou

Contributor
Joined
Jun 17, 2014
Messages
193
Well, I just replaced the HBA, upgraded to an LSI 9207-8i... no luck :(

Speeds are the same. I can write to all 6 disks consistently, but when reading it prefers to read from only 4 at a time, alternating which 4 disks it reads from.

Just to show what I am talking about, look at these tests that did a Write/Read/Write benchmark:

[Screenshot: per-disk I/O during the Write/Read/Write benchmark — only 4 of 6 disks reading]


Then another round of the exact same benchmarks...

[Screenshot: per-disk I/O during a second run of the same benchmark]
 

Kuro Houou

Contributor
Joined
Jun 17, 2014
Messages
193
So some new updates... I put 6x 4TB disks in my Main server in a RAIDZ2 configuration and did some copy/paste tests between v01 (the 14TB disks) and v02 (the 4TB disks)... lo and behold, I get great read/write speeds. Check out the iostat below:

[Screenshot: zpool iostat during a copy between v01 and v02]


Roughly 400MB/s read from v01 and, oddly enough, no issue with reading from only 4 disks at a time; they do fluctuate when reading, but nothing drops to 1-5MB/s.

And here is a read from v02 to my PC. As you can see, it reads just fine from that volume and hits max speed for a 2.5Gbps link.

[Screenshot: transfer from v02 to the PC maxing out the 2.5Gbps link]


And just for fun, I tried copying from v01 to the PC again... same slow speeds as before :( What is going on? Here is where I am at:

1. I can obviously read from v01 very fast, >400MB/s, and write >600MB/s, if it's writing to another volume.
2. I can copy from v02 to my PC at a sustained 280MB/s.
3. I can only copy from v01 to my PC at ~150MB/s on average.

The data path from the v02 disks -> SAS expander -> HBA -> CPU -> NIC obviously has no bottleneck, as reading from v02 to an outside device is as fast as it should be...

I am going to do some more benchmarks and see if I can narrow this down any more... At this point I might just blow away v01 and hope for the best starting that volume from scratch. I just hate to have to do that with 6 jails and 3 VMs on that volume (it's a pain to migrate everything).
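One more thing I want to try before that is separating the pool read from the SMB path: read the same big file from v01 straight to /dev/null locally, then pull that identical file over SMB from the PC, watching zpool iostat both times. Roughly (the path is just an example):

Code:
# local read, no network or SMB involved
dd if=/mnt/v01/Data/Temp/testfile of=/dev/null bs=1M
# in another session, watch the per-disk behavior
zpool iostat -v v01 5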
 

Kuro Houou

Contributor
Joined
Jun 17, 2014
Messages
193
Latest benchmarks, all from my Main server.

Same tests as before,

Read:
fio --name=seqread --rw=read --direct=0 --iodepth=32 --bs=128k --numjobs=1 --size=128G --group_reporting

Write:
fio --name=seqwrite --rw=write --direct=0 --iodepth=32 --bs=128k --numjobs=1 --size=128G --group_reporting

Results

v01:
Read: 381MB/s
Write: 476MB/s

v02:
Read: 598MB/s
Write: 407MB/s

I mean, those results are both pretty good. Interestingly, when I did those same tests originally, my v01 volume got Read: 275MB/s, Write: 334MB/s... so there's some inconsistency, it seems.
 