Supermicro X10SRH-CLN4F Server Performance Tests - Weird Results?


HeloJunkie

Patron
Joined
Oct 15, 2014
Messages
300
I built another server (a variation of the server I tested in this thread) with a slightly different motherboard, and this time I used the onboard LSI 3008 controller.

Supermicro Superserver 5028R-E1CR12L
Supermicro X10SRH-CLN4F Motherboard
1 x Intel Xeon E5-2640 v3 8 Core 2.6GHz
4 x 16GB PC4-17000 DDR4 2133MHz Registered ECC
12 x 4TB HGST HDN724040AL 7200RPM NAS SATA Hard Drives
LSI3008 SAS Controller - Flashed to IT Mode via
(ftp://ftp.supermicro.nl/driver/sas/lsi/3008/Firmware/3008%20FW_PH6_091714.zip)
LSI SAS3x28 SAS Expander
Dual 920 Watt Platinum Power Supplies
16GB USB Thumb Drive for booting
FreeNAS-9.3-STABLE-201503270027

We have multiple Supermicro servers, and I wanted to test this one the same way I had tested the others. So I started out with the script that jgreco built, and I was very happy with the performance numbers until I hit the parallel seek-stress array read. I was floored by how low they were.

I initially thought the LSI 3008 might be the problem, but there are people on the forums using this exact motherboard and LSI card, expander included, with no problems at all.

The initial serial read looked great, but once it got to the parallel seek-stress array read, it fell flat on its face!

Performing initial parallel array read
--snip--
                   Serial Parall % of
Disk    Disk Size  MB/sec MB/sec Serial
------- ---------- ------ ------ ------
da0      3815447MB    163    163    100
da1      3815447MB    159    159    100
da2      3815447MB    160    160    100
da3      3815447MB    159    159    100
da4      3815447MB    161    161    100
da5      3815447MB    161    161    100
da6      3815447MB    163    163    100
da7      3815447MB    159    159    100
da8      3815447MB    158    158    100
da9      3815447MB    161    161    100
da10     3815447MB    163    163    100
da11     3815447MB    163    163    100


Performing initial parallel seek-stress array read
--snip--
                   Serial Parall % of
Disk    Disk Size  MB/sec MB/sec Serial
------- ---------- ------ ------ ------
da0      3815447MB    163     36     22
da1      3815447MB    159     38     24
da2      3815447MB    160     30     19
da3      3815447MB    159     37     23
da4      3815447MB    161     32     20
da5      3815447MB    161     34     21
da6      3815447MB    163     36     22
da7      3815447MB    159     36     23
da8      3815447MB    158     38     24
da9      3815447MB    161     32     20
da10     3815447MB    163     36     22
da11     3815447MB    163     35     21


Sooooo... I thought I would do a bit more testing. The first thing I did was reinstall FreeNAS from scratch just to make sure nothing was left over; then I tested the drives individually.

The interesting thing is that when I run a single instance of this command:

dd if=/dev/da3 of=/dev/null bs=8M

Everything looks fantastic, but the minute I launch a second copy of the same command, the performance drops from about 155MB/sec to roughly a quarter of that.

[Attached graph: da3.png]
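In case anyone wants to reproduce this, the test is nothing more than the following (a minimal sketch; the reads go to /dev/null, so it is non-destructive):

Code:
# First raw sequential read off the disk:
dd if=/dev/da3 of=/dev/null bs=8M &

# Second copy of the exact same command, started a little later; this is
# the point where throughput on da3 collapses:
dd if=/dev/da3 of=/dev/null bs=8M &

# Watch the drive while both are running (2-second samples):
iostat -d -w 2 da3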



When I did the same thing with writes instead, it looked a little better, but there was still a decrease:

dd if=/dev/zero of=/dev/da4 bs=8M

[Attached graph: da4.png]



All of this testing was leading me to believe that maybe I had a bad LSI 3008 card or a bad SAS expander. But before giving up, I decided to create a zpool and do some testing against it.

I created two vdevs, each with 6 x 4TB drives in RAIDZ2. I shut off compression:

[root@plexnas] ~# zpool status
  pool: freenas-boot
 state: ONLINE
  scan: none requested
config:

        NAME          STATE     READ WRITE CKSUM
        freenas-boot  ONLINE       0     0     0
          da12p2      ONLINE       0     0     0

errors: No known data errors

  pool: vol1
 state: ONLINE
  scan: none requested
config:

        NAME                                            STATE     READ WRITE CKSUM
        vol1                                            ONLINE       0     0     0
          raidz2-0                                      ONLINE       0     0     0
            gptid/5d1b0005-e0bf-11e4-b678-0cc47a31abcc  ONLINE       0     0     0
            gptid/5f37cd98-e0bf-11e4-b678-0cc47a31abcc  ONLINE       0     0     0
            gptid/615785b5-e0bf-11e4-b678-0cc47a31abcc  ONLINE       0     0     0
            gptid/63767979-e0bf-11e4-b678-0cc47a31abcc  ONLINE       0     0     0
            gptid/65813195-e0bf-11e4-b678-0cc47a31abcc  ONLINE       0     0     0
            gptid/67a00cb7-e0bf-11e4-b678-0cc47a31abcc  ONLINE       0     0     0
          raidz2-1                                      ONLINE       0     0     0
            gptid/69c9bc8a-e0bf-11e4-b678-0cc47a31abcc  ONLINE       0     0     0
            gptid/6be8c517-e0bf-11e4-b678-0cc47a31abcc  ONLINE       0     0     0
            gptid/6e12c162-e0bf-11e4-b678-0cc47a31abcc  ONLINE       0     0     0
            gptid/702832d5-e0bf-11e4-b678-0cc47a31abcc  ONLINE       0     0     0
            gptid/72467862-e0bf-11e4-b678-0cc47a31abcc  ONLINE       0     0     0
            gptid/74502b7c-e0bf-11e4-b678-0cc47a31abcc  ONLINE       0     0     0

errors: No known data errors

[root@plexnas] ~# zfs get compression vol1
NAME  PROPERTY     VALUE  SOURCE
vol1  compression  off    local
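(For the record, the pool was built through the FreeNAS GUI. The rough CLI equivalent, written here against raw device names for readability rather than the gptid labels shown above, would be something like this:)

Code:
# Illustrative only - FreeNAS actually partitions the disks and references them by gptid:
zpool create vol1 \
    raidz2 da0 da1 da2 da3 da4 da5 \
    raidz2 da6 da7 da8 da9 da10 da11

# Compression off for the benchmark:
zfs set compression=off vol1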


Then I ran the following command:

[root@plexnas] ~# dd if=/dev/zero of=/mnt/vol1/testfile bs=64M count=64000
64000+0 records in
64000+0 records out
4294967296000 bytes transferred in 3984.653195 secs (1077877317 bytes/sec)

Based on my earlier tests, I fully expected to see some great write performance since I saw the same thing when writing directly to the drives.

Here is what the drives look like:

[Attached graphs: screen 2015-04-11 at 10.17.20 PM.jpg, 10.17.42 PM.jpg, 10.17.59 PM.jpg]



So now it was on to the read tests. Again, based on testing directly to the drives, I expected to see some strange results, and I was not disappointed:

[Attached graphs: screen 2015-04-11 at 10.52.04 PM.jpg, 10.52.21 PM.jpg, 10.52.39 PM.jpg]



[root@plexnas] ~# dd if=/mnt/vol1/testfile of=/dev/null bs=64M count=64000
64000+0 records in
64000+0 records out
4294967296000 bytes transferred in 3991.578583 secs (1076007200 bytes/sec)


So while the graphs from the FreeNAS GUI show that reads appear to be slower, the output of the command-line dd shows that the reads and writes are happening at about the same speed (roughly 1GB/sec each).


So I guess I am looking for advice on next steps: what should I run to determine which of these results to believe? Some of the tests seem to show an issue with the system, and yet some of them (against the zpool) seem to show that it is performing well.

Any input or suggestions would be greatly appreciated!

Thanks
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
Nothing I saw seems particularly "far fetched", for various definitions of far fetched. Sorry, I don't feel like writing a long post since it's after midnight, so I'll keep this short and sweet.

Writes are handled on a per-vdev basis, while reads happen more on a per-disk basis.

12 disks in a single RAIDZ2 give you a theoretical throughput of 10 disks' worth of data. If you assume 100MB/sec per disk, that's about 1GB/sec. Now if you break that into two vdevs of 6 disks in RAIDZ2, you only have 8 data disks, so your theoretical throughput has dropped to about 800MB/sec. But since you have twice the vdevs, you will, in theory, get twice the possible IOPS. So you're robbing from one metric to improve another. You, as the server admin, are expected to decide what kind of I/O you need and what kind of throughput you need, and balance that against your choice of zpool layout.

Add to that the fact that ZFS will preferentially choose which vdev to store data in based on the throughput required, how full each vdev is, the amount of data you need to write, etc. So you shouldn't expect an even workload at all times, *especially* during testing. What you can expect is that all of these variations work themselves out to an average capability that meets your needs, if you have hardware that can meet those needs.

Your tests are very singular and give raw values that are only kind-of-sort-of useful. Actual pool performance will be much lower, especially when you start storing actual real-world data and using it in some kind of real-world scenario.
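To put rough numbers on the throughput-vs-IOPS tradeoff above (illustrative figures only, assuming ~100MB/sec streaming and ~100 random IOPS per spinning disk):

Code:
# One 12-disk RAIDZ2 vdev:  12 - 2 parity = 10 data disks
#   streaming  ~ 10 x 100MB/sec       = ~1000MB/sec
#   random I/O ~  1 vdev x ~100 IOPS  = ~100 IOPS
#
# Two 6-disk RAIDZ2 vdevs:  2 x (6 - 2) = 8 data disks
#   streaming  ~  8 x 100MB/sec       = ~800MB/sec
#   random I/O ~  2 vdevs x ~100 IOPS = ~200 IOPS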

Keep in mind that the LSI 3008 controller drivers are basically just above alpha quality. They are included for testing, but last I heard they shouldn't be trusted with real data and should not be expected to perform well, if at all. So yeah, for testing, what you are doing is fine. If you were testing with the intent of going into some kind of production environment, ditch the 3008 and get an M1015 or something until the 3008 driver has matured in the FreeBSD world.


Just a note: I don't know *anyone* who has come to the forum using a 3008 and said it worked fine and performed as expected. So clearly this isn't something you should even assume is going to happen. If you search around for 3008, I'm sure you'll see what I mean. ;)
 

HeloJunkie

Patron
Joined
Oct 15, 2014
Messages
300
Cyberjock -

Thanks for the reply and for the info... it is appreciated. This testing looked very different from the testing I had done before on my last Supermicro, so I was trying to see if I was missing something.

I had actually looked around, and this was one post that led me to believe the 3008 was OK to run:

https://forums.freenas.org/index.php?threads/doing-a-14-000-build-input-on-harddrive-choice-and-the-rest-of-the-components.28671/#post-187713

There are people who seem to be running it with FreeNAS, and it came on the motherboard, so I figured it was worth a try. I do run M1015s in my other machines and they tend to work well, but so far the 3008 seems to outperform my M1015s given the same zpool configurations.
 

mjws00

Guru
Joined
Jul 25, 2014
Messages
798
@jgreco has a 3108; @AltecBX, @depasseg, that thread, and a few more have done the CLN4F. We've seen an expander/drive enumeration issue, but no performance problems to speak of so far. Unfortunately it's gonna be a while until we see these boards en masse. Pretty sure depasseg has been plug and play. Personally, I'm not anticipating many (or any) problems for jgreco.

You seem to be perfectly positioned to split-test this against an M1015. That's what I'd be doing. I expect you'll find it is more about balance and ZFS than the HBA choice.
 

depasseg

FreeNAS Replicant
Joined
Sep 16, 2014
Messages
2,874
I have that exact same server. The only differences are the CPU, the RAM, and that I'm using SAS drives instead of SATA.

I also downgraded my 3008 FW to match the FreeNAS driver version (5).
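(In case you want to try the same downgrade, a rough sketch: this assumes you've pulled the Phase 5 IT firmware image from Supermicro and have the sas3flash utility on your boot media; the filename below is just a placeholder.)

Code:
# Check the current controller firmware/BIOS versions:
sas3flash -listall

# Flash the older IT firmware image (advanced/override mode):
sas3flash -o -f 3008IT_PH5.bin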

Run this:

[root@freenas1] ~# dmesg | grep mpr
mpr0: <LSI SAS3008> port 0xe000-0xe0ff mem 0xfb500000-0xfb50ffff irq 26 at device 0.0 on pci1
mpr0: IOCFacts :
mpr0: Firmware: 05.00.00.00, Driver: 05.255.05.00-fbsd
mpr0: IOCCapabilities: 7a85c<ScsiTaskFull,DiagTrace,SnapBuf,EEDP,TransRetry,EventReplay,MSIXIndex,HostDisc>
ses0 at mpr0 bus 0 scbus0 target 20 lun 0
da0 at mpr0 bus 0 scbus0 target 8 lun 0
da1 at mpr0 bus 0 scbus0 target 9 lun 0
da2 at mpr0 bus 0 scbus0 target 10 lun 0
da3 at mpr0 bus 0 scbus0 target 11 lun 0
da4 at mpr0 bus 0 scbus0 target 12 lun 0
da5 at mpr0 bus 0 scbus0 target 13 lun 0
da6 at mpr0 bus 0 scbus0 target 14 lun 0
da7 at mpr0 bus 0 scbus0 target 15 lun 0
da8 at mpr0 bus 0 scbus0 target 16 lun 0
da10 at mpr0 bus 0 scbus0 target 18 lun 0
da11 at mpr0 bus 0 scbus0 target 19 lun 0
da9 at mpr0 bus 0 scbus0 target 17 lun 0
[root@freenas1] ~#

I ran ( dd if=/dev/da3 of=/dev/null bs=8M) and then added a second instance at 14:22. I don't see a difference.

[Attached graph: upload_2015-4-12_14-29-9.png]
 

HeloJunkie

Patron
Joined
Oct 15, 2014
Messages
300
@depasseg -

Thanks. I will downgrade the firmware and see if I get anything different!
 

HeloJunkie

Patron
Joined
Oct 15, 2014
Messages
300
OK, FW downgraded to match FreeNAS; rerunning tests... will advise...


[root@plexnas] ~# dmesg | grep mpr
mpr0: <LSI SAS3008> port 0xe000-0xe0ff mem 0xfb200000-0xfb20ffff irq 26 at device 0.0 on pci1
mpr0: IOCFacts :
mpr0: Firmware: 05.00.00.00, Driver: 05.255.05.00-fbsd
mpr0: IOCCapabilities: 7a85c<ScsiTaskFull,DiagTrace,SnapBuf,EEDP,TransRetry,EventReplay,MSIXIndex,HostDisc>
ses0 at mpr0 bus 0 scbus0 target 20 lun 0
da0 at mpr0 bus 0 scbus0 target 8 lun 0
da1 at mpr0 bus 0 scbus0 target 9 lun 0
da2 at mpr0 bus 0 scbus0 target 10 lun 0
da3 at mpr0 bus 0 scbus0 target 11 lun 0
da4 at mpr0 bus 0 scbus0 target 12 lun 0
da5 at mpr0 bus 0 scbus0 target 13 lun 0
da6 at mpr0 bus 0 scbus0 target 14 lun 0
da7 at mpr0 bus 0 scbus0 target 15 lun 0
da8 at mpr0 bus 0 scbus0 target 16 lun 0
da9 at mpr0 bus 0 scbus0 target 17 lun 0
da10 at mpr0 bus 0 scbus0 target 18 lun 0
da11 at mpr0 bus 0 scbus0 target 19 lun 0
 

HeloJunkie

Patron
Joined
Oct 15, 2014
Messages
300
Well... still seeing the exact same results:

A single dd if=/dev/da3 of=/dev/null bs=8M works great; as soon as I start a second one, I get this:

[Attached graph: da3.png]



I also upgraded to the latest stable, FreeNAS-9.3-STABLE-201504100216.

No change there either.
 

depasseg

FreeNAS Replicant
Joined
Sep 16, 2014
Messages
2,874
Any chance you have a SAS drive lying around to test? :smile:
 

HeloJunkie

Patron
Joined
Oct 15, 2014
Messages
300
As a matter of fact, I think I do have some older 500GB 15K RPM SAS drives lying around. I'll grab them later today and plug them in for a test tomorrow morning! Good idea!
 

HeloJunkie

Patron
Joined
Oct 15, 2014
Messages
300
To do more testing, I created two RAIDZ2 vdevs of 6 drives each and ran the following command chains concurrently:

dd if=/dev/zero of=/mnt/Vol1/testfile bs=32M count=32000 ; dd if=/mnt/vol1/testfile of=/mnt/vol1/testfile.2 bs=32M ; dd if=/dev/zero of=/mnt/vol1/testfile bs=32M count=32000 ; dd if=/mnt/vol1/testfile of=/mnt/vol1/testfile.2 bs=32M
dd if=/dev/zero of=/mnt/Vol1/testfile.5 bs=32M count=32000 ; dd if=/mnt/vol1/testfile.5 of=/mnt/vol1/testfile.6 bs=32M ; dd if=/dev/zero of=/mnt/vol1/testfile.5 bs=32M count=32000 ; dd if=/mnt/vol1/testfile.5 of=/mnt/vol1/testfile.6 bs=32M
dd if=/dev/zero of=/mnt/Vol1/testfile.7 bs=32M count=32000 ; dd if=/mnt/vol1/testfile.7 of=/mnt/vol1/testfile.8 bs=32M ; dd if=/dev/zero of=/mnt/vol1/testfile.7 bs=32M count=32000 ; dd if=/mnt/vol1/testfile.7 of=/mnt/vol1/testfile.8 bs=32M

This is what I see on all of the 12 drives (almost identical output):

[Attached graph: da0_multiple_dd_to_zpool.png]


hummm.......
 

HeloJunkie

Patron
Joined
Oct 15, 2014
Messages
300
OK, replaced the cables from the motherboard to the SAS expander and swapped ports as well; same issue:

A single dd looks great, multiple dds drop like a stone:

dd if=/dev/da3 of=/dev/null bs=8M

[Attached graph: da3_multiple_dd_to_dev.png]



@depasseg - Can you tell me what revision your board is and what board BIOS you are running? Thanks
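(If it's easier than pulling the board, something like this should report it from the OS; I'm assuming dmidecode is present on the FreeNAS install.)

Code:
# Board model/revision and BIOS version as reported by SMBIOS:
dmidecode -t baseboard
dmidecode -t bios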
 

depasseg

FreeNAS Replicant
Joined
Sep 16, 2014
Messages
2,874
I bought the system in December. I don't have the board rev (the system is up, and I don't have enough slack in my cabling to pull the board out to check while it's running). :smile:

From IPMI:
Firmware Revision : 01.51
Firmware Build Time : 06/28/2014
BIOS Version : 1.0
BIOS Build Time : 07/02/2014
 

HeloJunkie

Patron
Joined
Oct 15, 2014
Messages
300
Awesome - thanks for the quick reply! I'm going to check that against my specs.

I'm starting to wonder if it is the particular drives I am using. These are the HGST HDN724040AL 7200RPM drives, all brand new...

I have some 4TB Reds and some 500GB SAS drives I will try tomorrow. I'm also going to throw in an M1015 that I have for testing and see if the problem goes away. If it is the drives, I suspect the problem will not go away; but since we have identical boards (albeit with different CPU and memory combinations, which should not cause this issue), we should otherwise be seeing the same results.
 

depasseg

FreeNAS Replicant
Joined
Sep 16, 2014
Messages
2,874
I'll try to vacate one of my LUNs so I can safely replace one of my SAS drives with a SATA drive and see what it looks like. Probably won't be until this weekend, though.
 

HeloJunkie

Patron
Joined
Oct 15, 2014
Messages
300
Thanks @depasseg - I am going to swap drives tomorrow morning as well and see what happens.
 

depasseg

FreeNAS Replicant
Joined
Sep 16, 2014
Messages
2,874
So I was playing around a little with iostat; I figured it might help provide more insight.

I ran your dd command: dd if=/dev/da0 of=/dev/null bs=8M

Code:
[root@freenas1] /mnt/tank# iostat -C -w 2 -d -t da /dev/da0
             da0              da1              da2             cpu
  KB/t tps  MB/s   KB/t tps  MB/s   KB/t tps  MB/s  us ni sy in id
61.97   6  0.39  61.89   7  0.39  62.69   7  0.40   0  0  0  0 99
19.37   9  0.18  19.37   9  0.18  18.78  11  0.21   0  0  1  0 99
  0.00   0  0.00   0.00   0  0.00   0.00   0  0.00   0  0  0  0 100
20.00   9  0.18  20.00   9  0.18  20.67  12  0.24   0  0  0  0 100
  0.00   0  0.00   0.00   0  0.00   0.00   0  0.00   0  0  0  0 100
  0.00   0  0.00   0.00   0  0.00   0.00   0  0.00   0  0  0  0 100
30.67   1  0.04  30.67   1  0.04  60.89   4  0.27   0  0  0  0 100
  0.00   0  0.00   0.00   0  0.00   0.00   0  0.00   0  0  0  0 100
<<<<<THIS IS WHEN I STARTED THE FIRST DD COMMAND>>>>>>>
127.86 1339 167.17   2.67   1  0.00  16.80  10  0.16   0  0  0  0 99
127.91 1367 170.73   2.67   1  0.00   0.00   0  0.00   0  0  0  0 99
127.95 1345 168.04   0.00   0  0.00   0.00   0  0.00   0  0  0  0 99
126.44 708 87.38  49.14  14  0.67  50.45  15  0.76   0  0  1  0 99
128.00 1382 172.73   0.00   0  0.00   0.00   0  0.00   0  0  0  0 99
127.86 1379 172.17   3.20   2  0.01   3.20   2  0.01   0  0  0  0 100
127.86 1354 169.11   0.00   0  0.00   0.00   0  0.00   0  0  0  0 99
128.00 1357 169.67   0.00   0  0.00   0.00   0  0.00   0  0  0  0 100
127.86 1377 171.92   3.20   2  0.01   9.81  15  0.15   0  0  0  0 99
127.86 1331 166.17   0.00   0  0.00   0.00   0  0.00   0  0  0  0 99
127.95 1353 169.10  21.33   1  0.03  20.00   4  0.08   0  0  0  0 99
127.80 1267 158.11  22.40   2  0.05  15.27   5  0.08   0  0  1  0 99
127.87 1317 164.44   0.00   0  0.00   0.00   0  0.00   0  0  0  0 99
127.20 1029 127.81  16.50   8  0.13  15.04  12  0.18   0  0  0  0 99
128.00 1384 172.98   0.00   0  0.00   0.00   0  0.00   0  0  1  0 99
127.60 1190 148.34  15.69   6  0.10  24.31   6  0.15   0  0  0  0 100
<<<<<THIS IS WHEN I STARTED THE SECOND DD COMMAND>>>>>>>
126.32 833 102.77   7.16   9  0.07   7.16   9  0.07   5  0  1  0 94
128.00 1365 170.66   0.00   0  0.00   0.00   0  0.00   3  0  1  0 96
128.00 1368 170.98  33.33   1  0.05  42.67   1  0.06   0  0  0  0 99
127.95 1355 169.36   0.00   0  0.00   0.00   0  0.00   0  0  1  0 99
127.99 1358 169.71   0.00   0  0.00   0.00   0  0.00   0  0  0  0 99
127.91 1334 166.61   9.33   1  0.01   9.33   1  0.01   0  0  0  0 100
127.95 1355 169.36   0.00   0  0.00   0.00   0  0.00   0  0  0  0 100
127.96 1355 169.36  17.00   2  0.03  17.00   2  0.03   0  0  0  0 100
127.95 1361 170.04   0.00   0  0.00   0.00   0  0.00   0  0  0  0 99
127.97 1351 168.81   0.00   0  0.00   0.00   0  0.00   0  0  0  0 99
127.95 1358 169.67   0.00   0  0.00   0.00   0  0.00   0  0  0  0 99
127.96 1352 168.99  17.00   2  0.03  17.00   2  0.03   0  0  0  0 99
127.95 1359 169.79   0.00   0  0.00   0.00   0  0.00   0  0  1  0 99
127.92 1333 166.56   0.00   0  0.00   0.00   0  0.00   0  0  1  0 99
127.88 1299 162.20  10.40   2  0.03  10.40   2  0.03   0  0  0  0 100
127.91 1332 166.36   0.00   0  0.00   0.00   0  0.00   0  0  1  0 99
^C
[root@freenas1] /mnt/tank# 
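It might also be worth keeping gstat running in another window while the two dd's are going; it shows queue depth (L(q)) and per-operation read latency (ms/r) for each disk, which might show whether the second reader is just seeking the drive to death. A suggested invocation (gstat is part of the base system):

Code:
# Refresh every 2 seconds, show only the da* providers:
gstat -I 2s -f '^da'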
 

HeloJunkie

Patron
Joined
Oct 15, 2014
Messages
300
Still very weird!

Started the command: dd if=/dev/da0 of=/dev/null bs=8M


Code:
[root@plexnas] ~# iostat -C -w 2 -d -t da /dev/da0
             da0              da1              da2             cpu
  KB/t tps  MB/s   KB/t tps  MB/s   KB/t tps  MB/s  us ni sy in id
104.37 494 50.35  104.08 487 49.50  104.03 487 49.52   0  0  5  0 95
128.00 1343 167.86   0.00   0  0.00   0.00   0  0.00   0  0  0  0 100
128.00 1283 160.36   0.00   0  0.00   0.00   0  0.00   0  0  0  0 100
128.00 1298 162.23   0.00   0  0.00   0.00   0  0.00   0  0  0  0 100
128.00 1297 162.11   0.00   0  0.00   0.00   0  0.00   0  0  0  0 100
128.00 1305 163.11   0.00   0  0.00   0.00   0  0.00   0  0  0  0 100
128.00 1300 162.48   0.00   0  0.00   0.00   0  0.00   0  0  0  0 100
128.00 1338 167.23   0.00   0  0.00   0.00   0  0.00   0  0  0  0 100
128.00 1281 160.17   0.00   0  0.00   0.00   0  0.00   0  0  0  0 100
128.00 1300 162.48   0.00   0  0.00   0.00   0  0.00   0  0  0  0 100
128.00 1294 161.73   0.00   0  0.00   0.00   0  0.00   0  0  0  0 100
128.00 1308 163.48   0.00   0  0.00   0.00   0  0.00   0  0  0  0 100
128.00 1302 162.79   0.00   0  0.00   0.00   0  0.00   0  0  0  0 100
128.00 1331 166.35   0.00   0  0.00   0.00   0  0.00   0  0  0  0 100
128.00 1274 159.23   0.00   0  0.00   0.00   0  0.00   0  0  0  0 100
128.00 1213 151.61   0.00   0  0.00   0.00   0  0.00   0  0  0  0 100

Start of second DD command:

128.00 431 53.85   0.00   0  0.00   0.00   0  0.00   0  0  0  0 100
128.00 241 30.17   0.00   0  0.00   0.00   0  0.00   0  0  0  0 100
128.00 228 28.49   0.00   0  0.00   0.00   0  0.00   0  0  0  0 100
123.64 334 40.37   5.67  12  0.07   5.67  12  0.07   0  0  0  0 100
128.00 220 27.49   0.00   0  0.00   0.00   0  0.00   0  0  0  0 100
128.00 229 28.67   0.00   0  0.00   0.00   0  0.00   0  0  0  0 100
128.00 216 27.05   0.00   0  0.00   0.00   0  0.00   0  0  0  0 100
128.00 222 27.74   0.00   0  0.00   0.00   0  0.00   0  0  0  0 100
128.00 225 28.17   0.00   0  0.00   0.00   0  0.00   0  0  0  0 100
128.00 225 28.11   0.00   0  0.00   0.00   0  0.00   0  0  0  0 100
128.00 228 28.55   0.00   0  0.00   0.00   0  0.00   0  0  0  0 100
128.00 230 28.80   0.00   0  0.00   0.00   0  0.00   0  0  0  0 100

Back to single DD command:

128.00 1169 146.11   0.00   0  0.00   0.00   0  0.00   0  0  0  0 100
128.00 1310 163.73   0.00   0  0.00   0.00   0  0.00   0  0  0  0 100
128.00 1327 165.92   0.00   0  0.00   0.00   0  0.00   0  0  0  0 100
128.00 1276 159.48   0.00   0  0.00   0.00   0  0.00   0  0  0  0 100
128.00 1307 163.36   0.00   0  0.00   0.00   0  0.00   0  0  0  0 100
128.00 1295 161.92   0.00   0  0.00   0.00   0  0.00   0  0  0  0 100
128.00 1304 162.98   0.00   0  0.00   0.00   0  0.00   0  0  0  0 100
128.00 1314 164.29   0.00   0  0.00   0.00   0  0.00   0  0  0  0 100
128.00 976 122.00   0.00   0  0.00   0.00   0  0.00   0  0  0  0 100




I have a couple of SSD drives I am going to install tomorrow and see if I see the same thing...
 

depasseg

FreeNAS Replicant
Joined
Sep 16, 2014
Messages
2,874
Your transactions per second are tanking. If you start a dd read on a second drive, does the first drive still suffer? That might narrow it down to controller vs. disk.
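Something like this, for example (still reads only, so non-destructive; da0 and da1 are just the first two disks):

Code:
# Sequential read on the first drive:
dd if=/dev/da0 of=/dev/null bs=8M &

# Sequential read on a *different* drive at the same time:
dd if=/dev/da1 of=/dev/null bs=8M &

# If da0 holds its ~160MB/sec while da1 is also reading, the
# controller/expander path looks fine and the drive itself is suspect:
iostat -d -w 2 da0 da1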
 

HeloJunkie

Patron
Joined
Oct 15, 2014
Messages
300
Nope - only when I hit the same drive more than once....

Started the command: dd if=/dev/da0 of=/dev/null bs=8M

Code:
[root@plexnas] ~# iostat -C -w 2 -d -t da /dev/da0
             da0              da1              da2             cpu
  KB/t tps  MB/s   KB/t tps  MB/s   KB/t tps  MB/s  us ni sy in id
 104.53 482 49.24  104.07 472 47.99  104.03 473 48.01   0  0  5  0 95
 126.55 1243 153.61   7.06  17  0.12   7.27  16  0.12   0  0  0  0 100
 128.00 1329 166.17   0.00   0  0.00   0.00   0  0.00   0  0  0  0 100
 128.00 1275 159.42   0.00   0  0.00   0.00   0  0.00   0  0  0  0 100
 128.00 1306 163.29   0.00   0  0.00   0.00   0  0.00   0  0  0  0 100
 128.00 1294 161.73   0.00   0  0.00   0.00   0  0.00   0  0  0  0 100
 128.00 1303 162.92   0.00   0  0.00   0.00   0  0.00   0  0  0  0 100

Started the command: dd if=/dev/da2 of=/dev/null bs=8M 

128.00 1314 164.29   0.00   0  0.00  128.00 433 54.10   0  0  0  0 100
 128.00 1324 165.48   0.00   0  0.00  128.00 1286 160.80   0  0  0  0 100
 128.00 1279 159.86   0.00   0  0.00  128.00 1184 147.99   0  0  0  0 100
 128.00 1300 162.54   0.00   0  0.00  128.00 1308 163.54   0  0  0  0 100
 126.61 1223 151.26   5.68  15  0.09  126.68 1243 153.76   0  0  0  0 100
 128.00 1304 162.98   0.00   0  0.00  128.00 1340 167.48   0  0  0  0 100
 128.00 1313 164.17   0.00   0  0.00  128.00 1342 167.79   0  0  0  0 99
 128.00 1325 165.67   0.00   0  0.00  128.00 1273 159.11   0  0  0  0 100
 