50MB/s Speeds in RAID0

Soarin

Cadet
Joined: Mar 23, 2019
Messages: 4
I recently received my RAID controller (an M1015) and was going to flash the card to IT mode; however, it turns out the eBay seller sent me an M5015 instead, so until they respond to my refund request I'm just testing it out.

Without RAID, with a drive plugged directly into a port on my motherboard, I get 110MB/s read and 53MB/s write. I test this by cd'ing into my pool and running the commands below.

Run the following command to test write speed:
# dd if=/dev/zero of=testfile bs=1024 count=50000

Run the following command to test read speed:
# dd if=testfile of=/dev/zero bs=1024 count=50000

I transfer files over the network and get 50MB/s, which is about right. Now I throw this temporary RAID controller into the mix and set up hardware RAID10 to see what the speeds would be like with my 4-drive configuration. I boot into FreeNAS, set everything up, and test the network transfer speeds, and I notice it's exactly the same as single-drive performance. I run the two commands again in the shell to see if performance increased there, and it's the same!

I continue by setting up RAID0 and re-creating the pool, and the results are the same: 50MB/s! I go to my pool and turn off atime and compression to see if that does anything; still no dice. I run iperf between my PC and my NAS, and here are the results:
[screenshot: iperf results between PC and NAS]
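(I did the atime/compression changes through the GUI, but they amount to something like the commands below, and the iperf run was along the same lines; "tank" and the IP address are just placeholders for my actual pool name and NAS address:)

# zfs set atime=off tank
# zfs set compression=off tank
# iperf -s (on the NAS)
# iperf -c 192.168.1.100 -t 30 (on the PC)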



I also checked whether my CPU or RAM was being maxed out (Xeon X3430 & 8GB ECC RAM); here are the results during a file transfer:
[screenshot: CPU and RAM usage during a file transfer]


My FreeNAS install is mostly stock, since I only installed it a few days ago and haven't really gotten to use it while waiting for my M1015 to arrive in the mail. Now I have to send the card back, since they sent me the wrong one.

If you need any more info, let me know. I hope this can be resolved! I still plan to get the M1015 even if the M5015 isn't the problem here.


OS Version:
FreeNAS-11.2-U2.1
(Build Date: Feb 27, 2019 20:59)
 


devnullius

Patron
Joined: Dec 9, 2015
Messages: 289
I know one thing… you should never, ever use hardware RAID under FreeNAS. People will hate you for it. All disks should be presented as JBOD, so ZFS gets direct access to them. Could that explain all of this?
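One quick way to check whether FreeNAS actually sees the raw disks, rather than a single virtual volume, would be something like:

# camcontrol devlist

If your four physical drives show up as one logical device, the controller is hiding them from ZFS.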
 

jgreco

Resident Grinch
Joined: May 29, 2011
Messages: 18,680
ZFS has a tendency to throw massive amounts of I/O at multiple ports simultaneously, and this can totally swamp a hardware RAID controller. The M5015 only has 512MB of RAM, and, following typical LSI practice, it probably allocates half of that to write cache, so 256MB each for read and write cache. I also noticed a long time ago that the MFI drivers for FreeBSD are a bit twitchy performance-wise. I didn't bother to investigate too far since it wasn't really that important to me.
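If you want to poke at how the card has its cache configured, mfiutil should talk to the mfi driver; something like the following (the volume ID here is just an example):

# mfiutil show adapter
# mfiutil cache 0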
 

Soarin

Cadet
Joined: Mar 23, 2019
Messages: 4
ZFS has a tendency to throw massive amounts of I/O at multiple ports simultaneously, and this can totally swamp a hardware RAID controller. The M5015 only has 512MB of RAM, and, following typical LSI practice, it probably allocates half of that to write cache, so 256MB each for read and write cache. I also noticed a long time ago that the MFI drivers for FreeBSD are a bit twitchy performance-wise. I didn't bother to investigate too far since it wasn't really that important to me.
Ah, alright. Three days later I'm still waiting for the eBay seller to respond to my refund request. I'll see whether that fixes my problems once the M1015 I originally ordered arrives.

I know one thing… you should never, ever use hardware RAID under FreeNAS. People will hate you for it. All disks should be presented as JBOD, so ZFS gets direct access to them. Could that explain all of this?

I don't have enough ports on my motherboard, and the eBay seller sent me the wrong product. :(
 

pasiz

Explorer
Joined: Oct 3, 2016
Messages: 62
I transfer files over the network and get 50MB/s, which is about right. Now I throw this temporary RAID controller into the mix and set up hardware RAID10 to see what the speeds would be like with my 4-drive configuration. I boot into FreeNAS, set everything up, and test the network transfer speeds, and I notice it's exactly the same as single-drive performance. I run the two commands again in the shell to see if performance increased there, and it's the same!

I continue by setting up RAID0 and re-creating the pool, and the results are the same: 50MB/s! I go to my pool and turn off atime and compression to see if that does anything; still no dice. I run iperf between my PC and my NAS, and here are the results:

OS Version:
FreeNAS-11.2-U2.1
(Build Date: Feb 27, 2019 20:59)


So how is dd performing? A network transfer involves too many moving parts to pinpoint a bottleneck with this amount of information.

If you are using Samba, you should watch htop while a transfer is running.

So, could you write to your pool with dd and post the results?
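Something along these lines would do, where /mnt/tank is just a placeholder for your pool's mount point:

# dd if=/dev/zero of=/mnt/tank/testfile bs=1M count=10000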
 

SweetAndLow

Sweet'NASty
Joined: Nov 6, 2013
Messages: 6,421
Run the following command to test read speed:
# dd if=testfile of=/dev/zero bs=1024 count=50000

This is wrong; it should be:
# dd if=testfile of=/dev/null bs=1024

You might also want to change the bs to 1M.
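Put together, the pair would look something like this (the sizes are just examples):

# dd if=/dev/zero of=testfile bs=1M count=5000
# dd if=testfile of=/dev/null bs=1M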
 

pasiz

Explorer
Joined: Oct 3, 2016
Messages: 62
This is wrong; it should be:
# dd if=testfile of=/dev/null bs=1024

You might also want to change the bs to 1M.

Why is it wrong? What if the testfile is 100 gigabytes and you want to read only 50 gigabytes of it to measure performance? Your version reads the whole file rather than an exact amount of data from it. An assumption doesn't make something wrong or right.

What is most wrong is writing a file full of zeroes to a modern file system; that way you are testing your processor's capabilities more than your disk system.

I'm still waiting for an answer: are we measuring network sharing performance or pool performance?
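To illustrate the point about count and compressible data, an incompressible test file can be built and then only partially read back, along these lines (the sizes are just examples):

# dd if=/dev/random of=testfile bs=1M count=2000
# dd if=testfile of=/dev/zero bs=1M count=1000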
 

SweetAndLow

Sweet'NASty
Joined: Nov 6, 2013
Messages: 6,421
Why is it wrong? What if the testfile is 100 gigabytes and you want to read only 50 gigabytes of it to measure performance? Your version reads the whole file rather than an exact amount of data from it. An assumption doesn't make something wrong or right.

What is most wrong is writing a file full of zeroes to a modern file system; that way you are testing your processor's capabilities more than your disk system.

I'm still waiting for an answer: are we measuring network sharing performance or pool performance?
Because you don't output to /dev/zero; you use /dev/null. Also, turn off compression, and note that a read test can only be executed once, because after that the reads will be served out of ARC.
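For a throwaway test dataset you could also keep file data out of ARC entirely with something like the following (the dataset name is just a placeholder):

# zfs set primarycache=metadata tank/test
# zfs set compression=off tank/test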
 

pasiz

Explorer
Joined: Oct 3, 2016
Messages: 62
Because you don't output to /dev/zero; you use /dev/null.
For what reason? I have used Unix since the 90s, and /dev/zero is used for anonymous memory mapping (yes, in the modern world they pass the MAP_ANONYMOUS flag to mmap()). You can use it as a source or a sink: everything written to the zero device is discarded, just like with /dev/null, so there's nothing wrong with using it as a sink.

Also, turn off compression, and note that a read test can only be executed once, because after that the reads will be served out of ARC.

Or reduce your ARC size temporarily for testing; I mostly prefer this method. If you generate the read-test data from /dev/random, it can be used on a compressed filesystem. This is of course just my opinion, but testing should be done as close to production conditions as possible.
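On FreeBSD/FreeNAS the ARC cap can be lowered at runtime via sysctl; for example, to roughly 1GB (the value is in bytes and is just an example figure):

# sysctl vfs.zfs.arc_max=1073741824

An ARC that has already grown past the new limit may take a while to shrink back down.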
 