Low speeds in a setup with multiple mirrored vdevs in one pool

Status
Not open for further replies.

nonifo

Cadet
Joined
May 30, 2016
Messages
7
I'm sitting here playing with my first FreeNAS setup,
but I can't get the read and write speeds up to anything sensible.
I've noticed that this forum seems to have a good handle on these things, so I wonder if you have any tips on what I should start checking?

I've tried to read up a lot, but I'm none the wiser.

My configuration looks like this. Not quite by the book, but it shouldn't be that crazy anyway:

Motherboard: Supermicro X9DRi-LN4+/X9DR3-LN4+
CPU: 2x Intel(R) Xeon(R) CPU E5-2660 @ 2.20GHz
Memory: 64GB DDR3 ECC REG

Controller 1: LSI SAS 9207-8i, dual-linked to an HP SAS expander, which feeds the backplane of my SC846TQ-R900B.
Attached to Controller 1: 6x WD Red 2TB, 2x Samsung F1 2TB, 8x WD Red 3TB

Controller 2: LSI SAS 9240 (IBM M1015) with IT firmware
(for future SSD use; I have 8x 128GB, but I haven't figured out how to set them up yet, need to read more first ;) )

Controller 3: HP P410 for the VMware ESXi 6.0 system disk

VM for FreeNAS: 12 cores, 16GB memory (waiting for another 64GB, then I'll give FreeNAS 64GB).
I pass Controller 1 straight through to FreeNAS, which seems to work great so far.

All disks are set up as mirrors (2 disks in each vdev) and striped into one pool.
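(For reference, this layout corresponds to a zpool create with several two-disk mirror groups listed in a row. This is only a sketch with hypothetical da1..da4 device names; the FreeNAS GUI does the equivalent for you, plus GPT partitioning:)

```
# Striped mirrors: each "mirror diskA diskB" group becomes one vdev,
# and ZFS stripes writes across all the vdevs in the pool.
zpool create platina \
  mirror da1 da2 \
  mirror da3 da4
```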

But I only get about 20-30MB/s on writes, and about the same on reads.
Could my expander card limit the speed this much?
Or could some disk be bad?


Do you have any tips?

I have ordered 2 more IBM M1015s, so technically I can go native when I get them, but until then I want to get everything working.
I need to play with my setup a bit :) It's itching in my fingers, you know =)

Sorry for my bad English, but I hope you'll understand me anyway ;) (Swedish)
 

Spearfoot

He of the long foot
Moderator
Joined
May 13, 2015
Messages
2,478
You're virtualizing FreeNAS, right?

Make sure you reserve/lock all of the memory you provision for it (Settings->Resources->Memory).

Try reducing the number of vCPUs to 2 from the 12 you're using now.

Are you using the VMXNET or E1000 network drivers?
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,994
Also, you might want to tell us how you are testing the read/write speed. And during your testing, connect the Ethernet cable directly to your computer. It's possible you are having network issues.
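For a quick local baseline, something like the following dd pair is a common starting point (a sketch only; `/tmp/testfile` is a placeholder path, so run it on the pool's actual mountpoint, and bear in mind that /dev/zero is misleading on compressed datasets):

```shell
# Sequential write then read-back. bs=1M keeps the test from being
# syscall-bound the way tiny block sizes (e.g. bs=1024) are.
dd if=/dev/zero of=/tmp/testfile bs=1M count=100   # write test
dd if=/tmp/testfile of=/dev/null bs=1M             # read test
rm /tmp/testfile
```

dd reports bytes/sec on its last output line, which is the number worth comparing between runs.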
 

nonifo

Cadet
Joined
May 30, 2016
Messages
7
You're virtualizing FreeNAS, right?

Make sure you reserve/lock all of the memory you provision for it (Settings->Resources->Memory).

Try reducing the number of vCPUs to 2 from the 12 you're using now.

Are you using the VMXNET or E1000 network drivers?

Yes, I'm virtualizing FreeNAS.
The memory is locked; I'm forced to do that when I pass through I/O devices (like my LSI HBA controller).
Reduce the vCPUs? OK, I can try it, but I don't understand why?
The network is passed through too (I have a quad-port Intel card in the server, so two of the ports are attached directly to FreeNAS),
plus one virtual E1000 for management and the local iSCSI connection to the hosting VMware setup.
 

nonifo

Cadet
Joined
May 30, 2016
Messages
7
Also, you might want to tell us how you are testing the read/write speed. And during your testing, connect the Ethernet cable directly to your computer. It's possible you are having network issues.


First, my disk setup:

[nonifo@freenas1] /mnt/vm1# zpool status
  pool: freenas-boot
 state: ONLINE
  scan: none requested
config:

        NAME          STATE     READ WRITE CKSUM
        freenas-boot  ONLINE       0     0     0
          da0p2       ONLINE       0     0     0

errors: No known data errors

  pool: platina
 state: ONLINE
  scan: scrub canceled on Sat May 28 16:06:26 2016
config:

        NAME                                            STATE     READ WRITE CKSUM
        platina                                         ONLINE       0     0     0
          mirror-0                                      ONLINE       0     0     0
            gptid/39adffb8-22ab-11e6-9a26-000c29099830  ONLINE       0     0     0
            gptid/3a9eebc8-22ab-11e6-9a26-000c29099830  ONLINE       0     0     0
          mirror-1                                      ONLINE       0     0     0
            gptid/fb8eedda-22ad-11e6-b5bd-000c29099830  ONLINE       0     0     0
            gptid/fc794498-22ad-11e6-b5bd-000c29099830  ONLINE       0     0     0
          mirror-2                                      ONLINE       0     0     0
            gptid/1e1c3139-22ae-11e6-b5bd-000c29099830  ONLINE       0     0     0
            gptid/1f0494bf-22ae-11e6-b5bd-000c29099830  ONLINE       0     0     0
          mirror-3                                      ONLINE       0     0     0
            gptid/5a57b78d-22ae-11e6-b5bd-000c29099830  ONLINE       0     0     0
            gptid/5d1d1737-22ae-11e6-b5bd-000c29099830  ONLINE       0     0     0
          mirror-4                                      ONLINE       0     0     0
            gptid/2da8d2d0-2419-11e6-8f22-000c29099830  ONLINE       0     0     0
            gptid/2e7198b2-2419-11e6-8f22-000c29099830  ONLINE       0     0     0
          mirror-5                                      ONLINE       0     0     0
            gptid/50d54737-2419-11e6-8f22-000c29099830  ONLINE       0     0     0
            gptid/51d30549-2419-11e6-8f22-000c29099830  ONLINE       0     0     0
          mirror-6                                      ONLINE       0     0     0
            gptid/9e61a593-2419-11e6-8f22-000c29099830  ONLINE       0     0     0
            gptid/a05bb034-2419-11e6-8f22-000c29099830  ONLINE       0     0     0
          mirror-7                                      ONLINE       0     0     0
            gptid/da6b423a-2419-11e6-8f22-000c29099830  ONLINE       0     0     0
            gptid/dc618a33-2419-11e6-8f22-000c29099830  ONLINE       0     0     0

errors: No known data errors

  pool: vm1
 state: ONLINE
  scan: none requested
config:

        NAME                                            STATE     READ WRITE CKSUM
        vm1                                             ONLINE       0     0     0
          mirror-0                                      ONLINE       0     0     0
            gptid/fb8892f3-242a-11e6-b84e-000c29099830  ONLINE       0     0     0
            gptid/fc687893-242a-11e6-b84e-000c29099830  ONLINE       0     0     0
          mirror-1                                      ONLINE       0     0     0
            gptid/fd953605-242a-11e6-b84e-000c29099830  ONLINE       0     0     0
            gptid/fe6e0ca2-242a-11e6-b84e-000c29099830  ONLINE       0     0     0

errors: No known data errors



platina is the pool of mechanical disks.
vm1 is built from the SSDs.


For write (95.15202 MB/sec)
[nonifo@freenas1] /mnt/vm1# dd if=/dev/zero of=testfile bs=1024 count=500000
500000+0 records in
500000+0 records out
512000000 bytes transferred in 5.131591 secs (99774128 bytes/sec)

For read (62.05477 MB/sec)
[nonifo@freenas1] /mnt/vm1# dd if=testfile of=/dev/zero bs=1024 count=500000
500000+0 records in
500000+0 records out
512000000 bytes transferred in 7.868553 secs (65069144 bytes/sec)

and then the same on platina ;)

For write (79.9973 MB/sec)
[nonifo@freenas1] /mnt/platina# dd if=/dev/zero of=testfile bs=1024 count=500000
500000+0 records in
500000+0 records out
512000000 bytes transferred in 6.103722 secs (83883246 bytes/sec)


For read (176.68764 MB/sec)
[nonifo@freenas1] /mnt/platina# dd if=testfile of=/dev/zero bs=1024 count=500000
500000+0 records in
500000+0 records out
512000000 bytes transferred in 2.763528 secs (185270415 bytes/sec)
[nonifo@freenas1] /mnt/platina#


Today's tests are better than last time, but they still don't seem right; I expected a lot more performance from this setup.

Local time here is now 23:03, so time for bed; workday tomorrow ;)
 

Spearfoot

He of the long foot
Moderator
Joined
May 13, 2015
Messages
2,478
Yes, I'm virtualizing FreeNAS.
The memory is locked; I'm forced to do that when I pass through I/O devices (like my LSI HBA controller).

I don't understand what you mean here; I've never heard of I/O passthrough having anything to do with reserving memory for a virtual machine. I'm talking about reserving all of the FreeNAS VM's memory, like this:

[Attached screenshot: lock-vm-memory.jpg, showing the VM memory reservation setting]

Reduce the vCPU? ok I can try it, but i don't understand why?
Sometimes fewer vCPUs gives better performance; see this comment (the entire thread would be a good read for you as well):

https://forums.freenas.org/index.ph...completely-losing-your-data.12714/#post-59656

Network are passdtrught to, ( got quad intel card in my server so two of the card are attached to just FreeNAS)
+ one E1000 virtual for management and local iSCSI connection to hosting VMware setup.
Yikes! A complicated network setup and you're having network issues! Who could have known? :)

You have passed 2 of the motherboard's LAN ports through to the FreeNAS VM: how do you have the relevant LAGG/LACP set up? What kind of switch are you connecting to, and how is the switch configured? All of these things have a huge impact on networking, not just with respect to performance but also whether or not it even works at all.
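(For reference, on a plain FreeBSD/FreeNAS box a two-port LACP lagg boils down to rc.conf entries like the sketch below. The igb0/igb1 interface names are hypothetical, FreeNAS would normally configure this through the GUI, and the switch ports must be set up for LACP as well:)

```
cloned_interfaces="lagg0"
ifconfig_igb0="up"
ifconfig_igb1="up"
ifconfig_lagg0="laggproto lacp laggport igb0 laggport igb1 DHCP"
```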
 

nonifo

Cadet
Joined
May 30, 2016
Messages
7
I don't understand what you mean here; I've never heard of I/O passthrough having anything to do with reserving memory for a virtual machine. I'm talking about reserving all of the FreeNAS VM's memory, like this:

View attachment 12041


Sometimes fewer vCPUs gives better performance; see this comment (the entire thread would be a good read for you as well):

https://forums.freenas.org/index.ph...completely-losing-your-data.12714/#post-59656


Yikes! A complicated network setup and you're having network issues! Who could have known? :)

You have passed 2 of the motherboard's LAN ports through to the FreeNAS VM: how do you have the relevant LAGG/LACP set up? What kind of switch are you connecting to, and how is the switch configured? All of these things have a huge impact on networking, not just with respect to performance but also whether or not it even works at all.

First of all, my problem is not the network. I get slow speeds when testing locally.
The network works perfectly. The one local E1000 network connects locally on my ESXi host to the other VMs, which is way faster than the physical "slow" 1Gbit card. I've been looking at 10Gbit cards, but I'm saving that for the future.
Two physical network interfaces are for the local network, but only one is in use right now. Later I will start using both, one for local traffic and one for external traffic, with my MikroTik RB1200 router handling the routing.
 

Spearfoot

He of the long foot
Moderator
Joined
May 13, 2015
Messages
2,478
First of all, my problem is not the network. I get slow speeds when testing locally.
The network works perfectly. The one local E1000 network connects locally on my ESXi host to the other VMs, which is way faster than the physical "slow" 1Gbit card. I've been looking at 10Gbit cards, but I'm saving that for the future.
Two physical network interfaces are for the local network, but only one is in use right now. Later I will start using both, one for local traffic and one for external traffic, with my MikroTik RB1200 router handling the routing.
Sorry, I confused this thread with another one where a gentleman is having NFS problems. :smile:

Did you try reducing the VM's vCPU count and locking its memory?
 

nonifo

Cadet
Joined
May 30, 2016
Messages
7
Sorry, I confused this thread with another one where a gentleman is having NFS problems. :)

Did you try reducing the VM's vCPU count and locking its memory?

Hehe, we're only human, right? :)
No, I'm lying in bed trying to get some sleep, but I got caught up reading the links provided above on my phone :)

I'll try it tomorrow. It's 23:51 local time right now :)
 

depasseg

FreeNAS Replicant
Joined
Sep 16, 2014
Messages
2,874
Reduce the vCPU? ok I can try it, but i don't understand why?
When you get a chance, watch this video. It's a VMware developer presenting at VMworld 2015, going through the impact of having too many vCPUs. It's very technical, but it's a great explanation of what's going on.
 

titan_rw

Guru
Joined
Sep 1, 2012
Messages
586
Assuming this is what I think it is as far as cpu resource scheduling conflicts, it'll show up as "co-stop" under the performance graphs in vmware.
 

nonifo

Cadet
Joined
May 30, 2016
Messages
7
Now I have increased the memory and checked that it is locked.
I have reduced the CPU count; right now it's only 2 vCPUs for my FreeNAS, but still the same result: far too slow write and read speeds.

What should 16 disks in mirrored pairs (2 disks/mirror x 8 vdevs) perform like on read/write? Am I wrong in thinking read speed should be 8x a single HDD's speed and write 2x a single HDD's speed?
 

Spearfoot

He of the long foot
Moderator
Joined
May 13, 2015
Messages
2,478
Now I have increased the memory and checked that it is locked.
I have reduced the CPU count; right now it's only 2 vCPUs for my FreeNAS, but still the same result: far too slow write and read speeds.

What should 16 disks in mirrored pairs (2 disks/mirror x 8 vdevs) perform like on read/write? Am I wrong in thinking read speed should be 8x a single HDD's speed and write 2x a single HDD's speed?
No, read/write speed will not be a multiple like that.

What results did you get? Can you post them here, in CODE tags?
 

nonifo

Cadet
Joined
May 30, 2016
Messages
7
No, read/write speed will not be a multiple like that.

What results did you get? Can you post them here, in CODE tags?

Code:
(VM1 = 2 ssd mirror x 2)
[nonifo@freenas1] /mnt/vm1# dd if=/dev/zero of=testfile bs=1024 count=500000
500000+0 records in
500000+0 records out
512000000 bytes transferred in 4.092292 secs (125113260 bytes/sec)
[nonifo@freenas1] /mnt/vm1# dd if=testfile of=/dev/zero bs=1024 count=500000
500000+0 records in
500000+0 records out
512000000 bytes transferred in 7.311968 secs (70022187 bytes/sec)

(platina = 2 hdd mirror x 8)
[nonifo@freenas1] /mnt/platina# dd if=/dev/zero of=testfile bs=1024 count=500000
500000+0 records in
500000+0 records out
512000000 bytes transferred in 4.370142 secs (117158665 bytes/sec)
[nonifo@freenas1] /mnt/platina# dd if=testfile of=/dev/zero bs=1024 count=500000
500000+0 records in
500000+0 records out
512000000 bytes transferred in 6.420726 secs (79741763 bytes/sec)
[nonifo@freenas1] /mnt/platina#
 

Spearfoot

He of the long foot
Moderator
Joined
May 13, 2015
Messages
2,478
Baffling... it makes sense that there isn't much difference between the spinning disks and the SSDs, because a small test like this should mostly be hitting cache memory, right? But your speeds are just a little over half of what I get on a RAIDZ2 HDD array, and I'm running with 16GB of RAM, just like you:

Code:
[root@boomer] /mnt/tank/sysadmin# dd if=/dev/zero of=testfile bs=1024 count=500000
500000+0 records in
500000+0 records out
512000000 bytes transferred in 2.140815 secs (239161252 bytes/sec)
[root@boomer] /mnt/tank/sysadmin# dd if=testfile of=/dev/zero bs=1024 count=500000
500000+0 records in
500000+0 records out
512000000 bytes transferred in 4.581515 secs (111753425 bytes/sec)


Is it possible you're running at SAS speeds (3Gbps) and not SAS2 (6Gbps)? Is the HP SAS expander you're using a SAS2-compatible device? One way to check would be to look at this log:

/var/log/messages

It will list the disks and their transfer rate, like this:

Code:
Jun  1 22:58:27 boomer da2: <ATA HGST HUS724020AL A580> Fixed Direct Access SPC-4 SCSI device
Jun  1 22:58:27 boomer da2: Serial Number  PK2134P5G3PNVX
Jun  1 22:58:27 boomer da2: 600.000MB/s transfers
Jun  1 22:58:27 boomer da2: Command Queueing enabled
Jun  1 22:58:27 boomer da2: 1907729MB (3907029168 512 byte sectors)
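A quick way to pull just those link-speed lines out of the log is a simple grep. Shown here against a sample line so the snippet is self-contained; on the real box you would point it at /var/log/messages:

```shell
# Filter for the negotiated transfer-rate lines (SAS2 disks show
# 600.000MB/s; older SAS/SATA links show 300.000MB/s or 150.000MB/s).
printf 'Jun  1 22:58:27 boomer da2: 600.000MB/s transfers\n' \
  | grep 'MB/s transfers'
# On the actual system:
#   grep 'MB/s transfers' /var/log/messages
```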
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,994
First of all, that is a poor test to use, other than to see if you are in the general ballpark, because writing zeros means nothing when you have compression turned on (the default). Can you post whether you have compression turned on or not?

My results for my ESXi machine for the same tests are:
Code:
[root@freenas] /tmp# dd if=/dev/zero of=testfile bs=1024 count=500000
500000+0 records in
500000+0 records out
512000000 bytes transferred in 0.502116 secs (1019684766 bytes/sec)
[root@freenas] /tmp# dd if=testfile of=/dev/zero bs=1024 count=500000
500000+0 records in
500000+0 records out
512000000 bytes transferred in 0.348988 secs (1467099921 bytes/sec)


To a share without compression:
Code:
[root@freenas] /mnt/farm/backups# dd if=/dev/zero of=testfile bs=1024 count=500000
500000+0 records in
500000+0 records out
512000000 bytes transferred in 2.334359 secs (219332166 bytes/sec)
[root@freenas] /mnt/farm/backups# dd if=testfile of=/dev/zero bs=1024 count=500000
500000+0 records in
500000+0 records out
512000000 bytes transferred in 1.030850 secs (496677531 bytes/sec)


Also, I'm not saying compression is the issue, but we do have to rule it out as a possibility.
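To see why /dev/zero is such a bad test source when compression is on: a file of zeros compresses to almost nothing, so ZFS barely touches the disks at all. A small demonstration, using gzip only as a stand-in for ZFS's own compression:

```shell
# 10 MiB of zeros...
dd if=/dev/zero of=/tmp/zeros bs=1M count=10 2>/dev/null
# ...compresses down to a few KB, so a "write" of zeros measures the
# compressor, not the disks.
gzip -f /tmp/zeros          # produces /tmp/zeros.gz
wc -c /tmp/zeros.gz
```

On the pool itself, `zfs get compression poolname` shows whether compression is enabled for a given dataset.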
 