FreeNAS VM and Proxmox performance


wtfR6a

Explorer
Joined
Jan 9, 2016
Messages
88
I was interested in virtualising a FreeNAS instance that is only responsible for some temporary storage for security camera recordings, and found performance under Proxmox to be pretty awful. I'm curious if anyone else has observed this, or if there's a magic CPU flag that fixes it.

Hardware is a Supermicro X9SRL-F, 64GB RAM, a Xeon E5-1620 v2, an LSI 9300-8i HBA, and an Intel X520 10GbE network card.
Running Proxmox 4.3-12/6894c9d9
FreeNAS-9.10.1-U4 (ec9a7d3)
There are no other VMs running on this system during these tests.

I'm testing with a simple 'dd' command:

Code:
dd if=/dev/zero of=/mnt/tankssd/dd.tst bs=2048000 count=262000


I've tested and confirmed identical performance with the following options; a rough sketch of the VM config follows the list:
  • 2 and 4 cores
  • CPU=host and QEMU64
  • 16GB and 32GB RAM allocated
  • WD Red 4TB and Samsung 850 Pro 256GB
  • 10% free vs 100% free pool space
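For reference, the VM config I'm describing looks roughly like this; the VM ID, PCI address and MAC below are placeholders rather than my real values, and the inline annotations are just for this post:

Code:
# /etc/pve/qemu-server/100.conf (example values)
ostype: other
sockets: 1
cores: 4                                      # also tried 2
cpu: host                                     # also tried qemu64
memory: 32768                                 # also tried 16384
balloon: 0                                    # full allocation stays reserved
hostpci0: 02:00.0                             # LSI 9300-8i HBA passed through
net0: virtio=DE:AD:BE:EF:00:01,bridge=vmbr0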
With a non-compressed pool (this is fine):

Code:
                                            capacity     operations    bandwidth
pool                                      alloc   free   read  write   read  write
----------------------------------------  -----  -----  -----  -----  -----  -----
tankssd                                   14.0G   222G      0  3.90K      0   496M
  mirror                                  14.0G   222G      0  3.90K      0   496M
    gptid/f090ced5-bb64-11e6-97fc-171bf529e9b8      -      -      0  3.91K      0   496M
    gptid/f0ce17b9-bb64-11e6-97fc-171bf529e9b8      -      -      0  3.90K      0   496M
----------------------------------------  -----  -----  -----  -----  -----  -----


and
Code:
[root@freenas2] ~# vmstat 1
procs      memory      page                    disks     faults         cpu
r b w     avm    fre   flt  re  pi  po    fr  sr da0 da1   in   sy   cs us sy id
2 0 0   3903M	27G  6094   0   1   0 10833 134 10333 10333  111 7107 1439  5 10 86
1 0 0   3903M	26G  4735   0   0   0  8892 126 4022 4016 7740 3795 56611  4 53 43
1 0 0   3903M	26G  9677   0   0   0 17761 126 3635 3643 6971 5057 38518  6 77 17
2 0 0   3903M	25G  4684   0   0   0  8883 126 3981 3978 7697 3609 55220  3 61 36
1 0 0   3903M	25G   489   0   0   0	 0 126 3998 3994 7763 2138 56854  1 55 44
1 0 0   3903M	24G  5016   0   0   0  8882 125 4005 4000 7856 3732 56925  4 57 39
2 0 0   3896M	23G  7917   0   0   0 17659 126 3933 3945 7745 4565 52515  5 60 34
0 0 0   3903M	23G 11038   0   0   0 17857 126 3125 3124 5771 5521 29260 11 78 12
1 0 0   3903M	22G  5554   0   0   0  9140 126 3999 3998 7869 5962 56800  5 59 37
1 0 0   3903M	22G  4723   0   0   0  8945 126 3989 3983 7754 3575 55717  3 53 44
1 0 0   3903M	21G  5020   0   0   0  8882 126 3998 3994 7727 3834 55828  4 56 41
0 0 0   3903M	20G  5016   0   0   0  8881 126 4022 4019 7708 3702 54599  4 50 46
2 0 0   3896M	20G  7855   0   0   0 17653 126 3945 3962 7734 3990 42821  5 41 54
1 0 0   3903M	20G  2274   0   0   0   286 126 3930 3926 7416 2950 39975  2 69 29
1 0 0   3903M	20G  4891   0   0   0  8880 126 4027 4022 7751 3450 51036  4 40 56
0 0 0   3903M	19G  4735   0   0   0  8881 126 4030 4023 7762 3499 52647  3 44 53
0 0 0   3903M	19G  5014   0   0   0  8881 126 3995 3990 7736 3520 53165  4 47 49
0 0 0   3903M	18G 14347   0   0   0 26658 124 3946 3944 7579 6812 49687 11 51 38
1 0 0   3903M	18G  5151   0   0   0  8893 126 3789 3809 7297 5343 46307  4 57 39
1 0 0   3903M	17G  5022   0   0   0  8881 126 3919 3914 7368 3564 43008  3 70 26
1 0 0   3903M	17G  5015   0   0   0  8881 126 3986 3982 7680 3672 52283  3 44 53
2 0 0   3889M	16G   318   0   0   0  8397 126 4016 4010 7761 2286 54775  0 49 51
1 0 0   3903M	16G  9458   0   0   0  9363 125 3991 3986 7551 4684 50820  7 49 44
1 0 0   3894M	15G  5423   0   0   0  9567 125 3926 3941 7494 4071 46480  3 67 29
0 0 0   3894M	15G  4640   0   0   0  8880 125 3765 3772 7240 3555 44864  4 69 28
0 0 0   3894M	15G  5017   0   0   0  8882 125 3997 4003 7771 3559 53724  4 39 57
1 0 0   3894M	14G  5033   0   0   0  8882 125 3976 3974 7723 3555 53041  4 44 52
4 0 0   3895M	14G 13998   0   0   0 26644 125 3938 3928 7589 7194 49667 11 47 42
1 0 0   3894M	13G  5734   0   0   0  8923 125 3515 3509 6758 4998 41630  4 65 32



With lz4 compression enabled on the pool (this blows):

Code:
                                            capacity     operations    bandwidth
pool                                      alloc   free   read  write   read  write
----------------------------------------  -----  -----  -----  -----  -----  -----
tankssd                                   2.91M   236G      0  2.71K      0  10.8M
  mirror                                  2.91M   236G      0  2.71K      0  10.8M
    gptid/f090ced5-bb64-11e6-97fc-171bf529e9b8      -      -      0    651      0  11.2M
    gptid/f0ce17b9-bb64-11e6-97fc-171bf529e9b8      -      -      0    650      0  11.2M
----------------------------------------  -----  -----  -----  -----  -----  -----


and

Code:
procs      memory      page                    disks     faults         cpu
r b w     avm    fre   flt  re  pi  po    fr  sr da0 da1   in   sy   cs us sy id
2 0 0   3912M	29G  5030   0   0   0  8885 125 631 632 1282 6575 8299  3 85 13
2 0 0   3905M	29G  3811   0   0   0  8777 126 479 479  972 5983 6623  1 91  8
1 0 0   3912M	29G  1322   0   0   0   106 125 593 589 1160 5180 7682  4 87  9
2 0 0   3894M	29G  5644   0   0   0 17658 125 606 604 1177 7213 7714  4 88  8
3 0 0   3912M	29G  8778   0   0   0  9005 125 411 408  797 7034 5715  7 87  5
2 0 0   3912M	29G  5032   0   0   0  8884 122 637 636 1238 6326 8550  3 87 10
2 0 0   3912M	29G  4895   0   0   0  8884 125 575 572 1178 6473 8044  4 87  9
1 0 0   3912M	29G  9681   0   0   0 17767 125 390 386  762 7943 5466  6 86  8
1 0 0   3912M	29G  5322   0   0   0  8892 125 557 556 1083 7985 7547  3 86 11
2 0 0   3905M	29G  8752   0   0   0 17659 125 576 574 1115 7220 7593  5 86  9
1 0 0   3912M	29G  1346   0   0   0   105 125 736 736 1439 5313 9613  2 86 12
1 0 0   3912M	29G  4733   0   0   0  8884 130 575 574 1111 6172 7770  4 85 11
1 0 0   3912M	29G  5015   0   0   0  8883 125 637 635 1222 6336 8452  5 84 11
2 0 0   3912M	29G  4735   0   0   0  8884 122 486 485  955 6425 6608  4 88  9
2 0 0   3912M	29G  9642   0   0   0 17764 125 412 412  805 7852 5746  7 84  9
1 0 0   3912M	29G  5036   0   0   0  8885 125 623 621 1207 6485 8155  4 85 11
2 0 0   3903M	29G  3126   0   0   0  8778 125 631 631 1223 5781 8357  1 88 11
1 0 0   3912M	29G  6653   0   0   0  9005 125 531 530 1005 6613 7158  6 85  9
2 0 0   3989M	29G 11428   0   0   0 12861 125 606 604 1193 14699 7999 12 83  5
2 0 0   3989M	29G  9357   0   0   0 17776 128 695 693 1381 7415 8860 30 70  0
2 0 0   3989M	29G  5049   0   0   0  8886 128 810 810 1620 5632 10132 32 68  0
2 0 0   3989M	29G  5016   0   0   0  8883 129 723 723 1448 6001 9186 23 77  0
3 0 0   3989M	29G  4729   0   0   0  8883 129 721 721 1449 6064 9290 23 77  0
2 0 0   3989M	29G  5024   0   0   0  8883 128 826 825 1660 7290 10556 37 63  0
3 0 0   3983M	29G  9417   0   0   0 17663 128 821 823 1618 6571 9923 39 61  0
2 0 0   3989M	29G   400   0   0   0   104 128 949 948 1885 4330 11694 38 62  0
2 0 0   3989M	29G  5041   0   0   0  8891 128 832 828 1657 5647 10438 35 65  0
2 0 0   3989M	29G  9635   0   0   0 17768 130 738 738 1467 7358 9316 33 67  0



Performance drops dramatically from 500MB/s to 10MB/s on the SSD mirror, and from 120MB/s to 10MB/s on the WD Red, due to what looks like a CPU bottleneck, but adding cores doesn't help. Is there another solution, or is it just a case of FreeNAS not playing well with Proxmox?
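For anyone wanting to reproduce this, something along these lines should flip between the two results above (I'm assuming here that simply toggling the compression property on the pool between runs is equivalent to how the two cases were set up):

Code:
# check the current setting
zfs get compression,compressratio tankssd

# lz4 run
zfs set compression=lz4 tankssd
dd if=/dev/zero of=/mnt/tankssd/dd.tst bs=2048000 count=262000

# uncompressed run
zfs set compression=off tankssd
dd if=/dev/zero of=/mnt/tankssd/dd.tst bs=2048000 count=262000

# in a second shell, watch throughput and CPU while dd runs
zpool iostat -v tankssd 1
vmstat 1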
 

darkwarrior

Patron
Joined
Mar 29, 2015
Messages
336
Hi there,

If you really want something worthwhile, then just ditch Proxmox and pull out a real/serious hypervisor like VMware (btw, it's the default go-to solution).
I hope you are using the configuration equivalents of PCI passthrough, memory reservation, and other optimizations to avoid additional latencies...
 

wtfR6a

Explorer
Joined
Jan 9, 2016
Messages
88
Yup, confirmed: using reserved memory and passing the PCI HBA through, etc.
I've had good success with Proxmox previously but can take a look at VMware. I'd really like to virtualize my backup system if possible.
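For what it's worth, here's roughly how I sanity-check the passthrough from inside the FreeNAS guest (this assumes the 9300-8i attaches to FreeBSD's mpr SAS3 driver):

Code:
# the HBA should show up as a real Avago/LSI SAS3008 device,
# not an emulated controller
pciconf -lv

# confirm the SAS3 driver attached and the disks are visible behind it
dmesg | grep -i mpr
camcontrol devlist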
 

darkwarrior

Patron
Joined
Mar 29, 2015
Messages
336
I'm running my FreeNAS instance virtualized under VMware and have never had any issues, so it's definitely doable if you know what you are doing (and it looks like you do) ;)
 

Donny Davis

Contributor
Joined
Jul 31, 2015
Messages
139
Hi there,

If you really want something worthwhile, then just ditch Proxmox and pull out a real/serious hypervisor like VMware (btw, it's the default go-to solution).
I hope you are using the configuration equivalents of PCI passthrough, memory reservation, and other optimizations to avoid additional latencies...

Proxmox uses KVM, which is a real hypervisor.

https://www.spec.org/cgi-bin/osgresults?conf=virt_sc2013&op=fetch&field=COMPANY&pattern=*

Look for the ESXi results; they don't exactly stack up against KVM.
 

wtfR6a

Explorer
Joined
Jan 9, 2016
Messages
88
My testing of both ESXi and Proxmox has ESXi significantly ahead in the performance stakes on identical hardware. Perhaps I could have tweaked something to close the gap, but I also had more serious issues with PCI passthrough under Proxmox which resulted in some damaged files. Luckily that was only in testing on my backup system, and since moving that system over to ESXi it has been faultless.
Proxmox does many things really well, but its PCI passthrough isn't ready for production use IMHO.
 

bigphil

Patron
Joined
Jan 30, 2014
Messages
486
Lol @ that link! ESXi 5.1, oldest test, oldest software, slowest hardware...that doesn't exactly prove diddly squat.
 
Donny Davis

Contributor
Joined
Jul 31, 2015
Messages
139
Lol @ that link! ESXi 5.1, oldest test, oldest software, slowest hardware...that doesn't exactly prove diddly squat.

Well, that is the latest standard performance test VMware has submitted, probably because they were tired of being creamed on performance.

Lol @ the thinking that VMware is the only one in the enterprise hypervisor market. Ten years ago they were the go-to, but name a cloud built on VMware. They are all built on open-source hypervisors like Xen and KVM.
The infra that backs the Large Hadron Collider isn't VMware, it's open-source KVM. The infra that backs eBay and PayPal also isn't VMware, it's KVM. Even Apple ditched them because they weren't cutting the mustard.

That link is proof that the last time VMware was a good choice was 2013 ;)

I'm not a VMware fan....
 

bigphil

Patron
Joined
Jan 30, 2014
Messages
486
And where exactly did I say that VMware was the only worthy enterprise hypervisor? I simply stated that those test results prove nothing about the current release of ESXi. There are many top-notch options, but you singled out VMware as being low-rent compared to another product, which you cannot prove with that link... that was my point.
 