Finally upgraded to 11 today, classic UI locking up for 3+ minutes

Status
Not open for further replies.

SweetAndLow

Sweet'NASty
Joined
Nov 6, 2013
Messages
6,421
Nothing of the sort; making assumptions is bad, please stop.

Machine ran flawlessly for 18 months with 8GB of memory and FIVE jails.
Purchased an extra 8GB to attempt to eliminate an issue with one particular jail; it didn't work.
(Bear in mind I've provisioned the VM with an entire 768MB of memory and 1 core, too.)

Code:

last pid: 59010;  load averages:  0.44,  0.49,  0.47	up 2+22:40:35  15:27:03
82 processes:  2 running, 80 sleeping
CPU: 49.0% user,  0.0% nice,  6.9% system,  0.0% interrupt, 44.1% idle
Mem: 192M Active, 1327M Inact, 14G Wired, 418M Free
ARC: 12G Total, 3498M MFU, 7482M MRU, 2344K Anon, 161M Header, 746M Other
Swap: 12G Total, 141M Used, 12G Free, 1% Inuse

  PID USERNAME	THR PRI NICE   SIZE	RES STATE   C   TIME	WCPU COMMAND
58993 root		  1  80	0   237M 59408K CPU1	1   0:03  95.53% python3.6
11921 root		 13  20	0   847M 24872K kqread  1   5:54  10.34% bhyve
13486 media		16  20	0   479M   297M select  1  27:56   0.25% python2.7
2882 root		 10  52	0 53312K 19048K uwait   1   8:54   0.19% consul
58991 root		  1  20	0 24276K  3832K CPU0	0   0:00   0.15% top
8073 media		 2  20	0   369M   174M kqread  0  16:02   0.12% python2.7
12950 media		26  20	0   186M 37060K select  1   1:42   0.11% python2.7
11643 media		26  20	0   158M 34496K select  1   4:32   0.11% python2.7
  205 root		  6  20	0   403M 83484K kqread  1   0:33   0.07% python3.6
8477 root		  9  20	0   145M 12400K select  1   2:04   0.03% qbittorre
22700 root		  2  20	0   127M 31748K select  0  11:18   0.03% python3.6
2746 www		   1  20	0 30984K  1116K kqread  1   0:14   0.01% nginx
11803 root		  6  20	0   149M 37212K select  1   0:25   0.01% python2.7
58987 root		  1  20	0 82852K  7468K select  1   0:00   0.01% sshd
9980 root		  6  20	0   145M 41436K select  1   0:25   0.01% python2.7
4963 root		  6  20	0   179M 38348K select  0   0:25   0.01% python2.7

You keep proving my point. By your own account, your system ran perfectly, then you added a VM and it didn't. When you remove the VM it gets better. This is 100% you adding a VM and overburdening your system.
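
(For anyone who wants numbers rather than assertions: the sketch below is my own, not anything FreeNAS ships. It assumes a FreeBSD/FreeNAS 11 host where the usual sysctl OIDs are available, and simply prints total RAM, ARC size, wired and free memory so you can see how much headroom is actually left once a bhyve guest's memory gets wired in.)

Code:

#!/usr/bin/env python3
# Rough sketch, assuming a FreeBSD/FreeNAS 11 host where these sysctl OIDs exist.
# Prints total RAM, ZFS ARC size, wired and free memory in GiB.
import subprocess

def sysctl(oid):
    """Return a numeric sysctl value as an integer."""
    return int(subprocess.check_output(["sysctl", "-n", oid]).decode().strip())

page  = sysctl("hw.pagesize")
total = sysctl("hw.physmem")
arc   = sysctl("kstat.zfs.misc.arcstats.size")
wired = sysctl("vm.stats.vm.v_wire_count") * page
free  = sysctl("vm.stats.vm.v_free_count") * page

gib = 1024 ** 3
print(f"RAM total: {total / gib:5.1f} GiB")
print(f"ARC size : {arc / gib:5.1f} GiB")
print(f"Wired    : {wired / gib:5.1f} GiB (ARC plus, depending on config, guest memory)")
print(f"Free     : {free / gib:5.1f} GiB")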
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
Still. It really shouldn't lock out the UI.

Is deduplication in use?
 

diskdiddler

Wizard
Joined
Jul 9, 2014
Messages
2,377
You keep proving my point. By your own account, your system ran perfectly, then you added a VM and it didn't. When you remove the VM it gets better. This is 100% you adding a VM and overburdening your system.

You keep missing my point.
A system which ran flawlessly with 8GB was unnecessarily upgraded to 16GB and now can't even run a 512MB VM; furthermore, the VM is hardly 'thrashing'.

*EVEN IF YOU WERE RIGHT*

This is not how the software should 'cleanly' handle this. Errors should be more distinct, and there should be more information available.
 

diskdiddler

Wizard
Joined
Jul 9, 2014
Messages
2,377
Still. It really shouldn't lock out the UI.

Is deduplication in use?

It is not, no.
I'm not insane! I know the limits of that poor little machine; heck, only one of my datasets has compression on (jails).
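
(For the record, a quick way to verify that kind of thing is to list dedup and compression per dataset with `zfs get -r dedup,compression`. The small sketch below just wraps that; "tank" is a hypothetical pool name, substitute your own.)

Code:

#!/usr/bin/env python3
# Sketch: list dedup and compression for every dataset in a pool.
# "tank" is a placeholder pool name; replace it with the real one.
import subprocess

POOL = "tank"

out = subprocess.check_output(
    ["zfs", "get", "-r", "-H", "-o", "name,property,value",
     "dedup,compression", POOL]
).decode()

for line in out.splitlines():
    name, prop, value = line.split("\t")
    print(f"{name:<40} {prop:<12} {value}")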
 

diskdiddler

Wizard
Joined
Jul 9, 2014
Messages
2,377
But you only have a dual-core system. Take one core for the VM and there is only one left for FreeNAS and all the other jails to share. It does appear that you are overburdening that teeny tiny processor.

It's certainly possible, but the error message (none) is not ideal.
Also, I (blindly?) assumed that when it takes a core for the VM, it's not taking 100% of that core exclusively?
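
(One way to answer that, assuming the third-party psutil package is installed, which stock FreeNAS may not have: sample per-core load for a few seconds and see whether one core sits near 100% while the other idles. Rough sketch only:)

Code:

#!/usr/bin/env python3
# Sketch: sample per-core CPU load (needs the third-party psutil package).
# If bhyve were monopolising a core, one column would sit near 100%.
import psutil

for _ in range(5):
    per_core = psutil.cpu_percent(interval=1, percpu=True)
    print(" | ".join(f"core{i}: {pct:5.1f}%" for i, pct in enumerate(per_core)))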
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
I'm assuming something is going off the rails in middlewared, and that's why you're seeing 100% core usage on python. Try checking the middleware log for enlightenment.
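
(Something like the sketch below can help with that; the log path is my assumption of where middlewared writes on 11, it isn't confirmed anywhere in this thread. It just pulls out the most recent Python tracebacks so a wedged worker is easier to spot.)

Code:

#!/usr/bin/env python3
# Sketch: print the last few Python tracebacks found in the middleware log.
# The log path below is an assumption for FreeNAS 11.
LOG = "/var/log/middlewared.log"

with open(LOG, errors="replace") as fh:
    lines = fh.readlines()

blocks, current = [], None
for line in lines:
    if "Traceback (most recent call last)" in line:
        current = [line]
    elif current is not None:
        current.append(line)
        # an unindented line (the exception itself) usually ends a traceback
        if line.strip() and not line.startswith((" ", "\t")):
            blocks.append("".join(current))
            current = None

for block in blocks[-3:]:
    print(block + "-" * 72)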
 

diskdiddler

Wizard
Joined
Jul 9, 2014
Messages
2,377
Hi all,

Just FYI, I haven't "spun up" the VM in 3 or 4 weeks and it's totally behaved since. I admit I don't log in to the UI that often, but it's generally been there when I have.

I gather it is indeed some kind of performance issue; that being said, the correct behaviour is to be _bloody slow_, not "not work".

Nonetheless, we know now, for posterity.
 