Please look at my NAS configuration (memory issue)


strelok

Dabbler
Joined: Jan 28, 2015
Messages: 36
Hello!

I recently built my NAS based on FreeNAS.

Here is my hardware:
Platform: HP MicroServer Gen8
CPU: Xeon E3-1230v2 3.3GHz
Memory: 16GB DDR3-1600 Kingston ECC
HDD: 4x 1TB
2 x 1TB WD Red (in mirror) - for personal archive (photos, videos)
1TB WD Green (stripe) - for movies (no data on the disk yet)
1TB WD Green (stripe) - for a VMware lab (no data on the disk yet)

Applications (Plugins):
Transmission
Plex Media Server
miniDLNA

1. I noticed that my memory is being absorbed by the system.
As you can see from top, there is 13GB of Wired memory.

Code:
[root@freenas] ~# top
last pid: 88442;  load averages:  0.00,  0.02,  0.00                                                                                 up 6+02:24:53  14:47:30
59 processes:  1 running, 55 sleeping, 3 zombie
CPU:  0.3% user,  0.0% nice,  0.0% system,  0.0% interrupt, 99.7% idle
Mem: 498M Active, 1371M Inact, 13G Wired, 16M Cache, 689M Free
ARC: 12G Total, 1100M MFU, 10G MRU, 432K Anon, 56M Header, 467M Other
Swap: 6144M Total, 6144M Free

  PID USERNAME      THR PRI NICE   SIZE    RES STATE   C   TIME    WCPU COMMAND
19514 transmission    3  20    0   217M 93652K select  4  22:04   0.00% transmission-daemon
50800 root           12  20    0   178M 14992K uwait   7  21:05   0.00% collectd
2660 root            1  21    0   193M 40304K select  0   8:36   0.00% python2.7
5162    972         13  35   15   388M 66876K select  0   4:43   0.00% python
2656 root            6  20    0   527M   213M usem    6   4:41   0.00% python2.7
8615 root            2  20    0   116M 19896K select  5   3:35   0.00% python2.7
5977    972         10  20    0   214M 24448K select  1   1:41   0.00% python
5950    972         12  20    0   252M 55620K uwait   5   1:08   0.00% Plex DLNA Server
86751    972         14  20    0   299M 74392K uwait   4   0:42   0.00% Plex Media Server
6293 root            1  20    0   110M  9952K kqread  1   0:27   0.00% syslog-ng
4402 root            6  20    0   168M 26368K usem    7   0:21   0.00% python2.7
85525 root            6  20    0   192M 27616K usem    7   0:21   0.00% python2.7
79856    972         13  35   15   304M 73616K select  7   0:20   0.00% python
86767    972         10  20    0   239M 28732K uwait   4   0:14   0.00% Plex DLNA Server
2084 root            1  20    0 22224K  2988K select  4   0:09   0.00% ntpd
2349 root            1  20    0   276M  8480K select  1   0:08   0.00% smbd
2589 root            2  47    0 39408K  2696K select  0   0:05   0.00% netatalk
2346 root            1  20    0   209M  5800K select  2   0:03   0.00% nmbd
13160    933          2  39   19   110M  5080K kqread  0   0:03   0.00% minidlnad
64553 root            1  45   10 18596K  2080K wait    6   0:02   0.00% sh
1737 root            1 -52   r0 12048K  8024K nanslp  7   0:02   0.00% watchdogd
6259 root            1  52    0   165M 13028K ttyin   6   0:01   0.00% python2.7
31287 root            4  52    0   173M 12356K select  7   0:01   0.00% python2.7
2356 root            1  20    0   260M  7220K select  4   0:01   0.00% winbindd
6097 root            1  20    0 14140K  1732K nanslp  2   0:01   0.00% cron
4178 root            1  20    0 14188K  1228K nanslp  0   0:01   0.00% cron
85301 root            1  20    0 14188K  1156K nanslp  6   0:01   0.00% cron
85237 root            1  20    0 12092K  1192K select  7   0:01   0.00% syslogd
4120 root            1  20    0 12092K  1304K select  7   0:01   0.00% syslogd
77634 root            1  20    0 14188K  1260K nanslp  1   0:00   0.00% cron
31621 root            1  20    0   262M  7224K select  2   0:00   0.00% winbindd
77579 root            1  20    0 12092K  1300K select  5   0:00   0.00% syslogd
  858 root            1  20    0 28096K  3176K nanslp  4   0:00   0.00% smartd
2352 root            1  20    0   261M  7268K select  7   0:00   0.00% winbindd
2604 root            1  52    0 89772K  4788K select  1   0:00   0.00% afpd
2585 nobody          1  20    0  9912K  1972K select  5   0:00   0.00% mdnsd
1363 root            1  20    0  6276K   892K select  6   0:00   0.00% devd
22329 www             1  20    0 26348K  4064K kqread  3   0:00   0.00% nginx
88003 root            1  20    0 71916K  5492K select  2   0:00   0.00% sshd
88010 root            1  20    0 17524K  3720K pause   2   0:00   0.00% csh
6267 root            1  20    0 45000K  2828K select  6   0:00   0.00% zfsd
88442 root            1  20    0 18628K  2636K CPU1    1   0:00   0.00% top
2565 root            1  20    0 26348K  3480K pause   4   0:00   0.00% nginx
31286 root            1  20    0 49220K  3652K select  2   0:00   0.00% sshd
2605 root            1  52    0 34884K  2480K select  3   0:00   0.00% cnid_metad
6263 root            1  52    0 12048K  1464K ttyin   6   0:00   0.00% getty
6266 root            1  52    0 12048K  1464K ttyin   1   0:00   0.00% getty
6262 root            1  52    0 12048K  1452K ttyin   4   0:00   0.00% getty
6261 root            1  52    0 12048K  1464K ttyin   3   0:00   0.00% getty
6260 root            1  52    0 12048K  1464K ttyin   0   0:00   0.00% getty
6264 root            1  52    0 12048K  1464K ttyin   4   0:00   0.00% getty
[root@freenas] ~#


I have read some topics about memory, and as I understand it, my configuration should follow the "magic formula" of 8GB base plus 1GB per TB of storage: 8GB + 4GB = 12GB, which does not exceed my 16GB.

So, is this a normal distribution of memory in my case? top shows only 498M Active and 689M Free...

2. Another question: I would like to use my FreeNAS as iSCSI storage, but just for tests.
As I understand it, this is not a good idea because of some problems with ZFS and iSCSI that affect system performance.

So what's your opinion on using the iSCSI interface, even just for tests?
 
L

Guest
I only have an answer to your first question. Most of your memory is in the read cache. The line

ARC: 12G Total, 1100M MFU, 10G MRU, 432K Anon, 56M Header, 467M Other

shows 12GB of RAM being used by the ZFS Adaptive Replacement Cache.
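
If you want to double-check this yourself, the ARC counters are exposed through sysctl on FreeBSD. A minimal check from the FreeNAS shell might look like the following; the first value is the current ARC size in bytes (12 GiB here) and the second is the ceiling the ARC may grow to (the output values are illustrative, not taken from your system):

Code:
[root@freenas] ~# sysctl kstat.zfs.misc.arcstats.size
kstat.zfs.misc.arcstats.size: 12884901888
[root@freenas] ~# sysctl vfs.zfs.arc_max
vfs.zfs.arc_max: 15032385536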
 

strelok

Dabbler
Joined: Jan 28, 2015
Messages: 36
Hello Linda,

thank you for the response.

As I understand it, most of the memory (12GB) was allocated to the cache (ARC).

What's the recommended size for the ARC, and where can I change it?

I found on Google that a common rule for allocating the ARC is 1GB of RAM for each 1TB of data the system will store.
So, in my case it should be no more than 4GB. Is that correct?
 

SweetAndLow

Sweet'NASty
Joined: Nov 6, 2013
Messages: 6,421
The ARC should use as much RAM as possible, and you should just let FreeNAS figure that out. Why would you ever want memory doing nothing? Free memory is a wasted resource, contrary to how people expect it to work based on what they learned with Windows. Your system looks fine, and if you add more memory you will see it go into the ARC as well.
 

strelok

Dabbler
Joined: Jan 28, 2015
Messages: 36
Hello SweetAndLow!

You are right; I was worried because of the low Free memory counter in FreeNAS.
 
L

Guest
From what I am seeing, your Plex and Python processes are regularly using about 3-4GB of RAM. The nice thing about ZFS is that its cache keeps both the most recently used and the most frequently used data. Plex and Python should stay memory resident because they are used frequently.

If you wanted to change it, though, you set it in Tunables. The setting is vfs.zfs.arc_max. Autotune will also set this to something reasonable.

But as the others have said, it's best to just let the system decide.
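
For reference, capping the ARC at 4GB by hand would amount to something like the line below in /boot/loader.conf (a sketch only; on FreeNAS you would add the same variable through System -> Tunables rather than editing the file, since the GUI manages it, and the 4GB figure is just an example):

Code:
# /boot/loader.conf - limit the ZFS ARC to 4 GiB (4 * 1024^3 bytes)
vfs.zfs.arc_max="4294967296"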
 

strelok

Dabbler
Joined: Jan 28, 2015
Messages: 36
Thank you, Linda, for your recommendations.
I will certainly leave the system as is.
My question only came up because of my lack of knowledge about the ARC :)
 
L

Guest
The other thing I would do, though, is throw all your disks into one big pool/volume. The primary benefit of ZFS is that you don't need to use disks as boundaries; you use datasets as boundaries.
 

strelok

Dabbler
Joined: Jan 28, 2015
Messages: 36
Hello Linda,

I considered your idea about consolidating all the disks into one pool.
But there is one problem: I have one pool (a mirror) for my personal files, which are critical data for me.
If I consolidate the remaining disks into that pool, I could lose ALL my data (including the critical data) when any disk fails.

As I understand it, the failure of ANY vdev (disk) in a zpool will cause the whole zpool to fail.
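
A hypothetical zpool sketch of the two layouts makes the risk concrete (ada0-ada3 are placeholder device names, not my actual disks):

Code:
# Separate pools: a Green disk failure only takes out its own pool.
zpool create archive mirror ada0 ada1   # WD Reds, redundant
zpool create movies ada2                # WD Green, no redundancy
zpool create lab ada3                   # WD Green, no redundancy

# One consolidated pool: a mirror vdev plus two single-disk vdevs.
# Losing ada2 or ada3 faults the WHOLE pool, mirror included.
# (zpool warns and requires -f because the redundancy levels differ.)
zpool create tank mirror ada0 ada1
zpool add -f tank ada2 ada3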

P.S. Thank you very much for your videos; I have watched almost all of the free ones, and they helped me understand ZFS.
 