Problems with FreeNAS-9.2.1.5

Status: Not open for further replies.

Hudu · Dabbler · Joined Apr 9, 2014 · Messages: 18
My build is:
Mainboard with CPU: Supermicro A1SAM-C2550F
Memory: 16 GB Kingston ECC (1.35 V)
Hard drives: 5x WD Red 3 TB
SSD: Samsung 840 Pro (for jails)
PSU: Seasonic G-Series 360 W
Case: Fractal Design Define R4

The hardware itself runs excellently and is very stable.
I first used Ubuntu 14.04 LTS with ZoL, but ZoL on the new 3.13 kernel was unusable for me, because it caused kernel hangs whenever I wrote to the zpools.

So I installed FreeNAS-9.2.1.5-RELEASE-x64 and recreated the zpool.
I use a proxy jail (Debian with ziproxy, privoxy and dansguardian) and a Plex jail.

My pool configuration is:
Code:
  pool: share
state: ONLINE
  scan: scrub repaired 0 in 4h43m with 0 errors on Mon May 12 16:46:39 2014
config:
 
    NAME                                            STATE    READ WRITE CKSUM
    share                                          ONLINE      0    0    0
      raidz2-0                                      ONLINE      0    0    0
        gptid/4a94ecc5-d748-11e3-8ade-0cc47a04b680  ONLINE      0    0    0
        gptid/4afcc0cf-d748-11e3-8ade-0cc47a04b680  ONLINE      0    0    0
        gptid/4b6cfb52-d748-11e3-8ade-0cc47a04b680  ONLINE      0    0    0
        gptid/4bd53893-d748-11e3-8ade-0cc47a04b680  ONLINE      0    0    0
        gptid/4c3d819a-d748-11e3-8ade-0cc47a04b680  ONLINE      0    0    0
 
errors: No known data errors
 
  pool: system
state: ONLINE
  scan: scrub repaired 0 in 0h0m with 0 errors on Wed May 14 08:20:03 2014
config:
 
    NAME                                          STATE    READ WRITE CKSUM
    system                                        ONLINE      0    0    0
      gptid/c2efca47-d82c-11e3-8bd5-0cc47a04b680  ONLINE      0    0    0
 
errors: No known data errors


With the command "zdb -C share" I get:
Code:
MOS Configuration:
        version: 5000
        name: 'share'
        state: 0
        txg: 16131
        pool_guid: 1981500429679314236
        hostid: 3142734730
        hostname: ''
        vdev_children: 1
        vdev_tree:
            type: 'root'
            id: 0
            guid: 1981500429679314236
            children[0]:
                type: 'raidz'
                id: 0
                guid: 145295719929335913
                nparity: 2
                metaslab_array: 35
                metaslab_shift: 37
                ashift: 12
                asize: 14992202792960
                is_log: 0
                create_txg: 4
                children[0]:
                    type: 'disk'
                    id: 0
                    guid: 15031826407315855706
                    path: '/dev/gptid/4a94ecc5-d748-11e3-8ade-0cc47a04b680'
                    whole_disk: 1
                    create_txg: 4
                children[1]:
                    type: 'disk'
                    id: 1
                    guid: 7494143635279517186
                    path: '/dev/gptid/4afcc0cf-d748-11e3-8ade-0cc47a04b680'
                    whole_disk: 1
                    create_txg: 4
                children[2]:
                    type: 'disk'
                    id: 2
                    guid: 12806404261779409058
                    path: '/dev/gptid/4b6cfb52-d748-11e3-8ade-0cc47a04b680'
                    whole_disk: 1
                    create_txg: 4
                children[3]:
                    type: 'disk'
                    id: 3
                    guid: 11506242951164426092
                    path: '/dev/gptid/4bd53893-d748-11e3-8ade-0cc47a04b680'
                    whole_disk: 1
                    create_txg: 4
                children[4]:
                    type: 'disk'
                    id: 4
                    guid: 3571886597765505926
                    path: '/dev/gptid/4c3d819a-d748-11e3-8ade-0cc47a04b680'
                    whole_disk: 1
                    create_txg: 4
        features_for_read:
            com.delphix:hole_birth
space map refcount mismatch: expected 48 != actual 33


What alarms me is the message at the end of the zdb output:
"space map refcount mismatch: expected 48 != actual 33"

(With "zdb -C system" I get the message "space map refcount mismatch: expected 26 != actual 24".)

Is the space map refcount mismatch a serious error? What can I do to solve it?
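
One read-only thing I can try is zdb's block traversal, which verifies that no space has actually leaked (it can take a long time on a full pool; the commands below use the pool names share and system from above):
Code:
# Traverse all blocks and compare the block sums against the space maps (read-only)
zdb -b share
zdb -b system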

Additionally, I observed that the system draws 28 W when idle with the HDDs in standby. Under Ubuntu 14.04 it drew only 18 W.

I have already tuned:
  1. powerd enabled
  2. sysctl hw.acpi.cpu.cx_lowest=Cmax
  3. virtualization disabled in the BIOS
  4. aggressive SATA power management enabled in the BIOS

Is there any additional power-management tuning available in FreeNAS/FreeBSD?
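
For reference, items 1 and 2 above map to the following standard FreeBSD knobs; the extra loader hints are only candidates I have seen in the FreeBSD power/tuning documentation, not FreeNAS-specific advice, and on FreeNAS such settings normally go in as Tunables/Sysctls through the web GUI rather than by editing files directly:
Code:
# /etc/rc.conf equivalents of items 1 and 2
powerd_enable="YES"                        # run powerd for CPU frequency scaling
powerd_flags="-a hiadaptive -n adaptive"   # adaptive policy on AC / unknown power source
performance_cx_lowest="Cmax"               # allow the deepest C-states
economy_cx_lowest="Cmax"

# /boot/loader.conf candidates (untested here)
hint.p4tcc.0.disabled=1                    # disable legacy TCC throttling
hint.acpi_throttle.0.disabled=1            # disable ACPI throttling
kern.hz=100                                # fewer timer interrupts while idle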
 

Hudu · Dabbler · Joined Apr 9, 2014 · Messages: 18
Did you ever figure this out?
The problem is still there and I have no clue how to fix it:
Code:
[root@freenas] ~# zdb -b system
 
Traversing all blocks to verify nothing leaked ...
 
9.70G completed ( 282MB/s) estimated time remaining: 4294395109hr 4294967247min 4294967267sec       
    No leaks (block sum matches space maps exactly)
 
    bp count:          898761
    bp logical:    11161372672      avg:  12418
    bp physical:  7132817920      avg:  7936    compression:  1.56
    bp allocated:  10709467136      avg:  11915    compression:  1.04
    bp deduped:    1639436288    ref>1: 107668  deduplication:  1.15
    SPA allocated: 9070030848    used:  7.22%
 
space map refcount mismatch: expected 43 != actual 39
[root@freenas] ~# zdb -b share
 
Traversing all blocks to verify nothing leaked ...
 
2.74T completed (1817MB/s) estimated time remaining: 302478hr 55min 41sec       
    No leaks (block sum matches space maps exactly)
 
    bp count:        13774861
    bp logical:    1709674773504      avg: 124115
    bp physical:  1696745727488      avg: 123176    compression:  1.01
    bp allocated:  3016455475200      avg: 218982    compression:  0.57
    bp deduped:    174580027392    ref>1: 430620  deduplication:  1.06
    SPA allocated: 2841875447808    used: 18.97%
 
space map refcount mismatch: expected 64 != actual 44
 

dlavigne · Guest
I asked our in-house ZFS guru who said:

We have never seen this on FreeBSD; it's possibly a ZFS on Linux bug.

It seems to be caused by bad accounting for the spacemap_histogram feature. I don't think it's a big deal, though; the feature is active and stays active for the lifetime of the pool, so the refcount no longer matters.
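
If you want to confirm that this is the feature active on your pools, zpool can report individual feature flags directly (a quick check, assuming the FreeNAS build exposes the spacemap_histogram feature under that property name):
Code:
# Show the state of the spacemap_histogram feature flag on both pools
# (expected values: disabled, enabled, or active)
zpool get feature@spacemap_histogram share system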
 