So many devices...Did something go wrong?


Lorsung23647

Dabbler
Joined
Mar 10, 2014
Messages
17
I created a new FreeNAS system a few weeks ago, and since then I've been looking at gstat once in a while, trying to figure out where the bottlenecks in my system lie.

I've created all of my pools from scratch using the GUI in 9.2.1.2, and all of my disks are connected through an HBA.

I just can't tell if this output is even remotely close to what it should look like. There are currently 11 disks in the system plus an L2ARC (5 in a RAIDZ1 pool that will be rebuilt as RAIDZ2 and striped with the other 6, which are already in a RAIDZ2 pool).
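For reference, once those 5 disks are rebuilt as a second RAIDZ2 vdev, the striping step would boil down to a single command along these lines (a rough sketch with placeholder device names; I'd actually do it through the GUI):

Code:
# Hypothetical sketch: add a second raidz2 vdev to the existing pool;
# ZFS then stripes writes across both vdevs. The daX names are placeholders.
zpool add MSA raidz2 da8 da9 da10 da11 da12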

My pool status looks like this:

Code:
 pool: MSA
state: ONLINE
 
        NAME                                            STATE    READ WRITE CKSUM
        MSA                                            ONLINE      0    0    0
          raidz2-0                                      ONLINE      0    0    0
            gptid/b2e0e1a2-a7d4-11e3-af90-0015173e3bf8  ONLINE      0    0    0
            gptid/b36eec51-a7d4-11e3-af90-0015173e3bf8  ONLINE      0    0    0
            gptid/b3e0e1ec-a7d4-11e3-af90-0015173e3bf8  ONLINE      0    0    0
            gptid/b45a353d-a7d4-11e3-af90-0015173e3bf8  ONLINE      0    0    0
            gptid/b4d47dad-a7d4-11e3-af90-0015173e3bf8  ONLINE      0    0    0
            gptid/b55f3e3f-a7d4-11e3-af90-0015173e3bf8  ONLINE      0    0    0
        cache
          gptid/b5a7db42-a7d4-11e3-af90-0015173e3bf8    ONLINE      0    0    0
 
  pool: RAID-Z1
state: ONLINE
 
        NAME                                            STATE    READ WRITE CKSUM
        RAID-Z1                                        ONLINE      0    0    0
          raidz1-0                                      ONLINE      0    0    0
            gptid/6297453b-a792-11e3-af90-0015173e3bf8  ONLINE      0    0    0
            gptid/633e850b-a792-11e3-af90-0015173e3bf8  ONLINE      0    0    0
            gptid/64853e31-a792-11e3-af90-0015173e3bf8  ONLINE      0    0    0
            gptid/667bef2c-a792-11e3-af90-0015173e3bf8  ONLINE      0    0    0
            gptid/67d7d0b8-a792-11e3-af90-0015173e3bf8  ONLINE      0    0    0


However, when I run gstat, I get this giant list of devices. Is this normal behavior?

Code:
dT: 1.001s  w: 1.000s
L(q)  ops/s    r/s  kBps  ms/r    w/s  kBps  ms/w  %busy Name
    0      0      0      0    0.0      0      0    0.0    0.0| da0
    0      0      0      0    0.0      0      0    0.0    0.0| da1
    0      0      0      0    0.0      0      0    0.0    0.0| da2
    0      0      0      0    0.0      0      0    0.0    0.0| da3
    2    300    300  6086    2.1      0      0    0.0  38.6| da4
    0    173    173  3860    6.0      0      0    0.0  59.1| da5
    0    230    230  4647    5.3      0      0    0.0  66.2| da6
    0    345    345  6681    2.1      0      0    0.0  42.2| da7
    0      0      0      0    0.0      0      0    0.0    0.0| da0p1
    0      0      0      0    0.0      0      0    0.0    0.0| da0p2
    0      0      0      0    0.0      0      0    0.0    0.0| da1p1
    0      0      0      0    0.0      0      0    0.0    0.0| da1p2
    0      0      0      0    0.0      0      0    0.0    0.0| da2p1
    0      0      0      0    0.0      0      0    0.0    0.0| da2p2
    0      0      0      0    0.0      0      0    0.0    0.0| da3p1
    0      0      0      0    0.0      0      0    0.0    0.0| da3p2
    0      0      0      0    0.0      0      0    0.0    0.0| da4p1
    2    300    300  6086    2.1      0      0    0.0  38.7| da4p2
    0      0      0      0    0.0      0      0    0.0    0.0| da5p1
    0    173    173  3860    6.0      0      0    0.0  59.1| da5p2
    0      0      0      0    0.0      0      0    0.0    0.0| da6p1
    0    230    230  4647    5.3      0      0    0.0  66.3| da6p2
    0      0      0      0    0.0      0      0    0.0    0.0| da7p1
    0    345    345  6681    2.1      0      0    0.0  42.2| da7p2
    0    287    287  4380    2.2      0      0    0.0  37.4| da8
    0    305    305  4943    2.6      0      0    0.0  47.1| da9
    0      0      0      0    0.0      0      0    0.0    0.0| da10
    0      0      0      0    0.0      0      0    0.0    0.0| da11
    0      0      0      0    0.0      0      0    0.0    0.0| da12
    0      0      0      0    0.0      0      0    0.0    0.0| da1p1.eli
    0      0      0      0    0.0      0      0    0.0    0.0| gptid/64853e31-a792-11e3-af90-0015173e3bf8
    0      0      0      0    0.0      0      0    0.0    0.0| da2p1.eli
    0      0      0      0    0.0      0      0    0.0    0.0| gptid/667bef2c-a792-11e3-af90-0015173e3bf8
    0      0      0      0    0.0      0      0    0.0    0.0| da3p1.eli
    0      0      0      0    0.0      0      0    0.0    0.0| gptid/633e850b-a792-11e3-af90-0015173e3bf8
    0      0      0      0    0.0      0      0    0.0    0.0| da4p1.eli
    0      0      0      0    0.0      0      0    0.0    0.0| gptid/67d7d0b8-a792-11e3-af90-0015173e3bf8
    0      0      0      0    0.0      0      0    0.0    0.0| da5p1.eli
    2    300    300  6086    2.1      0      0    0.0  38.7| gptid/b2e0e1a2-a7d4-11e3-af90-0015173e3bf8
    0      0      0      0    0.0      0      0    0.0    0.0| da6p1.eli
    0    173    173  3860    6.1      0      0    0.0  59.1| gptid/b36eec51-a7d4-11e3-af90-0015173e3bf8
    0      0      0      0    0.0      0      0    0.0    0.0| da7p1.eli
    0    230    230  4647    5.3      0      0    0.0  66.3| gptid/b3e0e1ec-a7d4-11e3-af90-0015173e3bf8
    0      0      0      0    0.0      0      0    0.0    0.0| da8p1.eli
    0    345    345  6681    2.2      0      0    0.0  42.3| gptid/b45a353d-a7d4-11e3-af90-0015173e3bf8
    0      0      0      0    0.0      0      0    0.0    0.0| da8p1
    0    287    287  4380    2.2      0      0    0.0  37.5| da8p2
    0      0      0      0    0.0      0      0    0.0    0.0| da9p1
    0    305    305  4943    2.6      0      0    0.0  47.2| da9p2
    0      0      0      0    0.0      0      0    0.0    0.0| da10p1
    0      0      0      0    0.0      0      0    0.0    0.0| da10p2
    0      0      0      0    0.0      0      0    0.0    0.0| da11p1
    0      0      0      0    0.0      0      0    0.0    0.0| da12s1
    0      0      0      0    0.0      0      0    0.0    0.0| da12s2
    0      0      0      0    0.0      0      0    0.0    0.0| da12s3
    0      0      0      0    0.0      0      0    0.0    0.0| da12s4
    0      0      0      0    0.0      0      0    0.0    0.0| da9p1.eli
    0    287    287  4380    2.2      0      0    0.0  37.5| gptid/b4d47dad-a7d4-11e3-af90-0015173e3bf8
    0      0      0      0    0.0      0      0    0.0    0.0| da10p1.eli
    0    305    305  4943    2.6      0      0    0.0  47.2| gptid/b55f3e3f-a7d4-11e3-af90-0015173e3bf8
    0      0      0      0    0.0      0      0    0.0    0.0| gptid/6297453b-a792-11e3-af90-0015173e3bf8
    0      0      0      0    0.0      0      0    0.0    0.0| gptid/b5a7db42-a7d4-11e3-af90-0015173e3bf8
    0      0      0      0    0.0      0      0    0.0    0.0| da12s1a
    0      0      0      0    0.0      0      0    0.0    0.0| ufs/FreeNASs3
    0      0      0      0    0.0      0      0    0.0    0.0| da0p1.eli
    0      0      0      0    0.0      0      0    0.0    0.0| ufs/FreeNASs4
    0      0      0      0    0.0      0      0    0.0    0.0| ufs/FreeNASs1a
    0      0      0      0    0.0      0      0    0.0    0.0| md0
    0      0      0      0    0.0      0      0    0.0    0.0| md1
    0      0      0      0    0.0      0      0    0.0    0.0| md2
    0      4      4    511    0.1      0      0    0.0    0.0| zvol/RAID-Z1/Storage
    0      0      0      0    0.0      0      0    0.0    0.0| zvol/MSA/Proxmox-VM-Storage-iSCSI
 

TCM

Cadet
Joined
Mar 23, 2014
Messages
6
You need to understand how FreeBSD handles disks. You have the plain disks, called da*. You partitioned those disks with GPT, and each partition shows up as several additional devices: generically as "da0p1", "da0p2", and so on, and again under /dev/gptid/, which addresses the same GPT partitions by their UUIDs.
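You can see that mapping for yourself with stock FreeBSD commands (da0 here is just an example disk):

Code:
# Show the GPT partition table on one disk; each index listed is a da0p* device:
gpart show da0

# List every GEOM label (gptid/, label/, ufs/, ...) along with the
# da*p* provider it is an alias for:
glabel status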

You have also created filesystems on some of these partitions that carry a label of their own; those are the /dev/ufs/ entries.

Then you use GELI, which creates all the *.eli entries.

Then you have created ZVOLs, which show up as the /dev/zvol/ entries.
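For example, a zvol like the zvol/RAID-Z1/Storage in your gstat output gets created with something along these lines (the 100G size is just illustrative):

Code:
# Creating a zvol (a ZFS-backed block device) makes a matching
# /dev/zvol/<pool>/<name> node appear:
zfs create -V 100G RAID-Z1/Storage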

Finally, you have some RAM disks md*.

I think that sums it up. Personally, I don't partition my disks. I glabel them using the disk's serial number, then I run GELI on /dev/label/*, then I run ZFS on /dev/label/*.eli.
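In rough commands, that workflow looks like this (the disk da0 and the serial-number label are placeholders; adapt them to your hardware):

Code:
# Label the raw disk by its serial number:
glabel label WD-ABC123 /dev/da0
# Initialize and attach GELI on the labeled provider (init prompts for
# a passphrase by default); attach creates /dev/label/WD-ABC123.eli:
geli init /dev/label/WD-ABC123
geli attach /dev/label/WD-ABC123
# Build the pool directly on the .eli device (pool name is a placeholder):
zpool create tank /dev/label/WD-ABC123.eli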
 

Lorsung23647

Dabbler
Joined
Mar 10, 2014
Messages
17
That would explain it. I understood that the plain disks were da*, and I figured I would see entries for all the GPT partitions, but I didn't realize that each GPT partition would also show up as a da*p* device. Thank you.
 

TCM

Cadet
Joined
Mar 23, 2014
Messages
6
If you pick a specific pair of matching da*p* and /dev/gptid/ entries and filter gstat's output to only show these two, you should be able to see that the stats for those two are always identical.

For example, from the output I'd guess that gptid/b55f3e3f-a7d4-11e3-af90-0015173e3bf8 and da9p2 are identical. It's only looking at the symptom, but it helps you understand and verify what is going on.
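gstat's -f flag takes a regular expression, so you can watch just that pair:

Code:
# Show only the suspected pair; the regex matches both device names:
gstat -f 'da9p2|b55f3e3f'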
 