Walking back from multipathing

StorageCurious

Explorer
Joined
Sep 28, 2022
Messages
60
Hi,

I'm losing confidence in TrueNAS multipathing, and I've been told (after opening a ticket) that no effort is being spent on multipathing anymore, and likely none will be.

I am currently running a pool of 12 drives (all SAS data drives behind an HBA and enclosure, nothing SATA). It's working well as far as I can tell, but the UI identifies some pool-attached, multipathed disks as available for new pools. In the short term it's something I can live with, as it seems to be just a UI issue, but I don't like the implication, especially if a colleague of mine needs to replace a disk while I'm away (oh, look, this disk is available... and pop! goes the pool).

Anyway, the question is: how can I walk back from my RAID10-like pool of 12 multipath disks to non-multipath disks? How would I "convert" to not using multipath anymore? Is there a safe way that doesn't require an extra 12 disks to move the data off and back onto a "corrected" pool? I can bring the server down outside business hours if needed, but it's not ideal.

I really would have liked to use multipath, as it looks great on paper, but I'd rather be safe than sorry here.
 

Davvo

MVP
Joined
Jul 12, 2022
Messages
3,222
I wasn't aware it was even a thing with ZFS.
Please specify your version.

And please don't use RAID10 or similar names.
 

StorageCurious

Explorer
Joined
Sep 28, 2022
Messages
60
I wrote RAID10-like, but I can use "pool of 6 mirrored vdevs" if it helps. I don't think it's relevant information; I was just putting it in for completeness.

I am using TrueNAS-13.0-U3.1. The pool was built with U2; that particular issue appeared after the upgrade, but it may just have been due to my first reboot since that particular pool was created.
 

Davvo

MVP
Joined
Jul 12, 2022
Messages
3,222
I wrote RAID10-like, but I can use "pool of 6 mirrored vdevs" if it helps. I don't think it's relevant information; I was just putting it in for completeness.
Thank you. Using proper terminology helps prevent communication errors, and it is a good way to teach new users as well.

Also, SCALE is the focus of iX's main development effort right now: you probably have a better chance of getting multipath support/development with it than with CORE.
 

StorageCurious

Explorer
Joined
Sep 28, 2022
Messages
60
Thank you. Using proper terminology helps prevent communication errors, and it is a good way to teach new users as well.

Also, SCALE is the focus of iX's main development effort right now: you probably have a better chance of getting multipath support/development with it than with CORE.

My understanding is that CORE is still more mature and (for storage-only purposes) the recommendation. Anyway, that is beside the point - I'm just trying to stop using something that was meant as an extra layer of redundancy and turns out to be a minefield.
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,700
What do you get from gmultipath list?

And zpool status -v
 

StorageCurious

Explorer
Joined
Sep 28, 2022
Messages
60
What do you get from gmultipath list?

And zpool status -v

Code:
Geom name: disk2
Type: AUTOMATIC
Mode: Active/Passive
UUID: 47b6481e-7188-11ed-ad48-001018e12890
State: OPTIMAL
Providers:
1. Name: multipath/disk2
Mediasize: 14000519642624 (13T)
Sectorsize: 512
Stripesize: 4096
Stripeoffset: 0
Mode: r1w1e3
State: OPTIMAL
Consumers:
1. Name: da29
Mediasize: 14000519643136 (13T)
Sectorsize: 512
Stripesize: 4096
Stripeoffset: 0
Mode: r2w2e4
State: ACTIVE
2. Name: da17
Mediasize: 14000519643136 (13T)
Sectorsize: 512
Stripesize: 4096
Stripeoffset: 0
Mode: r2w2e4
State: PASSIVE

Geom name: disk3
Type: AUTOMATIC
Mode: Active/Passive
UUID: 9e74eac1-766e-11ed-8ced-001018e12890
State: OPTIMAL
Providers:
1. Name: multipath/disk3
Mediasize: 14000519642624 (13T)
Sectorsize: 512
Stripesize: 4096
Stripeoffset: 0
Mode: r1w1e3
State: OPTIMAL
Consumers:
1. Name: da15
Mediasize: 14000519643136 (13T)
Sectorsize: 512
Stripesize: 4096
Stripeoffset: 0
Mode: r2w2e4
State: ACTIVE
2. Name: da27
Mediasize: 14000519643136 (13T)
Sectorsize: 512
Stripesize: 4096
Stripeoffset: 0
Mode: r2w2e4
State: PASSIVE

Geom name: disk11
Type: AUTOMATIC
Mode: Active/Passive
UUID: d73f25ce-6690-11ed-ad48-001018e12890
State: OPTIMAL
Providers:
1. Name: multipath/disk11
Mediasize: 14000519642624 (13T)
Sectorsize: 512
Stripesize: 4096
Stripeoffset: 0
Mode: r1w1e3
State: OPTIMAL
Consumers:
1. Name: da32
Mediasize: 14000519643136 (13T)
Sectorsize: 512
Stripesize: 4096
Stripeoffset: 0
Mode: r2w2e4
State: ACTIVE
2. Name: da20
Mediasize: 14000519643136 (13T)
Sectorsize: 512
Stripesize: 4096
Stripeoffset: 0
Mode: r2w2e4
State: PASSIVE

Geom name: disk5
Type: AUTOMATIC
Mode: Active/Passive
UUID: 9192d8db-719d-11ed-ad48-001018e12890
State: OPTIMAL
Providers:
1. Name: multipath/disk5
Mediasize: 14000519642624 (13T)
Sectorsize: 512
Stripesize: 4096
Stripeoffset: 0
Mode: r1w1e3
State: OPTIMAL
Consumers:
1. Name: da30
Mediasize: 14000519643136 (13T)
Sectorsize: 512
Stripesize: 4096
Stripeoffset: 0
Mode: r2w2e4
State: ACTIVE
2. Name: da18
Mediasize: 14000519643136 (13T)
Sectorsize: 512
Stripesize: 4096
Stripeoffset: 0
Mode: r2w2e4
State: PASSIVE

Geom name: disk6
Type: AUTOMATIC
Mode: Active/Passive
UUID: 5fa5091e-71b1-11ed-ad48-001018e12890
State: OPTIMAL
Providers:
1. Name: multipath/disk6
Mediasize: 14000519642624 (13T)
Sectorsize: 512
Stripesize: 4096
Stripeoffset: 0
Mode: r1w1e3
State: OPTIMAL
Consumers:
1. Name: da25
Mediasize: 14000519643136 (13T)
Sectorsize: 512
Stripesize: 4096
Stripeoffset: 0
Mode: r2w2e4
State: ACTIVE
2. Name: da13
Mediasize: 14000519643136 (13T)
Sectorsize: 512
Stripesize: 4096
Stripeoffset: 0
Mode: r2w2e4
State: PASSIVE

Geom name: disk9
Type: AUTOMATIC
Mode: Active/Passive
UUID: 5e5e69a1-6b32-11ed-ad48-001018e12890
State: OPTIMAL
Providers:
1. Name: multipath/disk9
Mediasize: 14000519642624 (13T)
Sectorsize: 512
Stripesize: 4096
Stripeoffset: 0
Mode: r1w1e3
State: OPTIMAL
Consumers:
1. Name: da35
Mediasize: 14000519643136 (13T)
Sectorsize: 512
Stripesize: 4096
Stripeoffset: 0
Mode: r2w2e4
State: ACTIVE
2. Name: da23
Mediasize: 14000519643136 (13T)
Sectorsize: 512
Stripesize: 4096
Stripeoffset: 0
Mode: r2w2e4
State: PASSIVE

Geom name: disk12
Type: AUTOMATIC
Mode: Active/Passive
UUID: d75f9ad7-6690-11ed-ad48-001018e12890
State: OPTIMAL
Providers:
1. Name: multipath/disk12
Mediasize: 14000519642624 (13T)
Sectorsize: 512
Stripesize: 4096
Stripeoffset: 0
Mode: r1w1e3
State: OPTIMAL
Consumers:
1. Name: da36
Mediasize: 14000519643136 (13T)
Sectorsize: 512
Stripesize: 4096
Stripeoffset: 0
Mode: r2w2e4
State: ACTIVE
2. Name: da24
Mediasize: 14000519643136 (13T)
Sectorsize: 512
Stripesize: 4096
Stripeoffset: 0
Mode: r2w2e4
State: PASSIVE

Geom name: disk1
Type: AUTOMATIC
Mode: Active/Passive
UUID: d579fc59-7177-11ed-ad48-001018e12890
State: OPTIMAL
Providers:
1. Name: multipath/disk1
Mediasize: 14000519642624 (13T)
Sectorsize: 512
Stripesize: 4096
Stripeoffset: 0
Mode: r1w1e3
State: OPTIMAL
Consumers:
1. Name: da34
Mediasize: 14000519643136 (13T)
Sectorsize: 512
Stripesize: 4096
Stripeoffset: 0
Mode: r2w2e4
State: ACTIVE
2. Name: da22
Mediasize: 14000519643136 (13T)
Sectorsize: 512
Stripesize: 4096
Stripeoffset: 0
Mode: r2w2e4
State: PASSIVE

Geom name: disk10
Type: AUTOMATIC
Mode: Active/Passive
UUID: 5e736d2b-6b32-11ed-ad48-001018e12890
State: OPTIMAL
Providers:
1. Name: multipath/disk10
Mediasize: 14000519642624 (13T)
Sectorsize: 512
Stripesize: 4096
Stripeoffset: 0
Mode: r1w1e3
State: OPTIMAL
Consumers:
1. Name: da28
Mediasize: 14000519643136 (13T)
Sectorsize: 512
Stripesize: 4096
Stripeoffset: 0
Mode: r2w2e4
State: ACTIVE
2. Name: da16
Mediasize: 14000519643136 (13T)
Sectorsize: 512
Stripesize: 4096
Stripeoffset: 0
Mode: r2w2e4
State: PASSIVE

Geom name: disk7
Type: AUTOMATIC
Mode: Active/Passive
UUID: 991be35a-77f2-11ed-8462-001018e12890
State: OPTIMAL
Providers:
1. Name: multipath/disk7
Mediasize: 14000519642624 (13T)
Sectorsize: 512
Stripesize: 4096
Stripeoffset: 0
Mode: r1w1e3
State: OPTIMAL
Consumers:
1. Name: da14
Mediasize: 14000519643136 (13T)
Sectorsize: 512
Stripesize: 4096
Stripeoffset: 0
Mode: r2w2e4
State: ACTIVE
2. Name: da26
Mediasize: 14000519643136 (13T)
Sectorsize: 512
Stripesize: 4096
Stripeoffset: 0
Mode: r2w2e4
State: PASSIVE

Geom name: disk8
Type: AUTOMATIC
Mode: Active/Passive
UUID: 993230ad-77f2-11ed-8462-001018e12890
State: OPTIMAL
Providers:
1. Name: multipath/disk8
Mediasize: 14000519642624 (13T)
Sectorsize: 512
Stripesize: 4096
Stripeoffset: 0
Mode: r1w1e3
State: OPTIMAL
Consumers:
1. Name: da33
Mediasize: 14000519643136 (13T)
Sectorsize: 512
Stripesize: 4096
Stripeoffset: 0
Mode: r2w2e4
State: ACTIVE
2. Name: da21
Mediasize: 14000519643136 (13T)
Sectorsize: 512
Stripesize: 4096
Stripeoffset: 0
Mode: r2w2e4
State: PASSIVE

Geom name: disk4
Type: AUTOMATIC
Mode: Active/Passive
UUID: fd42ab30-77c0-11ed-8462-001018e12890
State: OPTIMAL
Providers:
1. Name: multipath/disk4
Mediasize: 14000519642624 (13T)
Sectorsize: 512
Stripesize: 4096
Stripeoffset: 0
Mode: r1w1e3
State: OPTIMAL
Consumers:
1. Name: da31
Mediasize: 14000519643136 (13T)
Sectorsize: 512
Stripesize: 4096
Stripeoffset: 0
Mode: r2w2e4
State: ACTIVE
2. Name: da19
Mediasize: 14000519643136 (13T)
Sectorsize: 512
Stripesize: 4096
Stripeoffset: 0
Mode: r2w2e4
State: PASSIVE




Code:
  pool: MD1400
 state: ONLINE
  scan: resilvered 17.7G in 00:03:14 with 0 errors on Fri Dec  9 08:04:40 2022
config:

	NAME                                            STATE     READ WRITE CKSUM
	MD1400                                          ONLINE       0     0     0
	  mirror-0                                      ONLINE       0     0     0
	    gptid/d47467e9-6bfc-11ed-ad48-001018e12890  ONLINE       0     0     0
	    gptid/8cd53a05-6c0b-11ed-ad48-001018e12890  ONLINE       0     0     0
	  mirror-1                                      ONLINE       0     0     0
	    gptid/9141ffa0-719e-11ed-ad48-001018e12890  ONLINE       0     0     0
	    gptid/b42550dd-71b1-11ed-ad48-001018e12890  ONLINE       0     0     0
	  mirror-2                                      ONLINE       0     0     0
	    gptid/59f9a691-7178-11ed-ad48-001018e12890  ONLINE       0     0     0
	    gptid/805d6e96-7188-11ed-ad48-001018e12890  ONLINE       0     0     0
	  mirror-3                                      ONLINE       0     0     0
	    gptid/a54e37aa-774b-11ed-8462-001018e12890  ONLINE       0     0     0
	    gptid/969adf54-77c1-11ed-8462-001018e12890  ONLINE       0     0     0
	  mirror-4                                      ONLINE       0     0     0
	    gptid/b8ba0556-669c-11ed-ad48-001018e12890  ONLINE       0     0     0
	    gptid/2993eed6-6693-11ed-ad48-001018e12890  ONLINE       0     0     0
	  mirror-6                                      ONLINE       0     0     0
	    gptid/e44f6fe6-77f2-11ed-8462-001018e12890  ONLINE       0     0     0
	    gptid/e45481a6-77f2-11ed-8462-001018e12890  ONLINE       0     0     0
	logs
	  mirror-8                                      ONLINE       0     0     0
	    gptid/03fd9823-7c87-11ed-9c90-001018e12890  ONLINE       0     0     0
	    gptid/03f9301b-7c87-11ed-9c90-001018e12890  ONLINE       0     0     0
	cache
	  gptid/38dc847c-7a54-11ed-8462-001018e12890    ONLINE       0     0     0

errors: No known data errors



To my non-expert eyes, this looks exactly like it should.
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,700
How would I "convert" to not using multipath anymore?
Unfortunately, it's probably a pool rebuild, and even with that, some annoying activity to prevent multipath from reclaiming the disks for the lifetime of the pool (maybe somebody knows better than me and can suggest a better way).


You would start by backing up and/or replicating the pool contents somewhere safe.

Then destroy the pool.

Then gmultipath destroy disk1 (and the others, up to disk12).

Then you would need to unload gmultipath:

gmultipath unload

After which, you should see only your original disks without the second path:

geom disk list | grep Name

Making a pool out of those would be the next step...

Then the annoying part (if the process hasn't been annoying enough so far)...

You'll need to create a preinit task to unload gmultipath (on every reboot):

kldunload geom_multipath.ko

Make sure that is working consistently, with your freshly made (still empty) pool, across a few reboots.

You'll see issues pretty quickly if there's a problem, as your pool will be missing drives (just destroy the gmultipath disks again after unloading gmultipath to return your pool to normal).

If you're comfortable with the stability of the solution, put your data back.
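
Pulling those steps together, a rough sketch (the pool name MD1400 and the labels disk1..disk12 are taken from your output above; don't run any of it until the backup is verified):

Code:
# DESTRUCTIVE: only once the pool contents are safely copied elsewhere
zpool destroy MD1400

# drop the multipath devices, labels disk1..disk12 as in your gmultipath list
for n in 1 2 3 4 5 6 7 8 9 10 11 12; do
    gmultipath destroy disk${n}
done

# unload the kernel module so the raw da* devices come back
gmultipath unload

# sanity check: each disk should now appear exactly once
geom disk list | grep Name

The preinit kldunload geom_multipath.ko task is then what keeps the module from grabbing the disks again on later boots.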
 

StorageCurious

Explorer
Joined
Sep 28, 2022
Messages
60
Unfortunately, it's probably a pool rebuild, and even with that, some annoying activity to prevent multipath from reclaiming the disks for the lifetime of the pool (maybe somebody knows better than me and can suggest a better way).


You would start by backing up and/or replicating the pool contents somewhere safe.

Then destroy the pool.
....
What I was afraid of, basically.
You'll need to create a preinit task to unload gmultipath (on every reboot):

kldunload geom_multipath.ko
Do I need to do this if I remove one cable from the enclosure, making it single-pathed anyway?
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,700
Do I need to do this if I remove one cable from the enclosure, making it single-pathed anyway?
I would say no... depends on how your backplane works.
 

StorageCurious

Explorer
Joined
Sep 28, 2022
Messages
60
I have an additional question - how can I manually run whatever is needed to have TrueNAS see that those disks are indeed part of a multipath and not available for pools?

I restarted middlewared; it didn't change a thing. I also ran the multidisk.sync command and then restarted middlewared - no change. Short term, I just need TrueNAS to wake up to the fact that those disks aren't actually unused. I'm OK with running this command when needed, until I fix it for good.

But what would that command be? What runs during boot to define all this?
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,700
It's the startup sequence of the kernel module... and it's continually re-evaluated as the module runs.

I don't think anything is supposed to be required to "tickle" it into finding something that's multipathed; it's actively hunting for that whenever the module is loaded.

From the man pages:
Code:
     When new devices are added to the system the MULTIPATH GEOM class is
     given an opportunity to taste these new devices.  If a new device has a
     MULTIPATH on-disk metadata label, the device is either used to create a
     new MULTIPATH GEOM, or added to the list of paths for an existing
     MULTIPATH GEOM.

     It is this mechanism that works reasonably with isp(4) and mpt(4) based
     Fibre Channel disk devices.  For these devices, when a device disappears
     (due to e.g., a cable pull or power failure to a switch), the device is
     proactively marked as gone and I/O to it failed.  This causes the
     MULTIPATH failure event just described.

     When Fibre Channel events inform either isp(4) or mpt(4) host bus
     adapters that new devices may have arrived (e.g., the arrival of an RSCN
     event from the Fabric Domain Controller), they can cause a rescan to
     occur and cause the attachment and configuration of any (now) new devices
     to occur, causing the taste event described above.

     This means that this multipath architecture is not a one-shot path
     failover, but can be considered to be steady state as long as failed
     paths are repaired (automatically or otherwise).

     Automatic rescanning is not a requirement.  Nor is Fibre Channel.  The
     same failover mechanisms work equally well for traditional "Parallel"
     SCSI but may require manual intervention with camcontrol(8) to cause the
     reattachment of repaired device links.


Maybe the clue is in there - look at the metadata label on the disks to remove MULTIPATH.
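
If you want to poke at that directly: GEOM classes keep their metadata in the last sector of the provider, which is why multipath/disk2 is exactly one 512-byte sector smaller than da29/da17 in your listing. Something like this (read-only; da17 and its mediasize are taken from your output) should show whether a GEOM::MULTIPATH label is present:

Code:
# dump the last sector of da17, where a gmultipath label would live
dd if=/dev/da17 bs=512 skip=$(( 14000519643136 / 512 - 1 )) count=1 2>/dev/null | strings
# a labelled path prints "GEOM::MULTIPATH" among the strings

# and if a stray label ever needs removing (careful: not while the pool is using it):
# gmultipath clear da17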
 
Joined
Jul 3, 2015
Messages
926
Have you tried /usr/local/bin/midclt call disk.multipath_sync?
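
If that kicks anything loose, checking what the middleware now reports should confirm it; a hedged example (midclt returns JSON, so python3 -m json.tool makes it readable):

Code:
/usr/local/bin/midclt call disk.multipath_sync
midclt call disk.query | python3 -m json.tool | less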
 

StorageCurious

Explorer
Joined
Sep 28, 2022
Messages
60
It's the startup sequence of the kernel module... and it's continually re-evaluated as the module runs.

I don't think anything is supposed to be required to "tickle" it into finding something that's multipathed; it's actively hunting for that whenever the module is loaded.
To be fair, it IS finding the multipath disks, as the multipath UI shows all disks. The problem is that the Storage/Pools screen is lying to me about 3 disks being "free to use" when they are multipath disks in an existing pool.

It's that part I would like TrueNAS to reconsider.
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,700
3 disks being "free to use" when they are multipath disks in an existing pool.

It's that part I would like TrueNAS to reconsider.
Maybe have a look at the metadata for those disks...
 
Joined
Jul 3, 2015
Messages
926
It would be interesting to look at the partitions on those disks with gpart show, and also at the swap usage of your pool with swapinfo.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
To be fair, it IS finding the multipath disks, as the multipath UI shows all disks. The problem is that the Storage/Pools screen is lying to me about 3 disks being "free to use" when they are multipath disks in an existing pool.

That's weirdish; I thought all the handwringing about device serial numbers (you know, the thing that breaks crappy USB-to-SATA bridges that all have a single serial number assigned) was in part to help the system understand multipath environments.
 

StorageCurious

Explorer
Joined
Sep 28, 2022
Messages
60
That's weirdish; I thought all the handwringing about device serial numbers (you know, the thing that breaks crappy USB-to-SATA bridges that all have a single serial number assigned) was in part to help the system understand multipath environments.
Yeah, my own reading when I got started not long ago was that serial numbers were the "disk unique key". So I'm not sure how a disk's serial can be part of a multipath and still be shown among the available pool disks.

My own educated guess is that this is just a tiny UI issue, but one with possibly dramatic consequences if one does not pay attention.
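
For what it's worth, the serials are easy to compare from the shell. If da29 and da17 really are two paths to the same disk (per the gmultipath list earlier), they should report the identical serial; camcontrol's -S flag prints just the serial number:

Code:
camcontrol inquiry da29 -S    # active path
camcontrol inquiry da17 -S    # passive path, should print the same serial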
 

StorageCurious

Explorer
Joined
Sep 28, 2022
Messages
60
It would be interesting to look at the partitions on those disks with gpart show, and also at the swap usage of your pool with swapinfo.
Code:
=>       40  195371488  nvd0  GPT  (93G)
         40         88        - free -  (44K)
        128  195371400     1  freebsd-zfs  (93G)

=>       40  195371488  nvd1  GPT  (93G)
         40         88        - free -  (44K)
        128  195371400     1  freebsd-zfs  (93G)

=>       40  234441568  da11  GPT  (112G)
         40     532480     1  efi  (260M)
     532520   33554432     3  freebsd-swap  (16G)
   34086952  200343552     2  freebsd-zfs  (96G)
  234430504      11104        - free -  (5.4M)

=>       40  234441568  da12  GPT  (112G)
         40     532480     1  efi  (260M)
     532520   33554432     3  freebsd-swap  (16G)
   34086952  200343552     2  freebsd-zfs  (96G)
  234430504      11104        - free -  (5.4M)

=>       40  781249920  da9  GPT  (373G)
         40         88       - free -  (44K)
        128  781249832    1  freebsd-zfs  (373G)

=>        40  7814037088  da7  GPT  (3.6T)
          40          88       - free -  (44K)
         128     4194304    1  freebsd-swap  (2.0G)
     4194432  7809842696    2  freebsd-zfs  (3.6T)

=>        40  7814037088  da8  GPT  (3.6T)
          40          88       - free -  (44K)
         128     4194304    1  freebsd-swap  (2.0G)
     4194432  7809842696    2  freebsd-zfs  (3.6T)

=>         40  27344764848  multipath/disk2  GPT  (13T)
           40           88                   - free -  (44K)
          128      4194304                1  freebsd-swap  (2.0G)
      4194432  27340570456                2  freebsd-zfs  (13T)

=>         40  27344764848  multipath/disk3  GPT  (13T)
           40           88                   - free -  (44K)
          128      4194304                1  freebsd-swap  (2.0G)
      4194432  27340570456                2  freebsd-zfs  (13T)

=>         40  27344764848  multipath/disk11  GPT  (13T)
           40           88                    - free -  (44K)
          128      4194304                 1  freebsd-swap  (2.0G)
      4194432  27340570456                 2  freebsd-zfs  (13T)

=>         40  27344764848  multipath/disk5  GPT  (13T)
           40           88                   - free -  (44K)
          128      4194304                1  freebsd-swap  (2.0G)
      4194432  27340570456                2  freebsd-zfs  (13T)

=>         40  27344764848  multipath/disk6  GPT  (13T)
           40           88                   - free -  (44K)
          128      4194304                1  freebsd-swap  (2.0G)
      4194432  27340570456                2  freebsd-zfs  (13T)

=>         40  27344764848  multipath/disk9  GPT  (13T)
           40           88                   - free -  (44K)
          128      4194304                1  freebsd-swap  (2.0G)
      4194432  27340570456                2  freebsd-zfs  (13T)

=>         40  27344764848  multipath/disk12  GPT  (13T)
           40           88                    - free -  (44K)
          128      4194304                 1  freebsd-swap  (2.0G)
      4194432  27340570456                 2  freebsd-zfs  (13T)

=>         40  27344764848  multipath/disk1  GPT  (13T)
           40           88                   - free -  (44K)
          128      4194304                1  freebsd-swap  (2.0G)
      4194432  27340570456                2  freebsd-zfs  (13T)

=>         40  27344764848  multipath/disk10  GPT  (13T)
           40           88                    - free -  (44K)
          128      4194304                 1  freebsd-swap  (2.0G)
      4194432  27340570456                 2  freebsd-zfs  (13T)

=>         40  27344764848  multipath/disk7  GPT  (13T)
           40           88                   - free -  (44K)
          128      4194304                1  freebsd-swap  (2.0G)
      4194432  27340570456                2  freebsd-zfs  (13T)

=>         40  27344764848  multipath/disk8  GPT  (13T)
           40           88                   - free -  (44K)
          128      4194304                1  freebsd-swap  (2.0G)
      4194432  27340570456                2  freebsd-zfs  (13T)

=>         40  27344764848  multipath/disk4  GPT  (13T)
           40           88                   - free -  (44K)
          128      4194304                1  freebsd-swap  (2.0G)
      4194432  27340570456                2  freebsd-zfs  (13T)

=>       40  781422688  nvd3  GPT  (373G)
         40         88        - free -  (44K)
        128  781422600     1  freebsd-zfs  (373G)

=>       40  781422688  nvd2  GPT  (373G)
         40         88        - free -  (44K)
        128  781422600     1  freebsd-zfs  (373G)


Code:
Device                 1K-blocks     Used     Avail Capacity
/dev/mirror/swap0.eli    2097152     5444   2091708     0%
/dev/mirror/swap1.eli   16777216     4576  16772640     0%
Total                   18874368    10020  18864348     0%
 