How do I safely get rid of this single-drive stripe?

Rob Townley

Dabbler
Joined
May 1, 2017
Messages
19
I don't want the stripe. I want to get rid of the stupid stripe and move da4p2 into one of the raidz arrays. Backstory: at one time I had one or two 8TB SATA drives that were only recognized as 2.2TB by the very old SAS controller. I had attempted to put them into their own RAID set to see if more of the drive space would be seen. I ended up swapping them out for 3TB SAS drives because at least the entire drive is seen. I have no idea if any of my actual data is on this single-drive stripe. So how would one move any used blocks on the stripe to the raidz2 arrays? Currently, there is only 109GiB used and 24.4TiB available. So there is tons of free space, but how does one know where the actual data resides?

Storage --> Volumes --> "View Volumes" --> highlight the pool (top line) --> "Volume Status" --> click da4p2, and the only options presented are "Edit", "Offline", or "Replace".

Under "View Disks", wipe or Edit are the options available.

Any pointers as to where to read up?

Is the only real option to move the 109GB to another machine and start from scratch on this one?
[Attached screenshot: 20190515-151446-FreeNas8-StripeOfSingleDrive-da4p2.png]


Build FreeNAS-9.10.2-U6 (561f0d7a1)
Platform Intel(R) Xeon(R) CPU L5410 @ 2.33GHz
Memory 32732MB
Load Average 0.11, 0.23, 0.23
 

myoung

Explorer
Joined
Mar 14, 2018
Messages
70
I have no idea if any of my actual data is on this single-drive stripe.

At the very least, some metadata is.

So how would one move any used blocks on the stripe to the raidz2 arrays?

You can't

how does one know where the actual data resides?

zpool iostat -v might tell you what you're looking for
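For reference, both of these read-only commands break the numbers down per top-level vdev (pool name taken from your screenshot, adjust as needed); the single-disk vdev's ALLOC column is the data that lives on it:
Code:
zpool list -v FreeNas8-POOL0     # ALLOC/FREE per top-level vdev
zpool iostat -v FreeNas8-POOL0   # same allocation columns plus I/O counters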

Is the only real option to move the 109GB to another machine and start from scratch on this one?

Yes
 

SweetAndLow

Sweet'NASty
Joined
Nov 6, 2013
Messages
6,421
You rebuild the pool
 

artlessknave

Wizard
Joined
Oct 29, 2016
Messages
1,506
Your pool is effectively not redundant: if the stripe drive fails, it kills the whole pool, and there is no way to detach it without completely destroying the pool and starting again, because top-level vdev removal is not available until FreeBSD 12, IIRC.
You should consider attaching at least one of your spare drives to the stripe, which will give you mirror redundancy until you sort out what to do, or immediately copy the data to a spare location in preparation for rebuilding the pool.
I don't recall the exact syntax, but something like (I don't believe this is possible in the GUI currently):
Code:
# rough sketch only: free up a disk, then attach it to da4p2 so the lone stripe becomes a mirror
# (note: zpool detach only works on mirror members and hot spares, not on raidz members)
sudo zpool detach FreeNas8-POOL0 da2p2
sudo zpool attach FreeNas8-POOL0 da4p2 da2p2

Data location is completely managed by ZFS and spread across every top-level vdev, and there is no way to reorganize it; that capability would be part of top-level vdev removal.

You should also really have a backup, because as you can see, ZFS is extremely and unrelentingly unforgiving of mistakes.
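If you do end up copying the data off and rebuilding, a recursive snapshot plus zfs send/receive is the usual way to move 109G around. A minimal sketch, where otherhost and backuppool are placeholders for whatever machine and pool you copy to:
Code:
# take a recursive snapshot of every dataset in the pool
sudo zfs snapshot -r FreeNas8-POOL0@migrate
# stream the whole pool (datasets, properties, snapshots) to the other box
sudo zfs send -R FreeNas8-POOL0@migrate | ssh otherhost zfs receive -uF backuppool/FreeNas8-POOL0
# after rebuilding the pool, run the same send/receive in the opposite direction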
 

tfran1990

Patron
Joined
Oct 18, 2017
Messages
294
You should be extremely grateful that you are having this problem with only 109G of data on your pool.
Rebuilding sux, but at least you don't have to find a place to move 30TB of data just to rebuild the pool.
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
Is the only real option to move the 109GB to another machine and start from scratch on this one?
As everyone else has said, currently, that's the only option--and 109G is nothing, so copying it off the pool and rebuilding the pool should be pretty easy. Device removal is coming, but I don't know the ETA.
 

Rob Townley

Dabbler
Joined
May 1, 2017
Messages
19
FreeNAS says the POOL and everything is healthy, so probably the drive was removed before any data was added. It started as a test machine and was then put into service before I had dug down into what was really under the covers.
 

Rob Townley

Dabbler
Joined
May 1, 2017
Messages
19
At the very least, some metadata is.



You can't



zpool iostat -v might tell you what you're looking for

Code:
freenas8#
freenas8# zpool iostat -v -v
                                           capacity     operations    bandwidth
pool                                    alloc   free   read  write   read  write
--------------------------------------  -----  -----  -----  -----  -----  -----
FreeNas8-POOL0                           109G  24.4T      0     32     14   169K
  raidz2                                52.6G  10.9T      0     14      2  49.9K
    gptid/38f6a9b1-945c-11e8-bae5-001517e36380      -      -      0      4      8  28.3K
    da1                                     -      -      0      4     10  28.3K
    gptid/d4d55d27-9806-11e8-bae5-001517e36380      -      -      0      4      8  28.3K
    gptid/4801b3e7-945a-11e8-bae5-001517e36380      -      -      0      4      8  28.3K
  raidz2                                53.2G  10.9T      0     13      2  49.2K
    gptid/9eff3ce2-96b2-11e8-bae5-001517e36380      -      -      0      4      9  27.9K
    gptid/e55df6a6-9737-11e8-bae5-001517e36380      -      -      0      4     10  28.0K
    gptid/79afd0d6-9761-11e8-bae5-001517e36380      -      -      0      4     10  28.0K
    gptid/6721a119-b53a-11e8-b428-001517e36380      -      -      0      4     10  28.0K
  gptid/c69cf547-de54-11e8-b983-001517e36380  3.18G  2.72T      0      4      9  69.8K
--------------------------------------  -----  -----  -----  -----  -----  -----
freenas-boot                            2.97G  4.46G      0      0    474  3.61K
  mirror                                2.97G  4.46G      0      0    474  3.61K
    gptid/b743692e-a2d3-11e6-9dab-001517e28e00      -      -      0      0    282  3.61K
    da12p2                                  -      -      0      0    268  3.61K
--------------------------------------  -----  -----  -----  -----  -----  -----



This XyraTex only saw about 1/4 of the internal 8TB SATA drive, but at least disklist.pl sees the external 8TB USB WD MyBook. And either the USB stick died or was never joined.
Code:
freenas8# /mnt/FreeNas8-POOL0/rjt/bin/disklist.pl
partition  zpool           device  disk                      size  serial                 rpm
---------------------------------------------------------------------------------------------
da0p2      FreeNas8-POOL0  da0     IBM-XIV ST33000650SS  B1  3000  Z296G1J600009250PMZD  7200
da2p2      FreeNas8-POOL0  da2     IBM-XIV ST33000650SS  B1  3000  Z293ZXP3000093024E1M  7200
da3p2      FreeNas8-POOL0  da3     IBM-XIV ST33000650SS  B1  3000  Z2971SQF00009335RHNM  7200
da4p2      FreeNas8-POOL0  da4     IBM-XIV ST33000650SS  B1  3000  Z296XLRB0000C333646W  7200
da5p2      FreeNas8-POOL0  da5     IBM-XIV ST33000650SS  B1  3000  Z293YYGR0000C250BUPQ  7200
da6p2      FreeNas8-POOL0  da6     IBM-XIV ST33000650SS  B1  3000  Z296XSGW0000C33210B9  7200
da7p2      FreeNas8-POOL0  da7     IBM-XIV ST33000650SS  B1  3000  Z2979KZM00009337WD32  7200
da8p2      FreeNas8-POOL0  da8     IBM-XIV ST33000650SS  B1  3000  Z293ZEFS00009301W84H  7200
da9p2      FreeNas8-POOL0  da9     IBM-XIV ST33000650SS  B1  3000  Z296ZARQ0000C3322F2E  7200
da10p2     FreeNas8-POOL0  da10    IBM-XIV ST33000650SS  B1  3000  Z293ZEGJ00009302X6S0  7200
da11p2     FreeNas8-POOL0  da11    IBM-XIV ST33000650SS  B1  3000  Z293ZLL900009302YFVM  7200
da13p2     freenas-boot    da13    MUSHKIN MKNUFDMH8GB          8  07BA1302D6C000CB       ???
                           da1     IBM-XIV ST33000650SS  B1  3000  Z2925ZAV0000C2515B3A  7200
                           da12    PNY USB 2.0 FD               8  0416KK00000066856215   ???
                           da14    WD My Book 25EE           8001  32544B3837375044      5400
freenas8#



Yes
 

artlessknave

Wizard
Joined
Oct 29, 2016
Messages
1,506
FreeNAS says the POOL and everything is healthy, so probably the drive was removed before any data was added
I think you're misunderstanding something crucial... your entire pool has the redundancy of a stripe. The pool is healthy currently, but the raidz2s are meaningless, and the location of data in the pool is meaningless. If you lose da4, *everything dies*. Dead. Kicked the bucket. Your data is just gone. There is no "probably the drive was removed", because *you can't remove a top-level vdev, period. end. hard stop.* Losing any top-level vdev *destroys the entire pool*.
There is no chance of recovery; there is nothing to do but change the da4 stripe to a mirror, or destroy and rebuild the pool the right way, because you have absolutely zero redundancy on that one drive and the entire pool is critically dependent on it being available.
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
Device removal is coming, but I don't know the ETA.
On further reflection, this is incorrect--I was thinking of device addition to existing RAIDZ vdevs. Device removal is already here, but it only works on pools where all vdevs are either single disks or mirrors. To my knowledge, there is no solution in the works that would let you remove the single disk without destroying the pool.
 

artlessknave

Wizard
Joined
Oct 29, 2016
Messages
1,506
I'm fairly sure it's not in FreeNAS yet
 
danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
I'm fairly sure it's not in FreeNAS yet
It is in FreeNAS (came in with the release of 11.2), but as I said above, it only works when all vdevs in the pool are either single disks or mirrors.
 

SweetAndLow

Sweet'NASty
Joined
Nov 6, 2013
Messages
6,421
https://www.ixsystems.com/community/resources/zfs-feature-flags-in-freenas.95/

Or you can list the ZFS features for your specific pool; this is mine from an 11.2-U4.1 FreeNAS install.
Code:
root@tank:~ # zpool get all tank | grep feature
tank  feature@async_destroy          enabled                        local
tank  feature@empty_bpobj            active                         local
tank  feature@lz4_compress           active                         local
tank  feature@multi_vdev_crash_dump  enabled                        local
tank  feature@spacemap_histogram     active                         local
tank  feature@enabled_txg            active                         local
tank  feature@hole_birth             active                         local
tank  feature@extensible_dataset     active                         local
tank  feature@embedded_data          active                         local
tank  feature@bookmarks              enabled                        local
tank  feature@filesystem_limits      enabled                        local
tank  feature@large_blocks           active                         local
tank  feature@sha512                 enabled                        local
tank  feature@skein                  enabled                        local
tank  feature@device_removal         enabled                        local
tank  feature@obsolete_counts        enabled                        local
tank  feature@zpool_checkpoint       enabled                        local
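If you only care about the one flag relevant here, zpool get can query it directly instead of grepping the whole list (this assumes an 11.2-era pool; on older pools the property does not exist yet):
Code:
zpool get feature@device_removal tank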
 

artlessknave

Wizard
Joined
Oct 29, 2016
Messages
1,506
OK, so I was having trouble confirming online exactly what vdev removal is possible, so I just tried it out instead. It is indeed possible to remove a stripe or mirror vdev from a pool made of stripes/mirrors, but not from a pool containing raidz vdevs, which makes it useless in this case.
I'm a little curious how the OP got a stripe added in the first place, since neither the GUI nor the command line will allow it by default; you have to force it with the command line to do it at all.

EDIT: of course, naturally, just after I post this, I find the post where danb35 does the same thing. Sigh. Reinventing the wheel FTW. Oh well, now I know how to do it.
Code:
  pool: test
state: ONLINE
  scan: none requested
config:

        NAME                                            STATE     READ WRITE CKSUM
        test                                            ONLINE       0     0     0
          raidz1-0                                      ONLINE       0     0     0
            gptid/f8a8771f-799f-11e9-8877-000c291d8d08  ONLINE       0     0     0
            gptid/fa20be2e-799f-11e9-8877-000c291d8d08  ONLINE       0     0     0
            gptid/fa23e72e-799f-11e9-8877-000c291d8d08  ONLINE       0     0     0
          raidz1-1                                      ONLINE       0     0     0
            gptid/f9e78050-799f-11e9-8877-000c291d8d08  ONLINE       0     0     0
            gptid/fa04167e-799f-11e9-8877-000c291d8d08  ONLINE       0     0     0
            gptid/fa30cf28-799f-11e9-8877-000c291d8d08  ONLINE       0     0     0
          ada6                                          ONLINE       0     0     0

errors: No known data errors
freenastest% sudo zpool remove test ada6
cannot remove ada6: invalid config; all top-level vdevs must have the same sector size and not be raidz.

Code:
  pool: test
state: ONLINE
  scan: none requested
config:

        NAME                                            STATE     READ WRITE CKSUM
        test                                            ONLINE       0     0     0
          raidz1-0                                      ONLINE       0     0     0
            gptid/daa39e71-79a4-11e9-8877-000c291d8d08  ONLINE       0     0     0
            gptid/dacd8427-79a4-11e9-8877-000c291d8d08  ONLINE       0     0     0
            gptid/dacfe671-79a4-11e9-8877-000c291d8d08  ONLINE       0     0     0
          raidz1-1                                      ONLINE       0     0     0
            gptid/dab3a624-79a4-11e9-8877-000c291d8d08  ONLINE       0     0     0
            gptid/dac69b72-79a4-11e9-8877-000c291d8d08  ONLINE       0     0     0
            gptid/dad73412-79a4-11e9-8877-000c291d8d08  ONLINE       0     0     0
          mirror-2                                      ONLINE       0     0     0
            ada6                                        ONLINE       0     0     0
            ada7                                        ONLINE       0     0     0

errors: No known data errors
freenastest% sudo zpool remove test mirror-2
cannot remove mirror-2: invalid config; all top-level vdevs must have the same sector size and not be raidz.
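
For contrast, a rough sketch of the two cases discussed above, with purely illustrative pool and disk names: forcing a mismatched single-disk vdev into a raidz pool (presumably how the OP's stripe got added in the first place), and removing a single-disk vdev from a pool whose top-level vdevs are all plain disks or mirrors, the only layout where removal currently works:
Code:
# zpool add refuses a vdev whose replication level doesn't match, unless forced
sudo zpool add raidzpool ada8        # refused: mismatched replication level
sudo zpool add -f raidzpool ada8     # -f overrides the check and adds the lone disk

# on a pool made only of single disks and/or mirrors, removal is allowed
sudo zpool remove mirrorpool ada8
sudo zpool status mirrorpool         # the disk is evacuated, then disappears from the config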

 