SOLVED Striped special vdev and metadata

hasore

Cadet
Joined
Jul 23, 2019
Messages
9
I am currently running a striped mirror (3x) special allocation class using 6 SATA SSDs and planning to either expand or reconfigure to:
- Expand: striped mirror (3x) special allocation class using 6 SATA SSDs + 2 NVMe drives
- Rebuild: striped mirror (2x) special allocation class using 4 SATA SSDs (the ones with the best performance) + 2 NVMe drives.

1 - How does metadata work on a striped mirror? I may be wrong, but I don't expect the metadata to be split across the special vdevs the way small-block files are, or does it?
2 - Would it be better to have a single special vdev as a three-way mirror, or does the metadata actually benefit from multiple special vdevs?

Small block size: 256K
Record size: 512K
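
For reference, both of those are per-dataset ZFS properties; a minimal sketch of how they could be set (the dataset name frostfire/data is hypothetical, and both properties only affect newly written blocks):
Code:
# Hypothetical dataset name used for illustration
zfs set recordsize=512K frostfire/data
# Blocks up to 256K are allocated on the special vdevs;
# keep this below recordsize or all data lands on the special class
zfs set special_small_blocks=256K frostfire/data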

Pool layout + verbose:
Code:
root@storage[~]# zpool list -v frostfire
NAME                                             SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
frostfire                                       15.8T   995G  14.8T        -         -     0%     6%  1.00x    ONLINE  /mnt
  mirror                                        3.62T   247G  3.38T        -         -     0%  6.65%      -    ONLINE
    gptid/6221b76a-8bcf-11ec-81c4-001b21b939e4      -      -      -        -         -      -      -      -    ONLINE
    gptid/625b2df5-8bcf-11ec-81c4-001b21b939e4      -      -      -        -         -      -      -      -    ONLINE
  mirror                                        3.62T   247G  3.38T        -         -     0%  6.66%      -    ONLINE
    gptid/61f79690-8bcf-11ec-81c4-001b21b939e4      -      -      -        -         -      -      -      -    ONLINE
    gptid/62745d67-8bcf-11ec-81c4-001b21b939e4      -      -      -        -         -      -      -      -    ONLINE
  mirror                                        3.62T   247G  3.38T        -         -     0%  6.64%      -    ONLINE
    gptid/618c3d42-8bcf-11ec-81c4-001b21b939e4      -      -      -        -         -      -      -      -    ONLINE
    gptid/61df2d04-8bcf-11ec-81c4-001b21b939e4      -      -      -        -         -      -      -      -    ONLINE
  mirror                                        3.62T   248G  3.38T        -         -     0%  6.67%      -    ONLINE
    gptid/61c3eafb-8bcf-11ec-81c4-001b21b939e4      -      -      -        -         -      -      -      -    ONLINE
    gptid/6195cca7-8bcf-11ec-81c4-001b21b939e4      -      -      -        -         -      -      -      -    ONLINE
special                                             -      -      -        -         -      -      -      -  -
  mirror                                         436G  2.09G   434G        -         -     0%  0.47%      -    ONLINE
    gptid/278dcedc-8ba1-11ec-bd37-001b21b939e4      -      -      -        -         -      -      -      -    ONLINE
    gptid/12cdfbdb-8ac5-11ec-8d10-001b21b939e4      -      -      -        -         -      -      -      -    ONLINE
  mirror                                         436G  2.09G   434G        -         -     0%  0.47%      -    ONLINE
    gptid/1bfe8e76-8ac5-11ec-8d10-001b21b939e4      -      -      -        -         -      -      -      -    ONLINE
    gptid/22ffdad9-8ac5-11ec-8d10-001b21b939e4      -      -      -        -         -      -      -      -    ONLINE
  mirror                                         436G  2.09G   434G        -         -     0%  0.47%      -    ONLINE
    gptid/1faf368d-8ac5-11ec-8d10-001b21b939e4      -      -      -        -         -      -      -      -    ONLINE
    gptid/a4006fe0-8ba0-11ec-bd37-001b21b939e4      -      -      -        -         -      -      -      -    ONLINE


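The block-size histogram below was presumably produced by something like the following (the exact zdb verbosity flags can vary by version):
Code:
zdb -Lbbbs frostfire
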
Code:
  block   psize                lsize                asize
   size   Count   Size   Cum.  Count   Size   Cum.  Count   Size   Cum.
    512:  7.41K  3.71M  3.71M  7.41K  3.71M  3.71M      0      0      0
     1K:  6.31K  7.61M  11.3M  6.31K  7.61M  11.3M      0      0      0
     2K:  6.10K  15.9M  27.2M  6.10K  15.9M  27.2M      0      0      0
     4K:  28.3K   114M   141M  5.64K  31.4M  58.6M  26.2K   105M   105M
     8K:  15.0K   159M   299M  4.52K  48.9M   107M  28.8K   244M   349M
    16K:  4.90K   103M   402M  6.82K   137M   244M  12.7K   280M   629M
    32K:  5.65K   257M   659M  26.6K   891M  1.11G  5.78K   260M   890M
    64K:  14.0K  1.27G  1.91G  2.17K   196M  1.30G  14.0K  1.27G  2.14G
   128K:  20.8K  3.92G  5.83G  2.80K   479M  1.77G  20.8K  3.92G  6.06G
   256K:  76.9K  28.4G  34.2G  1.68K   603M  2.36G  76.9K  28.4G  34.4G
   512K:  1.88M   961G   995G  1.99M  1018G  1021G  1.88M   961G   995G
     1M:      0      0   995G      0      0  1021G      0      0   995G
     2M:      0      0   995G      0      0  1021G      0      0   995G
     4M:      0      0   995G      0      0  1021G      0      0   995G
     8M:      0      0   995G      0      0  1021G      0      0   995G
    16M:      0      0   995G      0      0  1021G      0      0   995G
 

hasore

Cadet
Joined
Jul 23, 2019
Messages
9
Also planning to create an L2ARC using partitions of one or more storage devices.
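
A minimal sketch of what adding such a cache device could look like (the partition name nvd0p2 is a placeholder, not an actual device from this system):
Code:
# Add a hypothetical NVMe partition as an L2ARC (cache) device
zpool add frostfire cache nvd0p2
# Confirm it shows up under the "cache" section
zpool status frostfire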
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
1 - How does metadata work on a striped mirror? I may be wrong, but I don't expect the metadata to be split across the special vdevs the way small-block files are, or does it?
2 - Would it be better to have a single special vdev as a three-way mirror, or does the metadata actually benefit from multiple special vdevs?
How I understand it is that your special VDEVs are all treated equally, so anything you have set to go to special VDEVs will be spread across them evenly... in which case, you would theoretically benefit from increased IOPS for small files and metadata.
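
One way to watch that spread in practice (a standard command, not something specific to this setup) is per-vdev I/O statistics:
Code:
# Per-vdev bandwidth and IOPS, refreshed every 5 seconds,
# including each special mirror
zpool iostat -v frostfire 5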
 

hasore

Cadet
Joined
Jul 23, 2019
Messages
9
How I understand it is that your special VDEVs are all treated equally, so anything you have set to go to special VDEVs will be spread across them evenly... in which case, you would theoretically benefit from increased IOPS for small files and metadata.
You are probably right; the metadata should behave like normal data and small-block files in this scenario. I had no luck finding any information about it.
In the end, I opted to attach two NVMe drives (custom partitions) to vdevs mirror-4 and mirror-6, and will add an SSD to vdev mirror-5 at a later date.
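
Roughly, the change shown below could be done with zpool attach, adding each new partition to one existing member of the target mirror (these exact commands are not from the original steps; on TrueNAS the same is normally done via the web UI):
Code:
# Each attach turns the named 2-way special mirror into a 3-way mirror
zpool attach frostfire gptid/278dcedc-8ba1-11ec-bd37-001b21b939e4 gptid/4155886c-8f59-11ec-8aa4-001b21b939e4   # mirror-4
zpool attach frostfire gptid/1faf368d-8ac5-11ec-8d10-001b21b939e4 gptid/3dc9f6ac-8f59-11ec-8aa4-001b21b939e4   # mirror-6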

Layout Before:

Code:
root@storage[~]# zpool status frostfire
  pool: frostfire
 state: ONLINE
config:

    NAME                                            STATE     READ WRITE CKSUM
    frostfire                                       ONLINE       0     0     0
      mirror-0                                      ONLINE       0     0     0
        gptid/6221b76a-8bcf-11ec-81c4-001b21b939e4  ONLINE       0     0     0
        gptid/625b2df5-8bcf-11ec-81c4-001b21b939e4  ONLINE       0     0     0
      mirror-1                                      ONLINE       0     0     0
        gptid/61f79690-8bcf-11ec-81c4-001b21b939e4  ONLINE       0     0     0
        gptid/62745d67-8bcf-11ec-81c4-001b21b939e4  ONLINE       0     0     0
      mirror-2                                      ONLINE       0     0     0
        gptid/618c3d42-8bcf-11ec-81c4-001b21b939e4  ONLINE       0     0     0
        gptid/61df2d04-8bcf-11ec-81c4-001b21b939e4  ONLINE       0     0     0
      mirror-3                                      ONLINE       0     0     0
        gptid/61c3eafb-8bcf-11ec-81c4-001b21b939e4  ONLINE       0     0     0
        gptid/6195cca7-8bcf-11ec-81c4-001b21b939e4  ONLINE       0     0     0
    special   
      mirror-4                                      ONLINE       0     0     0
        gptid/278dcedc-8ba1-11ec-bd37-001b21b939e4  ONLINE       0     0     0
        gptid/12cdfbdb-8ac5-11ec-8d10-001b21b939e4  ONLINE       0     0     0
      mirror-5                                      ONLINE       0     0     0
        gptid/1bfe8e76-8ac5-11ec-8d10-001b21b939e4  ONLINE       0     0     0
        gptid/22ffdad9-8ac5-11ec-8d10-001b21b939e4  ONLINE       0     0     0
      mirror-6                                      ONLINE       0     0     0
        gptid/1faf368d-8ac5-11ec-8d10-001b21b939e4  ONLINE       0     0     0
        gptid/a4006fe0-8ba0-11ec-bd37-001b21b939e4  ONLINE       0     0     0

errors: No known data errors

Layout After:
Code:
root@storage[~]# zpool status frostfire
  pool: frostfire
 state: ONLINE
  scan: resilvered 2.21G in 00:00:05 with 0 errors on Wed Feb 16 16:50:26 2022
config:

    NAME                                            STATE     READ WRITE CKSUM
    frostfire                                       ONLINE       0     0     0
      mirror-0                                      ONLINE       0     0     0
        gptid/6221b76a-8bcf-11ec-81c4-001b21b939e4  ONLINE       0     0     0
        gptid/625b2df5-8bcf-11ec-81c4-001b21b939e4  ONLINE       0     0     0
      mirror-1                                      ONLINE       0     0     0
        gptid/61f79690-8bcf-11ec-81c4-001b21b939e4  ONLINE       0     0     0
        gptid/62745d67-8bcf-11ec-81c4-001b21b939e4  ONLINE       0     0     0
      mirror-2                                      ONLINE       0     0     0
        gptid/618c3d42-8bcf-11ec-81c4-001b21b939e4  ONLINE       0     0     0
        gptid/61df2d04-8bcf-11ec-81c4-001b21b939e4  ONLINE       0     0     0
      mirror-3                                      ONLINE       0     0     0
        gptid/61c3eafb-8bcf-11ec-81c4-001b21b939e4  ONLINE       0     0     0
        gptid/6195cca7-8bcf-11ec-81c4-001b21b939e4  ONLINE       0     0     0
    special   
      mirror-4                                      ONLINE       0     0     0
        gptid/278dcedc-8ba1-11ec-bd37-001b21b939e4  ONLINE       0     0     0
        gptid/12cdfbdb-8ac5-11ec-8d10-001b21b939e4  ONLINE       0     0     0
        gptid/4155886c-8f59-11ec-8aa4-001b21b939e4  ONLINE       0     0     0
      mirror-5                                      ONLINE       0     0     0
        gptid/1bfe8e76-8ac5-11ec-8d10-001b21b939e4  ONLINE       0     0     0
        gptid/22ffdad9-8ac5-11ec-8d10-001b21b939e4  ONLINE       0     0     0
      mirror-6                                      ONLINE       0     0     0
        gptid/1faf368d-8ac5-11ec-8d10-001b21b939e4  ONLINE       0     0     0
        gptid/a4006fe0-8ba0-11ec-bd37-001b21b939e4  ONLINE       0     0     0
        gptid/3dc9f6ac-8f59-11ec-8aa4-001b21b939e4  ONLINE       0     0     0

errors: No known data errors


For my use case, the redundancy of 3 special vdevs as 3-way mirrors outweighs the benefits of 4 special vdevs as 2-way mirrors.
Now I just need to work on creating the L2ARC.
 