Dual Intel Optane in Mirror as Logs for Multiple Pools

NugentS

MVP
Joined
Apr 16, 2020
Messages
2,949
Anyone,
If you look at my specs below you can see I have two Optane drives. Each acts as a log (SLOG) for a different pool:
SSDPool (6*SATA SSD, mirrored vdevs) & BigPool (6*HDD, mirrored vdevs)
The Optanes are 250+ GB, but as I understand it a SLOG needs a mere 16 GB or so. Currently each Optane is running with a single 120 GB partition.
From the command line I have run
Code:
gpart create -s GPT nvd0
gpart add -t freebsd-zfs -a 1m -l "name" -s 120G nvd0

and then added this as a log using zpool add "Pool" log nvd0p1

It occurs to me that I could (at least in theory, until I try it):
1. Remove both logs
2. Wipe both Optanes nvd0 & nvd1
3. Create 2 * 120 (ish) GB partitions on each Optane using gpart
4. zpool add "Pool1" log mirror nvd0p1 nvd1p1
5. zpool add "Pool2" log mirror nvd0p2 nvd1p2

That way the ZIL / SLOGs would be mirrored (protecting against a single device failure).

Anyone got any thoughts as to whether this is a good or bad idea? A rough sketch of the commands is below.
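Roughly, this is what I have in mind - an untested sketch only; the slog-* partition labels are just placeholders, and the device/partition names assume the current single-drive logs really are nvd0p1 and nvd1p1:
Code:
# remove the existing single-device logs (untested sketch)
zpool remove SSDPool nvd0p1
zpool remove BigPool nvd1p1

# wipe both Optanes and repartition each into two ~120 GB slices
gpart destroy -F nvd0
gpart destroy -F nvd1
gpart create -s GPT nvd0
gpart create -s GPT nvd1
gpart add -t freebsd-zfs -a 1m -l slog-ssd-0 -s 120G nvd0
gpart add -t freebsd-zfs -a 1m -l slog-hdd-0 -s 120G nvd0
gpart add -t freebsd-zfs -a 1m -l slog-ssd-1 -s 120G nvd1
gpart add -t freebsd-zfs -a 1m -l slog-hdd-1 -s 120G nvd1

# add one mirrored SLOG per pool, using one partition from each Optane
zpool add SSDPool log mirror nvd0p1 nvd1p1
zpool add BigPool log mirror nvd0p2 nvd1p2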
--------------------------------------------------------
As a second question - I have got my paws on a pair of Intel DC S3610 SSDs (860GB) in good condition.
My large HDD pool has video data on it (not much help there) but also a large number of small files in a host of sub-folders that can take a while to browse from a client.
If I understand things correctly, adding these SSDs as a mirrored special vdev to the HDD pool, and then copying all the data off and back on again, would put the metadata on the SSDs and potentially speed some operations up (my mileage may vary).
Almost as importantly, since the pool is made of mirrored vdevs (3*2) and the special vdev would also be mirrored, the special vdev could be removed from the pool again if the effect is not as obvious as I would like (rough sketch of the add/remove below). I am not sure I would want to go ahead if I couldn't remove the fusion (special) vdev from the pool.
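For reference, something like this is what I understand the add (and later removal, if needed) would look like - a sketch only; the device names and the vdev name in the remove are placeholders that would come from zpool status:
Code:
# add the pair of S3610s as a mirrored special vdev (device names are examples)
zpool add BigPool special mirror ada6 ada7

# optionally also steer small file blocks (not just metadata) onto the SSDs,
# per dataset
zfs set special_small_blocks=32K BigPool/SMB/dataset

# because all top-level vdevs are mirrors, the special vdev can be evacuated
# again if it doesn't help (use the vdev name shown by zpool status)
zpool remove BigPool mirror-3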

Output from a couple of commands below (assuming I have posted it correctly)
Code:
  1k:  25996
  2k:  96317
  4k:  30507
  8k:   8270
 16k:  12312
 32k:  41484
 64k:  61987
128k:  29278
256k:  68105
512k:  59129
  1M:  23653
  2M:  13284
  4M:  12278
  8M:   8403
 16M:   4052
 32M:   2500
 64M:    887
128M:    973
256M:  10093
512M:   4869
  1G:   1807
  2G:    978
  4G:     75
  8G:     18
 16G:      5
 32G:      4
 64G:      1
128G:      1

Traversing all blocks ...


        bp count:             115609922
        ganged count:                 0
        bp logical:      16163205622784      avg: 139808
        bp physical:     15831147257344      avg: 136935     compression:   1.02
        bp allocated:    15842228080640      avg: 137031     compression:   1.02
        bp deduped:                   0    ref>1:      0   deduplication:   1.00
        Normal class:    15842201255936     used: 44.16%

        additional, non-pointer bps of type 0:     501995
         number of (compressed) bytes:  number of bps
                         17:     35 *
                         18:    645 *
                         19:    131 *
                         20:    200 *
                         21:    294 *
                         22:    171 *
                         23:    209 *
                         24:    170 *
                         25:     81 *
                         26:    537 *
                         27:    440 *
                         28:  20033 *********
                         29:  90890 ****************************************
                         30:     32 *
                         31:    366 *
                         32:    330 *
                         33:   1095 *
                         34:    235 *
                         35:    249 *
                         36:     95 *
                         37:    313 *
                         38:    351 *
                         39:    564 *
                         40:   1474 *
                         41:   1088 *
                         42:   1094 *
                         43:    637 *
                         44:    326 *
                         45:    174 *
                         46:    211 *
                         47:    277 *
                         48:    364 *
                         49:    609 *
                         50:    613 *
                         51:   2388 **
                         52:   2034 *
                         53:  15682 *******
                         54:  46653 *********************
                         55:   7557 ****
                         56:  20175 *********
                         57:   3764 **
                         58:  16107 ********
                         59:    904 *
                         60:    782 *
                         61:    734 *
                         62:    974 *
                         63:    897 *
                         64:    856 *
                         65:    975 *
                         66:   1424 *
                         67:   1670 *
                         68:  20794 **********
                         69:   3523 **
                         70:   2182 *
                         71:   2710 **
                         72:   2008 *
                         73:   2741 **
                         74:   4311 **
                         75:   3816 **
                         76:   2524 **
                         77:   2892 **
                         78:   4608 ***
                         79:  34884 ****************
                         80:   3507 **
                         81:   5909 ***
                         82:  19271 *********
                         83:   2895 **
                         84:   1957 *
                         85:   2041 *
                         86:   5233 ***
                         87:   3211 **
                         88:  10550 *****
                         89:  21068 **********
                         90:   7270 ****
                         91:  10659 *****
                         92:   3203 **
                         93:   2695 **
                         94:   2642 **
                         95:   5120 ***
                         96:   5205 ***
                         97:   2563 **
                         98:   2096 *
                         99:   1801 *
                        100:   1823 *
                        101:   2001 *
                        102:   1694 *
                        103:   2214 *
                        104:   1928 *
                        105:   1257 *
                        106:   1522 *
                        107:   1755 *
                        108:   1708 *
                        109:   2313 **
                        110:   1780 *
                        111:  12902 ******
                        112:  15300 *******
        Dittoed blocks on same vdev: 70889

Blocks  LSIZE   PSIZE   ASIZE     avg    comp   %Total  Type
     -      -       -       -       -       -        -  unallocated
     2    32K      8K     24K     12K    4.00     0.00  object directory
    10     5K      5K    120K     12K    1.00     0.00  object array
     1    16K      4K     12K     12K    4.00     0.00  packed nvlist
     -      -       -       -       -       -        -  packed nvlist size
    68  2.12M    348K   1.02M   15.4K    6.25     0.00      L1 bpobj
 2.58K   330M   23.0M   68.9M   26.7K   14.38     0.00      L0 bpobj
 2.65K   332M   23.3M   69.9M   26.4K   14.26     0.00  bpobj
     -      -       -       -       -       -        -  bpobj header
     -      -       -       -       -       -        -  SPA space map header
    41   656K    164K    492K     12K    4.00     0.00      L2 SPA space map
   588  9.19M   2.61M   7.82M   13.6K    3.53     0.00      L1 SPA space map
 14.8K   162M   89.2M    268M   18.0K    1.82     0.00      L0 SPA space map
 15.4K   172M   91.9M    276M   17.9K    1.87     0.00  SPA space map
    12   312K    312K    312K     26K    1.00     0.00  ZIL intent log
   105  13.1M    420K    840K      8K   32.00     0.00      L5 DMU dnode
   105  13.1M    420K    840K      8K   32.00     0.00      L4 DMU dnode
   105  13.1M    420K    840K      8K   32.00     0.00      L3 DMU dnode
   106  13.2M    424K    852K   8.04K   32.00     0.00      L2 DMU dnode
   304    38M   7.53M   15.1M   50.9K    5.05     0.00      L1 DMU dnode
 79.6K  1.24G    343M    688M   8.65K    3.71     0.00      L0 DMU dnode
 80.3K  1.33G    352M    706M   8.80K    3.87     0.00  DMU dnode
   106   408K    408K    852K   8.04K    1.00     0.00  DMU objset
     -      -       -       -       -       -        -  DSL directory
    49  25.5K      3K     48K    1003    8.50     0.00  DSL directory child map
    47    42K   16.5K    180K   3.83K    2.55     0.00  DSL dataset snap map
    60   374K   96.5K    408K   6.80K    3.87     0.00  DSL props
     -      -       -       -       -       -        -  DSL dataset
     -      -       -       -       -       -        -  ZFS znode
     -      -       -       -       -       -        -  ZFS V0 ACL
    35  1.09M    140K    280K      8K    8.00     0.00      L3 ZFS plain file
 21.9K   701M   91.6M    183M   8.35K    7.66     0.00      L2 ZFS plain file
  581K  18.1G   4.90G   9.79G   17.3K    3.71     0.07      L1 ZFS plain file
 87.4M  14.3T   14.2T   14.2T    166K    1.01    98.30      L0 ZFS plain file
 88.0M  14.4T   14.2T   14.2T    165K    1.01    98.37  ZFS plain file
     1    32K      4K      8K      8K    8.00     0.00      L2 ZFS directory
 89.4K  2.79G    358M    715M      8K    8.00     0.00      L1 ZFS directory
  672K  3.10G    848M   2.79G   4.25K    3.74     0.02      L0 ZFS directory
  761K  5.89G   1.18G   3.49G   4.69K    5.00     0.02  ZFS directory
    45    45K     39K    360K      8K    1.15     0.00  ZFS master node
     -      -       -       -       -       -        -  ZFS delete queue
     9   288K     36K     72K      8K    8.00     0.00      L4 zvol object
    30   960K    300K    600K     20K    3.20     0.00      L3 zvol object
   953  29.8M   12.9M   25.8M   27.7K    2.31     0.00      L2 zvol object
 98.6K  3.08G   1.12G   2.24G   23.2K    2.76     0.02      L1 zvol object
 21.3M   341G    234G    234G   11.0K    1.46     1.59      L0 zvol object
 21.4M   344G    235G    236G   11.0K    1.46     1.60  zvol object
     -      -       -       -       -       -        -  zvol prop
     -      -       -       -       -       -        -  other uint8[]
     -      -       -       -       -       -        -  other uint64[]
     -      -       -       -       -       -        -  other ZAP
     -      -       -       -       -       -        -  persistent error log
     1    32K     12K     36K     36K    2.67     0.00      L1 SPA history
   128    16M   1.48M   4.44M   35.5K   10.81     0.00      L0 SPA history
   129  16.0M   1.49M   4.48M   35.5K   10.74     0.00  SPA history
     -      -       -       -       -       -        -  SPA history offsets
     -      -       -       -       -       -        -  Pool properties
     -      -       -       -       -       -        -  DSL permissions
     -      -       -       -       -       -        -  ZFS ACL
     -      -       -       -       -       -        -  ZFS SYSACL
     -      -       -       -       -       -        -  FUID table
     -      -       -       -       -       -        -  FUID table size
    64  34.5K      3K     12K     192   11.50     0.00  DSL dataset next clones
     -      -       -       -       -       -        -  scan work queue
   263   166K   61.5K    432K   1.64K    2.70     0.00  ZFS user/group/project used
     -      -       -       -       -       -        -  ZFS user/group/project quota
     -      -       -       -       -       -        -  snapshot refcount tags
     -      -       -       -       -       -        -  DDT ZAP algorithm
     -      -       -       -       -       -        -  DDT statistics
     -      -       -       -       -       -        -  System attributes
     -      -       -       -       -       -        -  SA master node
    45  67.5K   65.5K    360K      8K    1.03     0.00  SA attr registration
    90  1.41M    348K    720K      8K    4.14     0.00  SA attr layouts
     -      -       -       -       -       -        -  scan translations
     -      -       -       -       -       -        -  deduplicated block
   252   194K    119K   1.39M   5.67K    1.63     0.00  DSL deadlist map
     -      -       -       -       -       -        -  DSL deadlist map hdr
    22  13.5K      3K     12K     558    4.50     0.00  DSL dir clones
     -      -       -       -       -       -        -  bpobj subobj
     -      -       -       -       -       -        -  deferred free
     -      -       -       -       -       -        -  dedup ditto
    21   672K     84K    252K     12K    8.00     0.00      L1 other
   277  1.33M    210K    756K   2.73K    6.48     0.00      L0 other
   298  1.99M    294K   1008K   3.38K    6.91     0.00  other
   105  13.1M    420K    840K      8K   32.00     0.00      L5 Total
   114  13.4M    456K    912K      8K   30.11     0.00      L4 Total
   170  15.2M    860K   1.68M   10.1K   18.05     0.00      L3 Total
 23.0K   745M    105M    210M   9.14K    7.09     0.00      L2 Total
  770K  24.1G   6.37G   12.7G   17.0K    3.78     0.09      L1 Total
  109M  14.7T   14.4T   14.4T    135K    1.02    99.91      L0 Total
  110M  14.7T   14.4T   14.4T    134K    1.02   100.00  Total

Block Size Histogram

  block   psize                lsize                asize
   size   Count   Size   Cum.  Count   Size   Cum.  Count   Size   Cum.
    512:   241K   120M   120M   238K   119M   119M      0      0      0
     1K:   147K   175M   295M   141K   167M   286M      0      0      0
     2K:   210K   572M   867M   205K   560M   846M      0      0      0
     4K:  6.29M  25.2G  26.0G   157K   798M  1.61G  6.07M  24.3G  24.3G
     8K:  7.87M  80.3G   106G  83.2K   940M  2.52G  8.22M  81.5G   106G
    16K:  9.02M   146G   252G  21.5M   345G   348G  9.46M   156G   262G
    32K:   697K  32.3G   284G   894K  29.3G   377G   701K  32.4G   294G
    64K:  1.45M   133G   417G  86.3K  7.95G   385G  1.45M   133G   427G
   128K:  74.5M  9.32T  9.72T  77.1M  9.64T  10.0T  74.5M  9.32T  9.73T
   256K:  19.2K  7.04G  9.73T     11  4.05M  10.0T  19.4K  7.10G  9.74T
   512K:  9.33M  4.67T  14.4T  9.37M  4.68T  14.7T  9.33M  4.67T  14.4T
     1M:      0      0  14.4T      0      0  14.7T      0      0  14.4T
     2M:      0      0  14.4T      0      0  14.7T      0      0  14.4T
     4M:      0      0  14.4T      0      0  14.7T      0      0  14.4T
     8M:      0      0  14.4T      0      0  14.7T      0      0  14.4T
    16M:      0      0  14.4T      0      0  14.7T      0      0  14.4T

root@freenas[~]#
 

Etorix

Wizard
Joined
Dec 30, 2020
Messages
2,134
The shared mirrored SLOG should work. I'm not qualified enough to say if it's a bad idea.

As to the second point, again it should work. But if all you need is to speed up browsing, this could be achieved with a (persistent) metadata-only L2ARC, which need not even be mirrored. And which could be made... from another partition of one or two Optane drives. It has been shown that Optane can sustain acting both as a SLOG and as L2ARC without a performance hit. A rough sketch is below.
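Roughly along these lines - a sketch only, assuming a spare Optane partition at nvd0p3; the sysctl applies to OpenZFS 2.x, where persistent L2ARC rebuild is already on by default:
Code:
# add an L2ARC device - cache vdevs don't need redundancy
zpool add BigPool cache nvd0p3

# keep only metadata in the L2ARC
zfs set secondarycache=metadata BigPool

# persistent L2ARC (rebuild after reboot) - already the default on OpenZFS 2.x
sysctl vfs.zfs.l2arc.rebuild_enabled=1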
 

NugentS

MVP
Joined
Apr 16, 2020
Messages
2,949
OK - I didn't know a persistent metadata-only L2ARC even existed - some research is due.
After I fix my backups - which have just gone fubar.
 

Patrick M. Hausen

Hall of Famer
Joined
Nov 25, 2013
Messages
7,776
4. zpool add "Pool1" log mirror nvd0p1 nvd1p1
5. zpool add "Pool2" log mirror nvd0p2 nvd1p2
Don't use the device names but the GPT IDs. You can get them with gpart list - look for "rawuuid". For example, see the sketch below.
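Something like this (the UUIDs here are obviously placeholders):
Code:
gpart list nvd0 | grep rawuuid
gpart list nvd1 | grep rawuuid

zpool add Pool1 log mirror gptid/<rawuuid-of-nvd0p1> gptid/<rawuuid-of-nvd1p1>
zpool add Pool2 log mirror gptid/<rawuuid-of-nvd0p2> gptid/<rawuuid-of-nvd1p2>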
 

NugentS

MVP
Joined
Apr 16, 2020
Messages
2,949
Don't use the device names but the GPT IDs. You can get them with gpart list - look for "rawuuid".
Why not?
 

Patrick M. Hausen

Hall of Famer
Joined
Nov 25, 2013
Messages
7,776
Because that's how TrueNAS references disks. If you do something special in the CLI, at least stick to the scheme. Otherwise you will confuse the UI and e.g. not get a proper display of the pool.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680

Because if the devices ever happen to scan in a different order, nvd0 might become nvd1. That might not seem like a problem in a mirror, but if the devices are not exactly equal in every way, it's a bad thing. And once you introduce a third device to act as L2ARC, and *that* shows up as nvd0, shoving your old nvd0/1 up to nvd1/2, I guarantee problems.
 

Constantin

Vampire Pig
Joined
May 19, 2017
Messages
1,829
I concur. I had the random disk swaperoo happen to me once, followed by me formatting the "wrong" drive (it had the right ada address) after a reboot.

So never use ada1 etc. from the command line if you can help it. UUIDs are the way to go. Identity swaps are rare on low-disk-count systems, but they can happen there too.
 

NugentS

MVP
Joined
Apr 16, 2020
Messages
2,949
OK, makes sense. Thank you.
 

NugentS

MVP
Joined
Apr 16, 2020
Messages
2,949
OK - so I have created the fusion pool; now, what's the best way to populate it?
BTW - I am playing - this is a home system - so sensible isn't always the most important factor.
I plan on doing this dataset by dataset, and won't bother with some of them.
I have a backup of the first dataset on a separate device.
I have (or will shortly have) a file-by-file copy of the dataset on the same pool (testing to make sure the special vdev is being populated - it is; see the check at the end of this post).
I have a snapshot on a different pool locally and the same snapshot on another TrueNAS box (QNAP NAS running TrueNAS for giggles).
All snapshots are run through the GUI and replicated through the GUI.
The dataset is essentially static - 500GB of small files.

Can I just erase the files from the dataset and then click restore on the replication task? That seems too easy.

Another possibility:
zfs send tank/mydataset tank/mydataset_new
zfs destroy -r tank/mydataset
zfs rename tank/mydataset_new tank/mydataset

And then presumably fix permissions / shares.
That's not working - zfs send says too many arguments (it wants a snapshot rather than two dataset names, and the target needs a zfs recv on the other end of a pipe).
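As an aside, this is the sort of check I'm using to see whether the special vdev is actually being populated (just standard pool stats):
Code:
zpool list -v BigPool
# the "special" mirror row should show ALLOC growing as data is rewritten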
 

NugentS

MVP
Joined
Apr 16, 2020
Messages
2,949
Apparently
Code:
# take an initial snapshot of the source dataset
zfs snap BigPool/SMB/dataset@migrate

# replicate it to a new dataset on the same pool, so newly written metadata lands on the special vdev
zfs send -R BigPool/SMB/dataset@migrate | zfs recv -F BigPool/SMB/dataset_New

# catch anything that changed since the first snapshot with an incremental send
zfs snap BigPool/SMB/dataset@migrate2
zfs send -i @migrate BigPool/SMB/dataset@migrate2 | zfs recv -F BigPool/SMB/dataset_New

# swap the new copy into place
zfs destroy -rf BigPool/SMB/dataset
zfs rename -f BigPool/SMB/dataset_New BigPool/SMB/dataset


Would appear to work. Shares do not need to be modified, and permissions are correct.
[Modified to include the second snapshot - which is better practice in case changes occurred after the first snapshot]
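Once everything checks out, the migration snapshots left behind on the renamed dataset could presumably be destroyed (sketch):
Code:
zfs destroy BigPool/SMB/dataset@migrate
zfs destroy BigPool/SMB/dataset@migrate2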
 