Does adding a dedup vdev to a pool automatically move the dedup table to it?

ubergosu

Cadet
Joined
Feb 26, 2022
Messages
9
We have a pool on spinning disks, and after some time we decided to add an SSD as a dedup vdev. When we do this, will the dedup table automatically be moved from the data vdevs to the dedup vdev? Or do we have to move it explicitly with some command? Or is that not possible at all, so we would need to recreate the entire pool to get rid of the dedup table on the data disks?
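
For reference, here is a rough sketch of how we currently look at the dedup table before touching anything (Python calling the standard zpool/zdb tools; the pool name tank is just a placeholder for our real pool):

import subprocess

POOL = "tank"  # placeholder: substitute your real pool name

# Summary of dedup table entries for the pool.
subprocess.run(["zpool", "status", "-D", POOL], check=True)

# Detailed DDT histogram, including on-disk table size.
subprocess.run(["zdb", "-DD", POOL], check=True)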
 

rmckenzie

Cadet
Joined
Aug 29, 2018
Messages
1
The dedup table is synchronized to disk periodically as the block map is changed. When you add a dedup special vdev to the pool, ZFS changes automatically from storing the dedup table on the data vdevs to storing it on the special vdev. No additional actions are needed.
 

Arwen

MVP
Joined
May 17, 2014
Messages
3,611
No, the dedup table will not automatically move to the SSD. And no, there is no command to move it manually. New entries will use the new special vDev for the dedup table, but existing entries will stay where they are.
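
If you want to watch this happen, here is a minimal sketch (the pool name tank is only an example): the dedup vDev shows up as its own line in the per-vDev listing, and its allocated space grows as new dedup table entries are written, while the part of the table already sitting on the data vDevs stays put.

import subprocess

POOL = "tank"  # example pool name

# Per-vDev size and allocation; watch the ALLOC column of the dedup vDev.
subprocess.run(["zpool", "list", "-v", POOL], check=True)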

Next, the special vDev for the dedup table (your SSD) should have the same level of protection as the main vDevs. If your main vDev is RAID-Z2 (meaning two disks can be lost without data loss), then you would need a 3-way mirrored special vDev for your dedup table (i.e. three SSDs in a mirror). Otherwise you are setting up your pool to fail.
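
As a sketch of what adding it that way looks like (the device names here are placeholders, so check them against your own hardware before running anything):

import subprocess

POOL = "tank"                  # example pool name
SSDS = ["sda", "sdb", "sdc"]   # placeholders for your three SSDs

# Add the dedup allocation class as a 3-way mirror so it matches
# the redundancy of a RAID-Z2 data vDev.
subprocess.run(["zpool", "add", POOL, "dedup", "mirror", *SSDS], check=True)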


De-dup is not a recommended option for the casual TrueNAS user. It tends to perform poorly unless properly designed and managed. The fact that you are asking these questions suggests you have less experience with it and may not have a well-designed server for use with dedup.

However, it's your decision.
 

ubergosu

Cadet
Joined
Feb 26, 2022
Messages
9
Thank you all for your replies.

New entries will use the new special vDev for the dedup table, but existing entries will stay where they are.

That's how I assumed it works. So either the pool has to be destroyed and recreated, or the table ends up scattered across both the data vdevs and the new dedup vdev.

the same level of protection as the main vDev

This I also understand. The dedup vdev is critical, and its failure will take down the whole pool, so its probability of failure should not be greater than that of the data vdevs.

may not have a well designed server for use with dedup

That's the core problem. The server cannot be modified easily (it is limited to the hardware available in the data center), so we're trying to achieve the highest write performance (minimize the random I/O caused by the dedup table) and the most available space by sacrificing redundancy.

The server is used for backups, and we are setting it up to mirror to another server that is designed with higher redundancy. So loss of the pool in question is not that critical.

Our experiments show that only with both compression and deduplication enabled do we achieve the necessary storage capacity.
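
For what it's worth, this is roughly how we measure the combined savings (the pool and dataset names are placeholders for our real ones):

import subprocess

POOL = "tank"              # placeholder pool name
DATASET = "tank/backups"   # placeholder backup dataset

# Pool-wide dedup ratio plus size and allocation.
subprocess.run(["zpool", "list", "-o", "name,size,alloc,free,dedupratio", POOL], check=True)

# Compression ratio achieved on the backup dataset.
subprocess.run(["zfs", "get", "compressratio", DATASET], check=True)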
 