Hi, I've got a use case where I'm seriously thinking about enabling ZFS deduplication.
I've got two separate datasets:
- Medias: for Plex movies, TV shows and other media
- Qbittorrent: where qbt watches, downloads and seeds files
Today, when I download a video file that I want to put in Plex, I have to copy it from the Qbittorrent dataset to the Medias dataset. However, this duplicates the file...
My question is: would it be worth creating a brand new dataset with ZFS deduplication enabled (only on that dataset) and then recreating my two datasets inside it (Qbittorrent and Medias)?
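To be concrete, here's roughly what I'm picturing, just a sketch (I'm assuming the pool is called tank here; mine has a different name):

# Parent dataset with dedup enabled only at this level
zfs create -o dedup=on tank/dedup

# Children inherit dedup=on from the parent
zfs create tank/dedup/Qbittorrent
zfs create tank/dedup/Medias

The idea is that dedup stays scoped to this branch of the pool, and the rest of my datasets are untouched.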
I'm not on dedicated server hardware: I'm running an i3 (8th gen) with 16GB of RAM, and I've got less than 2TB of media files.
Thanks in advance for your replies!