TrueNAS SCALE 23.10 scrub causes boot loop on systems with dedup enabled on a dataset

itet

Dabbler
Joined
Aug 19, 2020
Messages
26
I have 5 TrueNAS SCALE 23.10.1.3 systems, and for the last 4 months, 3 of them have been going into a boot loop whenever a scrub runs.

TrueNAS01: Proxmox virtual machine (1 socket, 8 cores, 36 GiB) with 1x virtual HD for boot, 1x virtual HD for apps, 1x PCIe passthrough with 10 HDs for data
TrueNAS02: Proxmox virtual machine (1 socket, 8 cores, 12 GiB) with 1x virtual HD for boot, 1x virtual HD for apps, 1x virtual HD for data
TrueNAS11: Proxmox virtual machine (1 socket, 8 cores, 36 GiB) with 1x virtual HD for boot, 1x virtual HD for apps, 1x PCIe passthrough with 5 HDs for data
TrueNAS21: Proxmox, Supermicro 10SDV-4C-TLN2F, 16 GiB, boot on USB mSATA 32 GB SSD, 5 HDs for data
TrueNAS22: Proxmox, Supermicro 10SDV-4C-TLN2F, 16 GiB, boot on USB mSATA 32 GB SSD, 5 HDs for data

On TrueNAS01, 11 and 21 I have deduplication enabled on a sub-dataset that I use for UrBackup. I'm not sure, but it feels like the boot loop during the scrub has something to do with deduplication. I can't be certain, because the scrub only runs once a month and some TrueNAS updates land around the same time. At first I paused the scrub and waited for the next TrueNAS update, but the situation is the same with 23.10.1.3. With the scrub paused, all 3 systems run without any problems.

The logs are not much help, because the crash leaves no error in them.
On the three affected machines the boot loop happens at a different point in the scrub: TrueNAS01 at 51%, TrueNAS11 at 54% and TrueNAS21 at 80%.

To confirm that the combination of scrub and dedup is the cause: what is the best way to deactivate dedup on one system and bring its data back to an undeduplicated state, so I can then try the scrub again?
 

artlessknave

Wizard
Joined
Oct 29, 2016
Messages
1,506
You have two lists of systems and they give different specs: your sig says 3 of them have 64GB, but the list in the post says none has more than 36GB. Either way, that is inadequate for dedup.

One does not just enable dedup; you have to plan it out carefully, and I most certainly wouldn't start with less than 128GB of RAM.
64GB is the absolute bare minimum for even considering dedup. Dedup will bring your system to its knees or outright flatten it, which I suspect is what you are experiencing.

Compression will generally accomplish something similar for near-zero resource use.

You cannot truly deactivate dedup. You have to delete any dataset with dedup enabled to get rid of it (or the whole pool, if it was enabled pool-wide). Dedup is a one-way trip.
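
For illustration, a minimal sketch, assuming a hypothetical dataset named tank/urbackup (the pool and dataset names are placeholders): switching the property off only stops new writes from being deduplicated.

# Turning dedup off only affects blocks written from this point on; everything
# already on disk stays referenced through the dedup table (DDT).
zfs set dedup=off tank/urbackup
zfs get dedup tank/urbackup   # confirm the property now reads "off"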

There is a reason dedup is generally discouraged.
 

itet

Dabbler
Joined
Aug 19, 2020
Messages
26
Thanks artlessknave. Sorry, I had not updated my spoiler config; that was the setup from before I moved to Proxmox. It's done now.

I totally understand your concerns about dedup. I have read a lot about this, and in my case I thought it could work for backups, because backing up only a few PCs means storing almost the same files repeatedly.

After 3 months I have these values:
TrueNAS01: dedup: DDT entries 41065678, size 316B on disk, 156B in core = 12.375 GB RAM
TrueNAS11: dedup: DDT entries 1765952, size 406B on disk, 228B in core = 683.76 GB RAM
TrueNAS21: dedup: DDT entries 4069572, size 441B on disk, 264B in core = 1.711 GB RAM
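
(These summary lines come straight from the pool status output; "tank" below is just a placeholder for the real pool name.)

# The "dedup: DDT entries ..., size ... on disk, ... in core" summary is
# printed below the DDT histogram:
zpool status -D tank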

These values seem OK to me, but still: how do I get away from dedup without losing data?
Is there no way to copy the data from the dedup dataset to a non-dedup dataset?
 

artlessknave

Wizard
Joined
Oct 29, 2016
Messages
1,506
Is there no way to copy the data from the dedup dataset to a non-dedup dataset?
There absolutely is: make a new dataset, don't apply dedup, and then replicate into it. As long as you never enabled dedup on the pool itself, you can just move away from the dedup dataset. You could also get more RAM to match the requirements, but I wouldn't bother.
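
Roughly something like this, assuming a hypothetical layout with the deduped dataset at tank/urbackup and a fresh target tank/urbackup-new (all names are placeholders), and an OpenZFS recent enough to support zfs recv -o:

# Snapshot the deduped dataset and replicate it into a new dataset with dedup off.
zfs snapshot tank/urbackup@migrate
# A plain send (no -p/-R) does not carry the dedup property across; -o makes the
# target's setting explicit. Received blocks are written fresh, i.e. undeduped.
zfs send tank/urbackup@migrate | zfs recv -o dedup=off tank/urbackup-new
# Once the copy is verified (and shares/jobs are repointed), the old dataset
# and its DDT entries can be removed:
# zfs destroy -r tank/urbackup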

You could also perhaps dedicate a single system to dedup (plus get more RAM). That could have fewer issues, since it would do nothing but that and wouldn't have to serve file shares and such.

One of the problems with ZFS dedup is that if you don't have enough memory, you can't keep the whole dedup table (DDT) in RAM, so it has to be read from disk, which is terrible for speed and can bring everything to a screeching halt. With 64GB or less, your dedup tables AND the ARC are all competing for a small resource.
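
As a rough, illustrative sizing (the per-entry in-core size varies by pool; ~300 bytes per entry is only a ballpark figure), the whole table has to fit in RAM next to the ARC to stay fast:

# Back-of-the-envelope DDT footprint: entries x in-core bytes per entry,
# e.g. 40 million entries at ~300 B each:
echo $((40000000 * 300 / 1024 / 1024 / 1024))   # ~11 GiB of RAM for the DDT alone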

I have not used dedup myself, so I can't comment on the quality or efficiency of your dedup setup, beyond noting that the specs you gave are woefully inadequate for it.
 