bullerwins (Dabbler, joined Mar 15, 2022, 43 messages)
> That's almost 18GB over 24GB of non-ECC, oof. Is your system using swap space?

According to htop, not really.
I just got the feeling it wasn't that bad, going by Craft Computing's video.
It's on every dataset.

Code:
dedup: DDT entries 86714013, size 930B on disk, 206B in core
> Thanks to @Davvo for doing some math here. The key is that OpenZFS by default won't let metadata be more than 75% of ARC, so it may have been getting pushed down to disk, and you'll only feel the impact if you don't get a hit on the portion of the DDT that's already in RAM.

I guess that the hardest choices require the strongest wills... As I already have everything dedup'd, I might offload it and rebuild everything following the practices I've learnt from you guys in this thread. Thanks!
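Davvo's math can be checked directly against the DDT line above. The entry count and per-entry in-core size come from this thread; the conversion to GB is mine:

```shell
# DDT figures from the `zpool status -D` output earlier in the thread
entries=86714013      # DDT entries
core_bytes=206        # bytes each entry occupies in core (RAM)

total=$((entries * core_bytes))
echo "DDT in-core footprint: $total bytes"       # 17863086678 bytes
echo "Roughly $((total / 1000000000)) GB"        # ~17-18 GB, matching the figure above
```

With a 24GB machine and ARC capped well below that, a table this size simply cannot stay fully resident.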
The Craft video is unfortunately serving up a very big softball that makes dedup look easy/affordable/practical, whereas the "reality is often disappointing."
> If I disable dedup on the pool and every dataset, create a new dataset, and move the data there, am I good? Or do I need to scrap the pool and move the data outside TrueNAS?

That will work to remove deduplication, yes. You'll want to scrap the old datasets to be certain to kill off the DDT, but it should be doable without deleting the pool.
> That will work to remove deduplication, yes. You'll want to scrap the old datasets to be certain to kill off the DDT, but it should be doable without deleting the pool.

Thanks! As I'm under 50% of my total pool, I can create a new dataset and copy everything over.
Deduplication is one of those things that works great right until it doesn't. Thankfully you hadn't hit the metaphorical "point of no return" where the DDT is too big to fit in RAM: every new write has to check the entire table for a match, so lookups are fast while the table is resident, but once it spills to disk the system thrashes your drives trying to fetch all of those little records.
Special vdevs for dedup tables make this viable, but it's still very much a case of needing data that deduplicates heavily (many copies collapsing into one) before it's really worthwhile.
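For reference, adding such a vdev is a one-liner; the pool and device names below are placeholders, not anything from this thread:

```shell
# Hypothetical: mirror two SSDs as a dedicated dedup vdev, so DDT blocks
# are stored on (and read back from) fast flash instead of the pool disks
zpool add tank dedup mirror /dev/sdb /dev/sdc
```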
There is a very well done write-up by user @Stilez here about adventures in deduplication:
My experiments in building a home server capable of handling fast + consistent deduplication
Long and technical but worth the read if you're interested in what makes dedup tick and why it's often not recommended for most users.
> Is there any best practice on how to do this? Rsync?

I don't believe dedup is a property that's transmitted with ZFS replication. If that's the case, that would probably be the simplest (and likely also the fastest) way to do it:

Code:
zfs snapshot pool/old_dataset@migrate
zfs send pool/old_dataset@migrate | zfs recv pool/new_dataset
zfs destroy pool/old_dataset
> I don't believe dedup is a property that's transmitted with ZFS replication. If that's the case, that would probably be the simplest (and likely also the fastest) way to do it:
>
> Code:
> zfs snapshot pool/old_dataset@migrate
> zfs send pool/old_dataset@migrate | zfs recv pool/new_dataset
> zfs destroy pool/old_dataset

This is working great, I'm already at 2TB out of 10TB.
Edit: Ninja'd!
> I run the command from the web GUI shell. In case the server shuts off during the process, is there any way to "continue" it, or do I have to start over?

The GUI shell is unreliable and has all kinds of bugs copying and pasting text. Log in via SSH and run the commands from a better terminal. If you have initiated a tmux session, you can recover it with:

Code:
tmux attach

> The GUI shell is unreliable and has all kinds of bugs copying and pasting text. Log in via SSH and run the commands from a better terminal.

But then I would need the system I'm SSH'd in from not to drop the connection or power off? If the SSH session is stopped, does the zfs send get interrupted?
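Independent of tmux, ZFS itself can resume an interrupted transfer if the receiving side was started with the `-s` flag. A sketch using the dataset names from the earlier example (this is an option, not what the poster actually ran):

```shell
# Start the receive with -s so an interrupted stream leaves resumable state
zfs send pool/old_dataset@migrate | zfs recv -s pool/new_dataset

# If the transfer is cut off, the destination keeps a resume token
token=$(zfs get -H -o value receive_resume_token pool/new_dataset)

# Restart the stream from where it stopped instead of starting over
zfs send -t "$token" | zfs recv -s pool/new_dataset
```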
> But then I would need the system I'm SSH'd in from not to drop the connection or power off?

That's what tmux is for.
> That's what tmux is for.

Got it working, thanks.
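For anyone following along, the tmux workflow suggested above looks roughly like this; the session name `migrate` is my own choice:

```shell
# On the server, start a named session and run the transfer inside it
tmux new -s migrate
zfs send pool/old_dataset@migrate | zfs recv pool/new_dataset

# Detach with Ctrl-b d; the transfer keeps running even if SSH drops.
# Later, from any new SSH session:
tmux attach -t migrate
```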