Redundant SSDs for both SLOG and Metadata VDEVs

bullerwins

Dabbler
Joined
Mar 15, 2022
Messages
43
That's almost 18GB out of 24GB of non-ECC, oof.
Is your system using swap space?
According to htop, not really:

[Screenshots: htop memory and swap usage]
 

Davvo

MVP
Joined
Jul 12, 2022
Messages
3,222
Checked the math again according to @HoneyBadger's method; using Google's conversion it's 17.86 GB.

Either the calculation or the reporting is wrong.
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
I just got the feeling it wasn't that bad, as per Craft Computing's video.
It's on every dataset:
Code:
dedup: DDT entries 86714013, size 930B on disk, 206B in core

Thanks to @Davvo for doing some math here. The key is that OpenZFS by default won't let metadata be more than 75% of ARC, so it may have been getting pushed down to disk, and you'll only feel the impact if you don't get a hit to the portion of DDT that's already in RAM.
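For anyone who wants to reproduce the math, the dedup line above has everything needed. A minimal sketch (shell arithmetic; the GB-vs-GiB distinction accounts for small differences between tools):
Code:
# 86,714,013 DDT entries x 206 bytes per entry in core:
echo $(( 86714013 * 206 ))   # 17863086678 bytes, i.e. ~17.86 GB (decimal) or ~16.6 GiB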

The Craft Computing video is unfortunately serving up a very big softball for dedup, making it look easy/affordable/practical, whereas the "reality is often disappointing."
 

bullerwins

Dabbler
Joined
Mar 15, 2022
Messages
43
Thanks to @Davvo for doing some math here. The key is that OpenZFS by default won't let metadata be more than 75% of ARC, so it may have been getting pushed down to disk, and you'll only feel the impact if you don't get a hit to the portion of DDT that's already in RAM.

The Craft Computing video is unfortunately serving up a very big softball for dedup, making it look easy/affordable/practical, whereas the "reality is often disappointing."
I guess that the hardest choices require the strongest wills... as I already have everything dedup'd, I might offload it and rebuild everything following the practices that I've learnt from you guys on this thread, thanks!
 
Last edited:

bullerwins

Dabbler
Joined
Mar 15, 2022
Messages
43
If I disable dedup on the pool and every dataset, then create a new dataset and move the data there, am I good? Or do I need to scrap the pool and move the data outside TrueNAS?
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
If I disable dedup on the pool and every dataset, then create a new dataset and move the data there, am I good? Or do I need to scrap the pool and move the data outside TrueNAS?
That will work to remove deduplication, yes. You'll want to scrap the old datasets to be certain to kill off the DDT, but it should be doable without a pool delete.

Deduplication is one of those things that works great right until it doesn't - thankfully you hadn't hit the metaphorical "point of no return" where the DDT is too big to fit in RAM. Since every new write has to check the entire table for a match, lookups are fast while the table sits in RAM, but once it spills to disk the system thrashes trying to fetch all of those little records.

Special vdevs for dedup tables make this viable, but it's still very much a case of needing data that will significantly reduce (many copies into one) before it's really worthwhile.
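If you want to look at the table yourself, OpenZFS can print the DDT summary and a full histogram (the pool name below is just a placeholder):
Code:
# Summary, including the same "dedup:" line quoted above:
zpool status -D tank

# Detailed DDT histogram - shows how many blocks are actually referenced more than once:
zdb -DD tank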

There is a very well-done write-up by user @Stilez here about adventures in deduplication:


Long and technical but worth the read if you're interested in what makes dedup tick and why it's often not recommended for most users.
 

bullerwins

Dabbler
Joined
Mar 15, 2022
Messages
43
That will work to remove deduplication, yes. You'll want to scrap the old datasets to be certain to kill off the DDT, but it should be doable without a pool delete.

Deduplication is one of those things that works great right until it doesn't - thankfully you hadn't hit the metaphorical "point of no return" where the DDT is too big to fit in RAM. Since every new write has to check the entire table for a match, lookups are fast while the table sits in RAM, but once it spills to disk the system thrashes trying to fetch all of those little records.

Special vdevs for dedup tables make this viable, but it's still very much a case of needing data that will significantly reduce (many copies into one) before it's really worthwhile.

There is a very well-done write-up by user @Stilez here about adventures in deduplication:


Long and technical but worth the read if you're interested in what makes dedup tick and why it's often not recommended for most users.
Thanks! As I'm under 50% of my total pool capacity, I can create a new dataset and copy everything over.
Is there any best practice on how to do this? Rsync?

Thanks for the link, really interesting stuff I'll dive more into it.
 

awasb

Patron
Joined
Jan 11, 2021
Messages
415

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
Is there any best practice on how to do this? Rsync?
I don't believe dedup is a property that's transmitted with ZFS replication. If that's the case, that would probably be the simplest (and likely also the fastest) way to do it:
Code:
zfs snapshot pool/old_dataset@migrate
zfs send pool/old_dataset@migrate | zfs recv pool/new_dataset
zfs destroy pool/old_dataset
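One hedged addition: since the property can be inherited from the pool root, it's worth checking the received dataset before the destroy (dataset names as in the example above):
Code:
zfs get dedup pool/new_dataset
# if it shows "on" (e.g. inherited from the pool root), turn it off:
zfs set dedup=off pool/new_dataset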


Edit: Ninja'd!
 

awasb

Patron
Joined
Jan 11, 2021
Messages
415
I don't believe dedup is a property that's transmitted with ZFS replication.

It's definitely not (since the receiving filesystem may not support it). You could deduplicate the stream, but that's deprecated now AFAIK (and was not enabled by default).
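For reference, the flag in question is -D; a sketch of the old form (my understanding is that recent OpenZFS accepts the flag only for backwards compatibility and just emits a regular stream):
Code:
# Deprecated deduplicated send - assumed to be a no-op on OpenZFS 2.x:
zfs send -D pool/old_dataset@migrate | zfs recv pool/new_dataset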
 

bullerwins

Dabbler
Joined
Mar 15, 2022
Messages
43
I don't believe dedup is a property that's transmitted with ZFS replication. If that's the case, that would probably be the simplest (and likely also the fastest) way to do it:
Code:
zfs snapshot pool/old_dataset@migrate
zfs send pool/old_dataset@migrate | zfs recv pool/new_dataset
zfs destroy pool/old_dataset


Edit: Ninja'd!
This is working great, I'm already 2TB into the 10TB.

One follow-up question: I manually created "new_dataset" and the second command failed because it already existed. I didn't know the command would create the new dataset itself; I guess it's created with default options?

I ran the command from the web GUI shell. In case the server shuts off during the process... is there any way to "continue" it, or do I have to start over?

If I close the Firefox tab where the command is running, does it stop the zfs send process? The session timeout of TrueNAS SCALE is really short. I just learnt about tmux while researching this, but the command was already started in the normal GUI shell. I just left the shell tab open, even though it will log me out if I refresh.

I tried to use this to increase the session timeout, but it's not working:
Code:
sed -i 's/auth.generate_token",\[300/auth.generate_token",\[129600/g'  /usr/share/truenas/webui/*.js


From https://tomschlick.com/extend-truenas-web-ui-session-timeout/ but for some reason it's not working for me; it still logs me out after 5 minutes. I have SCALE and chose the first command.

EDIT: Also, the new dataset isn't showing up in the filesystem; if I do "cd /mnt/pool/", only the old datasets are there. Is this normal while copying? It shows up in the GUI just fine though, which is how I'm monitoring how many TB are left.
 

Etorix

Wizard
Joined
Dec 30, 2020
Messages
2,134
I run the command from the web gui shell, in case the server shuts off during the process... is there any way to "continue" it? or do I have to start over.
The GUI shell is unreliable and has all kinds of bugs when copying/pasting text.
Log in via SSH and run commands from a better terminal.

If you have initiated a tmux session, you can recover it with tmux attach.
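Something like this, for example (the session name is arbitrary):
Code:
tmux new -s migrate   # start a named session on the server
zfs send pool/old_dataset@migrate | zfs recv pool/new_dataset
# detach with Ctrl-b d; even if the SSH connection drops, the send keeps running
tmux attach -t migrate   # reattach later and pick up where you left off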
 

bullerwins

Dabbler
Joined
Mar 15, 2022
Messages
43
The GUI shell is unreliable and has all kinds of bugs when copying/pasting text.
Log in via SSH and run commands from a better terminal.

If you have initiated a tmux session, you can recover it with tmux attach.
But then I would need the machine I'm SSH'ing from not to drop the connection or power off? If the SSH session is stopped, does the zfs send get interrupted?
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
That's what tmux is for.

bullerwins

Dabbler
Joined
Mar 15, 2022
Messages
43
That's what tmux is for.
Got it working, thanks.

Code:
# SSH into TrueNAS
tmux      # open a session
# ...type stuff...
# if I close the terminal window I'm SSH'd from, connect again via SSH
tmux a    # attach - I'm right where I left off
 

Davvo

MVP
Joined
Jul 12, 2022
Messages
3,222

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
It's usable for very basic stuff, and it does seem to have improved over the years--but I wouldn't shed a tear if it were to disappear tomorrow. Particularly when any reasonably-modern OS (yes, even Windows) ships with ssh.
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
I find it handy for checking very basic stuff from a phone ;)

Yeah. I know... I could use an ssh terminal on the phone.

But basically, it's not for anything more complicated than running a single command and seeing its output.
 