I lost my Dedup VDev Disk and Pool UNAVAIL

LengQing

Cadet
Joined
Apr 9, 2021
Messages
7
I added an Optane SSD as a dedup vdev a few days ago and tested dedup performance, but it wasn't very good.
I then tried to remove the SSD from the zpool so I could use it for something else, but that failed.
So I shut down the TrueNAS host, unplugged the SSD, and erased it for another system.
When I turned the TrueNAS host back on, I found my pool was offline and could not be imported from the WebUI. I tried importing it from the CLI, but that failed too.
Here is the output:
root@x99[~]# zpool import
pool: X99
id: 8527058640105614735
state: UNAVAIL
status: One or more devices are missing from the system.
action: The pool cannot be imported. Attach the missing
devices and try again.
see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-3C
config:

X99                                             UNAVAIL  insufficient replicas
  raidz1-0                                      ONLINE
    gptid/663f9311-5cec-11eb-8e1b-000c29ea1b3f  ONLINE
    gptid/66810ec2-5cec-11eb-8e1b-000c29ea1b3f  ONLINE
    gptid/66f14c20-5cec-11eb-8e1b-000c29ea1b3f  ONLINE
    gptid/66fcd7dc-5cec-11eb-8e1b-000c29ea1b3f  ONLINE
  gptid/ce20c3aa-9862-11eb-a85c-000c29ea1b3f    UNAVAIL  cannot open

root@x99[~]#


I think it has only lost the dedup disk, which has nothing to do with the data itself, so can I get my data back with the 4 data vdev disks?
It was my fault for doing something so hasty. There is some important data on it; I hope someone can help me. THANK YOU!
 
Joined
Oct 22, 2019
Messages
3,641
Since I created my pool prior to TrueNAS 12, I hadn't even noticed the option to create a special Dedup vdev. I don't even know if a Dedup vdev (of only one drive) can be "rebuilt" as one would rebuild a missing drive with a mirror or RAIDZ vdev. You already erased the SSD that was serving as a special vdev for your pool?

Quoted from @HoneyBadger earlier last year:
Regarding redundancy, "special" vdevs of any type in OpenZFS are considered root vdevs of the pool - losing them results in either inaccessible data (for things like a small_blocks vdev) or a total pool failure (metadata/ddt) so you should be looking at two-way mirrors at a minimum; three-way mirror wouldn't be unreasonable.


My question is, can someone even force the import of a pool that is missing its special Dedup vdev as "read-only" and perhaps recover the data?

I never knew such a vdev existed, and I'm wondering what purpose it serves above and beyond using ZFS compression, such as ZSTD or LZ4?
 
Last edited:

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
Sorry, but based on the actions you described, you didn't just kill your pool; you also burned the body and scattered the ashes. The reason you couldn't detach the special vdev is that vdev removal requires all top-level vdevs to be mirrors or stripes, and you have a RAIDZ data vdev.
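To illustrate (a hedged sketch: the pool and device names are taken from the output earlier in the thread, and the exact error wording varies by OpenZFS version):

```shell
# Try to remove the dedup vdev from a pool whose data vdev is RAIDZ.
# OpenZFS refuses, because top-level device removal only supports pools
# built from mirrors and single-disk (stripe) vdevs.
zpool remove X99 gptid/ce20c3aa-9862-11eb-a85c-000c29ea1b3f
# Typically fails with a "cannot remove ... invalid config" style error
# pointing at the RAIDZ top-level vdev.
```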

@winnielinnie it's a means to accelerate metadata operations on HDD based pools by shifting those small read/write operations to SSD, which handles them better.
 

LengQing

Cadet
Joined
Apr 9, 2021
Messages
7
Since I created my pool prior to TrueNAS 12, I hadn't even noticed the option to create a special Dedup vdev. I don't even know if a Dedup vdev (of only one drive) can be "rebuilt" as one would rebuild a missing drive with a mirror or RAIDZ vdev. You already erased the SSD that was serving as a special vdev for your pool?

Quoted from @HoneyBadger earlier last year:



My question is, can someone even force the import of a pool that is missing its special Dedup vdev as "read-only" and perhaps recover the data?

I never knew such a vdev existed, and I'm wondering what purpose it serves above and beyond using ZFS compression, such as ZSTD or LZ4?
Yes, I had already erased the SSD before turning TrueNAS back on, because I thought the dedup vdev was only related to dedup, and I had disabled dedup on all my datasets.
I should have turned the system on and checked first. This is my fault.
 

LengQing

Cadet
Joined
Apr 9, 2021
Messages
7
Sorry, but based on the actions you described, you didn't just kill your pool; you also burned the body and scattered the ashes. The reason you couldn't detach the special vdev is that vdev removal requires all top-level vdevs to be mirrors or stripes, and you have a RAIDZ data vdev.

@winnielinnie it's a means to accelerate metadata operations on HDD based pools by shifting those small read/write operations to SSD, which handles them better.
I'm sorry and sad to hear you say that. I lost a lot of memories and collections, even many things tied to my life, studies, and work. All the backups I made before every OS reinstall of my PC and laptop were on there.
 
Joined
Oct 22, 2019
Messages
3,641
After researching more about these "dedup vdevs" I will never touch them. They don't seem worth the risk nor extra cost. The best bet for saving space and not losing performance is using built-in ZFS compression (LZ4 or one of the ZSTD levels).

The best I could find is that, through some advanced voodoo magic, you might be able to force the import of the zpool on another system, maybe an updated Linux distro with OpenZFS 2.0+. But even then, you cannot bypass the fact that ZFS is innately different from other file systems.
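For anyone searching later, the usual (hedged, no-guarantee) attempts look something like this; note that there is no flag that simply skips a missing special/dedup vdev:

```shell
# Read-only import attempt; pool name from the output earlier in the
# thread. readonly=on avoids writing anything while experimenting.
zpool import -o readonly=on -f X99

# Rewind-style recovery import: -F discards the last few transactions,
# -n makes it a dry run that only reports whether the rewind could work.
zpool import -o readonly=on -Fn X99

# -m allows import with a missing LOG device only; it does not help
# with a missing special or dedup vdev.
zpool import -m X99
```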

Since I don't have any experience playing with dedup tables (let alone a "dedup vdev"), there might be someone here or on the ZFS reddit who knows of a way, given that you disabled dedup on your datasets before killing the dedup vdev.

The GUI prevents you from doing many things, and when it does, it's best practice to stop what you're doing and research it. It doesn't stop you for no reason. :frown:
 

ornias

Wizard
Joined
Mar 6, 2020
Messages
1,458
After researching more about these "dedup vdevs" I will never touch them. They don't seem worth the risk nor extra cost. The best bet for saving space and not losing performance is using built-in ZFS compression (LZ4 or one of the ZSTD levels).
Maybe do some more research...

Special allocation classes do NOT create extra risk.
Let me explain a few use cases/kinds of special allocation classes:
- Dedup: you can create a special allocation class that contains just the dedup tables. These tables are what normally makes dedup super-uber-slow, and putting them on fast storage is basically a requirement for even using deduplication.
- Metadata: these special allocation classes offload metadata that is normally stored on the data disks, resulting in better responsiveness. They can also improve rebuild speeds, because the rebuild I/O doesn't have to be shared with the metadata lookups.
- Small blocks: these special allocation classes carry blocks up to a user-defined block size. They are very handy for offloading blocks that would normally hit the limited I/O of HDDs. This mechanism can even be leveraged to force specific datasets onto an SSD tier without having to create separate pools.

The advice is to ALWAYS have redundant vdevs for special allocation classes, preferably a mirror or triple mirror. They are exactly as secure as the redundancy you give them.
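As a hedged sketch of what that redundant setup looks like on the command line (pool and device names are placeholders):

```shell
# Add a mirrored special (metadata) vdev and a mirrored dedup vdev.
# Note: once added to a pool with RAIDZ data vdevs, these CANNOT be
# removed again.
zpool add tank special mirror nvme0n1 nvme1n1
zpool add tank dedup mirror nvme2n1 nvme3n1

# Optionally route small file blocks to the special vdev as well:
zfs set special_small_blocks=32K tank/mydata
```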

Since I don't have any experience playing with dedup tables (let alone a "dedup vdev"), there might be someone here or on the ZFS reddit who knows of a way, given that you disabled dedup on your datasets before killing the dedup vdev.
Disabling dedup before ripping out the dedup tables is irrelevant. As soon as you enable dedup, it starts deduplicating new writes (obviously). When you disable it, only NEW writes are stored un-deduped. So disabling it and ripping out the table straight away destroyed, at the very least, all data that had previously been deduped.
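In other words (a hedged illustration; the dataset name is a placeholder):

```shell
zfs set dedup=on tank/data    # from here on, new writes are deduplicated
                              # and the dedup table (DDT) grows
zfs set dedup=off tank/data   # new writes are stored normally, but blocks
                              # written while dedup was on STILL reference
                              # the DDT until they are rewritten or freed
zpool status -D tank          # -D prints DDT statistics, showing the
                              # table is still present after disabling
```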

However, there might be some voodoo to get some of the old data back (from before dedup was enabled), but I HIGHLY suggest hiring a professional data recovery specialist for this. You already made some very obvious mistakes that destroyed your data. Are you really willing to try all kinds of dangerous hacks from forums and Reddit to get it back?


Regardless of "dedup disks", every manual (even the iX manual) is very clear: dedup can NOT be easily reverted.
 

LengQing

Cadet
Joined
Apr 9, 2021
Messages
7
After researching more about these "dedup vdevs" I will never touch them. They don't seem worth the risk nor extra cost. The best bet for saving space and not losing performance is using built-in ZFS compression (LZ4 or one of the ZSTD levels).

The best I could find is that, through some advanced voodoo magic, you might be able to force the import of the zpool on another system, maybe an updated Linux distro with OpenZFS 2.0+. But even then, you cannot bypass the fact that ZFS is innately different from other file systems.

Since I don't have any experience playing with dedup tables (let alone a "dedup vdev"), there might be someone here or on the ZFS reddit who knows of a way, given that you disabled dedup on your datasets before killing the dedup vdev.

The GUI prevents you from doing many things, and when it does, it's best practice to stop what you're doing and research it. It doesn't stop you for no reason. :frown:
Yes, I learned from it and I will be careful in the future. I'm going to mirror the disks and save them for later.
 

LengQing

Cadet
Joined
Apr 9, 2021
Messages
7
Maybe do some more research...

Special allocation classes do NOT create extra risk.
Let me explain a few use cases/kinds of special allocation classes:
- Dedup: you can create a special allocation class that contains just the dedup tables. These tables are what normally makes dedup super-uber-slow, and putting them on fast storage is basically a requirement for even using deduplication.
- Metadata: these special allocation classes offload metadata that is normally stored on the data disks, resulting in better responsiveness. They can also improve rebuild speeds, because the rebuild I/O doesn't have to be shared with the metadata lookups.
- Small blocks: these special allocation classes carry blocks up to a user-defined block size. They are very handy for offloading blocks that would normally hit the limited I/O of HDDs. This mechanism can even be leveraged to force specific datasets onto an SSD tier without having to create separate pools.

The advice is to ALWAYS have redundant vdevs for special allocation classes, preferably a mirror or triple mirror. They are exactly as secure as the redundancy you give them.


Disabling dedup before ripping out the dedup tables is irrelevant. As soon as you enable dedup, it starts deduplicating new writes (obviously). When you disable it, only NEW writes are stored un-deduped. So disabling it and ripping out the table straight away destroyed, at the very least, all data that had previously been deduped.

However, there might be some voodoo to get some of the old data back (from before dedup was enabled), but I HIGHLY suggest hiring a professional data recovery specialist for this. You already made some very obvious mistakes that destroyed your data. Are you really willing to try all kinds of dangerous hacks from forums and Reddit to get it back?


Regardless of "dedup disks", every manual (even the iX manual) is very clear: dedup can NOT be easily reverted.
I will consider hiring a professional data recovery specialist for this.
For now, I'm going to make mirror copies of the disks to guard against further mistakes.
 

LengQing

Cadet
Joined
Apr 9, 2021
Messages
7
After researching more about these "dedup vdevs" I will never touch them. They don't seem worth the risk nor extra cost. The best bet for saving space and not losing performance is using built-in ZFS compression (LZ4 or one of the ZSTD levels).

The best I could find is that, through some advanced voodoo magic, you might be able to force the import of the zpool on another system, maybe an updated Linux distro with OpenZFS 2.0+. But even then, you cannot bypass the fact that ZFS is innately different from other file systems.

Since I don't have any experience playing with dedup tables (let alone a "dedup vdev"), there might be someone here or on the ZFS reddit who knows of a way, given that you disabled dedup on your datasets before killing the dedup vdev.

The GUI prevents you from doing many things, and when it does, it's best practice to stop what you're doing and research it. It doesn't stop you for no reason. :frown:
I have tried to import it on Ubuntu, but the situation seems to be no different from TrueNAS.
 
Joined
Oct 22, 2019
Messages
3,641
Maybe do some more research...

After researching more about these "dedup vdevs" I will never touch them. They don't seem worth the risk nor extra cost. The best bet for saving space and not losing performance is using built-in ZFS compression (LZ4 or one of the ZSTD levels).

The advice is to ALWAYS have redundant vdevs for special allocation classes, preferably mirror or tripple mirror. They are 100% as secure as the vdev you put them on.


I'm not going to use fine-print asterisks every time casual-speech implies something, or else every sentence on every post will look extremely redundant, like this:

After researching more about these "dedup vdevs" I will never touch them. They don't seem worth the risk (to me, because I'm writing this post) nor extra cost (because it requires two or three extra drives to provide redundant protection for your dedup vdev, the same as a mirror or RAIDZ provides for your data vdevs.) The best bet for saving space and not losing performance is using built-in ZFS compression (LZ4 or one of the ZSTD levels). (In my opinion, because I'm writing this post.)

EDIT: In your honor I added a signature to my forum posts. The world is a slightly better place to live now.
 
Last edited:

LengQing

Cadet
Joined
Apr 9, 2021
Messages
7
I'm not going to use fine-print asterisks every time casual-speech implies something, or else every sentence on every post will look extremely redundant, like this:

After researching more about these "dedup vdevs" I will never touch them. They don't seem worth the risk (to me, because I'm writing this post) nor extra cost (because it requires two or three extra drives to provide redundant protection for your dedup vdev, the same as a mirror or RAIDZ provides for your data vdevs.) The best bet for saving space and not losing performance is using built-in ZFS compression (LZ4 or one of the ZSTD levels). (In my opinion, because I'm writing this post.)

EDIT: In your honor I added a signature to my forum posts. The world is a slightly better place to live now.
I quite agree with you now. I won't touch it again.
Before this problem happened, I thought it could be removed at will, just like a cache vdev.
 

ornias

Wizard
Joined
Mar 6, 2020
Messages
1,458
EDIT: In your honor I added a signature to my forum posts. The world is a slightly better place to live now.

There is no extra "risk" involved, which I carefully took the time to explain.

Instead of reacting to my thorough explanation of special allocation classes, you highlight the things I didn't mention...
Your grave:
No, in the case of deduplication, they are not "extra costs"; they are actually a cost-saving measure, because they allow decent dedup performance with far less extreme amounts of RAM.

You can call inherent bullshit an "opinion" all you want; it's a free world.
Adding onto that bullshit by completely ignoring the time people take to explain why you're wrong is also not okay.

The only thing that's true in your whole post is that you personally wouldn't use them, which is the only part that's actually an opinion.

Oh, and by the way:
Special vdevs are not a "space-saving measure" at all to begin with, nor are special allocation classes for dedup tables.
They are an add-on for improved performance; in this case, improved performance for dedup.

You are mixing up "deduplication" and "special allocation classes" and reaching simply wrong conclusions because of it.
 

ornias

Wizard
Joined
Mar 6, 2020
Messages
1,458
Before this problem happened, I thought it could be removed at will, just like a cache vdev.
They are, in essence, comparable to the small-block and metadata vdevs, which also (as explained in the docs) cannot be removed.
(Nor can deduplication actually be fully disabled easily to begin with.)
 
Joined
Oct 22, 2019
Messages
3,641
There is no extra "risk" involved, which I carefully took the time to explain.
No, in the case of deduplication, they are not "extra costs" because they are actually a costs-saving measure. Because they allow decent dedupe performance with less extreme amounts of RAM.
There are additional risks, as this very thread demonstrates. Those risks can be mitigated by spending extra money on extra drives; in the real world, the two considerations are intertwined. This pool became unavailable because it went beyond consisting only of data vdevs and introduced special vdevs (even "after the fact" of originally creating the pool).


You can call inherent bull**** an "opinion" all you want; it's a free world.
Adding onto that bull**** by completely ignoring the time people take to explain why you're wrong is also not okay.
Which is a moot point, since you misinterpreted my post and perhaps missed the nuance of casual speech (which I admit is difficult to carry over from oral to written form). Do you already have rebuttals prepared before you even understand what the other person is trying to communicate?


Special vdevs are not a "space-saving measure" at all to begin with, nor are special allocation classes for dedup tables.
They are an add-on for improved performance; in this case, improved performance for dedup.

You are mixing up "deduplication" and "special allocation classes" and reaching simply wrong conclusions because of it.
Misinterpreted again. I was referring to the space savings of using compression (ZSTD or LZ4, i.e., little to no performance impact) versus dedup. (Dedup requires far more RAM than simply using compression like LZ4 or ZSTD, if saving space on your pool is your goal.)
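For completeness, a hedged sketch of the compression route (dataset names are placeholders; the zstd levels require OpenZFS 2.0+):

```shell
zfs set compression=lz4 tank/data        # near-zero CPU cost, good default
zfs set compression=zstd-3 tank/archive  # better ratio, more CPU
zfs get compressratio tank/data          # check the achieved savings
```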


EDIT: Regardless, I'd prefer this thread not derail.
 

Constantin

Vampire Pig
Joined
May 19, 2017
Messages
1,829
After researching more about these "dedup vdevs" I will never touch them. They don't seem worth the risk nor extra cost. The best bet for saving space and not losing performance is using built-in ZFS compression (LZ4 or one of the ZSTD levels). ...
The GUI prevents you from doing many things, and when it does, it's best practice to stop what you're doing and research it. It doesn't stop you for no reason. :frown:
For the right use cases, dedup is a huge benefit, such as for the folks who have to run a bunch of similar VMs on one machine. For the rest of us, dedup likely has little to no benefit unless you're too lazy to hunt down duplicate pictures, video files, etc. en masse. I'd rather eliminate the duplicates.

As for the GUI, in general I agree with you that it can stop some issues but it certainly cannot stop all. A fairly thorough understanding is still needed to understand your use case and how to maximize the performance of the TrueNAS accordingly. Simplifying options the way that QNAP/ReadyNAS/Synology/etc. practice is certainly possible but then the flexibility of TrueNAS is also lost for the power users who know what they are doing.
 

ornias

Wizard
Joined
Mar 6, 2020
Messages
1,458
There are additional risks, as this very thread demonstrates. Those risks can be mitigated by spending extra money on extra drives; in the real world, the two considerations are intertwined. This pool became unavailable because it went beyond consisting only of data vdevs and introduced special vdevs (even "after the fact" of originally creating the pool).
The installation itself of special vdevs is basically risk-free.

The risk encountered here is a layer-8 one: a user making a wrong decision. The mistake happened before the vdev was even added: the decision to add it was the mistake, not the allocation classes themselves.

Which is a moot point since you misinterpretted my post and perhaps missed the nuance of causal speech (that I admit is difficult to translate from oral to written text.) Do you already have rebuttals prepared even before understanding what the other person is trying to communicate?

Calling others out for "misinterpreting" your statements is a classic way to whitewash being wrong.

"Casual speech" works as a matter of tone and figures of speech; it doesn't work as an excuse when you are just completely wrong.
If you said this to me orally, I would also confront you with how stupid it sounds, regardless of tone.

I'll leave it at that.


On topic:

For the right use cases, dedup is a huge benefit, such as for the folks who have to run a bunch of similar VMs on one machine. For the rest of us, dedup likely has little to no benefit unless you're too lazy to hunt down duplicate pictures, video files, etc. en masse. I'd rather eliminate the duplicates.

It's lovely tech when you are required to have multiple copies of the same thing lying around. Really powerful! :)

As for the GUI, in general I agree with you that it can stop some issues but it certainly cannot stop all. A fairly thorough understanding is still needed to understand your use case and how to maximize the performance of the TrueNAS accordingly. Simplifying options the way that QNAP/ReadyNAS/Synology/etc. practice is certainly possible but then the flexibility of TrueNAS is also lost for the power users who know what they are doing.
I agree that adding vdevs is risky, and the easy GUI significantly increases the risk of beginners making mistakes doing so :(

The thing is: adding any vdev is a risk, because vdevs inherently cannot be removed. So the question is: do you want to put a big warning on every vdev addition, or just on the special vdevs?


----

*PARTY MODE*
1000'th post :D
 
Last edited:

Constantin

Vampire Pig
Joined
May 19, 2017
Messages
1,829
To me, this is the biggest challenge that the UI team at iXsystems faces. They started to go down this path with "advanced" vs. "Basic" views but IMO, this likely should be a more global switch - not just for share creation, permissions, etc. but rather a simplified UI for everything where the designers make some assumptions about use cases and hence can also offer a simplified UI for beginners. Perhaps only offer it for "certified iXsystems" stuff like the MiniXL I have here as an enticement to buy iXsystems hardware.
 

ornias

Wizard
Joined
Mar 6, 2020
Messages
1,458
To me, this is the biggest challenge that the UI team at iXsystems faces. They started to go down this path with "advanced" vs. "Basic" views but IMO, this likely should be a more global switch - not just for share creation, permissions, etc. but rather a simplified UI for everything where the designers make some assumptions about use cases and hence can also offer a simplified UI for beginners. Perhaps only offer it for "certified iXsystems" stuff like the MiniXL I have here as an enticement to buy iXsystems hardware.
It's a bit the same with certificates on SCALE.

The need to add a CSR and turn it into a Let's Encrypt certificate is very vague for the average Joe, but totally logical if you know how certificates and certbot actually work...

Yes, I expect the future will bring more of these basic vs advanced views.
I've had some chats about it, and iX also seems to have noticed that the UI needed more thorough review and management; they seem to have been investing heavily in it over the last few months :)

In general, QA, testing, and the UI seem to be getting more attention :)
 

Constantin

Vampire Pig
Joined
May 19, 2017
Messages
1,829
Hah, you read my mind.

So I got a step-ca certificate authority installed and running on a Pi, but how to apply it to my Pi-holes, the NAS, and other junk around the home like the switch still escapes me at the moment.

This dotard will hopefully figure it out soon enough.
 