Another pool degraded question.

bar1

Contributor
Joined
Dec 18, 2018
Messages
115
I actually had a nice HP MicroServer Gen8 with a RAID card, a Xeon CPU, and ECC RAM, but felt it was a little overkill and swapped it for this little NAS called the Topon N1.
Thought the extra RAM and LAN ports would be better.
Starting to change my mind... :smile:

Maybe I should just set it up as a full-on TrueNAS SCALE box.
 

Whattteva

Wizard
Joined
Mar 5, 2013
Messages
1,824
I am also not too sure what to do after....do I just wipe the drive and start over?
Not sure what your backup strategy is, but if you've been doing replication tasks regularly, usually it's as simple as a ZFS send/recv and you may not even lose that much data (basically revert to the last snapshot).
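Roughly, that looks something like this (the dataset and snapshot names below are placeholders; substitute your own, and the target pool has to exist already):

Code:
# List the snapshots you still have on the damaged pool.
zfs list -t snapshot -r Bar1-18TB

# Send the newest good snapshot to another pool and receive it there.
# "backup/mydata" and "@auto-2023-01-01" are made-up names.
zfs send Bar1-18TB/mydata@auto-2023-01-01 | zfs recv backup/mydata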
 

Davvo

MVP
Joined
Jul 12, 2022
Messages
3,222
Please read the following resources:
 

Whattteva

Wizard
Joined
Mar 5, 2013
Messages
1,824
I actually had a nice HP MicroServer Gen8 with a RAID card, a Xeon CPU, and ECC RAM, but felt it was a little overkill and swapped it for this little NAS called the Topon N1.
Thought the extra RAM and LAN ports would be better.
Generally, using a high-end RAID card that has its own cache & battery is not recommended. You should probably read this.

EDIT: @Davvo beat me to it and even gave you another bonus link :grin:
 

bar1

Contributor
Joined
Dec 18, 2018
Messages
115
Code:
root@truenas[~]# zpool import -F Bar1-18TB
internal error: cannot import 'Bar1-18TB': Invalid exchange
zsh: abort (core dumped)  zpool import -F Bar1-18TB
root@truenas[~]# 2023 Jan  3 13:15:19 truenas Process 10169 (zpool) of user 0 dumped core.

Stack trace of thread 10169:
#0  0x00007fedfff5dce1 __GI_raise (libc.so.6 + 0x38ce1)
#1  0x00007fedfff47537 __GI_abort (libc.so.6 + 0x22537)
#2  0x00007fee003592b9 zfs_verror (libzfs.so.4 + 0x3c2b9)
#3  0x00007fee00359e1f zpool_standard_error_fmt (libzfs.so.4 + 0x3ce1f)
#4  0x00007fee0034b5a5 zpool_import_props (libzfs.so.4 + 0x2e5a5)
#5  0x000055c26fde26d9 do_import (zpool + 0xb6d9)
#6  0x000055c26fdf3001 import_pools (zpool + 0x1c001)
#7  0x000055c26fdf3f1e zpool_do_import (zpool + 0x1cf1e)
#8  0x000055c26fddf543 main (zpool + 0x8543)
#9  0x00007fedfff48d0a __libc_start_main (libc.so.6 + 0x23d0a)
#10 0x000055c26fddf6fa _start (zpool + 0x86fa)
 

bar1

Contributor
Joined
Dec 18, 2018
Messages
115
Generally, using a high-end RAID card that has its own cache & battery is not recommended. You should probably read this.

EDIT: @Davvo beat me to it and even gave you another bonus link :grin:
I was just using it to be able to use some free SAS drives I managed to get my hands on, and I set up the RAID within TrueNAS, but anyway... that's history :smile:
 

bar1

Contributor
Joined
Dec 18, 2018
Messages
115
Code:
root@truenas[~]# zpool import -F Bar1-18TB
internal error: cannot import 'Bar1-18TB': Invalid exchange
zsh: abort (core dumped)  zpool import -F Bar1-18TB
root@truenas[~]# 2023 Jan  3 13:15:19 truenas Process 10169 (zpool) of user 0 dumped core.

Stack trace of thread 10169:
#0  0x00007fedfff5dce1 __GI_raise (libc.so.6 + 0x38ce1)
#1  0x00007fedfff47537 __GI_abort (libc.so.6 + 0x22537)
#2  0x00007fee003592b9 zfs_verror (libzfs.so.4 + 0x3c2b9)
#3  0x00007fee00359e1f zpool_standard_error_fmt (libzfs.so.4 + 0x3ce1f)
#4  0x00007fee0034b5a5 zpool_import_props (libzfs.so.4 + 0x2e5a5)
#5  0x000055c26fde26d9 do_import (zpool + 0xb6d9)
#6  0x000055c26fdf3001 import_pools (zpool + 0x1c001)
#7  0x000055c26fdf3f1e zpool_do_import (zpool + 0x1cf1e)
#8  0x000055c26fddf543 main (zpool + 0x8543)
#9  0x00007fedfff48d0a __libc_start_main (libc.so.6 + 0x23d0a)
#10 0x000055c26fddf6fa _start (zpool + 0x86fa)
[Screenshot attached: 1672780918131.png]

By the way should I run
zpool upgrade?
 

Whattteva

Wizard
Joined
Mar 5, 2013
Messages
1,824
By the way should I run
zpool upgrade?
Unless you actually need a new feature that it has, you shouldn't. The fact that you're asking the question tells me that you shouldn't be upgrading the pool. I'd focus on getting the data out of there before fooling around with the pool any further.
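If you only want to see what an upgrade would change, these queries are read-only and safe to run (pool name as in your output):

Code:
# Show which feature flags are enabled/disabled on the pool (read-only).
zpool get all Bar1-18TB | grep feature@

# List the pool features this version of ZFS supports (read-only).
zpool upgrade -v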
 

bar1

Contributor
Joined
Dec 18, 2018
Messages
115
Unless you actually need a new feature that it has, you shouldn't. The fact that you're asking the question tells me that you shouldn't be upgrading the pool. I'd focus on getting the data out of there before fooling around with the pool any further.
Just thought maybe the new version could handle my errors better; I mean, I have nothing to lose.
But yes, let me copy the data.
 

bar1

Contributor
Joined
Dec 18, 2018
Messages
115
OK, that might take me a few days... just to make sure everything is copied, so I will provide feedback close to the weekend.
 

Etorix

Wizard
Joined
Dec 30, 2020
Messages
2,134
Just thought maybe the new version could handle my errors better; I mean, I have nothing to lose.
You have already lost the pool to a combination of virtualised TrueNAS with insufficient RAM, passing individual drives instead of the whole controller, and a single-drive vdev. ZFS has no "zfsck" tool: the model is that errors are corrected on the spot using checksumming and redundancy.
A single-drive vdev has no redundancy. If an error creeps in, the game is over.
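The closest thing to a repair pass is a scrub, and it can only fix blocks for which a good redundant copy exists somewhere (mirror, RAIDZ, or copies>1):

Code:
# Walk every block and verify checksums; repairs happen only where redundancy
# (or copies>1) provides a good copy to rewrite from.
zpool scrub Bar1-18TB

# Afterwards, list any files with permanent (unrepairable) errors.
zpool status -v Bar1-18TB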
 

Arwen

MVP
Joined
May 17, 2014
Messages
3,611
@bar1 - Sorry, your pool Bar1-18TB is mostly not recoverable. Maybe you can copy other files out, but without any redundancy at the ZFS level, the impacted file is gone from pool Bar1-18TB.

Using external redundancy with ZFS is not a good idea at the home or non-Enterprise level. Nor is using a pool without any redundancy.


That said, I DO use a pool without any redundancy on my miniature media server for the media pool. (But the OS pool IS using a ZFS mirror.) However, my media pool is backed up to multiple targets. It also does not change much. And it is not critical to me. For the dozen or so times I lost a file, it was trivial to restore it (though it took time).
 

bar1

Contributor
Joined
Dec 18, 2018
Messages
115
This might be a very stupid question... I still have lots of free space on my 18TB. If I understand correctly, the drive is fine but the pool is corrupt.

Can I create a new pool on the 18TB hard drive, sync the data, and destroy the corrupt pool? (I am sure I read it's possible in the past.)

Or maybe even leave it like this... sure, it's not good practice if the drive fails, but not a bad plan if the pool fails...

In other words, can I have 2 pools on 1 drive?
 

Etorix

Wizard
Joined
Dec 30, 2020
Messages
2,134
No. You can have as many datasets as you want in a pool, but drives should belong in whole to a single pool—preferably several drives in a single pool (or rather vdev, which itself belongs to a pool).
 

Davvo

MVP
Joined
Jul 12, 2022
Messages
3,222
If you really want to use single drives, consider using a copies setting greater than 1.
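For example (dataset name is hypothetical); note that copies only applies to data written after the property is set, and the extra copies live on the same drive, so it does not protect against whole-disk failure:

Code:
# Keep two copies of every block written to this dataset from now on.
zfs set copies=2 tank/important

# Confirm the setting.
zfs get copies tank/important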
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,700
No. You can have as many datasets as you want in a pool, but drives should belong in whole to a single pool—preferably several drives in a single pool (or rather vdev, which itself belongs to a pool).

Let me make that a bit more straightforward... (even if a lot longer)...

Your system attaches to disk storage:

SATA Controller, SAS controller, HBA (but not RAID controller, nor passthrough of individual virtual disks in any way)... possibly then with a SAS expander or backplane to connect to the individual disks.

Those disks can then be members of VDEVs (which are the building blocks of a pool)... either Stripe, Mirror, RAIDZ (1, 2 or 3)... and some special types of VDEV like L2ARC, SLOG, "SPECIAL" (metadata and/or small files), SPARE or DEDUP.

A VDEV must have at least one disk in it, but can (or must) have more depending on the type.

A pool is then constructed from those VDEVs (both are made at pool creation as VDEVs don't exist outside of a pool) by striping them together. (except for L2ARC and SLOG, which aren't critical to the pool and can be removed without harm, all VDEVs must be present and working for a pool to work)

A pool can then contain datasets (at least the root dataset, present in every pool), which behave as directories with some "special properties" and optionally ZVOLs (separate block storage devices which don't contain the ZFS filesystem, so don't behave at all like directories).

According to the OpenZFS project (and the developers of TrueNAS), entire disks should be given to VDEVs. (https://openzfs.github.io/openzfs-docs/Performance and Tuning/Workload Tuning.html#whole-disks)

As has been noted in the forum many times, that's not a technical limitation and many have documented interesting things that can be done with partitioned disks, but ignoring the recommendation is at your own peril.
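As a rough illustration of that hierarchy on the command line (pool, dataset, and device names below are made up; on TrueNAS you would normally do all of this through the GUI):

Code:
# A pool ("tank") built from one mirror VDEV of two whole disks.
zpool create tank mirror /dev/sda /dev/sdb

# A second mirror VDEV can be added later; the pool stripes across VDEVs.
zpool add tank mirror /dev/sdc /dev/sdd

# Datasets live inside the pool and behave like directories with properties.
zfs create tank/media

# A ZVOL is block storage carved from the same pool (no directory behaviour).
zfs create -V 50G tank/vm-disk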
This might be a very stupid question... I still have lots of free space on my 18TB. If I understand correctly, the drive is fine but the pool is corrupt.

Can I create a new pool on the 18TB hard drive, sync the data, and destroy the corrupt pool? (I am sure I read it's possible in the past.)

Or maybe even leave it like this... sure, it's not good practice if the drive fails, but not a bad plan if the pool fails...

In other words, can I have 2 pools on 1 drive?
To answer your question directly, yes, technically, but don't.

That pool has already had corruption (possibly related to your virtual disk passthrough setup), so I would suggest considering how you'll fix that before you do anything other than getting the most complete backup of your data that you can.
 

Davvo

MVP
Joined
Jul 12, 2022
Messages
3,222
The resources @Arwen posted on page 1 are quite useful for getting familiar with ZFS and pool layouts. I suggest you read them.
 

bar1

Contributor
Joined
Dec 18, 2018
Messages
115
When trying to replicate the dataset with the corrupt file, I am getting:
cannot receive new filesystem stream: incomplete stream

I tried a few snapshots... is there a way to skip the errors or something?
It runs for a few hours, 1.4 TB of data, and at the end nothing is written to the disk...
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,700
You'll need to use a different method than replication.

rsync is probably the best option.
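Something along these lines, assuming both pools are mounted under /mnt (paths are placeholders):

Code:
# -a preserves permissions/ownership/timestamps, -x stays on one filesystem,
# --partial lets an interrupted copy resume instead of starting over.
rsync -avx --partial --progress /mnt/Bar1-18TB/mydata/ /mnt/newpool/mydata/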
 