Device: /dev/ada1, 1688 Currently unreadable (pending) sectors

M@tchichi

Cadet
Joined
Sep 9, 2020
Messages
5
Hello everyone,

I'm fairly new to FreeNAS, though I've been using it on my NAS for a while now.
I'm getting critical errors like this:

[screenshot: critical alert for /dev/ada1]


My volume looks like this:

[screenshot: pool/volume layout]


I don't know what to do or how to fix this.

Thank you.
 

Pitfrr

Wizard
Joined
Feb 10, 2014
Messages
1,523
Hello,

You need to replace that disk, and fast! :-O
If it's still under warranty, you should be able to claim on it.

And I hope you have good backups too... because it looks like disk ada1 is giving up the ghost, with 1688 defective sectors.
And since the volume is a stripe, there is no redundancy, so if one disk fails the whole volume is lost.
A scrub appears to be in progress; it may well find errors too, and it may not be able to repair them since there is no redundancy.
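
You can confirm the disk's state from the shell with SMART; a quick sketch, using the device named in the alert:

Code:
 smartctl -a /dev/ada1

The attributes to watch are 197 Current_Pending_Sector, 198 Offline_Uncorrectable and 5 Reallocated_Sector_Ct; non-zero raw values there mean the disk is on its way out.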
 

Pitfrr

Wizard
Joined
Feb 10, 2014
Messages
1,523
In fact, it's quite likely the scrub has already found errors; it says as much: "One or more devices has experienced an error resulting in data corruption".
And the "read" column shows read errors for ada1.
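
The verbose flag will also list any files with permanent (unrepairable) errors; a sketch, assuming the pool is named Storage:

Code:
 zpool status -v Storage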
 

M@tchichi

Cadet
Joined
Sep 9, 2020
Messages
5
Thanks... I'm backing up an important folder to another volume right now. How do I replace this disk in the volume? I don't think the disk is still under warranty. It's a 2 TB disk. Yes, I should have made a mirror rather than a stripe... My volume has one 2 TB disk and one 4 TB disk. How can I remove the 2 TB, keep only the 4 TB, and copy the data from the 2 TB onto the 4 TB?
Thanks for your help
 

Yorick

Wizard
Joined
Nov 4, 2018
Messages
1,912
Hi there,

A stripe is just a pool with two degraded mirror vdevs. There aren't a lot of great options here, depending on space for additional disks and your budget.

You could:

1) Move all your data off (see the replication sketch after this list). Destroy the pool, then recreate it as a 2x4TB mirror. Copy your data back on. Requires one additional 4TB disk.
or
2) Attach two 4TB disks to the 2TB one, and one 4TB disk to the 4TB one. You end up with five disks, configured as two mirror vdevs, one is 3-wide. Wait for resilver. Then detach the 2TB disk and throw it away. You are left with your current pool layout, just now with redundancy. Requires three additional 4TB disks.
or
3) Attach one 4TB disk to the 2TB one. Wait for resilver, detach the 2TB disk and throw it away. You are left with your current pool layout, without redundancy. Make sure your backups are excellent; running a pool without redundancy assumes you're okay with data loss. Requires one additional 4TB disk.
or
4) Like 1), just that you create a pool with a single 4TB drive, no redundancy. Same warnings as 3) apply. No additional disks needed.

I am not covering the obvious 5) because that'd take a lot of RAM, and is hardly ever worth it. That being to remove the existing 2TB vdev with "zpool remove" and keep going; the removed vdev's data stays reachable through an indirect mapping that is held in RAM. I suppose you can, but read into the implications and RAM usage first.
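
For option 1, one way to move the data off is ZFS replication. A minimal sketch, assuming your pool is named Storage and a second pool named backup has enough free space:

Code:
 # Snapshot everything recursively, then send the whole pool to the backup pool
 zfs snapshot -r Storage@migrate
 zfs send -R Storage@migrate | zfs recv -F backup/Storage-copy

 # After destroying and recreating Storage as a 2x4TB mirror, send it back
 zfs snapshot -r backup/Storage-copy@restore
 zfs send -R backup/Storage-copy@restore | zfs recv -F Storage

With a disk that is throwing read errors, the send may hit unreadable files; copy what matters most first.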
 

M@tchichi

Cadet
Joined
Sep 9, 2020
Messages
5
Hi Yorick,

Thanks for your explanations. I assume the first solution is the safest, if I can back all my data up elsewhere.
So there is no way to convert a stripe to a mirror without deleting data?

Could I find out which data are on the 2TB disk and which on the 4TB disk?
 

Yorick

Wizard
Joined
Nov 4, 2018
Messages
1,912
You can absolutely convert single-disk vdevs to mirror vdevs; that's option 2. Attach a disk to each existing (single-disk) vdev and you have mirrors. I suggested attaching two to the 2TB so you can, after resilver, detach the 2TB and be up and running with two mirror vdevs.

I recommend reading Ars Technica’s primer on ZFS, or something equivalent in French. ZFS distributes writes across all vdevs by its own algorithm. It’s not that certain files are on vdev one and the other on vdev two; it’s that the entire file system is distributed across both vdevs, recordsize chunks at a time. There is no easy way to untangle that.
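
You can see that spread in the per-vdev allocation; a sketch, with Storage as the pool name:

Code:
 # ALLOC is reported per vdev: both disks carry a share of the pool's data
 zpool list -v Storage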
 

M@tchichi

Cadet
Joined
Sep 9, 2020
Messages
5
Thanks for taking the time to help me :)
I would like to understand this better: I had a pool called Storage with a 2TB disk; later I bought a 4TB and added it to the pool Storage. When you say "attach a disk", how can I do that? Thanks again ;)
 

Yorick

Wizard
Joined
Nov 4, 2018
Messages
1,912
Hence the recommendation to read into how ZFS works :). https://arstechnica.com/information...01-understanding-zfs-storage-and-performance/ is a good one.

You had a pool with one vdev, and you added a second vdev. These vdevs are "single disk" vdevs, which ZFS treats as a special case of a mirror vdev. In ZFS terms, you used the "zpool add" command. In FreeNAS terms, you likely used "Add vdevs" in the Pools -> Storage UI.
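
The distinction matters, because the two commands do very different things. A sketch with bare device names for clarity (FreeNAS itself passes gptids rather than raw devices):

Code:
 # zpool add: creates a NEW top-level vdev -- this is what built your stripe
 zpool add Storage ada2

 # zpool attach: mirrors an EXISTING device -- this is what adds redundancy
 zpool attach Storage ada1 ada2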

You now want to attach a second disk to each existing single-disk vdev, to make them into mirror vdevs. To do this from UI requires TrueNAS 12.0 Core, which will be available as an RC1 in one week.

In TrueNAS Core, the option to attach a disk to a single-disk or mirror vdev is named a little oddly, it's called "Extend". You get to it thusly:

1) Pools -> Storage
2) Use the gear icon for "Status"
3) Use the three-dot menu next to an individual disk and choose "Extend"
4) You get an "Extend vdev" window that prompts you for "New Disk". Select from the drop-down and choose "Extend"

TrueNAS Core will then "zpool attach" the new disk to your existing vdev, changing it from a single-disk vdev into a mirror vdev.

All these operations can also be done from CLI, but that'll require fiddling with gpart and getting disk gptids and so on. There's a decent chance to make a misstep. My recommendation is to use the UI in TrueNAS Core 12.0 for this.
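
For reference, on FreeBSD you can map the gptids shown by zpool status back to disks and partitions (e.g. ada1p2) with:

Code:
 glabel status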

To illustrate this, here's a pool with two single-disk vdevs as shown by zpool status:

Code:
 pool: lonely
state: ONLINE
config:

    NAME                     STATE     READ WRITE CKSUM
    lonely                   ONLINE       0     0     0
      /mnt/Gion/VMs/sparse1  ONLINE       0     0     0
      /mnt/Gion/VMs/sparse2  ONLINE       0     0     0


I'll now zpool attach a second disk to that first vdev. For reference, the command takes the pool, the existing device, and the new device:
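
Code:
 zpool attach lonely /mnt/Gion/VMs/sparse1 /mnt/Gion/VMs/sparse3

Here sparse3 is another sparse file created the same way as the first two. Then zpool status again: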

Code:
  pool: lonely
state: ONLINE
  scan: resilvered 72K in 00:00:01 with 0 errors on Thu Sep 10 07:36:44 2020
config:

    NAME                       STATE     READ WRITE CKSUM
    lonely                     ONLINE       0     0     0
      mirror-0                 ONLINE       0     0     0
        /mnt/Gion/VMs/sparse1  ONLINE       0     0     0
        /mnt/Gion/VMs/sparse3  ONLINE       0     0     0
      /mnt/Gion/VMs/sparse2    ONLINE       0     0     0


My first vdev is now called mirror-0! Success. And I still have a single-disk vdev as my second vdev. If I now zpool attach another disk to that second vdev as well, the pool looks like this:

Code:
  pool: lonely
state: ONLINE
  scan: resilvered 171K in 00:00:00 with 0 errors on Thu Sep 10 07:38:05 2020
config:

    NAME                       STATE     READ WRITE CKSUM
    lonely                     ONLINE       0     0     0
      mirror-0                 ONLINE       0     0     0
        /mnt/Gion/VMs/sparse1  ONLINE       0     0     0
        /mnt/Gion/VMs/sparse3  ONLINE       0     0     0
      mirror-1                 ONLINE       0     0     0
        /mnt/Gion/VMs/sparse2  ONLINE       0     0     0
        /mnt/Gion/VMs/sparse4  ONLINE       0     0     0


Not so lonely any more, those vdevs. This pool now has redundancy. All redundancy exists at vdev level, never at pool level.

I was using sparse files for this example. FreeNAS/TrueNAS partitions a drive and then uses the gptid of the second partition, which you can see with zpool status. This is vital to the way it functions: because of the partitions, a replacement drive can be a few sectors smaller and still work (not all 4TB drives are created precisely equal), and because of the gptid, the drive can be moved to a different controller or port and still be part of the pool without issue. Doing that correctly from the CLI is possible, and this forum has guides on how; but why struggle with that when it is now available in the UI?

A mirror can have any number of disks. If I assume that the first vdev is similar to the one you have with a failed disk, I can attach two disks and wait for resilver:

Code:
  pool: lonely
state: ONLINE
  scan: resilvered 180K in 00:00:00 with 0 errors on Thu Sep 10 07:47:16 2020
config:

    NAME                       STATE     READ WRITE CKSUM
    lonely                     ONLINE       0     0     0
      mirror-0                 ONLINE       0     0     0
        /mnt/Gion/VMs/sparse1  ONLINE       0     0     0
        /mnt/Gion/VMs/sparse3  ONLINE       0     0     0
        /mnt/Gion/VMs/sparse5  ONLINE       0     0     0
      mirror-1                 ONLINE       0     0     0
        /mnt/Gion/VMs/sparse2  ONLINE       0     0     0
        /mnt/Gion/VMs/sparse4  ONLINE       0     0     0


I'll pretend that sparse1 is the failed disk, and after resilver completes, I'll zpool detach it - in TrueNAS Core that'll be a drop-down next to the disk to remove it, though I don't have a second disk handy right now to see what the UI calls it. I'll have that in a week or so. Here's the pool with the "defective" sparse1 removed:


Code:
  pool: lonely
state: ONLINE
  scan: resilvered 180K in 00:00:00 with 0 errors on Thu Sep 10 07:47:16 2020
config:

    NAME                       STATE     READ WRITE CKSUM
    lonely                     ONLINE       0     0     0
      mirror-0                 ONLINE       0     0     0
        /mnt/Gion/VMs/sparse3  ONLINE       0     0     0
        /mnt/Gion/VMs/sparse5  ONLINE       0     0     0
      mirror-1                 ONLINE       0     0     0
        /mnt/Gion/VMs/sparse2  ONLINE       0     0     0
        /mnt/Gion/VMs/sparse4  ONLINE       0     0     0
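
For reference, the detach step from the CLI was:

Code:
 zpool detach lonely /mnt/Gion/VMs/sparse1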


Playing around with these concepts using sparse files, which you can create with "truncate -s 1T <filename>", can be helpful for becoming familiar with ZFS concepts. If you do, I recommend using SSH to connect to the CLI, not the built-in web CLI.
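
If you want such a sandbox of your own, here's a minimal sketch, assuming a dataset mounted at /mnt/Gion/VMs to hold the backing files (any path on an existing pool works; the throwaway pool holds no real data, so destroying it is harmless):

Code:
 # Create two sparse backing files and a throwaway striped pool
 truncate -s 1T /mnt/Gion/VMs/sparse1 /mnt/Gion/VMs/sparse2
 zpool create lonely /mnt/Gion/VMs/sparse1 /mnt/Gion/VMs/sparse2

 # ...attach/detach experiments as above...

 # Clean up when done
 zpool destroy lonely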

All changes to a pool's data-carrying vdevs are permanent, whether done from CLI or UI. Keep that in mind when you work on your real pool. You want to make sure you know the steps you are taking quite well. If, for example, you were to "add vdev" to the existing pool instead of using the "extend" command on a single vdev (disk), then you'd end up with three single-disk vdevs, still no redundancy, and no closer to a solution.
 
Last edited:

Stux

MVP
Joined
Jun 2, 2016
Messages
4,358
FWIW,

In TrueNAS Core, the option to attach a disk to a single-disk or mirror vdev is named a little oddly, it's called "Extend". You get to it thusly:

I just checked the documentation, and AFAICT, this is undocumented. Would love to be proved wrong.

Anyway, I was hoping there was a GUI way to do this in 2021 ;)

Will give it a whirl.

It should be documented!

Edit: Yep, can confirm "Extend" on a vdev means "Attach".
 

Yorick

Wizard
Joined
Nov 4, 2018
Messages
1,912
I am wondering whether it was named that because raidz expansion may land within a year, and “extend” on a raidz vdev is a bit more intuitive language than “attach”.
 