SOLVED 4 Disks, Mirrored - Is This Right?

dakotta

Dabbler
Joined
Oct 12, 2018
Messages
42
I'm not sure I did this right.

I have only 4 disks.

I have read about the difference between using 2 mirrored vdevs vs RAID-Z2, and have decided that, for my use case, it's a toss-up. I decided to go with the mirror for the simplicity of rebuilding.

When I set up my pool, I selected all 4 drives and then clicked "mirror". For some reason I was expecting to set up a 2-disk striped vdev, then set up a second striped vdev, and then tell TrueNAS to mirror the first to the second.

Did TrueNAS automatically do that when I clicked the "mirror" button? Or did I set up my pool incorrectly?

EDIT - or did I just create a 4-way mirror?

Cheers,
 

Patrick M. Hausen

Hall of Famer
Joined
Nov 25, 2013
Messages
7,776
zpool status or the UI will tell you ...
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
When I set up my pool, I selected all 4 drives and then clicked "mirror". For some reason I was expecting to set up a 2-disk striped vdev, then set up a second striped vdev, and then tell TrueNAS to mirror the first to the second.
Even if it had automatically done what you intended, you would have ended up with a stripe of 2 mirrors, not a mirror of 2 stripes.

VDEVs are striped together in a pool; that's the only way.

As @Patrick M. Hausen said, zpool status -v will show you.
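
To make that concrete, the command-line equivalent of the layout you intended would look roughly like this (da0 through da3 and the pool name tank are placeholders; TrueNAS normally builds pools from GPT partitions via the UI, so treat this as an illustration, not a way to actually create the pool):
Code:
# a pool made of two mirror vdevs -- ZFS stripes writes across the two mirrors
zpool create tank mirror da0 da1 mirror da2 da3

There is no syntax for the reverse ("mirror these two stripes"), because redundancy is always defined inside a vdev and the pool level is always a stripe of its vdevs.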
 

dakotta

Dabbler
Joined
Oct 12, 2018
Messages
42
Even if it had automatically done what you intended, you would have ended up with a stripe of 2 mirrors, not a mirror of 2 stripes.

VDEVs are striped together in a pool; that's the only way.

As @Patrick M. Hausen said, zpool status -v will show you.

If I understand you correctly: it doesn't matter how many vdevs I have in my pool, data written to my pool is always striped across all of them. This is why a single-disk vdev is so dangerous for my data: losing that disk will destroy the vdev, which in turn will destroy the pool.

So, I don't actually have to tell TrueNAS what to do with the vdevs I create: it will always stripe data across all available vdevs.

What I have to tell TrueNAS is how many vdevs to create and how to structure the data in those vdevs.

In using the wizard, I selected 4 disks and told TrueNAS to create a mirrored pool. I'm pretty sure it created one vdev, a 4-way mirror (see below). To achieve what I wanted, I should have selected two disks and created a mirror vdev, then selected two more disks and created another mirror vdev. TrueNAS would then stripe data across those two vdevs... and any data written to a particular vdev would be mirrored across both disks in that vdev.

Introduction to ZFS
Cyberjock's slideshow

If I understand the report below, it shows a single vdev (named mirror-0) with four disks in a 4-way mirror.
Code:
########## ZPool status report summary for all pools on server TRUENAS ##########

+--------------+--------+------+------+------+----+----+--------+------+-----+
|Pool Name     |Status  |Read  |Write |Cksum |Used|Frag|Scrub   |Scrub |Last |
|              |        |Errors|Errors|Errors|    |    |Repaired|Errors|Scrub|
|              |        |      |      |      |    |    |Bytes   |      |Age  |
+--------------+--------+------+------+------+----+----+--------+------+-----+
|boot-pool     |ONLINE  |     0|     0|     0|  2%|  0%|     N/A|   N/A|  N/A|
|tank          |ONLINE  |     0|     0|     0|  0%|  0%|     N/A|   N/A|  N/A|
+--------------+--------+------+------+------+----+----+--------+------+-----+

########## ZPool status report for boot-pool ##########

  pool: boot-pool
 state: ONLINE
config:

    NAME        STATE     READ WRITE CKSUM
    boot-pool   ONLINE       0     0     0
      da4p2     ONLINE       0     0     0

errors: No known data errors

########## ZPool status report for tank ##########

  pool: tank
 state: ONLINE
config:

    NAME                                            STATE     READ WRITE CKSUM
    tank                                            ONLINE       0     0     0
      mirror-0                                      ONLINE       0     0     0
        gptid/74d65684-f72b-11eb-a7fa-3417ebecd9d8  ONLINE       0     0     0
        gptid/77e8fb8b-f72b-11eb-a7fa-3417ebecd9d8  ONLINE       0     0     0
        gptid/78833559-f72b-11eb-a7fa-3417ebecd9d8  ONLINE       0     0     0
        gptid/7890cf3d-f72b-11eb-a7fa-3417ebecd9d8  ONLINE       0     0     0

errors: No known data errors
 

Patrick M. Hausen

Hall of Famer
Joined
Nov 25, 2013
Messages
7,776
Correct. But that can easily be fixed.
Code:
# detach two of the four disks from the 4-way mirror (mirror-0 keeps the other two)
zpool detach tank gptid/78833559-f72b-11eb-a7fa-3417ebecd9d8
zpool detach tank gptid/7890cf3d-f72b-11eb-a7fa-3417ebecd9d8
# add the freed disks back as a second mirror vdev
zpool add tank mirror gptid/78833559-f72b-11eb-a7fa-3417ebecd9d8 gptid/7890cf3d-f72b-11eb-a7fa-3417ebecd9d8
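
Assuming both detach commands and the add go through cleanly (the gptids are the last two disks from your status output, and no resilver is needed because the new vdev starts out empty), zpool status should then show something along these lines:
Code:
    NAME                                            STATE     READ WRITE CKSUM
    tank                                            ONLINE       0     0     0
      mirror-0                                      ONLINE       0     0     0
        gptid/74d65684-f72b-11eb-a7fa-3417ebecd9d8  ONLINE       0     0     0
        gptid/77e8fb8b-f72b-11eb-a7fa-3417ebecd9d8  ONLINE       0     0     0
      mirror-1                                      ONLINE       0     0     0
        gptid/78833559-f72b-11eb-a7fa-3417ebecd9d8  ONLINE       0     0     0
        gptid/7890cf3d-f72b-11eb-a7fa-3417ebecd9d8  ONLINE       0     0     0

Data already on the pool stays on mirror-0; new writes will favor the emptier vdev until the two roughly even out.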
 

dakotta

Dabbler
Joined
Oct 12, 2018
Messages
42
Oh. Very nice! Thank you. :)
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
If I understand you correctly: it doesn't matter how many vdevs I have in my pool, data written to my pool is always striped across all of them. This is why a single-disk vdev is so dangerous for my data: losing that disk will destroy the vdev, which in turn will destroy the pool.

So, I don't actually have to tell TrueNAS what to do with the vdevs I create: it will always stripe data across all available vdevs.

What I have to tell TrueNAS is how many vdevs to create and how to structure the data in those vdevs.
That's a perfect summary of how it is.

Although I would add the slight caveat that the striping isn't always even across all of the VDEVs, for a number of technical reasons, including when each VDEV was added (at pool creation or later) and each VDEV's size/free space. For a pool that has always had the same number of equally-sized VDEVs, though, it's correct in principle.
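
If you want to see how evenly data has actually ended up across the VDEVs, something like this will list allocated and free space per VDEV (tank being the pool name from your output above):
Code:
zpool list -v tank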
 

dakotta

Dabbler
Joined
Oct 12, 2018
Messages
42
That's a perfect summary of how it is.

Although I would add the slight caveat that the striping isn't always even across all of the VDEVs, for a number of technical reasons, including when each VDEV was added (at pool creation or later) and each VDEV's size/free space. For a pool that has always had the same number of equally-sized VDEVs, though, it's correct in principle.
Sure. :cool:

I assumed it was more complicated than I was suggesting, but that was all my tiny, newbie-brain can handle right now. :wink:

Cheers,
 