Creating a degraded pool (for migration)

tannisroot

Dabbler
Joined
Oct 14, 2023
Messages
45
Hi, I currently have a single drive with no redundancy that has data on it, and I want to migrate to a RAIDZ2 setup using disks of the same model as the one I have now.
Is there a way to create a degraded pool (as in, it is already degraded the moment it is created) with 1 disk missing, move my data onto the new RAIDZ2 pool, and after that resilver the original drive into it?
There are some very old instructions for CORE that explain how to do this, but I couldn't find anything concrete for SCALE.
 

jcunha

Cadet
Joined
Jan 18, 2024
Messages
1
I'd also like to know this. Those CORE instructions won't work on SCALE because they were written for FreeBSD.
 

chuck32

Guru
Joined
Jan 14, 2023
Messages
623
What do you mean by those CORE instructions? This guide?
 

Arwen

MVP
Joined
May 17, 2014
Messages
3,611
Yes, the old Resource is FreeBSD specific;

But, the concept is the same. Simply convert the steps to Linux. If you can't do that, then it is possible you may not have the skills to perform the task with complete data safety.
 

tannisroot

Dabbler
Joined
Oct 14, 2023
Messages
45
Yes, the old Resource is FreeBSD specific;

But, the concept is the same. Simply convert the steps to Linux. If you can't do that, then it is possible you may not have the skills to perform the task with complete data safety.
It's not that I'm incapable of converting the steps; it's just time-consuming to research what is what and how the platforms differ. I would have just preferred a pre-existing resource.
 

tannisroot

Dabbler
Joined
Oct 14, 2023
Messages
45
Anyway, I wanted my degraded pool to take up all of the available space on the disks, since in 2024 I have no need for swap on spinning rust, and I wanted it to be the same size as if I had created it through the SCALE GUI. If you want the same, here are the detailed steps:
1) Use the SCALE GUI to create a temporary pool consisting of a single 1-disk stripe vdev.
2) Go to Storage -> Disks and note the letter of the drive that houses the new temporary pool.
3) Log in to a shell (I did so through SSH) and run `sudo fdisk -l /dev/sdX`, where X is that drive's letter.
4) At the bottom, note the Start, End and Sectors values of the partition /dev/sdX1 and save them somewhere. For my 18TB drive, the values were 4096, 35156654080 and 35156649985.
5) Run `sudo lsblk -ba /dev/sdX` (same drive letter) to get the size in bytes of the partition used by the temporary pool. For mine I got `18000204792320`. (Steps 3-5 are shown together in the first sketch after these steps.)
6) In the GUI, export the temporary pool and tick the box to destroy its data.
7) In Storage -> Disks, note the letters of the drives you wish to use to create the degraded pool, and note the letter of the drive(s) you want to keep intact until the transition is complete.
8) Run `sudo fdisk /dev/sdX` for the first drive you wish to prepare for the new degraded pool.
9) You will be greeted with an interactive CLI. First, create a new GPT partition table by entering `g`. Then create a new partition by entering `n`. For the partition number, choose 1. For the first and last sector, use the Start and End values you got in step 4. The partition will get created. Then enter `p` to review the prepared partition layout and verify that the Sectors value matches the one you got in step 4. Finally, write the changes by entering `w`. (An example fdisk session is sketched after these steps.)
10) Repeat steps 8 to 9 for every drive you wish to use for a new pre-degraded pool.
11) Create a sparse file with `sudo truncate -s X /root/sparsefile`, where X is the partition size in bytes you got in step 5. If needed, repeat this for however many sparse files you need, which corresponds to the number of drives you want to integrate into the RAIDZ pool later. It just can't be more than the parity allows, so for example in a RAIDZ2 pool you can't use more than 2 sparse files. In my case, I only needed 1.
12) Get the GPTIDs of the /dev/sdX1 partitions you created in steps 8-10 by running `sudo blkid`; the value you want is PARTUUID. For example, for one of my partitions the output contained `PARTUUID="a2cbeb8f-f839-4f7e-9f86-222dc4f9d3b4"`, and you copy what is inside the quotes. Collect the GPTIDs of all the drives you have prepared and paste them into a text file somewhere (see the sketch after these steps).
13) Create the degraded pool. I wanted an 8-disk-wide RAIDZ2 pool and only needed 1 sparse file, so I ran `sudo zpool create -f TANK raidz2 GPTID1 GPTID2 GPTID3 GPTID4 GPTID5 GPTID6 GPTID7 /root/sparsefile`, where TANK is the desired name of the pool and GPTIDx are the GPTIDs you got in step 12.
14) After the command completes and the pool is created, run `sudo zpool offline TANK /root/sparsefile` to take the sparse file offline. DO NOT write any data to the pool before you do this. After that, you can safely delete the sparse file (see the sketch after these steps).
15) Run `sudo zpool export TANK` and then import the pool through the GUI.
16) Replicate whatever data you need onto the new degraded pool.
17) When you are done with that, export the old pool on the drive you wanted to incorporate into the new pool but couldn't before copying the data off it, and tick `destroy data`.
18) Repeat steps 7, 8 and 9 to get the letter of this now-blank drive and create the partition on it, then repeat step 12 to get the GPTID of that partition.
19) Finally, replace the sparse file with `sudo zpool replace TANK /root/sparsefile GPTID`, where GPTID is the one from step 18, and let the resilver finish (see the last sketch below).
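
For reference, here is a rough sketch of what the shell parts of the steps above look like end to end. The drive letters are just examples and the sizes/UUID are the ones from my system, so substitute your own values. Steps 3-5, inspecting the temporary pool's partition:

```
# Steps 3-5: read the partition geometry of the drive backing the temporary pool.
# sdh was my drive's letter; yours will almost certainly differ.
sudo fdisk -l /dev/sdh    # note Start, End and Sectors of /dev/sdh1 (mine: 4096, 35156654080, 35156649985)
sudo lsblk -ba /dev/sdh   # note the partition size in bytes (mine: 18000204792320)
```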
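
Steps 8-9, the fdisk dialogue. This is a sketch of the keystrokes only (fdisk prints a prompt between each one), with the sector values from my 18TB drives:

```
sudo fdisk /dev/sdb   # one of the blank drives for the new pool (example letter)
g                     # create a new empty GPT partition table
n                     # add a new partition
1                     # partition number
4096                  # first sector (the Start value from step 4)
35156654080           # last sector (the End value from step 4)
p                     # print the layout; check that Sectors reads 35156649985
w                     # write the changes to disk and exit
```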
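
Steps 11-12, creating the placeholder file and collecting the PARTUUIDs (again, the size and the UUID are the ones from my system):

```
# Step 11: a sparse file exactly as large as the partition size from step 5
sudo truncate -s 18000204792320 /root/sparsefile

# Step 12: read the PARTUUID of each partition prepared in steps 8-10
sudo blkid /dev/sdb1
# ... PARTUUID="a2cbeb8f-f839-4f7e-9f86-222dc4f9d3b4"  <- copy the value inside the quotes
```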
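
Steps 13-15, creating the pool and knocking the sparse file out of it. The `zpool status` call is just a sanity check I would add, it isn't strictly required:

```
# Step 13: 8-wide RAIDZ2 with one member backed by the sparse file
sudo zpool create -f TANK raidz2 GPTID1 GPTID2 GPTID3 GPTID4 GPTID5 GPTID6 GPTID7 /root/sparsefile

# Step 14: take the sparse file offline BEFORE writing any data
sudo zpool offline TANK /root/sparsefile
sudo zpool status TANK        # the pool should now show DEGRADED, with the file OFFLINE
sudo rm /root/sparsefile      # the placeholder is no longer needed

# Step 15: export, then import the pool through the SCALE GUI
sudo zpool export TANK
```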
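
And step 19, swapping the real drive in for the sparse file once the data has been copied over (GPTID is the PARTUUID you got in step 18):

```
sudo zpool replace TANK /root/sparsefile GPTID
sudo zpool status TANK   # watch the resilver; the pool returns to ONLINE when it finishes
```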
 

Arwen

MVP
Joined
May 17, 2014
Messages
3,611
It's not that I'm incapable of converting the steps; it's just time-consuming to research what is what and how the platforms differ. I would have just preferred a pre-existing resource.
Resources are written by people, (community members, not iX staff), who either see a need or have done the work themselves and want to share the steps. Thus, it is entirely possible the writer of that Resource has not performed the steps in TrueNAS SCALE, (based on Linux).

Feel free to write your own, Linux specific Resource, for others to view. Plus, in the discussion thread, others can make suggestions or optimizations.

I wrote a few Resources so that it would be easier to point someone with a question to a more complete and proofread answer, rather than a terse, off-the-cuff answer which may have typos or less than good wording.
 

tannisroot

Dabbler
Joined
Oct 14, 2023
Messages
45
Resources are written by people, (community members, not iX staff), who either see a need or have done the work themselves and want to share the steps. Thus, it is entirely possible the writer of that Resource has not performed the steps in TrueNAS SCALE, (based on Linux).

Feel free to write your own, Linux specific Resource, for others to view. Plus, in the discussion thread, others can make suggestions or optimizations.

I wrote a few Resources so that it would be easier to point someone with a question to a more complete and proofread answer, rather than a terse, off-the-cuff answer which may have typos or less than good wording.
Yeah exactly! I basically wrote my own guide for other people in the message above yours. I might reword it a bit better and make a post out of it.
 