SOLVED: Cloned the disk to a new, larger disk. How do I add its unallocated space to the pool on that same disk?

Joined
Nov 4, 2021
Messages
18
I cannot solve this issue: the system has one system-dataset disk with a ZFS pool named "SSD180". I cloned it one-to-one to a larger disk in a Windows environment and booted from the new drive. The web UI still shows the same 180 GB.

Nextcloud is installed on this system disk, and I need all the data on it exactly as it is now, which is why I resorted to cloning.

I understand this poorly; when building the NAS I did everything step by step, reading the forum. But now I'm genuinely stuck. How can I add the unallocated space of my new disk to its own pool so that the web interface sees the larger size? If possible, I need the exact commands one by one, because everything I have read and tried has not helped. Most likely I'm doing something wrong. Even zpool status just shows the pool SSD180 (that's what it's called), but it doesn't show any gptids. The disk has no mirror. I just need to change to a larger disk, but I don't know how to do it.

I would be very grateful for help in resolving this issue.
 

Samuel Tai

Never underestimate your own stupidity
Moderator
Joined
Apr 24, 2020
Messages
5,399
From the Shell, run zpool status -v SSD180 to display the construction of your pool.

If this is, as described, a single-disk pool, then you can run zpool online -e SSD180 gptid/<GUID of pool disk> from the Shell to expand it.
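For illustration, on a healthy single-disk pool the status output usually looks something like this (the layout is typical; the GUID below is only an example), and the gptid label on the device line is exactly what goes into the expand command:

Code:
root@truenas[~]# zpool status -v SSD180
  pool: SSD180
 state: ONLINE
config:

        NAME                                          STATE     READ WRITE CKSUM
        SSD180                                        ONLINE       0     0     0
          gptid/8d299abe-e22e-11ea-9ee7-ac1f6b76641c  ONLINE       0     0     0

errors: No known data errors

root@truenas[~]# zpool online -e SSD180 gptid/8d299abe-e22e-11ea-9ee7-ac1f6b76641c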
 
Joined
Nov 4, 2021
Messages
18
From the Shell, run zpool status -v SSD180 to display the construction of your pool.

If this is, as described, a single-disk pool, then you can run zpool online -e SSD180 gptid/<GUID of pool disk> from the Shell to expand it.
Will it show up in the web UI, or do I still need to enter more commands?
 

Samuel Tai

Never underestimate your own stupidity
Moderator
Joined
Apr 24, 2020
Messages
5,399
Will it show up in the web UI, or do I still need to enter more commands?
The expanded size should display in the web UI.
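If it doesn't update, it may be worth checking the pool's autoexpand property; zpool online -e does a one-time expansion even with autoexpand off, but enabling it lets the pool grow automatically on future disk swaps. These are standard ZFS commands, nothing TrueNAS-specific:

Code:
# show whether the pool auto-grows when its devices grow
zpool get autoexpand SSD180

# optionally turn it on for future expansions
zpool set autoexpand=on SSD180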
 
Joined
Nov 4, 2021
Messages
18
From the Shell, run zpool status -v SSD180 to display the construction of your pool.

If this is, as described, a single-disk pool, then you can run zpool online -e SSD180 gptid/<GUID of pool disk> from the Shell to expand it.
And there is one small problem: zpool status does not show gptid/<GUID of pool disk>.

I'm doing something wrong again.
 

Attachments

  • 33.jpg (screenshot)

Samuel Tai

Never underestimate your own stupidity
Moderator
Joined
Apr 24, 2020
Messages
5,399
OK, try zpool online -e SSD180 ada1p2.
 

Patrick M. Hausen

Hall of Famer
Joined
Nov 25, 2013
Messages
7,776
Do a gpart show ada1 and post the results, please. Copy and paste text, don't post screenshots.
 
Joined
Nov 4, 2021
Messages
18
Do a gpart show ada1 and post the results, please. Copy and paste text, don't post screenshots.

Code:
root@truenas[~]# gpart show ada1
=>         34  468862060  ada1  GPT  (224G) [CORRUPT]
           34     262144     1  ms-reserved   (128M)
       262178       2014        - free -      (1.0M)
       264192    4194304     2  freebsd-swap  (2.0G)
      4458496  347457536     3  freebsd-zfs   (166G)
    351916032  116946062        - free -      (56G)
 

Patrick M. Hausen

Hall of Famer
Joined
Nov 25, 2013
Messages
7,776
Code:
# rewrite the backup GPT header at the true end of the larger disk (clears the [CORRUPT] flag)
gpart recover ada1
# grow partition index 3, the freebsd-zfs partition, into the trailing free space
gpart resize -i 3 ada1
# tell ZFS to expand the pool onto the now-larger partition
zpool online -e SSD180 ada1p3
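If all three succeed, the pool should pick up the extra ~56G right away; a quick sanity check with standard ZFS tooling:

Code:
# confirm the pool now reports the larger size
zpool list SSD180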
 

Patrick M. Hausen

Hall of Famer
Joined
Nov 25, 2013
Messages
7,776
What does not show as it should is the disk device referenced in the zpool. How did you create the pool? Command line or UI?

TrueNAS does not reference disk-partition-number scheme devices but always uses the UUID of the partition in question. So instead of ada1p3 there should be something like gptid/8d299abe-e22e-11ea-9ee7-ac1f6b76641c in your zpool status output.

I am not quite sure whether this is fixable for a single-disk pool. What you could try: zpool export SSD180 (shut down your jails first) and then re-import the pool from the UI. Maybe that works.
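To check whether the partition even carries a gptid label after the Windows cloning, glabel from the Shell lists the labels and the partitions backing them; if ada1p3 does not appear as a component of any gptid/<UUID> entry, the label really is gone:

Code:
# list GEOM labels; each gptid/<UUID> entry shows the partition it points at
glabel status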
 
Joined
Nov 4, 2021
Messages
18
TrueNAS does not reference disk-partition-number scheme devices but always uses the UUID of the partition in question. So instead of ada1p3 there should be something like gptid/8d299abe-e22e-11ea-9ee7-ac1f6b76641c in your zpool status output.
Now the entire space of the new disk is visible in the web UI. Do I need to fix the ada1p3 reference, or can I leave it like this since it works?
 
Joined
Nov 4, 2021
Messages
18
What does not show as it should is the disk device referenced in the zpool. How did you create the pool? Command line or UI?

TrueNAS does not reference disk-partition-number scheme devices but always uses the UUID of the partition in question. So instead of ada1p3 there should be something like gptid/8d299abe-e22e-11ea-9ee7-ac1f6b76641c in your zpool status output.

I am not quite sure whether this is fixable for a single-disk pool. What you could try: zpool export SSD180 (shut down your jails first) and then re-import the pool from the UI. Maybe that works.
I did the export as you wrote, through the UI (from the command line it throws an error). Then I imported the pool again, but nothing has changed in zpool status.
 
Joined
Nov 4, 2021
Messages
18
What does not show as it should is the disk device referenced in the zpool. How did you create the pool? Command line or UI?

TrueNAS does not reference disk-partition-number scheme devices but always uses the UUID of the partition in question. So instead of ada1p3 there should be something like gptid/8d299abe-e22e-11ea-9ee7-ac1f6b76641c in your zpool status output.

I am not quite sure whether this is fixable for a single-disk pool. What you could try: zpool export SSD180 (shut down your jails first) and then re-import the pool from the UI. Maybe that works.

Code:
root@truenas[~]# zpool status
  pool: SSD180
 state: ONLINE
  scan: scrub repaired 0B in 00:06:15 with 0 errors on Sun May  1 00:06:15 2022
config:

        NAME        STATE     READ WRITE CKSUM
        SSD180      ONLINE       0     0     0
          ada1p3    ONLINE       0     0     0

errors: No known data errors

  pool: WD4TB
 state: ONLINE
  scan: scrub repaired 0B in 00:24:51 with 0 errors on Sun Apr 17 00:24:51 2022
config:

        NAME                                          STATE     READ WRITE CKSUM
        WD4TB                                         ONLINE       0     0     0
          gptid/f1a31aab-286b-11ec-875b-d43d7eea7b49  ONLINE       0     0     0

errors: No known data errors

  pool: WD6TB
 state: ONLINE
  scan: scrub repaired 0B in 07:00:11 with 0 errors on Sun Apr 17 07:00:11 2022
config:

        NAME                                          STATE     READ WRITE CKSUM
        WD6TB                                         ONLINE       0     0     0
          gptid/eaa6ed87-251c-11ec-85ab-d43d7eea7b49  ONLINE       0     0     0

errors: No known data errors

  pool: boot-pool
 state: ONLINE
status: One or more devices are configured to use a non-native block size.
        Expect reduced performance.
action: Replace affected devices with devices that support the
        configured block size, or migrate data to a properly configured
        pool.
  scan: scrub repaired 0B in 00:00:24 with 0 errors on Sat May 14 03:45:24 2022
config:

        NAME        STATE     READ WRITE CKSUM
        boot-pool   ONLINE       0     0     0
          ada3p3    ONLINE       0     0     0  block size: 512B configured, 4096B native

This is the result I get.
 

Patrick M. Hausen

Hall of Famer
Joined
Nov 25, 2013
Messages
7,776
Unless you have another scratch disk of at least the same size, I don't know how to fix this. If you have one, I would definitely do it. Also, it seems Windows messed up your partition table: that first ms-reserved partition (from gpart show) does not belong there.

Your data is not at risk. ZFS does not care which way you address a particular device. But the TrueNAS UI and middleware do. E.g., should you ever want to attach another disk to turn this pool into a mirrored one, the UI will most certainly not work.

So avoid pool/vdev manipulation in the UI and you should be fine.
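For completeness, attaching a mirror is also possible from the Shell with standard ZFS syntax. A rough sketch, where ada2p3 stands in for a purely hypothetical, suitably partitioned second disk:

Code:
# attach a second device to the existing single-disk vdev, turning it into a mirror;
# ZFS resilvers the existing data onto the new device automatically
zpool attach SSD180 ada1p3 ada2p3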
 
Joined
Nov 4, 2021
Messages
18
Unless you have another scratch disk of at least the same size, I don't know how to fix this. If you have one, I would definitely do it. Also, it seems Windows messed up your partition table: that first ms-reserved partition (from gpart show) does not belong there.

Your data is not at risk. ZFS does not care which way you address a particular device. But the TrueNAS UI and middleware do. E.g., should you ever want to attach another disk to turn this pool into a mirrored one, the UI will most certainly not work.

So avoid pool/vdev manipulation in the UI and you should be fine.
Is there any way I can try to fix this, so that in the future I can add a mirror disk to the pool? All these pools were created from the interface. Even when I connect the original SSD180 disk, there is no gptid shown.
 