SOLVED General Advice - Volumes, Datasets and RAID configuration.

Status
Not open for further replies.

Brer

Explorer
Joined
Mar 2, 2017
Messages
63
A while back I inadvertently added a 2x2TB mirror to an existing single 2TB disk which has my main dataset on it; I was experimenting with the new GUI in Corral. Anyway, the crux of the problem is that I now have a single 2TB disk and a 2x2TB mirror in the same volume, which is not the intended RAIDZ1 config I was after, i.e. 3x2TB disks with 4TB usable and 2TB for parity.

Code:
		NAME											STATE	 READ WRITE CKSUM
		nas_data										ONLINE	   0	 0	 0
		  gptid/8d07b3ca-ff88-11e2-9b4c-001ec963b570	ONLINE	   0	 0	 0
		  mirror-1									  ONLINE	   0	 0	 0
			gptid/06a3d022-0f47-11e7-a00c-50e54955253a  ONLINE	   0	 0	 0
			gptid/0709773c-0f47-11e7-a00c-50e54955253a  ONLINE	   0	 0	 0


After inquiring on the forums, the only way to undo the above is to destroy the dataset and start again; if I want to save the data, I need to move it to another disk first.
Well, I have sourced another 2TB drive to move the main dataset to, so I can do just that and destroy my mess :)

So before I make any more mistakes and irreversibly do something I'll regret ... hence this post.

What I expect to do is import the spare disk as a separate volume from the existing volume, create a new dataset to copy my existing data into, then destroy the original dataset, split the RAID + single drive, create a RAID5-type setup, and copy the data back in.

I would use the following to do the copy, again taken from posts on the forum:

Code:
1. Create a recursive snapshot:
zfs snapshot -r ExistingDS/Storage@copy

2. Copy the data:
zfs send -Rv ExistingDS/Storage@copy | zfs receive -F SpareDS/Storage

3. Delete the created snapshots:
zfs destroy -r SpareDS/Storage@copy; zfs destroy -r ExistingDS/Storage@copy


1. Is this the correct thing to do?
2. With the above zfs commands, does the zfs destroy of the snapshot actually destroy the original dataset too?
3. Is RAID5 (RAIDZ1 in ZFS terms) the best setup with 3x2TB disks?

I've now moved from Corral to 11 Nightlies.

Thanks in advance.
Br.
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,175
After inquiring on the forums, the only way to undo the above is to destroy the dataset and start again; if I want to save the data, I need to move it to another disk first.
Well, I have sourced another 2TB drive to move the main dataset to, so I can do just that and destroy my mess :)
No, that's not how ZFS works. You can't move data around like that (trust me, if you could, ZFS' greatest problems would not exist).

Wait, what? I'm sorry, your terminology is very wrong and I'm confused now.

A pool is a volume. A dataset is a filesystem within the pool (sort of like a partition, in a very crude example). Your problem is at the pool level.

You have two options:
  • Export the pool in the GUI, import in the CLI, add a mirror to the single drive, export the pool and import it with the GUI - you'll end up with two mirror vdevs.
  • Backup the data (possibly zfs send | zfs recv to a new pool), destroy the pool, remake it as you wish and transfer the data back.
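A rough sketch of the first option from the CLI, for illustration only. The pool name and existing gptid come from the zpool status output above; the new disk's identifier is a placeholder you would substitute, and the exact device naming is an assumption:

```shell
# Export the pool from the GUI first, then on the CLI:
zpool import nas_data

# Attach a second disk to the existing single-disk vdev to turn it into
# a mirror. First argument: the existing device; second: the new one.
# <new-disk-gptid> is a placeholder - use your actual new disk's id.
zpool attach nas_data gptid/8d07b3ca-ff88-11e2-9b4c-001ec963b570 gptid/<new-disk-gptid>

# Export again, then re-import the volume from the GUI.
zpool export nas_data
```

This leaves you with two mirror vdevs rather than RAIDZ1, as noted above.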
 

Brer

Explorer
Joined
Mar 2, 2017
Messages
63
Wait, what? I'm sorry, your terminology is very wrong and I'm confused now.

A pool is a volume. A dataset is a filesystem within the pool (sort of like a partition, in a very crude example). Your problem is at the pool level.

You have two options:
  • Export the pool in the GUI, import in the CLI, add a mirror to the single drive, export the pool and import it with the GUI - you'll end up with two mirror vdevs.
  • Backup the data (possibly zfs send | zfs recv to a new pool), destroy the pool, remake it as you wish and transfer the data back.

Yes, my terminology is way off, for which I apologise, and thanks for the advice. I managed to figure it out using zfs send | zfs recv to the new pool, but I had to manually create each dataset within the new pool. I only had a few, so it only took a few minutes to replicate them. It is midway through now ...

Again thanks for the response, it helped.
 

DrKK

FreeNAS Generalissimo
Joined
Oct 15, 2013
Messages
3,630
Wait.

I thought we had a GIGANTIC WARNING SCREEN in the volume manager, a gigantic warning that said, "You are about to add a single disk to an existing pool. This is a very very very very stupid thing to do 99.9923414% of the time. Are you ABSOLUTELY CERTAIN of your intent here?"

Yet that's exactly what the gentleman did.

Am I mistaken about that warning existing still? I feel like he would not have done this had the warning been there.
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,175
It's still there. People still ignore it all the time.
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,367
And Corral.


Anyway, did the OP get his pool restructured and the data restored?
 

DrKK

FreeNAS Generalissimo
Joined
Oct 15, 2013
Messages
3,630
That's not what the OP did.
Right you are. I guess I had it backwards. OP had a single disk ALREADY, thought he could add two more disks and make it a RAID-Z?
 

Brer

Explorer
Joined
Mar 2, 2017
Messages
63
Right you are. I guess I had it backwards. OP had a single disk ALREADY, thought he could add two more disks and make it a RAID-Z?

Yes, I had a single disk and was playing around with the Corral GUI, seeing what configurations I could get with my additional 2 disks.
 

Brer

Explorer
Joined
Mar 2, 2017
Messages
63
Anyway, did the OP get his pool restructured and the data restored?

Yes, by following these exact steps on FN11 with the standard default interface from 9.10. Please forgive my terminology and perhaps the approach; I'm in no way an expert and there is probably a better way to do this.

TAKE A BACKUP OF YOUR CONFIG. I cannot stress this enough; it is very easy to make a mistake, and a config backup can get you out of a whole lot of mess. It won't save your data if you accidentally trash it, but it will let you revert to the state before you made any configuration changes.
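For the config backup, the GUI route is System → General → Save Config. A minimal CLI sketch of the same idea, assuming the standard FreeNAS 9.x/11 database location (verify the path on your system):

```shell
# Copy the FreeNAS configuration database somewhere safe.
# /data/freenas-v1.db is the assumed location on FreeNAS 9.x/11;
# the destination path here is just an example.
cp /data/freenas-v1.db /mnt/SPARE/freenas-config-backup.db
```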

After adding the disk, import it using the Import Volume option from the left-hand menu. Select "No: Skip to import" for the "Encrypted ZFS volume?" question, then select what should be your only disk to import. Give it a name, e.g. SPARE.

I don't think this step is needed; I did it just in case, as I think all permissions and settings are copied across. I selected each one of my datasets from my original volume, let's call it ORIGINAL, and took a note of the setup. Most options were default, as in Share Type UNIX, zero quota, zero reserved, etc. Permissions on each dataset were also noted; again, most were default, root:wheel rwxr--r--, pretty easy to note down.

I then created a copy of the root datasets from ORIGINAL in SPARE, so if I had ORIGINAL/downloads then I would have SPARE/downloads. If I had ORIGINAL/vm/arch and ORIGINAL/vm/ubuntu then I would just have SPARE/vm. I couldn't find a way to copy or replicate the root datasets in the volume, so I created them manually; it didn't take long, about 5-10 minutes.
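Creating those top-level datasets by hand can also be done from the CLI. A minimal sketch, using the example pool and dataset names from above:

```shell
# Create matching top-level datasets on the new pool to receive into.
# Child datasets (e.g. vm/arch, vm/ubuntu) are recreated by the
# recursive send/receive, so only the top level is needed here.
zfs create SPARE/downloads
zfs create SPARE/vm
```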

Once that was set up, I ran the following to first create a snapshot of the original dataset, then send and receive to copy the data from ORIGINAL to SPARE, and finally delete all snapshots created as part of the process.
Code:
zfs snapshot -r ORIGINAL/vm@copy
zfs send -Rv ORIGINAL/vm@copy | zfs receive -F SPARE/vm
zfs destroy -r ORIGINAL/vm@copy; zfs destroy -r SPARE/vm@copy

PLEASE NOTE: These commands require elevated privileges, using either root or sudo. If you are using sudo, ensure each command above is prefixed with sudo, as in:
sudo zfs send -Rv ORIGINAL/vm@copy | sudo zfs receive -F SPARE/vm

I scripted the above as migrate.sh:
Code:
#!/bin/sh
# Usage: ./migrate.sh <source-pool> <dest-pool> <dataset>
if [ $# -ne 3 ]; then
    echo "Usage: $0 <source-pool> <dest-pool> <dataset>" >&2
    exit 1
fi

echo "Migrating dataset $3 from volume $1 to $2"

echo "Clean up any snapshots left over from a previous run:"
zfs destroy -r "$1/$3@copy" 2>/dev/null
zfs destroy -r "$2/$3@copy" 2>/dev/null

echo "Creating snapshot of $1/$3"
zfs snapshot -r "$1/$3@copy"

zfs list -t snapshot "$1/$3@copy"

echo "Moving data for $3 from $1 to $2"
zfs send -Rv "$1/$3@copy" | zfs receive -F "$2/$3"

echo "Clean up:"
zfs destroy -r "$1/$3@copy"
zfs destroy -r "$2/$3@copy"

Run the above with sudo ./migrate.sh ORIGINAL SPARE vm

Once you have completed the data migration you can then start to move your system over, i.e. repoint any shares you may have, or disks for VMs. PLEASE NOTE: you only need to replace the ORIGINAL name with the SPARE name, e.g. if you have a share of /mnt/ORIGINAL/downloads then just update ORIGINAL to SPARE: /mnt/SPARE/downloads. The same goes for any path setting, including paths to VM disks and the home directory, as I have that in its own dataset.
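The path change is just a prefix substitution. A small sketch of the pattern (the paths here are examples, not my actual shares):

```shell
# Rewrite a stored share path from the old pool name to the new one.
old_share="/mnt/ORIGINAL/downloads"
new_share="$(echo "$old_share" | sed 's|/mnt/ORIGINAL/|/mnt/SPARE/|')"
echo "$new_share"   # /mnt/SPARE/downloads
```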

Once all the settings had been moved to SPARE, I detached the ORIGINAL volume and rebooted, then checked everything was fine and OK, including starting up all VMs, processes, jails, etc., before destroying the ORIGINAL volume. Once it was destroyed I could create the type of RAID I required with the 3 disks and repeat the above process, except going from SPARE to ORIGINAL.

Hope this helps anyone in the same situation as myself.
Br.
 