Howto: migrate data from one pool to a bigger pool

peter2cfu

Dabbler
Joined
May 14, 2020
Messages
25
I realise this is an old thread. I recently built a FreeNAS device from one of my old servers: 10x4TB and 2x8TB SAS HDDs, with 2x512GB SSDs as boot drives for FreeNAS (FreeNAS-11.3-U2.1), using a fully loaded MegaRAID 9380 4e4i set to JBOD.

I have set up 3 pools: (A) 6x4TB drives (media), (B) 4x4TB drives (data), (C) 2x8TB drives (scratch). I wish to condense these into 2 pools and change the RAID level.

So the question is: I wish to convert A+B into A (making it one pool of 10 drives) and change the RAID level. Is this possible without destroying the data?
 

Basil Hendroff

Joined
Jan 4, 2014
Messages
1,644
So the question is: I wish to convert A+B into A (making it one pool of 10 drives) and change the RAID level. Is this possible without destroying the data?
No. Backup. Rebuild the pool. Restore.
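In ZFS terms, that usually means snapshot replication. A rough sketch, where the pool and dataset names are placeholders and the holding pool must be big enough to take everything:
Code:
# Snapshot the source recursively and replicate it to holding space
zfs snapshot -r media@migrate
zfs send -R media@migrate | zfs recv -F scratch/media_backup
# ...destroy and rebuild the big pool as desired, then send it all back
zfs snapshot -r scratch/media_backup@restore
zfs send -R scratch/media_backup@restore | zfs recv -F tank/media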
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
I wish to convert A+B into A (making it one pool of 10 drives) and change the RAID level. Is this possible without destroying the data?
Depends on how much data you have in the pools and how desperate you are to do it (will you take some risks with your data for the duration of the process, potentially losing all or some of it if a disk happens to fail before you get to the final picture?). If you're risk-averse, go with @Basil Hendroff's statement.

You would first need to move all of the data off the 2x8TB pool (assuming there is anything of value on a scratch volume).

Then recreate it as a stripe (unless it already is one, in which case fine).

Now move all the data from the 6x4TB pool to it.

At this point, if you have enough space to move the 4x4TB contents to the 2x8TB as well, you're golden: just do it, then recreate your 10x4TB pool and move the data back.

If you can't fit all the data on the 2x8TB, then the next step gets tricky... I'm going to assume that your 4x4TB is RAIDZ1, but if it's RAIDZ2, even better...

Take the number of redundant disks you have out of the 4x4TB and wipe it/them. Let's assume the worst case, so you now have the 2x8TB holding all the data from the 6x4TB, and 3x4TB holding the rest, with 1 spare (of course, assuming it all fits to get to this point).

Now you're in a position to build a 10-disk RAIDZ3 with the 3 parity disks missing:
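One way to do that from the CLI is with sparse files standing in for the missing disks. This is a rough sketch only: device names are placeholders, the files are made slightly smaller than a real 4TB disk so that real disks can replace them later, and FreeNAS normally expects pools to be built from the GUI.
Code:
# Create 3 sparse files to stand in for the 3 missing parity disks
truncate -s 3700G /root/fake1 /root/fake2 /root/fake3
# Build the 10-wide RAIDZ3 from the 7 real disks plus the sparse files
# (-f because the members differ slightly in size)
zpool create -f tank raidz3 da0 da1 da2 da3 da4 da5 da6 \
    /root/fake1 /root/fake2 /root/fake3
# Offline the placeholders immediately so no data lands on them;
# the pool now runs degraded, with no redundancy left
zpool offline tank /root/fake1
zpool offline tank /root/fake2
zpool offline tank /root/fake3
rm /root/fake1 /root/fake2 /root/fake3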


Then put all the data from the 3x4TB and 2x8TB into the 7x4TB pool you created (actually a 10x4TB RAIDZ3 with 3 missing disks).

Then kill the 3x4TB pool and return those disks as the 3 missing ones in the 10x4TB pool and do what you want with the 2x8TB.
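Returning the disks is then one zpool replace per placeholder, resilvering onto each real disk (again, device names are placeholders):
Code:
# Swap each offlined placeholder for a real 4TB disk; ZFS resilvers
zpool replace tank /root/fake1 da7
zpool replace tank /root/fake2 da8
zpool replace tank /root/fake3 da9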

It could work to do RAIDZ2 for the 10 disks if your 4x4TB pool is currently RAIDZ2, but you haven't shared enough information for us to see that.

FreeNAS-11.3-U2.1
This is good, as you won't be putting data at risk if you export pools.

MegaRAID 9380 4e4i set to JBOD
This is a terrible choice, see here: https://www.ixsystems.com/community/threads/whats-all-the-noise-about-hbas-and-why-cant-i-use-a-raid-controller.81931/
 

SweetAndLow

Sweet'NASty
Joined
Nov 6, 2013
Messages
6,421
Your RAID card is terrible. Multiple pools are generally a bad idea, and no, you can't change vdevs and create new pools without destroying the data on the old vdevs.
 

peter2cfu

Dabbler
Joined
May 14, 2020
Messages
25
Thanks for all your comments. In the end, I just copied the data to another NAS device and rebuilt from scratch, which seemed the best way. I have 10Gb NICs and switches, so it did not take as long as I thought it would.
 

peter2cfu

Dabbler
Joined
May 14, 2020
Messages
25
It's what I have. I updated all the BIOS/firmware and the system seems stable; I even pulled the plug live with no issues at all. I suspect that's because of the battery backup for the RAID system.
 

SweetAndLow

Sweet'NASty
Joined
Nov 6, 2013
Messages
6,421
It's what I have. I updated all the BIOS/firmware and the system seems stable; I even pulled the plug live with no issues at all. I suspect that's because of the battery backup for the RAID system.
You're not understanding. Yeah, you think it works great, until all your data is gone in the blink of an eye.
 

peter2cfu

Dabbler
Joined
May 14, 2020
Messages
25
You're not understanding. Yeah, you think it works great, until all your data is gone in the blink of an eye.
I understand; I have been through all the forums on this. I would prefer an HBA; however, I don't want to spend money on what is just an old server. I can see no technical reason (other than what you listed) why this should be an issue, and I'm aware it's not ideal. My preference would certainly be to run RAID via my card and not use ZFS; ZFS rebuild times are criminal.
 

peter2cfu

Dabbler
Joined
May 14, 2020
Messages
25
In the end, I'm likely to go back to Windows Server Core. It has been a good experiment, but I really want to leverage my existing hardware and not spend any more money.
 

Lighthouse

Dabbler
Joined
Nov 15, 2018
Messages
15
In the end, I'm likely to go back to Windows Server Core. It has been a good experiment, but I really want to leverage my existing hardware and not spend any more money.

Just to let you know, this RAID card issue is not FreeNAS-specific.

Basically, whether you use FreeNAS, Windows, or any other OS, the issue is that you simply do not have direct access to your HDDs, meaning your chosen OS won't be able to tell you something has gone wrong with an HDD until it is too late.

Probably the only way to somewhat ensure everything is OK is routine scrubbing (I highly recommend StableBit to configure your array if you are going for Windows Server Core), but in the end, if the controller on the RAID card has gone bad and is feeding back wrong information, even scrubbing won't help until you actually try to open a file and realize it is completely corrupted.
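On the ZFS side, a routine scrub plus a status check is just two commands (the pool name is a placeholder; FreeNAS can also schedule scrubs from the GUI):
Code:
# Read every block in the pool and verify it against its checksum
zpool scrub tank
# Check scrub progress and any errors found
zpool status -v tank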

But if you are not willing to spend about 50 dollars (+20 if the case is not a server case) for a proper HBA card, then it's all right, as long as you have a backup. Just be aware that your RAID card is a single point of failure, and have an appropriate backup strategy for that, regardless of your choice of OS.
 

peter2cfu

Dabbler
Joined
May 14, 2020
Messages
25
Basically, whether you use FreeNAS, Windows, or any other OS, the issue is that you simply do not have direct access to your HDDs, meaning your chosen OS won't be able to tell you something has gone wrong with an HDD until it is too late.

Thanks, I am aware of points of failure. That said, I have never personally had a RAID controller failure in 30 years; on top of that, should I ever have one, I can get a replacement off the shelf and reimport the configs. I have done so for clients several times over the years, so there's nothing theoretical here.

This was just an experiment to see if it was worth moving over to unRAID or FreeNAS, as both, from what I could gather, would do what I wanted: run Plex and qBittorrent and no more. I have tried to get the system as functional as possible so I could give it a fair crack of the whip, and where I have had issues, the people on this forum have been very helpful, for which I am extremely grateful.

It's not a factor of money for me; it's fitness for purpose. The FreeNAS system I have works very well; however, I find it uncomfortable that ZFS takes away the comfort of my RAID system. I have 5 servers running individually in my office in Uganda with over 24TB each locally, plus a further 380TB of NAS storage. Two of these servers are over 8 years old, and I have had no issues at all, even on the odd occasions when a hard disk or disks failed.

ZFS has lots of nice features, to be sure, but not enough to swing me away from a good RAID controller.
 

liteswap

Dabbler
Joined
Jun 7, 2011
Messages
37
Thanks for a great thread. I'm also about to migrate a full pool to new disks.

The pool currently attaches via an LSI 9211-8i HBA. So I've acquired a second LSI HBA (9211-8i) for £42 from eBay, and will cross-flash it to IT mode and then attach the new disks. I've 3D-printed a temporary disk mounting structure for the new disks, as the server is physically as well as logically full.

Migration: I'm comfortable with rsync - I've been using it for daily backups for decades - so I'll probably migrate the data using that.

But just to be sure: the snapshot send/recv method fails for me, I think, because I also want to change the record sizes of a couple of datasets during the process, as this is an ideal opportunity.

So my question is whether I'm missing anything. Is it possible to change dataset record sizes using snapshot send/recv?
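For context: recordsize only applies to newly written blocks, so send/recv preserves the existing block layout of the data, while a file-level copy such as rsync rewrites every file at the destination dataset's recordsize. A rough sketch, with pool and dataset names as placeholders:
Code:
# Create the destination dataset with the new recordsize first
zfs create -o recordsize=1M newpool/media
# rsync rewrites each file, so the data lands at the new recordsize
rsync -avh /mnt/oldpool/media/ /mnt/newpool/media/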
 

peter2cfu

Dabbler
Joined
May 14, 2020
Messages
25
The pool currently attaches via an LSI 9211-8i HBA. So I've acquired a second LSI HBA (9211-8i) for £42 from eBay, and will cross-flash it to IT mode.

A quick question: if your controllers are already HBAs, why would you flash to IT mode? What's the benefit? Genuine question, as I'm curious.
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
A quick question: if your controllers are already HBAs, why would you flash to IT mode? What's the benefit? Genuine question, as I'm curious.
If I understand it correctly, the IR version of the firmware (the other alternative to IT) supports some kind of RAID interference in the passing of the disks to the OS... basically, the vendors are using the HBA with other chips they may add on the board or in their systems to do RAID or other funky stuff before the OS gets a look at the disk, which, as we know, is bad for ZFS.
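You can check which firmware a card is currently running with LSI's sas2flash utility. A rough sketch; the exact firmware image names depend on the card model and platform:
Code:
# List all LSI SAS2 controllers; 'Firmware Product ID' shows IT or IR
sas2flash -listall
# A cross-flash to IT mode typically erases the flash and writes the
# IT image for the card (e.g. 2118it.bin for a 9211-8i):
# sas2flash -o -e 6
# sas2flash -o -f 2118it.bin -b mptsas2.rom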
 

liteswap

Dabbler
Joined
Jun 7, 2011
Messages
37
A quick question: if your controllers are already HBAs, why would you flash to IT mode? What's the benefit? Genuine question, as I'm curious.
The correct answer is above - although I plan to see how attaching disks and copying data work without flashing first (because I'm lazy, and why introduce a procedure that might go wrong if it's not strictly necessary). I'll just use some old disks for this process.
 

liteswap

Dabbler
Joined
Jun 7, 2011
Messages
37
Shall I start a new thread with my question (post 93 refers)?
 

Lighthouse

Dabbler
Joined
Nov 15, 2018
Messages
15
Shall I start a new thread with my question (post 93 refers)?

Yeah, it is a completely different question, so I think it is better to start a new thread.

And I'm sorry to say I have no knowledge of your problem, so I cannot help you, sorry!
 

runevn

Explorer
Joined
Apr 4, 2019
Messages
63
Thanks @depasseg for this great guide. I have completed all the steps up to step 5.

I have disconnected both my old pool (storagepool) and my new pool (storagepool_temp), and have ensured that I only disconnected the pools while retaining the dataset and the shares on the original pool.

But when I run the import command I get the following:
Code:
root@nas[~]# zpool import storagepool storagepool_old
cannot import 'storagepool': no such pool available

Both my pools are encrypted with a passphrase and an encryption key.

I'm running FreeNAS-11.3-U3.2
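One likely culprit, if these are legacy GELI-encrypted pools: the encrypted providers have to be attached before zpool import can see the pool at all. A rough sketch; the gptid and key path are placeholders, and each member disk needs attaching:
Code:
# Attach each encrypted provider with its key (prompts for the passphrase)
geli attach -k /data/geli/pool_key.key /dev/gptid/xxxxxxxx
# The pool then becomes visible for import (renaming it on the way in)
zpool import storagepool storagepool_old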
 

runevn

Explorer
Joined
Apr 4, 2019
Messages
63
Thanks @depasseg for this great guide. I have completed all the steps up to step 5.

I have disconnected both my old pool (storagepool) and my new pool (storagepool_temp), and have ensured that I only disconnected the pools while retaining the dataset and the shares on the original pool.

But when I run the import command I get the following:
Code:
root@nas[~]# zpool import storagepool storagepool_old
cannot import 'storagepool': no such pool available

Both my pools are encrypted with a passphrase and an encryption key.

I'm running FreeNAS-11.3-U3.2
Maybe an easier solution is to just change the pointers of the shares from the old pool to the new pool? I only have about 4 shares.
 