best way to copy everything to a new device?

Status
Not open for further replies.

Melvil Dui

Cadet
Joined
Jun 21, 2018
Messages
8
Scenario:

Largely static collection of files totaling ~240 TB on an old FreeNAS 9.3 system. 90+% of the files (by count, and an even larger share by space consumed) should never change. This system recently had a drive replaced and is estimating about 500 hours for the resilver to finish. It is also serving files to computers on the network.

The new replacement system is FreeNAS 11.1 with more space. It will not be serving files until the copy is complete.

The machines are next to one another. At the moment the old system has a 10G link and the new one has a 1G link. (The faster Ethernet device was not recognized and will be replaced.)

While waiting, I'd still like to use the link that is there to start copying everything. (Based on the resilver speed, I think normal usage of the old system will limit copy throughput anyway.)

I gather that ZFS replication will be a bad idea, because it's not interruptible. So that would mean rsync is the best and only option, or is there a third path?
 

DrKK

FreeNAS Generalissimo
Joined
Oct 15, 2013
Messages
3,630

Melvil Dui

Cadet
Joined
Jun 21, 2018
Messages
8
45 x 8TB. Don't trust the hardware enough to want to add more to it, hence the new system. (Which is 60 x 8TB.)
 

sunrunner20

Cadet
Joined
Mar 13, 2014
Messages
8
Do some research on resumable send/recv and see if that fits your needs; otherwise, yes, rsync is your method of choice. I'm much more interested in the hardware, though. PS: 500 hours doesn't sound right. Even a full 8 TB disk without ZFS sequential resilver (new in 11.something) takes far less than 500 hours to resilver. Have you given it time to get past the metadata-reading stage and actually start writing to disk?
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080

*blink*

*blink*
o_O
What?

LOL...
I have a system at work with 265TB of data on it. I need to replicate that to a new system that we will be receiving later this year.
 

Melvil Dui

Cadet
Joined
Jun 21, 2018
Messages
8
500 hours doesn't sound right. Even a full 8 TB disk without ZFS sequential resilver (new in 11.something) takes far less than 500 hours to resilver. Have you given it time to get past the metadata-reading stage and actually start writing to disk?

Started about a week ago. It was going much faster while the server was offline and not serving data. I attribute the slowness to load.
 

PhilipS

Contributor
Joined
May 10, 2016
Messages
179
Using resumable zfs send/receive:

All commands are run on the sending machine. I use SSH with the same keys that are set up for replication. I create the target dataset on the receiving machine before starting. I start the zfs send in a tmux session (tmux) to keep the send running after I disconnect (Ctrl-b d while in tmux). I can then reconnect to it (tmux attach-session) to check on the status.
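
A minimal sketch of that tmux workflow, using a session name of my own choosing (zfs-copy):

Code:
tmux new-session -s zfs-copy       # start a named session
# ...run the zfs send pipeline from below inside the session...
# detach with Ctrl-b d; the transfer keeps running in the background
tmux attach-session -t zfs-copy    # reattach later to check on progress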

Take an initial snapshot for sending:

zfs snapshot -r tank/dataset@backup

Start the transfer with the -s option on zfs receive; x.x.x.x is the IP address of the receiving machine:

/sbin/zfs send -v -p tank/dataset@backup | /usr/bin/xz | /usr/local/bin/throttle -K 120 | /usr/local/bin/pipewatcher $$ | /usr/local/bin/ssh -i /data/ssh/replication -o BatchMode=yes -o StrictHostKeyChecking=yes -o ConnectTimeout=7 -p 22 x.x.x.x "/usr/bin/env xzdec | /sbin/zfs receive -F -s 'tank/targetdataset'"

Resuming the transfer
I put the following in a script file so I can restart easily when needed.
I am able to experiment with different compression and adjust the throttle as needs dictate by editing the script, then killing and restarting it.

Code:
#!/bin/sh
# Ask the receiving machine for the resume token left behind by the interrupted receive
RESUME_TOKEN="$(/usr/local/bin/ssh -i /data/ssh/replication -o BatchMode=yes -o StrictHostKeyChecking=yes -o ConnectTimeout=7 -p 22 x.x.x.x "zfs get -H -o value receive_resume_token 'tank/targetdataset'")"
# Resume the send from that token and pipe it, compressed and rate-limited, over SSH to the receiver
/sbin/zfs send -t ${RESUME_TOKEN} -v | /usr/bin/xz | /usr/local/bin/throttle -K 120 | /usr/local/bin/pipewatcher $$ | /usr/local/bin/ssh -i /data/ssh/replication -o BatchMode=yes -o StrictHostKeyChecking=yes -o ConnectTimeout=7 -p 22 x.x.x.x "/usr/bin/env xzdec | /sbin/zfs receive -F -s 'tank/targetdataset'"
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
So that would mean rsync is the best and only option, or is there a third path?
I would say that there is a third path, but that would depend largely on the hardware you have available and your willingness/ability to be flexible.
I have actually been in a similar situation before and anticipate being in a similar situation again in the not-distant future.
What I did before was to connect all the drives to a single system by way of long SAS cables. Create a pool on the new drives, and then it is a local transfer at the speed of SAS instead of needing to go across the network. Once the transfer is complete, just detach the pool and connect it the way it needs to be going forward. You have a little downtime for wiring, and the system may be slow to respond to network requests while it is copying data 'internally' from one pool to the other. The time savings was significant for me because 10Gb networking is fast, but the pool internally was between 2 and 3 times faster.
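
If both pools do end up in the same box, a minimal sketch of that local copy, assuming placeholder pool names oldpool and newpool and a recursive snapshot called @migrate (the names are illustrative, not the actual pools):

Code:
# snapshot everything on the old pool
zfs snapshot -r oldpool@migrate
# pipe the whole hierarchy straight into the new pool; no network or SSH involved
zfs send -R oldpool@migrate | zfs receive -F newpool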
 

PhilipS

Contributor
Joined
May 10, 2016
Messages
179
Also, about my replication instructions: that is what I used to transfer data over a slow WAN link, so you may not want some of the piped commands in there. The throttle and the xz compression may be too CPU intensive for a fast LAN link. Some experimentation will be necessary.
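
For example, a stripped-down version of the same transfer that might suit a fast LAN, with the xz, throttle and pipewatcher stages removed (same placeholder dataset names and x.x.x.x as above; an untested sketch, not a drop-in command):

Code:
/sbin/zfs send -v -p tank/dataset@backup | /usr/local/bin/ssh -i /data/ssh/replication -o BatchMode=yes -o StrictHostKeyChecking=yes -o ConnectTimeout=7 -p 22 x.x.x.x "/sbin/zfs receive -F -s 'tank/targetdataset'"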
 

PhilipS

Contributor
Joined
May 10, 2016
Messages
179
Just had another thought: you will probably need to update the FreeNAS version on your 9.3 system to use resumable replication. I don't think it was available until 9.10 (and only in certain versions of 9.10, as it came, disappeared, and came back).
 

Melvil Dui

Cadet
Joined
Jun 21, 2018
Messages
8
What I did before was to connect all the drives to a single system by way of long SAS cables. Create a pool on the new drives, and then it is a local transfer at the speed of SAS instead of needing to go across the network. Once the transfer is complete, just detach the pool and connect it the way it needs to be going forward.

Okay, for background, I'm more of a software guy than a hardware guy. While I agree this sounds promising, the photos of the device (which I have never seen in person) make it look like there are no spare connections available on the old head unit.

Here's what I know: the old head ("Whale") is a 1U Supermicro server, about 2.5 years old. The disk shelf is also Supermicro branded. The original shelf died this month and was replaced with cannibalized parts from another, out-of-service device. Everyone who knew the details about this thing has left the company; I'm a contractor, and another contractor has been doing the actual labor, such as that shelf swap.

New head is Dell R430 ("Melville") with a QCT QuantaVault JB4602 shelf.

While they are close, they are not super close; there is one rack between them. The QuantaVault is extremely heavy (~300 lbs) and was installed where there was space close to the ground. I've attached zpool status output as files. The configuration of the raidz on Melville was left up to the software, under the assumption that the software works.

I did tentatively start an rsync last night; if there's good reason, anything on Melville can still be reconfigured.
 

Attachments

  • status.whale.txt (5.7 KB)
  • status.melville.txt (5.8 KB)

PhilipS

Contributor
Joined
May 10, 2016
Messages
179
Cancel the rsync, and don't use raidz1 (only one disk of redundancy, which is not recommended anymore). These should be at least raidz2; the old system was raidz3.
 

Melvil Dui

Cadet
Joined
Jun 21, 2018
Messages
8
Okay, I can do that.

(And per your earlier recommendation: I'm not using any compression on the copies because the bulk of the content is JPEG, which doesn't compress well without lots of CPU.)
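
A rough sketch of an uncompressed rsync along those lines, with placeholder paths and the receiver written as x.x.x.x (illustrative only, not necessarily the exact command in use):

Code:
# -a preserves permissions, times and ownership; --partial keeps partially
# transferred files on interruption; there is no -z, so nothing is recompressed in flight
rsync -a --partial --info=progress2 /mnt/oldpool/dataset/ root@x.x.x.x:/mnt/newpool/dataset/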
 

PhilipS

Contributor
Joined
May 10, 2016
Messages
179
I think (I only use mirrors) you will want five 12-disk raidz3 vdevs to make use of the 60 disks you have. What are the log and cache volumes used for, and do you still need those?
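
A rough command-line sketch of that layout, using hypothetical device names da0 through da59 (on FreeNAS you would normally build the pool from the GUI rather than by hand):

Code:
# five raidz3 vdevs of 12 disks each, 60 disks total; device names are placeholders
zpool create tank \
  raidz3 da0  da1  da2  da3  da4  da5  da6  da7  da8  da9  da10 da11 \
  raidz3 da12 da13 da14 da15 da16 da17 da18 da19 da20 da21 da22 da23 \
  raidz3 da24 da25 da26 da27 da28 da29 da30 da31 da32 da33 da34 da35 \
  raidz3 da36 da37 da38 da39 da40 da41 da42 da43 da44 da45 da46 da47 \
  raidz3 da48 da49 da50 da51 da52 da53 da54 da55 da56 da57 da58 da59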
 

Melvil Dui

Cadet
Joined
Jun 21, 2018
Messages
8
Melville now rebuilt as raidz3.

Any logs on Whale are okay to lose. That said, I can't figure out whether the log and cache volumes are in use anywhere. Those are apparently the 10x 256 GB SSDs installed in the Supermicro. Neither "mount" nor "zfs mount" shows logs or cache, and I don't see them in the GUI either.
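
One way to double-check from the shell, assuming the pool is called tank (a placeholder name):

Code:
# log and cache devices are pool members, not filesystems, so "mount" will never list them;
# if the pool has any, they appear under "logs" and "cache" headings here
zpool status tank
# or, with per-device capacity and usage
zpool list -v tank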
 

PhilipS

Contributor
Joined
May 10, 2016
Messages
179
Melville now rebuilt as raidz3.

Any logs on Whale are okay to lose. That said, I can't figure out whether the log and cache volumes are in use anywhere. Those are apparently the 10x 256 GB SSDs installed in the Supermicro. Neither "mount" nor "zfs mount" shows logs or cache, and I don't see them in the GUI either.

Okay, those would be ZFS log and cache drives, there for performance purposes. I'm not experienced with those, so I can't really give you advice; I know it depends on the workload. Best to wait for someone else to respond.
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
Any logs on Whale are okay to lose. That said, I can't figure out whether the log and cache volumes are in use anywhere. Those are apparently the 10x 256 GB SSDs installed in the Supermicro. Neither "mount" nor "zfs mount" shows logs or cache, and I don't see them in the GUI either.
I am not sure how this system is being used, but if the original build included 10 SSDs for L2ARC and SLOG, I would hope that the designer thought they were needed and didn't just throw them in for fun. The general purpose of those is to accelerate read and write performance for the pool.
I would need to know a bit more before I could make suggestions that are more than wild guesses. Does this new Dell head system have any SSDs for cache drives? Who selected the hardware? Did anyone do capacity planning / discovery before making a purchase?
 

Melvil Dui

Cadet
Joined
Jun 21, 2018
Messages
8
I don't think the new Dell has any SSDs. I don't know for sure, but inspection of /dev/ does not show any devices that look like SSDs.

No one did much planning for this, it was largely just thrown together from hardware that could be purchased quickly.

The data on the old server (Whale) is archives of customer uploads from 5+ years ago; it is pretty much read-only at this point. For every ~60 JPEG files there is one XML file that does get updated when customers change things; that file is about the size of one JPEG.

There are ten to thirty such changes a day. This became very obvious while Whale was out of commission due to the shelf failure I mentioned earlier. It is not expected that this volume will grow by any significant amount in the next N years.

New data, and new growth, goes on different servers.
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
Sounds like there is no longer a good reason for the cache drives.
Is this the active storage with a backup copy somewhere else?
If so, you can probably get away with 6 drives per vdev at RAIDz2, with 10 vdevs. It will be faster that way, and that's going to give you about 320 TB to work with.
It will also rebuild a drive faster than if you're using wide vdevs.
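
(Rough math behind that figure: each 6-disk RAIDz2 vdev has 4 data disks, so 10 vdevs × 4 disks × 8 TB is roughly 320 TB of usable space, before ZFS overhead.)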

Sent from my SAMSUNG-SGH-I537 using Tapatalk
 