Transferring data from Drobo to new TrueNAS build

jbarry14

Explorer
Joined
May 23, 2022
Messages
56
I have just built a new TrueNAS server and am wondering what the best method is to transfer all of the data from my Drobo to my new TrueNAS build.

This is all new to me and any advice would be appreciated. Thanks in advance.
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
Connect your Drobo to your computer.

Connect to a network share (probably SMB) that you created on your TrueNAS server.

Copy the files from the Drobo to that network share using rsync or robocopy.

If you can be more specific about the Drobo model that you have, I can be more specific with the answer (there's a networked model and a bunch of direct-attached models... I've assumed the latter in the response above).
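
As a very rough sketch of that (all hostnames, share names and paths below are placeholders, and it assumes a Linux client with the Drobo volume already mounted at /mnt/drobo; macOS or Windows would use their own mount or robocopy equivalents):

# mount the SMB share you created on TrueNAS (placeholder hostname/share/user)
mkdir -p /mnt/truenas-share
mount -t cifs //truenas.local/share /mnt/truenas-share -o username=youruser
# copy everything from the Drobo volume, preserving attributes, with progress
rsync -av --progress /mnt/drobo/ /mnt/truenas-share/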
 

jbarry14

Explorer
Joined
May 23, 2022
Messages
56
Sorry, I should have been more specific. The Drobo is a B810N. I am new to TrueNAS and have been trying to find a good guide on how to use rsync or robocopy.
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
I suspect rsync will be the best candidate to go directly from the Drobo to TrueNAS (although you will actually do it from the TrueNAS side).

Step 1, get an SSH session to your TrueNAS server (look that up separately, but you'll need to activate the SSH service and use a client like PuTTY, or just ssh at the command prompt, depending on which client OS you have).

Something like ssh root@192.168.1.2 and then enter your password (obviously your TrueNAS IP will not be that one, so modify as needed).

Step 2, get yourself into a new tmux session (so that if you get disconnected, the job won't stop and you can re-connect to check in on it later):

tmux new-session -s transfer

Later, if you get disconnected (or use CTRL + B, then D to disconnect on purpose), reattach with:

tmux attach -t transfer

Step 3, from your tmux session, run:

rsync -auv --progress drobousername@192.168.1.3:/path/on/drobo /mnt/pool/dataset/on/truenas

Enter your Drobo password for the user you specified.

Then either watch it go, or step away and come back later to see how it's progressing, attaching to and detaching from tmux as described above.

That's assuming you're allowing connections to the Drobo with SFTP or file copy over SSH (which is probably the simplest way to make it go).
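
One extra check that can save a re-copy (same placeholder address and paths as above, so adjust to your setup): rsync treats a source path with and without a trailing slash differently, and a dry run with -n shows what would be transferred without writing anything.

# dry run: lists what would be copied, nothing is written yet
rsync -aunv --progress drobousername@192.168.1.3:/path/on/drobo /mnt/pool/dataset/on/truenas
# note: "/path/on/drobo" copies the drobo directory itself into the target,
# while "/path/on/drobo/" copies only its contents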
 

jbarry14

Explorer
Joined
May 23, 2022
Messages
56
Thank you for your help, I have it transferring. Is there any way to speed up the transfer? It is at about 10MB/s and I have about 8TB to transfer.
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
Is there any way to speed up the transfer? It is at about 10MB/s and I have about 8TB to transfer.
It depends on many factors... are you transferring large numbers of small files? (maybe you're at the maximum speed already)

How have you connected your hardware? (network in particular)

What's your Pool layout?

How much RAM do you have?

Does the Drobo support NFS or SMB? (and what speeds of transfer have you seen from those?)
 

jbarry14

Explorer
Joined
May 23, 2022
Messages
56
I am transferring a lot of smaller files. It's all pictures, Adobe Photoshop/Illustrator files, etc.

Both the Drobo and TrueNAS are connected via a 1GbE connection through a Ubiquiti switch.

Single pool layout - 2 VDEVs
VDEV 1: 8 x 2TB IronWolf SSDs in RAID-Z2
VDEV 2: 4 x 4TB IronWolf SSDs in RAID-Z2
1 x 1TB Samsung 980 Pro NVMe cache drive
1 x 1TB Samsung 980 Pro NVMe log drive

The system has dual AMD EPYC 7542s with 128GB of DDR4-3200 ECC RAM per CPU, for 256GB total.

The Drobo does support both NFS and SMB. With the kind of data we pull from the server to work from, speed has not been an issue from the Drobo, so I have never really tested the transfer speeds.

I have noticed from the TrueNAS dashboard that ZFS Cache is at about 220GB right now.
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
ZFS Cache is at about 220GB right now.
Normal, but not helpful in this case as you're not reading/writing any of the same data repeatedly as part of an evacuation to a new system.

I am transferring a lot of smaller files. It's all pictures, Adobe Photoshop/Illustrator files, etc.
You're probably metadata bound in that case and are not likely to see a lot of good performance during this activity no matter what you do.

You may have seen better performance if you had a special metadata VDEV on fast mirrored SSDs in your pool and/or had configured your pool with mirrors... but that may or may not be a good thing for the future performance of it if your use case was already fine on the Drobo (which won't have been mirrors either).

1 x 1TB Samsung 980 Pro NVMe log drive
Maybe that will help you if there are sync writes in your setup (but it's way too big... 30GB is about the max you would ever be able to use... and can't help you in this case where writes should be async)
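
If you want to confirm whether sync writes are in play at all, a quick check is the sync property on your datasets (poolname here is a placeholder, use your own pool):

# "standard" means only explicit sync requests touch the log device;
# "always" or "disabled" override that behaviour
zfs get -r sync poolname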

1 x 1TB Samsung 980 Pro NVMe cache drive
Also too big in general, and I question whether you would need it at all with as much RAM as you have.

As a reflection, perhaps using your 2 NVME drives as a special/metadata VDEV would be a better option that has some chance of bringing you a lot more speed now and in the future.

You can either scrap the pool and start again, or remove the log and cache and add those drives back as a special VDEV; but you've already written the metadata for what's copied so far, so you would need to re-copy that first part to get the metadata in the right place.
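
For reference, the rough CLI shape of that second option is sketched below; the device names are placeholders, and on TrueNAS the supported route is the web UI's Add VDEV dialog, so treat this as an outline rather than an exact procedure.

# remove the existing log and cache devices from the pool
zpool remove poolname log-device-gptid
zpool remove poolname cache-device-gptid
# add the two NVMe drives back as a mirrored special (metadata) VDEV
zpool add poolname special mirror nvme-device-1 nvme-device-2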
 

jbarry14

Explorer
Joined
May 23, 2022
Messages
56
You may have seen better performance if you had a special metadata VDEV on fast mirrored SSDs in your pool and/or had configured your pool with mirrors... but that may or may not be a good thing for the future performance of it if your use case was already fine on the Drobo (which won't have been mirrors either).
I am very new to all of this. I just kept reading everywhere that RAID-Z2 was the way to go.

Maybe that will help you if there are sync writes in your setup (but it's way too big... 30GB is about the max you would ever be able to use... and can't help you in this case where writes should be async)
Yeah, I know they are big drives, but I had them lying around so I thought I would utilize them.

As a reflection, perhaps using your 2 NVME drives as a special/metadata VDEV would be a better option that has some chance of bringing you a lot more speed now and in the future.

You can either scrap the pool and start again, or remove the log and cache and add those drives back as a special VDEV; but you've already written the metadata for what's copied so far, so you would need to re-copy that first part to get the metadata in the right place.
Once my transfer is complete, what would be the procedure for doing this? Would it be of any benefit for what we use the server for? I may end up trying to create some VMs in the future, just for very light use. I would rather not scrap what has already been transferred if I don't have to.


Also, I did add an Intel X540T2 NIC. I am currently using the onboard Ethernet, but could I use the NIC to increase bandwidth? I have a total of 4 Ethernet ports.
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
Once my transfer is complete, what would be the procedure for doing this?
You might be better not to wait... the transfer can either be restarted or can resume from where it left off depending on what you decide to do.

Remove the cache and log drives (zpool remove command... https://illumos.org/man/8/zpool), wipe the NVMe devices, then go into the pool, "Add VDEV" and select the Special VDEV type, and finally mark the datasets (or the whole pool/root dataset) with the settings for it (see the last line in Edit Options for your dataset(s)).
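
The dataset setting being referred to there is the special_small_blocks property; a minimal CLI sketch (the pool/dataset name and the 32K threshold are just example values):

# send blocks of 32K or smaller (plus all metadata) to the special VDEV
zfs set special_small_blocks=32K poolname/dataset
# check what is currently set
zfs get special_small_blocks poolname/dataset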

Also, I did add an Intel X540T2 NIC. I am currently using the onboard Ethernet, but could I use the NIC to increase bandwidth? I have a total of 4 Ethernet ports.
With the Drobo only having 1Gbit, you're not going to help anything with more NICs on TrueNAS.

Also, LAGG probably doesn't help in the way you might think it does... each client can get only 1 NIC worth of bandwidth at a time.

Would it be of any benefit for what we use the server for?
For loads of relatively small files, it's going to be much better.

I may end up trying to create some VMs in the future, just for very light use.
Single pool layout - 2 VDEVs
VDEV 1: 8 x 2TB IronWolf SSDs in RAID-Z2
VDEV 2: 4 x 4TB IronWolf SSDs in RAID-Z2
Those 2 posts are not a good match... for VMs (block storage), you won't get good performance out of RAIDZ2, you need mirrors... even a separate pool of just 2 SSDs in mirror would be much better to run those on.
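
If you do go that way later, a separate two-disk mirror pool is a one-liner from the CLI; a minimal sketch (the pool name and device names are made up for illustration, and the TrueNAS UI can do the same thing):

# create a new pool from two SSDs as a single mirror VDEV for block storage
zpool create vmpool mirror /dev/ada10 /dev/ada11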
 

jbarry14

Explorer
Joined
May 23, 2022
Messages
56
You might be better not to wait... the transfer can either be restarted or can resume from where it left off depending on what you decide to do.

Remove the cache and log drives (zpool remove command... https://illumos.org/man/8/zpool), wipe the NVMe devices, then go into the pool, "Add VDEV" and select the Special VDEV type, and finally mark the datasets (or the whole pool/root dataset) with the settings for it (see the last line in Edit Options for your dataset(s)).
So I can do this now, while the transfer is going? Is example 14 what you are referring to on that page you linked? I am sorry, I am just not very familiar with all of this, but I do greatly appreciate your help. Any full instructions on this would be very helpful.

Those 2 posts are not a good match... for VMs (block storage), you won't get good performance out of RAIDZ2, you need mirrors... even a separate pool of just 2 SSDs in mirror would be much better to run those on.
If I end up wanting to create VMs, I will create a new pool in the future specifically for VM performance.

With the Drobo only having 1Gbit, you're not going to help anything with more NICs on TrueNAS.

Also, LAGG probably doesn't help in the way you might think it does... each client can get only 1 NIC worth of bandwidth at a time.
I was referring more to added bandwidth in general, not necessarily for speeding up this transfer, just for regular use. I know the Drobo I have has 2 Ethernet connections to it.
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
So I can do this now, while the transfer is going?
I can't claim to have done it during a mass-transfer event, but it does work with the pool still online, so at least in theory, yes.

Personally, I would interrupt the transfer (go back to your tmux and CTRL + C to stop it, then after messing around with the pool, start the same rsync again and it will resume with the files not already copied in full... just a bit of checking of file lists to get going again).
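
In practice that interrupt-and-resume cycle is just the following (same session name and placeholder address/paths as earlier in the thread):

# reattach to the running session and stop the copy with CTRL + C
tmux attach -t transfer
# ...rework the pool, then re-run the same command; files already copied
# in full are skipped, so it picks up roughly where it left off
rsync -auv --progress drobousername@192.168.1.3:/path/on/drobo /mnt/pool/dataset/on/truenas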

If you want to share the zpool status -v from your pool I can help you to formulate the commands, but example 14 is mostly what you need... you just won't be working with a mirror, but rather with the cache and log devices individually.

EDIT: also worth thinking about is that a special VDEV is integral to the pool (unlike the log and cache), so you won't be able to remove it later without a pool rebuild; maybe do a little reading on metadata VDEVs first to make sure you agree with me that it's a good thing.
 

jbarry14

Explorer
Joined
May 23, 2022
Messages
56
root@KJBRANDING[~]# zpool status -v
  pool: KJDATA-Z2
 state: ONLINE
config:

	NAME                                            STATE     READ WRITE CKSUM
	KJDATA-Z2                                       ONLINE       0     0     0
	  raidz2-0                                      ONLINE       0     0     0
	    gptid/4d9ba04a-d85f-11ec-b763-a0369f81fecc  ONLINE       0     0     0
	    gptid/4db3b8ea-d85f-11ec-b763-a0369f81fecc  ONLINE       0     0     0
	    gptid/4d5c35ae-d85f-11ec-b763-a0369f81fecc  ONLINE       0     0     0
	    gptid/4d42f6d5-d85f-11ec-b763-a0369f81fecc  ONLINE       0     0     0
	  raidz2-1                                      ONLINE       0     0     0
	    gptid/4d689280-d85f-11ec-b763-a0369f81fecc  ONLINE       0     0     0
	    gptid/4d804afc-d85f-11ec-b763-a0369f81fecc  ONLINE       0     0     0
	    gptid/4d8f2a94-d85f-11ec-b763-a0369f81fecc  ONLINE       0     0     0
	    gptid/4d86d3e6-d85f-11ec-b763-a0369f81fecc  ONLINE       0     0     0
	    gptid/4d9e0bcb-d85f-11ec-b763-a0369f81fecc  ONLINE       0     0     0
	    gptid/4daabf3d-d85f-11ec-b763-a0369f81fecc  ONLINE       0     0     0
	    gptid/4d620304-d85f-11ec-b763-a0369f81fecc  ONLINE       0     0     0
	    gptid/4d497072-d85f-11ec-b763-a0369f81fecc  ONLINE       0     0     0
	logs
	  gptid/4d25f652-d85f-11ec-b763-a0369f81fecc    ONLINE       0     0     0
	cache
	  gptid/4d2928b7-d85f-11ec-b763-a0369f81fecc    ONLINE       0     0     0

errors: No known data errors

  pool: boot-pool
 state: ONLINE
  scan: scrub repaired 0B in 00:00:02 with 0 errors on Sat May 21 03:45:03 2022
config:

Here is the configuration. Thank you for your help with this.

EDIT: I am not really sure of the perfect setup for our use case, just because I am really new to this. We do a lot of Photoshop and design work, so whatever setup would be best. Most files are under 1GB, and there are lots of small files.
 
Last edited:

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
Most files are under 1GB, and there are lots of small files.
OK, so it seems the use-case matches a special/metadata VDEV then.

To remove the log disk:

zpool remove KJDATA-Z2 gptid/4d25f652-d85f-11ec-b763-a0369f81fecc

and for the L2ARC (cache)

zpool remove KJDATA-Z2 gptid/4d2928b7-d85f-11ec-b763-a0369f81fecc

I would give things a minute to settle in between those 2 commands.
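
After that it's worth confirming the pool no longer lists them before you wipe the drives; a quick check:

# the "logs" and "cache" sections should now be gone from the output
zpool status -v KJDATA-Z2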
 

jbarry14

Explorer
Joined
May 23, 2022
Messages
56
OK, I stopped the transfer and removed both drives.

EDIT: I am assuming I go into the Drives tab and wipe both of those drives, then go into the pool, choose Add VDEV, add a Metadata VDEV and add both drives to it as a mirror? And then do I need to change the Metadata (special) small block size value? And once that is done, how do I resume the transfer?
 
Last edited:

NugentS

MVP
Joined
Apr 16, 2020
Messages
2,947
@sretalla What about an L2ARC, metadata only, rather than a special VDEV? It has the advantage of not being pool-critical, but it can't do small blocks, obviously.
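
For context, that variant is set per dataset with the secondarycache property rather than by a different VDEV type; a minimal sketch (the pool/dataset name is a placeholder):

# restrict the L2ARC to caching metadata only for this dataset
zfs set secondarycache=metadata poolname/dataset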
 

jbarry14

Explorer
Joined
May 23, 2022
Messages
56
@sretalla What about an L2ARC, metadata only, rather than a special VDEV? It has the advantage of not being pool-critical, but it can't do small blocks, obviously.
I do have the 2 Samsung 980 Pro 1TB NVMe drives. Would it be beneficial for me to have 1 drive for metadata and 1 drive for L2ARC?
 

NugentS

MVP
Joined
Apr 16, 2020
Messages
2,947
I do have the 2 Samsung 980 Pro 1TB NVMe drives. Would it be beneficial for me to have 1 drive for metadata and 1 drive for L2ARC?
How many users do you have? Because 256GB is a shit ton of memory for one person, not so much for 1000 heavy users.
 

jbarry14

Explorer
Joined
May 23, 2022
Messages
56
How many users do you have? Because 256GB is a shit ton of memory for one person, not so much for 1000 heavy users.
There is a team of about 5 designers, and about 3 or 4 other people who use the files from the server. I built this server to last and to grow. Eventually I would like to create some VMs on a new pool.
 

NugentS

MVP
Joined
Apr 16, 2020
Messages
2,947
And what do you use the NAS for? Use case?
Designing what?
 