Transferring data from drobo to new TrueNAS build

jbarry14

Explorer
Joined
May 23, 2022
Messages
56
And what do you use the NAS for? Use case?
Designing what?
It's mainly graphic design, with some video editing. In the future, I would like to create a few VMs for workstations, but I would create a new pool for that.
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
I do have the 2 Samsung 980 Pro 1TB NVMe drives. Would it be beneficial for me to have 1 drive for metadata and 1 drive for L2ARC?
While you "could" do that, it would be highly risky since the special VDEV is pool integral and if it fails, your whole pool would be lost (hence assuming a mirror is the least you would want for it).

Rather than the metadata-only L2ARC, I would suggest setting a tunable to stop the eviction of metadata from ARC in the first place (rendering the use of L2ARC for that a little redundant).

You create a tunable of type sysctl and set vfs.zfs.arc.meta_min to a value of something like 4294967296 (that's 4GB... you could possibly afford to go higher with your RAM size).

Also, using ARC/L2ARC for metadata does nothing to speed up the writes to it, whereas the special VDEV will absorb them all, so it will avoid slower writing of that data to the pool disks, even if that data then doesn't get read back out all the time because it's in ARC.
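
In case it helps to see it spelled out, you can check the current value (and, I believe, try it on the fly) from the shell; the 4294967296 figure is just the 4GB example (4 x 1024^3 = 4294967296):

sysctl vfs.zfs.arc.meta_min
sysctl vfs.zfs.arc.meta_min=4294967296

The sysctl-type tunable you create in the web UI is what makes the setting persist across reboots.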
 
Last edited:

jbarry14

Explorer
Joined
May 23, 2022
Messages
56
While you "could" do that, it would be highly risky since the special VDEV is pool integral and if it fails, your whole pool would be lost (hence assuming a mirror is the least you would want for it).

I would suggest rather than the Metadata only L2ARC that you should set a tunable to stop the eviction of metadata from ARC in the first place (rendering the use of L2ARC for that a little redundant).

You create a tunable of type sysctl and doe the setting vfs.zfs.arc.meta_min with a value of something like 4294967298 (that's 4GB... you could possibly afford to go higher with your RAM size).

Also, using ARC/L2ARC for metadata does nothing to speed the writes to it, whereas the special VDEV will absorb them all, so will avoid slower writing of that data to the pool disks even if it then doesn't get read back out all the time since it's in ARC.
OK, so I created the tunable with what you suggested. What value would you suggest I set for the metadata small block size in the pool dataset options?
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
What value would you suggest I set for the metadata small block size in the pool dataset options?
I don't see any harm in going to the maximum accepted size (1M) for it, since you want to attract more traffic toward it where possible.
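
For reference, that dataset option corresponds to the ZFS special_small_blocks property, so the shell equivalent would be something along these lines (the dataset name here is just an example, substitute your own):

zfs set special_small_blocks=1M KJDATA-Z2/yourdataset
zfs get special_small_blocks KJDATA-Z2/yourdataset

Blocks at or below that size then get allocated on the special VDEV rather than the data VDEVs.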
 

jbarry14

Explorer
Joined
May 23, 2022
Messages
56
I don't see any harm in going to the maximum accepted size (1M) for it, since you want to attract more traffic toward it where possible.
So after these changes, the speed of the transfer seems to be the same, maxing out at about 10.5MB/s. I am assuming I am being bottlenecked by the Drobo?
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
I am assuming I am being bottlenecked by the Drobo?
That's certainly a possibility, particularly if we're talking about loads of small files.

Just let it settle for a bit and see where it goes.
 

jbarry14

Explorer
Joined
May 23, 2022
Messages
56
That's certainly a possibility, particularly if we're talking about loads of small files.

Just let it settle for a bit and see where it goes.
Sounds good. I appreciate your help setting all of this up; I am hoping I have set it all up correctly.
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
I appreciate your help setting all of this up; I am hoping I have set it all up correctly.
Happy to help (which is what I hope I have done).

Have a look (and share with us if you want) at the output from:

zpool status -v

zpool list -v
 

jbarry14

Explorer
Joined
May 23, 2022
Messages
56
You have been very helpful. Here are the results from those commands.

root@KJBRANDING[~]# zpool status -v
  pool: KJDATA-Z2
 state: ONLINE
config:

        NAME                                            STATE     READ WRITE CKSUM
        KJDATA-Z2                                       ONLINE       0     0     0
          raidz2-0                                      ONLINE       0     0     0
            gptid/4d9ba04a-d85f-11ec-b763-a0369f81fecc  ONLINE       0     0     0
            gptid/4db3b8ea-d85f-11ec-b763-a0369f81fecc  ONLINE       0     0     0
            gptid/4d5c35ae-d85f-11ec-b763-a0369f81fecc  ONLINE       0     0     0
            gptid/4d42f6d5-d85f-11ec-b763-a0369f81fecc  ONLINE       0     0     0
          raidz2-1                                      ONLINE       0     0     0
            gptid/4d689280-d85f-11ec-b763-a0369f81fecc  ONLINE       0     0     0
            gptid/4d804afc-d85f-11ec-b763-a0369f81fecc  ONLINE       0     0     0
            gptid/4d8f2a94-d85f-11ec-b763-a0369f81fecc  ONLINE       0     0     0
            gptid/4d86d3e6-d85f-11ec-b763-a0369f81fecc  ONLINE       0     0     0
            gptid/4d9e0bcb-d85f-11ec-b763-a0369f81fecc  ONLINE       0     0     0
            gptid/4daabf3d-d85f-11ec-b763-a0369f81fecc  ONLINE       0     0     0
            gptid/4d620304-d85f-11ec-b763-a0369f81fecc  ONLINE       0     0     0
            gptid/4d497072-d85f-11ec-b763-a0369f81fecc  ONLINE       0     0     0
        special
          mirror-3                                      ONLINE       0     0     0
            gptid/0ad07767-dc4a-11ec-ad8b-a0369f81fecc  ONLINE       0     0     0
            gptid/0acbf913-dc4a-11ec-ad8b-a0369f81fecc  ONLINE       0     0     0

errors: No known data errors

  pool: boot-pool
 state: ONLINE
  scan: scrub repaired 0B in 00:00:02 with 0 errors on Sat May 21 03:45:03 2022

        boot-pool   ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            nvd0p2  ONLINE       0     0     0
            nvd1p2  ONLINE       0     0     0

errors: No known data errors
root@KJBRANDING[~]# zpool list -v
NAME                                            SIZE   ALLOC  FREE   CKPOINT  EXPANDSZ  FRAG  CAP    DEDUP  HEALTH  ALTROOT
KJDATA-Z2                                       30.0T  1.62T  28.4T  -        -         0%    5%     1.00x  ONLINE  /mnt
  raidz2-0                                      14.5T  658G   13.9T  -        -         0%    4.41%  -      ONLINE
    gptid/4d9ba04a-d85f-11ec-b763-a0369f81fecc  -      -      -      -        -         -     -      -      ONLINE
    gptid/4db3b8ea-d85f-11ec-b763-a0369f81fecc  -      -      -      -        -         -     -      -      ONLINE
    gptid/4d5c35ae-d85f-11ec-b763-a0369f81fecc  -      -      -      -        -         -     -      -      ONLINE
    gptid/4d42f6d5-d85f-11ec-b763-a0369f81fecc  -      -      -      -        -         -     -      -      ONLINE
  raidz2-1                                      14.5T  980G   13.6T  -        -         0%    6.58%  -      ONLINE
    gptid/4d689280-d85f-11ec-b763-a0369f81fecc  -      -      -      -        -         -     -      -      ONLINE
    gptid/4d804afc-d85f-11ec-b763-a0369f81fecc  -      -      -      -        -         -     -      -      ONLINE
    gptid/4d8f2a94-d85f-11ec-b763-a0369f81fecc  -      -      -      -        -         -     -      -      ONLINE
    gptid/4d86d3e6-d85f-11ec-b763-a0369f81fecc  -      -      -      -        -         -     -      -      ONLINE
    gptid/4d9e0bcb-d85f-11ec-b763-a0369f81fecc  -      -      -      -        -         -     -      -      ONLINE
    gptid/4daabf3d-d85f-11ec-b763-a0369f81fecc  -      -      -      -        -         -     -      -      ONLINE
    gptid/4d620304-d85f-11ec-b763-a0369f81fecc  -      -      -      -        -         -     -      -      ONLINE
    gptid/4d497072-d85f-11ec-b763-a0369f81fecc  -      -      -      -        -         -     -      -      ONLINE
special                                         -      -      -      -        -         -     -      -      -
  mirror-3                                      928G   16.1G  912G   -        -         0%    1.73%  -      ONLINE
    gptid/0ad07767-dc4a-11ec-ad8b-a0369f81fecc  -      -      -      -        -         -     -      -      ONLINE
    gptid/0acbf913-dc4a-11ec-ad8b-a0369f81fecc  -      -      -      -        -         -     -      -      ONLINE
boot-pool                                       232G   1.29G  231G   -        -         0%    0%
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
So you can see your special VDEV has already had 16.1G of data written to it, so it's doing its job.
 

jbarry14

Explorer
Joined
May 23, 2022
Messages
56
I'm glad it is working. Now there's only about 7TB of data left to move. I am hoping it is just the Drobo that is bottlenecking this transfer speed; it won't go above 10.5MB/s.
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
I am hoping it is just the Drobo that is bottlenecking this transfer speed; it won't go above 10.5MB/s.
Now's probably not the best time to be messing around to identify bottlenecks (running things like iperf3 and fio to work out if it's network or disks on the TrueNAS side), but we can certainly look at that if it's still poor following the transfer.
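
When you do get to that point, the rough idea would be something like the commands below (the IP and test path are placeholders, adjust to your setup):

iperf3 -s                                                                         # run on the TrueNAS box
iperf3 -c <truenas-ip>                                                            # run from a client machine
fio --name=seqread --rw=read --bs=1M --size=4G --directory=/mnt/KJDATA-Z2/testdir # rough local pool read test

The first pair tells you what the network alone can do, and the fio run gives a rough idea of what the pool can do locally, which narrows down where the bottleneck is.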
 

jbarry14

Explorer
Joined
May 23, 2022
Messages
56
OK, I will just let this run and finish. I am assuming this will take a few days to complete, but I am thinking the speeds from this new server will be just fine for our use case. The only other thing I am wondering is whether I can set up multiple Ethernet cables to help increase bandwidth or speed. I have four ports, two 10GbE and two 1GbE. I do have two Ethernet cables connected to the Drobo.
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
The only other thing I am wondering is whether I can set up multiple Ethernet cables to help increase bandwidth or speed. I have four ports, two 10GbE and two 1GbE. I do have two Ethernet cables connected to the Drobo.
I think you mentioned having a few users involved, so having LAGG would potentially improve the way that different users can access the files.

Generally speaking, a single 10Gbit NIC may already be more than what's needed to move the bottleneck to another component like the pool disks (you are, after all, running just a pair of RAIDZ2 VDEVs, so the IOPS of just 2 HDDs would possibly be one place to expect issues... we improved that a bit with the special VDEV, but it won't be a magic solution).

If you have a proper 10Gbit network setup, switching to that would be useful for all the other components (I think the Drobo is only 1Gbit, so nothing to help there; anyway, we're seeing under 10% of 1Gbit from it now).
 

jbarry14

Explorer
Joined
May 23, 2022
Messages
56
I think you mentioned having a few users involved, so having LAGG would potentially improve the way that different users can access the files.

Generally speaking, a single 10Gbit NIC may already be more than what's needed to move the bottleneck to another component like the pool disks (you are, after all, running just a pair of RAIDZ2 VDEVs, so the IOPS of just 2 HDDs would possibly be one place to expect issues... we improved that a bit with the special VDEV, but it won't be a magic solution).

If you have a proper 10Gbit network setup, switching to that would be useful for all the other components (I think the Drobo is only 1Gbit, so nothing to help there; anyway, we're seeing under 10% of 1Gbit from it now).
Is this a normal transfer speed for a 1Gbit setup? Or is there possibly something I can do on the Drobo end to increase speed?
 

neofusion

Contributor
Joined
Apr 2, 2022
Messages
159
If you have a 1GbE network then 10 MBytes/s is slow. The theoretical max speed is around 110 MBytes/s but in practice 70 - 80 MBytes/s is very achievable after accounting for things like protocol overhead.
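
(For the arithmetic: 1 Gbit/s divided by 8 is 125 MBytes/s raw, and Ethernet/TCP/SMB framing eats a bit over 10% of that, which is roughly where the ~110 MBytes/s ceiling comes from.)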

I've found Drobos to be slow. I don't know exactly which model you have, but there are reports that even on a model equipped with a 1GbE jack it's common to see speeds that are a fraction of what is typical from other appliances. I suppose you could double-check what speed your Drobo's and TrueNAS's network adapters have negotiated; hopefully you'll see 1000baseT there and not something like 100baseTX.
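
On the TrueNAS side that's a quick check from the shell; the interface name below (igb0) is just an example, yours may differ:

ifconfig igb0 | grep media

You're looking for a line along the lines of "media: Ethernet autoselect (1000baseT <full-duplex>)".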

Having said that, since you're transferring many small files, 10 MBytes/s might just be what you can get.
 

jbarry14

Explorer
Joined
May 23, 2022
Messages
56
If you have a 1GbE network then 10 MBytes/s is slow. The theoretical max speed is around 110 MBytes/s but in practice 70 - 80 MBytes/s is very achievable after accounting for things like protocol overhead.

I've found Drobos to be slow. I don't know exactly which model you have, but there are reports that even on a model equipped with a 1GbE jack it's common to see speeds that are a fraction of what is typical from other appliances. I suppose you could double-check what speed your Drobo's and TrueNAS's network adapters have negotiated; hopefully you'll see 1000baseT there and not something like 100baseTX.

Having said that, since you're transferring many small files, 10 MBytes/s might just be what you can get.
Yes, I have been reading a lot about the problems with Drobo. The reason I am switching is that we have had nothing but issues with ours. The model I have is a B810N. I am trying to transfer about 8TB of pretty small files. I did double-check both connection speeds and they are both at 1000baseT.
 