Replicating VOLUME or iSCSI Block Shares in TrueNAS CORE (Same NAS Box)

mgoh

Cadet
Joined
Apr 29, 2022
Messages
2
Hello,

I’m very new to this, so I apologize in advance if I’m not using the right terminology or if this has already been answered.

I’m trying to see if there’s a way to replicate VOLUMEs (zvols) or iSCSI Block Shares so that all of them have exactly the same content.

Here’s my setup and goal:

I have tens to hundreds of Windows PCs, and each PC has a D:\ drive that’s sourced from the NAS. The content of this drive is exactly the same across all PCs, and the connection is iSCSI because that lets the OS see the network storage as a lettered drive rather than a network path (the applications that read the files require a lettered drive path).

Currently, to set up the iSCSI drives, I create a pool with multiple volumes. Since the content is identical, deduplication is used to save physical storage space. Once the volumes are created, I set them up as Block Shares (iSCSI). Then I connect a Windows PC to the first volume and copy the desired content into it, disconnect the first volume, connect the second volume, and copy the same content into that one. This process is repeated until all the volumes have identical content.
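
From what I can tell, the CLI equivalent of what I do in the web UI is roughly this (pool and volume names are placeholders for my real ones):

zfs create -o dedup=on tank/drives
zfs create -V 100G tank/drives/drive002
zfs create -V 100G tank/drives/drive003
# ...then each volume is added as an iSCSI extent in the web UI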

This is a highly inefficient method, so I’m looking into whether there’s a way in TrueNAS to replicate a volume via the command line or the web interface. For example, in the image below you can see that Drives 002 to 004 have the same content. I’d like to create Drive 005 with the same content as Drive 004 without going through the process described above.

Thank you

[Attached image: 1651279594288.png — drive list showing identical content on Drives 002–004]

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
Have you tried using a mapped drive letter and/or the subst command to trick the application into using a virtual drive pointing to a mapped folder?
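
Something along these lines on the Windows side, assuming an SMB share at \\truenas\shared_data (name is hypothetical):

:: map the share to a spare letter, then alias it to D: for the application
net use Z: \\truenas\shared_data /persistent:yes
subst D: Z:\
:: note: subst mappings don't survive a reboot by default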

Regarding the need for multiple copies: are the client PCs only reading this data, making at most minimal changes (e.g. last-access timestamps)? If not, you could find your deduplication tables growing very quickly.

This could likely be done with ZFS snapshots and mounting them as r/w (if necessary) on different extents/datasets.
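
A rough sketch from the shell, assuming Drive 004 lives on a zvol called tank/drives/drive004 (names are placeholders):

zfs snapshot tank/drives/drive004@golden
zfs clone tank/drives/drive004@golden tank/drives/drive005
# attach tank/drives/drive005 as a new iSCSI extent; the clone is writable
# and initially shares all of its blocks with the snapshot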

You'll want to keep a close eye on your deduplication table size regardless. What kind of hardware are you running here? (CPU, MBD, RAM, drive count/layout, controller(s) and network interfaces)

mgoh

Cadet
Joined
Apr 29, 2022
Messages
2
@HoneyBadger Thank you for your quick reply. I think I understand what you're suggesting.

As I understand it, while iSCSI lets users map network storage as a lettered drive, one limitation is that multiple PCs can't connect to the same iSCSI share at once; hence my current workaround of manually replicating the drive content. A network mapped folder (which I believe uses SMB), on the other hand, does allow multiple PCs to connect to the same folder, which saves the operator from having to replicate its content. So your suggestion is to share the data as a folder (over SMB?) and use subst to map that folder to a drive letter.

It sounds very promising, so I'll try that.

As for your question on the need for multiple copies: the client PCs are mostly reading the data (95% of the time), so I think your concern may not be an issue, but it's definitely something to look out for, especially the dedup table. As for the hardware, I'm looking at a Supermicro SSG-110P-NTR10 with 64GB of RAM initially, 1TB for the dedup table, 4TB of NVMe SSD, and 10GbE networking. I'm still at the POC stage; eventually the capacity will grow to 32TB, and the networking possibly to 40GbE.

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
You can try just mapping the SMB folder directly to a drive letter first rather than going through the subst trickery.
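
i.e. something like this, with a hypothetical share name:

net use D: \\truenas\shared_data /persistent:yes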

The question still stands as to whether or not the application in question can handle multiple clients working on the same shared folder. For example, does it always write new files (which will likely work if each client uses some sort of unique temporary file or filename pattern), or does it have to update existing ones (which will likely fail due to, or cause, SMB locking issues)? Ultimately you'll need to test out the behaviour with a shared folder.

If not, then you'll have to use snapshots and similar trickery to make the multiple copies, and have to sort out the deduplication challenges.
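
If it comes to that, a sketch of stamping out one writable clone per client from a single golden snapshot (all pool/volume names hypothetical):

zfs snapshot tank/drives/master@golden
for i in $(seq -w 2 10); do
  zfs clone tank/drives/master@golden "tank/drives/drive0${i}"
done
# each clone then gets its own iSCSI extent; clones share unchanged
# blocks with the snapshot via copy-on-write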