So, please heed @Ericloewe's suggestion - try out mc, which'll look a lot like Norton Commander to you, for direct server-to-server file transfer and on-screen file selection, the benefits of which I also understand and support.
Thank you.
I'll take a look at it to see whether I can use it as a file manager running directly on the TrueNAS server.
In the meantime, there's always mc.
Stupid question - does mc run in a GUI on the TrueNAS server?
but TrueNAS (and FreeNAS before it) have never been designed for console usage; they've always been intended to be managed over the network.
I understand that.
But even when I log in to, say, a QNAP NAS, I AM managing it over the network, yet operations can still be performed directly on/from/with the server itself. Otherwise, if all of the data processing were handled by the client, then really, I wouldn't need anything more powerful (in terms of the CPU on the TrueNAS server side) than an Intel Atom or an ARM processor (unless you need PCIe lanes for NVMe SSDs, in which case you might need a Threadripper and/or an Epyc).
Another example is that two of my QNAP NAS units have built-in 10 GbE connections. My desktop client does not.
And if I am managing the server from a slightly older Intel NUC (8th generation), you can't add a 10 GbE SFP+ NIC to it to perform that kind of management. Again, comparing TrueNAS against QNAP (that's what I have and therefore what I have experience with, though I'm sure other NAS vendors have similar capabilities), on QNAP I can completely bypass the client by executing inter-server data transfers rather than routing everything through a slow client.
Other than being web-based (which would be the point of it--it's always been possible to manage files using a client via a file-sharing protocol), there's no inherent reason that any of these would be necessary.
My apologies, but I would have to disagree with that.
I'm about to decommission one of my QNAP NAS units, because I actually want to move it over to TrueNAS.
But what this also means is that I am currently in the process of evacuating all of the data from said QNAP NAS unit onto my other QNAP NAS units, and some of the data on the second, bigger QNAP NAS unit has to be partially evacuated in turn onto a temporary CentOS install that I've got on the same physical hardware that's intended to run TrueNAS.
From the older, smaller QNAP NAS unit that's about to be decommissioned, I'm moving something like 5.5 million files, and if I were to set said data evacuation up as one giant task, it would take forever and then some.
Conversely, with the QNAP File Manager (which I access via the Web GUI), I am able to select, for example, 100 folders at a time and move them in batches, thereby leveraging parallelisation in the data transfer itself: when you're moving that many smaller files, the transfers can run in parallel rather than as a single, monolithic task.
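Under the hood, those batched moves amount to something like the following sketch (the mount points and the parallelism level are made up for illustration, and this assumes both NAS units are mounted locally):

```
# Move top-level folders 4 at a time instead of as one monolithic job.
# /mnt/nas1 and /mnt/nas2 are hypothetical mount points for the source
# and destination NAS units.
cd /mnt/nas1 || exit 1
printf '%s\n' */ | xargs -P 4 -I {} rsync -a --remove-source-files "{}" "/mnt/nas2/{}"
```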
This is where having a graphical file manager can come in very handy.
Suppose you have folders numbered 0-200, but random folders are missing from the middle of that range - maybe about 10% of them. If you script the transfer using a text-based/CLI shell command, the system is going to report errors for the missing folders, and if there were also genuine problems during the transfer, you now have to sift through the log to figure out which errors were due to folders simply being missing vs. which were actual problems with the data evacuation task.
With the graphical file manager, I don't have to worry about that.
In fact, with QNAP's web-based graphical file manager, because I am selecting 100 folders at a time with a "check all" box, it doesn't know and doesn't care that there are missing folders in that numbered sequence; it will just move what it sees.
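(To be fair, a script can be taught to behave the same way - here's a sketch, again with made-up paths and range, that simply skips the gaps in the numbered sequence rather than erroring on them:)

```
# Only touch folders that actually exist, so anything that does land in
# the log is a real transfer error. The range and paths are hypothetical.
for i in $(seq 0 200); do
    [ -d "/mnt/nas1/$i" ] || continue   # silently skip the missing numbers
    rsync -a "/mnt/nas1/$i/" "/mnt/nas2/$i/"
done
```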
So, there are a LOT of use cases where something like this can be handy.
My desktop client only has a single GbE NIC in it.
My QNAP NAS units have both dual GbE AND dual 10 GbE NICs in them (4 NICs total for a total of 22 GbE worth of bandwidth available).
Think about how much faster it is to move about 5.5 TB of data over 22 GbE worth of bandwidth (another server) vs. 1 GbE worth of bandwidth (client).
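For rough, back-of-the-envelope numbers (assuming ideal line-rate transfers with zero protocol overhead, which real SMB/rsync sessions won't achieve): 5.5 TB is about 44 terabits, so at 1 Gbps that's roughly 44,000 seconds, or a bit over 12 hours, whereas across an aggregate 22 Gbps it's roughly 2,000 seconds, or about half an hour.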
But security is really the issue--the file manager is almost certainly going to run as root, and that increases the risk significantly.
Yeah... I'm not sure this is necessarily a true statement either.
Again, drawing from my experience with QNAP NAS units: as long as I have created the same user on all of the NAS units (because I am not using a centralised user/login authentication server/AD), I can log in NOT as admin, but rather as myself on the individual NAS units, and the units can remotely mount each other using those same user-level credentials to pass data back and forth.
So, I'm not sure why there would be a requirement for said web-based graphical file manager to run as root if you have your user accounts already set up across any and all TrueNAS servers.
I'd be interested in finding out the rationale behind this statement.
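For illustration, the user-level cross-mount looks roughly like this (a sketch; the hostname, share name, and username are made up - root is only needed locally to perform the mount itself, while everything touching the remote NAS runs under the ordinary user account):

```
# Mount a share from the other NAS with plain user credentials -
# no admin/root login on the remote side. All names are hypothetical.
sudo mount -t cifs //nas2.local/evacuation /mnt/nas2 \
    -o username=myuser,uid=$(id -u),gid=$(id -g)
```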
For example, if NAS1 is using POSIX1E ACLs and NAS2 is using NFSv4 ACLs, then simple rsync won't be able to preserve permissions, but going through SMB protocol from NAS1 to NAS2 will basically do the magic of converting relevant ACL info. Ditto for NFSv4.
Per my original question though, it's a TrueNAS-to-TrueNAS migration, but it's a SELECT data migration rather than a pure ZFS replication / "migrate everything" mindset.
With that being said, given that it's a TrueNAS-to-TrueNAS migration, the ACL and permissions methodology/protocol should be identical between the two unless iXsystems changes that.
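Given that, something along these lines ought to handle the selective part (a sketch, not a recommendation: the hostname and dataset paths are made up, and whether -A/-X actually round-trips the ACLs depends on the rsync build and the ACL type in use on both boxes):

```
# Select-copy a single folder tree from one TrueNAS box to another,
# asking rsync to carry ACLs (-A) and extended attributes (-X) along
# with the usual -a metadata. Host and paths are hypothetical.
rsync -aAX /mnt/pool1/projects/ root@truenas2:/mnt/pool1/projects/
```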
Sidebar:
I LITERALLY installed CentOS 7.7.1908 on the same physical hardware that I was going to use for the TrueNAS server BECAUSE I am able to install the GNOME graphical desktop environment on it, so that I can manage the data migration through a graphical file manager interface.
A part of this is also because, with the dual GbE NICs on the Supermicro X7DBE motherboard, I don't have those NICs bonded together (either in the OS or at the L2 managed switch level): in GNOME, it is LITERALLY way easier and faster for me to mount the QNAP NAS that I am evacuating the data off of directly via an SMB mount that's tied to GbE NIC2.
That way, I can leverage the bandwidth that's available from BOTH GbE NICs, rather than having the system, less than intelligently, try to route all of the traffic through the primary GbE NIC.
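As I understand it, the trick is simply to give each NIC its own subnet and mount the QNAP by the address that lives on NIC2's subnet, so the ordinary connected-route lookup pins that traffic to NIC2 (a sketch; all addresses and names are made up):

```
# NIC1 at 192.168.1.10/24 carries the general traffic; NIC2 at
# 192.168.2.10/24 is dedicated to the QNAP at 192.168.2.20.
# No bonding and no switch configuration required - routing does it.
sudo mount -t cifs //192.168.2.20/evacuation /mnt/qnap \
    -o username=myuser,uid=$(id -u)
```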
So on said CentOS system, I just installed tigervnc-server, and now I am managing the system (even the graphical desktop) remotely. The console itself is running in init 3.
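For reference, the CentOS 7 incantation for that setup is roughly the following (assuming the stock repos; the display number is arbitrary):

```
# Install the VNC server, then keep the console at runlevel 3
# (multi-user.target is the systemd spelling of init 3).
sudo yum install -y tigervnc-server
sudo systemctl set-default multi-user.target
# Start a desktop session on display :1 for remote management.
vncserver :1
```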
Sidebar #2 (as to why I want to do this and/or am looking for a graphical file manager that runs natively on the TrueNAS server):
If I am going from server1 -> client -> server2, I am putting so much traffic on the client's (in my case, single) GbE NIC that it literally eats up the full GbE NIC's worth of bandwidth, just shuffling data around.
The downside to this is that, in so doing, it has also made the client mostly unusable for anything else that I need or want to do that requires the network (browse the web, watch YouTube videos, etc.), because almost 100% of the available bandwidth is dedicated to the data evacuation and migration.
If I take the client out of the middle of the equation, I now have all four systems (three QNAP NAS units plus the server that I was going to put TrueNAS on) connected together on their own switch, so they can shuffle data between each other, and my client only needs MAYBE 1 MB/s (~10 Mbps) worth of bandwidth to manage the servers' server-to-server data transfers/data evacuation tasks.
That frees up the GbE NIC on my client to do other things whilst the servers are doing theirs. It seems unintuitive to me that the solution people appear to be recommending is to inject a client into the middle of a server-to-server data transfer/selective migration - which really means that if you don't want your single GbE NIC to have all of its bandwidth sucked up by said transfer, you're going to need to deploy at least a dual-port 10 GbE NIC plus a 10 GbE switch solely for a purpose like this.
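To be fair, the CLI can also keep the client out of the data path - SSH into one server and launch the transfer from there, so only a tiny control session touches the client's NIC (a sketch; the hostnames, user, and paths are made up, and it assumes server1 can SSH to server2 directly):

```
# From the client: open a control session on server1 and have it push
# the data straight to server2 over the servers' own switch. The bulk
# traffic never transits the client's GbE NIC.
ssh myuser@server1 'rsync -a /mnt/pool/data/ myuser@server2:/mnt/pool/data/'
```

But that lands you right back in the log-sifting and selection problems described above, which is exactly where a graphical file manager earns its keep.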
This seems like such a massive oversight and, as a result, such a bummer.
Solaris can create and manage ZFS pools and it has a graphical file manager, but Solaris has its own set of problems.
TrueNAS helps manage the ZFS side, but the absence of a graphical file manager just seems like a massive oversight to me.
CentOS has the graphical file manager, but trying to deploy ZFS on Linux is not trivial.
It's no wonder why QNAP charges almost $4000 for their 12-bay QuTS hero NAS (Source: https://qnapdirect.com/products/qna...diskless-rackmount-nas?variant=31989134098483): it is literally the combination of everything that I just talked about.
Hmmm.....pity/shame.
Thank you, everybody, for your input.