cobia-cobia replication w/ default certs - Unable to connect to remote system: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self-signed

gwjunk

Dabbler
Joined
Aug 30, 2023
Messages
18
I performed a fresh install of TrueNAS Cobia 23.10.0.1 on my secondary server and imported my zpool. The primary server is also on 23.10.0.1. My existing replication tasks failed when triggered. I went to create a new SSH connection and keypair from my primary server pointing to the secondary server. I have done this many times in the past. This time, it fails to create the connection with an invalid certificate error. The default certs from iXsystems are on both servers.
Error: [EFAULT] Unable to connect to remote system: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self-signed certificate (_ssl.c:992)

Full error log:
Error: Traceback (most recent call last):
File "/usr/lib/python3/dist-packages/middlewared/plugins/keychain.py", line 590, in remote_ssh_semiautomatic_setup
client = Client(os.path.join(re.sub("^http", "ws", data["url"]), "websocket"))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3/dist-packages/middlewared/client/client.py", line 289, in __init__
self._ws.connect()
File "/usr/lib/python3/dist-packages/middlewared/client/client.py", line 72, in connect
self.socket = connect(self.url, sockopt, proxy_info(), None)[0]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3/dist-packages/websocket/_http.py", line 136, in connect
sock = _ssl_socket(sock, options.sslopt, hostname)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3/dist-packages/websocket/_http.py", line 271, in _ssl_socket
sock = _wrap_sni_socket(sock, sslopt, hostname, check_hostname)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3/dist-packages/websocket/_http.py", line 247, in _wrap_sni_socket
return context.wrap_socket(
^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/ssl.py", line 517, in wrap_socket
return self.sslsocket_class._create(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/ssl.py", line 1075, in _create
self.do_handshake()
File "/usr/lib/python3.11/ssl.py", line 1346, in do_handshake
self._sslobj.do_handshake()
ssl.SSLCertVerificationError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self-signed certificate (_ssl.c:992)

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "/usr/lib/python3/dist-packages/middlewared/main.py", line 201, in call_method
result = await self.middleware._call(message['method'], serviceobj, methodobj, params, app=self)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1341, in _call
return await methodobj(*prepared_call.args)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3/dist-packages/middlewared/schema/processor.py", line 177, in nf
return await func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3/dist-packages/middlewared/schema/processor.py", line 44, in nf
res = await f(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3/dist-packages/middlewared/plugins/keychain_/ssh_connections.py", line 97, in setup_ssh_connection
resp = await self.middleware.call(
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1398, in call
return await self._call(
^^^^^^^^^^^^^^^^^
File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1352, in _call
return await self.run_in_executor(prepared_call.executor, methodobj, *prepared_call.args)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1251, in run_in_executor
return await loop.run_in_executor(pool, functools.partial(method, *args, **kwargs))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/concurrent/futures/thread.py", line 58, in run
result = self.fn(*self.args, **self.kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3/dist-packages/middlewared/schema/processor.py", line 181, in nf
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3/dist-packages/middlewared/schema/processor.py", line 50, in nf
res = f(*args, **kwargs)
^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3/dist-packages/middlewared/plugins/keychain.py", line 592, in remote_ssh_semiautomatic_setup
raise CallError(f"Unable to connect to remote system: {e}")
middlewared.service_exception.CallError: [EFAULT] Unable to connect to remote system: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self-signed certificate (_ssl.c:992)
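For anyone who wants to see exactly what the middleware is tripping over, here is a minimal Python sketch (not TrueNAS code) that fetches the remote GUI's certificate, prints its fingerprint, and then attempts the same kind of verifying handshake the websocket client performs. The host and port are placeholders for the secondary server.

import hashlib
import socket
import ssl

REMOTE_HOST = "192.168.1.50"  # placeholder: the secondary server's address
REMOTE_PORT = 443             # placeholder: the web UI / websocket port

# Fetch the certificate without verifying it, so we can at least look at it.
pem = ssl.get_server_certificate((REMOTE_HOST, REMOTE_PORT))
der = ssl.PEM_cert_to_DER_cert(pem)
print("SHA-256 fingerprint:", hashlib.sha256(der).hexdigest())

# Now try a verifying handshake against the default trust store. With a stock
# self-signed certificate this fails the same way as the traceback above.
ctx = ssl.create_default_context()
try:
    with socket.create_connection((REMOTE_HOST, REMOTE_PORT)) as raw:
        with ctx.wrap_socket(raw, server_hostname=REMOTE_HOST):
            print("handshake OK (certificate is trusted)")
except ssl.SSLCertVerificationError as err:
    print("verification failed:", err.verify_message)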
 

gwjunk

Dabbler
Joined
Aug 30, 2023
Messages
18
I replicated the issue in both directions. Any time I try to create an SSH connection in Backup Credentials, self-signed certificates are rejected in Cobia. Replication completes if I allow http connections, but now they are unencrypted transfers.
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
Replication completes if I allow http connections, but now they are unencrypted transfers.
Unencrypted transfer only of the SSH keys. The actual replication takes place over an SSH connection, which is still encrypted.
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
Let me check on this - while replication is best done over a trusted/closed network, initial setup should probably at least have an option to present the thumbprint and accept a self-signed SSL certificate.
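For illustration only, here is a rough Python sketch of what "present the thumbprint and accept it" could look like: skip chain verification, grab the peer certificate, and require it to match a fingerprint the user has already confirmed. This is an assumption about how such a flow might work, not how the middleware actually does or will do it, and the host and pinned value are placeholders.

import hashlib
import socket
import ssl

REMOTE_HOST = "192.168.1.50"  # placeholder secondary server
REMOTE_PORT = 443
# Fingerprint the user confirmed out of band (e.g. read off the remote console).
PINNED_SHA256 = "expected-sha256-fingerprint-goes-here"

# Disable chain verification so the self-signed cert can be retrieved at all.
ctx = ssl.create_default_context()
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE

with socket.create_connection((REMOTE_HOST, REMOTE_PORT)) as raw:
    with ctx.wrap_socket(raw, server_hostname=REMOTE_HOST) as tls:
        der = tls.getpeercert(binary_form=True)

fingerprint = hashlib.sha256(der).hexdigest()
if fingerprint != PINNED_SHA256:
    raise SystemExit(f"certificate fingerprint mismatch: {fingerprint}")
print("fingerprint matches the one the user accepted; proceed with setup")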
 

amp88

Explorer
Joined
May 23, 2019
Messages
56
Hi, I just ran into the exact same error on a new install of TrueNAS-SCALE-23.10.0.1, which stymied a migration. I was wondering if there's any update on this issue. Thanks.
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
Hi, I just ran into the exact same error on a new install of TrueNAS-SCALE-23.10.0.1, which stymied a migration. I was wondering if there's any update on this issue. Thanks.
Hey @amp88

We do have a Jira ticket in for some certificate work that appears related, but it's co-mingled with some other issues so it isn't public; my apologies.

Let me see if I can get anything more definitive. In the interim, you can still manually configure and exchange keys to build the SSH connection for replication, although I do understand that is more complex.
 

amp88

Explorer
Joined
May 23, 2019
Messages
56
Hey @amp88

We do have a Jira ticket in for some certificate work that appears related, but it's co-mingled with some other issues so it isn't public; my apologies.

Let me see if I can get anything more definitive. In the interim, you can still manually configure and exchange keys to build the SSH connection for replication, although I do understand that is more complex.
OK, thanks for the information.
 

kev89

Cadet
Joined
Jun 27, 2018
Messages
1
Hi, I had the same issue after upgrading. I managed to create the replication task by doing the following (a rough command-line equivalent is sketched after the steps).

1. On the source server go to Credentials > Backup Credentials and add an SSH Keypair.
2. Copy the public key.
3. On the destination server go to Credentials > Local Users and edit the root user. Paste the public key into the Authorized Keys section.
4. Go back to the source server and add an SSH Connection, selecting the manual method. Select the private key that was created and then click on "Discover Remote Host Key".
5. Now you can go and create your replication task to your remote server.
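If it helps, here is a rough command-line equivalent of those steps, sketched in Python with subprocess. The destination address and key path are placeholders, and it assumes you can run ssh-keygen and ssh-keyscan from a shell on the source server instead of using the GUI buttons.

import subprocess

DEST_HOST = "192.168.1.50"          # placeholder: destination server IP
KEY_PATH = "/root/replication_key"  # placeholder: where to keep the private key

# Steps 1-2: generate a keypair and print the public key, which gets pasted
# into the destination root user's Authorized Keys field.
subprocess.run(["ssh-keygen", "-t", "ed25519", "-N", "", "-f", KEY_PATH], check=True)
print(open(KEY_PATH + ".pub").read())

# Step 4: "Discover Remote Host Key" is essentially ssh-keyscan (see the
# later traceback in this thread); note it wants a bare hostname or IP.
host_keys = subprocess.run(
    ["ssh-keyscan", DEST_HOST],
    capture_output=True, text=True, check=True,
).stdout
print(host_keys)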

Regards,
 

ThatOneUnit

Cadet
Joined
Dec 28, 2023
Messages
1
Hmmm... on my two boxes, both running TrueNAS SCALE Cobia 23.10.1, I cannot get an SSH connection to work per the typical documentation or @kev89's instructions. I also have the same starting issue creating replication tasks using the typical GUI wizard (SSL certs not being accepted because they are self-signed).

When trying to manually create the SSH connection, I get the error below while trying to "Discover Remote Host Key." It happens on both a DHCP'd network interface and a direct-connected static network interface (connecting the two boxes). Both network interfaces are genuine Intel X520-DA2 NICs. This was previously working from Bluefin to Core.

Has anyone else run into difficulties creating SSH connections in Cobia 23.10.1?
Did I miss an ongoing bug report for this?

Error------------------------------------------------------------------------------------
[EFAULT] ssh-keyscan failed: getaddrinfo https://xxx.xxx.x.x: Name or service not known getaddrinfo https://xxx.xxx.x.x: Name or service not known getaddrinfo https://xxx.xxx.x.x: Name or service not known getaddrinfo https://xxx.xxx.x.x: Name or service not known getaddrinfo https://xxx.xxx.x.x: Name or service not known



Error: Traceback (most recent call last):
File "/usr/lib/python3/dist-packages/middlewared/main.py", line 201, in call_method
result = await self.middleware._call(message['method'], serviceobj, methodobj, params, app=self)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1342, in _call
return await methodobj(*prepared_call.args)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3/dist-packages/middlewared/schema/processor.py", line 177, in nf
return await func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3/dist-packages/middlewared/schema/processor.py", line 44, in nf
res = await f(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3/dist-packages/middlewared/plugins/keychain.py", line 553, in remote_ssh_host_key_scan
raise CallError(f"ssh-keyscan failed: {proc.stdout + proc.stderr}")
middlewared.service_exception.CallError: [EFAULT] ssh-keyscan failed: getaddrinfo https://xxx.xxx.x.x: Name or service not known
getaddrinfo https://xxx.xxx.x.x: Name or service not known
getaddrinfo https://xxx.xxx.x.x: Name or service not known
getaddrinfo https://xxx.xxx.x.x: Name or service not known
getaddrinfo https://xxx.xxx.x.x: Name or service not known

End of Error------------------------------------------------------------------------------------
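Judging from the "getaddrinfo https://xxx.xxx.x.x" lines, it may be that the Host field contained the full https:// URL; ssh-keyscan expects a bare hostname or IP and tries to resolve the whole string as a hostname. A minimal Python sketch of the difference, with a placeholder address:

import subprocess

HOST = "192.168.1.50"  # placeholder for the destination address

# What "Discover Remote Host Key" boils down to, per the traceback above
# (middlewared shells out to ssh-keyscan). A bare IP or hostname works:
ok = subprocess.run(["ssh-keyscan", HOST], capture_output=True, text=True)
print(ok.stdout or ok.stderr)

# Handing it a URL reproduces the "Name or service not known" failure,
# because ssh-keyscan cannot resolve "https://..." as a hostname:
bad = subprocess.run(["ssh-keyscan", "https://" + HOST], capture_output=True, text=True)
print(bad.stderr)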
 

PhilD13

Patron
Joined
Sep 18, 2020
Messages
203
Yes. I am having the same issue with the default self-signed certificate not being accepted, which prevents creating a replication task. I was able to get an Rsync task to work, so that may be an option for you for now. Based on experience, my Rsync task still has at least 2-4 days to go to transfer the data (it won't display progress), so I really don't wish to mess with keys or SSH at this point in the process. I can revisit the task and docs when the transfer is complete and maybe post some error messages.
 
Joined
Jan 4, 2024
Messages
1
Hi, I had the same issue after upgrading. I managed to create the replication task by doing the following.

1. On the source server go to Credentials > Backup Credentials and add an SSH Keypair.
2. Copy the public key.
3. On the destination server go to Credentials > Local Users and edit the root user. Paste the public key into the Authorized Keys section.
4. Go back to the source server and add an SSH Connection, selecting the manual method. Select the private key that was created and then click on "Discover Remote Host Key".
5. Now you can go and create your replication task to your remote server.

Regards,
Just wanted to add some feedback. Going from Core to SCALE, @kev89's post works just fine.

Generate keys under: System -> SSH Keypairs
Create connection under: System -> SSH Connections

Thanks @kev89
 

PhilD13

Patron
Joined
Sep 18, 2020
Messages
203
I can generate the SSH keypair and connection just fine for the two systems (Bluefin sys1 and Cobia sys2), and in fact I am actively using that SSH connection right now between both TrueNAS servers with Rsync, though it is glacially slow. That's the only way connecting the two machines will work. The SSH connection does not work at all if a replication task is created and used instead of an Rsync task; the replication task is where the errors are being generated.
The convoluted workaround settings for the replication task, which I found in the documentation only under advanced replication, say the issue comes from upgrades from systems that used root (Bluefin and earlier) and the eventual forced change to the admin user and removal of root. I have tried following those workaround instructions, but they also don't seem to work at this point. Changing from root to admin appears to have caused quite a number of issues for lots of things. I also could not find a way to avoid using SSH between the two systems.

When I tried core a year or so ago everything was fine and when I tried older Scale versions up until June everything was fine.
 

elorimer

Contributor
Joined
Aug 26, 2019
Messages
194
This is somewhat related, and not exactly a problem, but I assume it's OK. I migrated my main system and my backup system to Cobia. I followed the steps to create a new admin user and disabled root password access. My existing keypair based on the root login worked fine for replicating a number of datasets. This is over a local network.

I then thought I should change the keypair to the new admin user instead of root. The semi-automatic method generated a new keypair, but any replication failed with an authentication error. I generated a new keypair for the admin user but the only keypair that would work was for the root user, not the admin user. (In other words, login as admin but be the root remote user.) I assume this is right because this is how the documentation describes the manual method, but this seems at odds with moving away from the root user.

I then reconstructed my several replication tasks. Each one gave me a pop up warning that there were no snapshots in common and a full replication would be required. However, when I ran the tasks, they each kept all the original snapshots on the remote system and only did the incremental. (Because of the size, a full replication takes days, but an incremental takes just a few seconds.)
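For what it's worth, a quick way to check whether the two sides really do share snapshots (which is what lets a task stay incremental) is to compare the snapshot lists on each end. A rough Python sketch, with placeholder dataset names and remote address:

import subprocess

LOCAL_DS = "tank/data"        # placeholder source dataset
REMOTE = "root@192.168.1.50"  # placeholder backup server
REMOTE_DS = "backup/data"     # placeholder destination dataset

def snapshots(cmd):
    out = subprocess.run(cmd, capture_output=True, text=True, check=True).stdout
    # "zfs list -H -t snapshot -o name" prints one "dataset@snap" per line
    return {line.split("@", 1)[1] for line in out.splitlines() if "@" in line}

local = snapshots(["zfs", "list", "-H", "-t", "snapshot", "-o", "name", "-r", LOCAL_DS])
remote = snapshots(["ssh", REMOTE, f"zfs list -H -t snapshot -o name -r {REMOTE_DS}"])

common = local & remote
if common:
    print(f"{len(common)} snapshots in common; incremental replication is possible")
else:
    print("no snapshots in common; a full replication would be required")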

So there are some things that were odd to me but not the problems described here some have faced.
 

PhilD13

Patron
Joined
Sep 18, 2020
Messages
203
I won't have time next week to set up a new replication task and fiddle with various key combinations and user settings (root/root, root/admin, admin/root, admin/admin, user/user, or whatever). I got tired of trying to make a replication task work between the two systems and wanted the data on the second server for safety. The Rsync-over-SSH task is working between the two servers, though slow as molasses, and I can find nothing wrong. I estimate the Rsync will finish Sunday or Monday, and I don't want to do anything that might make it fail or stop and force me to start over.

The QNAP servers these two TrueNAS servers are replacing kind of forced my hand: the primary backup failed in July with a bad motherboard, and the secondary backup failed at the end of December with a backplane failure ("QNAP works until it don't" syndrome) that took 4 of 6 disks (one RAID array) with it. The data on that server was lost, so it became critical to replace it immediately with what I had on hand and quit slowly working things out with SCALE. I had intended to just replace the one primary server, then set up another TrueNAS server, work all the bugs out of the two, copy files over, retire the second QNAP from secondary backup, and let it be the third backup storage server.
 

elorimer

Contributor
Joined
Aug 26, 2019
Messages
194
Perhaps someone who knows about this will weigh in, but I thought Rsync and replication were different. Rsync has to examine each file on both sides and think a bit about it, while replication just looks at the snapshot and thus the changed blocks and so can be lightning fast. And I didn't think that you could do rsync and then change to replication without starting over from scratch, although I am most definitely not sure about that.
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
Rsync has to examine each file on both sides and think a bit about it, while replication just looks at the snapshot and thus the changed blocks and so can be lightning fast.
This is correct.
I didn't think that you could do rsync and then change to replication without starting over from scratch
This is also correct.
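For anyone following along, the practical difference boils down to something like the sketch below. Pool and dataset names are placeholders, and the real replication engine builds and manages these commands for you; this is only meant to show why rsync scales with file count while ZFS replication scales with changed blocks.

import subprocess

REMOTE = "root@192.168.1.50"  # placeholder backup server

# Rsync walks the file tree on both sides and compares files one by one:
subprocess.run(
    ["rsync", "-a", "/mnt/tank/data/", f"{REMOTE}:/mnt/backup/data/"],
    check=True,
)

# ZFS replication sends only the blocks that changed between two snapshots,
# which is why an incremental run can finish in seconds:
subprocess.run(
    "zfs send -i tank/data@snap1 tank/data@snap2 | "
    f"ssh {REMOTE} zfs recv backup/data",
    shell=True, check=True,
)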
 

PhilD13

Patron
Joined
Sep 18, 2020
Messages
203
I tried for several days, reading and making attempts at a replication from Bluefin to Cobia, without success. I made a couple of test Rsync tasks between the same machines and they were successful, where no replication task configuration was ever able to even connect. At that point the only way forward was to forget about replication and do the transfer with Rsync, which locks me into Rsync. That was my choice and I can live with it. The original intent was to use replication between the two like servers, not Rsync, and the only question I originally had on replication was whether it works the way I thought (initial data transfer, then snapshots), which was answered in another thread.

The trouble I and others appear to be having is with replication, and it appears to be a permission issue between the two servers that so far shows up only when attempting a replication task (which also wants keys and SSH set up). So it has nothing to do with blocks or files as the actual method of transfer, just permissions or the lack thereof. The issues appear to be a result of the recent change from the root user to an admin user for tasks that were previously done by root, as evidenced by the added information in the advanced replication docs for SCALE Cobia outlining that the admin user needs additional options and permissions checked and added for replication to work, which are not defaults if the server was Bluefin or earlier, previously used root, and was updated to use admin. My particular Bluefin server has been updated 3 times since installation and rests at the latest version of Bluefin, which means it went through the root-to-admin change. The other server is a new install using the latest released Cobia. I fail to understand how one type of task can connect properly while another cannot when they are basically using the same connection methods.
 

GatoPat

Cadet
Joined
Feb 18, 2024
Messages
1
I performed a fresh install of TrueNAS Cobia 23.10.0.1 on my secondary server and imported my zpool. The primary server is also on 23.10.0.1. My existing replication tasks failed when triggered. I went to create a new SSH connection and keypair from my primary server pointing to the secondary server. I have done this many times in the past. This time, it fails to create the connection with an invalid certificate error. The default certs from iXsystems are on both servers.
Error: [EFAULT] Unable to connect to remote system: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self-signed certificate (_ssl.c:992)

Were you able to get any updates or a solution to this issue? I am having the same issue on TrueNAS-SCALE-23.10.1.3.
 
Joined
Oct 1, 2021
Messages
2
I tried for several days, reading and making attempts at a replication from Bluefin to Cobia, without success. I made a couple of test Rsync tasks between the same machines and they were successful, where no replication task configuration was ever able to even connect. At that point the only way forward was to forget about replication and do the transfer with Rsync, which locks me into Rsync. That was my choice and I can live with it. The original intent was to use replication between the two like servers, not Rsync, and the only question I originally had on replication was whether it works the way I thought (initial data transfer, then snapshots), which was answered in another thread.

The trouble I and others appear to be having is with replication, and it appears to be a permission issue between the two servers that so far shows up only when attempting a replication task (which also wants keys and SSH set up). So it has nothing to do with blocks or files as the actual method of transfer, just permissions or the lack thereof. The issues appear to be a result of the recent change from the root user to an admin user for tasks that were previously done by root, as evidenced by the added information in the advanced replication docs for SCALE Cobia outlining that the admin user needs additional options and permissions checked and added for replication to work, which are not defaults if the server was Bluefin or earlier, previously used root, and was updated to use admin. My particular Bluefin server has been updated 3 times since installation and rests at the latest version of Bluefin, which means it went through the root-to-admin change. The other server is a new install using the latest released Cobia. I fail to understand how one type of task can connect properly while another cannot when they are basically using the same connection methods.
So much this.

I don't understand why this is being forced and mandated when an admin should be able to decide what they want to allow or not. I am also banging my head against a wall because of this change. I will update if I find something that works, but so far my latest Bluefin that PULLs from Cobia is still not working, even after making a new admin user, allowing sudo commands, etc., all per the documentation.
 