Need Help: Disks Not Showing in TrueNAS SCALE GUI But Visible in CLI

atom5ive

Dabbler
Joined
Sep 11, 2023
Messages
17
Hello TrueNAS Community,

I'm currently setting up a temporary remote backup system for VM image replication and have encountered an issue where my drives are visible in the CLI but not in the TrueNAS SCALE GUI. This setup is not typical and is intended for replicating VM images to a remote location for backup purposes. Below are the details of my setup and the issue I'm facing:

Server Details:

  • Model: iKoolcore R2 server i3-N300
  • Memory: 16GB
  • Storage: 128GB (OS Installed on this)
Drives:

  • HDDs: 3 x Seagate IronWolf ST12000VN0008 12TB 3.5" Internal SATA HDD
  • SSD: 1 x Samsung Electronics 870 EVO 2TB 2.5" SATA III Internal SSD (MZ-77E2T0B/AM)
Enclosure:

  • Model: Yottamaster 4 Bay Hard Drive Enclosure (PS400U3), Aluminum 4 Bay SATA 2.5/3.5 Inch External HDD Enclosure SATA3.0, Support 64TB (4 x16TB), Non-RAID Direct Attached Storage (DAS) with Fan Data Storage
This setup is connected to my main server, a Dell R730XD, and is intended to hold VM image backups.

Issue: The TrueNAS SCALE system recognizes all drives when using the lsblk command, but these drives do not appear in the TrueNAS SCALE GUI. Here's the output of lsblk for reference:

NAME          MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINTS
sda             8:0    0   1.8T  0 disk
sdb             8:16   0  10.9T  0 disk
sdc             8:32   0  10.9T  0 disk
sdd             8:48   0  10.9T  0 disk
├─sdd1          8:49   0   128M  0 part
└─sdd2          8:50   0  10.9T  0 part
nvme0n1       259:0    0 119.2G  0 disk
├─nvme0n1p1   259:1    0     1M  0 part
├─nvme0n1p2   259:2    0   512M  0 part
├─nvme0n1p3   259:3    0 102.7G  0 part
└─nvme0n1p4   259:4    0    16G  0 part
  └─nvme0n1p4 253:0    0    16G  0 crypt [SWAP]

I've also encountered a specific error message (not detailed here for brevity) that suggests a UNIQUE constraint failed on the storage_disk table in the system's database.

Steps Taken:

  • Verified all hardware connections and compatibility.
  • Attempted to restart TrueNAS services and the entire system.
  • Used the CLI to list and query disk information, which successfully shows all disks (example commands below).
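
For reference, the commands I've been running from the shell are roughly these (I believe midclt and jq ship with SCALE, but treat the exact invocations as approximate):

Code:
# block-device view, including the serial numbers TrueNAS keys on
lsblk -d -o NAME,SIZE,MODEL,SERIAL,TRAN

# what the middleware itself reports about the disks (this is what the GUI is built on)
midclt call disk.query | jq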
I'm looking for any advice, insights, or suggestions on how to resolve this issue so the TrueNAS SCALE GUI reflects the disks currently visible in the CLI. This is crucial for setting up and managing the replication tasks efficiently.

Thank you in advance for your help!
 

Arwen

MVP
Joined
May 17, 2014
Messages
3,611
TrueNAS does not normally support USB attached storage for data drives.

Further, it is a bad idea;
 

Patrick M. Hausen

Hall of Famer
Joined
Nov 25, 2013
Messages
7,776
How is that Yottamaster thingy connected to the server? USB? Not going to work.
 

atom5ive

Dabbler
Joined
Sep 11, 2023
Messages
17
This is a separate system, a remote one: just a simple, low-powered box for taking in some images from TrueNAS remotely at a friend's house. I'm well aware of the limitations TrueNAS has with USB. I was hoping for a more helpful answer than "don't do that", something more like "if you must do that, I've found the best solution is...". Again, I'm aware of why it's bad. Nothing of great importance will be stored on this very simple setup. I'm just looking for any kind of solution from someone who may have run into this situation before and what they did to resolve it, since the system sees the drives with zero issue; the GUI just doesn't see them.
 

Patrick M. Hausen

Hall of Famer
Joined
Nov 25, 2013
Messages
7,776
Yes, and it will never see them because USB storage is fundamentally and terminally unsupported. If I had a solution for you, even an unofficial workaround, I would have told you. I like hacking systems. What doesn't show in the UI won't work.

Use a plain Linux or FreeBSD system and create your Zpool and SMB shares etc. manually.
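
A rough sketch of what that looks like on a plain Debian box with OpenZFS installed; the pool name, layout and by-id device names are just placeholders, substitute your own:

Code:
# create a raidz1 pool from the three 12 TB disks, referenced by stable by-id paths (placeholder names)
zpool create -o ashift=12 backup raidz1 \
    /dev/disk/by-id/ata-ST12000VN0008-AAAA \
    /dev/disk/by-id/ata-ST12000VN0008-BBBB \
    /dev/disk/by-id/ata-ST12000VN0008-CCCC

# dataset to hold the replicated VM images
zfs create -o compression=lz4 backup/vm-backups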
 

Arwen

MVP
Joined
May 17, 2014
Messages
3,611
I agree with @Patrick M. Hausen, there is no reliable workaround or solution. TrueNAS was simply not designed to work with every piece of available hardware.

The issue is that multi-disk USB enclosures likely return the same serial number for each disk. TrueNAS SCALE, (and Core), want to use the serial number to identify the disk. That's not possible when the hardware, (USB multi-disk controller chip in this case), lies to the OS. This issue is covered in the Resource I referenced.

This same problem can occur with several single disk USB enclosures, if they too return the same serial number because that is what the USB chip's vendor programmed in.
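
You can confirm whether this is what is happening with something like the following from the TrueNAS shell, (exact output columns may vary, and smartctl may need a "-d sat" hint for some USB bridges);
lsblk -d -o NAME,TRAN,MODEL,SERIAL
smartctl -i /dev/sdb | grep -i serial
If the disks behind the enclosure all come back with the same, (or an empty), serial number, that is exactly the problem described above.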


Could TrueNAS SCALE be made to work with multi-disk USB enclosures?

Yes, and that is the beauty of open source software, someone can change this behavior.


Now I understand this is not what you want to hear. It would be very nice to have a cheap, low-end NAS as the backup target, for a remote site or even just for media storage. At present TrueNAS & ZFS just have too many limitations for low-end computers. (TrueNAS does not support the Raspberry Pi's ARM CPU, for example, but ZFS likely does work on ARM64.)
 

atom5ive

Dabbler
Joined
Sep 11, 2023
Messages
17
@Patrick M. Hausen and @Arwen,

First and foremost, I want to express my deepest gratitude for taking the time to respond and provide such clear and informative insights into the challenges I'm facing with my TrueNAS SCALE setup. Your expertise and willingness to help are immensely appreciated.

Understanding now that USB storage with multi-disk enclosures fundamentally presents compatibility issues with TrueNAS due to serial number conflicts is a crucial insight. It clarifies the situation considerably and helps set my expectations correctly. I admire the hacking spirit and the candid advice to explore alternative setups using plain Linux or FreeBSD systems. This guidance is invaluable and provides a clear direction for me to proceed.

Given the limitations with TrueNAS and ZFS for low-end hardware and unconventional setups like mine, I'm now considering adjusting my approach to ensure reliability and compatibility for my off-site backup solution.

In light of this, I have a follow-up question for the community: What hypervisor would you recommend that works well with ZFS Replication Tasks for an off-net remote location? My goal is to create a robust, yet straightforward, solution for replicating VM images to a remote location (at a friend's house, where I wish to impose as little as possible).

The ideal hypervisor would offer seamless integration or compatibility with ZFS for replication tasks, be relatively lightweight, and not require extensive hardware resources, considering the constraints of my setup.

Thank you once again for your time and assistance. Your support not only aids in navigating these technical challenges but also fosters a spirit of learning and collaboration within the community.
 

Arwen

MVP
Joined
May 17, 2014
Messages
3,611
I am not sure if you intend to allow running the VM images at the remote / friend's house.

But, a simple ZFS Send to a file can work to any remote computer, MS-Windows, Linux, FreeBSD, as long as they have enough storage for the VM files. You can then restore if needed.


ZFS was designed to be able to Send a Dataset or zVol as a pipeline of data. But, there is nothing stopping someone, (and I have personally done this IN PRODUCTION AT WORK), from capturing the ZFS Send output and manually copying it where it needed to be. Then restoring the file successfully. (Or you could automate sending it.)

I am not sure I have described this right. Here are some command lines to help;
zfs snap MY_POOL/MY_ZVOL@Todays_Date
zfs send -Rpv MY_POOL/MY_ZVOL@Todays_Date | gzip -9cv >MY_ZVOL.zfs_send.gz
scp MY_ZVOL.zfs_send.gz REMOTE_COMPUTER:/PATH/TO/USE/

Whether you use GZip to compress the ZFS Send file or not depends on what is stored and whether you enabled compression on the zVol. Plus, if you are not using zVols but files in a Dataset, then you would want 1 VM per dataset, (though there could be more than 1 file system for that single VM in that dataset).

If you don't have enough temporary local space, you can do this instead;

zfs snap MY_POOL/MY_ZVOL@Todays_Date
zfs send -Rpv MY_POOL/MY_ZVOL@Todays_Date | gzip -9cv | ssh REMOTE_COMPUTER dd bs=512 of=/PATH/TO/USE/MY_ZVOL.zfs_send.gz
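
Restoring on the TrueNAS side would be roughly the reverse, (receive into a new or spare Dataset / zVol name, the names below are just examples);
gunzip -cv MY_ZVOL.zfs_send.gz | zfs receive -v MY_POOL/MY_ZVOL_RESTORED
Or pulled straight from the remote computer without a local copy of the file;
ssh REMOTE_COMPUTER cat /PATH/TO/USE/MY_ZVOL.zfs_send.gz | gunzip -cv | zfs receive -v MY_POOL/MY_ZVOL_RESTORED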


Please note that I have not tested these commands and options. The concept DOES WORK, but the details have not been checked.
 

atom5ive

Dabbler
Joined
Sep 11, 2023
Messages
17
Thank you for the detailed explanation and the script suggestions for managing ZFS send/replicate tasks. I understand the flexibility and power of ZFS's send/receive capabilities and appreciate the effort to provide a workaround that could technically address my needs.

However, I have some reservations about implementing a solution that requires manual scripting and scheduling via cron jobs, especially outside the TrueNAS GUI. While I'm confident in the robustness of your approach, my primary concern is the potential for human error on my part and the subsequent need to troubleshoot issues without the safety net of a managed environment. I'm looking for a solution that remains within the supported boundaries of my setup to minimize the risk of creating a problem that could be challenging to resolve later.

I've recently invested in hardware with the expectation of using it for this project, and I'm trying to find a way to leverage this investment without resorting to solutions that might require extensive manual intervention or deviate significantly from supported configurations.

Given these considerations, I'm exploring the feasibility of alternative solutions, such as deploying a QNAP DAS connected via an HBA, though I'm concerned about the additional cost and the potential lack of support for running small VMs, which my current setup can handle.

Do you think running Proxmox or a similar hypervisor might be a viable path forward? This would ideally allow me to use ZFS replication within a more standardized and GUI-supported environment, possibly providing the balance I'm seeking between leveraging my current hardware and maintaining system reliability and ease of management.

I'm open to suggestions and would greatly value any insights or experiences you might have with using Proxmox or other hypervisors in similar scenarios. My goal is to find a practical and reliable solution that allows me to use the hardware I've invested in, without the need for complex manual processes or risking unsupported configurations.

My old script, which I stopped using, was designed to act as a "sync" script:
Code:
bash

VM_POOL="VM Pool"
RASPBERRYPI_USER="user"
RASPBERRYPI_IP="IP"
REMOTE_PATH="/mnt/backup"
SSH_PORT="22"
SSH_KEY_PATH="/root/.ssh/id_rsa"
LOGFILE="/var/log/snapshot_sync.log"
DRY_RUN="false" # Set to "true" for a non-destructive test run

Script Overview

The script operates in two main phases: snapshot transfer and synchronization, with a third optional phase for restoration. Each phase is meticulously logged to $LOGFILE, providing a detailed record of operations performed, including timestamps, actions taken, and the status of each snapshot.
Snapshot Transfer

    List Snapshots on TrueNAS: Use zfs list -H -o name -t snapshot to enumerate all snapshots in the VM_POOL. This list is the reference point for what needs to be synchronized to the Raspberry Pi.

    Transfer Missing Snapshots: For each snapshot identified on TrueNAS, check its existence on the Raspberry Pi using SSH and ls command in $REMOTE_PATH. If a snapshot is missing (not found remotely), it's transferred using zfs send | ssh $RASPBERRYPI_USER@$RASPBERRYPI_IP "cat > $REMOTE_PATH/<snapshot_name>". Each transfer action is logged, including the snapshot name and the result of the transfer.

Synchronization

    Fetch Remote Snapshot List: Retrieve the list of snapshots stored on the Raspberry Pi to identify any snapshots that no longer exist on TrueNAS but are still present remotely.

    Delete Outdated Snapshots Remotely: Compare the remote snapshot list against the local TrueNAS snapshot list. For any snapshot existing remotely but not locally, issue a delete command via SSH. This action is critical to ensure the Raspberry Pi only retains snapshots that are currently present on the TrueNAS server.

Logging

    Every action, including listing, transferring, and deleting snapshots, is logged to $LOGFILE. The logging is verbose, detailing the operation, the involved snapshots, and the outcomes. This verbosity is crucial for auditing and troubleshooting.
    The script also logs the start and completion of each phase, including timestamps to measure duration.

Restoration (Optional)

    In "Restore Mode", the script reverses its operation, transferring snapshots from the Raspberry Pi back to the TrueNAS server. This mode is toggled by setting a flag, and similarly, it involves checking for the existence of snapshots on the destination before transferring.

Advanced Features

    Progress Monitoring: Utilize pv or equivalent to monitor the progress of snapshot transfers, providing real-time feedback in the logs.
    Dry Run Mode: The DRY_RUN variable allows the script to simulate operations without making any changes, enabling verification of what actions the script would take under normal operation.
    Existence Check Prior to Actions: Before transferring or deleting, the script checks the target status to prevent unnecessary operations, a critical feature for maintaining data integrity and efficiency.
    Verbose Echo and Logging: Each significant action and decision point is echoed to the console and logged, including the rationale for skipping transfers or deletions based on the comparison logic.

Safety Precautions

    The script ensures no modifications are made on the local (TrueNAS) system; all operations affecting data integrity occur on the Raspberry Pi (host machine).
    Extensive use of conditional checks prevents actions from being taken unless necessary, safeguarded by the DRY_RUN mode for validation.

Implementation Considerations

    SSH Key Authentication: Ensure passwordless SSH access to the Raspberry Pi using $SSH_KEY_PATH for seamless operation (one-time key setup sketched after this list).
    Error Handling: Implement robust error checking after each command to catch and log failures, preventing cascading issues.
    Scalability: Design the script to handle a large number of snapshots efficiently, considering the performance implications of extensive snapshot lists.
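
For the passwordless SSH piece, the one-time setup I have in mind is roughly this (using the key path and the user/IP placeholders from the configuration above):

Code:
ssh-keygen -t rsa -b 4096 -N "" -f /root/.ssh/id_rsa
ssh-copy-id -i /root/.ssh/id_rsa.pub -p 22 user@IP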

This advanced script architecture not only facilitates rigorous synchronization between TrueNAS and Raspberry Pi but also ensures high transparency through verbose logging, making it an effective tool for system administrators and engineers managing ZFS snapshot backups.

It's a shell-based script. Right now, it also doesn't iterate through each snapshot found. Please update and fix it, adding anything mentioned above to the script below to make it better. Provide the full script with no placeholders; this is extremely important and could lead to massive critical issues if placeholders are found.

#!/bin/bash

# Configuration settings
DEBUG="true"  # Enable debugging for more verbose output
VM_POOL="VM Pool"  # ZFS volume name on TrueNAS, without a trailing slash
RASPBERRYPI_USER="username"  # Username on Raspberry Pi
RASPBERRYPI_IP="IP"  # Raspberry Pi's IP address
REMOTE_PATH="/mnt/local/dataRetention/TrueNAS_backups"  # Destination directory on Raspberry Pi
SSH_PORT="22"  # SSH port
SSH_KEY_PATH="/root/.ssh/id_rsa"  # Path to the SSH private key
LOGFILE="/root/applications/rsync_backup.log"  # Path to the log file
RESTORE_MODE="false"  # Set to "true" to enable restore functionality

# Logging function
echo_and_log() {
  echo "$(date '+%Y-%m-%d %H:%M:%S') - $1" | tee -a "$LOGFILE"
}

# Initialize log file and echo start message
echo "Starting ZFS snapshot transfer process..." > "$LOGFILE"
echo_and_log "Testing SSH connection and permissions..."
if ! ssh -i "$SSH_KEY_PATH" -p "$SSH_PORT" -o BatchMode=yes -o ConnectTimeout=10 "$RASPBERRYPI_USER@$RASPBERRYPI_IP" "echo 'SSH connection test successful'"; then
  echo_and_log "SSH connection test failed."
  exit 1
else
  echo_and_log "SSH connection test passed."
fi

if [ "$RESTORE_MODE" = "false" ]; then
    # Backup mode: Transfer snapshots from TrueNAS to Raspberry Pi
    echo_and_log "Fetching list of snapshots from $VM_POOL..."
    snapshots=$(zfs list -H -t snapshot -o name -r "$VM_POOL")

    if [ -z "$snapshots" ]; then
        echo_and_log "No snapshots found. Exiting."
        exit 1
    fi

    current=1
    total=$(echo "$snapshots" | wc -l)
    echo_and_log "Total snapshots found: $total"

    echo_and_log "Starting the transfer of snapshots..."
    echo "$snapshots" | while read -r snapshot; do
        snapshot_filename=$(echo "$snapshot" | sed 's|[^a-zA-Z0-9/]|_|g' | sed 's|/|_|g').zfs
        remote_snapshot_path="$REMOTE_PATH/$snapshot_filename"

        echo_and_log "[$current/$total] Checking snapshot: $snapshot"
        if ssh -i "$SSH_KEY_PATH" -p "$SSH_PORT" "$RASPBERRYPI_USER@$RASPBERRYPI_IP" "[ -f \"$remote_snapshot_path\" ]"; then
            echo_and_log "Snapshot $snapshot already transferred. Skipping..."
        else
            echo_and_log "Transferring snapshot: $snapshot..."
            if zfs send "$snapshot" | pv | ssh -i "$SSH_KEY_PATH" -p "$SSH_PORT" "$RASPBERRYPI_USER@$RASPBERRYPI_IP" "cat > \"$remote_snapshot_path\""; then
                echo_and_log "Snapshot $snapshot transferred successfully."
            else
                echo_and_log "Error transferring snapshot $snapshot. Please check logs for details."
            fi
        fi
        ((current++))
    done

    echo_and_log "Snapshot transfer attempts complete."

    # Synchronize snapshots: Remove from Raspberry Pi if not present on TrueNAS
    remote_snapshot_files=$(ssh -i "$SSH_KEY_PATH" -p "$SSH_PORT" "$RASPBERRYPI_USER@$RASPBERRYPI_IP" "ls $REMOTE_PATH | grep '.zfs$'")
    echo "$remote_snapshot_files" | while read -r file; do
        original_snapshot_name=$(echo "$file" | sed 's|_|/|g' | sed 's|.zfs$||')
        if ! echo "$snapshots" | grep -q "$original_snapshot_name"; then
            echo_and_log "Deleting outdated snapshot file from Raspberry Pi: $file"
            ssh -i "$SSH_KEY_PATH" -p "$SSH_PORT" "$RASPBERRYPI_USER@$RASPBERRYPI_IP" "rm \"$REMOTE_PATH/$file\""
        fi
    done

    echo_and_log "Snapshot synchronization complete."
else
    # Restore mode: Present a menu for restoring snapshots from Raspberry Pi to TrueNAS
    echo_and_log "Restore mode enabled. Fetching list of available snapshots for restoration..."
    available_snapshots=$(ssh -i "$SSH_KEY_PATH" -p "$SSH_PORT" "$RASPBERRYPI_USER@$RASPBERRYPI_IP" "ls $REMOTE_PATH | grep '.zfs$'")

    if [ -z "$available_snapshots" ]; then
        echo_and_log "No snapshots available for restoration. Exiting."
        exit 1
    fi

    echo_and_log "Available snapshots for restoration:"
    select opt in $available_snapshots; do
        snapshot_to_restore="${REMOTE_PATH}/${opt}"
        echo_and_log "Restoring snapshot: $opt..."
        # Stream the saved snapshot file from the Raspberry Pi and receive it into the local pool on TrueNAS
        if ssh -i "$SSH_KEY_PATH" -p "$SSH_PORT" "$RASPBERRYPI_USER@$RASPBERRYPI_IP" "cat \"$snapshot_to_restore\"" | zfs receive -F "$VM_POOL"; then
            echo_and_log "Snapshot $opt restored successfully."
        else
            echo_and_log "Error restoring snapshot $opt. Please check logs for details."
        fi
        break  # Assuming one snapshot restoration at a time for simplicity
    done
fi

echo_and_log "Process complete."











Also, this script was made:

#!/bin/bash

# Configuration settings
DEBUG="false"  # For debugging
SKIP_SPACE_CHECK="false"  # To skip space check
VM_POOL="VM Pool"  # ZFS volume name on TrueNAS, ensuring it targets the correct dataset
RASPBERRYPI_USER="USERNAME"  # Username on Raspberry Pi
RASPBERRYPI_IP="IP"  # Raspberry Pi's IP address
REMOTE_PATH="/mnt/local/dataRetention/TrueNAS_backups"  # Destination directory on Raspberry Pi
SSH_PORT="22"  # SSH port
SSH_KEY_PATH="/root/.ssh/id_rsa"  # Path to the SSH private key
LOGFILE="/root/applications/rsync_backup.log"  # Path to the log file

# Logging function
echo_and_log() {
  echo "$(date '+%Y-%m-%d %H:%M:%S') - $1" | tee -a "$LOGFILE"
}

# Initialize log file and echo start message
echo "Starting ZFS snapshot transfer process..." > "$LOGFILE"
echo_and_log "Testing SSH connection and permissions..."
if ! ssh -i "$SSH_KEY_PATH" -p "$SSH_PORT" -o BatchMode=yes -o ConnectTimeout=10 "$RASPBERRYPI_USER@$RASPBERRYPI_IP" "echo 'SSH connection test successful'"; then
  echo_and_log "SSH connection test failed."
  exit 1
else
  echo_and_log "SSH connection test passed."
fi

# Fetch and display the list of snapshots
echo_and_log "Fetching list of snapshots from $VM_POOL..."
snapshots=$(zfs list -H -t snapshot -o name -r "$VM_POOL" | awk '{print $1}')

if [ -z "$snapshots" ]; then
    echo_and_log "No snapshots found. Exiting."
    exit 1
fi

echo "$snapshots" | while read -r snapshot; do
    echo_and_log "Snapshot found: $snapshot"
done

snapshot_count=$(echo "$snapshots" | wc -l)
echo_and_log "Total snapshots found: $snapshot_count"

# Skip disk space check if requested
if [ "$SKIP_SPACE_CHECK" = "false" ]; then
    echo_and_log "Performing disk space check..."
    # Disk space check logic (omitted for brevity)
fi

# Transfer the snapshots
echo_and_log "Starting the transfer of snapshots..."
echo "$snapshots" | while read -r snapshot; do
    echo_and_log "Attempting to transfer snapshot: $snapshot..."
    # zfs send must run locally (where the snapshots live) and be piped over SSH to
    # zfs receive on the Raspberry Pi; note the receive target must be a ZFS dataset
    # on the Pi, not a plain directory path like $REMOTE_PATH.
    if ! zfs send "$snapshot" | ssh -i "$SSH_KEY_PATH" -p "$SSH_PORT" "$RASPBERRYPI_USER@$RASPBERRYPI_IP" "zfs receive -Fdu $REMOTE_PATH"; then
        echo_and_log "Error transferring snapshot $snapshot. Please check logs for details."
    else
        echo_and_log "Snapshot $snapshot transferred successfully."
    fi
done

echo_and_log "Snapshot transfer attempts complete."

Initially, I was routing the images to a basic Raspberry Pi 5 setup linked to an external hard drive enclosure. My aim was a system with modest VM capabilities that I could expand slightly as my needs grew. However, I soon realized that for replication the target needs to be a machine running a ZFS-compatible OS, and attempting straightforward copy operations proved problematic, leading to several issues. This prompted me to invest in a more capable system that could run TrueNAS SCALE, hoping to establish a proper replication setup. Unfortunately, I discovered that this setup was not feasible, much to my disappointment.
 
Last edited:

Arwen

MVP
Joined
May 17, 2014
Messages
3,611
Yes, if you could find a GUI solution, it would be fully supported. Unlike a cron'ed SHELL script. So I understand your hesitation.

As I said before to others, TrueNAS, (SCALE or Core), is not the end-all for home or small office NAS use. With ZFS being the only supported file system in TrueNAS, that puts limitations on smaller or lower-end designs.

None of us will be offended if TrueNAS does not work out for you. Either as the primary NAS or backup NAS.


I can't speak about Proxmox; I have no experience with it.
 