Is it safe to install missing dependencies to the underlying OS for an application I wish to run (Autotier)?

MostHated

Dabbler
Joined
Apr 25, 2021
Messages
15
Hey all,
I am running the current version of TNS Beta, and I want to install a data tiering application (Autotier). It monitors file activity on the mount locations you specify (e.g. /mnt/ssd/data/ and /mnt/hdd/data/) and moves files between them depending on usage, while presenting a single combined mount (e.g. /mnt/data, which shows the contents of /mnt/ssd/data and /mnt/hdd/data as one).

I threw Debian on a VM with a few random 1 GB disks just to test it out, and it seemed to work well. It's said to work with any type of filesystem, since it only uses the existing mount points, moving data back and forth between them while presenting a new separate mount for the combined data. The issue I ran into is that when I went to install it, I received the following message about missing dependencies.

Code:
The following packages have unmet dependencies:
 autotier : Depends: libboost-system1.71.0 but it is not installable or
                     libboost-all-dev but it is not installable
            Depends: libboost-filesystem1.71.0 but it is not installable or
                     libboost-all-dev but it is not installable
            Depends: libboost-serialization1.71.0 but it is not installable or
                     libboost-all-dev but it is not installable
            Depends: librocksdb-dev but it is not installable or
                     librocksdb5.17 but it is not installable
            Depends: libtbb2 but it is not installable or
                     libtbb-dev but it is not installable


I saw in a video by Lawrence Systems on YouTube the other day that he had mistakenly run apt upgrade, and a note at the end of his video said it broke his system, so I assume there are pretty specific dependency requirements for the TrueNAS system.

This leads me to my main question, would it be safe if I were to attempt to manually install any missing dependencies I come across such as the ones listed above, or is TrueNAS tied into the OS in such a way that I would too easily risk messing it up by doing so?

Thanks,
-MH
 

morganL

Captain Morgan
Administrator
Moderator
iXsystems
Joined
Mar 10, 2018
Messages
2,694
To survive a software update, it's best to make it into a boot script...

It's not clear whether it will work, so it would need some experimentation. Looking forward to the results.
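For example, a post-init script along these lines (paths are illustrative; on SCALE these are added under System Settings > Advanced > Init/Shutdown Scripts, type "Post Init"):

Code:
#!/bin/sh
# Hypothetical post-init script: reinstall the packages after each boot/update.
# Keep the .deb files on a data pool so they survive an OS update.
DEB_DIR=/mnt/tank/autotier-debs   # illustrative path on a pool
dpkg -i "$DEB_DIR"/*.deb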
 

ornias

Wizard
Joined
Mar 6, 2020
Messages
1,458
This leads me to my main question, would it be safe if I were to attempt to manually install any missing dependencies I come across such as the ones listed above, or is TrueNAS tied into the OS in such a way that I would too easily risk messing it up by doing so?
Exactly this: you might very well break everything completely.
apt-get is not designed to be used on TrueNAS, and it might (for example) accidentally update TrueNAS dependencies and break your system, as so many other people have already found out.
 

ornias

Wizard
Joined
Mar 6, 2020
Messages
1,458
Reading into autotier, it might be an interesting solution for those needing fast intake of data, something (imho) ZFS is currently not well suited for, as none of its caches speed that up very much.
 

ornias

Wizard
Joined
Mar 6, 2020
Messages
1,458
That being said, isn't tiering already possible with Gluster?
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
I guess the open question is whether that gluster function can be activated, and how it interacts with ZFS pools in doing what the feature description states.

It would be great if it could be done via config files for testing (and subsequently offered in the GUI as settings)
 

ornias

Wizard
Joined
Mar 6, 2020
Messages
1,458
I guess the open question is whether that gluster function can be activated, and how it interacts with ZFS pools in doing what the feature description states.

It would be great if it could be done via config files for testing (and subsequently offered in the GUI as settings)
It is possible to use native gluster commands to create (for example) a single-node gluster cluster; I think the same can be done for testing with other gluster features that are not (currently) available in TrueCommand.
 

MostHated

Dabbler
Joined
Apr 25, 2021
Messages
15
Yeah, I had asked about this a while back in that other post. At the time, Autotier didn't have a Debian build, but one came out yesterday, so I figured it was worth revisiting. It was extremely easy to set up, basically just telling it where the mount points are, and it did its thing, so I was excited about that. I don't know anything about Gluster, and from the limited research I did on tiering when I made that other post, I want to say it seemed unnecessarily complicated, but I can't recall whether it was actually Gluster or one of the other things I was looking into at the time.
I was able to get Autotier set up in about 10-15 minutes in a VM with a few new virtual disks attached.
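The config is only a few lines. A sketch of what /etc/autotier.conf looks like (values here are illustrative; check the 45Drives README for the exact keys and defaults):

Code:
# /etc/autotier.conf (illustrative values)
[Global]
Log Level = 1
Tier Period = 1000        # seconds between file move batches

[Tier 1]
Path = /mnt/ssd/data      # fast tier
Quota = 30 %              # keep this tier under 30% full

[Tier 2]
Path = /mnt/hdd/data      # slow tier
Quota = 90 %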

That said, my NAS OS install is on an NVMe drive in a small USB enclosure, so it's relatively easy to make an image backup, so I was thinking it might be worth a go at this.

I wasn't planning to use apt update or upgrade, just to download and install the dependencies manually, or at least attempt it; worst case, hopefully I can just restore the OS from the image.
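The rough plan, in case it's useful to anyone (package names are from the error above; exact versions depend on the release, so treat this as a sketch):

Code:
# On a matching Debian/Ubuntu VM, fetch the .debs without installing them:
apt-get download libboost-system1.71.0 libboost-filesystem1.71.0 \
    libboost-serialization1.71.0 librocksdb5.17 libtbb2
# Copy them over and install directly, bypassing apt's resolver on the NAS:
scp ./*.deb root@truenas:/root/autotier-debs/
# then on the NAS:
dpkg -i /root/autotier-debs/*.deb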

The other idea I had was that it could perhaps just run in a Debian docker container with access to the mounts you want monitored/tiered, so it can have the dependencies it needs and run from there. I have never attempted something like that with docker, but I started looking at how to piece together a dockerfile and compose setup for it in case a native install wasn't going to work due to dependencies.
 
Last edited:

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
It's certainly worth noting that gluster is already in the build, so there's that, even if it's a bit more complicated due to the cluster capabilities.

And it would be relatively easy to get going... if only we had a version that was aware of tiering. Sadly, we don't in 21.06.

The commands would look something like this:

Code:
systemctl start glusterd

mkdir /mnt/hdd/gluster-cold
mkdir /mnt/ssd/gluster-hot

gluster volume create mytieredgluster servername.or.ip.address:/mnt/hdd/gluster-cold

gluster volume start mytieredgluster

Which mounts in /run/gluster/vols/ with no intervention.

You can certainly get this far with a non-tier-aware version, but the next bit is a deal-breaker

Code:
gluster volume tier mytieredgluster attach servername.or.ip.address:/mnt/ssd/gluster-hot

On my system, it produces this:

Code:
unrecognized word: tier (position 1)

I found this helpful in understanding how the tiering would work by default and how it might be configured if only we had the right version in place:
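For the record, on a tier-aware build the defaults could be tuned with volume options along these lines (taken from the old tiering docs; the feature was deprecated and removed upstream, so take this with a grain of salt):

Code:
gluster volume set mytieredgluster cluster.tier-mode cache             # demote only under watermark pressure
gluster volume set mytieredgluster cluster.watermark-hi 90             # % full: demote aggressively above this
gluster volume set mytieredgluster cluster.watermark-low 75            # % full: promote freely below this
gluster volume set mytieredgluster cluster.tier-demote-frequency 3600  # seconds between demotion runs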
 
Last edited:

MostHated

Dabbler
Joined
Apr 25, 2021
Messages
15
I got Autotier up and running. I made sure to restart a time or two to ensure TrueNAS was still OK, and from what I can tell, it is.

I just need to figure out what to do about sharing the combined mount, since trying to do it via the TrueNAS NFS share menu gives "The path must reside within a pool mount point". I had just made a folder at /mnt/data and had autotier mount that, so now /mnt/data shows a combination of the /mnt/ssdarray and /mnt/hddarray pools created via TrueNAS.

Code:
/mnt
    /ssdarray/ssddata   # zfs dataset (pool: ssdarray)
    /hddarray/hdddata   # zfs dataset (pool: hddarray)
    /data               # combined view of the above


I don't know whether putting the shared view onto /mnt/ssdarray/data instead would be a good idea?
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
I don't know whether putting the shared view onto /mnt/ssdarray/data instead would be a good idea?
I don't think there would be anything wrong with that, and it should remove the sharing error.
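Something like this, assuming the same autotierfs options as a normal setup (the path is from your post):

Code:
mkdir -p /mnt/ssdarray/data
autotierfs /mnt/ssdarray/data -o allow_other,default_permissions
# then point the NFS/SMB share at /mnt/ssdarray/data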
 

MostHated

Dabbler
Joined
Apr 25, 2021
Messages
15
Yeah, I forgot to report back that I got all that working. I am going to have to spend some time this weekend making some changes before I can actually use it, as I have multiple systems using the underlying arrays via scripts and such, and for it to operate properly you need to always go through the combined mount so that it can manage the files. Once I do that, though, I should hopefully be good to go.
 


dragon2611

Dabbler
Joined
Feb 22, 2022
Messages
10
It's also possible to abuse docker for this purpose. Note this is rather janky; make sure to take backups etc.
I suspect I could restrict privileges to the minimum needed for FUSE, as currently this container is privileged.

dockerfile
Code:
FROM rockylinux:8
RUN dnf update -y
RUN dnf install https://github.com/45Drives/autotier/releases/download/v1.2.0/autotier-1.2.0-1.el8.x86_64.rpm -y
COPY autotier.conf /etc/autotier.conf
ENTRYPOINT autotierfs /mnt/tier -o allow_other,default_permissions -vv & sleep infinity


docker-compose.yml
Code:
version: '3.2'
services:
  autotier:
    build: .
    container_name: autotier
    restart: unless-stopped
    volumes:
      - /path/to/tier1_dir:/mnt/ssd
      - /path/to/tier2_dir:/mnt/hdd
      - /path/on/host_for_tierd_folder:/mnt/tier:rw,rshared
    privileged: true
networks:
  default:
    driver: bridge


Note you will need to umount /path/on/host_for_tierd_folder on the host when the container is brought down, or you'll be left with a dead mount (and you won't be able to restart the container).
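A small wrapper script along these lines could handle that (TIER_MNT is whatever host path you bound to /mnt/tier):

Code:
#!/bin/sh
# Sketch: bring the container down and clean up the host-side FUSE mount
# so the container can be restarted later.
TIER_MNT=/path/on/host_for_tierd_folder
docker compose down
mountpoint -q "$TIER_MNT" && umount -l "$TIER_MNT"   # lazy unmount if still held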
 

Intel

Explorer
Joined
Sep 30, 2014
Messages
51
hey I am looking into 'autotier' - has it been stable and fast performing for you?
 

MostHated

Dabbler
Joined
Apr 25, 2021
Messages
15
Unfortunately not. I only ended up using it for a day or so and I ran into too many issues, so I stopped using it. It's been a little while though, so it's possible things have been worked out, but it's been too long now to even remember exactly what my issues were, only that they were show stoppers for me.
 

FraiseVache

Cadet
Joined
Nov 8, 2022
Messages
4
Unfortunately not. I only ended up using it for a day or so and I ran into too many issues, so I stopped using it. It's been a little while though, so it's possible things have been worked out, but it's been too long now to even remember exactly what my issues were, only that they were show stoppers for me.
Sad, I’d want that too. I bought 2 fast NVMe SSDs only to discover they’re basically useless (or at least very hard to use properly). Hope this gets a solution "soon".
 

Intel

Explorer
Joined
Sep 30, 2014
Messages
51
autotier is very unreliable and performs poorly.

I ended up building my own custom solution using ubuntu + zfs + btrfs + snapraid + mergerfs
https://github.com/TheLinuxGuy/free-unraid

I'm wrapping up testing in the next week, but I have already moved all my media data to this and have been able to recover from backups etc.
 