My experience with FreeNAS Corral (old FreeNAS 9.x user)

Do you want to upgrade to FreeNAS Corral?

  • Yes, of course. I need new features!

    Votes: 8 47.1%
  • No, if it's not broken, don't touch it.

    Votes: 7 41.2%
  • I'm a new user. Go Corral!

    Votes: 2 11.8%
  • I'm also a new user, but I'll use the old one...

    Votes: 0 0.0%

  • Total voters
    17
Status
Not open for further replies.

Plato

Contributor
Joined
Mar 24, 2016
Messages
101
I was using FreeNAS 9.10 and decided to try FreeNAS Corral.

I have 9x WD Red 6 TB HDDs in a raidz1 configuration, 2x 8 GB USB sticks for boot, and 1x 256 GB SSD for jails.

I was using the boot drives as a mirror, so I decided to remove one of them and install on the other (a clean install, mind you).

Everything went well. After the restart it booted... veeery slowly. I don't really know why it took that long, or whether it would take that long on every restart. Anyway, I imported my volumes one by one. Because it was a clean install, it didn't warn me about boot volume degradation (since I had broken the mirror myself).

Then I started exploring. Hmm, no jails at all, so we're not using those any more. While my jail data was safe on my SSD, at first I thought about importing it somehow. But as I explored, I saw that there are new systems in place instead of jails: Docker and VMs. Good for you, I thought. Then I created a collection and a Docker host (this is actually a VM inside FreeNAS, but it says "host" in the Docker tab).

The host was pretty simple. It had almost nothing on it, and it seems that while FreeNAS is FreeBSD-based, when I created a "boot2docker" host it created a Linux system. I created it as bridged, so it took an IP from my router, which I don't prefer at all. I was using my jails with assigned IPs so that I could control the ports. Well, there was no option to assign an IP to the VM (or I couldn't find it). Most probably I'd have to do it from within the VM, but I decided to let it go for now, because I could see which IP the router had assigned to it, so I could connect from within my network. My router is from my ISP, and there is no option to assign a static IP to a MAC address (a very stupid device, I know, but because I have to log in to the ISP through it, I don't really have a choice), so I had to do this from the host.

Anyways, when I checked the VM it created, it was really barebones; the console was running /bin/sh. I ran bash first to get a comfortable enough command line, then tried a few package managers like apt, dpkg, rpm, etc. Nothing worked. So, how should I work on this, I thought. Then I remembered the Docker containers. They are like plugins in the old FreeNAS, but built on top of Docker hosts (how, I still don't understand).
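For anyone else hitting the same wall: as far as I can tell, boot2docker images are built on Tiny Core Linux, which ships neither apt nor rpm; its own tool is tce-load, and the intended workflow is to run everything as containers anyway (the package name below is just an example):

```shell
# Tiny Core's extension tool (boot2docker is Tiny Core based):
tce-load -wi nano        # -w: fetch from the repo, -i: install

# The host is meant to stay minimal, though; anything you actually
# need should run as a container instead:
docker run --rm -it alpine sh
```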

I first tried Plex, and it was very good for first-time users, because almost all of the configuration you do when setting it up is in the WebUI. First I created it with networking as bridged, saw that there was an IP field, and filled it in. Yay! So I could assign an IP to my Plex machine! After creating it, I connected to its console, and it was different. First, it was definitely Ubuntu, so there was aptitude and everything. Then I started searching for Plex's location and was surprised to find it in "/root/Library/Application Support/Plex Media Server". I hadn't used the server on Linux before, and as you may know, Plex in FreeBSD jails is installed under /usr/local.

Fiddled a bit with Plex, and thought that if this is how it goes, I could continue like this. Then came a realization. I was using the deluge daemon, SickRage, Plex, and SABnzbd in one jail, and everything was working together very well (I don't like the plugins on FreeNAS 9.x at all). So there were two options: install everything from my old jail into this Plex container, or create a container for every tool I was using, each with a different IP.

The first option required me to get close to a Linux system again and learn all of its quirks (which are very different from how I remembered them). For example, there is an entirely new service system in Ubuntu, it seems. I'm ashamed to say that I couldn't find how to stop Plex Media Server at first. I dug into the system, found the start_plex script, and from there figured out how to stop it (not via a service, but via the executable). That's understandable, because I mostly dabbled in Debian, where it was pretty easy to do (even if it may feel more arduous to some); at least it was the classic way. So I decided it's not an option to use this Plex machine as a base system. I'd need Debian or (I can't believe I'm saying this) FreeBSD, which means installing everything one by one, from source or from packages. I decided to let it go for now (because time was short) and thought about the second option instead.
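As an aside, for anyone stuck at the same point: the "entirely new service system" on recent Ubuntu is systemd, and assuming the official Plex package's unit name, the server can be stopped cleanly like this (older upstart/sysvinit images use the second form):

```shell
# On a systemd-based Ubuntu (15.04 and later), the official Plex
# package registers a unit called "plexmediaserver":
sudo systemctl stop plexmediaserver
sudo systemctl status plexmediaserver

# On older Ubuntu images the classic service wrapper still works:
sudo service plexmediaserver stop
```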

The second option required me to create all those containers with different IPs (really?). So I started creating them one by one, realizing along the way that I could instead use the host IP for all of the containers, so they'd all share the host's IP, which I didn't want (:D). Again, as with Plex, almost all the required configuration options were there for every container I tried, and it was super easy to create all of them. In the end I saw that they were indeed all sharing the same IP as the host.
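In plain Docker terms (outside the Corral UI), the two networking modes I ran into look roughly like this; the linuxserver/deluge image is just an example:

```shell
# Host networking: the container shares the Docker host's IP, and its
# ports show up directly on the host (no port mapping needed):
docker run -d --net=host --name deluge-host linuxserver/deluge

# Default bridged networking: the container gets a private IP and you
# publish only the ports you want on the host:
docker run -d -p 8112:8112 --name deluge linuxserver/deluge
```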

Then I realized that when I created a host, it was a VM under FreeNAS, which was very different, because instead of everything sharing the FreeNAS system's resources, you have to assign how many CPUs and how much RAM you'll give it. That felt weird, because it meant I was now dividing the system between storage and applications.

I also realized that the containers I created were like mini-VMs under that VM. Actually, I should say they were like jails on FreeNAS: you use the same resources as the host, and you can also limit their memory usage, but they really were like jails, because they were separate systems working under one host.
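That jail-like resource sharing is visible from the Docker CLI itself; the image name below is just a placeholder:

```shell
# Cap one container's memory, jail-style (512 MB here), while it
# otherwise shares the host's CPU freely:
docker run -d -m 512m --name sickrage some-sickrage-image

# See what each container actually uses of the shared host resources:
docker stats --no-stream
```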

I started configuring them one by one after midnight, then decided to stop for the night and continue working on them tomorrow.

It seems this was the deciding factor. When I tried to connect to my NAS interface, I found that I couldn't reach it at all. The IPMI system also didn't respond, so I didn't know what was going on. After a few minutes of trying to connect via SSH or the WebUI, I decided to forget everything about it, called my wife, and told her to shut down the system with the power button and switch the USB sticks to the mirror of my previous FreeNAS 9.10 installation. After restarting the machine, I got the IPMI interface back, and after a while, my good old interface.

And it's really really good to be back from that hell.

If you're new to FreeNAS, give it a go; you may like it. There are a lot of powerful options (like the usable console in the WebUI, the VGA console, and of course VMs other than Linux/FreeBSD systems).

But if you are using 9.x and liking it, there really is no need to move up to Corral. Everything is different on it; you may like it or hate it. You should know that upgrading doesn't really work (especially for jails), so you need to reconfigure every app you ever needed from scratch (though you can use your configuration backups to make things easier). If you were using plugins before, then you'll feel at home, because the containers are like plugin jails, and easier to configure than before. I don't think it's worth the trouble.

Just wanted to share my experience here.
 

DrKK

FreeNAS Generalissimo
Joined
Oct 15, 2013
Messages
3,630

jkh

Guest
Just one general observation about this user's experiences vis-a-vis jails vs Docker containers:

Docker is a fine solution if you just want to install FreeNAS Corral and then start adding containers with little to no preconceived notions about how you did them before in 9.10; you just follow the Docker creation templates and fill in the blanks, and as you can see from numerous tutorials already posted here in the Forum, applications like Plex and Crashplan don't need that many steps until they're fully running.

If you already come from the Docker universe, which is a huge ecosystem, then you've also already figured out how to create your own Docker networks and wire up your containers in even more complex topologies of mutual dependency where some containers route for other containers, provide generic database services, and so on. You probably want even more docker power-user capabilities, not less, at this point!
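A minimal sketch of that kind of wiring, assuming a user-defined bridge network (all names below are examples): containers attached to the same network can reach each other by container name, so a web app can point at a database container simply called "db":

```shell
# User-defined bridge network; attached containers resolve each
# other by container name via Docker's embedded DNS:
docker network create appnet
docker run -d --net appnet --name db postgres
docker run -d --net appnet -e DB_HOST=db --name web example/webapp
```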

However, if you fall into the category of a long-time 9.x user who likes to run multiple apps inside a jail, then the simplest and most direct thing to do may be to create a VM (either Linux or BSD, depending) and run all of the applications together in one merged environment, just like you used to do, because Docker is just in your way now; you already have a workflow you're used to.

Another common misconception when configuring VMs is that by giving a VM "cores" or "memory" the user is somehow "dedicating" those resources to the VM, as if it were actual hardware partitioning of some sort. Not quite. It's more of an upper bound on the number of vCPU threads and on memory allocation, and it's what the VM (or the containers running in that VM) actually does that determines how much it really uses.
 

Plato

Contributor
Joined
Mar 24, 2016
Messages
101
That's good to know about the resource allocation. I thought it was like VMware or VirtualBox and allocated all those resources for the VM only.

But for the last few days there has been a lot of outcry about the performance of the VMs (the apps running under the VMs, actually) and the slow boot-up speed of FreeNAS Corral (while not that slow, it's slower than 9.10.2).

I actually understand now why I cannot assign an IP address when creating the VM. Also, being introduced to a Linux system without any package tools felt weird. Maybe there was a package manager, but I didn't want to spend too much time checking for it, because my time was short. One thing, though: while I created the containers, I didn't see any new files anywhere. So I assume all the containers are created on the host's virtual disk; is that correct?
 

jkh

Guest
One thing, though: while I created the containers, I didn't see any new files anywhere. So I assume all the containers are created on the host's virtual disk; is that correct?
Not quite. The VM runs entirely out of memory (memory disks) and thus starts fresh with every reboot. The containers are expected to express their permanent data requirements via volumes into the host filesystem, which of course is fully persistent (unless you delete the volume or something).
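Concretely, and with paths chosen purely for illustration, persisting a container's data means mounting a host volume into it:

```shell
# Mount a persistent host dataset into the container; the container
# and the VM can be rebuilt freely, but /mnt/tank/plexdata survives:
docker run -d \
  -v /mnt/tank/plexdata:/config \
  -p 32400:32400 \
  --name plex linuxserver/plex
```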
 