
[HOWTO] FreeNAS 11, RancherOS (Docker), and Portainer

SaskiFX

Dabbler
Joined
Mar 18, 2015
Messages
27
Thanks for the writeup, SaskiFX! But I can't get Rancher to boot.

So, you can leave in the 'rancher.password=rancher' part of grub.cfg and it should let you login with rancher/rancher as a username/password and let you troubleshoot the SSH stuff. I'm only on Windows, so I'm using PuTTY/KiTTY and having it pass my private key there.
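For context, rancher.password is a kernel command-line parameter, so in grub.cfg it lives on the kernel/linux line. A rough sketch of what that line can look like (everything besides rancher.password=rancher is a placeholder; your install's paths and other arguments will differ):

```text
# Hypothetical grub.cfg excerpt -- only the rancher.password argument matters here;
# the kernel path and remaining arguments are whatever your install already has.
linux /boot/vmlinuz ... rancher.password=rancher
```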

The strange error messages about 'rdmsr to register 0x34 on vcpu 1' and 'Unhandled ps2 mouse command 0xe1' are normal(ish?) and can be ignored. I got them every time I had to boot the VM from the command line.

You're using this term a couple of times in this thread, and I think you're using it incorrectly. A zvol is a pseudo-block device that you can create inside a pool or a dataset. I think you mean "pool" where you're using the term "zvol".

I think you also have a couple of places where you're confusing rancheros and RancherOS, unless iohyve is case-insensitive.

But aside from that, thanks! Following these instructions, I was able to get RancherOS and portainer up and running. Now to figure out what they are, and why everybody's so in love with Docker...

You are correct on both spots. I'm saying zvol and I mean pool most of the time. I suppose the only true zvol is the 20G file I create to be the drive of the VM. That's likely where I got confused. I have updated the post in the Resources version.

I also corrected the capitalization of RancherOS in a couple spots, thanks! I copied and pasted a ton of this out of Notepad++ where I was keeping a running tally of the commands I was using to do everything so I could make a post.

Well, this helped for me:
$ ssh -i /path/to/private/key rancher@<ip-address>
From the RancherOS docs.
So once you have installed RancherOS and are no longer using the ISO, you need to SSH directly into your VM.
The path to your private key will be unique to the system you are connecting from.

Yep, once I had SSH working, I never used the console command again to get in. It's weird and flaky if you wind up restarting the VM incorrectly, and sometimes doesn't pass commands right.

This would be good to have in the Resources section. Could you please create a new Resource with the content you have here? Once you've done that, I'll link it to this thread, which will serve as the discussion thread for it.
Any questions, just ask!

Done, and I think I linked it correctly on the Resources side, but please verify. Not sure how to link it here.
 

Ericloewe

Not-very-passive-but-aggressive
Moderator
Joined
Feb 15, 2014
Messages
17,958
Done, and I think I linked it correctly on the Resources side, but please verify. Not sure how to link it here.
Thanks. You can't do it yourself, it's a moderator-only option to properly link a thread as the discussion thread (which means it gets neatly integrated in the Resources section). I'll take care of that.
 

Zwck

Patron
Joined
Oct 27, 2016
Messages
371
Some small indentation errors in the YAML files.

Fixed here:
Code:
#/var/lib/rancher/conf/cloud-config.d/nfs-config.yml
write_files:
  - path: /etc/rc.local
    permissions: "0755"
    content: |
      #!/bin/bash
      [ ! -e /usr/bin/docker ] && ln -s /usr/bin/docker.dist /usr/bin/docker
rancher:
  services:
    nfs-config:
      image: d3fk/nfs-client
      labels:
        io.rancher.os.after: console, preload-user-images
        io.rancher.os.scope: system
      net: host
      privileged: true
      restart: always
      volumes:
        - /usr/bin/iptables:/sbin/iptables:ro
        - /mnt/config:/mnt/config:shared
      environment:
        SERVER: 192.168.0.2
        SHARE: /mnt/somewhere/rancher-storage
        MOUNTPOINT: /mnt/config
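Since YAML forbids tab indentation, a quick way to catch the most common cause of these errors before rebooting is a plain grep for literal tabs. Nothing RancherOS-specific here; the demo file is a throwaway, point the grep at your real yml instead:

```shell
# YAML requires spaces; flag any lines containing a literal tab character.
# Demo file with a deliberate tab -- substitute your actual cloud-config fragment.
printf 'write_files:\n\t- path: /etc/rc.local\n' > /tmp/nfs-demo.yml
if grep -n "$(printf '\t')" /tmp/nfs-demo.yml; then
  echo "tabs found - replace with spaces"
else
  echo "indentation ok"
fi
```

This only catches tabs; for real structural validation, `ros config validate` (mentioned later in the thread) is the better check.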
 

danjng

Explorer
Joined
Mar 20, 2017
Messages
51
Theoretically, after you've created the initial NFS mount and volume, could you do any new container provisioning through the Portainer UI (using a path on the mount/volume)?

I am just wondering though, since I am not seeing the volume listed. Is this one of those cases where since we did it manually in a yml file, the GUI "doesn't know about it"?

I wanted to check before proceeding further, as I would much rather manage all these bad boys via the UI if I could.

Otherwise, AWESOME WRITE UP! All of this worked flawlessly.
 

danjng

Explorer
Joined
Mar 20, 2017
Messages
51
Actually, I may have prematurely drawn some conclusions. If I'm understanding this nested-container thing, those mounts/volumes created in the yml files should appear as local paths to Portainer? I'll try it out tomorrow. Thanks again for the nice guide!


 

Zwck

Patron
Joined
Oct 27, 2016
Messages
371
I have been playing with the nfs-client Docker image, and multiple nfs-whatever.yml files never worked for me; only the last nfs file was loaded every time. So in order to mount multiple volumes from NFS shares, you'll have to write one nfs.yml, which should contain the following:

Code:
#/var/lib/rancher/conf/cloud-config.d/nfs.yml
write_files:
  - path: /etc/rc.local
    permissions: "0755"
    content: |
      #!/bin/bash
      [ ! -e /usr/bin/docker ] && ln -s /usr/bin/docker.dist /usr/bin/docker

rancher:
  services:
    nfs:
      image: d3fk/nfs-client
      labels:
        io.rancher.os.after: console, preload-user-images
        io.rancher.os.scope: system
      net: host
      privileged: true
      restart: always
      volumes:
        - /usr/bin/iptables:/sbin/iptables:ro
        - /mnt/mm:/mnt/mm:shared
        - /mnt/docker:/mnt/docker:shared
        - /mnt/dbs:/mnt/dbs:shared
      environment:
        SERVER: 192.168.0.2
        SHARE: /mnt/volume01/multimedia
        MOUNTPOINT: /mnt/mm

#cloud-config
mounts:
  - ["192.168.0.2:/mnt/volume01/docker", "/mnt/docker", "nfs", ""]
  - ["192.168.0.2:/mnt/volume01/db", "/mnt/dbs", "nfs", ""]



The first part creates three mount points, namely /mnt/mm, /mnt/dbs, and /mnt/docker, and mounts 192.168.0.2:/mnt/volume01/multimedia to /mnt/mm. Only after this service is loaded does `sudo mount -t nfs 123.123.123.123:/wtf /mnt/wtf` work on RancherOS, so only then is it possible to add the remaining mounts via the cloud-config `mounts:` section. This can all be done in the one nfs.yml file.
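If you need NFS mount options (protocol version, read/write sizes, and so on), the fourth element of each `mounts` entry is the options string, which is left empty above. A hedged example; the exact option values here are guesses you'd want to tune for your own setup:

```yaml
#cloud-config
mounts:
  # Fourth field = NFS mount options (example values, not from the guide)
  - ["192.168.0.2:/mnt/volume01/docker", "/mnt/docker", "nfs", "nfsvers=3,rsize=131072,wsize=131072"]
```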

After creating the yml file, always check at least the formatting using `sudo ros config validate -i wtf.yml`. If there is a formatting error, you will receive a message, e.g.
> FATA[0000] yaml: [while parsing a block collection] did not find expected '-' indicator at line 6, column 3
 
Last edited:

Zwck

Patron
Joined
Oct 27, 2016
Messages
371
I can't get rancher to have an IP for eth0. Any ideas why?

Try this:

Open a shell on your FreeNAS system.

Type ifconfig.

You should see output listing all your interfaces. With a correctly working setup you should see:

bridge0, containing the adapter that iohyve creates (tap0 or similar), plus epair1a or similar if you have jails, plus your physical interface (igb0).

Also double-check that you ran iohyve setup pool=ssd kmod=1 net=igb0 correctly; check your network summary for the correct NIC.


Then check the bridge members with ifconfig bridge0.




If I misunderstood your problem and the real issue is just that you can't get a static IP, try the following cloud-config addition:

Code:
#/var/lib/rancher/conf/cloud-config.d/netconfig-config.yml
rancher:
  network:
    interfaces:
      eth0:
        addresses:
          - 192.168.0.16/24
        gateway: 192.168.0.1
        dhcp: false
    dns:
      nameservers:
        - 8.8.8.8
        - 8.8.4.4
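A sketch of checking and applying the fragment, using the same validate command mentioned earlier in the thread; run this on the RancherOS VM (the reboot step is an assumption, since network config is read at boot):

```shell
CFG=/var/lib/rancher/conf/cloud-config.d/netconfig-config.yml
sudo ros config validate -i "$CFG"   # catch YAML errors before they bite at boot
sudo reboot                          # network settings are picked up on boot
```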
 
Last edited:

danjng

Explorer
Joined
Mar 20, 2017
Messages
51
Alright. So I've gone through the guide and I've gotten portainer set up and running. Those NFS shares are mounted as docker volumes on the RancherOS instance and I can see them just fine if I log in. If I wanted to use the web interface to create new containers and be able to access those mounts, I would need to create a docker volume using the local driver with a mount point of /mnt/path, right?

If that's the right way to do it, it's not working for me. I keep getting an error stating that I have an "invalid option key 'mountpoint'".

Any ideas?


 

Zwck

Patron
Joined
Oct 27, 2016
Messages
371
Hey,

While the Portainer UI is fairly nice, I'd run some basic tests beforehand.

Use an SSH connection to connect to your VM.

Navigate to the mount point /mnt and check permissions with ls -la.

Run a simple container from the shell, for example ghost, with
docker run --name some-ghost -v /path/to/ghost/blog:/var/lib/ghost ghost
Run it without -d first to see if it starts correctly.

If it does start, go to the Portainer UI and check how it is done there.
 
Last edited:

danjng

Explorer
Joined
Mar 20, 2017
Messages
51
I'm so dumb. Chalk it up to my mind-blown state of encountering Rancher's container-within-a-container model, but I completely overlooked bind mounts when creating the new containers. With bind mounts I can just access the mount directly. This is pretty much what I was looking for originally; I was just looking in the wrong spot.

Thanks for the response, Zwck!
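For anyone else who lands here: a bind mount is just the `-v host-path:container-path` form of `docker run`. A hypothetical example reusing one of the NFS mount points from the nfs.yml earlier in this thread (the image, paths, and host port are all illustrative, not from the guide):

```shell
# Bind-mount the NFS-backed host directory straight into a container.
# /mnt/mm is one of the mount points from nfs.yml; everything else is an example.
docker run -d --name some-ghost \
  -v /mnt/mm/ghost-blog:/var/lib/ghost \
  -p 8080:2368 \
  ghost
```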
 
Joined
Mar 30, 2015
Messages
32
Try this:

Open a shell on your FreeNAS system.

Type ifconfig.

You should see output listing all your interfaces. With a correctly working setup you should see:

bridge0, containing the adapter that iohyve creates (tap0 or similar), plus epair1a or similar if you have jails, plus your physical interface (igb0).

Also double-check that you ran iohyve setup pool=ssd kmod=1 net=igb0 correctly; check your network summary for the correct NIC.

Then check the bridge members with ifconfig bridge0.



If I misunderstood your problem and the real issue is just that you can't get a static IP, try the following cloud-config addition:

Code:
#/var/lib/rancher/conf/cloud-config.d/netconfig-config.yml
rancher:
  network:
    interfaces:
      eth0:
        addresses:
          - 192.168.0.16/24
        gateway: 192.168.0.1
        dhcp: false
    dns:
      nameservers:
        - 8.8.8.8
        - 8.8.4.4
I have the same problem. My VMs created from the GUI in FreeNAS 11 work perfectly, but with iohyve I can't get network access on RancherOS.
I followed this guide, but I made a typo when I did "iohyve setup net=igb1": I typed igp1.
Anyway, another "iohyve setup net=igb1" should fix it, right? I thought so... but no.
I double-checked my /etc/rc.local file and rebooted.
The VM didn't auto-start. Strange...
I manually started it with iohyve start RancherOS
and I got:
Code:
# GRUB Process does not run in background....
If your terminal appears to be hanging, check iohyve console RancherOS in second terminal to complete GRUB process...
open of tap device /dev/tap0 failed
rdmsr to register 0x34 on vcpu 1
Unhandled ps2 mouse command 0xe1
Unhandled ps2 mouse command 0x0a
Unhandled ps2 mouse command 0x01
Unhandled ps2 mouse command 0x41
Unhandled ps2 mouse command 0x88


I thought the problem was "open of tap device /dev/tap0 failed".
Googled it; came up with nothing.
Just for the hell of it I gave /dev/tap0 chmod 777; that didn't help.

my ifconfig shows
Code:
igb0: flags=8943<UP,BROADCAST,RUNNING,PROMISC,SIMPLEX,MULTICAST> metric 0 mtu 1500
		options=2400b9<RXCSUM,VLAN_MTU,VLAN_HWTAGGING,JUMBO_MTU,VLAN_HWCSUM,VLAN_HWTSO,RXCSUM_IPV6>
		ether d0:50:99:64:21:e4
		inet 192.168.2.221 netmask 0xffffff00 broadcast 192.168.2.255
		nd6 options=9<PERFORMNUD,IFDISABLED>
		media: Ethernet autoselect (1000baseT <full-duplex>)
		status: active
igb1: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
		options=6403bb<RXCSUM,TXCSUM,VLAN_MTU,VLAN_HWTAGGING,JUMBO_MTU,VLAN_HWCSUM,TSO4,TSO6,VLAN_HWTSO,RXCSUM_IPV6,TXCSUM_IPV6>
		ether d0:50:99:64:21:e5
		inet 192.168.2.222 netmask 0xffffff00 broadcast 192.168.2.255
		nd6 options=9<PERFORMNUD,IFDISABLED>
		media: Ethernet autoselect (1000baseT <full-duplex>)
		status: active
lo0: flags=8049<UP,LOOPBACK,RUNNING,MULTICAST> metric 0 mtu 16384
		options=600003<RXCSUM,TXCSUM,RXCSUM_IPV6,TXCSUM_IPV6>
		inet6 ::1 prefixlen 128
		inet6 fe80::1%lo0 prefixlen 64 scopeid 0x3
		inet 127.0.0.1 netmask 0xff000000
		nd6 options=21<PERFORMNUD,AUTO_LINKLOCAL>
		groups: lo
bridge0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
		ether 02:9a:d7:4b:5a:00
		nd6 options=1<PERFORMNUD>
		groups: bridge
		id 00:00:00:00:00:00 priority 32768 hellotime 2 fwddelay 15
		maxage 20 holdcnt 6 proto rstp maxaddr 2000 timeout 1200
		root id 00:00:00:00:00:00 priority 32768 ifcost 0 port 0
		member: tap1 flags=143<LEARNING,DISCOVER,AUTOEDGE,AUTOPTP>
				ifmaxaddr 0 port 7 priority 128 path cost 2000000
		member: tap0 flags=143<LEARNING,DISCOVER,AUTOEDGE,AUTOPTP>
				ifmaxaddr 0 port 6 priority 128 path cost 2000000
		member: epair0a flags=143<LEARNING,DISCOVER,AUTOEDGE,AUTOPTP>
				ifmaxaddr 0 port 5 priority 128 path cost 2000
		member: igb0 flags=143<LEARNING,DISCOVER,AUTOEDGE,AUTOPTP>
				ifmaxaddr 0 port 1 priority 128 path cost 20000
epair0a: flags=8943<UP,BROADCAST,RUNNING,PROMISC,SIMPLEX,MULTICAST> metric 0 mtu 1500
		options=8<VLAN_MTU>
		ether 02:ff:20:00:05:0a
		nd6 options=1<PERFORMNUD>
		media: Ethernet 10Gbase-T (10Gbase-T <full-duplex>)
		status: active
		groups: epair
tap0: flags=8943<UP,BROADCAST,RUNNING,PROMISC,SIMPLEX,MULTICAST> metric 0 mtu 1500
		options=80000<LINKSTATE>
		ether 00:bd:b2:97:f9:00
		nd6 options=1<PERFORMNUD>
		media: Ethernet autoselect
		status: active
		groups: tap
		Opened by PID 5990
tap1: flags=8943<UP,BROADCAST,RUNNING,PROMISC,SIMPLEX,MULTICAST> metric 0 mtu 1500
		options=80000<LINKSTATE>
		ether 00:bd:ea:97:f9:01
		nd6 options=1<PERFORMNUD>
		media: Ethernet autoselect
		status: active
		groups: tap
		Opened by PID 5997



Hmm, there is another tap.
So I tried
Code:
 iohyve set RancherOS tap=tap1

Didn't help.
I went into Rancher with the console and enabled DHCP on eth0... still no use.

I'm running out of ideas. Can anyone help?
 

Zwck

Patron
Joined
Oct 27, 2016
Messages
371
For you, since the bridge with the internet connection is bridge0, which uses igb0, set iohyve setup net=igb0 and reboot the VM.

Generally I'd recommend the following:

Code:
sudo ifconfig tap0 destroy
sudo ifconfig tap1 destroy

iohyve stop <yourVM>
iohyve start <yourVM>



tap0/tap1 will be recreated as part of bridge0. I assume the VM that uses epair0a works fine, so there is no need to delete the bridge completely.







Also, it is not recommended to operate two physical NICs on one subnet.
 
Last edited:
Joined
Mar 30, 2015
Messages
32
For you, since the bridge with the internet connection is bridge0, which uses igb0, set iohyve setup net=igb0 and reboot the VM.

Generally I'd recommend the following:

Code:
sudo ifconfig tap0 destroy
sudo ifconfig tap1 destroy

iohyve stop <yourVM>
iohyve start <yourVM>



tap0/tap1 will be recreated as part of bridge0. I assume the VM that uses epair0a works fine, so there is no need to delete the bridge completely.







Also, it is not recommended to operate two physical NICs on one subnet.
But I think tap0 and tap1 are being used by my GUI VMs.
Still, thanks, you gave me an idea for the solution:

Code:
iohyve set RancherOS tap=tap3


Worked! :D
 

danjng

Explorer
Joined
Mar 20, 2017
Messages
51
So my container is getting an internal Docker network IP (172.x.x.x). The network is set to bridge0, but it's not getting a 192.x.x.x address, probably because of the whole container-in-a-container model?

Is there a way I can get my container to have an IP on the top level network (192.x.x.x)? I know Corral used to work in this fashion. Is this sort of thing possible with RancherOS?

Any assistance is greatly appreciated!
 

Zwck

Patron
Joined
Oct 27, 2016
Messages
371
So my container is getting an internal Docker network IP (172.x.x.x). The network is set to bridge0, but it's not getting a 192.x.x.x address, probably because of the whole container-in-a-container model?
Is there a way I can get my container to have an IP on the top level network (192.x.x.x)? I know Corral used to work in this fashion. Is this sort of thing possible with RancherOS?
Any assistance is greatly appreciated!



Yes, this is of course possible, and sometimes needed. However, I made the mistake in the beginning of assigning every single container its own static IP, which is not needed.

Typically, one just assigns a port to the container, and then you can access it via http://rancherosip:port. If you want to bind to a host IP, you can start the container with --net=host, or when you define the port mapping use 192.168.0.XX:1337:80 or similar. Better check the official Docker docs for more info.
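For example, two ways to put a container's web port on the host network (IPs, ports, and the nginx image are placeholders, not from the guide):

```shell
# Publish container port 80 on one specific host IP and port:
docker run -d -p 192.168.0.16:1337:80 nginx

# ...or share the VM's network stack entirely (no -p needed, but the
# container's ports must not clash with anything on the host):
docker run -d --net=host nginx
```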
 

bass_rock

Dabbler
Joined
Jul 9, 2016
Messages
13
Is anyone else seeing poor NFS performance with this setup? I have four nfs-client containers for different mount points. Could that be the cause of the issue?
 

Zwck

Patron
Joined
Oct 27, 2016
Messages
371

bass_rock

Dabbler
Joined
Jul 9, 2016
Messages
13
OK, I modified it so it is using one nfs-client container as specified. But I am still getting what I think is slow performance.

Code:
# dd if=/dev/zero of=./test.dd2 bs=1G count=1
1+0 records in
1+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 10.2069 s, 105 MB/s
# dd if=/dev/zero of=./test.dd2 bs=1G count=1
1+0 records in
1+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 8.93195 s, 120 MB/s
# dd if=/dev/zero of=./test.dd2 bs=1G count=1
1+0 records in
1+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 8.63112 s, 124 MB/s


Are there any mount options or settings I need to set to improve performance?
 

bass_rock

Dabbler
Joined
Jul 9, 2016
Messages
13
I should also mention I am using FreeNAS with an SSD cache drive on ZFS; I have read that sync writes can cause issues with NFS. I can also note that NZBGet downloads are incredibly slow, when they used to be lightning fast in a FreeNAS jail.
 