hi,
I am using Juju with LXD clustering. I set up three virtual machines (node1, node2, node3) as nodes on the same VirtualBox host.
I created the cluster with lxd init, then installed Juju on node1 (snap install juju --classic). From node1 it is not possible to create a controller on node2 or node3. At the same time, deploying applications works fine on node1, but it is not possible to deploy anything on node2 or node3.
Hi there, thanks for trying out the lxd cluster work. We’ve got a post walking through setup here:
Typically you’d set up the cluster and have a trust password configured on the nodes. From there you’d have the Juju client on another machine (your main laptop?) that can reach the cluster over the network, and go through the add-cloud steps for talking to the cluster. Note that it’s not the same as bootstrapping “localhost”. If you go through add-cloud and add the cluster, then spreading the deployed things (controllers, workloads, etc.) is handled by the cluster itself. Juju just asks the cluster API for a new machine and it comes up somewhere in the cluster.
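A minimal sketch of those add-cloud steps (the cloud name `lxd-remote` and the endpoint are placeholders; use your own cluster details):

```shell
# Register the remote LXD cluster as a cloud. The interactive
# prompts ask for the cloud type (lxd) and the API endpoint,
# e.g. https://<cluster-node-ip>:8443.
juju add-cloud lxd-remote

# Store the trust credential for that cloud.
juju add-credential lxd-remote

# Bootstrap a controller; the cluster decides which node hosts it.
juju bootstrap lxd-remote
```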
Enter the API endpoint url for the remote LXD server: https://10.55.60.244:8443 — at this point I used one IP address from a cluster node. Then I added a credential for ‘lxd-remote’, but when I ran
juju bootstrap lxd-remote, the output was the following:
root@node1:~# juju bootstrap lxd-remote
Creating Juju controller “lxd-remote-default” on lxd-remote/default
Looking for packaged Juju agent version 2.7-beta1 for amd64
No packaged binary found, preparing local Juju agent binary
To configure your system to better support LXD containers, please see: https://github.com/lxc/lxd/blob/master/doc/production-setup.md
Launching controller instance(s) on lxd-remote/default…
juju-e5250d-0 (arch=amd64)
Installing Juju agent on bootstrap instance
It has been stuck on that last line for almost 15 minutes!!! Nothing is proceeding. Any kind of help with this one? I even tried with --to zone=“node_name” as well … nothing happened !!
Hmm, so it looks like Juju is trying to get it going, and that will involve LXD pulling down the image for the machine. I’d check the LXD logs on the cluster instance and see what’s up. Another thing is to look at the debug output via juju bootstrap --debug and see if there are any more details in there. The first time you bring up a machine on a cluster it might take a while, depending on things.
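A few commands that usually help at this stage (the instance name below is taken from the earlier bootstrap output and may differ on your cluster):

```shell
# Re-run bootstrap with verbose output.
juju bootstrap lxd-remote --debug

# From any cluster member, check which node the instance landed on
# and whether it is running.
lxc cluster list
lxc list

# Inspect the instance's LXD log for image-download or start errors.
lxc info juju-e5250d-0 --show-log
```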
juju bootstrap works on the existing Juju node, but when trying to use a different node, the following log comes out via --debug:
Attempting to connect to 10.101.153.77:22
18:56:06 DEBUG juju.provider.common bootstrap.go:576 connection attempt for 10.101.153.77 failed: ssh: connect to host 10.101.153.77 port 22: No route to host
18:56:14 DEBUG juju.provider.common bootstrap.go:576 connection attempt for 10.101.153.77 failed: ssh: connect to host 10.101.153.77 port 22: No route to host
18:56:22 DEBUG juju.provider.common bootstrap.go:576 connection attempt for 10.101.153.77 failed: ssh: connect to host 10.101.153.77 port 22: No route to host
18:56:30 DEBUG juju.provider.common bootstrap.go:576 connection attempt for 10.101.153.77 failed: ssh: connect to host 10.101.153.77 port 22: No route to host
18:56:38 DEBUG juju.provider.common bootstrap.go:576 connection attempt for 10.101.153.77 failed: ssh: connect to host 10.101.153.77 port 22: No route to host
18:56:46 DEBUG juju.provider.common bootstrap.go:576 connection attempt for 10.101.153.77 failed: ssh: connect to host 10.101.153.77 port 22: No route to host
As far as I know, Juju is installed on only one node in the cluster. So I have Juju on node1, but node2 joined the cluster without Juju.
Any suggestions?
That looks a lot like we are being told an IP address of a container which is not routable from the machine you are running “juju” on (e.g. you are talking to an LXD agent that is exposed on the network, but the cluster creates containers that are only exposed on a local bridge inside the machine).
Yes, as @jameinel points out, it looks as though you have containers deploying onto the default LXD bridge (lxdbr0) inside the nodes, which means they do not have ingress from outside.
If this is the issue, the solution is to manually bridge whatever device on the nodes is providing external connectivity, and specify that as the bridge to use when you run “lxd init”. Do this on each node of the cluster so they are homogeneous.
There is more detail on manually bridging for LXD clusters here.
First I created an LXD bridge (lxc network create lxdbr0), then assigned that bridge through lxd init on all nodes. Is there any difference between a dynamically and a manually created bridge network?
But when I try to bootstrap on the juju5 node from the juju1 node, it shows the following debug output:
Attempting to connect to 10.78.54.137:22
10:52:09 DEBUG juju.provider.common bootstrap.go:576 connection attempt for 10.78.54.137 failed: ssh: connect to host 10.78.54.137 port 22: No route to host
It is also possible to create a normal LXC container on the other two cluster nodes, but it causes problems when I use Juju.
I am totally confused here; how should I continue?!
This is a NAT bridge (ipv4.nat: true) using lxdbr0 which is local to only the machine it is on. Which means the containers can get to the outside world (via NAT), but the outside world cannot see them.
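You can confirm this on a node by inspecting the bridge configuration:

```shell
# Show the bridge config; ipv4.nat: "true" means containers behind
# it are NATed and not reachable from outside the node.
lxc network show lxdbr0
```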
If you wanted the containers externally visible, then you would need to create a bridge on your network interface (ens0, etc) and then use that as the bridge, instead of lxdbr0. (usually we use a name like br-ens0 to indicate it is the bridge with ens0 on it.)
To be honest, this is confusing. My cluster is connected with lxdbr0, so why do I need another bridge network? If I can create an LXC container in the cluster without Juju, why can’t Juju do the same within the same lxdbr0 network?! Sorry to say, I am blank here now.
By the way, thank you for your reply; I just don’t understand why this has me so confused!!
They are not on the same network. There is a separate lxdbr0 network inside each node.
The LXD servers themselves are connected by the network that nodes are on. So the cluster can coordinate and create containers, but those containers are on bridges inside the nodes that the world outside knows nothing about.
So you need to manually bridge the node devices that communicate with the outside world, so that traffic can be routed to the containers from outside.
Are the nodes running Bionic? If so, post the contents of one of the /etc/netplan/{your config}.yaml files and we can lend a hand regarding how to set up the bridge.
Finally, it’s done. Thanks to all. Here is what I did, written up for the next person in my position:
If you are using VirtualBox: go to VM Settings -> Network -> Advanced -> Promiscuous Mode -> Allow All.
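The same setting can be applied from the host command line with VBoxManage (the VM name "node1" and adapter number 1 are examples; use your own VM names and the adapter that faces your network):

```shell
# Allow the VM's first network adapter to receive all traffic,
# which bridged containers inside the VM need.
VBoxManage modifyvm "node1" --nicpromisc1 allow-all
```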
Install Juju: snap install juju --edge --classic
Then change /etc/netplan/(yaml file name).yaml on every node to:
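The original file contents did not survive here, but a typical netplan bridge config for this setup looks like the following (the interface name enp0s3 and the use of DHCP are assumptions; adjust to your VM):

```yaml
network:
  version: 2
  renderer: networkd
  ethernets:
    enp0s3:            # the VM's external interface; yours may differ
      dhcp4: false
  bridges:
    br0:
      interfaces: [enp0s3]
      dhcp4: true      # or a static address on your LAN
```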
Then run ‘sudo netplan apply’. You will have to log in again using the new IP address assigned to br0.
[Keep in mind: do this on all nodes before initializing the cluster on the first node.]
Then run lxd init on the first cluster node and use this br0 network. Do the same on every node.
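If the cluster already exists, you don’t have to re-run lxd init; the default profile can be pointed at br0 directly (this assumes the profile has the usual eth0 nic device):

```shell
# Repoint the default profile's NIC at br0 instead of lxdbr0;
# profile changes apply cluster-wide.
lxc profile device set default eth0 parent br0

# Verify the change.
lxc profile show default
```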
Then do the other things as documented, like juju add-cloud and so on…
Have fun with Juju on an LXD cluster.