That error message is telling you that the model you’re deploying to isn’t a Kubernetes model. Once you’ve deployed Charmed Kubernetes, you need to run juju add-k8s to register the cluster with the controller. You can then create a k8s model.
The model you are deploying to is the model that manages the k8s cluster itself, not workloads inside the cluster; i.e. the underlying cloud is MAAS, and Charmed Kubernetes is itself a workload deployed to that MAAS cloud.
What you need to do is use the add-k8s command to create a new “cloud” on the controller representing the k8s cluster. First, make sure you’ve created a default Storage Class. Then, assuming your kubeconfig is in the standard place, run juju add-k8s myk8s. Alternatively, you can pipe the kubeconfig to add-k8s: cat kubeconfig | juju add-k8s myk8s. See juju help add-k8s for more examples.
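The registration steps above might look like this in practice (the cloud name myk8s and the kubeconfig path are just examples):

```shell
# Check that one Storage Class is marked "(default)" before registering
kubectl get storageclass

# Register the cluster using the kubeconfig in the standard location (~/.kube/config)
juju add-k8s myk8s

# Or pipe a specific kubeconfig file to add-k8s instead
cat kubeconfig | juju add-k8s myk8s
```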
Once the k8s cluster is registered with the controller, you can add a model on it and deploy your k8s charms to that new k8s model. For gitlab, you’ll also need to deploy the mariadb charm and relate the two so gitlab has a suitable database backend.
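A sketch of that workflow, assuming the cloud was registered as myk8s; the model name and the exact gitlab/mariadb charm names are assumptions, so check the store for the charms you actually want:

```shell
# Create a model backed by the registered k8s cloud
juju add-model mymodel myk8s

# Deploy gitlab plus mariadb as its backend, then relate them
# (charm names here are illustrative)
juju deploy gitlab-k8s
juju deploy mariadb-k8s
juju relate gitlab-k8s mariadb-k8s

# Watch the units come up
juju status
```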
If you have a Storage Class which has been set up as the default for the cluster, the --storage option should not be needed. Hopefully you’re using Juju 2.7.1.
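If your cluster has a Storage Class but it isn’t marked as the default, the standard Kubernetes annotation can be applied with kubectl; the Storage Class name below is an assumption, substitute your own:

```shell
# Mark an existing Storage Class as the cluster default
kubectl patch storageclass my-storage-class \
  -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'
```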
Maybe @tvansteenburgh can help with the k8s storage class set up with CK and MAAS?
I added the storage to k8s the regular way I do it without Juju, by just using kubectl and Rook (rook.io). I am now able to add the k8s substrate, it seems, and things are humming along. It would have been ideal to use Juju to set up the storage, though.
One thing to check is that the client from which you bootstrap has connectivity to the machine which has been started to run the Juju controller. The client needs to be able to connect to port 17070.
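A quick way to verify that connectivity from the client (the placeholder address is an assumption; use the machine’s real IP from your MAAS UI):

```shell
# Check that the controller's API port is reachable from the client
nc -vz <controller-machine-ip> 17070
```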
If all looks ok on that front, you can run bootstrap with --keep-broken and Juju will keep the instance running even after bootstrap fails. Then you can ssh in and inspect things like the cloud-init log to see if there are any errors which would point to why the Juju agent might not be running.
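That debugging flow might look something like this; the cloud and controller names are placeholders, and the log path is the standard cloud-init output location on Ubuntu:

```shell
# Bootstrap, keeping the instance alive even if bootstrap fails
juju bootstrap --keep-broken mymaas mycontroller

# ssh in (get the machine's address from MAAS) and inspect the logs
ssh ubuntu@<machine-address>
less /var/log/cloud-init-output.log
```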