Migrating from 2.9 to 3.0

See also: How to upgrade your Juju deployment from 2.9 to 3.x

Juju 3 is out and people are wondering how to upgrade existing 2.9 models to 3.x. The answer is model migration. Upgrading in place is possible in some, but not all, circumstances and can be more prone to losing access to multiple models if something goes wrong.

The general approach for model migration is simple on the surface:

  • bootstrap a 3.x controller
  • ensure the 2.9 controller and all hosted models are at version 2.9.37 or higher
  • migrate models across from old 2.9.x controller to new 3.x controller
  • upgrade the migrated models to 3.x

Note: to upgrade the migrated models, you’ll need the Juju 3.0.2 client (currently in 3.0/candidate).
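A rough sketch of that flow (cloud, controller, and model names below are placeholders, not the actual names in your environment):

# bootstrap a new, empty 3.x controller
juju bootstrap mycloud new-3-0-controller

# make sure the 2.9 controller and each hosted model are on 2.9.37 or higher
# (the Version column in juju status shows the model's agent version)
juju status -m old-2-9-controller:mymodel

# migrate a model from the 2.9 controller to the 3.x controller
juju switch old-2-9-controller
juju migrate mymodel new-3-0-controller

# upgrade the migrated model (requires a 3.0.2+ client)
juju upgrade-model -m new-3-0-controller:mymodel --agent-version=3.0.2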

However, there’s a bit more to it - the 2.9 controller will have:

  • controller configuration
  • clouds
  • cloud config
  • users
  • user credentials
  • user permissions (access grants to the controller itself, models, clouds)

that need to come across to the new 3.x controller. This is where the juju-restore utility comes in handy. This utility has a --copy-controller option which can be used to “clone” the core data from an existing controller into a new, empty controller.

Note: the new 3.x controller retains its own CA certificate and other such set-once config - only config that can be modified by the end user is copied across.

To get started, you’ll need the juju-restore utility found here.

Then:

  1. Create a backup of the 2.9 controller using juju create-backup
  2. Bootstrap a new 3.x controller and do not add any models to it (it must be empty)
  3. juju scp the backup tarball and juju-restore utility to the 3.x controller
  4. juju ssh into the 3.x controller
  5. Run juju-restore --copy-controller <backup-tarball>
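A rough sketch of those steps (assuming the backup tarball ends up on the client machine and that machine 0 of the controller model is the new 3.x controller; controller names are placeholders):

# 1-2. back up the 2.9 controller; the new 3.x controller must stay empty
juju create-backup -m old-2-9-controller:controller

# 3. copy the tarball and the juju-restore binary to the 3.x controller machine
juju scp -m new-3-0-controller:controller <backup-tarball> 0:
juju scp -m new-3-0-controller:controller juju-restore 0:

# 4-5. ssh in and clone the core controller data
juju ssh -m new-3-0-controller:controller 0
chmod +x ./juju-restore
./juju-restore --copy-controller <backup-tarball>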

The admin user who maintains the 2.9 and 3.x controllers has access to both. They can now start migrating models using juju migrate <modelname> <3.x-controllername>.

Existing (ordinary) users who may have created the models, or who had been granted access to the models, will be prompted to transition to the new controller the next time they run juju status or some other command against the model.

Note: for the transition of users to the new 3.x controller to be seamless (interactively managed by Juju), the old 2.9 controller needs to be kept up and running so that the users’ Juju client can connect to it and be redirected to the migrated model(s) on the new 3.x controller.

Once a 2.9 model has been migrated to the new 3.x controller, the model itself can be upgraded to the same version as the controller. Say the controller is running Juju 3.0.2; then juju upgrade-model -m <modelname> --agent-version=3.0.2 will upgrade the model.

Note: we expect Juju 3.1 to automatically upgrade the migrated model to match the controller version, without needing the --agent-version argument.

The main remaining caveat is that ordinary users cannot easily get access to the new 3.x controller until the admin user migrates at least one model across. This is because the ordinary user’s Juju client does not know about the new controller connection details (until a migration redirection happens). The way around this, not yet implemented, would be for the juju add-user command to be able to regenerate the registration string a second time for an existing user - that user could juju register and set up access to the new controller and migrate their own models across. We can look to implement this as well.

If you try this and notice for your scenario anything has been overlooked in terms of copying across the core 2.9 controller details to the new 3.x controller, let us know and we’ll look into it.

@hallback let’s try it this week or next?

Thanks @wallyworld, brilliant summary! I was successful in migrating a 2.9.37 model to a 3.0.2 controller. Also nice to finally have tested juju-restore.

Some things I noticed on my environment in this early stage of testing:

  • I ran snap refresh juju --channel=3.0/candidate before starting (note: no --classic)
  • It is not possible to upgrade the model to 3.x on the new controller if any machines in the model are running bionic (or lower). This is probably mentioned in some release notes and is according to plan I guess, but beware.
  • I used LXD here and initially bootstrapped the two controllers using different users, which led to a credential mismatch (i.e. certificate mismatch, see ~/.local/share/juju/credentials.yaml). Bootstrapping the new controller with the same credentials as the old one fixed this (see the sketch after this list). May be worth thinking about when doing this in production.
  • When in the migrated model on the 3.0 controller, I needed to run the following to be allowed to upgrade the model for the time being: juju upgrade-model --agent-version 3.0.2 --agent-stream proposed
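For anyone else hitting the LXD credential mismatch, a rough sketch of the workaround (cloud, controller, and credential names are placeholders):

# see which credential the client holds for the cloud
juju credentials localhost --format yaml

# bootstrap the new 3.x controller with the same credential the 2.9 controller used
juju bootstrap localhost new-3-0-controller --credential <same-credential-name>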

Great work 🙂

OMG, what happens to those models? We have some models running stuff on trusty (redis). But since there isn’t any focal charm for redis available, would we need to remove all systems with bionic (or lower) to be able to perform the upgrades?

Glad it worked!

Yeah, 3.0 has dropped support for bionic as you have noticed. There’s a line in the release notes but not something that was widely advertised prior to release as the decision was made late.

Since the Juju 3.0 snap is strictly confined, --classic is not needed as you noted.

With the credential mismatch, the juju-restore utility does not copy across the controller model credential from the 2.9 controller which is supposed to allow you to bootstrap the 3.0 controller with a different credential (but on the same cloud). Can you please describe how the issue manifested itself? What errors did you see after running which commands etc? Whatever the issue is, we should be able to fix it.

With upgrading the migrated model, you do currently need to use --agent-version but we plan on fixing this so that the model will by default just select the same version as the host controller.

Yes, you do need to upgrade any bionic or earlier machines to focal or jammy. And unfortunately, if a charm does not support a recent version of Ubuntu, that makes it hard. The juju upgrade-machine command does support a --force option which allows the machine distro to be upgraded even if the hosted charms do not support focal etc. But there’s no guarantee that the older charm will work on a newer Ubuntu version.

There are other redis charms, e.g. https://charmhub.io/omnivector-redis, but that one only supports bionic. Ultimately, a more modern redis charm is needed. Canonical is writing a supported redis charm but it’s just for k8s as far as I know.

Totally. I think we could roll our own redis vm charm for the purpose.

Too bad Canonical is mainly doing K8s charming nowadays.

@tmihoc (FYI for doc)

The Juju 2.9 and 3.0 clients will get an update to the register command.

The updated command will be in 2.9.38 and 3.0.3 releases (or you can use the relevant edge snaps). The changes are client only, so no controller update needed.

juju register --replace <token>

The optional --replace option allows the connection details for a new controller to replace those of an existing (named) controller in the user’s client config.

So the admin of the new controller would do this:

juju change-user-password <user> --reset

which will print a registration token. The user then runs the register --replace command and has the option to replace one of their existing controller configs. I.e. if the 2.9 controller they were using was called “staging” and a new 3.0 controller was launched and their models were already migrated across, they could replace their “staging” config to point to the new controller. Of course, they don’t have to replace anything; choosing a different name for the new 3.0 controller is fine too; in that case they would have access to both under different names.
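A rough sketch of the end-to-end flow (user, controller, and token values are placeholders):

# admin, connected to the new 3.0 controller: generate a registration token for the user
juju change-user-password alice --reset

# the user, on their own machine, registers against the new controller
juju register --replace <token-from-admin>
# when prompted for a controller name, they can reuse the old name (e.g. "staging")
# so the existing entry in their client config is replaced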

The common scenarios would be:

  1. The user wants to migrate their own models from the 2.9 to the 3.0 controller
  • the users would maintain config for both controllers under different names
  2. The admin migrates all the models and later informs the user(s)
  • the users would simply replace the original controller config with the new controller

Scenario 1 requires the old 2.9 controller to stay running until all models are moved across, and even afterwards until all users have switched across to the new controller (the old controller would issue redirects if someone tried to access the migrated models).

To hasten the ability to shut down the old controller, the admin would run the change-user-password command above to generate registration tokens for the new controller and get all users to register.

May I suggest writing about the changes on the client side? It was shocking to see that Canonical decided to change some commands drastically. Needless to say, that decision triggers a significant re-work in our scripts/pipelines etc. E.g.:

juju version
2.9.37-ubuntu-amd64

juju bootstrap --bootstrap-series=focal --no-gui --constraints tags=juju --constraints mem=2G onibaba c0
Creating Juju controller "c0" on onibaba/default
Looking for packaged Juju agent version 2.9.37 for amd64
Located Juju agent version 2.9.37-ubuntu-amd64 at https://streams.canonical.com/juju/tools/agent/2.9.37/juju-2.9.37-linux-amd64.tgz
...
Bootstrap complete, controller "c0" is now available
Controller machines are in the "controller" model
Initial model "default" added

juju run -m controller --machine 0 -- uptime
 10:19:50 up 2 min,  0 users,  load average: 2.16, 0.96, 0.37

sudo snap refresh --channel=3.0/stable juju
juju (3.0/stable) 3.0.0 from Canonical✓ refreshed

juju run -m controller --machine 0 -- uptime
ERROR option provided but not defined: --machine

juju exec -m controller --machine 0 -- uptime
 10:20:48 up 3 min,  0 users,  load average: 0.79, 0.78, 0.34

@langyal The changes are listed in the release notes: Juju | Roadmap & Releases

I know. I just don’t understand why…

I was trying to upgrade an old model with some stopped containers when the upgrade of the model went sideways.

So am I forced here to either start the containers or delete this model to be able to proceed?

Not sure if this is a known case, but I thought I’d let you know that it might be tricky in an environment with many models.

Model upgrades require that all hosts where juju agents are deployed are running (not stopped) so that the new upgraded jujud can be deployed as part of the upgrade. In Juju 2.9, a lot of the upgrade logic was client side, but in 3.0 everything is managed on the controller, so there’s scope to run a bunch of pre-checks and abort the upgrade if needed. If we don’t check for stopped machines, it’s worth filing a bug.
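As a rough pre-flight check before upgrading (the model name is a placeholder), look for machines whose agents are reported as down and start them before running the upgrade:

# any machine showing as "down"/stopped needs to be started first
juju status -m mymodel
juju upgrade-model -m mymodel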

In any major Juju version change, we take the opportunity to add significant new functionality, which sometimes means breaking compatibility. To help ease the transition, the Juju 2.9 CLI client is compatible with both 2.9 and 3.0 controllers, so you can use that until your scripts are updated.

The CLI changes you refer to are for actions - here’s an earlier post explaining the new features. In 2.9 or earlier, it was opt in. In 3.0, the new behaviour becomes the default.
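For anyone updating scripts, a rough before/after of the renames (unit and action names are placeholders):

# Juju 2.9
juju run --unit mysql/0 -- uptime          # run an arbitrary command on a unit
juju run-action mysql/0 backup --wait      # run a charm action and wait for the result

# Juju 3.0
juju exec --unit mysql/0 -- uptime         # arbitrary command (was "juju run")
juju run mysql/0 backup                    # charm action (was "juju run-action"; waits by default)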

Thanks for the answer. Compatibility is key in production deployments. However, I understand that you want to change things from time to time - still, it’s weird to see a command changing its context…

@wallyworld I read this post and came away with the misunderstanding that the 3.0 client would work reliably for maintaining a 2.9 controller/model, which turns out not to be the case. E.g. deploying charm-hub charms to a 2.9 model appears to fail (see Error series required for charm-hub charms).

The Juju 3.0 CLI client cannot operate a 2.9 controller since in 3.0, older APIs have been removed.

However, a Juju 2.9 CLI client can operate both a 2.9 and a 3.0 controller. So if you still need to work with both versions, you’ll need to use the 2.9 CLI. Or you can install both the 2.9 and 3.0 snaps in parallel (with aliases like juju_29 and juju_30) and use the relevant CLI.
