We’re working through getting CI and stuff set up for the 1.3.0 Bigtop charm refresh, and I’d like to rename the hadoop-slave charm to hadoop-worker.
I appreciate this means a bit more work for me on the bundle update side, but when I’m demoing stuff I’m uneasy explaining the slave nodes. I realize master->slave naming is very historical in the IT world, but that doesn’t mean we have to stick with it.
Ahoy @magicaltrout, I’m +1 on this, but I have concerns about upgrading existing deployments. I’ve never tried going from charm A -> B with the expectation that B is actually an upgrade of A.
Perhaps, at least in the big data space, this can be handled by a staged upgrade, as in:
juju deploy hadoop-slave -n 3
… time passes …
juju remove-unit hadoop-slave/2
juju deploy hadoop-worker
… make sure the hadoop cluster recognizes hadoop-worker and rebalances …
juju remove-unit hadoop-slave/1
juju add-unit hadoop-worker
… make sure hdfs is still working as expected …
juju remove-unit hadoop-slave/0
juju add-unit hadoop-worker
… make sure hadoop-worker is now filling the hdfs/yarn worker roles (datanode/nodemanager) that hadoop-slave used to …
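For those “make sure” steps in between, here’s roughly what I’d run at each stage (a sketch: the namenode/resourcemanager application names are my guess at the bundle’s naming, and running the checks as the hdfs/yarn users is an assumption about how the charms lay things out):

# Confirm the new worker unit landed and settled.
juju status hadoop-worker

# From the namenode, check the new DataNode registered and that HDFS
# reports no missing or under-replicated blocks after the rebalance.
juju ssh namenode/0 "sudo su hdfs -c 'hdfs dfsadmin -report'"

# From the resourcemanager, check YARN sees the new NodeManager as RUNNING.
juju ssh resourcemanager/0 "sudo su yarn -c 'yarn node -list -all'"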
Fair point on the upgrade; I’ll test it later in the week. Assuming it works, I’d also advocate at least a release or two carrying a deprecation notice, to give people fair warning. I was just making sure I wasn’t about to do something stupid (that will happen anyway, etc.) or knowingly break something.
Yeah, I’m +1 on the change as well, but I do think there’s an issue in the upgrade step: juju upgrade-charm --switch tends to have a fit if the charm names aren’t the same. So it’ll definitely need some testing and a solid path forward.
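For the record, the cross-name switch I’d expect to choke is something like the following (the store URL is a placeholder for wherever hadoop-worker ends up being published):

# Try to swap the existing application over to the renamed charm in place.
juju upgrade-charm hadoop-slave --switch cs:hadoop-worker

# If the switch is refused or hooks error out, the staged
# remove-unit/add-unit dance above is the fallback.
juju status hadoop-slave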