Code reviews are carried out on GitHub pull requests.
Any Juju core team member can review any pull request. However, as the branch author, it is your responsibility to ensure that the reviewer sufficiently understands the changes.
approval
All branches need an approved review from another team member before being landed.
Approved branches can be merged by a team member using the $$merge$$ comment.
The exception to this rule is simple forward or back porting of a fix, or merging a previous release branch forwards.
conflict
Sometimes, as happens when more than one person is involved, there can be conflicting opinions on how to do things.
For very minor things, most of the time it doesn't matter. As long as the code works and is well tested, we're fine.
Sometimes it goes deeper than that. There are issues of code consistency to consider, and patterns of development within the codebase. Examples might include how we version API server facades.
The best way to resolve these issues is to get a call going between the branch author and reviewer to work through the issues.
blocking landing
In the severest of cases, a reviewer can block the landing with a -1 comment.
In these situations, if the conflict can't be resolved between the author and reviewer, escalate to the appropriate managers.
A blocked branch should not be landed until the reviewer who gave the -1 has removed the block.
We've talked in the past about whether, once approved, you can hit the green merge button if the changes are only to CI Python test files, as none of the functional code or unit tests have been touched.
So the issue is that I don't think human code review would have caught the problem with your change, as it looks valid. The real issue is whether it can be passed to Jenkins correctly.
Our options would be:
Split out a staging Jenkins that we could use for iterating on patches to the jenkins config. And then only allowing a bot to actually merge that to master.
Make it possible to run up your own Jenkins for testing.
Do as we are, and iterate on the primary Jenkins instance. In which case, you've already rolled out the changes by the time you put them up for review.
Regarding the CI Python tests, I see no reason to do a unit test run for a PR like that, as none of the CI tests get run, so none of the changed code will get exercised (if we add a CI test or two to the check/merge pipeline, this changes things).
My suggestion would be to hit the merge button on purely Python test-change PRs and save some time.
I am 100% +1 for PRs only for juju-qa-jenkins. It's been a while since I've pushed directly due to bad old habits, but I have done it.
If anything, the PRs allow someone else to know what's going on.
For simple validation that doesn't rely on the author having already deployed their changes, we could have a CI step (Travis? or self-hosted, as it is a private repo) that does jenkins-jobs test -r jobs/ci-run (and likewise for jobs/infrastructure, jobs/github, and jobs/mergejobs) to at least show the changes are valid.
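A sketch of what such a validation step could look like, assuming a Travis-style config; the file layout and job paths here mirror the directories mentioned above, but the exact setup is illustrative, not an existing configuration:

```yaml
# Hypothetical Travis CI config: render Jenkins Job Builder YAML on every PR.
language: python
install:
  - pip install jenkins-job-builder
script:
  # "test" renders the job definitions without touching a live Jenkins,
  # so it proves the YAML parses and macros expand, nothing more.
  - jenkins-jobs test -r jobs/ci-run
  - jenkins-jobs test -r jobs/infrastructure
  - jenkins-jobs test -r jobs/github
  - jenkins-jobs test -r jobs/mergejobs
```

This would catch malformed YAML and missing macro references, but not whether the resulting jobs actually do the right thing when run.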
(breaking out my replies across posts to stay relevant to the content).
The problem with having a local or testing Jenkins is that you really need to run the job to make sure it works as expected; jenkins-jobs test ... will make sure the YAML makes sense, but not that what it does is correct.
We also have the added complexity of having an "internal" and "external" Jenkins instance that is split on the PR/CI responsibility line.
A "simple" but perhaps time-consuming process step would be:
If you change a job (i.e. after it's landed and deployed), you must make sure the next run (or two) is successful.
This may require re-running a previous run if needed, so you're not waiting around for hours.
This would at least catch any easy errors. Care would be needed where, if you change a "core" script (e.g. the lxd-runner), you would need to check a handful of jobs, some of which you probably weren't interested in changing.
On that note, I feel I've let the Jenkins stuff get out of hand, and it's really quite hard to understand what's happening with all the different scripts and macros that call other macros that slurp in scripts that have placeholders passed in from the original job many levels up.
I really wish I had found a way to make this nicer to use.
I would prefer that we use GitHub's review functionality, e.g. click "Review changes", then "Approve" or "Request changes", rather than using raw comments. That will play nicer with downstream tooling, such as Trello.
There were at least 4 things wrong with my original proposal, but it all looked reasonably sane.
Having non-experts do manual review might help them get better, but it doesn't lead to catching a lot of issues.
I think the big key is to iterate fast, and be on the hook for monitoring if things are working as expected.