Tech

The Reviewer Roulette: Balancing Code Review Efforts in Software Engineering Teams

Tom Horak

June 14, 2023

Sometimes you win, sometimes you lose. That’s one mantra that might spring to mind when thinking about roulette. But in this post, we want to shed some light on a different kind of roulette, the Reviewer Roulette - and how our software teams are using it within the code review process to make sure everyone wins.

Our reviewer-roulette bot as seen by AI. Generated with Midjourney.

What are Code Reviews and why are they important?

Before explaining how you can get lucky with the reviewer roulette, let’s take a step back and look at the bigger picture: developing software is a complex endeavor where many things can go wrong. Every time software engineers change something in the source code, there is a risk that smaller or bigger mistakes accidentally find their way into the software. To catch them, it is common practice to put preventive measures in place, including automated tests, manual tests, and code reviews.

Code reviews follow the four-eyes principle: the changes an engineer has made to add a certain feature are put on display and reviewed by one or more other engineers, often asynchronously.

  • Are there any mistakes? 
  • Is everything working as required? 
  • Is the code written in a suitable and clear way? 
  • Could something be improved further? 

All these are questions that reviewers keep in mind while working through this process, which is often organized as a “pull request” or “merge request”. Typically, detailed (written) discussions and subsequent adaptations follow, until at some point all involved engineers agree that the changes can be “merged”. This means the changes go into the actual application and are later rolled out to the users.

The Roulette: Spreading the Load

One of the biggest challenges with code reviews is that they are time-consuming! This is true in two respects: first, reviewers have to spend time looking through the changes, understanding and testing them, and following the discussion on possible improvements. Second, reviewers might not always be available the moment a pull request is created; they might be working on their own features or still be busy with another code review. Both aspects show that deciding who will review a pull request matters a lot.

Typically, reviewers are either handpicked - which often leads to the same people being picked all the time - or everyone is assigned and engineers pick it up on a first-come, first-served basis - which can lead to delays if no one feels directly responsible.

Let’s put some numbers to it: in 2022, our web application teams had over 900 merged pull requests - that’s over 3 pull requests per working day for which we have to assign reviewers, who then spend multiple hours on them until the pull request can be merged.

It’s time to open the floor for the reviewer roulette! 

The goal of the roulette is to spread the review load more equally across the engineers in a team, while also ensuring clear assignments. Within our web application teams, also called the WAPP teams, we introduced this roulette in September 2021 and started to randomly assign two reviewers to each pull request. Specifically, we use Danger, a tool for defining custom checks on pull requests, to pick two engineers from the pool of potential reviewers for the proposed pull request in Bitbucket (the service where we host our code and where pull requests are created).
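The core of the roulette boils down to drawing two distinct names that are not the pull request author. Here is a minimal sketch of that selection logic - the `pickReviewers` helper and the engineer names are illustrative, not our actual Dangerfile, which additionally reads the author and reviewer pool from the pull request metadata:

```javascript
// Illustrative sketch of the roulette's core: draw two distinct
// reviewers at random, never including the pull request author.
function pickReviewers(engineers, author, count = 2) {
  const candidates = engineers.filter((name) => name !== author);
  // Fisher-Yates shuffle so every candidate is equally likely.
  for (let i = candidates.length - 1; i > 0; i--) {
    const j = Math.floor(Math.random() * (i + 1));
    [candidates[i], candidates[j]] = [candidates[j], candidates[i]];
  }
  return candidates.slice(0, count);
}

// Spin the roulette for a pull request opened by "dana":
console.log(pickReviewers(["alice", "bob", "carol", "dana", "erin"], "dana"));
// two random names, never "dana"
```

Shuffling the whole pool instead of drawing names one by one keeps the logic simple and avoids accidentally picking the same reviewer twice.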

This change was instantly well received by the team members. Not only did it ensure a fair distribution of the review load, it also turned out to be an effective tool for knowledge sharing. By assigning reviewers randomly, even engineers who weren’t yet familiar with a specific part of the application were prompted to take a deeper look ‘by accident’.

In short: everyone wins.

Screenshot of random distribution of reviewers

Fine-tuning the Roulette: Considering the Required Expertise

While we were happy with our initial version of the roulette, new challenges surfaced as the team kept growing and eventually split into two teams in February 2022. This meant not only more ‘players at the table’, but also changes in the team organization and an increased diversity of topics due to a larger feature set in the applications.

However, as both teams still worked with the same code base and overlapped functionality-wise, we didn’t want to run a separate reviewer roulette per team.

After internal discussions, we adapted the roulette to enforce cross-team reviewing, meaning that one reviewer is assigned from each team. The basic idea is quite simple: the reviewer from the same team can provide a more contextual review, as he or she knows the envisioned features and their requirements well. Meanwhile, the reviewer from the other team can provide an outside perspective and focus more on general code quality aspects.

In addition, the reviewing still served as a tool for fostering knowledge exchange between the teams. With these changes, we had the second version of our roulette running and kept happily playing it.
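The cross-team variant changes the draw from "two out of one pool" to "one out of each pool". As a sketch, with made-up team rosters and an illustrative helper rather than our real Dangerfile code:

```javascript
// Illustrative sketch of cross-team assignment: one random reviewer
// from each of the two teams, skipping the author wherever they sit.
function pickCrossTeam(teamA, teamB, author) {
  const pickOne = (pool) => {
    const candidates = pool.filter((name) => name !== author);
    return candidates[Math.floor(Math.random() * candidates.length)];
  };
  return [pickOne(teamA), pickOne(teamB)];
}

// One contextual reviewer from the author's team, one outside view:
console.log(pickCrossTeam(["alice", "bob", "carol"], ["dana", "erin"], "bob"));
// one name per team, never "bob"
```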

Fast-forward another six months - with the teams having grown further - and we increasingly noticed reviewer constellations where changes swept through the code review without meeting our own quality standards. Most often, this happened because certain expertise was missing, e.g., when two recently joined members were assigned. This motivated us to change the roulette once again.

Now, one reviewer is picked from a pool of senior engineers and one additional reviewer from the remaining engineers. This increased the overall quality of reviews, but comes with the tradeoff of a higher review load for the senior developers. Nevertheless, among the senior engineers the load remains fairly balanced. In addition, this version of the roulette honors the responsibility of senior engineers to actively share their knowledge with their peers.

In fact, we noticed that our senior engineers actually liked being strongly involved in the review process, and everyone benefited from it.
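The seniority-aware variant can be sketched the same way: one draw from the senior pool, a second draw from everyone else (helper and names again illustrative, not our actual implementation):

```javascript
// Illustrative sketch: one reviewer from the senior pool, a second one
// from all remaining engineers, with the author and the already-picked
// senior excluded so the two reviewers are always distinct people.
function pickWithSeniority(seniors, allEngineers, author) {
  const pickOne = (pool) => pool[Math.floor(Math.random() * pool.length)];
  const senior = pickOne(seniors.filter((n) => n !== author));
  const rest = allEngineers.filter((n) => n !== author && n !== senior);
  return [senior, pickOne(rest)];
}

// Guarantees at least one senior reviewer on every pull request:
console.log(
  pickWithSeniority(["alice", "bob"], ["alice", "bob", "carol", "dana"], "carol")
);
```

Note that the second pick deliberately keeps the other seniors in the pool, so the review load still averages out across the whole group over many pull requests.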

What’s next?

As mentioned in the beginning, code reviews are a high-stakes process for ensuring high-quality software. For us, the reviewer roulette is an important part of this process, enabling us to balance its quality and speed. While the roulette is here to stay, its specifics will continue to evolve - always in sync with how our teams evolve. In fact, due to a further increase in both topics and team members, we are already planning to adjust the roulette again towards a mixture of team-based and expertise-based assignment, e.g., by preferring reviewers from the same team while ensuring that at least one senior person is involved.
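One possible shape of that planned logic - purely hypothetical at this point, since we haven’t built it yet - would be to draw the first reviewer from the author’s own team and only reach for the senior pool if that first draw wasn’t a senior already:

```javascript
// Hypothetical sketch of the planned mixture: prefer same-team
// reviewers, but guarantee at least one senior engineer among them.
function pickPlannedReviewers(ownTeam, seniors, author) {
  const pickOne = (pool) => pool[Math.floor(Math.random() * pool.length)];
  const first = pickOne(ownTeam.filter((n) => n !== author));
  // If the first pick is already senior, the second can stay in-team;
  // otherwise the second reviewer must come from the senior pool.
  const secondPool = seniors.includes(first) ? ownTeam : seniors;
  return [
    first,
    pickOne(secondPool.filter((n) => n !== author && n !== first)),
  ];
}
```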

No matter how you prefer to play it, the reviewer roulette is a handy tool for improving code reviews, and we highly recommend it. It’s a game where you cannot lose.
