You are doing code reviews wrong

Šimon Tóth · Published in ITNEXT · Sep 28, 2021 · 5 min read


Doing code reviews correctly is hard. In this short article, I will guide you through the goals of code reviews and talk about the incremental levels of implementing code review at your organisation.

The non-goals of code reviews

I will start with what code reviews are not.

Code reviews are not design reviews. If you often discuss design during code reviews, there is a systemic failure somewhere in your team processes. Writing code is implementing an agreed-upon design. Therefore the design shouldn't be up for debate when reviewing the code.

Code reviews are not tests. If you are spending time determining whether a code change works as expected, it is a clear signal that your test coverage is insufficient and that you are wasting valuable developer time.

The main goals of code reviews are speeding up development, improving knowledge distribution, making better use of domain experts and removing friction. We will discuss these goals as we go through the different levels of code reviews.

Level up your code reviews

If you are not doing code reviews, or feel that you are not getting great value out of them, the following levels will allow you to build up your review pipeline.

Level 0 — Fully automated review

Level 0 code reviews are more of a gating technique as no humans are involved in the review process.

  • checking and enforcing code formatting
  • checking for code style violations
  • checking for failing tests
  • checking for errors reported by static checking tools

Research which tools can be run automatically for your languages and set them up. Taking the time to set up these checks is almost always worthwhile since the benefit is multiplicative (number of developers) and compounding. I would also recommend taking the time to set up a way to run these checks locally so that each developer can check whether their code change is compliant before even sending it for a code review. If you are using git, have a look at the pre-commit framework as a starting point.
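As a rough sketch of what that can look like (the hook choices and pinned versions here are assumptions; pick whatever matches your stack), a .pre-commit-config.yaml for a C++ project might contain:

# .pre-commit-config.yaml (hypothetical example)
repos:
  - repo: https://github.com/pre-commit/mirrors-clang-format
    rev: v17.0.6   # pin to the clang-format version your project uses
    hooks:
      - id: clang-format
  - repo: https://github.com/pre-commit/pre-commit-hooks
    rev: v4.5.0
    hooks:
      - id: trailing-whitespace
      - id: end-of-file-fixer

Each developer runs pre-commit install once to register the hooks locally, and pre-commit run --all-files checks the whole repository; the same command can double as the Level 0 gate in CI.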

Reviewer: automation

Goals: speed, decrease friction by unification

Cost: negligible

Level 0.5 — Automated review with human judgement

Setting up a hard threshold on some of the automated checks might be detrimental to code quality.

For example, if a code change introduces a new feature, it might be acceptable for it to also decrease performance. It is still worthwhile to run the performance tests automatically, but a human should approve the result or request a change.

Reviewer: automation, approved by a senior team member, preferably someone making product-level decisions

Goals: speed, make better use of domain experts

Cost: negligible

Level 1 — Pure readability review

If you want to do a lean version of code review, this is it. A pure readability review answers a straightforward question: is the code readable? I mean a specific property here: "Is it obvious what the code is doing?" Here are two code snippets that, at first glance, might each be mistaken for the opposite.

Here is an example of unreadable code:

int Update(vector<User>& users) {
    int sum = 0;
    for (auto& user : users) {
        sum += Calculate(user);
    }
    return sum;
}

This snippet is unreadable because it is unclear what is happening here. What exactly are we calculating? Why is the function called Update? Is it updating something? Does Calculate perhaps mutate the state of user?
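Just as a sketch (the names are hypothetical, since the original intent is exactly what the snippet fails to communicate), better naming and const-correctness would answer those questions directly:

int TotalScore(const vector<User>& users) {
    int total_score = 0;
    for (const auto& user : users) {
        // CalculateScore is assumed to take a const User&, so it cannot mutate user
        total_score += CalculateScore(user);
    }
    return total_score;
}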

Here is an example of readable code:

#include <cassert>

float sin_taylor(float x) {
    // Taylor series approximation for the sine function:
    // https://en.wikipedia.org/wiki/Taylor_series#Approximation_error_and_convergence
    assert(x > -1 && x < 1); // only precise for (-1,1)
    return x - (x*x*x)/(3*2) + (x*x*x*x*x)/(5*4*3*2*1) - (x*x*x*x*x*x*x)/(7*6*5*4*3*2);
}

It is clear what is happening here. More information on the formula is linked, and the code is just an expansion of the mathematical formula as listed on Wikipedia.

Readability reviews can serve a double purpose by making junior developers more familiar with the codebase. Luckily, junior developers are also great judges of what code is readable.

Reviewer: junior team members

Goals: speed, share knowledge

Cost: small

Level 1.5 — Language readability review

Language readability becomes important in big organisations. It serves a similar purpose as linters, but where linters are limited, humans can review the code for style and idiomatic use of the language even in complex cases, albeit at a much higher cost.

The main goal of a language readability review is to decrease friction and increase language knowledge across the organisation. It ensures that one team does not use a particular language, library, or service in a fundamentally different way than another team. Going through this review, developers will, over time, learn the idiomatic ways to use a language, library, or service.

This review should focus purely on the correct use of the language (or library, etc.) and should rely on approved guidelines (not subjective preference).
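As a hypothetical illustration of the kind of feedback this review produces in C++, a reviewer might flag a hand-rolled index loop when a standard algorithm states the intent directly:

#include <cstddef>
#include <numeric>
#include <vector>

// Flagged in review: a manual index loop re-implements a standard algorithm.
int SumManual(const std::vector<int>& values) {
    int sum = 0;
    for (std::size_t i = 0; i < values.size(); ++i) {
        sum += values[i];
    }
    return sum;
}

// Suggested instead: std::accumulate expresses the same intent idiomatically.
int SumIdiomatic(const std::vector<int>& values) {
    return std::accumulate(values.begin(), values.end(), 0);
}

The specific rule (prefer standard algorithms over raw loops) is just an example; the point is that the reviewer works from an approved guideline rather than personal taste.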

Reviewer: language/library expert based on established guidelines

Goals: decrease friction by unification, share knowledge

Cost: significant

Level 2 — Ownership review

The purpose of an ownership review is for the maintainer to confirm that they are OK with taking ownership of this piece of code. But, of course, what exactly that means will differ based on the context and especially the maintainer.

The typical areas of focus in this review would be:

  • local stylistic considerations
  • local expert knowledge (e.g. correct use of internal libraries)
  • maintenance burden

This review becomes very important in an environment where teams can cross-contribute to each other's projects to deliver the features they need. The same applies to many open-source projects, as they receive code changes from external contributors.

Reviewer: project/library maintainer or senior developer

Goals: speed, make better use of domain experts

Cost: significant

What to pick

The presented levels build on top of each other. So, for example, there is no point in doing a Level 1 review if a Level 0 review is incomplete. However, you don't necessarily have to use all levels.

Here are some recommendations based on your project:

  • Small open-source project: Level 0 + Level 2
  • Medium-sized open-source project: Level 0 + Level 1 + Level 2
  • Big open-source project: Level 0 + Level 0.5 + Level 1 + Level 2
  • Single-team company: Level 0 (+ Level 0.5) + Level 1
  • Multi-team company: Level 0 + Level 0.5 + Level 1 + Level 1.5 + Level 2

Thank you for reading

Thank you for reading this article. Did you enjoy it?

I also publish videos on YouTube. Do you have questions? Hit me up on Twitter or LinkedIn.
