Code review

 Why do you do code reviews? Perhaps it's company policy, or just an automatic part of your process. But have you ever sat down with your team and asked what everyone hopes to get out of it?

As a developer, has it ever felt like playing a strange board game where the rules are secret and keep changing? Or as a delivery manager, have you ever been puzzled about why reviews sometimes seem to take longer than writing the code itself?

These are some of the things we were asking ourselves at BBC News a year ago. We’re not sure we’ve found all the answers yet. But we think what we’ve learned so far has improved our engineering culture and helped to make code reviews a better experience for everyone.

In this post I’ll share why we started asking these questions, and some of the things we found out along the way.

Our first taste of remote-first working

Suddenly we weren't just building one page type: we were rebuilding all 41 World Service language websites from scratch.

[Image: the BBC Russian home page, showing recent news stories in Russian]

There were other changes. At the start of the year, everyone was co-located around the same bank of desks at Broadcasting House in London. Instant feedback and advice were usually available by turning to the person sat next to you, and pairing came relatively easily.

By December, however, the engineers were split across six geographical locations in three time zones. While nobody could have imagined the upheavals of 2020 that lay ahead, we found ourselves having to adapt rapidly to a remote-first culture.

Growing pains

But there were definitely challenges along the way, and one area often raised as a source of difficulty was peer code review. A large influx of engineers from other teams, most of whom were learning about the codebases for the first time, started opening pull requests (PRs) to get feedback on their work.

We saw that these PRs would sometimes spend considerable time in the code review stage, often cycling between review and rework multiple times. Approaches to giving and receiving reviews seemed inconsistent, and some developers reported finding the process unclear and, at times, stressful.

It was clear it was time to talk and reflect as an engineering community. We started asking what this ‘code review’ thing we were doing automatically was actually for. We decided we needed a kind of charter for code reviews.

Creating our very own charter

Why write our own guide when plenty of good resources about code review already exist? The answer was twofold. Firstly, we wanted a shared, living document that would evolve over time as we learned more about what does and doesn't work for our teams.

Secondly, while existing resources offer many useful global principles, a lot of the questions we had required local answers, specific to the structure of our organisation, our codebases and the design of our automated tooling and Continuous Delivery processes. For example: who can ‘approve’ a PR? Do all types of changes get reviewed in the same way (documentation, config, infrastructure code etc.)?
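On GitHub, one concrete way to encode a local answer to the "who can approve?" question is a CODEOWNERS file, which automatically requests reviews from the teams that own each path. The sketch below is purely illustrative; the team names are hypothetical, not our actual setup:

    # .github/CODEOWNERS (illustrative; these team names are hypothetical)
    # Later, more specific patterns take precedence over earlier ones.
    *                  @bbc/example-web-core
    /docs/             @bbc/example-web-core @bbc/example-tech-writers
    /infrastructure/   @bbc/example-platform

Combined with branch protection rules, this lets different types of change reach different reviewers without anyone having to remember the routing by hand.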

As Gergely Orosz points out, having an engineer-initiated guide to code reviewing is one hallmark of organisational support for the practice. So we started an anonymous collaborative document to source ideas, frank observations and anecdotes about code reviews from engineers of all levels of experience. Then, with feedback from a range of viewpoints, we gradually distilled those contributions into our guide.

We wanted to make sure that the guide was easy to discover and would continue to be updated, so we put it right into our repositories and linked to it from our top-level READMEs and PR templates.
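For example, a pull request template can surface the guide at exactly the moment a PR is opened. A minimal sketch (not our actual template) might look like this:

    <!-- .github/PULL_REQUEST_TEMPLATE.md (illustrative sketch) -->
    ## Summary

    <!-- What does this change do, and why? -->

    ## Checklist

    - [ ] I've read the [code review guide](https://github.com/bbc/simorgh/blob/latest/docs/Code-Reviews.md)
    - [ ] Tests and linting pass locally
    - [ ] Documentation has been updated where relevant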

And as an added bonus, the project we were working on is open source, meaning that potential contributors from outside the BBC benefit from more transparency about how we aim to review contributions.

A quick quiz

What is a code review for? Is it:

  1. To catch defects before they reach production
  2. To share knowledge about helpful patterns and best practices
  3. To discuss alternative approaches and viewpoints
  4. To allow developers of all levels of experience to learn
  5. To link to useful documentation and other resources
  6. To distribute knowledge across the team
  7. To ask questions and check understanding
  8. To improve readability and maintainability of the code
  9. To identify documentation needs
  10. To ensure that work meets quality standards
  11. To record the rationale for certain changes
  12. To coach and mentor junior developers
  13. To promote sharing ownership of the codebase
  14. To notify other affected teams of changes that are being made
  15. All of the above

If you answered “all of the above,” you’ve come to the same conclusion that we did.

We've seen time and again that a good code review can achieve all of the above and more (which is not to say that code reviews are the only way to achieve these things). Participating in code reviews in a spirit of open, egoless collaboration is the key to unlocking all of these benefits.

In fact, we sometimes wonder if catching defects is one of the weakest arguments for code reviews, at least in the relatively unstructured way we do them. Think about the last time a bug hit production and caused you a problem, perhaps because of something as trivial as a typo in some config. Could it realistically have been caught during code review? Was it? The truth is that busy developers are not always great at playing ‘spot the difference’.

By the way, the idea that there is a discrepancy between what we expect from code reviews and what they actually achieve is not a new one. Bacchelli and Bird's 2013 empirical analysis of hundreds of code reviews at Microsoft ("Expectations, Outcomes, and Challenges of Modern Code Review") found that sharing understanding and social communication were more common outcomes than finding defects.

What we learned from the process

Reviewing code is a first-class activity

Reviewing a change well can take as much skill and care as writing it, so that time deserves to be planned for and valued like any other engineering work.

Communication is at the heart of code review

Clear communication is especially important for growing, distributed teams, where there is greater scope for confusion and misunderstanding. We also find group (aka "swarm") code reviews effective, especially for significant new abstractions.

Don’t miss the chance to learn

Our primary aim in participating in code reviews is to learn from each other, increasing our understanding of the codebase we are responsible for and of the technologies we use.

Code reviews aren't just about getting a rubber stamp of approval from someone (often written LGTM, or Looks Good To Me). Every code review is an opportunity to ask questions, share knowledge and consider alternative ways of doing things. Done properly, the process helps both authors and reviewers learn and grow.

And if potential bugs are caught along the way, that’s awesome too.

Remember to be kind

Taking a cue from the Prime Directive of Agile retrospectives, we assume that everyone did the best job they could, given what they knew at the time and the resources available. As we found, always trying to understand the other person's perspective is particularly critical when new team members are contributing to an established codebase.

Language is also important (Philipp Hauer provides some excellent examples of this). Careless use of evaluations and jargon can undermine the spirit of collaboration that ought to underpin code review. For the same reason, we also make a conscious effort to replace terminology that has racist, sexist or ableist associations.

Keep things in proportion

Time spent on code review should be kept in proportion to the other stages of the development life cycle, including testing, accessibility and UX review.

As a rule of thumb, we find that if a change has spent longer in review than it took to implement, that may point to a wider problem that needs attention. Are team members prioritising reviews correctly? If there are lengthy conversations, could some of them have happened earlier in the process?

It may also be helpful for dev teams to regularly reflect on reviews which were harder or less effective than they should have been.
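One way to ground that reflection in data is to measure how long PRs wait for their first review. Here is a minimal TypeScript sketch using GitHub's Octokit client; the repository and the 20-PR window are arbitrary choices for illustration, not part of our actual tooling:

    // Report how long recently closed PRs waited for their first review.
    // Requires a GITHUB_TOKEN environment variable (npm install octokit).
    import { Octokit } from "octokit";

    const octokit = new Octokit({ auth: process.env.GITHUB_TOKEN });

    async function reportReviewLag(owner: string, repo: string): Promise<void> {
      // The 20 most recently updated closed PRs: enough for a quick retro.
      const { data: pulls } = await octokit.rest.pulls.list({
        owner,
        repo,
        state: "closed",
        sort: "updated",
        direction: "desc",
        per_page: 20,
      });

      for (const pr of pulls) {
        const { data: reviews } = await octokit.rest.pulls.listReviews({
          owner,
          repo,
          pull_number: pr.number,
        });
        const first = reviews.find((r) => r.submitted_at);
        if (!first?.submitted_at) continue; // closed without any review

        const lagHours =
          (Date.parse(first.submitted_at) - Date.parse(pr.created_at)) / 3_600_000;
        console.log(`#${pr.number} waited ${lagHours.toFixed(1)}h for first review`);
      }
    }

    reportReviewLag("bbc", "simorgh").catch(console.error);

Numbers like these are conversation starters for a retrospective, not targets to optimise; what counts as "too long" depends on the team and the change.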

Look for every opportunity to make reviews easier

Small, focused changes are far easier to review well than sprawling ones. And as April Wensel suggests, automate what you can, for example by using linters effectively to minimise trivial nitpicks.
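As a concrete illustration, an ESLint flat config like the sketch below moves whole categories of nitpicks out of human review. The rule choices are ours for the example, not Simorgh's actual configuration:

    // eslint.config.mjs (illustrative sketch, not Simorgh's actual config).
    // Anything a rule can catch is something a reviewer no longer has to.
    import js from "@eslint/js";

    export default [
      js.configs.recommended,
      {
        rules: {
          // Flag dead code instead of relying on a reviewer to spot it.
          "no-unused-vars": "error",
          // Keep naming debates out of review comments.
          camelcase: "warn",
          // Require strict equality, a classic source of review nitpicks.
          eqeqeq: "error",
        },
      },
    ];

Paired with a formatter such as Prettier running in CI, style comments can disappear from reviews almost entirely, leaving humans to discuss design and behaviour.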

More detail on the above can be found in the full guide at https://github.com/bbc/simorgh/blob/latest/docs/Code-Reviews.md. Bear in mind that some of it is intentionally quite specific to how we work at BBC News and may not apply in your context.

Ask difficult questions!

In the spirit of continuous improvement, and of trying to make the way we collaborate more transparent, we found writing our guide a really positive experience. And with remote working now the norm, having a clear shared understanding of these activities is more vital than ever.

The document is now part of our onboarding process for new engineers, and we've had promising feedback about the cultural benefits of talking frankly about code reviews.

But the journey doesn’t end there — we’ll continue to update our guide as we learn more about what works for our teams. Sometimes it’s only by asking difficult questions about every stage in our development process that we can really improve the way we build software together.
