Human Considerations in Mandatory Formal Code Reviews - Part 1
Most mandatory formal code reviews are ineffective, costly, and often create disharmony within the team.
Mandatory code reviews (PR approvals) may sound great in theory, but in practice they often fall short. From conversations with my mentees, former colleagues, and IT friends, I’ve heard recurring complaints: formal code reviews at their workplaces are ineffective, costly, and even spark human conflicts.
In this article, let’s step away from the idealized world and consider some real-world questions. Software development is a human activity. Programmers aren’t robots — we need to account for the human factors.
∘ 1. How does the coder feel when his PR is rejected?
∘ 2. What are the fake “LGTM” review rates?
∘ 3. Is the reviewer (regardless of job title or rank) always correct when he/she disapproves a PR?
In Part 2:
∘ 4. Software Engineers: Would you like to pause your current work to review a colleague’s PR?
∘ 5. Do you really think software developers can effectively spot the majority of real issues just by eyeballing the code changes in a PR?
∘ 6. Do software developers want to carry the extra responsibility for someone’s code?
∘ 7. Software engineers are evaluated based on the quality and functionality of their code, not the number of PRs they review.
And in the upcoming Part 3:
∘ 8. “Yes, it is my check-in that broke production. But B OKed it!”
∘ 9. What will the coder do while waiting for the approval?
∘ 10. Who promotes the formal Code Review Process?
∘ 11. What is the solution then? Think about Refactoring.
1. How does the coder feel when his PR is rejected?
Naturally, no one enjoys the feeling of rejection — especially high-ego software engineers. And I’m not talking about a one-off incident; when a mandatory formal code review process is enforced, this can happen every single day.
Some might argue, “I get it, but the purpose of a review is to get a second opinion, so developers should welcome it.” Setting aside whether the disapproval was justified (B, the reviewer, could be wrong), we need to consider human factors. Even when feedback is 100% objective and correct, there are better, more informal ways to communicate it. No one wants to hold the record for ‘the highest PR disapproval rate’.
In many client projects I worked on, the team knew that “Zhimin found the most bugs”. A more accurate statement would be “Zhimin’s automated E2E (UI) tests found the most bugs”. However, I raised few, if any, of those defects in a defect tracking system visible to the whole team. In such cases, I had to be mindful of protecting the developers’ egos, for a simple reason: they are human beings. I’ll cover this topic in a separate article.
Once we understand the human factors in mandatory code reviews, it becomes clear why the “LGTM” (short for “Looks Good To Me”) style code review is so common. The reason behind this: “I want the favour back for my Pull Requests.”
2. What are the fake “LGTM” review rates?
Many code review approvals amount to nothing more than a quick “LGTM”. Extracting the basic statistics is easy: just compare each PR’s creation and approval times. If a PR passed a peer review in under 30 seconds, it’s almost certainly fake, and pointless.
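As a minimal sketch of that measurement, here is a short Python function that flags rubber-stamp approvals by their creation-to-approval gap. The PR records and the 30-second threshold are illustrative assumptions; in practice the timestamps would come from your Git hosting platform’s API.

```python
from datetime import datetime, timedelta

# Hypothetical PR records as (created_at, approved_at) timestamps.
prs = [
    ("2021-03-01T09:00:00", "2021-03-01T09:00:20"),  # 20 s: rubber-stamp
    ("2021-03-01T10:00:00", "2021-03-01T11:30:00"),  # 90 min: plausible review
    ("2021-03-02T14:00:00", "2021-03-02T14:00:05"),  # 5 s: rubber-stamp
]

def fake_lgtm_rate(prs, threshold_seconds=30):
    """Fraction of PRs approved within threshold_seconds of creation."""
    fmt = "%Y-%m-%dT%H:%M:%S"
    fake = sum(
        1
        for created, approved in prs
        if datetime.strptime(approved, fmt) - datetime.strptime(created, fmt)
        <= timedelta(seconds=threshold_seconds)
    )
    return fake / len(prs)

print(f"Fake-LGTM rate: {fake_lgtm_rate(prs):.0%}")
```

The gap measures elapsed time rather than actual reading time, so it undercounts fakes (an approval hours later may still have taken ten seconds of attention), but it gives a cheap lower bound.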
From my observations while working as a test automation engineer (2010–2021): on software projects that ran a formal Git PR code review process, once the initial few weeks were excluded, the fake “LGTM” review rate was at least 90% (100% on a few projects), and about 60% of those teams later abandoned the process because it was useless and made people feel stupid.
Many years ago, I worked on a government project as a developer where every code check-in had to be linked to a requirement or change request in StarTeam — for the sake of “traceability.” In reality, we all used the same change request called “Broken.” After six months of this pointless exercise, the process was finally scrapped.
3. Is the reviewer (regardless of job title or rank) always correct when he/she disapproves a PR?
Of course, not every PR rejection from B is necessarily justified. Unlike objective defect reports backed by screenshots or error logs, assessments of what constitutes “good code” are often subjective opinions.
When developers (A) have doubts, how do they usually respond? Whether they push back or simply accept it, neither outcome is ideal. Why inject conflict into the team unnecessarily? And remember, code reviews aren’t a one-time event — they happen daily.
Over time, this can lead developers to care less about the codebase, adopting a “whatever” attitude. Any parent will understand this dynamic: nagging breeds indifference.
Effective leaders understand that subtle guidance — nudging people toward self-driven change — is far more powerful than constant checking/monitoring.
Related reading: