The Three-Lens Code Review: A Simple Way to Catch Bugs, Clarity Issues, and Design Flaws in One Pass

Learn a practical three-lens framework for code reviews that helps teams catch bugs, improve clarity, and spot design flaws—while sharing knowledge and shipping faster.

Code reviews are often treated like a glorified bug hunt: skim through, spot a few problems, approve, move on.

That mindset sells code reviews short.

Done well, code reviews are one of the highest-leverage practices a team can adopt. They catch defects, yes—but they also improve internal code quality, unify style and patterns, spread knowledge, and keep your system design from quietly drifting into chaos.

This post introduces a simple framework to do all of that in one pass: the Three-Lens Code Review.

Instead of reviewing code in a vague, unstructured way, you look at every change through three clear lenses:

  1. Correctness Lens – Does it work and is it safe?
  2. Clarity Lens – Is it understandable and maintainable?
  3. Design Lens – Does it fit and age well in the broader system?

Combined with a few process tweaks and expectations, this approach can make reviews faster, more consistent, and more valuable.


Code Review Is Not Just Bug Hunting

Before diving into the lenses, it helps to clarify what code review is and isn’t.

Code review is:

  • A quality gate for correctness and safety
  • A mechanism to improve readability and uniformity
  • A tool for knowledge sharing and spreading context
  • A feedback loop for design decisions

Code review is not:

  • A replacement for static analysis (linters, type checkers, security scanners)
  • A replacement for tests (unit, integration, end-to-end)
  • A substitute for pair programming or design discussions
  • A last-minute place to debate fundamental architecture

Those activities are complementary and should happen before the review where possible.

  • Static analysis & formatting: Catch style violations, simple bugs, and security smells automatically. Don’t waste human attention on what tools can do better.
  • Self-checks & local tests: The author should run tests and sanity checks before requesting review (a small script sketch follows this list).
  • Pairing & design sessions: Use these for complex or risky changes so the review is a confirmation, not a surprise.
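
In practice, the self-check step can be as simple as a script the author runs before opening the PR. Here's a minimal sketch; the tool choices (ruff for linting, pytest for tests) are assumptions, so substitute whatever your project actually uses.

```python
# A minimal sketch of an author-side pre-review check, assuming a Python
# project that uses ruff and pytest. The structure is the point, not the tools.

import subprocess
import sys

CHECKS = [
    ["ruff", "check", "."],  # static analysis: style violations, simple bugs
    ["pytest", "-q"],        # local test run before requesting review
]

def main() -> int:
    for cmd in CHECKS:
        print(f"$ {' '.join(cmd)}")
        result = subprocess.run(cmd)
        if result.returncode != 0:
            print("Fix the failure above before opening the PR.")
            return result.returncode
    print("All pre-review checks passed.")
    return 0

if __name__ == "__main__":
    sys.exit(main())
```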

Once those are in place, the code review can focus on where humans shine: judgment, clarity, trade-offs, and system thinking.


The Three-Lens Framework

The Three-Lens framework gives reviewers a simple mental checklist. You can review in a single pass, switching lenses mentally as you go.

1. The Correctness Lens: “Will this break things?”

This is the most obvious lens, but it’s worth making explicit.

Questions to ask:

  • Logic & edge cases
    • Does the code do what the description and requirements say?
    • What happens in edge cases: empty inputs, timeouts, failures, nulls, large data, race conditions? (See the sketch after this list.)
  • Error handling & resilience
    • Are errors handled or propagated appropriately?
    • Could this crash a service, corrupt data, or leak resources?
  • Tests
    • Are there tests for the main paths and critical edge cases?
    • Do tests describe behavior clearly and cover bug-prone areas?
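
To make this lens concrete, here's a minimal sketch using a hypothetical helper. A first draft that computed `sum(samples) / len(samples)` directly would crash on empty input; the reviewed version makes that edge case explicit, and the tests (pytest is assumed) pin the behavior down.

```python
import pytest  # assumed test framework; any runner works

def average_latency_ms(samples: list[float]) -> float:
    """Mean latency; an empty sample list is an explicit error, not a crash."""
    if not samples:  # the edge case a reviewer should ask about
        raise ValueError("average_latency_ms: no samples provided")
    return sum(samples) / len(samples)

def test_rejects_empty_input():
    # Pins down edge-case behavior so future changes can't silently regress it.
    with pytest.raises(ValueError):
        average_latency_ms([])

def test_happy_path():
    assert average_latency_ms([10.0, 20.0]) == 15.0
```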

What not to do with this lens:

  • Don’t nitpick style that a linter can enforce.
  • Don’t demand 100% test coverage—focus on risk and behavior.

If correctness is seriously in doubt, it’s often better to pause the review and request:

  • More tests
  • A quick design discussion
  • A smaller, more focused change

2. The Clarity Lens: “Can someone else understand and maintain this?”

Many bugs come not from bad intentions, but from confusing code that future you (or someone else) misreads.

This lens is about:

  • Naming and structure
    • Are variable, function, and class names meaningful and consistent?
    • Is the code modular, or is logic tangled and deeply nested?
  • Readability
    • Is the flow easy to follow without constantly jumping around files?
    • Are magic numbers, complex conditionals, and implicit assumptions explained? (A before/after sketch follows this list.)
  • Documentation in code
    • Are comments used where needed—but not to explain what clearer code could express?
    • For non-obvious decisions, is there a short rationale in comments or commit messages?
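
To illustrate, here's a small before/after sketch of typical clarity-lens feedback. The function, the names, and the 0.2 threshold are all hypothetical; the point is that meaningful names and a named constant remove the need to reverse-engineer intent at every call site.

```python
# Before: what do `d`, `t`, and 0.2 mean? Every reader has to guess.
def chk(d, t):
    return d / t > 0.2

# After: the same logic, now self-explanatory.
ERROR_RATE_ALERT_THRESHOLD = 0.2  # fraction of failed requests that pages on-call

def error_rate_exceeds_threshold(failed_requests: int, total_requests: int) -> bool:
    return failed_requests / total_requests > ERROR_RATE_ALERT_THRESHOLD
```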

Helpful prompts:

  • If you disappeared tomorrow, could someone else safely change this?
  • Can a new team member understand the intent by reading this diff and its tests?

Clarity is where you improve internal quality: not visible to users now, but critical to long-term velocity.

3. The Design Lens: “Does this fit well in the system?”

This lens zooms out from the line-by-line level.

Questions to ask:

  • Cohesion and coupling
    • Is this code living in the right module, service, or layer?
    • Is it reusing existing patterns or introducing a one-off variation? (See the sketch after this list.)
  • Consistency and conventions
    • Does it follow established architectural and style guidelines?
    • Is it adding a new dependency, pattern, or abstraction—and is that justified?
  • Evolution and impact
    • How will this choice age if we add more features in this area?
    • Does it make the system simpler or more complex overall?
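
As a sketch of what design-lens feedback looks like, suppose the codebase already has a shared retry helper (the `retry` function and `_http_get` stub below are hypothetical). A reviewer applying this lens would flag a hand-rolled retry loop in new code and suggest reusing the existing abstraction, so retry semantics stay uniform across the system.

```python
import time
from typing import Callable, TypeVar

T = TypeVar("T")

# Existing shared abstraction, assumed to live in a common module.
def retry(fn: Callable[[], T], attempts: int = 3, delay_s: float = 0.5) -> T:
    """Run fn, retrying on any exception up to `attempts` times."""
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                raise
            time.sleep(delay_s)
    raise RuntimeError("retry: attempts must be >= 1")

def _http_get(path: str) -> dict:
    # Stub standing in for a real HTTP client call.
    return {"path": path}

# Design-lens suggestion: instead of a second, slightly different retry loop
# here, reuse the shared helper so failure behavior stays consistent.
def fetch_profile(user_id: str) -> dict:
    return retry(lambda: _http_get(f"/users/{user_id}"))
```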

Not every change needs a deep design discussion. For small changes, a quick check for consistency may suffice. For larger changes, design lens feedback might include:

  • Suggestions to split responsibilities
  • Proposals to reuse existing abstractions
  • Requests for a short design doc if the impact is broad

The design lens is where code reviews help prevent architectural erosion and keep the system coherent.


Setting Expectations: Authors and Reviewers

The Three-Lens framework only works if everyone knows what’s expected.

For authors:

  • Run linters, formatters, and tests before asking for review.
  • Prepare a clear summary: what changed, why, and how to review it.
  • Organize the change for reviewability (more on that next).
  • Highlight areas where you want specific feedback (correctness, clarity, design).

For reviewers:

  • Commit to using all three lenses, not just correctness.
  • Be timely: slow reviews create bottlenecks and erode the discipline the rest of this process depends on.
  • Be specific, kind, and actionable in your feedback.
  • Distinguish between blocking issues and nice-to-have suggestions.

Having a short, written code review guideline (often in your repo’s docs) helps solidify these expectations.


Organizing Code for Reviewability

Even the best reviewer can’t help much if the pull request is a 3,000-line monster.

Well-organized changes dramatically improve review quality and speed.

Principles for reviewable changes:

  1. Keep changes small and focused

    • Aim for single-responsibility PRs: one feature, one fix, one refactor.
    • If you must change many files, group them logically (e.g., “renames”, “mechanical move”, “core logic”).
  2. Provide context upfront

    • In the description, explain:
      • Problem / motivation
      • High-level approach
      • Any trade-offs or risks
    • Link to tickets, design docs, or previous PRs when relevant. (A template sketch follows this list.)
  3. Mark non-functional changes clearly

    • If there are formatting-only changes or file moves, call them out so reviewers don’t waste time.
  4. Use commits intentionally

    • Group commits by logical steps: “Introduce interface”, “Switch caller to new API”, “Remove old code”.
    • This lets reviewers walk through the evolution if needed.
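
As a starting point, here's a minimal sketch of a PR description template built from the points above. The headings are suggestions to adapt, not a standard:

```
## Problem / Motivation
What's broken or missing, and why it matters. Link the ticket if one exists.

## Approach
The high-level shape of the change, in two or three sentences.

## Trade-offs / Risks
Alternatives you considered, plus known risks or follow-ups.

## How to Review
Suggested reading order; call out formatting-only or mechanical changes.
```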

You’re not just writing code; you’re curating a review experience so your teammates can apply the three lenses effectively.


Code Review as Knowledge Sharing

If you treat reviews purely as defect detection, you miss one of their biggest benefits: spreading knowledge across the team.

You can encourage this by:

  • Inviting diverse reviewers
    • Include at least one domain expert and, when possible, someone less familiar with the area.
  • Explaining reasoning in the PR
    • Why this approach over alternatives?
    • What trade-offs did you consider?
  • Asking questions in comments
    • Authors can ask, “Is there a better existing pattern to reuse here?”
    • Reviewers can ask, “Can you walk me through this flow?” instead of silently guessing.

Over time, this creates a shared understanding of how things work, which reduces bottlenecks around “the one person who knows this part of the code.”


Streamlining the Review Process for Speed and Quality

Good code reviews should help you ship faster, not slower.

A few practices that keep things moving while maintaining quality:

  • Define SLAs for reviews (e.g., first response within one working day).
  • Use checklists based on the three lenses to keep reviews consistent (a sketch follows this list).
  • Automate what you can: CI, linting, formatting, basic security checks.
  • Allow small, low-risk changes to be approved quickly, with more scrutiny reserved for high-impact areas.
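
One way to make the checklist idea concrete is to encode the three lenses as data and render them into a markdown checklist that can live in a PR template or be posted by a bot. A minimal sketch follows; the item wording paraphrases this post, and the posting step is deliberately omitted.

```python
# A minimal sketch of turning the three lenses into a reusable PR checklist.

CHECKLIST = {
    "Correctness": [
        "Handles edge cases (empty input, failures, races)?",
        "Errors handled or propagated appropriately?",
        "Tests cover main paths and risky edge cases?",
    ],
    "Clarity": [
        "Names meaningful and consistent?",
        "Flow easy to follow; magic numbers explained?",
        "Non-obvious decisions documented?",
    ],
    "Design": [
        "Code lives in the right module/layer?",
        "Reuses existing patterns where possible?",
        "New dependencies or abstractions justified?",
    ],
}

def render_checklist() -> str:
    """Render the lenses as a markdown checklist for a PR template or bot."""
    lines = ["## Three-Lens Review Checklist"]
    for lens, items in CHECKLIST.items():
        lines.append(f"\n### {lens}")
        lines.extend(f"- [ ] {item}" for item in items)
    return "\n".join(lines)

if __name__ == "__main__":
    print(render_checklist())
```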

The goal is not endless polish. It’s to minimize bugs, improve maintainability, and get valuable features into users’ hands without compromising the health of your codebase.


Putting It All Together

The Three-Lens Code Review is a simple way to make reviews more systematic and more impactful:

  • Correctness Lens: Will it work safely?
  • Clarity Lens: Is it understandable and maintainable?
  • Design Lens: Does it fit the system and age well?

Supported by:

  • Clear expectations for authors and reviewers
  • Small, focused, well-explained changes
  • Automation for mechanical checks
  • A culture of knowledge sharing

You don’t need a complex process or heavy tooling to improve code reviews. Start by introducing these lenses to your team, write a short guideline, and try them on your next few pull requests.

Chances are you’ll catch more bugs, reduce confusion, and make better design decisions—while moving faster, not slower.
