The 90-Minute Coding Lab: Tiny Experiments That Turn Confusion Into Clear Next Steps
How to use 90-minute, time-boxed coding experiments (“spikes”) to cut through confusion, reduce risk, and make consistent technical progress without burning out.
If you write code for a living (or for fun), you know this feeling:
You open your editor.
You stare at a problem that’s fuzzy, half-defined, or way bigger than you can hold in your head.
You poke at it for a few hours. You read some docs. You try something. It half-works. You get distracted. Now it’s 5 PM and you’re not even sure what you learned.
The problem is not just the code.
The problem is unstructured learning.
A simple fix? Turn your coding time into a series of tiny experiments—90-minute “coding labs” designed to answer one clear question at a time.
This mindset comes straight from Agile “spikes” and good debugging practice: you learn more, faster, by testing focused hypotheses than by wandering around your codebase hoping insight appears.
What Is a Coding “Spike” (and Why It Matters)
In Agile development, a spike is a short, time-boxed effort to explore an idea, technology, or risk before committing to full implementation.
Teams use spikes to:
- Investigate a new framework or library
- Clarify fuzzy requirements
- Explore architectural options
- De-risk a large epic by testing the hardest part first
The key is that a spike is not about production-ready code. It’s about learning:
A spike trades polish for clarity. You are buying information, not building features.
This same concept applies perfectly to solo work or debugging. Instead of trying to “solve the whole thing,” you design tiny, low-stakes experiments that answer one narrow question.
From Vague Confusion to Concrete Questions
Confusion in coding often looks like this:
- “I don’t really understand how this system fits together.”
- “I’m not sure why this bug is happening.”
- “I don’t know whether this approach will scale.”
- “This feature is huge; I don’t know where to start.”
Those are feelings, not questions. You cannot experiment on a feeling.
The first move is to convert that vague fog into one specific question:
- “Can I get a minimal request to succeed against this new API?”
- “Is the performance issue in the database query or the network layer?”
- “Can I render 1,000 items smoothly using this UI library’s virtualization?”
- “Can I isolate the bug to this function or the one calling it?”
A good experiment starts with one concrete question you can reasonably make progress on in a single 90-minute session.
Why 90 Minutes? The Sweet Spot for Flow and Feedback
Why not 25 minutes like a Pomodoro, or a whole afternoon?
A 90-minute coding lab hits a productive balance:
- Long enough for deep work: You can load a complex system into your head, explore, and build a meaningful spike.
- Short enough to avoid burnout: A hard stop prevents you from grinding for 4–5 unfocused hours.
- Naturally time-boxed: You are forced to prioritize and simplify. You cannot do everything, so you decide what matters now.
- Built-in feedback cycle: Each session ends with a mini-retrospective—what you learned and what your next experiment should be.
Instead of one massive, blurry “work session,” your day becomes a set of structured learning cycles.
The 90-Minute Coding Lab Template
Here’s a structure you can use immediately.
1. Define the Question (5–10 minutes)
Write this down somewhere visible: notes app, markdown file, or ticket.
- Question: What specifically am I trying to learn or decide in this session?
- Scope: What will I not try to do in this session?
Examples:
- Question: “Can I reproduce this bug with a minimal test case?”
  Scope: Not fixing the bug yet, only reproducing it reliably.
- Question: “Is library X capable of handling our auth flow?”
  Scope: Not integrating with the full app, just a POC login + token refresh.
This step alone often reduces anxiety. You have turned “this is a mess” into “I’m answering this one question.”
2. Form a Hypothesis (5 minutes)
Effective debugging and exploration hinge on good hypotheses: clear, testable guesses grounded in the evidence you have.
Your hypothesis should look like:
If I do X, then I expect Y, because Z.
Examples:
- If I downgrade the dependency from v3.2 to v3.1, then the error will disappear, because the stack trace mentions a new method added in v3.2.
- If I paginate at 50 items per page, then scroll performance will stay under 16ms per frame, because the UI library’s docs suggest that as a safe limit.
A hypothesis gives you a sharper lens for the rest of the session. You are not just “trying stuff”—you are evaluating a specific idea.
3. Set Success Criteria (5 minutes)
Define what success looks like for the experiment, not the overall project.
- Success: What outcome would tell you this path is promising?
- Failure: What outcome would tell you to try something else?
Examples:
- Success: “I can reproduce the bug in a unit test with < 50 lines of code.”
- Failure: “After 90 minutes, I still cannot reproduce it in isolation; I probably need to gather more runtime data.”
This keeps you honest. You are testing reality, not defending your favorite idea.
4. Run the Experiment (60–70 minutes)
Now you code, but with guardrails.
Guidelines:
- Stay within scope: If you discover adjacent ideas, write them down for later; do not chase them now.
- Prefer minimal examples: Smaller surfaces make learning faster. Strip away everything not necessary to test your hypothesis.
- Log what you try: quick bullet points are enough:
- Tried: change cache config → no effect.
- Tried: disable feature flag X → bug disappears.
This creates a breadcrumb trail, prevents repeated dead ends, and is gold when you summarize or ask for help.
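The log does not need tooling; a plain markdown file and a few-line helper are enough. A sketch, where the `lab-notes.md` filename is an arbitrary choice:

```python
import datetime

def log_attempt(note: str, logfile: str = "lab-notes.md") -> None:
    """Append one timestamped bullet to the session's running log."""
    stamp = datetime.datetime.now().strftime("%H:%M")
    with open(logfile, "a") as f:
        f.write(f"- {stamp} Tried: {note}\n")

log_attempt("change cache config -> no effect.")
log_attempt("disable feature flag X -> bug disappears.")
```

The resulting file doubles as raw material for the debrief in the next step.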
5. Debrief and Decide Next Step (10–15 minutes)
When the timer goes off, stop. Close the experiment and reflect.
Write down:
- What did I learn? (Even if the answer was “this doesn’t work.”)
- Was my hypothesis supported or disproven?
- What new questions came up?
- What is the next experiment?
Examples:
- Learned: The performance issue is not in the database query (query time < 20ms) but appears when rendering the UI list (frame drops when > 200 items visible).
- Next experiment: Measure render time with virtualization enabled vs. disabled.
Each 90-minute lab ends with a clear next step, not just “I guess I’ll keep hacking tomorrow.”
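Evidence like "query time < 20ms" should come from measurement, not intuition. A minimal timing harness, assuming the two suspect code paths can be called in isolation; the two workload functions below are stand-ins for a real query and render call:

```python
import time

def timed(fn, repeat: int = 5) -> float:
    """Return the best-of-N wall-clock time for fn, in milliseconds."""
    best = float("inf")
    for _ in range(repeat):
        start = time.perf_counter()
        fn()
        best = min(best, (time.perf_counter() - start) * 1000)
    return best

# Stand-ins for the two suspects; swap in your real query and render calls.
def run_query():
    sum(range(10_000))

def render_list():
    "".join(str(i) for i in range(10_000))

print(f"query:  {timed(run_query):.2f} ms")
print(f"render: {timed(render_list):.2f} ms")
```

Best-of-N is used deliberately: it filters out one-off noise (GC pauses, other processes) so you compare the code paths, not the machine's mood.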
Debugging as a Series of Micro-Experiments
Debugging is where this approach shines.
Bad debugging looks like:
- Randomly changing code.
- Commenting out chunks until something “seems to work.”
- Restarting services and hoping.
Good debugging is hypothesis-driven:
- Gather evidence (logs, stack traces, repro steps).
- Form a hypothesis: “The bug appears when cache is stale and fallback logic kicks in.”
- Design a tiny experiment: “Force cache miss and log fallback path; compare behavior.”
- Run, observe, refine.
Instead of one huge “debug session,” think of three or four 90-minute labs, each answering a narrower question:
- Lab 1: Can I get a minimal, reliable reproduction?
- Lab 2: Is the bug tied to data shape, timing, or configuration?
- Lab 3: Does changing component X remove or move the symptom?
This builds a chain of evidence that leads toward the root cause.
Making Progress Measurable and Repeatable
The real power of the 90-minute coding lab mindset is that it turns messy work into a repeatable process.
Each session is:
- Self-contained: One question, one hypothesis, one set of observations.
- Documented: Future you (and your teammates) can see what was tried, what worked, and what failed.
- Comparable: You can look back over a week and see a trail of experiments and learnings—not just blurry memories of “I worked on that bug for days.”
This measurability has practical benefits:
- Easier status updates: “I ran three spikes. We now know X, Y, and Z, and we’ve eliminated two risky approaches.”
- Faster onboarding for others: New teammates can follow your experimental trail instead of relearning everything from scratch.
- Better decision-making: Clear tradeoffs emerge when you have concrete experimental results instead of gut feelings.
How to Start Using 90-Minute Labs Tomorrow
You do not need a full process overhaul. Start tiny:
- Pick one gnarly task you are stuck on.
- Schedule 90 minutes on your calendar as “Coding Lab: [short question].”
- Use the template:
- Question
- Hypothesis
- Success criteria
- Experiment notes
- Debrief + next step
- Respect the timebox. Stop when the 90 minutes are up, even if you are mid-idea. Capture that idea as the seed for the next lab.
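If you keep lab notes in code or scripts rather than a notes app, the template above fits in a small structure. A sketch; `CodingLab` and its fields are invented names, not a library:

```python
from dataclasses import dataclass, field

@dataclass
class CodingLab:
    """One 90-minute experiment, filled in top to bottom as the session runs."""
    question: str                  # what this session should answer
    scope: str                     # what is explicitly out of bounds
    hypothesis: str = ""           # "If I do X, then I expect Y, because Z."
    success: str = ""              # outcome that says this path is promising
    failure: str = ""              # outcome that says try something else
    notes: list[str] = field(default_factory=list)  # breadcrumb trail

lab = CodingLab(
    question="Can I reproduce this bug with a minimal test case?",
    scope="Not fixing the bug yet, only reproducing it reliably.",
)
lab.notes.append("Tried: change cache config -> no effect.")
```

Whether it lives in a dataclass, a markdown file, or a ticket matters far less than filling in the question and scope before you start coding.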
Do this a few times and you will notice:
- Less dread when facing ambiguous work
- Clearer, more focused coding sessions
- More confidence in your technical decisions
Conclusion: Trade Drama for Data
Confusion in coding is not a personal failing; it is a signal that the problem is bigger or fuzzier than your current mental model.
You do not need more willpower or longer hours.
You need smaller, sharper experiments.
By treating each 90-minute session as a tiny lab—with a clear question, a grounded hypothesis, and explicit success criteria—you turn uncertainty into a structured learning process. You reduce risk, debug more effectively, and make consistent progress without burning out.
Next time you feel stuck, do not say, “I’ll just keep working on it.”
Say, “I’m going to run a 90-minute experiment.”
Then design it, time-box it, and see what you learn.