
The Real Bug Is in Your Brain

The hardest bugs are not in your repo but in your reasoning. Confirmation, optimism, anchoring, availability, and Dunning-Kruger warp decisions. Fight back with disconfirming tests, independent estimates, broader hypotheses, and KISS. Debug your mind first.

How Cognitive Biases Trip Up Programmers and How to Fix Them


I spent hours hunting a bug in my code, only to realize the flaw was in my approach. I was sure the backend was working okay, and the bug had to be on the frontend.

It wasn’t.

The culprit? My own brain’s blind spot.

In that moment of debugging déjà vu, I discovered that the hardest bugs aren’t in our codebase at all. They’re in how we approach the problem. Sound familiar? (Famous last words: “My code is fine.”)

TL;DR: Developers are logical, right? Not always. Our brains have cognitive biases that introduce errors in software development — from assuming our code must be correct (confirmation bias) to grossly underestimating timelines (optimism bias). This piece highlights 6 common biases that affect coding, with real examples and tips to debug not just your code, but your thinking. Being aware of these “brain bugs” can potentially save hours of frustration and make you a better engineer.

Cognitive Biases

We like to think programming is pure logic. However, as long as programmers are human, psychology plays a part in our PRs. Cognitive biases are mental shortcuts or blind spots that can lead us astray even when we think we’re being rational. In fact, a 2022 field study found that nearly 70% of developer actions observed were influenced by at least one cognitive bias. In other words, these hardwired habits of thought regularly affect how we design, debug, and make decisions in software development.

So what exactly are cognitive biases?

They’re like the “legacy bugs” in our brain’s OS. Deeply rooted ways of thinking that helped our ancestors survive but can trip up an engineer’s problem-solving. Even the most seasoned developer isn’t immune to this.

We cut corners in reasoning to cope with complexity, and for programmers, this can mean choosing a solution just because it’s the first one that comes to mind or sticking with an assumption longer than we should.

The outcome?

Wasted time, buggy code, and “I can’t believe it was something this obvious” moments. We spend countless hours debugging code; perhaps we should debug our own thought process with the same rigor. (If only git blame worked on our brain's faulty logic!)

1/6 Confirmation Bias in Debugging

Confirmation Bias is our tendency to favor information that confirms what we already believe while ignoring anything that contradicts it. In a coding context, this often means we assume our implementation is correct and look for problems everywhere else. We’ve all been there: you’re debugging and certain the bug isn’t in your module. It must be that library or someone else’s code. You then spend half a day down a rabbit hole, only to find the bug was indeed in your code all along.

Time for a story.

Jane, a frontend developer, is investigating a mysterious crash on the website. Naturally, she's convinced the issue lies in a third-party API, because "my code is fine." So Jane combs through API docs and logs, looking for any evidence that the external service failed. Hours pass. Finally, out of sheer desperation, she double-checks her own implementation and spots a trivial logic error: a stray else was causing the crash.

Oops. Her initial assumption (“my code couldn’t possibly be wrong”) blinded her to the real culprit. Jane’s confirmation bias led her to reinforce her preconceived notion and filter out the evidence right in front of her.
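To make that kind of failure concrete, here is a minimal, hypothetical sketch (the function and data are invented for illustration; Jane's real bug was in frontend code, not Python) of how one stray else can turn perfectly valid input into a crash that looks like someone else's fault:

```python
# Hypothetical sketch: a misplaced `else` sends a valid response down a path
# the rest of the code never expects.
def render_status(response):
    if "error" in response:
        return "Something went wrong"
    elif response.get("items"):
        return f"{len(response['items'])} items loaded"
    else:
        # The stray branch: an empty-but-valid response ends up here
        # and returns None instead of a string.
        return None


if __name__ == "__main__":
    # A perfectly valid (just empty) API response triggers the crash,
    # which looks exactly like "the third-party service sent bad data".
    render_status({"items": []}).upper()  # AttributeError: 'NoneType' has no attribute 'upper'
```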

And Jane is not alone. Studies have shown that confirmation bias in testing leads to higher defect rates. Basically, if you only look for proof that your code works, you’ll miss the cases where it doesn’t. When debugging, a developer might focus only on the suspected cause (like a recent code change) and overlook other possibilities. This often leads to wasted effort on the wrong path and prolonged downtime.

The takeaway?

How do we debug our bias here? The trick is to actively seek out disconfirming evidence. Treat your hypothesis (“the bug must be in the API”) as exactly a hypothesis to test, not a truth to confirm. Try to prove yourself wrong. Check your own code or other components early on, even if you’re sure they’re fine. This mindset shift can prevent hours of wheel-spinning.
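One way to make "try to prove yourself wrong" routine is to write the disconfirming test before opening the other team's docs. The sketch below is hypothetical (parse_payload and its inputs are invented), but it shows the shape of the habit: feed your own layer the exact data you suspect the external service of mishandling.

```python
import unittest


def parse_payload(payload):
    # Hypothetical parsing layer that we are "sure" is fine.
    return sum(item["price"] * item["qty"] for item in payload.get("items", []))


class DisconfirmingTests(unittest.TestCase):
    """Try to break OUR code first, before blaming the API."""

    def test_empty_response_from_api(self):
        # The "boring" case we never bothered to check.
        self.assertEqual(parse_payload({"items": []}), 0)

    def test_payload_copied_from_production_logs(self):
        # Paste in the real payload that preceded the failure and see
        # whether our layer is the one that chokes on it.
        payload = {"items": [{"price": 19.99, "qty": 2}, {"price": 0.0, "qty": 1}]}
        self.assertAlmostEqual(parse_payload(payload), 39.98)


if __name__ == "__main__":
    unittest.main()
```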

Another tip is to involve others. Pair programming or code reviews can inject fresh eyes that don’t share your assumptions. Your colleague isn’t biased toward your code’s infallibility, so they’ll often spot issues you overlooked.

Simply put, be your own skeptic.

It’s better to spend 10 minutes double-checking your code than 10 hours assuming it’s flawless (trust me).

2/6 Optimism Bias in Estimates

Next is the Optimism Bias, a close cousin of the planning fallacy. This is our natural tendency to underestimate how long tasks will take and to overestimate our ability to handle them. In software, it’s the classic “Sure, I can build that feature in two weeks” scenario.

And then six weeks later, you’re still not done.

We developers are optimistic creatures when scoping out work; we imagine best-case scenarios where everything goes smoothly and every unknown is magically resolved.

Reality, of course, has other plans.

Another story.

Willa, a team lead, asks Robert for an estimate on a new feature. Robert, eager to impress (and genuinely thinking it sounds straightforward), replies, “Two weeks, tops.” He bases this on the assumption that the existing architecture will support the change easily and that he won’t run into major bugs.

Fast forward: two weeks in, the feature is only half-built.

Integrating with an older module turned out trickier than expected; there were merge conflicts galore, and some third-party library didn’t behave as documented. In the end, it took six weeks and a couple of 11th-hour pivots to deliver the feature.

Robert’s initial estimate was a textbook case of optimism bias. Focusing on best-case outcomes and low-balling the time required.

Psychologists have a name for this phenomenon: the planning fallacy. What that means is that we consistently underestimate the time, costs, and risks of future actions, even if we’ve done similar tasks before. In software projects, the planning fallacy is practically a rite of passage: requirements grow, bugs appear, and integration is harder than imagined. Yet when asked for timelines, our brains give an overly rosy number.

We forget how many “unknown unknowns” lurk in the implementation. This bias doesn’t just affect individuals; even whole teams and companies fall for it, leading to project overruns and blown budgets. Hofstadter’s Law captures it perfectly: it always takes longer than you expect, even when you take into account Hofstadter’s Law.

The takeaway is that to combat optimism bias, it helps to inject a dose of historical data and pessimism into your estimates. One strategy is to recall a similar past project. How long did it actually take? Chances are, longer than you initially thought. Use that as a reality check.

Another approach is to explicitly add “unknown buffer” time to your estimates, recognizing that surprises will happen. If you think it’s 2 weeks, maybe say 3 or 4. In team settings, techniques like planning poker (if you’re in an Agile setup, this should be familiar—everyone privately writes an estimate before revealing it) can prevent the most optimistic person from anchoring the whole team. Encourage a culture where it’s okay to say, “I’m not sure. Let’s break this into smaller pieces and see,” rather than pushing a blind commitment.
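One lightweight way to put that history to work is to scale a gut-feel estimate by how far off similar past estimates turned out to be. This is only an illustrative sketch with made-up numbers, not a formal estimation method:

```python
from statistics import median

# Hypothetical history of past tasks: (estimated_days, actual_days).
HISTORY = [(5, 9), (10, 14), (3, 3), (8, 15), (2, 4)]


def buffered_estimate(raw_estimate_days):
    """Scale a gut-feel estimate by the median overrun of past work."""
    overrun_ratios = [actual / estimated for estimated, actual in HISTORY]
    return raw_estimate_days * median(overrun_ratios)


if __name__ == "__main__":
    # "Two weeks, tops" (10 working days) becomes a less rosy number.
    print(f"Buffered estimate: {buffered_estimate(10):.1f} days")  # ~18 days with this history
```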

Bottom line: plan for rain even on a sunny day.

Future you will be grateful when those buffers catch the inevitable curveballs.

3/6 Anchoring Bias in Code Review and Meetings

Anchoring Bias is the tendency to give disproportionate weight to the first piece of information you encounter. Once an idea or number is planted in your mind, it serves as an “anchor” that biases subsequent thinking. In software teams, anchoring can sneak in during estimations, design discussions, and code reviews. The first opinion voiced often frames the entire conversation, for better or worse.

Consider a code review scenario: Dev A submits a PR. Reviewer 1 comments right away, “This approach is inefficient. It’s doing X in O(n²).” Now, even if later reviewers notice that the code might actually be okay or that the inefficiency is negligible in context, it’s hard to shake the first reviewer’s framing. The discussion becomes anchored around that initial critique. Similarly, in project planning, if a project manager opens with “I think this should take 3 months,” everyone else’s estimates will orbit around that figure. It’s human nature. The first number or idea drops a pin in our mental map, and all other estimates cluster near it.

Again, story time.

In a sprint planning meeting, the team is estimating a new user story. Before anyone else speaks, the tech lead casually says, “This one’s pretty small, I’d say about 3 story points. What do you think?” Even if other team members had different numbers in mind, they’re now subconsciously influenced by that 3. One by one, the team’s votes gravitate to 3 or maybe 5, whereas if the tech lead hadn’t spoken first, someone might have said 8 or 13 based on unseen complexities. By voicing an estimate early, the lead unintentionally anchored the team’s thinking.

Agile veterans know this, which is why many teams adopt silent estimation or planning poker: everyone's estimates are revealed simultaneously, so no single number gets to anchor the rest.

Anchoring bias doesn’t only apply to numbers; it can be an initial design choice or diagnosis as well. If the first theory for a bug is “it’s a database issue”, the whole investigation might get anchored on databases, even if logs eventually hint at a different cause. Our brains stick to the first shiny idea and can be stubborn about pivoting.

To reduce anchoring, it’s important to delay locking on one idea. In meetings, encourage multiple perspectives before converging. For example, when estimating, gather individual estimates privately, then discuss the range. You might be surprised how varied they are once freed from anchoring. In design discussions or debugging, consciously ask, “What if our initial assumption is wrong? What else could it be?” Consider writing down several hypotheses or approaches before debating any one of them.
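If you want a mechanical guard against anchoring, collect the estimates privately first and only then look at the spread. A toy sketch, with invented names and numbers:

```python
from statistics import median

# Hypothetical story-point estimates written down before anyone speaks.
blind_estimates = {"lead": 3, "ana": 8, "raj": 5, "mei": 13}


def summarize(estimates):
    """Report the range and median so discussion starts from the spread,
    not from whoever spoke first."""
    values = sorted(estimates.values())
    return f"range {values[0]}-{values[-1]} points, median {median(values)}"


if __name__ == "__main__":
    # A wide range is the cue to discuss hidden assumptions, not to converge on 3.
    print(summarize(blind_estimates))
```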

Also, be mindful of hierarchy-driven anchoring. If you’re the senior dev or manager, hold back your opinion initially to let others think independently.

By leveling the playing field of ideas, you ensure the best idea wins.

Not just the first one.

4/6 Availability Bias in Incident Postmortems

The Availability Bias (or availability heuristic) causes us to judge situations based on what’s easily recalled from memory. In other words, our brain grabs the most “available” example, usually something recent or dramatic, and uses it to explain what’s happening now. In software, this bias can mislead us during outages, postmortems, or even technology choices. We tend to reach for the first cause or solution that comes to mind, which is often colored by our most recent experiences.

Story.

A company suffered a major outage because a database table locked up under peak load. It was a big fire drill that everyone still remembers vividly. Now, today, the site is sluggish. Immediately, engineers start muttering, “Is the database acting up again?” The last outage is top of mind, so everyone’s first instinct is to blame the database. They spend an hour investigating the DB, only to realize the real issue is a completely different one (say, a misconfigured load balancer).

Because that previous incident stood out in memory, the team fell victim to availability bias. They assumed the current problem must be the familiar one. It’s like always suspecting the last villain who attacked the city, even when a new threat emerges.

We see this bias in other ways, too. A developer might choose a tool or library simply because they saw a cool blog about it last week (it’s readily available in memory), not because it’s objectively the best fit.

Or during root cause analysis, people latch onto a cause they’ve been talking about a lot recently, overlooking other possibilities. After a high-profile security breach via an API, for example, a team might focus all their energy on API security thereafter and become blind to other vulnerabilities.

Essentially, whatever’s easiest to recall feels like the right answer, which skews our judgment.

To counter this, the key is to force a broader search of possibilities. In incident response, it helps to methodically list multiple potential causes before diving into one.

Even if one hypothesis is screaming for attention.

Ask yourself and your team, “What else could it be if not that?”

Write them down.

This simple step pushes you beyond the reflexive answer.
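Even something as unglamorous as a written hypothesis list, checked off one by one, keeps the investigation from snapping straight to last month's villain. A hypothetical sketch of the habit:

```python
# Hypothetical incident notebook: list competing causes BEFORE digging into any one.
hypotheses = [
    {"cause": "database lock contention (like last time)", "evidence": None},
    {"cause": "load balancer misconfiguration", "evidence": None},
    {"cause": "slow downstream API", "evidence": None},
    {"cause": "recent deploy changed cache settings", "evidence": None},
]


def record(cause_fragment, evidence):
    """Attach an actual observation to the matching hypothesis."""
    for hypothesis in hypotheses:
        if cause_fragment in hypothesis["cause"]:
            hypothesis["evidence"] = evidence


if __name__ == "__main__":
    record("load balancer", "health checks failing on node 3")
    for hypothesis in hypotheses:
        status = hypothesis["evidence"] or "not yet investigated"
        print(f"- {hypothesis['cause']}: {status}")
```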

Data and documentation can also keep availability bias in check. Look at all the metrics, not just the familiar pattern you recognize. During postmortems, be wary of reactive preventive measures that only guard against the last incident. Perform a wider risk analysis to cover different scenarios.

In short, don’t let the most memorable incident dictate every future fix.

Each problem deserves a fresh, evidence-based look, not just a replay of last time.

5/6 & 6/6 Dunning-Kruger & the Curse of Expertise

Our final bias is a two-sided coin. The Dunning-Kruger effect on one face and the curse of knowledge on the other. Both relate to how we perceive our own (and others’) knowledge, especially in a field as complex as programming.

The Dunning-Kruger effect is a cognitive bias where people with low skill in a domain overestimate their competence. In other words, a little knowledge can be a dangerous thing. Beginners often don’t know what they don’t know, leading them to believe they’ve got it all figured out. If you’ve ever seen a newbie programmer boast that their code has zero bugs or that “this project will be easy,” you’ve likely witnessed Dunning-Kruger in action.

On the flip side, those with a lot of expertise can suffer from what’s often called the curse of knowledge. Once you know something deeply, it’s hard to remember what it was like not to know it.

The pros might assume certain things are “obvious” and inadvertently overlook explaining them, or they might underestimate how challenging a task is for others because it’s second nature to them.

Bear with me, two more stories to go.

First up is Sam, who just finished a coding bootcamp. Brimming with confidence, Sam tackles a new project at work and writes an implementation for handling user input. It works for the basic test cases, and Sam proudly declares the feature complete. He skips writing thorough tests. After all, the code “seems straightforward”. A week later, a weird edge case (special characters in the input) crashes the app in production.

In hindsight, the more seasoned devs knew this was a tricky area with many corner cases. Sam’s initial overconfidence was classic Dunning-Kruger. With limited experience, he didn’t realize how much complexity was lurking beneath the surface, so he overestimated the soundness of his solution. As the saying goes, “a little knowledge is enough to make you think you’re right.”
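The cheap insurance Sam skipped looks roughly like this: a handful of hostile inputs run against the handler before the feature is declared done. The handler and the cases below are hypothetical:

```python
import unittest


def normalize_username(raw):
    # Hypothetical input handler: trims whitespace, lowercases, rejects empties.
    cleaned = raw.strip().lower()
    if not cleaned:
        raise ValueError("username cannot be empty")
    return cleaned


class EdgeCaseTests(unittest.TestCase):
    """The cases a 'seems straightforward' implementation tends to forget."""

    def test_special_characters_survive(self):
        self.assertEqual(normalize_username("  Zoë_42 "), "zoë_42")

    def test_whitespace_only_is_rejected(self):
        with self.assertRaises(ValueError):
            normalize_username("   \t\n ")

    def test_emoji_does_not_crash(self):
        self.assertEqual(normalize_username("dev🚀"), "dev🚀")


if __name__ == "__main__":
    unittest.main()
```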

Been there, done that.

Early in my career, I was sure I had built a perfect solution, only for it to fall apart under real-world conditions.

Last one. This time we have an expert: Mathilda, a senior engineer regarded as a guru in distributed systems. Mathilda writes code that is brilliantly optimized but almost indecipherable to others.

She uses arcane language features and assumes everyone can follow her clever one-liners. When she hands off a service to the team, no one else can maintain it without frequent huddles with Mathilda.

In meetings, Mathilda might also breeze past explanations, using jargon and assuming the rest of the team is on the same page. This is the curse of knowledge in action.

Mathilda has been deep in this domain so long that she forgets what it’s like to be a newcomer.

Ironically, her expertise, instead of elevating the whole team, ends up isolating her contributions because they’re hard for others to grasp.

An expert’s bias can thus introduce complexity and communication gaps that become their own kind of bug.

Whether you’re a novice or a seasoned pro, self-awareness is the antidote to these knowledge biases. For less experienced developers, the mantra should be “trust, but verify.” By all means, be confident. We need that to tackle problems, but temper it with healthy tests and code reviews. Assume there are things you might have missed (because there usually are).

One study found that increasing expertise can mitigate confirmation bias in testing, suggesting that novices benefit from guidance and more meticulous verification. In practice, that means don’t fly solo in a bubble of “I know what I am doing”. Get feedback, write tests to prove your code works beyond the sunny-day scenarios, and keep learning.

For the experts among us, the challenge is to stay humble and empathetic. Remember that if something has become obvious to you, it’s probably thanks to years of experience, not because it’s inherently easy.

Make a habit of explaining your thought process and any non-trivial code.

If you design a clever solution, ask a teammate to review it and be open to simplifying it if it’s too convoluted.

Often, over-engineering is an ego trap.

It can be tempting to architect a grand, complex solution (I’ve built my share of needless “Rube Goldberg” contraptions in code—fun to create, painful to maintain).

Fight that urge unless complexity is absolutely warranted. Embrace principles like KISS (“Keep It Simple, Stupid”), which originated in the U.S. Navy to remind engineers that simplicity wins in the end.

As computer science legend Tony Hoare once quipped:

There are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies, and the other way is to make it so complicated that there are no obvious deficiencies. — Tony Hoare

In other words, simplicity isn’t stupidity. It’s a skill. Don’t let your knowledge trick you into making things harder than they need to be.
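As a small, hypothetical illustration of what KISS buys you: both functions below return the same result, but only one of them can be maintained by someone other than its author.

```python
# The "clever" version: dense, impressive, and opaque a month later.
def active_emails_clever(users):
    return list(map(lambda u: u["email"],
                    filter(lambda u: u.get("active") and "@" in u.get("email", ""),
                           users)))


# The simple version: same behavior, readable at a glance.
def active_emails_simple(users):
    emails = []
    for user in users:
        if user.get("active") and "@" in user.get("email", ""):
            emails.append(user["email"])
    return emails


if __name__ == "__main__":
    users = [
        {"email": "a@example.com", "active": True},
        {"email": "not-an-email", "active": True},
        {"email": "b@example.com", "active": False},
    ]
    assert active_emails_clever(users) == active_emails_simple(users) == ["a@example.com"]
    print(active_emails_simple(users))
```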

True expertise often shows in how approachable and well-explained your solutions are, not just how advanced they are.

Share knowledge, mentor others, and keep an open mind.

After all, the tech world is always evolving, and even experts are forever students in some corner of the field.


Recap & Mitigation Strategies

We’ve walked through some of the common “brain bugs” that plague even the best of us in software development.

The first step to fixing a bug is recognizing it, so give yourself a pat on the back for sticking with such a long read. You’re now more aware of these cognitive quirks than many others.

To help solidify these ideas, here’s a quick summary of each bias, how it manifests for developers, and how we might mitigate it:

1️⃣ Confirmation Bias

How it Messes with You: “My code can’t be wrong.” Assumes one’s own code or hypothesis is correct, ignoring evidence to the contrary.

How to Mitigate It: Seek out disconfirming evidence; double-check your own code first. Use pair programming or code reviews for fresh perspectives.

2️⃣ Optimism Bias

How it Messes with You: Underestimating time and complexity. Believes things will go smoothly (e.g., unrealistic deadlines, “no problem, I got this”).

How to Mitigate It: Add buffers to estimates; use past project data as a reality check. Break tasks down and assume some unknowns will occur. Encourage honest postmortems to improve future estimates.

3️⃣ Anchoring Bias

How it Messes with You: First info heard sticks. Initial estimates or opinions (especially from leaders) overly influence everyone’s thinking.

How to Mitigate It: Get independent estimates before group discussion. Use planning poker or silent brainstorming to consider alternatives without a strong initial anchor. Consciously re-evaluate if you notice you’re stuck on the first idea.

4️⃣ Availability Bias

How it Messes with You: Judging based on what’s easily recalled. Recent or vivid incidents dominate thinking (“The last outage was X, so this must be X too”).

How to Mitigate It: Look at objective data, not just memory. Deliberately list multiple possible causes or solutions before choosing. Use checklists in postmortems to ensure all areas are examined, not just the familiar ones.

5️⃣ + 6️⃣ Dunning–Kruger Effect (and Curse of Knowledge)

How it Messes with You: Novices overestimate their skill (think they’ve got it all figured out), while experts assume knowledge is obvious and may over-complicate or under-explain.

How to Mitigate It: For novices, stay curious, test thoroughly, and seek feedback. You don’t know what you don’t know. For the pros, remember not everyone has your context; practice KISS and clear communication. Both: keep learning and welcome critique.

Keep in mind that these biases often operate subconsciously. You might catch yourself in a bias only after the fact (“Oops, I was totally anchoring on that number” or “I only looked for proof I was right”). That’s normal. The goal isn’t to achieve some perfect, unbiased mind (if only!), but to reduce the impact of these biases on your work. By building habits like testing your assumptions, planning for contingencies, and welcoming diverse viewpoints, you create an environment where biases have less hold.


Conclusion

We spend endless effort debugging our code. Maybe it’s time we debug our own thinking.

The toughest bugs to squash often live between the keyboard and chair, in the blind spots of our cognition.

The next time you know your code is correct or feel certain a deadline is no problem, take a step back and ask, “Is that confidence, or is a cognitive bias whispering in my ear?”

By being vigilant of these mental pitfalls, we can code smarter, plan better, and collaborate more effectively.

After all, good engineers don’t just improve their code—they continuously improve their mindsets too!


Enjoyed this piece?

If this piece was helpful or resonated with you, you can support my work by buying me a Coffee!
