There’s a limit to how much information humans can process at a time. For this reason, we often rely on mental shortcuts (heuristics) that can manifest as cognitive biases. You’ve probably heard of them. And if you can’t quite define them, here’s some help: a cognitive bias is a systematic way humans make quick, lazy judgments based not on all the relevant information but on learned patterns.
For example, just because you have seen two things happen one after the other several times, it does not necessarily mean that the thing that happened first caused the thing that happened second. That’s an illusion of causality, which happens when people see a cause-and-effect relationship where there’s none.
You see, cognitive biases are systematic inferences that help us react quickly to our environment. This can be useful at times, while in other contexts it is responsible for major cognitive errors with significant negative consequences. Here are some extra details.
Why Do Biases Look So Consistent?
According to Hilbert (2012), one big reason biases can look so consistent is noise (i.e., mental distortions). Your mind may be a somewhat impressive information-processing system, but it is still built by evolution. You are not a perfectly logical machine, so some (or a lot) of sloppiness is expected.
Let’s take a look at Hilbert’s proposal on how to think about several classic biases using a simple pipeline:
Evidence (E) → Memory (M) → Estimate (Ê)
Think of judgment as a three-step process: you notice information (Evidence/E), it gets stored/filtered through your memory (M), and then you produce a final guess or conclusion (Estimate/Ê).
If your mind were a perfect, noiseless channel, evidence “e1” would become estimate “ê1” every time (meaning the same input e1 would always produce the same correct output ê1). But your biological mind allows mistakes to happen: sometimes e1 gets stored or retrieved in a distorted manner and comes out as ê2 instead; i.e., your output gets ‘contaminated’ by ‘noise’.
In other words, with a perfect machine (which we’ve already established you are likely not at this point in space-time), the same input would always produce the same correct output. But human memory and interpretation can distort what came in, so the output can drift.
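To make the pipeline concrete, here’s a minimal sketch in Python (my own illustration, not code from Hilbert’s paper) of a noisy Evidence → Memory → Estimate channel: the memory step occasionally confuses an input with a neighboring value, so the same evidence does not always produce the same estimate. The noise rate and the values are invented for illustration.

```python
import random

random.seed(1)

def noisy_channel(evidence, noise=0.2):
    """Toy Evidence -> Memory -> Estimate pipeline.

    With probability `noise`, memory stores/retrieves a neighboring
    value instead of the one that actually came in.
    """
    if random.random() < noise:
        return evidence + random.choice([-1, 1])  # confused with a neighbor
    return evidence  # remembered correctly

# The same input e1 = 5 does not always come out as ê1 = 5.
estimates = [noisy_channel(5) for _ in range(10)]
print(estimates)  # e.g. mostly 5s with an occasional 4 or 6
```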
A useful twist: you can look at the pipeline in both directions, that is, you can study mistakes either by starting from the facts and seeing what conclusions people reach, or by starting from someone’s conclusions and working backward to what facts they likely encountered (or remembered).
Example:
1. P(Ê|E): given evidence, what estimate do you produce?
2. P(E|Ê): given your estimate, what evidence was probably “really” there?
One view (1) asks, “Given what you saw, what did you decide?” The reverse view (2) asks, “Given what you decided, what were you probably seeing—or what did your brain treat as the evidence?”
They’re linked by Bayes’ theorem, meaning the way you get things wrong in one direction is mathematically tied to the uncertainty in the other direction.
In other words, because these two directions are connected by probability math, your errors aren’t random in a totally free way: the kinds of wrong conclusions you produce are connected to how uncertain or noisy the evidence (and memory) is in the first place.
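As a rough numerical illustration (again my own sketch, with invented evidence values, base rates, and noise level), you can simulate many evidence–estimate pairs through a noisy channel and read off both conditional directions from the same joint counts; Bayes’ theorem, P(E|Ê) = P(Ê|E) · P(E) / P(Ê), is what ties them together.

```python
import random
from collections import Counter

random.seed(2)

# Simulate (evidence, estimate) pairs through a toy noisy channel:
# evidence values 1-3 occur with unequal base rates, and with
# probability 0.2 the estimate drifts to a neighboring value.
pairs = []
for _ in range(100_000):
    e = random.choices([1, 2, 3], weights=[0.6, 0.3, 0.1])[0]
    eh = e + (random.choice([-1, 1]) if random.random() < 0.2 else 0)
    pairs.append((e, eh))

n = len(pairs)
joint = Counter(pairs)
ev_counts = Counter(e for e, _ in pairs)     # marginal counts of evidence
est_counts = Counter(eh for _, eh in pairs)  # marginal counts of estimates

forward = joint[(2, 2)] / ev_counts[2]    # P(Ê=2 | E=2): given the evidence, what did you estimate?
backward = joint[(2, 2)] / est_counts[2]  # P(E=2 | Ê=2): given the estimate, what was the evidence?

# Bayes' theorem ties them together: P(E|Ê) = P(Ê|E) * P(E) / P(Ê)
via_bayes = forward * (ev_counts[2] / n) / (est_counts[2] / n)
print(round(forward, 2), round(backward, 2), round(via_bayes, 2))  # e.g. ~0.8, ~0.77, ~0.77
```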
8 Biases from “Noisy” Processing
At least 8 well-studied decision biases can be generated by noisy deviations in a few memory-based information processes. Hilbert doesn’t say other mechanisms (heuristics, motivation, social influence) don’t matter; he argues that simple noise is sufficient for these effects.
The 8 biases are:
1) Conservatism: not extreme enough. This is the classic “regression toward the middle” pattern: people overestimate low values and underestimate high values. In probability terms, rare things get judged less rare, common things get judged less common, meaning everything gets pushed toward the average.
Noise story: if your channel occasionally mixes neighboring possibilities, your outputs become less spread out than your inputs, less extreme. For example, imagine you’re estimating how long a task took: one took 20 minutes, and another took 2 hours, but interruptions and fuzzy recall make you report 35 minutes and 90 minutes, pushing both toward the center.
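Here’s a minimal sketch of that noise story (my own toy numbers, not from the paper): a fraction of the time, the memory of a task’s duration gets blended with some other task you’ve done, and the average recollections end up squeezed toward the middle of the range.

```python
import random
from statistics import mean

random.seed(3)

def noisy_estimate(true_minutes, noise=0.3):
    """With probability `noise`, the memory trace blends with a
    'neighboring' duration drawn from the overall range of tasks."""
    if random.random() < noise:
        neighbor = random.uniform(20, 120)    # some other task you've done
        return (true_minutes + neighbor) / 2  # blended recollection
    return true_minutes

short_task = [noisy_estimate(20) for _ in range(10_000)]
long_task = [noisy_estimate(120) for _ in range(10_000)]

# The short task is remembered as longer, the long task as shorter:
print(round(mean(short_task)), round(mean(long_task)))  # e.g. ~27 and ~112
```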
2) Bayesian likelihood bias: conservative conditional probabilities. This is conservatism’s “conditional” cousin: when asked to estimate the probability of A given that B is true, people’s estimates are systematically too conservative compared to the mathematically correct probability of A given B.
Noise story: even if the evidence within a condition strongly favors one outcome, crossover/mixing pushes estimates toward less extreme values. In other words, even when the facts in a situation clearly point to one answer, your brain’s “signal” can get slightly contaminated by nearby alternatives, so your final judgment ends up less bold than the evidence suggests; you become more prone to say “probably” than “virtually surely.”
For example, imagine you’re doing a listening test where one sound clip is almost certainly “anger” (all the acoustic cues scream anger), but because some cues overlap with “fear” and your brain occasionally blends close categories, you answer “anger, but maybe fear” and rate your confidence lower than the evidence warrants.
3) Illusory correlation: patterns you “see” that aren’t really there. Illusory correlation (including well-known stereotyping cases) happens when estimates on a two-dimensional distribution (two variables at the same time, such as a person’s characteristics and a person’s behavior) become correlated in people’s judgments, so you “detect” a relationship that’s stronger than (or different from) what the evidence shows you.
Noise story: once evidence is filtered through a noisy memory/estimate process, distortions can create apparent structure.
For example, let’s say you believe people wearing glasses are, on average, more well-read and educated. Maybe they are, maybe they aren’t. But if you have this assumption in mind, you may be more prone to remember the instances when someone with glasses seemed sophisticated than the instances when someone with glasses seemed ignorant. As a result, you may be overestimating an existing correlation or assuming a correlation that doesn’t even exist.
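A small sketch of how a noisy, selective memory can manufacture a pattern (the retention rates here are invented): the encounters themselves carry no glasses/well-read relationship, but if confirming instances survive memory slightly more often, the remembered sample shows one.

```python
import random

random.seed(4)

# Generate encounters where wearing glasses and being well-read are unrelated.
encounters = [(random.random() < 0.5, random.random() < 0.5) for _ in range(10_000)]

def remembered(glasses, well_read):
    """Noisy memory: confirming instances (glasses AND well-read) are
    more likely to survive storage/retrieval than the rest."""
    keep_prob = 0.9 if (glasses and well_read) else 0.6
    return random.random() < keep_prob

memory = [(g, w) for g, w in encounters if remembered(g, w)]

def rate_well_read(sample, glasses):
    group = [w for g, w in sample if g == glasses]
    return sum(group) / len(group)

print(rate_well_read(encounters, True), rate_well_read(encounters, False))  # ~0.5 vs ~0.5
print(rate_well_read(memory, True), rate_well_read(memory, False))          # a gap appears in memory
```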
4) Biased self–other placement: when you think you rock. Placement bias here refers to estimates of your own standing, which tend to look better than your estimates about others.
Noise story: If judgments about self and others pass through different distortion patterns (or if one channel is richer in information than the other because you know more about yourself than about others), you can get consistent self-favoring placement without needing a special ego module in the brain. To be clear, you are not necessarily wrong about yourself; it’s just that you might be.
5) Subadditivity: the parts add up to more than the whole. Subadditivity means that when you split one event into several non-overlapping parts, people often give each part a probability that seems reasonable on its own—but when you add those probabilities together, the total ends up bigger than the probability they gave for the overall event (so the “parts” oddly add up to more than the “whole”).
Noise story: when probabilities are small, our estimates often get pulled upward a bit (small chances feel more “possible” than the math warrants). If you unpack one event into many small pieces, you end up making several slightly-too-high guesses, and when you add those small overestimates together, you overestimate the true total. This is also why two-part splits (A vs not-A) behave differently: with only two options, people typically make them balance out (they have to sum to 100%), but with many little options, each one can get a small extra bump.
For example, let’s assume you ask someone, “What’s the chance my flight is delayed?” They say 20%. Then you unpack it: “What’s the chance it’s delayed because of weather?” (10%) “air-traffic control?” (8%) “maintenance?” (7%) “late incoming plane?” (6%). Added up, that’s 31%, which is higher than the original 20%.
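A tiny sketch of that mechanism (the “true” cause probabilities and the size of the bump are invented): give every smallish probability a small upward nudge, and the unpacked parts sum to more than the judged whole.

```python
def judged(p, bump=0.03):
    """Toy noise story: smallish probabilities pick up a small upward bump."""
    return p + bump if p < 0.5 else p

true_parts = [0.07, 0.05, 0.04, 0.03]   # invented 'true' causes of delay
true_whole = sum(true_parts)            # 0.19: the true overall chance of a delay

parts_total = sum(judged(p) for p in true_parts)          # each part slightly overestimated
print(round(judged(true_whole), 2), round(parts_total, 2))  # ~0.22 vs 0.31: parts exceed the whole
```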
6) Exaggerated expectation: reality may be less extreme than your predictions. Exaggerated expectations happen when your predictions are too extreme compared with what actually happens: when you expect something to go really well, the result may be less great than you pictured, and when you expect something to go really badly, the result may not be as bad as you thought.
Noise story: Your estimates may be a bit noisy (too optimistic sometimes, too pessimistic other times). If you look from evidence to estimate, that noise can make estimates look “pulled toward the middle” (a conservatism-style effect). But if you look from estimate to evidence, the same noise shows up as “my extreme predictions don’t get matched by equally extreme outcomes,” because real outcomes don’t follow your assumptions, so it looks like the world is consistently less extreme than your expectations.
For example, you estimate this year’s budget to be 80% larger than last year’s, but you end up with 40%; another time, you estimate a 30% drop, but the budget only shrinks by 5%. Your expectations were more extreme than reality in both directions.
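Here’s a rough simulation of that change of direction (the distributions and thresholds are mine, purely illustrative): forecasts are just the true outcomes plus symmetric noise, yet if you look only at your boldest forecasts, the matching outcomes come out noticeably milder.

```python
import random
from statistics import mean

random.seed(6)

# True year-over-year budget changes (%), and forecasts = truth + symmetric noise.
outcomes = [random.gauss(10, 20) for _ in range(100_000)]
forecasts = [o + random.gauss(0, 20) for o in outcomes]

# Condition on the most extreme predictions and check what actually happened.
after_big_calls = [o for o, f in zip(outcomes, forecasts) if f > 60]
print(round(mean(after_big_calls)))  # well below 60: outcomes lag the extreme forecasts
```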
7) Confidence bias: confidence is too extreme. Hilbert treats confidence as a relationship between memory-based correctness (“hit rate”) and stated confidence. The bias: confidence can be too extreme relative to what memory/evidence supports.
Noise story: distortions in the retrieval sub-channel (from memory to estimate) can stretch confidence judgments. For example, you assume there’s a 70% chance you will win the lottery this week because you’ve already won a few times in the last 6 months. But if you calculate how many times you won relative to how many times you played, you may realize that you’ve only won 5% of the time.
8) Hard–easy effect: confidence gets conservative as tasks change in difficulty. Hard–easy effect means your confidence doesn’t stay calibrated when task difficulty changes: on easy tasks, people tend to be less confident than they should be (too conservative), and on hard tasks they often end up more confident than they should be (or, put differently, their confidence shifts in a difficulty-dependent way instead of tracking real accuracy smoothly).
Noise story: Hilbert’s point is that “confidence bias” and the “hard–easy effect” can come from the same basic setup: judgments are made through a noisy channel where we confuse similar things more than dissimilar things, and we’re usually more right than wrong. The pattern you see depends on what you’re asking, given what: if you look at accuracy given reported confidence (how often you’re right at each confidence level), you see one kind of mismatch; if you look at reported confidence given task difficulty (how confident you feel on easy vs hard items), you see the hard–easy shift.
For example, on an arguably easy question, “What’s the capital of Scotland?”, you might say 70% sure even though you’re right ~100% of the time, so underconfident on easy items. On harder ones, such as “What year did X event happen?”, you might still say 70% sure but only be right ~30–40%, so overconfident on hard items.
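A quick calibration check in the same spirit (the accuracy rates and the flat 70% confidence are invented to mirror the example above): compare stated confidence with the actual hit rate separately for easy and hard items, and both mismatches fall out.

```python
import random
from statistics import mean

random.seed(7)

def answer(p_correct, stated_confidence=0.7):
    """You report the same 70% confidence either way, but actual accuracy
    depends on how confusable the alternatives are (task difficulty)."""
    return (random.random() < p_correct), stated_confidence

easy = [answer(0.95) for _ in range(10_000)]  # 'capital of Scotland'-style items
hard = [answer(0.35) for _ in range(10_000)]  # obscure-date-style items

for name, items in [("easy", easy), ("hard", hard)]:
    hit_rate = mean(correct for correct, _ in items)
    confidence = mean(conf for _, conf in items)
    print(name, round(hit_rate, 2), confidence)
# easy: hit rate above stated confidence (underconfident)
# hard: hit rate below stated confidence (overconfident)
```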
Bottom Line: Biases Are Everywhere
You probably have no idea just how often your judgment is influenced by biases. The 8 biases you’ve read about are just a few from the total collection. If you want to learn about other biases, feel free to also check this Catalogue of Bias.
Can we get rid of biases? Not without some engineering (and even then, we’d probably just end up perfecting them). Are biases a bad thing? They obviously can be: they can lead to poor judgment, contribute to negative (or positive) stereotypes and other misjudgments, and, in some contexts, produce poor or very poor decisions.
But there’s a reason why we are prone to biases: sometimes the brain needs to make sense of the world under time pressure, uncertainty, and limited attention. Nature didn’t optimize humans to be perfect statisticians; it optimized us to survive and coordinate in complex environments. If someone runs towards you in a dark alley, it doesn’t necessarily mean they want to harm you, but there’s a good chance they do. In such a situation, your best option is typically to run, as you have no time or possibility to gather more information. There are plenty of less extreme cases where biases can come in handy as well.
But just because you need biases at times doesn’t mean you cannot develop strategies to mitigate their negative effects. And the best way to do so is to understand what biases are and how they work. By doing so, you might become more capable of detecting your own biases and those of others, and of assessing whether a situation should be judged quickly or analyzed in a more sophisticated (and computationally demanding) manner.
Once you look for it, you see the footprint of bias everywhere. In relationships, shortcuts shape first impressions, attributions (“they did it because they’re like that”), and conflict narratives (“I’m right because I’m this and that”). In politics, biases influence what feels “obvious,” who seems trustworthy, which threats feel urgent, and which stories spread. In markets and organizations, shared biases can synchronize behavior (crowd optimism, crowd panic). In other words: biases don’t just affect what we think; they shape what we notice, what we remember, what we feel certain about, and what we’re willing to do next.
So yes, sometimes we need to rely on shortcuts such as pattern-matching, rule-of-thumb estimates, and quick causal stories to get to a workable decision. But we don’t want to end up making systematic misjudgments again and again. So we need to develop a bias buster that works when we need it to.
Sources
Hilbert, M. (2012). Toward a Synthesis of Cognitive Biases: How Noisy Information Processing Can Bias Human Decision Making. Psychological Bulletin, 138(2), 211-237. Link.
