The Wall of Constraints: Why Your Model Might Be Lying to You
- egonzalez267
- Apr 16
- 2 min read
You know the phrase: there are lies, damned lies, and statistics. But there's a lesser-known cousin: lies, statistics, and... constraints.
Let’s talk about a moment we’ve all faced—running a statistical model that doesn’t give stakeholders the answers they were hoping for. Maybe you've heard something like: "That can’t be right—price sensitivity can't be positive, can it?" And then someone says, "Let’s just constrain it to be negative."
Sounds harmless, right? But if you've ever felt a little uneasy in that moment, your instincts are on point. Let’s unpack why.
The Constraint Trap
Here’s a classic example: "Price sensitivity can’t be positive, so we’ll constrain it to be negative."
The logic makes intuitive sense—people generally buy less when prices go up. But what if the data doesn’t quite say that? What if the true effect is actually zero (i.e., no impact), and our model is just picking up noise?
Here’s where things get dicey.
Let’s say we’re estimating parameters at the individual level for a sample of, say, a thousand respondents. Even if the true value is zero, some individuals will have positive estimates and some will have negative ones—because data is noisy. That’s just statistics.
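Here's a tiny simulation of that point, a sketch rather than anyone's actual estimation code: a thousand hypothetical individual-level estimates where the true effect is exactly zero, each just the true value plus sampling noise. Roughly half come out positive and half negative, even though nothing real is going on.

```python
import random

random.seed(42)

TRUE_EFFECT = 0.0  # the true individual-level effect: nothing there
NOISE_SD = 1.0     # sampling noise in each individual estimate (assumed)

# One noisy estimate per respondent: true value plus noise.
estimates = [random.gauss(TRUE_EFFECT, NOISE_SD) for _ in range(1000)]

n_positive = sum(1 for e in estimates if e > 0)
n_negative = sum(1 for e in estimates if e < 0)

print(f"positive estimates: {n_positive}")
print(f"negative estimates: {n_negative}")
print(f"unconstrained mean: {sum(estimates) / len(estimates):.3f}")
```

The unconstrained average sits near zero, because the positive and negative noise cancels out. That cancellation is exactly what a one-sided constraint destroys.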
But now we add a constraint. (To keep the picture simple, flip the sign of the price example and suppose the effect "can't be negative.") Specifically, we force any negative estimate up to zero, because "that can't be right."
So what happens?

A Visual Metaphor: Bouncy Balls and a Wall
Imagine dropping thousands of bouncy balls right next to a wall. The balls represent individual estimates. The drop point—the true value—is zero, right next to the wall. Some balls bounce away from the wall (positive values), and others bounce toward it (negative values).
But now we add a rule: balls can't pass through the wall. Any ball that hits it just stops. So the positive bounces stay where they land, while all the negative ones pile up at zero.
The result? The average position of the balls shifts away from the wall—farther from the true value.
This is exactly what happens in constrained estimation. Even when the real effect is zero, the constraint truncates one tail of the distribution, so the average estimate is biased away from the wall: in this case, positively biased.
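The size of that bias is easy to see in a simulation. This is a minimal sketch under assumed numbers (standard normal noise, true effect zero, a non-negativity constraint applied by clamping), not a model of any particular study:

```python
import random

random.seed(0)

# 10,000 noisy individual estimates around a true effect of exactly zero.
estimates = [random.gauss(0.0, 1.0) for _ in range(10_000)]

# Apply the "wall": estimates "can't be negative", so clamp them at zero.
constrained = [max(0.0, e) for e in estimates]

print(f"mean before constraint: {sum(estimates) / len(estimates):+.3f}")
print(f"mean after constraint:  {sum(constrained) / len(constrained):+.3f}")

# For standard normal noise, the clamped mean converges to
# E[max(0, Z)] = 1 / sqrt(2 * pi), about 0.40, while the truth is 0.
```

Nothing about the data changed between the two lines of output; only the constraint did. The entire gap between the two means is bias introduced by the wall.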
And that’s the number that ends up in the PowerPoint deck. That’s the number stakeholders use to decide where to advertise, how to price, what features to launch. Yes—billion-dollar decisions have been made based on flawed models like this.
So, What Can We Do?
Constraints aren’t inherently bad—they can even be necessary. But applying them blindly, especially at the individual level, introduces serious bias if you're not using sophisticated methods to correct for it.
There are advanced techniques to handle constraints properly, especially in Bayesian estimation. But let’s be honest: they’re rarely implemented in everyday research workflows.
Trust Your Gut, Ask the Question
So next time you’re building a model and someone says “just constrain it,” pause for a second. Picture that bucket of bouncy balls slamming into a wall.
If your gut says something’s off, trust it. Ask questions. Challenge assumptions.
Because the worst kind of model error isn’t a bug—it’s a bias dressed up as certainty.