Superstition Deadlock

When product organizations freeze in plain sight

A team walks into a weekly product review with the kind of clarity leaders say they want.

The user problem is understood. The team has evidence. Engineering has a path. Support is asking for relief. Everyone can feel the cost of waiting.

And yet the meeting ends exactly the way these meetings so often end: not with a decision, but with a promise of one.

A follow-up session. A new deck. A request for “one more cut” of the numbers. A reframe that magically relocates the decision into the future, where it will be safer to make.

From the outside, this looks like indecision.

From the inside, it feels like professionalism.

What I want to name in this series is the hidden mechanism that turns “professionalism” into paralysis: Superstition Deadlock.1

It is not a productivity issue. It is not a framework issue. It is not a talent issue.

It is what happens when an organization runs on beliefs that are individually reasonable, culturally protected, and mutually incompatible.

Corporate superstition is not a joke

When I say “corporate superstition,” I am not talking about lucky charms, rituals, or irrational people.

I am talking about beliefs that behave like laws even when they have never been tested. They travel through onboarding, performance reviews, “how we do things here,” and the quiet social penalties that teach people what not to say.

In the work behind this theory, those beliefs cluster into recurring archetypes: imitation without understanding, caution that performs as process, measurement that replaces meaning, rigidity disguised as realism, and reorgs that substitute for truth.2

When these beliefs stay isolated, they are manageable. They read like preferences, or scars from past failures.

The trouble begins when two of them collide and the organization treats both as sacred.

The contradiction nobody is allowed to name

Superstition Deadlock is the organizational version of an impossible instruction.

You have heard the lines, sometimes in the same sentence:

  • We must move faster. We must not break anything.
  • We must be user-centered. We must not slow delivery.
  • We must be data-driven. We all know the metrics are misleading.
  • Teams must own outcomes. Nothing ships without approvals.

Each instruction can be defended. Each instruction has a story behind it. Each instruction, in isolation, signals maturity.

Together, they create a system where any action is punishable.

That structure has a name in communication theory: a double bind. It describes a situation where contradictory demands operate at different levels, and where commenting on the contradiction is itself unsafe.1 In organizations, the bind does not need to be explicit. It lives in incentives, governance, and reputation.

Once a team learns that naming the contradiction gets them labeled as “not collaborative,” or “not strategic,” or “not a team player,” the contradiction becomes invisible. And an invisible contradiction cannot be resolved. It can only be managed.

So the team learns the coping behavior that looks best under scrutiny: produce artifacts, request alignment, and delay commitment.

This is how deadlock starts.

How deadlock shows up in UX and product work

In UX practice, the most common deadlock is not methodological. It is moral.

A team is told that decisions must be grounded in user reality. Many teams even have standards and language to support this: explicit understanding of users, iterative evaluation, multidisciplinary work.3 These are not controversial ideas.

Then the same team is told, implicitly or explicitly, that delivery commitments are non-negotiable and that uncertainty is a liability. The roadmap is treated as a promise rather than a hypothesis. A research finding becomes “interesting,” then quietly downgraded to “not actionable right now.”

So research turns into theater. Not because researchers are performing, but because the organization has made it impossible for evidence to change direction without political cost.

The pattern repeats in design critique. A design review is framed as collaborative, but the real purpose is risk management. People ask for “alignment” when they mean “permission.” Stakeholders ask for more options when they mean “don’t make me responsible for choosing.” The meeting does not reduce uncertainty. It redistributes it.

On paper, the company is building a culture of collaboration. In practice, it is outsourcing decision-making to process.

This is Risk Theater, one of the superstition archetypes, expressed as a workflow.2

The sprint that never ends

Product teams often sell themselves a story about autonomy.

The story is attractive: give a cross-functional team a goal, trust them to self-organize, and let the best solution emerge.4 In many environments, that story is even written into operating models.

Then reality arrives. Decisions require approval from multiple groups with different risk incentives. Legal has a different definition of “done.” Security has a different definition of “safe.” Brand has a different definition of “acceptable.” Architecture has a different definition of “clean.”

The team is still held accountable for outcomes, but authority is distributed across committees. The team is asked to “own it,” and also asked not to make irreversible decisions.

So teams become excellent at producing reversible work: prototypes, tickets, documentation, sprint rituals, and carefully worded updates.

The backlog grows. The sprint cadence continues. The system remains in motion.

And still, nothing crosses the threshold where it becomes real.

Deadlock is rarely dramatic. It is often polite.

The metrics that stop being information

There is a particular cruelty in deadlocks that involve measurement.

Organizations want the safety of numbers. Teams want the legitimacy of data. Leaders want the confidence that comes from dashboards.

But there is a known failure mode in any environment where metrics are tied to rewards: the moment a measure becomes a target, it stops behaving as a measure.5 It becomes a game.

Teams learn what improves the number. They also learn what improves the story. Over time, the organization develops two simultaneous beliefs:

Numbers are the only credible language.

Numbers are not credible.

That pair is one of the easiest deadlocks to spot because it produces a distinctive behavior: an endless debate about the metrics that never changes the decision. Meetings become contests of interpretation. Evidence becomes ammunition. The real question, “what are we willing to trade off,” stays unasked because it would force accountability into the room.

This is Measurement Blindness expressed as organizational etiquette.2

Public institutions show the same behavior

Deadlocks are not limited to corporations.

Public institutions often publish explicit standards that insist services should evolve, that iteration should continue once real people are using the service, and that services are never finished.6 The intent is sound: services exist in changing contexts, and user needs do not freeze.

But institutions also operate under procurement rules, audit constraints, political risk cycles, and legacy infrastructure. Those forces can create a structural contradiction: you are expected to iterate frequently, but you are funded and governed as if change is failure.

So a service can be simultaneously required to improve continuously and punished for deviating from an approved scope. “Agile ways of working” can be encouraged while budgets and contracts reward predictability.6

When that contradiction becomes undiscussable, teams do what people always do in double binds: they comply with the form. They run rituals. They produce documentation. They show movement.

They cannot move in the direction that matters.

Why the deadlock tightens over time

A single episode of hesitation is normal. It can be wise.

A deadlock is different because it is self-reinforcing.

When an organization delays a decision, it often creates the very conditions that justify further delay: more dependencies, more stakeholder anxiety, more accumulated risk. The longer the system stays frozen, the more expensive action becomes. The more expensive action becomes, the more attractive rituals feel.

This is where defensive routines take over. People detect a mixed message, act as if it does not exist, make that avoidance socially acceptable, then make the avoidance itself impossible to discuss.7

Once that pattern hardens, it starts to look like culture. Not the kind on posters, but the kind that sits underneath posters.

There is a well-known way to describe that deeper layer: the difference between visible artifacts, stated values, and the underlying assumptions that quietly determine what is safe to do.8 Superstition Deadlock lives at that bottom layer.

It is why the same organization can say “we value autonomy” and still punish initiative.

It is why “we are user-centered” can coexist with a system that treats user evidence as optional.

It is why a team can be busy every day and still make no irreversible choices for months.

What this series is really about

Superstition Deadlock Theory is not an attack on process. Good process makes complex work possible.

It is a warning about what happens when process becomes an alibi, and when contradictions become unnameable.

If you are a UX researcher, designer, product manager, or engineer, you do not need a new framework to recognize this. You have likely felt it already, in that moment when the room agrees on the problem and still cannot choose.

This first post is about recognition. Naming the phenomenon matters because it separates two things that are often confused:

  • Normal trade-offs, where teams can argue openly and decide.
  • Deadlocks, where arguments are a way to postpone deciding.

Later in the series, I will go into how deadlocks form, how they hide, and how teams can dismantle them without triggering the immune response of the organization. For now, the goal is simpler.

To help you see the trap while you are still inside it.

Footnotes

  1. Bateson, G., Jackson, D. D., Haley, J., & Weakland, J. (1956). Toward a theory of schizophrenia. Behavioral Science, 1(4), 251–264. Introduces the “double bind” structure and its conditions.

  2. Project working notes: development of the corporate superstitions concept (Desarrollo concepto supersticiones corporativas). Core taxonomy of corporate superstition archetypes and the deadlock spiral framing.

  3. National Institute of Standards and Technology. Human-centered design overview and key principles, including iterative evaluation and explicit understanding of users.

  4. Agile Alliance. “Agile Q&A: What is a Self-Organizing, Cross-Functional Team?” Background and interpretation of self-organizing teams in practice.

  5. Goodhart’s law: “When a measure becomes a target, it ceases to be a good measure.” Overview and background of the metrics-as-target failure mode. (Wikipedia)

  6. UK Government Service Manual. “Iterate and improve frequently” and the idea that services are never finished.

  7. Argyris, C., Putnam, R., & Smith, D. M. (1985). Action Science. Model I reasoning and the pattern of making undiscussables undiscussable, a core mechanism behind defensive routines.

  8. Schein, E. H. (2010). Organizational Culture and Leadership (4th ed.). Three levels of culture, including the basic underlying assumptions that drive behavior.