An interesting observation in organizations is that we often struggle to pin down the root cause of a problem, and so we end up fixing things that don’t need fixing.

We call it “execution failure” and move on. Which is convenient, because “execution” is a label that can mean anything and therefore explains nothing.

Whenever I’m out sailing, I see similar things happen. On a calm day, a crew looks sharp: clean tacks, tidy lines, confident timing. Then the sea gets rougher. The same people suddenly “forget” what they did an hour ago. Someone pulls the wrong rope first. Someone hesitates mid-maneuver. Not because they became incompetent in the last ten minutes, but because competence was calibrated to familiar conditions. The context changed, and what looked like mastery turns out to be early-stage competence that doesn’t yet transfer under novelty.

And sometimes it’s not the crew at all. A worn block jams. A winch slips. The boat itself can’t safely take the load. You can coach the crew all day, but the environment will still cap performance.

And then there’s the third thing you only notice in heavier weather: people can know what to do, have a decent boat, and still not commit in the moment because the perceived cost of making the wrong move is too high. It’s a system of consequences shaping whether action initiates and sustains.

Performance depends on three gates: competence, environment, activation.

Miss one, performance drops.

Miss two, performance collapses.

Miss all three, well…


Why “execution failure” keeps producing wrong prescriptions

When something doesn’t work, organizations reach for the levers they like.

Training feels responsible. Process change feels decisive. “People don’t care” feels satisfying.

But if you pull the wrong lever, you create secondary damage: cynicism, fatigue, learned helplessness, and a workforce that stops believing you will diagnose before you prescribe. The central issue is that leaders often choose the intervention before they understand the cause.


Gate 1: Competence

Competence is the reliable delivery of outcomes, including when the situation is unfamiliar.

This is why the oldest assessment distinction still matters: there’s a difference between knowing and doing. In medical education, George E. Miller made it famous with a simple progression (knows → knows how → shows how → does). The important point is the last step: real-world conditions change the game. If you only measure “knows how,” don’t act surprised when “does” fails.

And the “rough sea” moment has a name: transfer. Applying learning in a new context is not automatic. Research reviews on transfer (Barnett & Ceci are a good reference point) emphasize how dependent transfer is on similarity of context, cues, and practice. In organizations we routinely assume far transfer while funding only near-transfer training.

So the competence diagnostic is specific:

  • Can you deliver in a clean setting?
  • Can you deliver when the case is unfamiliar, inputs are imperfect, and timing matters?
  • Can you explain why the steps change when conditions change?

If not: you don’t have an execution problem. You have a competence-development problem (often a transfer problem) and the fix is practice design, scaffolding, feedback, and progression.


Gate 2: Environment

Environment is everything that makes delivery possible or impossible: time, tooling, decision rights, data quality, coordination load, handoffs, incentive design, and the policies that govern what happens when things go wrong.

This is basic psychology and organizational science: outcomes emerge from interaction between person and environment. Kurt Lewin’s framing is the classic shorthand for that insight.

Modern research tightens the screw with the idea of “situational strength”: in high-constraint contexts, individual differences matter less because the situation dictates behavior. Translation: you can hire better people and still get the same output if the system is a straitjacket.

The environment diagnostic is equally concrete:

  • Do you have authority aligned to accountability?
  • Do you have access to the inputs required to deliver (data, tools, permissions)?
  • Are handoffs designed for flow or designed for blame distribution?
  • Do incentives reward outcomes or reward risk avoidance?

If the environment blocks delivery, developing competence achieves only one thing: cost. You don’t become a captain by asking your crew to row a leaking boat faster. You need to fix the boat.

The system can make competence irrelevant… fast.


Gate 3: Activation

Activation is not “being motivated.” It’s the real-time willingness and capacity to initiate and sustain action inside a specific set of consequences, motivated or not.

People activate when:

  • the goal is clear enough to act,
  • autonomy exists (and is not performative),
  • the cost of trying is tolerable,
  • feedback loops make progress visible,
  • leadership doesn’t punish initiative the moment uncertainty appears.

People don’t activate when the organization trains them to wait. Or when the downside of action is personal and the upside is collective. Or when organizational stress turns initiative into self-harm.

This is why changing activation conditions is often more effective than anything else.

But activation is not “the missing third ingredient that makes all problems vanish once you add it.” It’s just one of the gates that might be failing.

Sometimes activation is the main issue. Sometimes it’s a symptom: competent people in a hostile environment eventually stop initiating action because they learned it’s unsafe.

You need to diagnose the pattern… not the person

The root cause can be:

  1. Pure competence gap (they can’t yet deliver under novelty)
  2. Pure environment constraint (the system blocks delivery)
  3. Pure activation failure (consequences or stress inhibit initiation/sustainment)
  4. Any combination, including the common ones:
       • competence exists, the environment kills it, activation fades
       • the environment is fine, competence is partial, activation overcompensates until it burns out
       • competence is high, activation is high, the environment is chaotic

“Execution failure” doesn’t tell you which pattern you’re in, and that’s why you need to work on diagnostics.


A practical diagnostic

If performance drops, don’t start with a solution. Start with three questions:

  1. Can you deliver in a clean setting, and can you transfer under novelty? If not, design practice for the unfamiliar case. Don’t punish yourself and the people around you for being exactly as trained.
  2. If you put a genuinely competent person into this system, does the system still slow or block delivery? If yes, fix the operating system first. Training people to navigate broken handoffs amounts to outsourcing responsibility.
  3. What is the perceived cost of initiating and sustaining action here? If the cost is high, don’t demand courage. Change the consequences and feedback loops so that action becomes rational.
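The three questions above can be sketched as a tiny decision aid. This is purely illustrative: the function name, gate parameters, and lever descriptions are my own shorthand for the article’s logic, not a formal model.

```python
# A minimal sketch of the three-gate diagnostic, assuming each gate check
# can be answered yes/no. Real diagnosis is messier; this only encodes the
# order of questions and the "diagnose before you prescribe" rule.

def diagnose(competent_under_novelty: bool,
             environment_permits_delivery: bool,
             action_cost_tolerable: bool) -> list[str]:
    """Map the three gate checks to candidate levers, one per failing gate."""
    levers = []
    if not competent_under_novelty:
        levers.append("design practice for transfer (scaffolding, feedback, progression)")
    if not environment_permits_delivery:
        levers.append("fix the operating system (authority, inputs, handoffs, incentives)")
    if not action_cost_tolerable:
        levers.append("change consequences and feedback so action becomes rational")
    # No failing gate: the drop is coming from somewhere else entirely.
    return levers or ["no gate failing: look elsewhere before prescribing"]

# Example: competent people, blocked system, rational hesitation
print(diagnose(True, False, False))
```

Note that the function can return one, two, or three levers, which mirrors the point below: sometimes you pull one lever, sometimes all of them, but you know why.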

Then (and only then) choose the lever to work with. Sometimes you’ll pull one. Sometimes two. Occasionally all three. The win is that you’ll know why.

Performance fails way too often because diagnosis is lazy. You need to treat performance like a system of gates (competence, environment, activation) and identify the failing gate(s) before you prescribe a fix. The wrong prescription can look productive for months, right up until the moment your best people stop trying.
