There was a time when “doing the work” meant going to the work.

You’d disappear into a university library for weeks, shuffle between stacks, chase footnotes like a detective with a caffeine dependency, and slowly build a literature review the way people build IKEA furniture: methodically, but with a recurring suspicion that one important screw is missing.

You earned clarity through friction.

Now you can compress a first pass of that same process into an afternoon. Sometimes a few hours. The machine will surface themes, outline debates, propose a structure, and serve you a confident synthesis… often with citations attached like garnish.

It feels like magic. It’s not.

It’s a shift in where the difficulty lives.

AI moved the baseline.


The floor just moved

We have decent evidence that the speed and “first-draft quality” gains are real. In a preregistered experiment on mid-level professional writing tasks, people using ChatGPT finished up to 40% faster, while independent evaluators rated their outputs up to 18% higher on quality.

And in a large field study of a generative AI assistant deployed to thousands of customer-support agents, productivity increased by up to 14%, with the biggest lift going to less experienced workers.

That’s the sentence that matters: the biggest lift goes to the bottom and the middle.

AI raises the floor. The acceptable draft. The competent summary. The “not embarrassing” email. The passable analysis. Last year’s definition of “good enough” is no longer good enough. You fall behind not because everybody else suddenly got brilliant, but because mediocrity got easier.


The University Case: the old bottleneck was retrieval

Let’s start with the university example, because it’s obvious.

A literature review used to be a structured grind: identify the canonical papers, track citations forward and backward, map competing schools of thought, and build an argument that could survive a supervisor who has seen this movie before. It wasn’t “hard” because it required mystical talent. It was hard because it required time… and time forced discipline.

The library had an underrated feature: it made you selective. You could not read everything, so you had to choose. You learned what mattered by bumping into what didn’t. You learned that half the field is repetition, a quarter is distraction, and the remaining quarter is where the disagreement lives. That slow exposure was the training.

Now the machine can do retrieval and first synthesis at warp speed. You can get broad coverage, fast pattern recognition, and a draft narrative with impressive fluency. Great.

But this does not make the task easier. It changes the task.

Herbert Simon nailed this decades ago: in an information-rich world, what becomes scarce is attention. Information consumes the attention of its recipients, and abundance creates a poverty of attention.

The new bottleneck is no longer “finding sources.” It’s deciding what to trust, what to ignore, what to verify, and what to build… under conditions of overload.

Information overload isn’t a modern complaint invented by people who check too many channels too often. It’s a studied phenomenon with predictable effects: once information exceeds processing capacity, cue utilization gets worse and decision quality suffers. Reviews of the literature repeatedly land on the same point: more information does not reliably mean better decisions.

Which means the craft moves. The prestige skill becomes filtering. The art is to move past filtering as personal preference (“I like that source”) and toward filtering as defensible judgment (“Here is why this evidence is stronger, and here is what would weaken my conclusion”).

In other words, the new academic competence is not producing a pile of text but a position with traceable reasoning.

The assignment stayed the same length. The deadline stayed the same. The expected rigor didn’t “scale down” because tools improved. So the only thing that changed is where you can spend your time. That’s the opportunity… and it’s also the threat.

Exactly the same thing is happening in strategy work: retrieval is cheap, but judgment is not.


The Work Case: the old bottleneck was production

Organizations are going through the same shift. A first draft used to be expensive. It cost hours, attention, coordination, and usually three rounds of feedback that could have been avoided if someone had talked about the elephant in the room at the beginning.

That friction had an upside: it forced prioritization. If a memo took a week, you didn’t write ten. If a deck was painful, you thought twice before building it. If analysis required a human, you argued about whether the question was worth asking.

Now first drafts are cheap. Everyone can produce a decent deck, a decent strategy note, a decent customer email, a decent market scan… fast. Which sounds like a productivity miracle, until you realize what it does to the definition of “good.”

When the draft becomes abundant, the differentiator becomes everything that comes after the draft: judgment, verification, synthesis, and decision quality. You don’t win because you have output. Everyone has output. You win because your output is anchored in reality, connected to a decision, and structured so others can take action.

AI made production cheaper. It did not make judgment cheaper.


More output is not the same as more value

The optimistic story says, “Great, now we can do more.”

The realistic story says, “Yes, and now we’ll expect more.”

This is where teams get trapped. The machine accelerates the front end, and the organization uses the saved time to produce additional artifacts. The output looks polished, the volume feels like motion, but the decision quality stays flat.

Worse: fluent output increases the risk that people stop checking. Human factors researchers have been writing about this for years in the language of automation “misuse” and monitoring failures. Humans tend to over-rely on decision aids, especially when they are busy, under load, or juggling multiple tasks.

Automation bias shows up in two ugly forms: omission errors (you miss something because the tool didn’t flag it) and commission errors (you accept something because the tool suggested it). And it’s not limited to juniors; it has been observed in expert users as well.

This translates to: the tool will be wrong sometimes, and your organization will still ship the output… especially when you are moving fast.

If you want a case study that doesn’t require theory, there’s the legal one everybody references now: Mata v. Avianca (June 2023), where attorneys submitted filings with fictitious cases generated by ChatGPT and were sanctioned.

Speed is becoming less impressive.

When everyone has access to the same accelerant, speed gets commoditized. “I produced a draft quickly” is now the default setting and no longer a signal of competence. AI is not raising the ceiling first. It’s raising the floor.

And once the floor rises, the room looks different. People who used to be “strong performers” because they could produce tidy outputs on time find themselves competing with tools that can produce tidy outputs instantly. Meanwhile, the people with judgment (the ones who can sense weak evidence, name trade-offs, and hold a narrative) become the scarce asset.

This is why the conversation about AI “skills” is often misguided. The relevant skill is knowing what to ask, what to doubt, and what to decide.


The upgrades leaders need

If you’re leading in an AI-saturated environment, your job is to upgrade four things.

First: question quality becomes a core competency.

Not “prompting.” Question quality. Hypotheses. Constraints. Intent. The people who win are not the ones who “use AI.” Everyone uses AI. The people who win are the ones who can frame the work so that the tool accelerates something meaningful rather than generating decorative text. You can see this in how organizations drift: if the question is vague, the answer becomes plausible nonsense. If the question is sharp, the answer becomes a useful starting point. Leadership requires the discipline of making questions sharp enough that the organization can move without hallucinating a direction.

Second: verification becomes non-negotiable.

If you cannot trace a claim, you do not have a claim. You have a story. The AI era forces a return to provenance. The boring stuff: sources, assumptions, boundaries, confidence scores, and so on. You don’t need an “AI policy” for this. You need a working standard that makes verification the norm. There’s a practical reason this matters beyond principle: research suggests that accountability can reduce automation bias. When people are held accountable for accuracy or overall performance, they tend to monitor decision aids more carefully and commit fewer automation-bias errors. The goal is to design expectations so that verification is part of competent work.

Third: filtering becomes the new productivity.

Your team can generate 50 options now. Congratulations. That is not strategic capability. The valuable skill is the ability to kill weak options early, to keep criteria explicit, and to preserve coherence under abundance. This is the unglamorous work that makes execution possible: reducing options without reducing ambition.

Fourth: synthesis beats volume.

Leaders are paid to convert complexity into direction. AI will happily hand you five plausible plans. The challenge is choosing one, naming trade-offs, and being able to explain why you didn’t choose the other four. This is where “good enough” becomes visible. In the AI era, you can have a beautiful narrative that is structurally empty. The antidote is synthesis anchored to decisions: what are we doing, what are we not doing, and what would make us change course?


A leadership move that doesn’t require a task force

Use AI to accelerate the draft. Use humans to protect integrity.

That sounds obvious, but it has teeth when you operationalize it. The problem isn’t that teams use AI. The problem is that teams ship AI-shaped output without a human-grade standard for evidence, trade-offs, and decision relevance.

So instead of writing a 12-page “responsible AI usage guideline” that nobody reads, raise the bar in the workflow: Make it normal that AI-assisted output includes where it came from (inputs and sources), what it assumes, what would break it, and what decision it serves. It sounds like a bureaucratic add-on, but it bridges the gap between “content” and “value.” And yes, this will feel strict to some people. Good. Strict is what sets standards of excellence.

The old game rewarded effort. The new game rewards judgment.

The old game had friction that forced prioritization. The new game removes friction and exposes who actually has standards.

So yes: AI gives you time back. And then it asks with a straight face: What are you going to do with it?

If your answer is “more output,” you’ll get exactly that… more output and more noise wrapped in perfectly formatted paragraphs. If your answer is “better thinking,” then AI becomes what it should have been all along.

What bar are you setting before the machine sets it for you?

Because here’s the final, uncomfortable line: AI didn’t make you faster. It made your old definition of “good enough” embarrassing.
