Let’s begin with a kitchen disaster. You know the meme. A well-meaning soul tries to make a peanut butter and jelly sandwich by following a child’s written instructions. Literally. The result? A sticky mess of bread, spread, and crushed expectations.
Why? Because the instructions, clear in the child’s head, skipped the obvious: open the jar, use a knife, define what a finished sandwich looks like. Adults fill those gaps with common sense. Usually.
It’s funny… until you realize how often we do the same at work. Leaders hand out vague tasks, lob “work packages” over the fence, and hope someone divines the intent. Then we’re surprised when the outcome looks nothing like what we imagined.
Our New Digital Sous-Chefs
Enter Large Language Models: our digital sous-chefs. They are the ultimate literalists. They don’t guess. They don’t infer. They don’t bring years of sandwich-making wisdom. They just give you exactly what you asked for, which is often not what you wanted.
Feed them a half-formed dataset and you’ll get a half-formed output… or worse. Forget to mention “unscrew the jar lid,” and you’ll be staring at a useless recipe.
The hidden value? Working with AI forces us to become precise. Prompting is practice in clarity. If you can get a good result from a machine that only follows your words, you’ve sharpened the very skills collaboration depends on.
The Recipe for Success
Making a sandwich (and making collaboration work) comes down to three ingredients.
The Meal Plan: Clarity of Outcome
Before you touch the bread, you need a vision of the finished sandwich. The same with AI. Are you after a bulleted summary, a persuasive email, or a limerick about corporate synergy? Each requires a different recipe. With humans, the stakes are higher. “Could you look into that report?” is not an instruction. Do you need a three-point board summary or a 50-page analysis? The difference is weeks of work. I once asked for a market report and got thirty pages on the history of paper manufacturing. The fault wasn’t the analyst’s. It was mine.
The Pantry: Supplying the Inputs
No bread, no sandwich. No data, no output. AI only works with the ingredients you provide. If you leave out context, constraints, or examples, the result will be bland… or made up. The same applies to teams. I once asked a designer for “some graphics” for a website. Without brand guidelines or audience details, I got neon artwork perfect for a nightclub. Not so much for a corporate homepage. The bruise was mine to carry.
The Taste Test: Defining Success
How do you know the sandwich is ready? For AI, it’s success markers: word count, tone, references. For people, it’s the same. Without markers, you end up in the endless loop of “Is this good enough?”… which usually means no. Clear completion criteria cut the cycle. They let people self-assess and move on.
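The three ingredients can even be sketched as a checklist. Below is a minimal illustration in Python, with a hypothetical `build_brief` helper (the name and structure are mine, not a real library). The point it makes: a briefing, whether an LLM prompt or a task handed to a colleague, should fail fast when an ingredient is missing instead of producing a bland surprise later.

```python
def build_brief(outcome, context, success_markers):
    """Assemble a briefing from the three ingredients:
    the meal plan (outcome), the pantry (context),
    and the taste test (success markers)."""
    if not outcome or not context or not success_markers:
        # Refuse to ship a half-formed briefing.
        raise ValueError("A briefing needs outcome, context, and success markers.")
    lines = [f"Outcome: {outcome}", "Context:"]
    lines += [f"- {item}" for item in context]
    lines.append("Done when:")
    lines += [f"- {marker}" for marker in success_markers]
    return "\n".join(lines)

brief = build_brief(
    outcome="A three-point summary of Q3 sales for the board",
    context=[
        "Audience: non-technical board members",
        "Source: the attached Q3 sales report",
    ],
    success_markers=[
        "Exactly three bullet points",
        "Under 100 words",
        "Neutral tone",
    ],
)
print(brief)
```

The checklist itself is the technique; the code merely makes the gaps visible. Leave the pantry empty and the function refuses to cook, which is exactly the behavior you want from yourself before delegating.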
Stop Briefing Like a Fortune Cookie
So how do you actually get better at this? It’s not about writing “longer” documents or adding more words. It’s about asking sharper questions, ones that clarify what you actually expect, before you hand off a task. Here are four that change the quality of collaboration immediately:
If someone had zero context, what would they need to get this right? This strips out the lazy “they know what I mean” assumption. Often, they don’t.
What will I recognize in the result that proves it’s exactly what I wanted? Not a feeling, not “close enough.” Something you could point to and say: Yes, that’s it.
Where am I making an assumption I haven’t spoken out loud? Every missed step hides here. Name what is obvious to you, but invisible to others. It might be the hardest of the four questions; consider having someone interview you about your intentions.
Do I want a polished deliverable, or just an early draft to react to? Half the wasted effort comes from this unasked question. Decide before you delegate.
These are not questions for the other person. They are questions for you, before you speak.
The Fast-Food Fallacy
Of course, some say AI will make us lazy and uncritical. That’s possible, if you treat it like a vending machine. But conscious use does the opposite. Human-machine collaboration demands thinking clearly, defining intent, and judging outcomes. It’s an iterative loop: ask, check, refine, repeat. Done right, it strengthens analytical skills.
Machines are teaching us what we should have mastered long ago: being precise in what we expect and want to get done. We cannot blame machines when the results aren’t what we want. They force us to be clear, because they only give us what we ask for.
And that’s the deeper point: blame is a dead end. Whether with AI or humans, blaming the tool or the team doesn’t fix outcomes. Clear briefings do. Precise inputs do. Success markers do. Instead of blaming, our job is to give the right impulses that are meaningful enough for others to act on with confidence.
If you wouldn’t give an AI your vague half-sentence and expect magic, why on earth do you do it with your colleagues? AI won’t tolerate that, and your colleagues shouldn’t either.