The AI Overwhelm: Inputs, Expectations, and the Burnout Spiral


If you work anywhere near knowledge work right now, you’ve felt it: AI has turned the input firehose to full blast. Content is easier to create than ever. Expectations are higher than ever. But the one thing that hasn’t magically changed? Our throughput, the human capacity to focus, verify, and land work that actually matters.


Over the last few weeks, I noticed a weird kind of AI overwhelm. Not “I can’t learn this tool” overwhelm. More like “everyone expects better/faster/more… but no one can tell me what ‘done’ looks like” overwhelm. It’s the pressure of a helicopter coach who knows a lot of drills, yells “harder, faster,” and forgets to explain how we’ll win the game.


We’ve seen versions of this before. After Enron and friends, Sarbanes-Oxley didn’t just ask for better reports; it demanded verifiable process. “Park and post” in accounting meant two sets of eyes before anything hit the ledger. Pair programming tried to do the same in software. Two humans. Two brains. One shared definition of “good.”


Now we’ve swapped the second set of eyes for a chatbot and started treating confident output as correct output. The editorial muscle — the painful but necessary verify step — quietly slips out the back door.


Here’s the paradox: AI massively increases inputs and raises output expectations, but it doesn’t auto-install alignment. In fact, it can bury misalignment under a mountain of polished words, pretty slides, and passable code. Stakeholders skim, nod, and still feel off, because what’s being produced doesn’t match what they actually needed (which, by the way, they never wrote down).


Another way to say it: most knowledge work is a push system. Requests get lobbed at you from every direction — boss, peers, clients, tools — and AI just makes it cheaper to lob more. Great pull systems (think good manufacturing lines) limit work-in-progress and pull the next job only when capacity and clarity exist. Knowledge work rarely does that by default.
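
If it helps to see the pull idea in something concrete, here is a minimal, purely illustrative Python sketch: an in-memory backlog where work is pulled only when there is spare capacity and the task has a written definition of done. The WIP limit, the Task fields, and the example tasks are all assumptions made up for this sketch, not a real tool or anyone's actual process.

from collections import deque
from dataclasses import dataclass

WIP_LIMIT = 3  # assumed capacity: what you can actually finish this week

@dataclass
class Task:
    name: str
    definition_of_done: str  # empty string means "not clear enough to start"

backlog = deque([
    Task("Q3 brief", "Two pages, one decision, three options ranked"),
    Task("Pricing deck", ""),  # no definition of done yet, so it waits
    Task("Churn analysis", "One chart, one recommendation"),
])
in_progress = []

def pull_next():
    """Pull work only when capacity AND clarity exist; otherwise pull nothing."""
    if len(in_progress) >= WIP_LIMIT:
        return None  # at capacity: land something before starting anything new
    for task in list(backlog):
        if task.definition_of_done:  # the clarity check happens before work begins
            backlog.remove(task)
            in_progress.append(task)
            return task
    return None  # nothing in the backlog is clear enough to start

print(pull_next())  # pulls "Q3 brief"; "Pricing deck" stays put until someone defines it

The interesting part is what the function refuses to do: it never starts unclear work and never starts anything once you are at capacity, which is exactly the behavior most inboxes and chat tools lack.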


So what do we do besides complain?

  1. Define the output before you touch the tools. A one-line “Definition of Done” saves hours of AI thrash. “Two-page brief, one decision, three options ranked, due Wednesday” beats “make a deck.” If your stakeholders won’t define it, draft it yourself and get a yes/no on that.

  2. Add a verify step, even if it’s just you. Editors exist for a reason. Before you ship, run a short checklist: “Does this answer the question asked? Did I cite or show my sources? Would I sign my name to this if it were wrong?” AI can draft; you are accountable.

  3. Cap WIP like a pro. Limit concurrent “AI-enabled” work to the number of things you can actually finish this week. Pull the next thing only after you land the current one. Yes, you’ll say no more often. That’s the point.

  4. Separate artifacts from outcomes. Slides, posts, and code are artifacts. Decisions, shipped features, and customer behavior change are outcomes. Don’t celebrate artifacts. Ask, “What outcome does this artifact unlock?” If the answer is vague, stop.

  5. Create a tiny ritual that proves it worked. Prescriptive → Ritual → Report. If your ritual doesn’t produce a tiny proof (a “report inside the ritual”) that you can check in 30 seconds, it’s theater. Example: “Every Friday, we ship one customer-visible improvement and tag it in the changelog.” Easy to verify. Hard to fake.
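
For the code-inclined, here is one way that Friday example’s “report inside the ritual” could look: a rough Python sketch that assumes a dated changelog file and a [customer-visible] tag. The file name, the tag, and the entry format are all invented for illustration; the point is only that the proof is a 30-second check, not a status meeting.

from datetime import date, timedelta
from pathlib import Path

CHANGELOG = Path("CHANGELOG.md")   # assumed location of the team's changelog
TAG = "[customer-visible]"         # assumed tag marking the weekly improvement

def shipped_this_week():
    """True if an entry from this week carries the tag; that's the whole proof."""
    monday = date.today() - timedelta(days=date.today().weekday())
    if not CHANGELOG.exists():
        return False
    for line in CHANGELOG.read_text().splitlines():
        # assumed entry format: "2025-05-16 [customer-visible] Faster CSV export"
        parts = line.split(maxsplit=1)
        if not parts:
            continue
        try:
            entry_date = date.fromisoformat(parts[0])
        except ValueError:
            continue
        if entry_date >= monday and TAG in line:
            return True
    return False

if __name__ == "__main__":
    print("Ritual held: something shipped." if shipped_this_week()
          else "No tagged entry this week. The ritual failed, loudly.")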


Here’s a simple litmus test for the week: if AI helped you go faster, did you also get clearer about where you were headed and how you’d know you arrived? If not, you just built a faster treadmill.


Process Debt Truth: When inputs explode and verification disappears, misalignment compounds quietly — until the burnout shows up loudly.



