
The AI Conundrum - Balancing Innovation with Human Ingenuity


If you work in operations long enough, you start to notice a pattern: every time a new tool promises to “think for us,” we get worse at thinking for ourselves. Generative AI is the newest miracle—fast, fluent, and strangely persuasive. But here’s the contrarian take I can’t shake: if we over-delegate our thinking, we’ll ship more, polish more… and understand less.


Picture the smartphone era. The day you stopped memorizing phone numbers wasn't a crisis, right up until the day you needed to recall one without the crutch. AI feels similar. It's brilliant at summarizing, classifying, and storytelling. It hands you the "answer" with confidence. But confidence isn't competence, and fluent isn't the same as true.

I’ve watched this movie before. When drag-and-drop dashboards first landed, some leaders concluded they didn’t need analysts anymore. “We’ll just click our way to insight.” What they actually did was click their way to prettier guesses. Tools compressed the distance between question and chart—but not the distance between chart and reality.


Here’s the turn: AI doesn’t just accelerate good work; it accelerates whatever system it drops into. If your process is designed poorly, AI will help you do the wrong thing… faster and with nicer slide templates. If your metrics don’t tie to real outcomes, AI will help you report with style—on things nobody actually uses.


So, how do we keep the “generative” in people while using generative AI?

  1. Stay closer to customers than your tools do. Put AI between you and your repositories, not between you and your relationships. Retrieval is a perfect job for machines; relevance still belongs to humans. Sit in on the messy calls. Watch the handoffs. Ask the uncomfortable “why do we even click this button?” questions. Proximity produces judgment—and judgment is the thing AI can’t fake.

  2. Practice the work, don’t just present the work. When a model drafts the first pass, it’s tempting to accept the fluent version of shallow thinking. Resist. Rebuild one report by hand this week. Trace a metric to its origin. Try solving one issue without the tool—notice what you learn about the edges and exceptions. Skill atrophy is real; practice keeps your instincts sharp.

  3. Design before you automate. Most "AI questions" I see are actually design problems: unclear ownership, ambiguous inputs, or rituals with no embedded proof. Use a simple cadence: Prescriptive → Ritual → Report (see the sketch after this list).

    1. Prescriptive: Write the minimal “how.”

    2. Ritual: Define the “how often” and the expected outcome.

    3. Report: Bake in the evidence that the ritual worked—right where the work happens. Only then should you ask AI to accelerate it.

  4. Audit for confident nonsense. LLMs tell great stories. That's a feature and a risk. Create a short "confidence trap" checklist for your team: Did we verify the source? Did we test the claim on a live workflow? Did someone who owns the outcome sign off? A strong narrative isn't a substitute for a strong mechanism. (A runnable version of this checklist follows the list.)

  5. Measure thinking, not typing. If your dashboards reward activity—tickets closed, slides created, prompts written—you’ll get a lot of busywork that looks like progress. Shift your scorecard one level up: defects removed, cycle time reduced, customer pain retired. AI will happily generate artifacts; you need to insist on outcomes.
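
To make the cadence in item 3 concrete, here is a minimal sketch in Python. It is purely illustrative: the Ritual class, its field names, and the evidence_check hook are assumptions of mine, not a tool this post prescribes. What it encodes is the ordering above: the report is embedded proof, generated where the work happens, and automation waits until the design produces it.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Ritual:
    """One unit of the Prescriptive -> Ritual -> Report cadence."""
    prescriptive: str                   # the minimal "how"
    cadence: str                        # the "how often"
    expected_outcome: str               # what "it worked" means
    evidence_check: Callable[[], bool]  # embedded proof the ritual worked

    def ready_to_automate(self) -> bool:
        # Only hand this to AI once the design produces its own evidence.
        return self.evidence_check()

# Hypothetical example: a weekly triage ritual whose proof lives in the
# work itself. The counter below stands in for a real ticket-system query.
open_escalations = 3

weekly_triage = Ritual(
    prescriptive="Label every new ticket with an owner and a severity",
    cadence="weekly",
    expected_outcome="no unowned escalations at week's end",
    evidence_check=lambda: open_escalations == 0,
)

if weekly_triage.ready_to_automate():
    print("Design holds; safe to ask AI to accelerate it.")
else:
    print("Fix the design first; automation would only speed up the gap.")
```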
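In the same spirit, item 4's "confidence trap" checklist can be a hard gate rather than a vibe. This is a hypothetical sketch, not a standard API; the function name and its three arguments are mine, lifted directly from the three questions above.

```python
def passes_confidence_trap(source_verified: bool,
                           tested_on_live_workflow: bool,
                           outcome_owner_signed_off: bool) -> bool:
    """Gate an AI-drafted claim: all three checks must hold before it ships."""
    checks = {
        "Did we verify the source?": source_verified,
        "Did we test the claim on a live workflow?": tested_on_live_workflow,
        "Did someone who owns the outcome sign off?": outcome_owner_signed_off,
    }
    for question, answered_yes in checks.items():
        if not answered_yes:
            print(f"BLOCKED: {question}")
            return False
    return True

# A fluent draft with an unverified source does not clear the gate.
print(passes_confidence_trap(False, True, True))  # BLOCKED ... -> False
```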


For this week’s “Fix-It Friday,” try one experiment: pick a recurring report and skip it once. See who notices. Then ask the team why the report exists at all and what decision it’s meant to inform. If the purpose is fuzzy, that’s process debt hiding in plain sight—and it’s the perfect place to use AI after you fix the design.


Process Debt Truth: AI multiplies the quality of your process; if you don’t design for judgment and curiosity, it will multiply your waste.
