
The Ones, the Zeros, and the Humans Stuck in the Middle

Updated: Nov 29



If you’ve worked in tech long enough, you eventually end up back at the beginning: staring at a screen, realizing everything you’re doing still collapses down into ones and zeros. It’s wild. We live in a world of gorgeous apps, predictive engines, and “intelligent” assistants, yet beneath all of it is the same binary heartbeat that powered the punch-card era.

And that’s where this whole question of AI starts to get interesting.


When Abstraction Works and When It Breaks

Think about the layers that got us here. DOS → Windows → web browsers → WordPress → mobile apps → cloud infrastructure. Every one of those is an abstraction layered on top of another. And those abstractions work because they hold their shape all the way down. A webpage renders the same way whether you’re on a MacBook or a knockoff tablet in an airport lounge because underneath it all is code that compiles cleanly into ones and zeros.

That’s the magic: predictability.

Now enter LLMs — these giant statistical parrots that look at trillions of words and try to guess the next one. They don’t “compile” to anything. There’s no deterministic chain of reasoning you can follow from your prompt to the output. It’s probability wrapped in randomness with a sprinkle of your personal writing style mixed in.

Which is why the same email prompt never produces the same email twice.
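You can see the whole contrast in a few lines of Python. The first function is a classic abstraction: same input, same output, forever. The second is a toy next-token sampler — the vocabulary and probabilities here are made up purely for illustration, nothing like a real model — that draws from a distribution instead of computing a fixed answer:

```python
import random

# A deterministic abstraction: same input, same output, every time.
def to_binary(n: int) -> str:
    return bin(n)[2:]

# A toy "next token" sampler: a weighted random pick over a made-up
# vocabulary. Same prompt in, a potentially different word out each run.
def next_token(rng: random.Random) -> str:
    vocab = ["Hi", "Hello", "Hey", "Dear"]
    probs = [0.4, 0.3, 0.2, 0.1]  # illustrative, not real model weights
    return rng.choices(vocab, weights=probs, k=1)[0]

print(to_binary(42))                # always "101010"
print(next_token(random.Random()))  # "Hi" today, maybe "Dear" tomorrow
```

That second function is the abstraction leak in miniature: run it twice and you may get two different greetings, which is exactly why the same email prompt drifts between drafts.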


And that’s where the analogy snaps into focus: LLMs break the abstraction chain. There’s no clean line from input to output, which means you can’t treat them the way we treat every other technology in the stack.

This is not ones and zeros anymore. It’s opinions and guesses.


Why That Matters for Real Work

If you’ve ever run big enterprise systems — banks, ERPs, compliance workflows — you know this one truth: unpredictability is the enemy. A system that gives you a different answer on Tuesday than it gave you on Monday is not a system. It’s a suggestion engine.


And suggestions don’t close the books.


That’s why no CFO is handing their general ledger over to a model that “thinks” Chris might have $47 or $4 million depending on its mood that day. That’s why every AI-generated email still needs a human in the loop. That’s why note-taking bots flood your inbox with noise instead of signal.

The abstraction leaks. And where abstraction leaks, humans get pulled into the middle.


We don’t like to admit it, but humans are the glue. In factories, on the McDonald’s fry line, on the chip manufacturing floor — the machines automate, but the humans stabilize. They course-correct. They interpret. They fix the gaps that couldn’t be abstracted perfectly.


Knowledge work is even messier because so much of what we “know” lives in muscle memory, context, and intuition. Ask someone to describe their role, and half the time they walk you through clicks instead of logic. They know the work by doing it. Not by naming it.


Which is exactly why AI feels both magical and maddening: we want the output of automation, but we rarely perform the input of consistency.


The Human Middle Layer

So here’s the quiet truth buried inside the ones-and-zeros conversation:

As long as systems produce unpredictable outcomes, humans will remain the abstraction layer that keeps organizations functioning.

It’s not that AI isn’t powerful. It’s that abstraction — true abstraction — requires determinism. And language isn’t deterministic. Humans aren’t deterministic. Organizations definitely aren’t deterministic.

We’re all improvising, all the time.


If you’re going to be stuck in the middle — between the ones and zeros, between the systems and the outcomes — at least make that middle manageable.


Process Debt Truth: Automation can replace tasks. But only good process can replace uncertainty.
