
I’m writing to you from a future that would probably disappoint you, and, if I’m being honest, one that would likely fascinate you even more.
Not because the machines rose up. They didn’t (well, not yet).
Not because humanity was conquered. We weren’t (well, not yet).
But because the moment you warned us about – the subtle one, the quiet one – arrived exactly as you feared: not with violence, but with comfort.
We didn’t lose our humanity in a war with intelligent machines.
We misplaced it while automating our conveniences, our decisions, and eventually our sense of purpose.
You once wrote that the real danger of technology wasn’t malice, but indifference. That prediction aged far better than most of our trusted institutions.
Here’s the current situation:
By around 2026, machines hadn’t become conscious, but they had become competent enough. Competent enough to draft, summarize, plan, analyze, recommend, route, simulate, optimize, and, to a certain extent, to clone. Competent enough to absorb the first drafts of thought – the cognitive grunt work we had mistaken for intelligence itself.
And that’s when the trouble started.
You see, Isaac, for two centuries we told ourselves a story…
That work equals worth.
That productivity is proof of value.
That if a person contributes economically, they deserve dignity, and if they don’t, something must be wrong with them.
It was a fragile story even when it worked.
AI didn’t break it. It exposed it.
What disappeared first weren’t jobs, but illusions…
Illusions that “knowledge work” was sacred.
Illusions that white-collar work made labor immune to calamitous decisions.
Illusions that intelligence was rare.
We learned, possibly too late, that intelligence was perhaps never the scarce resource. Judgment was. Responsibility was. Care was. Empathy was.
Machines could generate answers faster than we could ask questions. They could optimize workflows without understanding why those workflows existed in the first place. They could recommend actions without owning the consequences of being wrong.
And because they were efficient, we let them. Most didn’t question them. Most became bobbleheads, nodding in agreement with the output.
That’s the part I think would trouble you most – not that we built powerful systems, but that we were so eager to hand them the parts of ourselves that required moral weight.
We began saying things like “the model decided” or “the system flagged it” or “the algorithm recommended” as if language itself could absolve us from accountability.
You warned us about this. Not with fear, but with structure. Your Three Laws of Robotics weren’t really about robots. They were about humans trying to offload responsibility while pretending they hadn’t.
By the time we noticed what was happening, the problem wasn’t that machines were making decisions.
It was that humans had stopped practicing judgment.
Leadership got lighter on paper and heavier in reality. Intelligence became cheap, abundant, and unimpressive. What became rare was the willingness to say no. To slow things down. To choose restraint in a world that rewarded boundless activity.
And here’s the irony I wish I could say we appreciated sooner: The more capable our machines became, the more human our responsibilities grew.
But since we didn’t see this happening, we weren’t ready to make that trade.
People didn’t panic because they lost jobs. They panicked because they lost legitimacy. When your role evaporates or, even worse, hollows out, you don’t just ask, “How will I pay my bills?” You ask, “What am I for?”
We had no answer prepared.
We had trained people to be workers.
We had not trained them to be humans with agency in a machine-rich world.
So resentment filled the vacuum. Nostalgia for the good ol’ days. Rage against the machines. Denial that it was even happening. A desperate clinging to the idea that if we just banned, paused, or regulated the technology, we could rewind the story. But time, as you knew well, doesn’t reverse for comfort’s sake.
The truth is this: AI didn’t make us obsolete. It revealed how much of our identity was outsourced to economic necessity. It showed us that much of what we called “purpose” was just employment with better branding.
What survived automation were not skills, but relationships. Not outputs, but presence.
Not optimization, but caring and empathy.
Parenting. Teaching. Mentorship. Stewardship. Ethical judgment under uncertainty. These things stubbornly refused to scale. And for the first time in a long while, we had to admit that maybe scale was never the point.
The most human act in the era that began in 2026 turned out not to be creation but restraint. Choosing not to automate something simply because we could. Choosing slowness where speed erased wisdom. Choosing to keep humans in the loop not for efficiency but for accountability.
Civilizations don’t collapse because they lack power; they collapse because they lose the ability to govern it ethically. You tried to tell us that in fiction because reality wasn’t ready to hear it.
Despite the unbridled enthusiasm of investors and innovators, we are trying to hear it now.
I won’t pretend we’ve solved it. We haven’t.
The next decades will be uneven, volatile, and uncomfortable. We will make mistakes. We are already making mistakes. But there is a growing realization, quiet but percolating like the smell of coffee in the early morning, that humanity isn’t defined by what we can produce faster than machines.
It’s defined by what we refuse to delegate.
Meaning. Responsibility. Care. Judgment. The willingness to carry the cost of being wrong.
Those things don’t scale.
They never did.
And now, finally, many are beginning to understand why that matters.
If you could see us now, I think you’d be disappointed in how long it took but not surprised by the structure of the lesson. You knew the danger was never the machines becoming like us.
It was us forgetting what we were supposed to be.
With respect, and a bit of overdue humility,
Steve
