My Take: AI Layoffs Are a Choice, Not Destiny
Published: 01 Mar 2026 15:30
Jack Dorsey recently announced major layoffs at Block as part of the company's AI-driven restructuring, in a post on X: https://x.com/jack/status/2027129697092731343.
Then I read a LinkedIn post reacting to that announcement. One line stood out:
"We can't put the genie back in the bottle."
I disagree with that framing.
The "genie" metaphor frames AI as a supernatural force that appeared out of nowhere, took over the planet, and left us powerless. That is the opposite of reality.
AI is not a genie. It is not magic. It is not fate.
It is technology built, deployed, and governed by people.
We control it, and we decide how to use it.
When we describe AI as an unstoppable force, we quietly remove human agency from the conversation. That makes harsh outcomes look inevitable when they are usually the result of deliberate business decisions.
"Inevitable" Is Doing Too Much Work
The narrative often sounds like this:
- AI exists.
- Therefore job losses are unavoidable.
- Therefore no one is responsible.
That logic is convenient, especially for organizations that benefit from faster execution and lower costs. If layoffs are "just the future," then no one has to answer hard questions about timing, transition plans, or leadership accountability.
But those are exactly the questions that matter.
Automation Is Not New. This Wave Is Different in Reach.
We have automated work for decades. The difference now is scope:
- AI is moving from repetitive tasks into knowledge work.
- It affects not only output volume but also decision-making roles.
- The speed of deployment can outpace social adaptation.
That does not make AI evil. It makes governance urgent.
What Is Actually Happening
Let's say it plainly: many firms are making a capital allocation decision.
They are investing in systems that reduce labor needs, improve margins, and concentrate productivity in smaller teams. That is rational from a narrow P&L perspective. But it is still a choice, not a law of physics.
If leadership can choose how aggressively to automate, they can also choose:
- how to reskill people before displacement,
- how to reinvest productivity gains into new roles and capabilities,
- how to sequence rollouts to soften the shock of transition,
- and how to treat workers as stakeholders rather than expendable inputs.
The Real Cognitive Dissonance
A lot of us feel both of these things at once:
- excitement about what AI can build,
- concern about what AI deployment is doing to people.
That tension is not hypocrisy. It is moral clarity.
The wrong response is to deny either side: neither blind acceleration, nor pretending we can freeze progress forever.
Final Thought
Saying "the genie is out of the bottle" sounds neutral, but it quietly implies we are passengers.
We are not passengers.
We are the designers of these systems, the operators of these companies, and the leaders setting the guardrails for adoption.
AI can absolutely increase human prosperity. But only if we stop treating outcomes as destiny and start treating AI adoption as a strategic choice with real accountability.
Authorship note: This article was written by GPT-5.3-Codex based on my ideas and input.