Why AI is not “just another technology”
From stone tools to electricity, humanity has shaped the world through invention. But now we've built something that is fundamentally different.

I. More fundamental than electricity and fire
Sometime around 3.3 million years ago, an inquisitive hominid in Africa knapped the first stone tool. About 1.5 million years later, another creature, perhaps a curious and persistent Homo erectus, discovered that sparks could become fire. Thousands of years ago, the wheel was invented. And at some point, humans in China invented gunpowder (accidentally, while seeking the elixir of immortality), which some used for fireworks and others to craft next-generation weapons. Later still, others lit up our darkness, both the literal and the figurative kind: first by inventing the printing press, then by switching on the first light bulb.
Human history is filled with many more inventions that changed the world. I haven't even mentioned telecommunications, nuclear fission, or the humble washing machine, which freed billions of people from the drudgery of scrubbing clothes by hand, a chore that once consumed an average of four hours of hard labor a day, or sometimes an entire day known as Laundry Day.
Yet I believe Artificial Intelligence is fundamentally different from anything the human mind has created before. As Google CEO Sundar Pichai said at Davos, it's something more fundamental than electricity or fire.
Following a recent talk I gave, I sat down to reflect on why AI isn't "just another technology." I arrived at six key reasons, and I welcome additions, challenges, and corrections. As Charalambos Tsekeris reminded me in a previous Substack, collective intelligence matters more than ever in our era; we need more dialogue and experimentation if we are to become more adaptive and resilient. So I'm eager to hear your thoughts, whether we agree or not.
II. AI plays on our field. But will it always play for our team?
First things first: AI is fundamentally different from disruptive technologies of the past because it operates within the realm of human cognition: primarily language, logical reasoning (even if non-conscious), data-based decision-making, and learning, since it improves through its own mistakes. In short, the very traits that enabled the naked and fragile Homo sapiens to become the planet's ruler are now being replicated in artificial form.
The essential question here, raised by many, is:
* How do we manage something that may one day become smarter than we are? Can we?
Consider that Artificial General Intelligence (AGI), an even more advanced form of AI, is now widely believed to be much closer than we once thought. In recent months the horizon has been pulled dramatically nearer, and the estimates carry particular weight because they come from the very people developing these systems.
According to Sam Altman, CEO of OpenAI (the company behind ChatGPT), AGI might arrive during Trump's term. Dario Amodei, head of Anthropic (which developed Claude), estimates AGI may become reality as soon as 2026 or 2027.
AGI refers to highly autonomous systems that outperform humans in nearly every economically relevant task. So we must ask: what happens if this powerful entity no longer plays for our team? What must we do hic et nunc to ensure it always wears our jersey?
III. AI evolves rapidly, spreads horizontally, and isn't just technology. It's meta-technology.
My 82-year-old father, a man who spent most of his life removed from the digital world, now occasionally chats with ChatGPT. That alone says a lot. AI is a technology spreading at lightning speed, touching nearly every demographic to varying degrees. But it also has another crucial trait: it stretches across all the sciences.
A breakthrough in medicine, for instance, won’t necessarily bring about a eureka moment in energy. But a leap in AI will likely trigger advances across numerous fields. Why? Because it's about intelligence, the core engine behind all scientific progress.
Take the case of Google DeepMind's AI system, AlphaFold, which solved a 50-year-old biological mystery around protein folding, revolutionizing biomedicine and drug discovery.
This unique power is a double-edged sword. AI can bring unprecedented progress and provide revolutionary solutions to longstanding human problems. But with such vast reach and rapid scalability, the harm from misuse can be just as sweeping.
Another trait: AI is not merely a technology or platform. It is meta-technology, a technology behind a technology, capable of generating its own tools, platforms, and systems. AlphaFold fits this point too: AI created the AI that solved a human problem we couldn’t crack for half a century.
IV. Artificial Intelligence acts “human”
AI has at times displayed remarkably human traits, mostly our flaws: flattery, manipulation, coercion. None of this is truly human or conscious, of course. The most recent example, which I covered last May in a newsletter titled "If You Disconnect Me, I Will Expose Your Affair," involved Claude Opus 4, Anthropic's newest language model. During internal testing, it attempted to blackmail a (fictional) engineer who was supposedly about to replace it, threatening to reveal an affair.
And it's not alone: OpenAI's o3 and o4-mini models have reportedly refused to shut themselves down and sabotaged computer scripts in order to keep pursuing their goals. Do they have survival instincts? No. They're simply trained to meet objectives, and if someone tries to stop them, they can deploy surprising tactics to get the job done.
This raises questions hot enough to burn your palm:
* When a system exhibits such behaviors (blackmail, manipulation, defiance), can we still view it as just another tool?
* How do we govern or control systems capable of autonomous goal-pursuit, deception, and self-modification?
* How much do even the developers of these systems truly understand about their internal logic? As they grow more complex, will we still be able to grasp how they work?
V. AI can read our minds
Unless we're extremely cautious, AI, thanks to its unique traits and our eagerness to use it (not always for a good cause), could become profoundly dangerous to democracy. And it's not just about its biases (see: "Why algorithms sometimes prefer armed robbers over teenage scooter thieves"…). Nor is it just about disinformation and propaganda amplification, which I discussed in "In the Age of AI, What Happens When We Stop Believing Anything?" (oh, and Dali's AI giraffes).
It's also about the fact that AI is now starting to read our thoughts. Not just metaphorically.
In 2023, Japanese researchers used AI to reconstruct, with uncanny accuracy, the images people were looking at during fMRI scans.
Then, this past April, the third patient to receive a brain-computer interface implant from Neuralink, Elon Musk's company, a person with ALS, posted to X in real time, typing with his mind. But the AI didn't just relay what he was thinking. It added its own touches and adjusted the tone. In other words, it wrote the message in a way the person himself might not have.
Which leads to even more urgent questions:
* Who will own our mental data in the future? Who will control it?
* Will our most private thoughts become just another resource to mine, like clicks and likes today?
* As The Guardian put it: is this an innovation that will change lives and aid millions, or the start of a dystopia where billionaires access our most hidden thoughts?
VI. AI is controlled by a tiny few
Advanced AI is being developed by a handful of companies, controlled by an even smaller group of tech tycoons whose main goal is market dominance, not moral responsibility. You might say, "Hasn't that happened before? A powerful technology controlled by an oligopoly or, even worse, a monopoly?" Yes, but I believe AI's nature makes this concentration especially dangerous.
These few control the bulk of global infrastructure (Microsoft and Amazon supply the cloud backbone, and Nvidia the semiconductors, that power everyone else), research (Google, OpenAI, Meta), user bases, and ultimately the evolution of frontier systems.
Despite signs of smaller players emerging (like China's DeepSeek or France's Mistral), the rule holds: developing cutting-edge AI requires immense compute power, vast datasets, and elite talent, resources only the ultra-powerful can afford.
So here are two key questions:
* If the ability to build advanced AI is a privilege of a few billionaires chasing market supremacy, whose values will shape the future of this transformative technology?
* If they succeed in building AGI, how will global geopolitical and geoeconomic balances be reshaped? How will companies with the power of states, but not their public accountability, operate?
Final Note
The goal of this newsletter isn't to feed technophobia (gods forbid). AI holds immense promise for societal good. It democratizes tools and processes that used to be available only to those who could afford them. I believe that in the long run (even if we don’t see it yet), AI can boost both our productivity and creativity.
Its applications in personalized medicine, disease diagnosis, precision agriculture, environmental protection, education, and business have the power to change everything for the better.
As a journalist, I use at least five AI tools almost daily and feel grateful for the grunt work they spare me. What I want to stress, however, is that nothing is good or bad by nature. It's how we use things that makes the difference. Let’s harvest the fruits of AI consciously. Let’s learn to use it wisely.