Mystics and technocrats
From the quiet magic of a child’s whisper to the visionary predictions of futurists like Jerome Glenn and Ray Kurzweil
What you'll read in this newsletter:
Why humanity might need a great leap: to remember we are part mystics, part technocrats.
Imagine going 200 years back in time. Could you possibly explain to someone what today looks like?
What happens when our thoughts move to the cloud?
What do we need most to remain human in the age of Artificial Intelligence?
Which text-to-video tool is best—depending on the job?
I.
I Know You Can Fly
When my nephew was around three or four years old, he suddenly rushed over to me, climbed up on my lap, looked into my eyes with great seriousness, and whispered:
“I know that when nobody’s looking, you can fly. I won’t tell anyone.”
Even though I’ve never tested his theory (fortunately sparing my bones), I’ve never forgotten the feeling that moment gave me. That gaze. Those words.
We humans are not made for logic alone. We also need wonder.
Today, I want to try and write about something that’s been tugging at me for some time, but that feels hard to put into words. I’m doing it because I’ve been sensing more and more that we’re losing touch with something vital: the everyday “magic,” curiosity, and open-ended questioning that once led us to poetry—and to other big and beautiful things.
Is there still space for such small miracles in the age of Artificial Intelligence?
Can we stay grounded without becoming too grounded? And to push this a bit further:
Could we build civilizations in which the mystical and the practical, the mysterious and the technological, exist in balance?
II.
The Virtuoso Pianist
Let me try to explain what I mean. After extraordinary concerts, pianists sometimes say things like:
“It felt like my hands, my mind, the piano, the composer—we were all one.”
For that kind of magic to happen, you need technology (the piano).
But you also need a human mind and a pianist’s skill—because a piano doesn’t play masterpieces by itself.
This idea of unity between artist and instrument is almost mystical.
“This description of the unity between the pianist and the piano is something classically mystical… What would happen if we created connections between these elements—the transcendence of the human and technology? What if we could build entire civilizations based on that logic?”
These were the words of Jerome Glenn, shared with me in 2023. Glenn is an indefatigable explorer of the future—a U.S. futurist, co-founder and executive director of The Millennium Project (I’m a member of the Greek node), and advisor to organizations like NATO and the European Commission. He was working with AI long before it was cool.
When our Zoom call ended—him in Washington, me in rainy Thessaloniki—I felt like I had just discovered a brand-new idea:
Perhaps humanity needs a major leap. To remember that we are partly mystics and partly technocrats. We’ve been creating technology since the beginning of time (yes, stone tools and the wheel were technology), but we've also always pondered mysteries we couldn’t touch. Through this constant questioning, we’ve given birth to wisdom, new knowledge, poetry, and experience.
Why do we need something like this now—more than ever?
Because not every issue surrounding Artificial Intelligence can be solved with code alone. And because if humanity ends up relying exclusively on one of the two forces, either mysticism or technocracy, this imbalance could lead us down one of two dangerous paths:
Either into a technocratic dystopia, dominated by surveillance capitalism, algorithmic bias, and dehumanization;
Or into a mystical authoritarianism, one that rejects science and feeds on dogma and conspiracy theories, calling for a halt to progress.
Back then, I asked Glenn what exactly he meant by the word “mystics”, since it’s such a heavily loaded term, often carrying strong negative connotations.
“By mystics, I don’t mean religion or worship,” Glenn told me,
“I mean thinking about the mysteries of the universe, of life, of unity, the idea that we’re all connected. That element exists in all cultures. But in the West, in the U.S. and Europe, we tend to focus much more on the technocrats than on the mystics. And yet, today more than ever, we need to be both.
Technology’s power to blur our perception, our very sense of reality, should prompt us to ask: how can we combine technology and consciousness to create a continuum in a healthy way?
It’s incredibly difficult to achieve. But how can anyone live with integrity in a world of constantly shifting perceptions if they can’t even tell what’s real and what’s not, what’s true and what’s false?
Our consciousness—our awareness—has to have the right relationship with technology.”
Could we ever build this kind of continuum with Artificial Intelligence?
What do you think? One thing I believe with certainty: we’ve opened a new chapter in the history of humanity. Where all living beings once survived through Darwinian adaptation, evolution today has new tools: genetics and technology.
The age of the transhuman¹ and perhaps even the posthuman is possibly beginning to dawn.
Does this kind of continuum seem far-fetched to you?
I think we can agree: while it’s difficult, it’s not impossible.
Imagine explaining to a person in the Middle Ages that one day, a tiny device the size of a coin could be implanted in their chest to keep their heart beating. They’d find the idea unthinkable—if not outright blasphemous. And yet, here we are. Millions of people around the world today are alive thanks to pacemakers.
So let’s say we succeed in creating this continuum—merging our most deeply human traits with technology.
What do we gain?
Now imagine going back in time 200 years. How would you explain our present-day reality to someone? When radically new information enters the brain, we must be “wired” in some way to absorb it—because information doesn’t just disappear into the void. It needs somewhere to go. So what happens if there’s not enough “somewhere”? And what would happen if we could expand our internal “hardware” and our processing capacity? How much more could we become capable of?
III.
Our thoughts in the cloud
American computer scientist, inventor, and futurist Ray Kurzweil, born in 1948, is a rather controversial figure, but his predictive accuracy is hard to ignore. For years, he’s argued that what nature gave us can be evolved.
Back in April 2016, in an interview with Playboy (found via archive.org), Kurzweil described how the neocortex, the outermost layer of our brain, emerged millions of years ago. The major leap happened around 2 million years ago, when early humans evolved larger foreheads and developed more abstract thinking. This, he said, led to language, humor, music—none of which other species can produce in the same way.
His claim: the next great leap will come from machines.
Kurzweil predicted that by the 2030s, we’ll have microscopic robots flowing through our bodies and brains, connecting our biological minds to synthetic extensions, functioning like a brain in the cloud. This means we’ll be able to add layers of neocortex to expand abstract thought.
“We’ll develop deeper forms of communication than we know today,” Kurzweil said. “Deeper music. Funnier jokes. We’ll be wittier. More romantic. More skilled in expressing love.” In a world where machines are often seen as cold and impersonal, Kurzweil’s idea that they might actually enhance our emotional expression is both provocative and genuinely intriguing. But what will it really feel like? How will human intelligence evolve? Kurzweil’s answer was simple: “We don’t know. Once we expand our thinking into the cloud, our intelligence will grow beyond anything we can currently comprehend.”
IV.
Humans at two speeds?
But what happens if not everyone has access to these upgrades? In a society where some can afford to enhance themselves technologically while others can’t, or choose not to, we may see growing prejudice and division between “augmented” and “non-augmented” humans.
Yes, over time, costs usually come down. But initially, this could create major inequality.
In 2021, still reeling from the pandemic, I discussed this with Dr. Notis Christofilopoulos, chairholder of the UNESCO Chair on Futures Research and now president of MOMus. I asked him what this would mean for equal access to opportunity.
His answer:
“If enhancement becomes mainstream, being ‘just human’ will be a disadvantage—professionally, creatively, even romantically. Yes, we’ll need safeguards. But regulation always lags behind innovation. And too strict a framework could block amazing applications—like advanced exoskeletons helping people in wheelchairs walk again. Governments don’t yet have the structures to act preemptively. But building these safeguards is crucial.”
So, taking all the above into consideration, what do you think? Do we need to balance mysticism and pragmatism? Can we?
V.
Staying Human in the Age of AI
“What do we need to stay human in the age of AI?”
“How can we preserve critical thinking?”
These were the questions Helene Banner asked me on June 24th, during Cedefop’s Learning Week.
And my answer spilled out of me, unfiltered, even surprising myself:
“Read more books. Ideally, in print.”
All kinds of books, from all kinds of perspectives:
Philosophy, poetry, literature, history, tech, religion, left, right, center. Europe, the U.S., Asia, Africa, Australia. Read with an open mind.
This era steals something from us (or rather, we give it away) from the moment we open our eyes in the morning until the moment we close them at night, through the endless scroll and digital noise.
What it steals is slowness. Time to think deeply, to wonder, to question, to doubt.
Good books give us that back, word by word. They let us touch emotion, imagine scenes, question what we think we know. They don’t hand us polished, ready-made answers in seconds the way large language models do. They let us keep insisting on cogito ergo sum. Because if AI thinks for us, who are we?
Later that day, a young woman, maybe 25, repeated my phrase—“read more books”—and smiled at me with a quiet, luminous joy. I hadn’t felt that kind of professional fulfillment in years.
Earlier, I had spoken about teaching ethical resilience in schools, about embedding philosophy and ethics alongside AI, about how this technology might enhance creativity rather than smother it, about the importance of healthy skepticism and the cognitive debt we accumulate with excessive AI use.
It’s a powerful thing to share your thoughts with people from different nationalities and cultures—thoughts on AI, education, ethics, bias, the commodification of attention (you may find this book interesting), the challenges for science fiction, the miracle of human sentience, and the elusive power of tacit knowledge.
I ended by emphasizing that we humans do have superpowers; they’re just not as obvious as we might expect. And I also stressed the importance of resisting fatalism.
Doom-thinking closes doors. If we treat an “AI apocalypse” as inevitable, we surrender opportunities and make the risks more real.
VI.
Ready to Create Your Own Animation?
Recently (yes, a bit late), I discovered Tensor.art—and I dove right in.
For the image at the top of this post, I used the prompt:
“Create an underwater forest with a light source from above.”
It took a minute. Free (just create an account).
Then I tried something trickier:
“Create three horizontal strips showcasing an Asian woman walking down a Tokyo road, holding a light-emitting lollipop in her right hand.”
Fun detail: I didn’t ask for nighttime. But the algorithm assumed: “If the lollipop emits light, it must be night—otherwise, it won’t even show.” Nice logic.
I also created a third image for this post:
“Create an image of a slender brunette woman in her early fifties with rather unusual facial features including high cheekbones, slightly asymmetrical eyes and a smile.”
Voilà. A woman who smiles and stares at you, who never existed—and never will.
Of course, if you’re willing to pay, the tools (including for video) get much more powerful, and there are other, even more powerful tools for video creation. Here’s a tutorial if you’re curious:
I asked ChatGPT to prepare a list of which text-to-video tool is best suited to each use case. It left out the surprisingly good Veo3, but it’s a comprehensive list all the same.
¹ Transhumanism and posthumanism are worldviews and philosophies which, although both have at their core the enhancement of the human being through technology and science, differ significantly from each other.
Transhumanism envisions the human radically improved. It imagines the elimination of aging and a dramatic upgrade of the human’s intellectual, physical, and psychological capacities through technology. Yet the transhuman still remains, by and large, a human.

The posthuman, on the other hand, would no longer be considered human by today’s standards. Their abilities would so radically surpass those of us Homo sapiens, and they would be so significantly modified—through, among other things, artificial intelligence and genetic engineering—that they would no longer be representative of the human species as we know it. Practically speaking, they would be a new species, with far higher intelligence and generally vastly greater capabilities than today’s humans—whom they might marginalize or even fully replace, given how little connection or affinity they would share.
A key question in this landscape, of course, is the following:
Will we guide our own evolution toward this new species, or will it simply happen to us—beyond our control?
Beautifully written
Reading your piece reminded me of comments by Ben Goertzel (AI researcher and CEO of SingularityNET, who coined the term AGI) on a podcast I listened to.
He said, human value systems are "complex, self-contradictory, incoherent, heterogeneous, and always changing." This multifaceted nature of human values poses considerable challenges in designing AI systems that will reliably act in humanity's best interest. And, as you say, it also poses great challenges since the values of the rich and the poor, the educated and the barely literate, the religious and the secular, are likely to be quite different.
What values should we seek to preserve? What values should we teach AI, if we are so lucky as to be able to teach them values they will long follow?
The podcast, just to show I'm not making it up :) https://www.youtube.com/watch?v=I0bsd-4TWZE