Human values are evolving. Is AI now shaping them?
A discussion with Charalambos Tsekeris, Acting Chair of Greece’s National Bioethics and Technoethics Commission, about how Artificial Intelligence is affecting human values

I.
On Barbarity and Humanity
In August 1781, the British ship Zong set sail from Ghana to Jamaica, carrying a "cargo" of 442 enslaved individuals—twice the number the vessel was built to hold. Not all of them would reach their destination. In the open sea, 130 were thrown overboard, some likely still chained, unable to swim.
The ship's owners claimed that navigational errors had extended the voyage and depleted the drinking water. They argued that, to ensure the survival of the majority, some had to die. Yet the rain that fell during the voyage undermined this account—it provided at least some water—and shifted attention to another matter.
The "cargo" was insured, and the owners demanded compensation for the loss of 130 "goods." The insurance company refused, and the case went to British courts in the 18th century.
These 130 free people, violently turned into slaves and sold as merchandise, ended up on the ocean floor, likely for no other reason than to collect insurance money. They were not seen or treated as humans. The case, you see, was tried under commercial law. The owners never received their payout. But the Zong case ignited outrage in England and fueled the first campaigns against the Atlantic slave trade.
Human history is full of stories of barbarity: from the martyrdom of Athanasios Diakos, impaled by Ottoman forces, to the practice of impaling rebels, dissenters, prisoners, and lawbreakers alive in Assyria, Babylonia, and ancient Persia.
History suggests that humanity leans toward cruelty, as genocides, holocausts, and the burning of witches attest. Only as civilization develops do we begin to restrain or soften this instinct.
This grim prelude stems from a conversation I had for this very newsletter with Charalambos Tsekeris, acting chair of Greece’s National Bioethics and Technoethics Commission, about how Artificial Intelligence (AI) is affecting human values. At one point, he said:
"Our humanness is evolutionary. It has shaped its positive character throughout history. Let us not take it for granted."
That phrase struck me deeply. It made me reflect on the long arc of our humanity—from acts of unspeakable brutality to the comparatively humane society we inhabit today.
He said it in response to my question: We often say that AI must align with human values. But are we maybe already at the point where AI is shaping those very values?
Tsekeris replied that the growing and profound convergence of human beings and technology is leading us into a new condition of interdependence and co-evolution—what he calls AI’s double hermeneutic.
Put simply:
"Humans build AI tools and systems which, through machine learning and self-improvement, become semi-autonomous and eventually begin to shape the behaviors and values of their human users through interaction. AI is shaped by us, and at the same time, it shapes us."
In this context, Tsekeris added, Nicholas Christakis (Professor of Social and Natural Sciences at Yale and Director of the Human Nature Lab) has shown that:
Simply programmed bots, when embedded in human social systems, can influence collective behavior, cooperation, and even moral decision-making within groups. AI doesn’t just interact with individuals in isolation—it disrupts human-to-human interactions themselves. For instance, AI can rewire social connections to promote collaboration or creativity, but it can also lead to less ethical or productive behavior, depending on how it is designed and deployed.
As Tsekeris explained, this suggests that AI acts as a social catalyst, altering social norms and value frameworks in subtle yet powerful ways.
"Sometimes these effects are immediate and intense; other times they unfold slowly and over the long term. Either way, they demand systematic investigation and regulation to ensure that AI serves the public interest, rather than undermining social cohesion or ethical behavior."
He went on to say:
"The presence of AI in everyday life—through, say, self-driving cars or social media bots—can change human judgment, reciprocity, and social behaviors, with lasting effects even if the AI is removed later."
In other words, AI isn’t just a tool. It’s a transformative social force that can redefine human behavior and our value infrastructure—changing how we connect, collaborate, and make moral choices within our social networks. "This requires careful design and inclusive governance of AI systems, to maximize their benefits while minimizing the risks to human relationships and the ethical fabric of society."
Let us imagine a hypothetical scenario: over time, a teenager interacts daily with ChatGPT. He gives curt commands, never says "please," and speaks only in imperatives: "write this," "do that." Slowly, his social manners erode. He loses basic politeness, and eventually begins to speak to his peers in the same way. In this way, the large language model ends up reshaping his relational values.
What do you think?
How might AI influence the development of our humanness and values? Do you believe it already is? If so, how? Or do you see things entirely differently? How?
II.
Personal Autonomy and the Alteration of Self
AI now helps us choose everything—from shampoo to romantic partners. So how might this affect our judgment and our relationships with others? I asked.
"The growing algorithmic influence on human choices," he said, "constitutes a kind of existential risk to personal autonomy, one of our most fundamental moral values.
Moreover, the shrinking sphere of volitional agency and the algorithmic construction of interpersonal relationships could lead to the alteration of the self (which is a product of dialogical relationships) and the collapse of social bonds into a downward spiral of pseudo-authenticity, individualistic introversion, and advanced narcissism."
This condition, he added, is systematically reinforced by AI's intensive anthropomorphizing practices, which simulate or mimic human intimacy.
III.
Three Questions About the Future
Artificial Intelligence does not have moral judgment. AI does not think. It is not autonomous in the sense of having intent or goals.
It lacks the emotions that drive action, unlike the feelings that fuel everything we humans do. We are talking, in other words, about action without intention or emotion.
So what happens if one day this entity develops its own kind of machine consciousness and autonomous objectives? That was my next question.
"That," said Tsekeris, "is one of the many scenarios concerning AI's possible futures. It will happen only if we humans allow it—only if we fail to be sufficiently foresighted to contain AI's potential within regulatory safeguards."
Faced with such a scenario, we must, as a global human community, begin to formulate adequate, collective, and dialogically developed answers to the following existential questions:
What does it mean to be human in an era of (super)intelligent machines?
What sets us apart from these machines?
Will we retain our will and capacity to think independently?
IV.
The Thorn of Inertia in the Face of an "AI Apocalypse"
My next question was this: Many people have already accepted the idea of an "AI Apocalypse"—a deeply dystopian future. And for that reason, they remain passive and disengaged from dialogue and action about AI. While I disagree with them, I can't fully blame them either. AI is evolving at breakneck speed, and the goal of both governments and corporations is dominance, not moral responsibility. Is a dystopian future truly inevitable? What can we do to avoid it? What does history teach us about humanity's escapes from the dystopian futures of... the past?
"According to the science of foresight, nothing is inevitable," Tsekeris made clear.
Determinism aside, history shows that human choices play a decisive role in shaping civilization. And today, the role of collective intelligence is more crucial than ever. We need more dialogue and more experimentation to become more adaptable and resilient.
We humans feel insecure about AI and distrust it. But could it be that this distrust is rooted in AI's very nature?
"The complexity and speed of AI provoke awe. Trust is low when knowledge, education, and training are lacking. What we primarily need is awareness, transparency, and ethical oversight of the human-AI co-evolution—as well as greater investment in human capital and social cohesion," concludes the active hair of the National Bioethics and Technoethics Commission, which has recently completed a compelling study on Greek public attitudes toward bioethical and technoethical issues.
This pinpoints the task before us so concisely. Thanks for sharing it.
Let me take a crack at the 3 questions (just one man's opinion, shooting off his mouth-fingers):
What does it mean to be human in an era of (super)intelligent machines? The machines do not change what it means to be human (as yet), but interacting with them can help us begin to recognize what we value about being human. The contrast can help us understand our species better; for example, the difference between intelligence and sentience now seems important in ways we did not previously understand.
What sets us apart from these machines? Emotions, as Tsekeris says, are probably foremost.
Will we retain our will and capacity to think independently? This question is more ambiguous for me, as it suggests we have "will," which I'm not sure we have. We are our culture's will, maybe? Machines come from another culture, an alien culture?
"Capacity to think independently" is also troublesome for me as an idea. It is like "critical thinking" that many people are saying we need to "maintain" in the face of AI. I am not sure those identified as "independent and critical thinkers" have a very good track record of improving human lives. Smart people do most of the harm, I think, since they have most of the power. Let's aim for retaining and increasing human kindness--maybe that is the value we should most fight for going forward.
Sure would like to see all the thought leaders addressing these three questions. Thank you so much, Alexandra, for publicizing them here.
I agree critical thinking can be very powerful--but it is a tool without any morality. It can be used to figure out how best to kill as well as how to feed people. So I think we need to first build a culture of kindness, of compassion. Once we have that culture, only then should we start using the tool of critical thinking. Otherwise, the tool ends up in the hands of power-hungry people as often as, and maybe more often than, in the hands of kind people.
So ends my rant! In continual appreciation of your thinking and sharing. Houston