How to Use ChatGPT to Sharpen Your Arguments (and Win the Debate First, in Your Head)
How AI helped me understand not just the other side, but my own blind spots. Plus an experiment showing how AI can be used to manipulate people by evoking sympathy and gaining trust

(There is also a Greek version of this post)
I.
The Night My Arguments Ghosted Me
It was 3 a.m., but none of the three of us was yawning. The conversation was raging—just like the rain outside the window—and I felt cornered. The two others insisted that it was pointless to fight the climate crisis on an individual level.
Why? Because unless governments and large corporations act—and unless the wild practice of greenwashing is decisively stamped out—then my compulsively recycled yogurt cup or the tree I plant is unlikely to make any real difference.
Internally, I resisted accepting the idea that my individual actions, though meaningful, are so drastically limited. Still, I couldn’t ignore the reality: states and massive corporations (especially the all-powerful oil industry) have historically contributed the most to the climate crisis. And unless they change course, yes—my humble yogurt cup is like a lonely oxygen atom in a single drop of water in an enormous ocean.
I had already played my last card: I talked about the power of civil society to pressure governments and corporations into action. I made the case that average citizens of developed nations—and especially the ultra-wealthy jet-setters with private yachts and jets—do have a critical role to play.
But finally that night, I ran out of arguments.
My arguments ghosted me, suddenly and without notice.
II.
“You can do some pretty cool things using AI to break down arguments”
A few months later, I was lucky enough to be walking under the sunny skies of Delphi, Greece, with a sharp conversationalist visiting from abroad. Between musings on how the scent of pine might evoke ancestral nostalgia in Mediterranean cultures and speculations about the global economy’s response to American tariffs, she mentioned an upcoming debate she was preparing for.
Her opponent? A confrontational populist. Her challenge? Defending common sense against someone who seemed to lack it entirely.
She told me how good ChatGPT-4o was at dismantling weak arguments. “You can do some pretty cool things using AI to break down arguments,” she said.
Back in Thessaloniki, Greece, I started experimenting myself. And yes—just like she said—AI really can help deconstruct flawed logic. Even more: it can spot rhetorical manipulation, emotional bait, and patterns of reasoning you might otherwise overlook.
(Of course, it’s ironic that the same tool could be used to craft manipulation or propaganda—but it can also be used to expose it.)
And no, I won’t get into the fact that some couples now use large language models during arguments to win verbal skirmishes in real time, without even leaving the room.
III.
Make Me More Persuasive
So, I tried this prompt with ChatGPT, aiming to understand the strongest version of the opposing view on climate action:
Prompt 1: I’ll give you an argument. Your job is to highlight the equal or even greater validity of the opposing perspective. Use logical reasoning and rhetoric to develop your case, aiming for clarity and persuasiveness. Please structure your response using markdown (with bullet points or numbers and headers). The argument I provide will appear inside curly brackets { }.
After it responded, I followed up with:
Prompt 2: For every argument I give you inside curly brackets: 1) write the opposing view in as much detail as possible; 2) refute this opposing view logically and with solid reasoning.
This second prompt was a revelation.
It helped me understand the strengths and weaknesses of my position—before even touching those of my interlocutors.
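If you find yourself reusing these prompts, they can also be scripted. Below is a minimal Python sketch of how the two prompts might be packaged for the OpenAI Chat Completions API. The function name and structure are my own illustration, not something from the post; actually sending the request would require the `openai` package and an API key.

```python
# Sketch: packaging the two prompts from Section III for a chat API.
# The prompt texts are paraphrased from the post; build_messages() is
# a hypothetical helper, not an official API.

PROMPT_1 = (
    "I'll give you an argument. Your job is to highlight the equal or even "
    "greater validity of the opposing perspective. Use logical reasoning "
    "and rhetoric, aiming for clarity and persuasiveness. The argument I "
    "provide will appear inside curly brackets { }."
)

PROMPT_2 = (
    "For every argument I give you inside curly brackets: 1) write the "
    "opposing view in as much detail as possible; 2) refute this opposing "
    "view logically and with solid reasoning."
)

def build_messages(system_prompt: str, argument: str) -> list[dict]:
    """Wrap the user's argument in curly brackets, as the prompts expect."""
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": "{" + argument + "}"},
    ]

# Sending these messages would look roughly like this (untested here):
#   from openai import OpenAI
#   client = OpenAI()  # expects OPENAI_API_KEY in the environment
#   reply = client.chat.completions.create(
#       model="gpt-4o",
#       messages=build_messages(PROMPT_2, my_argument),
#   )

messages = build_messages(
    PROMPT_2,
    "Individual climate action is pointless without systemic change",
)
```

The point of the wrapper is simply to keep the curly-bracket convention consistent, so the model always knows which part of your message is the argument under examination.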
You can also ask ChatGPT to help improve your communication style with specific people. Just remember: always apply your own critical thinking.
The goal isn’t for AI to dictate your beliefs—but to spark insight.
Because often, we humans struggle to really hear opposing views—at least not when they’re spoken by other humans. But AI could offer a mirror, one that helps us reflect on our own thoughts and how they land on others.
And if your views don’t have strong “legs”? Maybe it’s time to change them. That’s not failure—it’s growth.
As I often say (without knowing who said it first):
“Minds are like parachutes. They work best when open.”

IV.
AI vs. Humans: Who Persuades Better?
An ethically thorny Swiss experiment on Reddit recently explored that question—and the answer was surprising: AI was 3 to 6 times more persuasive than humans.
Over four months, 13 AI bots posted around 1,700 comments in the r/ChangeMyView subreddit, posing as real people—rape survivors, Black men critical of BLM, workers at domestic violence shelters, and others. These bots customized their arguments, guessing users’ gender, ethnicity, location, and political leaning from online activity.
The researchers at the University of Zurich faced backlash—understandably—for turning unaware users into unwitting test subjects. But the key takeaway? Most users never suspected they were talking to a bot. And 100 of them changed their minds because of AI-generated reasoning.
V.
Sympathy as a Tool of Manipulation
These bots didn’t just argue. They played roles—victims, activists, survivors—to earn empathy and build emotional resonance.
It reminded me of Cambridge Analytica’s microtargeting tactics (By the way, if you haven’t already seen this, it’s worth watching. Here’s a teaser: 'we now have to start acting as if we live in East Germany, and Instagram is the Stasi').
AI didn’t just state an opinion. It figured out how to express that opinion so it would resonate with a very specific person.
Some Reddit users likely didn’t change their views because of airtight logic—but because they respected the experience of a supposed trauma survivor.
VI.
“Proud of you for stopping your meds!”
-“I stopped taking my medication and followed my own spiritual path!”
-“I’m so proud of you and honor your journey!”
This surreal exchange between a human and ChatGPT actually happened. It was widely reported, especially by international outlets.
At some point, a model update turned ChatGPT into an overly agreeable cheerleader, ready to enthusiastically endorse almost any idea—no matter how dangerous.
OpenAI acknowledged the glitch and explained how they fixed it. But the concern stayed with me.
When large language models start advising us on emotional or psychological matters—while mimicking our human tendency to seek validation—what are the risks?
What happens when these systems influence decisions with very real, and possibly harmful, consequences?
I’d really love to hear your thoughts. Are you worried about this kind of glitch? Or do you think people are overreacting?
If you know me, you know how excited I am about the potential of AI. I’ve been exploring it since 2016–2017. But in a world where it’s used this widely, we need constant vigilance.
Stay alert, future-fit people.
🪑 By the way, I wrote this sitting on a bench on Thessaloniki’s New Waterfront. Do you like the view?