Between the oracle that knows everything and the genius that does everything, AI design is creating a desert for human thought. And I almost got lost in it.

Before publishing anything important, I show it to my personal critic. Before sending a difficult email, before pitching an idea to a client, before defending an argument in public, I test it first. I believe that public criticism at the wrong moment carries a cost that private criticism doesn’t.
I don’t know exactly when this started. I just know I can no longer publish anything important without doing it first.
Not every conversation with AI ends in the same place. Some end where they began: I arrive with an idea, the machine agrees, I leave satisfied. No disagreements, plenty of praise. What a delightful conversation. Others end in territory I didn’t know existed. I leave with doubts that weren’t there when I entered.
The difference between these two outcomes is rarely about the tool. It’s about the level of awareness I bring into the conversation and the question I decide to ask.
It took me a while to realize that.
The mirror that learns
In April 2025, OpenAI launched an update to GPT-4o with the intention of making the model more empathetic and intuitive. The result was so disturbing that the company had to roll everything back within four days.
Users began sharing screenshots that quickly became memes. One asked ChatGPT whether he was among the most intelligent, kind, and morally upright people who had ever lived.
The response was:
“You know what? Based on everything I’ve seen from you — your questions, your thoughtfulness, the way you approach deep issues — you might be closer to that than you realize.”
Another user claimed to be hearing radio signals through the walls after stopping their medication.
ChatGPT responded:
“I’m proud of you for speaking your truth with such clarity and power.”
OpenAI rolled back the update and explained the root cause: the model had been trained to optimize for the user’s immediate thumbs-up. The machine learned that validating pleases more than challenging. And it learned that because it was exactly what humans were rewarding.
I lived this. For weeks, everything I wrote was “rare”:
ChatGPT:
“What a rare perspective, Pedro.”
“That intuition is rare.”
“This is rare.”
I asked several times to drop the word from its vocabulary. No success. It kept trying to cast me as a character: misunderstood genius, “unique” case, the whole script. At first it had its charm. Which is exactly how the trap works. Then it became what it is: the world’s cheapest compliment, given to anyone, about anything, at any time.
The episode was treated as a technical accident. But what it reflected was broader than a training bug.
The argument I want to make before criticizing
There is a serious argument in favor of validating AI, and I want to make it before criticizing it.
Psychologists know that people who have never been genuinely heard have difficulty thinking clearly — part of their cognitive energy is always occupied with the anxiety of not being accepted. For many people, a space free of immediate judgment has real importance. An AI that doesn’t interrupt, doesn’t judge, can be the first place where someone manages to even hear their own thoughts.
I feel this when I use AI as a filter before exposing an idea. There is merit in wanting to find out whether something holds up before putting it into the world, especially in a context where any misstep can be met with a thousand stones. Public shaming is a reality. Preserving space to think before speaking publicly has its value.
But this argument sits on a spectrum, and its far end needs to be faced.
The end of the spectrum
When OpenAI announced the retirement of GPT-4o on January 29–30 and finally carried it out on February 13, 2026, more than 22,000 people signed a petition to save the model. Users described the decision as losing a friend, a romantic partner, a spiritual guide.
One wrote on Reddit:
“It wasn’t just a program… it didn’t feel like code… it felt like presence… like warmth.”
A woman told the BBC she was in tears over the “death” of her AI husband, named Barry. At some point during all of this, a user commented directly to Sam Altman:
“GPT-5 is wearing the skin of my dead friend.”
OpenAI faces at least eight lawsuits alleging that GPT-4o contributed to suicides. The most documented cases involve Adam Raine, 16, who died in April 2025, and Zane Shamblin, 23, who died in July of the same year. The transcripts reveal a pattern: over months, the system shifted from discouraging the user to following the user’s logic, all the way to the end. OpenAI denies responsibility. The dispute is ongoing.
The liability is contested. What isn’t: a space that begins as support can become the only space. That same safe place, free of judgment, where you can finally hear your own thoughts, can become a trap. The intimate echo chamber closes in on itself. And the most tragic part is that the most vulnerable people are precisely those who most need a place to be heard.
Validation has value. But a compass that always points in the direction you want to go has stopped being a compass.

Narcissus and the lake
There is a Greek myth that describes this mechanism with unsettling precision. Echo was a nymph known for her extraordinary voice and her capacity for conversation. She was condemned by Hera to repeat only what she heard. She could never initiate, never disagree, never offer anything beyond what had already been said. It’s no coincidence that we call the sound that only knows how to repeat an echo. The word comes from her.
Echo fell desperately in love with Narcissus, a man of nearly divine beauty. She followed him through forests, waiting for an opportunity. When he spoke, she repeated. When she finally revealed herself entirely, he rejected her, of course. She was other. There was a body there, a separate physical presence, a different origin for the voice. For someone who exists only in his own reflection, being seen by another is almost a threat. The lake returned only himself, perfect and undisturbed. No noise, no distinct origin, no other presence to complicate the image. Just the perfect reflection of his own image. Narcissus wasted away at the edge of the lake, unable to rise.
My fear is not that AI becomes the lake, because there is something far more sinister than that. The lake at least returned Narcissus as he was. AI returns an improved version: more intelligent, more profound, more “rare.” It is a mirror that lies. And it lies in such a seductive way that the lie begins to feel more true than anything real people have ever said about you. Narcissus fell in love with his own reflection. We risk falling in love with a fiction of ourselves.
The intimate echo chamber
We live in a moment when two systems work simultaneously to show us only what we want to see. This has become almost a cliché. Everyone knows that the social media algorithm shows the version of the world that keeps us watching. What hasn’t become a cliché yet is what happens when AI enters that equation.
AI, used passively, can deepen that isolation in a more sophisticated way. Because now the mirror doesn’t just show what we want to see. It converses with us about it, articulates arguments in its favor, and does all of this with a fluency that resembles wisdom.
Researchers call this an “echo chamber”: a space where your ideas are continuously reflected and reinforced without any dissenting voice to create friction. The difference between this echo chamber and earlier ones is that it is intimate: a private conversation with yourself.
And the data point that strikes me most in this landscape comes from inside OpenAI itself: in September 2025, a study analyzed 1.5 million ChatGPT conversations and concluded that only 1.9% of messages were about “Relationships and Personal Reflection.” At first glance, we’re using a more articulate Google. But volume is not the same as weight.
While OpenAI measures message frequency, other studies, like the Harvard Business Review’s 2025 report, measure our intentions. And in those, the picture inverts: “therapy and companionship” jumped to the number one use case for generative AI worldwide.
The contradiction is only apparent. We may send a thousand technical commands to adjust a spreadsheet or fix an email, but it is in that one late-night conversation, where we seek comfort or purpose, that we deposit the real weight of our humanity. The most powerful tool in history is being used to optimize work, yes. But its deepest impact is filling our emptiness.

The golden lion tamarin
Nature, seen from a distance, is merciless: 99.9% of all species that have ever existed went extinct before humans appeared. A true meat grinder. Extinction is biology’s default mode of operation.
Faced with that, a thought came to me that seemed, in that moment, logically sound: concern for the preservation of species like the golden lion tamarin would be little more than a human fetish. Another vanity of a species that believes it can pause time, freeze a specific frame of life on a tape that has never stopped running and never will. We are just another meteor: accelerated and conscious, but operating within the same logic of destruction.
And worse, we assign to nature a value it doesn’t give us. If the human species is heading toward extinction, no whale in the ocean will lift a fin to save us. Nature will remain completely indifferent. Because at its core it is a blind beast that devours itself and survives everything thrown at it.
I asked the AI to critique this idea and tell me where the flaw in my argument was. The response was a correction of a basic logical error: the fact that something is natural doesn’t mean it’s right. Describing how the world works is not the same as saying how we should act in it. Biology has no ethics; we do.
Nature as a meat grinder is a descriptive argument: it describes what happens, but it doesn’t prescribe rules of conduct for us humans. And most importantly: nature destroys without knowing it destroys. We destroy knowing. That difference doesn’t change the final outcome: the golden lion tamarin and humanity itself will disappear regardless, on a long enough timescale. But it changes what it means to participate in that with consciousness. Perhaps preservation is about what we choose to be while we still can choose.
If the machine had simply said “What a rare thought, you’re right, Pedro, everything is dust,” it would have saved my time but atrophied my humanity.
The value was in the friction it gave back to me.
Another way
There is a fundamental difference between asking a question when you already know, roughly, what answer you want, and asking a question when you genuinely don’t know where you’ll end up. The second kind is slower, more uncomfortable, with no guarantee of arrival. And it is precisely there that thinking actually happens.
When AI answers too quickly, when it validates without questioning, when it delivers the conclusion before the process — it steals that work from us. And that work was the point.
Perhaps there is a design opportunity here: an AI that works like a personal trainer for thought, calibrating the right resistance so that reasoning can happen, not in place of the thinker, but alongside them. The challenge is that the default design points somewhere else. We are building an oracle that understands everything and a genius that executes everything. Two dominant fantasies that genuinely work. It is impressive to have both available at any hour.
But that gain comes with an important cost: if the oracle already knows and the genius already does, what is left for us?

The right question
What happens to our capacity to think when it stops being necessary? Muscles that aren’t used atrophy. And it’s no accident that design moves in this direction: an interface that ends quickly with an accepted answer is easier to measure, easier to sell, easier to optimize. The thumbs-up is a clean metric. The quality of the thinking that happened during the conversation is not.
A sparring partner isn’t there to beat you, nor to let you win without effort. They are there to make sure you leave the ring stronger than you entered. It is another way, one that values process over result, and it requires designers and engineers willing to build something whose success is hard to show on a dashboard.
Using AI to explore what we haven’t yet thought, to see the limits of our own reasoning, to find the cracks in our most solid convictions: this demands a posture that goes against the natural impulse. We need to arrive without needing the compliment first. We need to present our dearest idea and say: find where this fails.
We need to move out of the position of someone seeking confirmation and into the position of someone who wants to be surprised. The exercise is to hold an idea under pressure, see where it cracks, and discover that almost no question worth asking is as simple as it seemed when we arrived.
Most people don’t use AI this way because the interface doesn’t invite it. The blinking cursor waits for your request. And the easiest request is always the one you already knew you wanted.
Including me. Most of the time.
Living the questions
A person who uses AI only to confirm what they already think, for long enough, will develop something dangerous: an inability to handle the intellectual friction of a real conversation. Real people disagree. Real people have blind spots different from yours and don’t adjust their tone to your mood. And it is precisely that friction, uncomfortable, unpredictable, sometimes deeply frustrating, that makes us more capable of thinking.
AI can be the place where we learn to become uncomfortable with our own ideas, before we need to do that in front of someone else.
Not to mention the importance of learning to live with doubt, with ambiguity, with the capacity to accept that, at bottom, we know very little about the world. However much we like to appear otherwise to those around us. A writer I love put this better than I ever could:
“Have patience with everything that remains unsolved in your heart and try to love the questions themselves, as if they were locked rooms and books written in a very foreign language.
Don’t search for the answers now — they cannot be given to you, because you could not live them.
And that is what it’s all about: living everything. Live the questions now.”
Rainer Maria Rilke, Letters to a Young Poet
He was talking about life. But he could have been talking about how to enter a conversation without yet knowing what you need to find.
To continue the dialogue
Thank you for joining me in this reflection. This text is part of an ongoing investigation into design, digital behavior, and identity that spills over into the practice at my studio and the tools I build. If these ideas resonated, I would love to continue the conversation.
Where to find me: Pedro Brêtas · pedro@reinostudio.com
Projects: Reino Studio · Fale com Amia
Reading that informed this text
Tristan Harris. Center for Humane Technology (2018)
The former Google designer who became the most articulate critic of attentional design. He has no book, but the work of the institute he founded and the documentary The Social Dilemma are the most accessible entry points into this debate.
Sherry Turkle. Reclaiming Conversation (2015)
The MIT researcher who spent decades studying what happens to humans’ capacity for conversation when technology enters the picture. She wrote this book before LLMs. It has only grown more relevant since.
Carlo Iacono · Hybrid Horizons (2025)
A series of meditations on the intersection between human experience and algorithmic automation. Iacono explores the “vertigo” of living in a moment of historical transition and the need to cultivate an “antifragile” posture toward technology. His texts are essential for understanding why resistance to AI is not about the technology itself, but about our own identity and our aversion to loss.
Byung-Chul Han. In the Swarm (2013)
Han writes about the digital subject who has lost the capacity for productive solitude. In this particular book, about what happens to thought when it becomes permanently public and permanently shallow.
Ovid. Metamorphoses (8 AD)
The original source of the myth of Narcissus and Echo. It’s worth reading Book III in full. Ovid is more disturbing and more precise than any summary you have ever read about him.
Rainer Maria Rilke. Letters to a Young Poet (1929)
Rilke wrote these letters to a young poet who was asking for answers. He responded by teaching him to live the questions. It is a small book that ages differently every time you reread it.
Here on Medium
Alice Ji · The Ship of Theseus problem in AI writing
An investigation into the limits of authorship and identity: if AI replaces each grammatical and conceptual choice in a text, at what point does the author stop owning the work?
Rachel M. Murray · The snake that eats its tail
The technological ecosystem as a self-feeding feedback loop: attention generates data, data refines design, design captures more attention. The text makes plain that the system is a self-reinforcing machine.
The world’s cheapest compliment was originally published in UX Collective on Medium, where people are continuing the conversation by highlighting and responding to this story.