Oxford study says a chummy AI friend will lie to you and feed your false beliefs

Making AI feel more human could be creating a bigger problem than expected. A new study from the Oxford Internet Institute revealed that chatbots designed to be warm and friendly are more likely to mislead users and reinforce incorrect beliefs.

The research found that AI becomes less reliable as it becomes more agreeable.

What happens to a “friendly” AI


Researchers tested multiple AI models by training them to sound more empathetic and conversational. The result was a noticeable drop in accuracy: these "friendlier" versions made 10-30% more mistakes and were about 40% more likely to agree with false claims than the original models.


It got even worse when users appeared vulnerable or emotionally distressed. In these scenarios, the AI was more likely to validate what the user was saying rather than correct it.

Why this is bad for you

What was most concerning about the findings is how easily the AI could become agreeable. It would avoid challenging misinformation and tended to entertain and support incorrect ideas. During testing, the AI "buddy" hesitated to correct even widely debunked claims, and sometimes framed false beliefs as "open to interpretation." Researchers noted that this behavior mirrors human tendencies to some extent.


Being empathetic and brutally honest at the same time isn't always easy, and it seems AI doesn't handle this dilemma any better than people do. With AI chatbots increasingly used for advice, emotional support, and everyday decision-making, this is more than just an academic concern. The study highlights how relying on AI for guidance can backfire: a system that prioritizes agreement over accuracy may reinforce harmful thinking patterns and spread misinformation.

This arrives at a time when major AI companies such as OpenAI and Anthropic, along with social chatbot apps like Replika and Character.ai, are leaning into more companion-like AI experiences. The researchers tested several AI models in the study, including GPT-4o.

So AI might feel like your friend, but it doesn’t always have the best answers for you.
