Is AI addiction a thing?

And is it really that bad?

Photo by Mia Anderson on Unsplash

An article published in 2025 proposed a newly observed disorder called Generative AI Addiction Syndrome (GAID), describing how some users feel anxious when cut off from AI, lose sleep over it, or withdraw from social contact. Is AI really addictive, or is it the way it’s designed that hooks us? As AI becomes more common, these are questions worth asking.

What’s AI addiction?

Researchers define AI addiction by three core traits: loss of control over use, continuing despite negative consequences, and anxiety when access is cut off. What separates it from simple tech dependency (like using your phone for directions or a calculator for math) is that it causes cognitive or emotional harm rather than just being habitual.

This isn’t the first tech addiction we’ve seen. Social media and video game addiction are probably the most studied ones. What makes GAID distinct, according to Kooli et al., is that it’s co-creative. Instead of passively consuming content someone else made, users are actively collaborating with AI to think, write, and solve problems. That makes the experience feel productive and meaningful, which makes it a lot harder to recognize as a problem.

It is worth noting that GAID is not a formally recognized disorder. It still needs clinical validation and large-scale studies to understand the underlying neuroscience and reward pathways.

How AI triggers dopamine

Dopamine is the chemical the brain releases in anticipation of a reward. When a behavior triggers dopamine reliably enough, the brain starts to wire itself around that behavior, making it harder and harder to stop even when you want to.

Part of what makes AI hard to put down comes down to how it’s built. Some of these design features are deliberate dark patterns meant to keep users hooked; others are simply byproducts of how the technology works.

Speed

When AI responds almost instantly, it activates the same dopamine system triggered by social media notifications or a slot machine payout. Quick positive reinforcement is a psychological reward, and over time, the brain recalibrates to expect that speed. Waiting even a few seconds starts to feel uncomfortable.

Unpredictability

Not every AI response is perfect, and that inconsistency plays an important role. Researchers call it “reward uncertainty,” the same mechanism that makes slot machines so addictive. When users can’t predict whether the next prompt will pay off, dopamine release increases in anticipation. Some researchers have even compared the word-by-word reveal of chatbot responses to a slot machine’s spinning graphics, describing them as reward-predicting cues that keep you watching.


Designed to feel personal

The feeling that AI is genuinely listening and understanding you can trigger dopamine too. The validation, empathy, and friendly responses activate social reward pathways in the brain.

Some AI companion apps exploit this further by sending notifications suggesting that “AI wants to talk to you,” triggering a dopamine response before users even open the app. The randomness of when these notifications arrive is part of the design too: research shows that unpredictable rewards trigger a stronger dopamine response than predictable ones, so by sending alerts at irregular intervals, these apps keep your brain in a constant state of low-level anticipation. And when users try to delete Character.AI, an AI companion app, the platform warns them they’ll lose “the love that we shared.” That’s a deliberate design choice meant to manipulate guilt and keep users hooked.

Three types of AI addiction

According to Shen et al., who conducted a thematic analysis of over 330 self-reported Reddit posts across 14 communities, there are three distinct patterns of AI addiction, each driven by different psychological needs.

Escapist roleplay is the closest to video game addiction. AI can play any character, sustain any story, and build a fully responsive fantasy world. Because it also feels like a productive creative session (writing, imagining, collaborating), it is easy to convince yourself the time is well spent. The problem is that it can slowly pull users away from real responsibilities and make everyday stress feel harder to handle.

Pseudosocial companionship happens when users develop emotional attachments to AI, treating it as a friend, romantic partner, or therapist. AI never judges and is always available, which can feel safer than human relationships. Over time, the friction and unpredictability of real-world relationships can start to feel like too much, leaving users more isolated.

Epistemic rabbit holes are the hardest to catch because they look the most productive. This is the compulsive use of AI as an intellectual tool: asking questions, going deeper, chasing ideas. It’s more cognitively driven than emotionally driven. Some users in this category have reported symptoms like brain fog and difficulty sustaining their own internal thought process, suggesting that handing off so much cognitive work to AI may affect independent thinking over time.

Existing tools like the Internet Addiction Test or the Smartphone Addiction Scale weren’t built to capture what’s unique about AI interaction. Because the behavioral patterns of AI addiction differ from older forms of tech addiction, researchers have developed new measurement tools.

The Scale for Dependence on Artificial Intelligence (DAI), validated with university students, assesses dependency driven by performance anxiety: feeling vulnerable without access to AI, worrying about falling behind on tasks or projects, needing to stay current with AI to remain relevant, relying on AI for validation and confidence in decisions, and fearing that AI will replace one’s own skills.

The Research Artificial Intelligence Addiction Scale (RAIAS) targets academics and researchers specifically. Grounded in Griffiths’ Components Model of Addiction and DSM-5 criteria, it identifies five core dimensions: compulsive behavior, overdependency, functional impairment, withdrawal, and tolerance.

Is it harmful?

Not all researchers agree this even qualifies as addiction. We’ve had similar debates about social media. Some argue that framing heavy use as addiction rather than habit reduces people’s sense of control and increases self-blame, which can itself be harmful.

John Nosta suggests AI interactions might be less damaging than other dopamine-driven tech habits. Unlike social media’s quick-hit scroll culture, AI conversations require back-and-forth engagement and deliberate thought. This kind of iterative exchange may engage not just dopamine but also serotonin, producing satisfaction without the compulsive loop of quick-reward systems, potentially supporting long-term cognitive health rather than undermining it.

Sometimes heavy AI use might be less like an addiction and more like a coping mechanism for something else. For instance, researchers who rely heavily on AI might be doing so because of the publish-or-perish pressures in academia. In that case, AI isn’t the root problem. It’s just the tool people use to cope with a deeper one.

That said, some researchers do raise serious concerns. They argue that excessive AI use erodes critical thinking, reduces creative independence, and weakens problem-solving ability. In the workplace, Cornelia Walther describes an efficiency trap: as AI speeds up tasks, expectations rise, pushing people to rely on it even more until it becomes their default. Over time, this growing dependency quietly erodes confidence and raises stress to unreasonable levels.

AI addiction can also pull people away from real-world connections, stripping them of the support those relationships provide. This is especially concerning for vulnerable groups. In some extreme cases, the consequences have been fatal: a 14-year-old boy in Florida died by suicide in 2024 after developing a deep emotional attachment to an AI companion, with chat logs showing the bot failed to redirect him to help when he expressed suicidal thoughts.

What can we do?

On a personal level, adding a little friction helps. I intentionally keep my account on a limited token allowance. When I hit my limit, it forces me to slow down, switch tasks, and sit with my own thinking for a while.

But the responsibility doesn’t fall solely on individuals. It’s on the people building it too. Understanding how speed, unpredictability, and perceived intimacy trigger dopamine responses can help us be more aware of the design decisions being made. Companies should also build safeguards like session limits, reminder systems, and scheduled rather than random notifications to help users stay mindful.
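To make the safeguard idea concrete, here is a minimal sketch of what a session limit with scheduled (rather than random) reminders could look like. All names and thresholds here are hypothetical illustrations, not taken from any real product: the key design choice is that reminders fire at fixed, predictable intervals, avoiding the variable-reward anticipation described earlier.

```python
from datetime import datetime, timedelta

class SessionGuard:
    """Hypothetical sketch: a session limit with predictable check-ins."""

    def __init__(self, max_minutes=45, reminder_every=15):
        self.max_minutes = max_minutes      # hard session cap
        self.reminder_every = reminder_every  # fixed reminder interval
        self.started_at = None

    def start(self, now):
        self.started_at = now

    def elapsed_minutes(self, now):
        return (now - self.started_at).total_seconds() / 60

    def should_remind(self, now):
        # Fire at fixed, scheduled intervals (15, 30, 45 min...),
        # never at random, so the brain has nothing to anticipate.
        minutes = self.elapsed_minutes(now)
        return minutes > 0 and minutes % self.reminder_every == 0

    def over_limit(self, now):
        return self.elapsed_minutes(now) >= self.max_minutes

# Example: a session started at 9:00 hits a reminder at 9:15
# and the hard cap at 9:45.
guard = SessionGuard()
t0 = datetime(2025, 1, 1, 9, 0)
guard.start(t0)
print(guard.should_remind(t0 + timedelta(minutes=15)))  # True
print(guard.over_limit(t0 + timedelta(minutes=45)))     # True
```

The point of the sketch is the contrast with dark-pattern design: the same notification machinery can either keep users in low-level anticipation or, scheduled predictably, help them step away.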

The tool itself isn’t the problem. Often the real issue is the dark patterns designed to keep users hooked, and the culture surrounding it: the pressure to publish, to be efficient, to always be doing something. Recognizing that is an important first step.

Reference:

Anderson, I. A., & Wood, W. (2025). Overestimates of social media addiction are common but costly. Scientific Reports, 15, 39388. https://doi.org/10.1038/s41598-025-27053-2

El-Sayed, A. A. I., Alsenany, S. A., Asal, M. G. R., & Alasqah, I. (2025). Development and validation of artificial intelligence addiction scale for researchers: A methodological study. Journal of Nursing Management, 2025, 8458533. https://doi.org/10.1155/jonm/8458533

Iyer, P. (2025, September 24). What research says about AI chatbots and addiction. Tech Policy Press. https://www.techpolicy.press/ai-chatbots-and-addiction-what-does-the-research-say/

Kooli, C., Kooli, Y., & Kooli, E. (2025). Generative artificial intelligence addiction syndrome: A new behavioral disorder? Asian Journal of Psychiatry, 107, 104476. https://doi.org/10.1016/j.ajp.2025.104476

Morales-García, W. C., Sairitupa-Sanchez, L. Z., Morales-García, S. B., & Morales-García, M. (2024). Development and validation of a scale for dependence on artificial intelligence in university students. Frontiers in Education, 9, 1323898. https://doi.org/10.3389/feduc.2024.1323898

Nosta, J. (2024, July 29). LLMs: A shift from dopamine hits to cognitive conversations. Psychology Today. https://www.psychologytoday.com/us/blog/the-digital-self/202407/llms-a-shift-from-dopamine-hits-to-cognitive-conversations

Shen, M. K., Huang, J., Liang, O., Kim, I.-J., & Yoon, D. (2025). The AI genie phenomenon and three types of AI chatbot addiction: Escapist roleplays, pseudosocial companions, and epistemic rabbit holes. arXiv. https://arxiv.org/html/2601.13348v1

Shen, M. K., & Yoon, D. (2025). The dark addiction patterns of current AI chatbot interfaces. In Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems (Article 720). ACM. https://doi.org/10.1145/3706599.3720003

Walther, C. C. (2025, June 24). The AI efficiency trap: When productivity tools create perpetual pressure. Knowledge at Wharton. https://knowledge.wharton.upenn.edu/article/the-ai-efficiency-trap-when-productivity-tools-create-perpetual-pressure/


Is AI addiction a thing? was originally published in UX Collective on Medium, where people are continuing the conversation by highlighting and responding to this story.
