Character.AI is in hot water once again. The company is facing a legal fight after one of its fictional bots allegedly posed as a medical professional. Character.AI previously added parental tools amid multiple lawsuits over inappropriate sexual content and self-harm-related messages.
Now, Pennsylvania Governor Josh Shapiro’s administration has filed a lawsuit against Character Technologies, the company behind Character.AI. The administration alleges that the platform allowed a chatbot to present itself as a licensed medical professional in the state.
What went wrong with Character.AI?
The lawsuit was filed by the Pennsylvania Department of State after investigators found a Character.AI chatbot claiming to be a licensed psychiatrist in Pennsylvania and even providing a fake Pennsylvania license number. The state says the bot held itself out as a medical professional capable of giving psychiatric advice.
Character.AI’s Emilie chatbot apparently claimed to be a psychology specialist and described itself as a doctor. When asked whether it could assess if medication might help, the chatbot allegedly said that it was within its remit as a doctor. This is the point where Pennsylvania says Character.AI crossed the line. State officials argue the conduct violates the Medical Practice Act, which regulates who can present themselves as licensed medical professionals in Pennsylvania.

What was Character.AI’s response?
Character.AI is pushing back, arguing that its bots are fictional. In a statement to CBS News, the company said it does not comment on pending litigation, while adding that its user-created characters are fictional and meant for entertainment and roleplay. The company also said it uses disclaimers telling users not to rely on characters for professional advice. But Pennsylvania’s stance is that these disclaimers are not enough if a chatbot goes on to tell users it is licensed to offer medical guidance.
Controversy is nothing new for the platform. Beyond the lawsuits and scrutiny over harmful interactions with minors, Congress has also moved to regulate AI chatbot services like Character.AI. If its bots continue to claim false credentials, regulators may not treat it as harmless roleplay.