Is Apple Intelligence Making Up Words Now?

As powerful as LLMs can be, all have one shared weakness: hallucination. For reasons beyond our understanding, AI models have a habit of making things up, totally out of the blue. A response might be accurate, with well-cited sources and relevant information; then, all of a sudden, the AI pushes a false claim, or mistakenly interprets an ironic forum comment as fact. (That’s how you end up with Google’s AI Overviews recommending adding glue to your pizza.) Some LLMs may hallucinate less than others, but none are immune. That’s why anytime you use a chatbot, you’ll see some kind of warning on-screen, letting you know that the AI can make mistakes.

Apple Intelligence, Apple’s AI platform, is no exception here. When the company first rolled out its AI, it included notification summaries as a “perk.” Apple had to quickly backtrack, however, once the feature started incorrectly summarizing news alerts—such as in one case, when Apple Intelligence condensed a BBC headline to read that UnitedHealthcare shooting suspect Luigi Mangione had killed himself in jail. The company later restored the feature but added some additional guardrails, like putting news summaries in italics.

Apple Intelligence might be making up new words

I stumbled across this post on the r/iOS subreddit on Thursday, which adds an interesting note to the AI hallucination discussion. The post reads, “Anyone else get fake words in their AI summaries?” with an attached screenshot showing notification summaries for the Acme Weather app. The first sentence reads: “Imbixtent light rain for the hour.” Ah, imbixtent rain. At least it’s only for an hour. Wait: imbixtent?

Despite sounding plausibly like a real word, “imbixtent” is, in fact, totally made up. The poster didn’t share exactly what the original notification said, so we can’t know what words Apple Intelligence was working from here. What we do know is that the poster saw “imbixtent” three times, and they aren’t alone. Looking past the jabs at the weather app OP uses, some comments on the post confirm that others have seen Apple Intelligence making up fake words in its notification summaries. One commenter said they’ve seen “flemulating” in one summary and “tranqued” in a Mail summary; another shared that they saw “stricively” instead of “strictly” on two separate occasions.

I can’t find any other examples on the internet showing off this phenomenon, and I personally don’t use notification summaries on my iPhone, so I haven’t seen this issue myself. I couldn’t say for sure how widespread it is, or whether it’s limited to a certain version of iOS, a specific device, or a particular app. One of the commenters has a theory, however: They think that when the on-device AI model Apple Intelligence uses can’t shorten the original phrase on its own, it invents a portmanteau to compensate. In their words, the AI “yolos” a “vibes-word,” like imbixtent. They say this happens to them most with the Weather app’s summaries.

Does Apple Intelligence make up words in your summaries?

Again, there’s no telling whether this affects a large number of Apple users or just a small fraction. The fact that I can only find one post about it, with two commenters sharing similar experiences, leads me to believe it’s the latter, but I’d love to hear from anyone who has a similar experience. If you use Apple Intelligence’s notification summaries, please let me know if you’ve seen made-up words on your end. I may need to turn the feature on to keep an eye out.
