AI is rewriting the rules. Language is following.

Language has always evolved. But this time, it’s being rewired faster than most of us can keep up with, and our sentences are caught in the middle.

There’s a word you’ll have noticed lately. It turns up in LinkedIn posts, academic lectures, workplace emails, and the kind of semi-formal prose that used to carry a whiff of genuine effort. The word is delve. It has been there, technically, for centuries. But something changed around the end of 2022, and now it appears with an unusual frequency that isn’t quite human anymore.

That’s not a coincidence. It’s a signal.

Language has always changed. New words emerge, old ones fall away, spelling rules shift, and conventions that once felt permanent quietly dissolve. None of that is alarming. What’s different now is the pace of the change, who (or what) is driving it, and what we might be slowly giving up in the process.

[Illustration: a circular feedback loop. A person writes on a scroll that feeds into a mechanical machine with a screen, keyboard, and gears; the machine's output flows back as a continuous ribbon that influences the person's writing again.]

The machine at the keyboard

When ChatGPT launched in November 2022, most of the conversation centred on whether AI could write. The more pressing question, it turns out, is whether AI can write in ways that change how we write.

A study from the Max Planck Institute for Human Development analysed close to 280,000 YouTube videos from academic channels and tracked the frequency of words associated with AI-generated text. The findings are clear: in the first 18 months after ChatGPT's release, the use of the word "delve" increased by 48%, "realm" by 35%, and "adept" by 51% in academic spoken content. These weren't scripted readings. In a further analysis of 50 randomly selected videos in which "delve" appeared, roughly half the speakers showed no signs of reading from a script, suggesting the word had worked its way into their natural speech.
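The kind of measurement the study describes can be sketched in a few lines. This is a toy version, not the researchers' pipeline: the transcripts and the marker-word list are invented for illustration, and a real analysis would run over hundreds of thousands of transcripts with a much richer word set.

```python
import re

# Hypothetical transcripts keyed by year, standing in for a real corpus.
transcripts = {
    2021: "Today we explore the topic and examine the evidence in detail.",
    2024: "Let us delve into this realm and see how adept models have become.",
}

# Words the literature associates with AI-style prose (illustrative subset).
MARKER_WORDS = {"delve", "realm", "adept"}

def marker_rate(text: str) -> float:
    """Frequency of marker words per 1,000 tokens."""
    tokens = re.findall(r"[a-z']+", text.lower())
    if not tokens:
        return 0.0
    hits = sum(1 for t in tokens if t in MARKER_WORDS)
    return 1000 * hits / len(tokens)

for year, text in transcripts.items():
    print(year, round(marker_rate(text), 1))
```

Comparing the rate before and after a cutoff date is what lets a study say "delve" rose by 48%: the word was always in the dictionary, but its per-token frequency jumped.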

The study’s lead author, Hiromu Yakura, summarises it plainly:

“We internalise this virtual vocabulary into daily communication.”

This is the feedback loop that should concern us. We train AI on human-generated text. AI redistributes that text back to us, statistically optimised and lightly laundered. We absorb it, repeat it, and feed it back in again. Over time, the boundary between what the machine sounds like and what we sound like starts to blur.

The “AI giveaway” problem (and why it’s more complicated than it looks)

Spotting AI-generated text has become something of a popular (and often warranted) sport, and its favourite exhibit is the em dash. Some have taken to calling it the “ChatGPT hyphen,” arguing its use is a strong indicator that a piece was machine-written, partly because not all keyboards have a dedicated key for it, and most people reach for a regular hyphen instead. The claim spread quickly on Reddit and beyond. Now it appears regularly in comment sections wherever suspicion takes hold.

It’s a satisfying theory. It’s also not especially reliable.

The em dash didn’t originate with AI. It has been a literary device for centuries, used by Dickinson, by Woolf, by journalists and writers well before anyone had heard of a large language model. The popular suspicion may be a vestige of earlier, less sophisticated models, which did tend to overuse the em dash as a way of mimicking formal or stylised writing. As ChatGPT itself put it: “some early AI-generated content, especially before 2023, used em dashes more frequently than the average human writer.” Current models vary considerably in how often they use it, and can adjust their punctuation based on tone prompts. The goalposts keep moving.

The same problem applies to the growing blacklist of suspect phrases: furthermore, it is worth noting, in today’s world, delve, at its core, tapestry. Multiple detection services rank phrases like “at its core” and “at the heart of the matter” among the top AI giveaways, describing them as convenient fillers used when shifting from definition to explanation. The list isn’t wrong, exactly. But as these phrases become widely known as AI markers, writers start avoiding them, which nudges AI systems to avoid them too, which invalidates the list. At Montclair State University in the US, staff were advised to view good grammar itself as suspect, on the grounds that AI-written essays “tend to be atypically correct in grammar, usage, and editing.”
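A crude phrase-based detector makes the weakness concrete. The blacklist below is a hypothetical toy, far smaller than what detection services use, but the failure mode is the same: it happily flags a perfectly human sentence.

```python
# Hypothetical blacklist of "AI giveaway" phrases (illustrative only).
SUSPECT_PHRASES = [
    "at its core", "it is worth noting", "in today's world",
    "furthermore", "delve", "tapestry",
]

def suspicion_score(text: str) -> int:
    """Count blacklist hits -- a crude proxy for 'sounds like AI'."""
    lowered = text.lower()
    return sum(lowered.count(p) for p in SUSPECT_PHRASES)

human = "At its core, my argument is that we must delve deeper."
print(suspicion_score(human))  # a human sentence still scores hits
```

And because the list is public, both people and models drift away from it, so the scores decay toward noise over time.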

We have arrived at a peculiar moment: grammatically correct prose is now evidence of automation. Polished sentences invite suspicion. Writers are describing a new self-consciousness, asking themselves whether that em dash was really necessary, whether their voice sounds perhaps a little too impersonal.

On a personal level, the number of times I’ve had to mournfully wave goodbye to an em dash in my own writing to dodge this problem has been quite a downgrade — and I am, frankly, rather salty about it.

Is our grammar actually getting worse?

This is where the question gets actively contested, and the answer is probably not the one you’d expect.

On the surface, grammar appears to be getting better. Tools like Grammarly have been shown to reduce grammatical errors and broaden vocabulary, particularly among non-native speakers. AI assistants catch typos, smooth out syntax, and flag misplaced commas with a consistency no human editor could sustain across millions of documents simultaneously. By measurable surface standards, AI-augmented writing is often more correct than the drafts it started from.

But surface correctness and writing competence are not the same thing, and confusing the two is where the conversation tends to go wrong.

A spellchecker doesn't teach you to spell. A grammar assistant doesn't teach you grammar. It just catches what you missed. In that sense, AI correction isn't improving grammar. It's covering for the absence of it. The output looks better while the underlying skill quietly atrophies. It's a band-aid: the surface looks healed while the wound stays open underneath.

The real question is whether people are developing and retaining the ability to write without the tool. It’s not that they are ignoring grammar rules; it’s that they may be offloading grammatical competence to a tool, which is a meaningful distinction.

The worry is that overdependence on AI could lead to reduced effort in crafting well-structured sentences and critically evaluating sources, and could ultimately weaken the ability to perform independent analysis. A 2025 MIT Media Lab study explored this more directly, using EEG brain scans to measure the mental effort involved in writing essays using a chatbot, a search engine, or no tools at all. The results found that excessive reliance on AI-driven solutions may contribute to “cognitive atrophy” and a shrinking of critical thinking abilities. It’s worth noting that this was a small, non-peer-reviewed preprint, and the outcomes shouldn’t be overstated. But it aligns with a broader pattern in the research that’s harder to dismiss entirely.

In short, AI can produce grammatically immaculate prose. What it can’t do is replicate the thinking that good writing requires: finding the right structure, or the phrase that only lands because you’ve been searching for it. When that process is routinely skipped, something other than grammar is at risk.

[Illustration: a machine labelled "Standardisation Filter" with a globe visible through its central window. On the left, diverse speech bubbles and shapes enter, labelled "Diverse," "Unique," "Messy," "Varied," "Complex," "Raw Data," and "Opinions." On the right, the machine outputs identical blocks stacked neatly, labelled "Uniform," "Same," "Standardised," "Identical," "Clean," "Normalised," and "Output."]

The homogenisation problem

This is where the detection culture starts to collapse under its own logic, at least in part. AI didn’t invent its vocabulary or its rhythms. It learned them from us. Some people are undoubtedly using it to pass off generated text as their own, and that suspicion is warranted. But for others, getting flagged has nothing to do with AI use at all. They simply write in ways that AI absorbed and redistributed at scale. The accusation arrives anyway. The tell isn’t artificial. It’s just human writing, thrown back at us in enough volume that we’ve started mistaking the original for the copy.

Perhaps the most structurally significant change, though, is not the one that gets the most attention. It’s not whether any individual piece of writing sounds like a chatbot. It’s what happens to linguistic diversity at scale.

A 2024 Cornell University study recruited 118 participants from the US and India and asked them to write essays with and without AI assistance. The data showed that when both groups had access to it, their writing flattened toward a common style, mainly to the detriment of Indian expression. Senior author Aditya Vashistha described it as follows:

“People start writing similarly to others, and that’s not what we want. One of the beautiful things about the world is the diversity that we have.”

Indian participants got less out of the technology, and paid a higher price for using it, as its suggestions led them to adopt Western writing styles, altering not just what was written but also how it was written.
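One way to see "flattening" as a number is to measure how much two writers' vocabularies overlap. The sketch below uses simple Jaccard similarity on toy sentences I invented for illustration; the Cornell researchers used far more sophisticated measures, but the intuition is the same: with an assistant nudging both writers toward the same stock phrasing, overlap rises.

```python
def jaccard(a: str, b: str) -> float:
    """Vocabulary overlap between two texts (0 = disjoint, 1 = identical)."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    if not wa and not wb:
        return 1.0
    return len(wa & wb) / len(wa | wb)

# Toy essays: unaided, the two styles differ; with an assistant,
# both drift toward the same boilerplate openers.
solo_a = "monsoon evenings shaped how my family told stories"
solo_b = "rainy commutes influence the tales we swap at work"
ai_a = "in today's world, it is worth noting that stories shape us"
ai_b = "in today's world, it is worth noting that commutes shape us"

print(round(jaccard(solo_a, solo_b), 2))  # low overlap: distinct voices
print(round(jaccard(ai_a, ai_b), 2))      # high overlap: converged style
```

Run across a whole corpus, a rising average pairwise similarity is exactly the homogenisation the study reports.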

This is not a minor point. English has many varieties, shaped by different histories, rhetorical traditions, and ways of organising thought. When AI writing assistance is trained predominantly on Western, and more specifically American, text, it gravitates toward those norms and pulls everything else along. AI’s influence is accelerating concerns about homogenising regional and international variations of English.

A parallel pattern is visible in academic writing. An analysis examining over a million social science abstracts found a marked rise in words associated with ChatGPT after its release. Non-native English-speaking regions showed the sharpest increase.

There’s a contradiction buried here. For non-native English speakers, AI assistance can reduce barriers and improve fluency, though that depends on how actively they engage with the output rather than simply adopting it. The tool helps. But it helps in a particular direction, toward a particular kind of English, and that direction is not culturally neutral.

Language as a living thing

None of this is to say that AI is destroying language. Language is considerably more resilient than its mourners tend to suggest. English has absorbed printing presses, telegraph shorthand, text message conventions, and the full chaos of internet communication, emerging recognisable each time. New forms take shape; old ones fall away. That’s what languages do.

What is truly new is that for the first time, a non-human system is an active participant in that process, and one operating at enormous scale. UK Members of Parliament were reportedly using ChatGPT-influenced phrases in their speeches, with the characteristically American phrase "I rise to speak" appearing 26 times in a single parliamentary day. That's not a stylistic curiosity. It's an example of a machine smuggling idiom from one cultural context into another.

Researchers suggest we are approaching a point where AI’s impacts on language will move between two poles: standardisation in professional and formal contexts, and something more expressive in personal and emotional ones. There are already signs of self-correction: people who actively avoid “delve,” writers who deliberately introduce imperfection, communities that police AI prose as a matter of cultural authenticity. Whether that resistance scales is a different question.

The deeper risk, though, is not that everyone will write the same way. It’s that we lose conscious awareness of how our language is being shaped, and by whom. Writing is not just communication. It is thinking made visible. When the tool that helps us write begins to think for us, what we lose is harder to see than a grammatical error, and considerably harder to recover.

Thanks for reading! 📖

If you enjoyed this, follow me on Medium for more on design, psychology and technology.

AI is rewriting the rules. Language is following. was originally published in UX Collective on Medium, where people are continuing the conversation by highlighting and responding to this story.
