Gemini, ChatGPT and most other AI chatbots think alike, and it’s bad for human creativity

AI chatbots are supposed to expand your creativity, not quietly narrow it. But new research suggests that’s exactly what may be happening when you rely on them too heavily.

A study published in Engineering Applications of Artificial Intelligence shows that leading models, including Gemini, GPT, and Llama, often land in the same conceptual territory when tackling creative tasks. On their own, many responses feel original and useful. When you zoom out, though, a different pattern emerges. Across many prompts and users, outputs begin to converge.


Researchers compared human participants with a wide range of AI models using standard creativity tests, like brainstorming new uses for everyday objects or listing unrelated words. Individually, AI held up well. As a group, its ideas were far less spread out.

Different bots, same patterns

The team didn’t focus on just one system. It tested more than 20 models from different companies against over 100 people. The outcome stayed consistent across the board. AI responses showed a tighter range, even when the models came from different families.

Gemini and ChatGPT are two of the most popular AI companions. Google, OpenAI

When mapped for similarity, chatbot answers clustered closely together, while human responses covered a much wider space.
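The clustering the researchers describe can be illustrated with a toy sketch. The vectors below are made up for demonstration; the study embedded real model and human responses, but the comparison boils down to average pairwise similarity within each group:

```python
import numpy as np

def mean_pairwise_cosine(vectors):
    """Average cosine similarity across all pairs of row vectors."""
    v = vectors / np.linalg.norm(vectors, axis=1, keepdims=True)
    sims = v @ v.T
    n = len(v)
    # Exclude self-similarity on the diagonal
    return (sims.sum() - n) / (n * (n - 1))

# Toy "embeddings": chatbot answers bunched together, human answers spread out
chatbot = np.array([[1.0, 0.1], [0.9, 0.2], [1.0, 0.0], [0.95, 0.15]])
human   = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, 0.5], [0.3, -1.0]])

# A tighter cluster yields a higher average similarity
print(mean_pairwise_cosine(chatbot) > mean_pairwise_cosine(human))  # → True
```

In the real analysis, high within-group similarity for chatbot answers and low similarity for human answers is exactly the "clustered vs. spread out" pattern the study reports.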

That same pattern showed up across tasks. Whether generating ideas or unrelated concepts, models leaned on familiar structures and repeated phrasing.

Attempts to push more variety didn’t go very far. Increasing randomness helped a bit but quickly reduced coherence. Prompting the AI to be more imaginative nudged results slightly, but it didn’t meaningfully widen the range.
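The "randomness" knob being adjusted here is typically the sampling temperature. A minimal sketch of how it works, using made-up next-token scores rather than any specific model's output:

```python
import numpy as np

def softmax_with_temperature(logits, temperature):
    """Turn raw scores into a sampling distribution; higher temperature flattens it."""
    scaled = np.array(logits) / temperature
    exp = np.exp(scaled - scaled.max())  # subtract max for numerical stability
    return exp / exp.sum()

logits = [4.0, 2.0, 1.0, 0.5]  # hypothetical next-token scores

low  = softmax_with_temperature(logits, 0.5)  # sharp: almost always picks the top token
high = softmax_with_temperature(logits, 2.0)  # flatter: rarer tokens get real probability

print(low.round(3), high.round(3))
```

Raising the temperature does spread probability onto less likely tokens, which is why it "helped a bit," but push it far enough and the model starts sampling tokens that break grammar and logic, which matches the coherence loss the researchers observed.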

Why this matters for your ideas

On the surface, AI can still look impressive. Many responses match or even edge past the average human answer in originality.

The issue becomes clearer at scale. When lots of people use the same tools for brainstorming or writing, they’re often drawing from the same underlying patterns. Over time, that compresses the range of ideas, even if each one seems different in isolation.


Part of the limitation comes from what these systems lack. They don’t have lived experience, intent, or personal context. That absence may limit how far their ideas can diverge, no matter how they’re prompted.

There’s also a behavioral angle. The research suggests people may lean too heavily on AI suggestions instead of extending their own thinking. That shift can further reduce idea diversity over time.

What to watch next

This doesn’t look like a problem tied to one product. It appears to be a shared trait across modern AI systems. Even models built by different companies produced overlapping outputs, pointing to a deeper constraint in how these tools generate ideas.

For now, AI works best as a starting point, not a finish line. Use it to spark direction, then build beyond it yourself. Otherwise you're not really thinking; you're just remixing the same ideas as everyone else.
