1. The ambivalence of information means receiving mixed, conflicting, or contradictory messages that make it hard to know what is true and what is false. In today’s digital age, where facts, opinions, and misinformation coexist online, this ambivalence is silently embedding itself into society’s fabric. As people consume and share unclear or contradictory content, the foundations of informed decision-making (critical thinking and trust in knowledge) grow weaker. This erosion threatens how future generations understand the world, undermining the pillars of education, journalism, and public discourse.
2. Large language models like GPT are trained on vast swaths of internet data: a mix of verified knowledge, opinion, propaganda, and misinformation. These models don’t “know” truth; they generate what is statistically probable, not necessarily what is factual.
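To make the “probable, not factual” point concrete, here is a minimal toy sketch of next-token sampling. The vocabulary and probabilities below are invented for illustration and are not any real model’s internals; the point is that the model weighs candidate continuations by likelihood and samples one, and nothing in that loop checks for truth.

```python
import random

# Toy next-token distribution a model might assign after the prompt
# "The capital of Australia is". These probabilities are invented for
# illustration: a model trained on web text can rank a common wrong
# answer above the correct one, because training optimizes likelihood
# over the data, not factual accuracy.
next_token_probs = {
    "Sydney": 0.55,    # frequent wrong answer in casual web text
    "Canberra": 0.35,  # the factually correct answer
    "Melbourne": 0.10,
}

def sample_next_token(probs: dict[str, float]) -> str:
    """Sample one token in proportion to its assigned probability."""
    tokens = list(probs)
    weights = [probs[t] for t in tokens]
    return random.choices(tokens, weights=weights, k=1)[0]

# Each call returns whatever is statistically likely; no step in the
# sampling loop ever asks whether the completion is true.
for _ in range(5):
    print("The capital of Australia is", sample_next_token(next_token_probs))
```

Real models operate over vastly larger vocabularies and contexts, but the sampling step is the same in spirit: likelihood in, text out.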
3. The result? When users, whether students, journalists, or content creators, rely on GPT outputs without critical thinking or fact-checking, they unintentionally contribute to a growing fog: content that sounds authoritative but may be misleading, biased, or contradictory. In doing so, they amplify the ambivalence of information, and the line between truth and falsehood becomes increasingly blurry.
4. To be fair, GPTs can reduce ambiguity, but only in the hands of informed, discerning users who craft precise prompts and verify sources. Unfortunately, that level of awareness is the exception, not the rule.
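As a hedged illustration of what precise prompting plus source verification can look like in practice (the prompts, the sample answer, and the citation heuristic below are all invented for this sketch, not a prescribed method), the idea is to scope the question, demand checkable citations, and treat any unsourced claim as unverified by default:

```python
import re

# A vague prompt invites a confident-sounding but unanchored answer.
VAGUE_PROMPT = "Tell me about screen time and teenagers."

# A precise prompt (illustrative wording) scopes the question, demands
# checkable citations, and asks the model to admit uncertainty.
PRECISE_PROMPT = (
    "Summarize findings from 2015-2023 on daily screen time and sleep "
    "duration in teenagers aged 13-19. Cite each claim as (Author, Year). "
    "If the evidence is mixed or you are unsure, say so explicitly."
)

def unsourced_sentences(answer: str) -> list[str]:
    """Flag sentences lacking an (Author, Year)-style citation so a
    human can fact-check them; a crude heuristic, not a truth detector."""
    sentences = re.split(r"(?<=[.!?])\s+", answer.strip())
    cited = re.compile(r"\([A-Z][^)]*\d{4}\)")
    return [s for s in sentences if s and not cited.search(s)]

# Example model output, invented for illustration.
answer = (
    "Heavy screen use is linked to shorter sleep (Hale, 2018). "
    "Teenagers who use phones at night always develop insomnia."
)
for claim in unsourced_sentences(answer):
    print("VERIFY BEFORE SHARING:", claim)
```

The heuristic is deliberately crude; the point is the workflow: the human, not the model, remains the arbiter of what counts as verified.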
5. In a world flooded with AI-generated text, clarity is no longer a default; it is a responsibility.
https://orcid.org/0000-0002-9097-2246
