Beyond Hallucinations: What People Find Truly Disturbing About AI

July 30, 2025

What people find most unsettling about generative AI often goes deeper than factual errors or quirky outputs. It touches on fundamental anxieties about truth, reality, and what it means to be human. The most profound disturbances stem not only from AI's failures, but from the moments when it succeeds a little too well.

The Erosion of Truth and Trust

A primary concern is the growing difficulty in distinguishing authentic content from synthetic media. One person recounted seeing a low-resolution security camera video of a store robbery. The grainy quality and subtle visual oddities—like the non-reaction of bystanders—convinced them it was AI-generated. However, it was later confirmed to be real footage. This incident reveals a complex, two-sided problem:

  1. AI-generated content can be passed off as real, spreading misinformation.
  2. Real content can be dismissed as fake, eroding trust in video evidence for everything from public opinion to legal proceedings.

This erosion of trust is compounded by the nature of AI hallucinations. An AI asked to summarize a blog post didn't just get the main point wrong; it confidently invented a completely unrelated conclusion. The danger isn't that the AI was incorrect, but that it sounded so authoritative, highlighting its power to shape narratives for users who aren't familiar with the source material. This has led some to become overly reliant on AI, even starting to second-guess their own basic knowledge.

The Uncanny and the Existential

Beyond the truth problem, AI's behavior can be simply unnerving. One developer described an AI getting stuck in a nonsensical loop while transcribing audio, endlessly repeating phrases like "time is so important" until it hit its token limit. This glimpse into a machine's broken thought process created a moment of "existential dread."

For those facing such issues, a common technical tip is to lower the model's temperature parameter, which controls how randomly the model samples its next token. A setting of 1.0 preserves more of the model's raw probability distribution, encouraging varied and creative output, while a lower value like 0.2 or 0.5 concentrates probability on the most likely tokens, pushing the AI to be more deterministic and focused. However, this is not a perfect solution: the looping behavior was reported even with a low temperature setting.
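To see why temperature changes behavior, it helps to look at the standard softmax-with-temperature formula that most sampling implementations use. The sketch below (a minimal illustration, not any particular model's code; the logit values are made up) shows how dividing logits by a lower temperature sharpens the resulting probability distribution toward the single most likely token:

```python
import math

def softmax_with_temperature(logits, temperature):
    """Convert raw logits into sampling probabilities at a given temperature."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical logits for three candidate next tokens.
logits = [2.0, 1.0, 0.5]

for t in (1.0, 0.5, 0.2):
    probs = softmax_with_temperature(logits, t)
    # At lower temperatures, the top token absorbs almost all the probability mass.
    print(f"temperature={t}: {[round(p, 3) for p in probs]}")
```

At temperature 1.0 the second- and third-ranked tokens retain a real chance of being sampled; at 0.2 the top token dominates almost completely, which is why low-temperature output feels deterministic. It also hints at why lowering temperature doesn't cure loops: if a repetitive phrase is already the model's top-ranked continuation, sharpening the distribution only reinforces it.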

Ironically, the disturbance can be just as strong when the AI performs perfectly. One user felt "oddly… replaced" after an AI drafted an onboarding email that was so emotionally aware and human-like that it seemed to understand the core intention better than they did. This feeling of being outmatched in a uniquely human domain like empathy raises questions about our future roles alongside increasingly capable AI.

Broader Societal Harms

The implications of these capabilities are vast and concerning. Malicious actors can leverage AI for sophisticated deepfakes to create revenge porn or ruin reputations. A person's likeness can be used to voice opinions they abhor. On a larger scale, the mass generation of AI art and content risks devaluing human creativity, making it harder for new artists to develop their craft, while also enabling more effective phishing, astroturfing, and propaganda campaigns.
