AI Hallucination hangups - did the chat just gaslight me?
- Niv Nissenson
- Jul 3
- 2 min read

Remember that kid in school who could lie straight to the teacher's face, and when caught, would double down with an even bolder fabrication? Well, let me introduce you to my AI chatbot.
For a post I did about Thinking Machines, I asked ChatGPT to pull the names of the founding team members (which were conveniently plastered on Thinking Machines' single-page website). ChatGPT confidently gave me the names without missing a beat. Fortunately, I fact-checked: every single name was completely fabricated, cobbled together from random AI industry figures.
When I called out this error, ChatGPT acknowledged its "mistake" and claimed to have "re-opened" the correct site. This time, it confidently declared that Thinking Machines was a startup from the Philippines. Strike two.

Maybe OpenAI inserted some "attitude" towards Thinking Machines (a competitor that poached 24 OpenAI executives), because the hangups didn't end there. When I asked for help drafting a LinkedIn post, ChatGPT wrote that Thinking Machines had raised only $30M, even though the whole premise of the post was highlighting the earth-shattering $2Bn seed raise. The AI correctly discussed my speculation about what Thinking Machines was building, proving it had actually read my content, yet still generated completely false financial data.

What Are AI Hallucinations?
This phenomenon has a name: AI hallucinations. For a technical deep-dive, I recommend Lilian Weng's article on the subject (Weng formerly worked at OpenAI and is now part of Thinking Machines).
Weng defines hallucinations like this:
Hallucination in large language models usually refers to the model generating unfaithful, fabricated, inconsistent, or nonsensical content. As a term, hallucination has been somewhat generalized to cases when the model makes mistakes. Here, I would like to narrow down the problem of hallucination to cases where the model output is fabricated and not grounded by either the provided context or world knowledge.
She categorizes hallucinations into two types:
Intrinsic hallucinations: Inconsistencies with the provided context
Extrinsic hallucinations: Fabrications about world knowledge from pre-training
My ChatGPT experience appears to involve intrinsic hallucinations—the AI had access to all the correct information I provided but chose to ignore it in favor of creative fiction.
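To make the intrinsic case concrete, here is a minimal sketch (in Python, with made-up names and data, not any real tool or the actual Thinking Machines page) of the kind of check a careful user could run: compare the names a chatbot returns against the source text it was given and flag anything that isn't actually there.

```python
# Minimal sketch: flag answer names that aren't grounded in the provided context.
# Function, variable names, and data below are illustrative, not from any real tool.

def find_ungrounded_names(answer_names: list[str], source_text: str) -> list[str]:
    """Return the names in the model's answer that never appear in the source text."""
    source_lower = source_text.lower()
    return [name for name in answer_names if name.lower() not in source_lower]


# Example usage with invented data.
source_text = "Our founding team: Alice Example, Bob Placeholder."
answer_names = ["Alice Example", "Carol Fabricated"]

ungrounded = find_ungrounded_names(answer_names, source_text)
if ungrounded:
    print("Possible intrinsic hallucination, names not found in the source:", ungrounded)
```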
The AI community is actively working on solutions to reduce hallucinations. AI star Ruben Hassid proposed a specific prompting technique to tackle this issue, though early user feedback suggests it hasn't completely eliminated the problem—yet.
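I won't reproduce Hassid's exact prompt here, but the general idea behind most of these techniques is the same: tell the model to answer only from the material you give it, cite where each claim comes from, and say "I don't know" rather than guess. Below is a rough sketch of that pattern, assuming the OpenAI Python SDK; the model name and system-prompt wording are my own illustrative choices, not Hassid's technique.

```python
# Rough sketch of a grounding-oriented prompt, assuming the OpenAI Python SDK.
# The system-prompt wording and model choice are illustrative, not Hassid's actual method.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

source_page = "Text copied from the company's own website goes here."

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {
            "role": "system",
            "content": (
                "Answer only from the source text provided by the user. "
                "Quote the exact sentence that supports each claim. "
                "If the answer is not in the source text, reply 'I don't know.'"
            ),
        },
        {
            "role": "user",
            "content": f"Source text:\n{source_page}\n\nQuestion: Who are the founders?",
        },
    ],
)

print(response.choices[0].message.content)
```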
Why Understanding AI Hallucinations Matters
Recognizing AI hallucinations is crucial because they can significantly affect decision-making in vital areas like healthcare, finance, and education. If we mistakenly trust AI-generated data, it can lead us astray. This is why we always caution AI users to "Trust but verify".
AI is remarkable technology that's transforming how we work and think. But it's not infallible—and recognizing its limitations is just as important as celebrating its capabilities. Until these systems achieve perfect accuracy, healthy skepticism and verification remain our best defenses against AI's occasional flights of fancy.
This issue deserves ongoing attention as AI continues to evolve. In upcoming posts, we'll explore it further.