Raphael Thys
Digital Transformation and Digital Products in the Age of AI
Futurist · Keynote Speaker · AI Coach

How to spot a hallucination before it spots you?

The AI will never tell you it's guessing. Here's how to tell for yourself.

Addressed to: Discover-level users | Theme: Quality & Trust

AI models generate text by predicting what comes next. They do not look things up in a database, and they do not "know" when they are wrong. When they produce false information (a made-up citation, a wrong date, a non-existent regulation), they present it with the same fluency and confidence as everything else. This is called a hallucination, and spotting it is your responsibility, not the tool's.


The tips


Be suspicious of precision. If the AI gives you a specific article number, a named official, a percentage with two decimal places, or an exact date, verify it. High specificity is where hallucinations hide, because it feels authoritative.

Watch for "too perfect" answers. If the response fits your question suspiciously well (exactly the regulation you hoped existed, exactly the precedent you needed), slow down. Convenience is a warning sign.


Check citations one by one. AI can produce reference titles that look real, combine a real author with a fake paper, or invent plausible URLs. Never trust a citation without opening the source yourself.

Ask the same question a different way. If the answer changes significantly, the model was not drawing from solid ground.
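One way to make this tip concrete is to collect several answers to rephrasings of the same question and measure how much they agree. A minimal sketch, assuming you have pasted the answers in yourself; `majority_agreement` and the example answers are illustrative, not a real API:

```python
from collections import Counter

def majority_agreement(answers):
    """Return the most common answer and the fraction of responses
    that agree with it. Low agreement means the model is likely guessing."""
    normalized = [a.strip().lower() for a in answers]
    answer, count = Counter(normalized).most_common(1)[0]
    return answer, count / len(normalized)

# Hypothetical example: three rephrasings of the same factual question
answers = [
    "Regulation (EU) 2016/679",
    "regulation (eu) 2016/679",
    "Directive 95/46/EC",  # the odd one out: a cue to verify before trusting
]
best, agreement = majority_agreement(answers)
print(best, agreement)  # low agreement is a red flag, not a verdict
```

Note that high agreement is not proof of correctness either; a model can repeat the same hallucination consistently. This check only tells you when to be extra suspicious.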


Know the high-risk zones: names, dates, legal references, numerical figures, quotations, and translations of binding texts are where hallucinations are most frequent and most dangerous.


Key takeaways


Fluency is not accuracy. A confident answer is not a correct answer.


The riskiest hallucinations look the most credible: specific, precise, well-formatted.


Verify before you use. Every time.

Remember


The AI will not flag its own mistakes. It has no uncertainty signal, no blinking red light. The verification step is yours. Build it into your workflow the same way you would proofread a document before sending it, not as an optional extra, but as part of the job.

Go deeper

  • Practice exercise: Ask the GenAI Hub to name three EU regulations relevant to your unit's work. Then verify each one: does it exist? Is the title correct? Is the article number real? Record what you find. This exercise usually surprises people.
  • Related tips: The two-source rule (Use-level) · Checking AI-produced citations (Use-level) · Confidence ≠ correctness (Discover-level, Myth vs. Reality series).
  • Self-Consistency: a prompt pattern that uses variance across repeated answers to improve reliability.
  • Technical background: Hallucinations occur because language models are trained to predict statistically likely text, not to retrieve verified facts. They have no internal mechanism that distinguishes "I know this" from "this sounds right." This is a structural feature of the technology, not a bug that will be patched away. Mitigation strategies (retrieval-augmented generation, grounding, citation verification layers) reduce the frequency but do not eliminate the risk.