The AI will never tell you it's guessing. Here's how to tell for yourself.
Addressed to: Discover-level users | Theme: Quality & Trust
AI models generate text by predicting what comes next. They do not look things up in a database, and they do not "know" when they are wrong. When they produce false information (a made-up citation, a wrong date, a non-existent regulation), they present it with the same fluency and confidence as everything else. This is called a hallucination, and spotting it is your responsibility, not the tool's.
The tips

- Be suspicious of precision. If the AI gives you a specific article number, a named official, a percentage with two decimal places, or an exact date, verify it. High specificity is where hallucinations hide, because it feels authoritative.
- Watch for "too perfect" answers. If the response fits your question suspiciously well (exactly the regulation you hoped existed, exactly the precedent you needed), slow down. Convenience is a warning sign.
- Check citations one by one. AI can produce reference titles that look real, combine a real author with a fake paper, or invent plausible URLs. Never trust a citation without opening the source yourself (a minimal link check is sketched just after this list).
- Ask the same question a different way. If the answer changes significantly, the model was not drawing from solid ground (a minimal comparison sketch also follows this list).
- Know the high-risk zones: names, dates, legal references, numerical figures, quotations, and translations of binding texts are where hallucinations are most frequent and most dangerous.
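If you want a quick first pass over a batch of AI-supplied citations, a short script can at least tell you which cited links resolve at all. This is a minimal sketch, assuming Python with the `requests` library and using placeholder citation data; a URL that loads is not proof that the source says what the AI claims, so you still have to open and read it.

```python
# First-pass check: do the URLs the AI cited even resolve?
# A 200 status only rules out links that point nowhere; it does NOT confirm
# that the cited text supports the claim. Read every source yourself.
import requests  # assumed available; install with `pip install requests`

citations = [
    # (title as given by the AI, URL as given by the AI) -- placeholder data
    ("Regulation (EU) 2016/679 (GDPR)", "https://eur-lex.europa.eu/eli/reg/2016/679/oj"),
    ("A paper the AI may have invented", "https://example.org/made-up-paper.pdf"),
]

for title, url in citations:
    try:
        # Some servers reject HEAD requests; switch to requests.get if needed.
        resp = requests.head(url, allow_redirects=True, timeout=10)
        status = str(resp.status_code)
    except requests.RequestException as exc:
        status = f"error: {exc.__class__.__name__}"
    print(f"{status}\t{title}\n\t{url}")

print("\nReminder: a link that resolves can still be attached to the wrong claim.")
```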
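The "ask it a different way" tip can also be scripted. The sketch below is illustrative only: `ask_model()` is a hypothetical stand-in for whatever client your GenAI Hub exposes (here it returns a canned answer so the code runs end to end), and the regular expression simply pulls out article-number-style claims so you can see which ones survive every phrasing.

```python
# Ask the same question phrased several ways, then compare the specific claims.
import re

def ask_model(prompt: str) -> str:
    # Hypothetical placeholder: replace with a real call to your model client.
    # Returns a canned answer so the sketch runs without any external service.
    return "The 72-hour duty to notify the supervisory authority is in Article 33 GDPR."

phrasings = [
    "Which article of the GDPR covers data breach notification to the authority?",
    "Under the GDPR, where is the 72-hour breach notification duty set out?",
    "Cite the GDPR provision on notifying the supervisory authority of a breach.",
]

answers = [ask_model(p) for p in phrasings]

# Pull out anything that looks like an article reference, e.g. "Article 33".
claims_per_answer = [set(re.findall(r"Article\s+\d+[a-z]?", a, flags=re.I)) for a in answers]

stable = set.intersection(*claims_per_answer)
unstable = set.union(*claims_per_answer) - stable

print("Cited in every phrasing (still verify against the source):", sorted(stable))
print("Cited only sometimes (treat as suspect):", sorted(unstable))
```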

Key takeaways

- Fluency is not accuracy. A confident answer is not a correct answer.
- The riskiest hallucinations look the most credible: specific, precise, well-formatted.
- Verify before you use. Every time.
Remember

The AI will not flag its own mistakes. It has no uncertainty signal, no blinking red light. The verification step is yours. Build it into your workflow the same way you would proofread a document before sending it: not as an optional extra, but as part of the job.
Go deeper
- Practice exercise: Ask the GenAI Hub to name three EU regulations relevant to your unit's work. Then verify each one: does it exist? Is the title correct? Is the article number real? Record what you find. This exercise usually surprises people.
- Related tips: The two-source rule (Use-level) · Checking AI-produced citations (Use-level) · Confidence ≠ correctness (Discover-level, Myth vs. Reality series).
- Self-consistency: a prompting pattern that asks the same question several times and uses the variance between answers as a reliability signal (a minimal sketch follows these notes).
- Technical background: Hallucinations occur because language models are trained to predict statistically likely text, not to retrieve verified facts. They have no internal mechanism that distinguishes "I know this" from "this sounds right." This is a structural feature of the technology, not a bug that will be patched away. Mitigation strategies (retrieval-augmented generation, grounding, citation verification layers) reduce the frequency but do not eliminate the risk.
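As a rough illustration of the self-consistency idea, the sketch below resamples the same prompt several times and measures how often the extracted claim agrees with itself. It reuses the same hypothetical `ask_model()` stand-in as above; high agreement makes a hallucination less likely but, as noted, does not eliminate the risk, so the source still has to be checked.

```python
# Self-consistency sketch: resample one prompt and measure agreement.
from collections import Counter
import re

def ask_model(prompt: str) -> str:
    # Hypothetical stand-in for your actual model client; returns a canned
    # answer so the sketch runs end to end.
    return "Breach notification to the supervisory authority is covered by Article 33 GDPR."

prompt = "Which GDPR article covers breach notification to the supervisory authority?"
samples = [ask_model(prompt) for _ in range(5)]

# Reduce each answer to the specific claim we care about (an article number).
claims = []
for text in samples:
    match = re.search(r"Article\s+\d+[a-z]?", text, flags=re.I)
    claims.append(match.group(0) if match else "no article cited")

counts = Counter(claims)
top_claim, top_count = counts.most_common(1)[0]
agreement = top_count / len(samples)

print(f"Most common claim: {top_claim} ({agreement:.0%} agreement)")
if agreement < 0.8:
    print("Low agreement: treat the answer as a guess and verify before use.")
else:
    print("High agreement: more stable, but still verify against the source.")
```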