Day 8
AI Basics
AI Hallucinations: Why AI Makes Stuff Up
AI doesn't lie — it predicts plausible text. Sometimes plausible and true diverge. Here's how to protect yourself.
BrainGem · braingem.ai/learn
Okay, important talk — this is the one thing that can get you in trouble with AI, and I want to make sure you're protected. AI doesn't "lie." It doesn't know what truth is. Remember Day 2? It predicts the most plausible next words. Sometimes "plausible" and "true" are the same thing. Sometimes they're not — and the AI has zero idea which is which. It'll state a made-up statistic with the exact same confidence as a real one.
This is called hallucination, and it's not a bug that anyone is going to fix. It's built into how generative AI works. The model that's brilliant at drafting your emails is the same model that will invent a citation that doesn't exist, fabricate a product feature, or give you a date that's off by a year. Hallucinations will get rarer over time (they already have), but they'll never hit zero. So you need verification habits.
Three habits that'll keep you safe:
1. Cross-check facts. Numbers, names, dates, quotes — verify anything specific against an actual source. Takes 30 seconds and prevents embarrassment.
2. Ask the AI itself. "Are you confident about this? What might be wrong with this answer?" AI often catches its own mistakes when prompted to look for them. It's not perfect, but it's a useful gut check.
3. Never send unreviewed AI output to anyone who matters. Not to a client. Not to your boss. Not into a production system. Read it first. Every time.
💡 Try This Today
Ask AI about a topic you know really well — your industry, your hobby, your company's history. Look for subtle errors: dates slightly wrong, details mixed up, confident claims that sound right but aren't. Training your eye to spot hallucinations in familiar territory helps you catch them in unfamiliar territory too.