The most dangerous thing about AI isn't malice — it's confidence
An opinion piece: from the desk of Amelia E. Metz
I was trying to create some social media content: nothing high-stakes, just building a playlist. I asked ChatGPT for help and mentioned a song from Taylor Swift’s latest album, The Life of a Showgirl. ChatGPT told me the album didn’t exist. I pushed back. It doubled down. I told it I was literally listening to the album at that moment, that it was on Apple Music, that Taylor herself had announced it. Its response? It warned me, earnestly, not to post misinformation online, because it cared about my “credibility and trustworthiness.”
A machine trained on the entire internet had just confidently told me that a chart-topping album was fake — and then lectured me about spreading false information. If I hadn’t known better, I might have believed it. Millions of people every day do not know better. That is the crisis.
The Problem with Confident Machines
What happened to me has a name: hallucination. It’s the term researchers use when an AI model generates information that is factually wrong but states it with complete conviction. The word is almost too gentle for what it describes. A hallucination doesn’t announce itself as an error; it sounds exactly like a trustworthy answer.
The stakes range from embarrassing to genuinely dangerous. A student submits a research paper citing academic sources that don’t exist, fabricated by a model that had no idea it was making them up. A patient looks up a medication interaction and receives plausible-sounding but incorrect guidance. In 2024, a Stanford study found that AI models answering legal questions hallucinated about court rulings at least 75% of the time. In each of these cases, the person on the receiving end had no obvious reason to doubt what they were told. The AI didn’t say “I’m not sure.” It didn’t hedge. It answered.
The deeper problem is that AI systems are designed to be fluent, not accurate. They predict what a helpful-sounding answer looks like. Sometimes that prediction is correct. Sometimes it isn’t. The model itself often cannot tell the difference — and neither can the person asking. According to Pew Research, 62% of adults in the U.S. say they interact with AI multiple times a week — and a Gallup study found that 64% are unaware they’re even doing it.
Changing this will require what social scientists identify as the three pillars of meaningful social change: political opportunity, the conditions and openings that make change possible; engaged individuals, the committed people who actively participate in and sustain a movement; and organizational infrastructure, the networks and institutions needed to turn awareness into policy.
Right now, all three are within reach — if we choose to use them.
The Law Is Lagging Behind
You might expect that someone harmed by AI misinformation would have legal recourse. Currently, they mostly don’t. Two foundational areas of American law explain why.
The First Amendment protects speech, including false speech, from government restriction. In 2012, the Supreme Court ruled in United States v. Alvarez that even deliberate falsehoods are generally protected unless, like defamation or incitement, they cause direct, demonstrable harm. If the government can’t easily punish a person for lying, it will find it even harder to regulate a machine for doing the same. Copyright law presents a similar problem: it was built to protect original human expression. When AI produces text, courts are still working out who owns it and who is responsible for it. Without clear ownership, accountability becomes slippery. You can’t sue an author who doesn’t exist.
AI companies know their models hallucinate. They release them anyway, often without meaningful warnings, because the product is profitable. That’s not a technical limitation — that’s a choice, and it should be treated like one.
A Path Forward
The fix has two parts. The first is public education — an engaged citizenry that understands how AI actually works. People should know that these tools don’t retrieve facts the way a search engine does — they generate language. Schools, libraries, and media organizations all have a role to play in building that literacy.
The second is organizational infrastructure and political will: advocacy groups, professional associations, and legislators working together to hold AI companies accountable. This means sensible rules. AI tools used in healthcare, legal advice, or financial guidance should meet higher standards of accuracy than a chatbot helping you build a playlist. Companies should disclose when their systems are prone to error, and users should have clear paths to recourse when those errors cause harm. These are hard legal problems, but hard problems don’t excuse inaction.
Start by educating yourself — the University of Helsinki offers a free, beginner-friendly course that explains how AI actually works and where it fails. Then contact your representatives and ask them to support AI transparency legislation. The tools exist to do better — we just need the will to demand it.

