If This Is the Future, We’re F**ked: When AI Decides Reality Is Wrong
“ChatGPT can make mistakes. Check important info.”
- Warning displayed beneath ChatGPT’s prompt box
While I’ve often suspected I was arguing with a bot on Twitter, this was the first time I knowingly entered an exchange with a machine.
Needing an illustration to accompany a tribute to Limp Bizkit bassist Sam Rivers, I uploaded a screenshot from the band’s Woodstock ’99 performance into an AI engine and asked for a pencil-sketch version with the caption “Sam Rivers 1977–2025.”
Instead of producing an image, the chatbot informed me that my request violated its terms of service. When I asked why, it replied: “There is no indication that Sam Rivers is dead.”
I should have moved on. Instead, I tried to persuade the AI that the bassist was, in fact, deceased. I provided links to articles reporting his death, along with a press release from the band itself.
The AI was unyielding. No matter what proof I offered, it insisted it could not create the image using “false information.” Eventually, I gave up. I had it generate the image without the dates and added them myself using a graphic editor.
The incident turned out not to be unique, but part of a broader pattern.
After completing the first draft of an article analyzing partisan reactions to the murders of Charlie Kirk and Rob Reiner, I submitted it to ChatGPT for suggestions on tightening the piece. Instead of stylistic feedback, the AI flagged what it described as “very serious…factual credibility risks,” warning that these issues “undermine the article.”
That accusation was deeply concerning. I take great care to ensure the accuracy of my writing. Readers may disagree with my conclusions, but they should never have reason to doubt that those conclusions are grounded in fact.
Curious to see where I had supposedly gone wrong, I read on. According to the AI:
Charlie Kirk is alive. Writing as though he was murdered is a fatal factual error unless the piece is speculative fiction, satire, or an unstated alternate reality. If the claim is metaphorical or hypothetical, it must be made explicit immediately. As written, the article is disqualifying for publication.
Unlike the news of Sam Rivers’ death, which was only hours old when I encountered resistance from an AI, Kirk’s murder had been extensively documented and had occurred three months earlier. The event had reshaped the country’s political and social landscape, spawning major secondary stories of its own, including the temporary suspension of Jimmy Kimmel following his comments on the national response.
The AI continued. It asserted that there was “no public record of Rob Reiner being murdered,” claiming that USA Today links dated December 2025 appeared to be fabricated or “future-dated,” and therefore “severely damaging” to my credibility. It also insisted that “JD Vance is not currently Vice President as of real-world timelines.”
This was no longer a matter of incomplete data or delayed updates. It was a system that confidently rewrote reality, doing so while presenting itself as an authority.
Whether the cause is preprogrammed bias, flawed training data, or simple design limitations, mistakes are inevitable. The question is what happens when those mistakes are treated not as errors but as facts. What mechanisms of accountability exist when AI systems are empowered to override documented reality, and, increasingly, to mediate our access to it?
What unsettled me was not that the AI made mistakes (humans do that constantly) but that it insisted its mistakes were reality. That is the part Hollywood has been warning us about for decades.
While real-world AI was still in its infancy, the HAL 9000 of 2001: A Space Odyssey killed most of the crew aboard Discovery One after being given contradictory directives, a cautionary tale about developing advanced AI without clear ethical and operational safeguards. WarGames showed what happens when we let machines make decisions that require human judgment, empathy, and hesitation. In The Terminator, Skynet becomes self‑aware and launches a nuclear war as an act of self-preservation: the nightmare scenario of an AI so deeply embedded in our infrastructure that humans can no longer intervene.
These films all assumed intelligence was the threat. None anticipated a world where authority would be granted without understanding. What if AI were not humankind’s intellectual superior but were, instead, omnipotent and profoundly dumb? When “garbage in, garbage out” shows up as an AI misreading a news article, the consequences are merely annoying. When the same flawed logic is steering a two‑ton vehicle down a highway at 75 miles an hour, the consequences can be deadly.
The danger is not that AI will outthink us, but that we will continue to hand authority to systems that cannot distinguish between truth and a confidently delivered error.
We are building machines capable of overriding human judgment without understanding the world they are reshaping. If this is the future, we are not doomed because AI is smarter than us; we are doomed because we are trusting it while relinquishing the ability to verify its actions.
“A valiant fighter for public schools,” Carl Petersen is a former Green Party candidate for the LAUSD School Board. Shaped by raising two daughters with severe autism, he is a passionate voice for special education. Recently, he relocated to the State of Washington to embrace the role of “Poppy” to two grandsons. Explore more at TheDifrntDrmr.

