In news that isn’t surprising but should probably worry publishers, and readers who would like to retain a modicum of trust in journalism, The Guardian announced today that it has discovered citations to nonexistent articles attributed to the newspaper, references invented by the artificial intelligence chatbot ChatGPT.
The discovery happened after a journalist at the paper was contacted about an article they couldn’t remember writing, though it concerned a subject they had a record of covering. After doing some additional research, they could find no trace of the article’s existence, because ChatGPT had simply made up the reference.
“In response to being asked about articles on this subject, the AI had simply made some up,” says Chris Moran, The Guardian’s head of editorial innovation. “Its fluency, and the vast training data it is built on, meant that the existence of the invented piece even seemed believable to the person who absolutely hadn’t written it.”
While the chatbot hadn’t recreated or forged the look of the paper (or written a fake article), it had created a citation that didn’t exist and used it to suggest authority. If these misattributions continue, “It could well feed conspiracy theories about the mysterious removal of articles on sensitive issues that never existed in the first place,” Moran notes.
Given that AI technology is being embraced at a rapid pace — ChatGPT took just three months to register 100 million users, roughly three times faster than TikTok — publications will need to work quickly to combat this type of misinformation. The Guardian’s expedited AI plan includes the creation of a working group and small engineering team to “focus on learning about the technology, considering the public policy and IP questions around it, listening to academics and practitioners, talking to other organizations, consulting and training our staff, and exploring safely and responsibly how the technology performs when applied to journalistic use.”
So even with these concerns, the publisher says it will soon release a plan for how it will employ generative AI, suggesting that even those currently harmed by the nascent technology will need to embrace it.
In the meantime, please double-check your sources and treat ChatGPT like you used to treat Wikipedia — as a tool for further research, not as a factual guide.
Thanks for reading InsideHook.