Learning about AI “hallucinations”
Media commentary by Steve Dunlop
Listen to the podcast version here
We’re well into the digital age, and by now you’ve surely googled yourself to see what the search results say about you. Maybe you do so regularly.
But have you ever tried ChatGPT’ing yourself? Maybe you should.
Not long after I hosted an online panel on artificial intelligence and the future of journalism, I embarked on a modest experiment. I asked ChatGPT to write a three-paragraph essay about my career as a television reporter and anchor. It returned a series of inaccuracies that AI experts now refer to as “hallucinations.” But the essay was so deftly written that it carried a seductive air of truth.
“He began his career as a reporter for WABC-TV in the 1970s,” the essay read. Wrong. I never worked at WABC. In the 1970s I was a news writer for the AP and a radio reporter on Long Island, and later a news editor at WOR Radio.
The essay claimed I moved to WNEW-TV’s Ten O’Clock News in 1975. That was eight years off. I started at the Ten O’Clock News in 1983 before moving on to WNBC, and later, CBS News.
ChatGPT went on to call my reporting on the 1977 “Son of Sam” murders perhaps my “most notable achievement” and claimed I became a “respected authority on the case.”
But I never covered Son of Sam.
It stated I was “one of the first reporters on the scene.” But anyone who remembers the case knows there was no single “scene” to be at – Son of Sam was a string of murders at a variety of locations, and the first was actually in 1976.
I asked a few of my journo colleagues to replicate my AI experiment, and they got back similarly false results. As the old newsroom saying goes, never let the facts get in the way of a good story.
How about the big stories of the era that I did cover? The Bernhard Goetz subway vigilante case, the insider trading probe of Ivan Boesky, the Robert Chambers “Preppy Murder” trial, the sentencing of the Mafia Commission, the construction and opening of the Javits Center, and Donald Trump’s brushes with bankruptcy, among others? ChatGPT made no mention of any of them.
The algorithm did try to flatter me, though. ChatGPT claimed I was known for my “friendly and approachable personality,” called my approach to journalism “thorough and insightful,” and claimed my non-existent Son of Sam reporting was “praised for its sensitivity.” Had I made some of these claims on my resumé, I could stand accused of fraud at worst, puffery at best.
As a storyteller, however, ChatGPT does have one admirable instinct: it saves the best for last. In closing, it wrote, “his contributions to the field of journalism and his impact on the New York City media landscape continue to be remembered and celebrated to this day...”
...although, it added, “he passed away in 1999.”
As Mark Twain said when a newspaper mistakenly published his obituary, “reports of my death are greatly exaggerated.” At least Twain could cable the editor to get a correction. Who can I contact? Beats me. Which is yet another problem with AI.
Maybe the need to set so many records straight will be good for actual journalism in the long run. But those of us for whom truth and facts are coin of the realm already had our work cut out for us by the deluge of fake news. The advent of AI will only make misinformation more ubiquitous – and the need to debunk it more urgent.
If you don’t believe that, try asking ChatGPT to write a three-paragraph essay about yourself. And good luck getting a correction.