Popular tech news outlet CNET was recently outed for publishing Artificial Intelligence (AI)-generated articles about personal finance for months without making any prior public announcement or disclosure to its readers.
Online marketer and Authority Hacker co-founder Gael Breton first made the discovery and posted it to Twitter on Jan. 11, where he said that CNET started its experimentation with AI in early Nov. 2022 with topics such as “What is the Difference Between a Bank and a Credit Union” and “What are NSF Fees and Why Do Banks Charge Them?”
To date, CNET has published about 75 of these “financial explainer” articles using AI, Breton reported in a follow-up analysis he published two days later.
The byline for these articles was “CNET Money Staff,” wording that, according to Futurism.com, “clearly seems to imply that human writers are its primary authors.” Only when readers click on the byline do they see that the article was actually AI-generated. A dropdown description reads, “This article was generated using automation technology and thoroughly edited and fact-checked by an editor on our editorial staff,” the outlet reported.
According to Futurism, the news sparked outrage and concern, mostly over the fear that AI-generated journalism could potentially eliminate work for entry-level writers and produce inaccurate information.
“It’s tough already,” one Twitter user said in response to Breton’s post, “because if you are going to consume the news you either have to find a few sources you trust, or fact check everything. If you are going to add AI written articles into the mix it doesn’t make a difference. You still have to figure out the truth afterwards.”
Another wrote, “This is great, so now soon the low-quality spam by these ‘big, trusted’ sites will reach proportions never before imagined possible. Near-zero cost and near-unlimited scale.”
“I see it as inevitable and editor positions will become more important than entry-level writers,” another wrote, concerned about AI replacing entry-level writers. “Doesn’t mean I have to like it, though.”
Threat to Aspiring Journalists
A writer on Crackberry.com worried that the use of AI would replace the on-the-job experience critical for aspiring journalists.
“It was a job like that … that got me into this position today,” the author wrote in a post to the site. “If that first step on the ladder becomes a robot, how is anybody supposed to follow in my footsteps?”
The criticism led CNET editor-in-chief Connie Guglielmo to respond with an explanation on the site, admitting that starting in Nov. 2022, CNET “decided to do an experiment” to see “if there’s a pragmatic use case for an AI assist on basic explainers around financial services.”
CNET also hoped to determine whether “the tech can help our busy staff of reporters and editors with their job to cover topics from a 360-degree perspective” to “create the most helpful content so our audience can make better decisions.”
Guglielmo went on to say that every article published with “AI assist” is “reviewed, fact-checked and edited by an editor with topical expertise before we hit publish.”
Futurism, however, found CNET’s AI-written articles rife with what the outlet called “boneheaded errors.” Because the articles were written at a “level so basic that it would only really be of interest to those with extremely low information about personal finance in the first place,” readers taking the inaccurate information at face value as advice from financial experts could be led into poor decisions.
Sorting ‘Fact From Fiction’
While AI generators, the outlet reported, are “legitimately impressive at spitting out glib, true-sounding prose, they have a notoriously difficult time distinguishing fact from fiction.”
Crackberry voiced the same misgivings about AI-generated journalism. “Can we trust AI tools to know what they’re doing?” the writer asked.
“The most glaring flaw … is that it speaks with unquestioning confidence, even when it’s wrong. There’s not clarity into the inner workings to know how reliable the information it provides truly is … because it’s deriving what it knows by neutrally evaluating … sources on the internet and not using a human brain that can gut check what it’s about to say.”