CNET issued corrections on 41 of the 77 stories the outlet published that were written using an AI tool. In a note published today, CNET editor-in-chief Connie Guglielmo defended the use of the AI writing tool but said that an internal review of stories uncovered numerous errors in the articles at the center of the controversy.
Earlier this month, Futurism broke the news that CNET had been quietly publishing articles written by AI for months without drawing much public attention or making a formal announcement. In a follow-up story, the outlet noted numerous errors in a CNET article about compound interest, which eventually resulted in a lengthy correction. Following the errors, a disclaimer appeared at the top of all AI-written stories: “We are currently reviewing this story for accuracy. If we find errors, we will update and issue corrections.”
Last week, WM Leader reported that automated tools have been in use at CNET for much longer than the article-writing robot and that staff sometimes didn’t know whether content was written by a machine or a human co-worker. The AI-written articles are designed to game Google searches with SEO-friendly keywords so that lucrative affiliate ads can be plastered on the pages.
After weeks of debate about CNET’s disclosure policies around AI tools, Red Ventures and CNET leadership told staff in a meeting on Friday that the company was temporarily pausing AI-generated content across all of its websites. The errors, though, don’t appear to be souring CNET on the technology.
“Expect CNET to continue exploring and testing how AI can be used to help our teams as they go about their work testing, researching and crafting the unbiased advice and fact-based reporting we’re known for,” Guglielmo wrote in her memo today. “The process may not always be easy or pretty, but we’re going to continue embracing it – and any new tech that we believe makes life better.”