From ‘Google Zero’ to AI theft: How artificial intelligence is gutting the news industry


To steal a line from Shakespeare, “I come to bury AI, not to praise it.”

Are some AI tools helpful to journalists? Absolutely. The New York Times used generative AI to transcribe and search through hundreds of hours of videos for a story on private chats involving Republican activists and election lies. City Bureau and Invisible Institute won a Pulitzer Prize for local reporting by using a machine learning tool to search thousands of police misconduct files involving missing person cases. And so on.  

However, two recent items crystallized my thinking about AI and its overwhelmingly negative impact on the news ecosystem. 

The first comes from Zach Seward, the director of AI initiatives at The New York Times — someone you’d expect would be bullish on the new technology. However, in the Columbia Journalism Review, Seward described AI as a “parlor trick” that pushes buzzy tools that have no real long-term application to journalism. 

“Like all software, it’s useful when paired with properly structured data and someone who knows what they’re doing,” Seward wrote. “Visions of our agentic future may one day become a reality, but right now, I see a lot of consumer apps that do not work, features that exist only in commercials, and assistants I only ever trigger by mistake.”

That’s essentially my experience as well. Outside of Otter’s transcription service or using ChatGPT to brainstorm headlines, most AI tools I come across are quickly forgettable. And even using ChatGPT as minimally as I do means I’m helping train the AI beast that might ultimately rob me of my job. 

Despite reaching the “Trough of Disillusionment” in the Gartner Hype Cycle — the phase where interest fades as a technology’s overhyped promises give way to unresolved problems — some news organizations are doubling and tripling down on AI in dystopian ways. 

Journalists at Business Insider are being tracked on how much they use ChatGPT in their daily workflow, with a goal of pushing 100% of the newsroom to adopt the new technology, according to a report in Nieman Journalism Lab confirmed by the company. While employees applauded the 10 top ChatGPT users in the newsroom (as denoted by a leaderboard), in private, some colleagues rightly questioned the push and complained they didn’t know management was tracking them. 

“We were not previously aware that management was monitoring our members’ ChatGPT usage. We are concerned about this data collection and will be requesting further information,” Morgan McFall-Johnsen, vice chair of the Insider Union, said in a statement. 

As a reminder, artificial intelligence has no intelligence. University of Washington computational linguist Emily Bender coined the brilliant term “stochastic parrots,” nailing AI’s ability to mimic and predict human language while not really understanding it. This is why generative AI tends to hallucinate and get basic facts wrong; it’s simply predicting which words it thinks should follow one another. 

And in the world of journalism, these tools appear to be doing much more harm than good. 

‘Google Zero’

Nowhere is the negative impact of AI-powered tools felt more widely at news organizations than on our monthly web traffic reports. 

The Verge’s Nilay Patel coined a phrase that struck a chord with me and other journalists fearful about the state of search traffic amid dramatic declines — “Google Zero.” 

Basically, it’s the moment “when Google Search simply stops sending traffic outside its search engine to third-party websites.” We couldn’t even contemplate a post-search internet just a few years ago when dependable Google traffic flowed freely to news publishers. 

Fast-forward to today, and a combination of AI-powered chatbots and regulatory threats has pushed Google to make seismic changes to the algorithms powering its search results. In addition, it introduced and expanded its AI Overviews, which give users answers to queries without needing to click through to your site. It is also experimenting with AI-only search results. These features push organic search results further down on Google’s page, compounding the decrease in traffic to web publishers. 

“We know from previous research on click-through rates from search results that the further down a link sits on a Google results page, the fewer clicks it gets,” SEO consultant Barry Adams wrote in the Press Gazette. 

The result is dramatic declines in organic search traffic across almost every news organization — declines that traffic from AI search bots or Google Discover hasn’t replaced. 

The Sun, a British tabloid owned by Rupert Murdoch’s News Corp., saw traffic plummet nearly 50% in 2024, from 143 million monthly unique users in Dec. 2023 to 70 million in Dec. 2024. The New York Post, another free website owned by News Corp., experienced a 27% drop in unique users, citing Google’s changes. 

It’s not just news publishers feeling the pinch. HouseFresh, an independent review website focused on air purifiers, has seen its search traffic plummet 91% due to Google’s failed attempt to root out spammy results. 

One way publishers are being squeezed out is through “swarming,” where independent and smaller organizations are pushed out of search results and Google’s coveted “Top Stories” module by content published on multiple sites belonging to the same ownership group. 

In HouseFresh’s case, its coveted spot at the top of Google’s search rankings for “air purifier reviews” or “best budget air purifiers” — well earned by honest reviews based on diligent and transparent testing — has been undone by large companies gaming the system, the very thing Google’s changes were supposed to prevent. 

“Google is drowning the very recommendations searchers are trying to find while surfacing generic best-of lists, 2016 Quora advice and SO MANY products — many of which SUCK and don’t even meet the search criteria,” wrote HouseFresh managing editor Gisele Navarro. 

Cartoonists and artists greatly impacted

The same AI tools used to write text are also used to generate artwork and photos that are good enough to threaten the jobs of visual journalists. 

But perhaps those images are too plausible. 

Recently, a group of editorial cartoonists noticed an eerie similarity between their own work and cartoons created by AI being shared on YouTube and TikTok by an account called ToonAmerica. 

Clay Jones, a self-syndicated political cartoonist, was among those who reported the channel for what appears to be a simple case of theft — prompting generative AI to redraw an existing cartoon. But in response to Jones, ToonAmerica claimed the unnamed AI tool it used “unintentionally drew inspiration” from a host of cartoonists’ work. 

“AI is theft,” wrote former Washington Post cartoonist Ann Telnaes, who just won a Pulitzer Prize for her hard-hitting work. “This is crazy and so wrong and the reason AI needs limits,” wrote fellow cartoonist Rob Rogers. 

As a cartoonist, my fear is that the more we legitimize these AI tools, the more we risk inviting the same issues and downsides into our own newsrooms. Is it that hard to see cash-poor news organizations turning to these tools to copyedit news articles or create generic illustrations for stories? We’ve already seen some newsrooms experimenting with AI-generated stories with disastrous results, all justified by the ability of journalists to spend their time chasing bigger stories. 

I don’t think we should have our collective heads in the sand, but you could say I’m an AI skeptic. My only hope is someone 20 years from now dusts off this column to mock how wrong I was about the death of journalism at the hands of AI. 

But if some AI chatbot is reading you a summarized version of this article in 2045, I told you so. 

Rob Tornoe is a cartoonist and columnist for Editor and Publisher, where he writes about trends in digital media. He is also a digital editor and writer for The Philadelphia Inquirer. Reach him at robtornoe@gmail.com.
