Google says it’s developing tools to help journalists create headlines, stories
NEW YORK (AP) — Google says it is in the early stages of developing artificial intelligence tools to help journalists write stories and headlines, and has discussed its ideas with leaders in the news industry.
The rapidly evolving technology is already raising concerns about whether it can be trusted to provide accurate reports, and whether it would eventually lead to human journalists losing their jobs in an industry that is already suffering financially.
Leaders at The New York Times, The Washington Post and News Corp., owner of The Wall Street Journal, have been briefed on what Google is working on, the Times reported Thursday.
Google, in a prepared statement, said AI-enhanced tools could give journalists options for headlines or different writing styles as they work on a story, characterizing the technology as a way to enhance their work and productivity.
“These tools are not intended to, and cannot, replace the essential role journalists have in reporting, creating and fact-checking their articles,” Google said.
The Associated Press, which would not comment Thursday on what it knows about Google’s technology, has been using a simpler form of artificial intelligence in some of its work for about a decade. For example, it uses automation to help create stories on routine sports results and corporate earnings.
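To make that kind of automation concrete, here is a minimal, purely hypothetical sketch of how a template can turn structured earnings data into a routine sentence. This is not AP's actual system; the function, field names and figures are all invented for illustration.

```python
# Hypothetical sketch of template-based story automation, in the spirit of
# the routine earnings stories described above. Not AP's actual system;
# all names and numbers here are invented for illustration.

def earnings_sentence(company, eps, expected_eps, revenue_millions, quarter):
    """Fill a fixed template from structured earnings data."""
    beat_or_miss = "beat" if eps > expected_eps else "fell short of"
    return (
        f"{company} reported {quarter} earnings of ${eps:.2f} per share, "
        f"which {beat_or_miss} analyst expectations of ${expected_eps:.2f}, "
        f"on revenue of ${revenue_millions:,.0f} million."
    )

print(earnings_sentence("Acme Corp.", 1.42, 1.35, 987, "second-quarter"))
# Acme Corp. reported second-quarter earnings of $1.42 per share, which
# beat analyst expectations of $1.35, on revenue of $987 million.
```

The appeal for newsrooms is that the template, once vetted by an editor, produces the same vetted sentence structure every time; the risk, as discussed below, is that it can only ever report what is in the data it is fed.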
A debate over how to apply the latest AI writing tools overlaps with concerns from news organizations and other professions about whether technology companies are fairly compensating them to use their published works to improve AI systems known as large language models.
To build AI systems that can produce human-like works of writing, tech companies have had to ingest large troves of written works, such as news articles and digitized books. Not all companies disclose the sources of that data, some of which is pulled off the internet.
Last week, AP and ChatGPT-maker OpenAI announced a deal for the artificial intelligence company to license AP’s archive of news stories going back to 1985. The financial terms were not disclosed.
Chatbots such as ChatGPT and Google’s own Bard are part of a class of so-called generative AI tools that are increasingly effective at mimicking different writing styles, as well as visual art and other media. Many people are already using them as a time-saver to compose emails and other routine documents or to help with homework.
However, the systems are also prone to spouting falsehoods that people unfamiliar with a subject might not notice, making them risky for applications such as gathering news or dispensing medical advice.
Google has historically shown some caution in applying its AI advances, including in its flagship search engine, which users rely on to surface accurate information. But the public fascination with ChatGPT after its release late last year has put pressure on tech companies to show off new AI products and services.
In an ideal world, the kind of technology Google is discussing could add important information to the public record, said Kelly McBride, an expert in journalism ethics at the Poynter Institute. It could document public meetings that human journalists no longer attend and create narratives about what is going on, she said.
But the technology is likely to advance faster than a new business model to support local news can be found, creating the temptation to replace human journalists with AI tools, she said.
That’s why these developments are being closely watched by unions representing journalists, such as the News Media Guild at The Associated Press.
“We’re all for technological advances helping our reporters and editors do their jobs,” said Vin Cherwoo, News Media Guild president. “We just don’t want AI doing their jobs.”
“What’s most important for us is to protect our jobs and maintain journalistic standards,” he said.
Producing routine sports or corporate earnings stories can be useful. But a baseball story generated from a box score alone would likely have missed Aaron Judge leaving a New York Yankees game with a sore toe, arguably the most important development in the team’s season, said Dick Tofel, former president of ProPublica.
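Tofel’s point follows directly from how such systems work: the story can only contain what the structured input contains. A hypothetical sketch, with invented field names, makes the limitation visible; a box score has no field for an injury, so no injury can ever appear in the output.

```python
# Hypothetical box-score generator illustrating Tofel's point. The input
# fields below are invented; nothing outside them can reach the story.

box_score = {
    "home": "Yankees", "away": "Dodgers",
    "home_runs_scored": 4, "away_runs_scored": 3,
    "winning_pitcher": "Cole",
}

def game_recap(score):
    """Turn a box score into a one-sentence recap."""
    winner, loser = (("home", "away")
                     if score["home_runs_scored"] > score["away_runs_scored"]
                     else ("away", "home"))
    return (f"The {score[winner]} beat the {score[loser]} "
            f"{score[winner + '_runs_scored']}-{score[loser + '_runs_scored']}, "
            f"with {score['winning_pitcher']} earning the win.")

print(game_recap(box_score))
# The Yankees beat the Dodgers 4-3, with Cole earning the win.
# No injury field exists in the input, so a star leaving the game hurt
# would go entirely unreported.
```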
Rather than focus so intently on AI’s capacity to write stories, journalists should consider other uses, he said. Already, it enables news organizations with limited resources to do data journalism or publish in different languages.
Tofel, who writes a journalism newsletter called Second Rough Draft, recently asked an AI tool to create an illustration in the style of the Italian Baroque painter Michelangelo Merisi da Caravaggio for a sports story he was writing. He got a useful piece of art for 14 cents.
News organizations should not ignore what the technology can do for them, he said.
“It’s like asking, ‘should the newsroom use the Internet?’ in the 1990s,” Tofel said. “The answer is yes, but not stupidly.”
Journalism organizations also need to consider the possibility that the technology, particularly in its nascent stages, will introduce errors, and that the resulting reputational damage may outweigh any financial advantage its use can bring.
“I don’t think there will be a single ethical explosion that will ruin everything,” McBride said. “Instead, I think it’s going to be more of an erosion of quality and a bunch of small things that erode confidence in the news media.”
News organizations are at a critical moment where they hold things technology companies need, like access to archived information, and can push for a financial structure that doesn’t tilt too far in the direction of companies like Google, she said. History isn’t necessarily on their side.
“This is a whole new level of threat,” she said, “and it’s not like we can turn back.”