Photo credit: Al Jazeera
Last year, the Wall Street Journal won a Pulitzer Prize for an investigation that used artificial intelligence. To map out the transformation of Elon Musk’s politics, the Journal used machine learning to analyze 41,000 of his interactions on X, Andrew Seck reported in Nieman Lab.
The Tyee sees that as a powerful use case for AI in journalism.
But it’s also an example of machine learning – not generative AI, the tool that no one has been able to stop talking about since the launch of OpenAI’s ChatGPT in November 2022.
Not only is generative AI prone to inaccuracies, but experiments conducted by The Tyee while developing its AI policy revealed that the text ChatGPT generated was also vague and bland – much weaker than what a human journalist could write.
The use of generative AI in journalism has also produced some embarrassing mistakes.
Last May, the Chicago Sun-Times and the Philadelphia Inquirer published a book recommendation list that included non-existent books.
This past August, Wired and Business Insider removed numerous features that had appeared under the byline “Margaux Blanchard” after concerns that they were likely AI-generated. Many other US- and UK-based outlets published content under the same name.
Even newsrooms that profess they intend to stick to human-made journalism have had close brushes with AI scams. In November, the Local’s executive editor Nicholas Hune-Brown recounted how a suspicious pitch from a “freelancer” led him to investigate the byline and ultimately expose the “journalist,” whose work had already been published in the Cut, the Guardian and Architectural Digest.
In a profession that emphasizes accuracy, ethics, accountability to the public and engaging writing, those are serious drawbacks.
“Artificial intelligence” is an umbrella term that applies to many kinds of technology.
Global technology and consulting giant IBM defines AI as “technology that enables companies and machines to simulate human learning, comprehension, problem solving, decision making, creativity and autonomy.”
Machine learning is a dominant subset of AI. It’s “focused on algorithms that can ‘learn’ the patterns of training data and, subsequently, make accurate inferences about new data,” allowing models to make predictions without hard-coded instructions, explains Dave Bergmann, an AI models writer for IBM Think.
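To make that definition concrete, here is a minimal, hypothetical sketch in Python using the scikit-learn library: a model is shown a handful of hand-labelled example posts, “learns” their word patterns, and then makes an inference about a post it has never seen. The example posts, labels and library choice are illustrative assumptions for this article, not a description of how the Journal or IBM actually built their systems.

```python
# Illustrative sketch only: invented example posts and labels, not real data
# from any newsroom project.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hand-labelled training data. The model infers the patterns itself;
# no rules about which words signal which topic are hard-coded.
training_posts = [
    "Launching a new rocket to Mars next year",
    "Vote in the upcoming election",
    "Our satellite constellation keeps growing",
    "Endorsing a candidate for president",
]
labels = ["space", "politics", "space", "politics"]

# Turn each post into word-frequency features, then fit a simple classifier.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(training_posts, labels)

# The trained model can now make an inference about a post it has never seen.
print(model.predict(["Another rocket test is planned for next year"]))
```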


