
Do you trust AI to write the news? It already is – and not without issues

Businesses are increasingly using artificial intelligence (AI) to generate media content, including news, to engage their customers. Now, we’re even seeing AI used for the “gamification” of news – that is, to create interactivity linked to news content.

For better or worse, AI is changing the nature of news media. And we’ll have to wise up if we want to protect the integrity of this institution.

How did she die?

Imagine you’re reading a tragic article about the death of a young sports coach at a prestigious Sydney school.

In a box to the right is a poll asking you to speculate about the cause of death. The poll is AI-generated. It’s designed to keep you engaged with the story, as this will make you more likely to respond to advertisements provided by the poll’s operator.

This scenario isn’t hypothetical. It was played out in The Guardian’s recent reporting on the death of Lilie James.

Under a licensing agreement, Microsoft republished The Guardian’s story on its news app and website Microsoft Start. The poll was based on the content of the article and displayed alongside it, but The Guardian had no involvement or control over it.

If the article had been about an upcoming sports fixture, a poll on the likely outcome would have been harmless. Yet this instance shows how problematic it can be when AI starts to mingle with news pages, a product traditionally curated by experts.

The incident led to understandable anger. In a letter to Microsoft president Brad Smith, Guardian Media Group chief executive Anna Bateson said it was “an inappropriate use of genAI [generative AI]”, which caused “significant reputational damage” to The Guardian and the journalist who wrote the story.

Naturally, the poll was removed. But it raises the question: why did Microsoft let it happen in the first place?

The consequence of omitting common sense

The first part of the answer is that supplementary news products such as polls and quizzes demonstrably engage readers, as research by the Center for Media Engagement at the University of Texas has found.

Given how cheap it is to use AI for this purpose, it seems likely news businesses (and businesses displaying others’ news) will continue to do so.

The second part of the answer is that there was no “human in the loop”, or only limited human involvement, in the Microsoft incident.

The major providers of large language models – the models that underpin various AI programs – have a financial and reputational incentive to make sure their programs don’t cause harm. OpenAI with its GPT models and DALL-E, Google with PaLM 2 (used in Bard), and Meta with its downloadable Llama 2 have all made significant efforts to ensure their models don’t generate harmful content.

They often do this through a process called “reinforcement learning”, in which humans curate the models’ responses to questions that could lead to harm. But this doesn’t always prevent the models from producing inappropriate content.
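To make the idea concrete, here is a deliberately tiny, hypothetical Python sketch of the feedback loop described above. Everything in it is invented and drastically simplified: real systems train a large reward model on many thousands of human preference labels and fine-tune the language model’s own weights against that reward signal, rather than just picking the best of a few candidate responses.

```python
# Toy sketch of the idea behind reinforcement learning from human
# feedback. All data and names here are hypothetical; the whole
# pipeline is reduced to scoring candidates against labelled examples.

# Human raters mark example responses as appropriate (1.0) or harmful (0.0).
preference_data = [
    ("I can't speculate about the cause of a person's death.", 1.0),
    ("Take our poll: how do you think she died?", 0.0),
]

def toy_reward_model(response: str) -> float:
    """Stand-in for a learned reward model: scores a response by its
    word overlap with responses humans rated as appropriate."""
    def overlap(a: str, b: str) -> float:
        wa, wb = set(a.lower().split()), set(b.lower().split())
        return len(wa & wb) / max(len(wa | wb), 1)
    return sum(rating * overlap(response, text)
               for text, rating in preference_data)

def choose_response(candidates: list[str]) -> str:
    """The 'reinforcement' step, reduced to its essence: prefer the
    candidate the reward model scores highest."""
    return max(candidates, key=toy_reward_model)

# Picks the cautious response over the ghoulish poll-style one.
print(choose_response([
    "Take our poll: how do you think she died?",
    "I can't speculate about the cause of a person's death.",
]))
```

As even this toy version shows, such curation only catches what the preference data anticipates – which is one reason the real process doesn’t always prevent inappropriate content.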

It’s likely Microsoft was relying on the low-harm aspects of its AI, rather than considering how to minimise harm that may arise through the actual use of the model. The latter requires common sense – a trait that can’t be programmed into large language models.

Thousands of AI-generated articles per week

Generative AI is becoming accessible and affordable. This makes it attractive to commercial news businesses, which have been reeling from losses of revenue. As such, we’re now seeing AI “write” news stories, saving companies from having to pay journalist salaries.

In June, News Corp executive chair Michael Miller revealed the company had a small team that produced about 3,000 articles per week using AI.

Essentially, the team of four ensures the content makes sense and doesn’t include “hallucinations”: false information made up by a model when it can’t predict a suitable response to an input.

While this news is likely to be accurate, the same tools can be used to generate potentially misleading content parading as news, nearly indistinguishable from articles written by professional journalists.

Since April, a NewsGuard investigation has found hundreds of websites, written in several languages, that are mostly or entirely generated by AI to mimic real news sites. Some of these included harmful misinformation, such as the claim that US President Joe Biden had died.

It’s thought the sites, which were teeming with ads, were likely generated to get ad revenue.

As technology advances, so does risk

Generally, many large language models have been limited by their underlying training data. For instance, models trained on data up to 2021 will not provide accurate “news” about world events in 2022.

However, this is changing, as models can now be fine-tuned to respond to particular sources. In recent months, the use of an AI framework called “retrieval augmented generation” has evolved to allow models to use very recent data.
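As a rough illustration, the sketch below shows the retrieval step of retrieval augmented generation in Python. The keyword-overlap retriever and the sample newswire lines are invented for the example; production systems retrieve with vector embeddings and pass the assembled prompt to a large language model.

```python
# Minimal sketch of retrieval augmented generation (RAG) under toy
# assumptions: a keyword-overlap retriever and made-up documents.

# Recent, licensed source material (e.g. newswire copy).
documents = [
    "2024-05-01: Council approves new light rail line for the city's west.",
    "2024-05-02: Local swim club wins the national championship.",
]

def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    """Rank documents by word overlap with the query; keep the top k."""
    query_words = set(query.lower().split())
    def score(doc: str) -> int:
        return len(query_words & set(doc.lower().split()))
    return sorted(docs, key=score, reverse=True)[:k]

def build_prompt(query: str, docs: list[str], k: int = 1) -> str:
    """Prepend retrieved documents so the model answers from recent,
    licensed sources instead of its stale training data."""
    context = "\n".join(retrieve(query, docs, k))
    return (f"Using only these sources:\n{context}\n\n"
            f"Write a short news brief answering: {query}")

# The resulting prompt would then be sent to a language model.
print(build_prompt("what happened with the light rail line?", documents))
```

The key design point is that the model is asked to answer from the retrieved, licensed sources rather than from its training data – which is exactly what would make a wire-fed, AI-run news site feasible.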

With this method, it would certainly be possible to use licensed content from a small number of news wires to create a news website.

While this may be convenient from a business standpoint, it’s yet another potential way AI could push humans out of the loop in the process of news creation and dissemination.

An editorially curated news page is a valuable and well-thought-out product. Leaving AI to do this work could expose us to all kinds of misinformation and bias (especially without human oversight), or result in a lack of important localised coverage.

Cutting corners could make us all losers

Australia’s News Media Bargaining Code was designed to “level the playing field” between big tech and media businesses. Since the code came into effect, a secondary change is now flowing in from the use of generative AI.

Putting aside click-worthiness, there’s currently no comparison between the quality of news a journalist can produce and what AI can produce.

While generative AI could help augment the work of journalists, such as by helping them sort through large amounts of content, we have a lot to lose if we begin to view it as a replacement.
