People are concerned about the media using AI for important stories, but less so for sports and entertainment

Advances in artificial intelligence (AI) are transforming many aspects of contemporary life, and the news industry is no exception. In a year of record-breaking elections taking place around the world, there has been much discussion about the potential impact of deepfakes and other synthetic content on democracies, and about further disruption to the business models and trust on which independent journalism relies.

Most audiences are only starting to form opinions about AI and news. In this year’s Digital News Report survey, conducted at the Reuters Institute for the Study of Journalism at the University of Oxford, we asked questions on this topic in 28 markets and supplemented this with in-depth interviews in the UK, US and Mexico.

Our findings show widespread ambivalence about the use of these technologies. They also offer insights for publishers who want to implement them without further compromising trust in news, which has declined in many countries in recent years.

It’s important to keep in mind that awareness of AI is still relatively low, with around half of our sample (49% globally and 56% in the UK) having read little or nothing about it. Among the more informed, however, concerns about the accuracy of information and the potential for misinformation top the list.

Manipulated images and videos, for example from the war in Gaza, are becoming increasingly common on social media and are already causing confusion. One male participant said: “I have seen many examples and sometimes they are very good. Fortunately, they are still quite easy to recognize, but in five years they will not be distinguishable.”

Some participants felt that the widespread use of generative AI technologies – those that can create text, image and video content for users – would likely make it harder to identify misinformation, which is particularly worrying when it comes to important topics such as politics and elections.

Across 47 countries, 59% of respondents say they are concerned about their ability to tell what is real from what is fake online, up three percentage points from last year. Others are more optimistic, saying these technologies could be used to produce more relevant and useful content.

Use of AI in the news industry

The news industry is turning to AI for two reasons. First, publishers hope that automating background processes such as transcription, editing and layout will reduce costs. Second, AI technologies could help personalize the content itself, making it more attractive to audiences.

Over the past year, media companies have deployed a range of AI applications with varying degrees of human oversight, from AI-generated summaries and illustrations to stories written by AI and even AI-generated news anchors.

How does the audience feel about all this? Across 28 markets, respondents to our survey were broadly uncomfortable with the use of AI when content is mostly created by AI with only some human oversight. In contrast, there is less discomfort when AI is used to assist (human) journalists, for example in transcribing interviews or summarizing materials for research.

Here, respondents are generally more comfortable than uncomfortable. However, we see differences at the country level that may be related to the signals people receive from the media. Coverage of AI in the UK press, for example, has been characterized as predominantly negative and sensationalist, while coverage in the US media has been dominated by the leading role of US companies and the opportunities for jobs and growth.

Acceptance of AI is also closely related to the importance and seriousness of the topic being covered. People say they feel less comfortable with AI-generated news on subjects such as politics and crime, and more comfortable with sports or entertainment news – topics where mistakes tend to have less serious consequences.

“Chatbots really shouldn’t be used for more important news like war or political news, because the potential misinformation could be the reason someone votes for a particular candidate rather than another,” a 20-year-old man from the UK told us.

Our research also shows that people who trust the news in general are more likely to accept AI deployments where humans (journalists) remain in control than those where they don’t. This is because people who trust the news are also more likely to trust publishers to use AI responsibly.

The interviews we conducted show a similar pattern at the level of specific news outlets: people who trust certain news organizations – especially those they consider the most reputable – also tend to have greater trust in their use of AI.

On the other hand, the implementation of these technologies could further undermine the trust of audiences that are already skeptical or cynical about news organizations.

As one woman from the US put it: “If a news organization is caught using fake images or videos in any way, they should be held accountable, and I would lose trust in them even if they were transparent that the content was created using AI.”

Thinking carefully about when disclosure is needed and how to communicate it, especially in these early stages when AI is still unfamiliar to many people, will be a critical factor in maintaining trust. This is especially true when AI is used to create new content that audiences come into direct contact with; we know from our interviews that this is the use audiences view with the most distrust.

Overall, we are still in the early stages of journalists’ use of AI, and that makes this a high-risk moment for news organizations. Our data shows that audiences remain deeply ambivalent about these technologies, which means publishers should be extremely cautious about where and how they deploy them.

With growing concerns about online platforms being flooded with synthetic content, trusted brands that use these technologies responsibly could be rewarded. But get it wrong, and that trust can easily be lost.
