
AI-generated misinformation: 3 teachable skills to help combat it

In my Digital Studies class, I asked students to pose a question to ChatGPT and discuss the results. To my surprise, some asked ChatGPT for my bio.

ChatGPT said I received my PhD from two different universities and in two different fields, only one of which was the actual focus of my PhD.

This made for a fun lesson, but it also highlighted a serious risk of generative AI tools: they increase the likelihood that we will fall victim to convincing misinformation.

To counter this threat, educators must teach the skills needed to operate in a world with AI-generated misinformation.

Exacerbating the misinformation problem

We should expect more attempts by conspiracy theorists and misinformation opportunists to use AI to deceive others for their own gain.
(Shutterstock)

Generative AI will make the already difficult task of separating evidence-based information from misinformation and disinformation even harder.

Text-based tools like ChatGPT can create compelling-sounding academic articles on a subject, complete with citations. This can deceive people who are unfamiliar with the article's topic. Video, audio and image-based AI can convincingly spoof people's faces, voices and even behaviors, creating apparent evidence of actions or conversations that never took place.

As AI-generated text, images and videos are combined to create fake news, we should expect more attempts by conspiracy theorists and misinformation opportunists to use these tools to deceive others for their own gain.

Before generative AI was widely available, it was possible to create fake videos, news stories or scientific articles, but doing so required time and resources. Now convincing disinformation can be created far more quickly, opening new possibilities to destabilize democracies around the world.

New applications for critical thinking needed

To date, a focus of teaching critical media literacy in both public and secondary schools has been asking students to engage deeply with a text and come to understand it well so that they can summarize it, ask questions about it and critique it.

This approach is less likely to serve us well in an era when AI can so easily falsify the very clues we look for when assessing quality.

While there are no easy answers to the problem of misinformation, I suggest that teaching these three key skills will better equip us all to be more resilient in the face of these threats:

1. Lateral reading of texts

Instead of reading a single article, blog or website thoroughly on first encounter, we need to prepare students for a new set of filtering skills, often called lateral reading.

In lateral reading, we ask students to look for clues before reading deeply. Questions to ask include: Who wrote the article? What are their credentials? What are their references, and do those references relate to the topic being discussed? What claims do they make, and are these claims well supported in the scholarly literature?

To do this task well, students must be prepared to think about different kinds of research.

A teenager holding a smartphone. Lateral reading means looking for clues before reading deeply.
(Shutterstock)

2. Research skills

In much popular thinking and everyday practice, the term research has shifted to refer to a web search. However, this reflects a misunderstanding of what the evidence-gathering process actually involves.

We should teach students to distinguish sound, evidence-based claims from conspiracy theories and misinformation.

Students at all levels must learn to evaluate the quality of academic and non-academic sources. This means teaching students about research quality, journal quality and different kinds of expertise. For example, a physician might speak about vaccines on a popular podcast, but if that doctor is not a vaccine specialist, or if the totality of the evidence does not support their claims, it does not matter how convincing those claims are.

Thinking about research quality also means becoming familiar with things like sample sizes, methods, and the scientific processes of peer review and falsifiability.

3. Technological competence

Many people do not realize that AI is not actually intelligent: it is made up of language- and image-processing algorithms that detect patterns and then feed them back to us in randomized but statistically likely ways.

Likewise, many people are unaware that the content we see on social media is dictated by algorithms that prioritize engagement to make money for advertisers.

We rarely consider why these technologies show us the content they do. We do not think about who is developing the technology and what role programmer bias plays in what we see.



If we all become more critical of these technologies, follow the money, and ask who benefits when we are served certain content, we will become more resilient to the misinformation spread through these tools.

These three skills – lateral reading, research skills and technological competence – make us more resilient to misinformation of all kinds, and less vulnerable to the new threat of AI-generated misinformation.
