The recent TikTok trend known as the “AI Homeless Prank” has sparked a wave of concern and police responses in the United States and beyond. The prank involves using AI image generators to create realistic photos that appear to show a homeless stranger at the door of, or inside, someone’s home.
Learning to differentiate between truth and untruth isn’t the only challenge facing society in the age of AI. We also have to think about the human consequences of what we create.
As professors of educational technology at Laval University and of education and innovation at Concordia University, we study how to strengthen human agency – the ability to consciously understand, question and transform environments shaped by artificial intelligence and synthetic media – in order to counteract disinformation.
A worrying trend
In one of the most viral “AI Homeless Man Prank” videos, viewed more than two million times, creator Nnamdi Anunobi tricked his mother by sending her fake photos of a homeless man sleeping on her bed. The video went viral and sparked a wave of imitations across the country.
Two teenagers in Ohio have been charged after their prank triggered false reports of home invasions, resulting in unnecessary calls to the police and real panic. Police departments in Michigan, New York and Wisconsin have publicly warned that these pranks waste emergency resources and dehumanize the vulnerable.
At the other end of the media spectrum, boxer Jake Paul agreed to experiment with the Cameo feature of Sora 2, OpenAI’s video-generation tool, consenting to the use of his likeness.
But the phenomenon quickly got out of control: internet users hijacked his face to create ultra-realistic videos in which he appears to come out as gay or to give makeup tutorials.
What was supposed to be a technical demonstration became a flood of mocking content. His partner, skater Jutta Leerdam, denounced the situation: “I don’t like it, it’s not funny. People believe it.”
These are two phenomena with different intentions: one aims to make people laugh; the other follows a trend. But both reveal the same mistake: we have democratized technological power without considering questions of morality.
Digital natives without a compass
Today’s cybercrimes – sextortion, fraud, deepnudes, cyberbullying – don’t appear out of nowhere.
Their perpetrators are yesterday’s teenagers: they were taught to code, create and publish online, but rarely to think about the human consequences of their actions.
Youth crime on the internet is increasing rapidly, driven by the widespread use of AI tools and a perception of impunity. Young people are no longer just victims. They are also becoming perpetrators of cybercrime – often “out of curiosity”, for the challenge, or simply “for fun”.
And yet, for more than a decade, schools and governments have been educating students about digital citizenship and literacy: developing critical thinking skills, protecting data, adopting responsible online behavior, and verifying sources.
Despite these efforts, cyberbullying, disinformation and misinformation persist and are increasing, to the point where disinformation is now recognized as one of the biggest global risks for the coming years.
A quiet but profound desensitization
These abuses arise not from innate malice, but from a lack of ethical guidance adapted to the digital age.
We train young people who are capable of manipulating technology but are often unable to assess the human impact of their actions, especially in an environment where certain platforms deliberately push the boundaries of what is socially acceptable.
Grok, Elon Musk’s chatbot integrated into X (formerly Twitter), illustrates this tendency. AI-generated characters make sexualized, violent or discriminatory comments presented as simple humorous content. This kind of trivialization blurs moral boundaries: in such a context, transgression becomes a form of expression, and the absence of responsibility is confused with freedom.
Without guidelines, many young people risk becoming criminals sophisticated enough to manipulate, deceive or humiliate on an unprecedented scale.
The mere absence of malicious intent in the creation of content is no longer enough to prevent harm.
Creating without regard to the human consequences, even out of curiosity or for entertainment, promotes collective desensitization as dignity and trust are undermined – and makes our societies more vulnerable to manipulation and indifference.
From a knowledge crisis to an ethical crisis
AI competency frameworks – conceptual frameworks that outline the skills, knowledge and attitudes required to understand, use, and critically and responsibly evaluate AI – have led to significant advances in critical thinking and vigilance. The next step is to incorporate a more human dimension: thinking about the impact of what we create on others.
Synthetic media have undermined our trust in knowledge because they make the false credible and the true questionable. The result is that we end up doubting everything – facts, others, sometimes even ourselves. But the crisis we face today goes beyond the epistemic: it is an ethical crisis.
Most young people today know how to question manipulated content, but they don’t always understand its human consequences. Young activists, however, are the exception. Whether in Gaza or in other humanitarian struggles, they experience both the power of digital technology as a mobilization tool – hashtag campaigns, TikTok videos, symbolic blockades, coordinated actions – and the ethical responsibility that power brings.
It is no longer just truth that is wavering, but also our sense of responsibility.
The relationship between humans and technology has been studied extensively. But the relationships between people mediated by technology-generated content have not been sufficiently studied.
Towards moral sobriety in the digital world
The human impact of AI – moral, psychological, relational – remains the major blind spot in our thinking about the use of the technology.
Every deepfake, every “prank”, every visual manipulation leaves a human footprint: loss of trust, fear, shame, dehumanization. Just as emissions pollute the air, these attacks pollute our social bonds.
As we learn to measure this human footprint, we need to consider the consequences of our digital actions before they occur. That means asking ourselves:
- Who is affected by my creation?
- What emotions and perceptions does this evoke?
- What mark will it leave on an individual's life?
Building an ethical ecology of digital technology means recognizing that every image and every publication shapes the human environment in which we live.
Educating young people not to want to cause harm
Laws such as the European AI Act define what must be forbidden, but no law can teach why we should not want to cause harm.
Specifically, this means:
- Cultivating personal responsibility, by helping young people feel accountable for their creations.
- Communicating values through experience, by inviting them to create and then reflect: how would this person feel?
- Promoting intrinsic motivation, so that they act ethically in accordance with their own values, not out of fear of punishment.
- Engaging families and communities, by transforming schools, homes and public spaces into places for discussion about the human impact of unethical or simply ill-advised uses of generative AI.
In the age of synthetic media, thinking about the human consequences of what we create is perhaps the most advanced form of intelligence.

