Artificial intelligence is rapidly being adopted to prevent abuse and protect vulnerable people – including children in foster care, adults in nursing homes and students in schools. These tools promise to detect danger in real time and alert the authorities before serious harm occurs.
For example, developers are using natural language processing – a form of AI that interprets written or spoken language – to try to detect patterns of threats, manipulation and control in text messages. This information could help identify domestic abuse and potentially assist courts or law enforcement in early intervention. Some child welfare agencies use predictive modeling, another common AI technique, to calculate which families or individuals are most "at risk" for abuse.
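To make the idea concrete, here is a minimal, hypothetical sketch of such a text-screening classifier. The example messages, labels and file-free setup are invented for illustration only and do not represent any deployed system:

```python
# Minimal sketch: a text classifier that flags possibly coercive or
# threatening language in messages. Training examples and labels are
# invented for illustration; real systems use far larger labeled datasets.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labeled messages: 1 = concerning, 0 = benign
messages = [
    "If you leave the house again I will make you regret it",
    "You are not allowed to see your sister anymore",
    "Want to grab dinner after work?",
    "Running late, see you at 7",
]
labels = [1, 1, 0, 0]

# TF-IDF features plus logistic regression: a simple, auditable baseline
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(messages, labels)

# Score a new message; anything flagged should still go to a human reviewer
new_message = ["You'd better answer my calls or else"]
prob = model.predict_proba(new_message)[0][1]
print(f"Estimated probability of concerning content: {prob:.2f}")
```

Even this toy version hints at the core risk discussed below: the model only knows what its training examples and labels teach it.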
When implemented carefully, AI tools have the potential to improve safety and efficiency. For instance, predictive models have helped social workers prioritize high-risk cases and intervene earlier.
But as a social worker with 15 years of experience researching family violence – and five years on the front lines as a case manager, child abuse investigator and early childhood coordinator – I have seen how well-intentioned systems often fail the very people they are meant to protect.
Now I am helping to develop an AI-powered surveillance camera that analyzes limb movements – not faces or voices – to detect physical violence. In the process, I am grappling with a critical question: Can AI truly help protect vulnerable people, or is it simply automating the same systems that have harmed them for so long?
New technology, old injustice
Many AI tools are trained to "learn" by analyzing historical data. But history is full of inequality, bias and flawed assumptions. So are the people who design, test and fund AI.
That means AI algorithms can end up replicating systemic forms of discrimination, such as racism or classism. A 2022 study in Allegheny County, Pennsylvania, found that a predictive risk model used to score families' risk levels – scores given to hotline staff to help them screen calls – would have flagged Black children for investigation 20% more often than white children if used without human oversight. When social workers were included in the decision-making, that disparity dropped to 9%.
Language-based AI can also reinforce bias. One study, for example, showed that natural language processing systems classified African American Vernacular English as "aggressive" at significantly higher rates than Standard American English – up to 62% more often in certain contexts.
Meanwhile, a 2023 study found that AI models often struggle with context, meaning that sarcastic or joking messages can be misclassified as serious threats or signs of distress.
These flaws can replicate larger problems in protective systems. People of color have long been overrepresented in child welfare systems – sometimes because of cultural misunderstandings, sometimes because of prejudice. Studies have shown that Black and Indigenous families face disproportionately higher rates of reporting, investigation and family separation compared with white families, even after accounting for income and other socioeconomic factors.
Many of these disparities stem from structural racism embedded in decades of discriminatory policy decisions, as well as from implicit bias and discretionary decision-making by overburdened caseworkers.
Surveillance over support
Even when AI systems do reduce harm to vulnerable groups, they often do so at a disturbing cost.
AI-enabled cameras have been used in hospitals and eldercare facilities to detect physical aggression between staff, visitors and residents. While commercial vendors promote these tools as safety innovations, their use raises serious ethical concerns about the balance between protection and privacy.
In a 2022 pilot program in Australia, AI camera systems deployed in two nursing homes generated more than 12,000 false alerts – overwhelming staff and missing at least one real incident. According to the independent report, the program's accuracy "did not achieve a level that would be considered acceptable to staff and management."

Children are affected, too. In U.S. schools, AI surveillance tools such as GoGuardian and Securly are marketed as ways to keep students safe. Such programs can be installed on students' devices to monitor their online activity and flag anything deemed concerning.
But these tools have also been shown to flag harmless behavior – such as writing short stories with mild violence or researching mental health topics. As an Associated Press investigation revealed, these systems have also outed LGBTQ+ students to parents or school administrators by monitoring their searches or conversations about gender and sexuality.
Other systems use classroom cameras and microphones to detect "aggression," but they often misidentify normal behavior such as laughing, coughing or roughhousing – sometimes prompting intervention or discipline.
These are not isolated technical glitches; they reflect deep flaws in how AI is trained and deployed. AI systems learn from past data selected and labeled by humans – data that often reflects social inequities and biases. As sociologist Virginia Eubanks wrote in "Automating Inequality," AI systems risk scaling up these long-standing harms.
Care, not punishment
I believe AI can still be a force for good, but only if its developers prioritize the dignity of the people these tools are meant to protect. I have developed a framework of four key principles for what I call "trauma-responsive AI."
- Survivor control: People should have a say in how, when and whether they are monitored. Giving users greater control over their data can build trust in AI systems and increase their engagement with support services.
- Human oversight: Studies show that combining social worker expertise with AI support improves fairness and reduces child maltreatment – as in Allegheny County, where caseworkers used algorithmic risk scores as one factor, alongside their professional judgment, to decide which child abuse reports to investigate.
- Bias auditing: Governments and developers are increasingly encouraged to test AI systems for racial and economic bias. Open-source tools such as IBM's AI Fairness 360, Google's What-If Tool and Fairlearn help detect and reduce such biases in machine learning models; a minimal audit sketch using Fairlearn follows this list.
- Privacy by design: Technology should be built to protect people's privacy. Open-source tools such as Amnesia, Google's differential privacy library and Microsoft's SmartNoise help anonymize sensitive data by removing or masking identifiable information. In addition, AI-powered techniques such as face blurring can anonymize people's identities in video or photo data; a face-blurring sketch also follows this list.
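To illustrate the bias-auditing principle, here is a minimal sketch using Fairlearn's MetricFrame. The labels, predictions and group memberships are invented; a real audit would use actual case data and a broader set of metrics:

```python
# Minimal sketch of a bias audit: compare false positive rates across groups
# with Fairlearn. The ground truth, predictions and group labels below are
# invented for illustration only.
from fairlearn.metrics import MetricFrame, false_positive_rate

y_true = [0, 0, 1, 0, 0, 1, 0, 0]                   # ground truth (1 = actual risk)
y_pred = [1, 0, 1, 1, 0, 1, 0, 0]                   # the model's flags
group = ["A", "A", "A", "A", "B", "B", "B", "B"]    # hypothetical demographic group

audit = MetricFrame(
    metrics={"false_positive_rate": false_positive_rate},
    y_true=y_true,
    y_pred=y_pred,
    sensitive_features=group,
)

print(audit.by_group)      # false positive rate per group
print(audit.difference())  # gap between groups: a simple disparity measure
```

A gap in false positive rates across groups is one simple, quantifiable signal of the kind of disparity described earlier – visible before deployment, rather than after harm has occurred.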
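And to illustrate the privacy-by-design principle, here is a minimal face-blurring sketch using OpenCV's bundled Haar cascade face detector. The file names are placeholders, and production systems would likely use more robust detectors and process video frame by frame:

```python
# Minimal sketch of face blurring for photo/video anonymization using OpenCV.
# "input.jpg" and "blurred.jpg" are placeholder file names.
import cv2

# Load OpenCV's bundled frontal-face Haar cascade detector
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

image = cv2.imread("input.jpg")
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# Detect faces and blur each detected region in place
for (x, y, w, h) in detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5):
    face = image[y:y + h, x:x + w]
    image[y:y + h, x:x + w] = cv2.GaussianBlur(face, (51, 51), 30)

cv2.imwrite("blurred.jpg", image)
```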
Honoring these principles means building systems that respond with care.
Some promising models are already emerging. The Coalition Against Stalkerware and its partners advocate including survivors in all stages of tech development – from needs assessments to user testing and ethical oversight.
Legislation matters, too. On May 5, 2025, for example, Montana's governor signed a law restricting state and local government use of AI to make automated decisions about individuals without meaningful human oversight. It requires transparency about how AI is used in government systems and prohibits discriminatory profiling.
As I tell my students, innovative interventions should disrupt cycles of harm, not perpetuate them. AI will never replace the human capacity for context and compassion. But with the right values at its core, it might help us deliver more of both.