
AI-powered cameras spark privacy concerns as usage grows

A new wave of AI-enhanced surveillance is spreading across the US and UK, as private corporations and government agencies deploy AI-powered cameras to scan crowds, detect potential crimes, and even monitor people's emotional states in public spaces.

In the UK, rail infrastructure body Network Rail recently tested AI cameras in eight train stations, including major hubs like London's Waterloo and Euston stations, as well as Manchester Piccadilly.

Documents obtained by civil liberties group Big Brother Watch reveal the cameras aimed to detect trespassing on tracks, overcrowding on platforms, “antisocial behavior” like skateboarding and smoking, and potential bike theft.

Most concerningly, the AI system, powered by Amazon's Rekognition software, attempted to estimate people's age and gender and detect emotions like happiness, sadness, and anger after they passed virtual "tripwires" near ticket barriers.


The Network Rail report, parts of which are redacted, says there was "one camera at each station (generally the gateline camera), where a snapshot was taken every second whenever people were crossing the tripwire and sent for analysis by AWS Rekognition."

It then says, "Potentially, the customer emotion metric could be used to measure satisfaction," and "This data could be utilised to maximise advertising and retail revenue. However, this was hard to quantify as NR Properties were never successfully engaged."

Amazon Rekognition, a computer vision (CV) machine learning platform from Amazon, can indeed detect emotions. However, this was only a pilot test, and its effectiveness is unclear.
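Rekognition's DetectFaces API returns per-face attributes including an estimated age range, a gender estimate, and a ranked list of emotion labels with confidence scores. The sketch below shows how a client might reduce that response to the age/gender/emotion metrics the trial reportedly collected; the JSON is an invented sample (not passenger data), though the field names follow the documented DetectFaces response shape, and a real deployment would obtain the response via boto3's `rekognition` client rather than a hard-coded dict.

```python
# Illustrative sample mimicking the shape of a Rekognition DetectFaces
# response (FaceDetails -> AgeRange / Gender / Emotions). Values invented.
sample_response = {
    "FaceDetails": [
        {
            "AgeRange": {"Low": 25, "High": 35},
            "Gender": {"Value": "Female", "Confidence": 96.1},
            "Emotions": [
                {"Type": "HAPPY", "Confidence": 88.4},
                {"Type": "CALM", "Confidence": 9.2},
                {"Type": "SAD", "Confidence": 1.1},
            ],
        }
    ]
}

def summarise_face(face: dict) -> dict:
    """Reduce one FaceDetails entry to the attributes the trial analysed."""
    # Rekognition returns several candidate emotions; take the most confident.
    top_emotion = max(face["Emotions"], key=lambda e: e["Confidence"])
    return {
        "age_range": (face["AgeRange"]["Low"], face["AgeRange"]["High"]),
        "gender": face["Gender"]["Value"],
        "dominant_emotion": top_emotion["Type"],
    }

summaries = [summarise_face(f) for f in sample_response["FaceDetails"]]
print(summaries)
# → [{'age_range': (25, 35), 'gender': 'Female', 'dominant_emotion': 'HAPPY'}]
```

Note that the emotion labels are the model's inference from facial expression alone, which is part of why critics dispute their reliability as a measure of what people actually feel.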

The report says that when using cameras to count people crossing railway gates, "accuracy across gate lines was uniformly poor, averaging roughly 50% to 60% accuracy compared with manual counting," though this is expected to improve.

The use of facial recognition technology by law enforcement has also raised concerns. Recently, London's Metropolitan Police used live facial recognition cameras to identify and arrest 17 individuals in the city's Croydon and Tooting areas.

The technology compares live camera feeds against a watchlist of individuals with outstanding warrants as a part of “precision policing.”

In February, the Met used the system to make 42 arrests, though it's unclear how many led to formal charges.

"The rollout and normalisation of AI surveillance in these public spaces, without much consultation and conversation, is quite a concerning step," commented Jake Hurfurt, head of research at Big Brother Watch.

Your emotions on a database

Critics have vehemently argued that facial recognition threatens civil liberties. 

Members of Parliament in the UK urged police to reconsider how they deploy the technology after it was suggested they could access a database of 45 million passport photos to better train these surveillance models.

Experts also question facial recognition's accuracy and legal basis, with Big Brother Watch arguing that the vast majority (85%+) of UK police facial recognition matches are misidentifications.

Met Police officials attempted to allay privacy fears, stating that non-matching images are rapidly deleted and that the facial recognition system has been independently audited. 

However, talk is cheap when the misuse of these AI systems genuinely impacts people's lives. Predictive policing programs in the US have also generally failed to achieve their objectives while causing collateral damage in the form of police harassment and wrongful imprisonments.

Concerns about bias and inaccuracy in facial recognition systems, especially for people of color, have been a serious point of contention. 

Studies have shown the technology can be significantly less accurate for darker-skinned faces, particularly Black women.

Policymakers will need to grapple with difficult questions about these powerful tools' transparency, accountability, and regulation.

Robust public debate and clear legal frameworks will be critical to ensuring that the benefits of AI in public safety and security aren't outweighed by the risks to individual rights and democratic values.

As the technology races ahead, the time for that reckoning may be short.
