The London Underground is conducting an AI surveillance experiment

The London Underground used AI surveillance technology in a year-long trial.

From October 2022 to September 2023, Transport for London (TfL) tested 11 different algorithms at Willesden Green tube station in north-west London.

According to documents obtained by WIRED, the trial involved monitoring the movements, behavior and body language of thousands of passengers to identify potential criminal activity and safety risks.

The AI software was connected to live CCTV footage (the computer vision (CV) branch of machine learning) and trained to detect aggressive behavior, weapons, fare evasion and accidents, such as people potentially falling onto the tracks.
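To make the setup concrete, here is a minimal illustrative sketch in Python of how such a pipeline could be wired together: frames from a CCTV stream are passed to a detection model, and alerts are raised when a flagged event class is detected with enough confidence. This is not TfL's actual system; the `detect_events` function, the stream URL and the confidence threshold are all hypothetical placeholders.

```python
# Illustrative sketch only (not TfL's system): read live CCTV frames and
# raise alerts for event classes similar to those described in the trial.

import cv2  # OpenCV, used here to read the video stream frame by frame

# Event classes of the kind the trial reportedly targeted
ALERT_CLASSES = {"weapon", "aggression", "fare_evasion", "person_on_tracks"}
CONFIDENCE_THRESHOLD = 0.8  # arbitrary value, chosen for illustration


def detect_events(frame):
    """Placeholder for a trained detection model.

    A real system would run an object/behavior detector on the frame and
    return (label, confidence) pairs; this stub returns nothing.
    """
    return []


def monitor(stream_url: str) -> None:
    capture = cv2.VideoCapture(stream_url)
    while capture.isOpened():
        ok, frame = capture.read()
        if not ok:
            break
        for label, confidence in detect_events(frame):
            if label in ALERT_CLASSES and confidence >= CONFIDENCE_THRESHOLD:
                # A production system would route this to station staff in
                # real time; here we simply print the alert.
                print(f"ALERT: {label} (confidence {confidence:.2f})")
    capture.release()


if __name__ == "__main__":
    monitor("rtsp://example.invalid/cctv-feed")  # hypothetical stream URL
```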

British police have experimented with AI surveillance before and continue to do so at some public events, such as a Beyoncé concert last year.

However, it is often ineffective, and human rights groups have criticized the technology, calling it an intrusive invasion of privacy and a source of bias and discrimination.

AI video technology has a troubled history: numerous projects worldwide have delivered inadequate results, often falsely associating dark-skinned individuals with crimes they did not commit.

During TfL's testing period, around 44,000 alerts were generated, of which around 19,000 were passed on to staff for intervention.

Police officers took part in the tests by brandishing weapons such as machetes and firearms within view of the CCTV cameras (albeit during times when the station was closed), with the aim of better training the AI.

Here is the full list of results:

  1. Total number of alerts: Over 44,000 alerts were issued by the AI system.
  2. Real-time notifications for station staff: 19,000 alerts were delivered in real time to station staff for immediate action.
  3. Fare evasion warnings: The AI system generated 26,000 alerts related to fare evasion.
  4. Wheelchair warnings: There were 59 alerts regarding wheelchair users at the station, which does not have appropriate wheelchair access.
  5. Safety line warnings: Nearly 2,200 alerts were issued for people crossing the yellow safety lines on train platforms.
  6. Platform edge alerts: The system generated 39 warnings for people leaning over the edge of the platforms.
  7. Extended sitting warnings: Nearly 2,000 alerts involved people sitting on benches for long periods of time, which could indicate concerns including passenger well-being or safety risks.
  8. Aggressive behavior warnings: There were 66 alerts related to aggressive behavior, although the AI system struggled to detect such incidents reliably due to insufficient training data.

However, the AI system didn't work well in some scenarios, producing erroneous results, such as flagging children passing through ticket barriers as potential fare evaders.

TfL says the ultimate aim is to create a safer and more efficient Tube that protects both the public and staff.

AI surveillance technology isn't inherently terrible when used for public safety, but once the technology is in place, keeping it under control is a difficult endeavor.

There is already evidence of AI misuse in the UK public sector, and scandals in other countries suggest this is a dangerous path if not handled with the utmost care.
