These cameras are connected to an AWS facial analysis platform.
Officially, it’s a test to improve station security.
However, concerns have been raised that the technology could also be used to display advertising tailored to passengers' emotions.
Network Rail, the public company responsible for British rail infrastructure, has allegedly been conducting covert trials of an AI video surveillance system powered by Amazon technology, as reported by Wired. The outlet first reported on the practice in February and has now shed light on the emotion-analysis component.
According to Wired, which had access to leaked documents detailing the system, it has purportedly been tested at several busy train stations in British cities with the aim of detecting suspicious behavior and gauging travelers’ “satisfaction” by analyzing their facial expressions.
How it works. The system links closed-circuit cameras at stations such as Waterloo and Euston in London and Piccadilly in Manchester to Rekognition, the facial analysis platform operated by Amazon Web Services.
The AI can estimate the age and gender of passengers in the footage and infer their mood from their facial expressions, categorizing them as happy, angry, or sad, among others. The aim is to automatically identify “aggressive” behavior, such as vandalism or trespassing on the tracks, and then notify station personnel.
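For context, this is roughly what such an analysis looks like at the API level. Below is a minimal sketch, assuming a Python service that receives individual JPEG frames from a camera feed and passes them to Rekognition’s real detect_faces operation, which returns estimated age range, gender, and confidence-scored emotions. The frame source, AWS region, and file path are illustrative assumptions, not details from the leaked documents.

```python
# Illustrative sketch only: sending one CCTV frame to Amazon Rekognition
# for the kind of analysis described above. Not Network Rail's pipeline.
import boto3

# Region is an assumption for the example (eu-west-2 is AWS's London region).
rekognition = boto3.client("rekognition", region_name="eu-west-2")

def analyze_frame(jpeg_bytes):
    """Return estimated age range, gender, and dominant emotion per face."""
    # detect_faces with Attributes=["ALL"] returns, among other things,
    # AgeRange, Gender, and a confidence-scored list of Emotions
    # (HAPPY, ANGRY, SAD, CALM, etc.).
    response = rekognition.detect_faces(
        Image={"Bytes": jpeg_bytes},
        Attributes=["ALL"],
    )
    results = []
    for face in response["FaceDetails"]:
        # Pick the emotion label Rekognition is most confident about.
        top_emotion = max(face["Emotions"], key=lambda e: e["Confidence"])
        results.append({
            "age_range": (face["AgeRange"]["Low"], face["AgeRange"]["High"]),
            "gender": face["Gender"]["Value"],
            "emotion": top_emotion["Type"],
            "emotion_confidence": top_emotion["Confidence"],
        })
    return results

if __name__ == "__main__":
    # Hypothetical path to a single captured frame.
    with open("station_frame.jpg", "rb") as f:
        for face in analyze_frame(f.read()):
            print(face)
```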
What Network Rail’s managers say. Network Rail has confirmed that it uses “a range of advanced technologies” at stations to “protect passengers, our colleagues, and the rail infrastructure from crime and threats.” The company stated that it always complies “with the relevant legislation” and works with “the police and security services.”
Purple Transform, the consultancy firm piloting the tests, claims that emotion detection was ruled out during the trials and that no images were stored. CEO Gregory Butler says that “AI helps human operators, who cannot monitor all cameras continuously, to assess and address safety risk and issues promptly.”
Unanswered questions. This move raises several concerns:
- Lack of transparency: The documents reveal a “dismissive attitude” toward those who might feel spied on. “Typically, [no one is likely to object or find it intrusive], but there is no accounting for some people,” a station staff member says, according to the documents Wired had access to.
- Risk of bias: Several studies show that these kinds of systems yield more false positives for certain ethnic groups.
- Expansion of surveillance: There’s a fine line between increased security and mass, automated scrutiny that threatens freedom, and moves like this one risk crossing it.
In the spotlight. The documents suggest that officials could use AWS AI to measure customer satisfaction or even to display personalized advertising on station screens based on a passenger's detected profile. It's a disturbing possibility.
Regulators already have this type of system in their sights. In Spain, for instance, supermarket chain Mercadona wanted to introduce something similar and ultimately backed out, partly because a judge put enough obstacles in its path.
Final thoughts. The discreet deployment of AI cameras to monitor and profile unwitting passengers speaks to the dilemmas posed by this technology. Proponents argue that it can help fight crime and optimize resources. Detractors warn of the risk it poses to fundamental rights.
Overall, an inescapable question remains: How much freedom are we willing to give up in exchange for security? And who can assure us that the infrastructure's owner won't use these systems for its own benefit rather than the general good?
Image | Iuliia Dutchak via Unsplash
Related | Big Tech Is Realizing Something: Their AI Can’t Keep Messing Up on This Scale