London Underground Is Testing Real-Time AI Surveillance Tools To Spot Crime (wired.com) 31
Thousands of people using the London Underground had their movements, behavior, and body language watched by AI surveillance software designed to see if they were committing crimes or were in unsafe situations, new documents obtained by WIRED reveal. From the report: The machine-learning software was combined with live CCTV footage to try to detect aggressive behavior and guns or knives being brandished, as well as looking for people falling onto Tube tracks or dodging fares. From October 2022 until the end of September 2023, Transport for London (TfL), which operates the city's Tube and bus network, tested 11 algorithms to monitor people passing through Willesden Green Tube station, in the northwest of the city. The proof of concept trial is the first time the transport body has combined AI and live video footage to generate alerts that are sent to frontline staff. More than 44,000 alerts were issued during the test, with 19,000 being delivered to station staff in real time.
Documents sent to WIRED in response to a Freedom of Information Act request detail how TfL used a wide range of computer vision algorithms to track people's behavior while they were at the station. It is the first time the full details of the trial have been reported, and it follows TfL saying, in December, that it will expand its use of AI to detect fare dodging to more stations across the British capital. In the trial at Willesden Green -- a station that had 25,000 visitors per day before the Covid-19 pandemic -- the AI system was set up to detect potential safety incidents to allow staff to help people in need, but it also targeted criminal and antisocial behavior. Three documents provided to WIRED detail how AI models were used to detect wheelchairs, prams, vaping, people accessing unauthorized areas, or putting themselves in danger by getting close to the edge of the train platforms.
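The documents don't spell out the plumbing, but the setup described (live CCTV frames run through detection models, with some events pushed to station staff in real time and the rest logged) maps onto a fairly standard alerting pipeline. A minimal sketch in Python; the labels, threshold, and class names are all hypothetical illustrations, not TfL's:

```python
# Hypothetical sketch of a CCTV -> detection -> alert pipeline of the kind
# the trial describes. Names and thresholds are illustrative, not TfL's.
from dataclasses import dataclass
from typing import Iterable

@dataclass
class Detection:
    label: str         # e.g. "weapon", "person_on_track", "fare_evasion"
    confidence: float   # model score in [0, 1]
    camera_id: str

REALTIME_LABELS = {"weapon", "person_on_track", "platform_edge"}  # assumed split
ALERT_THRESHOLD = 0.8  # assumed operating point

def route(detections: Iterable[Detection]):
    """Split detections into real-time alerts for staff vs. logged-only events."""
    realtime, logged = [], []
    for d in detections:
        if d.confidence < ALERT_THRESHOLD:
            continue  # below the operating point: ignore
        (realtime if d.label in REALTIME_LABELS else logged).append(d)
    return realtime, logged

if __name__ == "__main__":
    rt, log = route([
        Detection("fare_evasion", 0.91, "cam-12"),
        Detection("person_on_track", 0.97, "cam-03"),
    ])
    print(f"{len(rt)} real-time alert(s), {len(log)} logged event(s)")
```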
Pretty sure (Score:2, Informative)
This would be illegal in the EU. Oh well another Brexit win!
Re: (Score:2)
We're looking in the wrong places, surveillance should be in party offices.
Newspeak (Score:2)
Re:Newspeak (Score:4, Insightful)
If it were interpreted that way, so that a computer or AI couldn't do it, then all forms of marketing would be illegal and prosecuted for all the data collected to 'stalk' everything you do.
It's clearly not stalking when it's not a person and it simply observes data in a public space. If I sit in a subway station every day and see you there, regardless of whether you show up or not, that's so far from stalking that people who try to confuse the two are the ones we need to watch out for.
Re: (Score:2)
The distinction between "observation" and "stalking" is that stalking involves following. You could absolutely achieve the legitimate stated goals of the system without stalking anyone.
But you could *also* use such a system for purposes that violate people's rights without stalking them. For example, what if you include protesting as "antisocial behavior"?
You can even pursue the legitimate goals of the system in a way that violates people's rights. For example, you could -- quite accidentally -- train the
Less stalking, Fixable Bias (Score:2)
Being in a public space does not mean people have the right to stalk you, not even "with a computer".
They already have the cameras installed for use by human operators. If anything, connecting this to an AI system will result in less "stalking", because while a human operator might use the camera system to virtually stalk someone they regard as attractive, an AI system is much less likely to do so. Plus, while an AI system can have biases in whom it flags, it is generally a lot easier to identify and correct those biases through retraining than it is to do the same for a human operator.
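The claim that AI bias is easier to measure than human bias can be made concrete: with logged decisions you can compute flag rates per group directly and compare them. A minimal sketch, assuming you have a per-group flag log; the data and group labels here are invented:

```python
from collections import Counter

# Hypothetical flag log: (group, was_flagged). Data is invented for illustration.
log = [("A", True), ("A", False), ("A", False),
       ("B", True), ("B", True), ("B", False)]

flags, totals = Counter(), Counter()
for group, flagged in log:
    totals[group] += 1
    flags[group] += flagged

rates = {g: flags[g] / totals[g] for g in totals}
print("flag rate per group:", rates)

# A simple disparity check (ratio of highest to lowest rate); what counts as
# acceptable is a policy question, not a property of the code.
hi, lo = max(rates.values()), min(rates.values())
print("disparate impact ratio:", round(hi / lo, 2))
```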
I'm conflicted... (Score:5, Insightful)
On one hand, having friends who'd worked in security, I know how difficult it can be to stare at a bank of monitors watching for specific behaviours for hours on end. If I were working subway security I'd really appreciate an automated system that tracks a person's location and tells me if they're on the tracks when the train isn't safely parked at the station.
On the other hand, the image quality of your bog-standard CCTV camera is pretty shit, so I'm worried about a pile of false positives: something in the blocky pixelated mess looks like a guy wielding a manchette, you send a squad to stop him, and it turns out to be an old guy with a walking stick doing tricks for his grandson. Now you're back to a person staring at the bank of monitors, but with the added annoyance of endless pings from the system saying "Potential crime in progress!", and you end up missing the "real" crime while you're dealing with that.
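The false-positive worry is largely a base-rate problem: when genuine incidents are rare, even a fairly accurate detector produces mostly false alarms. A back-of-the-envelope sketch with made-up numbers (the trial documents report alert counts, not detector accuracy):

```python
# Invented numbers for illustration; not figures from the TfL trial documents.
events_per_day      = 100_000   # observed "events" (people/actions) per day
incident_rate       = 0.0005    # fraction that are genuine incidents
true_positive_rate  = 0.90      # detector catches 90% of real incidents
false_positive_rate = 0.01      # flags 1% of non-incidents

incidents    = events_per_day * incident_rate
true_alerts  = incidents * true_positive_rate
false_alerts = (events_per_day - incidents) * false_positive_rate

precision = true_alerts / (true_alerts + false_alerts)
print(f"~{true_alerts:.0f} true alerts, ~{false_alerts:.0f} false alerts per day")
print(f"precision ~ {precision:.1%}")   # most pings are false alarms
```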
Re: (Score:2)
Police arresting a guy holding a manchette is a pretty funny picture. "Sir, put down the turkey leg!"
Re:I'm conflicted... (Score:4, Interesting)
A few years ago there was an article on Slashdot saying that the British police had credited their access to over two million cameras with preventing "dozens" of crimes. Not thousands, not hundreds, not even scores, just a couple dozen.
You're right, the wall of video screens shown in the movies doesn't really exist in actual security installations (my career for 17 years), for the simple reason that it's useless. IIRC one person can monitor 8-12 screens for 10-20 minutes before their brain is mush and they need a break of a quarter hour or more. One test had a guy in a sasquatch suit dance in front of one of the cameras on a "video wall" and nobody caught it (imitating a more famous psychological test). Most casinos and other high-security installations have 3-6 screens per operator and provide frequent breaks.
And yeah, most security cameras are low resolution and low frame rate to save on storage and network bandwidth.
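Those monitoring limits translate directly into staffing numbers. A quick sketch using the figures above (screens per operator and break cadence come from the comment; the camera count is invented):

```python
# Rough staffing arithmetic using the monitoring limits described above.
cameras        = 400   # invented camera count for a mid-sized network
screens_per_op = 6     # casino-style: 3-6 screens per operator
watch_minutes  = 20    # effective attention span before a break
break_minutes  = 15    # length of the break

duty_cycle = watch_minutes / (watch_minutes + break_minutes)
seats = cameras / screens_per_op
operators_needed = seats / duty_cycle

print(f"{seats:.0f} seats, ~{operators_needed:.0f} operators per shift")
```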
Re: (Score:2)
To be clear, this isn't really anything "new"; the field is already well established at doing things like this. Professional CCTV software for public spaces has integrated features like this for years: the systems can track specific people as they move around, measure dwell times, and handle fall detection, dropped items, loitering timers, and traffic and perimeter detection and alerts.
Before the buzzword "AI" was everywhere, they would have just called this "intelligent surveillance".
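Features like dwell-time and loitering alerts are conceptually simple once you have per-person tracks: record when each track ID enters a zone and alert when it stays too long. A minimal sketch; the track IDs, zone names, and threshold are hypothetical:

```python
import time

LOITER_SECONDS = 300  # assumed threshold: 5 minutes in one zone

class LoiterDetector:
    """Flag track IDs that remain in a zone longer than a threshold."""
    def __init__(self, threshold=LOITER_SECONDS):
        self.threshold = threshold
        self.first_seen = {}  # (track_id, zone) -> timestamp of first sighting

    def update(self, track_id, zone, now=None):
        now = now or time.time()
        start = self.first_seen.setdefault((track_id, zone), now)
        if now - start >= self.threshold:
            return f"loitering: track {track_id} in {zone} for {now - start:.0f}s"
        return None

    def clear(self, track_id, zone):
        self.first_seen.pop((track_id, zone), None)  # track left the zone

detector = LoiterDetector()
print(detector.update(7, "ticket_hall", now=1000))   # None: just arrived
print(detector.update(7, "ticket_hall", now=1400))   # flagged after 400s
```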
Re: (Score:3)
>> so you send a squad to stop them where you find it's just an old guy
Anything that gets flagged by the AI will be reviewed by a human before action is taken.
>> the newly added annoyance of endless pings from the system saying "Potential crime in progress!"
I work with image recognition and you would be surprised at how well it can work even with low-rez video.
It will all be fun and games... (Score:2)
Up until the first random innocent is killed to "prevent crime". You know it will happen once police start running to a location assuming something wrong is going on.
And it may be worse than swatting.
Re: (Score:1)
This is being done in England rather than the US, so that's probably not a worry for a while. Do that today in Minneapolis and yeah, first fatality would be tomorrow.
Re: (Score:2)
Sure, it would happen quicker in the US. But England is no stranger to police murdering people.
Re: (Score:1)
That's from 2005.
https://mappingpoliceviolence.... [mappingpoliceviolence.us]
Police have killed 89 people so far in 2024. Police killed at least 1,246 people in 2023.
Bias Machine (Score:1)
The learning mechanism will likely pick up on certain ethnic groups who statistically have a higher crime rate, scoring them higher for the same actions.
(I'm not claiming some ethnic groups are inherently violent. It's more that those who are, or feel, disenfranchised are more likely to commit crimes.)
Re: (Score:1)
It's a touchy subject. Countries with multiple ethnic groups often struggle with similar issues. Humans are inherently tribal, and it's hard to "just remove" unconscious biases.
Two Thousand Twenty Four (Score:3)
Wow, England just keeps getting closer and closer to INGSOC.
Let's put up a picture of Putin for our two minutes hate. If you don't like it make sure you stay out of view of the telescreens!
AI-monitored cameras for neighborhood security (Score:2)
Human figure detection has already filtered down to the lowest end of consumer security cameras. I can buy a $50 Wyze camera that does an excellent job of it. You can wear a hoodie and a face mask, but you can't hide your body.
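For a sense of how low the barrier is: OpenCV has shipped a pretrained HOG-based person detector for years, so even hobbyist setups can do basic human-figure detection. A generic sketch, not what Wyze or TfL actually run; it assumes opencv-python is installed and a local clip named camera_feed.mp4 exists:

```python
import cv2

# Pretrained HOG + linear SVM people detector bundled with OpenCV.
hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

cap = cv2.VideoCapture("camera_feed.mp4")  # hypothetical local clip
while True:
    ok, frame = cap.read()
    if not ok:
        break
    frame = cv2.resize(frame, (640, 360))  # smaller frames detect faster
    boxes, weights = hog.detectMultiScale(frame, winStride=(8, 8))
    if len(boxes) > 0:
        print(f"{len(boxes)} person-shaped figure(s) in this frame")
cap.release()
```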
The next step is right on the horizon - networked neighborhood cameras that are constantly monitored by AIs, watching for suspicious behavior, especially late at night.
Within a decade it will be standard for "trusted" neighborhood security systems to send real-time alerts to the loc