Chicago Developing 'Suspicious Behavior' Monitoring System
narramissic writes "Over the past few years, Chicago's Office of Emergency Management and Communications (OEMC) has been blanketing the city with a network of thousands of video cameras in an effort to remotely keep track of emergencies in real time. Now, with the help of IBM, the network is getting some smarts. IBM software will analyze the video and ultimately 'recognize suspicious behavior,' says OEMC spokesman Kevin Smith. 'The challenge is going to be teaching computers to recognize the suspicious behavior,' said Smith. 'Once this is done this will be a very impressive city in terms of public safety.'"
Re:London .. yes (Score:2, Informative)
Mired in statistical fallacies (Score:5, Informative)
Let's apply a little Bayesian reasoning, shall we?
Given that system X identifies your behavior as suspicious, what is the probability that you are a terrorist? This probability is written P(T|S). This is what we want to find.
Bayes' Rule: P(T|S) = P(S|T)*P(T)/P(S).
P(S|T) is the probability that the system will identify you as suspicious, given that you are a terrorist. You can call this the system's "accuracy" (strictly speaking, its sensitivity, or true positive rate). Let's be generous and say the accuracy is 99.99% = 0.9999.
P(T) is the probability that you are a terrorist. Let's say that this probability is one in a million: 0.000001.
P(S) is the probability that the system thinks you are suspicious. There are two sources of suspicion: true positives, and false positives. The true positives are given by P(S|T)*P(T). The false positives are given by P(S|~T)*P(~T).
Let's again be generous and say that the false positive rate P(S|~T) is only 0.1%, or 0.001.
P(~T) is just 1-P(T) = 0.999999.
So, let's substitute everything in:
P(T|S) = P(S|T)*P(T) / (P(S|T)*P(T)+P(S|~T)*P(~T)) = 0.9999*0.000001 / (0.9999*0.000001+0.001*0.999999)
What's that equal, everybody? 0.0009989, which is about 0.001; in other words, 0.1%.
What does it mean? Even with a system that has a true positive rate of 99.99% and a false positive rate of only 0.1%, the probability of a "suspicious person" actually being a terrorist is only 0.1%.
In other words, these systems are inherently useless in identifying terrorists. This is because terrorists are inherently RARE in the population. The massive accuracy of the test cannot make up for this fact.
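The arithmetic above is easy to check for yourself. Here's a short Python sketch using the same generous assumptions from the post (99.99% true positive rate, 0.1% false positive rate, one-in-a-million prior):

```python
# Base-rate check: P(terrorist | flagged as suspicious) via Bayes' rule.
# All rates are the generous assumptions from the post above.
p_t = 1e-6               # P(T): prior probability a random person is a terrorist
p_s_given_t = 0.9999     # P(S|T): true positive rate ("accuracy")
p_s_given_not_t = 0.001  # P(S|~T): false positive rate

# Total probability of being flagged: true positives + false positives
p_s = p_s_given_t * p_t + p_s_given_not_t * (1 - p_t)

# Bayes' rule
p_t_given_s = p_s_given_t * p_t / p_s

print(f"P(T|S) = {p_t_given_s:.6f}")  # about 0.000999, i.e. roughly 0.1%
```

Try varying p_t: even pushing the prior up to one in ten thousand only gets the posterior to about 9%, which shows how hard the base rate is to overcome.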
Re:Good or bad? (Score:3, Informative)
as bruce schneier said (Score:2, Informative)
Assume a system with a one-in-100 false-positive rate (99 percent accurate) and a one-in-1,000 false-negative rate (99.9 percent accurate). Assume one trillion possible indicators to sift through: that's about ten events -- e-mails, phone calls, purchases, web surfings, whatever -- per person in the U.S. per day. Also assume that 10 of them are actually terrorists plotting.
This unrealistically-accurate system will generate one billion false alarms for every real terrorist plot it uncovers. Every day of every year, the police will have to investigate 27 million potential plots in order to find the one real terrorist plot per month. Raise that false-positive accuracy to an absurd 99.9999% and you're still chasing 2,750 false alarms per day -- but that will inevitably raise your false negatives, and you're going to miss some of those ten real plots.
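Schneier's numbers follow directly from his premises; the quoted passage assumes a 1-in-100 false-positive rate (the full essay states this explicitly). A quick sanity check of the arithmetic, with that rate taken as an assumption:

```python
# Sanity check of Schneier's data-mining numbers.
# Assumption (stated in the full essay, partially quoted above):
# the system has a 1-in-100 false-positive rate.
indicators_per_year = 1e12  # ~10 events per US person per day
real_plots = 10             # assumed terrorist plots per year
fp_rate = 0.01              # 99% "accurate"

false_alarms = indicators_per_year * fp_rate  # 10 billion per year
per_plot = false_alarms / real_plots          # false alarms per real plot
per_day = false_alarms / 365                  # false alarms per day

print(f"{per_plot:.0e} false alarms per real plot")  # 1e+09
print(f"{per_day:.1e} false alarms per day")         # 2.7e+07

# At an "absurd" 99.9999% false-positive accuracy (1-in-a-million):
per_day_absurd = indicators_per_year * 1e-6 / 365
print(f"{per_day_absurd:.0f} false alarms per day")  # 2740
```

The last figure comes out to roughly 2,740 per day, matching Schneier's "2,750" up to rounding. Note that this is the same base-rate problem as the Bayesian argument above, just phrased in raw counts instead of probabilities.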
Re:Good or bad? (Score:3, Informative)
Just so you know, many of the cameras are encased in a bulletproof box and have acoustic sensors to detect gunshots; the camera will automatically focus on the source of the shot.
The things transmit their video wirelessly and allow for remote control via a wireless link to police cruisers in addition to their wired link to the monitoring center downtown. I'd focus on jamming the signal or disabling the wired link.