Video Surveillance System That Reasons Like a Human
An anonymous reader writes "BRS Labs has created a technology it calls Behavioral Analytics which uses cognitive reasoning, much like the human brain, to process visual data and to identify criminal and terroristic activities. Built on a framework of cognitive learning engines and computer vision, AISight provides an automated and scalable surveillance solution that analyzes behavioral patterns, activities and scene content without the need for human training, setup, or programming."
Of course (Score:5, Insightful)
Nothing can go wrong!
I'll know it when I see it. (Score:5, Insightful)
It's a press release pretending to be journalism.
If it doesn't need training, how does it define "terroristic activity"? Is it the "I'll know it when I see it" definition?
The article seems to indicate it works like a Bayesian filter on the video - pointing out things that aren't typical for the camera.
Much like any automated system that is supposed to filter out false positives, it is probably pretty easy to train either the operators or the system itself to throttle back the sensitivity to a point where it ignores everything.
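The "Bayesian filter on the video" idea from the comment above can be sketched in a few lines. This is a toy illustration, not anything BRS Labs has described: `SceneFilter` and its event labels are hypothetical names, and real systems would work on pixel features rather than pre-labeled events. It does show the parent's point about sensitivity: set the threshold low enough and the filter ignores everything.

```python
from collections import Counter

class SceneFilter:
    """Toy Bayesian-style filter: learns how often each event type
    appears on one camera and flags events that are rare so far."""
    def __init__(self, threshold=0.05):
        self.counts = Counter()
        self.total = 0
        self.threshold = threshold  # lower threshold -> fewer alerts

    def observe(self, event):
        # Estimated probability of this event, with add-one smoothing
        # so never-seen events get a small nonzero probability.
        p = (self.counts[event] + 1) / (self.total + len(self.counts) + 1)
        self.counts[event] += 1
        self.total += 1
        return p < self.threshold  # True = "not typical for this camera"

filt = SceneFilter(threshold=0.05)
for _ in range(200):
    filt.observe("person_walks_by")   # becomes "normal" for this camera
print(filt.observe("person_loiters")) # rare event -> flagged
```

With `threshold=0.0` the same filter never flags anything, which is exactly the "throttle it back until it ignores everything" failure mode the parent describes.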
It's a lie (Score:4, Insightful)
The "machine learning engine" is a "datacenter" (warehouse) full of cheap African laborers who are all watching the cameras.
(this is a joke, it just isn't funny, and it is meant to illustrate a point. See the next line):
God/nature/FSM/evolution/al gore/$deity has done a pretty damn good job at building our brains, why are we trying to reinvent that wheel in a computer?
Re:Bit more info - can it be as good as humans? (Score:4, Insightful)
My guess is it applies a few simple heuristics to analyze the behavior and the real trick is identifying the behavior.
Example: In an alley behind a hotel people frequently walk out a door, put something in a container, and walk back in. This becomes "normal". Then someone goes out back and starts smoking. Whoops, wtf is this! Alert, alert. OK, so this gets flagged as OK a few times. The system decides it's OK. However, when two people hold a third at gunpoint and linger in an area of the alley not usually used for smoking, this would now trigger as abnormal.
Another thing it might notice is the same person coming back to the front of a convenience store, waiting a minute, then leaving, then coming back again. Most people only walk in, walk out - this is abnormal.
So it won't tell you someone is burglarizing you, but it might focus your attention on a camera where something could be happening. I'd assume it would get better over time as things were flagged "ok" or "not ok", but at best it would provide some simple pre-filtering to focus human attention on scenes that are slightly more likely to be "interesting".
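The alley example above, including the "flagged OK a few times" feedback loop, can be sketched as a lookup over (zone, activity) pairs. Everything here is hypothetical: `ZoneMonitor`, the zone names, and the auto-accept rule are invented for illustration, not taken from the product.

```python
class ZoneMonitor:
    """Hypothetical pre-filter: alert on any (zone, activity) pair an
    operator hasn't yet marked OK enough times for this camera."""
    def __init__(self, auto_ok_after=3):
        self.ok_counts = {}             # (zone, activity) -> times marked OK
        self.auto_ok_after = auto_ok_after

    def check(self, zone, activity):
        if self.ok_counts.get((zone, activity), 0) >= self.auto_ok_after:
            return "normal"
        return "alert"                  # focus human attention here

    def mark_ok(self, zone, activity):
        # Operator reviewed the alert and dismissed it as harmless.
        key = (zone, activity)
        self.ok_counts[key] = self.ok_counts.get(key, 0) + 1

mon = ZoneMonitor()
print(mon.check("alley_door", "smoking"))     # first smoker -> "alert"
for _ in range(3):
    mon.mark_ok("alley_door", "smoking")      # dismissed a few times
print(mon.check("alley_door", "smoking"))     # now "normal"
print(mon.check("alley_corner", "lingering")) # unused corner -> "alert"
```

As the parent says, this never decides "burglary in progress"; it only ranks which camera a human should look at.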
Re:Photos (Score:3, Insightful)
So I guess this means that the camera is going to harass people taking photos now?
Even better. It will call some rentacops and tell them that there's "suspected terroristic activity" taking place, and suddenly a tourist will get a taser up some orifice because "the computer" already labeled him a terrorist and therefore Osama's second in command.
Human Intelligence (Score:5, Insightful)
Scary (Score:2, Insightful)
False positve and False negative readings (Score:4, Insightful)
Much like detecting terrorists by facial recognition, this is vaporware until they publish some numbers.
I once had someone misplace a sales call to me, proudly telling me his facial recognition system was 70% accurate. He had no idea how much of a pain in the ass his system is when it's wrong, and for the airport security business he was trying to get, even 90% accuracy is considered terrible.
Re:Bit more info - can it be as good as humans? (Score:2, Insightful)
Sick and tired (Score:3, Insightful)
Really, I am sick and tired of the surveillance realm. If anybody really wants to do something nefarious they will make sure the cameras don't work. Simply pull them down, spray them with paint, or whatever. The authorities will not come running. After-the-fact usage is good, but really it doesn't stop any crime, even the random ones. We are the ones funding this and do not even have a say in it.
Re:Human Intelligence (Score:1, Insightful)
Well, how's "our software" different from "our training", "our briefing", etc? I mean police are supposed to follow detailed regulations, not act as judges. It's only an officer's 'fault' right now if they don't follow regs.
And it will work as soon as . . . (Score:3, Insightful)
you give us another billion dollars to finish it.
Yeah, right.
A 1% error rate will produce a hundred times as many false positives - all innocent people accused of a crime - as real positives. And a 20% error rate is far, far more likely.
Scams like this are the reason why you have to show up at airports three hours early now.
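The parent's "hundred times as many false positives" claim is just base-rate arithmetic, with one assumption made explicit here: actual offenders are rare, say 1 per 10,000 people passing the cameras (that number is mine, not the parent's).

```python
# Base-rate arithmetic behind the parent's claim.
# Assumed (hypothetical) numbers: 1 actual offender per 10,000 people.
population = 1_000_000
base_rate = 1 / 10_000            # fraction of people who are real positives
false_positive_rate = 0.01        # the "1% error rate" from the comment

real_positives = population * base_rate                      # 100 people
false_alarms = (population - real_positives) * false_positive_rate

print(real_positives)             # 100 real incidents
print(false_alarms)               # 9999 innocent people flagged
print(false_alarms / real_positives)  # ~100x as many false positives
```

At the comment's more pessimistic 20% error rate, the same arithmetic flags roughly 2,000 innocents for every real positive.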
Is it smart enough to know that "terroristic" isn't a real word, at least?
Re:Human Intelligence (Score:4, Insightful)
Also, I wonder how well these systems will handle contextual clues that people pick up on automatically?
"Contextual clues" like a dark-skinned guy in London rushing to catch the Tube wearing a ski jacket on a warmish day?
Those are the kind of "contextual clues" that people use all the time to make lethal misjudgements, and in the case at hand resulted in a completely innocent Brazilian who was legally in Britain going legally about his legal business being murdered by police.
Given how badly humans are known empirically to suck at making these kinds of judgments only an arrogant idiot would think of programming a machine to emulate us. But of course, arrogant idiots are incapable of adjusting their beliefs in response to empirical data, so they probably aren't even aware of how badly they suck at this.
Re:Of course (Score:5, Insightful)
The best of both worlds! Human stupidity plus the compassion of a machine.
Two things (Score:1, Insightful)
a) Terroristic? Not just terrorist activities?
b) Terrorism /is/ usually a crime, nothing special.
Re:Proof? (Score:5, Insightful)
Mod parent up. Said AI first needs to distinguish between "activity" and "the wind blew a leaf across the screen". Then you need to distinguish between "lights a cigarette" and "lights the fuse on dynamite".
So, if it already does all that, just one more question: how do you define "criminal and terrorist activities" programmatically when not even the law is clear? Even shooting people can be a non-criminal act.
Re:Proof? (Score:3, Insightful)
It must first differentiate between "time flies like an arrow" and "fruit flies like a banana". Then, and only then, can the system be trusted.
Re:And it will work as soon as . . . (Score:2, Insightful)
"all innocent people accused of a crime"
Why? Even if the system flags the people as criminals, the operators will still be able to see the recordings, and then decide if it was a crime or not, no?
If only I had a nickel for every time someone took something off the computer as gospel (figuratively) and could not be swayed because the computer doesn't make mistakes... Think PHBs and customer service reps. Or maybe I missed your sarcasm tags.. if so, mea culpa.
Re:Human Intelligence (Score:3, Insightful)
Those are the kind of "contextual clues" that people use all the time to make lethal misjudgements, and in the case at hand resulted in a completely innocent Brazilian who was legally in Britain going legally about his legal business being murdered by police.
No system lacking full disclosure of all information is perfect. People, by definition, *have* to make judgments without enough information to be sure. Yet we *have* to be sure.
Sometimes this results in mistakes. And sometimes, those mistakes add up to a lethal combination. But the vast majority of the time, those judgments, lacking full information, seem to do a pretty good job. In fact, even if you compare these judgement rates to something like the odds that a particular Apache install will be active, you'll find that people, despite their occasional flaws, actually do a pretty damned good job.
Comment removed (Score:3, Insightful)