
Video Surveillance System That Reasons Like a Human 143

An anonymous reader writes "BRS Labs has created a technology it calls Behavioral Analytics which uses cognitive reasoning, much like the human brain, to process visual data and to identify criminal and terroristic activities. Built on a framework of cognitive learning engines and computer vision, AISight provides an automated and scalable surveillance solution that analyzes behavioral patterns, activities and scene content without the need for human training, setup, or programming."
This discussion has been archived. No new comments can be posted.


Comments Filter:
  • Of course (Score:5, Insightful)

    by sopssa ( 1498795 ) * <sopssa@email.com> on Monday September 21, 2009 @06:08PM (#29497305) Journal

    Nothing can go wrong!

  • by Jason Pollock ( 45537 ) on Monday September 21, 2009 @06:13PM (#29497337) Homepage

    It's a press release pretending to be journalism.

    If it doesn't need training, how does it define "terroristic activity"? Is it the "I'll know it when I see it" definition?

    The article seems to indicate it works like a Bayesian filter on the video - pointing out things that aren't typical for the camera.
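    If that's what it's doing, a toy version of a per-camera "typicality" model is maybe 30 lines. A minimal sketch in Python (the event encoding, the alert threshold, and the Laplace smoothing are all my assumptions, not anything BRS Labs has published):

```python
from collections import Counter

class CameraAnomalyModel:
    """Per-camera frequency model: events this camera has rarely
    seen get a low estimated probability and raise an alert."""

    def __init__(self, alert_threshold=0.01, smoothing=1.0):
        self.counts = Counter()     # discretized event -> times seen
        self.total = 0
        self.alert_threshold = alert_threshold
        self.smoothing = smoothing  # Laplace smoothing for unseen events

    def probability(self, event):
        # Smoothed estimate of P(event) for this camera.
        vocab = max(len(self.counts), 1)
        return (self.counts[event] + self.smoothing) / \
               (self.total + self.smoothing * vocab)

    def observe(self, event):
        # Score the event against history first, then learn from it.
        p = self.probability(event)
        self.counts[event] += 1
        self.total += 1
        return p < self.alert_threshold  # True means "atypical, alert"

model = CameraAnomalyModel()
for _ in range(500):                    # a camera that watches a doorway
    model.observe(("doorway", "walk_through"))
print(model.observe(("doorway", "walk_through")))  # False: typical
print(model.observe(("loading_dock", "loiter")))   # True: never seen here
```

    The hard part, of course, is turning raw video into events like ("doorway", "walk_through") in the first place; the scoring is the easy bit.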

    Much like any automated system that is supposed to filter out false positives, it is probably pretty easy to train either the operators or the system itself to throttle back the sensitivity to a point where it ignores everything.

  • It's a lie (Score:4, Insightful)

    by blhack ( 921171 ) on Monday September 21, 2009 @06:14PM (#29497349)

    The "machine learning engine" is a "datacenter" (warehouse) full of cheap African laborers who are all watching the cameras.

    (this is a joke, it just isn't funny, and it is meant to illustrate a point. See the next line):
    God/nature/FSM/evolution/Al Gore/$deity has done a pretty damn good job of building our brains; why are we trying to reinvent that wheel in a computer?

     

  • by RightSaidFred99 ( 874576 ) on Monday September 21, 2009 @06:16PM (#29497375)

    My guess is it applies a few simple heuristics to analyze the behavior and the real trick is identifying the behavior.

    Example: In an alley behind a hotel people frequently walk out a door, put something in a container, and walk back in. This becomes "normal". Then someone goes out back and starts smoking. Whoops, wtf is this! Alert, alert. OK, so this gets flagged as OK a few times. The system decides it's OK. However, when two people hold a third at gunpoint and linger in an area of the alley not usually used for smoking, this would now trigger as abnormal.

    Another thing it might notice is the same person coming back to the front of a convenience store, waiting a minute, then leaving, then coming back again. Most people only walk in, walk out - this is abnormal.

    So it won't tell you someone is burglarizing you, but it might focus your attention on a camera where something could be happening. I'd assume it would get better over time as things were flagged "ok" or "not ok", but at best it would provide some simple pre-filtering to focus human attention on scenes that are slightly more likely to be "interesting".
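    The "lingering" case in particular is cheap to approximate with running dwell-time statistics per zone, plus a feedback hook for the smoking-break false alarms. A rough sketch (the zone granularity, the 3-sigma cutoff, and the mark_ok feedback API are all invented for illustration):

```python
import math

class ZoneDwellModel:
    """Running mean/variance of dwell times per zone (Welford's
    algorithm). Dwell times far above the norm get flagged; operator
    feedback can whitelist a recurring false alarm."""

    def __init__(self, z_cutoff=3.0):
        self.stats = {}          # zone -> (n, mean, M2)
        self.whitelisted = set()
        self.z_cutoff = z_cutoff

    def update(self, zone, seconds):
        n, mean, m2 = self.stats.get(zone, (0, 0.0, 0.0))
        n += 1
        delta = seconds - mean
        mean += delta / n
        m2 += delta * (seconds - mean)
        self.stats[zone] = (n, mean, m2)

    def is_abnormal(self, zone, seconds):
        if zone in self.whitelisted:
            return False
        n, mean, m2 = self.stats.get(zone, (0, 0.0, 0.0))
        if n < 30:               # not enough history yet: stay quiet
            return False
        std = math.sqrt(m2 / (n - 1))
        return std > 0 and (seconds - mean) / std > self.z_cutoff

    def mark_ok(self, zone):
        # Operator says alerts from this zone are fine (the smoker).
        self.whitelisted.add(zone)

model = ZoneDwellModel()
for t in [20, 25, 30, 22, 28] * 10:          # fifty typical 20-30s visits
    model.update("alley_back", t)
print(model.is_abnormal("alley_back", 25))   # False: a normal visit
print(model.is_abnormal("alley_back", 600))  # True: a ten-minute loiter
```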

  • Re:Photos (Score:3, Insightful)

    by The Archon V2.0 ( 782634 ) on Monday September 21, 2009 @06:19PM (#29497405)

    So I guess this means that the camera is going to harass people taking photos now?

    Even better. It will call some rentacops and tell them that there's "suspected terroristic activity" taking place, and suddenly a tourist will get a taser up some orifice because "the computer" already labeled him a terrorist and therefore Osama's second in command.

  • Human Intelligence (Score:5, Insightful)

    by Reason58 ( 775044 ) on Monday September 21, 2009 @06:21PM (#29497421)
    What a great way to absolve any personal responsibility. Detained wrongfully? Not our fault, the machine said you were moving like a terrorist.
  • Scary (Score:2, Insightful)

    by celibate for life ( 1639541 ) on Monday September 21, 2009 @06:23PM (#29497435)
    Human judgment isn't accurate enough to distinguish between an actual terrorist and someone who merely looks like one. Why would anyone expect good results from a machine emulating a judgment that isn't reliable in the first place?
  • by mjensen ( 118105 ) on Monday September 21, 2009 @06:26PM (#29497473) Journal

    Much like detecting terrorists by facial recognition, this is vaporware until they publish some numbers.

    I once had someone misdirect a sales call to me, proudly telling me his facial recognition system was 70% accurate. He had no idea how much of a pain in the ass his system is when it's wrong, and for the airport security business he was trying to win, even 90% accuracy is considered terrible.

  • by mhajicek ( 1582795 ) on Monday September 21, 2009 @06:30PM (#29497503)
    So it's a video Zone Alarm. I imagine the initial period of operation would be rather labor-intensive.
  • Sick and tired (Score:3, Insightful)

    by WillRobinson ( 159226 ) on Monday September 21, 2009 @06:41PM (#29497613) Journal

    Really, I am sick and tired of the surveillance realm. If anybody really wants to do something nefarious, they will make sure the cameras don't work: simply pull them down, spray them with paint, or whatever. The authorities will not come running. After-the-fact usage is good, but it really doesn't stop any crime, even random ones. We are the ones funding this and do not even have a say in it.

  • by Anonymous Coward on Monday September 21, 2009 @07:07PM (#29497837)

    Well, how's "our software" different from "our training", "our briefing", etc? I mean police are supposed to follow detailed regulations, not act as judges. It's only an officer's 'fault' right now if they don't follow regs.

  • by taustin ( 171655 ) on Monday September 21, 2009 @07:27PM (#29498019) Homepage Journal

    you give us another billion dollars to finish it.

    Yeah, right.

    A 1% error rate will produce a hundred times as many false positives - all innocent people accused of a crime - as real positives, simply because genuine incidents are vastly rarer than innocent ones. And a 20% error rate is far, far more likely.
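    The back-of-the-envelope version, with numbers I'm making up (the prevalence and detection rate are assumptions, but the shape of the result doesn't change much):

```python
# Base-rate arithmetic behind that claim. Assumed numbers (mine, not
# taustin's): 1,000,000 people screened, 100 actual positives (0.01%).
screened        = 1_000_000
actual_positive = 100
false_positive_rate = 0.01   # the optimistic 1% error rate
true_positive_rate  = 0.90   # generously assume 90% detection

false_alarms = (screened - actual_positive) * false_positive_rate
caught       = actual_positive * true_positive_rate

print(f"false alarms: {false_alarms:,.0f}")   # ~10,000 innocents flagged
print(f"real catches: {caught:,.0f}")         # ~90
print(f"ratio: {false_alarms / caught:.0f} false alarms per catch")
```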

    Scams like this are the reason why you have to show up at airports three hours early now.

    Is it smart enough to know that "terroristic" isn't a real word, at least?

  • by radtea ( 464814 ) on Monday September 21, 2009 @07:46PM (#29498189)

    Also, I wonder how well these systems will handle contextual clues that people pick up on automatically?

    "Contextual clues" like a dark-skinned guy in London rushing to catch the Tube wearing a ski jacket on a warmish day?

    Those are the kind of "contextual clues" that people use all the time to make lethal misjudgements, and in the case at hand resulted in a completely innocent Brazilian who was legally in Britain going legally about his legal business being murdered by police.

    Given how badly humans are known empirically to suck at making these kinds of judgments only an arrogant idiot would think of programming a machine to emulate us. But of course, arrogant idiots are incapable of adjusting their beliefs in response to empirical data, so they probably aren't even aware of how badly they suck at this.

  • Re:Of course (Score:5, Insightful)

    by bugi ( 8479 ) on Monday September 21, 2009 @08:10PM (#29498411)

    The best of both worlds! Human stupidity plus the compassion of a machine.

  • Two things (Score:1, Insightful)

    by Anonymous Coward on Monday September 21, 2009 @08:39PM (#29498695)

    a) Terroristic? Not just terrorist activities?
    b) Terrorism /is/ usually a crime, nothing special.

  • Re:Proof? (Score:5, Insightful)

    by Jurily ( 900488 ) <jurily&gmail,com> on Monday September 21, 2009 @09:06PM (#29498925)

    Mod parent up. Said AI first needs to distinguish between "activity" and "the wind blew a leaf across the screen". Then you need to distinguish between "lights a cigarette" and "lights the fuse on dynamite".

    So, if it already does all that, just one more question: how do you define "criminal and terrorist activities" programmatically when not even the law is clear? Even shooting people can be a non-criminal act.
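    Even the first step - telling "activity" from "a leaf blew past" - takes real tuning. The standard cheap trick is frame differencing with a minimum blob size, something like this OpenCV sketch (the file name, thresholds, and area cutoff are arbitrary; assumes OpenCV 4's two-value findContours):

```python
import cv2

MIN_AREA = 1500  # pixels; smaller motion is treated as noise (leaves, rain)

cap = cv2.VideoCapture("camera.mp4")  # hypothetical input file
ok, prev = cap.read()
if not ok:
    raise SystemExit("no video")
prev = cv2.GaussianBlur(cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY), (21, 21), 0)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.GaussianBlur(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY), (21, 21), 0)
    # Pixels that changed since the last frame:
    delta = cv2.threshold(cv2.absdiff(prev, gray), 25, 255, cv2.THRESH_BINARY)[1]
    delta = cv2.dilate(delta, None, iterations=2)
    contours, _ = cv2.findContours(delta, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    # Only blobs bigger than MIN_AREA count as "activity":
    if any(cv2.contourArea(c) > MIN_AREA for c in contours):
        print("activity detected")
    prev = gray
```

    Everything past that ("cigarette vs. fuse") is where the hand-waving starts.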

  • Re:Proof? (Score:3, Insightful)

    by TheWingThing ( 686802 ) on Monday September 21, 2009 @09:13PM (#29498985)

    It must first differentiate between "time flies like an arrow" and "fruit flies like a banana". Then, and only then, can the system be trusted.

  • by ImNotAtWork ( 1375933 ) on Tuesday September 22, 2009 @02:37AM (#29501031)

    "all innocent people accused of a crime"

    Why? Even if the system flags the people as criminals, the operators will still be able to see the recordings, and then decide if it was a crime or not, no?

    If only I had a nickel for every time someone took something off the computer as gospel (figuratively) and could not be swayed because "the computer doesn't make mistakes"... Think PHBs and customer service reps. Or maybe I missed your sarcasm tags; if so, mea culpa.

  • by mcrbids ( 148650 ) on Tuesday September 22, 2009 @03:58AM (#29501361) Journal

    Those are the kind of "contextual clues" that people use all the time to make lethal misjudgements, and in the case at hand resulted in a completely innocent Brazilian who was legally in Britain going legally about his legal business being murdered by police.

    No system lacking full disclosure of all information is perfect. People, by definition, *have* to make judgments without enough information to be sure. Yet we *have* to be sure.

    Sometimes this results in mistakes. And sometimes, those mistakes add up to a lethal combination. But the vast majority of the time, those judgments, lacking full information, seem to do a pretty good job. In fact, even if you compare these judgment rates to something like the odds that a particular Apache install will be active, you'll find that people, despite their occasional flaws, actually do a pretty damned good job.

  • Comment removed (Score:3, Insightful)

    by account_deleted ( 4530225 ) on Tuesday September 22, 2009 @04:00AM (#29501369)
    Comment removed based on user account deletion
