
Homeland Security Department Testing "Pre-Crime" Detector

holy_calamity writes "New Scientist reports that the Department of Homeland Security recently tested something called Future Attribute Screening Technologies (FAST) — a battery of sensors intended to determine from a distance whether someone is a security threat. The sensors look at facial expressions and body heat, and can measure pulse and breathing rate from a distance. In trials with 140 volunteers, those told to act suspicious were detected with 'about 78% accuracy on mal-intent detection, and 80% on deception,' says a DHS spokesman."

  • Not even close (Score:5, Interesting)

    by ShawnCplus ( 1083617 ) <shawncplus@gmail.com> on Tuesday September 23, 2008 @12:05PM (#25121865) Homepage
    Sorry, but 78% is not even REMOTELY accurate enough to consider someone dangerous. The false accusation rate is already high enough.
  • Additional Locations (Score:3, Interesting)

    by UncleWilly ( 1128141 ) * <UncleWilly07@gmaSTRAWil.com minus berry> on Tuesday September 23, 2008 @12:05PM (#25121885)

    I propose the House, Senate and White House also.

  • If everyone were wearing a burka, there's no way this system would actually work. It may seem strange, but what right does the public have to know my face?

  • by mveloso ( 325617 ) on Tuesday September 23, 2008 @12:14PM (#25122075)

    The device relies on the assumption that the physiology of people up to no good may be different from that of normal people.

    And that may be true.

    However, this'll be much more useful somewhere like an embassy or checkpoint than in an airport. In a sea of potentially hostile people, it's harder to pick out the ones who may actually do something. In a sea of basically docile people, it should be relatively simple to visually pick out the nervous ones.

  • Re:Err (Score:3, Interesting)

    by Shadow Wrought ( 586631 ) * <shadow.wroughtNO@SPAMgmail.com> on Tuesday September 23, 2008 @12:21PM (#25122217) Homepage Journal
    Does this sound idiotic to anyone else?

    Yep. But this is slashdot. To the powers that be it probably shows "great promise" and, since it is a machine, would be "unbiased."

    All the things it is tagging as "suspicious" could also be explained by a bad phone call just before you come in range. Maybe your wife just called to say she's leaving you for your sister. Again.
  • by Anonymous Coward on Tuesday September 23, 2008 @12:30PM (#25122429)

    I feel like I have to bring this example up about once a month what with security news and all, but:

    Suppose there's a rare illness that occurs in 1 out of a million people. The test for that illness is 99.9% accurate, meaning that one in a thousand well people will be falsely diagnosed as sick and one in a thousand sick people will be falsely diagnosed as well.

    A million people come in for the test. On average one has the illness. But a full *thousand* are going to be tested positive. So if this 99.9% accurate test says you're sick, you in fact have a 99.9% chance of being well.

    This is a real and well studied problem in medicine (it has a name, but I forget it and if someone knows, please post). I've yet to see any evidence that the problem has been addressed or even acknowledged in the case of mass security screening.

    So, even if this crime test is accurate 8 or 9 times out of 10, because most people (of any race or religion) are not criminals or terrorists, the positives are going to be meaningless without further screening (the arithmetic is sketched after this comment). Since "further screening" will almost certainly represent a gross violation of an innocent person's rights, this should be abhorrent to anyone who values a free society.

    This of course raises the question: if it's so useless, why bother? I can think of two reasons, one cynical and one very cynical.

    The first: a well connected contractor is making the device. As our government gets more and more privatized, this kind of thing is running rampant.

    The second: an authoritarian police state loves a pretense to hold anyone they choose under suspicion.

    The (even more cynical) reality is probably a combination of the two.
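
    A minimal Python sketch of the base-rate arithmetic described in this comment (an editorial illustration, not part of the original thread); it plugs the comment's own numbers, a one-in-a-million illness and a 99.9% accurate test, into Bayes' rule. The function name and structure are purely illustrative:

        # Base-rate arithmetic from the comment above: a 99.9% accurate test
        # for an illness that 1 in a million people have.
        def positive_predictive_value(prevalence, sensitivity, specificity):
            """P(actually sick | test says sick), via Bayes' rule."""
            true_pos = prevalence * sensitivity
            false_pos = (1 - prevalence) * (1 - specificity)
            return true_pos / (true_pos + false_pos)

        prevalence = 1 / 1_000_000   # one person in a million is sick
        accuracy = 0.999             # 99.9% sensitivity and specificity

        ppv = positive_predictive_value(prevalence, accuracy, accuracy)
        print(f"P(sick | positive) = {ppv:.3%}")      # roughly 0.1%
        print(f"P(well | positive) = {1 - ppv:.3%}")  # roughly 99.9%, as the comment says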

  • Re:sensors... (Score:3, Interesting)

    by gnick ( 1211984 ) on Tuesday September 23, 2008 @12:34PM (#25122509) Homepage

    Particularly, a religious fanatic will be in a state of peace and righteousness-filled euphoria because he is finally "fulfilling his destiny" in life and just hours away from being rewarded by his God for being a faithful "Holy Warrior".

    I've got to disagree there. I don't want to praise the machine - This thing is nuts. And I agree that, just before detonation, a fanatic may experience a sense of euphoric peace. But, when going through security, it's a toss up between beautiful martyrdom and failure resulting in a good long stretch in Guantanamo Bay being questioned unmercifully by the infidels. A good lot of training may help them deal with that stress. And their faith may provide them with confidence that their gods wouldn't allow them to fail. But until you actually get through security, there's got to be a lot of stress to deal with - Probably even more than when they actually push the button / flip the switch / light their shoe laces.

  • by patio11 ( 857072 ) on Tuesday September 23, 2008 @12:37PM (#25122583)

    Most AIDS tests are 99%+ accurate at telling you that a person with HIV actually has HIV. They're also 99% accurate at saying a person who doesn't have HIV, doesn't have HIV. It's the combination of those two facts plus "Very few people in the general population have HIV" which makes mass one-time AIDS screenings a bad idea -- you successfully pull out the one guy in 100 who had HIV, then you throw in one negative bystander, and you end up adding 99% accurate + 99% accurate to get 50% accurate.

    There are a heck of a lot fewer terrorists than 1% of the flying public.

    There is a countermeasure, of course -- you use the magic machine not as a definitive test but as a screening mechanism. Know why we aggressively screen high risk groups for AIDS? Because they're high risk -- if 1 out of every 4 screenies is known to be positive (not hard to reach with some populations) then the 99%/99% math adds up to better than 95%. Better news. (You then independently run a second test before you tell anyone they're positive. Just like you wouldn't immediately shoot anybody the machine said is a terrorist -- you'd just escalate the search, like subjecting them to a patdown or asking for permission to search their bags or what have you.)

    So you could use the magic machine to, say, eliminate 75, 90, 99%, whatever of the search space before you go on to whatever your next level of screening is -- the whole flying rigamarole, for example. Concentrate the same amount of resources on searching 20 people a plane instead of 400. Less hassle for the vast majority of passengers, and less cursory examinations for the people who do get searched (a worked version of this math follows the comment).

    The quick here will notice that this is exactly the mechanism racial profiling works by -- we know a priori that the 3 year old black kid and the 68 year old white grandmother are not holding bombs, ergo we move on to the 20 year old Saudi for whom it is merely extraordinarily improbable that he is holding a bomb. That would also let you lop a huge section of the search space off the top.

    The difference between the magic machine and racial profiling is that racial profiling is politically radioactive, but the magic machine might be perceived as neutral. Whether you consider that a good or a bad thing is up to you. Hypothetically assuming that the machine achieves, oh, 80% negative readings for true negatives, many people might consider it an awfully nice thing to have 80% of the plane not have to take off their shoes or get patted down -- they could possibly get screened as non-invasively as having to answer two of those silly, routine questions.

    (Of course, regardless of what we do, people will claim we're racially profiling. But that is a different issue.)
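
    A rough Python sketch of the two points above (an editorial addition; the 95% clear rate in the pre-filter example is purely hypothetical, in the spirit of the comment's "75, 90, 99%, whatever"): how the same 99%/99% test behaves at different base rates, and how using the machine only as a pre-filter shrinks the pool that needs a thorough search.

        # Same 99%/99% test at different base rates (numbers from the comment).
        def ppv(prevalence, sensitivity, specificity):
            """P(flagged person is a true positive)."""
            tp = prevalence * sensitivity
            fp = (1 - prevalence) * (1 - specificity)
            return tp / (tp + fp)

        print(f"1% base rate:  PPV = {ppv(0.01, 0.99, 0.99):.0%}")   # about 50%
        print(f"25% base rate: PPV = {ppv(0.25, 0.99, 0.99):.0%}")   # better than 95%

        # Pre-filter idea: if the machine (hypothetically) clears 95% of a
        # 400-passenger plane, only about 20 people need the full screening.
        passengers = 400
        cleared = 0.95
        print(f"Secondary screening pool: {round(passengers * (1 - cleared))} of {passengers}")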

  • Re:sensors... (Score:4, Interesting)

    by gnick ( 1211984 ) on Tuesday September 23, 2008 @12:38PM (#25122609) Homepage

    Sheesh! I've never seen a bunch of geeks so opposed to developing an immature technology before! Perhaps a toning down of the pessimism would be in order, and perhaps we may see some improvements in our understanding of human behaviour, and the programs built to understand it.

    It's not that they oppose the development of the technology. It's that they're fed up with privacy invasions and random harassment and see this device as a means of propagating both. Even if this thing threw up 50% correct red-flags, you'd see objections.

    Besides, Big Brother paranoia plays very well here - Especially when it's accurate.

  • by UnknowingFool ( 672806 ) on Tuesday September 23, 2008 @12:42PM (#25122671)

    Wouldn't "suspicious" also be highly subjective? Many times that's more reflective on the prejudices of the observer. So let's take a programmer who's been up all night trying to solve a problem. He's disheveled, unshaven, and probably unkempt. He's deep in thought and in his own world. He starts talking to himself about the problem. Is he suspicious?

  • Re:Not even close (Score:5, Interesting)

    by JustinOpinion ( 1246824 ) on Tuesday September 23, 2008 @12:46PM (#25122739)
    Cory Doctorow described it nicely in his recent book "Little Brother" (free download available [craphound.com]):

    If you ever decide to do something as stupid as build an automatic terrorism detector, here's a math lesson you need to learn first. It's called "the paradox of the false positive," and it's a doozy.

    Say you have a new disease, called Super-AIDS. Only one in a million people gets Super-AIDS. You develop a test for Super-AIDS that's 99 percent accurate. I mean, 99 percent of the time, it gives the correct result -- true if the subject is infected, and false if the subject is healthy. You give the test to a million people.

    One in a million people have Super-AIDS. One in a hundred people that you test will generate a "false positive" -- the test will say he has Super-AIDS even though he doesn't. That's what "99 percent accurate" means: one percent wrong.

    What's one percent of one million?

    1,000,000/100 = 10,000

    One in a million people has Super-AIDS. If you test a million random people, you'll probably only find one case of real Super-AIDS. But your test won't identify one person as having Super-AIDS. It will identify 10,000 people as having it.

    Your 99 percent accurate test will perform with 99.99 percent inaccuracy.

    That's the paradox of the false positive. When you try to find something really rare, your test's accuracy has to match the rarity of the thing you're looking for. If you're trying to point at a single pixel on your screen, a sharp pencil is a good pointer: the pencil-tip is a lot smaller (more accurate) than the pixels. But a pencil-tip is no good at pointing at a single atom in your screen. For that, you need a pointer -- a test -- that's one atom wide or less at the tip.

    This is the paradox of the false positive, and here's how it applies to terrorism:

    Terrorists are really rare. In a city of twenty million like New York, there might be one or two terrorists. Maybe ten of them at the outside. 10/20,000,000 = 0.00005 percent. One twenty-thousandth of a percent.

    That's pretty rare all right. Now, say you've got some software that can sift through all the bank-records, or toll-pass records, or public transit records, or phone-call records in the city and catch terrorists 99 percent of the time.

    In a pool of twenty million people, a 99 percent accurate test will identify two hundred thousand people as being terrorists. But only ten of them are terrorists. To catch ten bad guys, you have to haul in and investigate two hundred thousand innocent people.

    Guess what? Terrorism tests aren't anywhere close to 99 percent accurate. More like 60 percent accurate. Even 40 percent accurate, sometimes.

    What this all meant was that the Department of Homeland Security had set itself up to fail badly. They were trying to spot incredibly rare events -- a person is a terrorist -- with inaccurate systems.
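
    The excerpt's New York numbers are easy to check; a small Python sketch (an editorial addition, not from the book) reproduces them:

        # Numbers from the excerpt: a city of 20 million, about 10 terrorists,
        # and a screening test that is 99% accurate.
        population = 20_000_000
        terrorists = 10
        accuracy = 0.99  # treated as both sensitivity and specificity

        true_positives = terrorists * accuracy
        false_positives = (population - terrorists) * (1 - accuracy)
        flagged = true_positives + false_positives

        print(f"People flagged:       {flagged:,.0f}")                  # about 200,000
        print(f"Actual terrorists:    {true_positives:,.0f}")           # about 10
        print(f"Flags that are wrong: {false_positives / flagged:.3%}") # ~99.995%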

  • by zotz ( 3951 ) on Tuesday September 23, 2008 @12:46PM (#25122743) Homepage Journal

    HEY! You have something here...

    So they are going to have to make flying a pleasant experience again if they hope to have this system work! Wow. Now that is going to be a tall order.

    all the best,

    drew

  • Re:sensors... (Score:3, Interesting)

    by HTH NE1 ( 675604 ) on Tuesday September 23, 2008 @12:46PM (#25122751)

    A better name than 'FAST' would be 'cattle-control'.

    Project Sheepdog.

  • Re:Err (Score:5, Interesting)

    by DriedClexler ( 814907 ) on Tuesday September 23, 2008 @01:06PM (#25123129)

    Yes, it does sound idiotic. My reaction was: ROFLcopter at the idea that you can successfully "tell people to act suspicious". Um, if people could actually notice and control the aspects of themselves that make them look suspicious, others wouldn't be suspicious of those aspects in the first place!

    Think about it: people become suspicious of others based on criteria X,Y,Z because meeting X,Y,Z reveals a higher probability of intent to cause harm. But anybody trying to cause harm will suppress any *controllable* sign that they are trying to cause harm before it's too late to stop. So the only remaining criteria people use in determining whether they'll be suspicious of someone are those that are very difficult if not impossible to control. As a bad example: someone will only look around to see if he's being watched (which looks suspicious) if he's about to do something objectionable (like picking a lock). But he can't suppress that, because then he takes the chance of someone noticing him picking the lock.

    A better test would be to set up a scenario like a line at the airport where the screeners have to keep out dangerous items. Then, have a few of the participants try to smuggle items through, and get a huge reward if they succeed, while the screeners get the reward if smugglers don't succeed. Then, put a time limit on it, so the screeners have to be judicious about whom they check, so they only check the most suspicious. Oh, and make it double-blind as much as possible. Then, the people trying to smuggle will have the same incentive structure that real smugglers have, and thus will give off all the real-world signs of planning something objectionable.

    But then, that would be too much work.

  • Re:sensors... (Score:2, Interesting)

    by Anonymous Coward on Tuesday September 23, 2008 @01:22PM (#25123427)

    Of course, they could just detonate right there in the security line and kill everyone in the bottleneck.
    The checkpoints are a greater point of vulnerability than the aircraft.

  • Re:sensors... (Score:3, Interesting)

    by nabsltd ( 1313397 ) on Tuesday September 23, 2008 @01:49PM (#25123929)

    Even known terrorist groups are now using "non-traditional" people as attackers, so either positive (i.e., "you look like a terrorist") or negative ("you don't look like a terrorist") profiling will cause too many false positives and negatives.

    Second, it wouldn't be surprising to see people who aren't part of the "traditional" terrorist groups performing acts of terror for reasons unrelated to the political goals of groups like al-Qaida. In the US, it might be one of the "militias", while in Germany it might be a neo-Nazi. There might even just be a nutjob who got fired from an airline. If you're going to have terrorist screening slowing down airports, bus terminals, etc., then you want to catch all terrorists, not just ones that fit some previous profile.

    <rant>It really pisses me off that all the steps taken to "protect" us are completely reactive. Sure, there's the standard behind-the-scenes investigations that have always taken place, but all the general-public screening is reactive. So, when several groups of terrorists shoot up a bunch of shopping malls, then we'll have screening at the entrance to every shopping mall, but until then, nobody would even consider unobtrusive metal detectors at every entrance, yet if this hardware really works, it would be great to install in those shopping malls.</rant>

  • Re:sensors... (Score:3, Interesting)

    by ericfitz ( 59316 ) on Tuesday September 23, 2008 @03:54PM (#25126085)

    A dry run would not work.

    If the attackers knew it was a dry run, then they would not exhibit the signs of stress that the machine detects, therefore all would test negative.

    If the attackers did NOT know it was a dry run, then they must also carry attack devices with them through the screening process, and risk having the devices detected by an observant screener or by secondary screening.

    Plus, they must either carry out the attack, making their future use moot, or have the attack called off at the last moment, which potentially subjects the organizers to risk of capture.

    There are certainly other reasons to criticize the effort, but dry run attacks are not one of them.

  • Re:sensors... (Score:3, Interesting)

    by ckaminski ( 82854 ) <slashdot-nospam.darthcoder@com> on Tuesday September 23, 2008 @04:06PM (#25126323) Homepage
    He said many, not all, and as a New Englander, I happen to agree with him.
