ACLU Examines Face-Recognition System

nate_drake and others wrote in about an ACLU report on face-recognition (PDF) (see also their press release and an MSNBC article). We've posted several previous stories about the Tampa police using face-recognition systems at the Super Bowl and on the streets of Ybor City.
  • ... so the police abandoned it after not catching anyone with it.

    All this means is that the companies developing this stuff will have to improve their face-matching algorithms, and then we'll all be back at square one.

    Given the demand for this stuff post-Sept. 11, I don't think the companies will have a problem with motivation.
    • Re:ok... (Score:5, Insightful)

      by bourne ( 539955 ) on Friday January 04, 2002 @01:05PM (#2786374)

      All this means is that the companies developing this stuff will have to improve their face-matching algorithms, and then we'll all be back at square one.

      It's worse than that:

      • There's no evidence that the face-matching algorithm is the problem. It is for false positives, but the lack of true positives is just as likely to be a lack of... positives.
      • If so, it isn't the facial recognition that needs work - it's the facial database! We'll have to start requiring facial shots for all incoming student visa holders.
      • Heck, make that all visa holders.
      • Frankly, everybody belongs in the database but me and thee... and I'm not so sure about thee.

      How long until these companies start lobbying the gov't for mandatory inclusion of, say, license photos in the pool of database data so that people can be picked up as soon as they do something?

      • The idea of my post was to show that the problem won't go away anytime soon.

        I do agree, however, that the photo DB is part of the problem. The system probably demands a very high-res picture in order to work properly. Most store/bank security cameras aren't high-res enough to be useful for FaceIt.

        Building the DB straight from a photo DB of licensed drivers, visas, etc. won't happen, as even politicians are smart enough to realize that putting everybody in that DB would produce far more false positives than true positives.

        OTOH, if someone is picked up by police without ID on them, that's where something like that would be helpful. But then again, such instances are rare, so justifying the costs of doing so would be difficult.
      • Re:ok... (Score:2, Funny)

        by well_jung ( 462688 )
        How long until these companies start lobbying the gov't for mandatory inclusion of, say, license photos in the pool of database data so that people can be picked up as soon as they do something?

        Umm... That'd be great. I never look as bad as I do on my DL. I'm not even sure it's me.

      • it isn't the facial recognition that needs work - it's the facial database! If you are trying to ID foreign terrorists, that's true, but pictures of the visa holders won't help much. If you've got one grainy picture of a guy in a turban and beard, with his face half turned away, no ID system is going to tell you that the clean-shaven guy in Western clothes waiting in the immigration line is the same guy.

        But in Tampa, apparently, they were after ordinary criminals. Most of these have been arrested and had mug shots taken multiple times. If that doesn't give them good enough pictures to work with, there's something seriously wrong with the mug shots, the face-matching software, or both. I suspect that it's really the false-positives problem: the cops got tired of running over to look at the latest "bingo" when the pictures didn't look at all alike to humans, and missed the few times when it really was the same guy.
      • There's no evidence that the face-matching algorithm is the problem. It is for false positives, but the lack of true positives is just as likely to be a lack of... positives.

        There is evidence that the face-matching is the problem, and that evidence is in fact the false positives, of which there were quite a number (fourteen over three days). Personally, I see false positives as a more damning failure than a lack of true positives.

        There's a fundamental fact that explains both why the lack of true positives isn't the problem and why false positives are nigh-inevitable: criminals are ridiculously less common than non-criminals. Even an extremely accurate and specific face recognition system is going to make more false positives than true ones. It's just an artifact of probability, which is why I'm against using these automated systems in the first place. That this particular system made so many false positives is indeed an indication that it sucks, however.
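        To make the arithmetic concrete - a toy calculation with invented numbers, not figures from the ACLU report - suppose 1 person in 10,000 walking past the camera is on the watchlist, and the system is right 99% of the time in both directions:

        prior = 1 / 10_000  # fraction of passers-by actually on the watchlist
        tpr = 0.99          # chance a wanted face gets flagged
        fpr = 0.01          # chance an innocent face gets flagged

        n = 1_000_000       # faces scanned
        true_alarms = n * prior * tpr           # ~99
        false_alarms = n * (1 - prior) * fpr    # ~10,000

        precision = true_alarms / (true_alarms + false_alarms)
        print(f"{precision:.1%} of alarms point at an actual suspect")  # ~1.0%

        Even a system that accurate - far beyond anything fielded in 2002 - would produce roughly a hundred false alarms for every real one.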

        But humans don't make very many false positives, you might say. And that's true (though it does happen). We do have a very advanced image recognition algorithm in our meat-computers. But that's not all. Think about whose faces you recognize. Have you only seen them in a single photograph? Or have you talked with them, seen their face from many angles and with many facial expressions? How many of those people would you recognize having only seen their driver's license photo? Even we, the world-renowned best human face recognizers ever, need some help to get the job done. Having a photo of everyone on earth won't fix this problem.

        Basically, you are doomed from the start not only by probability, but also by the handicaps you're starting your system with.
    • If the system has failed to catch a single criminal, why on earth are taxpayers funding a failure? The company should give the technology away until it is proven.
    • Yeah, someday they will figure it out better than they have so far.

      Right now the best facial identifiers we have are people. Think about how many people you know and can identify by their face - even people whose names you've forgotten but whom you still recognize on sight. (The problems of mug shots and line-ups notwithstanding,) under good conditions human beings only very rarely make the false positive of deciding that a stranger is someone they actually know.

      We have evolved a specific part of our brains specialized for identifying human faces, and I see no fundamental reason that computers couldn't someday attain the same level of skill. The problem is that the computer will be infinitely better at remembering names and other personal details, which we forget - and that leads to a whole lot of problems.
  • by actappan ( 144541 ) on Friday January 04, 2002 @01:02PM (#2786347) Homepage
    So really, how well can this work? Half the people I know look completely different from day to day anyway . . .

    But take this for an example:

    Soon after Sept. 11th, one of my colleagues flew east on a business assignment. He was concerned because, as he said, "Did I forget to mention, I look like one of those terrorists?"

    He did, a little. Darker complexion, goatee, curly hair, glasses. He looked more than a little like one of the suspects.

    He's since shaved his goatee, so he no longer looks like that . . . but would this system have been able to differentiate?

    I also seem to remember that one of the guys they arrested and incarcerated in Tampa turned out to be the wrong guy . . . .
    • He's since shaved his goatee, so he no longer looks like that . . . but would this system have been able to differentiate?

      Given what some people said in the MSNBC article, your friend would be fine:

      glasses, a mustache, a beard, longer hair, different lighting, even turning 15 degrees were enough to foil the system.
    • Actually, these types of software focus on portions of the face that cannot be changed easily, such as the distance between the eyes and other facial features, cheek bone structure, etc... Casinos use this technology all the time to identify known cheaters and card counters.
    • I saw a TV show about this on PBS a while back; it was talking about how the technology works in Britain. I guess the cameras do all the recognition by the distance between the eyes and a couple of other places on the face. They had an example of a person walking in front of the camera with all sorts of things to try and mask his looks; the only one that worked was when the guy wore a hat and sunglasses and looked at the ground. Nate Tobik
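      A bare-bones sketch of the landmark-geometry idea both of the above comments describe (hypothetical landmark names and tolerance - real products like FaceIt measure many more points):

      import math

      def features(lm):
          # Normalize by inter-eye distance so overall image scale cancels out.
          eye = math.dist(lm["left_eye"], lm["right_eye"])
          return [math.dist(lm["left_eye"], lm["nose"]) / eye,
                  math.dist(lm["right_eye"], lm["nose"]) / eye,
                  math.dist(lm["nose"], lm["mouth"]) / eye]

      def same_face(lm_a, lm_b, tolerance=0.05):
          # Small distance between feature vectors = probable match.
          return math.dist(features(lm_a), features(lm_b)) < tolerance

      It also shows why the hat-and-sunglasses trick works: hide the eye landmarks and there is nothing left to measure.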
  • Lie detectors. (Score:3, Interesting)

    by ImaLamer ( 260199 ) <john.lamar@NospaM.gmail.com> on Friday January 04, 2002 @01:02PM (#2786349) Homepage Journal
    After they take these degenerate pieces of sh*t down, they need to go after the 'lie detectors'.

    Soon we will have a camera on the market which is *AS* reliable as a polygraph test - wrong 25% of the time.

    All of these things are a Bad Thing. These cameras will be used next to track you at the game [like the Bears? get calls at dinner about season ticket prices], at the bar [you like to go to the bar!], at the strip club [the cops watch you now because they know you're a perv!]...

    Just wait till this stuff goes all private!

    Reminds me of Demolition Man
    • by Anonymous Coward
      I can't wait for the surveillance society.
      Pretty damn quick the powers that be will learn that people believe one thing Sunday morning and a completely different thing Friday night.

      There's a reason Playboy and High Times stay in business -- although they don't freely admit it, people like sex and drugs. Just imagine our moral leaders' dismay at having their noses shoved in that nasty little fact with corroborating statistics. And the realization that you can't legislate human nature.
  • DMCA.... (Score:1, Funny)

    by MosesJones ( 55544 )

    Hang on, that's my face they are capturing, imaging, processing and then storing. Clearly I own broadcast rights to my own face and all of its characteristics.

    If they match my face, then they've reverse engineered my face into another format to circumvent its current storage mechanism.

    Totally silly application of the DMCA... but then again, it's a pretty silly bill.
    • Heh, it would just be fun to see how far people could twist the DMCA until we finally got it repealed. ;)

      I wonder if we could attack the fingerprint database too, since it would fall under the same category of reverse engineering your prints into another format and circumventing the current storage mechanism.

      Heh, then imagine if this lawsuit were a class action lawsuit. To claim shares of the payout, criminals would have to come and get them. (Much like the classic tales of criminals being suckered out of hiding by letting them know they've won some sort of prize.) ;)
  • salient points (Score:4, Informative)

    by nomadic ( 141991 ) <nomadicworld.gmail@com> on Friday January 04, 2002 @01:06PM (#2786388) Homepage
    1. The system has never correctly identified a single face in its database of suspects, let alone resulted in any arrests.
    2. The system made many false positives, including errors as basic as confusing images that a human could easily identify as male and female.
    3. The photographic database contains a broader selection of the population than just criminals wanted by the police, including such people as those who might have "valuable intelligence" for the police, or who have criminal records.
    I wonder why they didn't mention that man who was a demo face [slashdot.org] for the system, and was subsequently misidentified and then questioned as a felon. Guess it didn't make the logs.
    • It would seem that there is no value in using or abusing a system that doesn't work. The consequences of a false positive are great: lawsuits, criminal charges (against police).

      As for straight surveillance, the fact that abuses can be and are documented shows that the system CAN be monitored and the watchers CAN be watched.

      Good auditing practices conducted by 3rd parties WILL keep the system (fairly) honest.
    • > I wonder why they didn't mention that man who was a demo face for the system, and was subsequently misidentified and then questioned as a felon.

      No Problem...
      Just change the Camera Lens [yahoo.com] and ask the guy.
      ;-)
    • The points you've listed are all the result of limited technology. Given time, each of these can be overcome.

      There must be a broader argument against what face-recognition technology is trying to do -- keep the populace under surveillance.
    • Except that the man who was a demo face was misidentified by a human, who thought he looked like her deadbeat ex-husband. She contacted the police, who checked up on him and determined his actual identity. It had nothing whatsoever to do with the actual system. The face recognition software did not return a false positive; another human did.
  • I should try to trademark my face and demand royalties...

    -Pete
    • I should try to trademark my face and demand royalties...
      Hmmm - that would be interesting, although probably legally futile. I imagine the thing to do would be to take several pictures of your face from different angles, copyright and watermark them, and publish them with a statement that use of these or similar images of your face without your express permission is prohibited. One question would be: where would you publish the images? On a web site might not be sufficient.

      That protection probably only applies to those specific images, though, and not others, including any taken in public but certainly any taken while in police custody.

      Still, it would be funny to see the look on the face recognition vendor's, well, face, when you hit him with copyright and DMCA violation lawsuits.

      sPh

  • It's like every other anti-anti device. Copy protection on software hurts legitimate users the most. Facial recognition invades the privacy of upstanding citizens more than any perceived benefit to crime fighting justifies. It is casting the widest possible net while knowing full well that criminals are aware of you and will subvert your attempts to control them.
    • Consider this though. Are we guaranteed privacy when we are on public property?

      I am a true believer in the right to privacy when we are on our own property, but when we are on someone else's property, private or public (because public property is technically owned by a government), the rights of the property owner should come before those of the visitor.

      • (because public property is technically owned by a government)

        A common misconception, and untrue (granted, it's true enough in a practical sense). The government administers public lands, and holds them in trust for the public. Always remember, the government is YOUR servant, not the other way around. Public land is exactly that: public. You've heard of the Boston Common? So called because it's held in common by the residents of Boston.

      • The legacy viewpoint is that when we are in public we do not have a reasonable expectation of privacy. However, that legal viewpoint was created before video cameras became standard in every business and street corner, before consumer-grade spy cameras and recording devices, and even before handheld video cameras.

        Legal rulings change. Our privacy is worth a rethinking of the situation.
  • by adamy ( 78406 ) on Friday January 04, 2002 @01:08PM (#2786407) Homepage Journal
    I used to park my bike in the garage beneath my work. The bike rack was about 3 meters from the attendant, and covered by a video camera. I had a cheap lock, but I thought I was safe.

    One day, I came to the rack to find my lock cut and my bike stolen. The attendant refused to talk to the police. But that's OK, I had the perpetrator on camera.

    After getting the tape from the building security people, I took it to a camera shop. We sped through it to find the point where, sure enough, you could see a guy walk up, try on my bike helmet, and ride off with the bike. Due to the camera's position, you could not see him cut the lock.

    I say "him" because I am pretty sure that it was a male. That was all I could tell from the poor quality of the tape. I could not tell skin color, clothes, hair color, or enough facial features to recognize him.

    I don't think the best AI added to this image would have been able to do anything as far as facial recognition goes.

    I wouldn't want to be the attendant working that booth. After they find his mangled corpse (à la Fargo), the police will tell his widow, "Sorry Ma'am, all we can say for certain is that they were in some sort of automobile. We think a sedan, but we don't know for sure."

    Facial recognition is going to be even harder than this. As a programmer, you have two choices: go with an algorithm or try to use a neural network.

    Most of the weaknesses in the algorithm approach are what the ACLU document was complaining about.

    A neural network may work if you are looking for a specific person. The problem is that to identify two things as being different, they need to be as orthogonal as possible. To separate the sea of faces into two groups - those we are looking for and those we aren't - based on a series of images is going to be nigh unto impossible. Certainly not with the amount of computing power per camera that they would put into it.
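    To illustrate that difficulty - a toy model with invented score distributions, not data from any real system - imagine each scanned face reduced to a similarity score against its best database match:

    import random
    random.seed(1)

    # Invented distributions: genuine matches score higher on average,
    # but the two populations overlap.
    impostors = [random.gauss(0.45, 0.12) for _ in range(100_000)]  # innocents
    genuines = [random.gauss(0.75, 0.12) for _ in range(10)]        # wanted

    threshold = 0.8
    false_alarms = sum(s > threshold for s in impostors)  # hundreds
    hits = sum(s > threshold for s in genuines)           # a handful
    print(false_alarms, hits)

    Raise the threshold and the hits vanish before the false alarms do; lower it and the operator drowns. No amount of per-camera computing power moves that trade-off - only better separation between the two score distributions does.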
    • Video resolution is too low for identifying individuals with an area surveillance system. But that's changing. Megapixel cameras and higher are becoming cheap. The frame rate can be cut way down, and surveillance video can usually be heavily compressed from frame to frame without loss of detail, because so little is usually happening.

      A new generation of surveillance cams is starting to appear. There's the X10 junk at the low end, and megapixel camera / DVD-RW combos at the high end. Another year or two, and the midrange stuff will be replacing analog video.

      Big Brother's vision is about to improve substantially.

  • by DG ( 989 ) on Friday January 04, 2002 @01:09PM (#2786410) Homepage Journal
    As I understood it, the issue with facial recognition was the possibility of false positives; i.e., I'm just trying to watch the Super Bowl, the FR system tags me as a known terrorist (incorrectly :), and the next thing I know I'm being dragged off to the can for some serious interrogation (and not only unjustly traumatized, but I miss the game too).

    But from the ACLU's press release, there was always a human step in the process, where a real live human being would examine each purported match before anybody got dragged off anywhere.

    As such, all the face recognition software is is a _filter_, cutting down on the number of people a human agent must examine. Where's the problem?

    After all, law enforcement officers have placed themselves in public places, looking for people they knew, for probably as long as there have been law enforcement officers.

    A friend of mine was a sergeant in the British Army, and he did a few tours in Northern Ireland. Part of his training was memorizing the faces of a large number of known IRA "players" (and apparently the IRA did the same thing with British soldiers' faces).

    How is this any different?

    I guess I don't understand the ACLU's beef here.

    DG
    • I think it's good that /. has some people who realize there would be a human element verifying possible matches made by the system before dragging a person in for questioning.

      There are other reasons this system may go easier on people than you'd think.

      It has to be worth the officer's time to go get the guy if it is indeed a match. Someone wanted for petty theft would be better off than someone wanted for first-degree murder.

      It also depends on what else is currently happening on the cop's beat. If there's a hostage situation around the corner, the cop will have to wait another day to catch the guy wanted for murder 1.

      I had a rather long and drawn-out discussion with someone who just couldn't be convinced of this concept the last time this topic came up. Good to see other open minds out there.
      • On the flip side of that coin, the more you bring humans into the equation, the more you open it up to bias-based abuse.

        If the computer has given you two potential matches regarding similar crimes, do you pursue the black guy or the white guy? In the Tampa system, the operator had to zoom in on particular faces to run the software; do you believe that there was no profiling in whom he chose to look at? Will the operator be as good at eliminating false positives when the person is from a different racial group than he is?

        Sure, there is a human layer that offers some protection. The machine will never decide guilt or innocence, but the human layer also diminishes many of the bias-neutral advantages that this technology is purported to have.
        • I agree that bias may exist, but that would exist anyhow, and there's not much that can be done to prevent it other than firing the person if they're caught.

          The bias I see, however, is much different from yours. To my knowledge, the system in Tampa scans/zooms/pans through crowds in whatever fashion it does, then notifies an operator of a possible match. Control of the camera gets switched over to the operator so they can have a look for themselves to verify the match and initiate further action if they see fit. Chances are good these operators are going to be minimum-wage workers, not police officers, so the chance of bias is high (much like modern airport security), whether it be race/sex/whatever.

          I guess a solution would be to have multiple operators in multiple locations. When a match comes up, present it to more than one person. Differences of judgement go to a supervisor or something for a decision. While still not perfect, it does introduce some level of checks and balances.

          Aside from all that, psychologists have theorized (though not yet proven) that people have difficulty identifying someone of another race. An example I can think of off the top of my head is an old 20/20 (or maybe Dateline?) story where a white woman was raped by a black man. In a police line-up, the woman picked out a suspect. DNA and other evidence set the man free without his being charged. Several months later, that DNA evidence (stored in a DB of some sort) was used against someone caught in another rape case. It turns out that guy was present in the original line-up - the woman had picked out the wrong guy. The only similarities the two men had were height and skin color. Their faces were not especially similar.
          • According to the ACLU, in Tampa: "[T]he system does not automatically scan the faces of people on the sidewalks - operators must manually zoom in on a citizen's face before it registers in the software"

            Which would be the basis for selection bias.

            I agree that bias may exist, but that would exist anyhow, and there's not much that can be done to prevent it other than firing the person if they're caught.

            Do you fire the person for being biased even if they don't realize that they are being so? There is a lot of research showing how people can have preferences that they aren't consciously aware of.
            • Straight from the company's website [visionics.com]:

              FaceIt® will automatically detect human presence, locate and track faces, extract face images, perform identification ....

              I don't see "The operator will manually control the camera's positioning and focus." Seems to me the ACLU got it wrong in a big way. From what I've seen on TV about this technology, this is how it works in Tampa, and how it has worked for years in the UK.

              As for firing people over bias, naturally there would be a "corrective action period" where the person is told they're being biased, how that was determined, and to shape up.
    • Well, the problem is that you should be able to go about your life without the police scanning you at every moment you show your face in public. Yes, it's public, but I sort of like the hands-off policy of policing in the US. And our crime is really very low in most places, so it hasn't caused any problems to do it this way.

      Also, the ACLU found that they identified a total of zero (0) criminals in Tampa with this system, so it's also a waste of money....

    • A similar argument was once used in regards to lie detectors: Since there is a failsafe mechanism (a human) involved, the chance of abuse is lessened considerably. However, history has shown this not to be the case, and law enforcement frequently trusts the machine over the man. I doubt that this technology will be much different, at least in this respect. It is much easier to trust the machine and haul someone in for questioning.

      Further, the use of these machines at airports means that people who potentially don't even have a high school diploma will be operating them. From me, at least, that fails to inspire much confidence.

      • A similar argument was once used in regards to lie detectors: Since there is a failsafe mechanism (a human) involved, the chance of abuse is lessened considerably. However, history has shown this not to be the case, and law enforcement frequently trusts the machine over the man.

        But humans can't determine with their natural senses whether or not someone is lying, so if the machine catches an otherwise undetectable signal (i.e., blood pressure rising, pulse quickening), that's when we take its word over a human's. However, I heartily agree with the person you're responding to... these things are filters that reduce the number of candidates from 200,000 to 20. A person then comes in, looks at the picture and compares it to the real person. Heck, I don't care if security escorts me out into the hall briefly... I'm at a public event where they suspect someone may be endangering lives; I'll take 2 minutes out of my life to show them I'm not Arab or carrying a bomb.

        I am a liberal. This means I read books and have opposable thumbs.

        Frankly, I think this one sentence speaks volumes more than the rest of your post...

        --trb
      • by DG ( 989 ) on Friday January 04, 2002 @02:43PM (#2787155) Homepage Journal
        Well, there's a major difference between this technology and a lie detector.

        Humans are not very good at detecting lies, and can in fact get very good at certain specialized forms of deception, even in the face of equally specialized deception-detectors. (I'm thinking here of professional poker players, who make their living on deceiving and detecting deception, in a very specialized manner.)

        So a lie detector is an attempt to augment an ability.

        But humans are VERY good at facial recognition, much better than any machine is. The limitation here that the machine is trying to address is one of storage capacity (memory) and speed, not ability per se.

        I make the assumption here that the interface would display the picture of the person that it thinks the person in question matches alongside the picture from its camera (or print one for the officer to carry with him for an in-person comparison) so that the vastly superior human facial recognition abilities could be brought into play.

        So, really, I don't see any reason to get upset. It's no different than if an officer saw your photo on a "wanted" poster in the post office, and then made the match from memory, only now the "memory" is much larger.

        When an arrest warrant is cut, your driver's license is flagged in the DMV computers. If you get pulled over (for any reason) and your license is checked against this database, they get you. How is this any different?

        DG
        • Scenario: a FaceIt operator dismisses his 109th false-positive side-by-side alert provided by the software. How confident do you think he/she is, at this point, that ANY of them will net a positive identity? When you combine this with the equal likelihood of false negatives (something the ACLU article didn't even mention), this is utter chaos - criminals getting away because they haven't been recognized, and innocent people being flagged as possible criminals. Where's the benefit?
  • I always thought it'd be a fun idea to put full size pictures of known terrorists' faces on t-shirts to screw around with it. I could just see it now: SWAT team converging on terminal B-9 at Miami Airport, over the radio you hear,

    "Osama's been spotted. Take him out, over."

    "I guess he didn't stay in that cave very long, over."



    -G
  • At what point, when the technology is capable of functioning better than this one has, do we say enough is enough? When do we shut down the system that says you are guilty until proven innocent? In the US we were raised to believe that innocent until proven guilty is the prevailing law. Well, no more. Since 9-11 we have been hearing more and more about tracking individuals with this or that new gadget. As far as I am concerned, this would be fine if you were not talking about American citizens. You want to track foreign nationals when they come into our country to make sure they are here doing as they said they would when they crossed our border? Fine. Since when did we decide that Big Brother was OK? I don't know, but I don't like it. We are never going back to the days when we moved about as freely as we once did, but do we have to give up every ounce of freedom for security's sake?

    "Clean underwear? Why?!? ... First you're gonna say it, then you're gonna do it" -Bill Cosby
    • I agree with most of what you are saying, except for:

      As far as I am concerned, this would be fine if you were not talking about American citizens. You want to track foreign nationals when they come into our country to make sure they are here doing as they said they would when they crossed our border.

      The Bill of Rights is based on the concept of the rights of an individual human being.
      The concept is that a bunch of equal human beings banded together to provide for their common defense. If we feel these are inalienable human rights, then anyone who falls under the umbrella of the US Constitution, and by extension the US government, should be guaranteed those rights.

      Everyone.

      The right to a fair trial is there to ensure that, as best we as fallible human beings can determine, such and such a crime was committed at such and such a location. Remove that, and we'll have witch hunts.
    • > At what point, when the technology is capable of functioning better than this one has, do we say enough is enough? When do we shut down the system that says you are guilty until proven innocent?

      Who said anything about that?

      Indeed, the more accurate the technology gets, the less likely the software is to make a false positive.

      And supposing you are flagged by a false positive -- the cop who walks up to you and eyeballs you is likely to say "No, the software's wrong", and let you move on.

      And finally, suppose the cop also makes the same mistake. (Suppose, in the worst case, that you have a twin brother...) Well, sucks to be you. But then your lawyer subpoenas the various agencies to check the audit trail of your movements (security camera footage, face-idents, cell phone logs, IP addresses from Carnivore's logs of your /. postings), which demonstrate pretty conclusively that you were in East Bumfuck, Wisconsin at the time the crime took place in Los Angeles, California, and you're off the hook, with charges dropped.

  • The local police in Virginia Beach want to install this to monitor the oceanfront area. They are so gung-ho about it that they are willing to fight city hall on this issue. The mayor and the local city council have both said that they do not like the idea, and the police chief's reply was that they were going ahead with it anyway.

    The reason that I point this out is that I have to wonder what is going on behind the scenes. We have a technology that is unproven at best and most likely unworkable. Yet we see police chiefs risking their jobs to get it installed. Is it the kickbacks or something more sinister? Probably kickbacks, but I would really like to know for sure.

    • It's biggERING (Score:2, Insightful)

      by Newer Guy ( 520108 )
      You don't know about biggERING? I guess you never read Dr. Seuss' "The Lorax" as a kid. Biggering is what happens to EVERY government everything... and many corporations as well. See, they're not happy being the size they are... so they bigGER. If you have one employee working for you, now you want two. Then two becomes four, and so on and so on and so on. This police chief is the perfect example: he wants a bigGER police department. More toys = more employees needed to install, run and maintain the toys. Now he can say: hey, I now run a 100-person dept. (as opposed to a 40-person one five years ago), I need a BIG raise (after all, crime is mostly down everywhere, so he surely needs a bigGER dept. now, doesn't he?)! And on it goes. Read The Lorax. You'll understand.
  • if( face.has_hat == TRUE && face.hat == TURBAN)
    {
    face.is_terrorist = TRUE;
    police.arrest(face.parent);
    police.beat(face.parent);
    police.eatDonuts();
    }

    • > face.is_terrorist = TRUE;
      > police.arrest(face.parent);
      > police.beat(face.parent);
      > police.eatDonuts();

      Wait a minute, if they beat your face in well enough the first time you get falsely-identified, you won't be recognized the second time.

      I'd say this tech will stop police face-beatings, not encourage 'em ;-)

  • Not to sound like a troll - this is a legitimate question.

    Why do we allow the local convenience store to film us, yet we seem to have a problem with our government doing the same thing? Is it the database of faces that scares us? What if K-Mart used facial recognition to associate our purchases with our faces? Would it be wrong for them to have televisions throughout the store interactively suggest purchases?

    Don't get me wrong, I like retaining my rights, but in this case, I feel that the rights of law enforcement agencies are being restricted.
    • It's that K-Mart and other stores do not use the tapes to actively seek out criminals. They only use them to watch for crimes that may be occurring in their stores. It's also semi-private property, and people know the cameras are there.

      There isn't nearly as much of a chance for someone to be wrongfully accused - and even if they are accused of something like shoplifting, it's a simple process to clear them of the charge in the store. Sure, it's embarrassing, but not nearly as big a deal as if the police take you away for verification. If you are picked up on a camera by the government as a criminal, I am willing to bet that when they track you down it won't be nearly as simple a process as what you'd experience at K-Mart.

      As for K-Mart using facial recognition to suggest purchases - I think it's an awful idea, but for reasons other than those associated with the loss of privacy.
    • It seems to me that most people aren't too upset about just being filmed. The convenience store has security cameras, but the footage doesn't really come into use much unless a crime is actually committed. When you add in a database and a system that tags the wrong people, anyone could get grabbed by police for questioning because of a glitch.

      As for K-Mart: I really don't think I'd care much for them tracking my face for personalized ads. But then, I don't like it when companies keep that much information on me in general.
    • by coyote-san ( 38515 ) on Friday January 04, 2002 @01:40PM (#2786637)
      Your local convenience store can't detain you.

      Well, technically they can. But the public, including that rent-a-cop in the convenience store, can only detain a person 1) if they witnessed a crime and 2) to turn them over to a sworn police officer at the earliest possible time. If either piece is missing, you can nail them for "false arrest." This is an important thing to remember if you're ever (wrongly) accused of shoplifting - demand a real cop, *now*, to either arrest you or release you. If they refuse to call the cops... life will soon get *very* interesting.

      Even those bounty hunters have limited rights. They can detain someone who signed the bond papers, but there are some well-documented cases where the bounty hunters were prosecuted for kidnapping after detaining the wrong person and failing to exercise due diligence in verifying the identity of that person.

      But sworn police officers can detain people even if the officer didn't witness a crime. They can detain people even if there's no witnesses at hand, e.g., if they reasonably believe that the person is the subject of an arrest warrant issued by another jurisdiction.

      The cost of a false positive in a convenience store is minimal. They think you're a shoplifter because of their face recognition software? Fine, you walk away and shop at another store where they're more careful with their accusations.

      But a false positive with a police officer may have you arrested, at gunpoint, and detained for hours or days until you can prove that you aren't the escaped mass murderer you resemble.

      (IANAL, but this is stuff that should be required knowledge for walking around on the street!)
    • The convenience store is a private establishment. They have the right to secure it as they see fit, within established legal limits of course.

      Face scanning as performed by the police then differs in two ways: (1) It is being performed by public servants. (2) It is potentially done in any public spaces.

      If you don't want to be on a store security camera, don't go to the store. If you don't want to be in the police face-scan database, don't ever go out of your house.

      See the difference?

  • The ACLU attacks face recognition when it is used to find criminals among the general public. (In other words, the system flags an individual as "suspect" if their face is found in a database of images).

    I think a much better use would be in an access-control situation-- flag someone as suspicious if their face is NOT in the database.

    It isn't too difficult to socially engineer your way into a building if you don't work there. But if employees were required to look at a camera for a split second before passing by the security desk, it would be much easier to identify those who don't belong (especially in large companies).

    No, I don't think face recognition is accurate enough yet, but there are legit and non-privacy-invading uses for this technology.
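    A minimal sketch of the inversion being proposed - hypothetical names throughout, with a stub standing in for whatever matcher is actually available:

    def similarity(a, b):
        # Stub for a real face-match score in [0, 1]; purely illustrative.
        return 1.0 if a == b else 0.0

    # Watchlist mode (what Tampa ran): alarm when a face IS in the database.
    def watchlist_alarm(face, wanted_db, threshold=0.9):
        return any(similarity(face, w) >= threshold for w in wanted_db)

    # Access-control mode (proposed above): alarm when a face is NOT in it.
    def access_alarm(face, employee_db, threshold=0.9):
        return not any(similarity(face, e) >= threshold for e in employee_db)

    The failure modes differ too: a false alarm here means a badge check at the security desk rather than a police stop, and the database holds only people who agreed to be in it when they took the job.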
  • The ACLU's statement of fact, that face recognition doesn't work, is interesting and useful. It means that more work is necessary for this to be a functional technology.

    But I'm not sure how that leads to their conclusion, "the technology does not deliver security benefits sufficient to justify the Orwellian dangers that they present." If the stuff doesn't work, then it doesn't present any dangers.

    At least, not any more dangers than the police officer standing out there looking for people. I'm not sure that the ACLU would grant police that right, actually, since it is a violation of what they seem to consider an absolute right of privacy.

    In my opinion, a society will always exchange some amount of liberty for security. For example, I don't have the liberty to shoot people. The boundaries will always be contentious, but it seems to me that this sort of absolutism does not make for a reasonable discussion of where the boundaries are most appropriately set.
    • The Orwellian danger is having a camera on every curb, and the potential for John Q. Public to be under constant surveillance. I'd expect that most people would be upset or put off if the police followed them around all day watching what they do. The police argue that the contentious and widespread use of cameras is justifiable because facial recognition gives them a legitimate and bias-neutral use.

      The ACLU is saying that the system doesn't work, so there is no good reason for deploying lots of cameras. (Actually they are saying more than that, but that's the part that's easy to agree with.) In any case, it's clear that the surveillance capacity presents a danger even if the intended purpose for the system doesn't work.
      • Point well taken, thank you. I believe I now understand their concern. If this article had not been published, the police might have deployed hundreds of "face recognition" cameras which actually had a more nefarious purpose, since the face recognition didn't work.

        While I appreciate and commend the ACLU's work protecting me from overzealous police, the antagonistic tone may be counterproductive. I'd like to believe that the police would say, "Gee, this doesn't work, let's wait a few years and try again" rather than try to use it as cover for a genuinely Orwellian plot.

        Perhaps I'm naive, but as long as this is framed in terms of the ACLU wanting absolute privacy, and the police depicted as the agents of Big Brother, it's unlikely that we'll actually find any sort of useful compromise.

        Mind you, I live in Prince George's County, Maryland, where the police _are_ actively engaged in covering up abuses, as documented in The Washington Post. So perhaps I'm deliberately ignoring ugly truths rather than being simply naive.
  • All the ACLU wants to do is discredit the technology, not actually find out if it works. Therefore, they're going to adjust their "findings" to suit their benefit.

    Just because the technology failed in their "one month study" doesn't mean it's not a success. How many criminals could be walking down the street of NYC right now without anyone even noticing them? I'm sure there are plenty! Even if you have the person's face plastered on every telephone post, could you with 100% accuracy point your finger and say "THAT IS THE GUY IN THE PHOTO!"? I know I couldn't. How many times have people misidentified criminals... seen someone and said it looked like someone else... and then when the police did look into the suspect, found he or she was not guilty? This happens all the time. So to say that the technology is flawed because it picked up innocent people is just dumb.

    The other point made in the MSNBC article was that the system failed to flag someone when there was a 15-degree variance between the image on file and the person being compared... or if the lighting situation was different. Unless you know someone and see them daily, can YOU tell a person by their profile view when you only have a picture of their face? No, you can't. If it was dark, could you tell? No.

    Another point: that the system was gender-biased... well, I know several people who could go either way, and if you were to look at someone with long hair and lipstick, wouldn't you consider them female? All the guys who accidentally pick up drag queens in NYC every day surely couldn't tell the difference between male and female!

    The ACLU is making lame points and has no real evidence. Then again, I haven't seen any university studies or other non-biased studies on this situation. The ACLU claims Tampa police have abandoned the technology, which seems to be an all-out lie. It's all a bunch of bullshit, and that single item is the proof.
    • by sphealey ( 2855 ) on Friday January 04, 2002 @01:32PM (#2786582)
      Just because the technology failed in their "one month study" doesn't mean it's not a success.
      Polygraphs (so-called "lie detectors") have a failure rate of around 60%. In other words, they are worse than just going before a judge and asking him to flip a coin to determine guilt. Yet they have been in use in the US for 70 years, and every year the USG comes out with a larger list of people who must be polygraphed.

      So I would say it is definitely worth it to fight very hard against bad technology right from the start.

      sPh

      • Polygraphs aren't used as the definitive answer either. They are used as an "aid," just as this would be. Same thing with eyewitnesses - they make mistakes all the time (not sure about any percentages, but I'm sure it's up there)... so does that mean we should stop relying on eyewitnesses too?

        The fact that the polygraph has been used for 70 years shows that it is an effective means of collecting evidence. If it wasn't, they wouldn't still use it.
        • by sphealey ( 2855 )
          The fact that the polygraph has been used for 70 years shows that it is an effective means of collecting evidence. If it wasn't, they wouldn't still use it.
          Not to be rude, but you might want to do a little research on human nature. As far as is known, humans are unique in the animal kingdom in their ability to convince themselves that something which is objectively false or useless is "effective".

          If you read up on the history of the polygraph, this is clearly what is going on. Faced with a choice of admitting that they don't know who the heck committed a crime, or using a demonstrably useless machine, investigative agencies (particularly the FBI) have been using the useless machine for 70 years. And have now made such a culture of using it that they dare not admit to anyone, particularly themselves, that it just doesn't work.

          sPh

      • Polygraphs (so-called "lie detectors") have a failure rate of around 60%.

        What, exactly, does this mean? If it's wrong 60% of the time, couldn't you just use the opposite answer from the polygraph test and have it be wrong 40% of the time instead?

        Some clarification would be helpful.

        --Tom

        • If it's wrong 60% of the time, couldn't you just use the opposite answer from the polygraph test and have it be wrong 40% of the time instead?
          No, because what has to be measured is what the operator concludes. In statistically valid studies, even when operators have been informed that they are wrong more often than not, they still reach the wrong conclusion. You could argue that the superiors of the operators could apply the method you describe, but that would mean admitting that the system doesn't work, which would be a poor argument in court.
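          For what it's worth, the flip would work on a fixed, forced-choice binary test - a toy simulation with invented numbers:

          import random
          random.seed(0)

          truth = [random.choice([True, False]) for _ in range(10_000)]
          # A test that is wrong 60% of the time on a forced binary choice:
          verdict = [t if random.random() < 0.4 else not t for t in truth]

          wrong = sum(v != t for v, t in zip(verdict, truth)) / len(truth)
          flipped = sum((not v) != t for v, t in zip(verdict, truth)) / len(truth)
          print(f"as reported: {wrong:.0%} wrong; flipped: {flipped:.0%} wrong")

          But a polygraph examination isn't that: verdicts include "inconclusive," error rates vary by operator and subject, and what gets reported is the operator's conclusion - so there is no stable output to invert.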

          Some clarification would be helpful.
          I agree, but the clarification would have to come from those who manufacture and support polygraphs, and I doubt that you will get honest answers from them - even if hooked up to a polygraph!

          sPh

  • If we are forced to live with Face Recognition Systems, we should require the person operating the camera system to have their photo and their personal information presented next to the camera. They shouldn't have anything to hide, should they? They have my data, why shouldn't I have theirs...
    • Reminds me of this site, well worth wandering around in:

      sousveillance [wearcam.org]

      The guy has been at it a long time, and has come up with some interesting approaches from both a technical and a social perspective. The 25 MB video file shot inside Sears had me both laughing on the floor and scared shitless. "Call security" takes on a whole new meaning...

      Taking action isn't difficult, but the powers that be want you to think so.

      Hope that helps.
  • by Zen Mastuh ( 456254 ) on Friday January 04, 2002 @01:27PM (#2786551)

    This facial recognition has a near-zero Hit rate and a high false-positive (Type I error) rate. The false-positive rate is a killer because it may cause system operators to miss a Hit (true positive). So what do we end up with: an authoritarian tool that is completely worthless.

    Meanwhile, the failure of this project can be a selling point for Larry Ellison's proposed National ID card system. Perhaps the streetlamp cameras in Ybor City will soon be replaced by turnstiles manned by undereducated, undermotivated, understimulated, minimum-wage-earning Security Engineers (read: displaced airport security screeners) checking each person's National ID card. These people probably won't be able to grasp the concept of Type I/II errors; thus the implementation of the National ID Card will suffer from the same problems as the facial recognition system.

    In summary the two vendors will profit substantially from their products--which won't make the public any safer--and we will be eased into acceptance of the mercantilist authoritarian police state.

  • by Moorlock ( 128824 ) on Friday January 04, 2002 @01:32PM (#2786585) Homepage
    Check out this article from the Christian Science Monitor [csmonitor.com] - at the top is a photo captioned "Pelco CEO David McDonald walks through the X-ray checkpoint at Fresno Yosemite International Airport with a picture of Osama bin Laden to demonstrate a new face identification system."

    For more on McDonald's over-hyping of the emperor's new security blanket, see this article from the Fresno Bee [fresnobee.com].

    Sample quote: "This breakthrough technology makes us the safest airport in America," Mayor Alan Autry said.

    • Ha! Funny pic. Now I wonder: what is the difference - to a camera - between a 2-D rendering and a 3-D face? The image of bin Laden on the photo paper is a reflection (bad pun...) of the lighting, f-stop, and other conditions determined by when/where/how the picture was taken. An image taken of a person walking through the airport security checkpoint will be subject to the same conditions, unique to the location. There will be some difference - to the camera - between a real-life bin Laden walking through the checkpoint and a man of different stature walking through with an image of bin Laden in front of his face.

      The Pelco system must use some very loose logic to indicate a probable Hit. This will lead to more false positives (Type I errors) and greater mistrust of the system. Not before Pelco profits outrageously, though.

  • "The move to permanently brand some people as under suspicion and monitor them as they move about in public places has deep and significant social ramifications. If we are to take that path - a step that the ACLU opposes - we should do so as a nation, consciously, after full debate and discussion, and not simply slide down that road through the incremental actions of local police departments."
  • ...and I want to emphasize: "part".

    To me this is just like predicting weather. All weather stations (SW & HW) out there do is "predict" the weather -- humans then jump in to validate the "prediction". To me, this system is no different.

    Granted, it still needs a lot more work, but wasn't the first weather station in the same state when it was first introduced? Let's give this thing some time before killing it.

    The ACLU is looking for a reason to discredit the system; they need to do a better job than spending two months reviewing one unit which was used only 2 days a week.
  • by bungalow ( 61001 ) on Friday January 04, 2002 @01:41PM (#2786645)
    As much as it sounds like good news, it is not great.

    Remember the good ol' days of th'Internet, where no one could possibly track you and where anonymity was technologically "guaranteed"... now (some) users can be pinpointed via IP and login time - just check any ISP's RADIUS logs... and activity can be, and is, logged by Carnivore (that doesn't really exist we promise... ok yes it does but we won't use it in a mean way we promise... and we'll only use it if an ISP lets us we promise... but it's easier for the ISP to leave it in place rather than get all LEGAL about it... but Carnivore has really gone away and that's why there are no more articles about it... but it never really existed anyway...)

    My point is that arguing the TECHNICAL weaknesses of this, or any other, privacy-infringing item/product/software/etc. will only result in TECHNICAL innovations that make it more effective.

    We must argue the LEGAL weaknesses - the 4th amendment. We need to argue that no person waives their constitutional rights simply by the virtue of entering a commercial, travel, or other legal relationship with any other entity. (unfortunately, I fear we lost this one a long time ago)

    We need to argue against clickwrap agreements, and their cousins:
    • "by entering this building, you agree that..."
    • "by engaging in airplane / train / public bus / private automobile transportation, you agree that..."


    Our legal rights are important. The details of whatever technology the FBI, CIA, or any other no-such-agency uses in an attempt to violate those rights are less so.
    Don't frustrate their efforts. Fight them head-on!
  • With a little tweaking, I believe the commentary on this issue will be iterated in the same categories as cited here. [slashdot.org]
  • I heard they used a method of testing based on the same engine that actually powers the facial recognition system. Eleven out of twelve tests came back that the recognition system wasn't effective. The other one reported that the system was, in fact, a jelly doughnut.
  • The ACLU apparently isn't happy with the Idea, not the technology. They are using the primitive technology that is currently in place to discredit the "Idea". The Idea is superb. IF we can have monitoring stations around that can successfully detect wanted criminals, it would be very useful and an extremely powerful aid against crime. The technology doesn't store the faces anywhere, unless you're a criminal. The ACLU bitches because our freedom and privacy are at risk. What privacy? You're in a public area, an area which is under patrol by cops. If you do something illegal in this area, you can be arrested. The lives of other citizens are also in this public area. Likewise, your freedom isn't hindered by this system, unless you're a wanted criminal. This monitoring idea isn't bad and it doesn't infringe on your rights. You're in public. You're not in your house. The ACLU, like many other traditionally leftist organizations, likes to screw over the betterment of mankind with ramblings about Constitutional rights, when they know damn well that it's simply not true. It's not like these things are being placed in your homes.

    Pardon the bad grammar, and most likely the spelling. I just woke up and I'm hung over.
    • Nitpick: the technology can store pictures of anyone it wants. It matches against the faces of a police-selected list of presumptively innocent people.
      • It won't store the image longer than needed to identify it. It goes bye-bye. It COULD store the images if it really wanted to, but all those faces would take up lots of space, and they won't do that. It's one of those "misuse" clauses.

        Slashdotters are weird in that they have double standards. If strong crypto can be misused by criminals, that's OK, because there is a legit reason for the existence of such a technology. If this camera system were misused by the government, that's bad - even though there is a legit reason for the technology. And in the event it was misused by storing your picture permanently (WHICH IT DOES NOT), there would be a very, very large public outcry, and it would be political suicide to support such implementations.

        Leftists always mean well, but the cold hard facts of reality and the real world have yet to be realized by them.

        • Yeah, go ahead and keep thinking that the pictures just go away. And believe the FBI when they promise that they NEVER read the content part of the emails Carnivore sniffs. I think you are grossly optimistic about what would cause a very, very large public outcry (example: well-documented evidence of ongoing wiretap abuse by the LAPD - no public outcry at all).
          We should hold public servants to a (much) higher moral standard than we hold private individuals, and there should be significant oversight. I also don't agree that there IS a legitimate use for wide-scale monitoring of innocent, private people - one of the (theoretical) keystones of our legal system is that people are innocent until they are PROVED guilty, and that it's better to let 100 guilty people go than convict 1 innocent person. The modern corporate/government culture of assuming people will be criminals unless you stop them cuts at the foundations of our society.
    • The technology doesn't store the faces anywhere, unless you're a criminal.
      Are you sure about this? The police will store all sorts of records on you (fingerprints, mugshots, etc.) if youve ever been arrested, whether or not you were actually convicted of a crime. Why wouldnt digital images of faces be added to the list?

      And you're also ignoring the problem of false positives, i.e., innocent bystanders being recognized. Do you want to be denied access to some place, or have an hour or two of your time wasted by the police, because the computer thought you looked like someone in its database?

  • Dramatic changes (Score:3, Interesting)

    by Deanasc ( 201050 ) on Friday January 04, 2002 @02:08PM (#2786871) Homepage Journal
    I've shaved my head and put on 40 pounds since my driver's license photo was taken 8 months ago. Sometimes I wear contact lenses, sometimes glasses so thick my eyes look like dots. If I wear a turtleneck jersey, my head looks like a lump on my shoulders. If I wear a t-shirt, I look like a turtle stretching its neck.

    How will this software be able to truly establish that it's me, if my photo in the database shows me 40 pounds lighter and wearing contact lenses, and I walk around with my glasses on now?

    I also wear false teeth so I can change my jawline at will.

    Come and get me!

  • by zulux ( 112259 ) on Friday January 04, 2002 @02:13PM (#2786923) Homepage Journal

    I put up a visible, non-working video camera pointed at the street and our local crack house, and their business collapsed. The house in question was rented by a slumlord to the lowest bidder, and the drug traffic was driving me nuts. I made a good show of the camera, and suddenly the 'customers' were a bit leery. The druggies soon moved out and were replaced by a rather nice poor family.

    I was set to get the camera working, but its presence alone was enough. Highly recommended. PS: I removed the camera once it became obvious that the new tenants were cool.
  • by Judebert ( 147131 ) on Friday January 04, 2002 @02:18PM (#2786958) Homepage
    We're accustomed to being recorded regularly: practically anywhere we purchase anything, at ATMs, etc. The technology everyone is getting so upset about is just a logical enhancement to the camera... and a d*mn poor one, at that.

    According to the report, it almost doesn't work at all. Other installations using face recognition have degenerated into checking out the girls. I understand; it's gotta be boring as can be after the first 50,000 false alarms.

    It seems to me that this software isn't really an invasion of our privacy. The cameras themselves may be, but if we accept the cameras, we can't really quibble about the face recognition software.

    It's time to reconsider our concepts of privacy, anyway. Read David Brin's The Transparent Society [kithrup.com] and see if you don't come up with a new view.

    I think the cameras should be everywhere... especially the police station. And we should all be allowed to watch them. It would certainly make everyone think twice about their biases before taking drastic action.

  • Let's hope one of those twins doesn't turn evil...
  • ...but I did imagine several lawyers all jostling each other as they peered into a camera connected to a computer.
  • I read the ACLU PDF file and found their conclusions interesting, particularly regarding the high number of false positives, and the fact that the system was taken out of use basically because the police were discouraged by how few matches they were getting.

    But one very important question wasn't asked or answered in this piece: did the crime rate in that area drop during that period? Because if it did, the face-recognition system may actually be working, and not seeing criminals may be a GOOD thing!

    If the system actually deters crime, and criminals know the system is in place, it wouldn't be unexpected for them to actively avoid that area, leading to a lower detection rate! Did they consider that?
  • Easy Test (Score:2, Insightful)

    by 3ryon ( 415000 )
    It strikes me that there is a pretty easy test one could give these machines. They talk about a database of millions of photos (of criminals, hopefully). Why don't they compare each of the photos to the others in the database? If there are many hits, it's obvious that there is a big false-positive problem (see the sketch below)...
    Or, it might find that some people believed to be different people are actually the same person.
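
    A minimal sketch of that self-comparison test, assuming (purely for illustration) that each database photo has already been reduced to a numeric feature vector and that matching is a similarity score checked against a threshold. The cosine measure and the 0.95 cutoff are stand-ins, not any real vendor's algorithm:

        # Score every watchlist photo against every other one. Any pair the
        # matcher calls the same face is either a false positive or one person
        # filed under two records; either way, worth knowing about.
        from itertools import combinations
        from math import sqrt

        MATCH_THRESHOLD = 0.95  # hypothetical similarity cutoff

        def cosine_similarity(a, b):
            """Toy stand-in for the matcher's score on two feature vectors."""
            dot = sum(x * y for x, y in zip(a, b))
            norm = sqrt(sum(x * x for x in a)) * sqrt(sum(x * x for x in b))
            return dot / norm

        def self_match_test(templates):
            """Count how many distinct database pairs the matcher conflates."""
            hits = [(i, j)
                    for (i, a), (j, b) in combinations(enumerate(templates), 2)
                    if cosine_similarity(a, b) >= MATCH_THRESHOLD]
            total = len(templates) * (len(templates) - 1) // 2
            print(f"{len(hits)} of {total} pairs matched")
            return hits

        # Demo with three made-up 4-dimensional "templates"; the first two are
        # nearly identical, so they should trip the threshold.
        db = [[0.9, 0.1, 0.3, 0.5], [0.88, 0.12, 0.31, 0.49], [0.1, 0.9, 0.2, 0.4]]
        self_match_test(db)
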
  • by robbway ( 200983 ) on Friday January 04, 2002 @03:46PM (#2787649) Journal
    Back in late September I read and/or heard about these face-recognition devices. According to the statistics, the results were exactly what you should expect.

    Assume 90% accuracy and a watchlist of 10 criminals. If 100,000 innocent people walk by, 10% will be false positives: that's 10,000 alarms that mean nothing. Of the 10 actual criminals, you'll get 9 true alarms (assuming they walk by!) and 1 gets away. Therefore you have to fend off roughly a thousand errors for every real suspect (see the arithmetic below). At that point, human error could easily step in and assume the real one is just another false alarm.

    This is the same objective reason racial profiling is wrong: you spend all your time and resources harassing innocent people.

    Of course, a cheap solution to this problem is multiple camera angles.
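
    For anyone who wants to check those numbers, here is the same arithmetic as a tiny script; the rates and counts are the hypothetical ones from this comment, not figures measured in Tampa:

        # Back-of-the-envelope alarm arithmetic for a face-recognition watchlist.
        INNOCENT_PASSERSBY = 100_000   # scanned people not on the watchlist
        CRIMINALS_PASSING = 10         # watchlisted people who actually walk by
        FALSE_POSITIVE_RATE = 0.10     # chance an innocent face trips an alarm
        TRUE_POSITIVE_RATE = 0.90      # chance a watchlisted face is flagged

        false_alarms = INNOCENT_PASSERSBY * FALSE_POSITIVE_RATE   # 10,000
        true_alarms = CRIMINALS_PASSING * TRUE_POSITIVE_RATE      # 9

        # Of all alarms raised, what fraction point at a real suspect?
        precision = true_alarms / (true_alarms + false_alarms)
        print(f"false alarms: {false_alarms:,.0f}, true alarms: {true_alarms:.0f}")
        print(f"odds that any given alarm is real: 1 in {1 / precision:,.0f}")
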
  • by Secret Coward ( 427931 ) on Friday January 04, 2002 @04:01PM (#2787740)
    An article [mit.edu] at MIT describes the issues of surveillance societies.

    Some other negative aspects of the new surveillance can be briefly mentioned:

    IT IS CATEGORICAL in nature, involving "fishing expeditions" and "searches" absent any evidence of specific wrongdoing, thus violating the spirit of the Fourth Amendment. The presumption of innocence can be undermined, shifting the burden of proof from the state to the accused. There is also a danger of presumption of guilt (or an unwarranted innocence) by association or statistical artifact. And, because of the technical nature of the surveillance and its distancing aspects, the accused may (at least initially) be unable to face the accuser. The legal basis of some of the new surveillance's crime-prevention actions is also questionable.

    THE SYSTEM'S FOCUS on prevention entails the risk of wasting resources on preventing things that would not have occurred in any case, or, as sometimes occurs in undercover activities, of creating violations through self-fulfilling effects.

    POWERFUL NEW DISCOVERY mechanisms may overload the system. Authorities may discover far more violations than they can act upon. There is a certain wisdom to the ignorance of the three monkeys. Having information on an overabundance of violations can lead to the misuse of prosecutorial discretion or demoralization on the part of control agents. Charges of favoritism and corruption may appear as only some offenses can be acted upon.

    IN ORWELL'S AND OTHER science fiction accounts, control is both highly repressive and efficient. There is perfect control over information (whether the ability to discover infractions with certainty or to manage beliefs). As the examples cited suggest, the new surveillance has great repressive potential (in actuality or via myth). But it is invariably less than perfectly effective and certain, and it is subject to manipulation and error.

    ALTHOUGH DETERRING OR DISCOVERING some offenders, the routinization of surveillance, ironically, may grant an almost guaranteed means for successful violations and theft to those who gain knowledge of the system and take action to neutralize and exploit it. This suggests that, over time, many of these systems will disproportionately net the marginal, amateur, occasional, or opportunistic violator rather than the master criminal. The systematization of surveillance may grant the latter a kind of license to steal, even while headlines hail the effectiveness of the new techniques.

    • That's [mit.edu] a great article. It hits many of the points that typically come up in these discussions, particularly why collecting large amounts of information and processing it years later, out of its original context, is different from observing activities in real time. Sorry I don't have any mod points.

      sPh

  • Viisage technology is NOT the best on the market. However, one cannot check the results of testing online at the International Biometric Group [biometricgroup.com] website. Guess they don't think it's important for the whole world to know which algorithm gives the best results in independent testing. If you're curious, the winner was the Canadian company AcSys Biometrics Face Recognition Systems. [acsysbiometrics.com]

    Face recognition is the least intrusive of the Big Brother security technologies, especially when you take into consideration that scanning a crowd for a certain face will not work. Let's be real here, kids: some companies are simply lying about what their programs can do.
  • Since I am a private (not public) citizen, the use of such face-recognition systems inherently means they are recording my likeness.

    In theory, under time-tested requirements for media recording, they cannot use my likeness without my permission.

    Yet they are.

    I did not grant usage to them when I walked on the sidewalk. I did not sign a contract granting them usage in a film or TV show.

    They stole my copyright!

  • by symbolic ( 11752 ) on Friday January 04, 2002 @10:05PM (#2789324)
    Enter Facex. That's essentially what this face-recognition stuff is doing: compiling a profile of a face that it sees, and then checking for other profiles that are similar. Given the multitude of conditions that can affect what it sees (lighting, angle, and facial expression, to name a few), it's going to take some very ingenious programming to overcome these influences (see the toy illustration below). Further, I'd love to see the ACLU (or another party) conduct some studies to see just how inaccurate this system is when dealing with other factors, like whether or not someone is wearing sunglasses, whether or not they have a beard and/or moustache, makeup (lipstick, blush, eye shadow), whether or not they're wearing a hat, etc.

    FaceIt, it appears, is an overhyped technology that isn't anywhere near ready for prime time. And knowing what it's up against, I don't think it ever will be.
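
    A toy illustration of that fragility, assuming (purely for the sake of the sketch) that a face "profile" is a numeric feature vector and that matching is a similarity threshold; the growing perturbation stands in for sunglasses, a new beard, bad lighting, and the rest. None of this claims to be how FaceIt works internally:

        import random

        MATCH_THRESHOLD = 0.95  # hypothetical cutoff for declaring a match

        def cosine_similarity(a, b):
            """Similarity of two feature vectors; 1.0 means identical direction."""
            dot = sum(x * y for x, y in zip(a, b))
            norm = (sum(x * x for x in a) ** 0.5) * (sum(x * x for x in b) ** 0.5)
            return dot / norm

        random.seed(1)                                   # deterministic demo
        enrolled = [random.random() for _ in range(64)]  # the profile on file

        # Perturb the "live" image more and more and watch the score slide
        # toward (and eventually under) the match threshold.
        for noise in (0.0, 0.1, 0.3, 0.6):
            live = [x + random.uniform(-noise, noise) for x in enrolled]
            score = cosine_similarity(enrolled, live)
            verdict = "match" if score >= MATCH_THRESHOLD else "no match"
            print(f"perturbation {noise:.1f}: score {score:.3f} -> {verdict}")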

"I've finally learned what `upward compatible' means. It means we get to keep all our old mistakes." -- Dennie van Tassel

Working...