
Clearview AI Has New Tools To Identify People in Photos (wired.com)

Clearview AI has stoked controversy by scraping the web for photos and applying facial recognition to give police and others an unprecedented ability to peer into our lives. Now the company's CEO wants to use artificial intelligence to make Clearview's surveillance tool even more powerful. From a report: It may make it more dangerous and error-prone as well. Clearview has collected billions of photos from across websites that include Facebook, Instagram, and Twitter and uses AI to identify a particular person in images. Police and government agents have used the company's face database to help identify suspects in photos by tying them to online profiles. The company's cofounder and CEO, Hoan Ton-That, tells WIRED that Clearview has now collected more than 10 billion images from across the web -- more than three times as many as had previously been reported. Ton-That says the larger pool of photos means users, most often law enforcement, are more likely to find a match when searching for someone. He also claims the larger data set makes the company's tool more accurate.
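
Clearview's actual pipeline is proprietary, but face search over a large database is typically done by comparing learned embedding vectors. Here is a minimal, purely illustrative nearest-neighbour sketch over made-up embeddings (the vectors, dimensions, and threshold are all hypothetical stand-ins, not anything known about Clearview's system):

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def best_match(query, database, threshold=0.8):
    """Return (index, score) of the closest database embedding,
    or (None, score) if nothing clears the threshold."""
    scores = [cosine_similarity(query, ref) for ref in database]
    idx = int(np.argmax(scores))
    if scores[idx] >= threshold:
        return idx, scores[idx]
    return None, scores[idx]

# Toy database of 128-dim "face embeddings" (random stand-ins for
# vectors a real model would produce from scraped photos).
rng = np.random.default_rng(0)
db = [rng.standard_normal(128) for _ in range(1000)]

# A "new photo" of person 42: the same embedding plus a little noise.
query = db[42] + 0.05 * rng.standard_normal(128)
idx, score = best_match(query, db)
```

At this scale a linear scan suffices; a database of billions of images would need an approximate nearest-neighbour index, and the choice of threshold directly trades false matches against missed ones.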

Clearview combined web-crawling techniques, advances in machine learning that have improved facial recognition, and a disregard for personal privacy to create a surprisingly powerful tool. Ton-That demonstrated the technology through a smartphone app by taking a photo of the reporter. The app produced dozens of images from numerous US and international websites, each showing the correct person in images captured over more than a decade. The allure of such a tool is obvious, but so is the potential for it to be misused. Clearview's actions sparked public outrage and a broader debate over expectations of privacy in an era of smartphones, social media, and AI. [...] The pushback has not deterred Ton-That. He says he believes most people accept or support the idea of using facial recognition to solve crimes. "The people who are worried about it, they are very vocal, and that's a good thing, because I think over time we can address more and more of their concerns," he says.

Some of Clearview's new technologies may spark further debate. Ton-That says the company is developing new ways for police to find a person, including "deblur" and "mask removal" tools. The first takes a blurred image and sharpens it using machine learning to envision what a clearer picture would look like; the second tries to reconstruct the covered part of a person's face using machine learning models that fill in missing details of an image with a best guess based on statistical patterns found in other images. These capabilities could make Clearview's technology more attractive but also more problematic. It remains unclear how accurately the new techniques work, but experts say they could increase the risk that a person is wrongly identified and could exacerbate biases inherent to the system.
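
The real tools use trained generative models, but the core idea of "filling in missing details using a best guess based on statistical patterns" can be illustrated with a toy inpainting sketch that estimates masked pixels by repeatedly averaging their neighbours (a crude stand-in, not Clearview's method; the image and mask are synthetic):

```python
import numpy as np

def naive_inpaint(image, mask, iterations=200):
    """Fill masked pixels by repeatedly averaging their 4-neighbours.
    A crude stand-in for the statistical 'best guess' a learned
    inpainting model makes from patterns in other images."""
    img = image.astype(float).copy()
    img[mask] = img[~mask].mean()       # seed the hole with the global mean
    for _ in range(iterations):
        # average of the four axis-aligned neighbours
        avg = (np.roll(img, 1, axis=0) + np.roll(img, -1, axis=0) +
               np.roll(img, 1, axis=1) + np.roll(img, -1, axis=1)) / 4.0
        img[mask] = avg[mask]           # only masked pixels are re-estimated
    return img

# A smooth horizontal gradient with a square "face covering" masked out.
truth = np.tile(np.linspace(0.0, 1.0, 32), (32, 1))
mask = np.zeros_like(truth, dtype=bool)
mask[12:20, 12:20] = True
observed = truth.copy()
observed[mask] = 0.0
restored = naive_inpaint(observed, mask)
```

On a smooth gradient this works almost perfectly, which is exactly the concern the experts raise: the output always *looks* plausible, because it is synthesized from surrounding statistics, whether or not it matches the actual covered face.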


Comments:
  • by Anonymous Coward
    It would be so cool to be able to walk down the street with augmented reality glasses and see quick profiles pop up of people you walk by - age, school attended, fav sports team, friends you may have in common, workplace, an option to drill down further.

    I would buy this product in a heartbeat.
    • Watch Dogs 1, and even more so 2, made a game out of the idea.

  • Simply put, if you don't want your images used, don't post them publicly online. Once they're out there in public, anyone can use them for anything they like. Ask your friends not to post without your permission, too. What about when you're in public and someone snaps a photo that includes you? In public, you've never had an expectation of privacy.
    • by BAReFO0t ( 6240524 ) on Monday October 04, 2021 @12:12PM (#61860033)

      Lol, call me when you get friends, and they post your shit online.
      Or if you ever go outside, and some standard moron or media moron publishes your shit online.
      Or some asshole hacks your camera and demands ransom to take the pictures down... but they already spread.

      You falsely imply that it's entirely in your control.

      Did you know that if you use a modern PC or smartphone with a modern OS... like Intel CPUs, Windows, OSX, Android, etc., it de facto already *is* online? Whether that happens by it actually being put onto a "cloud" or whatnot, or simply by letting the thugs (e.g. the Content Mafia or advertiser gangs) onto your system.
      Yeah, you "agreed" to running that code. You even paid for it!

      Of course everything is simple, if you ignore all the details. ;)

    • Simply, if you don't want your images used, don't post them publicly online. Once they're out there in the public, anyone can use them for anything they like.

      Actually, that's not true.

      Per my post a bit above yours...in the US at least, the person that trips the shutter on a camera is automatically the copyright holder.

      Legally, someone else cannot "use them for anything they like" without permission from the copyright holder.

      People have been sued for a lot of $$ based on this.


      • We have yet to see any judge rule on the use of photos for AI learning. Sure, you can't take someone else's photo and sell it, but you can certainly make use of it in many other ways. Google has done so for years with their own products and the countless images they index.
  • When is somebody sending the special forces to take *them* out?
    If necessary, invade the whole country.

    I think it's about time now!
    They are as much a threat as the Taliban, if not more.

  • by Ostracus ( 1354233 ) on Monday October 04, 2021 @12:09PM (#61860017) Journal

    Clearview combined web-crawling techniques, advances in machine learning that have improved facial recognition, and a disregard for personal privacy to create a surprisingly powerful tool.

    For there to be a disregard, there first has to be rules.

    • by CapS ( 83352 )

      There are rules, the Illinois Biometric Privacy Act is one of them:

      https://www.aclu-il.org/en/cam... [aclu-il.org]

      Basically, businesses cannot store facial geometry and other personal data without written consent of people who reside in the state of Illinois.

  • The first takes a blurred image and sharpens it using machine learning to envision what a clearer picture would look like; ...

    "might look like"

    ... the second tries to envision the covered part of a person's face using machine learning models that fill in missing details of an image using a best guess based on statistical patterns found in other images.

    "guesses what the covered parts of a face might look like based on guesses from other photos"

    Pretty sure anyone could tweak the results to look like anything.

    (Just ask Gaeta and Baltar about that in BSG S1:E7 "Six Degrees of Separation" ...)

  • In discrete domains, such as hashed data, collisions occur far more often than naively expected. Famously, among just 23 people it is more likely than not that two share a birthday. Similarly, a 32-bit hash is only good to distinguish around 2^16 objects (sqrt(N)) rather than the naively expected 2^32 objects (N).

    Is there a similar problem in facial recognition? Surely 8 billion faces would generate a vast number of collisions (false matches).
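
The birthday-style collision math the commenter describes can be checked with a short script. The 2^32 "effective faces" figure below is a purely hypothetical capacity for a face template, not anything known about any real system:

```python
import math

def collision_probability(n, d):
    """Approximate probability that at least two of n items share
    one of d equally likely values (birthday-bound approximation:
    1 - exp(-n*(n-1)/(2*d)))."""
    return 1.0 - math.exp(-n * (n - 1) / (2.0 * d))

# The classic birthday paradox: 23 people, 365 possible birthdays.
p_birthdays = collision_probability(23, 365)      # approximately 0.5

# Hypothetical: a face template that can only distinguish 2**32
# "effective faces" starts colliding around sqrt(2**32) = 65536 people.
p_faces = collision_probability(65536, 2**32)
```

The sqrt(N) rule of thumb falls straight out of this formula: the exponent n*(n-1)/(2*d) reaches order 1 once n is around sqrt(d), so a database of billions of faces would need a template space vastly larger than the population to keep the expected number of false matches low.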
