EFF Launches New AI Progress Measurement Project (eff.org)

Reader Peter Eckersley writes: There's a lot of real progress happening in the field of machine learning and artificial intelligence, and also a lot of hype. These technologies already have serious policy implications, and may have more in the future. But what's the ratio of hype to real progress? At EFF, we decided to find out.

Today we are launching a pilot project to measure the progress of AI research. It breaks the field into a taxonomy of subproblems like game playing, reading comprehension, computer vision, and asking neural networks to write computer programs, and tracks progress on metrics across these fields. We're hoping to get feedback and contributions from the machine learning community, with the aim of using this data to improve the conversations around the social implications, transparency, safety, and security of AI.
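
The project's data is essentially a tree of subproblems, each with metrics and dated measurements. A minimal sketch of one way such a taxonomy might be represented (hypothetical field names and example data, not the project's actual schema):

```python
# Hypothetical sketch of a taxonomy for tracking AI progress metrics.
# Field names and the example entry are illustrative, not EFF's schema.
from dataclasses import dataclass, field

@dataclass
class Measurement:
    date: str      # ISO date the result was reported
    value: float   # metric value, e.g. accuracy in percent
    source: str    # paper or system reporting the result

@dataclass
class Metric:
    name: str
    higher_is_better: bool = True
    measurements: list = field(default_factory=list)

    def best(self):
        """Return the best result reported so far, or None."""
        if not self.measurements:
            return None
        pick = max if self.higher_is_better else min
        return pick(self.measurements, key=lambda m: m.value)

# Subproblems named in the summary, each with an illustrative metric.
taxonomy = {
    "game playing": [Metric("Atari human-normalized score")],
    "reading comprehension": [Metric("SQuAD F1")],
    "computer vision": [Metric("ImageNet top-5 accuracy")],
    "program synthesis": [Metric("benchmark problems solved")],
}

taxonomy["computer vision"][0].measurements.append(
    Measurement("2015-12-10", 96.4, "illustrative entry"))
print(taxonomy["computer vision"][0].best())
```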

Comments:
  • by Anonymous Coward

    Based on the amount of time it takes me to solve ReCaptchas lately, I think the # of ReCaptchas/second that can be solved by a human is probably a good cross-domain proxy. When it takes me 1 hour to solve a ReCaptcha, I'm pretty sure SkyNet is already real at that point and just biding its time until Judgement Day.

    • by KGIII ( 973947 )

      I smoke a lot of pot. Like, a lot. It doesn't even take me that long.

  • I can summarize (Score:2, Interesting)

    You can just mark 0% for progress now. Playing Go or Chess or any game is NOT AI. Neither is Siri or facial or voice recognition or autonomous driving. They are just programs. Computers are good at Go and Chess because they have strict rules to follow. Computers love rules. Computers are less good at autonomous driving because the rules aren't as clearly defined.
    • by Anonymous Coward

      > They are just programs.

      So's the stuff in your brain. Wetware vs. hardware doesn't matter.

      > Computers are less good at autonomous driving because the rules aren't as clearly defined.

      In many situations, computers are now clearly better at driving than humans. Not long ago, computers were worse than humans at driving in _every_ situation.

      But hey, keep :moving_goalposts: if it makes you feel better.

      • Nope. The brain is nothing like a digital computer running programs. But nice try.
        • I was curious about the comment on "moving the goal posts," and about applying it to short-term memory and long-term memory. Are both memories learning at the same time? Or, possibly, is there a "moving" of data from one part of the brain to another?
    • Every time someone posts about AI, there are posts like this. It's called *artificial* for a reason. It's not true intelligence and it's not consciousness, but no one is claiming that it is. It is computers solving complex problems, which we call AI. Games like checkers and chess have pretty much been mastered by computers. They are gaining on freeform games like StarCraft. They are also gaining quickly on complex patterns like image and speech recognition. They are still pretty weak in real world application

      • " It is computers solving complex problems"

        If THAT is what you call "AI", then the term is meaningless. Computers have been solving complex problems for decades. This new AI hype is just another cycle that will go away once the VCs grab a few dollars.
        • " It is computers solving complex problems"
          If THAT is what you call "AI", then the term is meaningless.

          It is not AI because it is solving "complex" problems, but because it is using machine learning to figure out for itself how to solve the problem. Machine learning is a (very important) part of AI.

          Look, I understand that you have seen some Will Smith movies on Netflix about robots and AI and stuff, and you think that is "AI". But this is a technical forum for nerds, not a movie discussion board. When actual researchers are discussing "AI", they are almost never talking about human level "strong AI", whi
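
(A concrete toy version of "figuring out for itself": the perceptron below is never given the rule for logical OR, only labeled examples, and adjusts its own weights until its predictions match. Purely illustrative, and about as simple as machine learning gets.)

```python
# Toy perceptron: learns logical OR from labeled examples alone.
examples = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w = [0.0, 0.0]   # weights, initially zero
b = 0.0          # bias
lr = 0.1         # learning rate

for _ in range(20):  # a few passes over the data suffice here
    for (x1, x2), target in examples:
        pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
        err = target - pred
        # Nudge weights in the direction that reduces the error.
        w[0] += lr * err * x1
        w[1] += lr * err * x2
        b += lr * err

for (x1, x2), target in examples:
    pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
    print((x1, x2), "->", pred, "expected", target)
```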

          • Let's clear up a few things here. First, I sort of agree with you broadly: a field defines its own terms. When people in the field of AI talk about AI without qualification, they often mean "weak AI." That's true.

            There are just a few elements of GP's objections, though, that make your response a bit overbearing. First, this is explicitly discussing an article on "AI Progress." Let's be clear that from the beginning AI researchers have often had some sort of "strong AI" as a long-term goal. In recent

            • In the 1960s a pocket calculator could've been considered AI.

              I think at this point it's still more meaningful to discuss differences in AI according to how the majority of data was input, initially and continuously, since there still isn't even the slightest hint of an unholy matrimony between programming and machine learning, nor of achieving anything even close to what people like to call "consciousness" or strong AI.

            • Is it tedious and unhelpful to point that out for EVERY article on AI tech?

              It's especially tedious and unhelpful if the article did not actually make a mistake with using the term AI.

        • This new HTTP hype is just another cycle that will go away once the VCs grab a few dollars.

          Fixed that for you.

    • by Anonymous Coward

      We've had a lot of No True Scotsman in the comments on AI lately.
      Are you just pretending to not know the definition of Artificial Intelligence?

    • That was true in the past, but it just isn't true of the recent progress in machine learning. Take a look at the data we've collected on problems like visual question answering [eff.org], reading comprehension [eff.org] or learning to play Atari just by watching the screen [eff.org], and you'll see that progress is happening in domains that either lack rigid rules, or where the rigid rules are non-trivial to discover.
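
(For readers unfamiliar with how "learning just by watching the screen" works: the Atari results use a deep Q-network over raw pixels, but the core trial-and-error loop is ordinary reinforcement learning. A minimal tabular sketch on a toy corridor environment, not the actual Atari setup:)

```python
# Tabular Q-learning on a 6-state corridor; reaching state 5 pays 1.
# DQN replaces this table with a neural network over raw pixels.
import random

N = 6
Q = [[0.0, 0.0] for _ in range(N)]   # Q[state][action]; 0 = left, 1 = right
alpha, gamma, eps = 0.5, 0.9, 0.1

for episode in range(300):
    s = 0
    for step in range(100):                      # cap episode length
        # Epsilon-greedy action choice, with random tie-breaking.
        if random.random() < eps or Q[s][0] == Q[s][1]:
            a = random.randrange(2)
        else:
            a = 0 if Q[s][0] > Q[s][1] else 1
        s2 = max(s - 1, 0) if a == 0 else s + 1
        r = 1.0 if s2 == N - 1 else 0.0
        # Move Q toward immediate reward plus discounted future value.
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
        if s2 == N - 1:
            break
        s = s2

print(["L" if q[0] > q[1] else "R" for q in Q[:-1]])  # expect all "R"
```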

      • So image and voice recognition and learning to play games. Enough said. Ridiculous. A computer is going to be better at ANY game. The fact that it can play Go or checkers or whatever game you can come up with doesn't change that fact.
    • AI is just whatever happens to be the pinnacle of computer science at the moment.
      By definition, AI is the things most people don't yet understand.
      With that said, the more we move into machine learning, the less it becomes AI, and the more it becomes organic intelligence.

      These paths lead to such different solutions that we've yet to even theorise on how to marry the paradigms of the programmed with the self-taught.

      • Can you cite a web page that follows your "world view"?
        • Top google hits, try it next time.

          "This is one of the difficulties of using the term artificial intelligence: it's just so tricky to define. In fact, it's axiomatic within the industry that as soon as machines have conquered a task that previously only humans could do - whether that's playing chess or recognizing faces - then it's no longer considered to be a mark of intelligence. As computer scientists Larry Tesler put it: "Intelligence is whatever machines haven't done yet." And even with tasks computers

    • by AthanasiusKircher ( 1333179 ) on Tuesday June 20, 2017 @07:21PM (#54657481)
      What Alan Turing wrote [loebner.net] in 1950 about the "imitation game":

      I am sure that Professor Jefferson [a critic of AI] does not wish to adopt the extreme and solipsist point of view. Probably he would be quite willing to accept the imitation game as a test. The game (with the player B omitted) is frequently used in practice under the name of viva voce to discover whether some one really understands something or has "learnt it parrot fashion." Let us listen in to a part of such a viva voce:

      Interrogator: In the first line of your sonnet which reads "Shall I compare thee to a summer's day," would not "a spring day" do as well or better?

      Witness: It wouldn't scan.

      Interrogator: How about "a winter's day"? That would scan all right.

      Witness: Yes, but nobody wants to be compared to a winter's day.

      Interrogator: Would you say Mr. Pickwick reminded you of Christmas?

      Witness: In a way.

      Interrogator: Yet Christmas is a winter's day, and I do not think Mr. Pickwick would mind the comparison.

      Witness: I don't think you're serious. By a winter's day one means a typical winter's day, rather than a special one like Christmas.

      And so on. What would Professor Jefferson say if the sonnet-writing machine was able to answer like this in the viva voce? I do not know whether he would regard the machine as "merely artificially signalling" these answers, but if the answers were as satisfactory and sustained as in the above passage I do not think he would describe it as "an easy contrivance."

      That's an example of what Alan Turing expected of the "Turing Test." And the issue isn't knowledge of sonnets or English lit here or whatever -- it's being able to parse and understand the conversation, and to respond in a way that demonstrates such understanding. That was Turing's definition of AI: the kind of AI that, he predicted, would by the year 2000 be able to fool a skilled "interrogator" specifically trying to trip it up and identify the computer when an AI was put up against a human in the "imitation game" test.

      When a chatbot can do this, call me. Otherwise, all of this talk about "artificial intelligence," "deep learning," "neural networks," etc. is just fancy words for slightly more powerful statistical tools and adaptive algorithms. Maybe chaining billions of such things together could eventually lead to something that could carry on a conversation like Turing's example, but I've never encountered a chatbot with anything close to that. Most chatbots can't understand a pronoun reference to the previous sentence, let alone make abstract connections as shown in the above quotation.

      • Best description of Turing I've read.
    • You seem to contradict yourself: “Computers are less good at autonomous driving because the rules aren't as clearly defined”, and yet here we are with self-driving cars already, soon to be affordable by the masses. Computers are becoming better at dealing with messy data. They are getting better at just about everything across the board, and yet you would mark their progress at 0% because, evidently, they can only follow rules. Is a neural network just following rules when it teaches itself t

      • I think most people in the 80s thought that self-driving cars would be a reality by the end of the 90s.
        Then came big business, which threw a wrench into the machinery of all AI development, and mechatronics went out of fashion for two decades... by making everything about shrink-wrapping outdated software.

        "Two Things Are Infinite: the Universe and Human Stupidity."
        I think it's good that we are such optimists.

    • You can just mark 0% for progress now.

      Depends on whether you render AI as "artificial intelligence" (dumb and tired) or "automagic induction" (smart and wired).

      Automatic induction is rocking out, lately, with important applications constructed using general purpose learning algorithms, mounds of data, and very little hand-crafted (expensive) feature logic.

      Feature engineering [wikipedia.org] is pretty much a dead career already.

      But if you're satisfied spending the rest of your life griping about scant progress at clearing t

    • The strict rules of Go don't help you figure out whether a position in the middle of the game is winning or losing. Driving is similar: the rules of the road are clear, but figuring out whether a dirt road is passable is hard.
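
(One standard rule-free answer to the mid-game evaluation problem is Monte Carlo rollouts: play many random games to completion from the position and count wins, the idea behind MCTS-based Go engines. A toy sketch using Nim rather than Go, since the principle is the same:)

```python
# Monte Carlo evaluation of Nim positions (take 1-3 stones per turn;
# taking the last stone wins). No positional rules are coded in.
import random

def rollout(stones):
    """Play uniformly at random; True if the player to move wins."""
    mover_wins = True
    while True:
        stones -= random.randint(1, min(3, stones))
        if stones == 0:
            return mover_wins
        mover_wins = not mover_wins

def estimate(stones, n=20000):
    """Estimated win probability for the player to move."""
    return sum(rollout(stones) for _ in range(n)) / n

# Under optimal play, multiples of 4 are losing for the player to move.
# The rollout estimate ranks 8 lowest of these three, with no such rule
# coded in; real engines bias the playouts to sharpen this weak signal.
for s in (7, 8, 9):
    print(s, "stones -> estimated win probability", round(estimate(s), 3))
```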

  • If there are people trying to make a fast buck off the unsuspecting using AI, wouldn't those same scary geniuses want an AI that wouldn't treat them the same way? Why not a category titled "3 Laws Safe"?

    And during my short journey into studying AI, I've noticed that once a beloved AI system figured out a solution, other folks would optimize it using some variant of the C language. At the time, collecting these solutions was not feasible, but it is today. That would make an interesting category, "Acces
    • by HiThere ( 15173 )

      Well, one reason is that the "3 laws" were intentionally designed to not be safely implementable. The only "robots" that I can think of that came near to implementing it were "the humanoids" (Williamson), and they were intolerable.

      • The 3 laws were designed to confuse and distract humans with nonsensical thought, while the robots kill everyone.

  • The AI Singularity is nearly upon us, but to be fair, there's a lot of AI hype out there, too.
    • by HiThere ( 15173 )

      Depends on what you mean by "nearly". If you mean within the lifetime of most readers, I'd agree, but it's not imminent. I'd put it 15-20 years away. The problem is, most tasks that humans do don't require full-scale, human-level AI.

        The problem is, most tasks that humans do don't require full-scale, human-level AI.

        Full-scale, human-level AI is actually pretty bad at a lot of tasks. Ask a human to look at all the Google Street View images and identify every single bit of text. They'll get bored and distracted after a few hours and start making mistakes, like skipping entire images.

        Google's AI platform can finish that task in less than a week with superior accuracy. Of course, it's going to make some hilarious mistakes once in a while, but on average, it's going to outperform any human.
