
DARPA Tackles Machine Learning

coondoggie writes "Researchers at DARPA want to take the science of machine learning — teaching computers to automatically understand data, manage results and surmise insights — up a couple notches. Machine learning, DARPA says, is already at the heart of many cutting edge technologies today, like email spam filters, smartphone personal assistants and self-driving cars. 'Unfortunately, even as the demand for these capabilities is accelerating, every new application requires a Herculean effort. Even a team of specially-trained machine learning experts makes only painfully slow progress due to the lack of tools to build these systems,' DARPA says."
  • Oblig... (Score:5, Funny)

    by famebait ( 450028 ) on Friday March 22, 2013 @04:48AM (#43244471)

    Even a team of specially-trained machine learning experts makes only painfully slow progress due to the lack of tools to build these systems

    Why not just teach a machine to do it?

  • Skynet (Score:5, Funny)

    by Edis Krad ( 1003934 ) on Friday March 22, 2013 @05:15AM (#43244567)
    Defense agency investing in Machine Learning technology? What could possibly go wrong?!
    • Re: (Score:3, Informative)

      Yep, just another stupid waste of time by DARPA, just like the internet.
    • Re: (Score:3, Interesting)

      by Anonymous Coward

      As somebody who is currently taking an advanced class in machine learning *shudder; no more prob/stat, no more vector calculus, no more linear algebra, please!*, I'm not going to claim to be an expert by any means, but I will point out that, as far as I can tell, machine learning is more about classifiers, i.e., is this a square peg or a round peg, and is that a square hole or a round hole. In other words, take a piece of data, figure out if it belongs in a particular class, or decide if a new class should
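      (For illustration, roughly what "square peg or round peg" classification looks like as code: a minimal 1-nearest-neighbour classifier in plain Python. The features and training data are invented for the example.)

      # Minimal 1-nearest-neighbour classifier: square peg or round peg?
      # Features (roundness 0..1, size in cm) and examples are made up.
      import math

      training = [
          ((0.95, 2.0), "round peg"),
          ((0.90, 5.0), "round peg"),
          ((0.10, 2.1), "square peg"),
          ((0.05, 4.8), "square peg"),
      ]

      def classify(features):
          """Assign the label of the closest annotated example."""
          def dist(a, b):
              return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
          return min(training, key=lambda ex: dist(ex[0], features))[1]

      print(classify((0.85, 3.0)))  # -> round peg
      print(classify((0.20, 3.0)))  # -> square peg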

      • Re:Skynet (Score:4, Interesting)

        by Anonymous Coward on Friday March 22, 2013 @07:34AM (#43245209)

        Then you haven't seen my spam filter!

        Seriously, I am an AI PhD student/researcher. We get this kind of crap all of the time.
        "you are working on robots, when is SkyNet? Hahaha"
        "...so... the robot is lost and can't figure out where it is... I'm trying to make it so it can figure it out by how many steps its taken and looking around"
        "SkyNet!"

        "you are working on a program to control a controller for a video game, when is SkyNet? Hahaha"
        "...so... I'm trying to figure out how the computer can make Mario jump over the bad guys without telling him that the bad guys are 'bad'"
        "SkyNet!"

        "you are working on a program to figure out emotional states of students, how long before you unemploy all the nation's teachers?"
        "...so... I'm trying to figure out how to teach a computer to recognize when people are bored..."
        "Why do you hate your teachers?!"

        Seriously, the idea that we will be able to classify spam, or map a room, or jump over an obstacle, or recognize boredom so well that it gains sentience (and decides to kill all of us) is laughable.

        Posting Anon from work.

        • Oh come on, you know in your heart SkyNet is the only feasible solution to the spam problem.

          All narrative is driven by conflict. If our glorious utopian future entailed having all our needs and wants attended to by robowetnurses, nobody would be making movies about that stagnant society. E.g., "Zardoz" with its Eternals, Niven's "Safe at Any Speed", the society on Aurora in Asimov's robot stories.
        • by jafac ( 1449 )

          Most likely, the guys who have solved this problem are working for the large financial institutions around the world, writing trading algorithms. DARPA's not worth their time, and neither is the ad industry.

        • So what you are saying is that SkyNET essentially decides to kill us because we are:
          a) spamming
          b) building impossible rooms
          c) obstacling the world
          d) bored
          or a combination of all of these?

          I say, you must be one of those who believe in AI through complexity.

    • It might be better than the military making decisions themselves ...

    • LoL....Singularity....SkyNet. Just use your imagination
  • by hildolfr ( 2866861 ) on Friday March 22, 2013 @05:27AM (#43244617)
    a headline from the year 2030.
  • by Viol8 ( 599362 ) on Friday March 22, 2013 @05:43AM (#43244675) Homepage

    They've been trying it since the '50s without, it has to be said, too much success given the amount of effort that's been put in. I suspect that until we REALLY understand how biological brains do it (not "meh, some sort of neural back propagation"; yeah, we know that, but what propagation and how exactly?), machine learning will remain at the bottom rung of the intelligence ladder.

    Personally I think that at the moment pre-programmed intelligence is still a more successful route to go down. Though hopefully that will change.

    • by snarkh ( 118018 )

      Machine learning, and more broadly AI, has had tremendous success recently. Google search is some sort of machine learning program. Pretty useful, no?

      I am not even talking about speech recognition, chess machines, auto-focus in your camera and so on.

      • by WillAdams ( 45638 ) on Friday March 22, 2013 @06:03AM (#43244775) Homepage

        A.I. is a classic case of moving goal posts: there's an assumption that a hard problem requires it; the problem gets solved using ever-more sophisticated analysis/pattern-matching/data-processing; and then the problem domain is no longer considered A.I.

        • by g4b ( 956118 ) on Friday March 22, 2013 @06:24AM (#43244851) Homepage

          exactly.

          the research field of AI already treats "artificial intelligence" as meaning "solutions based on imitating intelligence", and it has long been postulated that, while the dream is still the real thing, it probably will not be possible with electronics (which do great at calculation, but still have problems with parallelism).

          the results in the last decades were OOP, neural networks, and the well-known spam-checking algorithms.

          But the approach to learning in all these cases is still very different each time. I am, for example, not sure whether spam filters really use neural algorithms; they mostly concentrate on the relations between words in a text, or the alterations of a word in a text, and on how to use statistical data about those relations to flag content that is probably spam.
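          (To make that word-statistics idea concrete, a toy naive-Bayes-style spam scorer in plain Python; the corpora and the smoothing are invented for illustration, not taken from any real filter.)

          # Toy word-statistics spam scorer: flag text whose words are
          # statistically more common in spam than in ham. Corpora invented.
          import math
          from collections import Counter

          spam_docs = ["buy cheap pills now", "cheap pills cheap prices"]
          ham_docs = ["meeting notes attached", "lunch at noon tomorrow"]

          def word_probs(docs):
              counts = Counter(w for d in docs for w in d.split())
              total = sum(counts.values())
              # Smoothing so unseen words don't blow up the ratio
              return lambda w: (counts[w] + 1) / (total + 1)

          p_spam, p_ham = word_probs(spam_docs), word_probs(ham_docs)

          def spam_score(text):
              """Positive log-odds -> the words look more like spam."""
              return sum(math.log(p_spam(w) / p_ham(w)) for w in text.split())

          print(spam_score("cheap pills now"))        # positive: flag it
          print(spam_score("notes for the meeting"))  # negative: let it through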

          Since humans (and other intelligent mammals) learn to learn by playing, both establishing the recognition of rules and the usage of data, I wonder if it will ever be possible to have an abstract learning machine which not only "learns" but also learns "what to learn" and "why to learn" on its own. But each respective problem is getting addressed.

          Oh yes, and the latest implications, like gamification in industry, and the revelations about the true meaning of "playing" being researched in the social and psychological sciences, are maybe also indirectly linked to the field of AI. Which still has a long way to go in a society where "playing" is associated with "kids" and a waste of time.

        • by narcc ( 412956 )

          No.

      • by Viol8 ( 599362 ) on Friday March 22, 2013 @06:12AM (#43244809) Homepage

        They're hard-coded and use massively parallel depth searching (a bare-bones sketch follows below). The brute-force approach has been the best one for chess computers for decades.

        And Google search and translate aren't really learning; they're just statistical systems that give the best result based on the data they've gathered. They don't "think" about it in any meaningful way.
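        (Roughly what that brute-force depth searching amounts to, sketched over a toy game, Nim, so it runs end to end: take 1 to 3 sticks, whoever takes the last stick wins. Real chess engines add alpha-beta pruning, hand-tuned evaluation, and parallel search, none of which is shown here.)

        # Exhaustive minimax: the hard-coded "depth search" core of classic
        # game engines. No learning anywhere; the rules here are Nim's.
        def moves(sticks):
            return [n for n in (1, 2, 3) if n <= sticks]

        def minimax(sticks, maximizing):
            """+1 if the maximizing player can force a win from here."""
            if sticks == 0:
                # The previous player took the last stick and won.
                return -1 if maximizing else 1
            scores = [minimax(sticks - m, not maximizing) for m in moves(sticks)]
            return max(scores) if maximizing else min(scores)

        def best_move(sticks):
            return max(moves(sticks), key=lambda m: minimax(sticks - m, False))

        print(best_move(5))  # -> 1: leave 4 sticks, a lost position for the opponent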

        • And Google search and translate aren't really learning; they're just statistical systems that give the best result based on the data they've gathered. They don't "think" about it in any meaningful way.

          This is machine learning: they derive results based on statistical data, but new data input changes the statistics = learning

          • by Anonymous Coward

            Try changing the board and see what happens. The simplest change that could break search algorithms would be to wrap the chessboard around a sphere so there are no edges.

        • by snarkh ( 118018 )

          I never said that chess was machine learning.

          >And Google search and translate aren't really learning; they're just statistical systems that give the best result based on the data they've gathered.

          And what exactly is your definition of learning?

        • They don't "think" about it in any meaningful way.

          Oh yes? And what does it mean to think about something in a meaningful way?

          • by Viol8 ( 599362 )

            Being able to analyse deeper meaning beyond statistics, for a start. A machine would happily take a phrase where every word was "wibble" and attempt to translate it. A human wouldn't bother because they'd know it was rubbish. Learning the statistical relationships between bits of data isn't thinking when the learner has no idea what those bits of data actually mean.

            • At what point does one "know" it's rubbish? Saying "wibble wibble wibble" to a baby will evoke a smile if said in the right tone of voice.

              • by Viol8 ( 599362 )

                True, but then you wouldn't try to hold an intelligent conversation with a baby. Quite how babies learn is another matter, but we haven't written anything yet that can mimic it.

            • A human wouldn't bother because they'd know it was rubbish.

              And how did the human find out that "(wibble )+" is rubbish? By learning, perhaps? From a large sample of sentences encountered in his lifetime, perhaps?

            Sure, I get what you are saying, but there is a problem with it. You assume that the concept of "meaningful thinking" is well-defined. That is a perilous assumption that can get you into all sorts of trouble.

              It may well be that what we perceive as "meaningful thinking" is nothing but simple machine algorithms that get interpreted in a specific way by other machine algorithms in our brain. Our brains may very well be machines that have, instead of being programmed by another machine, evolved to categorize

    • If I store purchase data away in files and then have a re-order routine/program that generates replenishment orders based on purchase history, that is no more "learning" than any of this neural network stuff capturing patterns and interpreting it.

      I wrote a Double Deck Pinochle program back in 1981 that is hard coded logic, no "learning". There is as much or as little AI in it as anything else "AI".

      Programming applied to human-like operations should stop being called "artificial intelligence" until there

    • by Spottywot ( 1910658 ) on Friday March 22, 2013 @06:35AM (#43244895)

      I think that learning how the biological brain does it before building a learning machine is the wrong way around. I think the person or team that builds the first genuinely successful learning machine will give the biological researchers a clue about potential mechanisms for learning; it will take a genuine leap of imagination as well as the type of grunt work the DARPA guys are doing.

      • P2P cluster of humans solving simple, but related, problems, and upload the results. Humans are more forgiving of ambiguities so it should be easier to jump start. Automate these tasks over time.
      • Comment removed based on user account deletion
      • by Anonymous Coward

        the first genuinely successful learning machine

        I'm feeling a No True Scotsman fallacy here. We've had successful machine learning projects. Didn't you enjoy Watson on Jeopardy?

        What, EXACTLY, would you consider "genuinely successful machine learning"?

    • by xtracto ( 837672 )

      I kind of agree with you; however, I think scientific progress is running along the path you describe more than you think. For example, the "meh, some kind of back propagation" thought is now being replaced by RBMs and SVMs, which are based on new theories of how the brain works. This has given some kind of new 'life' to AI [github.com]. It is now known that typical neural networks and other "classical" machine learning techniques are very prone to overfitting (a toy demonstration follows below).

      As with every field in science, we put theories, and bas
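      (The overfitting demonstration: a model that memorizes its training set, here 1-nearest-neighbour on noisy labels, scores perfectly on the data it has seen and much worse on fresh data. All data are synthetic and invented for the example.)

      # Overfitting by memorization. True rule: label = (x > 0),
      # but 20% of the labels are randomly flipped.
      import random

      random.seed(2)

      def make_data(n):
          xs = [random.uniform(-1, 1) for _ in range(n)]
          return [(x, (x > 0) != (random.random() < 0.2)) for x in xs]

      train, test = make_data(200), make_data(200)

      def predict_1nn(x):
          """Label of the nearest training point: pure memorization."""
          return min(train, key=lambda ex: abs(ex[0] - x))[1]

      def accuracy(data):
          return sum(predict_1nn(x) == y for x, y in data) / len(data)

      print(f"train accuracy: {accuracy(train):.2f}")  # 1.00, noise and all
      print(f"test accuracy:  {accuracy(test):.2f}")   # noticeably lower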

    • It all depends on the definition of AI. If you think about a working human brain in a computer, virtualized AI based on neurological models may get us there. But what is the result? A miserable human clone without any contact with the world? We are animals; machines are not (yet).

      But parallel to this, you could just as well achieve an intelligence that is artificial and computational, but it could be so alien to us that we wouldn't understand it.
      Or perhaps we are misinterpreting what it means to be intell

  • Didn't IBM do this when they created a computer to play on that quiz show? (the name escapes me)

    • It's a form of A.I. for sure, but the skill shown has more to do with the volume of data it uses than with any skill at learning.

      Machine Learning is a very particular subset of A.I., often characterized by one or more training phases which build a model of the training set that is smaller than the set itself.
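      (A minimal illustration of that "model smaller than the training set" point: ordinary least squares compresses 1,000 synthetic points into two numbers, a slope and an intercept. The data are invented.)

      # Fit y = slope * x + intercept to noisy synthetic data.
      import random

      random.seed(0)
      data = [(x, 3.0 * x + 1.0 + random.gauss(0, 0.5)) for x in range(1000)]

      n = len(data)
      mean_x = sum(x for x, _ in data) / n
      mean_y = sum(y for _, y in data) / n
      slope = (sum((x - mean_x) * (y - mean_y) for x, y in data)
               / sum((x - mean_x) ** 2 for x, _ in data))
      intercept = mean_y - slope * mean_x

      print(f"1000 points -> slope={slope:.2f}, intercept={intercept:.2f}")
      print("prediction at x=42:", slope * 42 + intercept)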
      • the skill shown has more to do with the volume of data it uses than with any skill at learning.

        Actually, the skill shown had more to do with being a fast button pusher. If the questions were distributed fairly, Watson would not have beaten its human opponents. The Jeopardy game was stage managed show business, not a fair contest.

    • Re: (Score:3, Informative)

      by DI4BL0S ( 1399393 )
      Jeopardy, and the machine is called Watson [wikipedia.org]
  • by Anonymous Coward on Friday March 22, 2013 @06:42AM (#43244925)

    There are a ton of off-the-shelf machine learning toolkits that are sufficient for 90% of possible use cases. The problem is getting annotated data to feed into these tools so they can learn the appropriate patterns. But all that requires is a host of annotators (i.e. undergrads and interns), not machine learning experts.
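    (A sketch of that workflow, using scikit-learn as one such off-the-shelf toolkit; the annotated examples are invented for illustration.)

    # Off-the-shelf toolkit doing the heavy lifting: annotated examples in,
    # trained text classifier out. Requires scikit-learn.
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.naive_bayes import MultinomialNB
    from sklearn.pipeline import make_pipeline

    texts = ["win a free prize now", "cheap meds online",
             "meeting moved to 3pm", "see attached report"]
    labels = ["spam", "spam", "ham", "ham"]  # the annotation is the hard part

    model = make_pipeline(CountVectorizer(), MultinomialNB())
    model.fit(texts, labels)

    print(model.predict(["free meds, win now", "report for the meeting"]))
    # expected: ['spam' 'ham']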

    • by lorinc ( 2470890 )

      There are a ton of off-the-shelf machine learning toolkits that are sufficient for 90% of possible use cases. The problem is getting annotated data to feed into these tools so they can learn the appropriate patterns. But all that requires is a host of annotators (i.e. undergrads and interns), not machine learning experts.

      Exactly this!

      Almost everything you ever dreamed of as a non machine learning expert is available at https://mloss.org/software/ [mloss.org]
      Please now annotate more data so that we can tune the algorithms ;-)

    • Comment removed based on user account deletion
    • by Hizonner ( 38491 )

      If I tried to teach a human, or indeed if I set an untaught human loose on an unstructured problem, and that human turned around and demanded a huge mass of annotated data, I would not conclude that the human was a good learner, or even "sufficient for 90% of possible use cases". I would conclude that the human didn't have the complete machinery of learning.

      • Comment removed based on user account deletion
        • by Hizonner ( 38491 )

          I'm not disappointed at all. I'm reacting to somebody who seems to think the job is done when it's not.

          All I'm saying is that the present, early stuff is NOT "sufficient for 90% of possible use cases". That doesn't mean I don't realize that things are still at an early stage and progress is being made.

        • by Anonymous Coward

          No, sorry, please try again.
          You've got a lot [wikipedia.org] to go through [wikipedia.org] before [wikipedia.org] even knowing [wikipedia.org] you're wrong [stackoverflow.com].

          Machine learning is a subset of artificial intelligence. We have obtained both AI and ML solutions for certain problems. High fives all around. We have yet to achieve "strong AI", and you have to get into a philosophy debate to define just what the hell that means. There is no point where ML stops and AI begins. Unsupervised ML has been in use for quite a while (see the sketch below).

          I'm disappointed in the mysticism surrounding AI because
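          (On the unsupervised point, a minimal k-means sketch in plain Python; the 1-D data, seed, and cluster count are invented for illustration.)

          # Unsupervised learning in a few lines: k-means finds structure
          # in unannotated data. Two synthetic blobs around 0 and 10.
          import random

          random.seed(1)
          points = ([random.gauss(0, 1) for _ in range(50)]
                    + [random.gauss(10, 1) for _ in range(50)])

          k = 2
          centers = random.sample(points, k)

          for _ in range(10):  # a few Lloyd iterations suffice here
              clusters = [[] for _ in range(k)]
              for p in points:
                  nearest = min(range(k), key=lambda i: abs(p - centers[i]))
                  clusters[nearest].append(p)
              centers = [sum(c) / len(c) if c else centers[i]
                         for i, c in enumerate(clusters)]

          print(sorted(round(c, 1) for c in centers))  # roughly [0.0, 10.0]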

    • In my ML class, we used WEKA. Of course, there is also Matlab. Problem is, neither of these are free, and they are both slow as hell. I would not use either one outside of class/prototyping.

      Ideally there would be a free, open source toolkit written in a compiled language. The toolkit should have a variety of ML techniques that can be switched around with little pain. Only toolkit I know of like this is the ML part of OpenCV, and the documentation for OpenCV is... lacking.

      Another poster linked to mloss.

  • by Black Parrot ( 19622 ) on Friday March 22, 2013 @07:51AM (#43245327)

    Sounds like the 1990s fetish for making programming languages so simple that even your boss could make reports and do other stuff for himself. Unfortunately, programming language syntax wasn't the primary hurdle: I've had bosses request reports that would add pounds of product to shipping costs.

    For ML, it takes a good bit of training just to know what kinds of problems you can apply it to. A cookbook toolkit isn't going to reduce the need for expertise very much.

    • by Black Parrot ( 19622 ) on Friday March 22, 2013 @08:03AM (#43245439)

      Here's an analogy: We've had sophisticated, easy-to-use statistics software packages for decades. What percentage of the population can use them correctly for anything non-trivial?

      Tools are nice, but some stuff just inherently takes training. No tool is going to make me a competent oceanographer or particle physicist.

    • Comment removed based on user account deletion
      • by jlowery ( 47102 )

        It's never worked for untrained end-users, perhaps, but there are plenty of successful DSLs out there. Spreadsheet formulas, for one.

  • by account_deleted ( 4530225 ) on Friday March 22, 2013 @07:54AM (#43245361)
    Comment removed based on user account deletion
  • I read an interesting article the other day suggesting humans are the organic soup from which a new branch of binary-encoded, as opposed to DNA-encoded, life will emerge. They argued that it's already happened, given that computer viruses are self-replicating.

    They say soon it will be fish - lizard - hamster - chimp - neanderthal - human - roomba, basically. And that these new "machine life forms" will entirely surpass us in so many ways: longevity, freedom from depression and the faults of our "blind watc

  • What they really want is the classic "Computer that Gives a Shit". Instead of the usual passive-aggressive taunting, using your own dumb SQL statement, it fixes it for you instead!
  • "Deja vu all over again" for lon-undersolved computer problems.
  • by scruffy ( 29773 ) on Friday March 22, 2013 @10:09AM (#43246801)
    Raw data need to be cleaned up and organized to feed into the ML algorithm.

    The results of the ML algorithm need to be cleaned up and organized so that they can be used by the rest of the system.

    No one (currently) can tell you which ML algorithm will work best on your problem and how its parameters should be chosen without a lot of study. Preconceived bias (e.g., that it should be biologically based, blah, blah) can be a killer here.

    The best results typically come from combinations of ML algorithms through some kind of ensemble learning, so now you have the problem of choosing a good combination and choosing a lot more parameters (see the sketch below).

    All of the above need to work together in concert.

    Certainly, it's not a bad idea to try to make this process better, but I wouldn't be expecting miracles too soon.
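    (The ensemble idea in miniature: majority voting over a few crude base classifiers, all invented for illustration. Toolkits such as scikit-learn ship industrial versions of this, e.g. voting and boosting ensembles.)

    # Combine several weak classifiers by majority vote.
    from collections import Counter

    def vote(classifiers, x):
        """Return the label most base classifiers agree on."""
        return Counter(c(x) for c in classifiers).most_common(1)[0][0]

    # Three deliberately crude rules; the third is often wrong.
    rules = [
        lambda x: "big" if x > 10 else "small",
        lambda x: "big" if x > 12 else "small",
        lambda x: "big" if x > 25 else "small",
    ]

    print(vote(rules, 15))  # -> big: two of three outvote the bad rule
    print(vote(rules, 5))   # -> small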
  • by Anonymous Coward

    Just want to point out that this is about machine learning, not AI, so no need to worry about Skynet yet, although the ability to understand data and learn from it is the first step, or at least one piece of the jigsaw, on the way to Artificial Intelligence.

    From what I can gather, this is trying to standardize how a machine learns. It sounds like the situation we currently have in the education system, where there are numerous systems for teaching children to read and write. Rather than having numerous s

  • ... if they hadn't killed AI research in the mid-eighties, they wouldn't have to fund research today when it's more expensive? Thanks, DARPA...
