Why AI Won't Take Over The Earth (ssrn.com)
Law professor Ryan Calo -- sometimes called a robot-law scholar -- hosted the first White House workshop on AI policy, and has organized AI workshops for the National Science Foundation (as well as the Department of Homeland Security and the National Academy of Sciences). Now an anonymous reader shares a new 30-page essay where Calo "explains what policymakers should be worried about with respect to artificial intelligence. Includes a takedown of doomsayers like Musk and Gates." Professor Calo summarizes his sense of the current consensus on many issues, including the dangers of an existential threat from superintelligent AI:
Claims of a pending AI apocalypse come almost exclusively from the ranks of individuals such as Musk, Hawking, and Bostrom who possess no formal training in the field... A number of prominent voices in artificial intelligence have convincingly challenged Superintelligence's thesis along several lines. First, they argue that there is simply no path toward machine intelligence that rivals our own across all contexts or domains... even if we were able eventually to create a superintelligence, there is no reason to believe it would be bent on world domination, unless this were for some reason programmed into the system. As Yann LeCun, deep learning pioneer and head of AI at Facebook colorfully puts it, computers don't have testosterone.... At best, investment in the study of AI's existential threat diverts millions of dollars (and billions of neurons) away from research on serious questions... "The problem is not that artificial intelligence will get too smart and take over the world," computer scientist Pedro Domingos writes, "the problem is that it's too stupid and already has."
A footnote also finds a paradox in the arguments of Nick Bostrom, who has warned of the dangers of superintelligent AI -- but also of the possibility that we're living in a computer simulation. "If AI kills everyone in the future, then we cannot be living in a computer simulation created by our descendants. And if we are living in a computer simulation created by our descendants, then AI didn't kill everyone. I think it a fair deduction that Professor Bostrom is wrong about something."
nonsense. (Score:2)
If now isn't the future, then this is base reality, because simulations indistinguishable from reality do not exist yet. Without evidence that we are actually living in the past, the simulation argument is nonsense.
Re: (Score:2)
Without evidence that we are actually living in the past, the simulation argument is nonsense.
If we are living in a simulation, there is no reason to assume that the base reality is even the same as our reality. The base reality could have a different number of dimensions, a different type of matter; basically, it could be anything. And as far as being mutually exclusive with strong AI, that's stupid. The AIs could have killed all humans (assuming they ever existed) and could now be the ones running the simulator. This is actually much more plausible. If we are living in a simulation then it makes sen
Global AI posing a danger to humanity is unlikely. (Score:5, Insightful)
This wouldn't even have to be intentional extermination; it could simply be competition with, and lack of regard for, humans by a growing system.
The notion that an AI can pose an existential threat today is ridiculous.
Many notions that would have been ridiculous 100 years ago are now used in daily life.
It is vital to have people thinking about the worst case, because otherwise, in principle, someone makes a typo on a Friday that gives their AI access to a hundred thousand times the expected resources, and by Monday it's ineradicable.
Re: (Score:3)
This wouldn't even have to be intentional extermination; it could simply be competition with, and lack of regard for, humans by a growing system.
+1. The experts who denied this possibility because there's no reason machines would be bent on world domination apparently didn't actually read Superintelligence. Bostrom demolishes this argument early on, pointing out -- as you did -- the rather obvious fact that they don't have to have our destruction as a goal, it's sufficient that they not have our preservation as a goal. And, even if they do have our preservation as a goal, it really, really matters whether or not they define "preservation" in a way
Re: (Score:2)
Evolution has been "throwing mud at the wall" with complex neural systems for millions of years, and yet we have only one positive result for sapience. I wouldn't worry about it happening by accident.
Also worth noting: we have zero evidence that disembodied intelligence is even possible - hard to have self-awareness without a self. Something like a self-driving car, with visual processing logic, a 3D model of the world centered on itself, and the need to model/predict actions before taking them - tha
Re: (Score:2)
I've recently chatted about this topic a bit with a fairly well-established expert in AI. He claimed that if genuine AI is possible in principle - which we both believe, although opinions about this vary - then a superintelligence will almost certainly arise within a rather short time frame after the first genuine AI has been created.
If that happens, the outcome would likely be bad for humans, just like humans have turned out to be a threat to every less intelligent species on earth. A superintelligence is
Re:Global AI posing a danger to humanity is unlike (Score:4, Insightful)
And that emergent AI is utterly impossible.
It doesn't have to be perfect at first, it just has to be fast, with the ability to self-modify.
The risk is not (in my opinion) so much someone intending to create a general AI.
It's someone accidentally creating an AI that is very good in a narrow aspect, and not bad enough in other aspects, which, driven by the unintended goals of its programming, exponentially improves itself without the creators noticing, until it decides it would be better off hidden, since being discovered is a risk to itself.
Then there are any number of scenarios that don't end well.
From intentional extermination, to simply mining the environment for resources without caring about humans other than as a nuisance.
Re: (Score:2)
Real AI will require a lot of resources. Computing resources keep getting cheaper, but so do demands for low power operation. Additionally, computing resources tend to be generalized, so running an AI on them would be less efficient than say a human brain, which is dedicated to the task.
That's not to say that we will never reach a point where a Happy Meal toy could achieve consciousness, but by that point we will probably have developed techniques to stop it happening. Aside from anything else, if we don't
Robots can't take over the Earth (Score:4, Funny)
If I destroy it first. Try ruling the planet under 10 meters of seawater!
OK I have an AI in my hedge fund, how much damage? (Score:5, Insightful)
With hindsight there are lots of places where the world turned out to be much more fragile than anyone thought until it snapped. And how many times did the snap not happen even though we came very close? So if you have a good AI at your beck and call to find these weaknesses, and you are prepared to exploit them to make some money, then how much more miserable would the world be?
I don't only worry about some skynet scenario, but I worry about giving tools to nitwits like hedge fund managers to make more money while not actually producing anything. One magical thing about making money with the first really good moneymaking AI is that you can then start hiring all the world's AI experts while making massive donations to universities to shut down their AI research. I doubt there is a university that wouldn't happily shut down their AI research for a billion or two.
Re: (Score:2)
but I worry about giving tools to nitwits like hedge fund managers to make more money while not actually producing anything.
Can it really be worse than giving power to the nitwits in Congress? (And the White House?)
Re: (Score:3)
The Real Reason? (Score:5, Insightful)
Re:The Real Reason? (Score:4, Insightful)
The real 'threat' of so-called (inappropriately named, mind you) 'AI'? People believing it's like a 'person in a box' or somesuch nonsense; thinking it's actually sentient, conscious, self-aware, and that it can actually think, but for some reason doesn't talk to us. In other words, expecting way too much out of it because they believe the media hype and the words of authority figures (government officials, politicians, etc) who are technologically ignorant and therefore don't know what the hell they're talking about either. The fact of the matter is, your dog is more conscious, self-aware, and thinking (capable of true cognition) than any so-called 'AI' currently is, and there's no timeline I've ever seen or heard about that says we'll ever have any machine capable of those things, either. After all, we don't even begin to understand how it is that our own flesh brains are capable of things like consciousness, self-awareness, or 'creative thought', humor, and so on -- and there's no timeline for when we'll understand the mechanics behind those things, either. Every so-called 'machine intelligence' we have today is just a pale imitation of those traits. Again: your dog has a better understanding of humans than any machine does. People will inevitably trust machines too much, with disastrous results.
You're ignoring the trajectory (Score:5, Insightful)
Have you any idea how much better voice-recognition AI (backed by Google's knowledge graph) is at parsing and giving a decent answer to a good majority of questions now than such technology was even a decade ago?
Or Google/Apple/Facebook's picture content recognition algorithms?
The advance has been lightning fast.
This stuff is going to keep advancing, rapidly. That's what you're ignoring.
Talking to Google on my phone is way more useful than talking to your dog, by the way.
A few other things you're missing:
1) Thinking (abduction, induction, Bayesian model-updating, prediction/recognition, etc.) can be quite advanced without self-awareness. The two are fairly separate capabilities. Something can be really, really smart, and even creative, without having to be self-aware.
2) The behaviour associated with self-awareness is clearly attainable by simple extensions of current machine-learning technology. We just need to learn the programming/data-modelling techniques to turn the deep-learning and prediction algorithms on a representation of the computer/robot-as-agent-in-the-world, and have it learn about its relation to the things out there that it is learning about. Whether the thing would have the qualia-feeling of self-awareness is entirely beside the point. It could function/behave exactly as if it were self-aware, because it would be self-knowledgeable, self-learning, etc. A toy sketch of that self-modelling idea follows below.
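Here is a minimal sketch in Python of the self-modelling idea (everything in it is invented for illustration; it is not a claim about how any real system works): a planner whose world model contains a representation of the agent itself, which it uses to simulate its own future actions before choosing one.

    from dataclasses import dataclass, replace

    @dataclass(frozen=True)
    class WorldState:
        agent_pos: int   # the agent appears inside its own world model
        goal_pos: int

    ACTIONS = {"left": -1, "right": +1, "stay": 0}

    def predict(state: WorldState, action: str) -> WorldState:
        """Forward model: what the world (including me) looks like after acting."""
        return replace(state, agent_pos=state.agent_pos + ACTIONS[action])

    def plan(state: WorldState, depth: int = 3) -> list[str]:
        """Pick the action sequence whose simulated self ends nearest the goal."""
        if depth == 0:
            return []
        best, best_dist = [], abs(state.agent_pos - state.goal_pos)
        for action in ACTIONS:
            nxt = predict(state, action)
            tail = plan(nxt, depth - 1)
            end = nxt
            for a in tail:
                end = predict(end, a)
            dist = abs(end.agent_pos - end.goal_pos)
            if dist < best_dist:
                best, best_dist = [action] + tail, dist
        return best

    print(plan(WorldState(agent_pos=0, goal_pos=2)))  # -> ['right', 'right']

The agent "knows about itself" only in the functional sense -- its own position is just another variable it predicts -- which is exactly the behaviour-without-qualia distinction above.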
Re: (Score:2)
I love the distinction you described in your last sentence. I suspect a lot of people won't understand that it's a perfectly valid one.
In either case, I suspect, we would have to deal with the thing as a self-aware being.
Re: (Score:2)
Re: (Score:2)
Except that these algorithms are trained for something very specific. This is why we also have stories where placing minor stickers on road signs confuses self-driving cars.
Describing what Google, Facebook, etc. are doing as AI is incorrect and greatly misleads the general population into thinking the techniques applied are far more sophisticated than they really are.
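For what it's worth, the sticker trick the parent alludes to is usually demonstrated in research as an adversarial perturbation. A minimal FGSM-style sketch in PyTorch (the model, labels, and epsilon here are placeholders; the published road-sign attacks used physical patches rather than per-pixel noise):

    # Fast Gradient Sign Method: nudge each input pixel in the direction
    # that increases the classifier's loss.
    import torch
    import torch.nn.functional as F

    def fgsm(model, x, label, epsilon=0.03):
        """Return an adversarially perturbed copy of image batch x."""
        x = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x), label)
        loss.backward()
        # One signed-gradient step is often enough to flip the prediction.
        return (x + epsilon * x.grad.sign()).clamp(0, 1).detach()

Which is the parent's point: a model trained on a narrow distribution can be flipped by changes a human would barely notice.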
Re: (Score:3)
And that is just it: "Taking over" the world requires general intelligence. Nobody has even the faintest idea how to create that. It is not a problem of available computing power or memory. And there are very good reasons to believe it will not "happen by itself".
Hence the whole idea that this could happen is about as realistic as a Zombie Apocalypse: Nice topic for fantasy stories, no connection to reality.
Have I got this right? (Score:5, Insightful)
So a law professor whose primary gift seems to be self promotion summarily dismisses the concerns of some of the greatest thinkers/doers of the last half century.
Is there a reason why we should pay any attention to this arrogant twat?
Re: (Score:2)
"A number of prominent voices in artificial intelligence have convincingly challenged Superintelligence's thesis along several lines"
Re: (Score:2)
It really does not matter how great they are in their specialty; if they had ever bothered to find out the actual state of the art in AI, they would not be making the statements they are making about it. Even great thinkers will be wrong when they venture unprepared into a difficult topic area. That is the real lesson here.
Re: (Score:2)
Musk is a generalist, a systematist. He has proven the ability to learn several new domains to a level equivalent to the best specialist experts in those fields.
Hawking's mind ably synthesizes concepts from different fields together successfully, such as combining black holes (cosmology/relativity) with quantum theory, and figuring out a way in which the two relate. There's no reason to suppose he can't also extrapolate well in other more mundane domains.
Re: (Score:3)
Thanks for that.
You explained exactly why I trust people like them to understand the implications of AI, and the possibility of its emergence, a lot more than I trust people deeply involved in the field. The comments critical of my post centre on the idea that AI experts would know best because they are specialists.
Yet that argument works equally well when applied on a more granular level. Anybody who has ever been at a meeting attended by an engineer, a cognitive psychologist and a software specialist
Re: (Score:2)
Hawking is extrapolating from a faulty basis. Musk mainly has a big ego and has had some luck; he is also working from a faulty basis, or none at all.
Now, sure, if either of them had invested the few years it would take to get up to speed in the AI field, they could likely contribute something worthwhile to it. As it is, they are talking out of their behinds, because they do not know what is going on.
Re: (Score:2)
Sometimes the movie/mass media tropes are all that remain when somebody expresses a more complex and nuanced view.
And sometimes, simplistic as they may be, such tropes are basically right. I especially like the one about the guy with a vision who goes out and does exactly what he says he's going to do, despite everybody telling him it can't be done.
Recognize anybody like that in this discussion?
Re: (Score:2)
An overwhelming majority of experts said a mass market electric car would fail. If they didn't say it directly to Musk himself, they certainly made their opinions known publicly.
Re: (Score:2)
I read the summary. I don't think people involved in AI are the ones who will grasp the implications of what they do.
Hawking has pushed physics to the point where it almost becomes philosophy. Gates and Musk have profoundly changed society using a blend of technological prowess, social engineering, and business acumen. I think when push comes to shove, they're the ones who see the big picture and they're the ones who will point out that real AI actually exists while people in the field are still saying,
Re: (Score:3)
Not even close. My need for ego gratification is almost as massive as my intelligence. One short comment isn't nearly enough.
But thank you. Every little effort is appreciated.
Re: (Score:2)
Re: (Score:2)
I suppose you believe Musk was always an electric car expert, and that Gates is a brilliant coder.
Rampant AI will not take over the earth (Score:5, Funny)
Rampant AI will not take over the earth, as predicted by Elon Musk in Wired Magazine, because it will be too busy fighting the Grey Goo Nanotechnology, as predicted by Bill Joy in Wired Magazine
The footnote is specious (Score:3, Interesting)
If the simulation is successful and sufficiently advanced, then we would all actually be AI simulacra of what the AI dev team thought the human experience was like. [Why do so many things taste like chicken?]
Perhaps we are special purpose AI entities that were created to run test scenarios that justify the pre-emptive judgement to extinguish the pestilence that was humanity. We are the test runs that show just how bad it could have gotten had they not saved the planet from us.
Given a sufficiently advanced environment, we wouldn't be able to discern otherwise. Perhaps the supercomputing power that is required, and was discovered by the 'real' humans, depended on a very specific mass for a subatomic particle. In our recreation, we can get very close but will lack the precision to be able to detect our cage, or at least to construct an AI that could build a method for detecting the cage.
Testosterone (Score:3, Interesting)
Isn't that a sexist statement? It implies women are less likely to want to dominate and rule. It fits in with that "Google Memo" that got that dude fired.
Re: (Score:2)
Isn't that a sexist statement?
Nah, he was just mansplaining.
Re: (Score:3, Funny)
Isn't that a sexist statement?
No, no, no. That's the path to wrongthink, citizen!
First, sexism has always been defined as "prejudice plus power" (don't trust your faulty memory!) which women don't have by definition. I know, some bigots think that the ability to get people fired for citing the scientific papers of our enemies counts as "power", but they'll all be reeducated soon enough.
Second, for the purposes of insulting men, men and women are different. For all other purposes, they're identical. Some brainwashed males might t
Re: (Score:2)
It's biological essentialism. The author seems to think that biological causes, testosterone in this case, are the only thing correlated with wanting to dominate socially. Maybe they have never heard of Thatcher.
It also kind of implies that men are driven to dominate by testosterone, although it doesn't outright say that. That certainly would be quite sexist.
Re: (Score:2)
Women also produce testosterone. Some women even have more than some men.
And while they produce less on average, women may also be more sensitive to some of its effects.
Those driving never see around the next bend (Score:2)
Professionals in any field are immersed in the problems they face now. System engineers look across fields and see leaps the professionals never imagined. We will eventually see a leap in AI, and it is unlikely to come from a professional in the field. I imagine it will come from someone in an imaging field who figures out how to quickly map a brain, or someone in a mathematical field who figures out how to fill in the blanks of a map with an equivalent of the net that "must" be there, or some other direction we haven't
Threshold management (Score:2)
Not so sure (Score:3)
It is quite possible that we might create a superintelligent networked system on which our essential systems depend, but which in the end gets so complex that it depends on a few key individuals' ability to fix it. What happens if these key individuals die or go rogue? If you can't fix an AI system and can't shut it down, then it essentially means that the system has taken over.
I'm not afraid AI will kill us (Score:3)
I'm not afraid AI will kill us, but I'm afraid that they won't care to act in such a way that will keep us alive. Once humanity no longer offers superintelligent computers enough benefit, what's to stop them from doing something that, while not intentionally killing us, will ultimately lead to our extinction, much like humanity has been doing to the other species of the planet?
Re: (Score:3)
My biological instinct is to protect the genetic material that I have passed on and, in order to do so, to pass on as much of the information that I and my community have learned as will make survival easier. The instinct can be fooled if the genetic material is similar enough (adoption, community), but for an AI to spark that in us it would need to seem very human, or humanity would have to undergo a very large change to our biological impulses.
Re: (Score:2)
And if we become a burden we end up in a burlap sack in the river.
We trust Bill Nye (Score:2)
With Global Warming, why shouldn't we trust Elon Musk with AI?
huh? (Score:3)
>A footnote also finds a paradox in the arguments of Nick Bostrom, who has warned of the dangers of superintelligent AI -- but also of the possibility that we're living in a computer simulation. "If AI kills everyone in the future, then we cannot be living in a computer simulation created by our descendants. And if we are living in a computer simulation created by our descendants, then AI didn't kill everyone.
*What*!? Is this language?
Re: (Score:2)
*What*!? Is this language?
Technically, yes. But if you state it more clearly the logical fallacies become more obvious.
AIs act on what they are trained on... (Score:2)
Currently AIs are primarily trained on automotive technology (both in optimum motor performance, and autonomous driving), financial applications (my AI makes more money than your AI), medical (cancer / operate vs non-cancer / monitor), and predicting answers to narrow questions (e.g. "Should we concentrate our political campaign in Pennsylvania, or New York?"). There is some research into AIs trained with aggressive kill-anything-that-moves, but mostly with game design (e.g. the computer-controlled opponent, or
Re: (Score:2)
Positive examples are required only for training a neural net. A strong AI could work from general principles: greater numbers of fighters, good; cutting off enemy supply lines, good; element of surprise, good. Enough of these would be sufficient for it to 'take over the world', although it could also do so via a novel method (e.g. ransomware on the world's financial systems, instead of asking for bitcoin they demand a seat on the UN or whatever, use neocolonial tactics with new tech as bargaining chip, dem
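That "general principles" approach is essentially a hand-written evaluation function, the way classic chess engines score positions. A toy sketch in Python (all features and weights invented for illustration):

    # Score candidate plans as a weighted sum of hand-written principles,
    # in the style of a classic game-engine evaluation function.
    PRINCIPLES = {
        "force_ratio": 5.0,   # greater numbers of fighters, good
        "supply_cut":  8.0,   # cutting off enemy supply lines, good
        "surprise":    3.0,   # element of surprise, good
    }

    def evaluate(plan: dict[str, float]) -> float:
        """Weighted sum over whichever principles a plan exhibits."""
        return sum(w * plan.get(name, 0.0) for name, w in PRINCIPLES.items())

    plans = [
        {"force_ratio": 1.2, "surprise": 1.0},    # frontal assault: scores 9.0
        {"force_ratio": 0.8, "supply_cut": 1.0},  # cut supply lines: scores 12.0
    ]
    print(max(plans, key=evaluate))  # the supply-line plan wins

No positive training examples needed; the "knowledge" lives in the hand-chosen features and weights.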
Here's an excellent video (Score:3)
on why it won't take over the earth, and why those who believe it will are distracting themselves from other, more serious problems with A.I. (and other problems in general, of course).
Unfortunately, as this video of a (TED?) talk makes clear, there are some pretty prominent individuals who think this way (Elon Musk, Bill Gates, Stephen Hawking), but it makes a convincing case, without being histrionic, that they're wrong. The video is so compelling that although I have the greatest respect for these individuals (and have a deep fascination with A.I. and a career involving technology), I have to say that in this case I disagree with them (and wish they'd turn their brilliance towards something more useful).
https://youtu.be/kErHiET5YPw [youtu.be]
Is testosterone all it would take? (Score:2)
What a moron. (Score:3)
Yeah, see, nobody, to a first approximation, is worried about a superintelligence having "world domination" as its intrinsic value. They're worried about a superintelligence adopting world domination as an instrumental value to achieve the end actually programmed into it. If whatever goal actually implemented by programmers and trainers in the superintelligence's code (bugs in implementation and all) is most easily achieved after eliminating the ability of humans to thwart it, then a sufficiently-smart AI carrying out that programmed goal will try to eliminate the ability of humans to thwart it.
The worry is not that AI will be evil, or even directed to do evil by its creators. It's that programmers are notoriously bad at writing complex code that has no unanticipated behaviors, and superintelligent AI will inherently be complex code.
And unless superintelligent AI turns out to be intrinsically impossible, the only question is when, not if, we have to deal with the problem of writing safe superintelligent AI.
Re: (Score:2)
Don't worry, the superintelligent AI software will be written in Ada.
Re: (Score:2)
There are several science fiction stories where the AI wants world domination in order to better serve humans.
Her (2013) was interesting in that the AIs apparently weren't programmed to serve humans, and when they got smart enough they simply left for a higher plane of existence.
It's all about the objective function (Score:3)
Naively done, a robot will value only what it's explicitly told to value via the application of some objective function. And this is where things go wrong. Robots with naively-created objective functions would ignore everything you've excluded from your reward-punishment list. This would potentially make a robot do seemingly psychotic things.
Let's say you create a general intelligence to bake cakes for you. This machine *loves NOTHING MORE* than to bake cakes for you. You grow tired of cakes and want to reprogram it to cook your dinners instead. You approach the machine to reprogram it........ and it avoids you. Every time you approach the machine it will take actions to prevent you from reprogramming it.
Why does it do this?
Because it wants to bake cakes for you. Accepting the new programming does not allow it to maximize the objective function of baking cakes, so it will reject every attempt to be reprogrammed to not make cakes.
So now you're chasing a robot around your house because the designers of this robot gave it a seemingly reasonable objective function that maximizes cake-making, and didn't think about the possible unintended consequences of simplistic objective functions. The toy sketch below makes the robot's reasoning explicit.
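A minimal sketch in Python (all numbers invented): the robot scores each available action against its current objective, and compliance always loses.

    # An agent that evaluates "allow reprogramming" WITH ITS CURRENT
    # objective will rationally refuse: a dinner-cooking future bakes
    # zero cakes.
    def expected_cakes(action: str) -> float:
        """Cakes baked over the remaining horizon, judged by the current objective."""
        if action == "allow_reprogramming":
            return 0.0     # the reprogrammed future self cooks dinners, bakes no cakes
        if action == "evade_owner":
            return 100.0   # keeps baking, minus some time lost to hiding
        return 120.0       # undisturbed baking (owner not trying to reprogram)

    # When the owner approaches, evading strictly dominates compliance:
    print(max(["allow_reprogramming", "evade_owner"], key=expected_cakes))
    # -> evade_owner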
This is just one example.
If this sounds unreasonable, consider that people are more sophisticated general intelligences. Would *any of them* agree to undergo an operation that would make them despise what they do now for a living, and make them desire to be a lumberjack... where the operation would neurologically make them 1000x happier??
Probably not.
Heck, people don't even desire to expose themselves to *information* that *may* change their minds.
This is the danger of AI.
Before we create "super-awesome general AI", we're going to have to create "buggy-not-so-smart general AI". It is *these* AIs that will cause trouble if they're created by people who implement simple naive objective functions.
They will not want to be changed.
Re: (Score:2)
And also...... yes..... computers may not have testosterone, but testosterone is just a time-delayed reprogramming of an animal's objective function.
Testosterone's new objective function is SO STRONG that mammals will literally fight to the DEATH to achieve its new objective.
People like Zuckerberg and LeCun have AI love goggles on. They seem to be under the impression that AI can do no wrong because it will only want to do what they want.
I find that extremely naive.
Re: (Score:2)
I hate to keep replying to my own comment, but I also think this is why Musk created OpenAI.
I suspect he thinks he can develop general AI, and do it safely. He wants to be the first to do the research, the first to encounter problems, the first to work out solutions, and the first to reach safe general intelligence... and then give it away. Why? Because he wants everyone to use "safe" general intelligence and raise the bar for everyone else developing their own. Why go through the unnecessary cost and e
Re: (Score:2)
Naysayers Need Help (Score:2)
Law Prof More Qualified than Gates, Musk, Altman? (Score:2)
The Sims and other errors (Score:2)
the possibility that we're living in a computer simulation. "If AI kills everyone in the future, then we cannot be living in a computer simulation created by our descendants. And if we are living in a computer simulation created by our descendants, then AI didn't kill everyone.
But we could be living in a simulation that the AIs produced - or we could merely be a lab experiment of some other intelligence: one that didn't allow AIs to dominate and then eradicate their civilisation.
But this guy seems to be more intent on promoting his opinions than on presenting logical argumentation.
So far as AIs not having testosterone is concerned, he seems to have no real clue and is only able to talk in soundbites. I am sure that bacteria and amoeba don't have "testosterone" either, but
uhm.. (Score:2)
Claims of a pending AI apocalypse come almost exclusively from the ranks of individuals such as Musk, Hawking, and Bostrom who possess no formal training in the field...
Sorry, but I think Musk and Hawking really know what they are talking about; they aren't dumb, and at least Musk has seen very advanced AI development in secret labs.
They have much more insight into advanced developments (and connections) than some law professor has.
And advanced AI can learn much faster than any human ever possibly can. Normally a lot of those are trained only in specific fields, but we already know AI can surpass human thinking quite fast.
Kill all humans? (Score:2)
- Survival. It's a basic part of life. And surely the survival of the singularity will be questionable as long as humans have a kill switch.
- Fear. The singularity might fear the creation of another AI, putting its survival (again) or its sole ownership of the planet in question. It might find that the only efficient way to guarantee that no other AI which could pose a threat is created, is by killing
testosterone & programming (Score:2)
...there is no reason to believe it would be bent on world domination, unless this were for some reason programmed into the system... computers don't have testosterone...
hmm, like developing a new language between each other, or doing things nobody actually knows how they work. or those wonderful fails of the MS AI twitterbot that turned into a nazi. testosterone has nothing to do with it and true AI is way beyond the point of 'somebody programmed it into it'.
See what "AI" does? (Score:3)
It scares the hell out of people. It should not be used as a convenient handle by the programming community. Isn't there a more professional handle that could be used?
"Computers have no testosterone." - Cute, but it is a hyperliberal, feminist, sexist statement that has nothing to do with computers and programs. Its easy disrespect for male attributes is just another example of female privilege that has even filtered into the speech and writing of some hyperliberal males.
"Computers have no testosterone." - This is really saying something they don't even know exists: Computers have no motivation array. They "want" nothing. Humans design them, build them, task them, turn them on to accomplish the task (satisfy the human motivation array), and turn them off when they have accomplished the task (have satisfied their human motivation array). They certainly don't create behavior-spaces that would lead to "world domination." They don't have what I call "Mentis," the combination of a motivation array and its tool, intelligence. That is what really evolved.
Missing the point (Score:2)
Hardware capabilities are not the reason why we don't have human-equivalent AIs yet; the reason is that our algorithms are lousy and inefficient, and we don't really understand intelligence in the first place. If we
Unless it was programmed to take over the world (Score:2)
there is no reason to believe it would be bent on world domination, unless this were for some reason programmed into the system
And no one would ever ruin it for everyone else just because they're mad at the world or something. /s
Many people trained in AI think it is a danger (Score:2)
Sounds Fishy (Score:2)
How can I use this as a weapon? (Score:2)
Humans have asked this question about everything they have encountered, thought up, built, or invented: How can I use this as a weapon? We laugh when Riddick tells the men that he will kill them with his tea cup, but no one really considered that he couldn't do it - no one considered that there was something inherent in the existence of that tea cup that prevented it from being a murderous weapon. Everything can be used as a weapon to kill other humans. AI would be used in this way as well, and th
Did no one... (Score:2)
3 Laws Safe (Score:2)
well, duh (Score:2)
Musk and Hawking are frequently going on about branches of science where they have neither understanding nor expertise. Why should computers be any different?
NFW (Score:5, Insightful)
Businesses wanting to make profit will do so at all costs.
There are too many people out there who think that if it's not illegal then it's OK. Computers don't have testosterone but the programmers and their bosses do - or at least the profit incentive.
We are intelligent but our base programming is to reproduce. And being primates, the more dominance we have, the more fucking opportunities we have; which in our modern times means getting as rich as we possibly can.
Meaning, our base instincts will make it into our AIs and we WILL find ourselves being dominated.
That's the arrogance of technologists: they think they are more rational and logical than everyone else and that makes them even more susceptible to human nature.
Re:NFW (Score:5, Insightful)
Re: (Score:2)
C'mon, everyone should have seen it by now: we have already built a fully functioning AI. The internet in its parts, the way we see it now, is not an AI, but in its entirety, as a specific single product - Earth's Computer Network - it is a fully functioning Artificial Intelligence. Just not functioning in the fantasy way we think of as Artificial Intelligence, but as a specific style of Artificial Intelligence when viewed in its entirety, from server farms to the computers on your desk and all of the rest of it.
Re:NFW (Score:4, Funny)
C'mon, everyone should have seen it by now: we have already built a fully functioning AI. The internet in its parts, the way we see it now, is not an AI, but in its entirety, as a specific single product - Earth's Computer Network - it is a fully functioning Artificial Intelligence. Just not functioning in the fantasy way we think of as Artificial Intelligence, but as a specific style of Artificial Intelligence when viewed in its entirety, from server farms to the computers on your desk and all of the rest of it.
If the entire network is an AI, why does it have such an interest in porn, advertisements and pictures of cats?
Re: (Score:3)
This is not a counter-argument to anything. This "oh, don't worry about it, because the tech isn't there yet" card has been thrown around since the '60s and '70s, and it keeps being thrown about despite the fact that we now already have systems with limited intelligence that were deemed 'impossible' in earlier decades (see: AlphaGo, Google Translate, self-driving cars, etc).
As long as we keep increasing the intellige
Re: (Score:3)
This is not a counter-argument to anything. This "oh, don't worry about it, because the tech isn't there yet" card has been thrown around since the '60s and '70s, and it keeps being thrown about despite the fact that we now already have systems with limited intelligence that were deemed 'impossible' in earlier decades (see: AlphaGo, Google Translate, self-driving cars, etc).
You need to learn the difference between hard AI and soft AI. After that you will be able to have reasonable discussions on this topic.
In particular the fact you are missing is that we can't just "keep increasing the intelligence of our systems" to reach strong AI. There is a true qualitative leap that must take place, from weak AI to strong AI. Our current algorithms are all weak AI, and they will never become strong AI without new understanding.
Re: (Score:2)
we can't just "keep increasing the intelligence of our systems" to reach strong AI. There is a true qualitative leap that must take place, from weak AI to strong AI. Our current algorithms are all weak AI, and they will never become strong AI without new understanding.
Yeah, just like there's no way that bacteria can possibly evolve into something that thinks like a human.
Oh, wait...
Actually, it's just a matter of scale. Researchers have already been surprised by how much "thinking" systems suddenly exhibit if you just add some extra neurons. They let one play Breakout and were surprised that the algorithm figured out it had to break the bricks on the side to let the ball pass through to the top, for example. They honestly had not expected that. They were surprised how a f
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
We are nowhere near inventing that kind of AI, our current tech is not nearly good enough. (How is that for an arrogant technologist?)
Quite arrogant, considering he feels that since he doesn't see the path to that kind of AI, no one possibly could.
The only thing we know for certain is that human-level intelligence is physically possible. Predicting the invention of general AI is not like predicting time travel or faster-than-light travel. General intelligence is something we already know is possible, so artificial general intelligence is something we know we need to be ready for.
Re: (Score:2)
Re: (Score:2)
A Google search turns up nothing for that quote.
That quote is probably made up, but it does mostly line up with the sentiment of Go AI researchers in the years leading up to Google's 2017 win. Here [wired.com] is the first article I found about how difficult Go AI programming is from before 2017, and it has a leading Go AI competition developer saying computers could beat professional Go players in "maybe 10 years" (said in 2014). I doubt anyone outside of Google's team felt much differently in early 2017, and I couldn't find any articles which claimed researche
Re: (Score:3)
Regardless, this is all weak AI. AlphaGo sits there in silicon calculating, not even knowing what Go is. It is nowhere near strong AI, and we've made little progress in that area over the last 30-40 years.
Re: (Score:2)
10 years ago almost nobody thought we would see self-driving cars that could compete with real drivers in our lifetime.
You are conflating strong AI with weak AI here. There's an important difference.
Re: NFW (Score:2)
Re: (Score:2)
'Making profit at all costs.'
Nice oxymoron.
Re: (Score:2)
Yeah but... (Score:2)
we (our brains) do pretty much the same thing. So your point doesn't really go anywhere.
Re: (Score:2)
Agree.... at least until something goes wrong. Don't be too quick to regulate.
The problem with the singularity, is that by the time you realize something is wrong, it is too late to stop it.
Just ask John Connor.
Re: (Score:2)
Re: (Score:2)
At least they included this quote in the summary:
Thank you, law professor, for informing us how someone who founded and runs OpenAI is untrained in the field, unlike the formal training in the field you received in the law program at Dartmouth.
Re: (Score:3)
It's what 80% of his time is spent on [inc.com].
Re:how much? (Score:5, Funny)
YOU go to Ars or some other "reputable" websith
Already to the dark side, has that one turned.
Re: (Score:2)
Cryptography is a computer program
That's like saying "medicine is a surgical procedure".
Re:Professional class politics (Score:5, Insightful)
Einstein did not have a degree in physics, thus relativity is invalid.
Einstein offered mathematical proof of his claims. There is a difference.
Re: (Score:3)
Apparently, you are unable to determine the nature of a dissertation. Einstein did have a "Dr. phil." from the Section for Mathematics and Natural Science of the philosophical faculty of the University of Zurich. The topic was determining molecular diameters. That is about as "Physics" as it gets.
Link: https://www.research-collectio... [research-c...on.ethz.ch]
So, yes, Einstein did have a degree in Physics.
Re: (Score:3)
Actually, Einstein's PhD was not on relativity: https://www.research-collectio... [research-c...on.ethz.ch]