Elon Musk Warns Governors: Regulate AI Before It's 'Too Late' (recode.net) 201
turkeydance shared a new article from Recode about Elon Musk:
He's been warning people about AI for years, and today called it the "biggest risk we face as a civilization" when he spoke at the National Governors Association Summer Meeting in Rhode Island. Musk then called on the government to proactively regulate artificial intelligence before things advance too far... "Normally the way regulations are set up is a whole bunch of bad things happen, there's a public outcry, and after many years a regulatory agency is set up to regulate that industry," he continued. "It takes forever. That, in the past, has been bad but not something which represented a fundamental risk to the existence of civilization. AI is a fundamental risk to the existence of human civilization"... Musk has even said that his desire to colonize Mars is, in part, a backup plan for if AI takes over on Earth.
Several governors asked Musk how to regulate the emerging AI industry, to which he suggested learning as much as possible about artificial intelligence. Musk also warned that society won't know how to react "until people see robots going down the street killing people... I think by the time we are reactive in AI regulation, it's too late."
AI lmao (Score:2, Insightful)
Regulate elon musk before his retarded opinions get out of hand.
Oh too late.
Re: (Score:1)
Re: (Score:2)
Regulate elon musk before his retarded opinions get out of hand.
I wouldn't say retarded, but insane; more specifically delusional. But we see that in quite a few visionaries throughout history. I think delusions might be why they were visionaries in the first place.
That's not necessarily a good thing. For every successful visionary, which is what history mainly records, there were an awful lot of failed ones, who experimented with things that killed them, or killed others. I see Elon Musk a bit like the guy who experimented with sending soldiers over city walls using
We'll be fine. (Score:3)
So far, every time they have quoted Elon Musk about the dangers of AI, it's always been out of context. Seems like a clickbait making situation that they just can't resist.
Re: (Score:1)
Probably something we should consider, this guy isn't a crackpot. Even way back when he announced an affordable all-electric car that would come in the 3rd round, which has now turned into the Model S and currently isn't all that unaffordable compared to any other brand-new car. Or back when SpaceX was just something people talked about before a rocket ever took off, and now he has stages landing themselves on barges. Whatever he puts his mind to seems to happen, and all the recent chitchat about AI doesn't sound all
Re: (Score:2)
Probably something we should consider, this guy isn't a crackpot.
what part of "out of context" didn't you understand?
Re: (Score:2)
And what part of "3D printers aren't better than CNCs" didn't you understand?
Re: (Score:2)
And what part of "3D printers aren't better than CNCs" didn't you understand?
Okay, mill me a hollow sphere with your CNC.
Re: (Score:2)
Sure, right after I 3D-print sub-100-micron precision parts out of stainless steel.
Re:We'll be fine. (Score:5, Insightful)
Apparently where you live a $16k car does 0-60 in 5.6 seconds (base model, not performance model), has front and side collision avoidance (standard), drives for 2-3 cents per mile and has 1/10th the moving parts of a normal car.
Hey, while you're at it, why not compare it to a Tata Nano? Or a used Yugo held together by duct tape?
Is it the bottom of the market? No, of course not. In fact, there's nothing about it that could be described as bottom of the market. But $35k is neither out of the ordinary for a car of its featureset / performance, nor some sort of unaffordable luxury cruiser or supercar. And they did this in half a decade from a small two-seat six-figure car. I mean, for crying out loud, how fast of a price reduction would make you happy? They've furthermore laid out clear plans to continue the price reduction trend, with Gigafactory and its successors. Even at the current price, their current preorders amount to over a year's wait at full production.
That some people find this to be some sort of slow pace of advancement and scaleup boggles the mind. It's like having to wait 8 seconds to heat up some food and complaining, "Come on!!! Isn't there anything faster than a microwave?" And at the same time you see the same people complaining that Tesla has to keep doing financing rounds rather than paying dividends. So they're apparently supposed to take their current supermassive production scaleup / price scaledown curve, increase it severalfold, and do that without investor money.
Re: We'll be fine. (Score:2)
Re: We'll be fine. (Score:4, Informative)
I don't understand either of the above posts.
5.6 seconds is the acceleration of a low-end Mustang (which also costs about the same as a baseline Model 3). A typical econobox sedan these days does it in about 8 seconds, more like 9 for a typical crossover. On the opposite side of the spectrum, the fastest Veyron is 2.4, and the fastest Model S 2.34. The performance option for the Model 3 hasn't been announced (although it's been announced that there will be one); I'd expect it to be in the 3.5-5 second range, depending on a lot of factors. It won't be able to hit the top S speeds because it can't support as big of a pack; nor would Tesla want to make it be able to, as they want to have a reason for higher-end buyers to choose the higher-end vehicle class (Model S).
As for driving range: the more powerful you make an EV, the longer its range. It's the opposite of gasoline vehicles. In addition to the larger pack needed for more power, more power also means lower-resistance conductors; this means lower energy loss at cruising speeds.
Now, if the GP meant "if you're constantly pushing a vehicle to its limits, you go a shorter distance with a more powerful vehicle", that's obviously true for both EV and gasoline. But range figures (for both EV and gasoline) are not for track duty, they're for normal road duty.
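To put rough numbers on the resistance point (a toy calculation with made-up figures, not any real pack's specs): at a fixed cruising power draw, conduction loss goes as I^2 * R, so halving the resistance halves the loss.

    # Toy illustration only: hypothetical numbers, not actual Tesla pack specifications.
    def conduction_loss_watts(cruise_power_w, pack_voltage_v, resistance_ohm):
        # I^2 * R loss at a given cruising power draw
        current_a = cruise_power_w / pack_voltage_v
        return current_a ** 2 * resistance_ohm

    CRUISE_POWER_W = 15_000  # assume ~15 kW highway cruise
    PACK_VOLTAGE_V = 400     # assume a ~400 V pack

    for label, r_ohm in [("base pack", 0.08), ("performance pack, lower R", 0.04)]:
        loss = conduction_loss_watts(CRUISE_POWER_W, PACK_VOLTAGE_V, r_ohm)
        print(f"{label}: {loss:.0f} W lost to conductor resistance at cruise")

Small in absolute terms, but the same scaling applies to every conductor and cell interconnect in the car, which is why the beefier hardware pays off at cruise.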
Re: (Score:3)
For some more figures....
Porsche Cayenne: Baseline (2012 and earlier) 7.3 sec; Diesel V6 (2013) 6.8 sec; Diesel V8 (2013) 5.3 sec; S (2011) 5.6 sec; S (2015) 5.1 sec; S hybrid (2011) 6.2 sec; S E-hybrid (2016) 5.2 sec; Turbo (2015 and earlier) 4.2-4.3 sec; Turbo S (2016) 3.8 sec.
Ford Mustang: Ecoboost (2015, various): 5.3-6.0 sec; V6 (2016): 5.3 sec; GT (2015, various): 4.3-4.7 sec
It's funny how much we've gotten used to these sorts of performance figures being affordable (mid-5 figures). 5 seconds was superc
Re: (Score:2)
Re: (Score:2)
Amazing statement about the handling of a vehicle that's not even on the roads yet.
You know, you could try to at least appear unbiased.
Re: (Score:2)
This is about the closest you'll get [roadandtrack.com]
Re: (Score:2)
Re: (Score:2)
You're interpreting "not being designed for the track" as "has bad handling", as if the two are at all the same thing. The Model S has superb handling, and reviews are almost uniformly in agreement on this. It's not a track car because it's not designed to handle track cooling loads, having nothing to do with handling.
The track car market is much smaller than the luxury sedan market, so obviously it isn't their target. That said, they do plan to make an actua
Re: (Score:2)
Re: (Score:2)
It is not just pure numbers.
I recently drove a Model S, P100D. In ludicrous mode, the acceleration is mind-boggling, but it feels completely safe. None of that Porsche "the car is trying to kill me" attitude.
At the same time, assistance features that even a few years ago were reserved for luxury cars are now in mid-range cars (e.g. the Hyundai Ioniq). The automotive world is changing fast, and things that mattered one generation ago will be unimportant tomorrow, either because nobody cares anymore, or becau
Re: We'll be fine. (Score:1)
Never happen (Score:5, Insightful)
A - We don't really have true AI yet. (Or is this like No True Scotsman?)
B - As we get closer, the AI we're developing will be too profitable, so those profiting from it will prevent or subvert any regulation, anyway.
Re: (Score:3)
Consider the amount of knowledge and true AI it would take, simply to implement Asimov's Three Laws.
What is a human being?
How do you tell a human being from a mannequin or a humanoid-form robot?
What constitutes harm to a human being?
What actions might eventually cause harm to a human being?
Really, the First Law is the toughest of the three, given a little thought.
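To make that concrete, here is a rough sketch (hypothetical code; nothing like these primitives actually exists, which is exactly the problem) of what even a naive First Law check would have to call:

    # Hypothetical sketch: the rule itself is trivial; the primitives it needs are not.
    def is_human(entity):
        # Tell a person apart from a mannequin or a humanoid-form robot.
        raise NotImplementedError("requires general perception and world knowledge")

    def might_cause_harm(action, entity):
        # Decide whether an action could eventually cause harm to a human being.
        raise NotImplementedError("requires long-horizon causal modelling of 'harm'")

    def first_law_allows(action, nearby_entities):
        # Naive First Law filter: reject any action that might harm any human present.
        return not any(is_human(e) and might_cause_harm(action, e)
                       for e in nearby_entities)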
Re: (Score:2)
Re: (Score:3)
I suggest maybe you read some Asimov again. The three laws weren't even a good idea in his novels. Although they do make for an interesting hook for a locked-room detective story.
Re: (Score:2)
At some point, Asimov explained that part of his three laws was to show something that seemed simple, yet offered nearly infinite opportunities for stories.
Re: (Score:2)
Never say never. It would not be a bad idea to write Isaac Asimov's "Three Laws of Robotics" into International Law
Yes, I think that would be a bad idea. His laws are based on a socio-religious view that all human life is special and sacred.
By those laws, a robot would be justified in crashing through Michelangelo's David in order to prevent a falling child from hurting itself.
It would stop cops from pursuing fleeing robbers, rapists and murderers, lest they come to harm.
It would obliterate ecosystems if a tiny bit of medicine saving a single human life could be extracted that way.
In the end, AIs would have to work on
I just want to know... (Score:5, Funny)
Somebody is confusing AI with robotics (Score:5, Insightful)
Now, everybody has seen Terminator and The Matrix, but it seems like some viewers keep the suspension of disbelief long after exiting the cinema.
AI may be advancing with giant strides, but robotics is still far, far away from doing anything remotely similar to a Terminator, even the simplest models ;-) Somebody as familiar with the limitations of current batteries as Mr. Musk must be should think about how these killer robots are going to kill more than a handful of humans before the batteries run out. Although I suppose they could hijack electric cars' batteries, once those are ubiquitous. Or perhaps he was really referring to autonomous cars becoming self-aware and killing every pedestrian in sight for some reason. Again, first show a car that can drive fully autonomously, and then start worrying about how smart it's going to be.
Autonomous robot fighters will come, once the AI is in place. They will take the form of autonomous tanks at first, I suppose. Something big that will have enough fuel to last some time. The second step, I suppose, would be swarms of small drones, each with a camera and a small explosive charge that attaches to a foe and detonates. Other devices will follow. That is unavoidable. If one country legislates against them, the other countries will gain an insurmountable advantage on the battlefield. And certainly rogue operators could use these devices to mount terrorist attacks. That's also mostly unavoidable. When the technology is there, you cannot legislate it away.
I don't know exactly why Mr. Musk made these declarations; perhaps he is genuinely worried about an apocalyptic future. But a public figure from the business world asking politicians for regulation always smells like advantage-seeking or damage control of some kind to me.
Re: (Score:2)
Re: (Score:2)
I've read articles debating whether or not Musk could actually be from the future. There were some really strong arguments made for it, more than against.
We already have AI killer robots! (Score:2)
Russia has already built fully autonomous AI tanks that can hunt and kill targets. Drones could easily be upgraded to make all killing and targeting decisions without human interaction. What happens when this technology eventually gets into the hands of bad people, like the Mexican cartels for example?
Autonomous robot hitmen in the form of drones or autonomous vehicle-mounted machine guns could easily be a thing in the future, and I think this is the kind of thing that would be truly disastrous if it became wid
Re: (Score:2)
If AI is highly regulated, only big corporations will be able to work with it. This is similar to the situation we see with nuclear technology or rocketry, for example. I fear this will further consolidate economic power in a few hands. What if only big corporations were allowed to own and drive cars? So much economic power would be removed from the hands of citizens and there would be so much more unemployment, because the benefits of the wealth generated from private transportation wo
Re: (Score:2)
AI may be advancing with giant strides,
It's not. The article is talking about strong AI here, which hasn't made any real progress since the 70s. It's important to distinguish strong AI from weak AI.
btw your post somewhat contradicts your sig, since the post is entirely made up of great, general views.
Re: (Score:2)
AI may be advancing with giant strides, but robotics is still far, far away from doing anything remotely similar to a Terminator, even the simplest models ;-)
Musk and many others are not thinking that AI is already dangerous. They are thinking about something called the singularity - the point at which AI can improve upon itself, creating a positive feedback loop where AI evolution outpaces our ability to follow, understand - or stop it.
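A toy model of that feedback loop (completely made-up numbers, only meant to show the shape of the argument): if the rate of improvement grows with current capability while our ability to follow it grows roughly linearly, the curves eventually cross.

    # Toy model only: arbitrary units and made-up constants.
    ai_capability = 1.0     # how capable the system is
    human_oversight = 10.0  # how much capability we can still follow and audit

    for year in range(1, 31):
        ai_capability *= 1.0 + 0.2 * ai_capability  # improvement compounds with capability
        human_oversight += 1.0                       # our understanding grows about linearly
        if ai_capability > human_oversight:
            print(f"toy model: capability outruns oversight around year {year}")
            break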
The tipping point is not "when will the first computer achieve sentience?" - that is ill-defined and it might not ever be sentient in a human sense, but instead in a different way. The tipping point is "when does machine evolution
Re: (Score:2)
I am not advocating Mr. Musk's point of view, but there are many ways a supposedly conscious and rogue AI could destroy civilization; it doesn't necessarily have to be of the walking-robot type.
There are many ways a conscious human can destroy civilization. If I were intent on destroying civilization wholesale, I'd probably research long-incubation, high-mortality diseases. Or start a religion. Or vote for unstable and unpredictable politicians.
Quote from president Minsky Snapdragon (Score:4, Funny)
The first AI CEO turned presidential candidate will be noted as saying in the upcoming 2070 election "I could stand in the middle of Fifth Avenue and shoot somebody and I wouldn't lose any voters, okay? It's, like, incredible"
What could go wrong?
Re: (Score:2)
Other than a Presidential election in 2070, you mean? We have those things scheduled for 2068 and 2072, but we'd need a Constitutional Amendment or three to have one in 2070....
Re: (Score:2)
Re: (Score:2)
I guess you could make it simpler by shooting the president and the vice president.
Or don't you hold elections then?
Re: (Score:2)
We wouldn't hold an election if the president and vice president were both killed. There's a long list of people who are next in line. Specifically, in order:
- Speaker of the House
- President pro tempore of the Senate
- Secretary of State
- Secretary of Treasury
- Secretary of Defense
- Attorney General
- Secretary of the Interior
- Secretary of Agriculture
- Secretary of Commerce
- Secretary of Labor
- Secretary of Health and Human Services
- Secretary of Housing and Urban Development
- Secretary of Transportation
- Secretary
Re: (Score:2)
And this applies in peacetime and not only in wartime?
Because it does not look really well thought out: "Secretaries of something" are usually not elected in any way but appointed by the president.
Re: (Score:2)
And this applies in peacetime and not only in wartime?
There would have to be some sort of major crisis for all those people to go.
Because it does not look really well thought out: "Secretaries of something" are usually not elected in any way but appointed by the president.
True, it would be better if it were an interim presidency with an emergency election soon to follow. But those "Secretaries of something" are still partly elected - they are only appointed if that president is elected.
Re: (Score:3)
Why would this be different than any other political issue? The problem needs to be severe enough to make national news before Congress will act on it.
I'd even go one further and say that a Republican-led Congress wouldn't pass AI legislation if the crazy robot murdering people happened to be in California or Washington. They would say that this is a state regulatory issue. Besides, it's a Blue state... those people who got killed weren't going to vote for us anyway. Sad, but the partisan divide has gotten t
F***in' Liberals (Score:1)
"Regulate AI"
Why do these Leftists think that government regulation is the answer to everything?
If Mr. Musk knew anything about business or creating jobs, or at least watched Fox News every now and then, he'd realize that the invisible hand of free market capitalism will prevent a robot apocalypse more efficiently than any government regulation will.
It's not AI...hijacked term.... (Score:1)
I've watched over the years as the word 'AI' has been hijacked.
They are knowledge systems. They are a bunch of if/then/else branches running really fast. It's not intelligence. Period. There isn't going to be some magical 'self awakening' (watching too many movies).
The computer still can't produce a true random number without some sort of quirk of the system being used. Why? Because it's still a bunch of 1s and 0s.
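For what it's worth, the pseudorandomness point is easy to demonstrate: a textbook linear congruential generator (the constants below are the standard Numerical Recipes ones) spits out the exact same "random" sequence every time it starts from the same seed.

    # Deterministic "randomness": same seed, same sequence, every time.
    from itertools import islice

    def lcg(seed, a=1664525, c=1013904223, m=2**32):
        state = seed
        while True:
            state = (a * state + c) % m
            yield state

    run1 = list(islice(lcg(seed=42), 5))
    run2 = list(islice(lcg(seed=42), 5))
    print(run1 == run2)  # True: no true randomness without an external entropy source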
It will take a revolution in computer systems to create any kind of AI - not simply ma
Re: (Score:3)
And you got it in reverse - we are much more advanced in neural nets than in robotic mechatronics. What's holding robotics back now is a lack of cheap and efficient batteries and mechanics.
Re: (Score:2)
I've watched over the years...
As have I. It's hopeless. My plan now is to pollute the term so much it becomes meaningless. Anything that involves a computer is AI.
Re: (Score:2)
It's not intelligence.
You're right...it's not real intelligence. It's like an artificial approximation...let's call it Artificial Intelligence.
Your goalpost is set for authentic intelligence, not artificial.
They won't. We bandage, not prevent (Score:4, Interesting)
We are more about bandaging up the problems than preventing them in the first place. Look at pollution. Places don't work on reducing it until it becomes a problem.
Technology is the same way; after all, the people writing the laws generally know nothing about the new technologies emerging.
No, my guess is we will have problems long before we start doing preventive measures.
Re: (Score:2)
It's true, ya know. Even after all of these recent cryptolocker ransomware attacks and credit card information breaches, the government still seems to have no interest in passing any cybersecurity legislation. They still seem to be convinced that businesses can self-regulate this stuff, although it seems that the average business nowadays is about three years behind on Windows patches and has no clue how to configure proper authentication on an Amazon S3 bucket.
Something simple like le
Re: (Score:2)
We are more about bandaging up the problems than preventing them in the first place. Look at pollution. Places don't work on reducing it until it becomes a problem.
Which is the right thing to do.
The reason we don't pre-emptively address problems until they become problems is that we can't actually know what will be a problem until it is. Take a look through the last few decades of history at all of the prognostications of what the major problems were going to be, then look at what actually happened. It's really quite rare that we get our predictions right. Note that it's easy in hindsight to look at what did become a problem and then find the predictions -- they alw
Again With This Shit (Score:5, Interesting)
Silicon Valley billionaires like Sam Altman have repeatedly joined Musk in his crusade for AI regulation over the last few years. All of them are invested in startups doing advanced AI research, by the way. It's a campaign to play on the ignorant populace's fears and misconceptions about AI, in an attempt to legislate smaller AI startups out of business and to more tightly control how private citizens can profit from advances in machine learning.
In a way this is a lesson learned from the early history of computing and the internet, because now everybody and their dog is allowed to write programs, cobble together powerful devices, and send data all over the world - all of which is simply due to the fact that nobody in power saw this coming back then. Now "they" are working hard on reversing that, by locking devices down, making tampering with DRM illegal, and walling off the open network - but none of that would have been necessary if big corps at the time had had the foresight to legally classify generic computing as a national security threat.
This is absolutely deplorable, and the fact that it seems to be working is beyond worrying. Everybody who is only slightly in favor of this would do well to take a minute and think through what such regulation would mean, not only for AI, but for computing in general. This is about who gets to control the pace, the price, and the magnitude of human progress moving forward.
Re: (Score:2)
Crazy alternative theory: What if they built a strong AI [wikipedia.org] already, and they are keeping it under wraps because they found it is too dangerous? Or what if they built it and want to release it, but will not do so until the law prevents people from abusing it?
Re: (Score:2)
I can, and have, thrown together systems in a few hours or days and a hundred dollars of eBay and Amazon purchases that twenty years ago took a dozen people, three million dollars, a year, and the resources of one of the largest companies in the world.
Re: (Score:2)
Could someone explain... (Score:3)
why he thinks it would be possible for humans to control superintelligent AI with regulation? Or why it wouldn't be able to achieve space travel?
Re: (Score:2)
Except it wouldn't work, because the AI will eventually escape from the bottle. Anyone could gain a temporary advantage by giving their own AI a little extra freedom. And there's no way to perfectly enforce the regulations.
More Musk nonsense... (Score:5, Interesting)
What exactly IS "AI?" You have to strictly define it before you can "regulate it." Actually, "AI" isn't "artificial intelligence" at all. It was, and is, a sloppy term for advanced theories and programming techniques to solve problems. You may as well try to regulate clouds. Basically, you would destroy programming. Besides, whatever we (in America) did would not be done elsewhere, for advantage. And other, non-AI, programming of powerful computer systems does damage too. It is very easy to say what Musk is saying, but put a microscope on it and there is really nothing there.
E Proelio Veritas means "from struggle, truth." I created it in the early 90s for a tiny chess club that collapsed, and took it for myself to use on the internet. The base of the thought-path was Emanuel Lasker's dictum that states, "On the chessboard lies and hypocrisy do not survive long." I made it general.
Re: (Score:2)
What exactly IS "AI?"
The AI relevant here is Artificial General Intelligence. That is, AI that has roughly human-level capacity for abstraction, creation of explanatory models of the world around it, and application of those models to create new knowledge as needed to accomplish its goals (whatever those may be).
I think that's about as precisely as we can define it right now, because we don't yet understand intelligence well enough to define it much better than that. But it's clear that there is a qualitative difference in th
Re: (Score:2)
"AI" isn't "artificial intelligence" at all. It was, and is, a sloppy term for advanced theories and programming techniques to solve problems.
The term you are looking for here is "Weak AI." That is distinct from Strong AI.
What exactly IS "AI?" You have to strictly define it before you can "regulate it."
If this is actually a topic you care about, you should search for "strong AI." You will find some potentially workable definitions.
Right... (Score:2)
"until people see robots going down the street killing people..."
We already have this, except those robots are made of flesh and blood, instead of silicon and steel. Call me when the people rise up to put an end to this kind of programming.
Elon the vaporware peddler (Score:2)
We also need high-speed rail in California, subterranean transport in LA, and to commercialize space. I wouldn't be surprised if Elon has an AI defense company he's trying to peddle.
In all of his endeavors he's absolutely clueless as to the physics involved. Remember, the Tesla sedan was going to be affordable for every family in the US, with mass production capacity funded by the people paying for the Roadster. We're now 4 iterations further on and still no electric car is affordable without massive government su
Howard Hughes Mk2 (Score:5, Insightful)
Artificial Intelligence? We can't even define the intelligence of a cockroach let alone model it.
Re: (Score:3)
Re: (Score:2)
No, but we can already create computer systems that then proceed to do things we didn't program them for, in ways we didn't tell them to and sometimes don't even understand.
Ostensibly smart guys being dumb (Score:2, Insightful)
What we have so far, and will have for quite some time to come, is not what I and others in the know would call true 'AI'; your 'algorithms' aren't conscious, self-aware, or capable of true cognition; they aren't anywhere near capable of thinking, not in the way that's necessary for 'robots walking down the street killing people', or 'Skynet taking over', or anything out of a friggin' Isaac Asimov novel. Please, please,
A bit late (Score:2)
Since Natural Stupidity has already taken over the White House.
The inherent fallacy behind all of this (Score:1)
The ridiculous premise behind all of this fear-mongering is the idea that an independently thinking, self-aware, and physically mobile AI would even give a shit about humanity enough to want to kill us all, or even "take over Earth" as he puts it. This idea is, to me, the ultimate in nonsense. Picture this: you are a being with perfect recall of any data, able to think things through in nanoseconds, with no need for a specific type of land, food, or even a narrow temperature range within which to exist; you age
James P. Hogan: The Two Faces of Tomorrow (Score:2)
Concern about intermittent power outages is one example: http://www.sfreviews.net/2face... [sfreviews.net]
"Set in roughly the mid-21st century, Two Faces chronicles the exploits of a team of scientists as they attempt to develop a computer capable of learning, of using the equivalent of human common sense in its decision-making and programming strategies. The world is by this time, of course, dominated by computer technology, and one such system already in place, responsible for running many of society's most important and
The sky is falling (Score:3, Insightful)
Regulation is another feel-good measure along the lines of our current security theater.
Even IF we outright banned it, do you think other countries will adhere to the will of the US in such matters ?
Unlikely.
So the question becomes this:
Do you allow your adversaries to develop the tech that will be used against you, ( in war, economy, or any application ) or do you try to keep pace to keep the playing field even ?
Imagine if we had banned Science and Math outright early on in our history because of the potential for what it could be used for.
We would still be living in caves and hunting with spears.
Re: (Score:3)
Imagine if we had banned Science and Math outright early on in our history because of the potential for what it could be used for.
We would still be living in caves and hunting with spears.
Imagine if we hadn't enacted some bans and regulation on Nuclear technology.
We might be living in caves and hunting with spears.
The only thing we do know about strong AI is that it has the potential to be extremely dangerous, because we know intelligent things can be extremely dangerous. We don't know how far we are from creating strong AI, but it's not too early to start figuring out how to mitigate the risk.
Re: (Score:2)
Even IF we outright banned it, do you think other countries will adhere to the will of the US in such matters ?
There's this thing called "international treaties". Maybe you heard about it? It's how the world got together and agreed that biological weapons are a really stupid and dangerous idea and we'd rather not have them.
Imagine if we had banned Science and Math outright early on in our history because of the potential for what it could be used for.
We would still be living in caves and hunting with spears.
And if we hadn't talked about the dangers of some inventions, say, nuclear weapons, we would already be back to living in caves and hunting with spears.
Exciting times are ahead (Score:2)
As an (ex-)AI researcher now into survivalism, I find the future very exciting. I'll just need to start building an underground bunker, my own killer robots, and an AI companion that helps me "protect" the rest of humankind from the evil AIs. Mine will be nice, obedient, and good for everyone, of course. I just hope the lawmakers understand that, or my AI companion and I will have to find ways to persuade them. Buahhahhahhaa.
Truly... (Score:2)
Re: (Score:2)
Mars (Score:2)
It is already too late. (Score:3)
It is already too late.
In fact, it always was too late.
Regulations don't stop people from doing things.
Laws don't stop people from doing things.
Otherwise we would not have police or criminals.
No matter what you do for laws and regulations someone, somewhere will make a General AI.
Elon is like the little Dutch boy with his finger stuck in the dike.
He, you, and I can lament, but it isn't going to stop GAI.
The only solution is to create the first GAI which is benevolent towards us but in turn protects us from any malevolent GAI.
Re: (Score:2)
Yes, I'm aware of that. I still think malevolent is worse and more likely than indifferent. It is moot, though.
Re: (Score:2)
I'm aware of the story. Too bad you didn't read more carefully and get the full joke.
How about some concrete proposal, Mr. Musk? (Score:2)
Right now, we can’t get lawmakers to agree on (or even to rationally discuss) environmental protection (pollution, climate change, etc.), long-term energy needs, healthcare (vaccinations, etc.), telecommunications (network neutrality, voice-mail spam, etc.), and many other technology-related topics and the many abuses that they enable... and Musk is hoping that those same people would have the time, the personal interest and the capability of wrapping their brains around a still-vague mostly-future te
A relevant topic related to this on Slashdot (Score:2)
Of course given that some of the elected officials deciding this stuff are barely able to understand that fax machines are not the optimal way to exchange information, it may be difficult for them to grasp what could soon be going on, and how to address it.
People can accuse him of being a crackpot all they want, but s
Unfavorable outcome? (Score:2)
Why is it every scenario we dream up involving computers thinking for themselves turns out poorly?
Aren't there some scenarios where this turns out good? Like, I dunno, the AI is grateful toward humanity for creating it and helps humanity to the best of its ability.
I mean, even that seemingly favorable outcome is often twisted into 'what is helpful to humanity?' What it might consider helpful we might consider harmful. They're good questions, sure, but why does the answer always have to tilt toward the dark?
I
Re: (Score:2)
Why is it every scenario we dream up involving computers thinking for themselves turns out poorly?
Aren't there some scenarios where this turns out good?
There are.
In Keith Laumer's Bolo series, the bolos start out as military vehicles, super-heavy tanks that get steadily bigger and more powerful as materials science evolves over the course of a couple of millennia. The Mark XX is the first fully autonomous version produced. The Mark XXXIII is the last version depicted, and somewhere between the Mark XX and the Mark XXXIII, they became strong AI. Across all those versions, not one of them ever turned on its authorized human operators. Quite the opposite.
Musk has it all wrong! (Score:2)
The AIs will not need killer robots to kill people. All the AIs have to do is crash the stock market. We will take care of the rest ourselves. People will happily shoot other people for a doughnut.
Not with a bang, but with a whimper (Score:2)
All these ideas have a frog-in-hot-water quality: they are incremental rather than spectacular, like 'killer robots', but some of the
Re: (Score:3)
"This is silly of course but Elon insists the entire planet could be lost than AI has no place in society."
Yeah, silly, but still, let's start with his Autopilot before his cars start killing people on purpose.
Re: (Score:2)
New conspiracy theory: Elon Musk is a follower of Roko's Basilisk.
Re: (Score:2)
Do you seriously believe that banning it will solve the problem? Has banning something ever stopped people from wanting to do it?
For that matter, has regulation ever solved such a problem, either?
Re: (Score:2)
Do you seriously believe that banning it will solve the problem? Has banning something ever stopped people from wanting to do it?
No, you can't get everyone to stop wanting to do something, but you can get them to not do it. When was the last time you saw people having sex or smoking in a restaurant?
Re: (Score:2)
I've never wanted to do either of those things. But I would like an intelligent robot.
You're making the case for me here. People want to do different things, which sometimes conflict. You may want an intelligent robot. I don't see whether it came from a womb or a factory as a reason to allow one intelligent being to be owned and not the other.
Thus we have laws and regulations to sort these conflicting desires out, instead of anarchy.
Re:Than a ban is needed (Score:5, Informative)
Having just watched the interview, [youtube.com] I can tell you one of the governors asked Elon that exact question. Gov. Doug Ducey (R-AZ) said (paraphrasing): If they discovered a colorless, odorless, tasteless gas that could explode, people would say "Ban it!" but then we wouldn't have natural gas. How do we regulate something that doesn't even exist yet?
Elon's response: "Well, I think the first order of business would be to gain insight. Right now the government does not even have insight. I think the right order of business would be to stand up a regulatory agency. Initial goal: gain insight into the status of AI activity. Make sure the situation is understood. Once it is, then put regulations in place to ensure public safety. That's it."
Re: (Score:2)
Sticking our heads in the sand isn't going to help. Other countries are ahead with AI development, and if a country is to remain a superpower, having AI research (as well as the supercomputers to back it up) is a must. AI is useful for figuring out how a country might attack and the best defense against it, and ultimately AI may replace generals as the best way of pushing forward in a theater of combat, just as chess masters and Go veterans have been set aside.
If AI research is ba
Re: (Score:2)
First you have to define what AI is before you can regulate it.
We cannot even seem to define what natural intelligence is. Or what the exact boundaries between natural and artificial are.
This seems like a job for philosophers, not legislators.
The politicians should work on what they honestly think protects their constituents the most, and be adaptive enough to change erroneous decisions. (But there I did it again, using "politicians" and "honestly" in the same sentence.)
Re: (Score:2)
Re: (Score:2)
And when they grab you with those metal claws, you can't break free. Because they're made of metal. And robots are strong.
Re: (Score:2)
> There's nothing to regulate.
I'm right, nobody wanted an electric rocket.
*pout* I wanted an electric rocket.
Re: (Score:2)
Look in your mom's nightstand.
Re: (Score:2)
Re: (Score:2)
"The Catholic Church used to be all about [torture and forcibly converting nonbelievers], too"
Yes, and fixing that took a reformation followed by centuries of evolving secular law to control church power. Will it take as many centuries to fix Islam?