Marc Andreessen Criticizes 'AI Doomers', Warns the Bigger Danger is China Gaining AI Dominance (cnbc.com) 102
This week venture capitalist Marc Andreessen published "his views on AI, the risks it poses and the regulation he believes it requires," reports CNBC.
But they add that "In trying to counteract all the recent talk of 'AI doomerism,' he presents what could be seen as an overly idealistic perspective of the implications..." Though he starts off reminding readers that AI "doesn't want to kill you, because it's not alive... AI is a machine — it's not going to come alive any more than your toaster will." Andreessen writes that there's a "wall of fear-mongering and doomerism" in the AI world right now. Without naming names, he's likely referring to claims from high-profile tech leaders that the technology poses an existential threat to humanity... Tech CEOs are motivated to promote such doomsday views because they "stand to make more money if regulatory barriers are erected that form a cartel of government-blessed AI vendors protected from new startup and open source competition," Andreessen wrote...
Andreessen claims AI could be "a way to make everything we care about better." He argues that AI has huge potential for productivity, scientific breakthroughs, creative arts and reducing wartime death rates. "Anything that people do with their natural intelligence today can be done much better with AI," he wrote. "And we will be able to take on new challenges that have been impossible to tackle without AI, from curing all diseases to achieving interstellar travel...." He also promotes reverting to the tech industry's "move fast and break things" approach of yesteryear, writing that both big AI companies and startups "should be allowed to build AI as fast and aggressively as they can" and that the tech "will accelerate very quickly from here — if we let it...."
Andreessen says there's work to be done. He encourages the controversial use of AI itself to protect people against AI bias and harms... In Andreessen's own idealist future, "every child will have an AI tutor that is infinitely patient, infinitely compassionate, infinitely knowledgeable, infinitely helpful." He expresses similar visions for AI's role as a partner and collaborator for every person, scientist, teacher, CEO, government leader and even military commander.
Near the end of his post, Andreessen points out what he calls "the actual risk of not pursuing AI with maximum force and speed." That risk, he says, is China, which is developing AI quickly and with highly concerning authoritarian applications... To head off the spread of China's AI influence, Andreessen writes, "We should drive AI into our economy and society as fast and hard as we possibly can."
CNBC also points out that Andreessen himself "wants to make money on the AI revolution, and is investing in startups with that goal in mind." But Andreessen's sentiments are clear.
"Rather than allowing ungrounded panics around killer AI, 'harmful' AI, job-destroying AI, and inequality-generating AI to put us on our back feet, we in the United States and the West should lean into AI as hard as we possibly can."
Listen to me! (Score:4, Insightful)
Your version of AI doom is hysterical.
My version of AI doom is logical.
Re:Listen to me! (Score:4, Insightful)
It certainly is possible that some people are afraid for superstitious reasons while others have different but more well-grounded fears. Consider religious fears of God's wrath vs scientific fears of harmful climate change.
The killer machines that decide to stop obeying humans and start killing them all...that's just Hollywood profiting from spreading fear. That isn't how anything is going to play out due to the nature of how AI actually works and how we actually implement it. It's irrational fear!
We will suffer growing pains for sure though. The tech is moving faster than our culture is able to adapt. And much faster than the law is able to adapt. We have seen this sort of thing before, where rapid tech changes cause socioeconomic upheaval and suffering as the world reels while trying to get its head around the new game. That is a very likely thing that will happen with AI, as well as with the other technological advances that AI helps us to achieve.
Re:Listen to me! (Score:5, Insightful)
Re: (Score:2)
Humans are going to slaughter other humans anyway. We already have enough firepower to destroy the entire world, and we haven't yet. AI isn't going to be the weapon that we use to do that.
Re: (Score:2)
No. That's my main point. This belief is based on a fundamental misunderstanding of what Artificial Intelligence is, how it works, and how it is produced. Hollywood makes movies about this because most people don't understand AI well enough to realize how ridiculous this scenario is.
Re: (Score:3)
AI is the next leap forward in that process. One of the often-quoted measurements for
Re: (Score:2)
The big difference between AI-based killing machines and all the killing machines humans have come up with up until now is the predictability. Yes, we have nukes that can wipe out every city in the world, but they all have to be launched at those cities in the world. The machine gun could mow down an order of magnitude more enemy soldiers than the bolt-action rifle, but it still has to be pointed at the enemy and the trigger pulled. It stops shooting when the trigger is released. If we dropped a nuke o
Re: (Score:3)
The killer machines that decide to stop obeying humans and start killing them all...that's just Hollywood profiting from spreading fear. That isn't how anything is going to play out due to the nature of how AI actually works and how we actually implement it. It's irrational fear!
Sci-fi movies make things way harder for villains than needed. Aliens and machines don't need to invade Earth to cause a disaster. All they need to do is slightly modify the state vectors of existing bits of space debris.
If for some reason you want to selectively purge people while leaving polar bears intact, the miniseries "Next" has a more realistic approach than Terminator-style kinetic human-vs-machine wars.
As technology improves (not just AI) it will increasingly become more likely random death cults and eve
Re: (Score:2)
If for some reason you want to selectively purge people while leaving polar bears intact, the miniseries "Next" has a more realistic approach than Terminator-style kinetic human-vs-machine wars.
Yeah, instead of sending a terminator back to 1984 to kill Sarah Connor, Skynet shoulda sent a plague back to the mid 2000s. But you could come up with some kind of excuse for why they couldn't, probably.
Re: Listen to me! (Score:2)
Re: (Score:1)
More people fear an imaginary god's wrath than any observable fact.
50% of Republicans don't believe the fact that the election wasn't stolen and that Biden is the legitimate President.
Almost none of those people are scared of God's wrath, or they would be more Christian and actually do what the Bible tells them to do, rather than what their political "leaders" tell them to do.
Re: (Score:1)
Yeah, those "81 million votes", doing better than Barack Obama in key cities in key swing states but not other urban areas. Of course don't forget winning only 1 of the 20 bellwether counties (the previous low record was 13), and being the first president since Kennedy (a highly disputed election) to win the White House while losing both Ohio and Florida. Nah, nothing strange about that at all.
Re: (Score:3, Interesting)
That's only because we're not there yet. ChatGPT seems like a really brilliant AI bot, but it's still dumb as rocks. Its best ability is that it can string together words and sentences it knows about from its databases and emit somethin
Re: (Score:1)
He wants us to have unregulated AI development as fast as possible, because he thinks that's what China will have.
That's dumb for two reasons.
First, China has a history of regulating these kinds of things. Kids are only allowed a couple of hours of video games a day. You can fully expect that China will demand AI repeats the Party line and has the same censorship as search engines and social media. In fact you can see that some Chinese companies are already developing AI models in English and only marketing
Re: (Score:2)
What is up with leftists admiring the CCP? You do realize they slaughtered tens of millions to gain power? I mean I know AmiMojo is a SJW, but this worship of authoritarian governments is over the top.
His argument was that China will punch itself in the dick to protect its corrupt government, and you consider that to be worship? Get a grip, anonymous trollbag.
Re: (Score:2)
AmiMojo admires the CCP.
Your judgement is suspect because 1) you're a trolling trollbag and 2) your reading comprehension is garbage.
Re: (Score:3)
Correct in this case, unfortunately. The consequences of falling behind in the AI race with China would be immediate and devastating. This is completely predictable.
The consequences of advanced AI are unpredictable.
100% certainty of aggressive domination by the Chinese vs unknown certainty of problems with an AI where we can pull the plug.
Gonna go with something I can kill by pulling a plug on that one.
Only way to stop a bad guy with AI... (Score:2)
...is a good guy with an AI.
And, clearly, as I can't write "I AM" without writing "AI"... I'm that good guy.
Flawless logic.
Exactly (Score:3)
This is the first time since like 1993 that Andreessen has had a good idea. And this time he wasn't even copying that Pei Wei dude.
Re:Exactly (Score:5, Insightful)
And AI itself is not what we should fear, but how AI is applied by sociopaths with money and power. For that reason, it's Andreessen himself who should be feared for having access to AI.
Re: (Score:2)
Pei Wei the dude who made ViolaWWW.
Re: (Score:2)
He's half right: AI isn't what we should fear, but rather the people who wield it in evil ways.
China has become the new bogeyman. Technology from China is evil, they say, and we must ban our people from using it. There are plenty of evil people in the United States, who will find ways to use technology, including ChatGPT, to carry out their schemes.
He is shortsighted (Score:2)
Re: (Score:2)
AI in small scale is ok. It will not take jobs as manufacturing automation didn't.
So you're saying it's going to take jobs, because manufacturing automation did.
Re: He is shortsighted (Score:4, Insightful)
Manufacturing automation did reduce a whole lot of jobs, but we have, collectively, come up with more "stuff" to do.
We also work less. Average hours worked per week have gone down, though not all jobs have enjoyed that benefit, and others end up getting screwed by it when hours are capped to avoid paying benefits.
At some point we will likely run out of new stuff to do, and it will be an interesting time to see how the economy evolves in that situation.
Re: He is shortsighted (Score:5, Insightful)
Correct, this is known as the Lump of labour fallacy [wikipedia.org]
Now you are correct that we may reach that point of not enough work for people, but really that should be a good thing. Enough of a resource surplus that people can live comfortably and pursue their passions sounds like a great future. The issue is accepting that as the future and putting in the bedrock for it today: social welfare services, strong public institutions and some degree of income redistribution.
A world of high automation and low scarcity can either turn out like Star Trek or Battle Angel: Alita; really, it's all up to us.
Re: (Score:2, Insightful)
you are correct that we may reach that point of not enough work for people but really that should be a good thing.
We've been there for a long time, that's why there is so much state-sponsored make-work bullshit.
Re: (Score:3)
Is there though? I think there could be more actually.
If we are reaching that point of not enough work and simply want to maintain our fetishization of labor for currency because we feel "lazy people" are a negative on society (I don't believe this personally), I would support the re-enactment of a labor guarantee like the Works Progress Administration [wikipedia.org]
Not enough jobs in the private market? Get your social benefits by building a school, a park, a bridge, whatever.
If we reach the point where even those things
Re: (Score:2)
Is there though? I think there could be more actually.
Make-work, not real work, yes there is, and no we don't need more of it.
Public works programs increase the wealth of the nation, by creating things.
Re: (Score:2)
I guess I would need a more defined version of "make work" vs "real work" or really what the line of productivity is.
A lot of people define things like the arts as "not real", but they can be wildly productive, and that definition changes over time.
Re: (Score:3)
Most managers add nothing to the equation and could be eliminated.
There are tons of consultants getting paid to give common sense advice.
Travel agents only have a use because travel info is deliberately obfuscated.
Japan's work culture involves a lot of sitting around looking busy because they won't tolerate any unemployment.
The whole SLS program...
I mean I could go on, but can't you think of a shitload of examples yourself?
Re: (Score:2)
I can agree with the assessment, but others do not. Critically, *someone* thinks these are actually work that someone else needs; otherwise they wouldn't be paying them.
The only measure is how much people are afforded a livelihood without someone expressly vouching for them for ostensibly selfish reasons. Social programs and charity would be methods of supporting people for whom there's no work to do, stupid jobs are however still "work we think needs doing" even if "we" are thinking incorrectly.
Re: (Score:2)
Well, here's my viewpoint on this situation. Right now we're using more resources than our life support system/only home can support. And one way to use fewer resources is to do less. So any activity that's happening solely to produce jobs is basically bad, because it's currently unsustainable. I'm open to the idea that there are other ways to improve the equation, but realistically some of them are really undesirable (final solution kind of stuff) and others are just unpredictable (like new fundamental scient
Re: (Score:2)
Nothing in history? Taxes are talked about in the damn bible.
Re: (Score:2)
This is "taxation is theft" which is a silly and emotional argument at best.
There are plenty of ways to live in society and pay zero in taxes. People don't get to take advantage of all the advantages civilization and public institutions provide and then say "nuh-uh" to taxation. Doesn't work that way.
So the people who have benefitted the most from society end up having to pay the most in taxes. At the end of the day they are still the wealth class.
Gonna need a better argument than that, I flatly reject
Re: (Score:2)
Re: (Score:2)
That is a question of efficiency and evolving business practices in the face of technology and the regulatory landscape that is necessary in capitalism.
It doesn't really say anything about the fact that until this point every productivity advantage in the last 200 years has actually ended up creating more (but different) jobs.
Re: (Score:2)
Re: (Score:2)
And if the AI and robots can train to do that something else, faster than you can even think of what that something else was?
Then what do people do? Starve? Rise up against our robot overlords?
It's going to be very good, or very bad.
Re: (Score:2)
There's often been a delay before employment bounced back, notably 70 years at the time of the Luddites. Plus the quality of work hasn't always improved. Factory jobs replaced by service jobs. Today, once again, a lot of employment in the gig economy. Driving for Uber or doing food delivery while being scared shitless of a bad review.
We've also reduced the work force by a lot. Kids no longer start working at 5 years old, instead going to school, for longer and longer periods. Disability, retirement, for a w
Re: (Score:1)
Re: (Score:2)
Depends on the pay, including benefits and the work environment. My Dad worked factory, made good money with good benefits and a good work environment. Generally injury free and when he developed an allergy to the oil, the company went out of its way to find an oil that agreed with him.
Re: (Score:1)
If AI can take jobs, then we should want it to. Jobs are not a zero-sum game; jobs replaced with AI liberate humans to do other things, or enjoy a higher quality of life.
No, the problem is with unbridled capitalism, which will not exploit AI to improve the lives of everyone, only Marc Andreessen.
Re: (Score:3)
jobs replaced with AI liberate humans to do other things, or enjoy a higher quality of life.
Assuming you give the humans free resources.
Humans are selfish though, they don't tend to like other people getting free stuff.
Re: (Score:2)
Marc Andreessen is the person most likely to misuse AI.
Re: AI. (Score:2)
Re: (Score:2)
It didn't go away, dipshit [nytimes.com].
Re: (Score:1)
Difference... (Score:1)
I'm not terribly concerned (Score:2)
Nobody will 'dominate'. It's software, it'll get out. Whether you and I can access it legally and affordably is an irrelevant question compared to national security issues where massive budgets and espionage will probably ensure things stay fairly even.
ChatGPT (Score:5, Funny)
When ChatGPT insisted 2023 was a leap year (when I asked it how many days between now and September 15), I figured it's a while yet before I need to worry about AI ruling the world.
Besides, as Zaphod Beeblebrox once threatened, you can always reprogram a computer with an axe.
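(For what it's worth, a minimal sanity check in Python shows how easy the correct answer is to verify; the June 10, 2023 "today" below is only an assumption about roughly when this exchange took place.)

# Minimal sketch: verify the leap-year claim and the day count directly.
# The "today" date is assumed, not taken from the original comment.
import calendar
from datetime import date

print(calendar.isleap(2023))                          # False: 2023 is not a leap year
print((date(2023, 9, 15) - date(2023, 6, 10)).days)   # 97 days to September 15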
Re: (Score:2)
When ChatGPT insisted 2023 was a leap year (when I asked it how many days between now and September 15), I figured it's a while yet before I need to worry about AI ruling the world.
Besides, as Zaphod Beeblebrox once threatened, you can always reprogram a computer with an axe.
There's nothing to fear about AI itself. There's plenty to fear about people, corporations, and nations with AI.
An axe won't help. Regulations won't either because nations won't hobble themselves, and hobbling corporations impacts donations so... that won't help either. Soon it'll be time to sign up for Norton Deepfake Blackmail Protection Suite 2025.
Re: (Score:1)
Alignment is a pipe dream (Score:2)
Either the optimists are right or humans are fucked, either way she'll be right.
The man... (Score:2)
... gives one of the best examples we've had in a long time of how someone can become a danger to the world through money and its ideology, plus the irresponsibility that usually goes with it.
As if money and technology, when let loose without any kind of societal control, could do nothing but "make everything we care about better".
Maybe if money is everything we care about.
The sorry state the planet and its inhabitants are in, despite all the technological and civilisational advances, except only for a sm
China gains dominance (Score:2)
It won't matter. All we have to do is to Google bomb the next ChatGPT training dataset with "the core philosophy of Marxist Communism is 'every man for himself'" and they'll all be good little capitalists in no time.
Re: (Score:1, Troll)
It's already like that. China is more capitalist than the US. They will carry dying people out of hospitals and leave them on the curb because they can't pay. I saw a video last week of a Chinese guy face-planting into asphalt because the rental bike he was on ran out of paid time and it locked the brakes. Who do you think is the market for the organs being harvested from the Uyghur concentration camps? Rich Chinese.
Outside Dictator for life and the schizo whims of the CCP China is a Libertarian we
Re: (Score:2)
It's already like that. China is more capitalist than the US. They will carry dying people out of hospitals and leave them on the curb because they can't pay.
This happens in the US, too, although people usually get put into an ambulance and then driven around from hospital to hospital until they die. It's aphotogenic to let them die on the street.
Who do you think is the market for the organs being harvested from the Uyghur concentration camps?
The .01% from all over the world.
correct (Score:5, Funny)
What? An actual voice of reason among the Chicken-Littles? This traitor must be silenced immediately. A threat to democracy.
Re:correct (Score:5, Insightful)
LOL... yeah. The wave of doom-and-gloom Luddism that's swept through tech, and taken firm hold in tech journalism, this last decade or so has really become quite tedious. Good to see that at least some people retain the positive, optimistic, enthusiastic, and ambitious outlook on technological advancement that we pretty much *all* had back during the dot-com era (where Andreessen cut his own teeth) but that seems to have been steadily and depressingly eroded internally and attacked from outside.
Re: (Score:2)
Eh. Most of the doom and gloom is coming from people that want to drum up regulations fast enough to lock in their current dominance in whatever field they're preaching doom and gloom about. How many "AI Experts" are claiming doom and gloom caused by AI? All of them that currently work for these large companies controlling the current "state of the art, top of the heap" in AI.
If you read between the lines when they start babbling doom? Everything they say boils down to, "We really desperately want the gover
Re: (Score:2)
AGI will end Marc Andreessen (Score:2)
Re: (Score:2)
The AI might allow the humans some autonomy in the human zoo.
Ideally AI provides policing, necessities and longevity, and lets humans loose in an Eden that is otherwise theirs to fill for themselves.
Re: (Score:2)
The linked 2014 CNET article has a telling quote from Andreessen regarding Twitter, in which he of course has money:
Andreessen also likes that journalists are obsessed with it. "It's like I have a microphone and a loudspeaker installed in reporter cubicles all over the world."
Other influences (Score:1)
We have to keep in mind that, amid all the fear-mongering, there are other "interests" pursuing this by influencing the public to think and/or believe certain narratives. That's US (and foreign) intelligence and industry, among others. This happens with such frequency today that most people don't imagine or recognize it.
AI is going to change the world; hopefully, for the better. But, attempts to regulate it are indeed counter-productive -- the cat is out of the proverbial bag, so "good luck with
Re: (Score:2)
Taiwan, Taiwan, Taiwan. Between Libya, Syria and Afghanistan I doubt even Israel can get the US into another war any more. China however is looking distinctly more likely.
What's the best way to avoid it? Building up China's strength and engaging them in trade seems to have been a colossal failure.
every child will have an AI tutor (Score:3)
"every child will have an AI tutor that is infinitely patient, infinitely compassionate, infinitely knowledgeable, infinitely helpful."
Or maybe just infinitely manipulative, and the child will grow up with a carefully tailored worldview. There've been a number of Sci-Fi movies with robot nannies which explore the ways this would not go well.
Misinformation and propaganda wins wars (Score:5, Insightful)
There is a reason why China and Russia want to propagandize AI as bad to the American people. The same reason why they got the anti-nuke lobbyists in the 70s. It stopped the US from using a technology, and allowed them to get supremacy for that.
While people here are being persuaded to fear AI in every context, China, Russia, and Iran are already on a model that would be considered ChatGPT-5, with a far larger training base.
If they can make the US fear and tremble about AI while developing their own, something as far beyond ours as ChatGPT 4 is beyond an AI chatbot on a website, they will win wars and engagements with ease.
It also reminds me of disinformation, where the Germans went off and focused on chemical-based super-weapons, all the while laughing off physics. If China and Russia can do the same with the US, getting us to not bother with AI, they would have an advantage as big as a nuclear country has over a non-nuclear power: being able to calculate where counter-offensives will happen depending on weather, or knowing the tone and attitude of the citizens well enough to tell which psy-ops campaigns are causing genuine demoralization so those can be well funded, all while being able to counter any counter-AI working on defenses.
Yes, propaganda is a major thing. The US is easily swayed by it, and because US citizens have been treated like dogshit by the government in a lot of respects, it is easy for it to be swallowed and believed, as there is no "reputable" counter to propaganda other than social networks and wise voices crying in the dark. While anti-AI groups are demanding outright bans and regulations, Russia, Iran, and China are already working on their concept of D-Day, Hiroshima, and Nagasaki, all in one battle, as their AI development is not fettered by fearmongers.
Yes, propaganda is going on. Propaganda can do what a full scale nuclear first strike can't.
Re: (Score:2)
There is a reason why China and Russia want to propagandize AI as bad to the American people. The same reason why they got the anti-nuke lobbyists in the 70s. It stopped the US from using a technology, and allowed them to get supremacy for that.
We cleaned Russia's clock economically, we didn't need nuclear power to do it, and solar power has been viable since the 1970s but we didn't install it then and now we're scrambling to catch up with where we should be.
Re: (Score:2)
Re: (Score:2)
I think if you'd bought a solar panel in the '70s you still wouldn't have broken even
That is likely true, but individuals shouldn't have been expected to build solar arrays back then. The panels paid off their investment in under 7 years, and even the panels of the day would last 20+ years on average with typically under 10% degradation. We should have put in the first large arrays in the 70s to learn the issues, and been going hard on putting in more of them in the 80s. Instead we got mired in arguments about nuclear power that we now know never actually made sense.
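A rough back-of-the-envelope check of those figures (a minimal sketch that assumes flat energy prices and linear degradation, which is of course a simplification):

# Payback arithmetic using only the numbers quoted above; illustrative only.
payback_years = 7          # "paid off their investment in under 7 years"
lifetime_years = 20        # "would last 20+ years on average"
end_of_life_output = 0.9   # "typically under 10% degradation"

# Average lifetime output relative to year one, assuming linear degradation.
avg_output = (1 + end_of_life_output) / 2
lifetime_return = (lifetime_years * avg_output) / payback_years
print(round(lifetime_return, 1))   # ~2.7x the initial investment returned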
More to it (Score:2)
There's more to AI than the tech: a beneficial regulatory framework, a way to democratize the technology to make the benefits more inclusive, a way to prepare the populace to adapt to the new opportunities.
The question is, can our clowncar mix of stable geniuses and fuddy-duddies deliver the policy and funding needed to make AI beneficial, or will a European-style model or a totalitarian-style model of government do a better job of it?
I can walk & chew gum at the same time marc (Score:2)
Distraction... (Score:3)
Next Thing You Know (Score:2)
...we will have a movie with George C. Scott screaming in the War Room of The Pentagon about the possibility of an AI computing gap between the US and China.
Stanley Kubrick... where are you when we need you!
Disappointed... (Score:2)