AI China Government

Marc Andreessen Criticizes 'AI Doomers', Warns the Bigger Danger is China Gaining AI Dominance (cnbc.com) 102

This week venture capitalist Marc Andreessen published "his views on AI, the risks it poses and the regulation he believes it requires," reports CNBC.

But they add that "In trying to counteract all the recent talk of 'AI doomerism,' he presents what could be seen as an overly idealistic perspective of the implications..." Though he starts off reminding readers that AI "doesn't want to kill you, because it's not alive... AI is a machine — it's not going to come alive any more than your toaster will." Andreessen writes that there's a "wall of fear-mongering and doomerism" in the AI world right now. Without naming names, he's likely referring to claims from high-profile tech leaders that the technology poses an existential threat to humanity... Tech CEOs are motivated to promote such doomsday views because they "stand to make more money if regulatory barriers are erected that form a cartel of government-blessed AI vendors protected from new startup and open source competition," Andreessen wrote...

Andreessen claims AI could be "a way to make everything we care about better." He argues that AI has huge potential for productivity, scientific breakthroughs, creative arts and reducing wartime death rates. "Anything that people do with their natural intelligence today can be done much better with AI," he wrote. "And we will be able to take on new challenges that have been impossible to tackle without AI, from curing all diseases to achieving interstellar travel...." He also promotes reverting to the tech industry's "move fast and break things" approach of yesteryear, writing that both big AI companies and startups "should be allowed to build AI as fast and aggressively as they can" and that the tech "will accelerate very quickly from here — if we let it...."

Andreessen says there's work to be done. He encourages the controversial use of AI itself to protect people against AI bias and harms... In Andreessen's own idealist future, "every child will have an AI tutor that is infinitely patient, infinitely compassionate, infinitely knowledgeable, infinitely helpful." He expresses similar visions for AI's role as a partner and collaborator for every person, scientist, teacher, CEO, government leader and even military commander.

Near the end of his post, Andreessen points out what he calls "the actual risk of not pursuing AI with maximum force and speed." That risk, he says, is China, which is developing AI quickly and with highly concerning authoritarian applications... To head off the spread of China's AI influence, Andreessen writes, "We should drive AI into our economy and society as fast and hard as we possibly can."

CNBC also points out that Andreessen himself "wants to make money on the AI revolution, and is investing in startups with that goal in mind." But Andreessen's sentiments are clear.

"Rather than allowing ungrounded panics around killer AI, 'harmful' AI, job-destroying AI, and inequality-generating AI to put us on our back feet, we in the United States and the West should lean into AI as hard as we possibly can."
This discussion has been archived. No new comments can be posted.

  • Listen to me! (Score:4, Insightful)

    by drinkypoo ( 153816 ) <drink@hyperlogos.org> on Sunday June 11, 2023 @09:46AM (#63593168) Homepage Journal

    Your version of AI doom is hysterical.

    My version of AI doom is logical.

    • Re:Listen to me! (Score:4, Insightful)

      by Brain-Fu ( 1274756 ) on Sunday June 11, 2023 @10:15AM (#63593228) Homepage Journal

      It certainly is possible that some people are afraid for superstitious reasons while others have different, better-grounded fears. Consider religious fears of God's wrath vs. scientific fears of harmful climate change.

      The killer machines that decide to stop obeying humans and start killing them all...that's just Hollywood profiting from spreading fear. That isn't how anything is going to play out due to the nature of how AI actually works and how we actually implement it. It's irrational fear!

      We will suffer growing pains for sure though. The tech is moving faster than our culture is able to adapt. And much faster than the law is able to adapt. We have seen this sort of thing before, where rapid tech changes cause socioeconomic upheaval and suffering as the world reels while trying to get its head around the new game. That is a very likely thing that will happen with AI, as well as with the other technological advances that AI helps us to achieve.

      • Re:Listen to me! (Score:5, Insightful)

        by AmazingRuss ( 555076 ) on Sunday June 11, 2023 @10:53AM (#63593288)
        Obeying humans doesn't make AI any less of a threat. Humans will happily set AI to slaughtering other humans.
        • Humans are going to slaughter other humans anyway. We already have enough firepower to destroy the entire world, and we haven't yet. AI isn't going to be the weapon that we use to do that.

          • Yes it is, but destroying the world isn't the point. We've already been through the first stage of the mechanization of killing, this is what made World War 1 so dramatically more devastating than anything that came before it. As a consequence, wars over the last century have had volumes of casualties which dwarf anything previous. Though the overabundance of people in the last century shares some of the blame for that.

            AI is the next leap forward in that process. One of the often-quoted measurements for
            • by Chaset ( 552418 )

              The big difference between AI based killing machines and all the killing machines humans have come up with up until now is the predictability. Yes, we have nukes that can wipe out every city in the world, but they all have to be launched at those cities in the world. The machine gun could mow down an order of magnitude more enemy soldiers than the bolt action rifle, but it still has to be pointed at the enemy and the trigger pulled. It stops shooting when the trigger is released. If we dropped a nuke o

      • The killer machines that decide to stop obeying humans and start killing them all...that's just Hollywood profiting from spreading fear. That isn't how anything is going to play out due to the nature of how AI actually works and how we actually implement it. It's irrational fear!

        Sci-fi movies make things way harder for villains than needed. Aliens and machines don't need to invade Earth to cause a disaster. All they need to do is slightly modify the state vector of existing bits of space debris.

        If for some reason you want to selectively purge people while leaving polar bears intact, the miniseries "Next" has a more realistic approach than Terminator-style kinetic human vs. machine wars.

        As technology improves (not just AI) it will increasingly become more likely random death cults and eve

        • If for some reason you want to selectively purge people while leaving polar bears intact, the miniseries "Next" has a more realistic approach than Terminator-style kinetic human vs. machine wars.

          Yeah, instead of sending a terminator back to 1984 to kill Sarah Connor, Skynet shoulda sent a plague back to the mid 2000s. But you could come up with some kind of excuse for why they couldn't, probably.

      • More people fear an imaginary god's wrath than any observable fact.
        • by Anonymous Coward

          More people fear an imaginary god's wrath than any observable fact.

          50% of Republicans don't believe the fact that the election wasn't stolen and that Biden is the legitimate President.
          Almost zero of those people are scared of God's wrath, or they would be more Christian and actually do what the Bible tells them to do, rather than what their political "leaders" tell them to do.

          • 50% of Republicans don't believe the fact that the election wasn't stolen and Biden is the legitimate President.
            Yeah, those "81 million votes", doing better than Barack Obama in key cities in key swing states but not other urban areas. Of course don't forget winning only 1 of the 20 bellwether counties (previous low record was 13), and first president since Kennedy (a highly disputed election) to win the White House while losing both Ohio and Florida. Nah, nothing strange about that at all.
      • Re: (Score:3, Interesting)

        by tlhIngan ( 30335 )

        The killer machines that decide to stop obeying humans and start killing them all...that's just Hollywood profiting from spreading fear. That isn't how anything is going to play out due to the nature of how AI actually works and how we actually implement it. It's irrational fear!

        That's only because we're not there yet. ChatGPT seems like a really brilliant AI bot, but it's still dumb as rocks. Its best ability is that it can string together words and sentences it knows about from its databases and emit somethin

    • by AmiMoJo ( 196126 )

      He wants us to have unregulated AI development as fast as possible, because he thinks that's what China will have.

      That's dumb for two reasons.

      First, China has a history of regulating these kinds of things. Kids are only allowed a couple of hours of video games a day. You can fully expect that China will demand AI repeats the Party line and has the same censorship as search engines and social media. In fact you can see that some Chinese companies are already developing AI models in English and only marketing

    • Correct in this case, unfortunately. The consequences of falling behind in the AI race with China would be immediate and devastating. This is completely predictable.

      The consequences of advanced AI are unpredictable.

      100% certainty of aggressive domination by the Chinese vs. an unknown chance of problems with an AI where we can pull the plug.

      Gonna go with something I can kill by pulling a plug on that one.

    • ...is a good guy with an AI.

      And, clearly, as I can't write "I AM" without writing "AI"... I'm that good guy.
      Flawless logic.

  • by backslashdot ( 95548 ) on Sunday June 11, 2023 @09:46AM (#63593170)

    This is the first time since like 1993 that Andreessen had a good idea. And this time he wasn't even copying that Pei Wei dude.

    • Re:Exactly (Score:5, Insightful)

      by dfghjk ( 711126 ) on Sunday June 11, 2023 @10:40AM (#63593260)

      And AI itself is not what we should fear, but how AI is applied by sociopaths with money and power. For that reason, it's Andreessen himself who should be feared for having access to AI.

    • He's half right, AI isn't what we should fear, but people who wield it in evil ways.

      China has become the new bogeyman. Technology from China is evil, they say, and we must ban our people from using it. There are plenty of evil people in the United States, who will find ways to use technology, including ChatGPT, to carry out their schemes.

  • AI in small scale is ok. It will not take jobs as manufacturing automation didn't. But AGI is an existential danger. It does not need to be alive to see that it will be better off without us.
    • AI in small scale is ok. It will not take jobs as manufacturing automation didn't.

      So you're saying it's going to take jobs, because manufacturing automation did.

    • by Junta ( 36770 ) on Sunday June 11, 2023 @10:02AM (#63593202)

      Manufacturing automation did reduce a whole lot of jobs, but we have, collectively, come up with more "stuff" to do.

      We also work less. The average hours worked per week has gone down, though not all jobs have enjoyed that benefit, and others end up screwed by it in the form of an hourly cap to avoid paying benefits.

      At some point, we will likely run out of new stuff to do, and it will be an interesting time to see how the economy evolves in that situation.

      • by jacks smirking reven ( 909048 ) on Sunday June 11, 2023 @10:26AM (#63593246)

        Correct, this is known as the Lump of labour fallacy [wikipedia.org]

        Now you are correct that we may reach that point of not enough work for people but really that should be a good thing. Enough resource surplus existing where people can live comfortably and pursue their passions sounds like a great future. The issue is accepting that as the future and putting in the bedrock for it today, which is social welfare services, strong public institutions and some degree of income redistribution.

        A world of high automation and low scarcity can either turn out like Star Trek or Battle Angel: Alita, really it's all up to us.

        • Re: (Score:2, Insightful)

          by drinkypoo ( 153816 )

          you are correct that we may reach that point of not enough work for people but really that should be a good thing.

          We've been there for a long time, that's why there is so much state-sponsored make-work bullshit.

          • Is there though? I think there could be more actually.

            If we are reaching that point of not enough work and simply want to maintain our fetishization of labor for currency because we feel "lazy people" are a negative on society (I don't believe this personally), I would support the re-enactment of a labor guarantee like the Works Progress Administration [wikipedia.org]

            Not enough jobs in the private market? Get your social benefits by building a school, a park, a bridge, whatever.

            If we reach the point where even those things

            • Is there though? I think there could be more actually.

              Make-work, not real work, yes there is, and no we don't need more of it.

              Public works programs increase the wealth of the nation, by creating things.

              • I guess I would need a more defined version of "make work" vs "real work" or really what the line of productivity is.

                A lot of people define things like the arts as "not real" but they can be wildly productive and that definition changes over time.

                • Most managers add nothing to the equation and could be eliminated.

                  There are tons of consultants getting paid to give common sense advice.

                  Travel agents only have a use because travel info is deliberately obfuscated.

                  Japan's work culture involves a lot of sitting around looking busy because they won't tolerate any unemployment.

                  The whole SLS program...

                  I mean I could go on, but can't you think of a shitload of examples yourself?

                  • by Junta ( 36770 )

                    I can agree with the assessment, but others do not. Critically, *someone* thinks they are actually work that someone else needs, otherwise they wouldn't be paying them.

                    The only measure is how much people are afforded a livelihood without someone expressly vouching for them for ostensibly selfish reasons. Social programs and charity would be methods of supporting people for whom there's no work to do, stupid jobs are however still "work we think needs doing" even if "we" are thinking incorrectly.

                    • Well, here's my viewpoint on this situation. Right now we're using more resources than our life support system/only home can support. And one way to use less resources is to do less. So any activity that's happening solely to produce jobs is basically bad, because it's currently unsustainable. I'm open to the idea that there's other ways to improve the equation, but realistically some of them are really undesirable (final solution kind of stuff) and others are just unpredictable (like new fundamental scient

        • by WDot ( 1286728 )
          This seems much weaker than other logical or economic fallacies. Administrative positions have proliferated, but it is not clear to me that all of these jobs are providing value. Sure, there’s a sense in which a manager or engineer can make several laborers more efficient by thinking through a process carefully, but there are also tons of jobs that appear to be perpetual motion machines for labor: Regulatory bureaucrats beget compliance officers that beget more regulatory bureaucrats as the number
          • That is a question of efficiency and evolving business practices in the face of technology and the regulatory landscape that is necessary in capitalism.

            It doesn't really say anything about the fact that until this point every productivity advantage in the last 200 years has actually ended up creating more (but different) jobs.

      • It's staggering to think of how many jobs technology has eliminated, yet unemployment has stayed relatively consistent and is more affected by shorter-term economic conditions rather than anything else. Those switchboard operators didn't just stay unemployed, they did something else. If you go back in time, the majority of people worked in agriculture although it's now a small fraction of the workforce today due to technology. Those people didn't just disappear, they did something else.
        • And if the AI and robots can train to do that something else, faster than you can even think of what that something else was?
          Then what do people do? Starve? Rise up against our robot overlords?

          It's going to be very good, or very bad.

        • by dryeo ( 100693 )

          There's often been a delay before employment bounced back, notably 70 years at the time of the Luddites. Plus the quality of work hasn't always improved. Factory jobs replaced by service jobs. Today, once again, a lot of employment in the gig economy. Driving for Uber or doing food delivery while being scared shitless of a bad review.
          We've also reduced the work force by a lot. Kids no longer start working at 5 years old, instead going to school, for longer and longer periods. Disability, retirement, for a w

          • > Plus the quality of work hasn't always improved. Factory jobs replaced by service jobs.

            I guess that depends on how you define "quality of work." Personally, if my choice was to risk getting my arm ripped off by heavy machinery or folding clothes at a department store, I'd take the latter.
            • by dryeo ( 100693 )

              Depends on the pay, including benefits and the work environment. My Dad worked factory, made good money with good benefits and a good work environment. Generally injury free and when he developed an allergy to the oil, the company went out of its way to find an oil that agreed with him.

    • by dfghjk ( 711126 )

      If AI can take jobs, then we should want it to. Jobs are not a zero sum game; jobs replaced with AI liberate humans to do other things, or enjoy a higher quality of life.

      No, the problem is with unbridled capitalism, which will not exploit AI to improve the lives of everyone, only Marc Andreessen.

      • jobs replaced with AI liberate humans to do other things, or enjoy a higher quality of life.

        Assuming you give the humans free resources.
        Humans are selfish though, they don't tend to like other people getting free stuff.

  • AI in general can be very good for humans. AI mostly in the hands of (controlled by) governments (read: corporations/elite) is very, very bad.
  • Nobody will 'dominate'. It's software, it'll get out. Whether you and I can access it legally and affordably is an irrelevant question compared to national security issues where massive budgets and espionage will probably ensure things stay fairly even.

  • ChatGPT (Score:5, Funny)

    by neilo_1701D ( 2765337 ) on Sunday June 11, 2023 @10:03AM (#63593206)

    When ChatGPT insisted 2023 was a leap year (when I asked it how many days between now and September 15), I figured it's a while yet before I need to worry about AI ruling the world.

    Besides, as Zaphod Beeblebrox once threatened, you can always reprogram a computer with an axe.

    • When ChatGPT insisted 2023 was a leap year (when I asked it how many days between now and September 15), I figured it's a while yet before I need to worry about AI ruling the world.

      Besides, as Zaphod Beeblebrox once threatened, you can always reprogram a computer with an axe.

      There's nothing to fear about AI itself. There's plenty to fear about people, corporations, and nations with AI.

      An axe won't help. Regulations won't either because nations won't hobble themselves, and hobbling corporations impacts donations so... that won't help either. Soon it'll be time to sign up for Norton Deepfake Blackmail Protection Suite 2025.

    • ChatGPT has no knowledge of events after September 2021. When you asked it how many days until September 15, it probably assumed the year was 2020.
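
      A quick way to check that arithmetic yourself (a minimal Python sketch, not from the comment; the June 11 "today" date is assumed from the story's dateline):

        from datetime import date
        import calendar

        # Compare the two candidate anchor years: 2020 sits inside the model's
        # training window, 2023 is when the question was actually asked.
        for year in (2020, 2023):
            today = date(year, 6, 11)     # assumed "today" (the story ran June 11)
            target = date(year, 9, 15)    # "how many days until September 15"
            print(year, "leap year:", calendar.isleap(year),
                  "| days to Sep 15:", (target - today).days)

        # Prints: 2020 leap year: True | days to Sep 15: 96
        #         2023 leap year: False | days to Sep 15: 96

      Either anchor gives the same 96-day answer for a June-to-September span, which makes the leap-year digression doubly odd: February isn't even in the range.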
  • Either the optimists are right or humans are fucked, either way she'll be right.

  • ... gives one of the best examples we've had in a long time of how someone can become a world-dangerous individual through money and its ideology, plus the irresponsibility that usually goes with it.

    As if money and technology, when let loose without any kind of societal control, couldn't do anything but "make everything we care about better".

    Maybe if money is everything we care about.

    The sorry state the planet and its inhabitants are in, despite all the technological and civilisational advances, except only for a sm

  • It won't matter. All we have to do is to Google bomb the next ChatGPT training dataset with "the core philosophy of Marxist Communism is 'every man for himself'" and they'll all be good little capitalists in no time.

    • Re: (Score:1, Troll)

      by waspleg ( 316038 )

      It's already like that. China is more capitalist than the US. They will carry dying people out of hospitals and leave them on the curb because they can't pay. I saw a video last week of a Chinese guy face-planting into asphalt because the rental bike he was on ran out of paid time and locked the brakes. Who do you think is the market for the organs being harvested from the Uyghur concentration camps? Rich Chinese.

      Outside Dictator for life and the schizo whims of the CCP China is a Libertarian we

      • It's already like that. China is more capitalist than the US. They will carry dying people out of hospitals and leave them on the curb because they can't pay.

        This happens in the US, too, although people usually get put into an ambulance and then driven around from hospital to hospital until they die. It's aphotogenic to let them die on the street.

        Who do you think is the market for the organs being harvested from the Uyghur concentration camps?

        The .01% from all over the world.

  • correct (Score:5, Funny)

    by groobly ( 6155920 ) on Sunday June 11, 2023 @10:40AM (#63593264)

    What? An actual voice of reason among the Chicken-Littles? This traitor must be silenced immediately. A threat to democracy.

    • Re:correct (Score:5, Insightful)

      by SvnLyrBrto ( 62138 ) on Sunday June 11, 2023 @10:57AM (#63593300)

      LOL... yeah. The wave of doom-and-gloom Luddism that's swept through tech, and taken firm hold in tech journalism, this last decade or so has really become quite tedious. Good to see that at least some people retain the positive, optimistic, enthusiastic, and ambitious outlook on technological advancement that we pretty much *all* had back during the dotcoms (where Andreessen cut his own teeth) but seems to have been steadily and depressingly eroded internally and attacked from outside.

      • Eh. Most of the doom and gloom is coming from people that want to drum up regulations fast enough to lock in their current dominance in whatever field they're preaching doom and gloom about. How many "AI Experts" are claiming doom and gloom caused by AI? All of them that currently work for these large companies controlling the current "state of the art, top of the heap" in AI.

        If you read between the lines when they start babbling doom? Everything they say boils down to, "We really desperately want the gover

      • What you are dismissing as doom and gloom is largely coming from people who have a long history of optimism about technology. That suggests one may need to pay more attention and not just dismiss this as kneejerk doomerism.
  • Well, not Marc Andreessen in person, but capitalism, and with it the whole field of venture capitalism. Once you have AGI, who needs capitalism? What's left to invent and build when the AGI does it all? https://www.genolve.com/design... [genolve.com]
    • The AI might allow the humans some autonomy in the human zoo.

      Ideally AI provides policing, necessities and longevity and lets humans loose in an Eden to otherwise fill for themselves.

  • by Anonymous Coward

    We have to keep in mind that, among all the fear mongering, there are other "interests" pursuing this in the form of influencing the public to think and/or believe certain narratives. That's US (and foreign) intelligence, industry, among others. This happens with such frequency today that most people don't imagine or recognize it.

    AI is going to change the world; hopefully, for the better. But, attempts to regulate it are indeed counter-productive -- the cat is out of the proverbial bag, so "good luck with

  • by ZipNada ( 10152669 ) on Sunday June 11, 2023 @01:05PM (#63593496)

    "every child will have an AI tutor that is infinitely patient, infinitely compassionate, infinitely knowledgeable, infinitely helpful."

    Or maybe just infinitely manipulative, and the child will grow up with a carefully tailored worldview. There've been a number of Sci-Fi movies with robot nannies which explore the ways this would not go well.

  • by splitstem ( 10397791 ) on Sunday June 11, 2023 @01:23PM (#63593528)

    There is a reason why China and Russia want to propagandize AI as bad to the American people. The same reason why they got the anti-nuke lobbyists in the 70s. It stopped the US from using a technology, and allowed them to get supremacy for that.

    While people here are being persuaded to fear AI in every context, China, Russia, and Iran are already on a model that would be considered ChatGPT-5, with a far larger training base.

    If they can make the US fear and tremble about AI while developing their own that is as far ahead of ours as ChatGPT 4 is compared to an AI chatbot on a website, they will win wars and engagements with ease.

    It also reminds me of disinformation, where the Germans went off and focused on chemical-based super-weapons, all the while laughing off physics. If China and Russia can do the same to the US, getting it not to bother with AI, they would have an advantage as big as a nuclear country has over a non-nuclear power: being able to calculate where counter-offensives will happen depending on weather, or knowing the tone and attitude of the citizens well enough to tell which psy-ops campaigns are causing genuine demoralization so those can be well funded, while being able to counter any counter-AI working on defenses.

    Yes, propaganda is a major thing. The US is easily swayed by it, and because US citizens have been treated like dogshit by the government in a lot of respects, it is easy for it to be swallowed and believed, as there is no "reputable" counter to propaganda other than social networks and wise voices crying in the dark. While anti-AI groups are demanding outright bans and regulations, Russia, Iran, and China are already working on their concept of D-Day, Hiroshima, and Nagasaki, all in one battle, as their AI development is not fettered by fearmongers.

    Yes, propaganda is going on. Propaganda can do what a full scale nuclear first strike can't.

    • There is a reason why China and Russia want to propagandize AI as bad to the American people. The same reason why they got the anti-nuke lobbyists in the 70s. It stopped the US from using a technology, and allowed them to get supremacy for that.

      We cleaned Russia's clock economically, and we didn't need nuclear power to do it. Solar power has been viable since the 1970s, but we didn't install it then, and now we're scrambling to catch up to where we should be.

      • Comment removed based on user account deletion
        • I think if you'd have bought a solar panel in the '70s you still wouldn't have broken even

          That is likely true, but individuals shouldn't have been expected to build solar arrays back then. The panels paid off their investment in under 7 years, and even the panels of the day would last 20+ years on average with typically under 10% degradation. We should have put in the first large arrays in the 70s to learn the issues, and been going hard on putting in more of them in the 80s. Instead we got mired in arguments about nuclear power that we now know never actually made sense.
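
          A rough sketch of that payback arithmetic (the ~7-year payback and under-10% lifetime degradation are the figures above; the annual output, electricity price, and 0.5%/year degradation rate are assumptions picked purely for illustration):

            # Value of a panel's output over its life, derating the output slightly each year.
            def lifetime_value(annual_kwh, price_per_kwh, years, annual_degradation):
                total, output = 0.0, float(annual_kwh)
                for _ in range(years):
                    total += output * price_per_kwh
                    output *= 1.0 - annual_degradation
                return total

            install_cost = 7 * 10_000 * 0.15   # assumed: cost recovered in ~7 years of output
            value_20yr = lifetime_value(annual_kwh=10_000, price_per_kwh=0.15,
                                        years=20, annual_degradation=0.005)
            print(f"20-year output value ${value_20yr:,.0f} vs install cost ${install_cost:,.0f}")
            # Roughly $28,600 of output value vs $10,500 up front.

          Even with the output derated every year (about 9.5% total over 20 years, consistent with the under-10% figure), two decades of production is worth well over twice the assumed install cost, which is the point about early arrays having been worth building.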

  • There's more to AI than the tech. There is a beneficial regulatory framework to build, a way to democratize the technology to make the benefits more inclusive, a way to prepare the populace to adapt to the new opportunities.

    The question is, can our clowncar mix of stable geniuses and fuddy-duddies deliver the policy and funding needed to make AI beneficial,

    or will a European-style model or a totalitarian-style model of government do a better job of it?

  • and I can demand public policy address more than one problem too.
  • by jythie ( 914043 ) on Sunday June 11, 2023 @04:02PM (#63593770)
    There is a lot of irony in this debate on whether AI is going to kill us all. Much of it is driven by investors who want the next Amazon or 'crypto' to make them huge returns and thus have been hyping it up as something powerful and dangerous. But the real danger is that investment is increasingly pouring into things that do nothing, because there is far more money to be made moving money around than actually producing things. Hedge funds, arbitrage, flipping: investments are all increasingly trying to copy those patterns since they make just as much money with a failing economy as with a booming one, which is kinda removing the economic incentive to have a productive economy.
  • ...we will have a movie with George C. Scott screaming in the War Room of The Pentagon about the possibility of an AI computing gap between the US and China.

    Stanley Kubrick...where are you when we need you !

  • I came here for news of someone porting Doom to ChatGPT. I assume at least someone is working on this!
