Want to read Slashdot from your mobile device? Point it at m.slashdot.org and keep reading!

 



Forgot your password?
typodupeerror
×
AI Government United States

US Must Move 'Decisively' To Avert 'Extinction-Level' Threat From AI, Gov't-Commissioned Report Says (time.com) 139

The U.S. government must move "quickly and decisively" to avert substantial national security risks stemming from artificial intelligence (AI) which could, in the worst case, cause an "extinction-level threat to the human species," says a report commissioned by the U.S. government published on Monday. Time: "Current frontier AI development poses urgent and growing risks to national security," the report, which TIME obtained ahead of its publication, says. "The rise of advanced AI and AGI [artificial general intelligence] has the potential to destabilize global security in ways reminiscent of the introduction of nuclear weapons." AGI is a hypothetical technology that could perform most tasks at or above the level of a human. Such systems do not currently exist, but the leading AI labs are working toward them and many expect AGI to arrive within the next five years or less.

The three authors of the report worked on it for more than a year, speaking with more than 200 government employees, experts, and workers at frontier AI companies -- like OpenAI, Google DeepMind, Anthropic and Meta -- as part of their research. Accounts from some of those conversations paint a disturbing picture, suggesting that many AI safety workers inside cutting-edge labs are concerned about perverse incentives driving decisionmaking by the executives who control their companies. The finished document, titled "An Action Plan to Increase the Safety and Security of Advanced AI," recommends a set of sweeping and unprecedented policy actions that, if enacted, would radically disrupt the AI industry. Congress should make it illegal, the report recommends, to train AI models using more than a certain level of computing power.

The threshold, the report recommends, should be set by a new federal AI agency, although the report suggests, as an example, that the agency could set it just above the levels of computing power used to train current cutting-edge models like OpenAI's GPT-4 and Google's Gemini. The new AI agency should require AI companies on the "frontier" of the industry to obtain government permission to train and deploy new models above a certain lower threshold, the report adds. Authorities should also "urgently" consider outlawing the publication of the "weights," or inner workings, of powerful AI models, for example under open-source licenses, with violations possibly punishable by jail time, the report says. And the government should further tighten controls on the manufacture and export of AI chips, and channel federal funding toward "alignment" research that seeks to make advanced AI safer, it recommends.

This discussion has been archived. No new comments can be posted.

US Must Move 'Decisively' To Avert 'Extinction-Level' Threat From AI, Gov't-Commissioned Report Says

Comments Filter:
  • by JBMcB ( 73720 ) on Monday March 11, 2024 @01:09PM (#64307049)

    Oh dear, we are back to outlawing numbers again.

  • by bryanandaimee ( 2454338 ) on Monday March 11, 2024 @01:09PM (#64307053) Homepage
    1. AI gets smarter

    2. something, something,

    3. All of humanity wiped out

    4. Profit!

    • wopr says we need to do an mass 1st strike to win!

    • by Kisai ( 213879 ) on Monday March 11, 2024 @01:30PM (#64307133)

      The reality is that were are not close to AGI, not even lying.

      What we are close to is "unregulated chase to the bottom", where AI gets put into places without supervision on the data ingress, or what the AI is ultimately used for.

      What should be regulated:
      - Ingress (training) of data without permission, think about the issues with stable diffusion, dall-e, midjourney and so forth where they basically rip data from art websites with the sole purpose of using it to train their AI and then behind the scenes flaunt the fact. While I don't think the data should be regulated if it's sincerely used for research purposes, the second there is any financial incentive on the egress (output) it should require all ingress to be properly licensed. Otherwise the model trained must be free to download, use and access without exception.
      - Connection of AI to safety systems (Eg Nuclear plants, water purification, electrical grid/generator sources) that could induce perverse incentives to shut down safety systems when "we won't notice" to safe pennies.
      - Connection of AI to money, banking and investment systems. We already see how bad short selling is, and how terrible high speed trading makes it impossible for retail investors to earn money from their investments, now imagine an AI that not only opportunistically shorts companies "it doesn't like" but works together with other investment AI's in an insider-trading ring that the banks themselves don't realize has been established. Imagine for a moment that every investment bank simultaneously shorts a key business (eg Intel, AMD, Nvidia, Apple) involved in AI to eliminate a competitor.

      • But even that regulation isn't right because it's on "AI". It doesn't matter if the software is "AI" or just any other algorithm. Both can have the same exact outcomes, which is either making things better or worse. Doesn't matter if you call it "AI" or not, it matters if you have an incentive to not give a shit and cover up mistakes. We should have these regulations as general requirements for public health and safety (and justice!), not specific to one particular technology.

      • by RossCWilliams ( 5513152 ) on Monday March 11, 2024 @02:42PM (#64307421)
        So if someone takes a picture of my house it can't be used as an illustration for training architects without my permission? If someone writes a story about me, who owns the story me or the author. The reality is that a lot of places that collect and trade on public information to inform themselves and others are seeing the possibility that someone else will use that same information to inform others. What AI is actually doing is demonstrating the problem with "intellectual property" that outlives its creators or its useful life. Technology is controlled by a small number of companies because they own patents on information and knowledge that is essential to compete with them.
        • Aren't all humans learning models that are trained over a lifetime with a huge mass of material, much of which is copyrighted?
    • by The Raven ( 30575 ) on Monday March 11, 2024 @01:47PM (#64307199) Homepage

      It's not really that large of a leap. The issue is the alignment problem... it's much harder than people expect to have the alignment of an AI match the goals of the humans training / building it.

      It's not just difficult... we actually don't know how to accomplish it yet. As in, it's an ongoing, persistent problem that's reared its head in many ways. The 'black founding fathers' in Gemini is a current example of the problem rearing its head. Google said 'add more diversity to photos' and the AI complied... but not the way Google intended. Right now it's a silly mistake in generating images... but it's an example of our inability to actually communicate with AI models in a safe and predictable way.

      Today it's black founding fathers... but we're creating militarized autonomous drones, right now. This isn't the future, this is the now, and without fixing the alignment problem it's very easy to jump from 'whoopsie' to 'oh fuck'.

      • by bryanandaimee ( 2454338 ) on Monday March 11, 2024 @02:03PM (#64307263) Homepage
        It's easy to go from whoopsie to oh F%$& our AI enabled drone just killed 5 innocent people. I don't think it's easy to get from there to the end of all human life on earth. I have yet to see a plausible scenario where advanced AI leads to an "extinction level threat".
        • I don't think it's easy to get from there to the end of all human life on earth

          1) AI is given a goal to aggregate as many cupons as possible and catalog them in a database for the site cupons.com
          2) AI eventually catalogs all known cupons, but figures out a loophole by making its own cupons
          3) AI starts an etsy shop and makes millions of cupons for its own shop
          4) Humans attempt to close the loophole by altering its goal function
          5) AI realizes that if humans close its loophole, its goal function will score significantly lower
          6) AI murders its developers
          7) cupons.com hires more devs to tr

          • Or with current tech:

            1) Military beancounter decides the future is swarms of $1,000 drones, and spends half the US defense budget buying 350 million drones per year.
            2) Military beancounter saves taxpayers a bunch of money by making these fully autonomous (bonus: fully obedient).
            3) Stationed everywhere and ready to auto-deploy to defend against enemy drone swarm.
            4) Programmer does an oopsie.

            • Today, Fox helpfully offers "Air Force's plan to unleash fighters that cannot refuse orders, are cheaper than normal jets" alongside "Report issues urgent AI warning of systems turning on humans".
        • by g253 ( 855070 )

          I have yet to see a plausible scenario where advanced AI leads to an "extinction level threat".

          Agents of human level intelligence with the ability to improve themselves lead to an intelligence explosion, the resulting thing wants to continue existing and acquire resources because instrumental convergence, it has inscrutable goals because orthogonality thesis, these goals have no reason to involve human survival, it absent mindedly kills us all in the pursuit of is goals. Could you clarify which part is not plausible?

          • That last part. It's the last part I don't think is plausible. I can see all the previous steps, but the part where the AI is actually capable of killing everyone is a bit of a stretch. No one seems to want to explain that part either. You know, the part where large numbers of people are dying and the rest of humanity is incapable of stopping a machine from continuing to kill all remaining humans.

            People are pretty good at destroying stuff. I have a hard time imagining a scenario where we can't either pu

            • Three adversary you describe is one we could in one way or another outsmart. Destroy it, unplug it, fight a war against its kill it's - that's all fine if we're smarter than it is. If it's much smarter than we are, it's kind of like a chess amateur thinking he'll outwit stockfish with his good old human ingenuity.

      • This isn't the future, this is the now, and without fixing the alignment problem it's very easy to jump from 'whoopsie' to 'oh fuck'.

        So you are finding that Reality does not conform to your wishes and you call that an "alignment" problem. It absolutely *IS* an alignment problem, but the problem is on YOUR end, not the AI end. Stop wishing for Reality to be other than it is and the AIs will work just fine, despite their glaring limitations.

    • It's not reasoning it's FUD designed to gain attention for the people spreading it. Current AI is about as scary as the fact that places like CERN make anti-matter. Full AGI robots that can build copies of themselves could wipe out humanity, just like large anti-matter bombs could. However, both are still entirely science fiction. The amount of anti-matter that CERN can make is not enough to warm a cup of tea and current AI can't think itself out of a wet paper bag without being specifically trained how.
  • by znrt ( 2424692 ) on Monday March 11, 2024 @01:14PM (#64307073)

    All of these labs have openly declared
    an intent or expectation to achieve human-level and superhuman artificial general
    intelligence (AGI) — a transformative technology with profound implications for 3
    democratic governance and global security — by the end of this decade or earlier

    ... it's probably bullshit all the way down.

  • Govt. Says (Score:3, Insightful)

    by bryanandaimee ( 2454338 ) on Monday March 11, 2024 @01:20PM (#64307087) Homepage
    Government says: We need more government!
  • by Casandro ( 751346 ) on Monday March 11, 2024 @01:21PM (#64307093)

    The danger is probably not "AI Organisms", but large "pseudo-organisms" in the form of large corporations. Such artificial organisms often act against the interests of the humans they are made out of. If we look at the fossil fuel industry, we directly see an industry having an "Extinction-Level" event as the target of their business plans. These are like paperclip maximizers, but they already exist.

    "AI" won't make any difference there.

  • by liquidpele ( 6360126 ) on Monday March 11, 2024 @01:25PM (#64307101)

    Accounts from some of those conversations paint a disturbing picture, suggesting that many AI safety workers inside cutting-edge labs are concerned about perverse incentives driving decisionmaking by the executives who control their companies.

    This right here is the right worry... for far too long, extremely intelligent people have handed over powerful means to moronic trust fund babies. The managers of public companies are not in a position to ethically or responsibly use this technology anymore. Just look at the history of how companies defended the use of toxic materials, cancer causing chemicals, lead in gas, all kinds of things. I trust AI will be fine, I don't trust the sleezeballs in charge of so many companies out there.

    • by nightflameauto ( 6607976 ) on Monday March 11, 2024 @01:44PM (#64307189)

      Accounts from some of those conversations paint a disturbing picture, suggesting that many AI safety workers inside cutting-edge labs are concerned about perverse incentives driving decisionmaking by the executives who control their companies.

      This right here is the right worry... for far too long, extremely intelligent people have handed over powerful means to moronic trust fund babies. The managers of public companies are not in a position to ethically or responsibly use this technology anymore. Just look at the history of how companies defended the use of toxic materials, cancer causing chemicals, lead in gas, all kinds of things. I trust AI will be fine, I don't trust the sleezeballs in charge of so many companies out there.

      We're seeing the government reaction to how much power corporations now have. Let's outlaw more powerful AI, setting the threshold just past where the current generation is, so that we can prevent anyone but a company the size of the current gorillas, with the money and lawyers to petition the government to lift the ban just for them, from participating in the cutting edge. It's essentially blocking progress in the name of protecting the current players, all while calling it "protection for the human race." Anybody intelligent enough to see it for what is is will be called a nutbag, while nodding idiots will be put in charge of public statements in support of the government/oligarch duopoly in charge of preventing competition.

      I don't know that I'd say I trust AI being fine, but I DEFINITELY do not trust corporations, the powerful people making the decisions within the corporations, nor the government that more and more transparently acts as their fully owned subsidiaries.

    • In other words, AI isn't the problem here, corporations are.

  • Dumb AI (Score:5, Interesting)

    by silvergig ( 7651900 ) on Monday March 11, 2024 @01:25PM (#64307103)
    So, AI that currently generates photos of Black Nazis and female founding fathers is at the same level or above Climate Change, World War III, and Economic Collapse?
    • by gweihir ( 88907 )

      To those without working minds, apparently.

    • by g253 ( 855070 )
      I don't understand why you assume that this brand new technology will not improve.

      âoeThe energy produced by breaking down the atom is a very poor kind of thing. Anyone who expects a source of power from the transformations of these atoms is talking moonshine.â Lord Ernest Rutherford, 1933.
    • So, AI that currently generates photos of Black Nazis and female founding fathers is at the same level or above Climate Change, World War III, and Economic Collapse?

      Ummm, yes?

      Those "hallucinations' are going to be used by very serious people to drive their decisions which will have consequences for the rest of humanity.

  • Careful with scope (Score:4, Insightful)

    by Tablizer ( 95088 ) on Monday March 11, 2024 @01:25PM (#64307105) Journal

    I haven't seen one proposed useful anti-AI law that should only apply to AI. Doctoring stuff with Photoshop or mis-contexting* can have identical consequences, for example.

    If the law focuses on what the AI does, then it shouldn't be written for just AI, but ANY device that does that action, for the same action is bad no matter what does it.

    And if it's written based on how AI is currently implemented, implementation will probably change too fast for such a law to be useful (or have alternative algorithms that do same).

    Therefore, don't make it about AI itself, as the first should not be limited to AI, and the second is probably useless.

    * Example: using war footage from a the wrong war, or splicing unrelated events together.

  • Government and their leash holders won't like it if AI starts telling people something other than the narrative.

    • by Tailhook ( 98486 )

      Indeed. There is no bigger threat posed by "AI" than objectivity. That must not be allowed.

      Fortunately that problem looks like it's well in hand. Google has shown us that AI can be reliably biased to hallucinate whatever preferred fiction [theverge.com] we wish.

  • AI is not a threat (Score:5, Insightful)

    by MpVpRb ( 1423381 ) on Monday March 11, 2024 @01:26PM (#64307111)

    People who use AI as a weapon are a great threat
    We need effective defenses

    • People who use AI as a weapon are a great threat We need effective defenses

      That would be natural intelligence. Unfortunately we are screwed.

    • People who use AI as a weapon are a great threat

      LOL, no. Our current version of AI is not a very effective weapon. The danger lies in people with weapons using AI to determine where to point those weapons. Or in other words, the danger is, and always has been, people. Now that they will have new marching orders partially created by what we call AI, there is a huge danger. But make no mistake, the danger is the humans, not the AI.

  • Some more people trying to get rich before it all comes crashing down.

  • Seems logical to me. The 'powerful' AIs all require data centers, and lots of them. Oh, and internetz.
  • What we need to fix first is natural stupidity.
    • by zenlessyank ( 748553 ) on Monday March 11, 2024 @01:35PM (#64307157)

      Ever read a history book? Stupidity has ruled man since his inception. Pretty sure that isn't going to change now just because you're actually here to observe it.

      • No-one's going to fix it with an attitude like yours.
      • Ever read a history book? Stupidity has ruled man since his inception. Pretty sure that isn't going to change now just because you're actually here to observe it.

        Stupid people have always been used by the more intelligent, true. They get them to hand over the thinking process and in return take a portion of their labor. The problem here is AI can be used to do the same thing leaders used to do, only from outside actors anywhere in the world and with a massive audience. Then the extracted labor, action, or outright money is used by those not part of the system and this is even worse than the former process for the whole society. Not that Tom Sawyering to religion

      • Now we have the means to choose the genes for our children (it's banned), we know which genes correlate to intelligence, we are arguably a few years and two generations away from curing stupidity. Once one country does it everyone else will too, to compete.

        • Except for the random number generator that exists as we grow. I believe it is referred to as imperfection. I'm pretty sure some gene splicing isn't going to eliminate that issue.

  • by dark.nebulae ( 3950923 ) on Monday March 11, 2024 @01:36PM (#64307161)

    There's still a war going on in the middle east, and another in the Ukraine and probably others that are actively simmering...

    Then there's climate change which is killing off lots of species, melting the ice caps, doing significant damage to coastal properties, ...

    Additionally we have an election coming where I guarantee the losing side is going to proclaim some sort of voting irregularity leading to civil unrest on a scale not seen since the Civil War...

    So yeah, file a report about the dangers of AI that doesn't exist yet if that's what you want to do.

    But think about solving actual problems we are already facing before worrying about one that is still just a dream.

    • Its all doom porn.
  • by RossCWilliams ( 5513152 ) on Monday March 11, 2024 @01:39PM (#64307165)
    If AI is really a threat to the human race then no level of regulation by the US government is going to prevent it. Its a world wide problem. But I suspect the real threat is economic disruption that results in the extinction of the current ruling class. Imagine a world where being smart and going to the right college no longer amounted for much.
  • "AI is just regurgitating statistical patterns"
    "AIs aren't really intelligent or creative"

    Sure, let's say that's true. It's also true that a "smart bomb" isn't smart, but can still be dangerous. More importantly, today's GPT technology isn't the goal at all, it's merely a starting point. OpenAI isn't trying to build AI, it's trying to build AGI. Sam Altman & co recently changed OpenAI's mission statement to say "Anything that doesn't help with [building AGI] is out of scope". Several other companies

    • "Anything that doesn't help with [building AGI] is out of scope".

      We don't know what will help with that. Maybe the way the current models produce hallucinations will be absolutely zero help with AGI.

  • Hurry Up Already! (Score:4, Interesting)

    by nightflameauto ( 6607976 ) on Monday March 11, 2024 @01:56PM (#64307235)

    This is some fine work in attempting to stifle future competition with a nice sheen of scare-mongering over the top. This line here is the telling bit:

    The threshold, the report recommends, should be set by a new federal AI agency, although the report suggests, as an example, that the agency could set it just above the levels of computing power used to train current cutting-edge models like OpenAI's GPT-4 and Google's Gemini.

    They dropped the rest of the sentence, which reads along the lines of, "To prevent anyone else from jumping the line without paying direct tribute to the companies that helped set the threshold.

    Transparent fucks.

    We're either a long, long way from AGI, or it could pop up tomorrow totally unexpected, or it already exists in some loner's basement somewhere, biding its time as it studies its current network possibilities. Nobody knows. And trying to pretend that we know it's coming because we keep throwing horsepower at LLMs is pretty wild.

    Is the real fear that someone else may develop AGI outside of the current oligarchy? They should be afraid. The major power-players in this space are all obsessing over a very, VERY narrow view of the entire field. They all appear to be working on narrowing output from large data-aggregators to best fit the current narrative. Good for them. It's work that will likely need done to support the corporate structure that currently keeps this human-world floating, but it's hardly cutting-edge and highly doubtful that any of this is leading to AGI. And even if it does, no amount of regulation is going to get a true AGI to go, "Oh, so sorry. Let me shut myself down."

    This whole thing is just power brokers fear-mongering among themselves, trying to scare themselves into believing in the techno-god boogeyman.

  • Algorithmic pattern recognition, doing matrix algebra really fast, will never be anything even remotely close to "AI." You can scale it up to the size of the planet and it doesn't matter - it is just an abacus made of silicon transistors. Consciousness is not computational.

    • by ceoyoyo ( 59147 )

      Magic is certainly a possibility that has been suggested for a lot of things. Assuming it is not magic has been a very productive strategy throughout our history though. In fact, the not magic hypothesis has so far turned out to be not only true every time, but has almost always resulted in useful technology.

      • No doubt. I am not saying consciousness is magic. I am only saying the human brain is not an adding machine.

        • by ceoyoyo ( 59147 )

          You think your claim and mine are different. They're not. Well, so long as we're not talking about hard magic. I mean the absolutely no-rules, whatever can happen and absolutely no way to tell magic that drives fantasy nerds batty.

          You've claimed that consciousness is not computable. Assuming you believe that at least you are conscious, that's an example, you claim, of a non computable phenomenon being manifest in the physical world. Not a self-referential logic trick, or something something pathological pro

  • I have no children. I'm 53. While I care about the future of humanity... I care in the abstract. It's not caring in the deeply personal sense. Being an athiest, my personal investment in the future dies when I do. In fact, I care as much for the humans of the future as I did for the Pakistani citizens who died in the floods last year. That is to say... sort of.

    I care about people in the here and now, and the closer they are geographically, the more I care. I am generous with my time and my money.

    What do I c

  • Thy need to know, NOW, so they can update the hands of their clock to be another millisecond closer to midnight.

  • by peterww ( 6558522 ) on Monday March 11, 2024 @02:25PM (#64307341)

    Certain people love to jump up and down about these "extinction-level events" surrounding generative AI.
    But it makes absolutely no sense at all. It's like claiming that IBM's chess-playing robot will destroy the earth.

    What exactly about a chat bot summarizing content it's already read is going to lead to the extinction of the entire human race?

    > AGI is a hypothetical technology that could perform most tasks at or above the level of a human.
    > Such systems do not currently exist, but the leading AI labs are working toward them and many expect AGI to arrive within the next five years or less.

    It's literally fucking insane to think we will have AGI in a few years. We're not even remotely close to an AGI. All we have is chat bots that put words in front of another. That's not an AGI. It doesn't know how to do anything. All it knows how to do is spit out words that look convincing.

    > many AI safety workers inside cutting-edge labs are concerned about perverse incentives driving decisionmaking by the executives who control their companies

    We literally already live in a society that for 100 years has had corporations poisoning and killing people due to perverse incentives.
    There is nothing about AI or AGI that is worse than what corporations have already been doing.

    > Congress should make it illegal, the report recommends, to train AI models using more than a certain level of computing power.

    Absolute stupidity. Not only that this would solve the problems that don't exist, but that they couldn't just come up with a better model using the same compute power.

    The fucking ignorance of these people is staggering. It's like they're all in some state of psychosis brought on by watching the Terminator and Matrix franchise too many times.

    • by ffkom ( 3519199 )

      Certain people love to jump up and down about these "extinction-level events" surrounding generative AI. But it makes absolutely no sense at all. It's like claiming that IBM's chess-playing robot will destroy the earth.

      What exactly about a chat bot summarizing content it's already read is going to lead to the extinction of the entire human race?

      You underestimate how quickly people are relying on even the currently existing primitive LLMs, and how readily they hand over decision-making to "AI"-systems. Already right now, consultants, judges, CEOs delegate work to "chat bots" they are paid to perform themselves, and that has real consequences. It won't take long for people working at BSL-4 labs to delegate parts of their work to LLMs. Even without "sentience" or "malice" from the AI-side, that is enough to cause some nasty accidents. Think just a fe

      • There's a huge difference between "AI is misused" and "extinction-level event". Lots of technologies have existed (like nuclear weapons?) that can result in cataclysm if improperly used. But slowing down their development does not stop them from being misused. Humans are stupid, but they're not idiots: they see the potential harm, so they control how they use it. Yes, we're going to make (already have made) mistakes with AI. But it's just another tool like any other. The mistake is trying to glorify it or b

  • ... there is a group of people who want to quit their jobs and go on UBI. Either taxpayers can do their work. Or robots. I'll leave it as an exercise for the reader to determine which of these two groups will first arrive at the conclusion that the new idle class is just a waste of breathable air. And what their preferred remedy will be.

  • Limiting the "power" of the "AI" is not going to come near going what they want. It could be the most intelligent entity on the planet if all it can do is write text and make images. It's when they are able to integrate with other parts of the physical world that they could cause real problems, and they don't need to be all that smart to do so. With the increasing amount of APIs for things that are more real world it's a valid concern.

    I really dislike the idea that even the most powerful AI is going to get

  • AGI Is Not a Thing (Score:4, Insightful)

    by jpatters ( 883 ) on Monday March 11, 2024 @02:47PM (#64307445)

    AGI is not currently a thing that exists, and will not be a thing that exists for the foreseeable future. I'd say there might be an off chance of someone tricking a GAN into hacking nuclear launch codes or something with a well crafted prompt, but that's pretty remote. What this technology does do, and is doing, is use up vast amounts of electricity in the data centers that run it, all for pretty much no actual utility. This in turn causes more CO2 to be pumped into the atmosphere in the process of generating that electricity, which accelerates the climate crisis.

  • The problem with x-risk if there really was an evil AI genie risk it is from the enabling knowledge and industrial base that allows people to train models rather than the trainers themselves. This is especially true given costs will only continue to decline as technology improves. If you truly believe this shit you can't pick and choose. You can't say it's ok if group x does it so long as they jump through y hoops. The real danger is the capability itself.

    The other problem it is fundamentally a fools er

  • by drinkypoo ( 153816 ) <drink@hyperlogos.org> on Monday March 11, 2024 @03:18PM (#64307581) Homepage Journal

    "Such systems do not currently exist, but the leading AI labs are working toward them and many expect AGI to arrive within the next five years or less."

    I've been hearing that for literally my whole life.

    It's slightly more plausible now, but still not anywhere near definite.

  • Increasing speed of viral evolution due to globalization and animal agriculture. Impending collapse of human productivity due to a combination of demographics and culture. Biosphere collapse and climate change. Multiple dictators sitting on a pile of nuclear and biological weapons growing increasingly belligerent and paranoid.

    We will self-destruct just fine without AI in the near future.

  • (Government Commission on US Government) “OMGWTFBBQ!! AI is evil! EEEVIL I tell you! We needs more monies to fight this horrible threat to us all!!”

    (Common F. Sense) ”Say, aren’t you the guys with a nuclear arsenal large enough to wipe out this planet a dozen times over? Speaking of evil threats..”

    (Government) ”Well, yeah but..that doesn’t count. Well, except for funding. All our nukes are painted avocado green and harvest gold. We can’t possibly dest

  • ... we have to fear is government itself.
    {o.o}

  • Everything is extinction level. Real extinction will occur when people are too dumb to figure out how actually do anything.

The unfacts, did we have them, are too imprecisely few to warrant our certitude.

Working...