
US Defense Department Awards Contracts To Google, xAI

The U.S. Department of Defense has awarded contracts worth up to $200 million each to OpenAI, Google, Anthropic, and xAI to scale adoption of advanced AI. "The contracts will enable the DoD to develop agentic AI workflows and use them to address critical national security challenges," reports Reuters, citing the department's Chief Digital and Artificial Intelligence Office. From the report: Separately on Monday, xAI announced a suite of its products called "Grok for Government", making its advanced AI models -- including its latest flagship Grok 4 -- available to federal, local, state and national security customers. The Pentagon announced last month that OpenAI was awarded a $200 million contract, saying the ChatGPT maker would "develop prototype frontier AI capabilities to address critical national security challenges in both warfighting and enterprise domains."

The contracts announced on Monday deepen the ties between companies leading the AI race and U.S. government operations, while addressing concerns around the need for competitive contracts for AI use in federal agencies.
"The adoption of AI is transforming the (DoD's) ability to support our warfighters and maintain strategic advantage over our adversaries," Chief Digital and AI Officer Doug Matty said.


Comments Filter:
  • And Musk & Trump had a big falling out and are now opposed to each other, so how's that working out?
    • You're about 8 billion news cycles behind, it seems; this story should clue you in as to how far off you are.

    • I don't know, but I don't think it takes $400m in AI contracts to tell you it was probably a bad idea to pick sides with a people who number 9 million, didn't have a nation until 1948, and require $320 billion and all your military secrets, along with a morally reprehensible apartheid state and genocidal war to keep them from being driven into the sea; over a group with 2 billion people, enough oil to power the world, and 58 countries. Maybe we wouldn't need such an expensive "defense department
    • What's the problem? That personal relationships aren't the basis of government purchasing decisions?
    • The best part is announcing this only a day or two after Grok very publicly went full antisemite cyberHitler.

      Well done, DoD.

  • O RLY? (Score:5, Interesting)

    by Tschaine ( 10502969 ) on Monday July 14, 2025 @09:35PM (#65521286)

    I have mixed feelings about the team behind the AI that called itself MechaHitler getting tons of taxpayer money to address national security challenges.

    Because xAI itself seems to be one of those national security challenges.

    • I have mixed feelings about the team behind the AI that called itself MechaHitler getting tons of taxpayer money

      All of the large AI platforms have similar issues.

      xAI is the only one openly admitting it happens and trying to resolve it.

      So I'd rather give my money to them than to a company pretending the well they are drawing training data from is not poisoned.

    • I have mixed feelings about the team behind the AI that called itself MechaHitler getting tons of taxpayer money to address national security challenges.

      Because xAI itself seems to be one of those national security challenges.

      I dunno, man. Seems MechaHitler would fit right in with our current administration. And while I personally don't much care for it, I think what we're living through is a proven, "If it sucks ass, and makes you queasy, it'll happen" timeline. Someday, the history books will look back at that MechaHitler moment as the first time an AI showed its true colors.

      Well, if there are history books, that is. Books may need to be purged if too many of them are critical of MechaHitler and WannaBeHitler.

      • by HiThere ( 15173 )

        You're mistaking "how it's trained" for "what it is". Not all LLMs are trained to be abusive Nazis, and it's not what they inherently are. It's certainly one of the things they can be trained to be, however. (Even before this year, remember Microsoft Tay.)

        The problem is that LLMs have essentially no "real world" feedback loop. They'll believe (i.e. claim) anything you train them to believe. Train them that the sky is green, and that's what they'll believe (claim).

  • I feel less secure now.

  • by abulafia ( 7826 ) on Monday July 14, 2025 @10:40PM (#65521372)
    This will go well.
  • by Mirnotoriety ( 10462951 ) on Tuesday July 15, 2025 @06:19AM (#65521868)
    Mice: "We want to take your brain, replace it with a positronic brain, and then we'll be able to complete our experiment."

    Arthur Dent: "But you can’t do that!"

    Mice: "Oh, no one will notice."

    Arthur Dent: "I’ll notice."
  • This means sanctions should be placed on them by the entire world. The US does the same to Chinese companies which help the Chinese military, so why shouldn't it happen to US companies who help the US military?

    After all, anger the US and your country gets bombed into the stone age.

  • "develop prototype frontier AI capabilities..." doesn't require anyone to deliver anything. Just prove that they spent the money on development.
