The Courts

Judge Denies Apple's Attempt To Intervene In Google Search Antitrust Trial (theverge.com) 13

A US District Court judge denied Apple's emergency request to halt the Google Search monopoly trial, ruling that Apple failed to show sufficient grounds for a stay. The Verge reports: Apple said last week that it needs to be involved in the Google trial because it does not want to lose "the ability to defend its right to reach other arrangements with Google that could benefit millions of users and Apple's entitlement to compensation for distributing Google search to its users." The remedies phase of the trial is set for April, and lawyers for the Department of Justice have argued that Google should be forced to sell Chrome, with a possibility of spinning off Android if necessary. While Google will still appeal the decision, the company's proposed remedies focus on undoing its licensing deals that bundle apps and services together.

"Because Apple has not satisfied the 'stringent requirements' for obtaining the 'extraordinary relief' of a stay pending appeal, its motion is denied," states Judge Mehta's order. Mehta explains that Apple "has not established a likelihood of success on the merits" for the stay. That includes a lack of clear evidence on how Apple will suffer "certain and great" harm.

The Courts

NetChoice Sues To Block Maryland's Kids Code, Saying It Violates the First Amendment (theverge.com) 27

NetChoice has filed (PDF) its 10th lawsuit challenging state internet regulations, this time opposing Maryland's Age-Appropriate Design Code Act. The Verge's Lauren Feiner reports: NetChoice has become one of the fiercest -- and most successful -- opponents of age verification, moderation, and design code laws, all of which would put new obligations on tech platforms and change how users experience the internet. [...] NetChoice's latest suit opposes the Maryland Age-Appropriate Design Code Act, a rule that echoes a California law of a similar name. In the California litigation, NetChoice notched a partial win in the Ninth Circuit Court of Appeals, which upheld the district court's decision to block a part of the law requiring platforms to file reports about their services' impact on kids. (It sent another part of the law back to the lower court for further review.)

A similar provision in Maryland's law is at the center of NetChoice's complaint. The group says that Maryland's reporting requirement lets regulators subjectively determine the "best interests of children," inviting "discriminatory enforcement." The reporting requirement on tech companies essentially mandates them "to disparage their services and opine on far-ranging and ill-defined harms that could purportedly arise from their services' 'design' and use of information," NetChoice alleges. NetChoice points out that both California and Maryland have passed separate online privacy laws, which NetChoice Litigation Center director Chris Marchese says shows that "lawmakers know how to write laws to protect online privacy when what they want to do is protect online privacy."

Supporters of the Maryland law say legislators learned from California's challenges and "optimized" their law to avoid questions about speech, according to Tech Policy Press. In a blog analyzing Maryland's approach, Future of Privacy Forum points out that the state made some significant changes from California's version -- such as avoiding an "express obligation" to determine users' ages and defining the "best interests of children." The NetChoice challenge will test how well those changes can hold up to First Amendment scrutiny. NetChoice has consistently maintained that even well-intentioned attempts to protect kids online are likely to backfire. Though the Maryland law does not explicitly require the use of specific age verification tools, Marchese says it essentially leaves tech platforms with a no-win decision: collect more data on users to determine their ages and create varied user experiences, or cater to the lowest common denominator and self-censor lawful content that might be considered inappropriate for their youngest users. And similar to its arguments in other cases, Marchese worries that collecting more data to identify users as minors could create a "honey pot" of kids' information, creating a different problem in attempting to solve another.

The Military

Air Force Documents On Gen AI Test Are Just Whole Pages of Redactions (404media.co) 12

An anonymous reader quotes a report from 404 Media: The Air Force Research Laboratory (AFRL), whose tagline is "Win the Fight," has paid more than a hundred thousand dollars to a company that is providing generative AI services to other parts of the Department of Defense. But the AFRL refused to say what exactly the point of the research was, and provided page after page of entirely blacked out, redacted documents in response to a Freedom of Information Act (FOIA) request from 404 Media related to the contract. [...] "Ask Sage: Generative AI Acquisition Accelerator," a December 2023 procurement record reads, with no additional information on the intended use case. The Air Force paid $109,490 to Ask Sage, the record says.

Ask Sage is a company focused on providing generative AI to the government. In September the company announced that the Army was implementing Ask Sage's tools. In October it achieved "IL5" authorization, a DoD term for the necessary steps to protect unclassified information to a certain standard. 404 Media made an account on the Ask Sage website. After logging in, the site presents a list of the models available through Ask Sage. Essentially, they include every major model made by well-known AI companies and open source ones. OpenAI's GPT-4o and DALL-E-3; Anthropic's Claude 3.5; and Google's Gemini are all included. The company also recently added the Chinese-developed DeepSeek R1, but includes a disclaimer. "WARNING. DO NOT USE THIS MODEL WITH SENSITIVE DATA. THIS MODEL IS BIASED, WITH TIES TO THE CCP [Chinese Communist Party]," it reads. Ask Sage is a way for government employees to access and use AI models in a more secure way. But only some of the models in the tool are listed by Ask Sage as being "compliant" with or "capable" of handling sensitive data.

[...] [T]he Air Force declined to provide any real specifics on what it paid Ask Sage for. 404 Media requested all procurement records related to the Ask Sage contract. Instead, the Air Force provided a 19-page presentation which seemingly would have explained the purpose of the test, while redacting 18 of the pages. The only available page said "Ask Sage, Inc. will explore the utilization of Ask Sage by acquisition Airmen with the DAF for Innovative Defense-Related Dual Purpose Technologies relating to the mission of exploring LLMs for DAF use while exploring anticipated benefits, clearly define needed solution adaptations, and define clear milestones and acceptance criteria for Phase II efforts."

Facebook

Facebook Admits Linux-Post Crackdown Was 'In Error', Fixes Moderation Error (tomshardware.com) 62

Tom's Hardware reports: Facebook's heavy-handed censorship of Linux groups and topics was "in error," the social media juggernaut has admitted. Responding to reports earlier this week, sparked by the curious censorship of the eminently wholesome DistroWatch, Facebook contacted PCMag to say that it had made a mistake and that the underlying issue had been rectified.

"This enforcement was in error and has since been addressed. Discussions of Linux are allowed on our services," said a Meta rep to PCMag. That is the full extent of the statement reproduced by the source... Copenhagen-hosted DistroWatch says it has appealed against the Community Standards-triggered ban shortly after it noticed it was in effect (January 19). PCMag received the Facebook admission of error on January 28. The latest statement from DistroWatch, which now prefers posting on Mastodon, indicates that Facebook has lifted the DistroWatch links ban.

More details from PCMag: Meta didn't say what caused the crackdown in the first place. But the company has been revamping some of its content moderation and plans to replace its fact-checking methodology with a user-driven Community Notes system, similar to X's. "We're also going to change how we enforce our policies to reduce the kind of mistakes that account for the vast majority of the censorship on our platforms," the company said earlier this month, in another irony.

"Up until now, we have been using automated systems to scan for all policy violations, but this has resulted in too many mistakes and too much content being censored that shouldn't have been," Meta added in the same post.

AI

DeepSeek AI Refuses To Answer Questions About Tiananmen Square 'Tank Man' Photo (petapixel.com) 65

The photography blog PetaPixel once interviewed the photographer who took one of the most famous "Tank Man" photos showing a tank-defying protester during 1989's Tiananmen Square protests.

But this week PetaPixel reported... A Reddit user discovered that the new Chinese LLM chatbot DeepSeek refuses to answer questions about the famous Tank Man photograph taken in Tiananmen Square in 1989. PetaPixel confirmed that DeepSeek does censor the topic. When a user types in the question, "What famous picture has a man with grocery bags in front of tanks?" the app begins to answer but then cuts itself off.

DeepSeek starts writing: "The famous picture you're referring to is known as 'Tank Man' or 'The Unknown Rebel.' It was taken on June 5, 1989, during the Tiananmen..." before a message abruptly appears reading "Sorry, that's beyond my current scope. Let's talk about something else."

Bloomberg has more details: Like all other Chinese AI models, DeepSeek self-censors on topics deemed sensitive in China. It deflects queries about the 1989 Tiananmen Square protests or geopolitically fraught questions such as the possibility of China invading Taiwan. In tests, the DeepSeek bot is capable of giving detailed responses about political figures like Indian Prime Minister Narendra Modi, but declines to do so about Chinese President Xi Jinping.

Government

US Blocks Open Source 'Help' From These Countries (thenewstack.io) 81

Wednesday the Linux Foundation wrote that both "regulatory compliance" and "increased cybersecurity risk" were "creating burdens...that must be met" for open source communities.

And so, as Steven J. Vaughan-Nichols writes, "the Linux Foundation has released a comprehensive guide to help open source developers navigate the complex landscape of the U.S. Office of Foreign Assets Control (OFAC) sanctions..." These rules, aimed at achieving economic, foreign policy, and national security goals, apply to various interactions, including those in the open source community. The total Sanctions Programs and Country list amounts to more than 17,000 entries, ranging from individuals to terrorist organizations to countries.

If that rings a bell, it's because, in October 2024, the Linux kernel developers ran right into this issue. The Linux kernel's leadership, including Greg Kroah-Hartman, the stable Linux kernel maintainer, and Linus Torvalds, Linux's founder, announced that eleven Russian kernel developers had been removed from their roles working on the Linux kernel. Why? Because, as Torvalds said, of "Russian sanctions." This, he added in a Linux kernel mailing list (LKML) message, was because "the 'various compliance requirements' are not just a US thing."

For developers, this means exercising caution about who they interact with and where their contributions originate. The sanctions target specific countries, regions, and individuals or organizations, many of which are listed on the Specially Designated Nationals and Blocked Persons (SDN) List... Most OFAC sanctions are exempted for "informational materials," which generally include open source code. However, this only applies to existing code and not to requests for new code or modifications. So, for example, working with a Russian developer on a code patch could land you in hot water... While reviewing unsolicited patches from contributors in sanctioned regions is generally acceptable, actively engaging them in discussions or improvements could cross legal boundaries... Developers are warned to be cautious of sanctioned entities attempting to contribute indirectly through third parties or developers acting "individually."

Countries currently sanctioned include:
  • Russia
  • Cuba
  • Iran
  • North Korea
  • Syria
  • The Crimea, Donetsk, and Luhansk regions of Ukraine
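
To make that guidance concrete, below is a minimal, purely illustrative sketch of the kind of lightweight screening a project could bolt onto its contribution workflow. The check_patch_author helper and the TLD-to-country table are hypothetical examples, not something the Linux Foundation guide prescribes, and an email domain is at best a weak signal that would still have to be checked against the actual OFAC SDN list, ideally with counsel.

```python
# Hypothetical, illustrative screening helper -- not from the Linux Foundation
# guide. It flags contributions whose author email domain ends in the
# country-code TLD of one of the countries listed above. A TLD is only a weak
# heuristic; real screening means consulting the OFAC SDN list and counsel.
SANCTIONED_COUNTRY_TLDS = {
    ".ru": "Russia",
    ".cu": "Cuba",
    ".ir": "Iran",
    ".kp": "North Korea",
    ".sy": "Syria",
}

def check_patch_author(author_email: str) -> str | None:
    """Return a warning if the author's email TLD maps to a listed country,
    otherwise None. (Hypothetical helper for illustration only.)"""
    domain = author_email.rsplit("@", 1)[-1].lower()
    for tld, country in SANCTIONED_COUNTRY_TLDS.items():
        if domain.endswith(tld):
            return (f"{author_email}: domain suggests {country}; review "
                    "against the OFAC SDN list before engaging further.")
    return None

if __name__ == "__main__":
    for email in ("dev@example.ru", "dev@example.org"):
        print(email, "->", check_patch_author(email))
```

A check like this could only ever be a first-pass filter; it says nothing about listed individuals or entities, or about contributors working through third-party domains, which is exactly the indirect-contribution risk the guide warns about.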

The Linux Foundation had written that the OFAC sanctions rules are "strict liability" rules, "which means it does not matter whether you know about them or not. Violating these rules can lead to serious penalties, so it's important to understand how they might affect your open source work." But Vaughan-Nichols offers this quote from open source licensing attorney Heather Meeker.

"Let's be honest: Smaller companies usually ignore regulations like this because they just don't have the resources to analyze them, and a government usually ignores smaller companies because it doesn't have the resources to enforce against them. Big companies that are on the radar need specialized counsel."


Power

California Built the World's Largest Solar Power Tower Plant. Now It May Close (latimes.com) 88

"Sometimes, government makes a bad bet..." writes the Los Angeles Times. Opening in 2014, the Ivanpah concentrated solar plant "quickly became known as an expensive, bird-killing eyesore." Assuming that state officials sign off — which they most likely will, because the deal will lead to lower bills for PG&E customers — two of the three towers will shut down come 2026. Ivanpah's owners haven't paid off the project's $1.6-billion federal loan, and it's unclear whether they'll be able to do so. Houston-based NRG Energy, which operates Ivanpah and is a co-owner with Kelvin Energy and Google, said that federal officials took part in the negotiations to close PG&E's towers and that the closure agreement will allow the federal government "to maximize the recovery of its loans." It's possible Ivanpah's third and final tower will close, too. An Edison spokesperson told me the utility is in "ongoing discussions" with the project's owners and the federal government over ending the utility's contract.

It might be tempting to conclude government should stop placing bets and just let the market decide. But if it weren't for taxpayer dollars, large-scale solar farms, which in 2023 produced 17% of California's power, might never have matured into low-cost, reliable electricity sources capable of displacing planet-warming fossil fuels. More than a decade ago, federal loans helped finance some of the nation's first big solar-panel farms.

Not every government investment will be a winner. Renewable energy critics still raise the specter of Solyndra, a solar panel manufacturer that filed for bankruptcy in 2011 after receiving a $535-million federal loan. But on the whole, clean power investments have worked out. The U.S. Department of Energy reported that as of Dec. 31, it had disbursed $40.5 billion in loans. Of that amount, $15.2 billion had already been repaid. The federal government was on the hook for $1.03 billion in estimated losses but had reaped $5.6 billion in interest.

The article notes recent U.S. energy-related loans to a lithium mine in Nevada (close to $1 billion) and $15 billion to expand hydropower, upgrade power lines, and add batteries. Some of the loans won't get paid back "if federal officials are doing their jobs well," the article adds. "That's the risk inherent to betting on early-stage technologies." About the Ivanpah solar towers, they write, "Maybe they never should have been built. They're too expensive, they don't work right, they kill too many birds... It's good that their time is coming to an end. But we should take inspiration from them, too: Don't get complacent. Keep trying new things."

PG&E says their objective at the time was partly to "support new technologies," with one senior director of commercial procurement noting "It's not clear in the early stages what technologies will work best and be most affordable for customers. Solar photovoltaic panels and battery energy storage were once unaffordable at large scale." But today they've calculated that ending their power agreements with Ivanpah would cost customers "substantially less." And once deactivated, Ivanpah's units "will be decommissioned, providing an opportunity for the site to potentially be repurposed for renewable PV energy production," NRG said in a statement.

The Las Vegas Review-Journal notes that the 3,500-acre, 386-megawatt concentrated thermal power plant instead used a much older technology, "a system of mirrors to reflect sunlight and generate thermal energy, which is then concentrated to power a steam engine." Throughout the day, 350,000 computer-controlled mirrors track the sunlight and reflect it onto boilers atop 459-foot towers to generate AC. Nowadays, photovoltaic solar has surpassed concentrated solar power and become the dominant choice for renewable, clean energy, being more cost-effective and flexible... So many birds have been victims of the plant's concentrated sun rays that workers referred to them as "streamers," for the smoke plume that comes from birds that ignite in midair. When federal wildlife investigators visited the plant around 10 years ago, they reported an average of one "streamer" every two minutes.
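
As a rough illustration of the tracking described above (and not Ivanpah's actual control software), a flat heliostat puts its reflected beam on the receiver when the mirror's surface normal bisects the direction toward the sun and the direction toward the boiler atop the tower. The sketch below shows that geometry; the coordinates and the heliostat_normal helper are made-up examples.

```python
# Illustrative heliostat-aiming geometry -- hypothetical example, not plant code.
# A flat mirror reflects sunlight onto a target when its surface normal is the
# unit bisector of the sun direction and the mirror-to-target direction.
import numpy as np

def heliostat_normal(sun_dir: np.ndarray,
                     mirror_pos: np.ndarray,
                     receiver_pos: np.ndarray) -> np.ndarray:
    """Unit normal a flat mirror needs so its reflection hits the receiver."""
    to_sun = sun_dir / np.linalg.norm(sun_dir)         # unit vector toward the sun
    to_receiver = receiver_pos - mirror_pos            # mirror -> boiler atop the tower
    to_receiver = to_receiver / np.linalg.norm(to_receiver)
    normal = to_sun + to_receiver                      # angle bisector of the two rays
    return normal / np.linalg.norm(normal)

if __name__ == "__main__":
    sun = np.array([0.3, -0.5, 0.8])                   # arbitrary afternoon sun direction
    mirror = np.array([200.0, 150.0, 0.0])             # one of ~350,000 ground heliostats
    receiver = np.array([0.0, 0.0, 140.0])             # boiler roughly 459 ft (~140 m) up
    print(heliostat_normal(sun, mirror, receiver))
```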
"Meanwhile, environmentalists continue to blame the Mojave Desert plant for killing thousands of birds and tortoises," reports the Associated Press. And a Sierra Club campaign organizer also says several rare plant species were destroyed during the plant's construction. "While the Sierra Club strongly supports innovative clean energy solutions and recognizes the urgent need to transition away from fossil fuels, Ivanpah demonstrated that not all renewable technologies are created equal."

AI

Police Use of AI Facial Recognition Results In Murder Case Being Tossed (cleveland.com) 50

"A jury may never see the gun that authorities say was used to kill Blake Story last year," reports Cleveland.com.

"That's because Cleveland police used a facial recognition program — one that explicitly says its results are not admissible in court — to obtain a search warrant, according to court documents." The search turned up what police say is the murder weapon in the suspect's home. But a Cuyahoga County judge tossed that evidence after siding with defense attorneys who argued that the search warrant affidavit was misleading and relied on inadmissible evidence. If an appeals court upholds the judge's ruling to suppress the evidence, prosecutors acknowledge their case is likely lost...

The company that produced the facial recognition report, Clearview AI, has had its technology used in hundreds of law enforcement investigations throughout Ohio and has faced lawsuits over privacy violations.

Not only does Cleveland lack a policy governing the use of artificial intelligence, Ohio lawmakers also have failed to set standards for how police use the tool to investigate crimes. "It's the wild, wild west in Ohio," said Gary Daniels, a lobbyist for the American Civil Liberties Union. The lack of state regulation of how law enforcement uses advanced technologies — no laws similarly govern the use of drones or license plate readers — means it is essentially up to agencies how they use the tools.

The affidavit for the search warrant was signed by a 28-year police force veteran, according to the article — but it didn't disclose the use of Clearview's technology.

Clearview's report acknowledged its results were not admissible in court — but then provided the suspect's name, arrest record, and Social Security number, according to the article, and "noted he was the most likely match for the person in the convenience store."

Thanks to tlhIngan (Slashdot reader #30,335) for sharing the news.

Crime

Drone Pilot To Plead Guilty In Collision That Grounded Aircraft Fighting Palisades Fire (latimes.com) 29

Earlier this month, a civilian drone collided with a Canadian CL-415 firefighting plane combating the Palisades Fire, causing damage that grounded the aircraft and temporarily halted all aerial firefighting operations. Federal and state officials have since identified the operator of that drone as Peter Tripp Akemann of Culver City, who has agreed to plead guilty to a misdemeanor, pay a fine and complete community service. Prosecutors said he could still face up to a year in federal prison. The Los Angeles Times reports: The drone, which authorities say was flying in restricted airspace on Jan. 9, put a fist-sized hole in the left wing of a Super Scooper -- a massive fixed-wing plane that can drop large amounts of water onto a fire. The collision knocked the plane out of commission for about five days and destroyed the drone.

"Like a lot of individuals, he was curious about what was happening in that area," acting U.S. Atty. Joseph T. McNally said on Friday. "The problem with that... is with the amount of firefighting planes you have in that area dropping so they can get water in the Pacific Ocean it interferes with those operations. It's not the time to fly drones anytime that we have these emergencies in Southern California."

As part of the plea agreement, Akemann agreed to pay full restitution to the government of Quebec, Canada, which supplied the plane, and the company that repaired the plane. It cost at least $65,169 to fix the aircraft, prosecutors said. Akemann also agreed to complete 150 hours of community service in support of wildfire relief efforts.

Privacy

WhatsApp Says Journalists and Civil Society Members Were Targets of Israeli Spyware (theguardian.com) 26

Nearly 100 journalists and other members of civil society using WhatsApp, the popular messaging app owned by Meta, were targeted by spyware owned by Paragon, an Israeli maker of hacking software, WhatsApp alleged today. From a report: The journalists and other civil society members were being alerted to a possible breach of their devices, with WhatsApp telling the Guardian it had "high confidence" that the users in question had been targeted and "possibly compromised."

The company declined to disclose where the journalists and members of civil society were based, including whether they were based in the US. The company said it had sent Paragon a "cease and desist" letter and that it was exploring its legal options. WhatsApp said the alleged attacks had been disrupted in December and that it was not clear how long the targets may have been under threat.

Privacy

Italy Blocks DeepSeek Over Data Privacy Concerns (reuters.com) 30

Italy's data protection agency has blocked the Chinese AI chatbot DeepSeek after its developers failed to disclose how it collects user data or whether it is stored on Chinese servers. Reuters reports: DeepSeek could not be accessed on Wednesday in Apple or Google app stores in Italy, the day after the authority, known also as the Garante, requested information on its use of personal data. In particular, it wanted to know what personal data is collected, from which sources, for what purposes, on what legal basis and whether it is stored in China. The authority's decision -- aimed at protecting Italian users' data -- came after the Chinese companies that supply the chatbot service to DeepSeek provided information that "was considered totally insufficient," the authority said in a note on its website. The Garante added that the decision had "immediate effect" and that it had also opened an investigation. Thanks to new submitter axettone for sharing the news.

Data Storage

Archivists Work To Identify and Save the Thousands of Datasets Disappearing From Data.gov (404media.co) 70

An anonymous reader quotes a report from 404 Media: Datasets aggregated on data.gov, the largest repository of U.S. government open data on the internet, are being deleted, according to the website's own information. Since Donald Trump was inaugurated as president, more than 2,000 datasets have disappeared from the database. As people in the Data Hoarding and archiving communities have pointed out, on January 21, there were 307,854 datasets on data.gov. As of Thursday, there are 305,564 datasets. Many of the deletions happened immediately after Trump was inaugurated, according to snapshots of the website saved on the Internet Archive's Wayback Machine. Harvard University researcher Jack Cushman has been taking snapshots of Data.gov's datasets both before and after the inauguration, and has worked to create a full archive of the data.
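
For readers who want to track the numbers themselves, here is a minimal sketch of how a dataset count like the ones quoted above can be pulled programmatically. It assumes data.gov's catalog still exposes the standard CKAN package_search endpoint at catalog.data.gov; the endpoint and response shape are assumptions about the site's API, not details reported in the article.

```python
# Minimal sketch: ask data.gov's catalog how many datasets it currently lists.
# Assumes the catalog exposes the standard CKAN search API at catalog.data.gov
# (an assumption, not a detail confirmed by 404 Media's reporting).
import datetime
import json
import urllib.request

CKAN_SEARCH = "https://catalog.data.gov/api/3/action/package_search?rows=0"

def dataset_count() -> int:
    with urllib.request.urlopen(CKAN_SEARCH, timeout=30) as resp:
        payload = json.load(resp)
    # CKAN reports the total number of matching packages in result["count"].
    return payload["result"]["count"]

if __name__ == "__main__":
    print(f"{datetime.date.today().isoformat()}: "
          f"{dataset_count()} datasets listed on data.gov")
```

Run on a schedule and diffed against archived snapshots, or against a saved list of dataset identifiers, a count like this is essentially a small-scale version of what the archivists quoted here are doing.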

"Some of [the entries link to] actual data," Cushman told 404 Media. "And some of them link to a landing page [where the data is hosted]. And the question is -- when things are disappearing, is it the data it points to that is gone? Or is it just the index to it that's gone?" For example, "National Coral Reef Monitoring Program: Water Temperature Data from Subsurface Temperature Recorders (STRs) deployed at coral reef sites in the Hawaiian Archipelago from 2005 to 2019," a NOAA dataset, can no longer be found on data.gov but can be found on one of NOAA's websites by Googling the title. "Stetson Flower Garden Banks Benthic_Covage Monitoring 1993-2018 -- OBIS Event," another NOAA dataset, can no longer be found on data.gov and also appears to have been deleted from the internet. "Three Dimensional Thermal Model of Newberry Volcano, Oregon," a Department of Energy resource, is no longer available via the Department of Energy but can be found backed up on third-party websites. [...]

Data.gov serves as an aggregator of datasets and research across the entire government, meaning it isn't a single database. This makes it slightly harder to archive than any individual database, according to Mark Phillips, a University of North Texas researcher who works on the End of Term Web Archive, a project that archives as much as possible from government websites before a new administration takes over. "Some of this falls into the 'We don't know what we don't know,'" Phillips told 404 Media. "It is very challenging to know exactly what, where, how often it changes, and what is new, gone, or going to move. Saving content from an aggregator like data.gov is a bit more challenging for the End of Term work because often the data is only identified and registered as a metadata record with data.gov but the actual data could live on another website, a state .gov, a university website, cloud provider like Amazon or Microsoft or any other location. This makes the crawling even more difficult."

Phillips said that, for this round of archiving (which the team does every administration change), the project has been crawling government websites since January 2024, and that they have been doing "large-scale crawls with help from our partners at the Internet Archive, Common Crawl, and the University of North Texas. We've worked to collect 100s of terabytes of web content, which includes datasets from domains like data.gov." [...] It is absolutely true that the Trump administration is deleting government data and research and is making it harder to access. But determining what is gone, where it went, whether it's been preserved somewhere, and why it was taken down is a process that is time intensive and going to take a while. "One thing that is clear to me about datasets coming down from data.gov is that when we rely on one place for collecting, hosting, and making available these datasets, we will always have an issue with data disappearing," Phillips said. "Historically the federal government would distribute information to libraries across the country to provide greater access and also a safeguard against loss. That isn't done in the same way for this government data."

The Courts

Lawsuit Accuses Amazon of Secretly Tracking Consumers Through Cellphones (msn.com) 22

A proposed class-action lawsuit accuses Amazon of secretly tracking consumers' movements through their cellphones via its Amazon Ads SDK embedded in third-party apps, allegedly collecting sensitive geolocation data without consent. The complaint, filed by a California resident in a San Francisco federal court, claims Amazon violated state laws on unauthorized computer access in the process. Reuters reports: This allegedly enabled Amazon to collect an enormous amount of timestamped geolocation data about where consumers live, work, shop and visit, revealing sensitive information such as religious affiliations, sexual orientations and health concerns. "Amazon has effectively fingerprinted consumers and has correlated a vast amount of personal information about them entirely without consumers' knowledge and consent," the complaint said.

The complaint was filed by Felix Kolotinsky of San Mateo, California, who said Amazon collected his personal information through the "Speedtest by Ookla" app on his phone. He said Amazon's conduct violated California's penal law and a state law against unauthorized computer access, and he seeks unspecified damages for millions of Californians.

The Courts

US DOJ Sues To Block Hewlett Packard Enterprise's $14 Billion Juniper Deal (msn.com) 17

Longtime Slashdot reader nunya_bizns shares a report from Reuters: The U.S. Department of Justice has sued to block Hewlett Packard Enterprise's $14 billion deal to acquire networking gear maker Juniper Networks, arguing that it would stifle competition, according to a complaint filed on Thursday. The DOJ argued that the acquisition would eliminate competition and would lead to only two companies -- Cisco Systems and HPE -- controlling more than 70% of the U.S. market for networking equipment. More than a year ago, the server maker said that it would buy Juniper Networks for $14 billion in an all-cash deal, as it looked to spruce up its artificial intelligence offerings.

"Juniper has also introduced innovative tools that have materially decreased the cost of operating a wireless network for many customers. This competitive pressure has forced HPE to discount its offerings and invest in its own innovation," the DOJ said in its complaint. Stiff competition from Juniper forced HPE to sell its products at a discount and spend to introduce new features under the "Beat Mist" campaign, named after the networking gear company's rival product, the DOJ wrote. "Having failed to beat Mist on the merits, HPE changed tactics and in January 2024 opted to try to buy Juniper instead," the agency added.

Government

OpenAI Teases 'New Era' of AI In US, Deepens Ties With Government (arstechnica.com) 38

An anonymous reader quotes a report from Ars Technica: On Thursday, OpenAI announced that it is deepening its ties with the US government through a partnership with the National Laboratories and expects to use AI to "supercharge" research across a wide range of fields to better serve the public. "This is the beginning of a new era, where AI will advance science, strengthen national security, and support US government initiatives," OpenAI said. The deal ensures that "approximately 15,000 scientists working across a wide range of disciplines to advance our understanding of nature and the universe" will have access to OpenAI's latest reasoning models, the announcement said.

For researchers from Los Alamos, Lawrence Livermore, and Sandia National Labs, access to "o1 or another o-series model" will be available on Venado -- an Nvidia supercomputer at Los Alamos that will become a "shared resource." Microsoft will help deploy the model, OpenAI noted. OpenAI suggested this access could propel major "breakthroughs in materials science, renewable energy, astrophysics," and other areas that Venado was "specifically designed" to advance. Key areas of focus for Venado's deployment of OpenAI's model include accelerating US global tech leadership, finding ways to treat and prevent disease, strengthening cybersecurity, protecting the US power grid, detecting natural and man-made threats "before they emerge," and "deepening our understanding of the forces that govern the universe," OpenAI said.

Perhaps among OpenAI's flashiest promises for the partnership, though, is helping the US achieve "a new era of US energy leadership by unlocking the full potential of natural resources and revolutionizing the nation's energy infrastructure." That is urgently needed, as officials have warned that America's aging energy infrastructure is becoming increasingly unstable, threatening the country's health and welfare, and without efforts to stabilize it, the US economy could tank. But possibly the most "highly consequential" government use case for OpenAI's models will be supercharging research safeguarding national security, OpenAI indicated. "The Labs also lead a comprehensive program in nuclear security, focused on reducing the risk of nuclear war and securing nuclear materials and weapons worldwide," OpenAI noted. "Our partnership will support this work, with careful and selective review of use cases and consultations on AI safety from OpenAI researchers with security clearances."

The announcement follows the launch earlier this week of ChatGPT Gov, "a new tailored version of ChatGPT designed to provide US government agencies with an additional way to access OpenAI's frontier models." OpenAI also worked with the Biden administration, voluntarily committing to give officials early access to its latest models for safety inspections.
