The Courts

Anthropic Sues the Pentagon After Being Labeled a Threat To National Security 48

Anthropic is suing the Department of Defense after the Trump administration labeled the company a "supply chain risk" and canceled its government contracts when Anthropic refused to allow its AI model Claude to be used for domestic surveillance or autonomous weapons. Fortune reports: The lawsuit, filed Monday in the U.S. District Court for the Northern District of California, calls the administration's actions "unprecedented and unlawful" and claims they threaten to harm "Anthropic irreparably." The complaint claims that government contracts are already being canceled and that private contracts are also in doubt, putting "hundreds of millions of dollars" at near-term risk.

An Anthropic spokesperson told Fortune: "Seeking judicial review does not change our longstanding commitment to harnessing AI to protect our national security, but this is a necessary step to protect our business, our customers, and our partners." "We will continue to pursue every path toward resolution, including dialogue with the government," they added.
AI

AI Allows Hackers To Identify Anonymous Social Media Accounts, Study Finds (theguardian.com) 37

An anonymous reader quotes a report from the Guardian: AI has made it vastly easier for malicious hackers to identify anonymous social media accounts, a new study has warned. In most test scenarios, large language models (LLMs) -- the technology behind platforms such as ChatGPT -- successfully matched anonymous online users with their actual identities on other platforms, based on the information they posted. The AI researchers Simon Lermen and Daniel Paleka said LLMs make it cost-effective to perform sophisticated privacy attacks, forcing a "fundamental reassessment of what can be considered private online".

In their experiment, the researchers fed anonymous accounts into an AI and got it to scrape all the information it could. They gave a hypothetical example of a user talking about struggling at school and walking their dog Biscuit through Dolores Park. In that hypothetical case, the AI then searched elsewhere for those details and matched @anon_user42 to the known identity with a high degree of confidence. While this example was fictional, the paper's authors highlighted scenarios in which governments use AI to surveil dissidents and activists posting anonymously, or hackers are able to launch "highly personalized" scams.
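The matching step the researchers describe (pull identifying details from an anonymous account, then score known identities by how many details they share) can be illustrated with a deliberately simplified sketch. Everything below is a toy stand-in for the paper's LLM-driven pipeline: the keyword extraction, the Jaccard scoring, and the sample account names are illustrative assumptions, not the study's actual method.

```python
# Toy illustration of cross-platform detail matching (NOT the researchers'
# actual pipeline): extract distinguishing details from an anonymous
# account's posts, then score candidate profiles by overlap.

def extract_details(posts):
    """Stand-in for an LLM extraction step: here, just lowercase keywords."""
    keywords = set()
    for post in posts:
        keywords.update(word.strip(".,").lower() for word in post.split())
    return keywords

def overlap_score(anon_details, candidate_details):
    """Jaccard similarity between two detail sets."""
    if not anon_details or not candidate_details:
        return 0.0
    return len(anon_details & candidate_details) / len(anon_details | candidate_details)

def best_match(anon_posts, candidates):
    """Return the candidate profile whose details overlap most, plus all scores."""
    anon = extract_details(anon_posts)
    scored = {name: overlap_score(anon, extract_details(posts))
              for name, posts in candidates.items()}
    return max(scored, key=scored.get), scored

# Example using the article's fictional details:
match, scores = best_match(
    ["Walking my dog Biscuit through Dolores Park"],
    {"jane_doe": ["My dog Biscuit loves Dolores Park"],
     "john_roe": ["Great ramen in Shibuya tonight"]},
)
print(match)  # jane_doe
```

A real attack, per the paper's framing, would replace the keyword step with an LLM that recognizes paraphrased or indirect details, which is what makes the attack newly cheap.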

The Almighty Buck

Swiss Vote Places Right To Use Cash In Country's Constitution (politico.eu) 40

Swiss voters overwhelmingly approved a constitutional amendment guaranteeing the right to use physical cash. "The vote means Switzerland will join the likes of Hungary, Slovakia and Slovenia, which have already written the right to cold, hard cash in their constitutions," reports Politico. From the report: Official results revealed that 73.4 percent of voters backed the legal amendment, which the government proposed as a counter to a similar initiative by a group called the Swiss Freedom Movement. The Swiss Freedom Movement triggered the national referendum after its initiative to protect cash collected more than 100,000 signatures. Its initiative secured only 46 percent of the final vote after the government said some of the group's proposed amendments went too far.
United States

US Military Tested Device That May Be Tied To Havana Syndrome On Rats, Sheep (cbsnews.com) 26

An anonymous reader quotes a report from CBS News: Tonight, we have details of a classified U.S. intelligence mission that has obtained a previously unknown weapon that may finally unlock a mystery. Since at least 2016, U.S. diplomats, spies and military officers have suffered crippling brain injuries. They've told of being hit by an overwhelming force, damaging their vision, hearing, sense of balance and cognition, but the government has doubted their stories. They've been called delusional. Well now, 60 Minutes has learned that a weapon that can inflict these injuries was obtained overseas and secretly tested on animals on a U.S. military base. We've investigated this mystery for nine years. This is our fourth story, called "Targeting Americans." Despite official government doubt, we never stopped reporting because of the haunting stories we heard [...]. 60 Minutes interviewed Dr. David Relman, a scientific expert and professor from Stanford University who was tasked by the government to lead two investigations into the Havana Syndrome cases. What he and his panel of doctors, physicists, engineers and others found was that "the most plausible explanation for a subset of these cases was a form of radiofrequency or microwave energy," the report says.

According to confidential sources cited in the report, undercover Homeland Security agents bought a miniaturized microwave weapon from a Russian criminal network in 2024 and tested it on animals at a U.S. military lab. The injuries reportedly matched those seen in the human cases. "Our confidential sources tell us the still classified weapon has been tested in a U.S. military lab for more than a year," says Dr. Relman. "Tests on rats and sheep show injuries consistent with those seen in humans."

He continues: "Also, as a separate part of the investigation, security camera videos have been collected that show Americans being hit. The videos are classified but they were described to us. In one, a camera in a restaurant in Istanbul captured two FBI agents on vacation sitting at a table with their families. A man with a backpack walks in and suddenly everyone at the table grabs their head as if in pain. Our sources say another video comes from a stairwell in the U.S. embassy in Vienna. The stairs lead to a secure facility. In the video, two people on the stairs suddenly collapse. Those videos and the weapon were among the reasons the Biden administration summoned about half a dozen victims to the White House with about two months left in the president's term."

Former intelligence officials and researchers claim elements of the U.S. government downplayed or dismissed the theory for years, possibly to avoid political consequences of accusing a foreign state like Russia of conducting attacks on American personnel.
Government

EFF, Ubuntu and Other Distros Discuss How to Respond to Age-Verification Laws (9to5linux.com) 91

System76 isn't the only one criticizing new age-verification laws. The blog 9to5Linux published an "informal" look at other discussions in various Linux communities. Earlier this week, Ubuntu developer Aaron Rainbolt proposed on the Ubuntu mailing list an optional D-Bus interface (org.freedesktop.AgeVerification1) that can be implemented by arbitrary applications as a distro sees fit, but Canonical responded that the company does not yet have a solution to announce for age declaration in Ubuntu. "Canonical is aware of the legislation and is reviewing it internally with legal counsel, but there are currently no concrete plans on how, or even whether, Ubuntu will change in response," said Jon Seager, VP of Engineering at Canonical. "The recent mailing list post is an informal conversation among Ubuntu community members, not an announcement. While the discussion contains potentially useful ideas, none have been adopted or committed to by Canonical."
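For context, D-Bus interfaces like this are conventionally described with introspection XML. The fragment below sketches the general shape such a specification might take; only the interface name org.freedesktop.AgeVerification1 comes from the mailing-list post, and the property and method names are purely illustrative assumptions, not part of Rainbolt's actual proposal:

```xml
<!-- Hypothetical introspection data for the proposed interface.
     Only the interface name appears in the mailing-list post;
     the members below are illustrative assumptions. -->
<node>
  <interface name="org.freedesktop.AgeVerification1">
    <!-- Whether the distro has wired up any age-verification backend -->
    <property name="Available" type="b" access="read"/>
    <!-- Ask the OS whether the current user meets a minimum age;
         how the answer is established is left to the implementation -->
    <method name="CheckMinimumAge">
      <arg name="minimum_age" type="u" direction="in"/>
      <arg name="satisfied" type="b" direction="out"/>
    </method>
  </interface>
</node>
```

An optional interface of this kind would let applications query age status without ever learning a birth date, while leaving distros free to ship no backend at all.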

Similar talks are underway in the Fedora and Linux Mint communities about this issue in case the California Digital Age Assurance Act and similar laws from other states and countries are enforced. At the same time, other OS developers, like MidnightBSD, have decided to exclude California from desktop use entirely.

Slashdot contacted Hayley Tsukayama, Director of State Affairs at EFF, who says their organization "has long warned against age-gating the internet. Such mandates strike at the foundation of the free and open internet."

And there's another problem. "Many of these mandates imagine technology that does not currently exist." Such poorly thought-out mandates, in truth, cannot achieve the purported goal of age verification. Often, they are easy to circumvent and many also expose consumers to real data breach risk.

These burdens fall particularly heavily on developers who aren't at large, well-resourced companies, such as those developing open-source software. Not recognizing the diversity of software development when thinking about liability in these proposals effectively limits software choices — and at a time when computational power is being rapidly concentrated in the hands of the few. That harms users' and developers' right to free expression, their digital liberties, privacy, and ability to create and use open platforms...

Rather than creating age gates, a well-crafted privacy law that empowers all of us — young people and adults alike — to control how our data is collected and used would be a crucial step in the right direction.

The Courts

Judges Find AI Doesn't Have Human Intelligence in Two New Court Cases (yahoo.com) 61

Within the last month, two U.S. judges have effectively declared that AI bots are not human, writes Los Angeles Times columnist Michael Hiltzik: On Monday, the Supreme Court declined to take up a lawsuit in which artist and computer scientist Stephen Thaler tried to copyright an artwork that he acknowledged had been created by an AI bot of his own invention. That left in place a ruling last year by the District of Columbia Court of Appeals, which held that art created by non-humans can't be copyrighted... [Judge Patricia A. Millett] cited longstanding regulations of the Copyright Office requiring that "for a work to be copyrightable, it must owe its origin to a human being"... She rejected Thaler's argument, as had the federal trial judge who first heard the case, that the Copyright Office's insistence that the author of a work must be human was unconstitutional. The Supreme Court evidently agreed...

[Another AI-related case] involved one Bradley Heppner, who was indicted by a federal grand jury for allegedly looting $150 million from a financial services company he chaired. Heppner pleaded not guilty and was released on $25 million bail. The case is pending.... Knowing that an indictment was in the offing, Heppner had consulted Claude for help on a defense strategy. His lawyers asserted that those exchanges, which were set forth in written memos, were tantamount to consultations with Heppner's lawyers; therefore, his lawyers said, they were confidential according to attorney-client privilege and couldn't be used against Heppner in court. (They also cited the related attorney work product doctrine, which grants confidentiality to lawyers' notes and other similar material.) That was a nontrivial point. Heppner had given Claude information he had learned from his lawyers, and shared Claude's responses with his lawyers.

[Federal Judge Jed S.] Rakoff made short work of this argument. First, he ruled, the AI documents weren't communications between Heppner and his attorneys, since Claude isn't an attorney... Second, he wrote, the exchanges between Heppner and Claude weren't confidential. In its terms of use, Anthropic claims the right to collect both a user's queries and Claude's responses, use them to "train" Claude, and disclose them to others. Finally, Heppner wasn't asking Claude for legal advice, but for information he could pass on to his own lawyers, or not. Indeed, when prosecutors tested Claude by asking whether it could give legal advice, the bot advised them to "consult with a qualified attorney."

The columnist agrees AI-generated results shouldn't receive the same protections as human-generated material. "The AI bots are machines, and portraying them as though they're thinking creatures like artists or attorneys doesn't change that, and shouldn't."

He also seems to think their output is at best second-hand regurgitation. "Everything an AI bot spews out is, at a fundamental level, the product of human creativity."
AI

AI CEOs Worry the Government Will Nationalize AI (thenewstack.io) 123

Palantir's CEO was blunt. "If Silicon Valley believes we are going to take away everyone's white-collar job... and you're going to screw the military — if you don't think that's going to lead to the nationalization of our technology, you're retarded..."

And OpenAI's Sam Altman is thinking about the same thing, writes long-time Slashdot reader destinyland: "It has seemed to me for a long time it might be better if building AGI were a government project," Sam Altman publicly mused last week... Altman speculated on the possibility of the government "nationalizing" private AI companies into a public project, admitting more than once he's wondered what would happen next. "I obviously don't know," Altman said — but he added, "I have thought about it, of course." Altman hedged that "it doesn't seem super likely on the current trajectory. That said, I do think a close partnership between governments and the companies building this technology is super important."

Could powerful AI tools one day slip from the hands of private companies to be controlled by the U.S. government? Fortune magazine's AI editor points out that "many other breakthroughs with big strategic implications — from the Manhattan Project to the space race to early efforts to develop AI — were government-funded and largely government-directed." And Fortune added that last week the Defense Department threatened Anthropic with the Defense Production Act, which allows the president to designate "critical and strategic" goods for which businesses must accept the government's contracts. Fortune speculates this would've been "a sort of soft nationalization of Anthropic's production pipeline". Altman acknowledged Saturday that he'd felt the threat of attempted nationalization "behind a lot of the questions" he'd received on X.com.

How exactly will this AI build-out be handled — and how should AI companies be working with the government? In a sprawling ask-me-anything session on X that included other members of OpenAI leadership, one Missouri-based developer even broached an AGI-government scenario directly with OpenAI's Head of National Security Partnerships, Katherine Mulligan. If OpenAI built an AGI — something that even passed its own Turing test for AGI — would that be a case where its government contracts compelled them to grant access to the Defense Department?

"No," Mulligan answered. At our current moment in time, "we control which models we deploy."

The article notes 100 OpenAI employees joined with 856 Google employees in an online letter titled "We Will Not Be Divided" urging their bosses to refuse their models' use in domestic mass surveillance and autonomous killing without human oversight.

But Adafruit's managing director Phillip Torrone (also long-time Slashdot reader ptorrone) sees analogies to America's atomic bomb-building Manhattan Project, and "what happened when the scientists who built the thing tried to set conditions on how the thing would be used." (The government pressured them to back down, which he compares to the Pentagon's designating Anthropic a "supply chain risk" before offering OpenAI a contract "with the same red lines, just worded differently".)

Ironically, Anthropic CEO Dario Amodei frequently recommends the Pulitzer Prize-winning 1986 book The Making of the Atomic Bomb...
Government

Daylight Saving Time Ritual Continues. But Are There Alternatives? (apnews.com) 147

Would you move sunrise to 9 a.m. in Detroit? Or to 4:11 a.m. in Seattle...

Though both options have problems, "There's no law we can pass to move the sun to our will," argues the president of the nonprofit "Save Standard Time". The Associated Press explains why America remains stuck in that annual ritual making clocks "spring forward, fall backward..." The U.S. has tinkered with the clock intermittently since railroads standardized the time zones in 1883. So has a lot of the world. About 140 countries have had daylight saving time at some point; about half that many do now. About 1 in 10 U.S. adults favor the current system of changing the clocks, according to an AP-NORC poll conducted last year. About half oppose that system, and some 4 in 10 have no opinion.

If they had to choose, most Americans say they would prefer to make daylight saving time permanent, rather than standard time. Since 2018, 19 states — including much of the South and a block of states in the northwestern U.S. — have adopted laws calling for a move to permanent daylight saving time. There's a catch: Congress would need to pass a law to allow states to go to full-time daylight saving time, something that was in place nationwide during World War II and for an unpopular, brief stint in 1974. The U.S. Senate passed a bill in 2022 to move to permanent daylight saving time. A similar House bill hasn't been brought to a vote.

U.S. Rep. Mike Rogers, a Republican from Alabama who introduces such a bill every term, said the airline industry, which doesn't want the scheduling complexity a change would bring, has been a factor in persuading lawmakers not to take it up. U.S. Rep. Greg Steube, a Florida Republican, is proposing another approach. "Why not just split the baby?" he asked. "Move it 30 minutes so it would be halfway between the two." Steube thinks his bill could get bipartisan support. The change would put the U.S. out of sync with most of the world — though India has taken a similar approach, and Nepal's time is 15 minutes ahead of India's.

The Almighty Buck

Prediction Market 'Kalshi' Sued for Not Paying $54 Million for Bets on Khamenei's Death (reuters.com) 44

An anonymous reader shared this report from the Independent: A popular predictions market app will not pay out the $54 million some of its users believed they were owed after correctly forecasting the death of Ayatollah Ali Khamenei, according to a report.

Kalshi, which allows players to gamble on real-world events, offered customers favorable odds on Khamenei, 86, being "out as Supreme Leader" in response to the announcement of joint U.S.-Israeli airstrikes on Tehran in the early hours of Saturday morning. The company promoted the trade on its homepage and app and tweeted [last] Saturday: "BREAKING: The odds Ali Khamenei is out as Supreme Leader have surged to 68 percent." It continued: "Reminder: Kalshi does not offer markets that settle on death. If Ali Khamenei dies, the market will resolve based on the last traded price prior to confirmed reporting of death." Khamenei was later confirmed dead in the airstrikes and the company clarified in a follow-up post: "Please note: A prior version of this clarification was grammatically ambiguous. As a customer service measure, Kalshi will reimburse lost value due to trades made between these clarifications...."

While the company has offered to reimburse any bets, fees or losses from the trade placed prior to its clarification message, it has nevertheless attracted a firestorm of complaints on social media.

A Kalshi spokesperson told Reuters they'd reimbursed "net losses" out of pocket "to the tune of millions of dollars". But a class action lawsuit was filed Thursday saying Kalshi had failed to pay $54 million: Kalshi did not invoke a "death carveout" provision until after the Iranian leader was killed to avoid paying customers in Kalshi's "Khamenei Market" what they were owed, the lawsuit said... The language specifying that Khamenei's departure could be due to any cause, including death, was "clear, unambiguous and binary," the lawsuit said, describing Kalshi's actions as "deceptive" and "predatory."

"In a notice filed Monday, the company proposed standardizing the terms of all its markets that implicitly depend on a person surviving..." reports Business Insider. "The update comes after Kalshi paid $2.2 million to resolve complaints from users who were confused by the way it divided the $55 million wagered on Iran's Supreme Leader Ali Khamenei's ouster after his targeted killing by Israel and the US."

Their article cites a DePaul University law professor who says "There's now sort of this nascent, but bipartisan movement against prediction markets. I think Kalshi's feeling the heat." For example, U.S. Senator Chris Murphy told the Washington Post, "People shouldn't be rooting for people to die because they placed a bet."
Government

Indonesia To Ban Social Media For Children Under 16 (theguardian.com) 47

Indonesia will ban children under 16 from having accounts on major social media platforms as part of a government push to protect minors from harmful content, addiction, and online threats. The rule will roll out starting March 28 and makes Indonesia the first country in Southeast Asia to impose such a restriction. The Guardian reports: Meutya Hafid said in a statement to media that she had signed a government regulation that will mean children under the age of 16 can no longer have accounts on high-risk digital platforms, including YouTube, TikTok, Facebook, Instagram, Threads, X, Roblox and Bigo Live, a popular livestreaming site. With a population of about 285 million, the fourth-highest in the world, the south-east Asian nation represents a significant market for social networks.

The implementation will start gradually from 28 March, until all platforms fulfill their compliance obligations. "The basis is clear. Our children face increasingly real threats. From exposure to pornography, cyberbullying, online fraud, and most importantly addiction. The government is here so that parents no longer have to fight alone against the giant of algorithms," Hafid said.

She added that the government is taking this step as the best effort in the midst of a digital emergency to reclaim sovereignty over children's futures. "We realize that the implementation of this regulation may cause some discomfort at first. Children may complain and parents may be confused about how to respond to their children's complaints," Hafid said.

Government

Trump Administration Says It Can't Process Tariff Refunds Because of Computer Problems (theverge.com) 162

U.S. Customs and Border Protection (CBP) said in a filing on Friday that it currently cannot process billions in tariff refunds because its import-processing system is "not well suited to a task of this scale." The Verge reports: The CBP's admission comes after the Supreme Court struck down the tariffs imposed by Trump under the International Emergency Economic Powers Act (IEEPA) last month. This week, the International Trade Court ruled that importers impacted by the tariffs are entitled to refunds with interest. The CBP estimates that it collected around $166 billion in IEEPA duties as of March 4th, 2026. [...]

The CBP says it currently processes imports through its Automated Commercial Environment (ACE) system. In the filing, Lord says that using the department's existing technology, it would take more than 4.4 million hours to process refunds for the over 53.2 million entries with IEEPA duties. Despite these current limitations, the CBP says it's "confident" it can develop and launch new capabilities to "streamline and consolidate refunds and interest payments on an importer basis" -- but this could take 45 days. "The process will be simpler and more efficient than the existing functionalities, and CBP will provide guidance on how to file refund declarations in the new system," Lord says.
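The filing's figures can be sanity-checked with simple arithmetic. Using only the numbers quoted above, 4.4 million hours for 53.2 million entries works out to roughly five minutes per entry (the parallel-streams comparison below is purely an illustration of scale, not anything CBP has proposed):

```python
# Back-of-the-envelope check of the figures quoted in the CBP filing.
total_hours = 4_400_000   # estimated processing time with existing ACE tooling
entries = 53_200_000      # entries with IEEPA duties
target_days = 45          # CBP's estimate for the new streamlined system

minutes_per_entry = total_hours * 60 / entries
print(f"{minutes_per_entry:.1f} minutes per entry")  # 5.0 minutes per entry

# Clearing 4.4M hours inside 45 calendar days would require this many
# parallel processing streams (illustrative comparison only):
streams_needed = total_hours / (target_days * 24)
print(f"~{streams_needed:,.0f} parallel streams")    # ~4,074 parallel streams
```

The gap between five minutes of manual work per entry and a 45-day target is why the filing frames the fix as new consolidated-refund functionality rather than more staff.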

Operating Systems

System76 Comments On Recent Age Verification Laws (phoronix.com) 84

In a blog post on Thursday, System76 CEO Carl Richell criticized new state laws in California, Colorado, and New York that would require operating systems to verify users' ages and expose that information to apps, arguing the rules are easy for kids to bypass and ultimately undermine privacy and freedom more than they protect minors.

"System76's position is interesting given that they sell Linux-loaded desktops, workstations and laptops plus being an operating system vendor with their in-house Pop!_OS distribution and COSMIC desktop environment," adds Phoronix's Michael Larabel, noting that they're also based out of Colorado.

Here's an excerpt from the post: "A parent that creates a non-admin account on a computer, sets the age for a child account they create, and hands the computer over is in no different state. The child can install a virtual machine, create an account on the virtual machine and set the age to 18 or over. It's a similar technique to installing a VPN to get around the Great Firewall of China (just consider that for a moment). Or the child can simply re-install the OS and not tell their parents. ... In the case of Colorado's and California's bills, effectiveness is lost. In the case of New York's bill, liberty is lost. In the case of centralized platforms, potential is lost. ... The challenges we face are neither technical nor legal. The only solution is to educate our children about life with digital abundance. Throwing them into the deep end when they're 16 or 18 is too late. It's a wonderful and weird world. Yes, there are dark corners. There always will be. We have to teach our children what to do when they encounter them and we have to trust them."

"We are accustomed to adding operating system features to comply with laws," writes Richell, in closing. "Accessibility features for ADA, and power efficiency settings for Energy Star regulations are two examples. We are a part of this world and we believe in the rule of law. We still hope these laws will be recognized for the folly they are and removed from the books or found unconstitutional."
AI

Iran War Provides a Large-Scale Test For AI-Assisted Warfare 112

An anonymous reader quotes a report from Bloomberg, written by Katrina Manson: The U.S. strikes on Iran ordered by President Donald Trump mark the arrival on a large scale of a new era of warfare assisted by artificial intelligence. Captain Timothy Hawkins, a Central Command spokesperson, told me last night that the AI tools the U.S. military is using in Iran operations don't make targeting decisions and don't replace humans. But they do help "make smarter decisions faster." That's been the driving ambition of the U.S. military, which has spent years looking at how to develop and deploy AI to the battlefield [...].

Critics, such as Stop Killer Robots, a coalition of 270 human-rights groups, argue that AI-enabled decision-support systems reduce the separation between recommending and executing a strike to a "dangerously thin" line. Hawkins said the military's use of AI assistance follows a rigorous process aligned with U.S. policy, military doctrine and the law. Artificial intelligence helps analysts whittle down what they need to focus on, generating so-called points of interest and helping personnel make "smart" decisions in the Iran operations, he told me. AI is also helping to pull data within systems and organize information to provide clarity.

Among the AI tech used in the Iran campaign is Maven Smart System, a digital mission control platform produced by Palantir [...]. That emerged from Project Maven, a project started in 2017 by the Pentagon to develop AI for the battlefield. Among the large language models installed on the system is Anthropic's Claude AI tool, according to people familiar with the matter, who said it has become central to U.S. operations against Iran and to accelerating Maven's development. Claude is also at the center of a row that pits Anthropic against the Department of Defense over limits on the software.
Further reading: Hacked Tehran Traffic Cameras Fed Israeli Intelligence Before Strike On Khamenei
Privacy

Proton Mail Helped FBI Unmask Anonymous 'Stop Cop City' Protester (404media.co) 58

Longtime Slashdot reader AmiMoJo shares a report from 404 Media: Privacy-focused email provider Proton Mail provided Swiss authorities with payment data that the FBI then used to determine who was allegedly behind an anonymous account affiliated with the Stop Cop City movement in Atlanta, according to a court record reviewed by 404 Media. The records provide insight into the sort of data that Proton Mail, which prides itself both on its end-to-end encryption and that it is only governed by Swiss privacy law, can and does provide to third parties. In this case, the Proton Mail account was affiliated with the Defend the Atlanta Forest (DTAF) group and Stop Cop City movement in Atlanta, which authorities were investigating for their connection to arson, vandalism and doxing. Broadly, members were protesting the building of a large police training center next to the Intrenchment Creek Park in Atlanta, and actions also included camping in the forest and lawsuits. Charges against more than 60 people have since been dropped.
The Courts

AI Startup Sues Ex-CEO Saying He Took 41GB of Email, Lied On Resume (arstechnica.com) 34

An anonymous reader quotes a report from Ars Technica: Hayden AI, a San Francisco startup that makes spatial analytics tools for cities worldwide, has sued its co-founder and former CEO, alleging that he stole a large quantity of proprietary information in the days leading up to his ouster from the company in September 2024. In a lawsuit filed late last month in San Francisco Superior Court but only made public this week, Hayden AI claims that former CEO Chris Carson undertook what it called "numerous fraudulent actions," which include "forged board signatures, unauthorized stock sales, and improper allocation of personal expenses." [...] Hayden AI, which is worth $464 million according to an estimated valuation on PitchBook, has asked the court to impose preliminary injunctive relief, requiring Carson to either return or destroy the data he allegedly stole. Specifically, the lawsuit alleges that Carson secretly sold over $1.2 million in company stock, forged board signatures, and copied 41GB of proprietary company emails before being fired in September 2024. The complaint also claims Carson fabricated key parts of his resume, including a PhD and military service. It's a "carefully constructed fraud," says Hayden AI.

"That is a lie," the complaint states. "Carson does not hold a PhD from Waseda or any other university. In 2007, he was not obtaining a PhD but was operating 'Splat Action Sports,' a paintball equipment business in a Florida strip mall."
