Government

Pentagon Purchases a Device Allegedly Linked To Havana Syndrome (cnn.com) 72

"Since the United States reopened its embassy in Cuba in 2015, a number of personnel have reported a series of debilitating medical ailments which include dizziness, fatigue, problems with memory, and impaired vision," writes longtime Slashdot reader smooth wombat. "For ten years, these sudden and unexplained onsets have been studied with no conclusive evidence one way or the other. Now comes word that a device, purchased by the Pentagon, has been tested which may be linked to what is known as Havana Syndrome." From a report: A division of the Department of Homeland Security, Homeland Security Investigations, purchased the device for millions of dollars in the waning days of the Biden administration, using funding provided by the Defense Department, according to two of the sources. Officials paid âoeeight figuresâ for the device, these people said, declining to offer a more specific number. [...]

The device acquired by HSI produces pulsed radio waves, one of the sources said, which some officials and academics have speculated for years could be the cause of the incidents. Although the device is not entirely Russian in origin, it contains Russian components, this person added. Officials have long struggled to understand how a device powerful enough to cause the kind of damage some victims have reported could be made portable; that remains a core question, according to one of the sources briefed on the device. The device could fit in a backpack, this person said.

[...] One key concern now for some officials is that if the technology proves viable it may have proliferated, several of the sources said, meaning that more than one country could now have access to a device that may be capable of causing career-ending injuries to US officials.
Further reading: 'Havana Syndrome' Debate Rises Again in US Government
Government

Senate Passes a Bill That Would Let Nonconsensual Deepfake Victims Sue (theverge.com) 63

The U.S. Senate unanimously passed the Disrupt Explicit Forged Images and Non-Consensual Edits Act (DEFIANCE Act), giving victims of sexually explicit AI deepfakes the right to sue the individuals who created them. The Verge reports: The bill passed with unanimous consent -- meaning there was no roll-call vote, and no Senator objected to its passage on the floor Tuesday. It's meant to build on the work of the Take It Down Act, a law that criminalizes the distribution of nonconsensual intimate images (NCII) and requires social media platforms to promptly remove them. [...] Now the ball is again in the House leadership's court; if they decide to bring the bill to the floor, it will have to pass in order to reach the president's desk.
Power

Trump Says Microsoft To Make Changes To Curb Data Center Power Costs For Americans (cnbc.com) 42

An anonymous reader quotes a report from CNBC: President Donald Trump said in a social media post on Monday that Microsoft will announce changes to ensure that Americans won't see rising utility bills as the company builds more data centers to meet rising artificial intelligence demand. "I never want Americans to pay higher Electricity bills because of Data Centers," Trump wrote on Truth Social. "Therefore, my Administration is working with major American Technology Companies to secure their commitment to the American People, and we will have much to announce in the coming weeks."

[...] Trump congratulated Microsoft on its efforts to keep prices in check, suggesting that other companies will make similar commitments. "First up is Microsoft, who my team has been working with, and which will make major changes beginning this week to ensure that Americans don't 'pick up the tab' for their POWER consumption, in the form of paying higher Utility bills," Trump wrote on Monday. Utilities charged U.S. consumers 6% more for electricity in August from a year earlier, including in states with many data centers, CNBC reported in November.

Microsoft is paying close attention to the impact of its data centers on local residents. "I just want you to know we are doing everything we can, and I believe we're succeeding, in managing this issue well, so that you all don't have to pay more for electricity because of our presence," Brad Smith, the company's president and vice chair, said at a September town hall meeting in Wisconsin, where Microsoft is building an AI data center. While Microsoft is moving forward with some facilities, the company withdrew plans for a data center in Caledonia, Wisconsin, amid loud opposition to its efforts there. The project would have been located 20 miles away from a data center in the village of Mount Pleasant.

Government

EPA To Stop Considering Lives Saved By Limiting Air Pollution (nytimes.com) 145

An anonymous reader quotes a report from the New York Times: For decades, the Environmental Protection Agency has calculated the health benefits of reducing air pollution, using the cost estimates of avoided asthma attacks and premature deaths to justify clean-air rules. Not anymore. Under President Trump, the E.P.A. plans to stop tallying gains from the health benefits caused by curbing two of the most widespread deadly air pollutants, fine particulate matter and ozone, when regulating industry, according to internal agency emails and documents reviewed by The New York Times.

It's a seismic shift that runs counter to the E.P.A.'s mission statement, which says the agency's core responsibility is to protect human health and the environment, environmental law experts said. The change could make it easier to repeal limits on these pollutants from coal-burning power plants, oil refineries, steel mills and other industrial facilities across the country, the emails and documents show. That would most likely lower costs for companies while resulting in dirtier air.
"The idea that E.P.A. would not consider the public health benefits of its regulations is anathema to the very mission of E.P.A.," said Richard Revesz, the faculty director of the Institute for Policy Integrity at New York University School of Law.

"If you're only considering the costs to industry and you're ignoring the benefits, then you can't justify any regulations that protect public health, which is the very reason that E.P.A. was set up."
Security

Fintech Firm Betterment Confirms Data Breach After Hackers Send Fake $10,000 Crypto Scam Messages (theverge.com) 3

An anonymous reader quotes a report from The Verge: Betterment, a financial app, sent a sketchy-looking notification on Friday asking users to send $10,000 to Bitcoin and Ethereum crypto wallets and promising to "triple your crypto," according to a thread on Reddit. The Betterment account says in an X thread that this was an "unauthorized message" that was sent via a "third-party system." TechCrunch has since confirmed that an undisclosed number of Betterment's customers have had their personal information accessed. "The company said customer names, email and postal addresses, phone numbers, and dates of birth were compromised in the attack," reports TechCrunch.

Betterment said it detected the attack on the same day and "immediately revoked the unauthorized access and launched a comprehensive investigation, which is ongoing." The fintech firm also said it has reached out to the customers targeted by the hackers and "advised them to disregard the message."

"Our ongoing investigation has continued to demonstrate that no customer accounts were accessed and that no passwords or other log-in credentials were compromised," Betterment wrote in the email.
The Courts

Supreme Court Takes Case That Could Strip FCC of Authority To Issue Fines (arstechnica.com) 49

An anonymous reader quotes a report from Ars Technica: The Supreme Court will hear a case that could invalidate the Federal Communications Commission's authority to issue fines against companies regulated by the FCC. AT&T, Verizon, and T-Mobile challenged the FCC's ability to punish them after the commission fined the carriers for selling customer location data without their users' consent. AT&T convinced the US Court of Appeals for the 5th Circuit to overturn its fine (PDF), while Verizon lost in the 2nd Circuit and T-Mobile lost in the District of Columbia Circuit. Verizon petitioned (PDF) the Supreme Court to reverse its loss, while the FCC and Justice Department petitioned (PDF) the court to overturn AT&T's victory in the 5th Circuit. The Supreme Court granted both petitions to hear the challenges and consolidated the cases in a list of orders (PDF) released Friday. Oral arguments will be held.

In 2024, the FCC fined the big three carriers a total of $196 million for location data sales revealed in 2018, saying the companies were punished "for illegally sharing access to customers' location information without consent and without taking reasonable measures to protect that information against unauthorized disclosure." Carriers challenged in three appeals courts, arguing that the fines violated their Seventh Amendment right to a jury trial. [...] While the Supreme Court is only taking up the AT&T and Verizon cases, the T-Mobile case would be affected by whatever ruling the Supreme Court issues. T-Mobile is seeking a rehearing in the District of Columbia Circuit, an effort that could be boosted or rendered moot by whatever the Supreme Court decides.

Government

More US States Are Preparing Age-Verification Laws for App Stores (politico.com) 57

Yes, a federal judge blocked an attempt by Texas at an app store age-verification law. But this year Silicon Valley giants including Google and Apple "are expected to fight hard against similar legislation," reports Politico, "because of the vast legal liability it imposes on app stores and developers." In Texas, Utah and Louisiana, parent advocates have linked up with conservative "pro-family" groups to pass laws forcing mobile app stores to verify user ages and require parental sign-off. If those rules hold up in court, companies like Google and Apple, which run the two largest app stores, would face massive legal liability... California has taken a different approach, passing its own age-verification law last year that puts liability on device manufacturers instead of app stores. That model has been better received by the tech lobby, and is now competing with the app-based approach in states like Ohio. In Washington D.C., a GOP-led bill modeled off of Texas' law is wending its way through Capitol Hill. And more states are expected to join the fray, including Michigan and South Carolina.

Joel Thayer, president of the conservative Digital Progress Institute and a key architect of the Texas law, said states are only accelerating their push. He explicitly linked the age-verification debate to AI, arguing it's "terrifying" to think companies could build new AI products by scraping data from children's apps. Thayer also pointed to the Trump administration's recent executive order aimed at curbing state regulation of AI, saying it has galvanized lawmakers. "We're gonna see more states pushing this stuff," Thayer said. "What really put fuel in the fire is the AI moratorium for states. I think states have been reinvigorated to fight back on this."

He told Politico that the issue will likely be decided by America's Supreme Court, which in June upheld Texas legislation requiring age verification for online content. Thayer said states need a ruling from America's highest court to "triangulate exactly what the eff is going on with the First Amendment in the tech world.

"They're going to have to resolve the question at some point."
Piracy

Italy Fines Cloudflare 14 Million Euros For Refusing To Filter Pirate Sites On Public 1.1.1.1 DNS (torrentfreak.com) 39

An anonymous reader quotes a report from TorrentFreak: Italy's communications regulator AGCOM imposed a record-breaking 14.2 million-euro fine on Cloudflare after the company failed to implement the required piracy blocking measures. Cloudflare argued that filtering its global 1.1.1.1 DNS resolver would be "impossible" without hurting overall performance. AGCOM disagreed, noting that Cloudflare is not necessarily a neutral intermediary either.

[...] "The measure, in addition to being one of the first financial penalties imposed in the copyright sector, is particularly significant given the role played by Cloudflare" AGCOM notes, adding that Cloudflare is linked to roughly 70% of the pirate sites targeted under its regime. In its detailed analysis, the regulator further highlighted that Cloudflare's cooperation is "essential" for the enforcement of Italian anti-piracy laws, as its services allow pirate sites to evade standard blocking measures.

Cloudflare has strongly contested the accusations throughout AGCOM's proceedings and previously criticized the Piracy Shield system for lacking transparency and due process. While the company did not immediately respond to our request for comment, it will almost certainly appeal the fine. This appeal may also draw the interest of other public DNS resolvers, such as Google and OpenDNS. AGCOM, meanwhile, says that it remains fully committed to enforcing the local piracy law. The regulator notes that since the Piracy Shield started in February 2024, 65,000 domain names and 14,000 IP addresses were blocked.
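For readers unfamiliar with what resolver-side blocking actually entails, here is a minimal sketch of the lookup logic a public DNS service would need to add. All domain names and the blocklist are hypothetical, and a real resolver such as 1.1.1.1 answers at the DNS protocol level rather than with strings:

```python
# Illustrative sketch of resolver-side domain blocking, as a court might
# require of a public DNS service. All names here are hypothetical; a real
# resolver would answer NXDOMAIN (or redirect to a landing page) on the wire.

BLOCKLIST = {"pirate-example.site", "streams-example.org"}  # hypothetical entries

def is_blocked(qname: str) -> bool:
    """Return True if qname or any parent domain is on the blocklist."""
    labels = qname.lower().rstrip(".").split(".")
    # Check the name itself and each parent, so that a subdomain like
    # cdn.pirate-example.site matches the pirate-example.site entry.
    # range(len(labels) - 1) deliberately skips the bare TLD.
    return any(".".join(labels[i:]) in BLOCKLIST for i in range(len(labels) - 1))

def resolve(qname: str) -> str:
    if is_blocked(qname):
        return "NXDOMAIN"          # blocked: answer as if the name doesn't exist
    return "forward-to-upstream"   # normal resolution path

assert resolve("cdn.pirate-example.site") == "NXDOMAIN"
assert resolve("example.com") == "forward-to-upstream"
```

The per-query check itself is cheap; Cloudflare's "impossible without hurting performance" objection concerns maintaining and applying per-jurisdiction blocklists consistently across a globally anycast resolver, not the lookup logic above.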

Businesses

Lawsuit Over OpenAI For-Profit Conversion Can Head To Trial, US Judge Says 32

Longtime Slashdot reader schwit1 shares a report from Reuters: Billionaire entrepreneur Elon Musk persuaded a judge on Wednesday to allow a jury trial on his allegations that ChatGPT maker OpenAI violated its founding mission in its high-profile restructuring to a for-profit entity. Musk was a cofounder of OpenAI in 2015 but left in 2018 and now runs an AI company that competes with it.

U.S. District Judge Yvonne Gonzalez Rogers in Oakland, California, said at a hearing that there was "plenty of evidence" suggesting OpenAI's leaders made assurances that its original nonprofit structure was going to be maintained. The judge said there were enough disputed facts to let a jury consider the claims at a trial scheduled for March, rather than decide the issues herself. She said she would issue a written order after the hearing that addresses OpenAI's bid to throw out the case.

[...] Musk contends he contributed about $38 million, roughly 60% of OpenAI's early funding, along with strategic guidance and credibility, based on assurances that the organization would remain a nonprofit dedicated to the public benefit. The lawsuit accuses OpenAI co-founders Sam Altman and Greg Brockman of plotting a for-profit switch to enrich themselves, culminating in multibillion-dollar deals with Microsoft and a recent restructuring.
OpenAI, Altman and Brockman have denied the claims, and they called Musk "a frustrated commercial competitor seeking to slow down a mission-driven market leader."

Microsoft is also a defendant and has urged the judge to toss Musk's lawsuit. A lawyer for Microsoft said there was no evidence that the company "aided and abetted" OpenAI.

OpenAI in a statement after the hearing said: "Mr Musk's lawsuit continues to be baseless and a part of his ongoing pattern of harassment, and we look forward to demonstrating this at trial."
Privacy

Illinois Health Department Exposed Over 700,000 Residents' Personal Data For Years (techcrunch.com) 14

Illinois Department of Human Services disclosed that a misconfigured internal mapping website exposed sensitive personal data for more than 700,000 Illinois residents for over four years, from April 2021 to September 2025. Officials say they can't confirm whether the publicly accessible data was ever viewed. TechCrunch reports: Officials said the exposed data included personal information on 672,616 individuals who are Medicaid and Medicare Savings Program recipients. The data included their addresses, case numbers, and demographic data -- but not individuals' names. The exposed data also included names, addresses, case statuses, and other information relating to 32,401 individuals in receipt of services from the department's Division of Rehabilitation Services.
Piracy

French Court Orders Google DNS to Block Pirate Sites, Dismisses 'Cloudflare-First' Defense (torrentfreak.com) 34

Paris Judicial Court ordered Google to block additional pirate sports-streaming domains at the DNS level, rejecting Google's argument that enforcement should target upstream providers like Cloudflare first. "The blockade was requested by Canal+ and aims to stop pirate streams of Champions League games," notes TorrentFreak. From the report: Most recently, Google was compelled to take action following a complaint from French broadcaster Canal+ and its subsidiaries regarding Champions League piracy. Like previous blocking cases, the request is grounded in Article L. 333-10 of the French Sports Code, which enables rightsholders to seek court orders against any entity that can help to stop 'serious and repeated' sports piracy. After reviewing the evidence and hearing arguments from both sides, the Paris Court granted the blocking request, ordering Google to block nineteen domain names, including antenashop.site, daddylive3.com, livetv860.me, streamysport.org and vavoo.to.

The latest blocking order covers the entire 2025/2026 Champions League series, which ends on May 30, 2026. It's a dynamic order too, which means that if these sites switch to new domains, as verified by ARCOM, these have to be blocked as well. Google objected to the blocking request. Among other things, it argued that several domains were linked to Cloudflare's CDN. Therefore, suspending the sites on the CDN level would be more effective, as that would render them inaccessible. Based on the subsidiarity principle, Google argued that blocking measures should only be ordered if attempts to block the pirate sites through more direct means have failed.

The court dismissed these arguments, noting that intermediaries cannot dictate the enforcement strategy or blocking order. Intermediaries cannot require "prior steps" against other technical intermediaries, especially given the "irremediable" character of live sports piracy. The judge found the block proportional because Google remains free to choose the technical method, even if the result is mandated. Internet providers, search engines, CDNs, and DNS resolvers can all be required to block, irrespective of what other measures were taken previously. Google further argued that the blocking measures were disproportionate because they were complex, costly, easily bypassed, and had effects beyond the borders of France.

The Paris court rejected these claims. It argued that Google failed to demonstrate that implementing these blocking measures would result in "important costs" or technical impossibilities. Additionally, the court recognized that there would still be options for people to bypass these blocking measures. However, the blocks are a necessary step to "completely cease" the infringing activities.

Privacy

Samsung Hit with Restraining Order Over Smart TV Surveillance Tech in Texas (texasattorneygeneral.gov) 59

Texas Attorney General Ken Paxton has secured a temporary restraining order against Samsung, blocking the company from continuing to collect data through its smart TVs' Automated Content Recognition technology.

The ACR system captured screenshots of what users were watching every 500 milliseconds, according to the state's lawsuit, and did so without consumer knowledge or consent. The District Court found good cause to believe Samsung's actions violated the Texas Deceptive Trade Practices Act. The TRO prohibits Samsung and any parties working in concert with the company from using, selling, transferring, collecting, or sharing ACR data tied to Texas consumers.

Samsung is one of five major TV manufacturers the Texas Attorney General's office has sued over ACR deployment. Paxton previously secured a similar order against Hisense.
The Courts

Google and Character.AI Agree To Settle Lawsuits Over Teen Suicides 36

Google and Character.AI have agreed to settle multiple lawsuits from families alleging the chatbot encouraged self-harm and suicide among teens. "The settlements would mark the first resolutions in the wave of lawsuits against tech companies whose AI chatbots encouraged teens to hurt or kill themselves," notes Axios. From the report: Families allege that Character.AI's chatbot encouraged their children to cut their arms, suggested murdering their parents, wrote sexually explicit messages and did not discourage suicide, per lawsuits and congressional testimony. "Parties have agreed to a mediated settlement in principle to resolve all claims between them in the above-referenced matter," one document filed in U.S. District Court for the Middle District of Florida reads.

The documents do not contain any specific monetary amounts for the settlements. Pricy settlements could deter companies from continuing to offer chatbot products to kids. But without new laws on the books, don't expect major changes across the industry.
Last October, Character.AI said it would bar people under 18 from using its chatbots, in a sweeping move to address concerns over child safety.
Government

California Lawmaker Proposes a Four-Year Ban On AI Chatbots In Kids' Toys (techcrunch.com) 22

An anonymous reader quotes a report from TechCrunch: Senator Steve Padilla (D-CA) introduced a bill [dubbed SB 867] on Monday that would place a four-year ban on the sale and manufacture of toys with AI chatbot capabilities for kids under 18. The goal is to give safety regulators time to develop regulations to protect children from "dangerous AI interactions."

"Chatbots and other AI tools may become integral parts of our lives in the future, but the dangers they pose now require us to take bold action to protect our children," Senator Padilla said in a statement. "Our safety regulations around this kind of technology are in their infancy and will need to grow as exponentially as the capabilities of this technology do. Pausing the sale of these chatbot-integrated toys allows us time to craft the appropriate safety guidelines and framework for these toys to follow." [...] "Our children cannot be used as lab rats for Big Tech to experiment on," Padilla said.

Crime

Founder of Spyware Maker PcTattletale Pleads Guilty To Hacking, Advertising Surveillance Software (techcrunch.com) 3

An anonymous reader quotes a report from TechCrunch: The founder of a U.S.-based spyware company, whose surveillance products allowed customers to spy on the phones and computers of unsuspecting victims, pleaded guilty to federal charges linked to his long-running operation. pcTattletale founder Bryan Fleming entered a guilty plea in a San Diego federal court on Tuesday to charges of computer hacking, the sale and advertising of surveillance software for unlawful uses, and conspiracy.

The plea follows a multi-year investigation by agents with Homeland Security Investigations (HSI), a unit within U.S. Immigration and Customs Enforcement. HSI began investigating pcTattletale in mid-2021 as part of a wider probe into the industry of consumer-grade surveillance software, also known as "stalkerware."

This is the first successful U.S. federal prosecution of a stalkerware operator in more than a decade, following the 2014 indictment and subsequent guilty plea of the creator of a phone surveillance app called StealthGenie. Fleming's conviction could pave the way for further federal investigations and prosecutions against those operating spyware, but also those who simply advertise and sell covert surveillance software. HSI said that pcTattletale is one of several stalkerware websites under investigation.

Government

Congressman Introduces Legislation To Criminalize Insider Trading On Prediction Markets (axios.com) 55

Ritchie Torres has introduced a bill to ban government officials from using insider information to trade on political prediction markets like Polymarket. The bill was prompted by reports that traders on Polymarket made large profits betting on Nicolas Maduro's removal, raising suspicions that some wagers were placed using material non-public information. "While such insider trading in capital markets is already illegal and often prosecuted by the Justice Department and Securities and Exchange Commission, online prediction markets are far less regulated," notes Axios. From the report: Rep. Ritchie Torres' (D-N.Y.) three-page bill, a copy of which was obtained by Axios, is called the Public Integrity in Financial Prediction Markets Act of 2026. It would ban federal elected officials, political appointees and bureaucrats from making insider trades on prediction sites such as Polymarket. Specifically, the bill prohibits such government officials from trading based on information that is not publicly available and that "a reasonable investor would consider important in making an investment decision." [...] It's not clear if House Speaker Mike Johnson (R-La.) would put Torres' bill to a vote in the House or if President Trump would sign it. "We're looking at the specifics of the bill, but we already ban the activity it cites and are in support of means to prevent this type of activity," said Elisabeth Diana, a spokesperson for the prediction website Kalshi.

Diana added that the "activity from the past few days" did not occur on their platform.
AI

An AI-Generated NWS Map Invented Fake Towns In Idaho (washingtonpost.com) 42

National Weather Service pulled an AI-generated forecast graphic after it hallucinated fake town names in Idaho. "The blunder -- not the first of its kind to be posted by the NWS in the past year -- comes as the agency experiments with a wide range of AI uses, from advanced forecasting to graphic design," reports the Washington Post. "Experts worry that without properly trained officials, mistakes could erode trust in the agency and the technology." From the report: At first glance, there was nothing out of the ordinary about Saturday's wind forecast for Camas Prairie, Idaho. "Hold onto your hats!" said a social media post from the local weather office in Missoula, Montana. "Orangeotild" had a 10 percent chance of high winds, while just south, "Whata Bod" would be spared larger gusts. The problem? Neither of those places exist. Nor do a handful of the other spots marked on the National Weather Service's forecast graphic, riddled with spelling and geographical errors that the agency confirmed were linked to the use of generative AI.

NWS said AI is not commonly used for public-facing content, nor is its use prohibited. The agency said it is exploring ways to employ AI to inform the public and acknowledged mistakes have been made. "Recently, a local office used AI to create a base map to display forecast information, however the map inadvertently displayed illegible city names," said NWS spokeswoman Erica Grow Cei. "The map was quickly corrected and updated social media posts were distributed."

A post with the inaccurate map was deleted Monday, the same day The Washington Post contacted officials with questions about the image. Cei added that "NWS is exploring strategic ways to continue optimizing our service delivery for Americans, including the implementation of AI where it makes sense. NWS will continue to carefully evaluate results in cases where AI is implemented to ensure accuracy and efficiency, and will discontinue use in scenarios where AI is not effective." A Nov. 25 tweet out of the Rapid City, South Dakota, office also had misspelled locations and the Google Gemini logo in its forecast. NWS did not confirm whether the Rapid City image was made with generative AI.

Privacy

NYC Wegmans Is Storing Biometric Data On Shoppers' Eyes, Voices and Faces (gothamist.com) 26

schwit1 shares a report from Gothamist: Wegmans in New York City has begun collecting biometric data from anyone who enters its supermarkets, according to new signage posted at the chain's Manhattan and Brooklyn locations earlier this month. Anyone entering the store could have data on their faces, eyes, and voices collected and stored by the Rochester-headquartered supermarket chain. The information is used to "protect the safety and security of our patrons and employees," according to the signage. The new scanning policy is an expansion of a 2024 pilot.

The chain had initially said that the scanning system was only for a small group of employees and promised to delete any biometric data it collected from shoppers during the pilot rollout. The new notice makes no such assurances. Wegmans representatives did not reply to questions about how the data would be stored, why it changed its policy or if it would share the data with law enforcement.

Advertising

Vietnam Bans Unskippable Ads (phunuonline.com.vn) 50

Vietnam will begin enforcing new online advertising rules in February 2026 that ban forced video ads longer than five seconds and require platforms to let users close ads with a single tap. "Furthermore, platforms must provide clear icons and instructions for users to report advertisements that violate the law, and allow them to opt out, turn off, or stop viewing inappropriate ads," reports a local news outlet (translated to English). "These reports must be received and processed promptly, and the results communicated to users as required." From the report: In cases where the entity posting the infringing advertisement cannot be identified or where specialized laws do not have specific regulations, the Ministry of Culture, Sports and Tourism is the focal agency to receive notifications and send requests to block or remove the advertisement to organizations and businesses providing online advertising services in Vietnam.

Advertisers, advertising service providers, and advertising transmission and distribution units are responsible for blocking and removing infringing advertisements within 24 hours of receiving a request from the competent authority. For advertisements that infringe on national security, the blocking and removal must be carried out immediately, no later than 24 hours.

In case of non-compliance, the Ministry of Culture, Sports and Tourism, in coordination with the Ministry of Public Security, will apply technical measures to block infringing advertisements and services and handle the matter according to the law. Telecommunications companies and Internet service providers must also implement technical measures to block access to infringing advertisements within 24 hours of receiving a request.

United States

The Nation's Strictest Privacy Law Goes Into Effect (arstechnica.com) 45

An anonymous reader quotes a report from Ars Technica: Californians are getting a new, supercharged way to stop data brokers from hoarding and selling their personal information, as a recently enacted law that's among the strictest in the nation took effect at the beginning of the year. [...] Two years ago, California's Delete Act took effect. It required data brokers to provide residents with a means to obtain a copy of all data pertaining to them and to demand that such information be deleted. Unfortunately, Consumer Watchdog found that only 1 percent of Californians exercised these rights in the first 12 months after the law went into effect. A chief reason: Residents were required to file a separate demand with each broker. With hundreds of companies selling data, the burden was too onerous for most residents to take on.

On January 1, a new law known as DROP (Delete Request and Opt-out Platform) took effect. DROP allows California residents to register a single demand for their data to be deleted and no longer collected in the future. CalPrivacy then forwards it to all brokers. Starting in August, brokers will have 45 days after receiving the notice to report the status of each deletion request. If any of the brokers' records match the information in the demand, all associated data -- including inferences -- must be deleted unless legal exemptions such as information provided during one-to-one interactions between the individual and the broker apply. To use DROP, individuals must first prove they're a California resident.

Piracy

Anna's Archive Loses .Org Domain After Surprise Suspension 9

Anna's Archive lost control of its primary .org domain after it was placed on registry-level serverHold -- "an action that's typically taken by the domain name registry," reports TorrentFreak. Despite mounting legal pressure and speculation tied to its Spotify backup, the site remains accessible via multiple alternative domains, underscoring the resilience of shadow libraries. From the report: A few hours ago, the site's original domain name suddenly became unreachable globally. The annas-archive.org domain status was changed to "serverHold," which is typically done by the domain registry. This status effectively means that the domain is suspended and under investigation. Similar action has previously been taken against other pirate sites.

It is rare to see a .org domain involved in domain name suspensions. The American non-profit Public Interest Registry (PIR), which oversees the .org domains, previously refused to suspend domain names voluntarily, including thepiratebay.org. The registry's cautious stance suggests that the actions against annas-archive.org are backed by a court order.

PIR's marketing director, Kendal Rowe, informs TorrentFreak that "unfortunately, PIR is unable to comment on the situation at this time." It is possible that, in response to the 'DRM-circumventing' Spotify backup, rightsholders requested an injunction targeting the domain name. However, we have seen no evidence of that. In the WorldCat lawsuit, OCLC requested an injunction to force action from intermediaries, including domain registries, but as far as we know, that hasn't been granted yet.

Television

Corporation for Public Broadcasting To Shut Down After 58 Years (variety.com) 171

After Congress approved President Donald Trump's rescission package eliminating federal funding, the Corporation for Public Broadcasting voted to dissolve after 58 years, rather than continue to exist and potentially be "vulnerable to future political manipulation or misuse." The shutdown leaves hundreds of local public TV and radio stations facing an uncertain future. Variety reports: The CPB was created by Congress through the Public Broadcasting Act of 1967 to support the federal government's investment in public broadcasting. The org noted that the rescission of all of CPB's federal funding came after years of political attacks. "For more than half a century, CPB existed to ensure that all Americans -- regardless of geography, income, or background -- had access to trusted news, educational programming, and local storytelling," said CPB president/CEO Patricia Harrison. "When the Administration and Congress rescinded federal funding, our Board faced a profound responsibility: CPB's final act would be to protect the integrity of the public media system and the democratic values by dissolving, rather than allowing the organization to remain defunded and vulnerable to additional attacks."

[...] "CPB's support extends to every corner of the country -- urban, rural, tribal, and everywhere in between," the org noted. "In many communities, public media stations are the only free source of trusted news, educational children's programming, and local and national cultural content." The CPB said that without funding, its board determined that "maintaining the corporation as a nonfunctional entity would not serve the public interest or advance the goals of public media. A dormant and defunded CPB could have become vulnerable to future political manipulation or misuse, threatening the independence of public media and the trust audiences place in it, and potentially subjecting staff and board members to legal exposure from bad-faith actors."

As it closes, CPB is distributing its remaining funds, and also supporting the American Archive of Public Broadcasting in digitizing and preserving historic content. The CPB's own archives will be preserved at the University of Maryland, which will make them accessible to the public. "Public media remains essential to a healthy democracy," Harrison added. "Our hope is that future leaders and generations will recognize its value, defend its independence, and continue the work of ensuring that trustworthy, educational, and community-centered media remains accessible to all Americans."

United States

As US Communities Start Fighting Back, Many Datacenters are Blocked (apnews.com) 65

America's tech companies and data center developers "are increasingly losing fights in communities where people don't want to live next to them, or even near them," reports the Associated Press: Communities across the United States are reading about — and learning from — each other's battles against data center proposals that are fast multiplying in number and size to meet steep demand as developers branch out in search of faster connections to power sources... [A]s more people hear about a data center coming to their community, once-sleepy municipal board meetings in farming towns and growing suburbs now feature crowded rooms of angry residents pressuring local officials to reject the requests...

A growing number of proposals are going down in defeat, sounding alarms across the data center constellation of Big Tech firms, real estate developers, electric utilities, labor unions and more. Andy Cvengros, who helps lead the data center practice at commercial real estate giant JLL, counted seven or eight deals he'd worked on in recent months that saw opponents going door-to-door, handing out shirts or putting signs in people's yards. "It's becoming a huge problem," Cvengros said. Data Center Watch, a project of 10a Labs, an AI security consultancy, said it is seeing a sharp escalation in community, political and regulatory disruptions to data center development. Between April and June alone, its latest reporting period, it counted 20 proposals valued at $98 billion in 11 states that were blocked or delayed amid local opposition and state-level pushback. That amounts to two-thirds of the projects it was tracking...

For some people angry over steep increases in electric bills, their patience is thin for data centers that could bring still-higher increases. Losing open space, farmland, forest or rural character is a big concern. So is the damage to quality of life, property values or health by on-site diesel generators kicking on or the constant hum of servers. Others worry that wells and aquifers could run dry...

Privacy

39 Million Californians Can Now Legally Demand Data Brokers Delete Their Personal Data (techcrunch.com) 43

"While California's residents have had the right to demand companies stop collecting/selling their data since 2020, doing so used to require laboriously opting out with each individual company," reports TechCrunch. But now Californians can make "a single request that more than 500 registered data brokers delete their information" — using the Delete Requests and Opt-Out Platform (or DROP): Once DROP users verify that they are California residents, they can submit a deletion request that will go to all current and future data brokers registered with the state...

Brokers are supposed to start processing requests in August 2026, then they have 90 days to actually process requests and report back. If they don't delete your data, you'll have the option to submit additional information that may help them locate your records. Companies will also be able to keep first-party data that they've collected from users. It's only brokers who seek to buy or sell that data — which can include your social security number, browsing history, email address, phone number, and more — who will be required to delete it...

The California Privacy Protection Agency says that in addition to giving residents more control over their data, the tool could result in fewer "unwanted texts, calls, or emails" and also decrease the "risk of identity theft, fraud, AI impersonations, or that your data is leaked or hacked."

Government

North Dakota Law Included Fake Critical Minerals Using Lawyers' Last Names (northdakotamonitor.com) 53

North Dakota passed a law last May to promote development of rare earth minerals in the state. But the law's language apparently also includes two fake mineral names, according to the Bismarck Tribune, "that appear to be inspired by coal company lawyers who worked on the bill." The inclusion of fictional substances is being called an embarrassment by one state official, a possible practical joke by coal industry leaders and mystifying by the lawmakers who worked on the bill, the North Dakota Monitor reported.

The fake minerals are friezium and stralium, apparent references to Christopher Friez and David Straley, attorneys for North American Coal who were closely involved in drafting the bill and its amendments. Straley said they were not responsible for adding the fake names. "I assume it was put in to embarrass us, or to make light of it, or have a practical joke," Straley said, adding it could have been a clerical error.

Agriculture Commissioner Doug Goehring questioned the two substances listed in state law during a recent meeting of the North Dakota Industrial Commission, which is poised to adopt rules based on the legislation... Friezium and stralium first appeared in the bill on the last afternoon of the legislative session as lawmakers hurried to pass several final bills... The amended bill is labeled as prepared by Legislative Council for Rep. Dick Anderson, R-Willow City, the prime sponsor and chair of the conference committee. Anderson said the amendments were prepared by a group of attorneys and legislators, including representatives from the coal industry...

Jonathan Fortner, president of the Lignite Energy Council that represents the coal industry, said it's unfortunate this happened in such an important bill. "From the president on down, everyone's interested in developing domestic critical minerals for national security reasons," Fortner said. "While this may have been a legislative joke between some people that somehow got through, the bigger picture is one that is important and is a very serious matter."

AI

Google's $250M Deal with California to Fund Newsrooms May Be Stalled (politico.com) 25

Remember how California's government negotiated a 2024 deal where Google contributed millions to California's local newsrooms to offset advertisers moving to the search engine?

"A year after it was cemented — and billed as a model that could succeed where entire countries and continents had fallen short — the agreement is tangled in budget cuts, bureaucratic infighting and unresolved questions about who controls the money," reports Politico, "leaving journalists empty-handed and casting doubt on whether the lofty experiment will ever live up to its promise." The program, initially framed as a nearly $250 million commitment over five years, has secured just $20 million in new money for journalists in its first year, with no guarantee the funding will continue. It's changed hands twice since the University of California, Berkeley withdrew its support [with school officials "worried they wouldn't have enough of a say in how the money was distributed"]. Suggestions that other big tech players like ChatGPT-maker OpenAI could front more resources haven't materialized. A $62.5 million "AI accelerator" tied to the deal hasn't been set up yet.

Not a single newsroom has seen a dollar of funding, and there's no definitive timeline spelling out when they will... [The article adds later that state officials "have yet to draft precise rules for how California will decide which newsrooms get cash..."] Conversations with at least 20 people involved in the deal's rollout reveal how California's budget shortfalls and intraparty spats among Democrats scrambled it... California's struggle to launch its program has dampened hopes of replicating its model in other states such as Oregon, Illinois and New York, where lawmakers have tried but failed to make Big Tech pay for news...

When [California governor] Newsom unveiled his final state budget plan in May 2025 after a $12 billion deficit suddenly scrambled the state's finances, California's first-year commitment was reduced from $30 million to $10 million. Google followed suit within days and cut its first-year contribution from $15 million to $10 million... Whether the program even continues past 2026 is also unclear. Newsom's office declined to confirm whether the state will provide its $10 million commitment to the fund in the coming 2026-27 state budget. Newsom will also be termed out in 2027, and there's no requirement for his successor to honor the state's agreement with Google.

The Military

Airlines Cancel Hundreds of Flights After U.S. Attack on Venezuela (cnbc.com) 180

CNBC reports that U.S. airlines have "canceled hundreds of flights to airports in Puerto Rico and Aruba, according to flight tallies from FlightAware and carriers' sites."

JetBlue, Southwest, and American Airlines were among the multiple airlines showing canceled flights, which "included close to 300 flights to and from San Juan, Puerto Rico's Luis Muñoz Marín International Airport, more than 40% of the day's schedule, according to FlightAware." Airlines canceled flights throughout the Caribbean on Saturday following U.S. strikes on Venezuela after the Federal Aviation Administration ordered commercial aircraft to avoid airspace in parts of the region.... It wasn't immediately clear how long the disruptions would last, though such broad restrictions are often temporary. Airlines said they would waive change fees and fare differences for customers affected by the airspace closures who could fly later in the month.
CNN cites a U.S. official who says more than 150 U.S. aircraft (including helicopters) launched from 20 different bases "on land and sea" during Friday's attack.

The U.S. has said the lights were out in Caracas during the attack, presumably because of a targeted strike on their power grid. "Videos filmed by Caracas residents showed parts of the city in the dark," reports the Miami Herald.

United Nations secretary-general António Guterres issued a statement via his spokesman saying he was "deeply concerned that the rules of international law have not been respected," (according to a Reuters report cited by the Guardian). The Guardian adds that "a number of nations have called for an emergency meeting of the UN Security Council, in New York, today, as a result of the U.S.'s unilateral action."

Security

DarkSpectre Hackers Spread Malware To 8.8 Million Chrome, Edge, and Firefox Users (cyberpress.org) 12

An anonymous reader quotes a report from Cyber Press: A newly uncovered Chinese threat group, DarkSpectre, has been linked to one of the most widespread browser-extension malware operations to date, compromising more than 8.8 million users of Chrome, Edge, Firefox, and Opera over the past seven years. According to research by Koi.ai, the group operates three interconnected campaigns: ShadyPanda, GhostPoster, and a newly identified one named The Zoom Stealer, forming a single, strategically organized operation.

DarkSpectre's structure differs from that of ordinary cybercrime operations. The group runs separate but interconnected malware clusters, each with distinct goals. The ShadyPanda campaign, responsible for 5.6 million infections, focuses on long-term user surveillance and e-commerce affiliate fraud. Its extensions have appeared legitimate for years, offering new tab pages and translation utilities, before secretly downloading malicious configurations from command-and-control servers such as jt2x.com and infinitynewtab.com. Once activated, they inject remote scripts, hijack search results, and track browsing activity.

The second campaign, GhostPoster, spreads via Firefox and Opera extensions that conceal malicious payloads in PNG images via steganography. After lying dormant for several days, the extensions extract and execute JavaScript hidden within images, enabling stealthy remote code execution. This campaign has affected over one million users and relies on domains like gmzdaily.com and mitarchive.info for payload delivery.
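The steganographic trick described above — a payload hidden in an image's pixel data — can be illustrated with a minimal, self-contained Python sketch. This is not DarkSpectre's actual code; the cover buffer and payload are made up, and it shows the classic least-significant-bit technique rather than whatever encoding the GhostPoster extensions really use.

```python
# Toy LSB steganography: store each payload bit in the low bit of one
# byte of raw pixel data, leaving the image visually unchanged.

def embed(pixels: bytearray, payload: bytes) -> bytearray:
    """Write each payload bit into the low bit of one pixel byte."""
    out = bytearray(pixels)
    for i, byte in enumerate(payload):
        for bit in range(8):
            idx = i * 8 + bit
            out[idx] = (out[idx] & 0xFE) | ((byte >> bit) & 1)
    return out

def extract(pixels: bytearray, length: int) -> bytes:
    """Recover `length` hidden bytes from the pixel low bits."""
    payload = bytearray()
    for i in range(length):
        byte = 0
        for bit in range(8):
            byte |= (pixels[i * 8 + bit] & 1) << bit
        payload.append(byte)
    return bytes(payload)

cover = bytearray(range(256)) * 2      # stand-in for raw PNG pixel data
secret = b"alert('payload')"           # hypothetical hidden script
stego = embed(cover, secret)

assert extract(stego, len(secret)) == secret
# Each byte changes by at most 1, so the image looks identical:
assert all(abs(a - b) <= 1 for a, b in zip(cover, stego))
```

Because each pixel byte changes by at most one unit, the doctored image is indistinguishable to the eye and to naive scanners, which is why dormant extensions can carry such payloads through store review.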

The most recent discovery, The Zoom Stealer, exposes around 2.2 million users to corporate espionage. These extensions masquerade as productivity tools or video downloaders while secretly harvesting corporate meeting links, credentials, and speaker profiles from more than 28 video conferencing platforms, including Zoom, Microsoft Teams, and Google Meet. The extensions use real-time WebSocket connections to exfiltrate data to Firebase databases, such as zoocorder.firebaseio.com, and to Google Cloud functions, such as webinarstvus.cloudfunctions.net.

Government

Trump Administration Removes Three Spyware-Linked Execs From Sanctions List (reuters.com) 35

Reuters reports that the United States Department of the Treasury under the Donald Trump administration has lifted sanctions on three executives linked to the spyware firm Intellexa. From the report: The move partially reverses the imposition of sanctions last year by then-President Joe Biden's administration on seven people tied to Intellexa. The Treasury Department at the time described the consortium, launched by former Israeli intelligence official Tal Dilian, as "a complex international web of decentralized companies that built and commercialized a comprehensive suite of highly invasive spyware products."

Treasury said in an email that the removal "was done as part of the normal administrative process in response to a petition request for reconsideration." It added that each of the individuals had "demonstrated measures to separate themselves from the Intellexa Consortium."

The notice said sanctions were lifted on Sara Hamou, whom the U.S. government accused of providing managerial services to Intellexa, Andrea Gambazzi, whose company was alleged by the U.S. government to have held the distribution rights to the Predator spyware, and Merom Harpaz, described by U.S. officials as a top executive in the consortium.

Government

NYC Inauguration Bans Raspberry Pi, Flipper Zero Devices (adafruit.com) 42

Longtime Slashdot reader ptorrone writes: The January 1, 2026, NYC mayoral inauguration prohibits attendees from bringing specific brand-name devices, explicitly banning Raspberry Pi single-board computers and the Flipper Zero, listed alongside weapons, explosives, and drones. Rather than restricting behaviors or capabilities like signal interference or unauthorized transmitters, the policy names two widely used educational and testing tools while allowing smartphones and laptops that are far more capable. Critics argue this device-specific ban creates confusion, encourages selective enforcement, and reflects security theater rather than a clear, capability-based public safety framework. New York has handled large-scale events more pragmatically before.

Government

Denmark's Main Postal Carrier Ends Letter Delivery (nytimes.com) 41

PostNord is ending letter delivery in Denmark after a 90%+ collapse in mail volume. It marks the first known case of a national postal carrier abandoning letters entirely -- a symbolic milestone of a fully digitized society that's sparking nostalgia even among people who stopped sending mail years ago. The New York Times reports: Denmark has had a postal service for more than 400 years. But a steep decline in its use has led the Nordic country's longtime postal carrier to stop letter deliveries entirely, a change taking effect on Tuesday.

Danes have seen it coming for months: The carrier, PostNord, has been removing its red mailboxes, once a ubiquitous public fixture. The disappearance of the mailboxes is "what actually made people emotional," said Julia Lahme, a trend researcher and the director of Lahme, a Danish communications agency, "even though most of them hadn't sent a letter in 18 months."

Letter writing in the country has declined by more than 90 percent since 2000, according to PostNord, which is owned jointly by the Danish and Swedish governments. Next year, in Denmark, it will only deliver packages, although in Sweden it will continue to deliver letters.

The change comes partly as a result of a drop-off in government mail. Denmark is one of the world's most digitized countries. Only 250,000 people, or less than 5 percent of the population, still receive their official communications in the mail. "People simply do not rely on physical letters the way they used to," Andreas Brethvad, the communications director of PostNord Denmark, said in an emailed statement. He said that because nine in 10 Danes shop online each month, the change "is about keeping up with times to meet the demands of society. It's a natural evolution."
The report notes that snail mail lovers will still be able to send and receive letters through Dao, a private company. "While some Danes are quietly mourning a service that, for the most part, they had largely stopped using, the transition feels like a sign of the times," reports the Times.

Crime

Cybersecurity Employees Plead Guilty To Ransomware Attacks 17

Two cybersecurity professionals who spent their careers defending organizations against ransomware attacks have pleaded guilty in a Florida federal court to using ALPHV/BlackCat ransomware to extort American businesses throughout 2023.

Ryan Goldberg, a 40-year-old incident response manager from Georgia, and Kevin Martin, a 36-year-old ransomware negotiator from Texas, admitted to conspiring to obstruct commerce through extortion. Between April and December 2023, Goldberg, Martin, and a third unnamed co-conspirator deployed the ransomware against multiple U.S. victims and agreed to pay ALPHV BlackCat's operators a 20% cut of any ransoms received. They successfully extracted approximately $1.2 million in Bitcoin from one victim, splitting their 80% share three ways before laundering the proceeds. Both men face up to 20 years in prison and are scheduled for sentencing on March 12, 2026.

The Justice Department noted that all three conspirators possessed specialized skills in securing computer systems against the very attacks they carried out. ALPHV BlackCat has targeted more than 1,000 victims globally and was the subject of an FBI disruption operation in December 2023 that saved victims an estimated $99 million through a custom decryption tool.

EU

Challenges Face European Governments Pursuing 'Digital Sovereignty' (theregister.com) 57

The Register reports on challenges facing Europe's pursuit of "digital sovereignty": The US CLOUD Act of 2018 allows American authorities to compel US-based technology companies to provide requested data, regardless of where that data is stored globally. This places European organizations in a precarious position, as it directly clashes with Europe's own stringent privacy regulation, the General Data Protection Regulation (GDPR)... Furthermore, these warrants often come with a gag order, legally prohibiting the provider from informing their customer that their data has been accessed. This renders any contractual clauses requiring transparency or notification effectively meaningless. While technical measures like encryption are often proposed as a solution, their effectiveness depends entirely on who controls the encryption keys. If the US provider manages the keys, as is common in many standard cloud services, they can be forced to decrypt the data for authorities, making such safeguards moot....
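The "who controls the keys" point above can be made concrete with a toy sketch: if the customer encrypts locally and the key never leaves them, the provider stores only ciphertext it cannot be compelled to decrypt. The one-time-pad XOR below is purely illustrative; a real customer-managed-key setup would use an authenticated cipher such as AES-GCM, and the names here are invented for the example.

```python
# Toy client-side encryption: the cloud provider never sees the key,
# so a CLOUD Act demand against the provider yields only ciphertext.
import secrets

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    """XOR one-time pad -- illustration only, not production crypto."""
    assert len(key) >= len(plaintext)
    return bytes(k ^ p for k, p in zip(key, plaintext))

decrypt = encrypt  # XOR is its own inverse

customer_key = secrets.token_bytes(64)            # never leaves the customer
document = b"quarterly figures"
stored_in_cloud = encrypt(customer_key, document) # all the provider holds

# An authority compelling the provider obtains only ciphertext:
assert stored_in_cloud != document
# Only the key holder can recover the data:
assert decrypt(customer_key, stored_in_cloud) == document
```

The contrast with standard cloud services is the point: when the provider manages the keys on the customer's behalf, the same legal demand reaches the plaintext.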

American hyperscalers have recognized the market demand for sovereignty and now aggressively market 'sovereign cloud' solutions, typically by placing datacenters on European soil or partnering with local operators. Critics call this 'sovereignty washing'... [Cristina Caffarra, a competition economist and driving force behind the Eurostack initiative] warns that this does not resolve the fundamental problem. "A company subject to the extraterritorial laws of the United States cannot be considered sovereign for Europe," she says. "That simply doesn't work." Because, as long as the parent company is American, it remains subject to the CLOUD Act...

Even when organizations make deliberate choices in favour of European providers, those decisions can be undone by market forces. A recent acquisition in the Netherlands illustrates this risk. In November 2025, the American IT services giant Kyndryl announced its intention to acquire Solvinity, a Dutch managed cloud provider. This came as an "unpleasant surprise" to several of its government clients, including the municipality of Amsterdam and the Dutch Ministry of Justice and Security. These bodies had specifically chosen Solvinity to reduce their dependence on American firms and mitigate CLOUD Act risks.

Still, The Register provides several examples of government systems that are "taking concrete steps to regain control over their IT."
  • Austria's Federal Ministry for Economy, Energy and Tourism now has 1,200 employees on the European open-source collaboration platform Nextcloud, leading several other Austrian ministries to also implement Nextcloud. (The Ministry's CISO tells the Register "We can see our input in Nextcloud releases. That is a feeling we never had with Microsoft.")
  • France's Ministry of Economics and Finance recently completed NUBO (which the Register describes as "an OpenStack-based private cloud initiative designed to handle sensitive data and services.")

Thanks to long-time Slashdot reader mspohr for sharing the article.


IOS

Apple To Allow Alternative App Stores For iOS Users In Brazil 6

Apple will allow alternative iOS app stores and external payment systems in Brazil after settling an antitrust case with the country's competition authority, following a lawsuit brought by MercadoLibre back in 2022. Thurrott reports: Yesterday, Brazil's Conselho Administrativo de Defesa Economica (CADE) explained in its press release that it has approved a Term of Commitment to Cease (TCC) submitted by Apple. To settle the lawsuit, the iPhone maker has agreed to allow third-party iOS app stores in Brazil and to let developers use external payment systems. The company will also use neutral wording in the warning messages about third-party app stores and external payment systems that iOS users in Brazil will see.

As part of the settlement, Apple has 105 days to implement these changes to avoid a fine of up to $27.1 million. A separate report from Brazilian blog Tecnoblog revealed that Apple will still take a 5% "Core Technology Commission" fee on transactions going through alternative app stores. Additionally, the company will take a 15% cut on in-app purchases for App Store apps when developers redirect users to their own payment systems.
AI

Italy Tells Meta To Suspend Its Policy That Bans Rival AI Chatbots From WhatsApp 4

Italy's antitrust regulator, the Italian Competition Authority, ordered Meta to suspend a policy that blocks rival AI chatbots from using WhatsApp's business APIs, citing potential abuse of market dominance. "Meta's conduct appears to constitute an abuse, since it may limit production, market access, or technical developments in the AI Chatbot services market, to the detriment of consumers," the Authority wrote. "Moreover, while the investigation is ongoing, Meta's conduct may cause serious and irreparable harm to competition in the affected market, undermining contestability." TechCrunch reports: The AGCM in November had broadened the scope of an existing investigation into Meta, after the company changed its business API policy in October to ban general-purpose chatbots from being offered on the chat app via the API. Meta has argued that its API isn't designed to be a platform for the distribution of chatbots and that people have more avenues beyond WhatsApp to use AI bots from other companies. The policy change, which goes into effect in January, would affect the availability of AI chatbots from the likes of OpenAI, Perplexity, and Poke on the app.
AI

China Is Worried AI Threatens Party Rule 21

An anonymous reader quotes a report from the Wall Street Journal: Concerned that artificial intelligence could threaten Communist Party rule, Beijing is taking extraordinary steps to keep it under control. Although China's government sees AI as crucial to the country's economic and military future, regulations and recent purges of online content show it also fears AI could destabilize society. Chatbots pose a particular problem: Their ability to think for themselves could generate responses that spur people to question party rule.

In November, Beijing formalized rules it has been working on with AI companies to ensure their chatbots are trained on data filtered for politically sensitive content, and that they can pass an ideological test before going public. All AI-generated texts, videos and images must be explicitly labeled and traceable, making it easier to track and punish anyone spreading undesirable content. Authorities recently said they removed 960,000 pieces of what they regarded as illegal or harmful AI-generated content during three months of an enforcement campaign. Authorities have officially classified AI as a major potential threat, adding it alongside earthquakes and epidemics to its National Emergency Response Plan.

Chinese authorities don't want to regulate too much, people familiar with the government's thinking said. Doing so could extinguish innovation and condemn China to second-tier status in the global AI race behind the U.S., which is taking a more hands-off approach toward policing AI. But Beijing also can't afford to let AI run amok. Chinese leader Xi Jinping said earlier this year that AI brought "unprecedented risks," according to state media. A lieutenant likened AI without safety measures to driving on a highway without brakes. There are signs that China is, for now, finding a way to thread the needle.

Chinese models are scoring well in international rankings, both overall and in specific areas such as computer coding, even as they censor responses about the Tiananmen Square massacre, human-rights concerns and other sensitive topics. Major American AI models are for the most part unavailable in China. It could become harder for DeepSeek and other Chinese models to keep up with U.S. models as AI systems become more sophisticated. Researchers outside of China who have reviewed both Chinese and American models also say that China's regulatory approach has some benefits: Its chatbots are often safer by some metrics, with less violence and pornography, and are less likely to steer people toward self-harm.
"The Communist Party's top priority has always been regulating political content, but there are people in the system who deeply care about the other social impacts of AI, especially on children," said Matt Sheehan, who studies Chinese AI at the Carnegie Endowment for International Peace, a think tank. "That may lead models to produce less dangerous content on certain dimensions."

Censorship

US Bars Five Europeans It Says Pressured Tech Firms To Censor American Viewpoints Online (apnews.com) 169

An anonymous reader quotes a report from the Associated Press: The State Department announced Tuesday it was barring five Europeans it accused of leading efforts to pressure U.S. tech firms to censor or suppress American viewpoints. The Europeans, characterized by Secretary of State Marco Rubio as "radical" activists and "weaponized" nongovernmental organizations, fell afoul of a new visa policy announced in May to restrict the entry of foreigners deemed responsible for censorship of protected speech in the United States. "For far too long, ideologues in Europe have led organized efforts to coerce American platforms to punish American viewpoints they oppose," Rubio posted on X. "The Trump Administration will no longer tolerate these egregious acts of extraterritorial censorship."

The five Europeans were identified by Sarah Rogers, the under secretary of state for public diplomacy, in a series of posts on social media. [...] The five Europeans named by Rogers are: Imran Ahmed, chief executive of the Centre for Countering Digital Hate; Josephine Ballon and Anna-Lena von Hodenberg, leaders of HateAid, a German organization; Clare Melford, who runs the Global Disinformation Index; and former EU Commissioner Thierry Breton, who was responsible for digital affairs. Rogers in her post on X called Breton, a French business executive and former finance minister, the "mastermind" behind the EU's Digital Services Act, which imposes a set of strict requirements designed to keep internet users safe online. This includes flagging harmful or illegal content like hate speech. She referred to Breton warning Musk of a possible "amplification of harmful content" by broadcasting his livestream interview with Trump in August 2024 when he was running for president.

Privacy

Inside Uzbekistan's Nationwide License Plate Surveillance System (techcrunch.com) 26

An anonymous reader quotes a report from TechCrunch: Across Uzbekistan, a network of about a hundred banks of high-resolution roadside cameras continuously scans vehicles' license plates and their occupants, sometimes thousands a day, looking for potential traffic violations: cars running red lights, drivers not wearing their seatbelts, and unlicensed vehicles driving at night, to name a few. The driver of one of the most surveilled vehicles in the system was tracked over six months as he traveled between the eastern city of Chirchiq, through the capital Tashkent, and the nearby settlement of Eshonguzar, often multiple times a week. We know this because the country's sprawling license plate-tracking surveillance system has been left exposed to the internet.

Security researcher Anurag Sen, who discovered the security lapse, found the license plate surveillance system exposed online without a password, allowing anyone access to the data within. It's not clear how long the surveillance system has been public, but artifacts from the system show that its database was set up in September 2024, and traffic monitoring began in mid-2025. The exposure offers a rare glimpse into how such national license plate surveillance systems work, the data they collect, and how they can be used to track the whereabouts of any one of the millions of people across an entire country. The lapse also reveals the security and privacy risks associated with the mass monitoring of vehicles and their owners, at a time when the United States is building up its nationwide array of license plate readers, many of which are provided by surveillance giant Flock.

The Courts

John Carreyrou and Other Authors Bring New Lawsuit Against Six Major AI Companies 32

A group of authors led by John Carreyrou has filed a new lawsuit against Anthropic, Google, OpenAI, Meta, xAI, and Perplexity, accusing the AI firms of training models on pirated copies of their books. TechCrunch reports: If this sounds familiar, it's because another set of authors already filed a class action suit against Anthropic for these same acts of copyright infringement. In that case, the judge ruled that it was legal for Anthropic and similar AI companies to train on pirated copies of books, but that it was not legal to pirate the books in the first place.

While eligible writers can receive about $3,000 from the $1.5 billion Anthropic settlement, some authors were dissatisfied with that resolution -- it doesn't hold AI companies accountable for the actual act of using stolen books to train their models, which generate billions of dollars in revenue.
The plaintiffs in the new lawsuit say the proposed Anthropic settlement "seems to serve [the AI companies], not creators."

"LLM companies should not be able to so easily extinguish thousands upon thousands of high-value claims at bargain-basement rates, eliding what should be the true cost of their massive willful infringement."
The Courts

Judge Blocks Texas App Store Age Verification Law (theverge.com) 43

A federal judge blocked Texas' app store age-verification law, ruling it likely violates the First Amendment by forcing platforms to gate speech and collect data in an overly broad way. The law was set to go into effect on January 1, 2026. The Verge reports: In an order granting a preliminary injunction on the Texas App Store Accountability Act (SB 2420), Judge Robert Pitman wrote that the statute "is akin to a law that would require every bookstore to verify the age of every customer at the door and, for minors, require parental consent before the child or teen could enter and again when they try to purchase a book." Pitman has not yet ruled on the merits of the case, but his decision to grant the preliminary injunction means he believes its defenders are unlikely to prevail in court.

Pitman found that the highest level of scrutiny must be applied to evaluate the law under the First Amendment, which means the state must prove the law is "the least restrictive means of achieving a compelling state interest." The judge found this is not the case and that it wouldn't even survive intermediate scrutiny, because Texas has so far failed to prove that its goals are connected to its methods. Since Texas already has a law requiring age verification for porn sites, Pitman said that "only in the vast minority of applications would SB 2420 have a constitutional application to unprotected speech not addressed by other laws." Though Pitman acknowledged the importance of safeguarding kids online, he added, "the means to achieve that end must be consistent with the First Amendment. However compelling the policy concerns, and however widespread the agreement that the issue must be addressed, the Court remains bound by the rule of law."
"The Texas App Store Accountability Act is the first among a series of similar state laws to face a legal challenge, making the ruling especially significant, as Congress considers a version of the statute," notes The Verge. "The laws, versions of which also passed in Utah and Louisiana, aim to impose age verification standards at the app store level, making companies like Apple and Google responsible for transmitting signals about users' ages to app developers to block users from age-inappropriate experiences."

"The state can still appeal the ruling with the Fifth Circuit Court of Appeals, which has a history of reversing blocks on internet regulations."
Piracy

LimeWire Re-Emerges In Online Rush To Share Pulled '60 Minutes' Segment (arstechnica.com) 128

An anonymous reader quotes a report from Ars Technica: CBS cannot contain the online spread of a "60 Minutes" segment that its editor-in-chief, Bari Weiss, tried to block from airing. The episode, "Inside CECOT," featured testimonies from US deportees who were tortured or suffered physical or sexual abuse at a notorious Salvadoran prison, the Center for the Confinement of Terrorism. "Welcome to hell," one former inmate was told upon arriving, the segment reported, while also highlighting a clip of Donald Trump praising CECOT and its leadership for "great facilities, very strong facilities, and they don't play games."

Weiss controversially pulled the segment on Monday, claiming it could not air in the US because it lacked critical voices, as no Trump officials were interviewed. She claimed that the segment "did not advance the ball" and merely echoed others' reporting, NBC News reported. Her plan was to air the segment when it was "ready," insisting that holding stories "for whatever reason" happens "every day in every newsroom." But Weiss apparently did not realize that "Inside CECOT" would still stream in Canada, giving the public a chance to view the segment as reporters had intended.

Critics accusing CBS of censoring the story quickly shared the segment online Monday after discovering that it was available on the Global TV app. Using a VPN to connect to the app with a Canadian IP address was all it took to override Weiss' block in the US, as 404 Media reported the segment was uploaded "to a variety of file sharing sites and services, including iCloud, Mega, and as a torrent," including on the recently revived file-sharing service LimeWire. It's currently also available to stream on the Internet Archive, where one reviewer largely summed up the public's response so far, writing, "cannot believe this was pulled, not a dang thing wrong with this segment except it shows truth."
"Yo what," joked Reddit user Howzitgoin, highlighting only the word "LimeWire." Another user responded, "man, who knew my nostalgia prof pic would become relevant again, WTF."

"Bringing back LimeWire to illegally rip copies of reporting suppressed by the government is definitely some cyberpunk shit," a Bluesky user wrote.

"We need a champion against the darkness," a Reddit commenter echoed. "I side with LimeWire."
United States

FCC Bans Foreign-Made Drones Over National Security, Spying Concerns (politico.com) 66

The FCC has banned approval of new foreign-made drones and components, citing "an unacceptable risk" to national security. The move will most heavily impact DJI but it "does not affect drones or drone components that are currently sold in the United States." Reuters reports: The tech was placed on the commission's "Covered List," barring DJI and other foreign drone manufacturers from receiving the FCC's approval to sell new drone models for import or sale in the U.S. In Monday's announcement, the agency said that the move "will reduce the risk of direct [drone] attacks and disruptions, unauthorized surveillance, sensitive data exfiltration and other [drone] threats to the homeland."

FCC Chair Brendan Carr said in a statement that while drones offer the potential to boost public safety and the U.S.' posture on global innovation, "criminals, terrorists and hostile foreign actors have intensified their weaponization of these technologies, creating new and serious threats to our homeland."

The ruling comes as China hawks in Congress amplify warnings about the security risks of drones made by DJI, which accounts for more than 90% of the global market share. But efforts to crack down on Capitol Hill have been met with some pushback due to the potential impacts of curbing the drone usage on U.S. businesses and law enforcement. A wide variety of sectors, including construction, energy, agriculture and mining companies, as well as local police and fire departments across the country, deploy DJI-made drones.

United States

Welcome To America's New Surveillance High Schools (forbes.com) 101

Beverly Hills High School has deployed an AI-powered surveillance apparatus that includes facial recognition cameras, behavioral analysis software, smoke detector-shaped bathroom listening devices from Motorola, drones, and license plate readers from Flock Safety -- a setup the district spent $4.8 million on in the 2024-2025 fiscal year and considers necessary given the school's high-profile location in Los Angeles.

Similar systems are spreading to campuses nationwide as schools try to stop mass shootings that killed 49 people on school property this year, 59 in 2024, and 45 in 2023. A 2023 ACLU report found that eight of the ten largest school shootings since Columbine occurred at schools that already had surveillance systems, and 32% of students surveyed said they felt like they were always being watched. The technology has a spotty track record, however.

Gun detection vendor Evolv, used by more than 800 schools including Beverly Hills High, was reprimanded by the FTC in 2024 for claiming its AI could detect all weapons after it failed to flag a seven-inch knife used to stab a student in 2022. Evolv has also flagged laptops and water bottles as guns. Rival vendor Omnilert flagged a 16-year-old student at a Maryland high school reaching for an empty Doritos bag as a possible gun threat; police held the teenager at gunpoint.

Not every school is buying in. Highline Schools in Washington state cancelled its $33,000 annual ZeroEyes contract this year and spent the money on defibrillators and Ford SUVs for its safety team instead.
Music

Spotify Says 'Anti-Copyright Extremists' Scraped Its Library (musically.com) 59

A group of activists has scraped Spotify's entire library, accessing 256 million rows of track metadata and 86 million audio files totaling roughly 300TB of data. The metadata has been released via Anna's Archive, a search engine for "shadow libraries" that previously focused on books.

Spotify described the activists as "anti-copyright extremists who've previously pirated content from YouTube and other platforms" and confirmed it is actively investigating the incident. The activists claim this represents "the world's first 'preservation archive' for music which is fully open" and covers "around 99.6% of listens."

They appear to have used Spotify's public web API to scrape the metadata and circumvented DRM to access audio files. Spotify insists that this is not a security breach affecting user data. The more pressing concern for the music industry, though, may be AI training rather than pirate streaming services -- similar YouTube datasets have reportedly been used by unlicensed generative AI music services.
Crime

In 2025 Scammers Have Stolen $835M from Americans Using Fake Customer Service Numbers (straitstimes.com) 26

They call it "the business-impersonator scam". And it's fooled 396,227 Americans in just the first nine months of 2025 — 18% more than the 335,785 in the same nine months of 2024. That's according to a Bloomberg reporter (who also fell for it in late November), citing the official statistics from America's Federal Trade Commission: Some pose as airline staff on social media and respond to consumer complaints. Others use texts or e-mails claiming to be an airline reporting a delayed or cancelled flight to phish for travellers' data. But the objective is always the same: to hit a stressed out, overwhelmed traveller at their most vulnerable. In my case, the scammer exploited weaknesses in Google's automated ad-screening system, so that fraudulent sponsored results rose to the top [They'd typed "United airlines agent on demand" into Google, and the top search result on their phone said United.com, had a 1-888 number next to it and said it had had 1M+ visits in past month. "It looked legit. I tapped the number..." ]

After I reported the fake "United Airlines" ad to Google, via an online form for consumers, it was taken down. But a few days later, I entered the same search terms and the identical ad featuring the same 1-888 number was back at the top of my results. I reported it again, and it was quickly removed again... A [Google] spokesperson there said the company is constantly evolving its tactics "to stay ahead of bad actors." Of the 5.1 billion ads blocked by the company last year, she said, 415 million were taken down for "scam-related violations." Google updated its ads misrepresentation policy in 2024 to include "impersonating or falsely implying affiliation with a public figure, brand or organization to entice users to provide money or information." Still, many impostor ads slip through the cracks.

"Reported losses from business-impostor scams in the United States rose 30 per cent, to US$835 million, in the first three quarters of 2025," the article points out (citing more figures from the America's Federal Trade Commision). An updated version of the article also includes a response from United Airlines. "We encourage customers to only use customer-service contact information that is listed on our website and app."

And what happened to the scammed reporter? "I called American Express and contested the charge before cancelling my credit card. I then contacted Experian, one of the three major credit bureaus, to put a fraud alert on my file. Next, I filed a complaint with the FTC and reported the fake ad to Google.

"American Express wound up resolving the dispute in my favour, but the memories of this chaotic Thanksgiving will stay with us forever. "
United States

The U.S. Could Ban Chinese-Made Drones Used By Police Departments (msn.com) 76

Tuesday the White House faces a deadline to decide "whether Chinese drone maker DJI Technologies poses a national security threat," reports Bloomberg. But their article notes it's "a decision with the potential to ground thousands of machines deployed by police and fire departments across the US."

One person making the case against the drones is Mike Nathe, a North Dakota Republican state representative described by the Post as "at the forefront of a nationwide campaign sounding alarms about the Made-in-China aircraft." Nathe tells them that "People do not realize the security issue with these drones, the amount of information that's being funneled back to China on a daily basis." The president already signed an executive order in June targeting "foreign control or exploitation" of America's drone supply chain. That came after Congress mandated a review to determine whether DJI deserves inclusion in a federal register of companies believed to endanger national security. If DJI doesn't get a clean bill of health for Christmas, it could join Huawei Technologies Co. Ltd. and ZTE Corp. on that Federal Communications Commission list. The designation would give the Trump administration authority to prevent new domestic sales or even impose a flight ban, affecting public agencies from New York to North Dakota to Nevada...

The fleet used by public safety agencies nationwide exceeds about 25,000 aircraft, said Chris Fink, founder of Unmanned Vehicle Technologies LLC, a Fayetteville, Arkansas-based firm that advises law-enforcement clients. The overwhelming majority of those drones — called uncrewed aerial vehicles, or UAVs, in industry parlance — comes from China, said Jon Beal, president of the Law Enforcement Drone Association, a training and advocacy group that counts DJI and some US competitors as corporate sponsors...

Currently, at least half a dozen states have targeted DJI and other Chinese-manufactured drones, including restrictions in Arkansas, Mississippi and Tennessee. A Nevada law prohibiting public agencies from using Chinese drones took effect in January... Legislators also took up the cause in Connecticut, which passed a law this year preventing public offices from using Chinese drones. Supporters said they're worried about these eyes in the skies being used for spying. "We're kind of sitting ducks," said Bob Duff, the Democratic majority leader in the state senate who promoted the legislation. "They are designed to infiltrate systems even when the users don't think that they will."

One North Dakota sheriff's department complains U.S.-made drones are "at least double and triple the price out of the gate," according to the article, which adds that public safety officials "say it's difficult to find domestic alternatives that match DJI in price and performance."

And DJI "wants an extension on the security review," according to the article, "saying Tuesday is too soon to make a conclusion."
United States

Trump Admin to Hire 1,000 for New 'Tech Force' to Build AI Infrastructure (cnbc.com) 56

An anonymous reader shared this report from CNBC: The Trump administration on Monday unveiled a new initiative dubbed the "U.S. Tech Force," comprising about 1,000 engineers and other specialists who will work on artificial intelligence infrastructure and other technology projects throughout the federal government.

Participants will commit to a two-year employment program working with teams that report directly to agency leaders in "collaboration with leading technology companies," according to an official government website. ["...and work closely with senior managers from companies partnering with the Tech Force."] Those "private sector partners" include Amazon Web Services, Apple, Google Public Sector, Dell Technologies, Microsoft, Nvidia, OpenAI, Oracle, Palantir, Salesforce and numerous others [including AMD, IBM, Coinbase, Robinhood, Uber, xAI, and Zoom], the website says.

The Tech Force shows the Trump administration increasing its focus on developing America's AI infrastructure as it competes with China for dominance in the rapidly growing industry... The engineering corps will be working on "high-impact technology initiatives including AI implementation, application development, data modernization, and digital service delivery across federal agencies," the site says.

"Answer the call," says the new web site at TechForce.gov.

"Upon completing the program, engineers can seek employment with the partnering private-sector companies for potential full-time roles — demonstrating the value of combining civil service with technical expertise." [And those private sector companies can also nominate employees to participate.] "Annual salaries are expected to be in the approximate range of $150,000 to $200,000."
Crime

Flock Executive Says Their Camera Helped Find Shooting Suspect, Addresses Privacy Concerns (cnn.com) 59

During the search for the Brown University shooting suspect, a law enforcement press conference included a request for "Ring camera footage from residents and businesses near Brown University," according to local news reports.

But in the end it was Flock cameras according to an article in Gizmodo, after a Reddit poster described seeing "odd" behavior of someone who turned out to be the suspect: The original Reddit poster, identified only as John in the affidavit, contacted police the next day and came in for an interview. He told them about his odd encounter with the suspect, noting that he was acting suspiciously by not having appropriate cold-weather clothes on when he saw him in a bathroom at Brown University. That was two hours before the shooting. After spotting him in the bathroom wearing a mask, John actually started following the suspect in what he called a "game of cat and mouse...." Police detectives showed John two images obtained through Flock, the company that's built extensive surveillance infrastructure across the U.S. used by investigators, and he recognized the suspect's vehicle, replying, "Holy shit. That might be it," according to the affidavit. Police were able to track down the license plate of the rental car, which gave them a name, and within 24 hours, they had found Claudio Manuel Neves Valente dead in a storage facility in Salem, New Hampshire, where he reportedly rented a unit.
"We intend to continue using technology to make sure our law enforcement are empowered to do their jobs," Flock's safety CEO Garrett Langley wrote on X.com, pinning the post to the top of his feed.

Though ironically, hours before Providence Police Chief Oscar Perez credited Flock for helping to find the suspect, CNN was interviewing Flock Safety's CEO to discuss "his response to recent privacy concerns surrounding Flock's technology." To Langley, the situation underscored the value and importance of Flock's technology, despite mounting privacy concerns that have prompted some jurisdictions to cancel contracts with the company... Langley told me on Thursday that he was motivated to start Flock to keep Americans safer. His goal is to deter crime by convincing would-be criminals they'll be caught... One of Flock's cameras had recently spotted [the suspect's] car, helping police pinpoint Valente's location. Flock turned on additional AI capabilities that were not part of Providence Police's contract with the company to assist in the hunt, a company spokesperson told CNN, including a feature that can identify the same vehicle based on its description even if its license plates have been changed.

The company has faced criticism from some privacy advocates and community groups who worry that its networks of cameras are collecting too much personal information from private citizens and could be misused. Both the Electronic Frontier Foundation and the American Civil Liberties Union have urged communities not to work with Flock. "State legislatures and local governments around the nation need to enact strong, meaningful protections of our privacy and way of life against this kind of AI surveillance machinery," ACLU Senior Policy Analyst Jay Stanley wrote in an August blog post. Flock also drew scrutiny in October when it announced a partnership with Amazon's Ring doorbell camera system... ["Local officers using Flock Safety's technology can now post a request directly in the Ring Neighbors app asking for help," explains Flock's blog post.]

Langley told me it was up to police to reassure communities that the cameras would be used responsibly... "If you don't trust law enforcement to do their job, that's actually what you're concerned about, and I'm not going to help people get over that." Langley added that Flock has built some guardrails into its technology, including audit trails that show when data was accessed. He pointed to a case in Georgia where that audit found a police chief using data from LPR cameras to stalk and harass people. The chief resigned and was arrested and charged in November...

More recently, the company rolled out a "drone as first responder" service — where law enforcement officers can dispatch a drone equipped with a camera, whose footage is similarly searchable via AI, to evaluate the scene of an emergency call before human officers arrive. Flock's drone systems completed 10,000 flights in the third quarter of 2025 alone, according to the company... I asked what he'd tell communities already worried about surveillance from LPRs who might be wary of camera-equipped drones also flying overhead. He said cities can set their own limitations on drone usage, such as only using drones to respond to 911 calls or positioning the drones' cameras on the horizon while flying until they reach the scene. He added that the drones fly at an elevation of 400 feet.

AI

Pro-AI Group Launches First of Many Attack Ads for US Election (yahoo.com) 26

"Super PAC aims to drown out AI critics in midterms," the Washington Post reported in August, noting its intial funding over $100 million from "some of Silicon Valley's most powerful investors and executives" including OpenAI president Greg Brockman, his wife, and VC firm Andreessen Horowitz. The group's goal was "to quash a philosophical debate that has divided the tech industry on the risk of artificial intelligence overpowering humanity," according to the article — and to support "pro-AI" candidates in America's next election in November of 2026 and "oppose candidates perceived as slowing down AI development."

Their first target? State assemblyman Alex Bores, now running to be a U.S. representative. While in the state legislature Bores sponsored a bill that would "require large AI companies to publish safety data on their technology," notes the Washington Post. So the attack ad charges that Bores "wants Albany bureaucrats regulating AI," excoriating him for sponsoring a bill that "hands AI to state regulators and creates a chaotic patchwork of state rules that would crush innovation, cost New York jobs, and fail to keep people safe! And he's backed by groups funded by convicted felon Sam Bankman-Fried. Is that really who should be shaping AI safety for our kids? America needs one smart national policy that sets clear stands for safe AI not Albany politicians like Alex Bores."

The Post calls it "the opening skirmish in a battle set to play out across the country" as tech moguls (and an independent effort receiving "tens of millions" from Meta) "try to use the 2026 midterms to reengineer Congress and state legislatures in favor of their ambitions for artificial intelligence" and "to wrest control of the narrative around AI, just as politicians in both parties have started warning that the industry is moving too fast." By knocking down candidates such as Bores, who favor regulations, and boosting industry sympathizers, the tech-backed groups could signal to incumbents and candidates nationwide that opposing the tech industry can jeopardize their electoral chances. "Bores just happened to be first, but he's not the last, and he's certainly not the only," said Josh Vlasto, co-head of Leading the Future, the bipartisan super PAC behind the ad.

The group plans to support and oppose candidates in congressional and state elections next year. It will also fund rapid response operations against voices in the industry pushing for more oversight... The strategy aims to replicate the success of the cryptocurrency industry, which used a super PAC to clear a path for Congress this summer to boost the sector's fortunes with the passage of the Genius Act... But signs that voters are increasingly wary of AI suggest that approach may be challenging to replicate. More than half of Americans believe AI poses a high risk to society, Pew Research Center found in a June survey. As AI usage continues to grow, more people are being warned by chief executives that AI will disrupt their jobs, seeing power-hungry data centers spring up in their towns or hearing claims that chatbots can harm mental health.

The article also notes there's at least two other groups seeking to counter this pro-AI push, raising money through a nonprofit called "Public First."

CNN calls the new pro-AI ads "a likely preview of the vast amounts of money the technology industry could spend ahead of next year's elections," noting that the ads are first targeting the candidate-choosing primary elections.
Google

Google Sues SerpApi Over Scraping and Reselling Search Data (searchengineland.com) 37

An anonymous reader quotes a report from Search Engine Land: Google said today that it is suing SerpApi, accusing the company of bypassing security protections to scrape, harvest, and resell copyrighted content from Google Search results. The allegations: Google said SerpApi:

- Circumvented Google's security measures and industry-standard crawling controls.
- Ignored website directives that specify whether content can be accessed.
- Used cloaking, rotating bot identities, and large bot networks to scrape content at scale.
- Took licensed content from Search features, including images and real-time data, and resold it for profit.

What Google is saying. "Stealthy scrapers like SerpApi override [crawling] directives and give sites no choice at all," Google wrote, calling the alleged scraping "brazen" and "unlawful." Google said SerpApi's activity "increased dramatically over the past year." [...] If Google wins, reliable SERP data could become harder to get, more expensive, or both -- especially for teams that rely on tools powered by services like SerpApi. As AI already reduces clicks and transparency, Google now appears intent on making it even harder for brands to understand how Search works, how they appear in results, and how to measure success.
