Google

Epic Games Sues Google and Samsung Over App Store Restrictions 45

Epic Games filed a new antitrust lawsuit against Google and Samsung, alleging they conspired to undermine third-party app stores. The suit focuses on Samsung's "Auto Blocker" feature, now enabled by default on new phones, which restricts app installations to "authorized sources" -- primarily Google and Samsung's stores.

Epic claims Auto Blocker creates significant barriers for rival stores, requiring users to navigate a complex process to install third-party apps. The company argues this feature does not actually assess app safety, but is designed to stifle competition. Epic CEO Tim Sweeney stated the lawsuit aims to benefit all developers, not secure special privileges for Epic. The company seeks either default deactivation of Auto Blocker or creation of a fair whitelisting process for legitimate apps. This legal action follows Epic's December victory against Google in a separate antitrust case. Epic recently launched its own mobile app store, which it claims faces unfair obstacles due to Auto Blocker.
AI

California's Governor Just Vetoed Its Controversial AI Bill (techcrunch.com) 35

"California Governor Gavin Newsom has vetoed SB 1047, a high-profile bill that would have regulated the development of AI," reports TechCrunch. The bill "would have made companies that develop AI models liable for implementing safety protocols to prevent 'critical harms'." The rules would only have applied to models that cost at least $100 million and use 10^26 FLOPS (floating point operations, a measure of computation) during training.
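For scale, the bill's 10^26 FLOP threshold can be sanity-checked with the widely used 6·N·D rule of thumb for dense-transformer training compute (roughly 6 FLOPs per parameter per training token). This heuristic and the model size below are illustrative assumptions, not figures from the bill or any disclosed training run:

```python
# Rough training-compute estimate via the common 6 * N * D
# approximation: total FLOPs ~= 6 x parameters x training tokens.
# The 70B-parameter / 15T-token example is hypothetical.

def training_flops(params: float, tokens: float) -> float:
    """Approximate total training FLOPs for a dense transformer."""
    return 6 * params * tokens

SB1047_FLOP_THRESHOLD = 1e26

flops = training_flops(70e9, 15e12)   # hypothetical 70B model, 15T tokens
print(f"{flops:.2e} FLOPs")           # 6.30e+24 FLOPs
print(flops > SB1047_FLOP_THRESHOLD)  # False: well under the bill's bar
```

Under this heuristic, even a large present-day training run of that size lands around 10^24-10^25 FLOPs, which is why the threshold was read as targeting only the very largest frontier models.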

SB 1047 was opposed by many in Silicon Valley, including companies like OpenAI, high-profile technologists like Meta's chief AI scientist Yann LeCun, and even Democratic politicians such as U.S. Congressman Ro Khanna. That said, the bill had also been amended based on suggestions by AI company Anthropic and other opponents.

In a statement about today's veto, Newsom said, "While well-intentioned, SB 1047 does not take into account whether an AI system is deployed in high-risk environments, involves critical decision-making or the use of sensitive data. Instead, the bill applies stringent standards to even the most basic functions — so long as a large system deploys it. I do not believe this is the best approach to protecting the public from real threats posed by the technology."

"Over the past 30 days, Governor Newsom signed 17 bills covering the deployment and regulation of GenAI technology..." according to a statement from the governor's office, "cracking down on deepfakes, requiring AI watermarking, protecting children and workers, and combating AI-generated misinformation... The Newsom Administration will also immediately engage academia to convene labor stakeholders and the private sector to explore approaches to use GenAI technology in the workplace."

In a separate statement the governor pointed out California "is home to 32 of the world's 50 leading AI companies," and warned that the bill "could give the public a false sense of security about controlling this fast-moving technology. Smaller, specialized models may emerge as equally or even more dangerous than the models targeted by SB 1047 — at the potential expense of curtailing the very innovation that fuels advancement in favor of the public good..."


Interestingly, the Los Angeles Times reported that the vetoed bill had been supported by Mark Hamill, J.J. Abrams, and "more than 125 Hollywood actors, directors, producers, music artists and entertainment industry leaders" who signed a letter of support. (And that bill also cited the support of "over a hundred current and former employees of OpenAI, Google DeepMind, Anthropic, Meta, and xAI...")
AI

Can AI Developers Be Held Liable for Negligence? (lawfaremedia.org) 123

Bryan Choi, an associate professor of law and computer science focusing on software safety, proposes shifting AI liability onto the builders of the systems: To date, most popular approaches to AI safety and accountability have focused on the technological characteristics and risks of AI systems, while averting attention from the workers behind the curtain responsible for designing, implementing, testing, and maintaining such systems...

I have previously argued that a negligence-based approach is needed because it directs legal scrutiny on the actual persons responsible for creating and managing AI systems. A step in that direction is found in California's AI safety bill, which specifies that AI developers shall articulate and implement protocols that embody the "developer's duty to take reasonable care to avoid producing a covered model or covered model derivative that poses an unreasonable risk of causing or materially enabling a critical harm" (emphasis added). Although tech leaders have opposed California's bill, courts don't need to wait for legislation to allow negligence claims against AI developers. But how would negligence work in the AI context, and what downstream effects should AI developers anticipate?

The article suggests two possibilities. Classifying AI developers as ordinary employees would leave employers sharing liability for negligent acts (giving them "strong incentives to obtain liability insurance policies and to defend their employees against legal claims"). But AI developers could also be treated as practicing professionals (like physicians and attorneys). "In this regime, each AI professional would likely need to obtain their own individual or group malpractice insurance policies." AI is a field that perhaps uniquely seeks to obscure its human elements in order to magnify its technical wizardry. The virtue of the negligence-based approach is that it centers legal scrutiny back on the conduct of the people who build and hype the technology. To be sure, negligence is limited in key ways and should not be viewed as a complete answer to AI governance. But fault should be the default and the starting point from which all conversations about AI accountability and AI safety begin.
Thanks to long-time Slashdot reader david.emery for sharing the article.
United States

EPA Must Address Fluoridated Water's Risk To Children's IQs, US Judge Rules (reuters.com) 153

An anonymous reader quotes a report from Reuters: A federal judge in California has ordered the U.S. Environmental Protection Agency to strengthen regulations for fluoride in drinking water, saying the compound poses an unreasonable potential risk to children at levels that are currently typical nationwide. U.S. District Judge Edward Chen in San Francisco on Tuesday sided (PDF) with several advocacy groups, finding the current practice of adding fluoride to drinking water supplies to fight cavities presented unreasonable risks for children's developing brains.

Chen said the advocacy groups had established during a non-jury trial that fluoride posed an unreasonable risk of harm sufficient to require a regulatory response by the EPA under the Toxic Substances Control Act. "The scientific literature in the record provides a high level of certainty that a hazard is present; fluoride is associated with reduced IQ," wrote Chen, an appointee of Democratic former President Barack Obama. But the judge stressed he was not concluding with certainty that fluoridated water endangered public health. [...] The EPA said it was reviewing the decision.
"The court's historic decision should help pave the way towards better and safer fluoride standards for all," Michael Connett, a lawyer for the advocacy groups, said in a statement on Wednesday.
The Courts

'Anne Frank' Copyright Dispute Triggers VPN, Geoblocking Questions At EU's Highest Court (torrentfreak.com) 98

An anonymous reader quotes a report from TorrentFreak: The Dutch Supreme Court has requested guidance from the EU's top court on geo-blocking, VPNs, and copyright in a case involving the online publication of Anne Frank's manuscripts. The CJEU's response has the potential to reshape the online content distribution landscape, impacting streaming platforms and other services that rely on geo-blocking. VPN services will monitor the matter with great interest too. [...] While early versions are presumably in the public domain in several countries, the original manuscripts are protected by copyright in the Netherlands until 2037. As a result, the copies published by the Dutch Anne Frank Stichting are blocked for Dutch visitors. "The scholarly edition of the Anne Frank manuscripts cannot be made available in all countries, due to copyright considerations," is the message disallowed visitors get to see.

This blocking effort is the result of a copyright battle. Ideally, Anne Frank Stichting would like to make the manuscripts available worldwide, but the Swiss 'Fonds' has not given permission for it to do so. And since some parts of the manuscript were first published in 1986, Dutch copyrights are still valid. In theory, geo-blocking efforts could alleviate the copyright concerns but, for the Fonds, these measures are not sufficient. After pointing out that people can bypass the blocking efforts with a VPN, it took the matter to court. Around the world, publishers and streaming services use geo-blocking as the standard measure to enforce geographical licenses. This applies to the Anne Frank Stichting, as well as Netflix, BBC iPlayer, news sites, and gaming platforms. The Anne Frank Fonds doesn't dispute this, but argued in court that people can circumvent these restrictions with a VPN, suggesting that the manuscripts shouldn't be published online at all. The lower court dismissed this argument, stating the defendants had taken reasonable measures to prevent access from the Netherlands. The Fonds appealed, but the appeal was also dismissed, and the case is now before the Dutch Supreme Court.

The Fonds argues that the manuscript website is (in part) directed at a Dutch audience. Therefore, the defendants are making the manuscripts available in the Netherlands, regardless of the use of any blocking measures. The defendants, in turn, argue that the use of state-of-the-art geo-blocking, along with additional measures like a user declaration, is sufficient to prevent a communication to the public in the Netherlands. The defense relied on the opinion in the GO4YU case, which suggests that circumventing geo-blocking with a VPN does not constitute a communication to the public in the blocked territory, unless the blocking is intentionally ineffective.

Movies

US Trademark Office Cancels Marvel, DC's 'Super Hero' Trademarks (reuters.com) 31

A U.S. Trademark Office tribunal canceled Marvel and DC's jointly owned "Super Hero" trademarks after the companies failed to respond to a request by London-based Superbabies Ltd, which argued the marks couldn't be owned collectively or monopolize the superhero genre. The ruling was "not just a win for our client but a victory for creativity and innovation," said Superbabies attorney Adam Adler of Reichman Jorgensen Lehman & Feldberg. "By establishing SUPER HEROES' place in the public domain, we safeguard it as a symbol of heroism available to all storytellers." Reuters reports: Rivals Marvel and DC jointly own four federal trademarks covering the terms "Super Hero" and "Super Heroes," the oldest of which dates back to 1967. Superbabies founder Richold writes comics featuring a team of super-hero babies called the Super Babies. According to Richold, DC accused his company of infringing the "Super Hero" marks and threatened legal action after Superbabies Ltd applied for U.S. trademarks covering the "Super Babies" name. Marvel and DC have cited their marks in opposing dozens of superhero-related trademark applications at the USPTO, according to the office's records. Superbabies petitioned the office to cancel the marks in May. It argued that Marvel and DC cannot "claim ownership over an entire genre" with their trademarks, and that the two competitors cannot own trademarks together.
Privacy

Meta Fined $102 Million For Storing 600 Million Passwords In Plain Text (appleinsider.com) 28

Meta has been fined $101.5 million by the Irish Data Protection Commission (DPC) for storing over half a billion user passwords in plain text for years, with some engineers having access to this data for over a decade. The issue, discovered in 2019, predominantly affected non-US users, especially those using Facebook Lite. AppleInsider reports: Meta Ireland was found guilty of infringing four parts of GDPR, including how it "failed to notify the DPC of a personal data breach concerning storage of user passwords in plain text." Meta Ireland did report the failure, but only some months after it was discovered. "It is widely accepted that user passwords should not be stored in plaintext, considering the risks of abuse that arise from persons accessing such data," said Graham Doyle, Deputy Commissioner at the DPC, in a statement about the fine. "It must be borne in mind, that the passwords the subject of consideration in this case, are particularly sensitive, as they would enable access to users' social media accounts."

Other than the fine and an official reprimand, the full extent of the DPC's ruling is yet to be released publicly. The details published so far do not reveal whether the passwords included any belonging to US users as well as to users in Ireland or across the rest of the European Union. It's most likely that the issue concerns only non-US users, however. That's because in 2019, Facebook told CNN that the majority of the plain text passwords were for a service called Facebook Lite, which it described as being a cut-down service for areas of the world with slower connectivity.
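The practice the DPC faulted, storing passwords in plain text, is avoided by storing only a salted, deliberately slow hash so that even a leaked credential database yields nothing directly usable. A minimal sketch using Python's standard library follows (production systems more commonly use dedicated schemes such as bcrypt or Argon2, and the cost parameters here are illustrative):

```python
# Store a salted scrypt hash instead of the plaintext password.
# scrypt is memory-hard and slow by design, which blunts offline
# guessing even if the hash database leaks. The n/r/p parameters
# below are illustrative; tune them for your hardware.
import hashlib
import hmac
import os

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Return (salt, digest) for storage; never store the password itself."""
    salt = os.urandom(16)  # unique per password, defeats rainbow tables
    digest = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    """Recompute the hash and compare in constant time."""
    candidate = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return hmac.compare_digest(candidate, digest)

salt, digest = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, digest))  # True
print(verify_password("wrong guess", salt, digest))                   # False
```

Note that nothing in this scheme lets engineers recover the original password, which is precisely the property that was missing in the incident the DPC fined.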

Businesses

If 23andMe Is Up for Sale, So Is All That DNA (msn.com) 56

23andMe is not doing well. Its stock is on the verge of being delisted. It shut down its in-house drug-development unit last month, only the latest in several rounds of layoffs. Last week, the entire board of directors quit, save for Anne Wojcicki, a co-founder and the company's CEO. Amid this downward spiral, Wojcicki has said she'll consider selling 23andMe -- which means the DNA of 23andMe's 15 million customers would be up for sale, too. The Atlantic: 23andMe's trove of genetic data might be its most valuable asset. For about two decades now, since human-genome analysis became quick and common, the A's, C's, G's, and T's of DNA have allowed long-lost relatives to connect, revealed family secrets, and helped police catch serial killers. Some people's genomes contain clues to what's making them sick, or even, occasionally, how their disease should be treated. For most of us, though, consumer tests don't have much to offer beyond a snapshot of our ancestors' roots and confirmation of the traits we already know about. 23andMe is floundering in part because it hasn't managed to prove the value of collecting all that sensitive, personal information. And potential buyers may have very different ideas about how to use the company's DNA data to raise the company's bottom line. This should concern anyone who has used the service.
Government

White House Agonizes Over UN Cybercrime Treaty (politico.com) 43

The United Nations is set to vote on a treaty later this year intended to create norms for fighting cybercrime -- and the Biden administration is fretting over whether to sign on. Politico: The uncertainty over the treaty stems from fears that countries including Russia, Iran and China could use the text as a guise for U.N. approval of their widespread surveillance measures and suppression of the digital rights of their citizens. If the United States chooses not to vote in favor of the treaty, it could become easier for these adversarial nations -- named by the Cybersecurity and Infrastructure Security Agency as the biggest state sponsors of cybercrime -- to take the lead on cyber issues in the future. And if the U.S. walks away from the negotiating table now, it could upset other nations that spent several years trying to nail down the global treaty with competing interests in mind.

While the treaty is not set for a vote during the U.N. General Assembly this week, it's a key topic of debate on the sidelines, following meetings in New York City last week, and committee meetings set for next month once the world's leaders depart. The treaty was troubled from its inception. A cybercrime convention was originally proposed by Russia, and the U.N. voted in late 2019 to start the process to draft it -- overruling objections by the U.S. and other Western nations. Those countries were worried Russia would use the agreement as an alternative to the Budapest Convention -- an existing accord on cybercrime administered by the Council of Europe, which Russia, China and Iran have not joined.

Crime

South Korea Criminalizes Watching Or Possessing Sexually Explicit Deepfakes (reuters.com) 69

An anonymous reader quotes a report from Reuters: South Korean lawmakers on Thursday passed a bill that criminalizes possessing or watching sexually explicit deepfake images and videos, with penalties set to include prison terms and fines. There has been an outcry in South Korea over Telegram group chats where sexually explicit and illegal deepfakes were created and widely shared, prompting calls for tougher punishment. Anyone purchasing, saving or watching such material could face up to three years in jail or be fined up to 30 million won ($22,600), according to the bill.

Currently, making sexually explicit deepfakes with the intention of distributing them is punishable by five years in prison or a fine of 50 million won under the Sexual Violence Prevention and Victims Protection Act. When the new law takes effect, the maximum sentence for such crimes will also increase to seven years regardless of the intention. The bill will now need the approval of President Yoon Suk Yeol in order to be enacted. South Korean police have so far handled more than 800 deepfake sex crime cases this year, the Yonhap news agency reported on Thursday. That compares with 156 for all of 2021, when data was first collated. Most victims and perpetrators are teenagers, police say.

Privacy

NIST Proposes Barring Some of the Most Nonsensical Password Rules (arstechnica.com) 180

Ars Technica's Dan Goodin reports: Last week, NIST released its second public draft of SP 800-63-4, the latest version of its Digital Identity Guidelines. At roughly 35,000 words and filled with jargon and bureaucratic terms, the document is nearly impossible to read all the way through and just as hard to understand fully. It sets both the technical requirements and recommended best practices for determining the validity of methods used to authenticate digital identities online. Organizations that interact with the federal government online are required to be in compliance. A section devoted to passwords injects a large helping of badly needed common sense practices that challenge common policies. An example: The new rules bar the requirement that end users periodically change their passwords. This requirement came into being decades ago when password security was poorly understood, and it was common for people to choose common names, dictionary words, and other secrets that were easily guessed.

Since then, most services require the use of stronger passwords made up of randomly generated characters or phrases. When passwords are chosen properly, the requirement to periodically change them, typically every one to three months, can actually diminish security because the added burden incentivizes weaker passwords that are easier for people to set and remember. Another requirement that often does more harm than good is the required use of certain characters, such as at least one number, one special character, and one upper- and lowercase letter. When passwords are sufficiently long and random, there's no benefit from requiring or restricting the use of certain characters. And again, rules governing composition can actually lead to people choosing weaker passcodes.

The latest NIST guidelines now state that:
- Verifiers and CSPs SHALL NOT impose other composition rules (e.g., requiring mixtures of different character types) for passwords and
- Verifiers and CSPs SHALL NOT require users to change passwords periodically. However, verifiers SHALL force a change if there is evidence of compromise of the authenticator.

("Verifiers" is bureaucrat speak for the entity that verifies an account holder's identity by corroborating the holder's authentication credentials. Short for credential service provider, "CSPs" are a trusted entity that assigns or registers authenticators to the account holder.) In previous versions of the guidelines, some of the rules used the words "should not," which means the practice is not recommended as a best practice. "Shall not," by contrast, means the practice must be barred for an organization to be in compliance.
Several other common sense practices mentioned in the document include:
1. Verifiers and CSPs SHALL require passwords to be a minimum of eight characters in length and SHOULD require passwords to be a minimum of 15 characters in length.
2. Verifiers and CSPs SHOULD permit a maximum password length of at least 64 characters.
3. Verifiers and CSPs SHOULD accept all printing ASCII [RFC20] characters and the space character in passwords.
4. Verifiers and CSPs SHOULD accept Unicode [ISO/ISC 10646] characters in passwords. Each Unicode code point SHALL be counted as a single character when evaluating password length.
5. Verifiers and CSPs SHALL NOT impose other composition rules (e.g., requiring mixtures of different character types) for passwords.
6. Verifiers and CSPs SHALL NOT require users to change passwords periodically. However, verifiers SHALL force a change if there is evidence of compromise of the authenticator.
7. Verifiers and CSPs SHALL NOT permit the subscriber to store a hint that is accessible to an unauthenticated claimant.
8. Verifiers and CSPs SHALL NOT prompt subscribers to use knowledge-based authentication (KBA) (e.g., "What was the name of your first pet?") or security questions when choosing passwords.
9. Verifiers SHALL verify the entire submitted password (i.e., not truncate it).
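The SHALL/SHOULD requirements above translate into a very simple verifier-side acceptance check: length is the only hard constraint, with no composition rules and no truncation. A sketch under those rules (the numbers in the comments refer to the list above; note that rule 4 counts length in Unicode code points, which matches Python's len() on str):

```python
# Password acceptance check following the draft SP 800-63-4 rules
# quoted above: length-only policy (rules 1-2), no composition
# rules (rule 5), Unicode accepted with each code point counted
# as one character (rule 4). Rejecting passwords over MAX_LEN is
# allowed because rule 2 only requires permitting *at least* 64.
MIN_LEN = 8           # SHALL require at least 8 (rule 1)
RECOMMENDED_MIN = 15  # SHOULD require at least 15 (rule 1)
MAX_LEN = 64          # SHOULD permit at least 64 (rule 2)

def accept_password(password: str) -> bool:
    n = len(password)  # counts Unicode code points, not bytes (rule 4)
    return MIN_LEN <= n <= MAX_LEN

print(accept_password("correct horse battery staple"))  # True: long passphrase
print(accept_password("short"))                         # False: under 8
print(accept_password("password\U0001F511\u00e9toile")) # True: Unicode is fine
```

Conspicuously absent is any check for digits, symbols, or mixed case, and any expiry timestamp: under the draft, adding those would put a verifier out of compliance rather than ahead of it.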

Mozilla

Mozilla Hit With Privacy Complaint In EU Over Firefox Tracking Tech (techcrunch.com) 21

Mozilla has been hit with a complaint by EU privacy group noyb, accusing it of violating GDPR by tracking Firefox users by default without their consent. TechCrunch reports: Mozilla calls the feature at issue "Privacy Preserving Attribution" (PPA). But noyb argues this is misdirection. And if EU privacy regulators agree with the complaint the Firefox-maker could be slapped with orders to change tack -- or even face a penalty (the GDPR allows for fines of up to 4% of global revenue). "Contrary to its reassuring name, this technology allows Firefox to track user behaviour on websites," noyb wrote in a press release. "In essence, the browser is now controlling the tracking, rather than individual websites. While this might be an improvement compared to even more invasive cookie tracking, the company never asked its users if they wanted to enable it. Instead, Mozilla decided to turn it on by default once people installed a recent software update. This is particularly worrying because Mozilla generally has a reputation for being a privacy-friendly alternative when most other browsers are based on Google's Chromium."

Another component of noyb's objection is that Mozilla's move "doesn't replace cookies either" -- Firefox simply wouldn't have the market share and power to shift industry practices -- so all it's done is produce another additional way for websites to target ads. [...] The noyb-backed complaint (PDF), which has been filed with the Austrian data protection authority, accuses Mozilla of failing to inform users about the processing of their personal data and of using an opt-out -- rather than an affirmative "opt-in" -- mechanism. The privacy rights group also wants the regulator to order the deletion of all data collected so far.
In a statement attributed to Christopher Hilton, its director of policy and corporate communications, Mozilla said that it has only conducted a "limited test" of a PPA prototype on its own websites. While acknowledging poor communication around the effort, the company emphasized that no user data has been collected or shared and expressed its commitment to engaging with stakeholders as it develops the technology further.
Government

US Justice Department Probes Super Micro Computer (yahoo.com) 22

According to the Wall Street Journal, the U.S. Department of Justice is investigating Super Micro Computer after short-seller Hindenburg Research alleged "accounting manipulation" at the AI server maker. Super Micro's shares fell about 12% following the report. Reuters reports: The WSJ report, which cited people familiar with the matter, said the probe was at an early stage and that a prosecutor at a U.S. attorney's office recently contacted people who may be holding relevant information. The prosecutor has asked for information that appeared to be connected to a former employee who accused the company of accounting violations, the report added.

Super Micro had late last month delayed filing its annual report, citing a need to assess "its internal controls over financial reporting," a day after Hindenburg disclosed a short position and made claims of "accounting manipulation." The short-seller had cited a three-month investigation that included interviews with former senior employees of Super Micro and litigation records. Hindenburg's allegations included claims of undisclosed related-party transactions and failure to abide by export controls, among other issues. The company had denied Hindenburg's claims.

Piracy

US Court Orders LibGen To Pay $30 Million To Publishers, Issues Broad Injunction 27

A New York federal court has ordered (PDF) the operators of shadow library LibGen to pay $30 million in copyright damages to publishers. The default judgment also comes with a broad injunction that affects third-party services including domain registries, browser extensions, CDN providers, IPFS gateways, advertisers, and more. These parties must restrict access to the pirate site. An anonymous reader quotes a report from TorrentFreak: Yesterday, U.S. District Court Judge Colleen McMahon granted the default judgment without any changes. The anonymous LibGen defendants are responsible for willful copyright infringement and their activities should be stopped. "Plaintiffs have been irreparably harmed as a result of Defendants' unlawful conduct and will continue to be irreparably harmed should Defendants be allowed to continue operating the Libgen Sites," the order reads. The order requires the defendants to pay the maximum statutory damages of $150,000 per work, a total of $30 million, for which they are jointly and severally liable. While this is a win on paper, it's unlikely that the publishers will get paid by the LibGen operators, who remain anonymous.

To address this concern, the publishers' motion didn't merely ask for $30 million in damages, they also demanded a broad injunction. Granted by the court yesterday, the injunction requires third-party services such as advertising networks, payment processors, hosting providers, CDN services, and IPFS gateways to restrict access to the site. [...] The injunction further targets "browser extensions" and "other tools" that are used to provide direct access to the LibGen Sites. While site blocking by residential Internet providers is mentioned in reference to other countries, ISP blocking is not part of the injunction itself. In addition to the broad measures outlined above, the order further requires domain name registrars and registries to disable or suspend all active LibGen domains, or alternatively, transfer them to the publishers. This includes Libgen.is, the most used domain name with 16 million monthly visits, as well as Libgen.rs, Libgen.li and many others.

At the moment, it's unclear how actively managed the LibGen site is, as it has shown signs of decay in recent years. However, when faced with domain seizures, sites typically respond by registering new domains. The publishers are aware of this risk. Therefore, they asked the court to cover future domain names too. The court signed off on this request, which means that newly registered domain names can be taken over as well; at least in theory. [...] All in all, the default judgment isn't just a monetary win, on paper, it's also one of the broadest anti-piracy injunctions we've seen from a U.S. court.
Privacy

Tor Project Merges With Tails (torproject.org) 17

The Tor Project: Today the Tor Project, a global non-profit developing tools for online privacy and anonymity, and Tails, a portable operating system that uses Tor to protect users from digital surveillance, have joined forces and merged operations. Incorporating Tails into the Tor Project's structure allows for easier collaboration, better sustainability, reduced overhead, and expanded training and outreach programs to counter a larger number of digital threats. In short, coming together will strengthen both organizations' ability to protect people worldwide from surveillance and censorship.

Countering the threat of global mass surveillance and censorship to a free Internet, Tor and Tails provide essential tools to help people around the world stay safe online. By joining forces, these two privacy advocates will pool their resources to focus on what matters most: ensuring that activists, journalists, other at-risk and everyday users will have access to improved digital security tools.

In late 2023, Tails approached the Tor Project with the idea of merging operations. Tails had outgrown its existing structure. Rather than expanding Tails's operational capacity on their own and putting more stress on Tails workers, merging with the Tor Project, with its larger and established operational framework, offered a solution. By joining forces, the Tails team can now focus on their core mission of maintaining and improving Tails OS, exploring more and complementary use cases while benefiting from the larger organizational structure of The Tor Project.

This solution is a natural outcome of the Tor Project and Tails' shared history of collaboration and solidarity. 15 years ago, Tails' first release was announced on a Tor mailing list, Tor and Tails developers have been collaborating closely since 2015, and more recently Tails has been a sub-grantee of Tor. For Tails, it felt obvious that if they were to approach a bigger organization with the possibility of merging, it would be the Tor Project.

The Courts

DoNotPay Has To Pay $193K For Falsely Touting Untested AI Lawyer, FTC Says (arstechnica.com) 30

An anonymous reader quotes a report from Ars Technica: Among the first AI companies that the Federal Trade Commission has exposed as deceiving consumers is DoNotPay -- which initially was advertised as "the world's first robot lawyer" with the ability to "sue anyone with the click of a button." On Wednesday, the FTC announced that it took action to stop DoNotPay from making bogus claims after learning that the AI startup conducted no testing "to determine whether its AI chatbot's output was equal to the level of a human lawyer." DoNotPay also did not "hire or retain any attorneys" to help verify AI outputs or validate DoNotPay's legal claims.

DoNotPay accepted no liability. But to settle the charges that DoNotPay violated the FTC Act, the AI startup agreed to pay $193,000, if the FTC's consent agreement is confirmed following a 30-day public comment period. Additionally, DoNotPay agreed to warn "consumers who subscribed to the service between 2021 and 2023" about the "limitations of law-related features on the service," the FTC said. Moving forward, DoNotPay would also be prohibited under the settlement from making baseless claims that any of its features can be substituted for any professional service.

"The complaint relates to the usage of a few hundred customers some years ago (out of millions of people), with services that have long been discontinued," DoNotPay's spokesperson said. The company "is pleased to have worked constructively with the FTC to settle this case and fully resolve these issues, without admitting liability."

Security

Critical Unauthenticated RCE Flaw Impacts All GNU/Linux Systems (cybersecuritynews.com) 153

"Looks like there's a storm brewing, and it's not good news," writes ancient Slashdot reader jd. "Whether or not the bugs are classically security defects or not, this is extremely bad PR for the Linux and Open Source community. It's not clear from the article whether this affects other Open Source projects, such as FreeBSD." From a report: A critical unauthenticated Remote Code Execution (RCE) vulnerability has been discovered, impacting all GNU/Linux systems. As per agreements with developers, the flaw, which has existed for over a decade, will be fully disclosed in less than two weeks. Despite the severity of the issue, no Common Vulnerabilities and Exposures (CVE) identifiers have been assigned yet, although experts suggest there should be at least three to six. Leading Linux distributors such as Canonical and Red Hat have confirmed the flaw's severity, rating it 9.9 out of 10. This indicates the potential for catastrophic damage if exploited. However, despite this acknowledgment, no working fix is available yet. Developers remain embroiled in debates over whether some aspects of the vulnerability impact security.
Censorship

Russia Blocks OONI Explorer, a Large Open Dataset On Internet Censorship (ooni.org) 13

As of September 11th, Russia has blocked access to OONI Explorer, citing concerns over circumvention tools. This block affects Russian users' ability to access not only circumvention data but also the extensive dataset on global internet censorship that OONI provides. From a blog post: OONI Explorer is one of the largest open datasets on internet censorship around the world. We first launched this web platform back in 2016 with the goal of enabling researchers, journalists, and human rights defenders to investigate internet censorship based on empirical network measurement data that is contributed by OONI Probe users worldwide. Every day, we publish new measurements from around the world in real-time.

Today, OONI Explorer hosts more than 2 billion network measurements collected from 27,000 distinct networks in 242 countries and territories since 2012. Out of all countries, OONI Probe users in Russia contribute the second largest volume of measurements (following the U.S., where OONI Probe users contribute the most measurements of any country). This has enabled us to study various cases of internet censorship in Russia, such as the blocking of Tor, the blocking of independent news media websites, and how internet censorship in Russia changed amid the war in Ukraine.

In this report, we share OONI data on the blocking of OONI Explorer in Russia.
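The measurement data behind reports like this one is publicly queryable. As a hedged sketch (the endpoint and filter parameter names below reflect the public OONI measurements API as commonly documented, and should be verified against the current API docs before use), here is how one might construct a query for recent web-connectivity measurements contributed from Russia:

```python
from urllib.parse import urlencode

# Public OONI measurements API endpoint (assumed; verify at api.ooni.io).
OONI_API = "https://api.ooni.io/api/v1/measurements"

def build_ooni_query(probe_cc, test_name=None, since=None, until=None, limit=50):
    """Build a query URL for OONI measurements filtered by country and test.

    probe_cc, test_name, since, until, and limit follow the API's
    documented filter names; only non-empty filters are included.
    """
    params = {"probe_cc": probe_cc, "limit": limit}
    if test_name:
        params["test_name"] = test_name
    if since:
        params["since"] = since
    if until:
        params["until"] = until
    return f"{OONI_API}?{urlencode(params)}"

# Measurements from Russia in the week after the reported block date.
url = build_ooni_query("RU", test_name="web_connectivity",
                       since="2024-09-11", until="2024-09-18")
print(url)
```

Fetching that URL (e.g. with any HTTP client) returns JSON listing matching measurements, which is how researchers independently verify blocking events like this one.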

Government

OpenAI Pitched White House On Unprecedented Data Center Buildout (yahoo.com) 38

An anonymous reader quotes a report from Bloomberg: OpenAI has pitched the Biden administration on the need for massive data centers that could each use as much power as entire cities, framing the unprecedented expansion as necessary to develop more advanced artificial intelligence models and compete with China. Following a recent meeting at the White House, which was attended by OpenAI Chief Executive Officer Sam Altman and other tech leaders, the startup shared a document with government officials outlining the economic and national security benefits of building 5-gigawatt data centers in various US states, based on an analysis the company commissioned from outside experts. To put that in context, 5 gigawatts is roughly the equivalent of five nuclear reactors, or enough to power almost 3 million homes. OpenAI said investing in these facilities would result in tens of thousands of new jobs, boost the gross domestic product and ensure the US can maintain its lead in AI development, according to the document, which was viewed by Bloomberg News. To achieve that, however, the US needs policies that support greater data center capacity, the document said. "Whatever we're talking about is not only something that's never been done, but I don't believe it's feasible as an engineer, as somebody who grew up in this," said Joe Dominguez, CEO of Constellation Energy Corp. "It's certainly not possible under a timeframe that's going to address national security and timing."
Government

California Governor Vetoes Bill Requiring Opt-Out Signals For Sale of User Data (arstechnica.com) 51

An anonymous reader quotes a report from Ars Technica: California Gov. Gavin Newsom vetoed a bill that would have required makers of web browsers and mobile operating systems to let consumers send opt-out preference signals that could limit businesses' use of personal information. The bill approved by the State Legislature last month would have required an opt-out signal "that communicates the consumer's choice to opt out of the sale and sharing of the consumer's personal information or to limit the use of the consumer's sensitive personal information." It would have made it illegal for a business to offer a web browser or mobile operating system without a setting that lets consumers "send an opt-out preference signal to businesses with which the consumer interacts."

In a veto message (PDF) sent to the Legislature Friday, Newsom said he would not sign the bill. Newsom wrote that he shares the "desire to enhance consumer privacy," noting that he previously signed a bill "requir[ing] the California Privacy Protection Agency to establish an accessible deletion mechanism allowing consumers to request that data brokers delete all of their personal information." But Newsom said he is opposed to the new bill's mandate on operating systems. "I am concerned, however, about placing a mandate on operating system (OS) developers at this time," the governor wrote. "No major mobile OS incorporates an option for an opt-out signal. By contrast, most Internet browsers either include such an option or, if users choose, they can download a plug-in with the same functionality. To ensure the ongoing usability of mobile devices, it's best if design questions are first addressed by developers, rather than by regulators. For this reason, I cannot sign this bill." Vetoes can be overridden with a two-thirds vote in each chamber. The bill was approved 59-12 in the Assembly and 31-7 in the Senate. But the State Legislature hasn't overridden a veto in decades.
"It's troubling the power that companies such as Google appear to have over the governor's office," said Justin Kloczko, tech and privacy advocate for Consumer Watchdog, a nonprofit group in California. "What the governor didn't mention is that Google Chrome, Apple Safari and Microsoft Edge don't offer a global opt-out and they make up nearly 90 percent of the browser market share. That's what matters. And people don't want to install plug-ins. Safari, which is the default browser on iPhones, doesn't even accept a plug-in."
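For concreteness, the best-known opt-out preference signal of the kind the bill describes is Global Privacy Control (GPC), which participating browsers transmit as a `Sec-GPC: 1` request header and expose to page scripts as `navigator.globalPrivacyControl`. A minimal, illustrative sketch of how a site's server might detect the signal (the helper function is hypothetical, not part of any framework):

```python
def honors_opt_out(headers):
    """Return True if the request carries a GPC opt-out signal.

    Per the GPC proposal, the header field is "Sec-GPC" and the only
    defined true value is the string "1".
    """
    # HTTP header names are case-insensitive; normalize keys before lookup.
    normalized = {k.lower(): v for k, v in headers.items()}
    return normalized.get("sec-gpc", "").strip() == "1"

# A request from a browser with the opt-out signal enabled:
print(honors_opt_out({"Sec-GPC": "1"}))      # True
# A request with no signal set:
print(honors_opt_out({"User-Agent": "x"}))   # False
```

A business honoring the signal would skip selling or sharing that visitor's data server-side; the vetoed bill would have required browsers and mobile operating systems to offer a setting that emits such a signal in the first place.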

Slashdot Top Deals