United Kingdom

Apple Now Requires Device-Level Age Verification in the UK. Could the US Be Next? (gizmodo.com) 6

Apple unveiled new device-level age restrictions in the UK on Wednesday. "After downloading a new update, users will now have to confirm that they are 18 or older to access unrestricted features," reports Gizmodo.

"Users will be able to confirm their age with a credit card or by scanning an ID." For those underage or who have not confirmed their age, Apple will turn on Web Content Filter and Communication Safety, which will not only restrict access to certain apps or websites, but will also monitor messages, shared photo albums, AirDrop, and FaceTime calls for nudity. Apple didn't specify exactly which services and features are banned for under-18 users, but the restrictions will likely be designed to comply with UK legislation...

The British government does not require Apple and other OS providers to institute device-level age checks, but it does restrict minor access to online pornography under the Online Safety Act, which passed in 2023. So far, that restriction has only been implemented at the website level, but UK officials have been worried about easy loopholes to evade the age restrictions, like VPNs.

The broader tech industry has been campaigning for some time to use device-level age checks instead in response to the rising tide of under-16 social media and internet bans around the world. Last month, in a landmark social media trial in California, Meta CEO Mark Zuckerberg also supported this idea, saying that conducting age verification "at the level of the phone is just a lot clearer than having every single app out there have to do this separately." Pornhub-operator Aylo had advocated for device-level restrictions in the UK as well, and even sent out letters to Apple, Google, and Microsoft in November asking for OS-level age verification...

The most obvious question: Could this be brought stateside?

Media

AV1's Open, Royalty-Free Promise In Question As Dolby Sues Snapchat Over Codec (arstechnica.com) 41

An anonymous reader quotes a report from Ars Technica: AOMedia Video 1 (AV1) was invented by a group of technology companies to be an open, royalty-free alternative to other video codecs, like HEVC/H.265. But a lawsuit that Dolby Laboratories Inc. filed this week against Snap Inc. calls all that into question with claims of patent infringement. Numerous lawsuits are currently open in the US regarding the use of HEVC. Relevant patent holders, such as Nokia and InterDigital, have sued hardware vendors and streaming service providers in pursuit of licensing fees for the use of patented technologies deemed essential to HEVC.

It's a touch rarer to see a lawsuit filed over the implementation of AV1. The Alliance for Open Media (AOMedia), whose members include Amazon, Apple, Google, Microsoft, Mozilla, and Netflix, says it developed AV1 "under a royalty-free patent policy (Alliance for Open Media Patent License 1.0)" and that the standard is "supported by high-quality reference implementations under a simple, permissive license (BSD 3-Clause Clear License)."

Yet, Dolby's lawsuit filed in the US District Court for the District of Delaware [PDF] alleges that AV1 leverages technologies that Dolby has patented and has not agreed to license for free and without receiving royalties. The filing reads: "[AOMedia] does not own all patents practiced by implementations of the AV1 codec. Rather, the AV1 specification was developed after many foundational video coding patents had already been filed, and AV1 incorporates technologies that are also present in HEVC. Those technologies are subject to existing third-party patent rights and associated licensing obligations." Dolby is seeking a jury trial, a declaration that Dolby isn't obligated to license the patents in question under FRAND (fair, reasonable, and non-discriminatory) licensing obligations, and for the court to enjoin Snap from further "infringement."

Security

European Commission Investigating Breach After Amazon Cloud Account Hack (bleepingcomputer.com) 5

The European Commission is investigating a breach after a threat actor allegedly accessed at least one of its AWS cloud accounts and claimed to have stolen more than 350 GB of data, including databases and employee-related information. AWS says its own services were not breached. BleepingComputer reports: Sources familiar with the incident have told BleepingComputer that the attack was quickly detected and that the Commission's cybersecurity incident response team is now investigating. While the Commission has yet to share any details about this breach, the threat actor who claimed responsibility for the attack reached out to BleepingComputer earlier this week, stating that they had stolen over 350 GB of data (including multiple databases).

They didn't disclose how they breached the affected accounts, but they provided BleepingComputer with several screenshots as proof that they had access to information belonging to European Commission employees and to an email server used by Commission employees. The threat actor also told BleepingComputer that they will not attempt to extort the Commission using the allegedly stolen data as leverage, but intend to leak the data online at a later date.

Social Networks

Austria Plans Social Media Ban For Under-14s (bbc.com) 11

Austria plans to restrict under-14s from using social media platforms over concerns about addictive algorithms and harmful content. The government says draft legislation should be ready by the end of June, though details around enforcement and age verification have yet to be finalized. The BBC reports: Announcing the plans, Vice-Chancellor Andreas Babler of the Social Democrats said the government could not stand by and watch as social media made children "addicted and also often ill." He said it was the responsibility of politicians to protect children and argued that the issue should be treated no differently than alcohol or tobacco: "There must be clear rules in the digital world too." In future, said Babler, children under 14 would be protected from algorithms that were addictive. "Other information providers have clear rules to protect young people from harmful content." These, he said, should now be implemented in the digital space. Yesterday, juries in two separate cases found social media giants liable for harming young people's mental health. The verdicts are being hailed as social media's Big Tobacco moment.

Further reading: California Bill Would Require Parent Bloggers To Delete Content of Minors On Social Media

Privacy

Iran-Linked Hackers Breach FBI Director's Personal Email (reuters.com) 80

An anonymous reader quotes a report from Reuters: Iran-linked hackers have broken into FBI Director Kash Patel's personal email inbox, publishing photographs of the director and other documents to the internet, the hackers and the bureau said on Friday. On their website, the hacker group Handala Hack Team said Patel "will now find his name among the list of successfully hacked victims." The hackers published a series of personal photographs of Patel sniffing and smoking cigars, riding in an antique convertible, and making a face while taking a picture of himself in the mirror with a large bottle of rum.

The FBI confirmed that Patel's emails had been targeted. In a statement, bureau spokesman Ben Williamson said, "we have taken all necessary steps to mitigate potential risks associated with this activity" and that the data involved was "historical in nature and involves no government information." Handala, which presents itself as a group of pro-Palestinian vigilante hackers, is considered by Western researchers to be one of several personas used by Iranian government cyberintelligence units. [...] Alongside the photographs of Patel, the hackers published a sample of more than 300 emails, which appear to show a mix of personal and work correspondence dating between 2010 and 2019.

Security

Popular LiteLLM PyPI Package Backdoored To Steal Credentials, Auth Tokens (bleepingcomputer.com) 8

joshuark shares a report from BleepingComputer: The TeamPCP hacking group continues its supply-chain rampage, now compromising the massively popular "LiteLLM" Python package on PyPI and claiming to have stolen data from hundreds of thousands of devices during the attack. LiteLLM is an open-source Python library that serves as a gateway to multiple large language model (LLM) providers via a single API. The package is very popular, with over 3.4 million downloads a day and over 95 million in the past month. According to research by Endor Labs, threat actors compromised the project and published malicious versions of LiteLLM 1.82.7 and 1.82.8 to PyPI today that deploy an infostealer that harvests a wide range of sensitive data.

[...] Both malicious LiteLLM versions have been removed from PyPI, with version 1.82.6 now the latest clean release. [...] If compromise is suspected, all credentials on affected systems should be treated as exposed and rotated immediately. [...] Organizations that use LiteLLM are strongly advised to immediately:

- Check for installations of versions 1.82.7 or 1.82.8
- Rotate all secrets, tokens, and credentials used on or found within code on impacted devices
- Search for persistence artifacts such as '~/.config/sysmon/sysmon.py' and related systemd services
- Inspect systems for suspicious files like '/tmp/pglog' and '/tmp/.pg_state'
- Review Kubernetes clusters for unauthorized pods in the 'kube-system' namespace
- Monitor outbound traffic to known attacker domains
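The version check and artifact search above can be scripted as a first triage pass. Below is a minimal sketch in Python: the bad version numbers and file paths come from the report, but the helper names (`litellm_version_status`, `present_iocs`) are illustrative, and a real investigation should also cover the Kubernetes and network checks, which this sketch omits:

```python
from importlib import metadata
from pathlib import Path

# Malicious releases named in the report (since removed from PyPI)
COMPROMISED_VERSIONS = {"1.82.7", "1.82.8"}

# Persistence and staging artifacts named in the report
IOC_PATHS = [
    Path.home() / ".config" / "sysmon" / "sysmon.py",
    Path("/tmp/pglog"),
    Path("/tmp/.pg_state"),
]

def litellm_version_status(version: str) -> str:
    """Classify a LiteLLM version string against the known-bad list."""
    if version in COMPROMISED_VERSIONS:
        return "compromised"
    return "not a known-bad release"

def present_iocs(paths=IOC_PATHS) -> list:
    """Return the indicator-of-compromise paths that exist on this host."""
    return [str(p) for p in paths if p.exists()]

if __name__ == "__main__":
    try:
        status = litellm_version_status(metadata.version("litellm"))
    except metadata.PackageNotFoundError:
        status = "litellm not installed"
    print(f"litellm: {status}")
    for hit in present_iocs():
        print(f"IoC found: {hit}")
```

A clean result here does not prove the host is safe — if either malicious version was ever installed, the guidance above is to treat every credential on the machine as exposed regardless of what remains on disk.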

Social Networks

California Bill Would Require Parent Bloggers To Delete Content of Minors On Social Media (latimes.com) 46

A California bill would let adults demand the removal of social media posts about them that were created by paid family content creators when they were minors. Supporters say Senate Bill 1247 addresses privacy, dignity, and safety harms caused when parents monetize their children's lives online. The Los Angeles Times reports: The legislation would require the parent or other relative to delete or edit the content within 10 business days of receiving the notification. Petitioners could take civil action against those who fail to comply, and statutory damages would be set at $3,000 for each day the content remained online. Sen. Steve Padilla (D-San Diego), who introduced the bill last month, said it would help protect the dignity and mental health of those who had their childhood shared on social media. The measure was referred to the Senate Privacy, Digital Technologies and Consumer Protection Committee and is slated for a hearing on April 6.

"The evolution of these applications and technology is incredible," Padilla said. "But it's changing our social dynamic and it's creating situations that, while very productive for some folks, also need some guardrails." The bill would build upon previous legislation from Padilla that was signed into law two years ago and requires content creators who feature minors in at least 30% of their material to place some of their earnings into a trust the children can access when they turn 18.

The Courts

Judge Blocks Pentagon's Effort To 'Punish' Anthropic With Supply Chain Risk Label 61

An anonymous reader quotes a report from CNN: A federal judge in California has indefinitely blocked the Pentagon's effort to "punish" Anthropic by labeling it a supply chain risk and attempting to sever government ties with the AI company, ruling that those measures ran roughshod over its constitutional rights. "Nothing in the governing statute supports the Orwellian notion that an American company may be branded a potential adversary and saboteur of the U.S. for expressing disagreement with the government," US District Judge Rita Lin wrote in a stinging 43-page ruling.

Lin, an appointee of former President Joe Biden, said she would delay implementation of her ruling for one week to allow the government to appeal. But in her ruling, she made it clear she disapproved of the government's actions, which she said violated the company's First Amendment and due process rights. [...] "These broad measures do not appear to be directed at the government's stated national security interests," she wrote. "The Department of War's records show that it designated Anthropic as a supply chain risk because of its 'hostile manner through the press.'" "Punishing Anthropic for bringing public scrutiny to the government's contracting position is classic illegal First Amendment retaliation," she added.

"We're grateful to the court for moving swiftly, and pleased they agree Anthropic is likely to succeed on the merits," an Anthropic spokesperson said after the ruling. "While this case was necessary to protect Anthropic, our customers, and our partners, our focus remains on working productively with the government to ensure all Americans benefit from safe, reliable AI."

Cloud

Apple Gives FBI a User's Real Name Hidden Behind 'Hide My Email' Feature (404media.co) 86

An anonymous reader quotes a report from 404 Media: Apple provided the FBI with the real iCloud email address hidden behind Apple's 'Hide My Email' feature, which lets paying iCloud+ users generate anonymous email addresses, according to a recently filed court record. The move isn't surprising but still provides uncommon insight into what data is available to authorities regarding the Apple feature. The data was turned over during an investigation into a man who allegedly sent a threatening email to Alexis Wilkins, the girlfriend of FBI director Kash Patel.

"On or about February 28, 2026, Person 1 received an email from the email address peaty_terms_1o@icloud.com," the affidavit reads. Earlier on, the document explicitly says that Person 1 is Alexis Wilkins. [...] The affidavit says Apple then provided records that indicated the peaty_terms_1o@icloud.com email address was associated with an Apple account in the name of Alden Ruml. The records showed that account generated 134 anonymized email addresses, according to the affidavit.

Law enforcement agents later interviewed Ruml and he confirmed he had sent the email, the affidavit says. Ruml said he sent the email after reading a February 28 article about how the FBI was using its own resources to provide security to Wilkins. The specific article is not named or linked in the affidavit, but a New York Times article published that same day described how Patel ordered a team to ferry his girlfriend on errands and to events.

Government

Senators Demand to Know How Much Energy Data Centers Use (wired.com) 49

Elizabeth Warren and Josh Hawley are pressing the Energy Information Administration (EIA) to provide better information on how much electricity data centers actually use. In a joint letter sent to the EIA on Thursday, the two senators press the agency to publicly collect "comprehensive, annual energy-use disclosures" on data centers, saying it's "essential for accurate grid planning and will support policymaking to prevent large companies from increasing electricity costs for American families." Wired reports: In December, EIA administrator Tristan Abbey said at a roundtable that he expects the EIA "is going to be an essential player in providing objective data and analysis to policymakers" with respect to data centers. The agency announced on Wednesday that it would be conducting a voluntary pilot program to collect energy consumption information from nearly 200 companies operating data centers in Texas, Washington, and Virginia, which will cover "energy sources, electricity consumption, site characteristics, server metrics, and cooling systems."

While the senators praise the EIA pilot program, their letter includes several questions about how the agency plans to move forward with more data collection, such as whether the energy surveys will be mandatory and whether the EIA will collect information on behind-the-meter power. This information will be especially crucial, the senators say, to hold big tech companies to the pledge they signed at the White House earlier this month that consumers won't bear the costs of data center electricity use. "Without this data, policymakers, utility companies, and local communities are operating in the dark," the senators write.

The EIA mandates that other industries, including oil and gas and manufacturing, provide regular data to the agency; Hawley and Warren assert that the EIA should be able to collect similar information from data centers under the same provision. The provision is broad enough, Peskoe says, that it could absolutely be interpreted to encompass data centers.

Yesterday, Senator Bernie Sanders and Rep. Alexandria Ocasio-Cortez announced a bill that would "enact a reasonable pause to the development of AI to ensure the safety of humanity." It calls for a federal moratorium on AI data centers until stronger national safeguards are in place around safety, jobs, privacy, energy costs, and environmental impact.

Privacy

Reddit Takes On Bots With 'Human Verification' Requirements (techcrunch.com) 75

Reddit is rolling out human-verification checks for accounts that show signs of bot-like behavior, while also labeling approved automated accounts that provide useful services. The social media company stressed that these checks will only happen if something appears "fishy," and that it is "not conducting sitewide human verification." TechCrunch reports: To identify potential bots, Reddit is using specialized tooling that looks at account-level signals and other factors -- like how quickly the account is attempting to write or post content. Using AI to write posts or comments, however, is not against its policies (though community moderators may set their own rules).

To verify an account is human, Reddit will leverage third-party tools like passkeys from Apple, Google, and YubiKey; biometric services like Face ID; or even Sam Altman's World ID -- or, in some countries, government IDs. Reddit notes this last category may be required in some countries, like the U.K. and Australia, and in some U.S. states because of local regulations on age verification, but it's not the company's preferred method.

"If we need to verify an account is human, we'll do it in a privacy-first way," Reddit co-founder and CEO Steve Huffman wrote in the announcement Wednesday. "Our aim is to confirm there is a person behind the account, not who that person is. The goal is to increase transparency of what is what on Reddit while preserving the anonymity that makes Reddit unique. You shouldn't have to sacrifice one for the other."

The Courts

Supreme Court Sides With Internet Provider In Copyright Fight Over Pirated Music 91

Longtime Slashdot reader JackSpratts writes: The Supreme Court unanimously said on Wednesday that a major internet provider could not be held liable for the piracy of thousands of songs online in a closely watched copyright clash. Music labels and publishers sued Cox Communications in 2018, saying the company had failed to cut off the internet connections of subscribers who had been repeatedly flagged for illegally downloading and distributing copyrighted music. At issue for the justices was whether providers like Cox could be held legally responsible and required to pay steep damages -- a billion dollars or more in Cox's case -- if they knew that customers were pirating music but did not take sufficient steps to terminate their internet access.

In its opinion released (PDF) on Wednesday, the court said a company was not liable for "merely providing a service to the general public with knowledge that it will be used by some to infringe copyrights." Writing for the court, Justice Clarence Thomas said a provider like Cox was liable "only if it intended that the provided service be used for infringement" and if it, for instance, "actively encourages infringement." Justice Sonia Sotomayor, joined by Justice Ketanji Brown Jackson, wrote separately to say that she agreed with the outcome but for different reasons. [...]

Cox called the court's unanimous decision a "decisive victory" for the industry and for Americans who "depend on reliable internet service."

"This opinion affirms that internet service providers are not copyright police and should not be held liable for the actions of their customers," the company said.

Social Networks

Meta and YouTube Found Negligent in Landmark Social Media Addiction Case 112

A jury found Meta and YouTube negligent in a landmark social media addiction case, ruling that addictive design features such as infinite scroll and algorithmic recommendations harmed a young user and contributed to her mental health distress. The verdict awards $3 million in compensatory damages so far and could pave the way for more lawsuits seeking financial penalties and product changes across the social media industry. "Meta is responsible for 70 percent of that cost and YouTube for the remainder," notes The New York Times. "TikTok and Snap both settled with the plaintiff for undisclosed terms before the trial started." From the report: The bellwether case, which was brought by a now 20-year-old woman identified as K.G.M., had accused social media companies of creating products as addictive as cigarettes or digital casinos. K.G.M. sued Meta, which owns Instagram and Facebook, and Google's YouTube over features like infinite scroll and algorithmic recommendations that she claimed led to anxiety and depression.

The jury of seven women and five men will deliberate further to decide what further punitive damages the companies should pay for malice or fraud. The verdict in K.G.M.'s case -- one of thousands of lawsuits filed by teenagers, school districts and state attorneys general against Meta, YouTube, TikTok and Snap, which owns Snapchat -- was a major win for the plaintiffs. The finding validates a novel legal theory that social media sites or apps can cause personal injury. It is likely to factor into similar cases expected to go to trial this year, which could expose the internet giants to further financial damages and force changes to their products.

The verdict also comes on the heels of a New Mexico jury ruling that found Meta liable for violating state law by failing to protect users of its apps from child predators.

Facebook

Meta Loses Trial After Arguing Child Exploitation Was 'Inevitable' (arstechnica.com) 45

Meta lost a child safety trial in New Mexico after a court found that its platforms failed to adequately protect children from exploitation and misled parents about app safety. According to Ars Technica, the jury on Tuesday "deliberated for only one day before agreeing that Meta should pay $375 million in civil damages..." While the jury declined to impose the maximum penalty New Mexico sought, which could have cost the company $2.2 billion, Meta may still face additional financial penalties and could be forced to make changes to its apps. From the report: The trial followed a 2023 lawsuit filed by New Mexico Attorney General Raul Torrez after The Guardian published a two-year investigation exposing child sex trafficking markets on Facebook and Instagram. Torrez's office then conducted an undercover investigation codenamed "Operation MetaPhile," in which officers posed as children on Facebook, Instagram, and WhatsApp. The jury heard that these fake profiles were "simply inundated with images and targeted solicitations" from child abusers, Torrez told CNBC in 2024. Ultimately, three men were arrested amid the sting for attempting to use Meta's social networks to prey on children. At trial, Mark Zuckerberg and Instagram chief Adam Mosseri testified that "harms to children, such as sexual exploitation and detriments to mental health, were inevitable on the company's platforms due to their vast user bases," The Guardian reported. Internal messages and documents, as well as testimony from child safety experts within and outside the company, showed that Meta repeatedly ignored warnings and failed to fix platforms to protect kids, New Mexico's AG successfully argued.

Perhaps most troubling to the jury, law enforcement and the National Center for Missing and Exploited Children also testified that Meta's reporting of crimes against children on its apps -- including child sexual abuse materials (CSAM) -- was "deficient," The Guardian reported. Rather than make it easy to trace harms on its platforms, the jury learned from frustrated cops that Meta "generated high volumes of 'junk' reports by overly relying on AI to moderate its platforms." This made its reporting "useless" and "meant crimes could not be investigated," The Guardian reported.

Celebrating the win as a "historic victory," Torrez told CNBC that families had previously paid the price for "Meta's choice to put profits over kids' safety." "Meta executives knew their products harmed children, disregarded warnings from their own employees, and lied to the public about what they knew," Torrez said. "Today the jury joined families, educators, and child safety experts in saying enough is enough."

Meta said the company plans to appeal the verdict. "We respectfully disagree with the verdict and will appeal," Meta's spokesperson said. "We work hard to keep people safe on our platforms and are clear about the challenges of identifying and removing bad actors or harmful content. We will continue to defend ourselves vigorously, and we remain confident in our record of protecting teens online."

Privacy

Hong Kong Police Can Demand Passwords Under New National Security Rules (bbc.com) 79

An anonymous reader quotes a report from the BBC: Hong Kong police can now demand phone or computer passwords from those who are suspected of breaching the wide-ranging National Security Law (NSL). Those who refuse could face up to a year in jail and a fine of up to $12,700, and individuals who provide "false or misleading information" could face up to three years in jail. It comes as part of new amendments to a bylaw under the NSL that the government gazetted on Monday.

The NSL was introduced in Hong Kong in 2020, in the wake of massive pro-democracy protests the year before. Authorities say the laws, which target acts like terrorism and secession, are necessary for stability -- but critics say they are tools to quash dissent. The new amendments also give customs officials the power to seize items that they deem to "have seditious intention."

Monday's amendments ensure that "activities endangering national security can be effectively prevented, suppressed and punished, and at the same time the lawful rights and interests of individuals and organizations are adequately protected," Hong Kong authorities said on Monday. The changes to the bylaw were announced by the city's leader, John Lee, bypassing the city's legislative council. The NSL also allows for some trials to be heard behind closed doors.

Slashdot Top Deals