Piracy

Zuckerberg 'Personally Authorized and Encouraged' Meta's Copyright Infringement (variety.com) 48

Five major publishers and author Scott Turow have sued Meta and Mark Zuckerberg, alleging that Zuckerberg "personally authorized and actively encouraged" massive copyright infringement by using pirated books, journal articles, and web-scraped material to train Meta's Llama AI systems. Meta denies wrongdoing and says it will fight the case, arguing that courts have recognized AI training on copyrighted material as potentially fair use. Variety reports: "In their effort to win the AI 'arms race' and build a functional generative AI model, Defendants Meta and Zuckerberg followed their well-known motto: 'move fast and break things,'" the plaintiffs say in their lawsuit. "They first illegally torrented millions of copyrighted books and journal articles from notorious pirate sites and downloaded unauthorized web scrapes of virtually the entire internet. They then copied those stolen fruits many times over to train Meta's multibillion-dollar generative AI system called Llama. In doing so, Defendants engaged in one of the most massive infringements of copyrighted materials in history."

The suit was filed Tuesday (May 5) in the U.S. District Court for the Southern District of New York by five publishers (Hachette, Macmillan, McGraw Hill, Elsevier and Cengage) and Turow individually. The proposed class-action suit seeks unspecified monetary damages for the alleged copyright infringement. A copy of the lawsuit is available at this link (PDF). [...] the latest lawsuit alleges that Meta and Zuckerberg deliberately circumvented copyright-protection mechanisms -- and had considered paying to license the works before abandoning that strategy at "Zuckerberg's personal instruction." The suit essentially argues that the conduct described falls outside protections afforded by fair-use provisions of the U.S. copyright code.

The Courts

Brockman Rebuts Musk's Take On Startup's History, Recounts Secret Work For Tesla (cnbc.com) 38

An anonymous reader quotes a report from CNBC: OpenAI President Greg Brockman concluded his testimony on Tuesday, largely rebutting Elon Musk's account of the startup's early years and of the negotiations that took place at the company. Brockman testified that he never made any commitments to Musk about the company's corporate structure, and he never heard anyone else make them. He emphasized that OpenAI is still governed by a nonprofit. "This entity remains a nonprofit," Brockman said, referring to the OpenAI foundation. "It is the best-resourced nonprofit in the world." [...] Brockman, who spoke from the witness stand in federal court in Oakland, California, over the course of two days, also revealed that Musk had enlisted several OpenAI employees to do months of free work for him at Tesla, Musk's electric vehicle company. That work mainly included efforts to overhaul the company's approach to developing self-driving technology as part of the Autopilot team there in 2017. During his two days on the stand, Brockman answered questions about his personal financial ambitions, his understanding of OpenAI's structure and Musk's involvement at the company, which they co-founded with other executives in 2015.

In Musk's testimony last week, the Tesla and SpaceX CEO said that the time, money and resources he poured into OpenAI had been integral to the company's success. He repeatedly said that he helped recruit the company's top talent. Brockman said Tuesday that while Musk was helpful in convincing some employees to take the leap to join OpenAI, he was a polarizing figure for others. "Elon had a reputation of being an extremely hard driver," Brockman said. He added that "certain candidates were very attracted" by Musk's involvement at OpenAI, and that "certain candidates were very turned off." Musk testified last week that a former OpenAI researcher named Andrej Karpathy joined Tesla, but only after he had already planned to leave the startup. Brockman said that Musk, after he hired Karpathy, approached him with "an apology and a confession" about the hire, and that neither Musk nor Karpathy had told him the researcher planned to leave OpenAI before that. Musk was generally not very available for meetings and conversations, Brockman said, so he relied on employees, including Sam Teller and former OpenAI board member Shivon Zilis, as proxies.
Brockman testified that open-sourcing OpenAI's technology was "not a topic of conversation" during Musk's time with the nonprofit, despite Musk's claims that it was supposed to be central to the organization. He also described tense 2017 negotiations over a possible for-profit arm, saying Musk became angry when equity stakes were discussed. "He said Musk declined the proposal during an in-person meeting, then tore a painting of a Tesla Model 3 car off the wall, and began storming out of the room," reports CNBC. Musk also demanded to know when the cofounders would leave the company.

Brockman further said Musk wanted control of OpenAI because he disliked situations where he lacked control, citing Zip2 and SolarCity as examples Musk had raised. He also testified that Musk partly wanted control to help fund his broader SpaceX ambition of building a "city on Mars."

CNBC notes the trial will resume at 8:30 a.m. PT on Wednesday, with Shivon Zilis expected to testify. She is the mother of four of Musk's children and a former OpenAI board member.

Recap:
OpenAI President Discloses His Stake In the Company Is Worth $30 Billion (Day Five)
Musk Concludes Testimony At OpenAI Trial (Day Four)
Elon Musk Says OpenAI Betrayed Him, Clashes With Company's Attorney (Day Three)
Musk Testifies OpenAI Was Created As Nonprofit To Counter Google (Day Two)
Elon Musk and OpenAI CEO Sam Altman Head To Court (Day One)
The Courts

Apple Agrees To Pay iPhone Owners $250 Million For Not Delivering AI Siri 34

Apple has agreed to a proposed $250 million settlement over claims that it misled iPhone buyers about the availability of Apple Intelligence and its upgraded Siri features. The settlement would cover U.S. buyers of the iPhone 16 lineup and iPhone 15 Pro models between June 10, 2024, and March 29, 2025. The Verge reports: The settlement will resolve a 2025 lawsuit alleging that Apple's advertisements created a "clear and reasonable consumer expectation" that Apple Intelligence features would be available with the launch of the iPhone 16. The lawsuit claimed Apple's products "offered a significantly limited or entirely absent version of Apple Intelligence, misleading consumers about its actual utility and performance."

Apple brought certain AI-powered features to the iPhone 16 in the weeks after its release, and delayed the launch of its more personalized Siri, which is now expected to arrive later this year. Last April, the National Advertising Division recommended that Apple "discontinue or modify" its "available now" claim for Apple Intelligence. Apple also pulled an iPhone 16 ad showing actor Bella Ramsey using the AI-upgraded Siri.
Bug

US Government Warns of Severe CopyFail Bug Affecting Major Versions of Linux (techcrunch.com) 63

An anonymous reader quotes a report from TechCrunch: A severe security vulnerability affecting almost every version of the Linux operating system has caught defenders off guard and scrambling to patch after security researchers publicly released exploit code that allows attackers to take complete control of vulnerable systems. The U.S. government says the bug, dubbed "CopyFail," is now being exploited in the wild, meaning it's being actively used in malicious hacking campaigns. [...] Given the risk to the federal enterprise network, U.S. cybersecurity agency CISA has ordered all civilian federal agencies to patch any affected systems by May 15.
The Courts

OpenAI President Discloses His Stake In the Company Is Worth $30 Billion (apnews.com) 143

OpenAI president Greg Brockman's testimony dominated the fifth day of the trial for Elon Musk's lawsuit against the AI company. Brockman took the witness stand on Monday, disclosing that his stake in OpenAI is worth nearly $30 billion, despite not personally investing money in OpenAI. The judge also declined to admit a pretrial text in which Musk allegedly warned Brockman that he and Altman would become "the most hated men in America." From a report: Brockman's disclosure would put him on the Forbes list of the world's richest people, with wealth comparable to Melinda French Gates. [...] Late Sunday, OpenAI lawyers tried to admit as evidence a text message Musk sent to Brockman two days before the trial began. According to a court filing -- which did not include the actual text exchange -- Musk sent a message to Brockman to gauge interest in settlement.

When Brockman replied that both sides should drop their respective claims, Musk shot back, according to the filing, "By the end of this week, you and Sam will be the most hated men in America. If you insist, so it will be." Judge Yvonne Gonzalez Rogers, who is overseeing the trial, did not admit the text exchange as evidence.
Brockman acknowledged that he had promised to personally donate $100,000 to OpenAI's charity but never did. In explaining the delay, Brockman put the onus on Altman: "I asked Sam when I should donate this, and he said he would let me know," reports Business Insider.

The first witness to testify on Monday was Stuart Russell, an artificial intelligence expert who teaches computer science at the University of California, Berkeley. "The most memorable part of Russell's testimony was when he talked about how much Musk's legal team paid him," notes Business Insider. "He received an eye-popping $5,000 per hour for 40 hours of preparatory work. Expert witnesses in high-profile cases typically make between $500 to $1,000 per hour."

Recap:
Musk Concludes Testimony At OpenAI Trial (Day Four)
Elon Musk Says OpenAI Betrayed Him, Clashes With Company's Attorney (Day Three)
Musk Testifies OpenAI Was Created As Nonprofit To Counter Google (Day Two)
Elon Musk and OpenAI CEO Sam Altman Head To Court (Day One)
AI

White House Considers Vetting AI Models Before They Are Released 125

The Trump administration is reportedly considering an executive order to create a working group that could review advanced AI models before public release. The shift follows concerns over Anthropic's powerful Mythos model and its cyber capabilities, with officials weighing whether the government should get early access to frontier models without necessarily blocking their release. The New York Times reports: In meetings last week, White House officials told executives from Anthropic, Google and OpenAI about some of those plans, people briefed on the conversations said. The working group is likely to consider a number of oversight approaches, officials said. But a review process could be similar to one being developed in Britain, which has assigned several government bodies to ensure that A.I. models meet certain safety standards, people in the tech industry and the administration said.

The discussions signal a stark reversal in the Trump administration's approach to A.I. Since returning to office last year, Mr. Trump has been a major booster of the technology, which he has said is vital to winning the geopolitical contest against China. Among other moves, he swiftly rolled back a Biden administration regulatory process that asked A.I. developers to perform safety evaluations and report on A.I. models with potential military applications. "We're going to make this industry absolutely the top, because right now it's a beautiful baby that's born," Mr. Trump said of A.I. at an event in July. "We have to grow that baby and let that baby thrive. We can't stop it. We can't stop it with politics. We can't stop it with foolish rules and even stupid rules." Mr. Trump left room for some rules, but he added that "they have to be more brilliant than even the technology itself."

The White House wants to avoid any political repercussions if a devastating A.I.-enabled cyberattack were to occur, people in the tech industry and the administration said. The administration is also evaluating whether new A.I. models could yield cyber-capabilities that could be useful to the Pentagon and U.S. intelligence agencies, they said. To get ahead of models like Mythos, some officials are pushing for a review system that would give the government first access to A.I. models, but that would not block their release, people briefed on the talks said.
United Kingdom

16% of Parents Help Their Children Bypass Online Age Checks, Study Finds. One 15-Year-Old Just Uses a Fake Moustache (independent.co.uk) 162

The Independent reports that "more than a third of children in the UK have found a way around age verification measures" for social media sites and other online platforms. And new research from online safety organisation Internet Matters "suggests one in six parents have helped their child to get past age verification checks, with children reporting 'tricking' platforms into thinking they are older." Parents also said they had caught their children drawing on facial hair in a bid to evade the technology. One mother said: "I did catch my son using an eyebrow pencil to draw a moustache on his face, and it verified him as 15 years old"... From a sample of 1,000 UK children, 46% said they believed age checks are easy to bypass, while 32% admitted to having done so.
49% of the children surveyed said they'd still encountered harmful content, according to the online safety activists. The group called the figure "unacceptable," and complained that age verification measures "are often ineffective in practice or easy to bypass."
Government

Roblox Blames Age-Verification Rollout for Lowered Growth. Stock Tumbles 22% (qz.com) 37

Age verification became mandatory for chat access on Roblox in January, and on Friday morning Quartz reported that it has apparently impacted the company's financials: Roblox cut its full-year 2026 bookings forecast by roughly $900 million at the midpoint on Thursday, blaming stronger-than-expected headwinds from its mandatory age-verification rollout on an audience that skews heavily toward children and teenagers. Full-year 2026 bookings are now projected at $7.33 billion to $7.60 billion, a range that sits roughly $900 million below the prior guidance of $8.28 billion to $8.55 billion; analysts had expected $8.38 billion, according to Yahoo Finance. Roblox stock fell almost 22% in premarket trading....

Daily active users rose 35% year over year to 132 million, while hours engaged climbed 43% to 31 billion hours... Daily active users and hours engaged fell below forecasts of 143.8 million and 33.68 billion, respectively, according to Yahoo Finance... Users who have not completed age checks have faced restricted communication features, and the process has weighed on the platform's ability to bring in new users. Russia's blocking of the platform, which took effect in December 2025, added further drag on user growth, according to Yahoo Finance. As of the end of the first quarter, 51% of global daily active users had completed age verification, with 65% of U.S. users having done so, Roblox said....

The safety push has come with legal costs. Roblox accrued $57 million in the first quarter for settlements and settlement proposals with certain states over youth-related consumer protection and digital safety matters, with payments structured over multiple years, the company said.

Roblox acknowledged in a letter to shareholders that "our aggressive push to enhance safety lowers our expectations for topline growth in 2026." But the company argued that it also "makes our platform fundamentally better and amplifies the long-term growth potential of Roblox through more effective content targeting, tailored communication experiences, and improved community sentiment."
AI

South Africa's Draft AI Policy Withdrawn Due to 'Fictitious' AI-Generated Citations (timeslive.co.za) 10

An official in South Africa withdrew a draft of the country's national AI policy, reports a local newspaper, "after it was found the draft policy was compiled using AI, which cited academic articles that were 'fictitious'." Earlier this month, minister in the Presidency Khumbudzo Ntshavheni announced cabinet had approved the draft policy for public comment. [Ntshavheni] said the policy seeks to strengthen government's ability to regulate and adopt AI responsibly, while fostering innovation, job creation, and skills access.
The article includes this quote from the country's minister of communications and digital technologies: "This unacceptable lapse proves why vigilant human oversight over the use of artificial intelligence is critical."

Thanks to Slashdot reader Tokolosh for sharing the article.
The Internet

Smuggled Starlink Terminals are Beating Iran's Internet Blackout (bbc.com) 134

An anonymous reader shared this report from the BBC: "If even one extra person is able to access the internet, I think it's successful and it's worth it," says Sahand. The Iranian man is visibly anxious, speaking to the BBC outside Iran, as he carefully explains how he is part of a clandestine network smuggling satellite internet technology — which is illegal in Iran — into the country. Sahand, whose name we have changed, fears for family members and other contacts inside the country. "If I was identified by the Iranian regime, they might make those I'm in touch with in Iran pay the price," he says.

For more than two months, Iran has been in digital darkness as the government maintains one of the longest-running national internet shutdowns ever recorded worldwide... Sahand says he has sent a dozen [Starlink terminals] to Iran since January and "we are actively looking for other ways to smuggle in more". The human rights organisation Witness estimated in January that there are at least 50,000 Starlink terminals in Iran. Activists say the number is likely to have risen...

Last year, the Iranian government passed legislation that made using, buying or selling Starlink devices punishable by up to two years in prison. The jail term for distributing or importing more than 10 devices can be up to 10 years. State-affiliated media has reported multiple cases of people being arrested for selling and buying Starlink terminals, including four people — two of them foreign nationals — arrested last month for "importing satellite internet equipment".

"The BBC contacted SpaceX for more details about the use of Starlink in the country but did not receive a response."
Government

Pentagon Reaches Agreements With Top AI Companies, But Not Anthropic 21

The Pentagon says it has reached deals with seven AI companies -- SpaceX, OpenAI, Google, Nvidia, Reflection AI, Microsoft, and AWS -- to deploy their tools on classified Defense Department networks. The odd one out is Anthropic, which remains excluded after being labeled a supply-chain risk amid a dispute over military-use guardrails. Reuters reports: SpaceX, OpenAI, Google, Nvidia, Reflection, Microsoft, and Amazon Web Services (AWS), several of which already work with the Pentagon, will be integrated into its secret and top-secret network environments, providing more military access to their products for use on sensitive topics, the Pentagon said in a statement. The lesser-known Reflection AI, which raised $2 billion in October, is backed by 1789 Capital, a venture capital firm in which Donald Trump Jr. is a partner and investor.

Since the Pentagon deemed Anthropic's products a "supply-chain risk" in March and the two sides became embroiled in a lawsuit, the military has expressed increasing interest in AI startups. Since the blow-up, newer AI entrants have said the military has shortened the process of bringing them onto its secret and top-secret networks to less than three months. The process previously took 18 months or longer.

By expanding AI services offered to troops, who use them for planning, logistics, targeting and in other ways to streamline huge operations and perform more quickly, the Pentagon said in its statement it will avoid "vendor lock," a likely nod to its overdependence on Anthropic or other dominant service providers. [...] AI has become increasingly important for the U.S. military. The Pentagon's main AI platform, GenAI.mil, has been used by over 1.3 million Defense Department personnel after five months of operation, the agency noted in its release.
Further reading: Google and Pentagon Reportedly Agree On Deal For 'Any Lawful' Use of AI
Transportation

The California Government Is Coming For Your E-Bikes (sfstandard.com) 244

An anonymous reader quotes a report from the San Francisco Standard: If state lawmakers have their way, you'll have to get a license plate for your e-bike, and if you're planning to buy one next year, it'll be slower. Amid growing concerns about e-bike safety, particularly among children in Bay Area suburbs, two bills introduced this year aim to make it easier to ticket riders and reduce the top speed of some models. AB 1942 would require certain e-bikes to be registered with the Department of Motor Vehicles and display license plates, and AB 1557 would slow e-bikes that children are allowed to operate. Both bills are still being reviewed in committee. If either bill passes this year, it will take effect Jan. 1.
The Courts

Musk Concludes Testimony At OpenAI Trial (cnbc.com) 29

An anonymous reader quotes a report from CNBC: Elon Musk wrapped up his testimony on Thursday as the trial in his lawsuit against OpenAI CEO Sam Altman continued into its fourth day. OpenAI's attorney, William Savitt, cross-examined Musk in the morning. He asked Musk about the capped nature of Microsoft's investments in OpenAI, his involvement in negotiations about the company's structure, and whether he knew about the OpenAI nonprofit's recent initiatives. "I don't know what's going on at OpenAI," Musk testified.

Savitt also asked Musk about his competing artificial intelligence startup, xAI. While xAI is not the main focus of the case, Musk said it is "partly" true that the startup used some of OpenAI's models to train its own, a process known as distillation. Musk also suggested that xAI has used OpenAI's technology to help build the company. Musk sued OpenAI, Altman, and Greg Brockman, the company's president, in 2024, alleging that they went back on their commitments to keep the artificial intelligence company a nonprofit and to follow its charitable mission. He claims that the roughly $38 million he donated to seed OpenAI, a company he co-founded, was used for unauthorized commercial purposes.

Once Musk wrapped up his testimony after roughly two hours of questioning on Thursday, his attorneys called Jared Birchall, who manages Musk's billions at his family office, as their next witness. Birchall testified about his knowledge of Musk's specific donations to OpenAI. Judge Yvonne Gonzalez Rogers oversaw the proceedings from federal court in Oakland, California. The trial will resume on Monday.
Recap:
Elon Musk Says OpenAI Betrayed Him, Clashes With Company's Attorney (Day Three)
Musk Testifies OpenAI Was Created As Nonprofit To Counter Google (Day Two)
Elon Musk and OpenAI CEO Sam Altman Head To Court (Day One)
Government

US Senators Ban Themselves From Prediction Markets Trading (cnbc.com) 55

The U.S. Senate unanimously passed a rule banning senators from trading on prediction markets effective immediately. CNBC reports: The move came amid rising concern about insider trading on prediction market platforms such as Kalshi and Polymarket, and about event contracts that can involve death or violence. On April 22, Kalshi said it had suspended and fined one U.S. Senate candidate and two candidates for the House of Representatives for political insider trading on their own campaigns.

Earlier on Thursday, a group of Democratic members of Congress called on the Commodity Futures Trading Commission to issue a rule "that prevents insider trading and corruption in the market and prohibits event contracts on the outcome of elections, war and military actions in the U.S. or abroad, sports, and government actions without a valid economic hedging interest." Kalshi and Polymarket both praised the Senate's action.
"I applaud the Senate for passing this resolution to ban Senators and their offices from trading on prediction markets," Kalshi CEO Tarek Mansour wrote in a post on X. "Kalshi already proactively blocks members of congress and enforces against insider trading. This is a great step to increase trust in our markets by making it an industry standard," Mansour said. "Now, let's pass this in the House!"

Polymarket, in its own post on X, said, "We're in full support of this. Our Rulebook & Terms of Service already prohibit such conduct, but codifying this into law is a step forward for the industry. Happy to help move this forward however we can."
Security

New Linux 'Copy Fail' Vulnerability Enables Root Access On Major Distros (copy.fail) 159

A newly disclosed Linux kernel flaw dubbed "Copy Fail" can let a local, unprivileged attacker gain root access on major Linux distributions, with researchers claiming the bug affects kernels shipped since 2017. "The POC exploit works out of the box today, but a future version that can escape from containers like Docker is promised soon," writes Slashdot reader tylerni7. "Technical details are available here." Slashdot reader BrianFagioli shares a report from NERDS.xyz: A newly disclosed Linux kernel vulnerability called Copy Fail (CVE-2026-31431) allows an unprivileged user to gain root access using a tiny 732-byte script, and it works with unsettling consistency across major distributions. Unlike older exploits that relied on race conditions or fragile timing, this one is a straight-line logic flaw in the kernel's crypto subsystem. It abuses AF_ALG sockets and splice to overwrite a few bytes in the page cache of a target file, such as /usr/bin/su. Because the kernel executes from the page cache, not directly from disk, the attacker can inject code into a setuid binary in memory and immediately escalate privileges.

What makes this especially concerning is how quiet it is. The file on disk remains unchanged, so standard integrity checks see nothing wrong, while the in-memory version has already been tampered with. The same primitive can also cross container boundaries since the page cache is shared, raising the stakes for multi-tenant environments and Kubernetes nodes. The underlying issue traces back to an in-place optimization added years ago, now being rolled back as part of the fix. Until patched kernels are widely deployed, this is one of those bugs that feels less like a theoretical risk and more like a practical, reliable path to full system compromise.
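
For readers unfamiliar with the kernel interface named in the report, below is a minimal, benign sketch of how a userspace program talks to AF_ALG, the Linux kernel crypto socket family the flaw reportedly abuses. It simply asks the kernel to compute a SHA-256 digest; it is only an illustration of the socket interface, not the vulnerability or the exploit, it does not touch splice or the page cache, and it assumes a Linux system with Python 3.6 or later.

```python
# Benign illustration of the AF_ALG kernel crypto socket interface.
# This hashes a buffer with the kernel's SHA-256 implementation; it does
# NOT reproduce the "Copy Fail" flaw or its exploit.
import socket
import hashlib

data = b"hello, kernel crypto"

# Create a transform socket and bind it to the kernel's "sha256" hash algorithm.
with socket.socket(socket.AF_ALG, socket.SOCK_SEQPACKET, 0) as tfm:
    tfm.bind(("hash", "sha256"))
    # accept() returns an operation socket: send data in, read the digest out.
    op, _ = tfm.accept()
    with op:
        op.sendall(data)
        kernel_digest = op.recv(32)  # SHA-256 digests are 32 bytes

# Sanity check against the userspace implementation.
assert kernel_digest == hashlib.sha256(data).digest()
print(kernel_digest.hex())
```

Per the report, the flaw arises when this interface is combined with splice() so that the kernel's crypto code ends up writing into pages that also back a cached copy of a target executable such as /usr/bin/su; the sketch above involves none of that machinery.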
