Privacy

GM Secretly Sold California Drivers' Data, Agrees to Pay $12.75M In Privacy Settlement (ca.gov) 17

"General Motors sold the data of California drivers without their knowledge or consent," says California's attorney general, "and despite numerous statements reassuring drivers that it would not do so."

In 2024, The New York Times "reported that automakers including GM were sharing information about their customers' driving behavior with insurance companies," remembers TechCrunch, "and that some customers were concerned that their insurance rates had gone up as a result."

Now General Motors "has reached a privacy-related settlement with a group of law enforcement agencies led by California Attorney General Rob Bonta..." The settlement announcement from Bonta's office similarly alleges that GM sold "the names, contact information, geolocation data, and driving behavior data of hundreds of thousands of Californians" to Verisk Analytics and LexisNexis Risk Solutions, which are both data brokers. Bonta's office further alleges that this data was collected through GM's OnStar program, and that the company made roughly $20 million from data sales.

However, Bonta's office also said the data did not lead to increased insurance prices in California, "likely because under California's insurance laws, insurers are prohibited from using driving data to set insurance rates." As part of the settlement, GM has agreed to pay $12.75 million in civil penalties and to stop selling driving data to any consumer reporting agencies for five years, Bonta's office said. GM has also agreed to delete any driver data that it still retains within 180 days (unless it obtains consent from customers), and to request that Lexis and Verisk delete that data.

"This trove of information included precise and personal location data that could identify the everyday habits and movements of Californians," according to the attorney general's announcement. The settlement "requires General Motors to abandon these illegal practices, and underscores the importance of the data minimization in California's privacy law — companies can't just hold on to data and use it later for another purpose."

"Modern cars are rolling data collection machines," said San Francisco District Attorney Brooke Jenkins. "Californians must have confidence that they know what data is being collected, how it is being used, and what their opt-out rights are... This case sends a strong message that law enforcement will take action when California privacy laws are not scrupulously followed."
EU

The EU Considers Restricting Use of US Cloud Platforms for Sensitive Government Data (cnbc.com) 67

CNBC reports: The European Union is considering rules that would restrict its member governments' use of U.S. cloud providers to handle sensitive data, sources familiar with the talks told CNBC.

The European Commission — the EU's executive branch — is expected to present its "Tech Sovereignty Package" on May 27, which will include a range of measures aimed at bolstering the bloc's strategic autonomy in key digital areas. As part of preparations for that package, discussions are taking place within the Commission around limiting the exposure of sensitive public-sector data to cloud platforms provided by companies outside of the EU, two Commission officials, who asked to remain anonymous as they weren't authorized to discuss private talks, told CNBC... "The core idea is defining sectors that have to be hosted on European cloud capacity," one of the officials said. They added that companies providing cloud solutions from third countries, including the U.S., could be impacted. Proposals would not prohibit overseas companies' cloud platforms from government contracts entirely, but limit their use in processing sensitive data at public sector organizations, depending on the level of sensitivity, they added. The officials said that talks are ongoing and yet to be finalized...

The officials told CNBC there are discussions around proposing that financial, judicial and health data processed by governments and public-sector organizations require high levels of sovereign cloud infrastructure.

Privacy

Fiber Optic Cables Can Eavesdrop On Nearby Conversations (science.org) 28

sciencehabit shares a report from Science Magazine: Cold War spies planted bugs in walls, lamps, and telephones. Now, scientists warn, the cables themselves could listen in. A fiber optic technique used to detect earthquakes can also pick up the faint vibrations of nearby speech, researchers reported this week here at the general assembly of the European Geosciences Union. Freely available artificial intelligence (AI) software turned the fiber optic data into intelligible, real-time transcripts. "Not many people realize that [fiber optic cables] can detect acoustic waves," says Jack Lee Smith, a geophysicist at the University of Edinburgh who presented the result. "We show that in almost every case where you use these fibers, this could be a privacy concern."

Fiber optics can pick up on sound through a technique called distributed acoustic sensing (DAS). Using a machine called an interrogator, researchers fire laser pulses down a cable and record the pattern of reflections coming back from tiny glass defects along the length of the fiber. When an earthquake's seismic wave crosses a section of the fiber, it stretches and squeezes the defects, leading to shifts in the reflected light that researchers can use to build a picture of an earthquake. DAS essentially turns a fiber cable into a long chain of seismometers that can detect not only earthquakes, but also the rumblings of volcanoes, cars, and college marching bands. And although scientists set up dedicated fiber lines specifically for research, DAS can also be performed on "dark fiber" -- unused strands in the web of fiber optics that runs through cities and across oceans, carrying the world's internet traffic.

DAS can also be used to eavesdrop, the work of Smith and his colleagues shows. They conducted a field test using an existing DAS setup used to study coastal erosion. They set a speaker next to the cable and played pure tones, music, and speech. Human speech contains frequencies ranging from a few hundred to several thousand hertz. The low end of the range could be pulled out of the data "even without any preprocessing," Smith says. "You can easily see acoustic waves." Getting higher frequency speech took a bit of postprocessing, but it was possible. Dumping the data directly into Whisper, a free AI transcription tool, provided accurate real-time transcription. However, this technique worked only for coiled cables, exposed at the surface, at distances of up to 5 meters from the speaker. Burying the cable under just 20 centimeters of dirt was enough to muddy the speech. And straight cables -- even exposed ones right next to the speaker -- did not record speech well.
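For readers curious what that processing chain looks like, here is a minimal Python sketch of the general approach described above: take the strain signal from one DAS channel, band-limit it to speech frequencies, write it out as audio, and hand it to Whisper. The sample rate, file names, and filter band are assumptions for illustration, not values from the researchers' setup.

# Minimal sketch of a DAS-to-transcript pipeline (assumed parameters,
# not the researchers' actual code). Requires numpy, scipy, and openai-whisper.
import numpy as np
from scipy.signal import butter, sosfiltfilt
from scipy.io import wavfile
import whisper

FS = 4000                               # assumed DAS sampling rate in Hz
strain = np.load("das_channel.npy")     # hypothetical 1-D strain trace for one channel

# Band-pass to roughly 100-1000 Hz, the low end of the speech range that
# the article says is recoverable with little preprocessing.
sos = butter(4, [100, 1000], btype="bandpass", fs=FS, output="sos")
audio = sosfiltfilt(sos, strain)

# Normalize and write a 16-bit WAV so Whisper can ingest it.
audio = audio / (np.max(np.abs(audio)) + 1e-12)
wavfile.write("das_audio.wav", FS, (audio * 32767).astype(np.int16))

# Whisper resamples to 16 kHz internally and returns a transcript.
model = whisper.load_model("base")
print(model.transcribe("das_audio.wav")["text"])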

AI

Thousands of Vibe-Coded Apps Expose Corporate and Personal Data On the Open Web (wired.com) 43

An anonymous reader quotes a report from Wired: Security researcher Dor Zvi and his team at the cybersecurity firm he cofounded, RedAccess, analyzed thousands of vibe-coded web applications created using the AI software development tools Lovable, Replit, Base44, and Netlify and found more than 5,000 of them that had virtually no security or authentication of any kind. Many of these web apps allowed anyone who merely finds their web URL to access the apps and their data. Others had only trivial barriers to that access, such as requiring that a visitor sign in with any email address. Around 40 percent of the apps exposed sensitive data, Zvi says, including medical information, financial data, corporate presentations, and strategy documents, as well as detailed logs of customer conversations with chatbots.

"The end result is that organizations are actually leaking private data through vibe-coding applications," says Zvi. "This is one of the biggest events ever where people are exposing corporate or other sensitive information to anyone in the world." Zvi says RedAccess' scouring for vulnerable web apps was surprisingly easy. Lovable, Replit, Base44, and Netlify all allow users to host their web apps on those AI companies' own domains, rather than the users'. So the researchers used straightforward Google and Bing searches for those AI companies' domains combined with other search terms to identify thousands of apps that had been vibe coded with the companies' tools.

Of the 5,000 AI-coded apps that Zvi says were left publicly accessible to anyone who simply typed their URLs into a browser, he found close to 2,000 that, upon closer inspection, seemed to reveal private data: Screenshots of web apps he shared with WIRED -- several of which WIRED verified were still online and exposed -- showed what appeared to be a hospital's work assignments with the personally identifiable information of doctors, a company's detailed ad purchasing information, what appeared to be another firm's go-to-market strategy presentation, a retailer's full logs of its chatbot's conversations with customers, including the customers' full names and contact information, a shipping firm's cargo records, and assorted sales and financial records from a variety of other companies. In some cases, Zvi says, he found that the exposed apps would have allowed him to gain administrative privileges over systems and even remove other administrators. In the case of Lovable, Zvi says he also found numerous examples of phishing sites that impersonated major corporations, including Bank of America, Costco, FedEx, Trader Joe's, and McDonald's, that appeared to have been created with the AI coding tool and hosted on Lovable's domain.
"Anyone from your company at any moment can generate an app, and this is not going through any development cycle or any security check," Zvi says. "People can just start using it in production without asking anyone. And they do."
Sci-Fi

Pentagon Begins Releasing New Files On UFOs (apnews.com) 68

The Pentagon has begun releasing new UFO/UAP files through a newly launched public website, starting with 162 documents from agencies including the FBI, State Department, NASA, and others. Officials say more files will be released on a rolling basis. The Associated Press reports: The Pentagon has begun releasing new files on UFOs, saying members of the public can draw their own conclusions on "unidentified anomalous phenomena" like an object that a drone pilot says shone a bright light in the sky and then vanished. It said in a post on X on Friday that while past administrations sought to discredit or dissuade the American people, President Donald Trump "is focused on providing maximum transparency to the public, who can ultimately make up their own minds about the information contained in these files." It said additional documents will be released on a rolling basis.

Besides the Pentagon, the effort is led by the White House, the director of national intelligence, the Energy Department, NASA and the FBI. A newly unveiled website housing the documents on unidentified anomalous phenomena, or UAPs, has a decidedly retro feel, with black-and-white military imagery of flying objects displayed prominently on the page, with statements displayed in typewriter-like font. The first release includes 162 files, such as old State Department cables, FBI documents and transcripts from NASA of crewed flights into space.

One document details an FBI interview with someone identified as a drone pilot who, in September 2023, reported seeing a "linear object" with a light bright enough to "see bands within the light" in the sky. "The object was visible for five to ten seconds and then the light went out and the object vanished," according to the FBI interview. Another file is a NASA photograph from the Apollo 17 mission in 1972, showing three dots in a triangular formation. The Pentagon says in an accompanying caption that "there is no consensus about the nature of the anomaly" but that a new, preliminary analysis indicated that it could be a "physical object."

The Courts

Sam Altman Had a Bad Day In Court (businessinsider.com) 56

An anonymous reader quotes a report from Business Insider: As the trial between Elon Musk and OpenAI ended its second week, the Tesla CEO started scoring points against Sam Altman. His witnesses landed three solid punches in testimony about how Altman runs OpenAI as CEO, raising concerns about his dedication to AI safety, the nonprofit's mission, and his honesty as a leader of the organization. [...] This week, Musk's legal team called a parade of witnesses who questioned whether Altman was acting in the interest of the nonprofit. On Thursday, that included a former OpenAI safety researcher, who described a slow erosion of the company's safety teams, which prompted her to leave the company. Witnesses also shared stories about the company launching products without the proper safety reviews -- or the knowledge of the board. Rosie Campbell, a former AI safety researcher at OpenAI, testified that the company became more product-focused during her time there and moved away from the long-term safety work that had initially drawn her in. She said both long-term AI safety teams were eventually eliminated, and that she supported Altman's reinstatement only because she feared OpenAI might otherwise collapse into Microsoft: "It was my understanding at the time that the best way for OpenAI to not disintegrate and fall apart would be for Sam to return." Still, Campbell's testimony wasn't entirely favorable to Musk. She also said xAI, Musk's AI company, likely had an inferior approach to safety compared with OpenAI's.

Helen Toner, another former OpenAI board member, also testified about the board's concerns leading up to Altman's removal. She said the board was not primarily worried about ChatGPT's safety, but about Altman's leadership and investor relationships, saying, "The issues that we were concerned about in our decision to fire Sam were exacerbated by relationships with investors." Toner also described concerns that Altman was misrepresenting what others had said, telling the court, "We were concerned that Sam was inserting words into other people's mouths in order to get people to do what he wanted."

Meanwhile, Tasha McCauley, a former OpenAI board member, described a deep loss of trust in Altman and accused him of creating "chaos" and "crisis" inside the company. She said Altman fostered a "culture of lying and culture of deceit," including allegedly misleading others about whether GPT-4 Turbo needed internal safety review before launch.

Musk's lawyers then called to the stand David Schizer, a Columbia Law professor and nonprofit-governance expert, who framed Altman's alleged behavior as a serious governance problem for an organization that was supposed to be mission-driven. Asked about claims that products were launched without full board awareness or safety review, he said, "The board and CEO need to be partnering, working together, to make sure the mission is being followed," adding that "if the CEO is withholding that information, it's a big problem."

The day ended with the start of a Microsoft executive's deposition. Microsoft VP Michael Wetter said Azure had integrated OpenAI technology, that Microsoft saw strategic value in having AI developers build on Azure, and that a 2016 agreement allowed OpenAI to use Microsoft tools for free even though it could mean a loss of up to $15 million for Microsoft. Testimony ended early, with no court on Friday and the trial set to resume Monday.

Recap:
Sam Altman's Management Style Comes Under the Microscope At OpenAI Trial (Day Seven)
Brockman Rebuts Musk's Take On Startup's History, Recounts Secret Work For Tesla (Day Six)
OpenAI President Discloses His Stake In the Company Is Worth $30 Billion (Day Five)
Musk Concludes Testimony At OpenAI Trial (Day Four)
Elon Musk Says OpenAI Betrayed Him, Clashes With Company's Attorney (Day Three)
Musk Testifies OpenAI Was Created As Nonprofit To Counter Google (Day Two)
Elon Musk and OpenAI CEO Sam Altman Head To Court (Day One)
Privacy

60% of MD5 Password Hashes Are Crackable In Under an Hour (theregister.com) 106

In honor of World Password Day, Kaspersky researchers revisited their study on the crackability of real-world passwords and found that 60% of MD5-hashed passwords could be cracked in under an hour with a single Nvidia RTX 5090, and 48% could be cracked in under a minute. "The bottom line is that passwords protected only by fast hashing algorithms such as MD5 are no longer safe if attackers obtain them in a data breach," reports The Register. From the report: Much of the reason password hashes have become so easy to crack is password predictability. Per Kaspersky, its analysis of more than 200 million exposed passwords revealed common patterns that attackers can use to optimize cracking algorithms, significantly reducing the time needed to guess the character combinations that grant access to target accounts.
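To see why a fast, unsalted hash like MD5 offers so little protection once the hashes leak, here is a toy dictionary attack in Python. The wordlist and target hash are invented for illustration; real attackers run GPU tools such as hashcat that test billions of candidates per second, which is exactly what slow, salted algorithms like bcrypt or Argon2 are designed to frustrate.

# Toy dictionary attack against an unsalted MD5 hash (illustration only).
# The wordlist and the "breached" hash below are made up for this sketch.
import hashlib

wordlist = ["password", "123456", "qwerty", "letmein", "dragon"]
target = hashlib.md5(b"letmein").hexdigest()   # pretend this came from a breach

for candidate in wordlist:
    if hashlib.md5(candidate.encode()).hexdigest() == target:
        print("cracked:", candidate)
        break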

In case you're wondering whether there's a trend to compare this to, Kaspersky ran a prior iteration of this study in 2024, and bad news: Passwords are actually a bit easier to crack in 2026 than they were a couple of years ago. Not by much, mind you -- only a few percent -- but it's still a move in the wrong direction. "Attackers owe this boost in speed to graphics processors, which grow more powerful every year," Kaspersky explained. "Unfortunately, passwords remain as weak as ever."
"This World Password Day, the main message ought not to be to the users, who often have no choice but to use passwords anyway, but to the sites and providers that are requiring them to do so," said senior IEEE member and University of Nottingham cybersecurity professor Steven Furnell. His advice is that providers need to modernize their login systems and enforce stronger protections, because users are often stuck with whatever security options they're given.
Social Networks

LinkedIn Profile Visitor Lists Belong to the People, Says Noyb (theregister.com) 28

A LinkedIn user in the EU is challenging Microsoft's refusal to provide a full list of profile visitors under GDPR Article 15, arguing that the data should be available for free because LinkedIn processes it and sells a more complete version to Premium users. Privacy group Noyb says the case could set a broader precedent over whether companies can monetize user-related data while denying access to the same data through GDPR requests. "Selling data to its own users is a popular practice among companies," Noyb data protection lawyer Martin Baumann said of the case. "In reality, however, people have the right to receive their own data free of charge." The Register reports: Take a look at the language of Article 15, and it's pretty clear: data subjects (i.e., users) have the right to a copy of any and all data concerning them that's been processed by the provider. A full list of profile visitors seemingly should fall under Article 15 data -- even if it's normally reserved for paying users and presented to them in a nicer way, it should still be accessible to free users who actually request it. [...] Noyb acknowledges there's a clear bit of legal fuzz stuck in this corner of the GDPR when it comes to premium service offerings. "If any business processes a person's personal data, this information is generally covered by their right of access under the GDPR," Baumann told The Register. "It does not matter that the business would prefer to sell the data to the data subject or that it would be harmful for their business model if they would."

There's only one exception in Article 15 that would give LinkedIn an out, Baumann told us, and that's the last paragraph, which says a person's right to their data can't adversely affect the rights and freedoms of others. Were LinkedIn to argue that it had to protect the identities of people who visited a data subject's profile, they could have an excuse. But not a good one, in Baumann's opinion. "Since LinkedIn does provide information about profile visits to paying Premium members, it cannot consider that disclosing the data would adversely affect the rights of the visitors whose data is disclosed," the Noyb lawyer explained. "Otherwise, providing this information to Premium users would be unlawful too."

What seems to be the sticking point here is where right of access begins and a company's right to make money off data they hold (data that was, ahem, supplied by users) ends. Baumann said he hopes this case can clear the legal air. "We expect a clarification concerning the fact that personal data that can be accessed when a user pays for it is also covered by their right of access," he explained. [...] Baumann said there are numerous other cases where similar legal clarification would be appreciated, citing the example of a bank that is unwilling to provide access to account statements in response to a GDPR request, but is happy to hand over similar data for a fee. "A precedent would be welcomed," Baumann said.
A LinkedIn spokesperson told The Register: "Not only is it incorrect that only Premium members can see who has viewed their profile, but we also satisfy GDPR Article 15 by disclosing the information at issue via our Privacy Policy."
The Courts

Sam Altman's Management Style Comes Under the Microscope At OpenAI Trial (businessinsider.com) 19

Sam Altman's management style came under scrutiny on the seventh day of Elon Musk's high-stakes OpenAI trial, as former OpenAI figures Mira Murati, Shivon Zilis, and Helen Toner took the stand to testify about their experiences working with him. Their testimony resurfaced many of the criticisms that first emerged during Altman's brief ouster as CEO in 2023. An anonymous reader quotes a report from Business Insider: The first witness was Mira Murati, OpenAI's former chief technology officer and now founder of her own AI shop, Thinking Machines Lab. Jurors watched a recorded video deposition of Murati, who was also OpenAI's interim CEO after the board briefly ousted Sam Altman. Murati's testimony focused on her concerns about Altman's "difficult and chaotic" management style. She said Altman had trouble "making decisions on big controversial things." He also had a habit of telling people what they wanted to hear.

"My concern was about Sam saying one thing to one person and a completely different thing to another person, and that makes it a very difficult and chaotic environment to work with," said Murati. Murati said that her issue with Altman was not about safety, "it is about Sam creating chaos." She said she supported Altman's return to OpenAI because the company "was at catastrophic risk of falling apart" at the time of his ousting. "I was concerned about the company completely blowing up."

Zilis said she was upset that Altman rolled out ChatGPT without involving the board. "It wasn't just me but the entire board raised concern about that whole thing happening without any board communication," she said. Zilis said she was also concerned about a potential OpenAI deal with a nuclear energy startup called Helion Energy because both Altman and Greg Brockman were investors. Although the executives had disclosed the investment to the board, Zilis said the deal talk made her uneasy. It "felt super out of left field," she said. "How is it the case that we want to place a major bet on a speculative technology?"

In a video deposition, Helen Toner, a former member of OpenAI's board who resigned in 2023, said she first became aware of ChatGPT's release when an OpenAI employee asked another board member whether the board was aware of the development. [...] Toner also elaborated on why the board, including herself, voted to remove Altman as CEO in 2023. "There were a number of things -- the pattern of behavior related to his honesty and candor, his resistance of board oversight, as well as the concerns that two of his inner management team raised to the board about his management practices, his manipulation of board processes," said Toner.
Recap:
Brockman Rebuts Musk's Take On Startup's History, Recounts Secret Work For Tesla (Day Six)
OpenAI President Discloses His Stake In the Company Is Worth $30 Billion (Day Five)
Musk Concludes Testimony At OpenAI Trial (Day Four)
Elon Musk Says OpenAI Betrayed Him, Clashes With Company's Attorney (Day Three)
Musk Testifies OpenAI Was Created As Nonprofit To Counter Google (Day Two)
Elon Musk and OpenAI CEO Sam Altman Head To Court (Day One)
Privacy

Microsoft Edge Stores Passwords In Plaintext In RAM (pcmag.com) 107

Longtime Slashdot reader UnknowingFool writes: Security researcher Tom Joran Sonstebyseter Ronning has found that Microsoft Edge stores passwords in plaintext in RAM. After creating a password and storing it using Edge's password manager, Ronning found that he could dump the RAM and recover his password, which was stored in plaintext. Part of the issue is that Edge loads all passwords for all sites upon a single verification check, even if the user is not visiting a specific site. This is very different from Chrome, which only loads passwords for specific websites when challenged for the site's password. Also, Chrome will delete the password from memory once it has been used to fill the field. Edge does not delete the passwords from memory once they are used.
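The Chrome behavior described above, decrypting a password only when it is needed and clearing it afterward, reflects the general practice of scrubbing secrets from memory after use. Here is a generic Python sketch of that idea using a mutable bytearray (immutable strings can't be overwritten in place); it illustrates the principle only and is not either browser's actual code.

# Generic illustration of wiping a secret from memory after use.
# Not Chrome's or Edge's implementation; Python also gives no hard
# guarantee about copies the interpreter may have made elsewhere.
def use_password(secret: bytearray) -> None:
    try:
        # ... authenticate or autofill with bytes(secret) here ...
        pass
    finally:
        for i in range(len(secret)):   # overwrite the buffer in place
            secret[i] = 0

pw = bytearray(b"hunter2")
use_password(pw)
assert all(b == 0 for b in pw)         # plaintext no longer lives in this buffer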

Microsoft downplayed the risk, noting that access would require control over a user's PC, such as from a malware infection: "Access to browser data as described in the reported scenario would require the device to already be compromised," Microsoft said. Ronning countered that administrative privileges on one account were enough to dump and view the passwords of other logged-on users.
"Design choices in this area involve balancing performance, usability, and security, and we continue to review it against evolving threats," Microsoft said. "Browsers access password data in memory to help users sign in quickly and securely -- this is an expected feature of the application. We recommend users install the latest security updates and antivirus software to help protect against security threats."
Piracy

Zuckerberg 'Personally Authorized and Encouraged' Meta's Copyright Infringement (variety.com) 76

Five major publishers and author Scott Turow have sued Meta and Mark Zuckerberg, alleging that Zuckerberg "personally authorized and actively encouraged" massive copyright infringement by using pirated books, journal articles, and web-scraped material to train Meta's Llama AI systems. Meta denies wrongdoing and says it will fight the case, arguing that courts have recognized AI training on copyrighted material as potentially fair use. Variety reports: "In their effort to win the AI 'arms race' and build a functional generative AI model, Defendants Meta and Zuckerberg followed their well-known motto: 'move fast and break things,'" the plaintiffs say in their lawsuit. "They first illegally torrented millions of copyrighted books and journal articles from notorious pirate sites and downloaded unauthorized web scrapes of virtually the entire internet. They then copied those stolen fruits many times over to train Meta's multibillion-dollar generative AI system called Llama. In doing so, Defendants engaged in one of the most massive infringements of copyrighted materials in history."

The suit was filed Tuesday (May 5) in the U.S. District Court for the Southern District of New York by five publishers (Hachette, Macmillan, McGraw Hill, Elsevier and Cengage) and Turow individually. The proposed class-action suit seeks unspecified monetary damages for the alleged copyright infringement. A copy of the lawsuit is available at this link (PDF). [...] the latest lawsuit alleges that Meta and Zuckerberg deliberately circumvented copyright-protection mechanisms -- and had considered paying to license the works before abandoning that strategy at "Zuckerberg's personal instruction." The suit essentially argues that the conduct described falls outside protections afforded by fair-use provisions of the U.S. copyright code.

The Courts

Brockman Rebuts Musk's Take On Startup's History, Recounts Secret Work For Tesla (cnbc.com) 44

An anonymous reader quotes a report from CNBC: OpenAI President Greg Brockman concluded his testimony on Tuesday, where he largely rebutted Elon Musk's account of the early years of the startup and negotiations that occurred at the company. Brockman testified that he never made any commitments to Musk about the company's corporate structure, and he never heard anyone else make them. He emphasized that OpenAI is still governed by a nonprofit. "This entity remains a nonprofit," Brockman said, referring to the OpenAI foundation. "It is the best-resourced nonprofit in the world." [...] Brockman, who spoke from the witness stand in federal court in Oakland, California, over the course of two days, also revealed that Musk had enlisted several OpenAI employees to do months of free work for him at Tesla, Musk's electric vehicle company. That work mainly included efforts to overhaul the company's approach to developing self-driving technology as part of the Autopilot team there in 2017. During his two days on the stand, Brockman answered questions about his personal financial ambitions, his understanding of OpenAI's structure and Musk's involvement at the company, which they co-founded with other executives in 2015.

In Musk's testimony last week, the Tesla and SpaceX CEO said that the time, money and resources he poured into OpenAI had been integral to the company's success. He repeatedly said that he helped recruit the company's top talent. Brockman said Tuesday that while Musk was helpful in convincing some employees to take the leap to join OpenAI, he was a polarizing figure for others. "Elon had a reputation of being an extremely hard driver," Brockman said. He added that "certain candidates were very attracted" by Musk's involvement at OpenAI, and that "certain candidates were very turned off." Musk testified last week that a former OpenAI researcher named Andrej Karpathy joined Tesla, but only after he had planned to leave the startup already. Brockman said that Musk, after he hired Karpathy, approached him with "an apology and a confession," about the hire, and that neither Musk nor Karpathy had told him the researcher planned to leave OpenAI before that. Musk was generally not very available for meetings and conversations, Brockman said, so he relied on employees, including Sam Teller and former OpenAI board member Shivon Zilis, as proxies.
Brockman testified that open sourcing OpenAI's technology was "not a topic of conversation" during Musk's time with the nonprofit, despite Musk's claims that it was supposed to be central to the organization. He also described tense 2017 negotiations over a possible for-profit arm, saying Musk became angry when equity stakes were discussed. "He said Musk declined the proposal during an in-person meeting, then tore a painting of a Tesla Model 3 car off the wall, and began storming out of the room," reports CNBC. Musk also demanded to know when the cofounders would leave the company, Brockman said.

Brockman further said Musk wanted control of OpenAI because he disliked situations where he lacked control, citing Zip2 and SolarCity as examples Musk had raised. He also testified that Musk partly wanted control to help fund his broader SpaceX ambition of building a "city on Mars."

CNBC notes the trial will resume at 8:30 a.m. PT on Wednesday, with Shivon Zilis expected to testify. She is the mother of four of Musk's children and a former OpenAI board member.

Recap:
OpenAI President Discloses His Stake In the Company Is Worth $30 Billion (Day Five)
Musk Concludes Testimony At OpenAI Trial (Day Four)
Elon Musk Says OpenAI Betrayed Him, Clashes With Company's Attorney (Day Three)
Musk Testifies OpenAI Was Created As Nonprofit To Counter Google (Day Two)
Elon Musk and OpenAI CEO Sam Altman Head To Court (Day One)
The Courts

Apple Agrees To Pay iPhone Owners $250 Million For Not Delivering AI Siri (theverge.com) 37

Apple has agreed to a proposed $250 million settlement over claims that it misled iPhone buyers about the availability of Apple Intelligence and its upgraded Siri features. The settlement would cover U.S. buyers of the iPhone 16 lineup and iPhone 15 Pro models between June 10, 2024, and March 29, 2025. The Verge reports: The settlement will resolve a 2025 lawsuit alleging that Apple's advertisements created a "clear and reasonable consumer expectation" that Apple Intelligence features would be available with the launch of the iPhone 16. The lawsuit claimed Apple's products "offered a significantly limited or entirely absent version of Apple Intelligence, misleading consumers about its actual utility and performance."

Apple brought certain AI-powered features to the iPhone 16 in the weeks after its release, and delayed the launch of its more personalized Siri, which is now expected to arrive later this year. Last April, the National Advertising Division recommended that Apple "discontinue or modify" its "available now" claim for Apple Intelligence. Apple also pulled an iPhone 16 ad showing actor Bella Ramsey using the AI-upgraded Siri.
Bug

US Government Warns of Severe CopyFail Bug Affecting Major Versions of Linux (techcrunch.com) 66

An anonymous reader quotes a report from TechCrunch: A severe security vulnerability affecting almost every version of the Linux operating system has caught defenders off-guard and scrambling to patch after security researchers publicly released exploit code that allows attackers to take complete control of vulnerable systems. The U.S. government says the bug, dubbed "CopyFail," is now being exploited in the wild, meaning it's being actively used in malicious hacking campaigns. [...] Given the risk to the federal enterprise network, U.S. cybersecurity agency CISA has ordered all civilian federal agencies to patch any affected systems by May 15.
The Courts

OpenAI President Discloses His Stake In the Company Is Worth $30 Billion (apnews.com) 167

OpenAI president Greg Brockman's testimony dominated the fifth day of the trial for Elon Musk's lawsuit against the AI company. Brockman took the witness stand on Monday, disclosing that his stake in OpenAI is worth nearly $30 billion, despite not personally investing money in OpenAI. The judge also declined to admit a pretrial text in which Musk allegedly warned Brockman that he and Altman would become "the most hated men in America." From a report: Brockman's disclosure would put him on the Forbes list of the world's richest people, with wealth comparable to Melinda French Gates. [...] Late Sunday, OpenAI lawyers tried to admit as evidence a text message Musk sent to Brockman two days before the trial began. According to a court filing -- which did not include the actual text exchange -- Musk sent a message to Brockman to gauge interest in settlement.

When Brockman replied that both sides should drop their respective claims, Musk shot back, according to the filing, "By the end of this week, you and Sam will be the most hated men in America. If you insist, so it will be." Judge Yvonne Gonzalez Rogers, who is overseeing the trial, did not admit the text exchange as evidence.
Brockman acknowledged that he had promised to personally donate $100,000 to OpenAI's charity but never did. In explaining the delay, Brockman put the onus on Altman: "I asked Sam when I should donate this, and he said he would let me know," reports Business Insider.

The first witness to testify on Monday was Stuart Russell, an artificial intelligence expert who teaches computer science at the University of California, Berkeley. "The most memorable part of Russell's testimony was when he talked about how much Musk's legal team paid him," notes Business Insider. "He received an eye-popping $5,000 per hour for 40 hours of preparatory work. Expert witnesses in high-profile cases typically make between $500 to $1,000 per hour."

Recap:
Musk Concludes Testimony At OpenAI Trial (Day Four)
Elon Musk Says OpenAI Betrayed Him, Clashes With Company's Attorney (Day Three)
Musk Testifies OpenAI Was Created As Nonprofit To Counter Google (Day Two)
Elon Musk and OpenAI CEO Sam Altman Head To Court (Day One)
