Security

Bitwarden Scrubs 'Always Free' and 'Inclusion' Values From Its Website (fastcompany.com) 17

Bitwarden appears to be undergoing a quiet shift in leadership and messaging. Its longtime CEO and CFO have stepped down, while the company has removed "Always free" from a prominent password-manager page and replaced "Inclusion" and "Transparency" in its GRIT values with "Innovation" and "Trust." Fast Company reports: In February, longtime CEO Michael Crandell moved to an advisory role, according to LinkedIn, with no announcement from the company. His replacement, Michael Sullivan, former CEO of both Acquia and Insightsoftware, touts his experience with "all facets of mergers and acquisitions" on his own LinkedIn page, including experience working with leading private equity firms. CFO Stephen Morrison also left Bitwarden in April, replaced by former InVision CEO Michael Shenkman. Both Crandell and Morrison joined the company in 2019. Kyle Spearrin, who started Bitwarden as a fun hobby project in 2015, remains the company's CTO.

Meanwhile, Bitwarden has made some subtle tweaks to its website. The page for its personal password manager no longer includes the phrase "Always free." Previously this appeared under the "Pick a plan" section partway down the page, but that section no longer mentions the free plan, though it remains available elsewhere on the page. Bitwarden made this change in mid-April, according to the Internet Archive. Bitwarden has also stopped listing "Inclusion" and "Transparency" as tentpole values on its careers page. The company has long defined its values with the acronym "GRIT," which used to stand for "Gratitude, Responsibility, Inclusion, and Transparency." After May 4, it changed the acronym to stand for "Gratitude, Responsibility, Innovation, and Trust." The phrase "inclusive environment" still appears under a description of Gratitude, while "transparency" is mentioned under the Trust heading. They're just no longer the focus.

Government

Bill To Block Publishers From Killing Online Games Advances In California (arstechnica.com) 24

An anonymous reader quotes a report from Ars Technica: A bill focused on maintaining long-term playable access to online games has passed out of the California Assembly's appropriations committee, setting up a floor vote by the full legislative body. The advancement is a major win for Stop Killing Games' grassroots game preservation movement and comes over the objections of industry lobbyists at the Entertainment Software Association. California's Protect Our Games Act, as currently written, would require digital game publishers who cut off support for an online game to either provide a full refund to players or offer an updated version of the game "that enables its continued use independent of services controlled by the operator." The act would also require publishers to notify players 60 days before the cessation of "services necessary for the ordinary use of the digital game." As currently amended, the act would not apply to completely free games and games offered "solely for the duration of [a] subscription." Any other game offered for sale in California on or after January 1, 2027, would be subject to the law if it passes. [...]

In a formal statement of support for the bill sent to the California legislature, SKG wrote that "there is no other medium in which a product can be marketed and sold to a consumer and then ripped away without notice. As live service games rise in popularity for game developers and gamers alike, end-of-life procedures are essential tools to ensure prolonged access to the games consumers pay to enjoy." The Entertainment Software Association, which helps represent the interests of major game publishers, publicly told the California Assembly last month that the bill misrepresents how modern game distribution actually works. "Consumers receive a license to access and use a game, not an unrestricted ownership interest in the underlying work," the ESA wrote. The eventual shutdown of outdated or obsolete games is "a natural feature of modern software," the group added, especially when that software requires online infrastructure maintenance. The ESA also said the bill would impose unreasonable expectations on publishers regarding licensing rights for music or IP rights, which are often negotiated on a time-limited basis. "A legal requirement to keep games playable indefinitely could place publishers in an impossible position -- forcing them to renegotiate licenses indefinitely or alter games in ways that may not be legally or technically feasible," they wrote.

Transportation

Congress Introduces Bill To Permanently Block Chinese Vehicles From US (caranddriver.com) 88

Longtime Slashdot reader sinij shares a report from Car and Driver: A group of Michigan lawmakers has introduced a bill in Congress that would effectively place a permanent ban on Chinese connected vehicles from being sold in the United States. While an executive order signed by Joe Biden in early 2025 already imposed heavy restrictions, the new bill would codify and expand on the ban, as first reported by Autoweek and explained in a release by the House of Representatives Select Committee on China.

The bill, titled the Connected Vehicle Security Act, was co-signed by John Moolenaar, a Michigan Republican, and Debbie Dingell, a Michigan Democrat. It joins a companion version of the same Connected Vehicle Security Act introduced last month to the Senate by Sen. Bernie Moreno, an Ohio Republican, and Sen. Elissa Slotkin, a Michigan Democrat. While the wording is similar to that found in former President Biden's January 2025 executive order, the new bill would codify the language into law, as well as determine rules for compliance and enforcement.

Specifically, the new bill would restrict Chinese automakers from selling passenger cars in the United States if those vehicles contain any China-developed connectivity software. Officially, the bill covers the sale of vehicles from states deemed "foreign adversary countries," which include China, Russia, North Korea, and Iran. The proposed legislation arrives as Chinese automakers including Chery, Geely, and BYD (maker of the 2026 BYD Dolphin Surf) continue to rise in prominence in foreign markets around the world.
"Doing the right thing for the wrong reasons," comments sinij. "Connected cars that spy on consumers are not a uniquely Chinese problem and should be addressed for all vehicles."
The Courts

Musk Accused of 'Selective Amnesia', Altman of Lying As OpenAI Trial Nears End (reuters.com) 32

An anonymous reader quotes a report from Reuters: A lawyer for Elon Musk hammered at the credibility of OpenAI CEO Sam Altman on Thursday, near the end of a trial over whether to hold the ChatGPT maker and its leaders responsible for allegedly transforming the nonprofit into a vehicle to enrich themselves. OpenAI's lawyers fought back, claiming the world's richest person waited too long to claim OpenAI breached its founding agreement to build safe artificial intelligence to benefit humanity, and couldn't claim he was essential to its success. "Mr. Musk may have the Midas touch in some areas, but not in AI," said William Savitt, a lawyer for OpenAI. "To succeed in AI, as it turns out, all Mr. Musk can do is come to court."

The claims were made during closing arguments of a trial in the Oakland, California, federal court. [...] In his closing argument, Musk's lawyer Steven Molo told jurors that five witnesses, including Musk, former OpenAI board members and former OpenAI Chief Scientist Ilya Sutskever, testified that Altman was a liar. Molo also noted that during cross-examination on Tuesday, Altman did not say yes unequivocally when asked if he was completely trustworthy and did not mislead people in business. "Sam Altman's credibility is directly at issue in this case," Molo said. "If you don't believe him, they cannot win."

Molo accused OpenAI of wrongfully trying to enrich investors and insiders at the nonprofit's expense, and failing to prioritize AI's safety. He also challenged OpenAI President Greg Brockman's goals for the business, citing Brockman's statement that his own OpenAI stake was worth nearly $30 billion. "The arrogance, the lack of sensitivity, the failure to account for just common decency is really, really abhorrent." Musk also accused Microsoft, which invested $1 billion in OpenAI in 2019 and $10 billion in 2023, of aiding and abetting OpenAI's wrongful conduct. "Microsoft was aware of what OpenAI was doing every step of the way," Molo said.

Sarah Eddy, another lawyer for the OpenAI defendants, accused Musk and his legal team in her closing argument of resorting to "sound bites and irrelevant false accusations." Eddy said by 2017, everyone associated with OpenAI -- including Musk, then still on its board -- knew it needed more money to fulfill its mission than it could raise as a nonprofit. "Mr. Musk wanted to turn OpenAI into a for-profit company that he could control," she said. "But the other founders refused to turn the keys of AGI (artificial general intelligence) over to one person, let alone Elon Musk." She also said if Musk truly believed AI should serve humanity, he would not have pushed to fold OpenAI into his electric car company Tesla, or made his rival xAI a for-profit company.

Musk had a three-year statute of limitations to sue, and OpenAI's lawyers said his August 2024 lawsuit came too late because he knew several years earlier about OpenAI's growth plans. Eddy expressed disbelief that Musk claimed he did not read a four-page term sheet in 2018 discussing OpenAI's plan to seek outside investments. "One of the most sophisticated businessmen in the history of the world" wouldn't have "stuck his head in the sand," Eddy said. Savitt accused Musk of having "selective amnesia." Microsoft's lawyer Russell Cohen said in his closing statement that Microsoft wasn't involved in the key events of the case, and was "a responsible partner at every step."
On Monday, the nine-person jury is expected to begin deliberating. The judge and lawyers will also return to court to discuss possible remedies if Musk wins, including how OpenAI should be restructured and what damages might be awarded. If Musk loses, there will be no remedies to consider.

Recap:
OpenAI Trial Wraps Up With 'Jackass' Trophy For Challenging Musk (Day Eleven)
Sam Altman Testifies That Elon Musk Wanted Control of OpenAI (Day Ten)
Microsoft CEO Satya Nadella Testifies In OpenAI Trial (Day Nine)
Sam Altman Had a Bad Day In Court (Day Eight)
Sam Altman's Management Style Comes Under the Microscope At OpenAI Trial (Day Seven)
Brockman Rebuts Musk's Take On Startup's History, Recounts Secret Work For Tesla (Day Six)
OpenAI President Discloses His Stake In the Company Is Worth $30 Billion (Day Five)
Musk Concludes Testimony At OpenAI Trial (Day Four)
Elon Musk Says OpenAI Betrayed Him, Clashes With Company's Attorney (Day Three)
Musk Testifies OpenAI Was Created As Nonprofit To Counter Google (Day Two)
Elon Musk and OpenAI CEO Sam Altman Head To Court (Day One)
AI

OpenAI Trial Wraps Up With 'Jackass' Trophy For Challenging Musk 26

After three weeks of testimony, the Musk v. Altman trial is nearing its end. OpenAI has rested its case, closing arguments are set for Thursday, and jury deliberations are expected to begin afterward. An anonymous reader quotes a report from Business Insider: Joshua Achiam, OpenAI's chief futurist, was probably the most memorable witness of the day. He told jurors about a companywide meeting where Musk answered questions about his planned departure from OpenAI in 2018. Musk told the crowd of 50 or 60 people that he was leaving OpenAI to start his own competing AI. He said he wanted to "build it very fast, because he was very worried that someone else, if they got it, would do the wrong thing with it," Achiam said. Achiam said he challenged Musk on the safety of this approach, which he called "unsafe and reckless." "How did Musk respond?" OpenAI's lawyer Randall Jackson asked. "Defensively," Achiam said. "We had a pretty tense exchange, and he snapped and called me a jackass."

In an effort to prove Achiam's story, OpenAI's lawyers brought a trophy to court that the futurist said he received after his heated exchange with Musk. On the witness stand, Achiam described the trophy as "a small golden jackass, inscribed with: 'never stop being a jackass for safety.'" He said his then-colleagues, Dario Amodei and David Luan, gave it to him as a thank-you for standing up to the Tesla CEO. Lead OpenAI attorney William Savitt told reporters after the day's session that Wednesday had been the first time he'd touched the statue. The futurist had to do without the visual aid, however. Judge Yvonne Gonzalez Rogers did not accept the trophy as evidence, so it did not appear before the jury.

Musk and Altman have presented dueling experts on a question at the core of the trial -- was the nonprofit that runs OpenAI hurt or helped by its $13 billion partnership with Microsoft? Musk's expert testified last week that the nonprofit was indeed hurt, supporting the Tesla CEO's contention that in partnering with Microsoft, OpenAI betrayed the company's nonprofit origins and mission. But on Thursday, OpenAI's expert, John Coates, used Musk's expert's own pie chart and testimony against him. The partnership has "generated value for the nonprofit that I believe he himself accepted was in the $200 billion range in his own testimony," Coates said, referencing Musk expert Daniel Schizer. "If that's not faring well, I don't know what faring well is."

In a point scored for Musk, the jury learned Thursday that Microsoft's own CTO once raised concerns about how OpenAI's early nonprofit donors, including LinkedIn cofounder Reid Hoffman, would react to a partnership. "I wonder if the big OpenAI donors are aware of these plans," Chief Technology Officer Kevin Scott said in a 2018 email he was asked to read aloud to jurors. In it, Scott said he doubted donors would appreciate OpenAI using their seed money to "go build a for-profit thing." Scott was being questioned by an OpenAI lawyer, who may have wanted jurors to quickly hear Scott's explanation: that he only had a "vague awareness" of what was happening at OpenAI at the time. Scott also told the jury he wasn't thinking about Musk when he made the remark. "Primarily, I was thinking about Reid Hoffman. He was the OpenAI donor I knew," Scott said, adding, "I wasn't thinking about anyone besides him."
Recap:
Sam Altman Testifies That Elon Musk Wanted Control of OpenAI (Day Ten)
Microsoft CEO Satya Nadella Testifies In OpenAI Trial (Day Nine)
Sam Altman Had a Bad Day In Court (Day Eight)
Sam Altman's Management Style Comes Under the Microscope At OpenAI Trial (Day Seven)
Brockman Rebuts Musk's Take On Startup's History, Recounts Secret Work For Tesla (Day Six)
OpenAI President Discloses His Stake In the Company Is Worth $30 Billion (Day Five)
Musk Concludes Testimony At OpenAI Trial (Day Four)
Elon Musk Says OpenAI Betrayed Him, Clashes With Company's Attorney (Day Three)
Musk Testifies OpenAI Was Created As Nonprofit To Counter Google (Day Two)
Elon Musk and OpenAI CEO Sam Altman Head To Court (Day One)
Crime

Man Who Stole Beyonce's Hard Drives Gets Five-Year Sentence (theguardian.com) 100

A man accused of stealing hard drives containing unreleased Beyonce music, tour plans, and other materials from a rental car in Atlanta has pleaded guilty and accepted a five-year sentence, including two years in custody. Slashdot reader Bruce66423 shares a report from The Guardian: Kelvin Evans was identified by the Atlanta police department in September in connection with a July 2025 car break-in in which two suitcases containing Beyonce music and tour plans were stolen from a rental car. [...] According to a July police report, Beyonce choreographer Christopher Grant and dancer Diandre Blue called 911 to report a theft from their rental vehicle, a 2024 Jeep Wagoneer, before Beyonce's Cowboy Carter tour dates in Atlanta. An October indictment stated that Evans entered the car on July 8 "with the intent to commit theft."

The stolen hard drives contained "watermarked music, some unreleased music, footage plans for the show and past and future set list," according to a police report. Clothing, designer sunglasses, laptops and AirPods headphones were also stolen, Grant and Blue said. Local law enforcement tracked the location of one of the stolen laptops and the AirPods to try to recover the property. One police officer wrote in the report: "I conducted a suspicious stop in the area, due to the information that was relayed to me. There were several cars in the area also that the AirPods were pinging to in that area also. After further investigation, a silver [redacted], which had traveled into zone 5 was moving at the same time as the tracking on the AirPods."

Evans was arrested several weeks after Grant and Blue filed a report, and was publicly named as the suspect in September. He was released on a $20,000 bond a month later. At the time of his arrest, Atlanta police said that the stolen property had not been recovered. It is unclear whether it has since been found.
Bruce66423 commented: "Just for stealing a couple of suitcases from a car. Funny how the elite punish those who inconvenience them. Can you imagine an ordinary victim seeing their offender get that sort of sentence?"
Facebook

Meta Employees Launch Protest Against Mouse-Tracking Tech At US Offices (reuters.com) 67

An anonymous reader quotes a report from Reuters: Meta employees distributed flyers at multiple U.S. offices on Tuesday to protest the company's recent installation of mouse-tracking software on their computers, according to photos of the pamphlets seen by Reuters. The flyers, which appeared in meeting rooms, on vending machines and atop toilet paper dispensers at the Facebook owner's offices, encouraged staffers to sign an online petition against the move. "Don't want to work at the Employee Data Extraction Factory?" they asked, according to the photos seen by Reuters. [...]

The pamphlets and the petition both cite the U.S. National Labor Relations Act, saying "workers are legally protected when they choose to organize for the improvement of working conditions." In the UK, a group of Meta employees has started organizing a drive for unionization with United Tech and Allied Workers (UTAW), a branch of the Communication Workers Union. The employees set up a website to recruit members using the URL "Leanin.uk," a reference to former Chief Operating Officer Sheryl Sandberg's best-selling book encouraging women to seek equal footing in the workplace. "Meta's workers are paying the price for management's reckless and expensive bets. While executives chase speculative AI strategies, staff are facing devastating job cuts, draconian surveillance, and the cruel reality of being forced to train the inefficient systems being positioned to replace them," said Eleanor Payne, an organizer with UTAW.
"If we're building agents to help people complete everyday tasks using computers, our models need real examples of how people actually use them -- things like mouse movements, clicking buttons, and navigating dropdown menus," Meta said in an earlier statement.
The Courts

Sam Altman Testifies That Elon Musk Wanted Control of OpenAI (nytimes.com) 68

OpenAI CEO Sam Altman took the stand Tuesday in Elon Musk's trial against the company, testifying that Musk repeatedly sought control of OpenAI before leaving in 2018. Altman said he opposed putting AI "under the control of any one person," while Musk's lawyer used a pointed cross-examination to attack Altman's trustworthiness. An anonymous reader shares updates from the testimony via the New York Times: Before Elon Musk left OpenAI in a power struggle in 2018, he wanted to merge the nonprofit artificial intelligence lab with Tesla, his electric car company. Mr. Musk and other OpenAI co-founders met several times to discuss the merger. OpenAI's chief executive, Sam Altman, was even offered a seat on Tesla's board of directors, according to a court document. But folding OpenAI into Tesla would have eliminated the lab's nonprofit status, and that, Mr. Altman said on the witness stand on Tuesday, was something he wanted to avoid. [...] "I believed that A.I. should not be under the control of any one person," Mr. Altman said. [...] Mr. Altman testified about his feud with Mr. Musk. He said he had become worried that Mr. Musk, who provided the early investment money for OpenAI, wanted to take control of the lab. He described what he called a "particularly harrowing moment" when his OpenAI co-founders asked Mr. Musk what would happen to his control of a potential for-profit when he died. Mr. Altman said Mr. Musk had replied that the control would pass to his children. "I was not comfortable with that," Mr. Altman said. When Mr. Musk lost a power struggle for control of the lab, he left, forcing Mr. Altman to find another big financial backer in Microsoft.

But Mr. Altman ran into trouble in 2023 when OpenAI's board fired him because, as several of its members have testified in the trial, it didn't trust him. Steven Molo, Mr. Musk's lead lawyer, homed in on Mr. Altman's trustworthiness during an aggressive cross-examination. "Are you completely trustworthy?" Mr. Molo asked. "I believe so," Mr. Altman answered. After questioning Mr. Altman's trustworthiness for nearly 20 minutes, Mr. Molo turned to Mr. Altman's relationship with Mr. Musk. Mr. Altman said that after he met Mr. Musk in the mid-2010s, Mr. Musk had occasionally expressed concern about the dangers of A.I. But Mr. Musk spent far more time saying he was worried that companies like Google would get ahead in A.I. development, Mr. Altman said. (Mr. Musk testified in the trial that he had wanted to create OpenAI to prevent Google from controlling the technology.)

Mr. Altman, the lawyer intimated, took advantage of Mr. Musk's concerns and was never sincere about his own A.I. fears. "Are you a person who just tells people things they want to hear whether those things are true or not?" Mr. Molo asked. The lawyer also questioned whether Mr. Altman, who became a billionaire through years of tech investments, was self-dealing through OpenAI. Mr. Molo showed a list of Mr. Altman's personal investments across a number of companies that stand to benefit from their association with OpenAI. They included Helion Energy, a start-up that has deals with Microsoft and OpenAI, and Cerebras, a chip maker in business with OpenAI. Mr. Molo asked if Mr. Altman, who is on OpenAI's board as well as its chief executive, would ever fire himself. "I have no plans to do that," Mr. Altman said.

OpenAI's odd journey from nonprofit lab to what it is today -- a well-funded, for-profit company that is still connected to a nonprofit called the OpenAI Foundation with an endowment that could be worth more than $130 billion -- provided grist for Mr. Molo's questions about Mr. Altman's motivations. He implied that Mr. Altman could have continued to build OpenAI as a pure nonprofit. But the only way to build such a valuable charity was to raise billions through a for-profit venture, Mr. Altman responded. Still, the giant sums being raised appeared to upset Mr. Musk. In late 2022, according to court documents, Mr. Musk sent a text to Mr. Altman complaining that Microsoft was preparing to invest $10 billion in OpenAI. "This is a bait and switch," Mr. Musk said at the time. But Mr. Altman, under questioning from his own lawyers, said: "Every step of the way, I have done my best to maximize the value of the nonprofit. I would point out that there are not a lot of historical examples of a nonprofit at this scale."
Before Altman took the stand, OpenAI board chair Bret Taylor continued his testimony that began on Monday. He said Elon Musk's 2024 bid to buy the company's assets appeared to conflict with his lawsuit and was rejected because the board did not believe OpenAI's mission should be controlled by one person. "We did not feel like it was appropriate for one person to control our mission," he said.

Recap:
Microsoft CEO Satya Nadella Testifies In OpenAI Trial (Day Nine)
Sam Altman Had a Bad Day In Court (Day Eight)
Sam Altman's Management Style Comes Under the Microscope At OpenAI Trial (Day Seven)
Brockman Rebuts Musk's Take On Startup's History, Recounts Secret Work For Tesla (Day Six)
OpenAI President Discloses His Stake In the Company Is Worth $30 Billion (Day Five)
Musk Concludes Testimony At OpenAI Trial (Day Four)
Elon Musk Says OpenAI Betrayed Him, Clashes With Company's Attorney (Day Three)
Musk Testifies OpenAI Was Created As Nonprofit To Counter Google (Day Two)
Elon Musk and OpenAI CEO Sam Altman Head To Court (Day One)
Crime

Instructure Pays Canvas Hackers To Delete Students' Stolen Data (bbc.com) 83

Instructure, the company behind the widely used Canvas learning platform, says it reached an agreement with the hackers who stole 3.5 terabytes of student and university data. The company says it received "digital confirmation" that the information was destroyed and that affected schools and students would not be extorted. The BBC reports: Paying cyber criminals goes against the advice of law enforcement agencies around the world, as it can fuel further attacks and offers no guarantee the data has been deleted. In previous cases, criminals have accepted ransom payments but lied about destroying stolen data, instead keeping it for resale. For example, when the notorious LockBit ransomware group was hacked by the National Crime Agency, police found stolen data had not been deleted even after payments had been made.

Instructure said in a statement on its website that protecting students' and education staff's data was its primary motivation. "While there is never complete certainty when dealing with cyber criminals, we believe it was important to take every step within our control to give customers additional peace of mind, to the extent possible," the company said. Instructure did not set out the terms of the agreement but said that under it:
- the data was returned to the company
- it received "digital confirmation of data destruction"
- it had been informed that no Instructure customers would be extorted as a result of the incident
- the agreement covers all affected customers, with no need for individuals to engage with the hackers

The Courts

Microsoft CEO Satya Nadella Testifies In OpenAI Trial (cnbc.com) 26

The Musk v. Altman trial entered its third week Monday, with Microsoft CEO Satya Nadella and OpenAI co-founder and former chief scientist Ilya Sutskever taking the stand. Nadella testified that Elon Musk never raised concerns to him that Microsoft's investments in OpenAI violated any special commitments, and said he viewed the partnership as clearly commercial from the start. He also described OpenAI's 2023 board crisis as "amateur city."

Meanwhile, Sutskever testified that he had raised concerns about Sam Altman because he feared OpenAI could be "destroyed." He expressed concerns about Altman's behavior to the board, in part because he said he felt "a great deal of ownership" over the startup. "I simply cared for it, and I didn't want it to be destroyed," Sutskever said. CNBC reports: Nadella said he was "very proud" that Microsoft took the risk to invest in OpenAI when "no one else was willing" to bet on the fledgling lab. Musk, who testified late last month, said Microsoft's $10 billion investment was the key tipping point that made him believe OpenAI was violating its nonprofit mission. He testified that the scale of the investment bothered him, and it prompted him to open a legal investigation into OpenAI. "I was concerned they were really trying to steal the charity," Musk said from the stand.

Nadella said he did not believe Microsoft's investments in OpenAI were donations, and that there was a clear commercial element to their partnership from the outset. He said during the partnership's early years, Microsoft gave OpenAI sharp discounts on computing resources, and Microsoft believed it would reap marketing benefits from doing so. During a separate video deposition that was played on Monday morning, Michael Wetter, a corporate development executive at Microsoft, said the company had recognized approximately $9.5 billion in revenue through its partnership with OpenAI as of March 2025.

[...] Nadella said he was "pretty surprised" by the board's decision [to fire Altman in November 2023], and that his priority was to figure out how to maintain continuity for Microsoft customers. Immediately after Altman was removed, Nadella said he made an effort to learn more about what happened, adding that he suspected jealousy and poor communication were at play. During conversations with OpenAI board members after the firing, Nadella said he was simply trying to understand the language in OpenAI's statement about Altman being "not consistently candid" while communicating with the board. That language, Nadella said, "just didn't sort of suffice, because this is the CEO of a company that we are invested in and we're deeply partnered with, and so I felt that they could have explained to me what are the incidents or what is the detail behind it." There must have been instances of jealousy or miscommunication that could have justified pushing out Altman, Nadella said. He wanted more depth from the board members after the remark about candor, but no such information was available, he said. "It was sort of amateur city, as far as I'm concerned," Nadella testified.

[...] Musk testified that he is not entirely against OpenAI having a for-profit unit, but he said it became "the tail wagging the dog." He repeatedly accused Altman and Brockman of enriching themselves from a charity while also reaping the positive associations that come from running a nonprofit. "Microsoft has their own motivations, and that would be different from the motivations of the charity," Musk said from the stand. "All due respect to Microsoft, do you really want Microsoft controlling digital superintelligence?"

During a videotaped deposition shown in court last week, former OpenAI director Tasha McCauley recalled a discussion with Nadella and her fellow board members after the 2023 decision to dismiss Altman as OpenAI's CEO. "To the best of my recollection, Satya wanted to restore things to as they had been," McCauley said. The board members didn't think that was the right move, she said. But as a court witness on Monday, Nadella said he never demanded that the board reinstate Altman as OpenAI CEO.
Recap:
Sam Altman Had a Bad Day In Court (Day Eight)
Sam Altman's Management Style Comes Under the Microscope At OpenAI Trial (Day Seven)
Brockman Rebuts Musk's Take On Startup's History, Recounts Secret Work For Tesla (Day Six)
OpenAI President Discloses His Stake In the Company Is Worth $30 Billion (Day Five)
Musk Concludes Testimony At OpenAI Trial (Day Four)
Elon Musk Says OpenAI Betrayed Him, Clashes With Company's Attorney (Day Three)
Musk Testifies OpenAI Was Created As Nonprofit To Counter Google (Day Two)
Elon Musk and OpenAI CEO Sam Altman Head To Court (Day One)
Privacy

GM Secretly Sold California Drivers' Data, Agrees to Pay $12.75M In Privacy Settlement (ca.gov) 41

"General Motors sold the data of California drivers without their knowledge or consent," says California's attorney general, "and despite numerous statements reassuring drivers that it would not do so."

In 2024, The New York Times "reported that automakers including GM were sharing information about their customers' driving behavior with insurance companies," remembers TechCrunch, "and that some customers were concerned that their insurance rates had gone up as a result."

Now General Motors "has reached a privacy-related settlement with a group of law enforcement agencies led by California Attorney General Rob Bonta..." The settlement announcement from Bonta's office similarly alleges that GM sold "the names, contact information, geolocation data, and driving behavior data of hundreds of thousands of Californians" to Verisk Analytics and LexisNexis Risk Solutions, which are both data brokers. Bonta's office further alleges that this data was collected through GM's OnStar program, and that the company made roughly $20 million from data sales.

However, Bonta's office also said the data did not lead to increased insurance prices in California, "likely because under California's insurance laws, insurers are prohibited from using driving data to set insurance rates." As part of the settlement, GM has agreed to pay $12.75 million in civil penalties and to stop selling driving data to any consumer reporting agencies for five years, Bonta's office said. GM has also agreed to delete any driver data that it still retains within 180 days (unless it obtains consent from customers), and to request that Lexis and Verisk delete that data.

"This trove of information included precise and personal location data that could identify the everyday habits and movements of Californians," according to the attorney general's announcement. The settlement "requires General Motors to abandon these illegal practices, and underscores the importance of data minimization in California's privacy law — companies can't just hold on to data and use it later for another purpose."

"Modern cars are rolling data collection machines," said San Francisco District Attorney Brooke Jenkins. "Californians must have confidence that they know what data is being collected, how it is being used, and what their opt-out rights are... This case sends a strong message that law enforcement will take action when California privacy laws are not scrupulously followed."
EU

The EU Considers Restricting Use of US Cloud Platforms for Sensitive Government Data (cnbc.com) 95

CNBC reports: The European Union is considering rules that would restrict its member governments' use of U.S. cloud providers to handle sensitive data, sources familiar with the talks told CNBC.

The European Commission — the EU's executive branch — is expected to present its "Tech Sovereignty Package" on May 27, which will include a range of measures aimed at bolstering the bloc's strategic autonomy in key digital areas. As part of preparations for that package, discussions are taking place within the Commission around limiting the exposure of sensitive public-sector data to cloud platforms provided by companies outside of the EU, two Commission officials, who asked to remain anonymous as they weren't authorized to discuss private talks, told CNBC... "The core idea is defining sectors that have to be hosted on European cloud capacity," one of the officials said. They added that companies providing cloud solutions from third countries, including the U.S., could be impacted. Proposals would not bar overseas companies' cloud platforms from government contracts entirely, but would limit their use in processing sensitive data at public sector organizations, depending on the level of sensitivity, they added. The officials said that talks are ongoing and yet to be finalized...

The officials told CNBC there are discussions around proposing that financial, judicial, and health data processed by governments and public-sector organizations be hosted on cloud infrastructure meeting high sovereignty requirements.

Privacy

Fiber Optic Cables Can Eavesdrop On Nearby Conversations (science.org) 28

sciencehabit shares a report from Science Magazine: Cold War spies planted bugs in walls, lamps, and telephones. Now, scientists warn, the cables themselves could listen in. A fiber optic technique used to detect earthquakes can also pick up the faint vibrations of nearby speech, researchers reported this week at the general assembly of the European Geosciences Union. Freely available artificial intelligence (AI) software turned the fiber optic data into intelligible, real-time transcripts. "Not many people realize that [fiber optic cables] can detect acoustic waves," says Jack Lee Smith, a geophysicist at the University of Edinburgh who presented the result. "We show that in almost every case where you use these fibers, this could be a privacy concern."

Fiber optics can pick up on sound through a technique called distributed acoustic sensing (DAS). Using a machine called an interrogator, researchers fire laser pulses down a cable and record the pattern of reflections coming back from tiny glass defects along the length of the fiber optic. When an earthquake's seismic wave crosses a section of the fiber, it stretches and squeezes the defects, leading to shifts in the reflected light that researchers can use to build a picture of an earthquake. DAS essentially turns a fiber cable into a long chain of seismometers that can detect not only earthquakes, but also the rumblings of volcanoes, cars, and college marching bands. And although scientists set up dedicated fiber lines specifically for research, DAS can also be performed on "dark fiber" -- unused strands in the web of fiber optics that runs through cities and across oceans, carrying the world's internet traffic.

DAS can also be used to eavesdrop, the work of Smith and his colleagues shows. They conducted a field test using an existing DAS setup used to study coastal erosion. They set a speaker next to the cable and played pure tones, music, and speech. Human speech contains frequencies ranging from a few hundred to several thousand hertz. The low end of the range could be pulled out of the data "even without any preprocessing," Smith says. "You can easily see acoustic waves." Getting higher frequency speech took a bit of postprocessing, but it was possible. Dumping the data directly into Whisper, a free AI transcription tool, provided accurate real-time transcription. However, this technique worked only for coiled cables, exposed at the surface, at distances of up to 5 meters from the speaker. Burying the cable under just 20 centimeters of dirt was enough to muddy the speech. And straight cables -- even exposed ones right next to the speaker -- did not record speech well.
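In outline, the pipeline the researchers describe is simple: isolate the speech band in a single DAS channel, normalize it, and hand the result to a transcription model. The sketch below is an illustrative reconstruction, not the team's actual code; the `das_strain` input, sampling rate, and exact filter band are all assumptions, and the Whisper step is left as a comment.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def speech_band(das_strain, fs, low_hz=100.0, high_hz=1000.0):
    """Band-pass one DAS channel to the lower end of the speech band.

    das_strain: 1-D strain samples from one section of fiber
    fs: interrogator sampling rate in Hz (assumed high enough for speech)
    """
    sos = butter(4, [low_hz, high_hz], btype="bandpass", fs=fs, output="sos")
    audio = sosfiltfilt(sos, das_strain)  # zero-phase filtering
    # Normalize to [-1, 1] so the result can be written out as audio and
    # passed to a transcription model such as Whisper.
    peak = np.max(np.abs(audio))
    return audio / peak if peak > 0 else audio

# Toy demonstration: a 300 Hz tone buried in broadband noise survives
# the filter, while out-of-band energy is suppressed.
fs = 8000
t = np.arange(fs) / fs
rng = np.random.default_rng(0)
recording = np.sin(2 * np.pi * 300 * t) + 0.5 * rng.standard_normal(fs)
audio = speech_band(recording, fs)
```

As the article notes, the band-passed signal alone was enough to recover low-frequency speech; higher frequencies needed further postprocessing before transcription.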

AI

Thousands of Vibe-Coded Apps Expose Corporate and Personal Data On the Open Web 43

An anonymous reader quotes a report from Wired: Security researcher Dor Zvi and his team at the cybersecurity firm he cofounded, RedAccess, analyzed thousands of vibe-coded web applications created using the AI software development tools Lovable, Replit, Base44, and Netlify and found more than 5,000 of them that had virtually no security or authentication of any kind. Many of these web apps allowed anyone who merely found their web URL to access the apps and their data. Others had only trivial barriers to that access, such as requiring that a visitor sign in with any email address. Around 40 percent of the apps exposed sensitive data, Zvi says, including medical information, financial data, corporate presentations, and strategy documents, as well as detailed logs of customer conversations with chatbots.

"The end result is that organizations are actually leaking private data through vibe-coding applications," says Zvi. "This is one of the biggest events ever where people are exposing corporate or other sensitive information to anyone in the world." Zvi says RedAccess' scouring for vulnerable web apps was surprisingly easy. Lovable, Replit, Base44, and Netlify all allow users to host their web apps on those AI companies' own domains, rather than the users'. So the researchers used straightforward Google and Bing searches for those AI companies' domains combined with other search terms to identify thousands of apps that had been vibe coded with the companies' tools.
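The exposure check itself needs nothing more sophisticated than an HTTP request: if a discovered app URL returns its content with no authentication challenge, anyone who finds the URL can read it. The helper below is a hypothetical sketch of such a probe, not RedAccess's tooling (the function name and header are my own), using only the Python standard library.

```python
from urllib.error import HTTPError, URLError
from urllib.request import Request, urlopen

def looks_unauthenticated(url: str, timeout: float = 5.0) -> bool:
    """Return True if `url` serves a page with no authentication barrier.

    A plain 200 response means anyone who discovers the URL can read the
    app; a 401/403 (or any failure) suggests at least some barrier exists.
    """
    try:
        resp = urlopen(
            Request(url, headers={"User-Agent": "research-probe"}),
            timeout=timeout,
        )
        return resp.status == 200
    except HTTPError:
        return False  # e.g. 401/403: the app challenged the request
    except URLError:
        return False  # unreachable hosts count as "not exposed"
```

In a survey like the one described, a 200 response would only be the first signal; confirming actual data exposure still requires inspecting what the page returns.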

Of the 5,000 AI-coded apps that Zvi says were left publicly accessible to anyone who simply typed their URLs into a browser, he found close to 2,000 that, upon closer inspection, seemed to reveal private data: Screenshots of web apps he shared with WIRED -- several of which WIRED verified were still online and exposed -- showed what appeared to be a hospital's work assignments with the personally identifiable information of doctors, a company's detailed ad purchasing information, what appeared to be another firm's go-to-market strategy presentation, a retailer's full logs of its chatbot's conversations with customers, including the customers' full names and contact information, a shipping firm's cargo records, and assorted sales and financial records from a variety of other companies. In some cases, Zvi says, he found that the exposed apps would have allowed him to gain administrative privileges over systems and even remove other administrators. In the case of Lovable, Zvi says he also found numerous examples of phishing sites that impersonated major corporations, including Bank of America, Costco, FedEx, Trader Joe's, and McDonald's, that appeared to have been created with the AI coding tool and hosted on Lovable's domain.

"Anyone from your company at any moment can generate an app, and this is not going through any development cycle or any security check," Zvi says. "People can just start using it in production without asking anyone. And they do."
Sci-Fi

Pentagon Begins Releasing New Files On UFOs (apnews.com) 83

The Pentagon has begun releasing new UFO/UAP files through a newly launched public website, starting with 162 documents from agencies including the FBI, State Department, NASA, and others. Officials say more files will be released on a rolling basis. The Associated Press reports: The Pentagon has begun releasing new files on UFOs, saying members of the public can draw their own conclusions on "unidentified anomalous phenomena" like an object that a drone pilot says shone a bright light in the sky and then vanished. It said in a post on X on Friday that while past administrations sought to discredit or dissuade the American people, President Donald Trump "is focused on providing maximum transparency to the public, who can ultimately make up their own minds about the information contained in these files." It said additional documents will be released on a rolling basis.

Besides the Pentagon, the effort is led by the White House, the director of national intelligence, the Energy Department, NASA and the FBI. A newly unveiled website housing the documents on unidentified anomalous phenomena, or UAPs, has a decidedly retro feel, with black-and-white military imagery of flying objects displayed prominently on the page and statements rendered in a typewriter-like font. The first release includes 162 files, such as old State Department cables, FBI documents and transcripts from NASA of crewed flights into space.

One document details an FBI interview with someone identified as a drone pilot who, in September 2023, reported seeing a "linear object" with a light bright enough to "see bands within the light" in the sky. "The object was visible for five to ten seconds and then the light went out and the object vanished," according to the FBI interview. Another file is a NASA photograph from the Apollo 17 mission in 1972, showing three dots in a triangular formation. The Pentagon says in an accompanying caption that "there is no consensus about the nature of the anomaly" but that a new, preliminary analysis indicated that it could be a "physical object."

Slashdot Top Deals