Privacy

60% of MD5 Password Hashes Are Crackable In Under an Hour (theregister.com) 20

In honor of World Password Day, Kaspersky researchers revisited their study on the crackability of real-world passwords and found that 60% of MD5-hashed passwords could be cracked in under an hour with a single Nvidia RTX 5090, and 48% could be cracked in under a minute. "The bottom line is that passwords protected only by fast hashing algorithms such as MD5 are no longer safe if attackers obtain them in a data breach," reports The Register. From the report: Much of the reason password hashes have become so easy to crack is password predictability. Per Kaspersky, its analysis of more than 200 million exposed passwords revealed common patterns that attackers can use to optimize cracking algorithms, significantly reducing the time needed to guess the character combinations that grant access to target accounts.

In case you're wondering whether there's a trend to compare this to, Kaspersky ran a prior iteration of this study in 2024, and bad news: Passwords are actually a bit easier to crack in 2026 than they were a couple of years ago. Not by much, mind you -- only a few percent -- but it's still a move in the wrong direction. "Attackers owe this boost in speed to graphics processors, which grow more powerful every year," Kaspersky explained. "Unfortunately, passwords remain as weak as ever."
"This World Password Day, the main message ought not to be to the users, who often have no choice but to use passwords anyway, but to the sites and providers that are requiring them to do so," said senior IEEE member and University of Nottingham cybersecurity professor Steven Furnell. His advice is that providers need to modernize their login systems and enforce stronger protections, because users are often stuck with whatever security options they're given.
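The gap the researchers describe comes down to how cheap a single guess is: MD5 was designed for speed, while purpose-built password hashes deliberately burn time per attempt. A minimal sketch of the difference using only Python's standard library (the iteration count and timings are illustrative; production systems should use a memory-hard KDF such as Argon2 with a per-user random salt):

```python
import hashlib
import time

password = b"password"

# Fast hash: a single MD5 digest, the kind of operation a modern GPU
# can perform billions of times per second when brute-forcing a leak.
t0 = time.perf_counter()
md5_digest = hashlib.md5(password).hexdigest()
md5_time = time.perf_counter() - t0

# Slow, salted key derivation: PBKDF2 repeats the work many times per
# guess, so the same brute-force budget covers far fewer candidates.
salt = b"per-user-random-salt"  # in practice, generate with os.urandom(16)
t0 = time.perf_counter()
kdf_digest = hashlib.pbkdf2_hmac("sha256", password, salt, 600_000)
kdf_time = time.perf_counter() - t0

print(f"MD5:    {md5_time * 1e6:.1f} microseconds")
print(f"PBKDF2: {kdf_time * 1e3:.1f} milliseconds")
```

Run on any machine, the PBKDF2 derivation is several orders of magnitude slower per guess than the MD5 digest, which is exactly the property that makes leaked fast-hash databases so cheap to crack.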
Social Networks

LinkedIn Profile Visitor Lists Belong to the People, Says Noyb (theregister.com) 14

A LinkedIn user in the EU is challenging Microsoft's refusal to provide a full list of profile visitors under GDPR Article 15, arguing that the data should be available for free because LinkedIn processes it and sells a more complete version to Premium users. Privacy group Noyb says the case could set a broader precedent over whether companies can monetize user-related data while denying access to the same data through GDPR requests. "Selling data to its own users is a popular practice among companies," Noyb data protection lawyer Martin Baumann said of the case. "In reality, however, people have the right to receive their own data free of charge." The Register reports: Take a look at the language of Article 15, and it's pretty clear: data subjects (i.e., users) have the right to a copy of any and all data concerning them that's been processed by the provider. A full list of profile visitors seemingly should fall under Article 15 data -- even if it's normally reserved for paying users and presented to them in a nicer way, it should still be accessible to free users who actually request it. [...] Noyb acknowledges there's a clear bit of legal fuzz stuck in this corner of the GDPR when it comes to premium service offerings. "If any business processes a person's personal data, this information is generally covered by their right of access under the GDPR," Baumann told The Register. "It does not matter that the business would prefer to sell the data to the data subject or that it would be harmful for their business model if they would."

There's only one exception in Article 15 that would give LinkedIn an out, Baumann told us, and that's the last paragraph, which says a person's right to their data can't adversely affect the rights and freedoms of others. Were LinkedIn to argue that it had to protect the identities of people who visited a data subject's profile, they could have an excuse. But not a good one, in Baumann's opinion. "Since LinkedIn does provide information about profile visits to paying Premium members, it cannot consider that disclosing the data would adversely affect the rights of the visitors whose data is disclosed," the Noyb lawyer explained. "Otherwise, providing this information to Premium users would be unlawful too."

What seems to be the sticking point here is where right of access begins and a company's right to make money off data they hold (data that was, ahem, supplied by users) ends. Baumann said he hopes this case can clear the legal air. "We expect a clarification concerning the fact that personal data that can be accessed when a user pays for it is also covered by their right of access," he explained. [...] Baumann said there are numerous other cases where similar legal clarification would be appreciated, citing the example of a bank that is unwilling to provide access to account statements in response to a GDPR request, but is happy to hand over similar data for a fee. "A precedent would be welcomed," Baumann said.
A LinkedIn spokesperson told The Register: "Not only is it incorrect that only Premium members can see who has viewed their profile, but we also satisfy GDPR Article 15 by disclosing the information at issue via our Privacy Policy."
The Courts

Sam Altman's Management Style Comes Under the Microscope At OpenAI Trial (businessinsider.com) 19

Sam Altman's management style came under scrutiny on the seventh day of Elon Musk's high-stakes OpenAI trial, as former OpenAI figures Mira Murati, Shivon Zilis, and Helen Toner took the stand to testify about their experiences working with him. Their testimony resurfaced many of the criticisms that first emerged during Altman's brief ouster as CEO in 2023. An anonymous reader quotes a report from Business Insider: The first witness was Mira Murati, OpenAI's former chief technology officer and now founder of her own AI shop, Thinking Machines Lab. Jurors watched a recorded video deposition of Murati, who was also OpenAI's interim CEO after the board briefly ousted Sam Altman. Murati's testimony focused on her concerns about Altman's "difficult and chaotic" management style. She said Altman had trouble "making decisions on big controversial things." He also had a habit of telling people what they wanted to hear.

"My concern was about Sam saying one thing to one person and a completely different thing to another person, and that makes it a very difficult and chaotic environment to work with," said Murati. Murati said that her issue with Altman was not about safety, "it is about Sam creating chaos." She said she supported Altman's return to OpenAI because the company "was at catastrophic risk of falling apart" at the time of his ousting. "I was concerned about the company completely blowing up."

Zilis said she was upset that Altman rolled out ChatGPT without involving the board. "It wasn't just me but the entire board raised concern about that whole thing happening without any board communication," she said. Zilis said she was also concerned about a potential OpenAI deal with a nuclear energy startup called Helion Energy because both Altman and Greg Brockman were investors. Although the executives had disclosed the investment to the board, Zilis said the deal talk made her uneasy. It "felt super out of left field," she said. "How is it the case that we want to place a major bet on a speculative technology?"

In a video deposition, Helen Toner, a former member of OpenAI's board who resigned in 2023, said she first became aware of ChatGPT's release when an OpenAI employee asked another board member whether the board was aware of the development. [...] Toner also elaborated on why the board, including herself, voted to remove Altman as CEO in 2023. "There were a number of things -- the pattern of behavior related to his honesty and candor, his resistance of board oversight, as well as the concerns that two of his inner management team raised to the board about his management practices, his manipulation of board processes," said Toner.
Recap:
Brockman Rebuts Musk's Take On Startup's History, Recounts Secret Work For Tesla (Day Six)
OpenAI President Discloses His Stake In the Company Is Worth $30 Billion (Day Five)
Musk Concludes Testimony At OpenAI Trial (Day Four)
Elon Musk Says OpenAI Betrayed Him, Clashes With Company's Attorney (Day Three)
Musk Testifies OpenAI Was Created As Nonprofit To Counter Google (Day Two)
Elon Musk and OpenAI CEO Sam Altman Head To Court (Day One)
Privacy

Microsoft Edge Stores Passwords In Plaintext In RAM (pcmag.com) 103

Longtime Slashdot reader UnknowingFool writes: Security researcher Tom Joran Sonstebyseter Ronning has found that Microsoft Edge stores passwords in plaintext in RAM. After creating a password and storing it with Edge's password manager, Ronning found that he could dump the RAM and recover the password in plaintext. Part of the issue is that Edge loads the passwords for all sites after a single verification check, even when the user is not visiting those sites. This differs from Chrome, which only loads the password for a specific website when challenged for it, and deletes the password from memory once it has been filled. Edge does not delete passwords from memory after they are used.

Microsoft downplayed the risk, noting that access would require control over a user's PC, such as through a malware infection: "Access to browser data as described in the reported scenario would require the device to already be compromised," Microsoft said. Ronning countered that an attacker with administrative privileges under one account could dump the plaintext passwords of other logged-on users as well.
"Design choices in this area involve balancing performance, usability, and security, and we continue to review it against evolving threats," Microsoft said. "Browsers access password data in memory to help users sign in quickly and securely -- this is an expected feature of the application. We recommend users install the latest security updates and antivirus software to help protect against security threats."
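The Chrome behavior described above amounts to a "decrypt, use, wipe" discipline: fetch one site's password on demand, hand it to the fill path, and zero the buffer immediately afterward. A hypothetical sketch of that pattern (not Edge's or Chrome's actual code; Python offers no hard guarantees against stray copies surviving in memory, so this only illustrates the design choice):

```python
# Illustrative per-site load-and-wipe pattern. A bytearray is used so
# the plaintext can be overwritten in place once it has been consumed.

class Vault:
    def __init__(self, decrypt):
        self._decrypt = decrypt  # callable: site -> plaintext bytes

    def autofill(self, site, submit):
        # Plaintext for exactly one site exists only inside this buffer.
        buf = bytearray(self._decrypt(site))
        try:
            submit(buf)  # hand the mutable buffer to the fill path
        finally:
            for i in range(len(buf)):  # zero the buffer as soon as it's used
                buf[i] = 0

# Demo: the callback keeps a reference to the buffer, so we can observe
# that the plaintext has been wiped once autofill returns.
vault = Vault(lambda site: b"hunter2" if site == "example.com" else b"")
seen = []
vault.autofill("example.com", seen.append)
print(bytes(seen[0]))  # all zero bytes: the plaintext is gone after use
```

The contrast with the reported Edge behavior is the scope and lifetime of the plaintext: one site's secret, alive only for the duration of the fill, rather than every stored password resident in memory after a single verification.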
Piracy

Zuckerberg 'Personally Authorized and Encouraged' Meta's Copyright Infringement (variety.com) 74

Five major publishers and author Scott Turow have sued Meta and Mark Zuckerberg, alleging that Zuckerberg "personally authorized and actively encouraged" massive copyright infringement by using pirated books, journal articles, and web-scraped material to train Meta's Llama AI systems. Meta denies wrongdoing and says it will fight the case, arguing that courts have recognized AI training on copyrighted material as potentially fair use. Variety reports: "In their effort to win the AI 'arms race' and build a functional generative AI model, Defendants Meta and Zuckerberg followed their well-known motto: 'move fast and break things,'" the plaintiffs say in their lawsuit. "They first illegally torrented millions of copyrighted books and journal articles from notorious pirate sites and downloaded unauthorized web scrapes of virtually the entire internet. They then copied those stolen fruits many times over to train Meta's multibillion-dollar generative AI system called Llama. In doing so, Defendants engaged in one of the most massive infringements of copyrighted materials in history."

The suit was filed Tuesday (May 5) in the U.S. District Court for the Southern District of New York by five publishers (Hachette, Macmillan, McGraw Hill, Elsevier and Cengage) and Turow individually. The proposed class-action suit seeks unspecified monetary damages for the alleged copyright infringement. A copy of the lawsuit is available at this link (PDF). [...] the latest lawsuit alleges that Meta and Zuckerberg deliberately circumvented copyright-protection mechanisms -- and had considered paying to license the works before abandoning that strategy at "Zuckerberg's personal instruction." The suit essentially argues that the conduct described falls outside protections afforded by fair-use provisions of the U.S. copyright code.

The Courts

Brockman Rebuts Musk's Take On Startup's History, Recounts Secret Work For Tesla (cnbc.com) 44

An anonymous reader quotes a report from CNBC: OpenAI President Greg Brockman concluded his testimony on Tuesday, largely rebutting Elon Musk's account of the startup's early years and of the negotiations that took place at the company. Brockman testified that he never made any commitments to Musk about the company's corporate structure, and he never heard anyone else make them. He emphasized that OpenAI is still governed by a nonprofit. "This entity remains a nonprofit," Brockman said, referring to the OpenAI foundation. "It is the best-resourced nonprofit in the world." [...] Brockman, who spoke from the witness stand in federal court in Oakland, California, over the course of two days, also revealed that Musk had enlisted several OpenAI employees to do months of free work for him at Tesla, Musk's electric vehicle company. That work mainly included efforts to overhaul the company's approach to developing self-driving technology as part of the Autopilot team there in 2017. During his two days on the stand, Brockman answered questions about his personal financial ambitions, his understanding of OpenAI's structure and Musk's involvement at the company, which they co-founded with other executives in 2015.

In Musk's testimony last week, the Tesla and SpaceX CEO said that the time, money and resources he poured into OpenAI had been integral to the company's success. He repeatedly said that he helped recruit the company's top talent. Brockman said Tuesday that while Musk was helpful in convincing some employees to take the leap to join OpenAI, he was a polarizing figure for others. "Elon had a reputation of being an extremely hard driver," Brockman said. He added that "certain candidates were very attracted" by Musk's involvement at OpenAI, and that "certain candidates were very turned off." Musk testified last week that a former OpenAI researcher named Andrej Karpathy joined Tesla, but only after he had planned to leave the startup already. Brockman said that Musk, after he hired Karpathy, approached him with "an apology and a confession," about the hire, and that neither Musk nor Karpathy had told him the researcher planned to leave OpenAI before that. Musk was generally not very available for meetings and conversations, Brockman said, so he relied on employees, including Sam Teller and former OpenAI board member Shivon Zilis, as proxies.
Brockman testified that open sourcing OpenAI's technology was "not a topic of conversation" during Musk's time with the nonprofit, despite Musk's claims that it was supposed to be central to the organization. He also described tense 2017 negotiations over a possible for-profit arm, saying Musk became angry when equity stakes were discussed. "He said Musk declined the proposal during an in-person meeting, then tore a painting of a Tesla Model 3 car off the wall, and began storming out of the room," reports CNBC. He also demanded to know when the cofounders would leave the company.

Brockman further said Musk wanted control of OpenAI because he disliked situations where he lacked control, citing Zip2 and SolarCity as examples Musk had raised. He also testified that Musk partly wanted control to help fund his broader SpaceX ambition of building a "city on Mars."

CNBC notes the trial will resume at 8:30 a.m. PT on Wednesday, with Shivon Zilis expected to testify. She is the mother of four of Musk's children and a former OpenAI board member.

Recap:
OpenAI President Discloses His Stake In the Company Is Worth $30 Billion (Day Five)
Musk Concludes Testimony At OpenAI Trial (Day Four)
Elon Musk Says OpenAI Betrayed Him, Clashes With Company's Attorney (Day Three)
Musk Testifies OpenAI Was Created As Nonprofit To Counter Google (Day Two)
Elon Musk and OpenAI CEO Sam Altman Head To Court (Day One)
The Courts

Apple Agrees To Pay iPhone Owners $250 Million For Not Delivering AI Siri 37

Apple has agreed to a proposed $250 million settlement over claims that it misled iPhone buyers about the availability of Apple Intelligence and its upgraded Siri features. The settlement would cover U.S. buyers of the iPhone 16 lineup and iPhone 15 Pro models between June 10, 2024, and March 29, 2025. The Verge reports: The settlement will resolve a 2025 lawsuit alleging that Apple's advertisements created a "clear and reasonable consumer expectation" that Apple Intelligence features would be available with the launch of the iPhone 16. The lawsuit claimed Apple's products "offered a significantly limited or entirely absent version of Apple Intelligence, misleading consumers about its actual utility and performance."

Apple brought certain AI-powered features to the iPhone 16 weeks after its release, and delayed the launch of its more personalized Siri, which is now expected to arrive later this year. Last April, the National Advertising Division recommended that Apple "discontinue or modify" its "available now" claim for Apple Intelligence. Apple also pulled an iPhone 16 ad showing actor Bella Ramsey using the AI-upgraded Siri.
Bug

US Government Warns of Severe CopyFail Bug Affecting Major Versions of Linux (techcrunch.com) 63

An anonymous reader quotes a report from TechCrunch: A severe security vulnerability affecting almost every version of the Linux operating system has caught defenders off-guard and scrambling to patch after security researchers publicly released exploit code that allows attackers to take complete control of vulnerable systems. The U.S. government says the bug, dubbed "CopyFail," is now being exploited in the wild, meaning it's being actively used in malicious hacking campaigns. [...] Given the risk to the federal enterprise network, U.S. cybersecurity agency CISA has ordered all civilian federal agencies to patch any affected systems by May 15.
The Courts

OpenAI President Discloses His Stake In the Company Is Worth $30 Billion (apnews.com) 157

OpenAI president Greg Brockman's testimony dominated the fifth day of the trial for Elon Musk's lawsuit against the AI company. Brockman took the witness stand on Monday, disclosing that his stake in OpenAI is worth nearly $30 billion, despite not personally investing money in OpenAI. The judge also declined to admit a pretrial text in which Musk allegedly warned Brockman that he and Altman would become "the most hated men in America." From a report: Brockman's disclosure would put him on the Forbes list of the world's richest people, with wealth comparable to Melinda French Gates. [...] Late Sunday, OpenAI lawyers tried to admit as evidence a text message Musk sent to Brockman two days before the trial began. According to a court filing -- which did not include the actual text exchange -- Musk sent a message to Brockman to gauge interest in settlement.

When Brockman replied that both sides should drop their respective claims, Musk shot back, according to the filing, "By the end of this week, you and Sam will be the most hated men in America. If you insist, so it will be." Judge Yvonne Gonzalez Rogers, who is overseeing the trial, did not admit the text exchange as evidence.
Brockman acknowledged that he had promised to personally donate $100,000 to OpenAI's charity but never did. In explaining the delay, Brockman put the onus on Altman: "I asked Sam when I should donate this, and he said he would let me know," reports Business Insider.

The first witness to testify on Monday was Stuart Russell, an artificial intelligence expert who teaches computer science at the University of California, Berkeley. "The most memorable part of Russell's testimony was when he talked about how much Musk's legal team paid him," notes Business Insider. "He received an eye-popping $5,000 per hour for 40 hours of preparatory work. Expert witnesses in high-profile cases typically make between $500 to $1,000 per hour."

Recap:
Musk Concludes Testimony At OpenAI Trial (Day Four)
Elon Musk Says OpenAI Betrayed Him, Clashes With Company's Attorney (Day Three)
Musk Testifies OpenAI Was Created As Nonprofit To Counter Google (Day Two)
Elon Musk and OpenAI CEO Sam Altman Head To Court (Day One)
AI

White House Considers Vetting AI Models Before They Are Released 126

The Trump administration is reportedly considering an executive order to create a working group that could review advanced AI models before public release. The shift follows concerns over Anthropic's powerful Mythos model and its cyber capabilities, with officials weighing whether the government should get early access to frontier models without necessarily blocking their release. The New York Times reports: In meetings last week, White House officials told executives from Anthropic, Google and OpenAI about some of those plans, people briefed on the conversations said. The working group is likely to consider a number of oversight approaches, officials said. But a review process could be similar to one being developed in Britain, which has assigned several government bodies to ensure that A.I. models meet certain safety standards, people in the tech industry and the administration said.

The discussions signal a stark reversal in the Trump administration's approach to A.I. Since returning to office last year, Mr. Trump has been a major booster of the technology, which he has said is vital to winning the geopolitical contest against China. Among other moves, he swiftly rolled back a Biden administration regulatory process that asked A.I. developers to perform safety evaluations and report on A.I. models with potential military applications. "We're going to make this industry absolutely the top, because right now it's a beautiful baby that's born," Mr. Trump said of A.I. at an event in July. "We have to grow that baby and let that baby thrive. We can't stop it. We can't stop it with politics. We can't stop it with foolish rules and even stupid rules." Mr. Trump left room for some rules, but he added that "they have to be more brilliant than even the technology itself."

The White House wants to avoid any political repercussions if a devastating A.I.-enabled cyberattack were to occur, people in the tech industry and the administration said. The administration is also evaluating whether new A.I. models could yield cyber-capabilities that could be useful to the Pentagon and U.S. intelligence agencies, they said. To get ahead of models like Mythos, some officials are pushing for a review system that would give the government first access to A.I. models, but that would not block their release, people briefed on the talks said.
United Kingdom

16% of Parents Help Their Children Bypass Online Age Checks, Study Finds. One 15-Year-Old Just Uses a Fake Moustache (independent.co.uk) 165

The Independent reports that "more than a third of children in the UK have found a way around age verification measures" for social media sites and other online platforms. And new research from online safety organisation Internet Matters "suggests one in six parents have helped their child to get past age verification checks, with children reporting 'tricking' platforms into thinking they are older." Parents also said they had caught their children drawing on facial hair in a bid to evade the technology. One mother said: "I did catch my son using an eyebrow pencil to draw a moustache on his face, and it verified him as 15 years old"... From a sample of 1,000 UK children, 46% said they believed age checks are easy to bypass, while 32% admitted to having done so.
49% of the children surveyed said they'd still encountered harmful content, according to the online safety activists. The group called the figure "unacceptable," and complained that age verification measures "are often ineffective in practice or easy to bypass."
Government

Roblox Blames Age-Verification Rollout for Lowered Growth. Stock Tumbles 22% (qz.com) 37

Age verification became mandatory for chat access on Roblox in January — and Friday morning Quartz reported it's apparently impacted the company's financials: Roblox cut its full-year 2026 bookings forecast by roughly $900 million at the midpoint on Thursday, blaming stronger-than-expected headwinds from its mandatory age-verification rollout on an audience that skews heavily toward children and teenagers. Full-year 2026 bookings are now projected at $7.33 billion to $7.60 billion, a range that sits roughly $900 million below the prior guidance of $8.28 billion to $8.55 billion; analysts had expected $8.38 billion, according to Yahoo Finance. Roblox stock fell almost 22% in premarket trading....

Daily active users rose 35% year over year to 132 million, while hours engaged climbed 43% to 31 billion hours... Both figures nevertheless fell below forecasts of 143.8 million and 33.68 billion, respectively, according to Yahoo Finance... Users who have not completed age checks have faced restricted communication features, and the process has weighed on the platform's ability to bring in new users. Russia's blocking of the platform, which took effect in December 2025, added further drag on user growth, according to Yahoo Finance. As of the end of the first quarter, 51% of global daily active users had completed age verification, with 65% of U.S. users having done so, Roblox said....

The safety push has come with legal costs. Roblox accrued $57 million in the first quarter for settlements and settlement proposals with certain states over youth-related consumer protection and digital safety matters, with payments structured over multiple years, the company said.

Roblox acknowledged in a letter to shareholders that "our aggressive push to enhance safety lowers our expectations for topline growth in 2026." But they argued that it also "makes our platform fundamentally better and amplifies the long-term growth potential of Roblox through more effective content targeting, tailored communication experiences, and improved community sentiment."
AI

South Africa's Draft AI Policy Withdrawn Due to 'Fictitious' AI-Generated Citations (timeslive.co.za) 10

An official in South Africa withdrew a draft of the country's national AI policy, reports a local newspaper, "after it was found the draft policy was compiled using AI, which cited academic articles that were 'fictitious'." Earlier this month, minister in the Presidency Khumbudzo Ntshavheni announced cabinet had approved the draft policy for public comment. [Ntshavheni] said the policy seeks to strengthen government's ability to regulate and adopt AI responsibly, while fostering innovation, job creation, and skills access.
The article includes this quote from the country's minister of the communications and digital technologies department: "This unacceptable lapse proves why vigilant human oversight over the use of artificial intelligence is critical."

Thanks to Slashdot reader Tokolosh for sharing the article.
The Internet

Smuggled Starlink Terminals are Beating Iran's Internet Blackout (bbc.com) 134

An anonymous reader shared this report from the BBC: "If even one extra person is able to access the internet, I think it's successful and it's worth it," says Sahand. The Iranian man is visibly anxious, speaking to the BBC outside Iran, as he carefully explains how he is part of a clandestine network smuggling satellite internet technology — which is illegal in Iran — into the country. Sahand, whose name we have changed, fears for family members and other contacts inside the country. "If I was identified by the Iranian regime, they might make those I'm in touch with in Iran pay the price," he says.

For more than two months, Iran has been in digital darkness as the government maintains one of the longest-running national internet shutdowns ever recorded worldwide... Sahand says he has sent a dozen [Starlink terminals] to Iran since January and "we are actively looking for other ways to smuggle in more". The human rights organisation Witness estimated in January that there are at least 50,000 Starlink terminals in Iran. Activists say the number is likely to have risen...

Last year, the Iranian government passed legislation that made using, buying or selling Starlink devices punishable by up to two years in prison. The jail term for distributing or importing more than 10 devices can be up to 10 years. State-affiliated media has reported multiple cases of people being arrested for selling and buying Starlink terminals, including four people — two of them foreign nationals — arrested last month for "importing satellite internet equipment".

"The BBC contacted SpaceX for more details about the use of Starlink in the country but did not receive a response."
Government

Pentagon Reaches Agreements With Top AI Companies, But Not Anthropic 21

The Pentagon says it has reached deals with seven AI companies -- SpaceX, OpenAI, Google, Nvidia, Reflection AI, Microsoft, and AWS -- to deploy their tools on classified Defense Department networks. The odd one out is Anthropic, which remains excluded after being labeled a supply-chain risk amid a dispute over military-use guardrails. Reuters reports: SpaceX, OpenAI, Google, Nvidia, Reflection, Microsoft, and Amazon Web Services (AWS), several of which already work with the Pentagon, will be integrated into its secret and top-secret network environments, providing more military access to their products for use on sensitive topics, the Pentagon said in a statement. The lesser-known Reflection AI, which raised $2 billion in October, is backed by 1789 Capital, a venture capital firm in which Donald Trump Jr. is a partner and investor.

Since the Pentagon deemed Anthropic's products a "supply-chain risk" in March and the two sides became embroiled in a lawsuit, the military has expressed increasing interest in AI startups. Since the blow-up, newer AI entrants have said the military has sped up the process of incorporating them onto secret and top-secret data levels to less than three months. The process previously took 18 months or longer.

By expanding AI services offered to troops, who use it for planning, logistics, targeting and in other ways to streamline huge operations and perform more quickly, the Pentagon said in its statement it will avoid "vendor lock," a likely nod to its overdependence on Anthropic or other dominant service providers. [...] AI has become increasingly important for the U.S. military. The Pentagon's main AI platform, GenAI.mil, has been used by over 1.3 million Defense Department personnel, the agency noted in its release, after five months of operation.
Further reading: Google and Pentagon Reportedly Agree On Deal For 'Any Lawful' Use of AI

Slashdot Top Deals