United Kingdom

UK Secretly Allows Facial Recognition Scans of Passport, Immigration Databases (theregister.com) 25

An anonymous reader shares a report: Privacy groups report a surge in UK police facial recognition scans against databases secretly stocked with passport photos, conducted without parliamentary oversight. Big Brother Watch says the UK government has allowed images from the country's passport and immigration databases to be made available to facial recognition systems, without informing the public or parliament.

The group claims the passport database contains around 58 million headshots of Brits, plus a further 92 million made available from sources such as the immigration database, visa applications, and more. By way of comparison, the Police National Database contains circa 20 million photos of those who have been arrested by, or are at least of interest to, the police.

Cloud

Amazon's Cloud Business Giving Federal Agencies Up To $1 Billion In Discounts (cnbc.com) 20

Amazon Web Services has struck a deal with the U.S. government to provide up to $1 billion in cloud service discounts through 2028. CNBC reports: The agreement is expected to speed up migration to the cloud, as well as adoption of artificial intelligence tools, the General Services Administration said. "AWS's partnership with GSA demonstrates a shared public-private commitment to enhancing America's AI leadership," the agency said in a release.

Amazon's cloud boss, Matt Garman, hailed the agreement as a "significant milestone in the large-scale digital transformation of government services." The discounts aggregated across federal agencies include credits to use AWS' cloud infrastructure, modernization programs and training services, as well as incentives for "direct partnership."
Further reading: OpenAI Offers ChatGPT To US Federal Agencies for $1 a Year
Government

Taiwan's High 20% Tariff Rate Linked To Intel Investment (notebookcheck.net) 127

EreIamJH writes: German tech newsletter Notebookcheck is reporting that the unexpectedly high 20% tariff the U.S. recently imposed on Taiwan is intended to pressure TSMC into buying a 49% minority stake in Intel -- including an IP transfer -- and into spending $400 billion in the U.S., on top of the $165 billion previously planned.
Privacy

'Facial Recognition Tech Mistook Me For Wanted Man' (bbc.co.uk) 112

Bruce66423 shares a report from the BBC: A man who is bringing a High Court challenge against the Metropolitan Police after live facial recognition technology wrongly identified him as a suspect has described it as "stop and search on steroids." Shaun Thompson, 39, was stopped by police in February last year outside London Bridge Tube station. Privacy campaign group Big Brother Watch said the judicial review, due to be heard in January, was the first legal case of its kind against the "intrusive technology." The Met, which announced last week that it would double its live facial recognition technology (LFR) deployments, said it was removing hundreds of dangerous offenders and remained confident its use is lawful. LFR maps a person's unique facial features, and matches them against faces on watch-lists. [...]

Mr Thompson said his experience of being stopped had been "intimidating" and "aggressive." "Every time I come past London Bridge, I think about that moment. Every single time." He described how he had been returning home from a shift in Croydon, south London, with the community group Street Fathers, which aims to protect young people from knife crime. As he passed a white van, he said police approached him and told him he was a wanted man. "When I asked what I was wanted for, they said, 'that's what we're here to find out'." He said officers asked him for his fingerprints, but he refused, and he was let go only after about 30 minutes, after showing them a photo of his passport.

Mr Thompson says he is bringing the legal challenge because he is worried about the impact LFR could have on others, particularly if young people are misidentified. "I want structural change. This is not the way forward. This is like living in Minority Report," he said, referring to the science fiction film where technology is used to predict crimes before they're committed. "This is not the life I know. It's stop and search on steroids. "I can only imagine the kind of damage it could do to other people if it's making mistakes with me, someone who's doing work with the community."
Bruce66423 comments: "I suspect a payout of 10,000 pounds for each false match that is acted on would probably encourage more careful use, perhaps with a second payout of 100,000 pounds if the same person is victimized again."
The Courts

Country's Strictest Ban On Election Deepfakes Struck Down By Judge (politico.com) 26

A federal judge struck down California's strict anti-deepfake election law, citing Section 230 protections rather than First Amendment concerns. Politico reports: [Judge John Mendez] also said he intended to overrule a second law, which would require labels on digitally altered campaign materials and ads, for violating the First Amendment. [...] The first law would have blocked online platforms from hosting deceptive, AI-generated content related to an election in the run-up to the vote. It came amid heightened concerns about the rapid advancement and accessibility of artificial intelligence, allowing everyday users to quickly create more realistic images and videos, and the potential political impacts. But opponents of the measures ... also argued the restrictions could infringe upon freedom of expression.

The original challenge was filed by the creator of a parody video, Christopher Kohls, on First Amendment grounds, with X later joining the case after [Elon Musk] said the measures were "designed to make computer-generated parody illegal." The satirical right-wing news website the Babylon Bee and conservative social media site Rumble also joined the suit. Mendez said the first law, penned by Democratic state Assemblymember Marc Berman, conflicted with the oft-cited Section 230 of the federal Communications Decency Act, which shields online platforms from liability for what third parties post on their sites. "They don't have anything to do with these videos that the state is objecting to," Mendez said of sites like X that host deepfakes.

But the judge did not address the First Amendment claims made by Kohls, saying it was not necessary in order to strike down the law on Section 230 grounds. "I'm simply not reaching that issue," Mendez told the plaintiffs' attorneys. [...] "I think the statute just fails miserably in accomplishing what it would like to do," Mendez said, adding he would write an official opinion on that law in the coming weeks. Laws restricting speech have to pass a strict test, including whether there are less restrictive ways of accomplishing the state's goals. Mendez questioned whether approaches that were less likely to chill free speech would be better. "It's become a censorship law and there is no way that is going to survive," Mendez added.

Government

Coding Error Blamed After Parts of Constitution Disappear From US Website (arstechnica.com) 71

An anonymous reader quotes a report from Ars Technica: The Library of Congress today said a coding error resulted in the deletion of parts of the US Constitution from Congress' website and promised a fix after many Internet users pointed out the missing sections this morning. The missing portions of the Constitution were restored to one part of the website a few hours after the Library of Congress statement and reappeared on a different part of the website another hour or so later. The Constitution Annotated website carried a notice saying it "is currently experiencing data issues. We are working to resolve this issue and regret the inconvenience."

"Upkeep of Constitution Annotated and other digital resources is a critical part of the Library's mission, and we appreciate the feedback that alerted us to the error and allowed us to fix it," the Library of Congress said. We asked the Library of Congress for specific details on the coding error, but we received only a statement that did not include specifics. "Due to a technical error, some sections of Article 1 were temporarily missing on the Constitution Annotated website. This problem has been corrected, and the missing sections have been restored," the statement said.

The deletion happened sometime in the past few weeks, as an Internet Archive capture shows that the text was still on the site until at least July 21. The deletions were being discussed this morning on Reddit and in news articles, with people expressing suspicions based on which parts of the Constitution were missing.

The Courts

Tornado Cash Co-Founder Storm Guilty in Crypto Mixing Case 8

A Manhattan jury convicted Tornado Cash co-founder Roman Storm on Wednesday of conspiring to operate an unlicensed money-transfer business, though jurors deadlocked on charges of money laundering conspiracy and sanctions violations after three days of deliberation.

Federal prosecutors alleged Storm helped cybercriminals launder more than $1 billion through the cryptocurrency mixing platform, which launched in 2019 as a decentralized protocol designed to obscure transaction origins by pooling and redistributing funds through smart contracts.
AI

OpenAI Offers ChatGPT To US Federal Agencies for $1 a Year (openai.com) 25

OpenAI will provide ChatGPT access to US federal agencies for $1 annually through the General Services Administration's new AI marketplace that also includes Google and Anthropic as approved vendors. The nominal pricing represents the deepest discount GSA has negotiated with software providers, surpassing previous deals with Adobe and Salesforce.

OpenAI said it will not use federal worker data to train its models and agencies face no renewal requirements. The $1 rate applies only to the ChatGPT chatbot interface, not OpenAI's API for custom software development.
Privacy

Meta Eavesdropped On Period-Tracker App's Users, Jury Rules (sfgate.com) 101

A San Francisco jury ruled that Meta violated the California Invasion of Privacy Act by collecting sensitive data from users of the Flo period-tracking app without consent. "The plaintiff's lawyers who sued Meta are calling this a 'landmark' victory -- the tech company contends that the jury got it all wrong," reports SFGATE. From the report: The case goes back to 2021, when eight women sued Flo and a group of other tech companies, including Google and Facebook, now known as Meta. The stakes were extremely personal. Flo asked users about their sex lives, mental health and diets, and guided them through menstruation and pregnancy. Then, the women alleged, Flo shared pieces of that data with other companies. The claims were largely based on a 2019 Wall Street Journal story and a 2021 Federal Trade Commission investigation. Google, Flo and the analytics company Flurry, which was also part of the lawsuit, reached settlements with the plaintiffs, as is common in class action lawsuits about tech privacy. But Meta stuck it out through the entire trial and lost.

The case against Meta focused on its Facebook software development kit, which Flo added to its app and which is generally used for analytics and advertising services. The women alleged that between June 2016 and February 2019, Flo sent Facebook, through that kit, various records of "Custom App Events" -- such as a user clicking a particular button in the "wanting to get pregnant" section of the app. Their complaint also pointed to Facebook's terms for its business tools, which said the company used so-called "event data" to personalize ads and content.
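The "Custom App Events" at the heart of the complaint are, mechanically, just named events with free-form parameter dictionaries that an SDK batches and ships to the analytics backend. As a hypothetical illustration (the event and parameter names below are invented for the sketch, not Flo's actual ones), such a payload might look like:

```python
import json
import time

def build_app_event(event_name: str, params: dict) -> str:
    """Serialize a custom app event the way a typical analytics SDK
    batches them: a name, a timestamp, and free-form parameters.
    All names here are hypothetical."""
    event = {
        "_eventName": event_name,
        "_logTime": int(time.time()),
        "parameters": params,
    }
    return json.dumps(event)

# A button tap in a hypothetical pregnancy-planning section becomes a
# record like this. The privacy issue is that the free-form parameters
# can carry sensitive context (which screen, which button) along with
# the otherwise innocuous event name.
payload = build_app_event(
    "button_click",
    {"screen": "pregnancy_goals", "button_id": "wanting_to_get_pregnant"},
)
```

This is why seemingly generic analytics can leak health information: the event name alone reveals little, but the parameters identify exactly where in the app the user was acting.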

In a 2022 filing (PDF), the tech giant admitted that Flo used Facebook's kit during this period and that the app sent data connected to "App Events." But Meta denied receiving intimate information about users' health. Nonetheless, the jury ruled (PDF) against Meta. Along with the eavesdropping decision, the group determined that Flo's users had a reasonable expectation they weren't being overheard or recorded, as well as ruling that Meta didn't have consent to eavesdrop or record. The unanimous verdict was that the massive company violated the California Invasion of Privacy Act.
The jury's ruling could impact over 3.7 million U.S. users who registered between November 2016 and February 2019, with updates to be shared via email and a case website. The exact compensation from the trial or potential settlements remains uncertain.
Government

Swedish PM Under Fire For Using AI In Role 26

Sweden's Prime Minister Ulf Kristersson has come under fire after admitting that he frequently uses AI tools like ChatGPT for second opinions on political matters. The Guardian reports: ... Kristersson, whose Moderate party leads Sweden's center-right coalition government, said he used tools including ChatGPT and the French service Le Chat. His colleagues also used AI in their daily work, he said. Kristersson told the Swedish business newspaper Dagens industri: "I use it myself quite often. If for nothing else than for a second opinion. What have others done? And should we think the complete opposite? Those types of questions."

Tech experts, however, have raised concerns about politicians using AI tools in such a way, and the Aftonbladet newspaper accused Kristersson in an editorial of having "fallen for the oligarchs' AI psychosis." Kristersson's spokesperson, Tom Samuelsson, later said the prime minister did not take risks in his use of AI. "Naturally it is not security sensitive information that ends up there. It is used more as a ballpark," he said.

But Virginia Dignum, a professor of responsible artificial intelligence at Umeå University, said AI was not capable of giving a meaningful opinion on political ideas, and that it simply reflects the views of those who built it. "The more he relies on AI for simple things, the bigger the risk of an overconfidence in the system. It is a slippery slope," she told the Dagens Nyheter newspaper. "We must demand that reliability can be guaranteed. We didn't vote for ChatGPT."
The Courts

OpenAI Offers 20 Million User Chats In ChatGPT Lawsuit. NYT Wants 120 Million. (arstechnica.com) 21

An anonymous reader quotes a report from Ars Technica: OpenAI is preparing to raise what could be its final defense to stop The New York Times from digging through a spectacularly broad range of ChatGPT logs to hunt for any copyright-infringing outputs that could become the most damning evidence in the hotly watched case. In a joint letter (PDF) Thursday, both sides requested to hold a confidential settlement conference on August 7. Ars confirmed with the NYT's legal team that the conference is not about settling the case but instead was scheduled to settle one of the most disputed aspects of the case: news plaintiffs searching through millions of ChatGPT logs. That means it's possible that this week, ChatGPT users will have a much clearer understanding of whether their private chats might be accessed in the lawsuit. In the meantime, OpenAI has broken down (PDF) the "highly complex" process required to make deleted chats searchable in order to block the NYT's request for broader access.

Previously, OpenAI had vowed to stop what it deemed was the NYT's attempt to conduct "mass surveillance" of ChatGPT users. But ultimately, OpenAI lost its fight to keep news plaintiffs away from all ChatGPT logs. After that loss, OpenAI appears to have pivoted and is now doing everything in its power to limit the number of logs accessed in the case -- short of settling -- as its customers fretted over serious privacy concerns. For the most vulnerable users, the lawsuit threatened to expose ChatGPT outputs from sensitive chats that OpenAI had previously promised would be deleted. Most recently, OpenAI floated a compromise, asking the court to agree that news organizations didn't need to search all ChatGPT logs. The AI company cited the "only expert" who has so far weighed in on what could be a statistically relevant, appropriate sample size -- computer science researcher Taylor Berg-Kirkpatrick. He suggested that a sample of 20 million logs would be sufficient to determine how frequently ChatGPT users may be using the chatbot to regurgitate articles and circumvent news sites' paywalls. But the NYT and other news organizations rejected the compromise, OpenAI said in a filing (PDF) yesterday. Instead, news plaintiffs have made what OpenAI said was an "extraordinary request that OpenAI produce the individual log files of 120 million ChatGPT consumer conversations."

That's six times more data than Berg-Kirkpatrick recommended, OpenAI argued. Complying with the request threatens to "increase the scope of user privacy concerns" by delaying the outcome of the case "by months," OpenAI argued. If the request is granted, it would likely trouble many users by extending the amount of time that users' deleted chats will be stored and potentially making them vulnerable to a breach or leak. As negotiations potentially end this week, OpenAI's co-defendant, Microsoft, has picked its own fight with the NYT over its internal ChatGPT equivalent tool that could potentially push the NYT to settle the disputes over ChatGPT logs.
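The dispute over 20 million versus 120 million logs is, at bottom, a sampling question: how large a sample is needed to estimate how often a behavior like paywall circumvention occurs? The source doesn't describe Berg-Kirkpatrick's actual methodology, but as a generic illustration, the standard margin of error for an estimated proportion shrinks only with the square root of the sample size:

```python
import math

def margin_of_error(n: int, p: float, z: float = 1.96) -> float:
    """Half-width of an approximate 95% confidence interval for a
    proportion p estimated from a simple random sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

# Even if only 0.1% of conversations exhibited the behavior, a sample
# of 20 million logs pins the rate down to within roughly a thousandth
# of a percentage point. Going to 120 million narrows the interval by
# only sqrt(6), about a factor of 2.4, at six times the data exposure.
moe_20m = margin_of_error(20_000_000, p=0.001)
moe_120m = margin_of_error(120_000_000, p=0.001)
```

This square-root scaling is the usual statistical argument for why a larger sample buys diminishing returns in precision relative to its added cost (here, the privacy exposure of 100 million additional conversations).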

Privacy

AI Is Listening to Your Meetings. Watch What You Say. (msn.com) 33

AI meeting transcription software is inadvertently sharing private conversations with all meeting participants through automated summaries. The Wall Street Journal found a series of such mishaps that people confirmed on the record.

Digital marketing agency owner Tiffany Lewis discovered her "Nigerian prince" joke about a potential client was included in the summary sent to that same client. Nashville branding firm Studio Delger received meeting notes documenting their discussion about "getting sandwich ingredients from Publix" and not liking soup when their client failed to appear. Communications agency coordinator Andrea Serra found her personal frustrations about a neighborhood Whole Foods and a kitchen mishap while making sweet potato recipes included in official meeting recaps distributed to colleagues.
Privacy

Nearly 100,000 ChatGPT Conversations Were Searchable on Google (404media.co) 13

An anonymous reader shares a report: A researcher has scraped nearly 100,000 conversations from ChatGPT that users had set to share publicly and Google then indexed, creating a snapshot of all the sorts of things people are using OpenAI's chatbot for, and inadvertently exposing. 404 Media's testing has found the dataset includes everything from the sensitive to the benign: alleged texts of non-disclosure agreements, discussions of confidential contracts, people trying to use ChatGPT to understand their relationship issues, and lots of people asking ChatGPT to write LinkedIn posts.

The news follows a July 30 Fast Company article which reported "thousands" of shared ChatGPT chats were appearing in Google search results. People have since dug through some of the chats indexed by Google. The around 100,000 conversation dataset provides a better sense of the scale of the problem, and highlights some of the potential privacy risks in using any sharing features of AI tools. OpenAI did not dispute the figure of around 100,000 indexed chats when contacted for comment.
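The report doesn't detail the indexing mechanics, but shared-chat pages become scrapeable the same way any public URL does: once linked somewhere a crawler can see, a page gets indexed unless it carries a robots `noindex` directive. A minimal check for that directive (a generic sketch of crawler behavior, not OpenAI's actual markup):

```python
from html.parser import HTMLParser

class RobotsMetaParser(HTMLParser):
    """Collect the content of any <meta name="robots" ...> tags."""
    def __init__(self):
        super().__init__()
        self.directives = []

    def handle_starttag(self, tag, attrs):
        if tag != "meta":
            return
        attr_map = dict(attrs)
        if attr_map.get("name", "").lower() == "robots":
            self.directives.append((attr_map.get("content") or "").lower())

def is_indexable(html: str) -> bool:
    """True if no robots meta tag on the page forbids indexing it."""
    parser = RobotsMetaParser()
    parser.feed(html)
    return not any("noindex" in d for d in parser.directives)
```

A page that omits the tag entirely is fair game for search engines, which is how an opt-in "share" link can quietly become a publicly discoverable document.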

The Courts

Rivian Sues To Sell Its EVs Directly In Ohio (techcrunch.com) 74

Rivian has filed a federal lawsuit in Ohio to challenge a state law preventing it from selling electric vehicles directly to consumers, arguing the rule is anti-competitive and outdated. The law currently protects legacy dealerships while allowing Tesla a special carve-out, and Rivian wants similar rights to apply for a direct-sales license in the state. TechCrunch reports: "Ohio's prohibition of Rivian's direct-sales-only business model is irrational in the extreme: it reduces competition, decreases consumer choice, and drives up consumer costs and inconvenience -- all of which harm consumers -- with literally no countervailing benefit," lawyers for the company wrote in the complaint. Rivian is asking the court to allow the company to apply for a dealership license so it can sell vehicles directly. Ohio customers currently have to buy Rivian vehicles from locations in other states where direct sales are allowed. The cars are then shipped to Rivian service centers within Ohio.

Allowing Rivian to sell directly would not be treading new legal ground, the company argues in its complaint. Tesla has had a license to sell in Ohio since 2013 and can sell directly to consumers. What's stopping Rivian is a 2014 law passed by the state's legislature. That law, which Rivian says came after an intense lobbying effort by the Ohio Automobile Dealers Association (OADA), effectively gave Tesla a carve-out and blocked any future manufacturers from acquiring the necessary dealership licenses.
"Consumer choice is a bedrock principle of America's economy. Ohio's archaic prohibition against the direct-sales of vehicles is unconstitutional, irrational, and harms Ohioans by reducing competition and choice and driving up costs and inconvenience," Mike Callahan, Rivian's chief administrative officer, said in a statement.
Security

CrowdStrike Investigated 320 North Korean IT Worker Cases In the Past Year (cyberscoop.com) 11

An anonymous reader quotes a report from CyberScoop: North Korean operatives seeking and gaining technical jobs with foreign companies kept CrowdStrike busy, accounting for almost one incident response case or investigation per day in the past year, the company said in its annual threat hunting report released Monday. "We saw a 220% year-over-year increase in the last 12 months of Famous Chollima activity," Adam Meyers, senior vice president of counter adversary operations, said during a media briefing about the report. "We see them almost every day now," he said, referring to the state-sponsored group of North Korean technical specialists that has crept into the workforce of Fortune 500 companies and small-to-midsized organizations across the globe.

CrowdStrike's threat-hunting team investigated more than 320 incidents involving North Korean operatives gaining remote employment as IT workers during the one-year period ending June 30. CrowdStrike researchers found that Famous Chollima fueled that pace of activity with an assist from generative artificial intelligence tools that helped North Korean operatives maneuver workflows and evade detection during the hiring process. "They use generative AI across all stages of their operation," Meyers said. The insider threat group used generative AI to draft resumes, create false identities, build tools for job research, mask their identity during video interviews and answer questions or complete technical coding assignments, the report found. CrowdStrike said North Korean tech workers also used generative AI on the job to help with daily tasks and manage various communications across multiple jobs -- sometimes three to four -- they worked simultaneously.

Threat hunters observed other significant shifts in malicious activity during the past year, including a 27% year-over-year increase in hands-on-keyboard intrusions -- 81% of which involved no malware. Cybercrime accounted for 73% of all interactive intrusions during the one-year period. CrowdStrike continues to find and add more threat groups and clusters of activity to its matrix of cybercriminals, nation-state attackers and hacktivists. The company identified 14 new threat groups or individuals in the past six months, Meyers said. "We're up to over 265 named adversary groups that we track, and then 150 what we call malicious activity clusters," otherwise unnamed threat groups or individuals under development, Meyers said.

Piracy

How Napster Inspired a Generation of Rule-Breaking Entrepreneurs (fastcompany.com) 16

Napster's latest AI pivot "is the latest in a series of attempts by various owners to ride its brand cachet during emerging tech waves," Fast Company reported in July. In March, it sold for $207 million to Infinite Reality, an immersive digital media and e-commerce company, which also rebranded as Napster last month. Since 2020, other owners have included a British VR music startup (to create VR concerts) and two crypto-focused companies that bought it to anchor a Web3 music platform. Napster's launch follows a growing number of attempts to drive AI adoption beyond smartphones and laptops.
And tonight the Washington Post revisited the legacy of Napster's original MP3-sharing model, arguing Napster "inspired successive generations of entrepreneurs to risk flouting the law so they could grow enough to get the laws changed to suit them, including Airbnb and Uber." "Napster to me embodies the idea that it is better to seek forgiveness than permission," said Mark Lemley, director of Stanford Law School's Program in Law, Science & Technology. "It didn't work out well for Napster or for many of the others who got sued, but it worked out very well for everyone else — users, and eventually the content industry, too, which is making record profits...." [Napster co-founder Sean] Parker later advised Spotify, and Napster marketing chief Oliver Schusser is now Apple's vice president for music.

Although many users saw Napster as an extension of rock-and-roll rebellion, that was not the company's real plan. First [co-founder Shawn] Fanning's majority-owning uncle, and then venture capital firm Hummer Winblad, wanted the start-up to leverage its knowledge of individual music consumers to make lucrative deals with the labels, according to internal documents this reporter found in researching a book on Napster. They warned that if no agreement were reached and Napster failed, more decentralized pirate services would take the audience and offer the labels nothing.

But settlement talks failed. The litigation blitz also took down a Napster competitor called Scour, which a young Travis Kalanick had joined shortly after its founding. Kalanick later created Uber, dedicated to overthrowing taxi regulations.

The article concludes that "Now it is Microsoft, Meta, Apple and Google, among the largest companies in the world, bankrolling the consumption of all media.

"They, too, have absorbed Napster's lessons in realpolitik, namely to build it first and hope the regulators will either yield or catch up."
China

China's Government Pushes Real-World AI Use to Jumpstart Its Adoption (yahoo.com) 26

The Chinese government "has embarked on an all-out drive to transform the technology from a remote concept to a newfangled reality, with applications on factory floors and in hospitals and government offices..." reports the Washington Post.

"[E]xperts say Beijing is pursuing an alternative playbook in an attempt to bridge the gap" with America: "aggressively pushing for the adoption of AI across the government and private sector." DeepSeek has been put to work over the last six months on a wide variety of government tasks. Procurement documents show military hospitals in Shaanxi and Guangxi provinces specifically requesting DeepSeek to build online consultation and health record systems. Local government websites describe state organs using DeepSeek for things like diverting calls from the public and streamlining police work. DeepSeek helps "quickly discover case clues and predict crime trends," which "greatly improves the accuracy and timeliness of crime fighting," a city government in China's Inner Mongolia region explained in a February social media post. Anti-corruption investigations — long a priority for Chinese leader Xi Jinping — are another frequent DeepSeek application, in which models are deployed to comb through dry spreadsheets to find suspicious irregularities. In April, China's main anti-graft agency even included a book called "Efficiently Using DeepSeek" on its official book recommendation list...

Alfred Wu, an expert on China's public governance at the National University of Singapore, said Beijing has disseminated a "top-down" directive to local governments to use AI. This is motivated, Wu said, by a desire to improve China's AI prowess amid a fierce rivalry with Washington by providing models access to vast stores of government data.

But not everyone is convinced that China has the winning hand, even as it attempts to push AI application nationwide. For one, China's sluggish economy will impact the AI industry's ability to grow and access funding, said Scott Singer [an expert on China's AI sector at the Carnegie Endowment for International Peace, who was attending the conference]... Others point out that local governments trumpeting their usage of DeepSeek is more about signaling than real technology uptake. Shen Yang, a professor at Tsinghua University's school of artificial intelligence, said DeepSeek is not being used at scale in anti-corruption work, for example, because the cases involve sensitive information and deploying new tools in these investigations requires long and complex approval processes.

AI

America's Los Alamos Lab Is Now Investing Heavily In AI For Science (lanl.gov) 22

Established in 1943 to coordinate America's building of the first atomic bomb, the Los Alamos National Lab in New Mexico is still "one of the world's largest and most advanced scientific institutions" notes Wikipedia.

And it now has a "National Security AI Office," where senior director Jason Pruet is working to help "prepare for a future in which AI will reshape the landscape of science and security," according to the lab's science and technology magazine 1663. "This year, the Lab invested more in AI-related work than at any point in history..." Pruet: AI is starting to feel like the next great foundation for scientific progress. Big companies are spending billions on large machines, but the buy-in costs of working at the frontiers of AI are so high that no university has the exascale-class machines needed to run the latest AI models. We're at a place now where we, meaning the government, can revitalize that pact by investing in the infrastructure to study AI for the public good... Part of what we're doing with the Lab's machines, like Venado — which has 2500 GPUs — is giving universities access to that scale of computing. The scale is just completely different. A typical university might have 50 or 100 GPUs.

Right now, for example, we have partnerships with the University of California, the University of Michigan, and many other universities where researchers can tap into this infrastructure. That's something we want to expand on. Having university collaboration will be critical if the Department of Energy is going to have a comprehensive AI program at scale that is focused on national security and energy dominance...

There was a time when I wouldn't have advocated for government investment in AI at the scale we're seeing now. But the weight of the evidence has become overwhelming. Large models — "frontier models" — have shown such extraordinary capabilities with recent advances in areas as diverse as hypothesis generation, mathematics, biological design, and complex multiphysics simulations. The potential for transformative impact is too significant to ignore.

"He no longer views the technology as just a tool, but as a fundamental shift in how scientists approach problems and make discoveries," the article concludes.

"The global race humanity is now in... is about how to harness the technology's potential while mitigating its harms."

Thanks to Slashdot reader rabbitface25 — also a Los Alamos Lab science writer — for sharing his article.
Privacy

Despite Breach and Lawsuits, Tea Dating App Surges in Popularity (www.cbc.ca) 39

The women-only app Tea now "faces two class action lawsuits filed in California" in response to a recent breach, reports NPR — even as the company boasts it has more than 6.2 million users.

A spokesperson for Tea told the CBC it's "working to identify any users whose personal information was involved" in a breach of 72,000 images (including 13,000 verification photos and images of government IDs) and a later breach of 1.1 million private messages. Tea said they will be offering those users "free identity protection services." The company said it removed the ID requirement in 2023, but data that was stored before February 2024, when Tea migrated to a more secure system, was accessed in the breach... [Several sites have pointed out Tea's current privacy policy is telling users selfies are "deleted immediately."]

Tea was reportedly intended to launch in Canada on Friday, according to information previously posted on the App Store, but as of this week the launch date is now in February 2026. Tea didn't respond to CBC's questions about the apparent delay. Yet even amid the current turmoil, Tea's waitlist has ballooned to 1.5 million women, all eager to join, the company posted on Wednesday. A day later, Tea posted in its Instagram stories that it had approved "well over" 800,000 women into the app that day alone.

So, why is it so popular, despite the drama and risks?

Tea tapped into a perceived weakness of other dating apps, according to an associate health studies professor at Ontario's Western University interviewed by the CBC, who thinks users should avoid Tea, at least until its security is restored.

Tech blogger John Gruber called the incident "yet another data point for the argument that any 'private messaging' feature that doesn't use E2EE isn't actually private at all." (And later Gruber notes Tea's apparent absence at the top of the charts in Google's Play Store. "I strongly suspect that, although Google hasn't removed Tea from the Play Store, they've delisted it from discovery other than by searching for it by name or following a direct link to its listing.")

Besides anonymous discussions about specific men, Tea also allows its users to perform background and criminal record checks, according to NPR, as well as reverse image searches. But the recent breach, besides threatening the safety of its users, also "laid bare the anonymous, one-sided accusations against the men in their dating pools." The CBC points out there's a men's rights group on Reddit now urging civil lawsuits against Tea as part of a plan to get the app shut down. And "Cleveland lawyer Aaron Minc, who specializes in cases involving online defamation and harassment, told The Associated Press that his firm has received hundreds of calls from people upset about what's been posted about them on Tea."

Yet in response to Tea's latest Instagram post, "The comments were almost entirely from people asking Tea to approve them, so they could join the app."
Bug

A Luggage Service's Web Bugs Exposed the Travel Plans of Every User (wired.com) 1

An anonymous reader quotes a report from Wired: An airline leaving all of its passengers' travel records vulnerable to hackers would make an attractive target for espionage. Less obvious, but perhaps even more useful for those spies, would be access to a premium travel service that spans 10 different airlines, left its own detailed flight information accessible to data thieves, and seems to be favored by international diplomats. That's what one team of cybersecurity researchers found in the form of Airportr, a UK-based luggage service that partners with airlines to let its largely UK- and Europe-based users pay to have their bags picked up, checked, and delivered to their destination. Researchers at the firm CyberX9 found that simple bugs in Airportr's website allowed them to access virtually all of those users' personal information, including travel plans, or even gain administrator privileges that would have allowed a hacker to redirect or steal luggage in transit. Among even the small sample of user data that the researchers reviewed and shared with WIRED, they found what appear to be the personal information and travel records of multiple government officials and diplomats from the UK, Switzerland, and the US.

Airportr's CEO Randel Darby confirmed CyberX9's findings in a written statement provided to WIRED but noted that Airportr had disabled the vulnerable part of its site's backend very shortly after the researchers made the company aware of the issues last April and fixed the problems within a few days. "The data was accessed solely by the ethical hackers for the purpose of recommending improvements to Airportr's security, and our prompt response and mitigation ensured no further risk," Darby wrote in a statement. "We take our responsibilities to protect customer data very seriously." CyberX9's researchers, for their part, counter that the simplicity of the vulnerabilities they found means there's no guarantee other hackers didn't access Airportr's data first. They found that a relatively basic web vulnerability allowed them to change the password of any user to gain access to their account if they had just the user's email address -- and they were also able to brute-force guess email addresses with no rate limitations on the site. As a result, they could access data including all customers' names, phone numbers, home addresses, detailed travel plans and history, airline tickets, boarding passes and flight details, passport images, and signatures.

By gaining access to an administrator account, CyberX9's researchers say, a hacker could also have used the vulnerabilities it found to redirect luggage, steal luggage, or even cancel flights on airline websites by using Airportr's data to gain access to customer accounts on those sites. The researchers say they could also have used their access to send emails and text messages as Airportr, a potential phishing risk. Airportr tells WIRED that it has 92,000 users and claims on its website that it has handled more than 800,000 bags for customers. [...] The researchers found that they could monitor their browser's communications as they signed up for Airportr and created a new password, and then reuse an API key intercepted from those communications to instead change another user's password to anything they chose. The site also lacked a "rate limiting" security measure that would prevent automated guesses of email addresses to rapidly change the password of every user's account. And the researchers were also able to find email addresses of Airportr administrators that allowed them to take over their accounts and gain their privileges over the company's data and operations.
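The two flaws described above — a reset token that wasn't bound to the account it was issued for, and no rate limiting on account lookups — have standard mitigations. The sketch below is illustrative only (the names `RateLimiter`, `change_password`, and the token store are hypothetical, not Airportr's actual code): bind each reset token server-side to one account, and throttle repeated attempts per client.

```python
import time
from collections import defaultdict
from typing import Optional

class RateLimiter:
    """Sliding-window limiter: at most `limit` attempts per `window` seconds per key."""
    def __init__(self, limit: int, window: float):
        self.limit = limit
        self.window = window
        self.attempts = defaultdict(list)  # key -> timestamps of recent attempts

    def allow(self, key: str, now: Optional[float] = None) -> bool:
        now = time.monotonic() if now is None else now
        # Drop attempts that have aged out of the window.
        recent = [t for t in self.attempts[key] if now - t < self.window]
        self.attempts[key] = recent
        if len(recent) >= self.limit:
            return False  # over the limit: reject without recording
        recent.append(now)
        return True

# Hypothetical server-side token store: each reset token maps to the one
# account it was issued for, so it cannot be replayed against another user.
RESET_TOKENS = {"token-abc": "alice@example.com"}
limiter = RateLimiter(limit=5, window=60.0)

def change_password(token: str, email: str, new_password: str, client_ip: str) -> bool:
    """Illustrative password-change handler with both mitigations applied."""
    if not limiter.allow(client_ip):
        return False  # too many attempts from this client: slows brute-forcing
    if RESET_TOKENS.get(token) != email:
        return False  # token was not issued for this account
    # ... update the stored password hash for `email` here ...
    return True
```

With the token bound to its account, intercepting your own reset token (as the researchers did) no longer lets you reset anyone else's password, and the per-client limiter makes enumeration of email addresses impractically slow.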
"Anyone would have been able to gain or might have gained absolute super-admin access to all the operations and data of this company," says Himanshu Pathak, CyberX9's founder and CEO. "The vulnerabilities resulted in complete confidential private information exposure of all airline customers in all countries who used the service of this company, including full control over all the bookings and baggage. Because once you are the super-admin of their most sensitive systems, you have the ability to do anything."

Slashdot Top Deals