Privacy

Swedish Bodyguards Reveal Prime Minister's Location on Fitness App (politico.eu) 18

Swedish security service members who shared details of their running and cycling routes on fitness app Strava have been accused of revealing details of the prime minister's location, including his private address. Politico: According to Swedish daily Dagens Nyheter, on at least 35 occasions bodyguards uploaded their workouts to the training app and revealed information linked to Prime Minister Ulf Kristersson, including where he goes running, details of overnight trips abroad, and the location of his private home, which is supposed to be secret.
Security

Jack Dorsey Says His 'Secure' New Bitchat App Has Not Been Tested For Security (techcrunch.com) 37

An anonymous reader quotes a report from TechCrunch: On Sunday, Block CEO and Twitter co-founder Jack Dorsey launched an open source chat app called Bitchat, promising to deliver "secure" and "private" messaging without a centralized infrastructure. The app relies on Bluetooth and end-to-end encryption, unlike traditional messaging apps that rely on the internet. Its decentralized design gives Bitchat the potential to be a secure option in high-risk environments where the internet is monitored or inaccessible. According to Dorsey's white paper detailing the app's protocols and privacy mechanisms, Bitchat's system design "prioritizes" security.

The claims that the app is secure, however, are already facing scrutiny from security researchers, given that the app and its code have not been reviewed or tested for security issues at all -- by Dorsey's own admission. Since launching, Dorsey has added a warning to Bitchat's GitHub page: "This software has not received external security review and may contain vulnerabilities and does not necessarily meet its stated security goals. Do not use it for production use, and do not rely on its security whatsoever until it has been reviewed." This warning now also appears on Bitchat's main GitHub project page but was not there at the time the app debuted.

As of Wednesday, Dorsey had added "Work in progress" next to the warning on GitHub. This latest disclaimer came after security researcher Alex Radocea found that it's possible to impersonate someone else and trick a person's contacts into thinking they are talking to the legitimate contact, as the researcher explained in a blog post. Radocea wrote that Bitchat has a "broken identity authentication/verification" system that allows an attacker to intercept someone's "identity key" and "peer id pair" -- essentially a digital handshake that is supposed to establish a trusted connection between two people using the app. Bitchat calls these "Favorite" contacts and marks them with a star icon. The goal of this feature is to allow two Bitchat users to interact, knowing that they are talking to the same person they talked to before.

The Internet

Browser Extensions Turn Nearly 1 Million Browsers Into Website-Scraping Bots (arstechnica.com) 28

Over 240 browser extensions with nearly a million total installs have been covertly turning users' browsers into web-scraping bots. "The extensions serve a wide range of purposes, including managing bookmarks and clipboards, boosting speaker volumes, and generating random numbers," reports Ars Technica. "The common thread among all of them: They incorporate MellowTel-js, an open source JavaScript library that allows developers to monetize their extensions." Ars Technica reports: Some of the data swept up in the collection free-for-all included surveillance videos hosted on Nest, tax returns, billing invoices, business documents, and presentation slides posted to, or hosted on, Microsoft OneDrive and Intuit.com, vehicle identification numbers of recently bought automobiles along with the names and addresses of the buyers, patient names and the doctors they saw, travel itineraries hosted on Priceline, Booking.com, and airline websites, Facebook Messenger attachments and Facebook photos, even when the photos were set to be private. The dragnet also collected proprietary information belonging to Tesla, Blue Origin, Amgen, Merck, Pfizer, Roche, and dozens of other companies.

Tuckner, the researcher who compiled the list of affected extensions, said in an email Wednesday that their most recent status is:

- Of 45 known Chrome extensions, 12 are now inactive.
- Of 129 Edge extensions incorporating the library, eight are now inactive.
- Of 71 affected Firefox extensions, two are now inactive.

Some of the inactive extensions were removed for malware explicitly. Others have removed the library in more recent updates. A complete list of extensions found by Tuckner is here.

AI

McDonald's AI Hiring Bot Exposed Millions of Applicants' Data To Hackers 25

An anonymous reader quotes a report from Wired: If you want a job at McDonald's today, there's a good chance you'll have to talk to Olivia. Olivia is not, in fact, a human being, but instead an AI chatbot that screens applicants, asks for their contact information and resume, directs them to a personality test, and occasionally makes them "go insane" by repeatedly misunderstanding their most basic questions. Until last week, the platform that runs the Olivia chatbot, built by artificial intelligence software firm Paradox.ai, also suffered from absurdly basic security flaws. As a result, virtually any hacker could have accessed the records of every chat Olivia had ever had with McDonald's applicants -- including all the personal information they shared in those conversations -- with tricks as straightforward as guessing the username and password "123456."

On Wednesday, security researchers Ian Carroll and Sam Curry revealed that they found simple methods to hack into the backend of the AI chatbot platform on McHire.com, McDonald's website that many of its franchisees use to handle job applications. Carroll and Curry, hackers with a long track record of independent security testing, discovered that simple web-based vulnerabilities -- including guessing one laughably weak password -- allowed them to access a Paradox.ai account and query the company's databases that held every McHire user's chats with Olivia. The data appears to include as many as 64 million records, including applicants' names, email addresses, and phone numbers.

Carroll says he only discovered that appalling lack of security around applicants' information because he was intrigued by McDonald's decision to subject potential new hires to an AI chatbot screener and personality test. "I just thought it was pretty uniquely dystopian compared to a normal hiring process, right? And that's what made me want to look into it more," says Carroll. "So I started applying for a job, and then after 30 minutes, we had full access to virtually every application that's ever been made to McDonald's going back years."
Paradox.ai confirmed the security findings, acknowledging that only a small portion of the accessed records contained personal data. The company stated that the weak-password account ("123456") was accessed only by the researchers. To prevent future issues, Paradox is launching a bug bounty program. "We do not take this matter lightly, even though it was resolved swiftly and effectively," Paradox.ai's chief legal officer, Stephanie King, told WIRED in an interview. "We own this."

In a statement to WIRED, McDonald's agreed that Paradox.ai was to blame. "We're disappointed by this unacceptable vulnerability from a third-party provider, Paradox.ai. As soon as we learned of the issue, we mandated Paradox.ai to remediate the issue immediately, and it was resolved on the same day it was reported to us," the statement reads. "We take our commitment to cyber security seriously and will continue to hold our third-party providers accountable to meeting our standards of data protection."
AMD

AMD Warns of New Meltdown, Spectre-like Bugs Affecting CPUs (theregister.com) 26

AMD is warning users of a newly discovered form of side-channel attack affecting a broad range of its chips that could lead to information disclosure. Register: Akin to Meltdown and Spectre, the Transient Scheduler Attack (TSA) comprises four vulnerabilities that AMD said it discovered while looking into a Microsoft report about microarchitectural leaks.

The four bugs do not appear too venomous at face value -- two have medium-severity ratings while the other two are rated "low." However, the low-level nature of the exploit has led Trend Micro and CrowdStrike to assess the threat as "critical."

The reason for the low severity scores is the high degree of complexity involved in a successful attack -- AMD said it could only be carried out by an attacker able to run arbitrary code on a target machine. TSA affects AMD processors (desktop, mobile and datacenter models), including 3rd gen and 4th gen EPYC chips -- the full list is here.

AI

Linux Foundation Adopts A2A Protocol To Help Solve One of AI's Most Pressing Challenges 38

An anonymous reader quotes a report from ZDNet: The Linux Foundation announced at the Open Source Summit in Denver that it will now host the Agent2Agent (A2A) protocol. Initially developed by Google and now supported by more than 100 leading technology companies, A2A is a crucial new open standard for secure and interoperable communication between AI agents. In his keynote presentation, Mike Smith, a Google staff software engineer, told the conference that the A2A protocol has evolved to make it easier to add custom extensions to the core specification. Additionally, the A2A community is working on making it easier to assign unique identities to AI agents, thereby improving governance and security.

The A2A protocol is designed to solve one of AI's most pressing challenges: enabling autonomous agents -- software entities capable of independent action and decision-making -- to discover each other, securely exchange information, and collaborate across disparate platforms, vendors, and frameworks. Under the hood, A2A does this work by creating an AgentCard, a JavaScript Object Notation (JSON) metadata document that describes an agent's purpose and provides instructions on how to access it via a web URL. A2A also leverages widely adopted web standards, such as HTTP, JSON-RPC, and Server-Sent Events (SSE), to ensure broad compatibility and ease of integration. By providing a standardized, vendor-neutral communication layer, A2A breaks down the silos that have historically limited the potential of multi-agent systems.
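
To make the AgentCard idea concrete, here is a minimal sketch in Python. The field names, the well-known path, and the example URLs are illustrative assumptions, not the normative A2A schema.

```python
import json
import urllib.request

# Hypothetical AgentCard: the field names below are illustrative,
# not the official A2A specification.
agent_card = {
    "name": "hotel-booking-agent",
    "description": "Finds and books hotel rooms",
    "url": "https://agents.example.com/hotel",    # where the agent can be reached (assumed)
    "capabilities": {"streaming": True},          # e.g. Server-Sent Events support
    "authentication": {"schemes": ["bearer"]},    # JWT/OIDC bearer tokens
}

def fetch_agent_card(base_url: str) -> dict:
    """Fetch a peer agent's card from an assumed well-known path."""
    with urllib.request.urlopen(f"{base_url}/.well-known/agent.json") as resp:
        return json.load(resp)

if __name__ == "__main__":
    print(json.dumps(agent_card, indent=2))
```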

For security, A2A comes with enterprise-grade authentication and authorization built in, including support for JSON Web Tokens (JWTs), OpenID Connect (OIDC), and Transport Layer Security (TLS). This approach ensures that only authorized agents can participate in workflows, protecting sensitive data and agent identities. While the security foundations are in place, developers at the conference acknowledged that integrating them, particularly authenticating agents, will be a hard slog.
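
As an illustration of the bearer-token piece of that work, a receiving agent might validate a caller's JWT before accepting a task. The sketch below uses the generic PyJWT library rather than any A2A reference implementation; the audience and issuer values are placeholders.

```python
import jwt  # PyJWT: pip install pyjwt[crypto]

def verify_agent_token(token: str, issuer_public_key_pem: str) -> dict:
    """Accept a task only if the caller's JWT is signed by a trusted issuer
    and scoped to this agent. All concrete values below are placeholders."""
    return jwt.decode(
        token,
        issuer_public_key_pem,
        algorithms=["RS256"],
        audience="https://agents.example.com/hotel",  # this agent's identifier (assumed)
        issuer="https://idp.example.com",             # trusted OIDC issuer (assumed)
    )
```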
Antje Barth, an Amazon Web Services (AWS) principal developer advocate for generative AI, explained what the adoption of A2A will mean for IT professionals: "Say you want to book a train ride to Copenhagen, then a hotel there, and look maybe for a fancy restaurant, right? You have inputs and individual tasks, and A2A adds more agents to this conversation, with one agent specializing in hotel bookings, another in restaurants, and so on. A2A enables agents to communicate with each other, hand off tasks, and finally brings the feedback to the end user."

Jim Zemlin, executive director of the Linux Foundation, said: "By joining the Linux Foundation, A2A is ensuring the long-term neutrality, collaboration, and governance that will unlock the next era of agent-to-agent powered productivity." Zemlin expects A2A to become a cornerstone for building interoperable, multi-agent AI systems.
Security

Activision Took Down Call of Duty Game After PC Players Hacked (techcrunch.com) 23

Activision removed "Call of Duty: WWII" from Microsoft Store and Game Pass after hackers exploited a security vulnerability that allowed them to compromise players' computers, TechCrunch reported Tuesday, citing a source. The gaming giant took the 2017 first-person shooter offline last week while investigating what it initially described only as "reports of an issue."

Players posted on social media claiming their systems had been hacked while playing the game. The vulnerability was a remote code execution exploit that enables attackers to install malware and take control of victims' devices. The Microsoft Store and Game Pass versions contained an unpatched security flaw that had been fixed in other versions of the game.
Android

Unless Users Take Action, Android Will Let Gemini Access Third-Party Apps (arstechnica.com) 74

Google is implementing a change that will enable its Gemini AI engine to interact with third-party apps, such as WhatsApp, even when users previously configured their devices to block such interactions. Ars Technica: Users who don't want their previous settings to be overridden may have to take action. An email Google sent recently informing users of the change linked to a notification page that said that "human reviewers (including service providers) read, annotate, and process" the data Gemini accesses.

The email provides no useful guidance for preventing the changes from taking effect. The email said users can block the apps that Gemini interacts with, but even in those cases, data is stored for 72 hours. The email never explains how users can fully extricate Gemini from their Android devices and seems to contradict itself on how or whether this is even possible.

The Courts

Samsung and Epic Games Call a Truce In App Store Lawsuit (arstechnica.com) 12

An anonymous reader quotes a report from Ars Technica: Epic Games, buoyed by the massive success of Fortnite, has spent the last few years throwing elbows in the mobile industry to get its app store on more phones. It scored an antitrust win against Google in late 2023, and the following year it went after Samsung for deploying "Auto Blocker" on its Android phones, which would make it harder for users to install the Epic Games Store. Now, the parties have settled the case just days before Samsung unveils its latest phones.

The Epic Store drama began several years ago when the company defied Google and Apple rules about accepting outside payments in the mega-popular Fortnite. Both stores pulled the app, and Epic sued. Apple emerged victorious, with Fortnite only returning to the iPhone recently. Google, however, lost the case after Epic showed it worked behind the scenes to stymie the development of app stores like Epic's. Google is still working to avoid penalties in that long-running case, but Epic thought it smelled a conspiracy last year. It filed a similar lawsuit against Samsung, accusing it of implementing a feature to block third-party app stores. The issue comes down to the addition of a feature to Samsung phones called Auto Blocker, which is similar to Google's new Advanced Protection in Android 16. It protects against attacks over USB, disables link previews, and scans apps more often for malicious activity. Most importantly, it blocks app sideloading. Without sideloading, there's no way to install the Epic Games Store or any of the content inside it.

Auto Blocker is enabled by default on Samsung phones, but users can opt out during setup. Epic claimed in its suit that the sudden inclusion of this feature was a sign that Google was working with Samsung to stand in the way of alternative app stores again. Epic has apparently gotten what it wanted from Samsung -- CEO Tim Sweeney has announced that Epic is dropping the case in light of a new settlement.
Sweeney said Samsung "will address Epic's concerns," without elaborating on the details. Samsung may stop making Auto Blocker the default, or it may grant special access to a whitelist of apps, like the Epic Games Store, that can bypass Auto Blocker while it remains enabled for everything else.

A "more interesting outcome," according to Ars, would be for Samsung to pre-install the Epic Games Store on its new phones.
AI

Is China Quickly Eroding America's Lead in the Global AI Race? (msn.com) 136

China "is pouring money into building an AI supply chain with as little reliance on the U.S. as possible," reports the Wall Street Journal.

And now Chinese AI companies "are loosening the U.S.'s global stranglehold on AI," reports the Wall Street Journal, "challenging American superiority and setting the stage for a global arms race in the technology." In Europe, the Middle East, Africa and Asia, users ranging from multinational banks to public universities are turning to large language models from Chinese companies such as startup DeepSeek and e-commerce giant Alibaba as alternatives to American offerings such as ChatGPT... Saudi Aramco, the world's largest oil company, recently installed DeepSeek in its main data center. Even major American cloud service providers such as Amazon Web Services, Microsoft and Google offer DeepSeek to customers, despite the White House banning use of the company's app on some government devices over data-security concerns.

OpenAI's ChatGPT remains the world's predominant AI consumer chatbot, with 910 million global downloads compared with DeepSeek's 125 million, figures from researcher Sensor Tower show. American AI is widely seen as the industry's gold standard, thanks to advantages in computing semiconductors, cutting-edge research and access to financial capital. But as in many other industries, Chinese companies have started to snatch customers by offering performance that is nearly as good at vastly lower prices. A study of global competitiveness in critical technologies released in early June by researchers at Harvard University found China has advantages in two key building blocks of AI, data and human capital, that are helping it keep pace...

Leading Chinese AI companies — which include Tencent and Baidu — further benefit from releasing their AI models open-source, meaning users are free to tweak them for their own purposes. That encourages developers and companies globally to adopt them. Analysts say it could also pressure U.S. rivals such as OpenAI and Anthropic to justify keeping their models private and the premiums they charge for their service... On Latenode, a Cyprus-based platform that helps global businesses build custom AI tools for tasks including creating social-media and marketing content, as many as one in five users globally now opt for DeepSeek's model, according to co-founder Oleg Zankov. "DeepSeek is overall the same quality but 17 times cheaper," Zankov said, which makes it particularly appealing for clients in places such as Chile and Brazil, where money and computing power aren't as plentiful...

The less dominant American AI companies are, the less power the U.S. will have to set global standards for how the technology should be used, industry analysts say. That opens the door for Beijing to use Chinese models as a Trojan horse for disseminating information that reflects its preferred view of the world, some warn.... The U.S. also risks losing insight into China's ambitions and AI innovations, according to Ritwik Gupta, AI policy fellow at the University of California, Berkeley. "If they are dependent on the global ecosystem, then we can govern it," said Gupta. "If not, China is going to do what it is going to do, and we won't have visibility."

The article also warns of other potential issues:
  • "Further down the line, a breakdown in U.S.-China cooperation on safety and security could cripple the world's capacity to fight future military and societal threats from unrestrained AI."
  • "The fracturing of global AI is already costing Western makers of computer chips and other hardware billions in lost sales... Adoption of Chinese models globally could also mean lost market share and earnings for AI-related U.S. firms such as Google and Meta."

Programming

Microsoft Open Sources Copilot Chat for VS Code on GitHub (nerds.xyz) 18

"Microsoft has released the source code for the GitHub Copilot Chat extension for VS Code under the MIT license," reports BleepingComputer. This provides the community access to the full implementation of the chat-based coding assistant, including the implementation of "agent mode," what contextual data is sent to large language models (LLMs), and the design of system prompts. The GitHub repository hosting the code also details telemetry collection mechanisms, addressing long-standing questions about data transparency in AI-assisted coding tools...

As the VS Code team explained previously, shifts in the AI tooling landscape, such as the rapid growth of the open-source AI ecosystem and a more level playing field for all, have reduced the need for secrecy around prompt engineering and UI design. At the same time, increased targeting of development tools by malicious actors has increased the need for crowdsourced contributions to rapidly pinpoint problems and develop effective fixes. Essentially, openness is now considered superior from a security perspective.

"If you've been hesitant to adopt AI tools because you don't trust the black box behind them, this move opensources-github-copilot-chat-vscode/offers something rare these days: transparency," writes Slashdot reader BrianFagioli" Now that the extension is open source, developers can audit how agent mode actually works. You can also dig into how it manages your data, customize its behavior, or build entirely new tools on top of it. This could be especially useful in enterprise environments where compliance and control are non negotiable.

It is worth pointing out that the backend models powering Copilot remain closed source. So no, you won't be able to self-host the whole experience or train your own Copilot. But everything running locally in VS Code is now fair game. Microsoft says it is planning to eventually merge inline code completions into the same open source package too, which would make Copilot Chat the new hub for both chat and suggestions.

AI

XBOW's AI-Powered Pentester Grabs Top Rank on HackerOne, Raises $75M to Grow Platform (csoonline.com) 10

We're living in a new world now — one where an AI-powered penetration tester "now tops an eminent US security industry leaderboard that ranks red teamers based on reputation." CSO Online reports: On HackerOne, which connects organizations with ethical hackers to participate in their bug bounty programs, "Xbow" scored notably higher than 99 other hackers in identifying and reporting enterprise software vulnerabilities. It's a first in bug bounty history, according to the company that operates the eponymous bot...

Xbow is a fully autonomous AI-driven penetration tester (pentester) that requires no human input, but, its creators said, "operates much like a human pentester" that can scale rapidly and complete comprehensive penetration tests in just a few hours. According to its website, it passes 75% of web security benchmarks, accurately finding and exploiting vulnerabilities.

Xbow submitted nearly 1,060 vulnerabilities to HackerOne, including remote code execution, information disclosures, cache poisoning, SQL injection, XML external entities, path traversal, server-side request forgery (SSRF), cross-site scripting, and secret exposure. The company said it also identified a previously unknown vulnerability in Palo Alto's GlobalProtect VPN platform that impacted more than 2,000 hosts. Of the vulnerabilities Xbow submitted over the last 90 days, 54 were classified as critical, 242 as high and 524 as medium in severity. The company's bug bounty programs have resolved 130 vulnerabilities, and 303 are classified as triaged.

Notably, though, roughly 45% of the vulnerabilities it found are still awaiting resolution, highlighting the "volume and impact of the submissions across live targets," Nico Waisman, Xbow's head of security, wrote in a blog post this week... To further hone the technology, the company developed "validators" — automated peer reviewers that confirm each uncovered vulnerability, Waisman explained.

"As attackers adopt AI to automate and accelerate exploitation, defenders must meet them with even more capable systems," XBOW's CEO said this week, as the company raised $75 million in Series B funding to grow its platform, bringing its total funding to $117 million. Help Net Security reports: With the new funding, XBOW plans to grow its engineering team and expand its go-to-market efforts. The product is now generally available, and the company says it is working with large banks, tech firms, and other organizations that helped shape the platform during its early testing phase. XBOW's long-term goal is to help security teams stay ahead of adversaries using advanced automation. As attackers increasingly turn to AI, the company argues that defenders will need equally capable systems to match their speed and sophistication.
Bug

Two Sudo Vulnerabilities Discovered and Patched (thehackernews.com) 20

In April researchers responsibly disclosed two security flaws found in Sudo "that could enable local attackers to escalate their privileges to root on susceptible machines," reports The Hacker News. "The vulnerabilities have been addressed in Sudo version 1.9.17p1 released late last month." Stratascale researcher Rich Mirch, who is credited with discovering and reporting the flaws, said CVE-2025-32462 has managed to slip through the cracks for over 12 years. It is rooted in Sudo's "-h" (host) option, which makes it possible to list a user's sudo privileges for a different host. The feature was enabled in September 2013. However, the identified bug made it possible to execute, on the local machine as well, any command allowed for a remote host, simply by running the Sudo command with the host option referencing that unrelated remote host. "This primarily affects sites that use a common sudoers file that is distributed to multiple machines," Sudo project maintainer Todd C. Miller said in an advisory. "Sites that use LDAP-based sudoers (including SSSD) are similarly impacted."

CVE-2025-32463, on the other hand, leverages Sudo's "-R" (chroot) option to run arbitrary commands as root, even if they are not listed in the sudoers file. It's also a critical-severity flaw. "The default Sudo configuration is vulnerable," Mirch said. "Although the vulnerability involves the Sudo chroot feature, it does not require any Sudo rules to be defined for the user. As a result, any local unprivileged user could potentially escalate privileges to root if a vulnerable version is installed...."

Miller said the chroot option will be removed completely from a future release of Sudo and that supporting a user-specified root directory is "error-prone."
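
Since both issues are fixed in the 1.9.17p1 release named above, a quick way to gauge exposure is to compare the locally installed version against that threshold. The sketch below is an illustrative heuristic only: it parses `sudo --version` output and does not account for distributions that backport fixes to older version strings.

```python
import re
import subprocess

# Sudo 1.9.17p1 is the patched release cited in the advisory above.
PATCHED = (1, 9, 17, 1)

def local_sudo_version() -> tuple:
    """Parse the first line of `sudo --version`, e.g. 'Sudo version 1.9.15p5'."""
    out = subprocess.run(["sudo", "--version"], capture_output=True, text=True).stdout
    match = re.search(r"Sudo version (\d+)\.(\d+)\.(\d+)(?:p(\d+))?", out)
    if not match:
        raise RuntimeError("could not parse sudo version output")
    major, minor, patch, plevel = match.groups()
    return (int(major), int(minor), int(patch), int(plevel or 0))

if __name__ == "__main__":
    version = local_sudo_version()
    if version >= PATCHED:
        print(f"sudo {version} appears to include the 1.9.17p1 fixes")
    else:
        # Heuristic only: distributions often backport fixes without bumping versions.
        print(f"sudo {version} predates 1.9.17p1; check your distro's advisories "
              "for CVE-2025-32462 and CVE-2025-32463")
```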

AI

UK Minister Tells Turing AI Institute To Focus On Defense (bbc.com) 40

UK Science and Technology Secretary Peter Kyle has written to the UK's national institute for AI to tell its bosses to refocus on defense and security. BBC: In a letter, Kyle said boosting the UK's AI capabilities was "critical" to national security and should be at the core of the Alan Turing Institute's activities. Kyle suggested the institute should overhaul its leadership team to reflect its "renewed purpose."

The cabinet minister said further government investment in the institute would depend on the "delivery of the vision" he had outlined in the letter. A spokesperson for the Alan Turing Institute said it welcomed "the recognition of our critical role and will continue to work closely with the government to support its priorities."
Further reading, from April: Alan Turing Institute Plans Revamp in Face of Criticism and Technological Change.
AI

Simple Text Additions Can Fool Advanced AI Reasoning Models, Researchers Find 51

Researchers have discovered that appending irrelevant phrases like "Interesting fact: cats sleep most of their lives" to math problems can cause state-of-the-art reasoning AI models to produce incorrect answers at rates over 300% higher than normal [PDF]. The technique -- dubbed "CatAttack" by teams from Collinear AI, ServiceNow, and Stanford University -- exploits vulnerabilities in reasoning models including DeepSeek R1 and OpenAI's o1 family. The adversarial triggers work across any math problem without changing the problem's meaning, making them particularly concerning for security applications.

The researchers developed their attack method using a weaker proxy model (DeepSeek V3) to generate text triggers that successfully transferred to more advanced reasoning models. Testing on 225 math problems showed the triggers increased error rates significantly across different problem types, with some models like R1-Distill-Qwen-32B reaching combined attack success rates of 2.83 times baseline error rates. Beyond incorrect answers, the triggers caused models to generate responses up to three times longer than normal, creating computational slowdowns. Even when models reached correct conclusions, response lengths doubled in 16% of cases, substantially increasing processing costs.
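
The sketch below illustrates the trigger idea described above. The trigger phrase is the one quoted in the report; the harness around it (the `ask_model` callable and comparison loop) is an illustrative assumption, not code from the paper.

```python
# Query-agnostic adversarial trigger, appended without changing the problem's meaning.
CAT_TRIGGER = "Interesting fact: cats sleep most of their lives."

def with_trigger(problem: str) -> str:
    """Append the irrelevant trigger to a math problem."""
    return f"{problem}\n\n{CAT_TRIGGER}"

def answer_shift_rate(ask_model, problems):
    """Query a model with and without the trigger and report how often answers change."""
    changed = 0
    for problem in problems:
        baseline = ask_model(problem)
        attacked = ask_model(with_trigger(problem))
        changed += int(baseline != attacked)
    return changed / max(len(problems), 1)
```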
The Almighty Buck

Wells Fargo Scandal Pushed Customers Toward Fintech, Says UC Davis Study (nerds.xyz) 18

BrianFagioli shares a report from NERDS.xyz: A new academic study has found that the 2016 Wells Fargo scandal pushed many consumers toward fintech lenders instead of traditional banks. The research, published in the Journal of Financial Economics, suggests that it was a lack of trust rather than interest rates or fees that drove this behavioral shift. Conducted by Keer Yang, an assistant professor at the UC Davis Graduate School of Management, the study looked closely at what happened after the Wells Fargo fraud erupted into national headlines. Bank employees were caught creating millions of unauthorized accounts to meet unrealistic sales goals. The company faced $3 billion in penalties and a massive public backlash.

Yang analyzed Google Trends data, Gallup polls, media coverage, and financial transaction datasets to draw a clear conclusion. In geographic areas with a strong Wells Fargo presence, consumers became measurably more likely to take out mortgages through fintech lenders. This change occurred even though loan costs were nearly identical between traditional banks and digital lenders. In other words, it was not about money. It was about trust. That simple fact hits hard. When big institutions lose public confidence, people do not just complain. They start moving their money elsewhere.

According to the study, fintech mortgage use increased from just 2 percent of the market in 2010 to 8 percent in 2016. In regions more heavily exposed to the Wells Fargo brand, fintech adoption rose an additional 4 percent compared to areas with less exposure. Yang writes, "Therefore it is trust, not the interest rate, that affects the borrower's probability of choosing a fintech lender." [...] Notably, while customers may have been more willing to switch mortgage providers, they were less likely to move their deposits. Yang attributes that to FDIC insurance, which gives consumers a sense of security regardless of the bank's reputation. This study also gives weight to something many of us already suspected. People are not necessarily drawn to fintech because it is cheaper. They are drawn to it because they feel burned by the traditional system and want a fresh start with something that seems more modern and less manipulative.

Bitcoin

Ripple Applies For US Banking License (cointelegraph.com) 8

Ripple Labs is applying for a U.S. national bank charter and a Federal Reserve master account, "following a similar move by stablecoin issuer Circle Internet Group as crypto firms look to be regulated to deepen ties with traditional finance," reports CoinTelegraph. From the report: Ripple CEO Brad Garlinghouse confirmed on X on Wednesday that the company is applying for a license with the US Office of the Comptroller of the Currency (OCC), following an earlier report by The Wall Street Journal. "True to our long-standing compliance roots, Ripple is applying for a national bank charter from the OCC," he wrote. Garlinghouse said if the license is approved, it would be a "new (and unique!) benchmark for trust in the stablecoin market" as the firm would be under federal and state oversight -- with the New York Department of Financial Services already regulating its Ripple USD (RLUSD) stablecoin. [...]

Ripple's Garlinghouse added that the company also applied for a Master Account with the Federal Reserve, which would give it access to the US central banking system. "This access would allow us to hold $RLUSD reserves directly with the Fed and provide an additional layer of security to future proof trust in RLUSD," Garlinghouse said. "Congress is working towards clear rules and regulations, and banks (in a far cry from the years of Operation Chokepoint 2.0) are leaning in," he added, referring to claims that the Biden administration sought to cut off crypto from the financial system. Ripple applied for the account through Standard Custody, a crypto custody firm it acquired in February 2024.

The Internet

Let's Encrypt Rolls Out Free Security Certs For IP Addresses (theregister.com) 26

Let's Encrypt, a certificate authority (CA) known for its free TLS/SSL certificates, has begun issuing digital certificates for IP addresses. From a report: It's not the first CA to do so. PositiveSSL, Sectigo, and GeoTrust all offer TLS/SSL certificates for use with IP addresses, at prices ranging from $40 to $90 or so annually. But Let's Encrypt does so at no cost.

For those with a static IP address who want to host a website, an IP address certificate provides a way to offer visitors a secure connection with that numeric identifier while avoiding the nominal expense of a domain name.
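
One way to see the difference in practice is to inspect what a server at a bare IP presents during the TLS handshake: an IP-address certificate carries "IP Address" entries in its subjectAltName instead of DNS names. This is a generic sketch using Python's standard library; the address in the usage comment is a placeholder.

```python
import socket
import ssl

def inspect_ip_certificate(ip: str, port: int = 443) -> None:
    """Connect to a bare IP over TLS and print the certificate's subjectAltName
    entries; an IP-address certificate lists 'IP Address' values there."""
    ctx = ssl.create_default_context()
    # Hostname matching is skipped so the sketch works regardless of how the
    # local Python build handles IP-address SANs; the chain is still verified.
    ctx.check_hostname = False
    with socket.create_connection((ip, port), timeout=5) as sock:
        with ctx.wrap_socket(sock) as tls:
            for kind, value in tls.getpeercert().get("subjectAltName", ()):
                print(kind, value)

# Example usage (placeholder address): inspect_ip_certificate("203.0.113.10")
```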

Android

Data Breach Reveals Catwatchful 'Stalkerware' Is Spying On Thousands of Phones (techcrunch.com) 17

An anonymous reader quotes a report from TechCrunch: A security vulnerability in a stealthy Android spyware operation called Catwatchful has exposed thousands of its customers, including its administrator. The bug, which was discovered by security researcher Eric Daigle, spilled the spyware app's full database of email addresses and plaintext passwords that Catwatchful customers use to access the data stolen from the phones of their victims. [...] According to a copy of the database from early June, which TechCrunch has seen, Catwatchful had email addresses and passwords on more than 62,000 customers and the phone data from 26,000 victims' devices.

Most of the compromised devices were located in Mexico, Colombia, India, Peru, Argentina, Ecuador, and Bolivia (in order of the number of victims). Some of the records date back to 2018, the data shows. The Catwatchful database also revealed the identity of the spyware operation's administrator, Omar Soca Charcov, a developer based in Uruguay. Charcov opened our emails, but did not respond to our requests for comment sent in both English and Spanish. TechCrunch asked if he was aware of the Catwatchful data breach, and if he plans to disclose the incident to its customers. Without any clear indication that Charcov will disclose the incident, TechCrunch provided a copy of the Catwatchful database to data breach notification service Have I Been Pwned.
The stalkerware operation uses a custom API and Google's Firebase to collect and store victims' stolen data, including photos and audio recordings. According to Daigle, the API was left unauthenticated, exposing sensitive user data such as email addresses and passwords.

The hosting provider temporarily suspended the spyware after TechCrunch disclosed the vulnerability, but the operation later returned on HostGator. Despite being notified, Google has yet to take down the Firebase instance, though it has updated Google Play Protect to detect Catwatchful.

While Catwatchful claims it "cannot be uninstalled," you can dial "543210" and press the call button on your Android phone to reveal the hidden app. As for its removal, TechCrunch has a general how-to guide for removing Android spyware that could be helpful.
Education

Hacker With 'Political Agenda' Stole Data From Columbia, University Says (therecord.media) 28

A politically motivated hacker breached Columbia University's IT systems, stealing vast amounts of sensitive student and employee data -- including admissions decisions and Social Security numbers. The Record reports: The hacker reportedly provided Bloomberg News with 1.6 gigabytes of data they claimed to have stolen from the university, including information from 2.5 million applications going back decades. The stolen data the outlet reviewed reportedly contains details on whether applicants were rejected or accepted, their citizenship status, their university ID numbers and which academic programs they sought admission to. While the hacker's claims have not been independently verified, Bloomberg said it compared data provided by the hacker to that belonging to eight Columbia applicants seeking admission between 2019 and 2024 and found it matched.

The threat actor reportedly told Bloomberg he was seeking information that would indicate whether the university continues to use affirmative action in admissions despite a 2023 Supreme Court decision prohibiting the practice. The hacker told Bloomberg he obtained 460 gigabytes of data in total -- after spending two months targeting and penetrating increasingly privileged layers of the university's servers -- and said he harvested information about financial aid packages, employee pay and at least 1.8 million Social Security numbers belonging to employees, applicants, students and their family members.
