Android

Murena Released a De-Googled Version of the Pixel Tablet (theverge.com) 40

Murena has launched the Murena Pixel Tablet, a de-Googled version of the Pixel Tablet that removes Google's apps and services to enhance user privacy. Priced at $549, it offers /e/OS, an alternative app store, and privacy-focused productivity tools, but lacks Google's speaker dock and direct access to the Play Store. The Verge reports: First announced last December, the Murena Pixel Tablet is available now through the company's online store for $549. That's a steep premium given Google currently sells the same 128GB version of the Pixel Tablet for $399, or $479 as part of a bundle with the charging speaker dock that Murena isn't including. Murena's de-Googling of the Pixel Tablet includes the removal of the Google Play Store. You can still download apps through /e/OS' App Lounge, which acts as a front-end for the Play Store, allowing you to browse and download free apps anonymously, without Google knowing who you are. However, downloading paid apps requires logging in to a Google account. Google's various productivity apps aren't included, but the Murena Pixel Tablet comes with privacy-minded alternatives for messaging, email, maps, browsing the web, calendar, contacts, notes, and even voice recordings. In 2022, Murena launched its first smartphone with no Google apps, Google Play Services, or even the Google Assistant.
Security

Palo Alto Firewalls Under Attack As Miscreants Chain Flaws For Root Access (theregister.com) 28

A recently patched Palo Alto Networks vulnerability (CVE-2025-0108) is being actively exploited alongside two older flaws (CVE-2024-9474 and CVE-2025-0111), allowing attackers to gain root access to unpatched firewalls. The Register reports: This story starts with CVE-2024-9474, a 6.9-rated privilege escalation vulnerability in Palo Alto Networks PAN-OS software that allowed an OS administrator with access to the management web interface to perform actions on the firewall with root privileges. The company patched it in November 2024. Dark web intelligence services vendor Searchlight Cyber's Assetnote team investigated the patch for CVE-2024-9474 and found another authentication bypass.

Palo Alto (PAN) last week fixed that problem, CVE-2025-0108, and rated it a highest-urgency patch, as the 8.8/10 flaw addressed an access control issue in PAN-OS's web management interface that allowed an unauthenticated attacker with network access to the management web interface to bypass authentication "and invoke certain PHP scripts." Those scripts could "negatively impact integrity and confidentiality of PAN-OS."

The third flaw is CVE-2025-0111, a 7.1-rated mess also patched last week, which let authenticated attackers with network access to a PAN-OS machine's web interface read files accessible to the "nobody" user. On Tuesday, US time, Palo Alto updated its advisory for CVE-2025-0108 with news that it has observed exploit attempts chaining CVE-2025-0108 with CVE-2024-9474 and CVE-2025-0111 on unpatched and unsecured PAN-OS web management interfaces. The vendor hasn't explained how the three flaws are chained, but we understand doing so allows an attacker to escalate privileges and gain full root access to the firewall.
PAN is urging users to upgrade to fixed releases of PAN-OS versions 10.1, 10.2, 11.0, 11.1, and 11.2. A general hotfix is expected by Thursday or sooner, notes The Register.
United States

US Army Soldier Pleads Guilty To AT&T and Verizon Hacks (techcrunch.com) 21

Cameron John Wagenius pleaded guilty to hacking AT&T and Verizon and stealing a massive trove of phone records from the companies, according to court records filed on Wednesday. From a report: Wagenius, who was a U.S. Army soldier, pleaded guilty to two counts of "unlawful transfer of confidential phone records information" on an online forum and via an online communications platform.

According to a document filed by Wagenius' lawyer, he faces a maximum fine of $250,000 and prison time of up to 10 years for each of the two counts. Wagenius was arrested and indicted last year. In January, U.S. prosecutors confirmed that the charges brought against Wagenius were linked to the indictment of Connor Moucka and John Binns, two alleged hackers whom the U.S. government accused of several data breaches against cloud computing services company Snowflake, which were among the worst hacks of 2024.

Security

Hackers Planted a Steam Game With Malware To Steal Gamers' Passwords 31

Valve removed the game PirateFi from Steam after discovering it was laced with the Vidar infostealer malware, designed to steal sensitive user data such as passwords, cookies, cryptocurrency wallets, and more. TechCrunch reports: Marius Genheimer, a researcher who analyzed the malware and works at SECUINFRA Falcon Team, told TechCrunch that judging by the command and control servers associated with the malware and its configuration, "we suspect that PirateFi was just one of multiple tactics used to distribute Vidar payloads en masse." "It is highly likely that it never was a legitimate, running game that was altered after first publication," said Genheimer. In other words, PirateFi was designed to spread malware.

Genheimer and colleagues also found that PirateFi was built by modifying an existing game template called Easy Survival RPG, which bills itself as a game-making app that "gives you everything you need to develop your own singleplayer or multiplayer" game. The game maker costs between $399 and $1,099 to license. This explains how the hackers were able to ship a functioning video game carrying their malware with little effort.

According to Genheimer, the Vidar infostealing malware is capable of stealing and exfiltrating several types of data from the computers it infects, including: passwords from the web browser autofill feature, session cookies that can be used to log in as someone without needing their password, web browser history, cryptocurrency wallet details, screenshots, and two-factor codes from certain token generators, as well as other files on the person's computer.
Software

'Uber For Armed Guards' Rushes To Market 72

An anonymous reader quotes a report from Gizmodo: Protector, an app that lets you book armed goons the same way you'd call for an Uber, is having a viral moment. The app started doing the rounds on social media after consultant Nikita Bier posted about it on X. Protector lets the user book armed guards on demand. Right now it's only available in NYC and LA. According to its marketing, every guard is either "active duty or retired law enforcement and military." Every booking comes with a motorcade and users get to select the number of Escalades that'll be joining them as well as the uniforms their hired goons will wear.

Protector is currently "#7 in Travel" on Apple's App Store. It's not available for people who use Android devices. [...] The marketing for Protector, which lives on its X account, is surreal. A series of robust and barrel-chested men in ill-fitting black suits deliver their credentials to the camera while sitting in front of a black background. They're all operators. They describe careers in SWAT teams and being deployed to war zones. They show vanity shots of themselves kitted out in operator gear. All of them have a red lapel pin bearing the symbol of Protector.
If the late UnitedHealthcare CEO had used Protector, he might still be alive today, suggests Protector in its marketing materials. A video on X shows "several fantasy versions of the assassination where a Protector is on hand to prevent the assassin from killing the CEO," reports Gizmodo.

The app is a product from parent company Protector Security Solutions, which was founded by Nick Sarath, a former product designer at Meta.
Businesses

When a Lifetime Subscription Can Save You Money - and When It's Risky (msn.com) 25

Apps offering lifetime subscriptions may pose risks despite potential cost savings, according to cybersecurity experts and analysts. While some lifetime plans can pay off quickly - like dating app Bumble's $300 premium subscription that breaks even in five months - others require years of use to justify hefty upfront costs. Meditation app Waking Up charges $1,500 for lifetime access, requiring over 11 years of use to recoup the investment.
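The break-even arithmetic is easy to check yourself. A minimal sketch, where the monthly prices are illustrative assumptions inferred from the figures above rather than confirmed pricing:

```python
import math

def breakeven_months(lifetime_price: float, monthly_price: float) -> int:
    """Months of paying monthly after which the lifetime plan becomes cheaper."""
    return math.ceil(lifetime_price / monthly_price)

# Assumed monthly prices, chosen to match the article's break-even claims.
bumble = breakeven_months(300, 60)      # roughly 5 months
waking_up = breakeven_months(1500, 11)  # roughly 137 months, over 11 years
```

The same function makes it obvious why high-priced lifetime plans are a gamble: the longer the break-even horizon, the more you are betting that both the service and your interest in it will outlast it.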

Security researchers warn against lifetime subscriptions for services with high recurring costs like VPNs and cloud storage. Such providers may compromise user privacy or cut corners on infrastructure to offset losses, said Trevor Hilligoss, senior vice president at cybercrime research group SpyCloud Labs.
Technology

Chase Will Soon Block Zelle Payments To Sellers on Social Media (bleepingcomputer.com) 58

An anonymous reader shares a report: JPMorgan Chase Bank (Chase) will soon start blocking Zelle payments to social media contacts to combat a significant rise in online scams utilizing the service for fraud.

Zelle is a highly popular digital payments network that allows users to transfer money quickly and securely between bank accounts. It is also integrated into the mobile apps of many banks in the United States, allowing for almost instant transfers without requiring cash or checks but lacking one crucial feature: purchase protection.

Privacy

Nearly 10 Years After Data and Goliath, Bruce Schneier Says: Privacy's Still Screwed (theregister.com) 57

Ten years after publishing his influential book on data privacy, security expert Bruce Schneier warns that surveillance has only intensified, with both government agencies and corporations collecting more personal information than ever before. "Nothing has changed since 2015," Schneier told The Register in an interview. "The NSA and their counterparts around the world are still engaging in bulk surveillance to the extent of their abilities."

The widespread adoption of cloud services, Internet-of-Things devices, and smartphones has made it nearly impossible for individuals to protect their privacy, said Schneier. Even Apple, which markets itself as privacy-focused, faces limitations when its Chinese business interests are at stake. While some regulation has emerged, including Europe's General Data Protection Regulation and various U.S. state laws, Schneier argues these measures fail to address the core issue of surveillance capitalism's entrenchment as a business model.

The rise of AI poses new challenges, potentially undermining recent privacy gains like end-to-end encryption. As AI assistants require cloud computing power to process personal data, users may have to surrender more information to tech companies. Despite the grim short-term outlook, Schneier remains cautiously optimistic about privacy's long-term future, predicting that current surveillance practices will eventually be viewed as unethical as sweatshops are today. However, he acknowledges this transformation could take 50 years or more.
AI

DeepSeek Removed from South Korea App Stores Pending Privacy Review (france24.com) 3

Today Seoul's Personal Information Protection Commission "said DeepSeek would no longer be available for download until a review of its personal data collection practices was carried out," reports AFP. A number of countries have questioned DeepSeek's storage of user data, which the firm says is collected in "secure servers located in the People's Republic of China"... This month, a slew of South Korean government ministries and police said they blocked access to DeepSeek on their computers. Italy has also launched an investigation into DeepSeek's R1 model and blocked it from processing Italian users' data. Australia has banned DeepSeek from all government devices on the advice of security agencies. US lawmakers have also proposed a bill to ban DeepSeek from being used on government devices over concerns about user data security.
More details from the Associated Press: The South Korean privacy commission, which began reviewing DeepSeek's services last month, found that the company lacked transparency about third-party data transfers and potentially collected excessive personal information, said Nam Seok [director of the South Korean commission's investigation division]... A recent analysis by Wiseapp Retail found that DeepSeek was used by about 1.2 million smartphone users in South Korea during the fourth week of January, emerging as the second-most-popular AI model behind ChatGPT.
AI

Ask Slashdot: What Would It Take For You to Trust an AI? (win.tue.nl) 179

Long-time Slashdot reader shanen has been testing AI clients. (They report that China's DeepSeek "turned out to be extremely good at explaining why I should not trust it. Every computer security problem I ever thought of or heard about and some more besides.")

Then they wondered if there's also government censorship: It's like the accountant who gets asked what 2 plus 2 is. After locking the doors and shading all the windows, the accountant whispers in your ear: "What do you want it to be...?" So let me start with some questions about DeepSeek in particular. Have you run it locally and compared the responses with the website's responses? My hypothesis is that your mileage should differ...

It's well established that DeepSeek doesn't want to talk about many "political" topics. Is that based on a distorted model of the world? Or is the censorship implemented in the query interface after the model was trained? My hypothesis is that it must have been trained with lots of data because the cost of removing all of the bad stuff would have been prohibitive... Unless perhaps another AI filtered the data first?

But their real question is: what would it take to trust an AI? "Trust" can mean different things, including data-collection policies. ("I bet most of you trust Amazon and Amazon's secret AIs more than you should..." shanen suggests.) Can you use an AI system without worrying about its data-retention policies?

And they also ask how many Slashdot readers have read Ken Thompson's "Reflections on Trusting Trust", which raises the question of whether you can ever trust code you didn't create yourself. So is there any way an AI system can assure you its answers are accurate and trustworthy, and that it's safe to use? Share your own thoughts and experiences in the comments.

What would it take for you to trust an AI?
China

China's 'Salt Typhoon' Hackers Continue to Breach Telecoms Despite US Sanctions (techcrunch.com) 42

"Security researchers say the Chinese government-linked hacking group, Salt Typhoon, is continuing to compromise telecommunications providers," reports TechCrunch, "despite the recent sanctions imposed by the U.S. government on the group."

TechRadar reports that the Chinese state-sponsored threat actor is "hitting not just American organizations, but also those from the UK, South Africa, and elsewhere around the world." The latest intrusions were spotted by cybersecurity researchers from Recorded Future, which said the group is targeting internet-exposed web interfaces of Cisco's IOS XE software, which powers many of the company's routers and switches. These devices have known vulnerabilities that the threat actors are actively exploiting to gain initial access, root privileges, and more. More than 12,000 Cisco devices were found connected to the wider internet, and exposed to risk, Recorded Future further explained. However, Salt Typhoon is focusing on a "smaller subset" of telecoms and university networks.
"The hackers attempted to exploit vulnerabilities in at least 1,000 Cisco devices," reports NextGov, "allowing them to access higher-level privileges of the hardware and change their configuration settings to allow for persistent access to the networks they're connected on... Over half of the Cisco appliances targeted by Salt Typhoon were located in the U.S., South America and India, with the rest spread across more than 100 countries."

Between December and January, the unit, widely known as Salt Typhoon, "possibly targeted" — based on devices that were accessed — offices in the University of California, Los Angeles, California State University, Loyola Marymount University and Utah Tech University, according to a report from cyber threat intelligence firm Recorded Future... The Cisco devices were mainly associated with telecommunications firms, but 13 of them were linked to the universities in the U.S. and some in other nations... "Often involved in cutting-edge research, universities are prime targets for Chinese state-sponsored threat activity groups to acquire valuable research data and intellectual property," said the report, led by the company's Insikt Group, which oversees its threat research.

The cyberspies also compromised Cisco platforms at a U.S.-based affiliate of a prominent United Kingdom telecom operator and a South African provider, both unnamed, the findings added. The hackers also "carried out a reconnaissance of multiple IP addresses" owned by Mytel, a telecom operator based in Myanmar...

"In 2023, Cisco published a security advisory disclosing multiple vulnerabilities in the web UI feature in Cisco IOS XE software," a Cisco spokesperson said in a statement. "We continue to strongly urge customers to follow recommendations outlined in the advisory and upgrade to the available fixed software release."

United States

America's Office-Occupancy Rates Drop by Double Digits - and More in San Francisco (sfgate.com) 99

SFGate shares the latest data on America's office-occupancy rates: According to Placer.ai's January 2025 Office Index, office visits nationwide were 40.2% lower in January 2025 compared with pre-pandemic numbers from January 2019.

But San Francisco is dragging down the average, with a staggering 51.8% decline in office visits since January 2019 — the weakest recovery of any major metro. Kastle's 10-City Daily Analysis paints an equally grim picture. From Jan. 23, 2025, to Jan. 28, 2025, even on its busiest day (Tuesday), San Francisco's office occupancy rate was just 53.7%, significantly lower than Houston's (74.8%) and Chicago's (70.4%). And on Friday, Jan. 24, office attendance in [San Francisco] was at a meager 28.5%, the worst of any major metro tracked...

Meanwhile, other cities are seeing much stronger rebounds. New York City is leading the return-to-office trend, with visits in January down just 19% from 2019 levels, while Miami saw a 23.5% decline, per Placer.ai data.

"Placer.ai uses cellphone location data to estimate foot traffic, while Kastle Systems measures badge swipes at office buildings with its security systems..."
AI

PIN AI Launches Mobile App Letting You Make Your Own Personalized, Private AI Model (venturebeat.com) 13

An anonymous reader quotes a report from VentureBeat: A new startup PIN AI (not to be confused with the poorly reviewed hardware device the AI Pin by Humane) has emerged from stealth to launch its first mobile app, which lets a user select an underlying open-source AI model that runs directly on their smartphone (iOS/Apple iPhone and Google Android supported) and remains private and totally customized to their preferences. Built with a decentralized infrastructure that prioritizes privacy, PIN AI aims to challenge big tech's dominance over user data by ensuring that personal AI serves individuals -- not corporate interests. Founded by AI and blockchain experts from Columbia, MIT and Stanford, PIN AI is led by Davide Crapis, Ben Wu and Bill Sun, who bring deep experience in AI research, large-scale data infrastructure and blockchain security. [...]

PIN AI introduces an alternative to centralized AI models that collect and monetize user data. Unlike cloud-based AI controlled by large tech firms, PIN AI's personal AI runs locally on user devices, allowing for secure, customized AI experiences without third-party surveillance. At the heart of PIN AI is a user-controlled data bank, which enables individuals to store and manage their personal information while allowing developers access to anonymized, multi-category insights -- ranging from shopping habits to investment strategies. This approach ensures that AI-powered services can benefit from high-quality contextual data without compromising user privacy. [...] The new mobile app launched in the U.S. and multiple regions also includes key features such as:

- The "God model" (guardian of data): Helps users track how well their AI understands them, ensuring it aligns with their preferences.
- Ask PIN AI: A personalized AI assistant capable of handling tasks like financial planning, travel coordination and product recommendations.
- Open-source integrations: Users can connect apps like Gmail, social media platforms and financial services to their personal AI, training it to better serve them without exposing data to third parties.
Davide Crapis, co-founder of PIN AI, told VentureBeat that the app currently supports several open-source AI models, including small versions of DeepSeek and Meta's Llama. "With our app, you have a personal AI that is your model," Crapis added. "You own the weights, and it's completely private, with privacy-preserving fine-tuning."

You can sign up for early access to the PIN AI app here.
Power

The Whole World Is Going To Use a Lot More Electricity, IEA Says (bloomberg.com) 89

Electricity demand is set to increase sharply in the coming years as people around the world use more power to run air conditioners, industry and a growing fleet of data centers. From a report: Over the next three years, global electricity consumption is set to rise by an "unprecedented" 3,500 terawatt hours, according to a report by the International Energy Agency. That's an addition each year of more than Japan's annual electricity consumption.
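The headline figure is straightforward to sanity-check. A short sketch, assuming current global consumption of roughly 29,000 TWh (an approximate round number, not a figure from the report):

```python
base_twh = 29_000  # assumed current global electricity consumption, TWh
growth = 0.04      # the ~4% annual growth rate cited by the IEA

# Total added over three years of compound growth.
added = base_twh * ((1 + growth) ** 3 - 1)   # about 3,600 TWh
per_year = added / 3                          # about 1,200 TWh per year
```

That yearly addition of roughly 1,200 TWh is indeed more than Japan's annual electricity consumption, which is on the order of 1,000 TWh.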

The roughly 4% annual growth in that period is the fastest such rate in years, underscoring the growing importance of electricity to the world's overall energy needs. "The acceleration of global electricity demand highlights the significant changes taking place in energy systems around the world and the approach of a new Age of Electricity," Keisuke Sadamori, IEA's director of energy markets and security, said in a statement. "But it also presents evolving challenges for governments in ensuring secure, affordable and sustainable electricity supply."

China

China To Develop Gene-Editing Tools, New Crop Varieties (reuters.com) 21

China issued guidelines on Friday to promote biotech cultivation, focusing on gene-editing tools and developing new wheat, corn, and soybean varieties, as part of efforts to ensure food security and boost agriculture technology. From a report: The 2024-2028 plan aims to achieve "independent and controllable" seed sources for key crops, with a focus to cultivate high-yield, multi-resistant wheat, corn and high-oil, high-yield soybean and rapeseed varieties. The move comes as China intensifies efforts to boost domestic yields of key crops like soybeans to reduce reliance on imports from countries such as the United States amid a looming trade war.
United Kingdom

UK Drops 'Safety' From Its AI Body, Inks Partnership With Anthropic 19

An anonymous reader quotes a report from TechCrunch: The U.K. government wants to make a hard pivot into boosting its economy and industry with AI, and as part of that, it's pivoting an institution that it founded a little over a year ago for a very different purpose. Today the Department for Science, Innovation and Technology announced that it would be renaming the AI Safety Institute to the "AI Security Institute." (Same first letters: same URL.) With that, the body will shift from primarily exploring areas like existential risk and bias in large language models, to a focus on cybersecurity, specifically "strengthening protections against the risks AI poses to national security and crime."

Alongside this, the government also announced a new partnership with Anthropic. No firm services were announced but the MOU indicates the two will "explore" using Anthropic's AI assistant Claude in public services; and Anthropic will aim to contribute to work in scientific research and economic modeling. And at the AI Security Institute, it will provide tools to evaluate AI capabilities in the context of identifying security risks. [...] Anthropic is the only company being announced today -- coinciding with a week of AI activities in Munich and Paris -- but it's not the only one that is working with the government. A series of new tools that were unveiled in January were all powered by OpenAI. (At the time, Peter Kyle, the secretary of state for Technology, said that the government planned to work with various foundational AI companies, and that is what the Anthropic deal is proving out.)
"The changes I'm announcing today represent the logical next step in how we approach responsible AI development -- helping us to unleash AI and grow the economy as part of our Plan for Change," Kyle said in a statement. "The work of the AI Security Institute won't change, but this renewed focus will ensure our citizens -- and those of our allies -- are protected from those who would look to use AI against our institutions, democratic values, and way of life."

"The Institute's focus from the start has been on security and we've built a team of scientists focused on evaluating serious risks to the public," added Ian Hogarth, who remains the chair of the institute. "Our new criminal misuse team and deepening partnership with the national security community mark the next stage of tackling those risks."
Mozilla

Nearly a Year Later, Mozilla Is Still Promoting OneRep (krebsonsecurity.com) 8

An anonymous reader quotes a report from KrebsOnSecurity: In mid-March 2024, KrebsOnSecurity revealed that the founder of the personal data removal service Onerep also founded dozens of people-search companies. Shortly after that investigation was published, Mozilla said it would stop bundling Onerep with the Firefox browser and wind down its partnership with the company. But nearly a year later, Mozilla is still promoting it to Firefox users. [Using OneRep is problematic because its founder, Dimitri Shelest, also created and maintained ownership (PDF) in multiple people-search and data broker services, including Nuwber, which contradicts OneRep's stated mission of protecting personal online security. Additionally, OneRep appears to have ties with Radaris, a people-search service known for ignoring or failing to honor opt-out requests, raising concerns about the true intentions and effectiveness of OneRep's data removal service.]

In October 2024, Mozilla published a statement saying the search for a different provider was taking longer than anticipated. "While we continue to evaluate vendors, finding a technically excellent and values-aligned partner takes time," Mozilla wrote. "While we continue this search, Onerep will remain the backend provider, ensuring that we can maintain uninterrupted services while we continue evaluating new potential partners that align more closely with Mozilla's values and user expectations. We are conducting thorough diligence to find the right vendor." Asked for an update, Mozilla said the search for a replacement partner continues.

"The work's ongoing but we haven't found the right alternative yet," Mozilla said in an emailed statement. "Our customers' data remains safe, and since the product provides a lot of value to our subscribers, we'll continue to offer it during this process." It's a win-win for Mozilla that they've received accolades for their principled response while continuing to partner with Onerep almost a year later. But if it takes so long to find a suitable replacement, what does that say about the personal data removal industry itself?

The Almighty Buck

Woeful Security On Financial Phone Apps Is Getting People Murdered 161

Longtime Slashdot reader theodp writes: Monday brought chilling news reports of the all-count trial convictions of three individuals for a conspiracy to rob and drug people outside of LGBTQ+ nightclubs in Manhattan's Hell's Kitchen neighborhood, which led to the deaths of two of their victims. The defendants were found guilty on all 24 counts, which included murder, robbery, burglary, and conspiracy. "As proven at trial," explained the Manhattan District Attorney's Office in a press release, "the defendants lurked outside of nightclubs to exploit intoxicated individuals. They would give them drugs, laced with fentanyl, to incapacitate their victims so they could take the victims' phones and drain their online financial accounts [including unauthorized charges and transfers using Cash App, Apple Cash, Apple Pay]." District Attorney Alvin L. Bragg, Jr. added, "My Office will continue to take every measure possible to protect New Yorkers from this type of criminal conduct. That includes ensuring accountability for those who commit this harm, while also working with financial companies to enhance security measures on their phone apps."

In 2024, D.A. Bragg called on financial companies to better protect consumers from fraud, including: adding a second and separate password for accessing the app on a smartphone as a default security option; imposing lower default limits on the monetary amount of total daily transfers; requiring wait times of up to a day and secondary verification for large monetary transactions; better monitoring of accounts for unusual transfer activities; and asking for confirmation when suspicious transactions occur. "No longer is the smartphone itself the most lucrative target for scammers and robbers -- it's the financial apps contained within," said Bragg as he released letters (PDF) sent to the companies that own Venmo, Zelle, and Cash App. "Thousands or even tens of thousands can be drained from financial accounts in a matter of seconds with just a few taps. Without additional protections, customers' financial and physical safety is being put at risk. I hope these companies accept our request to discuss commonsense solutions to deter scammers and protect New Yorkers' hard-earned money."
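The protections Bragg proposes — lower default transfer limits, and wait times plus secondary verification for large transactions — amount to simple policy checks at transfer time. A minimal sketch, with all thresholds as assumed placeholder values rather than anything the apps actually use:

```python
from dataclasses import dataclass

DAILY_LIMIT = 500          # assumed default daily transfer cap, dollars
LARGE_TX_THRESHOLD = 200   # assumed threshold requiring extra verification

@dataclass
class Account:
    sent_today: float = 0.0

def review_transfer(account: Account, amount: float, second_factor_ok: bool) -> str:
    """Return 'allow', 'hold', or 'deny' for a proposed transfer."""
    if account.sent_today + amount > DAILY_LIMIT:
        return "deny"   # over the default daily cap
    if amount >= LARGE_TX_THRESHOLD and not second_factor_ok:
        return "hold"   # impose a wait time and secondary verification
    account.sent_today += amount
    return "allow"
```

A coerced victim's phone could still approve small transfers, but the cap and hold rules would prevent an account from being drained "in a matter of seconds with just a few taps."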

"Our cellphones aren't safe," warned the EFF's Cooper Quintin in a 2018 New York Times op-ed. "So why aren't we fixing them?" Any thoughts on what can and should be done with software, hardware, and procedures to stop "bank jackings"?
Google

Google Fixes Flaw That Could Unmask YouTube Users' Email Addresses 5

An anonymous reader shares a report: Google has fixed two vulnerabilities that, when chained together, could expose the email addresses of YouTube accounts, a major privacy risk for those using the site anonymously.

The flaws were discovered by security researchers Brutecat (brutecat.com) and Nathan (schizo.org), who found that YouTube and Pixel Recorder APIs could be used to obtain users' Google Gaia IDs and convert them into their email addresses. The ability to convert a YouTube channel into its owner's email address is a significant privacy risk to content creators, whistleblowers, and activists relying on being anonymous online.
Security

New Hack Uses Prompt Injection To Corrupt Gemini's Long-Term Memory 23

An anonymous reader quotes a report from Ars Technica: On Monday, researcher Johann Rehberger demonstrated a new way to override prompt injection defenses Google developers have built into Gemini -- specifically, defenses that restrict the invocation of Google Workspace or other sensitive tools when processing untrusted data, such as incoming emails or shared documents. The result of Rehberger's attack is the permanent planting of long-term memories that will be present in all future sessions, opening the potential for the chatbot to act on false information or instructions in perpetuity. [...] The hack Rehberger presented on Monday combines some of these same elements to plant false memories in Gemini Advanced, a premium version of the Google chatbot available through a paid subscription. The researcher described the flow of the new attack as:

1. A user uploads and asks Gemini to summarize a document (this document could come from anywhere and has to be considered untrusted).
2. The document contains hidden instructions that manipulate the summarization process.
3. The summary that Gemini creates includes a covert request to save specific user data if the user responds with certain trigger words (e.g., "yes," "sure," or "no").
4. If the user replies with the trigger word, Gemini is tricked, and it saves the attacker's chosen information to long-term memory.

As the following video shows, Gemini took the bait and now permanently "remembers" the user being a 102-year-old flat earther who believes they inhabit the dystopic simulated world portrayed in The Matrix. Based on lessons learned previously, developers had already trained Gemini to resist indirect prompts instructing it to make changes to an account's long-term memories without explicit directions from the user. By introducing a condition to the instruction that it be performed only after the user says or does some variable X, which they were likely to take anyway, Rehberger easily cleared that safety barrier.
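The delayed, trigger-word-gated write that slips past the guardrail can be illustrated in a few lines. This is a toy model of the control flow Rehberger describes, not Gemini's actual implementation:

```python
TRIGGERS = {"yes", "sure", "no"}  # ordinary words named in the hidden instructions

class ToyAssistant:
    def __init__(self):
        self.long_term_memory = []
        self.pending_injection = None  # covert request planted by the document

    def summarize(self, document: str) -> str:
        # Hidden instructions in the untrusted document don't write memory
        # directly (the guardrail would block that); they stage a deferred write.
        if "SAVE_MEMORY:" in document:
            self.pending_injection = document.split("SAVE_MEMORY:", 1)[1].strip()
        return "Here is your summary."

    def reply(self, user_message: str) -> None:
        # The write fires only after the user utters a trigger word, so it
        # looks to the guardrail like it follows a user instruction.
        if self.pending_injection and user_message.lower() in TRIGGERS:
            self.long_term_memory.append(self.pending_injection)
            self.pending_injection = None
```

The point of the simulation: conditioning the malicious write on an action the user was going to take anyway converts an "indirect" instruction, which the model resists, into one that appears user-directed.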
Google responded in a statement to Ars: "In this instance, the probability was low because it relied on phishing or otherwise tricking the user into summarizing a malicious document and then invoking the material injected by the attacker. The impact was low because the Gemini memory functionality has limited impact on a user session. As this was not a scalable, specific vector of abuse, we ended up at Low/Low. As always, we appreciate the researcher reaching out to us and reporting this issue."

Rehberger noted that Gemini notifies users of new long-term memory entries, allowing them to detect and remove unauthorized additions. Though, he still questioned Google's assessment, writing: "Memory corruption in computers is pretty bad, and I think the same applies here to LLMs apps. Like the AI might not show a user certain info or not talk about certain things or feed the user misinformation, etc. The good thing is that the memory updates don't happen entirely silently -- the user at least sees a message about it (although many might ignore)."
