Bug

Google Pay Bug Accidentally Sends Users Free Money (arstechnica.com) 17

Here's a good reason to use Google Pay: Google might send you a bunch of free money. From a report: Many users report that Google accidentally deposited cash in their accounts -- anywhere from $10 to $1,000. Android researcher Mishaal Rahman got hit with the bug and shared most of the relevant details on Twitter. The cash arrived via Google Pay's "reward" program. Just like a credit card, you're supposed to get a few bucks back occasionally for various promotions, but nothing like this. Numerous screenshots show users receiving loads of "Reward" money for what the message called "dogfooding the Google Pay Remittance experience." "Dogfooding" is tech speak for "internally beta testing pre-release software," so if a message like this was ever supposed to go out, it should have only gone out to Google employees and/or some testing partners. Many regular users received multiple copies of this message with multiple payouts.
Data Storage

After Disrupting Businesses, Google Drive's Secret File Cap is Dead for Now 45

Google is backtracking on its decision to put a file creation cap on Google Drive. From a report: Around two months ago, the company decided to cap all Google Drive users at 5 million files, even if they were paying for extra storage. The company did this in the worst way possible, rolling out the limit as a complete surprise and with no prior communication. Some users logged in to find they were suddenly millions of files over the new limit and unable to upload new files until they deleted enough to get under the limit. Some of these users were businesses whose systems were brought down by the sudden file cap, and because Google never communicated that the change was coming, many people initially thought the limitation was a bug.

Apparently, sunshine really is the best disinfectant. The story made the tech news rounds on Friday, and Ars got Google on the record saying that the file cap was not a bug and was actually "a safeguard to prevent misuse of our system in a way that might impact the stability and safety of the system." After the weekend reaction to "Google Drive's Secret File Cap!" Google announced on Twitter Monday night that it was rolling back the limit. [...] Google told us it initially rolled the limitation out to stop what it called "misuse" of Drive, and with the tweet saying Google wants to "explore alternate approaches to ensure a great experience for all," it sounds like we might see more kinds of Drive limitations in the future.
Technology

FTC Should Stop OpenAI From Launching New GPT Models, Says AI Policy Group (theverge.com) 56

An artificial intelligence-focused tech ethics group has asked the Federal Trade Commission to investigate OpenAI for violating consumer protection rules, arguing that the organization's rollout of AI text generation tools has been "biased, deceptive, and a risk to public safety." From a report: The Center for AI and Digital Policy (CAIDP) filed its complaint today following the publication of a high-profile open letter calling for a pause on large generative AI experiments. CAIDP president Marc Rotenberg was one of the letter's signatories, alongside a number of AI researchers and OpenAI co-founder Elon Musk. Similar to that letter, the complaint calls to slow down the development of generative AI models and implement stricter government oversight.

The CAIDP complaint points out potential threats from OpenAI's GPT-4 generative text model, which was announced in mid-March. They include ways that GPT-4 could produce malicious code and highly tailored propaganda, as well as ways that biased training data could result in baked-in stereotypes or unfair race and gender preferences in things like hiring. It also points out significant privacy failures with OpenAI's product interface -- like a recent bug that exposed ChatGPT users' chat histories and possibly payment details to other users.

Security

Ransomware Crooks Are Exploiting IBM File-Exchange Bug With a 9.8 Severity (arstechnica.com) 18

Threat actors are exploiting a critical vulnerability in an IBM file-exchange application in hacks that install ransomware on servers, security researchers have warned. From a report: The IBM Aspera Faspex is a centralized file-exchange application that large organizations use to transfer large files or large volumes of files at very high speeds. Rather than relying on TCP-based technologies such as FTP to move files, Aspera uses IBM's proprietary FASP -- short for Fast, Adaptive, and Secure Protocol -- to better utilize available network bandwidth. The product also provides fine-grained management that makes it easy for users to send files to a list of recipients in distribution lists or shared inboxes or workgroups, giving transfers a workflow that's similar to email.

In late January, IBM warned of a critical vulnerability in Aspera versions 4.4.2 Patch Level 1 and earlier and urged users to install an update to patch the flaw. Tracked as CVE-2022-47986, the vulnerability makes it possible for unauthenticated threat actors to remotely execute malicious code by sending specially crafted calls to an outdated programming interface. The ease of exploiting the vulnerability and the damage that could result earned CVE-2022-47986 a severity rating of 9.8 out of a possible 10. On Tuesday, researchers from security firm Rapid7 said they recently responded to an incident in which a customer was breached using the vulnerability.

Google

Google Security Researchers Accuse CentOS of Failing to Backport Kernel Fixes (neowin.net) 42

An anonymous reader quotes Neowin: Google Project Zero is a security team responsible for discovering security flaws in Google's own products as well as software developed by other vendors. Following discovery, the issues are privately reported to vendors and they are given 90 days to fix the reported problems before they are disclosed publicly.... Now, the security team has reported several flaws in CentOS' kernel.

As detailed in the technical document here, Google Project Zero's security researcher Jann Horn learned that kernel fixes made to stable trees are not backported to many enterprise versions of Linux. To validate this hypothesis, Horn compared the CentOS Stream 9 kernel to the upstream linux-5.15.y stable tree.... As expected, it turned out that several kernel fixes have not been deployed in older, but still supported, versions of CentOS Stream/RHEL. Horn further noted that for this case, Project Zero is giving a 90-day deadline to release a fix, but in the future, it may allot even stricter deadlines for missing backports....
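The write-up doesn't include Horn's comparison tooling, but the general approach can be sketched in a few lines of Python. The sketch below is an illustration under stated assumptions, not Project Zero's method: it assumes both kernels are available as local git checkouts (the repository paths and revision ranges shown are hypothetical placeholders) and uses commit subject lines as a rough proxy for whether a fix was backported.

```python
#!/usr/bin/env python3
"""Rough sketch, not Project Zero's tooling: list upstream stable-tree commit
subjects that don't appear in a distro kernel branch, as a crude proxy for
fixes that may not have been backported. Subject-line matching misses renamed
or squashed backports, so treat the output only as a starting point."""
import subprocess

def commit_subjects(repo, rev_range):
    """Return the set of commit subject lines for a git revision range."""
    out = subprocess.run(
        ["git", "-C", repo, "log", "--format=%s", rev_range],
        capture_output=True, text=True, check=True,
    )
    return set(out.stdout.splitlines())

# Hypothetical local checkout paths and revision ranges; adjust to the trees being compared.
stable_fixes = commit_subjects("linux-stable", "v5.15..linux-5.15.y")
distro_commits = commit_subjects("centos-stream-9-kernel", "HEAD")

missing = sorted(stable_fixes - distro_commits)
print(f"{len(missing)} stable-tree subjects not found in the distro tree")
for subject in missing[:25]:
    print("  ", subject)
```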

Red Hat accepted all three bugs reported by Horn and assigned them CVE numbers. However, the company failed to fix these issues in the allotted 90-day timeline, and as such, these vulnerabilities are being made public by Google Project Zero.

Horn is urging better patch scheduling so "an attacker who wants to quickly find a nice memory corruption bug in CentOS/RHEL can't just find such bugs in the delta between upstream stable and your kernel."
AI

OpenAI Admits ChatGPT Leaked Some Payment Data, Blames Open-Source Bug (openai.com) 22

OpenAI took ChatGPT offline earlier this week "due to a bug in an open-source library which allowed some users to see titles from another active user's chat history," according to an OpenAI blog post. "It's also possible that the first message of a newly-created conversation was visible in someone else's chat history if both users were active around the same time....

"Upon deeper investigation, we also discovered that the same bug may have caused the unintentional visibility of payment-related information of 1.2% of the ChatGPT Plus subscribers who were active during a specific nine-hour window." In the hours before we took ChatGPT offline on Monday, it was possible for some users to see another active user's first and last name, email address, payment address, the last four digits (only) of a credit card number, and credit card expiration date. Full credit card numbers were not exposed at any time.

We believe the number of users whose data was actually revealed to someone else is extremely low. To access this information, a ChatGPT Plus subscriber would have needed to do one of the following:

- Open a subscription confirmation email sent on Monday, March 20, between 1 a.m. and 10 a.m. Pacific time. Due to the bug, some subscription confirmation emails generated during that window were sent to the wrong users. These emails contained the last four digits of another user's credit card number, but full credit card numbers did not appear. It's possible that a small number of subscription confirmation emails might have been incorrectly addressed prior to March 20, although we have not confirmed any instances of this.

- In ChatGPT, click on "My account," then "Manage my subscription" between 1 a.m. and 10 a.m. Pacific time on Monday, March 20. During this window, another active ChatGPT Plus user's first and last name, email address, payment address, the last four digits (only) of a credit card number, and credit card expiration date might have been visible. It's possible that this also could have occurred prior to March 20, although we have not confirmed any instances of this.


We have reached out to notify affected users that their payment information may have been exposed. We are confident that there is no ongoing risk to users' data. Everyone at OpenAI is committed to protecting our users' privacy and keeping their data safe. It's a responsibility we take incredibly seriously. Unfortunately, this week we fell short of that commitment, and of our users' expectations. We apologize again to our users and to the entire ChatGPT community and will work diligently to rebuild trust.

The bug was discovered in the Redis client open-source library, redis-py. As soon as we identified the bug, we reached out to the Redis maintainers with a patch to resolve the issue.
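OpenAI's post doesn't detail the failure mode beyond naming redis-py, but the general class of bug -- a cancelled request leaving an unread reply on a shared connection, so the next caller reads someone else's response -- can be illustrated with a toy sketch. The code below is a self-contained illustration of that bug class, not redis-py's actual code; the SharedConnection class and the commands shown are hypothetical.

```python
import asyncio

class SharedConnection:
    """Toy stand-in for a pooled client connection: replies come back strictly
    in the order requests were sent, and whoever reads next gets the oldest
    unread reply."""
    def __init__(self):
        self._replies = asyncio.Queue()

    async def send(self, request):
        # Pretend the server answers every request it receives.
        await self._replies.put(f"reply-to:{request}")

    async def read(self):
        return await self._replies.get()

async def command(conn, request, delay=0.0):
    await conn.send(request)
    await asyncio.sleep(delay)   # a cancellation landing here...
    return await conn.read()     # ...leaves an unread reply behind

async def main():
    conn = SharedConnection()

    # Caller A's request is cancelled after it was sent but before its reply was read.
    task_a = asyncio.create_task(command(conn, "GET caller_a_history", delay=0.1))
    await asyncio.sleep(0.01)
    task_a.cancel()

    # Caller B reuses the same connection and reads the oldest unread reply,
    # which belongs to caller A.
    print(await command(conn, "GET caller_b_history"))  # prints: reply-to:GET caller_a_history

asyncio.run(main())
```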

"The bug is now patched. We were able to restore both the ChatGPT service and, later, its chat history feature, with the exception of a few hours of history."
Software

VW Will Support Software Products For Up To 15 Years (arstechnica.com) 23

An anonymous reader quotes a report from Ars Technica, written by Jonathan M. Gitlin: A perennial question accompanying the spread of Android Automotive has been that of support. A car has a much longer expected service life than a smartphone, especially an Android smartphone, and with infotainment systems so integral to a car's operations now, how long can we reasonably expect those infotainment systems to be supported? I got the chance to put this question to Dirk Hilgenberg, CEO of CARIAD, Volkswagen Group's software division: Given the much longer service life of a car compared to a smartphone, how does VW plan to keep those cars patched and safe 10 or 15 years from now?

"We actually have a contract with the brands, which took a while to negotiate, but lifetime support was utterly important," Hilgenberg told me. The follow-up was obvious: How long is "lifetime"? "Fifteen years after service, and an extra option for brands who would like to have it even longer; you know, we have to guarantee updatability on all legal aspects," he said. "So that's why we are, as you can imagine, very cautious with branches of releases because every branch we need to maintain over this long time. So when you have end of operation and EOP [end of production] and it's 15 years longer, we still have to maintain that; plus, some brands actually said 'because my vehicle is a unicorn, it's something that people want even more, they only occasionally drive it but they want to be safe,'" Hilgenberg told me.

(The unicorn reference should make sense in the context of VW Group owning Bugatti, Lamborghini, and Porsche, whose cars are often collected and can be on the road for many decades.) In those cases, CARIAD would provide continued support, Hilgenberg said. "Especially as cybersecurity, all the legal things are concerned, you see that already. Now we do upgrades and releases, whether it's in China, whether it's in the US, whether it's in Europe, we take very cautious steps. Security and safety has, in the Volkswagen group, you know, the utmost importance, and we see it actually as an opportunity to differentiate," he said.
In an update to the article, Ars said CARIAD got in touch with them to add some clarifications. "As part of its development services to Volkswagen's automotive brands, CARIAD provides operational services, updates, upgrades and new releases as well as bug fixes and patches relating to its hardware- and software-products. We usually support our hard- and software releases for extended periods of time. In some cases this can be up to 15 years after the end of production ('EOP') for hardware and 10 years after EOP for software releases. Moreover, there are legally mandatory periods we comply with, e.g. cybersecurity as well as safety updates and patches are provided for as long as a function is available. In addition, there may be individual agreements with brands for longer support periods to specifically satisfy their customers' needs," wrote a CARIAD spokesperson.

Ars notes: "there's no guarantee that OEMs can make the business model work for this long-term support."
Security

Hackers Drain Bitcoin ATMs of $1.5 Million By Exploiting 0-Day Bug (arstechnica.com) 112

turp182 shares a report from Ars Technica: Hackers drained millions of dollars in digital coins from cryptocurrency ATMs by exploiting a zero-day vulnerability, leaving customers on the hook for losses that can't be reversed, the kiosk manufacturer has revealed. The heist targeted ATMs sold by General Bytes, a company with multiple locations throughout the world. These BATMs, short for bitcoin ATMs, can be set up in convenience stores and other businesses to allow people to exchange bitcoin for other currencies and vice versa. Customers connect the BATMs to a crypto application server (CAS) that they can manage or, until now, that General Bytes could manage for them. For reasons that aren't entirely clear, the BATMs offer an option that allows customers to upload videos from the terminal to the CAS using a mechanism known as the master server interface.

Over the weekend, General Bytes revealed that more than $1.5 million worth of bitcoin had been drained from CASes operated by the company and by customers. To pull off the heist, an unknown threat actor exploited a previously unknown vulnerability that allowed it to use this interface to upload and execute a malicious Java application. The actor then drained various hot wallets of about 56 BTC, worth roughly $1.5 million. General Bytes patched the vulnerability 15 hours after learning of it, but due to the way cryptocurrencies work, the losses were unrecoverable. [...] Once the malicious application executed on a server, the threat actor was able to (1) access the database, (2) read and decrypt encoded API keys needed to access funds in hot wallets and exchanges, (3) transfer funds from hot wallets to a wallet controlled by the threat actor, (4) download user names and password hashes and turn off 2FA, and (5) access terminal event logs and scan for instances where customers scanned private keys at the ATM. The sensitive data in step 5 had been logged by older versions of ATM software.

Going forward, this weekend's post said, General Bytes will no longer manage CASes on behalf of customers. That means terminal holders will have to manage the servers themselves. The company is also in the process of collecting data from customers to validate all losses related to the hack, performing an internal investigation, and cooperating with authorities in an attempt to identify the threat actor. General Bytes said the company has received "multiple security audits since 2021," and that none of them detected the vulnerability exploited. The company is now in the process of seeking further help in securing its BATMs.

Security

New Victims Come Forward After Mass-Ransomware Attack (techcrunch.com) 13

The number of victims affected by a mass-ransomware attack, caused by a bug in a popular data transfer tool used by businesses around the world, continues to grow as another organization tells TechCrunch that it was also hacked. From the report: Canadian financing giant Investissement Quebec confirmed to TechCrunch that "some employee personal information" was recently stolen by a ransomware group that claimed to have breached dozens of other companies. Spokesperson Isabelle Fontaine said the incident occurred at Fortra, previously known as HelpSystems, which develops the vulnerable GoAnywhere file transfer tool. Hitachi Energy also confirmed this week that some of its employee data had been stolen in a similar incident involving its GoAnywhere system, adding that the incident happened at Fortra.

Over the past few days, the Russia-linked Clop gang has added several other organizations to its dark web leak site, which it uses to extort companies further by threatening to publish the stolen files unless a ransom demand is paid. TechCrunch has learned of dozens of organizations that used the affected GoAnywhere file transfer software at the time of the ransomware attack, suggesting more victims are likely to come forward. However, while the number of victims of the mass-hack is widening, the known impact is murky at best. Since the attack in late January or early February -- the exact date is not known -- Clop has disclosed fewer than half of the 130 organizations it claimed to have compromised via GoAnywhere, a system that can be hosted in the cloud or on an organization's network and that allows companies to securely transfer huge sets of data and other large files.

Bug

Google Pixel Bug Lets You 'Uncrop' the Last Four Years of Screenshots (arstechnica.com) 29

An anonymous reader quotes a report from Ars Technica: Back in 2018, Pixel phones gained a built-in screenshot editor called "Markup" with the release of Android 9.0 Pie. The tool pops up whenever you take a screenshot, and tapping the app's pen icon gives you access to tools like crop and a few colored drawing pens. That's very handy, assuming Google's Markup tool actually does what it says, but a new vulnerability reveals that the edits made by this tool weren't actually destructive: it's possible to uncrop or unredact Pixel screenshots taken during the past four years.

The bug was discovered by Simon Aarons and is dubbed "Acropalypse," or more formally CVE-2023-21036. There's a proof-of-concept app that can unredact Pixel screenshots at acropalypse.app, and it works! There's also a good technical write-up here by Aarons' collaborator, David Buchanan. The basic gist of the problem is that Google's screenshot editor overwrites the original screenshot file with your new edited screenshot, but it does not truncate or recompress that file in any way. If your edited screenshot has a smaller file size than the original -- that's very easy to do with the crop tool -- you end up with a PNG with a bunch of hidden junk data at the end of it. That junk data is made up of the end bits of your original screenshot, and it's actually possible to recover that data.
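For illustration, checking a PNG for this leftover data takes only a few lines of Python. The sketch below is not Aarons' acropalypse.app proof of concept or Buchanan's recovery code; it is a minimal check that reports how many bytes sit after a file's IEND chunk, which is where the orphaned tail of the original screenshot would live.

```python
#!/usr/bin/env python3
"""Minimal sketch: report how many bytes a PNG file contains after its IEND
chunk. A well-formed PNG ends right after IEND; affected edited screenshots
keep the tail of the original, larger file beyond that point. This only
measures the leftover data; it does not attempt to reconstruct the image."""
import sys

# The IEND chunk type followed by its fixed CRC marks the end of a PNG stream.
IEND_MARKER = b"IEND\xae\x42\x60\x82"

def trailing_bytes(path):
    data = open(path, "rb").read()
    pos = data.find(IEND_MARKER)
    if pos == -1:
        raise ValueError(f"{path}: no IEND chunk found (not a valid PNG?)")
    return len(data) - (pos + len(IEND_MARKER))

if __name__ == "__main__":
    for path in sys.argv[1:]:
        extra = trailing_bytes(path)
        flag = "  <-- possible leftover screenshot data" if extra else ""
        print(f"{path}: {extra} bytes after IEND{flag}")
```

Run against a folder of edited screenshots, a nonzero count is a hint, not proof, that the file carries recoverable data from before the edit.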
While the bug was fixed in the March 2023 security update for Pixel devices, the patch doesn't solve the whole problem, notes Ars: "There's still the matter of the last four years of Pixel screenshots that are out there and possibly full of hidden data that people didn't realize they were sharing."
Bug

Nvidia Driver Bug Might Make Your CPU Work Harder After You Close Your Game (arstechnica.com) 13

An anonymous reader shares a report: Nvidia released a new driver update for its GeForce graphics cards that, among other things, introduced a new Video Super Resolution upscaling technology that could make low-resolution videos look better on high-resolution screens. But the driver (version 531.18) also apparently came with a bug that caused high CPU usage on some PCs after running and then closing a game. Nvidia has released a driver hotfix (version 531.26) that acknowledges and should fix the issue, which was apparently being caused by an undisclosed bug in the "Nvidia Container," a process that exists mostly to contain other processes that come with Nvidia's drivers. It also fixes a "random bugcheck" issue that may affect some older laptops with GeForce 1000-series or MX250 and MX350 GPUs.
Graphics

Nvidia Confirms Latest GeForce Driver Is Causing CPU Spikes (pcworld.com) 21

An Nvidia GPU driver update has caused some users to see inflated CPU usage after closing 3D games, which persists until a reboot. Nvidia confirmed the problem with driver update 531.18, and will post a hotfix on March 7. PCWorld reports: The company confirmed the problem with the latest driver update, 531.18, which was published on February 28th. An updated list of open issues (including some that didn't make it into the full release notes) was posted to Nvidia's support forum, and spotted by VideoCardz.com. Issue number 4007208 reads, "Higher CPU usage from NVIDIA Container may be observed after exiting a game." Some users are showing CPU usage of up to 10-15 percent in these conditions -- not enough to seriously hamper most gaming desktops, but more than enough to be an annoyance, especially if you use your PC for other intensive tasks. Like opening three Chrome tabs at once.

At the moment there's no easy fix, so the immediate solution if you're affected is to roll back your driver to version 528.49 from February 8th, available for manual download here.

Education

Code.org Celebrates 10th Anniversary With Fond Memories of Its Viral 2013 Video 21

Long-time Slashdot reader theodp shares his perspective on the 10th anniversary of Code.org: "Remember this?" asks tech-backed Code.org on Twitter as it celebrates its achievements.... "It's the viral video that launched Code.org back in 2013!" Code.org also reminds its 1M Twitter followers that What Most Schools Don't Teach starred tech leaders Bill Gates, Mark Zuckerberg, Jack Dorsey, Tony Hsieh, and Drew Houston.

But 10 years later, the promise of unlimited tech jobs and crazy-fun workplaces promoted in the video by these Poster Boys for K-12 Computer Science hasn't exactly aged well, and may serve as more of a cautionary tale about hubris for some rather than evoke fond memories.

"Our policy at Facebook is literally to hire as many talented engineers as we can find," exclaimed Zuckerberg in the video. But ten years later, Facebook's policy is firing as many employees as it can — 11,000+ and counting. Houston, who sang the praises of working in cool tech workplaces in the video ("To get the very best people we try to make the office as awesome as possible"), went on to make remote work the standard practice at Dropbox, cut 11% of his employees, and reported a $575M loss on unneeded office space. Under pressure, Gates left Microsoft, Dorsey left Twitter, and Hsieh tragically left (Amazon-owned) Zappos, and the companies they co-founded recently unveiled plans for massive layoffs and halted ambitious office expansion plans as tech employees push back on return-to-the-office edicts.

Still, there's no denying the success of what the National Science Foundation called the "amazing marketing prowess" of the tech giant-supported and -directed Code.org when it comes to pushing coding into American classrooms. The nonprofit boasts of having 80M+ student accounts, reports it has spent $74.7M to train 113,000+ K-12 teachers to deliver its K-12 CS curriculum, and has set its sights on making CS a high school graduation requirement in every state by 2030.

Interestingly, concomitant with Code.org's 10th anniversary celebration was the release of a new academic paper — Breaking the Code: Confronting Racism in Computer Science through Community, Criticality, and Citizenship — that provocatively questions whether K-12 CS, at least in its current incarnation, is a feature or a bug. From the paper: "We are currently seeing an unprecedented push of computing into P-12 education systems across the US, with calls for compulsory computing education and changes to graduation requirements.... Although computing creep narratives are typically framed in lofty democratic terms, the 'access' narrative is ultimately a corporate play. Broadening participation in computing serves corporate interests by offering an expanded labor supply from which to choose the most productive workers. It is true that this might benefit an elite subset of BIPOC individuals, but the macroeconomics of the global labor market mean that access to computing is unlikely to ever benefit BIPOC communities at scale. [...] There are several nonprofits invested in the growth of computing, many with mission statements that do explicitly cite equity (and sometimes racial equity, in particular). Some of the larger nonprofits, though, are mainly funded by (and thus ultimately serve) corporate interests (e.g., Code.org)."
Chrome

First Look At Google Chrome's Blink Engine Running On an iPhone (9to5google.com) 39

Google has begun the process of bringing Chrome's full Blink browser engine to iOS against current App Store rules, and now we have our first look at the test browser in action. 9to5Google reports: In the weeks since the project was announced, Google (and Igalia, a major open source consultancy and frequent Chromium contributor) have been hard at work getting a simplified "content_shell" browser up and running in iOS and fixing issues along the way. As part of that bug fixing process, some developers have even shared screenshots of the minimal Blink-based browser running on an iPhone 12. In the images, we can see a few examples of Google Search working as expected, with no glaringly obvious issues in the site's appearance. Above the page contents, you can see a simple blue bar containing the address bar and typical browser controls like back, forward, and refresh.

With a significant bit of effort, we were able to build the prototype browser for ourselves and show other sites including 9to5Google running in Blink for iOS, through the Xcode Simulator. As an extra touch of detail, we now know what the three-dots button next to the address bar is for. It opens a menu with a "Begin tracing" button, to aid performance testing. From these work-in-progress screenshots, it seems clear that the Blink for iOS project is already making significant progress, but it's clearly a prototype not meant to be used like a full web browser. The next biggest step that Google has laid out is to ensure this version of Blink/Chromium for iOS passes all of the many tests that ensure all aspects of a browser are working correctly.

Bug

Scientist Finds Rare Jurassic Era Bug At Arkansas Walmart, Kills It and Puts It On a Pin (cbsnews.com) 41

Longtime Slashdot reader theshowmecanuck shares a report from CBS News: A 2012 trip to a Fayetteville, Arkansas, Walmart to pick up some milk turned out to be one for the history books. A giant bug that stopped a scientist in his tracks as he walked into the store, and that he ended up taking home, turned out to be a rare Jurassic-era flying insect. Michael Skvarla, director of Penn State University's Insect Identification Lab, found the mysterious bug -- an experience that he says he remembers "vividly."

"I was walking into Walmart to get milk and I saw this huge insect on the side of the building," he said in a press release from Penn State. "I thought it looked interesting, so I put it in my hand and did the rest of my shopping with it between my fingers. I got home, mounted it, and promptly forgot about it for almost a decade."

[I]n the fall of 2020 when he was teaching an online course on insect biodiversity and evolution, Skvarla was showing students the bug and suddenly realized it wasn't what he originally thought. He and his students then figured out what it might be -- live on a Zoom call. "We were watching what Dr. Skvarla saw under his microscope and he's talking about the features and then just kinda stops," one of his students Codey Mathis said. "We all realized together that the insect was not what it was labeled and was in fact a super-rare giant lacewing." A clear indicator of this identification was the bug's wingspan. It was about 50 millimeters -- nearly 2 inches -- a span that the team said made it clear the insect was not an antlion.
His team's molecular analysis of the bug has been published in the Proceedings of the Entomological Society of Washington.

theshowmecanuck captioned: "To be fair, he said he didn't know what it was so [he] just collected it and took it home, and then figured it out later. My thought that I added to the title was because of this quote in the story (which tickled my cynicism in humanity): 'It could have been 100 years since it was even in this area -- and it's been years since it's been spotted anywhere near it...'"
Youtube

YouTube Video Causes Pixel Phones To Instantly Reboot (arstechnica.com) 55

An anonymous reader quotes a report from Ars Technica: Did you ever see that movie The Ring? People who watched a cursed, creepy video would all mysteriously die in seven days. Somehow Google seems to have re-created the tech version of that, where the creepy video is this clip of the 1979 movie Alien, and the thing that dies after watching it is a Google Pixel phone. As noted by the user "OGPixel5" on the Google Pixel subreddit, watching this specific clip on a Google Pixel 6, 6a, or Pixel 7 will cause the phone to instantly reboot. Something about the clip is disagreeable to the phone, and it hard-crashes before it can even load a frame. Some users in the thread say cell service wouldn't work after the reboot, requiring another reboot to get it back up and running.

The leading theory floating around is that something about the format of the video (it's 4K HDR) is causing the phone to crash. It wouldn't be the first time something like this happened to an Android phone. In 2020, there was a cursed wallpaper that would crash a phone when set as the background due to a color space bug. The affected phones all use Google's Exynos-derived Tensor SoC, so don't expect non-Google phones to be affected by this. Samsung Exynos phones would be the next most-likely candidates, but we haven't seen any reports of that.
According to CNET, the issue has been addressed and a full fix will be deployed in March.
Bug

Security Researchers Warn of a 'New Class' of Apple Bugs (techcrunch.com) 30

Since the earliest versions of the iPhone, "The ability to dynamically execute code was nearly completely removed," write security researchers at Trellix, "creating a powerful barrier for exploits which would need to find a way around these mitigations to run a malicious program. As macOS has continually adopted more features of iOS it has also come to enforce code signing more strictly."

"The Trellix Advanced Research Center vulnerability team has discovered a large new class of bugs that allow bypassing code signing to execute arbitrary code in the context of several platform applications, leading to escalation of privileges and sandbox escape on both macOS and iOS.... The vulnerabilities range from medium to high severity with CVSS scores between 5.1 and 7.1. These issues could be used by malicious applications and exploits to gain access to sensitive information such as a user's messages, location data, call history, and photos."

Computer Weekly explains that the vulnerability bypasses strengthened code-signing mitigations that Apple put in place around NSPredicate, a Foundation class used to filter and match data, after the infamous ForcedEntry exploit used by Israeli spyware manufacturer NSO Group: So far, the team has found multiple vulnerabilities within the new class of bugs, the first and most significant of which exists in a process designed to catalogue data about behaviour on Apple devices. If an attacker has achieved code execution capability in a process with the right entitlements, they could then use NSPredicate to execute code with the process's full privilege, gaining access to the victim's data.

Emmitt and his team also found other issues that could enable attackers with appropriate privileges to install arbitrary applications on a victim's device, access and read sensitive information, and even wipe a victim's device. Ultimately, all of the new bugs carry a similar level of impact to ForcedEntry.

Senior vulnerability researcher Austin Emmitt said the vulnerabilities constituted a "significant breach" of the macOS and iOS security models, which rely on individual applications having fine-grain access to the subset of resources needed, and querying services with more privileges to get anything else.

"The key thing here is the vulnerabilities break Apple's security model at a fundamental level," Trellix's director of vulnerability research told Wired — though there's some additional context: Apple has fixed the bugs the company found, and there is no evidence they were exploited.... Crucially, any attacker trying to exploit these bugs would require an initial foothold into someone's device. They would need to have found a way in before being able to abuse the NSPredicate system. (The existence of a vulnerability doesn't mean that it has been exploited.)

Apple patched the NSPredicate vulnerabilities Trellix found in its macOS 13.2 and iOS 16.3 software updates, which were released in January. Apple has also issued CVEs for the vulnerabilities that were discovered: CVE-2023-23530 and CVE-2023-23531. Since Apple addressed these vulnerabilities, it has also released newer versions of macOS and iOS. These included security fixes for a bug that was being exploited on people's devices.

TechCrunch explores its severity: While Trellix has seen no evidence to suggest that these vulnerabilities have been actively exploited, the cybersecurity company tells TechCrunch that its research shows that iOS and macOS are "not inherently more secure" than other operating systems....

Will Strafach, a security researcher and founder of the Guardian firewall app, described the vulnerabilities as "pretty clever," but warned that there is little the average user can do about these threats, "besides staying vigilant about installing security updates." And iOS and macOS security researcher Wojciech Reguła told TechCrunch that while the vulnerabilities could be significant, in the absence of exploits, more details are needed to determine how big this attack surface is.

Jamf's Michael Covington said that Apple's code-signing measures were "never intended to be a silver bullet or a lone solution" for protecting device data. "The vulnerabilities, though noteworthy, show how layered defenses are so critical to maintaining good security posture," Covington said.

Programming

GCC Gets a New Frontend for Rust (fosdem.org) 106

Slashdot reader sleeping cat shares a recent FOSDEM talk by a compiler engineer on the team building Rust-GCC, "an alternative compiler implementation for the Rust programming language."

"If gccrs interprets a program differently from rustc, this is considered a bug," explains the project's FAQ on GitHub.

The FAQ also notes that LLVM's set of compiler technologies — which Rust uses — "is missing some backends that GCC supports, so a gccrs implementation can fill in the gaps for use in embedded development." But the FAQ also highlights another potential benefit: With the recent announcement of Rust being allowed into the Linux Kernel codebase, an interesting security implication has been highlighted by Open Source Security, inc. When code is compiled and uses Link Time Optimization (LTO), GCC emits GIMPLE [an intermediate representation] directly into a section of each object file, and LLVM does something similar with its own bytecode. If mixing rustc-compiled code and GCC-built code in the Linux kernel, the compilers will be unable to perform a full link-time optimization pass over all of the compiled code, leading to absent CFI (control flow integrity).

If Rust is available in the GNU toolchain, releases of the Linux kernel (for example) can be built with CFI using either LLVM or GCC.

Started in 2014 (and revived in 2019), "The effort has been ongoing since 2020...and we've done a lot of effort and a lot of progress," compiler engineer Arthur Cohen says in the talk. "We have upstreamed the first version of gccrs within GCC. So next time when you install GCC 13 — you'll have gccrs in it. You can use it, you can start hacking on it, you can please report issues when it inevitably crashes and dies horribly."

"One big thing we're doing is some work towards running the rustc test suite. Because we want gccrs to be an actual Rust compiler and not a toy project or something that compiles a language that looks like Rust but isn't Rust, we're trying really hard to get that test suite working."

Privacy

Dashlane Publishes Its Source Code To GitHub In Transparency Push (techcrunch.com) 8

Password management company Dashlane has made its mobile app code available on GitHub for public perusal, a first step it says in a broader push to make its platform more transparent. TechCrunch reports: The Dashlane Android app code is available now alongside the iOS incarnation, though it also appears to include the codebase for its Apple Watch and Mac apps even though Dashlane hasn't specifically announced that. The company said that it eventually plans to make the code for its web extension available on GitHub too. Initially, Dashlane said that it was planning to make its codebase "fully open source," but in response to a handful of questions posed by TechCrunch, it appears that won't in fact be the case.

At first, the code will be open for auditing purposes only, but in the future it may start accepting contributions too -- however, there is no suggestion that it will go all-in and allow the public to fork or otherwise re-use the code in their own applications. Dashlane has released the code under a Creative Commons Attribution-NonCommercial 4.0 license, which technically means that users are allowed to copy, share and build upon the codebase so long as it's for non-commercial purposes. However, the company said that it has stripped out some key elements from its release, effectively hamstringing what third-party developers are able to do with the code. [...]

"The main benefit of making this code public is that anyone can audit the code and understand how we build the Dashlane mobile application," the company wrote. "Customers and the curious can also explore the algorithms and logic behind password management software in general. In addition, business customers, or those who may be interested, can better meet compliance requirements by being able to review our code." On top of that, the company says that a benefit of releasing its code is to perhaps draw-in technical talent, who can inspect the code prior to an interview and perhaps share some ideas on how things could be improved. Moreover, so-called "white-hat hackers" will now be better equipped to earn bug bounties. "Transparency and trust are part of our company values, and we strive to reflect those values in everything we do," Dashlane continued. "We hope that being transparent about our code base will increase the trust customers have in our product."

Security

Anker Finally Comes Clean About Its Eufy Security Cameras (theverge.com) 30

An anonymous reader quotes a report from The Verge: First, Anker told us it was impossible. Then, it covered its tracks. It repeatedly deflected while utterly ignoring our emails. So shortly before Christmas, we gave the company an ultimatum: if Anker wouldn't answer why its supposedly always-encrypted Eufy cameras were producing unencrypted streams -- among other questions -- we would publish a story about the company's lack of answers. It worked.

In a series of emails to The Verge, Anker has finally admitted its Eufy security cameras are not natively end-to-end encrypted -- they can and did produce unencrypted video streams for Eufy's web portal, like the ones we accessed from across the United States using an ordinary media player. But Anker says that's now largely fixed. Every video stream request originating from Eufy's web portal will now be end-to-end encrypted -- like they are with Eufy's app -- and the company says it's updating every single Eufy camera to use WebRTC, which is encrypted by default. Reading between the lines, though, it seems that these cameras could still produce unencrypted footage upon request.

That's not all Anker is disclosing today. The company has apologized for the lack of communication and promised to do better, confirming it's bringing in outside security and penetration testing companies to audit Eufy's practices, is in talks with a "leading and well-known security expert" to produce an independent report, is promising to create an official bug bounty program, and will launch a microsite in February to explain how its security works in more detail. Those independent audits and reports may be critical for Eufy to regain trust because of how the company has handled the findings of security researchers and journalists. It's a little hard to take the company at its word! But we also think Anker Eufy customers, security researchers and journalists deserve to read and weigh those words, particularly after so little initial communication from the company. That's why we're publishing Anker's full responses [here].
As highlighted by Ars Technica, some of the notable statements include:
- Its web portal now prohibits users from entering "debug mode."
- Video stream content is encrypted and inaccessible outside the portal.
- While "only 0.1 percent" of current daily users access the portal, it "had some issues," which have been resolved.
- Eufy is pushing WebRTC to all of its security devices as the end-to-end encrypted stream protocol.
- Facial recognition images were uploaded to the cloud to aid in replacing/resetting/adding doorbells with existing image sets, but this practice has been discontinued. No recognition data was included with images sent to the cloud.
- Outside of the "recent issue with the web portal," all other video uses end-to-end encryption.
- A "leading and well-known security expert" will produce a report about Eufy's systems.
- "Several new security consulting, certification, and penetration testing" firms will be brought in for risk assessment.
- A "Eufy Security bounty program" will be established.
- The company promises to "provide more timely updates in our community (and to the media!)."
