Youtube

YouTube Video Causes Pixel Phones To Instantly Reboot (arstechnica.com) 55

An anonymous reader quotes a report from Ars Technica: Did you ever see that movie The Ring? People who watched a cursed, creepy video would all mysteriously die in seven days. Somehow Google seems to have re-created the tech version of that, where the creepy video is this clip of the 1979 movie Alien, and the thing that dies after watching it is a Google Pixel phone. As noted by the user "OGPixel5" on the Google Pixel subreddit, watching this specific clip on a Google Pixel 6, 6a, or Pixel 7 will cause the phone to instantly reboot. Something about the clip is disagreeable to the phone, and it hard-crashes before it can even load a frame. Some users in the thread say cell service wouldn't work after the reboot, requiring another reboot to get it back up and running.

The leading theory floating around is that something about the format of the video (it's 4K HDR) is causing the phone to crash. It wouldn't be the first time something like this happened to an Android phone. In 2020, there was a cursed wallpaper that would crash a phone when set as the background due to a color space bug. The affected phones all use Google's Exynos-derived Tensor SoC, so don't expect non-Google phones to be affected by this. Samsung Exynos phones would be the next most-likely candidates, but we haven't seen any reports of that.
According to CNET, the issue has been addressed and a full fix will be deployed in March.
Bug

Security Researchers Warn of a 'New Class' of Apple Bugs (techcrunch.com) 30

Since the earliest versions of the iPhone, "The ability to dynamically execute code was nearly completely removed," write security researchers at Trellix, "creating a powerful barrier for exploits which would need to find a way around these mitigations to run a malicious program. As macOS has continually adopted more features of iOS it has also come to enforce code signing more strictly.

"The Trellix Advanced Research Center vulnerability team has discovered a large new class of bugs that allow bypassing code signing to execute arbitrary code in the context of several platform applications, leading to escalation of privileges and sandbox escape on both macOS and iOS.... The vulnerabilities range from medium to high severity with CVSS scores between 5.1 and 7.1. These issues could be used by malicious applications and exploits to gain access to sensitive information such as a user's messages, location data, call history, and photos."

Computer Weekly explains that the vulnerability bypasses strengthened code-signing mitigations put in place by Apple on NSPredicate, a class in its Foundation framework used for filtering and querying data, after the infamous ForcedEntry exploit used by Israeli spyware manufacturer NSO Group: So far, the team has found multiple vulnerabilities within the new class of bugs, the first and most significant of which exists in a process designed to catalogue data about behaviour on Apple devices. If an attacker has achieved code execution capability in a process with the right entitlements, they could then use NSPredicate to execute code with the process's full privilege, gaining access to the victim's data.
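The report doesn't include exploit code, but the underlying risk, a privileged process evaluating expressions handed to it by a less-privileged caller, can be sketched as a rough analogy. The Python toy below is not Apple's NSPredicate API, and its names (handle_query_unsafe, ALLOWED_FIELDS) are invented; it only illustrates why dynamic expression evaluation inside a privileged process is dangerous, and what a structured, whitelisted alternative looks like.

```python
# Toy analogy only (not Apple code): a privileged service that filters
# records using a caller-supplied expression string.

RECORDS = [
    {"sender": "alice", "body": "hello"},
    {"sender": "bob", "body": "secret plans"},
]

def handle_query_unsafe(expression: str):
    # Dangerous: eval() runs arbitrary code with this process's privileges,
    # which is the general shape of the problem Trellix describes.
    return [r for r in RECORDS if eval(expression, {}, {"record": r})]

ALLOWED_FIELDS = {"sender", "body"}

def handle_query_safe(field: str, value: str):
    # Safer: accept only structured, whitelisted query parameters
    # instead of an executable expression.
    if field not in ALLOWED_FIELDS:
        raise ValueError(f"unsupported field: {field}")
    return [r for r in RECORDS if r[field] == value]

if __name__ == "__main__":
    print(handle_query_unsafe("record['sender'] == 'alice'"))  # a benign query...
    # ...but the same entry point will evaluate anything the caller sends.
    print(handle_query_safe("sender", "alice"))
```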

Trellix's Austin Emmitt and his team also found other issues that could enable attackers with appropriate privileges to install arbitrary applications on a victim's device, access and read sensitive information, and even wipe a victim's device. Ultimately, all of the new bugs carry a similar level of impact to ForcedEntry.

Senior vulnerability researcher Austin Emmitt said the vulnerabilities constituted a "significant breach" of the macOS and iOS security models, which rely on individual applications having fine-grain access to the subset of resources needed, and querying services with more privileges to get anything else.

"The key thing here is the vulnerabilities break Apple's security model at a fundamental level," Trellix's director of vulnerability research told Wired — though there's some additional context: Apple has fixed the bugs the company found, and there is no evidence they were exploited.... Crucially, any attacker trying to exploit these bugs would require an initial foothold into someone's device. They would need to have found a way in before being able to abuse the NSPredicate system. (The existence of a vulnerability doesn't mean that it has been exploited.)

Apple patched the NSPredicate vulnerabilities Trellix found in its macOS 13.2 and iOS 16.3 software updates, which were released in January. Apple has also issued CVEs for the vulnerabilities that were discovered: CVE-2023-23530 and CVE-2023-23531. Since Apple addressed these vulnerabilities, it has also released newer versions of macOS and iOS. These included security fixes for a bug that was being exploited on people's devices.

TechCrunch explores its severity: While Trellix has seen no evidence to suggest that these vulnerabilities have been actively exploited, the cybersecurity company tells TechCrunch that its research shows that iOS and macOS are "not inherently more secure" than other operating systems....

Will Strafach, a security researcher and founder of the Guardian firewall app, described the vulnerabilities as "pretty clever," but warned that there is little the average user can do about these threats, "besides staying vigilant about installing security updates." And iOS and macOS security researcher Wojciech Reguła told TechCrunch that while the vulnerabilities could be significant, in the absence of exploits, more details are needed to determine how big this attack surface is.

Jamf's Michael Covington said that Apple's code-signing measures were "never intended to be a silver bullet or a lone solution" for protecting device data. "The vulnerabilities, though noteworthy, show how layered defenses are so critical to maintaining good security posture," Covington said.

Programming

GCC Gets a New Frontend for Rust (fosdem.org) 106

Slashdot reader sleeping cat shares a recent FOSDEM talk by a compiler engineer on the team building Rust-GCC, "an alternative compiler implementation for the Rust programming language."

"If gccrs interprets a program differently from rustc, this is considered a bug," explains the project's FAQ on GitHub.

The FAQ also notes that LLVM's set of compiler technologies — which Rust uses — "is missing some backends that GCC supports, so a gccrs implementation can fill in the gaps for use in embedded development." But the FAQ also highlights another potential benefit: With the recent announcement of Rust being allowed into the Linux Kernel codebase, an interesting security implication has been highlighted by Open Source Security, Inc. When code is compiled and uses Link Time Optimization (LTO), GCC emits GIMPLE [an intermediate representation] directly into a section of each object file, and LLVM does something similar with its own bytecode. When mixing rustc-compiled code and GCC-built code in the Linux kernel, the compilers will be unable to perform a full link-time optimization pass over all of the compiled code, leading to absent CFI (control flow integrity).

If Rust is available in the GNU toolchain, releases of the Linux kernel (for example) can be built with CFI intact using either LLVM or GCC.

Started in 2014 (and revived in 2019), "The effort has been ongoing since 2020...and we've done a lot of effort and a lot of progress," compiler engineer Arthur Cohen says in the talk. "We have upstreamed the first version of gccrs within GCC. So next time when you install GCC 13 — you'll have gccrs in it. You can use it, you can start hacking on it, you can please report issues when it inevitably crashes and dies horribly."

"One big thing we're doing is some work towards running the rustc test suite. Because we want gccrs to be an actual Rust compiler and not a toy project or something that compiles a language that looks like Rust but isn't Rust, we're trying really hard to get that test suite working."

Privacy

Dashlane Publishes Its Source Code To GitHub In Transparency Push (techcrunch.com) 8

Password management company Dashlane has made its mobile app code available on GitHub for public perusal, a first step, it says, in a broader push to make its platform more transparent. TechCrunch reports: The Dashlane Android app code is available now alongside the iOS incarnation, though it also appears to include the codebase for its Apple Watch and Mac apps, even though Dashlane hasn't specifically announced that. The company said that it eventually plans to make the code for its web extension available on GitHub too. Initially, Dashlane said that it was planning to make its codebase "fully open source," but in response to a handful of questions posed by TechCrunch, it appears that won't in fact be the case.

At first, the code will be open for auditing purposes only, but in the future it may start accepting contributions too -- however, there is no suggestion that it will go all-in and allow the public to fork or otherwise re-use the code in their own applications. Dashlane has released the code under a Creative Commons Attribution-NonCommercial 4.0 license, which technically means that users are allowed to copy, share and build upon the codebase so long as it's for non-commercial purposes. However, the company said that it has stripped out some key elements from its release, effectively hamstringing what third-party developers are able to do with the code. [...]

"The main benefit of making this code public is that anyone can audit the code and understand how we build the Dashlane mobile application," the company wrote. "Customers and the curious can also explore the algorithms and logic behind password management software in general. In addition, business customers, or those who may be interested, can better meet compliance requirements by being able to review our code." On top of that, the company says that a benefit of releasing its code is to perhaps draw-in technical talent, who can inspect the code prior to an interview and perhaps share some ideas on how things could be improved. Moreover, so-called "white-hat hackers" will now be better equipped to earn bug bounties. "Transparency and trust are part of our company values, and we strive to reflect those values in everything we do," Dashlane continued. "We hope that being transparent about our code base will increase the trust customers have in our product."

Security

Anker Finally Comes Clean About Its Eufy Security Cameras (theverge.com) 30

An anonymous reader quotes a report from The Verge: First, Anker told us it was impossible. Then, it covered its tracks. It repeatedly deflected while utterly ignoring our emails. So shortly before Christmas, we gave the company an ultimatum: if Anker wouldn't answer why its supposedly always-encrypted Eufy cameras were producing unencrypted streams -- among other questions -- we would publish a story about the company's lack of answers. It worked.

In a series of emails to The Verge, Anker has finally admitted its Eufy security cameras are not natively end-to-end encrypted -- they can and did produce unencrypted video streams for Eufy's web portal, like the ones we accessed from across the United States using an ordinary media player. But Anker says that's now largely fixed. Every video stream request originating from Eufy's web portal will now be end-to-end encrypted -- like they are with Eufy's app -- and the company says it's updating every single Eufy camera to use WebRTC, which is encrypted by default. Reading between the lines, though, it seems that these cameras could still produce unencrypted footage upon request.

That's not all Anker is disclosing today. The company has apologized for the lack of communication and promised to do better, confirming it's bringing in outside security and penetration testing companies to audit Eufy's practices, is in talks with a "leading and well-known security expert" to produce an independent report, is promising to create an official bug bounty program, and will launch a microsite in February to explain how its security works in more detail. Those independent audits and reports may be critical for Eufy to regain trust because of how the company has handled the findings of security researchers and journalists. It's a little hard to take the company at its word! But we also think Anker Eufy customers, security researchers and journalists deserve to read and weigh those words, particularly after so little initial communication from the company. That's why we're publishing Anker's full responses [here].
As highlighted by Ars Technica, some of the notable statements include:
- Its web portal now prohibits users from entering "debug mode."
- Video stream content is encrypted and inaccessible outside the portal.
- While "only 0.1 percent" of current daily users access the portal, it "had some issues," which have been resolved.
- Eufy is pushing WebRTC to all of its security devices as the end-to-end encrypted stream protocol.
- Facial recognition images were uploaded to the cloud to aid in replacing/resetting/adding doorbells with existing image sets, but this practice has been discontinued. No recognition data was included with images sent to the cloud.
- Outside of the "recent issue with the web portal," all other video uses end-to-end encryption.
- A "leading and well-known security expert" will produce a report about Eufy's systems.
- "Several new security consulting, certification, and penetration testing" firms will be brought in for risk assessment.
- A "Eufy Security bounty program" will be established.
- The company promises to "provide more timely updates in our community (and to the media!)."

Google

Google Expands Open Source Bounties, Will Soon Support Javascript Fuzzing Too (zdnet.com) 6

Google has expanded its OSS-Fuzz Reward Program to offer rewards of up to $30,000 for researchers who find security flaws in open source programs. From a report: The expanded scope of the program now means the total rewards possible per project integration rise from $20,000 to $30,000. The purpose of OSS-Fuzz is to help open source projects adopt fuzz testing, and the new reward categories support those who create more ways of integrating new projects.

Google created two new reward categories that reward wider improvements across all OSS-Fuzz projects, offering up to $11,337 per category. It's also offering rewards for notable FuzzBench fuzzer integrations, and for integrating new sanitizers or 'bug detectors' that help find vulnerabilities. "We hope to accelerate the integration of critical open source projects into OSS-Fuzz by providing stronger incentives to security researchers and open source maintainers," explains Oliver Chang of Google's OSS-Fuzz team.
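For readers unfamiliar with what an OSS-Fuzz integration actually involves, the core artifact is a small fuzz harness that feeds mutated inputs to a target function. Below is a minimal sketch for a Python project using Atheris, the coverage-guided Python fuzzing engine OSS-Fuzz supports; the parse_record target and its behavior are invented for illustration.

```python
import sys
import atheris  # Google's coverage-guided fuzzing engine for Python

def parse_record(data: bytes) -> dict:
    """Hypothetical target: parse 'key=value' records (invented for this sketch)."""
    text = data.decode("utf-8", errors="ignore")
    key, sep, value = text.partition("=")
    if not sep:
        raise ValueError("missing '='")
    return {key.strip(): value.strip()}

def TestOneInput(data: bytes) -> None:
    # The fuzzer calls this with mutated inputs; uncaught exceptions
    # and crashes are reported as findings.
    try:
        parse_record(data)
    except ValueError:
        pass  # expected for malformed input, not a bug

if __name__ == "__main__":
    atheris.instrument_all()          # enable coverage guidance
    atheris.Setup(sys.argv, TestOneInput)
    atheris.Fuzz()
```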

Facebook

Hacker Finds Bug That Allowed Anyone To Bypass Facebook 2FA (techcrunch.com) 13

An anonymous reader quotes a report from TechCrunch: A bug in a new centralized system that Meta created for users to manage their logins for Facebook and Instagram could have allowed malicious hackers to switch off an account's two-factor protections just by knowing their phone number. Gtm Manoz, a security researcher from Nepal, realized that Meta did not limit the number of attempts a user could make when entering the two-factor code used to log into their accounts on the new Meta Accounts Center, which helps users link all their Meta accounts, such as Facebook and Instagram.

With a victim's phone number, an attacker would go to the centralized accounts center, enter the phone number of the victim, link that number to their own Facebook account, and then brute force the two-factor SMS code. This was the key step, because there was no upper limit on the number of attempts someone could make. Once the attacker got the code right, the victim's phone number became linked to the attacker's Facebook account. A successful attack would still result in Meta sending a message to the victim, saying their two-factor was disabled because their phone number had been linked to someone else's account.
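The missing control was a cap on verification attempts. A minimal sketch of the kind of server-side rate limit that closes this hole is shown below; it is not Meta's code, and the function names and thresholds are invented.

```python
import hmac
import time
from collections import defaultdict

MAX_ATTEMPTS = 5          # invented threshold for illustration
WINDOW_SECONDS = 15 * 60  # invented lockout window

_recent_attempts = defaultdict(list)  # phone number -> attempt timestamps

def verify_sms_code(phone_number: str, submitted: str, expected: str) -> bool:
    """Reject further guesses once a number exceeds its attempt budget."""
    now = time.time()
    attempts = [t for t in _recent_attempts[phone_number] if now - t < WINDOW_SECONDS]
    if len(attempts) >= MAX_ATTEMPTS:
        raise PermissionError("too many attempts; verification temporarily locked")
    attempts.append(now)
    _recent_attempts[phone_number] = attempts
    # Constant-time comparison avoids leaking how many digits matched.
    return hmac.compare_digest(submitted, expected)
```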

Manoz found the bug in the Meta Accounts Center last year and reported it to the company in mid-September. Meta fixed the bug a few days later and paid Manoz $27,200 for reporting it. Meta spokesperson Gabby Curtis told TechCrunch that at the time of the bug the login system was still in a small public test. Curtis also said that Meta's investigation after the bug was reported found no evidence of exploitation in the wild, and that Meta saw no spike in usage of that particular feature, which suggests no one was abusing it.

AI

OpenAI Hires an Army of Contractors. Will They Make Coding Obsolete? (semafor.com) 110

Last week Microsoft announced 10,000 layoffs — and a multibillion-dollar investment in OpenAI, the company that created ChatGPT.

But OpenAI also released a tool called Codex in August of 2021 "designed to translate natural language into code," reports Semafor. And now OpenAI "has ramped up its hiring around the world, bringing on roughly 1,000 remote contractors over the past six months in regions like Latin America and Eastern Europe, according to people familiar with the matter."

The article points out that roughly 40% of those contractors "are computer programmers who are creating data for OpenAI's models to learn software engineering tasks." "A well-established company, which is determined to provide world-class AI technology to make the world a better and more efficient place, is looking for a Python Developer," reads one OpenAI job listing in Spanish, which was posted by an outsourcing agency....

OpenAI appears to be building a dataset that includes not just lines of code, but also the human explanations behind them written in natural language. A software developer in South America who completed a five-hour unpaid coding test for OpenAI told Semafor he was asked to tackle a series of two-part assignments. First, he was given a coding problem and asked to explain in written English how he would approach it. Then, the developer was asked to provide a solution. If he found a bug, OpenAI told him to detail what the problem was and how it should be corrected, instead of simply fixing it.

"They most likely want to feed this model with a very specific kind of training data, where the human provides a step-by-step layout of their thought-process," said the developer, who asked to remain anonymous to avoid jeopardizing future work opportunities.

AI

What Happens When ChatGPT Can Find Bugs in Computer Code? (pcmag.com) 122

PC Magazine describes a startling discovery by computer science researchers from Johannes Gutenberg University and University College London.

"ChatGPT can weed out errors with sample code and fix it better than existing programs designed to do the same. Researchers gave 40 pieces of buggy code to four different code-fixing systems: ChatGPT, Codex, CoCoNut, and Standard APR. Essentially, they asked ChatGPT: "What's wrong with this code?" and then copy and pasted it into the chat function. On the first pass, ChatGPT performed about as well as the other systems. ChatGPT solved 19 problems, Codex solved 21, CoCoNut solved 19, and standard APR methods figured out seven. The researchers found its answers to be most similar to Codex, which was "not surprising, as ChatGPT and Codex are from the same family of language models."

However, the ability to, well, chat with ChatGPT after receiving the initial answer made the difference, ultimately leading to ChatGPT solving 31 questions, and easily outperforming the others, which provided more static answers. "A powerful advantage of ChatGPT is that we can interact with the system in a dialogue to specify a request in more detail," the researchers' report says. "We see that for most of our requests, ChatGPT asks for more information about the problem and the bug. By providing such hints to ChatGPT, its success rate can be further increased, fixing 31 out of 40 bugs, outperforming state-of-the-art....."
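The benchmark programs themselves aren't reproduced in the article, but the interaction it describes (submit a short buggy function, then supply a hint when the model asks for more detail) can be pictured with an invented example like the one below.

```python
# Invented example of the kind of short, buggy function such studies use.

def binary_search(items, target):
    lo, hi = 0, len(items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if items[mid] == target:
            return mid
        elif items[mid] < target:
            lo = mid          # bug: should be 'lo = mid + 1'; as written the loop can stall
        else:
            hi = mid - 1
    return -1

# A dialogue-style hint such as "the function sometimes never terminates when
# the target is larger than the middle element" is the extra context that,
# per the study, raises the model's fix rate well above a single-shot answer.
```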

Companies that create bug-fixing software — and software engineers themselves — are taking note. However, an obvious barrier to tech companies adopting ChatGPT on a platform like Sentry in its current form is that it's a public database (the last place a company wants its engineers to send coveted intellectual property).

IOS

iOS 16.3 and macOS Ventura 13.2 Add Hardware Security Key Support 17

Apple released iOS and iPadOS 16.3, macOS Ventura 13.2, and watchOS 9.3 today. The updates focus primarily on bug fixes and under-the-hood improvements, but there is one notable addition: Apple ID got support for hardware security keys. From a report: Once they've updated to the new software, a user can opt to make a device like a YubiKey a required part of the two-factor authentication process for their account. It's unlikely most users will take advantage of this, of course, but for a select few, the extra security is welcome. Other additions in iOS 16.3 include support for the upcoming new HomePod model, a tweak to how Emergency SOS calls are made, and a new Black History Month wallpaper. On the Mac side, hardware security key support is joined by the rollout of Rapid Security Response, a means for urgent security updates to be delivered to Macs without issuing a major software update. The watchOS update is oriented around bug fixes.
AI

GitHub Copilot Labs Add Photoshop-Style 'Brushes' for ML-Powered Code Modifying (githubnext.com) 56

"Can editing code feel more tactile, like painting with Photoshop brushes?"

Researchers at GitHub Next asked that question this week — and then supplied the answer. "We added a toolbox of brushes to our Copilot Labs Visual Studio Code extension that can modify your code.... Just select a few lines, choose your brush, and see your code update."

The tool's web page includes interactive before-and-after examples demonstrating:
  • Add Types brush
  • Fix Bugs brush
  • Add Debugging Statements brush
  • Make More Readable brush

And last month Microsoft's principal program manager for browser tools shared an animated GIF showing all the brushes in action.

"In the future, we're interested in adding more useful brushes, as well as letting developers store their own custom brushes," adds this week's announcement. "As we explore enhancing developers' workflows with Machine Learning, we're focused on how to empower developers, instead of automating them. This was one of many explorations we have in the works along those lines."

It's ultimately grafting an incredibly easy interface onto "ML-powered code modification," writes Visual Studio Magazine, noting that "The bug-fixing brush, for example, can fix a simple typo, changing a variable name from the incorrect 'low' to the correct 'lo'....

"All of the above brushes and a few others have been added to the Copilot Labs brushes toolbox, which is available for anyone with a GitHub Copilot license, costing $10 per month or $100 per year.... At the time of this writing, the extension has been installed 131,369 times, earning a perfect 5.0 rating from six reviewers."


Facebook

Meta Abandons Original Quest VR Headset (gizmodo.com) 54

Meta is dropping support for its first Meta Quest VR headset. The device will no longer receive future content updates, and by 2024 it will no longer get any bug fixes or security patches. Gizmodo reports: Notably, users will lose significant functionality. Though Meta promised you will still be able to use the headset and its installed games and apps, Quest 1 users will no longer be able to join parties, and they will also lose access to Meta's Horizon Home feature on March 5 this year. Users will no longer be able to invite others to their homes or travel over to another user's home.

Meta CEO Mark Zuckerberg announced what was originally called the Oculus Quest in 2018 as the premier wireless VR headset. The company released the headset in 2019 (so Meta is a little off in its letter when it said it launched the device "over four years ago"), and this was all before Meta officially renamed the devices and its various services in 2021. So the Quest 1 is working off four-year-old tech, and it makes some sense that Meta would not want to support aging hardware.

Privacy

Researchers Track GPS Location of All of California's New Digital License Plates (vice.com) 53

An anonymous reader quotes a report from Motherboard: A team of security researchers managed to gain "super administrative access" into Reviver, the company behind California's new digital license plates which launched last year. That access allowed them to track the physical GPS location of all Reviver customers and change a section of text at the bottom of the license plate designed for personalized messages to whatever they wished, according to a blog post from the researchers. "An actual attacker could remotely update, track, or delete anyone's REVIVER plate," Sam Curry, a bug bounty hunter, wrote in the blog post. Curry wrote that he and a group of friends started finding vulnerabilities across the automotive industry. That included Reviver.

California launched the option to buy digital license plates in October. Reviver is the sole provider of these plates, and says that the plates are legal to drive nationwide, and "legal to purchase in a growing number of states." [...] In the blog post, Curry writes that the researchers were interested in Reviver because the license plate's features meant it could be used to track vehicles. After digging around the app and then a Reviver website, the researchers found Reviver assigned different roles to user accounts. Those included "CONSUMER" and "CORPORATE." Eventually, the researchers identified a role called "REVIVER" and managed to change their account to it, which in turn granted them access to all sorts of data and capabilities, including tracking the location of vehicles. "We could take any of the normal API calls (viewing vehicle location, updating vehicle plates, adding new users to accounts) and perform the action using our super administrator account with full authorization," Curry writes. "We could additionally access any dealer (e.g. Mercedes-Benz dealerships will often package REVIVER plates) and update the default image used by the dealer when the newly purchased vehicle still had DEALER tags."
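The pattern Curry describes, a role field that the client could effectively set for itself, is a textbook authorization failure. A minimal sketch of the server-side check that prevents it is below; the role names echo the blog post, but the code and data structures are invented and are not Reviver's.

```python
# Invented sketch: the effective role must come from the server's own
# records, never from a role field in the client's request.

ROLE_PERMISSIONS = {
    "CONSUMER":  {"view_own_vehicle"},
    "CORPORATE": {"view_own_vehicle", "manage_fleet"},
    "REVIVER":   {"view_own_vehicle", "manage_fleet", "admin_all_plates"},
}

# Server-side source of truth (a database lookup in a real system).
ACCOUNT_ROLES = {"account-123": "CONSUMER"}

def authorize(account_id: str, action: str) -> None:
    # Ignore any role claimed in the request body; look it up server-side.
    role = ACCOUNT_ROLES.get(account_id, "CONSUMER")
    if action not in ROLE_PERMISSIONS[role]:
        raise PermissionError(f"role {role} may not perform {action}")

# authorize("account-123", "admin_all_plates") raises PermissionError.
```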
Reviver told Motherboard in a statement that it patched the issues identified by the researchers. "We are proud of our team's quick response, which patched our application in under 24 hours and took further measures to prevent this from occurring in the future. Our investigation confirmed that this potential vulnerability has not been misused. Customer information has not been affected, and there is no evidence of ongoing risk related to this report. As part of our commitment to data security and privacy, we also used this opportunity to identify and implement additional safeguards to supplement our existing, significant protections," the statement read.

"Cybersecurity is central to our mission to modernize the driving experience and we will continue to work with industry-leading professionals, tools, and systems to build and monitor our secure platforms for connected vehicles," it added.
United States

US Supreme Court Lets Meta's WhatsApp Pursue 'Pegasus' Spyware Suit (reuters.com) 13

The U.S. Supreme Court on Monday let Meta's WhatsApp pursue a lawsuit accusing Israel's NSO Group of exploiting a bug in its WhatsApp messaging app to install spy software allowing the surveillance of 1,400 people, including journalists, human rights activists and dissidents. From a report: The justices turned away NSO's appeal of a lower court's decision that the lawsuit could move forward. NSO has argued that it is immune from being sued because it was acting as an agent for unidentified foreign governments when it installed the "Pegasus" spyware.

President Joe Biden's administration had urged the justices to reject NSO's appeal, noting that the U.S. State Department had never before recognized a private entity acting as an agent of a foreign state as being entitled to immunity. WhatsApp in 2019 sued NSO seeking an injunction and damages, accusing it of accessing WhatsApp servers without permission six months earlier to install the Pegasus software on victims' mobile devices.

Privacy

CES's 'Worst in Show' Criticized Over Privacy, Security, and Environmental Threats (youtube.com) 74

"We are seeing, across the gamut, products that impact our privacy, products that create cybersecurity risks, that have overarchingly long-term environmental impacts, disposable products, and flat-out just things that maybe should not exist."

That's the CEO of the how-to repair site iFixit, introducing their third annual "Worst in Show" ceremony for the products displayed at this year's CES. But the show's slogan promises it's also "calling out the most troubling trends in tech." For example, the EFF's executive director started with two warnings. First, "If it's communicating with your phone, it's generally communicating to the cloud too." But more importantly, if a product is gathering data about you and communicating with the cloud, "you have to ask yourself: is this company selling something to me, or are they selling me to other people? And this year, as in many past years at CES, it's almost impossible to tell from the products and the advertising copy around them! They're just not telling you what their actual business model is, and because of that — you don't know what's going on with your privacy."

After warning about the specific privacy implications of a urine-analyzing add-on for smart toilets, they noted there was a close runner-up for the worst privacy offender: the increasing number of scam products that "are basically based on the digital version of phrenology, like trying to predict your emotions based upon reading your face or other things like that. There's a whole other category of things that claim to do things that they cannot remotely do."

To judge the worst in show by environmental impact, Consumer Reports sent the Associate Director for their Product Sustainability, Research and Testing team, who chose the 55-inch portable "Displace TV" for being powered only by four lithium-ion batteries (rather than, say, a traditional power cord).

And the "worst in show" award for repairability went to the Ember Mug 2+ — a $200 travel mug "with electronics and a battery inside...designed to keep your coffee hot." Kyle Wiens, iFixit's CEO, first noted it was a product which "does not need to exist" in a world which already has equally effective double-insulated, vaccuum-insulated mugs and Thermoses. But even worse: it's battery powered, and (at least in earlier versions) that battery can't be easily removed! (If you email the company asking for support on replacing the battery, Wiens claims that "they will give you a coupon on a new, disposable coffee mug. So this is the kind of product that should not exist, doesn't need to exist, and is doing active harm to the world.

"The interesting thing is people care so much about their $200 coffee mug, the new feature is 'Find My iPhone' support. So not only is it harming the environment, it's also spying on where you're located!"

The founder of SecuRepairs.org first warned about "the vast ecosystem of smart, connected products that are running really low-quality, vulnerable software that make our persons and our homes and businesses easy targets for hackers." But for the worst in show for cybersecurity award, they then chose Roku's new Smart TV, partly because smart TVs in general "are a problematic category when it comes to cybersecurity, because they're basically surveillance devices, and they're not created with security in mind." And partly because to this day it's hard to tell if Roku has fixed or even acknowledged its past vulnerabilities — and hasn't implemented a prominent bug bounty program. "They're not alone in this. This is a problem that affects electronics makers of all different shapes and sizes at CES, and it's something that as a society, we just need to start paying a lot more attention to."

And US PIRG's "Right to Repair" campaign director gave the "Who Asked For This" award to Neutrogena's "SkinStacks" 3D printer for edible skin-nutrient gummies — which are personalized after phone-based face scans. ("Why just sell vitamins when you could also add in proprietary refills and biometric data harvesting.")
Security

NetGear Warns Users To Patch Recently Fixed Wi-Fi Router Bug (bleepingcomputer.com) 7

Netgear has fixed a high-severity vulnerability affecting multiple WiFi router models and advised customers to update their devices to the latest available firmware as soon as possible. BleepingComputer reports: The flaw impacts multiple Wireless AC Nighthawk, Wireless AX Nighthawk (WiFi 6), and Wireless AC router models. Although Netgear did not disclose any information about the component affected by this bug or its impact, it did say that it is a pre-authentication buffer overflow vulnerability. The impact of a successful buffer overflow exploit can range from denial of service (crashes) to arbitrary code execution. Attackers can exploit this flaw in low-complexity attacks without requiring permissions or user interaction. In a security advisory published on Wednesday, Netgear said it "strongly recommends that you download the latest firmware as soon as possible." A list of vulnerable routers and the patched firmware versions can be found here.
Windows

Windows 95 Went the Extra Mile To Ensure Compatibility of SimCity, Other Games (arstechnica.com) 53

It's still possible to learn a lot of interesting things about old operating systems. Sometimes those things were documented, or at least hinted at, in blog posts that miraculously still exist. One such quirk showed up recently when someone noticed how Microsoft made sure that SimCity and other popular apps worked on Windows 95. From a report: A recent tweet by @Kalyoshika highlights an excerpt from a blog post by Fog Creek Software co-founder, Stack Overflow co-creator, and longtime software blogger Joel Spolsky. The larger post is about chicken-and-egg OS/software appeal and demand. The part that caught the eye of a Hardcore Gaming 101 podcast co-host is how the Windows 3.1 version of SimCity worked on the Windows 95 system. Windows 95 merged MS-DOS and Windows apps, upgraded APIs from 16 to 32-bit, and was hyper-marketed. A popular app like SimCity, which sold more than 5 million copies, needed to work without a hitch. Spolsky's post summarizes how SimCity became Windows 95-ready, as he heard it, without input from Maxis or user workarounds.

Jon Ross, who wrote the original version of SimCity for Windows 3.x, told me that he accidentally left a bug in SimCity where he read memory that he had just freed. Yep. It worked fine on Windows 3.x, because the memory never went anywhere. Here's the amazing part: On beta versions of Windows 95, SimCity wasn't working in testing. Microsoft tracked down the bug and added specific code to Windows 95 that looks for SimCity. If it finds SimCity running, it runs the memory allocator in a special mode that doesn't free memory right away. That's the kind of obsession with backward compatibility that made people willing to upgrade to Windows 95.
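Spolsky's anecdote amounts to an application-specific allocator mode that defers frees. The Python toy below is nothing like the real Windows 95 heap (which lives in the OS and is written in C); it only sketches the deferred-free idea and the per-application switch he describes, with invented names.

```python
class CompatHeap:
    """Toy model of a heap with a 'defer frees' compatibility mode (not Windows code)."""

    def __init__(self, defer_frees: bool):
        self.defer_frees = defer_frees
        self.blocks = {}       # handle -> bytearray still considered live
        self.deferred = []     # handles the app freed but we keep readable
        self._next = 1

    def alloc(self, size: int) -> int:
        handle, self._next = self._next, self._next + 1
        self.blocks[handle] = bytearray(size)
        return handle

    def free(self, handle: int) -> None:
        if self.defer_frees:
            # Compatibility mode: keep the contents around so an app that
            # reads memory it just freed (as SimCity did) still sees old data.
            self.deferred.append(handle)
        else:
            del self.blocks[handle]

    def read(self, handle: int) -> bytearray:
        return self.blocks[handle]   # raises KeyError once truly freed


def heap_for(app_name: str) -> CompatHeap:
    # Hypothetical shim table keyed on the executable name.
    legacy_apps = {"SIMCITY.EXE"}
    return CompatHeap(defer_frees=app_name.upper() in legacy_apps)
```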

Spolsky (in 2000) considers this a credit to Microsoft and an example of how to break the chicken-and-egg problem: "provide a backwards compatibility mode which either delivers a truckload of chickens, or a truckload of eggs, depending on how you look at it, and sit back and rake in the bucks."

Firefox

Mozilla Just Fixed an 18-Year-Old Firefox Bug (howtogeek.com) 61

Mozilla recently fixed a bug that was first reported 18 years ago in Firefox 1.0, reports How-To Geek: Bug 290125 was first reported on April 12, 2005, only a few days before the release of Firefox 1.0.3, and outlined an issue with how Firefox rendered text with the ::first-letter CSS pseudo-element. The author said, "when floating left a :first-letter (to produce a dropcap), Gecko ignores any declared line-height and inherits the line-height of the parent box. [...] Both Opera 7.5+ and Safari 1.0+ correctly handle this."

The initial problem was that the Mac version of Firefox handled line heights differently than Firefox on other platforms, which was fixed in time for Firefox 3.0 in 2007. The issue was then re-opened in 2014, when it was decided in a CSS Working Group meeting that Firefox's special handling of line heights didn't meet CSS specifications and was causing compatibility problems. It led to some sites with a large first letter in blocks of text, like The Verge and The Guardian, rendering incorrectly in Firefox compared to other browsers.

The issue was still marked as low priority, so progress continued slowly, until it was finally marked as fixed on December 20, 2022. Firefox 110 should include the updated code, which is expected to roll out to everyone in February 2023.

Bug

Linux Kernel Security Bug Allows Remote Code Execution for Authenticated Remote Users (zdnet.com) 51

The Zero Day Initiative, a zero-day security research firm, announced a new Linux kernel security bug that allows authenticated remote users to disclose sensitive information and run code on vulnerable Linux kernel versions. ZDNet reports: Originally, the Zero Day Initiative (ZDI) rated it a perfect 10 on the 0-to-10 Common Vulnerability Scoring System (CVSS) scale. Now, the hole's "only" a 9.6....

The problem lies in the Linux 5.15 in-kernel Server Message Block (SMB) server, ksmbd. The specific flaw exists within the processing of SMB2_TREE_DISCONNECT commands. The issue results from the lack of validating the existence of an object prior to performing operations on the object. An attacker can leverage this vulnerability to execute code in the kernel context. This new program, which was introduced to the kernel in 2021, was developed by Samsung. Its point was to deliver speedy SMB3 file-serving performance....
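The flaw ZDI describes, operating on an object without first checking that it exists, is a bug class that can be sketched outside the kernel. The Python pseudo-handler below uses invented names and is not the real ksmbd code (which is C inside the kernel); it only shows where the missing validation belongs.

```python
# Invented, userspace sketch of the bug class ZDI describes.

class Tree:
    """Stand-in for a connected SMB share."""
    def close(self) -> None:
        pass

class Session:
    def __init__(self):
        self.trees = {}   # tree_id -> Tree

def handle_tree_disconnect(session: Session, tree_id: int) -> str:
    tree = session.trees.get(tree_id)
    if tree is None:
        # The missing check: refuse to operate on an object that doesn't
        # exist (or was already torn down) instead of dereferencing it.
        return "STATUS_NETWORK_NAME_DELETED"
    tree.close()
    del session.trees[tree_id]
    return "STATUS_SUCCESS"
```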

Any distro using the Linux kernel 5.15 or above is potentially vulnerable. This includes Ubuntu 22.04, and its descendants; Deepin Linux 20.3; and Slackware 15.

Bug

Patched Windows Bug Was Actually a Dangerous Wormable Code-Execution Vulnerability (arstechnica.com) 20

Ars Technica reports on a dangerously "wormable" Windows vulnerability that allowed attackers to execute malicious code with no authentication required — a vulnerability that was present "in a much broader range of network protocols, giving attackers more flexibility than they had when exploiting the older vulnerability." Microsoft fixed CVE-2022-37958 in September during its monthly Patch Tuesday rollout of security fixes. At the time, however, Microsoft researchers believed the vulnerability allowed only the disclosure of potentially sensitive information. As such, Microsoft gave the vulnerability a designation of "important." In the routine course of analyzing vulnerabilities after they're patched, IBM security researcher Valentina Palmiotti discovered it allowed for remote code execution in much the way EternalBlue did [the flaw used to detonate WannaCry]. Last week, Microsoft revised the designation to critical and gave it a severity rating of 8.1, the same given to EternalBlue....

One potentially mitigating factor is that a patch for CVE-2022-37958 has been available for three months. EternalBlue, by contrast, was initially exploited by the NSA as a zero-day. The NSA's highly weaponized exploit was then released into the wild by a mysterious group calling itself Shadow Brokers. The leak, one of the worst in the history of the NSA, gave hackers around the world access to a potent nation-state-grade exploit. Palmiotti said there's reason for optimism but also for risk: "While EternalBlue was an 0-Day, luckily this is an N-Day with a 3 month patching lead time," said Palmiotti.

There's still some risk, Palmiotti tells Ars Technica. "As we've seen with other major vulnerabilities over the years, such as MS17-010 which was exploited with EternalBlue, some organizations have been slow deploying patches for several months or lack an accurate inventory of systems exposed to the internet and miss patching systems altogether."

Thanks to Slashdot reader joshuark for sharing the article.
