Wyze Cam Security Flaw Gave Hackers Access To Video; Went Unfixed For Almost Three Years (9to5mac.com) 24
Bleeping Computer reports: "A Wyze Cam internet camera vulnerability allows unauthenticated, remote access to videos and images stored on local memory cards and has remained unfixed for almost three years. The bug, which has not been assigned a CVE ID, allowed remote users to access the contents of the SD card in the camera via a webserver listening on port 80 without requiring authentication. Upon inserting an SD card on the Wyze Cam IoT, a symlink to it is automatically created in the www directory, which is served by the webserver but without any access restrictions."
And as if that weren't bad enough, it gets worse. Many people re-use existing SD cards they have lying around, some of which still have private data on them, especially photos. The flaw gave access to all data on the card, not just files created by the camera. Finally, the AES encryption key is also stored on the card, potentially giving an attacker live access to the camera feed. Altogether, Bitdefender security researchers advised the company of three vulnerabilities. It took Wyze six months to fix one, 21 months to fix another, and just under two years to patch the SD card flaw. The v1 camera still hasn't been patched, and since the company announced last year that it has reached end-of-life status, it appears it never will.
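The core mistake here is easy to reproduce: a stock file server pointed at a web root will happily follow a symlink out of that root. The sketch below (hypothetical paths and names, not Wyze's actual layout) builds a web root whose only content is a symlink to an "SD card" directory, then fetches the card's private file over HTTP with no credentials, since Go's `http.Dir` follows symlinks by default:

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"net/http/httptest"
	"os"
	"path/filepath"
)

// fetchViaSymlink demonstrates the flaw pattern: a symlink inside the
// served directory exposes everything it points at, with no auth check.
func fetchViaSymlink() (string, error) {
	tmp, err := os.MkdirTemp("", "symlink-demo")
	if err != nil {
		return "", err
	}
	defer os.RemoveAll(tmp)

	// Private data on the "card", outside the web root.
	sdcard := filepath.Join(tmp, "sdcard")
	os.Mkdir(sdcard, 0o755)
	os.WriteFile(filepath.Join(sdcard, "private.jpg"), []byte("family photo"), 0o644)

	// Web root whose only content is a symlink to the card.
	www := filepath.Join(tmp, "www")
	os.Mkdir(www, 0o755)
	if err := os.Symlink(sdcard, filepath.Join(www, "sdcard")); err != nil {
		return "", err
	}

	// http.Dir follows symlinks, so the card is now reachable over HTTP.
	srv := httptest.NewServer(http.FileServer(http.Dir(www)))
	defer srv.Close()

	resp, err := http.Get(srv.URL + "/sdcard/private.jpg")
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	return string(body), err
}

func main() {
	got, err := fetchViaSymlink()
	fmt.Println(got, err) // the "private" file is served to an unauthenticated client
}
```

Go's own documentation for `http.Dir` warns about exactly this: it will follow symlinks pointing out of the directory tree, which is why serving a directory that auto-gains symlinks (as the camera's www directory did) is dangerous.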
Log4Shell Exploited To Infect VMware Horizon Servers With Backdoors, Crypto Miners (zdnet.com) 10
According to Sophos, the latest Log4Shell attacks target unpatched VMware Horizon servers with three different backdoors and four cryptocurrency miners. The attackers behind the campaign are leveraging the bug to obtain access to vulnerable servers. Once they have infiltrated the system, Atera agent or Splashtop Streamer, two legitimate remote monitoring software packages, may be installed, with their purpose twisted into becoming backdoor surveillance tools.
The other backdoor detected by Sophos is Sliver, an open source offensive security implant released for use by pen testers and red teams. Sophos says that four miners are linked to this wave of attacks: z0Miner, JavaX miner, Jin, and Mimu, which mine for Monero (XMR). Previously, Trend Micro found z0Miner operators were exploiting the Atlassian Confluence RCE (CVE-2021-26084) for cryptojacking attacks. A PowerShell URL connected to both campaigns suggests there may also be a link, although that is uncertain. [...] In addition, the researchers uncovered evidence of reverse shell deployment designed to collect device and backup information.
'Biggest Change Ever' to Go Brings Generics, Native Fuzzing, and a Performance Boost (go.dev) 35
It's part of what Go's development team is calling the "biggest change ever to the language".
SiliconANGLE writes that "Right out of the gate, Go 1.18 is getting a CPU speed performance boost of up to 20% for Apple M1, ARM64 and PowerPC64 chips. This is all from an expansion of Go 1.17's calling conventions for the application binary interface on these processor architectures."
And Go 1.18 also introduces native support for fuzz testing — making it the first major programming language to do so, writes ZDNet: As Google explains, fuzz testing or 'fuzzing' is a means of testing the vulnerability of a piece of software by throwing arbitrary or invalid data at it to expose bugs and unknown errors. This adds an additional layer of security to Go's code that will keep it protected as its functionality evolves — crucial as attacks on software continue to escalate both in frequency and complexity. "At Google we are committed to securing the online infrastructure and applications the world depends upon," said Eric Brewer, VP of infrastructure at Google....
While other languages support fuzzing, Go is the first major programming language to incorporate it into its core toolchain, meaning — unlike other languages — third-party support integrations aren't required.
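Because fuzzing lives in the core toolchain, a fuzz target is just another function in a test file. A minimal sketch of the Go 1.18 shape (the function under test, `ParseDigit`, is an illustrative example, not from the Go release): placed in a `_test.go` file, `go test -fuzz=FuzzParseDigit` feeds the target arbitrary bytes, growing the seed corpus added with `f.Add`:

```go
package main

import (
	"fmt"
	"testing"
)

// ParseDigit is the function under test: it converts one ASCII digit,
// reporting whether the byte was a digit at all.
func ParseDigit(b byte) (int, bool) {
	if b < '0' || b > '9' {
		return 0, false
	}
	return int(b - '0'), true
}

// FuzzParseDigit has the shape of a native Go 1.18 fuzz target.
// The fuzzing engine mutates the inputs and reports any input that
// makes the property below fail.
func FuzzParseDigit(f *testing.F) {
	f.Add(byte('7')) // seed corpus entry
	f.Fuzz(func(t *testing.T, b byte) {
		n, ok := ParseDigit(b)
		if ok && (n < 0 || n > 9) {
			t.Errorf("ParseDigit(%q) = %d, out of range", b, n)
		}
	})
}

func main() {
	n, ok := ParseDigit('7')
	fmt.Println(n, ok) // 7 true
}
```

Failing inputs are saved to the package's `testdata/fuzz` corpus so regressions are replayed automatically on plain `go test` runs.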
Google is emphasizing Go's security features — and its widespread adoption. ZDNet writes: Google created Go in 2007; the language was designed specifically to help software engineers build secure, open-source enterprise applications for modern, multi-core computing systems. More than three-quarters of Cloud Native Computing Foundation projects, including Kubernetes and Istio, are written in Go, says Google. [Also Docker and etcd.] According to data from Stack Overflow, some 10% of developers are writing in Go worldwide, and there are signs that more recruiters are seeking out Go coders in their search for tech talent.... "Although we have a dedicated Go team at Google, we welcome a significant amount of contributions from our community. It's a shared effort, and with their updates we're helping our community achieve Go's long-term vision."
Or, as the Go blog says: We want to thank every Go user who filed a bug, sent in a change, wrote a tutorial, or helped in any way to make Go 1.18 a reality. We couldn't do it without you. Thank you.
Enjoy Go 1.18!
* Supporting generics "includes major — but fully backward-compatible — changes to the language," explains the release notes. Although it adds a few cautionary notes: These new language changes required a large amount of new code that has not had significant testing in production settings. That will only happen as more people write and use generic code. We believe that this feature is well implemented and high quality. However, unlike most aspects of Go, we can't back up that belief with real world experience. Therefore, while we encourage the use of generics where it makes sense, please use appropriate caution when deploying generic code in production.
While we believe that the new language features are well designed and clearly specified, it is possible that we have made mistakes.... it is possible that there will be code using generics that will work with the 1.18 release but break in later releases. We do not plan or expect to make any such change. However, breaking 1.18 programs in future releases may become necessary for reasons that we cannot today foresee. We will minimize any such breakage as much as possible, but we can't guarantee that the breakage will be zero.
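For readers who haven't seen the new syntax, a minimal sketch of what 1.18 generics look like (the `Number` constraint and `Sum` function are illustrative, not from the release notes): a type parameter in square brackets, constrained by an interface that lists a set of types.

```go
package main

import "fmt"

// Number is a constraint: any type whose underlying type is in this
// set may instantiate Sum.
type Number interface {
	~int | ~int64 | ~float64
}

// Sum works for any slice of Number -- one definition, no reflection,
// no hand-written per-type copies.
func Sum[T Number](xs []T) T {
	var total T
	for _, x := range xs {
		total += x
	}
	return total
}

func main() {
	fmt.Println(Sum([]int{1, 2, 3}))      // 6
	fmt.Println(Sum([]float64{1.5, 2.5})) // 4
}
```

The compiler checks each instantiation against the constraint at compile time, which is why this change is backward compatible: code that never uses square-bracket type parameters is unaffected.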
Developers Debate Denying Updates for Open Source Software to Russia (thenewstack.io) 95
Over the last month, this topic has again become a focus of debate as Russia's invasion of Ukraine has led to developers calling for blanket bans by companies like GitHub and GitLab; and to some developers even taking action. Earlier this month, we wrote about how open source gateway Scarf began limiting access to open source packages for the Russian government and military entities, via its gateway.
As we noted at the time, there was a primary distinction made when Scarf took this action: distribution of open source software is separate from the licensing of it. Those points of the OSI definition pertain to the licensing, not to some entity actively providing the software to others.
Since then, discussions around these ideas have continued, and this week an essay by Bradley M. Kuhn, a policy fellow and hacker-in-residence at the Software Freedom Conservancy, argues that copyleft won't solve all problems, just some of them.
The essay specifically takes to task the idea that open source software can effectively effect change by way of licensing limitations. He spent nearly 3,000 words on the topic, before pointedly addressing the issue of Russia — with a similar conclusion to the one reached by Scarf earlier this month. Kuhn argues that "FOSS licenses are not an effective tool to advance social justice causes other than software freedom" and that, instead, developers have a moral obligation to take stances by way of other methods.
"For example, FOSS developers should refuse to work specifically on bug reports from companies who don't pay their workers a living wage," Kuhn offers in an example.
Regarding Russia specifically, Kuhn again points to distribution as an avenue of protest, while still remaining in line with the principles of free and open source software.
"Every FOSS license in existence permits capricious distribution; software freedom guarantees the right to refuse to distribute new versions of the software. (i.e., Copyleft does not require that you publish all your software on the Internet for everyone, or that you give equal access to everyone — rather, it merely requires that those whom you chose to give legitimate access to the software also receive CCS). FOSS projects should thus avoid providing Putin easy access to updates to their FOSS," writes Kuhn.
Linux For M1 Macs? First Alpha Release Announced for Asahi Linux (asahilinux.org) 108
And now that first Asahi Linux alpha release is out — ready for testing on M1, M1 Pro, and M1 Max machines (except Mac Studio): We're really excited to finally take this step and start bringing Linux on Apple Silicon to everyone. This is only the beginning, and things will move even more quickly going forward!
Keep in mind that this is still a very early, alpha release. It is intended for developers and power users; if you decide to install it, we hope you will be able to help us out by filing detailed bug reports and helping debug issues. That said, we welcome everyone to give it a try — just expect things to be a bit rough.... Asahi Linux is developed by a group of volunteers, and led by marcan as his primary job. You can support him directly via Patreon and GitHub Sponsors....
Can I dual-boot macOS and Linux?
Yes! In fact, we expect you to do that, and the installer doesn't support replacing macOS at this point. This is because we have no mechanism for updating system firmware from Linux yet, and until we do it makes sense to keep a macOS install lying around for that. You can have as many macOS and Linux installs as you want, and they will all play nicely and show up in Apple's boot picker. Each Linux install acts as a self-contained OS and should not interfere with the others.
Note that keeping a macOS install around does mean you lose ~70GB of disk space (in order to allow for updates, since the macOS updater is quite inefficient). In the future we expect to have a mechanism for firmware updates from Linux and better integration, at which point we'll be comfortable recommending Linux-only setups....
Is this just Arch Linux ARM?
Pretty much! Most of our work is in the kernel and a few core support packages, and we rely on Linux's excellent existing ARM64 support. The Asahi Linux reference distro images are based off of Arch Linux ARM and simply add our own package repository, which only adds a few packages. You can freely convert between Arch Linux ARM and Asahi Linux by adding or removing this repository and the relevant packages, although vanilla Arch Linux ARM kernels will not boot on these machines at this time.
The project's home page adds that "All contributors are welcome, of any skill level!"
"Doing this requires a tremendous amount of work, as Apple Silicon is an entirely undocumented platform," the team explains. "In particular, we will be reverse engineering the Apple GPU architecture and developing an open-source driver for it." But they're already documenting the Apple Silicon platform on their GitHub wiki. We will eventually release a remix of Arch Linux ARM, packaged for installation by end-users, as a distribution of the same name. The majority of the work resides in hardware support, drivers, and tools, and it will be upstreamed to the relevant projects....
Apple allows booting unsigned/custom kernels on Apple Silicon Macs without a jailbreak! This isn't a hack or an omission, but an actual feature that Apple built into these devices. That means that, unlike iOS devices, Apple does not intend to lock down what OS you can use on Macs (though they probably won't help with the development). As long as no code is taken from macOS to build the Linux support, the result is completely legal to distribute and for end-users to use, as it would not be a derivative work of macOS.
An interesting observation from Slashdot reader mrwireless: It once again seems Apple is informally supportive of these efforts, as the recent release of macOS Monterey 12.3 makes the process even simpler. As Twitter user Matthew Garrett writes:
"People who hate UEFI should read https://github.com/AsahiLinux/... — Apple made deliberate design choices that allow third party OSes to run on M1 hardware without compromising security, and with much less closed code than on basically any modern x86."
Nasty Linux Netfilter Firewall Security Hole Found (zdnet.com) 53
This vulnerability is present in the Linux kernel versions 5.4 through 5.6.10. It's listed as Common Vulnerabilities and Exposures (CVE-2022-25636), and with a Common Vulnerability Scoring System (CVSS) score of 7.8, this is a real baddie. How bad? In its advisory, Red Hat said, "This flaw allows a local attacker with a user account on the system to gain access to out-of-bounds memory, leading to a system crash or a privilege escalation threat." So, yes, this is bad. Worse still, it affects recent major distribution releases such as Red Hat Enterprise Linux (RHEL) 8.x, Debian Bullseye, Ubuntu Linux, and SUSE Linux Enterprise 15.3. While the Linux kernel netfilter patch has been made, the patch isn't available yet in all distribution releases.
Ukraine Ethical Hackers Bewildered as HackerOne Bug Bounty Platform Said To Halt Their Payouts (gadgets360.com) 28
Earlier this month, HackerOne CEO Marten Mickos had announced, "[A]s we work to comply with the new sanctions, we'll withdraw all programmes for customers based in Russia, Belarus, and the occupied areas of Ukraine." On Monday, he clarified that the restrictions applied to the sanctioned regions (Russia and Belarus), without giving any clear details about the status of Ukraine. "That's a really weird situation," said independent security researcher Bob Diachenko, who has been associated with the San Francisco, California-based platform for the last two to three years. The security researcher tweeted on Sunday that HackerOne stopped paying bounties worth around $3,000 for the flaws he reported. Alongside stopping payouts, HackerOne has removed its 'Clear' status from all Ukraine accounts. The status essentially allows ethical hackers to participate in private programmes run by various companies to earn a minimum of $2,000 for a high-severity vulnerability or $5,000 for a critical one. It requires a background check for researchers to participate in the listed programmes.
Intel Finds Bug In AMD's Spectre Mitigation, AMD Issues Fix (tomshardware.com) 44
"One of the patches that AMD has used to fix the Spectre vulnerabilities has been broken since 2018." Intel's security team, STORM, found the issue with AMD's mitigation. In response, AMD has issued a security bulletin and updated its guidance to recommend using an alternative method to mitigate the Spectre vulnerabilities, thus repairing the issue anew....
Intel's research into AMD's Spectre fix begins in a roundabout way — Intel's processors were recently found to still be susceptible to Spectre v2-based attacks via a new Branch History Injection variant, this despite the company's use of the Enhanced Indirect Branch Restricted Speculation (eIBRS) and/or Retpoline mitigations that were thought to prevent further attacks. In need of a newer Spectre mitigation approach to patch the far-flung issue, Intel turned to studying alternative mitigation techniques. There are several other options, but all entail varying levels of performance tradeoffs. Intel says its ecosystem partners asked the company to consider using AMD's LFENCE/JMP technique. The "LFENCE/JMP" mitigation is a Retpoline alternative commonly referred to as "AMD's Retpoline."
As a result of Intel's investigation, the company discovered that the mitigation AMD has used since 2018 to patch the Spectre vulnerabilities isn't sufficient — the chips are still vulnerable. The issue impacts nearly every modern AMD processor spanning almost the entire Ryzen family for desktop PCs and laptops (second-gen to current-gen) and the EPYC family of datacenter chips....
In response to the STORM team's discovery and paper, AMD issued a security bulletin (AMD-SB-1026) that states it isn't aware of any currently active exploits using the method described in the paper. AMD also instructs its customers to switch to using "one of the other published mitigations (V2-1 aka 'generic retpoline' or V2-4 aka 'IBRS')." The company also published updated Spectre mitigation guidance reflecting those changes [PDF]....
AMD's security bulletin thanks Intel's STORM team by name and notes that Intel engaged in coordinated vulnerability disclosure, allowing AMD enough time to address the issue before it was made known to the public.
Thanks to Slashdot reader Hmmmmmm for submitting the story...
Linux Has Been Bitten By Its Most High-Severity Vulnerability in Years (arstechnica.com) 110
The name Dirty Pipe is meant to both signal similarities to Dirty Cow and provide clues about the new vulnerability's origins. "Pipe" refers to a pipeline, a Linux mechanism for one OS process to send data to another process. In essence, a pipeline is two or more processes that are chained together so that the output text of one process (stdout) is passed directly as input (stdin) to the next one. Tracked as CVE-2022-0847, the vulnerability came to light when a researcher for website builder CM4all was troubleshooting a series of corrupted files that kept appearing on a customer's Linux machine. After months of analysis, the researcher finally found that the customer's corrupted files were the result of a bug in the Linux kernel.
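To make the mechanism concrete, here is a small sketch of a kernel pipe in Go: data written on one end appears on the other, which is exactly what the shell wires up for `producer | consumer`. The kernel buffers those bytes in page-sized chunks, and Dirty Pipe abused a flaw in how those buffer pages could end up referencing read-only file cache pages; this sketch shows only the normal, intended behavior.

```go
package main

import (
	"fmt"
	"io"
	"os"
)

// throughPipe pushes msg through a real kernel pipe. The writer side
// stands in for the first process's stdout, the reader side for the
// next process's stdin.
func throughPipe(msg string) (string, error) {
	r, w, err := os.Pipe()
	if err != nil {
		return "", err
	}
	go func() {
		w.Write([]byte(msg)) // "stdout" of the producer
		w.Close()            // EOF for the reader
	}()
	out, err := io.ReadAll(r) // "stdin" of the consumer
	r.Close()
	return string(out), err
}

func main() {
	out, _ := throughPipe("hello from stdout")
	fmt.Println(out) // hello from stdout
}
```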
Millions of Palm-Sized, Flying Spiders Could Invade the East Coast (scientificamerican.com) 53
Common to China, Taiwan, Japan and Korea, the Joro spider is part of a group of spiders known as "orb weavers" because of their highly symmetrical, circular webs. The spider gets its name from Jorōgumo, a Japanese spirit, or yōkai, that is said to disguise itself as a beautiful woman to prey upon gullible men. True to its mythical reputation, the Joro spider is stunning to look at, with a large, round, jet-black body cut across with bright yellow stripes, and flecked on its underside with intense red markings. But despite its threatening appearance and its fearsome standing in folklore, the Joro spider's bite is rarely strong enough to break through the skin, and its venom poses no threat to humans, dogs or cats unless they are allergic. That's perhaps good news, as the spiders are destined to spread far and wide across the continental U.S., researchers say.
The scientists came to this conclusion after comparing the Joro spider to a close cousin, the golden silk spider, which migrated from tropical climates 160 years ago to establish an eight-legged foothold in the southern United States. By tracking the spiders' locations in the wild and monitoring their vitals as they subjected caught specimens to freezing temperatures, the researchers found that the Joro spider has about double the metabolic rate of its cousin, along with a 77% higher heart rate and a much better survival rate in cold temperatures. Additionally, Joro spiders exist in most parts of their native Japan -- warm and cold -- which has a very similar climate to the U.S. and sits across roughly the same latitude. [...] While most invasive species tend to destabilize the ecosystems they colonize, entomologists are so far optimistic that the Joro spider could actually be beneficial, especially in Georgia where, instead of lovesick men, they kill off mosquitos, biting flies and another invasive species -- the brown marmorated stink bug, which damages crops and has no natural predators. In fact, the researchers say that the Joro is much more likely to be a nuisance than a danger, and that it should be left to its own devices.
How a Simple Security Bug Became a University Campus 'Master Key' (techcrunch.com) 73
And so by analyzing the app's network data at the same time he unlocked his dorm room door, Johnson found a way to replicate the network request and unlock the door by using a one-tap Shortcut button on his iPhone. For it to work, the Shortcut has to first send his precise location along with the door unlock request or his door won't open. Johnson said as a security measure students have to be physically in proximity to unlock doors using the app, seen as a measure aimed at preventing accidental door openings across campus. It worked, but why stop there? If he could unlock a door without needing the app, what other tasks could he replicate?
Johnson didn't have to look far for help. CBORD publishes a list of commands available through its API, which can be controlled using a student's credentials, like his. But he soon found a problem: The API was not checking if a student's credentials were valid. That meant Johnson, or anyone else on the internet, could communicate with the API and take over another student's account without having to know their password. Johnson said the API only checked the student's unique ID, but warned that these are sometimes the same as a university-issued student username or student ID number, which some schools publicly list on their online student directories, and as such cannot be considered a secret. Johnson described the password bug as a "master key" to his university -- at least to the doors that are controlled by CBORD. As for needing to be in close proximity to a door to unlock it, Johnson said the bug allowed him to trick the API into thinking he was physically present -- simply by sending back the approximate coordinates of the lock itself. The vulnerability was fixed and session keys were invalidated shortly after TechCrunch shared details of the bug with CBORD.
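The bug class is simple to sketch: the server acts on a caller-supplied account identifier without ever validating a credential for that account. The sketch below is hypothetical (invented function names, token, and IDs, not CBORD's actual API), contrasting the vulnerable shape with a fixed handler that checks the session token against the account being acted on:

```go
package main

import (
	"errors"
	"fmt"
)

// sessions maps issued session tokens to the student they authenticate.
var sessions = map[string]string{"tok-abc": "student-1"}

// unlockDoorVulnerable mirrors the reported bug: the API acts on a
// caller-supplied student ID without validating any credential at all,
// so knowing (or guessing) an ID is enough to act as that student.
func unlockDoorVulnerable(studentID string) string {
	return "door unlocked for " + studentID
}

// unlockDoorFixed requires a valid session token and only acts for
// the account that token actually authenticates.
func unlockDoorFixed(token, studentID string) (string, error) {
	owner, ok := sessions[token]
	if !ok {
		return "", errors.New("invalid credentials")
	}
	if owner != studentID {
		return "", errors.New("token does not belong to this student")
	}
	return "door unlocked for " + studentID, nil
}

func main() {
	// Anyone on the internet can impersonate student-2:
	fmt.Println(unlockDoorVulnerable("student-2"))

	// The fixed path rejects the same request:
	_, err := unlockDoorFixed("tok-abc", "student-2")
	fmt.Println(err)
}
```

Note how the fix also explains why "publicly listed student IDs" mattered: an identifier that doubles as the only credential cannot be a secret.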
Why Swift Creator Chris Lattner Stepped Down From Its Core Team This Week (devclass.com) 98
The tech news site DevClass notes Lattner is also "the mind behind compiler infrastructure project LLVM," but reports that "Apparently, Lattner hasn't been part of the [Swift] core team since autumn 2021, when he tried discussing what he perceived as a toxic meeting environment with project leadership after an especially noteworthy call made him take a break in summer." "[...] after avoiding dealing with it, they made excuses, and made it clear they weren't planning to do anything about it. As such, I decided not to return," Lattner wrote in his explanation post. Back then, he planned to keep participating via the Swift Evolution community "but after several discussions generating more heat than light, when my formal proposal review comments and concerns were ignored by the unilateral accepts, and the general challenges with transparency working with core team, I decided that my effort was triggering the same friction with the same people, and thus I was just wasting my time."
Lattner had been the steering force behind Swift since the language's inception in 2010. However, after he left Apple in 2017 and handed over his project lead role, design premises like "single things that compose" seem to have fallen by the wayside, making it easier for language-creator Lattner to decide to move on completely.
The article points out Lattner's latest endeavour is AI infrastructure company Modular.AI.
And Lattner wrote in his comment that Swift's leadership "reassures me they 'want to make sure things are better for others in the future based on what we talked about' though...." Swift has a ton of well meaning and super talented people involved in and driving it. They are trying to be doing the best they can with a complicated situation and many pressures (including lofty goals, fixed schedules, deep bug queues to clear, internal folks that want to review/design things before the public has access to them, and pressures outside their team) that induce odd interactions with the community. By the time things get out to us, the plans are already very far along and sometimes the individuals are attached to the designs they've put a lot of energy into. This leads to a challenging dynamic for everyone involved.
I think that Swift is a phenomenal language and has a long and successful future ahead, but it certainly isn't a community designed language, and this isn't ambiguous. The new ideas on how to improve things sounds promising — I hope they address the fundamental incentive system challenges that the engineers/leaders face that cause the symptoms we see. I think that a healthy and inclusive community will continue to benefit the design and evolution of Swift.
DevClass also reported on the aftermath: Probably as a consequence of the move, the Swift core team is currently looking to restructure project leadership. According to Swift project lead Ted Kremenek... "The intent is to free the core team to invest more in overall project stewardship and create a larger language workgroup that can incorporate more community members in language decisions."
Kremenek also used the announcement to thank Lattner for his leadership throughout the formative years of the project, writing "it has been one of the greatest privileges of my life to work with Chris on Swift."
In 2017 Chris Lattner answered questions from Slashdot's readers.
Programming in Rust is Fun - But Challenging, Finds Annual Community Survey (rust-lang.org) 58
- For those who adopted Rust at work, 83% found it "challenging," though it was unclear how much of this was Rust-specific and how much reflected the general challenges of adopting any new language. During adoption, only 13% of respondents believed the language was slowing their team down, while 82% believed Rust helped their teams achieve their goals.
- Of the respondents using Rust, 59% use it at least occasionally at work and 23% use it for the majority of their coding. Last year, only 42% used Rust at work.
From the survey's results: After adoption, the costs seem to be justified: only 1% of respondents did not find the challenge worth it while 79% said it definitely was. When asked if their teams were likely to use Rust again in the future, 90% agreed. Finally, of respondents using Rust at work, 89% of respondents said their teams found it fun and enjoyable to program.
As for why respondents are using Rust at work, the top answer was that it allowed users "to build relatively correct and bug free software" with 96% of respondents agreeing with that statement. After correctness, performance (92%) was the next most popular choice. 89% of respondents agreed that they picked Rust at work because of Rust's much-discussed security properties.
Overall, Rust seems to be a language ready for the challenges of production, with only 3% of respondents saying that Rust was a "risky" choice for production use.
Thanks to Slashdot reader joshuark for submitting the story...
Behind the Stalkerware Network Spilling the Private Phone Data of Thousands (techcrunch.com) 17
On the front line of the operation is a collection of white-label Android spyware apps that continuously collect the contents of a person's phone, each with custom branding, and fronted by identical websites with U.S. corporate personas that offer cover by obfuscating links to its true operator. Behind the apps is a server infrastructure controlled by the operator, which is known to TechCrunch as a Vietnam-based company called 1Byte. TechCrunch found nine nearly identical spyware apps that presented with distinctly different branding, some with more obscure names than others: Copy9, MxSpy, TheTruthSpy, iSpyoo, SecondClone, TheSpyApp, ExactSpy, FoneTracker and GuestSpy. Other than their names, the spyware apps have practically identical features under the hood, and even the same user interface for setting up the spyware. Once installed, each app allows the person who planted the spyware access to a web dashboard for viewing the victim's phone data in real time -- their messages, contacts, location, photos and more. Much like the apps, each dashboard is a clone of the same web software. And, when TechCrunch analyzed the apps' network traffic, we found the apps all contact the same server infrastructure. But because the nine apps share the same code, web dashboards and the same infrastructure, they also share the same vulnerability.
The vulnerability in question is known as an insecure direct object reference, or IDOR, a class of bug that exposes files or data on a server because of sub-par, or no, security controls in place. It's similar to needing a key to unlock your mailbox, but that key can also unlock every other mailbox in your neighborhood. IDORs are one of the most common kinds of vulnerability [...]. But shoddy coding didn't just expose the private phone data of ordinary people. The entire spyware infrastructure is riddled with bugs that reveal more details about the operation itself. It's how we came to learn that data on some 400,000 devices -- though perhaps more -- have been compromised by the operation. Shoddy coding also led to the exposure of personal information about its affiliates who bring in new paying customers, information that they presumably expected to be private; even the operators themselves. After emailing 1Byte with details of the security vulnerability, the email address was shut down along with "at least two of the branded spyware apps," according to TechCrunch. "That leaves us here. Without a fix, or intervention from the web host, TechCrunch cannot disclose more about the security vulnerability -- even if it's the result of bad actors themselves -- because of the risk it poses to the hundreds of thousands of people whose phones have been unknowingly compromised by this spyware."
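An IDOR can be shown in a few lines. In the hypothetical sketch below (invented record store and function names, not the spyware's actual code), the vulnerable lookup resolves an object by its numeric ID alone, so any caller can walk the ID space and read everyone's data, one key opening every mailbox; the fixed version also checks that the requester owns the object:

```go
package main

import (
	"errors"
	"fmt"
)

// record is some per-victim data; owner is the account allowed to see it.
type record struct{ owner, data string }

var records = map[int]record{
	1: {"alice", "alice's messages"},
	2: {"bob", "bob's messages"},
}

// fetchVulnerable is an IDOR in miniature: the object is looked up by
// ID with no check on who is asking.
func fetchVulnerable(id int) (string, bool) {
	r, ok := records[id]
	return r.data, ok
}

// fetchFixed only returns objects the requester actually owns, and
// answers "not found" either way so IDs can't be probed.
func fetchFixed(requester string, id int) (string, error) {
	r, ok := records[id]
	if !ok || r.owner != requester {
		return "", errors.New("not found")
	}
	return r.data, nil
}

func main() {
	d, _ := fetchVulnerable(2) // anyone can enumerate IDs and read bob's data
	fmt.Println(d)

	_, err := fetchFixed("alice", 2) // the fixed path refuses
	fmt.Println(err)
}
```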
In a separate report, security editor Zack Whittaker explains how one can remove common consumer-grade spyware.
Phishing Attack Tricks 32 OpenSea Users Out of 254 NFTs (theverge.com) 35
"A spreadsheet compiled by the blockchain security service PeckShield counted 254 tokens stolen over the course of the attack, including tokens from Decentraland and Bored Ape Yacht Club." The bulk of the attacks took place between 5PM and 8PM ET, targeting 32 users in total. Molly White, who runs the blog Web3 is Going Great, estimated the value of the stolen tokens at more than $1.7 million.
The attack appears to have exploited a flexibility in the Wyvern Protocol, the open-source standard underlying most NFT smart contracts, including those made on OpenSea. One explanation (linked by CEO Devin Finzer on Twitter) described the attack in two parts: first, targets signed a partial contract, with a general authorization and large portions left blank. With the signature in place, attackers completed the contract with a call to their own contract, which transferred ownership of the NFTs without payment. In essence, targets of the attack had signed a blank check — and once it was signed, attackers filled in the rest of the check to take their holdings.
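The "blank check" failure mode can be sketched without any blockchain machinery. In the toy example below, an HMAC stands in for the on-chain signature (this is a deliberate simplification, not the actual Wyvern signature scheme): because the signed message omits the recipient field, the signature verifies no matter who later fills that field in.

```go
package main

import (
	"crypto/hmac"
	"crypto/sha256"
	"fmt"
)

// signPartial covers only the fields the signer filled in; the
// recipient is deliberately left out of the signed bytes -- the
// blank line on the check.
func signPartial(key []byte, asset string) []byte {
	m := hmac.New(sha256.New, key)
	m.Write([]byte("transfer:" + asset)) // recipient not included!
	return m.Sum(nil)
}

// verifyTransfer recomputes the same partial message, so it accepts
// the transfer regardless of the recipient the attacker supplied.
func verifyTransfer(key []byte, asset, recipient string, sig []byte) bool {
	m := hmac.New(sha256.New, key)
	m.Write([]byte("transfer:" + asset))
	return hmac.Equal(sig, m.Sum(nil))
}

func main() {
	key := []byte("owner-secret")
	sig := signPartial(key, "ape-1234")

	// The attacker completes the "contract" naming themselves recipient,
	// and the owner's signature still checks out.
	fmt.Println(verifyTransfer(key, "ape-1234", "attacker", sig))
}
```

The general lesson matches the article: a signature only protects the bytes it covers, so authorizations must sign the complete, final message, never a template with fields left to be filled in later.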
"I checked every transaction," said the user, who goes by Neso. "They all have valid signatures from the people who lost NFTs so anyone claiming they didn't get phished but lost NFTs is sadly wrong...."
Writing on Twitter shortly before 3AM ET, OpenSea CEO Devin Finzer said the attacks had not originated from OpenSea's website, its various listing systems, or any emails from the company. The rapid pace of the attack — hundreds of transactions in a matter of hours — suggests some common vector of attack, but so far no link has been discovered.
An update to OpenSea's smart contract was scheduled the day before (to remove old and inactive listings from the platform), and the scammer mimicked a genuine OpenSea email, according to The Street. A user who posted the text of the phishing email online explains that the scammer "then got a number of people to sign permissions with WyvernExchange. No exploit, just people not reading sign permissions as normal."
CEO Finzer told Bloomberg that some of the stolen NFTs have actually been returned, with no further malicious activity seen from the attacker's account. "He also dispelled rumors of a $200 million hack, saying the attacker has $1.7 million of Ethereum in his wallet from selling some of the stolen NFTs."
And PC Magazine shares this update about the wallet: CoinDesk reports that Etherscan, which bills itself as "the Ethereum blockchain explorer," has flagged the account that appears to be connected to these NFT thefts. (The public name of which is, fittingly enough, "Fake_Phishing5169.")
Linux Developers Patch Bugs Faster Than Microsoft, Apple, and Google, Study Shows (zdnet.com) 43
ZDNet reports that Linux's competition "didn't do nearly as well." For instance, Apple, 69 days; Google, 44 days; and Mozilla, 46 days. Coming in at the bottom was Microsoft, 83 days, and Oracle, albeit with only a handful of security problems, with 109 days.
By Project Zero's count, others, which included primarily open-source organizations and companies such as Apache, Canonical, Github, and Kubernetes, came in with a respectable 44 days.
Generally, everyone's getting faster at fixing security bugs. In 2021, vendors took an average of 52 days to fix reported security vulnerabilities. Only three years ago the average was 80 days. In particular, the Project Zero crew noted that Microsoft, Apple, and Linux all significantly reduced their time to fix over the last two years.
As for mobile operating systems, Apple iOS with an average of 70 days is a nose better than Android with its 72 days. On the other hand, iOS had far more bugs, 72, than Android with its 10 problems.
Browser problems are also being fixed at a faster pace. Chrome fixed its 40 problems in an average of just under 30 days. Mozilla Firefox, with a mere 8 security holes, patched them in an average of 37.8 days. WebKit, Apple's web browser engine, which is primarily used by Safari, has a much poorer track record: WebKit's programmers take an average of over 72 days to fix bugs.
Firefox and Chrome Versions '100' May Break Some Websites (engadget.com) 92
Zoom Update Prevents Microphone From Staying Active After Calls On Mac (9to5mac.com) 16
Zoom has confirmed that there was a bug in its macOS app that could cause the orange microphone-in-use indicator to appear even after leaving a call. According to a company representative, the latest version of the app no longer has this problem: "We experienced a bug relating to the Zoom client for macOS, which could show the orange indicator light continue to appear after having left a meeting, call, or webinar. This bug was addressed in the Zoom client for macOS version 5.9.3 and we recommend you update to version 5.9.3 to apply the fix."