Bug

NYSE Investigating 'Technical Issue' That Showed Berkshire Hathaway Share Price Dropping 99% (nbcnews.com) 33

The New York Stock Exchange said Monday it was investigating a "technical issue" that was leading to large fluctuations in the prices of certain stocks including Warren Buffett's Berkshire Hathaway. From a report: According to a notice posted on its website, the issue involved "limit up, limit down bands," which are designed to limit volatility. Some 50 stocks were affected, the website indicated, and trades in those companies was halted. NYSE trading data incorrectly showed so-called Class A shares of Berkshire down 99% from its price above $620,000 a share.
Security

Federal Agency Warns (Patched) Critical Linux Vulnerability Being Actively Exploited (arstechnica.com) 21

"The US Cybersecurity and Infrastructure Security Agency has added a critical security bug in Linux to its list of vulnerabilities known to be actively exploited in the wild," reported Ars Technica on Friday.

"The vulnerability, tracked as CVE-2024-1086 and carrying a severity rating of 7.8 out of a possible 10, allows people who have already gained a foothold inside an affected system to escalate their system privileges." It's the result of a use-after-free error, a class of vulnerability that occurs in software written in the C and C++ languages when a process continues to access a memory location after it has been freed or deallocated. Use-after-free vulnerabilities can result in remote code or privilege escalation. The vulnerability, which affects Linux kernel versions 5.14 through 6.6, resides in the NF_tables, a kernel component enabling the Netfilter, which in turn facilitates a variety of network operations... It was patched in January, but as the CISA advisory indicates, some production systems have yet to install it. At the time this Ars post went live, there were no known details about the active exploitation.

A deep-dive write-up of the vulnerability reveals that these exploits provide "a very powerful double-free primitive when the correct code paths are hit." Double-free vulnerabilities are a subclass of use-after-free errors...

Canada

'Ottawa Wants the Power To Create Secret Backdoors In Our Networks' (theglobeandmail.com) 39

An anonymous reader quotes an op-ed from The Globe and Mail, written by Kate Robertson and Ron Deibert. Robertson is a senior research associate and Deibert is director at the University of Toronto's Citizen Lab. From the piece: A federal cybersecurity bill, slated to advance through Parliament soon, contains secretive, encryption-breaking powers that the government has been loath to talk about. And they threaten the online security of everyone in Canada. Bill C-26 empowers government officials to secretly order telecommunications companies to install backdoors inside encrypted elements in Canada's networks. This could include requiring telcos to alter the 5G encryption standards that protect mobile communications to facilitate government surveillance. The government's decision to push the proposed law forward without amending it to remove this encryption-breaking capability has set off alarm bells that these new powers are a feature, not a bug.

There are already many insecurities in today's networks, reaching down to the infrastructure layers of communication technology. The Signalling System No. 7, developed in 1975 to route phone calls, has become a major source of insecurity for cellphones. In 2017, the CBC demonstrated how hackers only needed a Canadian MP's cell number to intercept his movements, text messages and phone calls. Little has changed since: A 2023 Citizen Lab report details pervasive vulnerabilities at the heart of the world's mobile networks. So it makes no sense that the Canadian government would itself seek the ability to create more holes, rather than patching them. Yet it is pushing for potential new powers that would infect next-generation cybersecurity tools with old diseases.

It's not as if the government wasn't warned. Citizen Lab researchers presented the 2023 report's findings in parliamentary hearings on Bill C-26, and leaders and experts in civil society and in Canada's telecommunications industry warned that the bill must be narrowed to prevent its broad powers to compel technical changes from being used to compromise the "confidentiality, integrity, or availability" of telecommunication services. And yet, while government MPs maintained that their intent is not to expand surveillance capabilities, MPs pushed the bill out of committee without this critical amendment last month. In doing so, the government has set itself up to be the sole arbiter of when, and on what conditions, Canadians deserve security for their most confidential communications -- personal, business, religious, or otherwise. The new powers would only make people in Canada more vulnerable to malicious threats to the privacy and security of all network users, including Canada's most senior officials. [...]
"Now, more than ever, there is no such thing as a safe backdoor," the authors write in closing. "A shortcut that provides a narrow advantage for the few at the expense of us all is no way to secure our complex digital ecosystem."

"Against this threat landscape, a pivot is crucial. Canada needs cybersecurity laws that explicitly recognize that uncompromised encryption is the backbone of cybersecurity, and it must be mandated and protected by all means possible."
AI

Mojo, Bend, and the Rise of AI-First Programming Languages (venturebeat.com) 26

"While general-purpose languages like Python, C++, and Java remain popular in AI development," writes VentureBeat, "the resurgence of AI-first languages signifies a recognition that AI's unique demands require specialized languages tailored to the domain's specific needs... designed from the ground up to address the specific needs of AI development." Bend, created by Higher Order Company, aims to provide a flexible and intuitive programming model for AI, with features like automatic differentiation and seamless integration with popular AI frameworks. Mojo, developed by Modular AI, focuses on high performance, scalability, and ease of use for building and deploying AI applications. Swift for TensorFlow, an extension of the Swift programming language, combines the high-level syntax and ease of use of Swift with the power of TensorFlow's machine learning capabilities...

At the heart of Mojo's design is its focus on seamless integration with AI hardware, such as GPUs running CUDA and other accelerators. Mojo enables developers to harness the full potential of specialized AI hardware without getting bogged down in low-level details. One of Mojo's key advantages is its interoperability with the existing Python ecosystem. Unlike languages like Rust, Zig or Nim, which can have steep learning curves, Mojo allows developers to write code that seamlessly integrates with Python libraries and frameworks. Developers can continue to use their favorite Python tools and packages while benefiting from Mojo's performance enhancements... It supports static typing, which can help catch errors early in development and enable more efficient compilation... Mojo also incorporates an ownership system and borrow checker similar to Rust, ensuring memory safety and preventing common programming errors. Additionally, Mojo offers memory management with pointers, giving developers fine-grained control over memory allocation and deallocation...

Mojo is conceptually lower-level than some other emerging AI languages like Bend, which compiles modern high-level language features to native multithreading on Apple Silicon or NVIDIA GPUs. Mojo offers fine-grained control over parallelism, making it particularly well-suited for hand-coding modern neural network accelerations. By providing developers with direct control over the mapping of computations onto the hardware, Mojo enables the creation of highly optimized AI implementations.

According to Mojo's creator, Modular, the language has already garnered an impressive user base of over 175,000 developers and 50,000 organizations since it was made generally available last August. Despite its impressive performance and potential, Mojo's adoption might have stalled initially due to its proprietary status. However, Modular recently decided to open-source Mojo's core components under a customized version of the Apache 2 license. This move will likely accelerate Mojo's adoption and foster a more vibrant ecosystem of collaboration and innovation, similar to how open source has been a key factor in the success of languages like Python.

Developers can now explore Mojo's inner workings, contribute to its development, and learn from its implementation. This collaborative approach will likely lead to faster bug fixes, performance improvements and the addition of new features, ultimately making Mojo more versatile and powerful.

The article also notes other languages "trying to become the go-to choice for AI development" by providing high-performance execution on parallel hardware. Unlike low-level beasts like CUDA and Metal, Bend feels more like Python and Haskell, offering fast object allocations, higher-order functions with full closure support, unrestricted recursion and even continuations. It runs on massively parallel hardware like GPUs, delivering near-linear speedup based on core count with zero explicit parallel annotations — no thread spawning, no locks, mutexes or atomics. Powered by the HVM2 runtime, Bend exploits parallelism wherever it can, making it the Swiss Army knife for AI — a tool for every occasion...

The resurgence of AI-focused programming languages like Mojo, Bend, Swift for TensorFlow, JAX and others marks the beginning of a new era in AI development. As the demand for more efficient, expressive, and hardware-optimized tools grows, we expect to see a proliferation of languages and frameworks that cater specifically to the unique needs of AI. These languages will leverage modern programming paradigms, strong type systems, and deep integration with specialized hardware to enable developers to build more sophisticated AI applications with unprecedented performance. The rise of AI-focused languages will likely spur a new wave of innovation in the interplay between AI, language design and hardware development. As language designers work closely with AI researchers and hardware vendors to optimize performance and expressiveness, we will likely see the emergence of novel architectures and accelerators designed with these languages and AI workloads in mind. This close relationship between AI, language, and hardware will be crucial in unlocking the full potential of artificial intelligence, enabling breakthroughs in fields like autonomous systems, natural language processing, computer vision, and more.

The future of AI development and computing itself are being reshaped by the languages and tools we create today.

In 2017 Modular AI's founder Chris Lattner (creator of the Swift and LLVM) answered questions from Slashdot readers.
Apple

Apple Explains Rare iOS 17.5 Bug That Resurfaced Deleted Photos (9to5mac.com) 59

Apple has shed more light on the bizarre iOS 17.5 bug that caused long-deleted photos to mysteriously reappear on users' devices. In a statement to 9to5Mac, the iPhone maker clarified that the issue stemmed from a corrupted database on the device itself, not iCloud Photos. This means the photos were never fully erased from the device, but they also weren't synced to iCloud. Interestingly, these files could have hitched a ride to new devices through backups or direct transfers.
Science

'Pay Researchers To Spot Errors in Published Papers' 24

Borrowing the idea of "bug bounties" from the technology industry could provide a systematic way to detect and correct the errors that litter the scientific literature. Malte Elson, writing at Nature: Just as many industries devote hefty funding to incentivizing people to find and report bugs and glitches, so the science community should reward the detection and correction of errors in the scientific literature. In our industry, too, the costs of undetected errors are staggering. That's why I have joined with meta-scientist Ian Hussey at the University of Bern and psychologist Ruben Arslan at Leipzig University in Germany to pilot a bug-bounty programme for science, funded by the University of Bern. Our project, Estimating the Reliability and Robustness of Research (ERROR), pays specialists to check highly cited published papers, starting with the social and behavioural sciences (see go.nature.com/4bmlvkj). Our reviewers are paid a base rate of up to 1,000 Swiss francs (around US$1,100) for each paper they check, and a bonus for any errors they find. The bigger the error, the greater the reward -- up to a maximum of 2,500 francs.

Authors who let us scrutinize their papers are compensated, too: 250 francs to cover the work needed to prepare files or answer reviewer queries, and a bonus 250 francs if no errors (or only minor ones) are found in their work. ERROR launched in February and will run for at least four years. So far, we have sent out almost 60 invitations, and 13 sets of authors have agreed to have their papers assessed. One review has been completed, revealing minor errors. I hope that the project will demonstrate the value of systematic processes to detect errors in published research. I am convinced that such systems are needed, because current checks are insufficient. Unpaid peer reviewers are overburdened, and have little incentive to painstakingly examine survey responses, comb through lists of DNA sequences or cell lines, or go through computer code line by line. Mistakes frequently slip through. And researchers have little to gain personally from sifting through published papers looking for errors. There is no financial compensation for highlighting errors, and doing so can see people marked out as troublemakers.
Open Source

Why a 'Frozen' Distribution Linux Kernel Isn't the Safest Choice for Security (zdnet.com) 104

Jeremy Allison — Sam (Slashdot reader #8,157) is a Distinguished Engineer at Rocky Linux creator CIQ. This week he published a blog post responding to promises of Linux distros "carefully selecting only the most polished and pristine open source patches from the raw upstream open source Linux kernel in order to create the secure distribution kernel you depend on in your business."

But do carefully curated software patches (applied to a known "frozen" Linux kernel) really bring greater security? "After a lot of hard work and data analysis by my CIQ kernel engineering colleagues Ronnie Sahlberg and Jonathan Maple, we finally have an answer to this question. It's no." The data shows that "frozen" vendor Linux kernels, created by branching off a release point and then using a team of engineers to select specific patches to back-port to that branch, are buggier than the upstream "stable" Linux kernel created by Greg Kroah-Hartman. How can this be? If you want the full details the link to the white paper is here. But the results of the analysis couldn't be clearer.

- A "frozen" vendor kernel is an insecure kernel. A vendor kernel released later in the release schedule is doubly so.

- The number of known bugs in a "frozen" vendor kernel grows over time. The growth in the number of bugs even accelerates over time.

- There are too many open bugs in these kernels for it to be feasible to analyze or even classify them....

[T]hinking that you're making a more secure choice by using a "frozen" vendor kernel isn't a luxury we can still afford to believe. As Greg Kroah-Hartman explicitly said in his talk "Demystifying the Linux Kernel Security Process": "If you are not using the latest stable / longterm kernel, your system is insecure."

CIQ describes its report as "a count of all the known bugs from an upstream kernel that were introduced, but never fixed in RHEL 8." For the most recent RHEL 8 kernels, at the time of writing, these counts are: RHEL 8.6 : 5034 RHEL 8.7 : 4767 RHEL 8.8 : 4594

In RHEL 8.8 we have a total of 4594 known bugs with fixes that exist upstream, but for which known fixes have not been back-ported to RHEL 8.8. The situation is worse for RHEL 8.6 and RHEL 8.7 as they cut off back-porting earlier than RHEL 8.8 but of course that did not prevent new bugs from being discovered and fixed upstream....

This whitepaper is not meant as a criticism of the engineers working at any Linux vendors who are dedicated to producing high quality work in their products on behalf of their customers. This problem is extremely difficult to solve. We know this is an open secret amongst many in the industry and would like to put concrete numbers describing the problem to encourage discussion. Our hope is for Linux vendors and the community as a whole to rally behind the kernel.org stable kernels as the best long term supported solution. As engineers, we would prefer this to allow us to spend more time fixing customer specific bugs and submitting feature improvements upstream, rather than the endless grind of backporting upstream changes into vendor kernels, a practice which can introduce more bugs than it fixes.

ZDNet calls it "an open secret in the Linux community." It's not enough to use a long-term support release. You must use the most up-to-date release to be as secure as possible. Unfortunately, almost no one does that. Nevertheless, as Google Linux kernel engineer Kees Cook explained, "So what is a vendor to do? The answer is simple: if painful: Continuously update to the latest kernel release, either major or stable." Why? As Kroah-Hartman explained, "Any bug has the potential of being a security issue at the kernel level...."

Although [CIQ's] programmers examined RHEL 8.8 specifically, this is a general problem. They would have found the same results if they had examined SUSE, Ubuntu, or Debian Linux. Rolling-release Linux distros such as Arch, Gentoo, and OpenSUSE Tumbleweed constantly release the latest updates, but they're not used in businesses.

Jeremy Allison's post points out that "the Linux kernel used by Android devices is based on the upstream kernel and also has a stable internal kernel ABI, so this isn't an insurmountable problem..."
AI

Bruce Schneier Reminds LLM Engineers About the Risks of Prompt Injection Vulnerabilities (schneier.com) 40

Security professional Bruce Schneier argues that large language models have the same vulnerability as phones in the 1970s exploited by John Draper.

"Data and control used the same channel," Schneier writes in Communications of the ACM. "That is, the commands that told the phone switch what to do were sent along the same path as voices." Other forms of prompt injection involve the LLM receiving malicious instructions in its training data. Another example hides secret commands in Web pages. Any LLM application that processes emails or Web pages is vulnerable. Attackers can embed malicious commands in images and videos, so any system that processes those is vulnerable. Any LLM application that interacts with untrusted users — think of a chatbot embedded in a website — will be vulnerable to attack. It's hard to think of an LLM application that isn't vulnerable in some way.

Individual attacks are easy to prevent once discovered and publicized, but there are an infinite number of them and no way to block them as a class. The real problem here is the same one that plagued the pre-SS7 phone network: the commingling of data and commands. As long as the data — whether it be training data, text prompts, or other input into the LLM — is mixed up with the commands that tell the LLM what to do, the system will be vulnerable. But unlike the phone system, we can't separate an LLM's data from its commands. One of the enormously powerful features of an LLM is that the data affects the code. We want the system to modify its operation when it gets new training data. We want it to change the way it works based on the commands we give it. The fact that LLMs self-modify based on their input data is a feature, not a bug. And it's the very thing that enables prompt injection.

Like the old phone system, defenses are likely to be piecemeal. We're getting better at creating LLMs that are resistant to these attacks. We're building systems that clean up inputs, both by recognizing known prompt-injection attacks and training other LLMs to try to recognize what those attacks look like. (Although now you have to secure that other LLM from prompt-injection attacks.) In some cases, we can use access-control mechanisms and other Internet security systems to limit who can access the LLM and what the LLM can do. This will limit how much we can trust them. Can you ever trust an LLM email assistant if it can be tricked into doing something it shouldn't do? Can you ever trust a generative-AI traffic-detection video system if someone can hold up a carefully worded sign and convince it to not notice a particular license plate — and then forget that it ever saw the sign...?

Someday, some AI researcher will figure out how to separate the data and control paths. Until then, though, we're going to have to think carefully about using LLMs in potentially adversarial situations...like, say, on the Internet.

Schneier urges engineers to balance the risks of generative AI with the powers it brings. "Using them for everything is easier than taking the time to figure out what sort of specialized AI is optimized for the task.

"But generative AI comes with a lot of security baggage — in the form of prompt-injection attacks and other security risks. We need to take a more nuanced view of AI systems, their uses, their own particular risks, and their costs vs. benefits."
Businesses

Two Students Uncover Security Bug That Could Let Millions Do Their Laundry For Free (techcrunch.com) 78

Two university students discovered a security flaw in over a million internet-connected laundry machines operated by CSC ServiceWorks, allowing users to avoid payment and add unlimited funds to their accounts. The students, Alexander Sherbrooke and Iakov Taranenko from UC Santa Cruz, reported the vulnerability to the company, a major laundry service provider, in January but claim it remains unpatched. TechCrunch adds: Sherbrooke said he was sitting on the floor of his basement laundry room in the early hours one January morning with his laptop in hand, and "suddenly having an 'oh s-' moment." From his laptop, Sherbrooke ran a script of code with instructions telling the machine in front of him to start a cycle despite having $0 in his laundry account. The machine immediately woke up with a loud beep and flashed "PUSH START" on its display, indicating the machine was ready to wash a free load of laundry.

In another case, the students added an ostensible balance of several million dollars into one of their laundry accounts, which reflected in their CSC Go mobile app as though it were an entirely normal amount of money for a student to spend on laundry.

The Almighty Buck

Germany's Sovereign Tech Fund Now Supporting FFmpeg (phoronix.com) 16

Michael Larabel reports via Phoronix: Following Germany's Sovereign Tech Fund providing significant funding for GNOME, Rust Coreutils, PHP, a systemd bug bounty, and numerous other free software projects, the FFmpeg multimedia library is the latest beneficiary to this funding from the Germany government. The Sovereign Tech Fund notes that the FFmpeg project is receiving 157,580 euros for 2024 and 2025.

An announcement on the FFmpeg.org project site notes: "The FFmpeg community is excited to announce that Germany's Sovereign Tech Fund has become its first governmental sponsor. Their support will help sustain the [maintenance] of the FFmpeg project, a critical open-source software multimedia component essential to bringing audio and video to billions around the world everyday."

Bitcoin

MIT Students Stole $25 Million In Seconds By Exploiting ETH Blockchain Bug, DOJ Says (arstechnica.com) 112

An anonymous reader quotes a report from Ars Technica: Within approximately 12 seconds, two highly educated brothers allegedly stole $25 million by tampering with the ethereum blockchain in a never-before-seen cryptocurrency scheme, according to an indictment that the US Department of Justice unsealed Wednesday. In a DOJ press release, US Attorney Damian Williams said the scheme was so sophisticated that it "calls the very integrity of the blockchain into question."

"The brothers, who studied computer science and math at one of the most prestigious universities in the world, allegedly used their specialized skills and education to tamper with and manipulate the protocols relied upon by millions of ethereum users across the globe," Williams said. "And once they put their plan into action, their heist only took 12 seconds to complete." Anton, 24, and James Peraire-Bueno, 28, were arrested Tuesday, charged with conspiracy to commit wire fraud, wire fraud, and conspiracy to commit money laundering. Each brother faces "a maximum penalty of 20 years in prison for each count," the DOJ said. The indictment goes into detail explaining that the scheme allegedly worked by exploiting the ethereum blockchain in the moments after a transaction was conducted but before the transaction was added to the blockchain.
To uncover the scheme, the special agent in charge, Thomas Fattorusso of the IRS Criminal Investigation (IRS-CI) New York Field Office, said that investigators "simply followed the money."

"Regardless of the complexity of the case, we continue to lead the effort in financial criminal investigations with cutting-edge technology and good-ol'-fashioned investigative work, on and off the blockchain," Fattorusso said.
IOS

Troubling iOS 17.5 Bug Reportedly Resurfacing Old Deleted Photos (macrumors.com) 58

An anonymous reader shares a report: There are concerning reports on Reddit that Apple's latest iOS 17.5 update has introduced a bug that causes old photos that were deleted -- in some cases years ago -- to reappear in users' photo libraries. After updating their iPhone, one user said they were shocked to find old NSFW photos that they deleted in 2021 suddenly showing up in photos marked as recently uploaded to iCloud. Other users have also chimed in with similar stories. "Same here," said one Redditor. "I have four pics from 2010 that keep reappearing as the latest pics uploaded to iCloud. I have deleted them repeatedly." "Same thing happened to me," replied another user. "Six photos from different times, all I have deleted. Some I had deleted in 2023." More reports have been trickling in overnight. One said: "I had a random photo from a concert taken on my Canon camera reappear in my phone library, and it showed up as if it was added today."
Social Networks

Is Mastodon's Link-Previewing Overloading Servers? (itsfoss.com) 39

The blog Its FOSS has 15,000 followers for its Mastodon account — which they think is causing problems: When you share a link on Mastodon, a link preview is generated for it, right? With Mastodon being a federated platform (a part of the Fediverse), the request to generate a link preview is not generated by just one Mastodon instance. There are many instances connected to it who also initiate requests for the content almost immediately. And, this "fediverse effect" increases the load on the website's server in a big way.

Sure, some websites may not get overwhelmed with the requests, but Mastodon does generate numerous hits, increasing the load on the server. Especially, if the link reaches a profile with more followers (and a broader network of instances)... We tried it on our Mastodon profile, and every time we shared a link, we were able to successfully make our website unresponsive or slow to load.

Slashdot reader nunojsilva is skeptical that "blurbs with a thumbnail and description" could create the issue (rather than, say, poorly-optimized web content). But the It's Foss blog says they found three GitHub issues about the same problem — one from 2017, and two more from 2023. And other blogs also reported the same issue over a year ago — including software developer Michael Nordmeyer and legendary Netscape programmer Jamie Zawinski.

And back in 2022, security engineer Chris Partridge wrote: [A] single roughly ~3KB POST to Mastodon caused servers to pull a bit of HTML and... an image. In total, 114.7 MB of data was requested from my site in just under five minutes — making for a traffic amplification of 36704:1. [Not counting the image.]
Its Foss reports Mastodon's official position that the issue has been "moved as a milestone for a future 4.4.0 release. As things stand now, the 4.4.0 release could take a year or more (who knows?)."

They also state their opinion that the issue "should have been prioritized for a faster fix... Don't you think as a community-powered, open-source project, it should be possible to attend to a long-standing bug, as serious as this one?"
Ubuntu

Ubuntu Criticized For Bug Blocking Installation of .Deb Packages (linux-magazine.com) 118

The blog It's FOSS is "pissed at the casual arrogance of Ubuntu and its parent company Canonical..... The sheer audacity of not caring for its users reeks of Microsoft-esque arrogance." If you download a .deb package of a software, you cannot install it using the official graphical software center on Ubuntu anymore. When you double-click on the downloaded deb package, you'll see this error, "there is no app installed for Debian package files".

If you right-click and choose to open it with Software Center, you are in for another annoyance. The software center will go into eternal loading. It may look as if it is doing something, but it will go on forever. I could even livestream the loading app store on YouTube, and it would continue for the 12 years of its long-term support period.

Canonical software engineer Dennis Loose actually created an issue ticket for the problem himself — back in September of 2023. And two weeks ago he returned to the discussion to announce that fix "will be a priority for the next cycle". (Though "unfortunately we didn't have the capacity to work on this for 24.04...)

But Its Foss accused Canonical of "cleverly booting out deb in favor of Snap, one baby step at a time" (noting the problem started with Ubuntu 23.10): There is also the issue of replacing deb packages with Snap, even with the apt command line tool. You use 'sudo apt install chromium', you get a Snap package of Chromium instead of Debian
The venerable Linux magazine argues that Canonical "has secretly forced Snap installation on users." [I]t looks as if the Software app defaults to Snap packages for everything now. I combed through various apps and found this to be the case.... As far as the auto-installation of downloaded .deb files, you'll have to install something like gdebi to bring back this feature.
AI

Copilot Workspace Is GitHub's Take On AI-Powered Software Engineering 12

An anonymous reader quotes a report from TechCrunch: Ahead of its annual GitHub Universe conference in San Francisco early this fall, GitHub announced Copilot Workspace, a dev environment that taps what GitHub describes as "Copilot-powered agents" to help developers brainstorm, plan, build, test and run code in natural language. Jonathan Carter, head of GitHub Next, GitHub's software R&D team, pitches Workspace as somewhat of an evolution of GitHub's AI-powered coding assistant Copilot into a more general tool, building on recently introduced capabilities like Copilot Chat, which lets developers ask questions about code in natural language. "Through research, we found that, for many tasks, the biggest point of friction for developers was in getting started, and in particular knowing how to approach a [coding] problem, knowing which files to edit and knowing how to consider multiple solutions and their trade-offs," Carter said. "So we wanted to build an AI assistant that could meet developers at the inception of an idea or task, reduce the activation energy needed to begin and then collaborate with them on making the necessary edits across the entire corebase."

Given a GitHub repo or a specific bug within a repo, Workspace -- underpinned by OpenAI's GPT-4 Turbo model -- can build a plan to (attempt to) squash the bug or implement a new feature, drawing on an understanding of the repo's comments, issue replies and larger codebase. Developers get suggested code for the bug fix or new feature, along with a list of the things they need to validate and test that code, plus controls to edit, save, refactor or undo it. The suggested code can be run directly in Workspace and shared among team members via an external link. Those team members, once in Workspace, can refine and tinker with the code as they see fit.

Perhaps the most obvious way to launch Workspace is from the new "Open in Workspace" button to the left of issues and pull requests in GitHub repos. Clicking on it opens a field to describe the software engineering task to be completed in natural language, like, "Add documentation for the changes in this pull request," which, once submitted, gets added to a list of "sessions" within the new dedicated Workspace view. Workspace executes requests systematically step by step, creating a specification, generating a plan and then implementing that plan. Developers can dive into any of these steps to get a granular view of the suggested code and changes and delete, re-run or re-order the steps as necessary.
"Since developers spend a lot of their time working on [coding issues], we believe we can help empower developers every day through a 'thought partnership' with AI," Carter said. "You can think of Copilot Workspace as a companion experience and dev environment that complements existing tools and workflows and enables simplifying a class of developer tasks ... We believe there's a lot of value that can be delivered in an AI-native developer environment that isn't constrained by existing workflows."
Idle

Airline Ticketing System Keeps Mistaking a 101-Year-Old Woman for a 1-Year-Old (bbc.com) 121

Though it's long past Y2K, another date-related bug is still with us, writes Slashdot reader Bruce66423, sharing this report from the BBC.

"A 101-year-old woman keeps getting mistaken for a baby, because of an error with an airline's booking system." The problem occurs because American Airlines' systems apparently cannot compute that Patricia, who did not want to share her surname, was born in 1922, rather than 2022.... [O]n one occasion, airport staff did not have transport ready for her inside the terminal as they were expecting a baby who could be carried...

[I]t appears the airport computer system is unable to process a birth date so far in the past — so it defaulted to one 100 years later instead... But she is adamant the IT problems will not put her off flying, and says she is looking forward to her next flight in the autumn. By then she will be 102 — and perhaps by then the airline computers will have caught on to her real age.

The Almighty Buck

Software Glitch Saw Aussie Casino Give Away Millions In Cash 19

A software glitch in the "ticket in, cash out" (TICO) machines at Star Casino in Sydney, Australia, saw it inadvertently give away $2.05 million over several weeks. This glitch allowed gamblers to reuse a receipt for slot machine winnings, leading to unwarranted cash payouts which went undetected due to systematic failures in oversight and audit processes. The Register reports: News of the giveaway emerged on Monday at an independent inquiry into the casino, which has had years of compliance troubles that led to a finding that its operators were unsuitable to hold a license. In testimony [PDF] given on Monday to the inquiry, casino manager Nicholas Weeks explained that it is possible to insert two receipts into TICO machines. That was a feature, not a bug, and allowed gamblers to redeem two receipts and be paid the aggregate amount. But a software glitch meant that the machines would return one of those tickets and allow it to be re-used -- the barcode it bore was not recognized as having been paid.

"What occurred was small additional amounts of cash were being provided to customers in circumstances when they shouldn't have received it because of that defect," Weeks told the inquiry. Local media reported that news of the free cash got around and 43 people used the TICO machines to withdraw money to which they were not entitled -- at least one of them a recovering gambling addict who fell off the wagon as the "free" money allowed them to fund their activities. Known abusers of the TICO machines have been charged, and one of those set to face the courts is accused of association with a criminal group. (The first inquiry into The Star, two years ago, found it may have been targeted by organized crime groups.)
Operating Systems

Framework's Software and Firmware Have Been a Mess (arstechnica.com) 18

Framework, the company known for designing and selling upgradeable, modular laptops, has struggled with providing up-to-date software for its products. Ars Technica's Andrew Cunningham spoke with CEO Nirav Patel to discuss how the company is working on fixing these issues. Longtime Slashdot reader snikulin shares the report: Driver bundles remain un-updated for years after their initial release. BIOS updates go through long and confusing beta processes, keeping users from getting feature improvements, bug fixes, and security updates. In its community support forums, Framework employees, including founder and CEO Nirav Patel, have acknowledged these issues and promised fixes but have remained inconsistent and vague about actual timelines. [...] Patel says Framework has taken steps to improve the update problem, but he admits that the team's initial approach -- supporting existing laptops while also trying to spin up firmware for upcoming launches -- wasn't working. "We started 12th-gen [Intel Framework Laptop] development, basically the 12th-gen team was also handling looking back at 11th-gen [Intel Framework Laptop] to do firmware updates there," Patel told Ars. "And it became clear, especially as we continued to add on more platforms, that just wasn't a sustainable path to proceed on."

Part of the issue is that Framework relies on external companies to put together firmware updates. Some components are provided by Intel, AMD, and other chip companies to all PC companies that use their chips. Others are provided by Insyde, which writes UEFI firmware for Framework and others. And some are handled by Compal, the contract manufacturer that actually produces Framework's systems and has also designed and sold systems for most of the big-name PC companies. As far back as August 2023, Patel has written that the plan is to work with Compal and Insyde to hire dedicated staff to provide better firmware support for Framework laptops. However, the benefits of this arrangement have been slow to reach users. "[Compal] started recruiting on their side towards the end of last year," Patel told Ars. "And now, just at the beginning of this year, we've been able to get that whole team into place and start onboarding them. And especially after Lunar New Year, which is in early February, that team is now up and running at full speed." The goal, Patel says, is to continuously cycle through all of Framework's actively supported laptops, updating each of them one at a time before looping back around and starting the process over again. Functionality-breaking problems and security fixes will take precedence, while additional features and user requests will be lower-priority. ...
snikulin adds: "As a recent Framework 13/AMD owner, I can confirm that it does not sleep properly on a default Windows 11 install. When I close the lid in the evening, the battery is dead the next morning. It's interesting to hear from Linus Sebastian (LTT) on the topic because he is a stakeholder in Framework."
Security

A Crypto Wallet Maker's Warning About an iMessage Bug Sounds Like a False Alarm (techcrunch.com) 3

A crypto wallet maker claimed this week that hackers may be targeting people with an iMessage "zero-day" exploit -- but all signs point to an exaggerated threat, if not a downright scam. From a report: Trust Wallet's official X (previously Twitter) account wrote that "we have credible intel regarding a high-risk zero-day exploit targeting iMessage on the Dark Web. This can infiltrate your iPhone without clicking any link. High-value targets are likely. Each use raises detection risk." The wallet maker recommended iPhone users to turn off iMessage completely "until Apple patches this," even though no evidence shows that "this" exists at all. The tweet went viral, and has been viewed over 3.6 million times as of our publication. Because of the attention the post received, Trust Wallet hours later wrote a follow-up post. The wallet maker doubled down on its decision to go public, saying that it "actively communicates any potential threats and risks to the community."
Security

Crickets From Chirp Systems in Smart Lock Key Leak (krebsonsecurity.com) 14

The U.S. government is warning that smart locks securing entry to an estimated 50,000 dwellings nationwide contain hard-coded credentials that can be used to remotely open any of the locks. Krebs on SecurityL: The lock's maker Chirp Systems remains unresponsive, even though it was first notified about the critical weakness in March 2021. Meanwhile, Chirp's parent company, RealPage, Inc., is being sued by multiple U.S. states for allegedly colluding with landlords to illegally raise rents. On March 7, 2024, the U.S. Cybersecurity & Infrastructure Security Agency (CISA) warned about a remotely exploitable vulnerability with "low attack complexity" in Chirp Systems smart locks.

"Chirp Access improperly stores credentials within its source code, potentially exposing sensitive information to unauthorized access," CISA's alert warned, assigning the bug a CVSS (badness) rating of 9.1 (out of a possible 10). "Chirp Systems has not responded to requests to work with CISA to mitigate this vulnerability." Matt Brown, the researcher CISA credits with reporting the flaw, is a senior systems development engineer at Amazon Web Services. Brown said he discovered the weakness and reported it to Chirp in March 2021, after the company that manages his apartment building started using Chirp smart locks and told everyone to install Chirp's app to get in and out of their apartments.

Slashdot Top Deals