Media

FFmpeg Devs Boast of Up To 94x Performance Boost After Implementing Handwritten AVX-512 Assembly Code (tomshardware.com) 135

Anton Shilov reports via Tom's Hardware: FFmpeg is an open-source video decoding project developed by volunteers who contribute to its codebase, fix bugs, and add new features. The project is led by a small group of core developers and maintainers who oversee its direction and ensure that contributions meet certain standards. They coordinate the project's development and release cycles, merging contributions from other developers. This group of developers tried to implement a handwritten AVX-512 assembly code path, something that has rarely been done before, at least not in the video industry.

The developers have created an optimized code path using the AVX-512 instruction set to accelerate specific functions within the FFmpeg multimedia processing library. By leveraging AVX-512, they were able to achieve significant performance improvements -- from three to 94 times faster -- compared to standard implementations. AVX-512 enables processing large chunks of data in parallel using 512-bit registers, which can hold 16 single-precision or 8 double-precision floating-point values and operate on all of them in a single instruction. This makes it ideal for compute-heavy tasks in general, and for video and image processing in particular.
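
For a sense of what that data parallelism looks like, here is a minimal sketch using C++ compiler intrinsics -- not FFmpeg's code, which is hand-written assembly -- in which a single AVX-512 instruction adds 16 packed single-precision floats at once (it assumes an AVX-512-capable CPU and compilation with, e.g., -mavx512f):

    #include <immintrin.h>
    #include <cstdio>

    int main() {
        float a[16], b[16], out[16];
        for (int i = 0; i < 16; ++i) { a[i] = float(i); b[i] = 100.0f; }

        __m512 va = _mm512_loadu_ps(a);     // load 16 floats into one 512-bit register
        __m512 vb = _mm512_loadu_ps(b);
        __m512 vc = _mm512_add_ps(va, vb);  // 16 additions in a single instruction
        _mm512_storeu_ps(out, vc);

        for (int i = 0; i < 16; ++i) std::printf("%.0f ", out[i]);
        std::printf("\n");
        return 0;
    }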

The benchmarking results show that the new handwritten AVX-512 code path performs considerably faster than other implementations, including baseline C code and lower SIMD instruction sets like AVX2 and SSSE3. In some cases, the revamped AVX-512 codepath achieves a speedup of nearly 94 times over the baseline, highlighting the efficiency of hand-optimized assembly code for AVX-512.

Programming

Python Overtakes JavaScript on GitHub, Annual Survey Finds (github.blog) 97

GitHub released its annual "State of the Octoverse" report this week. And while "Systems programming languages, like Rust, are also on the rise... Python, JavaScript, TypeScript, and Java remain the most widely used languages on GitHub."

In fact, "In 2024, Python overtook JavaScript as the most popular language on GitHub." They also report usage of Jupyter Notebooks "skyrocketed" with a 92% jump in usage, which along with Python's rise seems to underscore "the surge in data science and machine learning on GitHub..." We're also seeing increased interest in AI agents and smaller models that require less computational power, reflecting a shift across the industry as more people focus on new use cases for AI... While the United States leads in contributions to generative AI projects on GitHub, we see more absolute activity outside the United States. In 2024, there was a 59% surge in the number of contributions to generative AI projects on GitHub and a 98% increase in the number of projects overall — and many of those contributions came from places like India, Germany, Japan, and Singapore...

Notable growth is occurring in India, which is expected to have the world's largest developer population on GitHub by 2028, as well as across Africa and Latin America... [W]e have seen greater growth outside the United States every year since 2013 — and that trend has sped up over the past few years.

Last year they'd projected India would have the most developers on GitHub by 2027, but now believe it will happen a year later. This year's top 10?

1. United States
2. India
3. China
4. Brazil
5. United Kingdom
6. Russia
7. Germany
8. Indonesia
9. Japan
10. Canada

(Interestingly, the UK's population ranks #21 among countries of the world, while Germany ranks #19, and Canada ranks #36.)

GitHub's announcement argues the rise of non-English, high-population regions "is notable given that it is happening at the same time as the proliferation of generative AI tools, which are increasingly enabling developers to engage with code in their natural language." And they offer one more data point: GitHub's For Good First Issue is a curated list of Digital Public Goods that need contributors, connecting those projects with people who want to address a societal challenge and promote sustainable development...

Significantly, 34% of contributors to the top 10 For Good First Issue projects... made their first contribution after signing up for GitHub Copilot.

There are now 518 million projects on GitHub — with a year-over-year growth of 25%...

Security

Is AI-Driven 0-Day Detection Here? (zeropath.com) 25

"AI-driven 0-day detection is here," argues a new blog post from ZeroPath, makers of a GitHub app that "detects, verifies, and issues pull requests for security vulnerabilities in your code."

They write that AI-assisted security research "has been quietly advancing" since early 2023, when researchers at DARPA and ARPA-H's Artificial Intelligence Cyber Challenge demonstrated the first practical applications of LLM-powered vulnerability detection — with new advances continuing. "Since July 2024, ZeroPath's tool has uncovered critical zero-day vulnerabilities — including remote code execution, authentication bypasses, and insecure direct object references — in popular AI platforms and open-source projects." And they ultimately identified security flaws in projects owned by Netflix, Salesforce, and Hulu by "taking a novel approach combining deep program analysis with adversarial AI agents for validation. Our methodology has uncovered numerous critical vulnerabilities in production systems, including several that traditional Static Application Security Testing tools were ill-equipped to find..." TL;DR — most of these bugs are simple and could have been found with a code review from a security researcher or, in some cases, scanners. The historical issue, however, with automating the discovery of these bugs is that traditional SAST tools rely on pattern matching and predefined rules, and miss complex vulnerabilities that do not fit known patterns (i.e. business logic problems, broken authentication flaws, or non-traditional sinks such as from dependencies). They also generate a high rate of false positives.

The beauty of LLMs is that they can reduce ambiguity in most of the situations that caused scanners to be either unusable or produce few findings when mass-scanning open source repositories... To do this well, you need to combine deep program analysis with adversarial agents that test the plausibility of vulnerabilities at each step. The solution ends up mirroring the traditional phases of a pentest — recon, analysis, exploitation (and remediation which is not mentioned in this post)...

AI-driven vulnerability detection is moving fast... What's intriguing is that many of these vulnerabilities are pretty straightforward — they could've been spotted with a solid code review or standard scanning tools. But conventional methods often miss them because they don't fit neatly into known patterns. That's where AI comes in, helping us catch issues that might slip through the cracks.

"Many vulnerabilities remain undisclosed due to ongoing remediation efforts or pending responsible disclosure processes," according to the blog post, which includes a pie chart showing the biggest categories of vulnerabilities found:
  • 53%: Authorization flaws, including broken access control in API endpoints and unauthorized Redis access and configuration exposure. ("Impact: Unauthorized access, data leakage, and resource manipulation across tenant boundaries.")
  • 26%: File operation issues, including directory traversal in configuration loading and unsafe file handling in upload features. ("Impact: Unauthorized file access, sensitive data exposure, and potential system compromise.")
  • 16%: Code execution vulnerabilities, including command injection in file processing and unsanitized input in system commands. ("Impact: Remote code execution, system command execution, and potential full system compromise.")

The company's CIO/cofounder was "former Red Team at Tesla," according to the startup's profile at YCombinator, and earned over $100,000 as a bug-bounty hunter. (And another co-founder is a former Google security engineer.)

Thanks to Slashdot reader Mirnotoriety for sharing the article.


Android

Android 16 Will Launch Earlier Than Usual (cnet.com) 11

Google is advancing the release timeline for Android 16, shifting it to the second quarter of 2025 to better align with new device launches and accelerate access to its latest AI and machine learning resources. It should also "enable app creators and phone companies to prepare their products for the new software more quickly," reports CNET. From the report: [I]n a big-picture sense, the change could help facilitate a new wave of apps with more AI integration, considering developers will get access to Google's latest machine learning and AI resources even sooner. "We're in a once-in-a-generation moment to completely reimagine what our smartphones can do and how we interact with them," Google's Seang Chau, who took on the role of vice president and general manager of the Android Platform earlier this year, said in an interview with CNET. "It's a really exciting time for smartphones, and we've been putting a lot of thought into what we want to do next with them."

In addition to moving up the major release, Google will roll out a minor update in the fourth quarter of 2025 with feature updates, optimizations and bug fixes. It's a notable switch from Google's usual release timeline, but it's just one of several changes the company has made to the way it distributes Android updates in an effort to add features more frequently. [...] "Things are moving quite fast in the AI world right now," Chau said. "So we want to make sure that we get those developer [application programming interfaces], especially around machine learning and AI, available to our developers so they can build these capabilities faster and get them out to our users faster."

AI

GitHub Copilot Moves Beyond OpenAI Models To Support Claude 3.5, Gemini 9

GitHub Copilot will switch from using exclusively OpenAI's GPT models to a multi-model approach, adding Anthropic's Claude 3.5 Sonnet and Google's Gemini 1.5 Pro. Ars Technica reports: First, Anthropic's Claude 3.5 Sonnet will roll out to Copilot Chat's web and VS Code interfaces over the next few weeks. Google's Gemini 1.5 Pro will come a bit later. Additionally, GitHub will soon add support for a wider range of OpenAI models, including GPT o1-preview and o1-mini, which are intended to be stronger at advanced reasoning than GPT-4, which Copilot has used until now. Developers will be able to switch between the models (even mid-conversation) to tailor the model to fit their needs -- and organizations will be able to choose which models will be usable by team members.

The new approach makes sense for users, as certain models are better at certain languages or types of tasks. "There is no one model to rule every scenario," wrote [GitHub CEO Thomas Dohmke]. "It is clear the next phase of AI code generation will not only be defined by multi-model functionality, but by multi-model choice." It starts with the web-based and VS Code Copilot Chat interfaces, but it won't stop there. "From Copilot Workspace to multi-file editing to code review, security autofix, and the CLI, we will bring multi-model choice across many of GitHub Copilot's surface areas and functions soon," Dohmke wrote. There are a handful of additional changes coming to GitHub Copilot, too, including extensions, the ability to manipulate multiple files at once from a chat with VS Code, and a preview of Xcode support.
GitHub also introduced "Spark," a natural language-based app development tool that enables both non-coders and coders to create and refine applications using conversational prompts. It's currently in an early preview phase, with a waitlist available for those who are interested.
Programming

More Than a Quarter of New Code At Google Is Generated By AI 92

Google has integrated AI deeply across its operations, with over 25% of its new code generated by AI. CEO Sundar Pichai announced the milestone during the company's third quarter 2024 earnings call. The Verge reports: AI is helping Google make money as well. Alphabet reported $88.3 billion in revenue for the quarter, with Google Services (which includes Search) revenue of $76.5 billion, up 13 percent year-over-year, and Google Cloud (which includes its AI infrastructure products for other companies) revenue of $11.4 billion, up 35 percent year-over-year. Operating incomes were also strong. Google Services hit $30.9 billion, up from $23.9 billion last year, and Google Cloud hit $1.95 billion, significantly up from last year's $270 million. "In Search, our new AI features are expanding what people can search for and how they search for it," CEO Sundar Pichai says in a statement. "In Cloud, our AI solutions are helping drive deeper product adoption with existing customers, attract new customers and win larger deals. And YouTube's total ads and subscription revenues surpassed $50 billion over the past four quarters for the first time."
Education

Code.org Taps No-Code Tableau To Make the Case For K-12 Programming Courses 62

theodp writes: "Computer science education is a necessity for all students," argues tech-backed nonprofit Code.org in its newly-published 2024 State of Computer Science Education (Understanding Our National Imperative) report. "Students of all identities and chosen career paths need quality computer science education to become informed citizens and confident creators of content and digital tools."

In the 200-page report, Code.org pays special attention to participation in "foundational computer science courses" in high school. "Across the country, 60% of public high schools offer at least one foundational computer science course," laments Code.org (curiously promoting a metric that ignores school size, but which nonetheless was embraced by Education Week and others).

"A course that teaches foundational computer science includes a minimum amount of time applying learned concepts through programming (at least 20 hours of programming/coding for grades 9-12 high schools)," Code.org explains in a separate 13-page Defining Foundational Computer Science document. Interestingly, Code.org argues that Data and Informatics courses -- in which "students may use Oracle WebDB, SQL, PL/SQL, SPSS, and SAS" to learn "the K-12 CS Framework concepts about data and analytics" -- do not count, because "the course content focuses on querying using a scripting language rather than creating programs [the IEEE's Top Programming Languages 2024 begs to differ]." Code.org similarly dissed the use of the Wolfram Language for broad educational use back in 2016.

With its insistence on the importance of kids taking Code.org-defined 'programming' courses in K-12 to promote computational thinking, it's probably no surprise to see that the data behind the 2024 State of Computer Science Education report was prepared using Python (the IEEE's top programming language) and presented to the public in a Jupyter notebook. Just kidding. Ironically, the data behind the 2024 State of Computer Science Education analysis is prepared and presented by Code.org in a no-code Tableau workbook.
Television

Why is Apple So Bad at Marketing Its TV Shows? (fastcompany.com) 137

Speaking of streaming services, an anonymous reader shares a story that looks into Apple's entertainment offering: Ever since its launch in 2019, Apple TV+ has been carving out an identity as the new home for prestige shows from some of Hollywood's biggest names -- the kind of shows that sound natural coming out of Jimmy Kimmel's mouth in monologue jokes at the Emmys. While the company never provides spending details, Apple is estimated to have spent at least $20 billion recruiting the likes of Reese Witherspoon, M. Night Shyamalan, and Harrison Ford to help cultivate its award-worthy sheen. For all the effort Apple has expended, and for all the cultural excitement around Ted Lasso during its three-season run, the streaming service has won nearly 500 Emmys ... while attracting just 0.2% of total TV viewing in the U.S.

No wonder the company reportedly began reining in its spending spree recently. (Apple did not reply to a request for comment.) "It seems like Apple TV wants to be seen as a platform that's numbers-agnostic," says Ashley Ray, comedian, TV writer, and host of the erstwhile podcast TV I Say. "They wanna be known for being about the creativity and the love of making TV shows, even if nobody's watching them."

The experience of enjoying a new Apple TV+ series can often be a lonely one. Adventurous subscribers might see an in-network ad about something like last summer's Sunny, the timely, genre-bending Rashida Jones series about murderous AI, and give it a shot -- only to find that nobody else is talking about it in their social media feeds or around the company Keurig machine. Sure, the same could be said for hundreds of other streaming series in the post-monoculture era, but most streaming companies aren't consistently landing as much marquee talent for such a limited library. (Apple currently has 259 TV shows and films compared to Netflix's nearly 16,000.)

How is it possible for a streaming service to have as much high-pedigree programming as Apple TV+ does and so relatively few viewers, despite an estimated 25 million paid subscribers? How can shows starring Natalie Portman, Idris Elba, and Colin Farrell launch and even get renewed without ever quite grazing the zeitgeist? How does a show set in the same Monsterverse as Godzilla vs. Kong, and starring Kurt Russell and his roguishly charming son, not become a monster-size hit?

For many perplexed observers, the blame falls squarely on Apple's marketing efforts, or seeming lack thereof.

Programming

An Alternative to Rewriting Memory-Unsafe Code in Rust: the 'Safe C++ Extensions' Proposal (theregister.com) 105

"After two years of being beaten with the memory-safety stick, the C++ community has published a proposal to help developers write less vulnerable code," reports the Register.

"The Safe C++ Extensions proposal aims to address the vulnerable programming language's Achilles' heel, the challenge of ensuring that code is free of memory safety bugs..." Acknowledging the now deafening chorus of calls to adopt memory safe programming languages, developers Sean Baxter, creator of the Circle compiler, and Christian Mazakas, from the C++ Alliance, argue that while Rust is the only popular systems level programming language without garbage collection that provides rigorous memory safety, migrating C++ code to Rust poses problems. "Rust lacks function overloading, templates, inheritance and exceptions," they explain in the proposal. "C++ lacks traits, relocation and borrow checking. These discrepancies are responsible for an impedance mismatch when interfacing the two languages. Most code generators for inter-language bindings aren't able to represent features of one language in terms of the features of another."

Though DARPA is trying to develop better automated C++ to Rust conversion tools, Baxter and Mazakas argue telling veteran C++ developers to learn Rust isn't an answer... The Safe C++ project adds new technology for ensuring memory safety, Baxter explained, and isn't just a reiteration of best practices. "Safe C++ prevents users from writing unsound code," he said. "This includes compile-time intelligence like borrow checking to prevent use-after-free bugs and initialization analysis for type safety." Baxter said that rewriting a project in a different programming language is costly, so the aim here is to make memory safety more accessible by providing the same soundness guarantees as Rust at a lower cost. "With Safe C++, existing code continues to work as always," he explained. "Stakeholders have more control for incrementally opting in to safety."
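
As a minimal illustration of the use-after-free class of bug that borrow checking targets, consider this ordinary C++ snippet (standard C++, not the proposed Safe C++ syntax); today's compilers accept it without complaint:

    #include <vector>
    #include <iostream>

    int main() {
        std::vector<int> v{1, 2, 3};
        const int& first = v[0];    // borrow a reference into the vector's buffer
        v.push_back(4);             // may reallocate, invalidating 'first'
        std::cout << first << "\n"; // use of a dangling reference: undefined behavior
        return 0;
    }

Under a borrow-checking model like the one the proposal describes, holding a live reference to an element while mutating the container would be rejected at compile time instead of becoming a runtime bug.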

The next step, Baxter said, involves greater participation from industry to help realize the Safe C++ project. "The foundations are in: We have fantastic borrow checking and initialization analysis which underpin the soundness guarantees," he said. "The next step is to comprehensively visit all of C++'s features and specify memory-safe versions of them. It's a big effort, but given the importance of reducing C++ security vulnerabilities, it's an effort worth making."

Stats

C Drops, Java (and Rust) Climb in Popularity - as Coders Seek Easy, Secure Languages (techrepublic.com) 108

Last month C dropped from 3rd to 4th in TIOBE's ranking of programming language popularity (which tries to calculate each language's share of search engine results). Java moved up into the #3 position in September, reports TechRepublic, which notes that by comparison October "saw relatively little change" — though percentages of search results increased slightly. "At number one, Python jumped from 20.17% in September to 21.9% in October. In second place, C++ rose from 10.75% in September to 11.6%. In third, Java ascended from 9.45% to 10.51%..."

Is there a larger trend? TIOBE CEO Paul Jansen writes that the need to harvest more data increases demand for fast data manipulation languages. But they also need to be easy to learn ("because the resource pool of skilled software engineers is drying up") and secure ("because of continuous cyber threats.") King of all, Python, is easy to learn and secure, but not fast. Hence, engineers are frantically looking for fast alternatives for Python. C++ is an obvious candidate, but it is considered "not secure" because of its explicit memory management. Rust is another candidate, although not easy to learn. Rust is, thanks to its emphasis on security and speed, making its way to the TIOBE index top 10 now. [It's #13 — up from #20 a year ago]

The cry for fast, data crunching languages is also visible elsewhere in the TIOBE index. The language Mojo [a faster superset of Python designed for accelerated hardware like GPUs]... enters the top 50 for the first time. The fact that this language is only 1 year old and already showing up, makes it a very promising language.

In the last 12 months three languages also fell from the top ten:
  • PHP (dropping from #8 to #15)
  • SQL (dropping from #9 to #11)
  • Assembly language (dropping from #10 to #16)

Programming

'Running Clang in the Browser Using WebAssembly' (wasmer.io) 56

This week (MIT-licensed) WebAssembly runtime Wasmer announced "a major milestone in making any software run with WebAssembly."

The announcement's headline? Running Clang in the browser using WebAssembly... Thanks to the newest release of Wasmer (4.4) and the Wasmer JS SDK (0.8.0) you can now run [compiler front-end] clang anywhere Wasmer runs! This allows compiling C programs from virtually anywhere. Including Javascript and your preferred browser! (we tested Chrome, Safari and Firefox and everything is working like a charm)...

- You can compile C code to WebAssembly easily just using the Wasmer CLI: no toolchains or complex installations needed, install Wasmer and you are ready to go...!

- You can compile C projects directly from JavaScript...!

- We expect online IDEs to start adopting the SDK to allow their users to compile and run C programs in the browser....

Do you want to use clang in your Javascript project? Thanks to our newly released Wasmer JS SDK you can do it easily, in both the browser and Node.js/Bun etc... Wasmer's clang can even optimize the file for you automatically using wasm-opt under the hood (Clang automatically detects if wasm-opt is used, and it will be automatically called when optimizing the file). Imagine using Emscripten without needing its toolchain installed — or even better, imagine running Emscripten in the browser.

The announcement looks to a future of compiling native Python libraries, when "any project depending on LLVM can now be easily compiled to WebAssembly..."

"This is the beginning of an awesome journey, we can't wait to see what you create next with this."
AI

80% of Software Engineers Must Upskill For AI Era By 2027, Gartner Warns (itpro.com) 108

80% of software engineers will need to upskill by 2027 to keep pace with generative AI's growing demands, according to Gartner. The consultancy predicts AI will transform the industry in three phases. Initially, AI tools will boost productivity, particularly for senior developers. Subsequently, "AI-native software engineering" will emerge, with most code generated by AI. Long-term, AI engineering will rise as enterprise adoption increases, requiring a new breed of professionals skilled in software engineering, data science, and machine learning.
Math

Researchers Claim New Technique Slashes AI Energy Use By 95% (decrypt.co) 115

Researchers at BitEnergy AI, Inc. have developed Linear-Complexity Multiplication (L-Mul), a technique that reduces AI model power consumption by up to 95% by replacing energy-intensive floating-point multiplications with simpler integer additions. This method promises significant energy savings without compromising accuracy, but it requires specialized hardware to fully realize its benefits. Decrypt reports: L-Mul tackles the AI energy problem head-on by reimagining how AI models handle calculations. Instead of complex floating-point multiplications, L-Mul approximates these operations using integer additions. So, for example, instead of multiplying 123.45 by 67.89, L-Mul breaks it down into smaller, easier steps using addition. This makes the calculations faster and uses less energy, while still maintaining accuracy. The results seem promising. "Applying the L-Mul operation in tensor processing hardware can potentially reduce 95% energy cost by element wise floating point tensor multiplications and 80% energy cost of dot products," the researchers claim. Without getting overly complicated, what that means is simply this: If a model used this technique, it would require 95% less energy to think, and 80% less energy to come up with new ideas, according to this research.
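
To make the core idea concrete, here is a minimal sketch (C++20, using std::bit_cast) of how integer addition can approximate floating-point multiplication. This is the classic trick of adding IEEE-754 bit patterns, which behave like scaled logarithms -- an illustration of the general principle only, not the paper's actual L-Mul algorithm:

    #include <bit>
    #include <cstdint>
    #include <cstdio>

    // Approximate a*b for positive, normal floats with one integer addition.
    float approx_mul(float a, float b) {
        std::uint32_t ia = std::bit_cast<std::uint32_t>(a);
        std::uint32_t ib = std::bit_cast<std::uint32_t>(b);
        // Adding the bit patterns adds the exponents (and roughly the mantissas);
        // subtracting the bit pattern of 1.0f removes the duplicated exponent bias.
        std::uint32_t ic = ia + ib - 0x3F800000u;
        return std::bit_cast<float>(ic);
    }

    int main() {
        std::printf("exact  : %f\n", 123.45f * 67.89f);            // 8381.02...
        std::printf("approx : %f\n", approx_mul(123.45f, 67.89f)); // close, but not exact
        return 0;
    }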

The algorithm's impact extends beyond energy savings. L-Mul outperforms current 8-bit standards in some cases, achieving higher precision while using significantly less bit-level computation. Tests across natural language processing, vision tasks, and symbolic reasoning showed an average performance drop of just 0.07% -- a negligible tradeoff for the potential energy savings. Transformer-based models, the backbone of large language models like GPT, could benefit greatly from L-Mul. The algorithm seamlessly integrates into the attention mechanism, a computationally intensive part of these models. Tests on popular models such as Llama, Mistral, and Gemma even revealed some accuracy gain on certain vision tasks.

At an operational level, L-Mul's advantages become even clearer. The research shows that multiplying two float8 numbers (the way AI models would operate today) requires 325 operations, while L-Mul uses only 157 -- less than half. "To summarize the error and complexity analysis, L-Mul is both more efficient and more accurate than fp8 multiplication," the study concludes. But nothing is perfect and this technique has a major Achilles' heel: It requires a special type of hardware, so the current hardware isn't optimized to take full advantage of it. Plans for specialized hardware that natively supports L-Mul calculations may be already in motion. "To unlock the full potential of our proposed method, we will implement the L-Mul and L-Matmul kernel algorithms on hardware level and develop programming APIs for high-level model design," the researchers say.

Businesses

Epic Games CEO Tim Sweeney Renews Blast At 'Gatekeeper' Platform Owners (venturebeat.com) 77

An anonymous reader quotes a report from VentureBeat: Epic Games CEO Tim Sweeney opened the Unreal Fest Seattle event today with an update on news that included a blistering criticism of monopolistic platform owners. Sweeney is a big proponent of open platforms and the open metaverse. In fact, he will talk about that subject in a virtual talk at our GamesBeat Next 2024 event on October 28-29 in San Francisco. (You can use this code for a 25% discount: gbn24dean). And so Sweeney continues to pressure the major platforms to give more favorable terms to game developers. He started out on that front by giving a price cut for users of Unreal Engine 5, Epic's tools for making games. For those who release games first or simultaneously on the Epic Games Store, Epic is cutting its royalty rate from 5% to 3.5% for Unreal developers. He noted that Epic is in better financial shape than it was a year ago, when Epic had to lay off a lot of staff. Sweeney said the company spent the last year rebuilding. "We're at a point now where game development is expensive. It's low margin, and game companies are suffering. Apple and Google make way more profit from most games than the developers make themselves, while contributing nothing," Sweeney said.

Sweeney reminisced about programming on early Apple computers, aligning with Steve Wozniak's vision for Apple where users had complete freedom without corporate restrictions. He contrasted this with today's mobile platforms, accusing Apple and Google of acting as gatekeepers that stifle innovation. Among the fights Epic has taken on, he noted, the case with Apple is still an ongoing fight "to open up payments so developers can process payments without Apple mediation and without Apple fees," and he pointed to the "massive victory" against Google in a jury trial late last year.
Programming

Are AI Coding Assistants Really Saving Developers Time? (cio.com) 142

Uplevel provides insights from coding and collaboration data, according to a recent report from CIO magazine — and recently they measured "the time to merge code into a repository [and] the number of pull requests merged" for about 800 developers over a three-month period (comparing the statistics to the previous three months).

Their study "found no significant improvements for developers" using Microsoft's AI-powered coding assistant tool Copilot, according to the article (shared by Slashdot reader snydeq): Use of GitHub Copilot also introduced 41% more bugs, according to the study...

In addition to measuring productivity, the Uplevel study looked at factors in developer burnout, and it found that GitHub Copilot hasn't helped there, either. The amount of working time spent outside of standard hours decreased for both the control group and the test group using the coding tool, but it decreased more when the developers weren't using Copilot.

An Uplevel product manager/data analyst acknowledged to the magazine that there may be other ways to measure developer productivity — but they still consider their metrics solid. "We heard that people are ending up being more reviewers for this code than in the past... You just have to keep a close eye on what is being generated; does it do the thing that you're expecting it to do?"

The article also quotes the CEO of software development firm Gehtsoft, who says they didn't see major productivity gains from LLM-based coding assistants — but did see them introducing errors into code. With different prompts generating different code sections, "It becomes increasingly more challenging to understand and debug the AI-generated code, and troubleshooting becomes so resource-intensive that it is easier to rewrite the code from scratch than fix it."

On the other hand, cloud services provider Innovative Solutions saw significant productivity gains from coding assistants like Claude Dev and GitHub Copilot. And Slashdot reader destined2fail1990 says that while large/complex code bases may not see big gains, "I have seen a notable increase in productivity from using Cursor, the AI powered IDE." Yes, you have to review all the code that it generates, why wouldn't you? But often times it just works. It removes the tedious tasks like querying databases, writing model code, writing forms and processing forms, and a lot more. Some forms can have hundreds of fields and processing those fields along with doing checks for valid input is time consuming, but can be automated effectively using AI.
This prompted an interesting discussion on the original story submission. Slashdot reader bleedingobvious responded: Cursor/Claude are great BUT the code produced is almost never great quality. Even given these tools, the junior/intern teams still cannot outpace the senior devs. Great for learning, maybe, but the productivity angle not quite there.... yet.

It's damned close, though. Give it 3-6 months.

And Slashdot reader abEeyore posted: I suspect that the results are quite a bit more nuanced than that. I expect that it is, even outside of the mentioned code review, a shift in where and how the time is spent, and not necessarily in how much time is spent.
Agree? Disagree? Share your own experiences in the comments.

And are developers really saving time with AI coding assistants?
AI

Can AI Developers Be Held Liable for Negligence? (lawfaremedia.org) 123

Bryan Choi, an associate professor of law and computer science focusing on software safety, proposes shifting AI liability onto the builders of the systems: To date, most popular approaches to AI safety and accountability have focused on the technological characteristics and risks of AI systems, while averting attention from the workers behind the curtain responsible for designing, implementing, testing, and maintaining such systems...

I have previously argued that a negligence-based approach is needed because it directs legal scrutiny on the actual persons responsible for creating and managing AI systems. A step in that direction is found in California's AI safety bill, which specifies that AI developers shall articulate and implement protocols that embody the "developer's duty to take reasonable care to avoid producing a covered model or covered model derivative that poses an unreasonable risk of causing or materially enabling a critical harm" (emphasis added). Although tech leaders have opposed California's bill, courts don't need to wait for legislation to allow negligence claims against AI developers. But how would negligence work in the AI context, and what downstream effects should AI developers anticipate?

The article suggests two possibilities. Classifying AI developers as ordinary employees leaves employers sharing liability for negligent acts (giving them "strong incentives to obtain liability insurance policies and to defend their employees against legal claims.") But AI developers could also be treated as practicing professionals (like physicians and attorneys). "In this regime, each AI professional would likely need to obtain their own individual or group malpractice insurance policies." AI is a field that perhaps uniquely seeks to obscure its human elements in order to magnify its technical wizardry. The virtue of the negligence-based approach is that it centers legal scrutiny back on the conduct of the people who build and hype the technology. To be sure, negligence is limited in key ways and should not be viewed as a complete answer to AI governance. But fault should be the default and the starting point from which all conversations about AI accountability and AI safety begin.
Thanks to long-time Slashdot reader david.emery for sharing the article.
Supercomputing

IBM Opens Its Quantum-Computing Stack To Third Parties (arstechnica.com) 7

An anonymous reader quotes a report from Ars Technica, written by John Timmer: [P]art of the software stack that companies are developing to control their quantum hardware includes software that converts abstract representations of quantum algorithms into the series of commands needed to execute them. IBM's version of this software is called Qiskit (although it was made open source and has since been adopted by other companies). Recently, IBM made a couple of announcements regarding Qiskit, both benchmarking it in comparison to other software stacks and opening it up to third-party modules. [...] Right now, the company is supporting six third-party Qiskit functions that break down into two categories.

The first can be used as stand-alone applications and are focused on providing solutions to problems for users who have no expertise programming quantum computers. One calculates the ground-state energy of molecules, and the second performs optimizations. But the remainder are focused on letting users get more out of existing quantum hardware, which tends to be error prone. But some errors occur more often than others. These errors can be due to specific quirks of individual hardware qubits or simply because some specific operations are more error prone than others. These can be handled in two different ways. One is to design the circuit being executed to avoid the situations that are most likely to produce an error. The second is to examine the final state of the algorithm to assess whether errors likely occurred and adjust to compensate for any. And third parties are providing software that can handle both of these.

One of those third parties is Q-CTRL, and we talked to its CEO, Michael Biercuk. "We build software that is really focused on everything from the lowest level of hardware manipulation, something that we call quantum firmware, up through compilation and strategies that help users map their problem onto what has to be executed on hardware," he told Ars. (Q-CTRL is also providing the optimization tool that's part of this Qiskit update.) "We're focused on suppressing errors everywhere that they can occur inside the processor," he continued. "That means the individual gate or logic operations, but it also means the execution of the circuit. There are some errors that only occur in the whole execution of a circuit as opposed to manipulating an individual quantum device." Biercuk said Q-CTRL's techniques are hardware agnostic and have been demonstrated on machines that use very different types of qubits, like trapped ions. While the sources of error on the different hardware may be distinct, the manifestations of those problems are often quite similar, making it easier for Q-CTRL's approach to work around the problems.

Those work-arounds include things like altering the properties of the microwave pulses that perform operations on IBM's hardware, and replacing the portion of Qiskit that converts an algorithm to a series of gate operations. The software will also perform operations that suppress errors that can occur when qubits are left idle during the circuit execution. As a result of all these differences, he claimed that using Q-CTRL's software allows the execution of more complex algorithms than are possible via Qiskit's default compilation and execution. "We've shown, for instance, optimization with all 156 qubits on [an IBM] system, and importantly -- I want to emphasize this word -- successful optimization," Biercuk told Ars. "What it means is you run it and you get the right answer, as opposed to I ran it and I kind of got close."

Firefox

Zen Browser: a New Firefox-based Alternative to Chromium Browsers (zen-browser.app) 80

First released on July 11th, the Firefox-based Zen browser is "taking a different approach to the user interface," according to the blog It's FOSS.

The Register says the project "reminds us strongly of Arc, a radical Chromium-based web browser... to modernize the standard web browser UI by revising some fundamental assumptions." [Arc] removes the URL bar from front and center, gets rid of the simple flat list of tabs, and so on. Zen is trying to do some similar things, but in a slightly more moderate way — and it's doing it on the basis of Mozilla's Firefox codebase... Instead of the tired old horizontal tab bar you'll see in both Firefox and Chrome, Zen implements its own tab bar... By default, this tab bar is narrow and just shows page icons — but there are some extra controls at the bottom of the sidebar, one of which expands the sidebar to show page titles too. For us, it worked better than Vivaldi's fancier sidebar.
The article concludes it's "a new effort to modernize web browsing by bringing tiling, workspaces, and so on — and it's blissfully free of Google code." One Reddit comment swooned over Zen's "extraordinary" implementation of a distraction-free "Compact Mode" (hiding things like the sidebar and top bar). And It's FOSS described it as a "tranquil" browser, "written using CSS, C++, JavaScript, and a few other programming languages, with a community of over 30 people contributing to it." The layout of the interface felt quite clean to me; there were handy buttons on the top to control the webpage, manage extensions, and a menu with additional options... The split-view functionality allows you to open up two different tabs on the same screen, allowing for easy multitasking when working across different webpages... I split two tabs, but in my testing, I could split over 10+ tabs... If you have a larger monitor, then you are in for a treat...

The Zen Sidebar feature... can run web apps alongside any open tabs. This can be helpful in situations where you need to quickly access a service like a note-taking app, Wikipedia, Telegram, and others.

On the customization side of things, you will find that Zen Browser supports everything that Firefox does, be it the settings, adding new extensions/themes/plugins, etc.

The Register points out it's easy to give it a try. "Being based on Firefox means that as well as running existing extensions, it can connect to Mozilla's Sync service and pick up not just your bookmarks, but also your tabs from other instances."

And beyond all that, "There's just something satisfying about switching browsers every now and again..." argues the tech site Pocket-Lint: Zen Browser's vertical tabs layout is superb and feels much better than anything available in standard Firefox. [Firefox recently offered vertical tabs and a new sidebar experience in Nightly/Firefox Labs 131.] The tab bar can be set to automatically hide and show up whenever you hover near it, and it also contains quick access buttons to bookmarks, settings, and browsing history. The tab bar also contains a profile switcher...

One of the greatest parts of the Zen Browser is the community that has popped up around it. At its heart, Zen Browser is a community-driven project... Zen Browser themes are aesthetic and functional tweaks to the UI. While there aren't a ton available right now, the ones that are show a lot of promise for the browser's future... I've personally gotten great use out of the Super URL Bar theme, which makes your URL bar expand and become the focus of your screen while typing in it... There's a lot you can do to make Zen Browser feel nearly exactly like what you want it to feel like.

The "Business Standard calls it "an open-source alternative to Chromium-based browsers," adding "Where Zen truly shines is it offers a range of customisation, tab management, and workspace management..." Their theme store offers a range of options, including modifications to the bookmark toolbar, a floating URL bar, private mode theming, and removal of browser padding. In addition to these, users can also choose from custom colour schemes and built-in theming options... The Sidebar is another neat feature which allows you to open tabs in a smaller, smartphone-sized window. You can view websites in mobile layout by using this panel.
It's "focused on being always at the latest version of Firefox," according to its official site, noting that Firefox is known for its security features. But then, "We also have additional security features like https only built into Zen Browser to help keep you safe online." And it also promises automated Releases "to ensure security."

It's FOSS adds that you can get Zen Browser for Linux, Windows, and macOS from its official website (adding "They also offer it on the Flathub store for further accessibility on Linux.")

And its source code is available on GitHub.
Programming

'Compile and Run C in JavaScript', Promises Bun (thenewstack.io) 54

The JavaScript runtime Bun is a Node.js/Deno alternative (that's also a bundler/test runner/package manager).

And Bun 1.1.28 now includes experimental support for "compiling and running native C from JavaScript," according to this report from The New Stack: "From compression to cryptography to networking to the web browser you're reading this on, the world runs on C," wrote Jarred Sumner, creator of Bun. "If it's not written in C, it speaks the C ABI (C++, Rust, Zig, etc.) and is available as a C library. C and the C ABI are the past, present, and future of systems programming." This is a low-boilerplate way to use C libraries and system libraries from JavaScript, he said, adding that this feature allows the same project that runs JavaScript to also run C without a separate build step... "It's good for glue code that binds C or C-like libraries to JavaScript. Sometimes, you want to use a C library or system API from JavaScript, and that library was never meant to be used from JavaScript," Sumner added.

It's currently possible to achieve this by compiling to WebAssembly or writing an N-API (napi) addon or V8 C++ API library addon, the team explained. But both are suboptimal... WebAssembly can do this but its isolated memory model comes with serious tradeoffs, the team wrote, including an inability to make system calls and a requirement to clone everything. "Modern processors support about 280 TB of addressable memory (48 bits). WebAssembly is 32-bit and can only access its own memory," Sumner wrote. "That means by default, passing strings and binary data between JavaScript and WebAssembly must clone every time. For many projects, this negates any performance gain from leveraging WebAssembly."

The latest version of Bun, released Friday, builds on this by adding N-API (napi) support to cc [Bun's C compiler, which uses TinyCC to compile the C code]. "This makes it easier to return JavaScript strings, objects, arrays and other non-primitive values from C code," wrote Sumner. "You can continue to use types like int, float, double to send & receive primitive values from C code, but now you can also use N-API types! Also, this works when using dlopen to load shared libraries with bun:ffi (such as Rust or C++ libraries with C ABI exports)....

"TinyCC compiles to decently performant C, but it won't do advanced optimizations that Clang or GCC does like autovectorization or very specialized CPU instructions," Sumner wrote. "You probably won't get much of a performance gain from micro-optimizing small parts of your codebase through C, but happy to be proven wrong!"

Security

CISA Boss: Makers of Insecure Software Are the Real Cyber Villains (theregister.com) 120

Software developers who ship buggy, insecure code are the true baddies in the cyber crime story, Jen Easterly, boss of the US government's Cybersecurity and Infrastructure Security Agency, has argued. From a report: "The truth is: Technology vendors are the characters who are building problems" into their products, which then "open the doors for villains to attack their victims," declared Easterly during a Wednesday keynote address at Mandiant's mWise conference. Easterly also implored the audience to stop "glamorizing" crime gangs with fancy poetic names. How about "Scrawny Nuisance" or "Evil Ferret," Easterly suggested.

Even calling security holes "software vulnerabilities" is too lenient, she added. This phrase "really diffuses responsibility. We should call them 'product defects,'" Easterly said. And instead of automatically blaming victims for failing to patch their products quickly enough, "why don't we ask: Why does software require so many urgent patches? The truth is: We need to demand more of technology vendors."
