Open Source

Linus Torvalds Talks About Rust Adoption and AI (zdnet.com) 48

"At The Linux Foundation's Open Source Summit China conference, Linus Torvalds and his buddy Dirk Hohndel, Verizon's Head of the Open Source Program Office, once more chatted about Linux development and related issues," reports ZDNet: Torvalds: "Later this year, we will have the 20th anniversary of the real-time Linux project. This is a project that literally started 20 years ago, and the people involved are finally at that point where they feel like it is done... well, almost done. They're still tweaking the last things, but they hope it will soon be ready to be completely merged in the upstream kernel this year... You'd think that all the basics would have been fixed long ago, but they're not. We're still dealing with basic issues such as memory management...."

Switching to a more modern topic, the introduction of the Rust language into Linux, Torvalds is disappointed that its adoption isn't going faster. "I was expecting updates to be faster, but part of the problem is that old-time kernel developers are used to C and don't know Rust. They're not exactly excited about having to learn a new language that is, in some respects, very different. So there's been some pushback on Rust." On top of that, Torvalds commented, "Another reason has been the Rust infrastructure itself has not been super stable...."

The pair then moved on to the hottest of modern tech topics: AI. While Torvalds is skeptical about the current AI hype, he is hopeful that AI tools could eventually aid in code review and bug detection. In the meantime, though, Torvalds is happy about AI's side effects. For example, he said, "When AI came in, it was wonderful, because Nvidia got much more involved in the kernel. Nvidia went from being on my list of companies who are not good to my list of companies who are doing really good work."

Intel

Intel Discontinues High-Speed, Open-Source H.265/HEVC Encoder Project (phoronix.com) 37

Phoronix's Michael Larabel reports: As part of Intel's Scalable Video Technology (SVT) initiative they had been developing SVT-HEVC as a BSD-licensed high performance H.265/HEVC video encoder optimized for Xeon Scalable and Xeon D processors. But recently they've changed course and the project has been officially discontinued. [...] The SVT-AV1 project was handed off to the Alliance for Open Media (AOMedia) a while ago, with one of its lead maintainers having joined Meta from Intel two years ago. SVT-AV1 continues to thrive outside Intel's borders, while SVT-HEVC (and SVT-VP9) remained Intel open-source projects; now, at least officially, SVT-HEVC has ended.

SVT-HEVC hadn't seen a new release since 2021 and there are already several great open-source H.265 encoders out there like x265 and Kvazaar. But as of a few weeks ago, SVT-HEVC upstream is now discontinued. The GitHub repository was put into a read-only state [with a discontinuation notice]. Meanwhile SVT-VP9 doesn't have any discontinuation notice at this time. The SVT-VP9 GitHub repository remains under Intel's Open Visual Cloud account although it hasn't seen any new commits in four months and the last tagged release was back in 2020.

Open Source

Can the Linux Foundation's 'Open Model Initiative' Build AI-Powering LLMs Without Restrictive Licensing? (infoworld.com) 7

"From the beginning, we have believed that the right way to build these AI models is with open licenses," says the Open Model Initiative. SD Times quotes them as saying that open licenses "allow creatives and businesses to build on each other's work, facilitate research, and create new products and services without restrictive licensing constraints."

Phoronix explains the community initiative "came about over the summer to help advance open-source AI models while now is becoming part of the Linux Foundation to further their cause." As part of the Linux Foundation, the OMI will be working to establish a governance framework and working groups, create shared standards to enhance model interoperability and metadata practices, develop a transparent dataset for training and captioning, complete an alpha test model for targeted red teaming, and release an alpha version of a new model with fine-tuning scripts before the end of 2024.
The group was established "in response to a number of recent decisions by creators of popular open-source models to alter their licensing terms," reports Silicon Angle: The creators highlighted the recent licensing change announced by Stability AI Ltd., regarding its popular image-generation model Stable Diffusion 3 (SD3). That model had previously been entirely free and open, but the changes introduced a monthly fee structure and imposed limitations on its usage. Stability AI was also criticized for the lack of clarity around its licensing terms, but it isn't the only company to have introduced licensing restrictions on previously free software. The OMI intends to eliminate all barriers to enterprise adoption by focusing on training and developing AI models with "irrevocable open licenses without deletion clauses or recurring costs for access," the Linux Foundation said.
InfoWorld also notes "the unavailability of source code and the license restrictions from LLM providers such as Meta, Mistral and Anthropic, who put caveats in the usage policies of their 'open source' models." Meta, for instance, does provide the rights to use Llama models royalty free without any license, but does not provide the source code, according to [strategic research firm] Everest Group's AI practice leader Suseel Menon. "Meta also adds a clause: 'If, on the Meta Llama 3, monthly active users of the products or services is greater than 700 million monthly active users, you must request a license from Meta.' This clause, combined with the unavailability of the source code, raises the question if the term open source should apply to Llama's family of models," Menon explained....

The OMI's objectives and vision received mixed reactions from analysts. While Amalgam Insights' chief analyst Hyoun Park believes that the OMI will lead to the development of more predictable and consistent standards for open source models, so that these models can potentially work with each other more easily, Everest Group's Malik believes that the OMI may not be able to stand before the might of vendors such as Meta and Anthropic. "Developing LLMs is highly compute intensive and has cost big tech giants and start-ups billions in capital expenditure to achieve the scale they currently have with their open-source and proprietary LLMs," Malik said, adding that this could be a major challenge for community-based LLMs.

The AI practice leader also pointed out that previous attempts at a community-based LLM have not garnered much adoption, as models developed by larger entities tend to perform better on most metrics... However, Malik said that the OMI might be able to find appropriate niches within the content development space (2D/3D image generation, adaptation, visual design, editing, etc.) as it begins to build its models... One of the other use cases for the OMI's community LLMs is to see their use as small language models (SLMs), which can offer specific functionality at high effectiveness or functionality that is restricted to unique applications or use cases, analysts said. Currently, the OMI's GitHub page has three repositories, all under the Apache 2.0 license.

AI

'AI-Powered Remediation': GitHub Now Offers 'Copilot Autofix' Suggestions for Code Vulnerabilities (infoworld.com) 18

InfoWorld reports that Microsoft-owned GitHub "has unveiled Copilot Autofix, an AI-powered software vulnerability remediation service."

The feature became available Wednesday as part of the GitHub Advanced Security (or GHAS) service: "Copilot Autofix analyzes vulnerabilities in code, explains why they matter, and offers code suggestions that help developers fix vulnerabilities as fast as they are found," GitHub said in the announcement. GHAS customers on GitHub Enterprise Cloud already have Copilot Autofix included in their subscription. GitHub has enabled Copilot Autofix by default for these customers in their GHAS code scanning settings.

Beginning in September, Copilot Autofix will be offered for free in pull requests to open source projects.

During the public beta, which began in March, GitHub found that developers using Copilot Autofix were fixing code vulnerabilities more than three times faster than those doing it manually, which the company says demonstrates how AI agents such as Copilot Autofix can radically simplify and accelerate software development.

"Since implementing Copilot Autofix, we've observed a 60% reduction in the time spent on security-related code reviews," says one principal engineer quoted in GitHub's announcement, "and a 25% increase in overall development productivity."

The announcement also notes that Copilot Autofix "leverages the CodeQL engine, GPT-4o, and a combination of heuristics and GitHub Copilot APIs." Code scanning tools detect vulnerabilities, but they don't address the fundamental problem: remediation takes security expertise and time, two valuable resources in critically short supply. In other words, finding vulnerabilities isn't the problem. Fixing them is...

Developers can keep new vulnerabilities out of their code with Copilot Autofix in the pull request, and now also pay down the backlog of security debt by generating fixes for existing vulnerabilities... Fixes can be generated for dozens of classes of code vulnerabilities, such as SQL injection and cross-site scripting, which developers can dismiss, edit, or commit in their pull request.... For developers who aren't necessarily security experts, Copilot Autofix is like having the expertise of your security team at your fingertips while you review code...
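To make those fix classes concrete, here is a minimal, illustrative sketch (in Python with sqlite3) of the kind of remediation a SQL injection finding typically receives: replacing string concatenation with a parameterized query. This is a generic example, not actual Copilot Autofix output, and the function and table names are invented for illustration.

```python
# Illustrative only: a generic SQL injection bug and the standard fix a
# reviewer (or an autofix tool) would propose. Not actual Autofix output.
import sqlite3

def find_user_unsafe(conn: sqlite3.Connection, username: str):
    # Vulnerable: user input is concatenated into the SQL statement, so a
    # value like "x' OR '1'='1" changes the meaning of the query.
    query = "SELECT id, email FROM users WHERE name = '" + username + "'"
    return conn.execute(query).fetchall()

def find_user_fixed(conn: sqlite3.Connection, username: str):
    # Remediated: a parameterized query keeps the input as data.
    query = "SELECT id, email FROM users WHERE name = ?"
    return conn.execute(query, (username,)).fetchall()

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER, name TEXT, email TEXT)")
    conn.execute("INSERT INTO users VALUES (1, 'alice', 'alice@example.com')")
    payload = "x' OR '1'='1"
    print("unsafe:", find_user_unsafe(conn, payload))   # leaks every row
    print("fixed: ", find_user_fixed(conn, payload))    # returns nothing
```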

As the global home of the open source community, GitHub is uniquely positioned to help maintainers detect and remediate vulnerabilities so that open source software is safer and more reliable for everyone. We firmly believe that it's highly important to be both a responsible consumer of open source software and contributor back to it, which is why open source maintainers can already take advantage of GitHub's code scanning, secret scanning, dependency management, and private vulnerability reporting tools at no cost. Starting in September, we're thrilled to add Copilot Autofix in pull requests to this list and offer it for free to all open source projects...

While responsibility for software security continues to rest on the shoulders of developers, we believe that AI agents can help relieve much of the burden.... With Copilot Autofix, we are one step closer to our vision where a vulnerability found means a vulnerability fixed.

The Internet

Techdirt's Mike Masnick Joins the Bluesky Board To Support a 'More Open, Decentralized Internet' (techdirt.com) 18

Mike Masnick, a semi-regular Slashdot contributor and founder of the tech blog Techdirt, is joining the board of Bluesky, where he "will be providing advice and guidance to the company to help it achieve its vision of a more open, more competitive, more decentralized online world." Masnick writes: In the nearly three decades that I've been writing Techdirt I've been writing about what is happening in the world of the internet, but also about how much better the internet can be. That won't change. I will still be writing about what is happening and where I believe we should be going. But given that there are now people trying to turn some of that better vision into a reality, I cannot resist this opportunity to help them achieve that goal. The early internet had tremendous promise as a decentralized system that enabled anyone to build what they wanted on a global open network, opening up all sorts of possibilities for human empowerment and creativity. But over the last couple of decades, the internet has moved away from that democratizing promise. Instead, it has been effectively taken over by a small number of giant companies with centralized, proprietary, closed systems that have supplanted the more open network we were promised.

There are, of course, understandable reasons why those centralized systems have been successful, such as by providing a more user-friendly experience on the front-end. But there was a price to pay: losing user autonomy, privacy and the benefits of decentralization (not to mention losing a highly dynamic, competitive internet). The internet need not be so limited, and over the years I've tried to encourage people and companies to make different choices to return to the original promise and benefits of openness. With Bluesky, we now have one company who is trying.
"Mike's work has been an inspiration to us from the start," says Jay Graber, CEO of Bluesky. "Having him join our board feels like a natural progression of our shared vision for a more open internet. His perspective will help ensure we're building something that truly serves users as we continue to evolve Bluesky and the AT Protocol."
AI

NIST Releases an Open-Source Platform for AI Safety Testing (scmagazine.com) 4

America's National Institute of Standards and Technology (NIST) has released a new open-source software tool called Dioptra for testing the resilience of machine learning models to various types of attacks.

"Key features that are new from the alpha release include a new web-based front end, user authentication, and provenance tracking of all the elements of an experiment, which enables reproducibility and verification of results," a NIST spokesperson told SC Media: Previous NIST research identified three main categories of attacks against machine learning algorithms: evasion, poisoning and oracle. Evasion attacks aim to trigger an inaccurate model response by manipulating the data input (for example, by adding noise), poisoning attacks aim to impede the model's accuracy by altering its training data, leading to incorrect associations, and oracle attacks aim to "reverse engineer" the model to gain information about its training dataset or parameters, according to NIST.

The free platform enables users to determine to what degree attacks in the three categories mentioned will affect model performance, and can also be used to gauge the effectiveness of various defenses such as data sanitization or more robust training methods.

The open-source testbed has a modular design to support experimentation with different combinations of factors such as different models, training datasets, attack tactics and defenses. The newly released 1.0.0 version of Dioptra comes with a number of features to maximize its accessibility to first-party model developers, second-party model users or purchasers, third-party model testers or auditors, and researchers in the ML field alike. Along with its modular architecture design and user-friendly web interface, Dioptra 1.0.0 is also extensible and interoperable with Python plugins that add functionality... Dioptra tracks experiment histories, including inputs and resource snapshots that support traceable and reproducible testing, which can unveil insights that lead to more effective model development and defenses.

NIST also published final versions of three "guidance" documents, according to the article. "The first tackles 12 unique risks of generative AI along with more than 200 recommended actions to help manage these risks. The second outlines Secure Software Development Practices for Generative AI and Dual-Use Foundation Models, and the third provides a plan for global cooperation in the development of AI standards."

Thanks to Slashdot reader spatwei for sharing the news.
Government

Why DARPA is Funding an AI-Powered Bug-Spotting Challenge (msn.com) 43

Somewhere in America's Defense Department, the DARPA R&D agency is running a two-year contest to write an AI-powered program "that can scan millions of lines of open-source code, identify security flaws and fix them, all without human intervention," reports the Washington Post. [Alternate URL here.]

But as they see it, "The contest is one of the clearest signs to date that the government sees flaws in open-source software as one of the country's biggest security risks, and considers artificial intelligence vital to addressing it." Free open-source programs, such as the Linux operating system, help run everything from websites to power stations. The code isn't inherently worse than what's in proprietary programs from companies like Microsoft and Oracle, but there aren't enough skilled engineers tasked with testing it. As a result, poorly maintained free code has been at the root of some of the most expensive cybersecurity breaches of all time, including the 2017 Equifax disaster that exposed the personal information of half of all Americans. The incident, which led to the largest-ever data breach settlement, cost the company more than $1 billion in improvements and penalties.

If people can't keep up with all the code being woven into every industrial sector, DARPA hopes machines can. "The goal is having an end-to-end 'cyber reasoning system' that leverages large language models to find vulnerabilities, prove that they are vulnerabilities, and patch them," explained one of the advising professors, Arizona State's Yan Shoshitaishvili.... Some large open-source projects are run by near-Wikipedia-size armies of volunteers and are generally in good shape. Some have maintainers who are given grants by big corporate users that turn it into a job. And then there is everything else, including programs written as homework assignments by authors who barely remember them.
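That "find vulnerabilities, prove that they are vulnerabilities, and patch them" loop can be sketched in miniature. The toy Python below flags a dangerous eval() call, demonstrates it is exploitable, rewrites it to a safe parser, and re-verifies the fix; real AIxCC entrants do this with LLMs, fuzzers, and program analysis at vastly larger scale, so treat this only as an outline of the control flow, with all names invented.

```python
# Toy outline of a find -> prove -> patch -> verify loop for one known-bad
# pattern. Not how the DARPA contest systems work internally.
import re

SNIPPET = "def parse(user_input):\n    return eval(user_input)\n"

def find(code):
    # "find": flag a dangerous construct
    return [m.start() for m in re.finditer(r"\beval\(", code)]

def prove(code):
    # "prove": show the flaw is reachable with a concrete exploit input
    namespace = {}
    exec(code, namespace)
    exploit = "__import__('os').getpid()"     # arbitrary code runs
    return namespace["parse"](exploit) is not None

def patch(code):
    # "patch": swap the dangerous call for a safe equivalent
    return code.replace("eval(", "__import__('ast').literal_eval(")

def verify(code):
    # "verify": the same exploit input must now be rejected
    namespace = {}
    exec(code, namespace)
    try:
        namespace["parse"]("__import__('os').getpid()")
        return False
    except (ValueError, SyntaxError):
        return True

assert find(SNIPPET) and prove(SNIPPET)
fixed = patch(SNIPPET)
assert verify(fixed)
print("vulnerability found, proven, patched, and re-verified")
```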

"Open source has always been 'Use at your own risk,'" said Brian Behlendorf, who started the Open Source Security Foundation after decades of maintaining a pioneering free server software, Apache, and other projects at the Apache Software Foundation. "It's not free as in speech, or even free as in beer," he said. "It's free as in puppy, and it needs care and feeding."

40 teams entered the contest, according to the article, and seven received $1 million in funding to continue to the next round, with the finalists to be announced at this year's Def Con.

"Under the terms of the DARPA contest, all finalists must release their programs as open source," the article points out, "so that software vendors and consumers will be able to run them."
Programming

Go Tech Lead Russ Cox Steps Down to Focus on AI-Powered Open-Source Contributor Bot (google.com) 12

Thursday Go's long-time tech lead Russ Cox made an announcement: Starting September 1, Austin Clements will be taking over as the tech lead of Go: both the Go team at Google and the overall Go project. Austin is currently the tech lead for what we sometimes call the "Go core", which encompasses compiler toolchain, runtime, and releases. Cherry Mui will be stepping up to lead those areas.

I am not leaving the Go project, but I think the time is right for a change... I will be shifting my focus to work more on Gaby [or "Go AI bot," an open-source contributor agent] and Oscar [an open-source contributor agent architecture], trying to make useful contributions in the Go issue tracker to help all of you work more productively. I am hopeful that work on Oscar will uncover ways to help open source maintainers that will be adopted by other projects, just like some of Go's best ideas have been adopted by other projects. At the highest level, my goals for Oscar are to build something useful, learn something new, and chart a path for other projects. These are the same broad goals I've always had for our work on Go, so in that sense Oscar feels like a natural continuation.

The post notes that new tech lead Austin Clements "has been working on Go at Google since 2014" (and Mui since 2016). "Their judgment is superb and their knowledge of Go and the systems it runs on both broad and deep. When I have general design questions or need to better understand details of the compiler, linker, or runtime, I turn to them." It's important to remember that tech lead — like any position of leadership — is a service role, not an honorary title. I have been leading the Go project for over 12 years, serving all of you, and trying to create the right conditions for all of you to do your best work. Large projects like Go absolutely benefit from stable leadership, but they can also benefit from leadership changes. New leaders bring new strengths and fresh perspectives. For Go, I think 12+ years of one leader is enough stability; it's time for someone new to serve in this role.

In particular, I don't believe that the "BDFL" (benevolent dictator for life) model is healthy for a person or a project. It doesn't create space for new leaders. It's a single point of failure. It doesn't give the project room to grow. I think Python benefited greatly from Guido stepping down in 2018 and letting other people lead, and I've had in the back of my mind for many years that we should have a Go leadership change eventually....

I am going to consciously step back from decision making and create space for Austin and the others to step forward, but I am not disappearing. I will still be available to talk about Go designs, review CLs, answer obscure history questions, and generally help and support you all in whatever way I can. I will still file issues and send CLs from time to time, I have been working on a few potential new standard libraries, I will still advocate for Go across the industry, and I will be speaking about Go at GoLab in Italy in November...

I am incredibly proud of the work we have all accomplished together, and I am confident in the leaders both on the Go team at Google and in the Go community. You are all doing remarkable work, and I know you will continue to do that.

Open Source

Linux Hits Another Desktop Market Share Record 171

According to Statcounter, Linux use hit another all-time high in July. For July 2024, the statistics website shows Linux at 4.45%, climbing 0.4 percentage points from June's 4.05% high.

Is 2024 truly the year of Linux on the desktop?
Open Source

A New White House Report Embraces Open-Source AI 15

An anonymous reader quotes a report from ZDNet: According to a new statement, the White House realizes open source is key to artificial intelligence (AI) development -- much like many businesses using the technology. On Tuesday, the National Telecommunications and Information Administration (NTIA) issued a report supporting open-source and open models to promote innovation in AI while emphasizing the need for vigilant risk monitoring. The report recommends that the US continue to support AI openness while working on new capabilities to monitor potential AI risks but refrain from restricting the availability of open model weights.
Open Source

Mike McQuaid on 15 Years of Homebrew and Protecting Open-Source Maintainers (thenextweb.com) 37

Despite multiple methods available across major operating systems for installing and updating applications, there remains "no real clear answer to 'which is best,'" reports The Next Web. Each system faces unique challenges such as outdated packages, high fees, and policy restrictions.

Enter Homebrew.

"Initially created as an option for developers to keep the dependencies they often need for developing, testing, and running their work, Homebrew has grown to be so much more in its 15-year history." Created in 2009, Homebrew has become a leading solution for macOS, integrating with MDM tools through its enterprise-focused extension, Workbrew, to balance user freedom with corporate security needs, while maintaining its open-source roots under the guidance of Mike McQuaid. In an interview with The Next Web's Chris Chinchilla, project leader Mike McQuaid talks about the challenges and responsibilities of maintaining one of the world's largest open-source projects: As with anything that attracts plenty of use and attention, Homebrew also attracts a lot of mixed and extreme opinions, and processing and filtering those requires a tough outlook, something that Mike has spoken about in numerous interviews and at conferences. "As a large project, you get a lot of hate from people. Either people are just frustrated because they hit a bug or because you changed something, and they didn't read the release notes, and now something's broken," Mike says when I ask him about how he copes with the constant influx of communication. "There are a lot of entitled, noisy users in open source who contribute very little and like to shout at people and make them feel bad. One of my strengths is that I have very little time for those people, and I just insta-block them or close their issues."

More crucially, an open-source project is often managed and maintained by a group of people. Homebrew has several dozen maintainers and nearly one thousand total contributors. Mike explains that all of these people also deserve to be treated with respect by users, "I'm also super protective of my maintainers, and I don't want them to be treated that way either." But despite these features and its widespread use, one area Homebrew has always lacked is the ability to work well with teams of users. This is where Workbrew, a company Mike founded with two other Homebrew maintainers, steps in. [...] Workbrew ties together various Homebrew features with custom glue to create a workflow for setting up and maintaining Mac machines. It adds new features that core Homebrew maintainers had no interest in adding, such as admin and reporting dashboards for a computing fleet, while bringing more general improvements to the core project.

Bearing in mind Mike's motivation to keep Homebrew in the "traditional open source" model, I asked him how he intended to keep the needs of the project and the business separated and satisfied. "We've seen a lot of churn in the last few years from companies that made licensing decisions five or ten years ago, which have now changed quite dramatically and have generated quite a lot of community backlash," Mike said. "I'm very sensitive to that, and I am a little bit of an open-source purist in that I still consider the open-source initiative's definition of open source to be what open source means. If you don't comply with that, then you can be another thing, but I think you're probably not open source."

And regarding keeping his and his co-founder's dual roles separated, Mike states, "I'm the CTO and co-founder of Workbrew, and I'm the project leader of Homebrew. The project leader with Homebrew is an elected position." Every year, the maintainers and the community elect a candidate. "But then, with the Homebrew maintainers working with us on Workbrew, one of the things I say is that when we're working on Workbrew, I'm your boss now, but when we work on Homebrew, I'm not your boss," Mike adds. "If you think I'm saying something and it's a bad idea, you tell me it's a bad idea, right?" The company is keeping its early progress in a private beta for now, but you can expect an announcement soon. As for what's happening for Homebrew? Well, in the best "open source" way, that's up to the community and always will be.

GNU is Not Unix

After Crowdstrike Outage, FSF Argues There's a Better Way Forward (fsf.org) 139

"As free software activists, we ought to take the opportunity to look at the situation and see how things could have gone differently," writes FSF campaigns manager Greg Farough: Let's be clear: in principle, there is nothing ethically wrong with automatic updates so long as the user has made an informed choice to receive them... Although we can understand how the situation developed, one wonders how wise it is for so many critical services around the world to hedge their bets on a single distribution of a single operating system made by a single stupefyingly predatory monopoly in Redmond, Washington. Instead, we can imagine a more horizontal structure, where this airline and this public library are using different versions of GNU/Linux, each with their own security teams and on different versions of the Linux(-libre) kernel...

As of our writing, we've been unable to ascertain just how much access to the Windows kernel source code Microsoft granted to CrowdStrike engineers. (For another thing, the root cause of the problem appears to have been an error in a configuration file.) But this being the free software movement, we could guarantee that all security engineers and all stakeholders could have equal access to the source code, proving the old adage that "with enough eyes, all bugs are shallow." There is no good reason to withhold code from the public, especially code so integral to the daily functioning of so many public institutions and businesses. In a cunning PR spin, it appears that Microsoft has started blaming the incident on third-party firms' access to kernel source and documentation. Translated out of Redmond-ese, the point they are trying to make amounts to "if only we'd been allowed to be more secretive, this wouldn't have happened...!"

We also need to see that calling for a diversity of providers of nonfree software that are mere front ends for "cloud" software doesn't solve the problem. Correcting it fully requires switching to free software that runs on the user's own computer. The Free Software Foundation is often accused of being utopian, but we are well aware that moving airlines, libraries, and every other institution affected by the CrowdStrike outage to free software is a tremendous undertaking. Given free software's distinct ethical advantage, not to mention the embarrassing damage control underway from both Microsoft and CrowdStrike, we think the move is a necessary one. The more public an institution, the more vitally it needs to be running free software.

For what it's worth, it's also vital to check the syntax of your configuration files. CrowdStrike engineers would do well to remember that one, next time.

AI

What Is the Future of Open Source AI? (fb.com) 22

Tuesday Meta released Llama 3.1, its largest open-source AI model to date. But just one day later, Mistral released Large 2, notes this report from TechCrunch, "which it claims to be on par with the latest cutting-edge models from OpenAI and Meta in terms of code generation, mathematics, and reasoning..."

"Though Mistral is one of the newer entrants in the artificial intelligence space, it's quickly shipping AI models on or near the cutting edge." In a press release, Mistral says one of its key focus areas during training was to minimize the model's hallucination issues. The company says Large 2 was trained to be more discerning in its responses, acknowledging when it does not know something instead of making something up that seems plausible. The Paris-based AI startup recently raised $640 million in a Series B funding round, led by General Catalyst, at a $6 billion valuation...

However, it's important to note that Mistral's models are, like most others, not open source in the traditional sense — any commercial application of the model needs a paid license. And while it's more open than, say, GPT-4o, few in the world have the expertise and infrastructure to implement such a large model. (That goes double for Llama's 405 billion parameters, of course.)

Mistral's Large 2 has only 123 billion parameters, according to the article. But whichever system prevails, "Open Source AI Is the Path Forward," Mark Zuckerberg wrote this week, predicting that open-source AI will soar to the same popularity as Linux: This year, Llama 3 is competitive with the most advanced models and leading in some areas. Starting next year, we expect future Llama models to become the most advanced in the industry. But even before that, Llama is already leading on openness, modifiability, and cost efficiency... Beyond releasing these models, we're working with a range of companies to grow the broader ecosystem. Amazon, Databricks, and NVIDIA are launching full suites of services to support developers fine-tuning and distilling their own models. Innovators like Groq have built low-latency, low-cost inference serving for all the new models. The models will be available on all major clouds including AWS, Azure, Google, Oracle, and more. Companies like Scale.AI, Dell, Deloitte, and others are ready to help enterprises adopt Llama and train custom models with their own data.
"As the community grows and more companies develop new services, we can collectively make Llama the industry standard and bring the benefits of AI to everyone," Zuckerberg writes. He says that he's heard from developers, CEOs, and government officials that they want to "train, fine-tune, and distill" their own models, protecting their data with a cheap and efficient model — and without being locked into a closed vendor. But they also tell him that want to invest in an ecosystem "that's going to be the standard for the long term." Lots of people see that open source is advancing at a faster rate than closed models, and they want to build their systems on the architecture that will give them the greatest advantage long term...

One of my formative experiences has been building our services constrained by what Apple will let us build on their platforms. Between the way they tax developers, the arbitrary rules they apply, and all the product innovations they block from shipping, it's clear that Meta and many other companies would be freed up to build much better services for people if we could build the best versions of our products and competitors were not able to constrain what we could build. On a philosophical level, this is a major reason why I believe so strongly in building open ecosystems in AI and AR/VR for the next generation of computing...

I believe that open source is necessary for a positive AI future. AI has more potential than any other modern technology to increase human productivity, creativity, and quality of life — and to accelerate economic growth while unlocking progress in medical and scientific research. Open source will ensure that more people around the world have access to the benefits and opportunities of AI, that power isn't concentrated in the hands of a small number of companies, and that the technology can be deployed more evenly and safely across society. There is an ongoing debate about the safety of open source AI models, and my view is that open source AI will be safer than the alternatives. I think governments will conclude it's in their interest to support open source because it will make the world more prosperous and safer... [O]pen source should be significantly safer since the systems are more transparent and can be widely scrutinized...

The bottom line is that open source AI represents the world's best shot at harnessing this technology to create the greatest economic opportunity and security for everyone... I believe the Llama 3.1 release will be an inflection point in the industry where most developers begin to primarily use open source, and I expect that approach to only grow from here. I hope you'll join us on this journey to bring the benefits of AI to everyone in the world.

Open Source

Nvidia's Open-Source Linux Kernel Driver Performing At Parity To Proprietary Driver (phoronix.com) 21

Nvidia's new R555 Linux driver series has significantly improved their open-source GPU kernel driver modules, achieving near parity with their proprietary drivers. Phoronix's Michael Larabel reports: The NVIDIA open-source kernel driver modules shipped by their driver installer and also available via their GitHub repository are in great shape. With the R555 series, the support and performance of their open-source kernel modules is basically at parity with their proprietary kernel drivers. [...] Across a range of different GPU-accelerated creator workloads, the performance of the open-source NVIDIA kernel modules matched that of the proprietary driver. No loss in performance going the open-source kernel driver route. Across various professional graphics workloads, both the NVIDIA RTX A2000 and A4000 graphics cards were also achieving the same performance whether on the open-source MIT/GPLv2 driver or using NVIDIA's classic proprietary driver.

Across all of the tests I carried out using the NVIDIA 555 stable series Linux driver, the open-source NVIDIA kernel modules were able to achieve the same performance as the classic proprietary driver. Also important is that there was no increased power use or other difference in power management when switching over to the open-source NVIDIA kernel modules.

It's great seeing how far the NVIDIA open-source kernel modules have evolved and that with the upcoming NVIDIA 560 Linux driver series they will be defaulting to them on supported GPUs. And moving forward with Blackwell and beyond, NVIDIA is enabling new GPU support only through their open-source kernel drivers, leaving the proprietary kernel drivers to older hardware. Tests I have done using NVIDIA GeForce RTX 40 graphics cards with Linux gaming workloads between the MIT/GPL and proprietary kernel drivers have yielded similar (boring but good) results: the same performance being achieved with no loss going the open-source route.
You can view Phoronix's performance results in charts here, here, and here.
AI

FTC's Khan Backs Open AI Models in Bid to Avoid Monopolies (yahoo.com) 8

Open AI models that allow developers to customize them with few restrictions are more likely to promote competition, FTC Chair Lina Khan said, weighing in on a key debate within the industry. From a report: "There's tremendous potential for open-weight models to promote competition," Khan said Thursday in San Francisco at startup incubator Y Combinator. "Open-weight models can liberate startups from the arbitrary whims of closed developers and cloud gatekeepers."

"Open-weight" models disclose what an AI model picked up and was tweaked on during its training process. That allows developers to better customize them and makes them more accessible to smaller companies and researchers. But critics have warned that open models carry an increased risk of abuse and could potentially allow companies from geopolitical rivals like China to piggyback off the technology. Khan's comments come as the Biden administration is considering guidance on the use and safety of open-weight models.

Open Source

Switzerland Now Requires All Government Software To Be Open Source (zdnet.com) 60

Switzerland has enacted the "Federal Law on the Use of Electronic Means for the Fulfillment of Government Tasks" (EMBAG), mandating open-source software (OSS) in the public sector to enhance transparency, security, and efficiency. "This new law requires all public bodies to disclose the source code of software developed by or for them unless third-party rights or security concerns prevent it," writes ZDNet's Steven Vaughan-Nichols. "This 'public money, public code' approach aims to enhance government operations' transparency, security, and efficiency." From the report: Making this move wasn't easy. It began in 2011 when the Swiss Federal Supreme Court published its court application, Open Justitia, under an OSS license. The proprietary legal software company Weblaw wasn't happy about this. There were heated political and legal fights for more than a decade. Finally, the EMBAG was passed in 2023. Now, the law not only allows the release of OSS by the Swiss government or its contractors, but also requires the code to be released under an open-source license "unless the rights of third parties or security-related reasons would exclude or restrict this."

Professor Dr. Matthias Sturmer, head of the Institute for Public Sector Transformation at the Bern University of Applied Sciences, led the fight for this law. He hailed it as "a great opportunity for government, the IT industry, and society." Sturmer believes everyone will benefit from this regulation, as it reduces vendor lock-in for the public sector, allows companies to expand their digital business solutions, and potentially leads to reduced IT costs and improved services for taxpayers.

In addition to mandating OSS, the EMBAG also requires the release of non-personal and non-security-sensitive government data as Open Government Data (OGD). This dual "open by default" approach marks a significant paradigm shift towards greater openness and practical reuse of software and data. Implementing the EMBAG is expected to serve as a model for other countries considering similar measures. It aims to promote digital sovereignty and encourage innovation and collaboration within the public sector. The Swiss Federal Statistical Office (BFS) is leading the law's implementation, but the organizational and financial aspects of the OSS releases still need to be clarified.

Open Source

Nvidia Will Fully Transition To Open-Source GPU Kernel Modules With R560 Drivers 50

Nvidia is ready to fully transition to open-source Linux GPU kernel drivers, starting with the R555 series and planning a complete shift with the R560 series. The open-source kernel modules will only be available for select newer GPUs, while older architectures like Maxwell, Pascal, and Volta must continue using proprietary drivers. TechSpot reports: According to Nvidia, the open-source GPU kernel modules have helped deliver "equivalent or better" application performance compared to its proprietary kernels. The company has also added new features like Heterogeneous Memory Management (HMM) support, confidential computing, and the coherent memory architectures of the Grace platform to its open-source kernels. [...] For compatible GPUs, the default version of the driver installed by all methods is switching from proprietary to open-source. However, users will have the ability to manually select the closed-source modules if they are still available for their platform.

Unfortunately, the open-source kernel modules are not available for GPUs from the older Maxwell, Pascal, and Volta architectures, meaning people still running a GTX 980 or GTX 1080 will have to continue using Nvidia's proprietary drivers. For mixed deployments with older and newer GPUs in the same system, Nvidia recommends continuing to use the proprietary driver for full compatibility.
"Nvidia has moved most of its proprietary functions into a proprietary, closed-source firmware blob," adds Ars Technica's Kevin Purdy. "The parts of Nvidia's GPUs that interact with the broader Linux system are open, but the user-space drivers and firmware are none of your or the OSS community's business."
EU

OW2: 'The European Union Must Keep Funding Free Software' (ow2.org) 15

OW2, the non-profit international consortium dedicated to developing open-source middleware, published an open letter to the European Commission today. They're urging the European Union to continue funding free software after noticing that the Next Generation Internet (NGI) programs were no longer mentioned in Cluster 4 of the 2025 Horizon Europe funding plans.

OW2 argues that discontinuing NGI funding would weaken Europe's technological ecosystem, leaving many projects under-resourced and jeopardizing Europe's position in the global digital landscape. The letter reads, in part: NGI programs have shown their strength and importance to support the European software infrastructure, as a generic funding instrument to fund digital commons and ensure their long-term sustainability. We find this transformation incomprehensible, moreover when NGI has proven efficient and economical to support free software as a whole, from the smallest to the most established initiatives. This ecosystem diversity backs the strength of European technological innovation, and maintaining the NGI initiative to provide structural support to software projects at the heart of worldwide innovation is key to enforce the sovereignty of a European infrastructure. Contrary to common perception, technical innovations often originate from European rather than North American programming communities, and are mostly initiated by small-scaled organizations.

Previous Cluster 4 allocated 27 million euros to:
- "Human centric Internet aligned with values and principles commonly shared in Europe";
- "A flourishing internet, based on common building blocks created within NGI, that enables better control of our digital life";
- "A structured eco-system of talented contributors driving the creation of new internet commons and the evolution of existing internet commons."

In the name of these challenges, more than 500 projects received NGI funding in the first 5 years, backed by 18 organizations managing these European funding consortia.

The Internet

Substack Rival Ghost Federates Its First Newsletter (techcrunch.com) 16

After teasing support for the fediverse earlier this year, the newsletter platform and Substack rival Ghost has finally delivered. "Over the past few days, Ghost says it has achieved two major milestones in its move to become a federated service," reports TechCrunch. "Of note, it has federated its own newsletter, making it the first federated Ghost instance on the internet." From the report: Users can follow the newsletter through their preferred federated app at @index@activitypub.ghost.org, though the company warns there will be bugs and issues as it continues to work on the platform's integration with ActivityPub, the protocol that powers Mastodon and other federated apps. "Having multiple Ghost instances in production successfully running ActivityPub is a huge milestone for us because it means that for the first time, we're interacting with the wider fediverse. Not just theoretical local implementations and tests, but the real world wide social web," the company shared in its announcement of the news.

In addition, Ghost's ActivityPub GitHub repository is now fully open source. That means those interested in tracking Ghost's progress toward federation can follow its code changes in real time, and anyone else can learn from, modify, distribute or contribute to its work. Developers who want to collaborate with Ghost are also being invited to get involved following this move. By offering a federated version of the newsletter, readers will have more choices on how they want to subscribe. That is, instead of only being able to follow the newsletter via email or the web, they also can track it using RSS or ActivityPub-powered apps, like Mastodon and others. Ghost said it will also develop a way for sites with paid subscribers to manage access via ActivityPub, but that functionality hasn't yet rolled out with this initial test.
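For readers curious how a fediverse client actually resolves a handle like @index@activitypub.ghost.org before following it, the sketch below performs the two standard steps: a WebFinger lookup, then fetching the ActivityPub actor document. The endpoints follow the WebFinger and ActivityPub specs in general; Ghost's exact responses may differ, so treat this as a protocol illustration rather than Ghost-specific code.

```python
# Resolve a fediverse handle to its ActivityPub actor document using
# standard WebFinger, then fetch the actor (inbox, outbox, keys, ...).
import json, urllib.parse, urllib.request

HANDLE = "index@activitypub.ghost.org"
host = HANDLE.split("@")[1]

# Step 1: WebFinger maps acct:user@host to the actor's URL.
wf_url = (f"https://{host}/.well-known/webfinger?"
          + urllib.parse.urlencode({"resource": f"acct:{HANDLE}"}))
with urllib.request.urlopen(wf_url) as resp:
    webfinger = json.load(resp)

actor_url = next(link["href"] for link in webfinger["links"]
                 if link.get("rel") == "self")

# Step 2: fetch the actor document itself.
req = urllib.request.Request(
    actor_url, headers={"Accept": "application/activity+json"})
with urllib.request.urlopen(req) as resp:
    actor = json.load(resp)

print(actor.get("preferredUsername"), actor.get("inbox"))
```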

Open Source

Developer Successfully Boots Up Linux on Google Drive (ersei.net) 42

It's FOSS writes: When it comes to Linux, we get to see some really cool, and sometimes quirky projects (read Hannah Montana Linux) that try to show off what's possible, and that's not a bad thing. One such quirky undertaking has recently surfaced, which sees a sophomore trying to one-up their friend, who had booted Linux off NFS. With their work, they have been able to run Arch Linux on Google Drive.
Their ultimate idea included FUSE (which allows running file-system code in userspace). The developer's blog post explains that when Linux boots, "the kernel unpacks a temporary filesystem into RAM which has the tools to mount the real filesystem... it's very helpful! We can mount a FUSE filesystem in that step and boot normally...." Thankfully, Dracut makes it easy enough to build a custom initramfs... I decide to build this on top of Arch Linux because it's relatively lightweight and I'm familiar with how it works."
Doing testing in an Amazon S3 container, they built an EFI image — then spent days trying to enable networking... And the adventure continues. ("Would it be possible to manually switch the root without a specialized system call? What if I just chroot?") After they'd made a few more tweaks, "I sit there, in front of my computer, staring. It can't have been that easy, can it? Surely, this is a profane act, and the spirit of Dennis Ritchie ought've stopped me, right? Nobody stopped me, so I kept going..." I build the unified EFI file, throw it on a USB drive under /BOOT/EFI, and stick it in my old server... This is my magnum opus. My Great Work. This is the mark I will leave on this planet long after I am gone: The Cloud Native Computer.
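The FUSE trick at the heart of this is easier to picture with a toy. The sketch below, using the fusepy package, serves a single in-memory file as a mountable filesystem entirely from userspace. It is not the developer's Google Drive backend, just the smallest illustration of "file-system code in userspace"; mounting such a filesystem from a custom initramfs at boot is the extra step the blog post describes.

```python
# Toy read-only FUSE filesystem: one in-memory file, served from userspace.
# Requires the fusepy package and a FUSE-capable kernel; the mountpoint
# directory must already exist.
import errno, stat, time
from fuse import FUSE, FuseOSError, Operations

class HelloFS(Operations):
    FILES = {"/hello.txt": b"booted from userspace\n"}

    def getattr(self, path, fh=None):
        now = time.time()
        if path == "/":
            return {"st_mode": stat.S_IFDIR | 0o755, "st_nlink": 2,
                    "st_ctime": now, "st_mtime": now, "st_atime": now}
        if path in self.FILES:
            return {"st_mode": stat.S_IFREG | 0o444, "st_nlink": 1,
                    "st_size": len(self.FILES[path]),
                    "st_ctime": now, "st_mtime": now, "st_atime": now}
        raise FuseOSError(errno.ENOENT)

    def readdir(self, path, fh):
        return [".", "..", "hello.txt"]

    def read(self, path, size, offset, fh):
        return self.FILES[path][offset:offset + size]

if __name__ == "__main__":
    # Mount at /mnt/hello (hypothetical path); unmount with fusermount -u.
    FUSE(HelloFS(), "/mnt/hello", foreground=True, ro=True)
```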

Despite how silly this project is, there are a few less-silly uses I can think of, like booting Linux off of SSH, or perhaps booting Linux off of a Git repository and tracking every change in Git using gitfs. The possibilities are endless, despite the middling usefulness.

If there is anything I know about technology, it's that moving everything to The Cloud is the current trend. As such, I am prepared to commercialize this for any company wishing to leave their unreliable hardware storage behind and move entirely to The Cloud. Please request a quote if you are interested in True Cloud Native Computing.

Unfortunately, I don't know what to do next with this. Maybe I should install Nix?
