Open Source

The UN Ditches Google for Form Submissions, Opts for Open Source 'CryptPad' Instead (itsfoss.com) 17

Did you know there's an initiative to drive Open Source adoption both within the United Nations — and globally? Launched in March, it's the work of the Digital Technology Network (under the UN's chief executive board) which "works to advance open source technologies throughout UN agencies," promoting "collaboration and scalable solutions to support the UN's digital transformation." Fun fact: The first group to endorse the initiative's principles was the Open Source Initiative...

"The Open Source Initiative applauds the United Nations for recognizing the growing importance of Open Source in solving global challenges and building sustainable solutions, and we are honored to be the first to endorse the UN Open Source Principles," said Stefano Maffulli, executive director of OSI.
But that's just the beginning, writes It's FOSS News: As part of the UN Open Source Principles initiative, the UN has invited other organizations to support and officially endorse these principles. To collect responses, they are using CryptPad instead of Google Forms... If you don't know about CryptPad, it is a privacy-focused, open source online collaboration office suite that encrypts all of its content, doesn't log IP addresses, and supports a wide range of collaborative documents and tools for people to use.

While this happened back in late March, we thought it would be a good idea to let people know that a well-known global governing body like the UN was slowly moving towards integrating open source tech into their organization... I sincerely hope the UN continues its push away from proprietary Big Tech solutions in favor of more open, privacy-respecting alternatives, integrating more of their workflow with such tools.

Sixteen groups have already endorsed the UN Open Source Principles (including the GNOME Foundation, the Linux Foundation, and the Eclipse Foundation).

Here are the eight UN Open Source Principles:
  1. Open by default: Making Open Source the standard approach for projects
  2. Contribute back: Encouraging active participation in the Open Source ecosystem
  3. Secure by design: Making security a priority in all software projects
  4. Foster inclusive participation and community building: Enabling and facilitating diverse and inclusive contributions
  5. Design for reusability: Designing projects to be interoperable across various platforms and ecosystems
  6. Provide documentation: Providing thorough documentation for end-users, integrators and developers
  7. RISE (recognize, incentivize, support and empower): Empowering individuals and communities to actively participate
  8. Sustain and scale: Supporting the development of solutions that meet the evolving needs of the UN system and beyond.

Open Source

May is 'Maintainer Month'. Open Source Initiative Joins GitHub to Celebrate Open Source Security (opensource.org) 6

The Open Source Initiative is joining "a global community of contributors" for GitHub's annual event "honoring the individuals who steward and sustain Open Source projects."

And the theme of the 5th Annual "Maintainer Month" will be securing Open Source: Throughout the month, OSI and our affiliates will be highlighting maintainers who prioritize security in their projects, sharing their stories, and providing a platform for collaboration and learning... Maintainer Month is a time to gather, share knowledge, and express appreciation for the people who keep Open Source projects running. These maintainers not only review issues and merge pull requests — they also navigate community dynamics, mentor new contributors, and increasingly, adopt security best practices to protect their code and users....

- OSI will publish a series of articles on Opensource.net highlighting maintainers whose work centers around security...

- As part of our programming for May, OSI will host a virtual Town Hall [May 21st] with our affiliate organizations and invite the broader Open Source community to join....

- Maintainer Month is also a time to tell the stories of those who often work behind the scenes. OSI will be amplifying voices from across our affiliate network and encouraging communities to recognize the people whose efforts are often invisible, yet essential.

"These efforts are not just celebrations — they are opportunities to recognize the essential role maintainers play in safeguarding the Open Source infrastructure that underpins so much of our digital world," according to the OSI's announcement. And this year they're focusing on three key areas of open source security:
  • Adopting security best practices in projects and communities
  • Recognizing contributors who improve project security
  • Collaborating to strengthen the ecosystem as a whole

AI

In 'Milestone' for Open Source, Meta Releases New Benchmark-Beating Llama 4 Models (meta.com) 65

It's "a milestone for Meta AI and for open source," Mark Zuckerberg said this weekend. "For the first time, the best small, mid-size, and potentially soon frontier [large-language] models will be open source."

Zuckerberg announced four new Llama LLMs in a video posted on Instagram and Facebook — two dropping this weekend, with another two on the way. "Our goal is to build the world's leading AI, open source it, and make it universally accessible so that everyone in the world benefits."

Zuckerberg's announcement: I've said for a while that I think open source AI is going to become the leading models. And with Llama 4 this is starting to happen.

- The first model is Llama 4 Scout. It is extremely fast, natively multi-modal. It has an industry-leading "nearly infinite" 10M-token context length, and is designed to run on a single GPU. [Meta's blog post says it fits on an NVIDIA H100]. It is 17 billion parameters by 16 experts, and it is by far the highest performing small model in its class.

- The second model is Llama 4 Maverick — the workhorse. It beats GPT-4o and Gemini Flash 2 on all benchmarks. It is smaller and more efficient than DeepSeek v3, but it is still comparable on text, plus it is natively multi-modal. This one is 17B parameters x 128 experts, and it is designed to run on a single host for easy inference.

This thing is a beast.

Zuck promised more news next month on "Llama 4 Reasoning" — but the fourth model will be called Llama 4 Behemoth. "This thing is massive. More than 2 trillion parameters." (A blog post from Meta AI says Behemoth has 288 billion active parameters, outperforms GPT-4.5, Claude Sonnet 3.7, and Gemini 2.0 Pro on STEM benchmarks, and will "serve as a teacher for our new models.")

"I'm not aware of anyone training a larger model out there," Zuckberg says in his video, calling Behemoth "already the highest performing base model in the world, and it is not even done training yet."

"If you want to try Llama 4, you can use Meta AI in WhatsApp, Messenger, or Instagram Direct," Zuckberg said in his video, "or you can go to our web site at meta.ai." The Scout and Maverick models can be downloaded from llama.com and Hugging Face.

"We continue to believe that openness drives innovation," Meta AI says in their blog post, "and is good for developers, good for Meta, and good for the world." Their blog post declares it's "The beginning of a new era of natively multimodal AI innovation," calling Scout and Maverick "the best choices for adding next-generation intelligence." This is just the beginning for the Llama 4 collection. We believe that the most intelligent systems need to be capable of taking generalized actions, conversing naturally with humans, and working through challenging problems they haven't seen before. Giving Llama superpowers in these areas will lead to better products for people on our platforms and more opportunities for developers to innovate on the next big consumer and business use cases. We're continuing to research and prototype both models and products, and we'll share more about our vision at LlamaCon on April 29...

We also can't wait to see the incredible new experiences the community builds with our new Llama 4 models.

"The impressive part about Llama 4 Maverick is that with just 17B active parameters, it has scored an ELO score of 1,417 on the LMArena leaderboard," notes the tech news site Beebom. "This puts the Maverick model in the second spot, just below Gemini 2.5 Pro, and above Grok 3, GPT-4o, GPT-4.5, and more.

"It also achieves comparable results when compared to the latest DeepSeek V3 model on reasoning and coding tasks, and surprisingly, with just half the active parameters."
Open Source

'Landrun': Lightweight Linux Sandboxing With Landlock, No Root Required (github.com) 40

Over on Reddit's "selfhosted" subreddit for alternatives to popular services, long-time Slashdot reader Zoup described a pain point:

- Landlock is a Linux Security Module (LSM) that lets unprivileged processes restrict themselves.

- It's been in the kernel since 5.13, but the API is awkward to use directly.

- It always annoyed the hell out of me to run random binaries from the internet without any real control over what they can access.


So they've rolled their own solution, according to Thursday's submission to Slashdot: I just released Landrun, a Go-based CLI tool that wraps Linux Landlock (5.13+) to sandbox any process without root, containers, or seccomp. Think firejail, but minimal and kernel-native. Supports fine-grained file access (ro/rw/exec) and TCP port restrictions (6.7+). No daemons, no YAML, just flags.

Example (where --rox allows read-only access with execution to specified path):

# landrun --rox /usr touch /tmp/file
touch: cannot touch '/tmp/file': Permission denied
# landrun --rox /usr --rw /tmp touch /tmp/file
#

It's MIT-licensed, easy to audit, and now supports systemd services.

AI

Microsoft Uses AI To Find Flaws In GRUB2, U-Boot, Barebox Bootloaders (bleepingcomputer.com) 57

Slashdot reader zlives shared this report from BleepingComputer: Microsoft used its AI-powered Security Copilot to discover 20 previously unknown vulnerabilities in the GRUB2, U-Boot, and Barebox open-source bootloaders.

GRUB2 (GRand Unified Bootloader) is the default boot loader for most Linux distributions, including Ubuntu, while U-Boot and Barebox are commonly used in embedded and IoT devices. Microsoft discovered eleven vulnerabilities in GRUB2, including integer and buffer overflows in filesystem parsers, command flaws, and a side-channel in cryptographic comparison. Additionally, nine buffer overflows in parsing SquashFS, EXT4, CramFS, JFFS2, and symlinks were discovered in U-Boot and Barebox, which require physical access to exploit.

The newly discovered flaws impact devices relying on UEFI Secure Boot, and if the right conditions are met, attackers can bypass security protections to execute arbitrary code on the device. While exploiting these flaws would likely need local access to devices, previous bootkit attacks like BlackLotus achieved this through malware infections.

Microsoft titled its blog post "Analyzing open-source bootloaders: Finding vulnerabilities faster with AI." (And they do note that Microsoft disclosed the discovered vulnerabilities to the GRUB2, U-Boot, and Barebox maintainers and "worked with the GRUB2 maintainers to contribute fixes... GRUB2 maintainers released security updates on February 18, 2025, and both the U-boot and Barebox maintainers released updates on February 19, 2025.")

They add that in performing their initial research, using Security Copilot "saved our team approximately a week's worth of time," Microsoft writes, "that would have otherwise been spent manually reviewing the content." Through a series of prompts, we identified and refined security issues, ultimately uncovering an exploitable integer overflow vulnerability. Copilot also assisted in finding similar patterns in other files, ensuring comprehensive coverage and validation of our findings...

As AI continues to emerge as a key tool in the cybersecurity community, Microsoft emphasizes the importance of vendors and researchers maintaining their focus on information sharing. This approach ensures that AI's advantages in rapid vulnerability discovery, remediation, and accelerated security operations can effectively counter malicious actors' attempts to use AI to scale common attack tactics, techniques, and procedures (TTPs).
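The integer-overflow class mentioned above is a classic parser bug: a 32-bit size computed from untrusted header fields wraps around, so a tiny buffer is allocated while the later copy still uses the huge original lengths. A language-neutral sketch of the pattern (Python simulating C's `uint32_t` arithmetic; this is an illustration of the bug class, not the actual GRUB2 code):

```python
MASK32 = 0xFFFFFFFF  # simulate C's uint32_t wraparound

def alloc_size(name_len: int, data_len: int) -> int:
    """Buggy: like C code doing `uint32_t total = name_len + data_len + 2;`
    the sum silently wraps modulo 2**32."""
    return (name_len + data_len + 2) & MASK32

# Attacker-controlled header fields chosen so the sum wraps to a tiny value:
total = alloc_size(0xFFFFFFF0, 0x20)
print(hex(total))  # 0x12 -- an 18-byte buffer gets allocated, but the
                   # parser then copies name_len + data_len bytes into it.

# Fixed version: reject inputs before the addition can wrap.
def alloc_size_checked(name_len: int, data_len: int) -> int:
    if name_len > MASK32 - data_len - 2:
        raise ValueError("length overflow")
    return name_len + data_len + 2
```

Filesystem parsers in bootloaders are a rich target for this pattern because every superblock and directory-entry field is attacker-controlled on a crafted disk image.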

This week Google also announced Sec-Gemini v1, "a new experimental AI model focused on advancing cybersecurity AI frontiers."
AI

Open Source Coalition Announces 'Model-Signing' with Sigstore to Strengthen the ML Supply Chain (googleblog.com) 10

The advent of LLMs and machine learning-based applications "opened the door to a new wave of security threats," argues Google's security blog. (Including model and data poisoning, prompt injection, prompt leaking and prompt evasion.)

So as part of the Linux Foundation's nonprofit Open Source Security Foundation, and in partnership with NVIDIA and HiddenLayer, Google's Open Source Security Team on Friday announced the first stable model-signing library (hosted at PyPI.org), with digital signatures letting users verify that the model used by their application "is exactly the model that was created by the developers," according to a post on Google's security blog. [S]ince models are an uninspectable collection of weights (sometimes also with arbitrary code), an attacker can tamper with them and achieve significant impact to those using the models. Users, developers, and practitioners need to examine an important question during their risk assessment process: "can I trust this model?"

Since its launch, Google's Secure AI Framework (SAIF) has created guidance and technical solutions for creating AI applications that users can trust. A first step in achieving trust in the model is to permit users to verify its integrity and provenance, to prevent tampering across all processes from training to usage, via cryptographic signing... [T]he signature would have to be verified when the model gets uploaded to a model hub, when the model gets selected to be deployed into an application (embedded or via remote APIs) and when the model is used as an intermediary during another training run. Assuming the training infrastructure is trustworthy and not compromised, this approach guarantees that each model user can trust the model...

The average developer, however, would not want to manage keys and rotate them on compromise. These challenges are addressed by using Sigstore, a collection of tools and services that make code signing secure and easy. By binding an OpenID Connect token to a workload or developer identity, Sigstore alleviates the need to manage or rotate long-lived secrets. Furthermore, signing is made transparent so signatures over malicious artifacts could be audited in a public transparency log, by anyone. This ensures that split-view attacks are not possible, so any user would get the exact same model. These features are why we recommend Sigstore's signing mechanism as the default approach for signing ML models.

Today the OSS community is releasing the v1.0 stable version of our model signing library as a Python package supporting Sigstore and traditional signing methods. This model signing library is specialized to handle the sheer scale of ML models (which are usually much larger than traditional software components), and handles signing models represented as a directory tree. The package provides CLI utilities so that users can sign and verify model signatures for individual models. The package can also be used as a library which we plan to incorporate directly into model hub upload flows as well as into ML frameworks.
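The core idea of signing a model "represented as a directory tree" is to hash every file into one stable manifest digest and sign that digest. A stdlib sketch of the concept (illustrative only; this is not the model-signing package's actual API, and the real library uses Sigstore keyless signing rather than the HMAC key shown here):

```python
import hashlib
import hmac
import os

def manifest_digest(model_dir: str) -> bytes:
    """Hash every file under model_dir (names and contents) into one digest."""
    h = hashlib.sha256()
    for root, _, files in sorted(os.walk(model_dir)):
        for name in sorted(files):
            path = os.path.join(root, name)
            h.update(os.path.relpath(path, model_dir).encode())  # bind names...
            with open(path, "rb") as f:
                h.update(hashlib.sha256(f.read()).digest())      # ...and contents
    return h.digest()

def sign(model_dir: str, key: bytes) -> bytes:
    return hmac.new(key, manifest_digest(model_dir), hashlib.sha256).digest()

def verify(model_dir: str, key: bytes, sig: bytes) -> bool:
    return hmac.compare_digest(sign(model_dir, key), sig)
```

Tampering with any weight file, or renaming one, changes the manifest digest and fails verification, which is what lets a model hub or deployment pipeline answer "is this exactly the model the developers published?"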

"We can view model signing as establishing the foundation of trust in the ML ecosystem..." the post concludes (adding "We envision extending this approach to also include datasets and other ML-related artifacts.") Then, we plan to build on top of signatures, towards fully tamper-proof metadata records, that can be read by both humans and machines. This has the potential to automate a significant fraction of the work needed to perform incident response in case of a compromise in the ML world...

To shape the future of building tamper-proof ML, join the Coalition for Secure AI, where we are planning to work on building the entire trust ecosystem together with the open source community. In collaboration with multiple industry partners, we are starting up a special interest group under CoSAI for defining the future of ML signing and including tamper-proof ML metadata, such as model cards and evaluation results.

Power

Open-Source Tool Designed To Throttle PC and Server Performance Based On Electricity Pricing (tomshardware.com) 56

Robotics and machine learning engineer Naveen Kul developed WattWise, a lightweight open-source CLI tool that monitors power usage via smart plugs and throttles system performance based on electricity pricing and peak hours. Tom's Hardware reports: The simple program, called WattWise, came about when Naveen built a dual-socket EPYC workstation with plans to add four GPUs. It's a power-intensive setup, so he wanted a way to monitor its power consumption using a Kasa smart plug. The enthusiast has released the monitoring portion of the project to the public now, but the portion that manages clocks and power will be released later. Unfortunately, the Kasa Smart app and the Home Assistant dashboard were inconvenient and couldn't do everything he desired. He already had a terminal window running monitoring tools like htop, nvtop, and nload, and decided to take matters into his own hands rather than dealing with yet another app.

Naveen built a terminal-based UI that shows power consumption data through Home Assistant and the TP-Link integration. The app monitors real-time power use, showing wattage and current, as well as providing historical consumption charts. More importantly, it is designed to automatically throttle CPU and GPU performance. Naveen's power provider uses Time-of-Use (ToU) pricing, so using a lot of power during peak hours can cost significantly more. The workstation can draw as much as 1400 watts at full load, but by reducing the CPU frequency from 3.7 GHz to 1.5 GHz, he's able to reduce consumption by about 225 watts. (No mention is made of GPU throttling, which could potentially allow for even higher power savings with a quad-GPU setup.)

Results will vary based on the hardware being used, naturally, and servers can pull far more power than a typical desktop -- even one designed and used for gaming. WattWise optimizes the system's clock speed based on the current system load, power consumption as reported by the smart plug, and the time -- with the latter factoring in peak pricing. From there, it uses a Proportional-Integral (PI) controller to manage the power and adapts system parameters based on the three variables.
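The control loop described above can be sketched as a simple proportional-integral controller: compare measured wall power against a budget (tighter during peak-price hours) and nudge the CPU frequency cap accordingly. This is a conceptual sketch, not WattWise's actual code; the gains, frequency bounds, and peak window below are invented for illustration:

```python
class PIController:
    """Steer measured power toward a target by adjusting a CPU frequency cap."""
    def __init__(self, kp: float, ki: float, lo_mhz: int, hi_mhz: int):
        self.kp, self.ki = kp, ki
        self.lo, self.hi = lo_mhz, hi_mhz
        self.integral = 0.0

    def step(self, target_w: float, measured_w: float,
             dt_s: float, cur_mhz: int) -> int:
        error = target_w - measured_w        # negative when over budget
        self.integral += error * dt_s        # accumulated error term
        delta = self.kp * error + self.ki * self.integral
        return max(self.lo, min(self.hi, int(cur_mhz + delta)))

# Time-of-Use pricing: assume a tighter budget during (hypothetical) peak hours.
def power_target(hour: int) -> float:
    return 900.0 if 16 <= hour < 21 else 1400.0

pi = PIController(kp=2.0, ki=0.1, lo_mhz=1500, hi_mhz=3700)
freq = pi.step(power_target(18), measured_w=1400.0, dt_s=5.0, cur_mhz=3700)
print(freq)  # 2450 -- stepping down toward the 1.5 GHz floor while over budget
```

In the real tool the measured wattage would come from the Kasa smart plug via Home Assistant, and the returned cap would be applied through the CPU frequency governor.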
A blog post with more information is available here.

WattWise is also available on GitHub.
AI

OpenAI Plans To Release a New 'Open' AI Language Model In the Coming Months 6

OpenAI plans to release a new open-weight language model -- its first since GPT-2 -- in the coming months and is seeking community feedback to shape its development. "That's according to a feedback form the company published on its website Monday," reports TechCrunch. "The form, which OpenAI is inviting 'developers, researchers, and [members of] the broader community' to fill out, includes questions like 'What would you like to see in an open-weight model from OpenAI?' and 'What open models have you used in the past?'" From the report: "We're excited to collaborate with developers, researchers, and the broader community to gather inputs and make this model as useful as possible," OpenAI wrote on its website. "If you're interested in joining a feedback session with the OpenAI team, please let us know [in the form] below." OpenAI plans to host developer events to gather feedback and, in the future, demo prototypes of the model. The first will take place in San Francisco within a few weeks, followed by sessions in Europe and Asia-Pacific regions.

OpenAI is facing increasing pressure from rivals such as Chinese AI lab DeepSeek, which have adopted an "open" approach to launching models. In contrast to OpenAI's strategy, these "open" competitors make their models available to the AI community for experimentation and, in some cases, commercialization.
Cloud

Microsoft Announces 'Hyperlight Wasm': Speedy VM-Based Security at Scale with a WebAssembly Runtime (microsoft.com) 18

Cloud providers like the security of running things in virtual machines "at scale" — even though VMs "are not known for having fast cold starts or a small footprint..." noted Microsoft's Open Source blog last November. So Microsoft's Azure Core Upstream team built an open source Rust library called Hyperlight "to execute functions as fast as possible while isolating those functions within a VM."

But that was just the beginning... Then, we showed how to run Rust functions really, really fast, followed by using C to [securely] run Javascript. In February 2025, the Cloud Native Computing Foundation (CNCF) voted to onboard Hyperlight into their Sandbox program [for early-stage projects].

[This week] we're announcing the release of Hyperlight Wasm: a Hyperlight virtual machine "micro-guest" that can run wasm component workloads written in many programming languages...

Traditional virtual machines do a lot of work to be able to run programs. Not only do they have to load an entire operating system, they also boot up the virtual devices that the operating system depends on. Hyperlight is fast because it doesn't do that work; all it exposes to its VM guests is a linear slice of memory and a CPU. No virtual devices. No operating system. But this speed comes at the cost of compatibility. Chances are that your current production application expects a Linux operating system running on the x86-64 architecture (hardware), not a bare linear slice of memory...

[B]uilding Hyperlight with a WebAssembly runtime — wasmtime — enables any programming language to execute in a protected Hyperlight micro-VM without any prior knowledge of Hyperlight at all. As far as program authors are concerned, they're just compiling for the wasm32-wasip2 target... Executing workloads in the Hyperlight Wasm guest isn't just possible for compiled languages like C, Go, and Rust, but also for interpreted languages like Python, JavaScript, and C#. The trick here, much like with containers, is to also include a language runtime as part of the image... Programming languages, runtimes, application platforms, and cloud providers are all starting to offer rich experiences for WebAssembly out of the box. If we do things right, you will never need to think about whether your application is running inside of a Hyperlight Micro-VM in Azure. You may never know your workload is executing in a Hyperlight Micro VM. And that's a good thing.

While a traditional virtual-device-based VM takes about 125 milliseconds to load, "When the Hyperlight VMM creates a new VM, all it needs do to is create a new slice of memory and load the VM guest, which in turn loads the wasm workload. This takes about 1-2 milliseconds today, and work is happening to bring that number to be less than 1 millisecond in the future."

And there's also double security due to Wasmtime's software-defined runtime sandbox within Hyperlight's larger VM...
Facebook

'An Open Letter To Meta: Support True Messaging Interoperability With XMPP' (xmpp.org) 31

In 1999 Slashdot reader Jeremie announced "a new project I recently started to create a complete open-source platform for Instant Messaging with transparent communication to other IM systems (ICQ, AIM, etc)." It was the first release of the eXtensible Messaging and Presence Protocol, and by 2008 Slashdot was asking if XMPP was "the next big thing." Facebook even supported it for third-party chat clients until 2015.

And here in 2025, the chair of the nonprofit XMPP Standards Foundation is long-time Slashdot reader ralphm, who is now issuing this call to action at XMPP.org: The European Digital Markets Act (DMA) is designed to break down walled gardens and enforce messaging interoperability. As a designated gatekeeper, Meta—controlling WhatsApp and Messenger—must comply. However, its current proposal falls short, risking further entrenchment of its dominance rather than fostering genuine competition. [...]

A Call to Action

The XMPP Standards Foundation urges Meta to adopt XMPP for messaging interoperability. It is ready to collaborate, continue to evolve the protocol to meet modern needs, and ensure true compliance with the DMA. Let's build an open, competitive messaging ecosystem—one that benefits both users and service providers.

It's time for real interoperability. Let's make it happen.

Android

Google Will Develop the Android OS Fully In Private 20

An anonymous reader quotes a report from Android Authority: No matter the manufacturer, every Android phone has one thing in common: its software base. Manufacturers can heavily customize the look and feel of the Android OS they ship on their Android devices, but under the hood, the core system functionality is derived from the same open-source foundation: the Android Open Source Project. After over 16 years, Google is making big changes to how it develops the open source version of Android in an effort to streamline its development. [...] Beginning next week, all Android development will occur within Google's internal branches, and the source code for changes will only be released when Google publishes a new branch containing those changes. As this is already the practice for most Android component changes, Google is simply consolidating its development efforts into a single branch.

This change will have minimal impact on regular users. While it streamlines Android OS development for Google, potentially affecting the speed of new version development and bug reduction, the overall effect will likely be imperceptible. Therefore, don't expect this change to accelerate OS updates for your phone. This change will also have minimal impact on most developers. App developers are unaffected, as it pertains only to platform development. Platform developers, including those who build custom ROMs, will largely also see little change, since they typically base their work on specific tags or release branches, not the main AOSP branch. Similarly, companies that release forked AOSP products rarely use the main AOSP branch due to its inherent instability.

External developers who enjoy reading or contributing to AOSP will likely be dismayed by this news, as it reduces their insight into Google's development efforts. Without a GMS license, contributing to Android OS development becomes more challenging, as the available code will consistently lag behind by weeks or months. This news will also make it more challenging for some developers to keep up with new Android platform changes, as they'll no longer be able to track changes in AOSP. For reporters, this change means less access to potentially revealing information, as AOSP patches often provide insights into Google's development plans. [...] Google will share more details about this change when it announces it later this week. If you're interested in learning more, be sure to keep an eye out for the announcement and new documentation on source.android.com.
Android Authority's Mishaal Rahman says Google is "committed to publishing Android's source code, so this change doesn't mean that Android is becoming closed-source."

"What will change is the frequency of public source code releases for specific Android components," says Rahman. "Some components like the build system, update engine, Bluetooth stack, Virtualization framework, and SELinux configuration are currently AOSP-first, meaning they're developed fully in public. Most Android components like the core OS framework are primarily developed internally, although some features, such as the unlocked-only storage area API, are still developed within AOSP."
The Internet

Open Source Devs Say AI Crawlers Dominate Traffic, Forcing Blocks On Entire Countries (arstechnica.com) 64

An anonymous reader quotes a report from Ars Technica: Software developer Xe Iaso reached a breaking point earlier this year when aggressive AI crawler traffic from Amazon overwhelmed their Git repository service, repeatedly causing instability and downtime. Despite configuring standard defensive measures -- adjusting robots.txt, blocking known crawler user-agents, and filtering suspicious traffic -- Iaso found that AI crawlers continued evading all attempts to stop them, spoofing user-agents and cycling through residential IP addresses as proxies. Desperate for a solution, Iaso eventually resorted to moving their server behind a VPN and creating "Anubis," a custom-built proof-of-work challenge system that forces web browsers to solve computational puzzles before accessing the site. "It's futile to block AI crawler bots because they lie, change their user agent, use residential IP addresses as proxies, and more," Iaso wrote in a blog post titled "a desperate cry for help." "I don't want to have to close off my Gitea server to the public, but I will if I have to."

Iaso's story highlights a broader crisis rapidly spreading across the open source community, as what appear to be aggressive AI crawlers increasingly overload community-maintained infrastructure, causing what amounts to persistent distributed denial-of-service (DDoS) attacks on vital public resources. According to a comprehensive recent report from LibreNews, some open source projects now see as much as 97 percent of their traffic originating from AI companies' bots, dramatically increasing bandwidth costs, service instability, and burdening already stretched-thin maintainers.

Kevin Fenzi, a member of the Fedora Pagure project's sysadmin team, reported on his blog that the project had to block all traffic from Brazil after repeated attempts to mitigate bot traffic failed. GNOME GitLab implemented Iaso's "Anubis" system, requiring browsers to solve computational puzzles before accessing content. GNOME sysadmin Bart Piotrowski shared on Mastodon that only about 3.2 percent of requests (2,690 out of 84,056) passed their challenge system, suggesting the vast majority of traffic was automated. KDE's GitLab infrastructure was temporarily knocked offline by crawler traffic originating from Alibaba IP ranges, according to LibreNews, citing a KDE Development chat. While Anubis has proven effective at filtering out bot traffic, it comes with drawbacks for legitimate users. When many people access the same link simultaneously -- such as when a GitLab link is shared in a chat room -- site visitors can face significant delays. Some mobile users have reported waiting up to two minutes for the proof-of-work challenge to complete, according to the news outlet.
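Anubis-style proof-of-work is deliberately asymmetric: verification is one hash for the server, but producing a valid answer forces the client to grind through many hashes, a cost that is trivial for one human visitor yet multiplies across millions of automated requests. A minimal sketch of the scheme (illustrative; Anubis's actual hash construction and difficulty parameters may differ):

```python
import hashlib

def solve(challenge: str, difficulty: int) -> int:
    """Grind for a nonce so sha256(challenge:nonce) starts with
    `difficulty` zero hex digits -- the expensive client-side step."""
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{challenge}:{nonce}".encode()).hexdigest()
        if digest.startswith("0" * difficulty):
            return nonce
        nonce += 1

def verify(challenge: str, nonce: int, difficulty: int) -> bool:
    """One hash -- the cheap server-side step."""
    digest = hashlib.sha256(f"{challenge}:{nonce}".encode()).hexdigest()
    return digest.startswith("0" * difficulty)

nonce = solve("example-session-token", difficulty=4)  # ~65k hashes on average
assert verify("example-session-token", nonce, 4)
```

Raising the difficulty by one hex digit multiplies the expected client work by 16, which is the knob that lets operators price out crawler fleets, at the cost of the multi-minute waits some mobile users reported.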

Open Source

FaunaDB Shuts Down But Hints At Open Source Future (theregister.com) 13

FaunaDB, a serverless database combining relational and document features, will shut down by the end of May due to unsustainable capital demands. The company plans to open source its core technology, including its FQL query language, in hopes of continuing its legacy within the developer community. The Register reports: The startup pocketed $27 million in VC funding in 2020 and boasted that 25,000 developers worldwide were using its serverless database. However, last week, FaunaDB announced that it would sunset its database services. FaunaDB said it plans to release an open-source version of its core database technology. The system stores data in JSON documents but retains relational features like consistency, support for joins and foreign keys, and full schema enforcement. Fauna's query language, FQL, will also be made available to the open-source community. "Driving broad based adoption of a new operational database that runs as a service globally is very capital intensive. In the current market environment, our board and investors have determined that it is not possible to raise the capital needed to achieve that goal independently," the leadership team said.

"While we will no longer be accepting new customers, existing Fauna customers will experience no immediate change. We will gradually transition customers off Fauna and are committed to ensuring a smooth process over the next several months," it added.
Open Source

Developer Loads Steam On a $100 ARM Single Board Computer (interfacinglinux.com) 24

"There's no shortage of videos showing Steam running on expensive ARM single-board computers with discrete GPUs," writes Slashdot reader VennStone. "So I thought it would be worthwhile to make a guide for doing it on (relatively) inexpensive RK3588-powered single-board computers, using Box86/64 and Armbian." The guides I came across were out of date, had a bunch of extra steps thrown in, or were outright incorrect... Up first, we need to add the Box86 and Box64 ARM repositories [along with dependencies, ARMHF architecture, and the Mesa graphics driver]...
The guide closes with a multi-line script and advice to "Just close your eyes and run this. It's not pretty, but it will download the Steam Debian package, extract the needed bits, and set up a launch script." (And then the final step is sudo reboot now.)

"At this point, all you have to do is open a terminal, type 'steam', and tap Enter. You'll have about five minutes to wait... Check out the video to see how some of the tested games perform." At 720p, performance is all over the place, but the games I tested typically managed to stay above 30 FPS. This is better than I was expecting from a four-year-old SoC emulating x86 titles under ARM.

Is this a practical way to play your Steam games? Nope, not even a little bit. For now, this is merely an exercise in ludicrous neatness. Things might get a wee bit better, considering Collabora is working on upstream support for RK3588 and Valve is up to something ARM-related, but ya know, "Valve Time"...

"You might be tempted to enable Steam Play for your Windows games, but don't waste your time. I mean, you can try, but it ain't gonna work."
Open Source

'Unaware and Uncertain': Report Finds Widespread Unfamiliarity With 2027's EU Cyber Resilience Requirements (linuxfoundation.org) 6

Two "groundbreaking research reports" on open source security were announced this week by the Linux Foundation in partnership with the Open Source Security Foundation (OpenSSF) and Linux Foundation Europe. The reports specifically address the EU's Cyber Resilience Act (or CRA) and "highlight knowledge gaps and best practices for CRA compliance."

"Unaware and Uncertain: The Stark Realities of CRA-Readiness in Open Source" includes a survey which found that when it comes to CRA requirements, 62% of respondents were either "not familiar at all" (36%) or "slightly familiar" (26%) — while 51% weren't sure about its deadlines. ("Only 28% correctly identified 2027 as the target year for full compliance," according to one infographic, which adds that CRA "is expected to drive a 6% average price increase, though 53% of manufacturers are still assessing pricing impacts.") Manufacturers, who bear primary responsibility, lack readiness — many [46%] passively rely on upstream security fixes, and only a small portion produce Software Bills of Materials (SBOMs). The report recommends that manufacturers take a more active role in open source security, that more funding and legal support is needed to support security practices, and that clear regulatory guidance is essential to prevent unintended negative impacts on open source development.
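On the SBOM point: a Software Bill of Materials is just a machine-readable inventory of the components inside a product. A sketch that emits a minimal CycloneDX-style document (the field names follow the CycloneDX spec; a real SBOM would also carry component hashes, licenses, and a serial number):

```python
import json

def minimal_sbom(components):
    """Build a minimal CycloneDX-style SBOM from (name, version) pairs."""
    return {
        "bomFormat": "CycloneDX",
        "specVersion": "1.5",
        "version": 1,
        "components": [
            {"type": "library", "name": name, "version": version}
            for name, version in components
        ],
    }

sbom = minimal_sbom([("openssl", "3.0.13"), ("zlib", "1.3.1")])
print(json.dumps(sbom, indent=2))
```

In practice manufacturers would generate this automatically from their build system rather than by hand, which is exactly the kind of tooling investment the report says is still missing.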
The research also provides "an in-depth analysis of how open collaboration can strengthen software security and innovation across global markets," with another report that "examines how three Linux Foundation projects are meeting the CRA's minimum compliance requirements" and "provides insight on the elements needed to ensure leadership in cybersecurity best practices." (It also includes CRA-related resources.)

"These two reports offer actionable conclusions for open source stakeholders to ready themselves for 2027, when the CRA comes into force," according to a Linux Foundation research executive cited in the announcement. "We hope that these reports catalyze higher levels of collaboration across the open source community."
Open Source

FSF's Memorabilia Silent Auction Begins Today (fsf.org) 29

This week the Free Software Foundation published memorabilia items for an online silent auction — part of their big 40th anniversary celebration. "Starting March 17, the FSF will unlock items each day for bidding on the LibrePlanet wiki at 12:00 EDT... Bidding on all items will conclude at 15:00 EDT on March 21, 2025..."

"During the auction, the FSF welcomes everyone who supports user freedom to bid on historical and symbolic free software memorabilia," they announced this week: The auction is split into two parts: a silent auction hosted on the LibrePlanet wiki from March 17 through March 21 and a live auction held on the FSF's Galène videoconferencing server on March 23 from 14:00-17:00. The auction is only the opening act to a months-long itinerary celebrating forty years of free software activism...

Executive director Zoë Kooyman adds: "These items are valuable pieces of FSF history, and some of them are emblematic of the free software movement. We want to entrust these memorabilia in the hands of the free software community for preservation and would love to see some of these items displayed in exhibitions." All in all, there are twenty-five pieces that are either directly part of the FSF's history and/or representative of the free software movement that will be available in the silent auction.

Winning bidders can rest assured that all proceeds from this auction will go towards the FSF's continued work to promote computer user freedom worldwide.

Silent auction items include:
  • A mid-1980s VT220 terminal that "still works, and can be connected to your favorite free machine over the serial interface... This is the same terminal that was on the FSF reception desk for some time, introducing visitors to ASCII art, NetHack, and other free software lore." Bids start at $250... (with estimated shipping costs of $100)
  • An Amiga 3000UX donated to the GNU project "sometime in 1990." While it now has a damaged battery, "FSF staff programmers used it at MIT to help further some early development of the GNU operating system." Starting bid: $300 (with estimated shipping costs of $400).
  • "A variety of plush animals that had greeted visitors at its former offices in Boston on 51 Franklin Street..."

"The most notable items have been reserved for the live auction on Sunday, March 23," they note — including the Internet Hall of Fame medal awarded to FSF founder Richard Stallman in 2013 "as ultimate recognition of free software's immense impact on the development and advancement of the Internet."


Open Source

Startup Claims Its Upcoming (RISC-V ISA) Zeus GPU is 10X Faster Than Nvidia's RTX 5090 (tomshardware.com) 69

"The number of discrete GPU developers from the U.S. and Western Europe shrank to three companies in 2025," notes Tom's Hardware, "from around 10 in 2000." (Nvidia, AMD, and Intel...) No company in recent years — at least outside of China — was bold enough to engage in competition against these three contenders, so the very emergence of Bolt Graphics seems like a breakthrough. However, the major focuses of Bolt's Zeus are high-quality rendering for movie and scientific industries as well as high-performance supercomputer simulations. If Zeus delivers on its promises, it could establish itself as a serious alternative for scientific computing, path tracing, and offline rendering. But without strong software support, it risks struggling against dominant market leaders.
This week the Sunnyvale, California-based startup introduced its Zeus GPU platform designed for gaming, rendering, and supercomputer simulations, according to the article. "The company says that its Zeus GPU not only supports features like upgradeable memory and built-in Ethernet interfaces, but it can also beat Nvidia's GeForce RTX 5090 by around 10 times in path tracing workloads, according to a slide published by technology news site ServeTheHome." There is one catch: Zeus can only beat the RTX 5090 GPU in path tracing and FP64 compute workloads. It's not clear how well it will handle traditional rendering techniques, as that was less of a focus. In speaking with Bolt Graphics, the card does support rasterization, but there was less emphasis on that aspect of the GPU, and it may struggle to compete with the best graphics cards when it comes to gaming. And when it comes to data center options like Nvidia's Blackwell B200, it's an entirely different matter.

Unlike GPUs from AMD, Intel, and Nvidia that rely on proprietary instruction set architectures, Bolt's Zeus relies on the open-source RISC-V ISA, according to the published slides. The Zeus core relies on an open-source out-of-order general-purpose RVA23 scalar core mated with FP64 ALUs and the RVV 1.0 (RISC-V Vector Extension Version 1.0) that can handle 8-bit, 16-bit, 32-bit, and 64-bit data types as well as Bolt's additional proprietary extensions designed for acceleration of scientific workloads... Like many processors these days, Zeus relies on a multi-chiplet design... Unlike high-end GPUs that prioritize bandwidth, Bolt is evidently focusing on greater memory size to handle larger datasets for rendering and simulations. Also, built-in 400GbE and 800GbE ports to enable faster data transfer across networked GPUs indicates the data center focus of Zeus.

High-quality rendering, real-time path tracing, and compute are key focus areas for Zeus. As a result, even the entry-level Zeus 1c26-32 offers significantly higher FP64 compute performance than Nvidia's GeForce RTX 5090 — up to 5 TFLOPS vs. 1.6 TFLOPS — and considerably higher path tracing performance: 77 Gigarays vs. 32 Gigarays. Zeus also features a larger on-chip cache than Nvidia's flagship — up to 128MB vs. 96MB — and lower power consumption of 120W vs. 575W, making it more efficient for simulations, path tracing, and offline rendering. However, the RTX 5090 dominates in AI workloads with its 105 FP16 TFLOPS and 1,637 INT8 TFLOPS compared to the 10 FP16 TFLOPS and 614 INT8 TFLOPS offered by a single-chiplet Zeus...
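A quick sanity check on the quoted figures: for raw FP64 throughput the gap is roughly 3x, not 10x (the 10x claim is specific to path tracing), while the performance-per-watt gap is much larger. In Python, using only the numbers cited above:

```python
# Figures quoted above for the entry-level Zeus 1c26-32 vs. the RTX 5090.
zeus    = {"fp64_tflops": 5.0, "gigarays": 77, "watts": 120}
rtx5090 = {"fp64_tflops": 1.6, "gigarays": 32, "watts": 575}

fp64_ratio = zeus["fp64_tflops"] / rtx5090["fp64_tflops"]
# FP64 per watt is where the efficiency claim comes from:
fp64_per_watt_ratio = (zeus["fp64_tflops"] / zeus["watts"]) / \
                      (rtx5090["fp64_tflops"] / rtx5090["watts"])

print(f"FP64: {fp64_ratio:.2f}x, FP64/W: {fp64_per_watt_ratio:.1f}x")
# FP64: 3.12x, FP64/W: 15.0x
```

Of course, all of these numbers come from Bolt's own slides for a chip that exists only in simulation.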

The article emphasizes that Zeus "is only running in simulation right now... Bolt Graphics says that the first developer kits will be available in late 2025, with full production set for late 2026."

Thanks to long-time Slashdot reader arvn for sharing the news.
Python

Codon Python Compiler Gets Faster - and Changes to Apache 2 License (usenix.org) 4

Slashdot reader rikfarrow summarizes an article they wrote for Usenix.org about the Open Source Python compiler Codon: In 2023 I tried out Codon. At the time I had difficulty compiling the scripts I most commonly used, but was excited by the prospect. Python is essentially single threaded and checks the shape (type) of each variable as it interprets scripts. Codon fixes types and compiles Python into compact, executable binaries that execute much faster.

Several things have changed with their latest release: I have successful compiles, the committers have added a compiled version of NumPy (high performance math algorithms), and changed their open source license to Apache 2.

"The other big news is that Exaloop, the company that is behind Codon, has changed their license to Apache 2..." according to the article, so "commercial use and derivations of Codon are now permitted without licensing."
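The speedup comes from exactly what the summary describes: code whose types can be fixed at compile time. A file like the one below runs unchanged under CPython, and per Codon's documentation can be built into a native binary with something like `codon build -release fib.py`:

```python
def fib(n: int) -> int:
    # Monomorphic, statically typable code like this is Codon's sweet
    # spot: the annotations let it compile to machine code instead of
    # dispatching on types at runtime, while CPython still runs it as-is.
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

print(fib(40))  # 102334155
```

Code that leans on highly dynamic features (monkey-patching, heterogeneous containers) is the part that tends not to compile, which matches the author's earlier difficulties.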
AI

Ask Slashdot: Where Are the Open-Source Local-Only AI Solutions? 192

"Why can't we each have our own AI software that runs locally," asks long-time Slashdot reader BrendaEM — and that doesn't steal the work of others.

Imagine a powerful-but-locally-hosted LLM that "doesn't spy... and no one else owns it." We download it, from source code if you like, install it, if we want. And it assists: us... No one gate-keeps it. It's not out to get us...

And this is important: because no one owns it, the AI software is ours and leaks no data anywhere — to no one, no company, for no political nor financial purpose. No one profits — but you!

Their longer original submission also asks a series of related questions — like why can't we have software without AI? (Along with "Why is AMD stamping AI on local-processors?" and "Should AI be crowned the ultimate hype?") But this question seems to be at the heart of their concern. "What future will anyone have if anything they really wanted to do — could be mimicked and sold by the ill-gotten work of others...?"

"Could local, open-source, AI software be the only answer to dishearten billionaire companies from taking and selling back to their customers — everything we have done? Could we not...instead — steal their dream?!"

Share your own thoughts and answers in the comments. Where are the open-source, local-only AI solutions?
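For what it's worth, partial answers already exist: projects like llama.cpp run open-weight models entirely offline on consumer hardware. A minimal sketch using the llama-cpp-python bindings (the model filename here is a placeholder; you supply whatever GGUF file you've downloaded):

```python
def ask_local(prompt: str,
              model_path: str = "models/llama-3-8b-instruct.Q4_K_M.gguf",
              max_tokens: int = 128) -> str:
    """Run a prompt against a GGUF model stored on local disk.
    Inference happens entirely on this machine; no data leaves it."""
    from llama_cpp import Llama  # pip install llama-cpp-python
    llm = Llama(model_path=model_path, n_ctx=2048, verbose=False)
    result = llm(prompt, max_tokens=max_tokens)
    return result["choices"][0]["text"]
```

Whether the weights themselves count as "not built on the ill-gotten work of others" is, of course, the harder half of the submitter's question.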
Networking

Cloudflare Accused of Blocking Niche Browsers (palemoon.org) 162

Long-time Slashdot reader BenFenner writes: For the third time in recent memory, CloudFlare has blocked large swaths of niche browsers and their users from accessing web sites that CloudFlare gate-keeps. In the past these issues have been resolved quickly (within a week) and apologies issued with promises to do better. (See 2024-03-11, 2024-07-08, and 2025-01-30.)

This time around it has been over six weeks and CloudFlare has been unable or unwilling to fix the problem on their end, effectively stalling any progress on the matter with various tactics including asking browser developers to sign overarching NDAs.

That last link is an update posted today by Pale Moon's main developer: Our current situation remains unchanged: CloudFlare is still blocking our access to websites through the challenges, and the captcha/turnstile continues to hang the browser until our watchdog terminates the hung script after which it reloads and hangs again after a short pause (but allowing users to close the tab in that pause, at least). To say that this upsets me is an understatement. Other than deliberate intent or absolute incompetence, I see no reason for this to endure. Neither of those options are very flattering for CloudFlare.

I wish I had better news.

In a comment, Slashdot reader BenFenner shares a list posted by Pale Moon's developer of reportedly affected browsers:
  • Pale Moon
  • Basilisk
  • Waterfox
  • Falkon
  • SeaMonkey
  • Various Firefox ESR flavors
  • Thorium (on some systems)
  • Ungoogled Chromium
  • K-Meleon
  • LibreWolf
  • MyPal 68
  • Otter browser

Slashdot reader Z00L00K speculates that "this is some kind of anti-bot measure that fails. I suspect that the reason for them wanting a NDA to be signed is to prevent ways to circumvent the anti-bot measures..."

