Open Source

How W4 Plans To Monetize the Godot Game Engine Using Red Hat's Open Source Playbook (techcrunch.com) 8

An anonymous reader quotes a report from TechCrunch: A new company from the creators of the Godot game engine is setting out to grab a piece of the $200 billion global video game market -- and to do so, it's taking a cue from commercial open source software giant Red Hat. Godot, for the uninitiated, is a cross-platform game engine first released under an open source license back in 2014, though its initial development pre-dates that by several years. Today, Godot claims some 1,500 contributors, and is considered one of the world's top open source projects by various metrics. Godot has been used in high-profile games such as the Sonic Colors: Ultimate remaster, published by Sega last year as the first major mainstream game powered by Godot. But Tesla, too, has apparently used Godot to power some of the more graphically intensive animations in its mobile app.

Among Godot's founding creators is Juan Linietsky, who has served as head of development for the Godot project for the past 13 years, and who will now serve as CEO of W4 Games, a new venture that's setting out to take Godot to the next level. W4 quietly exited stealth last week, but today the Ireland-headquartered company has divulged more details about its goals to grow Godot and make it accessible for a wider array of commercial use cases. On top of that, the company told TechCrunch that it has raised $8.5 million in seed funding to make its mission a reality, with backers including OSS Capital, Lux Capital, Sisu Game Ventures and -- somewhat notably -- Bob Young, the co-founder and former CEO of Red Hat, an enterprise-focused open source company that IBM went on to acquire for $34 billion in 2019.

[...] "Companies like Red Hat have proven that with the right commercial offerings on top, the appeal of using open source in enterprise environments is enormous," Linietsky said. "W4 intends to do this very same thing for the game industry." In truth, Godot is nowhere near having the kind of impact in gaming that Linux has had in the enterprise, but it's still early days -- and this is exactly where W4 could make a difference. [...] W4's core target market will be broad -- it's gunning for independent developers and small studios, as well as medium and large gaming companies. The problem that it's looking to solve, ultimately, is that while Godot is popular with hobbyists and indie developers, companies are hesitant to use the engine on commercial projects due to its inherent limitations -- currently, there is no easy way to garner technical support, discuss the product's development roadmap, or access any other kind of value-added service. [...]

"W4 will offer console ports to developers under very accessible terms," Linietsky said. "Independent developers won't need to pay upfront to publish, while for larger companies there will be commercial packages that include support." Elsewhere, W4 is developing a range of products and services which it's currently keeping under wraps, with Linietsky noting that they will most likely be announced at Game Developers Conference (GDC) in San Francisco next March. "The aim of W4 is to help developers overcome any problem developers may stumble upon while trying to use Godot commercially," Linietsky added. It's worth noting that there are a handful of commercial companies out there already, such as Lone Wolf Technology and Pineapple Works, that help developers get the most out of Godot -- including console porting. But Linietsky was keen to highlight one core difference between W4 and these incumbents: its expertise. "The main distinctive feature of W4 is that it has been created by the Godot project leadership, which are the individuals with the most understanding and insight about Godot and its community," he said.

Operating Systems

NetBSD 9.3: A 2022 OS That Can Run On Late-1980s Hardware (theregister.com) 41

Version 9.3 of NetBSD is here, able to run on very low-end systems and with that authentic early-1990s experience. The Register reports: Version 9.3 comes some 15 months after NetBSD 9.2 and boasts new and updated drivers, improved hardware support, including for some recent AMD and Intel processors, and better handling of suspend and resume. The next sentence in the release announcement, though, might give some readers pause: "Support for wsfb-based X11 servers on the Commodore Amiga." This is your clue that we are in a rather different territory from run-of-the-mill PC operating systems here. A notable improvement in NetBSD 9.3 is being able to run a graphical desktop on an Amiga. This is a 2022 operating system that can run on late-1980s hardware, and there are not many of those around.

NetBSD supports eight "tier I" architectures: 32-bit and 64-bit x86 and Arm, plus MIPS, PowerPC, Sun UltraSPARC, and the Xen hypervisor. Alongside those, there are no fewer than 49 "tier II" supported architectures, which are less complete and where not everything works -- although almost all of them are on version 9.3, except for the port for original Acorn computers with 32-bit Arm CPUs, which is still only on NetBSD 8.1. There's also a "tier III" for ports that are on "life support," so there is a risk that Archimedes support could drop to that level. This is an OS that can run on 680x0 hardware, DEC VAX minicomputers and workstations, and Sun 2, 3, and 32-bit SPARC boxes. In other words, it reaches back as far as some 1970s hardware. Let this govern your expectations. For instance, in VirtualBox, if you tell it you want to create a NetBSD guest, it disables SMP support.

Open Source

NVIDIA Publishes 73k Lines Worth Of 3D Header Files For Fermi Through Ampere GPUs (phoronix.com) 6

In addition to its ongoing work on transitioning to an open-source GPU kernel driver, yesterday NVIDIA made a rare public open-source documentation contribution... NVIDIA quietly published 73k lines worth of header files documenting the 3D classes for its Fermi through current-generation Ampere GPUs. Phoronix's Michael Larabel reports: To NVIDIA's Open-GPU-Docs portal they have posted the 73k lines worth of 3D class header files covering RTX 30 "Ampere" GPUs back through the decade-old GeForce 400/500 "Fermi" graphics processors. These header files define the classes used to program the GPU's 3D engine, document the texture header and texture sampler layouts, and cover other 3D-related programming bits. Having all of these header files will be useful to the open-source Nouveau driver developers, saving them reverse engineering and guesswork over certain bits.

NVIDIA's Open GPU Kernel Driver covers only the GeForce RTX 20 "Turing" series and newer, so it's great to see NVIDIA now posting this documentation going back to Fermi, which is squarely aimed at helping the open-source community and Nouveau. [...] The timing of NVIDIA opening up these 3D classes back to Fermi is interesting and potentially tied to SIGGRAPH 2022 happening this week. Those wanting to grab NVIDIA's latest open-source GPU documentation can find it via this GitHub repository.
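For readers who haven't seen this kind of documentation before, below is a rough, purely hypothetical sketch of what GPU "3D class" method headers tend to look like: method offsets and bitfield layouts exposed as C macros. Every class ID, method name, offset, and value here is invented for illustration; the real definitions live in NVIDIA's open-gpu-doc repository.

```c
/* Hypothetical sketch of a GPU "3D class" method header.  All names,
 * offsets, and values are invented for illustration and are NOT NVIDIA's
 * actual definitions; see the open-gpu-doc repository for the real ones. */
#ifndef EXAMPLE_3D_CLASS_H
#define EXAMPLE_3D_CLASS_H

/* A hardware class is identified by a number; its methods are 32-bit
 * words written at fixed offsets into the command stream. */
#define EXAMPLE_3D_CLASS_ID                         0xABCD

/* Per-viewport methods, one 0x20-byte stride per viewport index. */
#define EXAMPLE_3D_SET_VIEWPORT_SCALE_X(i)          (0x0A00 + (i) * 0x20)
#define EXAMPLE_3D_SET_VIEWPORT_SCALE_Y(i)          (0x0A04 + (i) * 0x20)

/* A method plus the bit ranges of its fields (high:low notation, as is
 * common in register/class headers). */
#define EXAMPLE_3D_SET_BLEND_ENABLE                 0x0B40
#define EXAMPLE_3D_SET_BLEND_ENABLE_TARGET0         0:0
#define EXAMPLE_3D_SET_BLEND_ENABLE_TARGET0_FALSE   0x0
#define EXAMPLE_3D_SET_BLEND_ENABLE_TARGET0_TRUE    0x1

#endif /* EXAMPLE_3D_CLASS_H */
```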

Open Source

DreamWorks Animation To Release Renderer As Open-Source Software (hollywoodreporter.com) 30

With the annual CG confab SIGGRAPH slated to start Monday in Vancouver, DreamWorks Animation announced its intent to release its proprietary renderer, MoonRay, as open-source software later this year. The Hollywood Reporter reports: MoonRay has been used on feature films such as How to Train Your Dragon: The Hidden World, The Croods: A New Age, The Bad Guys and the upcoming Puss in Boots: The Last Wish. MoonRay uses DreamWorks' distributed computation framework, Arras, which will also be included in the open-source code base.

"We are thrilled to share with the industry over 10 years of innovation and development on MoonRay's vectorized, threaded, parallel, and distributed code base," said Andrew Pearce, DWA's vp of global technology. "The appetite for rendering at scale grows each year, and MoonRay is set to meet that need. We expect to see the code base grow stronger with community involvement as DreamWorks continues to demonstrate our commitment to open source."

Cloud

Will the US Army, Not Meta, Build an 'Open' Metaverse? (venturebeat.com) 35

Just five weeks before his death in 2001, Douglas Adams made a mind-boggling pronouncement. "We are participating in a 3.5 billion-year program to turn dumb matter into smart matter..." He gave the keynote address for an embedded systems conference at San Francisco's Moscone Center... Adams dazzled the audience with a vision of a world where information devices are ultimately "as plentiful as chairs...." When the devices of the world were networked together, they could create a "soft earth" — a shared software model of the world assembled from all the bits of data. Communicating in real time, the soft earth would be alive and developing — and with the right instruments, humankind could just as easily tap into a soft solar system.

It's 21 years later, in a world where the long-time global software company Bohemia Interactive Simulations claims to be "at the forefront of simulation training solutions for defense and civilian organizations." Writing in VentureBeat, its chief commercial officer argues that "We do not yet have a shared imagination for the metaverse and the technology required to build it," complaining that big-tech companies "want to keep users reliant on their tech within a closed, commercialized ecosystem." He envisions instead "an open virtual world that supports thousands of simultaneous players and offers valuable, immersive use cases."

The scope of this vision requires an open cloud architecture with native support for cloud scalability. By prioritizing cloud development and clear goal-setting, military organizations have taken significant leaps toward actually realizing this metaverse. In terms of industry progress toward the cloud-supported, scalable metaverse, no organization has come further than the U.S. Army.

Their Synthetic Training Environment (STE) has been in development since 2017. The STE aims to replace all legacy simulation programs and integrate different systems into a single, connected system for combined arms and joint training. The STE fundamentally differs from traditional, server-based approaches. For example, it will host a 1:1 digital twin of the Earth on a cloud architecture that will stream high fidelity (photo-realistic) terrain data to connected simulations. New terrain management platforms such as Mantle ETM will ensure that all connected systems operate on exactly the same terrain data. For example, trainees in a tank simulator will see the same trees, bushes and buildings as the pilot in a connected flight simulator, facilitating combined arms operations.

Cloud scalability (that is, scaling with available computational power) will allow for a better real-world representation of essential details such as population density and terrain complexity that traditional servers could not support. The ambition of STE is to automatically pull from available data resources to render millions of simulated entities, such as AI-based vehicles or pedestrians, all at once.... [D]evelopers are creating a high-fidelity, digital twin of the entire planet.

Commercial metaverses created for entertainment or commercial uses may not require an accurate representation of the earth.... Still, the military metaverse could be a microcosm of what may soon be a large-scale, open-source digital world that is not controlled or dominated by a few commercial entities....

STE success will pave the way for any cloud-based, open-source worlds that come after it, and will help prove that the metaverse's value extends far beyond that of a marketing gimmick.

Graphics

Coding Mistake Made Intel GPUs 100X Slower in Ray Tracing (tomshardware.com) 59

Intel Linux GPU driver developers have released an update that results in a massive 100X boost in ray tracing performance. This is something to be celebrated, of course. However, on the flip side, the driver was 100X slower than it should have been because of a memory allocation oversight. Tom's Hardware reports: Linux-centric news site Phoronix reports that a fix merged into the open-source Intel Mesa Vulkan driver was implemented by Intel Linux graphics driver engineering stalwart Lionel Landwerlin on Thursday. The developer wryly commented that the merge request, which already landed in Mesa 22.2, would deliver "Like a 100x (not joking) improvement." Intel has been working on Vulkan raytracing support since late 2020, but this fix is better late than never.

Usually, the Vulkan driver would ensure that temporary memory used for Vulkan ray-tracing work sat in local memory, i.e., the very fast graphics memory onboard the discrete GPU. A line of code was missing, so this memory-allocation housekeeping flag wasn't set. As a result, the Vulkan driver would shuffle ray-tracing data to slower offboard system memory and back, and those continual, convoluted transfers slowed ray-tracing performance dramatically. It turns out, as per our headline, that setting the "ANV_BO_ALLOC_LOCAL_MEM" flag ensured that VRAM would be used instead, and a 100X performance boost was the result.

"Mesa 22.2, which includes the new code, is due to be branched in the coming days and will be included in a bundle of other driver refinements, which should reach end-users by the end of August," adds the report.

Open Source

Can Google's New Programming Language 'Carbon' Replace C++ Better Than Rust? (thenewstack.io) 185

It's difficult for large projects to convert existing C++ codebases into Rust, argue Google engineers — so they've created a new "experimental" open source programming language called Carbon.

Google Principal Software Engineer Chandler Carruth introduced Carbon this week at the "CPP North" C++ conference in Toronto. TechRadar reports: The newly announced Carbon should be interoperable with existing C++ code; however, for users looking to make the full switch, the migration should be fairly easy. For those unsure about a full changeover, Carruth delved into more detail about some of the reasons why Carbon should be considered a powerful successor to the C++ language, including simpler grammar and smoother API imports.

Google's engineers are already building tools to translate C++ into this new language. "While Carbon began as a Google internal project, the development team ultimately wants to reduce contributions from Google, or any other single company, to less than 50% by the end of the year," reports The New Stack, adding that Google ultimately wants to hand off the project to an independent software foundation where development will be led by volunteers: Long the language of choice for building performance-critical applications, C++ is plagued with a number of issues that hamper modern developers, Carruth explained on a GitHub page. It has accumulated decades of technical debt, bringing with it many of the outdated practices that were part of the language's predecessor, C. The keepers of C++ prioritize backward compatibility, in order to continue to support widely-used projects such as Linux and its package management ecosystem, Carruth charged.

The language's evolution is also stymied by a bureaucratic committee process, oriented around standardization rather than design, which can make it difficult to add new features. C++ has a largely sequestered development process, in which a select committee makes the important decisions, in a waterfall process that can take years. "The committee structure is designed to ensure representation of nations and companies, rather than building an inclusive and welcoming team and community of experts and people actively contributing to the language," Carruth wrote. "Access to the committee and standard is restricted and expensive, attendance is necessary to have a voice, and decisions are made by live votes of those present."

Carruth wants to build Carbon in a more open, community-led environment. The project will be maintained on GitHub, and discussed on Discord.... The design team wants to release a core working version ("0.1") by the end of the year.

Carbon will boast modern features like generics and memory safety (including dynamic bounds checks), the article points out. And "The development team will also set out to create a built-in package manager, something that C++ sorely lacks."

Microsoft

Microsoft Changes Policy Against the Sale of Open Source Software in the Microsoft Store (betanews.com) 34

Having previously upset software developers by implementing a ban on the sale of open source software in its app store, Microsoft has reversed its decision. From a report: The company says that it has listened to feedback -- which was vocal and negative -- and has updated the Microsoft Store Policies, removing references to open source pricing. Microsoft has also clarified just why it put the ban in place. The policy changes that effectively banned the sale of open source came into force last month, but Microsoft has already been forced to backtrack in the face of mounting criticism. In a series of tweets announcing the latest policy changes that remove this ban, Microsoft's Giorgio Sardo says that the previous policy was intended to "help protect customers from misleading product listings."

Operating Systems

Can a Fork Save Cutefish OS (or Its Desktop)? (debugpoint.com) 109

In April ZDNet called its beta "the cutest Linux distro you'll ever use," praising the polished "incredible elegance" of Debian-based Cutefish OS, with its uncluttered, MacOS-like "Cutefish DE" desktop.

But now CutefishOS.com times out, with at least one Reddit user complaining "their email is not responding" and seeking contributors for a fork.

But meanwhile, the technology site DebugPoint.com shares another update: It looks like the OpenMandriva project is already continuing with the development of the Cutefish DE (not the OS) for its own OS. For more details, visit the Matrix discussion page.

Besides, it's worth mentioning that Arch Linux already has the Cutefish desktop packages in the community repo. You can even install it as a standalone desktop environment in Arch Linux in a few easy steps. As you can see, it is easier to maintain the desktop environment and continue its development, because the structure is already out there.

I have tested and reviewed hundreds of distros over the years, and Cutefish OS is a promising one with its stunning desktop environment. It was written from the ground up with QML and C++ and takes advantage of KWin. It would have been an attractive desktop as a separate component and could have been another great option alongside KDE Plasma or GNOME.

Many open-source projects are born and die every year, and it's unfortunate to see the situation of Cutefish OS. I hope an official fork comes up soon, and we all can contribute to it.

Microsoft

Dissecting Microsoft's Proposed Policy To Ban Commercial Open-Source Apps (techcrunch.com) 51

Microsoft caused considerable consternation in the open source community over the past month, after unveiling a shake up to the way developers will be able to monetize open source software. From a report: There are many examples of open source software sold in Microsoft's app store as full-featured commercial applications, ranging from video editing software such as Shotcut, to FTP clients such as WinSCP. But given how easy it is for anyone to reappropriate and repackage open source software as a new standalone product, it appears that Microsoft is trying to put measures in place to prevent such "copycat" imitations from capitalizing on the hard work of the open source community.

However, at the crux of the issue was the specific wording of Microsoft's new policy, with section 10.8.7 noting that developers must not: ...attempt to profit from open-source or other software that is otherwise generally available for free, nor be priced irrationally high relative to the features and functionality provided by your product. In its current form, the language is seemingly preventing anyone -- including the project owners and maintainers -- from charging for their work. Moreover, some have argued that it could hold implications for proprietary applications that include open source components with certain licenses, while others have noted that developers may be deterred from making their software available under an open source license.

The Military

DARPA Is Worried About How Well Open-Source Code Can Be Trusted (technologyreview.com) 85

An anonymous reader quotes a report from MIT Technology Review: "People are realizing now: wait a minute, literally everything we do is underpinned by Linux," says Dave Aitel, a cybersecurity researcher and former NSA computer security scientist. "This is a core technology to our society. Not understanding kernel security means we can't secure critical infrastructure." Now DARPA, the US military's research arm, wants to understand the collision of code and community that makes these open-source projects work, in order to better understand the risks they face. The goal is to be able to effectively recognize malicious actors and prevent them from disrupting or corrupting crucially important open-source code before it's too late. DARPA's "SocialCyber" program is an 18-month-long, multimillion-dollar project that will combine sociology with recent technological advances in artificial intelligence to map, understand, and protect these massive open-source communities and the code they create. It's different from most previous research because it combines automated analysis of both the code and the social dimensions of open-source software.

Here's how the SocialCyber program works. DARPA has contracted with multiple teams of what it calls "performers," including small, boutique cybersecurity research shops with deep technical chops. One such performer is New York-based Margin Research, which has put together a team of well-respected researchers for the task. Margin Research is focused on the Linux kernel in part because it's so big and critical that succeeding here, at this scale, means you can make it anywhere else. The plan is to analyze both the code and the community in order to visualize and finally understand the whole ecosystem.

Margin's work maps out who is working on what specific parts of open-source projects. For example, Huawei is currently the biggest contributor to the Linux kernel. Another contributor works for Positive Technologies, a Russian cybersecurity firm that -- like Huawei -- has been sanctioned by the US government, says Aitel. Margin has also mapped code written by NSA employees, many of whom participate in different open-source projects. "This subject kills me," says Margin Research founder Sophia d'Antoine of the quest to better understand the open-source movement, "because, honestly, even the most simple things seem so novel to so many important people. The government is only just realizing that our critical infrastructure is running code that could be literally being written by sanctioned entities. Right now." This kind of research also aims to find underinvestment -- that is, critical software run entirely by one or two volunteers. It's more common than you might think -- so common that one way software projects currently measure risk is the "bus factor": Does this whole project fall apart if just one person gets hit by a bus?

SocialCyber will also tackle other open-source projects, such as Python, which is "used in a huge number of artificial-intelligence and machine-learning projects," notes the report. "The hope is that greater understanding will make it easier to prevent a future disaster, whether it's caused by malicious activity or not."

Red Hat Software

Red Hat Names New CEO (zdnet.com) 16

Red Hat announced that Paul Cormier, the company's CEO and president since 2020, is moving over to become chairman of the board. Matt Hicks, a Red Hat veteran and the company's head of products and technologies, will replace Cormier as president and CEO. ZDNet reports: It had been rumored at May 2022's Red Hat Summit that Cormier, who had been with Red Hat for over 14 years, might retire soon. That rumor wasn't true, but he is moving to a "somewhat" less demanding position. That said, as Stephanie Wonderlick, Red Hat's VP of Brand Experience + Communication, said, "I don't think Red Hat would have become Red Hat without Paul Cormier." [...]

As for Hicks, he's a popular figure in the company. He's known as a hands-on leader. Hicks joined Red Hat in 2006 as a developer working on porting Perl applications to Java. That is not the start one thinks of for a future CEO! Hicks knows it. He said in a note to Red Hat employees that he'd "never imagined that my career would lead me to this moment. If I had followed my initial path, not raised my hand for certain projects, or shied away from contributing ideas and asking questions, I might not be here. That is what I love about Red Hat, and it's something that differentiates us from other companies: nothing is predetermined; we're only limited by our passion and drive to contribute and make an impact." So it was that he quickly rose to leadership positions. In particular, thanks to his work with Red Hat OpenShift, he saw Red Hat move from being primarily a Linux powerhouse to a hybrid cloud technology leader as well.

Hicks, now in charge, said in a statement, "When I first joined Red Hat, I was passionate about open source and our mission, and I wanted to be a part of that. I am humbled and energized to be stepping into this role at this moment. There has never been a more exciting time to be in our industry, and the opportunity in front of Red Hat is vast. I'm ready to roll up my sleeves and prove that open-source technology truly can unlock the world's potential." He also said, "Together, [IBM and Red Hat] can really lead a new era of hybrid computing. Red Hat has the technology expertise and open-source model -- IBM has the reach."

Cormier's new role will focus on "moving forward to help customers drive innovation forward with a hybrid cloud platform built on open-source technology. Open-source technology has won the innovation debates, and whatever the future looks like, it's going to be built on open-source technology, and Red Hat will be there." Moving ahead, Cormier will continue to work alongside IBM chairman and CEO Arvind Krishna. Both Cormier and Hicks will report to Krishna. As for day-to-day work, Hicks said, "I'm here to do the work with you. Let's roll up our sleeves together, embrace these values and earn the opportunity ahead of us."

Cloud

Is Amazon's AWS Quietly Getting Better at Contributing to Open Source? (techrepublic.com) 8

"If I want AWS to ignore me completely all I have to do is open a pull request against one of their repositories," quipped cloud economist Corey Quinn in April, while also complaining that the real problem is "how they consistently and in my opinion incorrectly try to shape a narrative where they're contributing to the open source ecosystem at a level that's on par with their big tech company peers."

But on Friday tech columnist Matt Asay argued that AWS is quietly getting better at open source. "Agreed," tweeted tech journalist Steven J. Vaughan-Nichols in response, commending "Good open source people, good open-source work." (And Vaughan-Nichols later retweeted an AWS principal software engineer's announcement that "Over at Amazon Linux we are hiring, and also trying to lead and better serve customers by being more involved in upstream communities.") Mark Atwood, principal engineer for open source at Amazon, also joined Asay's thread, tweeting "I'm glad that people are noticing. Me and my team have been doing heavy work for years to get to this point. Generally we don't want to sit at the head of the table, but we are seeing the value of sitting at the table."

Asay himself was AWS's head of developer marketing/Open Source strategy for two years, leaving in August of 2021. But Friday Asay's article noted a recent tweet where AWS engineer Divij Vaidya announced he'd suddenly become one of the top 10 contributors to Apache Kafka after three months as the founding engineer for AWS's Apache Kafka open source team. (Vaidya added "We are hiring for a globally distributed fully remote team to work on open source Apache Kafka! Join us.")

Asay writes: Apache Kafka is just the latest example of this.... This is exactly what critics have been saying AWS doesn't do. And, for years, they were mostly correct.

AWS was, and is, far more concerned with taking care of customers than being popular with open-source audiences. So, the company has focused on being "the best place for customers to build and run open-source software in the cloud." Historically, that tended to not involve or require contributing to the open-source projects it kept building managed services around. Many felt that was a mistake — that a company so dependent on open source for its business was putting its supply chain at risk by not sustaining the projects upon which it depended...

PostgreSQL contributor (and sometime AWS open-source critic) Paul Ramsey has noticed. As he told me recently, it "[f]eels like a switch flipped at AWS a year or two ago. The strategic value of being a real stakeholder in the software they spin is now recognized as being worth the dollars spent to make it happen...." What seems to be happening at AWS, if quietly and usually behind the scenes, is a shift toward AWS service teams taking greater ownership in the open-source projects they operationalize for customers. This allows them to more effectively deliver results because they can help shape the roadmap for customers, and it ensures AWS customers get the full open-source experience, rather than a forked repo with patches that pile up as technical debt.

Vaidya and the Managed Service for Kafka team are one example, along with Madelyn Olson, an engineer with AWS's ElastiCache team and one of five core maintainers for Redis. And then there are the AWS employees contributing to Kubernetes, etcd and more. No, AWS is still not the primary contributor to most of these. Not yet. Google, Microsoft and Red Hat tend to top many of the charts, to Quinn's point above. This also isn't somehow morally wrong, as Quinn also argued: "Amazon (and any company) is there to make money, not be your friend."

But slowly and surely, AWS product teams are discovering that a key element of obsessing over customers is taking care of the open-source projects upon which those customers depend. In other words, part of the "undifferentiated heavy lifting" that AWS takes on for customers needs to be stewardship for the open-source projects those same customers demand.

UPDATE: Reached for a comment today, Asay clarified his position on Quinn's original complaints about AWS's low level of open source contributions. "What I was trying to say was that while Corey's point had been more-or-less true, it wasn't really true anymore."

Microsoft

Will Microsoft Ban Commercial Open Source from Its App Store? (sfconservancy.org) 54

Microsoft has "delayed enforcement" of what could be a controversial policy change, according to the Software Freedom Conservancy: A few weeks ago, Microsoft quietly updated its Microsoft [app] Store Policies, adding new policies (which go into effect next week), that include this text:

all pricing ... must ... [n]ot attempt to profit from open-source or other software that is otherwise generally available for free [meaning, in price, not freedom].

Wednesday, a number of Microsoft Store users discovered this and started asking questions. Quickly, those of us (including our own organization) that provide Free and Open Source Software (FOSS) via the Microsoft Store started asking our own questions too.... Since all (legitimate) FOSS is already available (at least in source code form) somewhere "for free" (as in "free beer"), this term (when enacted) will apply to all FOSS...

Sadly, these days, companies like Microsoft have set up these app stores as gatekeepers of the software industry. The primary way that commercial software distributors reach their customers (or non-profit software distributors reach their donors) is via app stores. Microsoft has closed its iron grasp on the distribution chain of software (again) — to squeeze FOSS from the marketplace. If successful, even app store users will come to believe that the only legitimate FOSS is non-commercial FOSS. This is first and foremost an affront to all efforts to make a living writing open source software. This is not a merely hypothetical consideration. Already many developers support their FOSS development (legitimately so, at least under the FOSS licenses themselves) through app store deployments that Microsoft recently forbid in their Store....

Microsoft counter-argues that this is about curating content for customers and/or limiting FOSS selling to the (mythical) "One True Developer". But, even a redrafted policy (that Giorgio Sardo [General Manager of Apps at Microsoft] hinted at publicly early Thursday) will mandate only toxic business models for FOSS (such as demo-ware, less-featureful versions available as FOSS, while the full-featured proprietary version is available for a charge).

The Conservancy argues that FOSS "was designed specifically to allow both the original developers and downstream redistributors to profit fairly from the act of convenient redistribution (such as on app stores)." But it also speculates about the sincerity of Microsoft's intentions. "We're cognizant that Microsoft probably planned all this, anyway — including the community outrage followed by their usual political theater of feigned magnanimity."

The Conservancy's post Thursday received an update Friday about Microsoft's coming policy update: After we and others pointed out this problem, a Microsoft employee claimed via Twitter that they would "delay enforcement" of their new anti-FOSS regulation [giving as their reason that "it could be perceived differently than intended."]

We do hope Microsoft will ultimately rectify the matter, and look forward to the change they intend to enact later. Twitter is a reasonable place to promote such a change once it's made, but an indication of non-enforcement by one executive on their personal account is a suboptimal approach. This is a precarious situation for FOSS projects who currently raise funds on the Microsoft Store; they deserve a definitive answer.

Given the tight timetable (just five days!) until the problematic policy actually does go into effect, we call on Microsoft to officially publish a corrected policy now that addresses this point and move the roll-out date at least two months into the future. (We suggest September 16, 2022.) This will allow FOSS projects to digest the new policy with a reasonable amount of time, and give Microsoft time to receive feedback from the impacted projects and FOSS experts.

Databases

Baserow Challenges Airtable With an Open Source No-Code Database Platform (techcrunch.com) 19

An anonymous reader quotes a report from TechCrunch: The burgeoning low-code and no-code movement is showing little sign of waning, with numerous startups continuing to raise sizable sums to help the less-technical workforce develop and deploy software with ease. Arguably one of the most notable examples of this trend is Airtable, a 10-year-old business that recently attained a whopping $11 billion valuation for a no-code platform used by firms such as Netflix and Shopify to create relational databases. In tandem, we're also seeing a rise in "open source alternatives" to some of the big-name technology incumbents, from Google's backend-as-a-service platform Firebase to open source scheduling infrastructure that seeks to supplant the mighty Calendly. A young Dutch company called Baserow sits at the intersection of both these trends, pitching itself as an open source Airtable alternative that helps people build databases with minimal technical prowess. Today, Baserow announced that it has raised $5.2 million in seed funding to launch a suite of new premium and enterprise products in the coming months, transforming the platform from its current database-focused foundation into a "complete, open source no-code toolchain," co-founder and CEO Bram Wiepjes told TechCrunch.

So what, exactly, does Baserow do in its current guise? Well, anyone with even the most rudimentary spreadsheet skills can use Baserow for use-cases spanning content marketing, such as managing brand assets collaboratively across teams; managing and organizing events; helping HR teams or startups manage and track applicants for a new role; and countless more, which Baserow provides pre-built templates for. [...] Baserow's open source credentials are arguably its core selling point, with the promise of greater extensibility and customizations (users can create their own plug-ins to enhance its functionality, similar to how WordPress works) -- this is a particularly alluring proposition for businesses with very specific or niche use cases that aren't well supported from an off-the-shelf SaaS solution. On top of that, some sectors require full control of their data and technology stack for security or compliance purposes. This is where open source really comes into its own, given that businesses can host the product themselves and circumvent vendor lock-in.

With a fresh 5 million euros in the bank, Baserow is planning to double down on its commercial efforts, starting with a premium incarnation that's officially launching out of an early access program later this month. This offering will be available as a SaaS and self-hosted product and will include various features such as the ability to export in different formats; user management tools for admin; Kanban view; and more. An additional "advanced" product will also be made available purely for SaaS customers and will include a higher data storage limit and service level agreements (SLAs). Although Baserow has operated under the radar somewhat since its official foundation in Amsterdam last year, it claims to have 10,000 active users, 100 sponsors who donate to the project via GitHub and 800 users already on the waiting list for its premium version. Later this year, Baserow plans to introduce a paid enterprise version for self-hosting customers, with support for specific requirements such as audit logs, single sign-on (SSO), role-based access control and more.

Open Source

Gtk 5 Might Drop X.11 Support, Says GNOME Dev (theregister.com) 145

One of the GNOME developers has suggested that the next major release of Gtk could drop support for the X window system. The Register reports: Emmanuele Bassi opened a discussion last week on the GNOME project's Gitlab instance that asked whether the developers could drop X.11 support in the next release of Gtk. At this point, it is only a suggestion, but if it gets traction, this could significantly accelerate the move to the Wayland display server and the end of X.11.

Don't panic: Gtk 5 is not imminent. Gtk is a well-established toolkit, originally designed for the GIMP bitmap editing program back in 1998. Gtk 4 arrived relatively recently, shortly before the release of GNOME 40 in 2021. GNOME 40 has new user-interface guidelines, and as a part of this, Gtk 4 builds GNOME's Adwaita theme into the toolkit by means of the new libadwaita library, which is breaking the appearance of some existing apps.

Also, to be fair, as we recently covered, the X window system is very old now and isn't seeing major changes, although new releases of parts of it do still happen. This discussion is almost certain to get wildly contentious, and the thread on Gitlab has been closed to further comments for now. If this idea gains traction, one likely outcome might well be a fork of Gtk, just as happened when GNOME 3 came out. [...] A lot of the features of the current version, X.11, are no longer used or relevant to most users. Even so, X.12 is barely even in the planning stages yet.

Open Source

Pine64 Is Working On a RISC-V Single-Board Computer (liliputing.com) 43

Open hardware company Pine64 says it's preparing to launch a single-board computer (SBC) that will be its most powerful RISC-V powered device yet. Liliputing reports: While Pine64 hasn't provided detailed specs yet (some are still being worked out), the company says that the upcoming SBC will have a RISC-V chip that offers performance comparable to the Rockchip RK3566 quad-core ARM Cortex-A55 processor at the heart of Pine64's Quartz64 board.

The RISC-V board will be available with 4GB or 8GB of RAM and features support for USB 3.0, Gigabit Ethernet, and a PCIe slot. And while Pine64 hasn't revealed which RISC-V processor it's using yet, the company notes that the chip features an Imagination Technologies BXE-2-32 GPU which is designed for "entry-level" and "mid-range" applications and for which Imagination plans to make source code available soon. Pine64 says the board will follow the "Model A" form factor, meaning it'll measure around 133 x 80 x 19mm (5.24" x 3.15" x 0.75"). That makes it a bit larger than a Raspberry Pi Model B, but the extra space means there's room for that PCIe slot and other I/O connectors.

Open Source

Software Freedom Conservancy Quits GitHub (theregister.com) 45

An anonymous reader quotes a report from The Register: The Software Freedom Conservancy (SFC), a non-profit focused on free and open source software (FOSS), said it has stopped using Microsoft's GitHub for project hosting -- and is urging other software developers to do the same. In a blog post on Thursday, Denver Gingerich, SFC FOSS license compliance engineer, and Bradley M. Kuhn, SFC policy fellow, said GitHub has over the past decade come to play a dominant role in FOSS development by building an interface and social features around Git, the widely used open source version control software. In so doing, they claim, the company has convinced FOSS developers to contribute to the development of a proprietary service that exploits FOSS. "We are ending all our own uses of GitHub, and announcing a long-term plan to assist FOSS projects to migrate away from GitHub," said Gingerich and Kuhn.

The SFC mostly uses self-hosted Git repositories, they say, but the organization did use GitHub to mirror its repos. The SFC has added a Give Up on GitHub section to its website and is asking FOSS developers to voluntarily switch to a different code hosting service. "While we will not mandate our existing member projects to move at this time, we will no longer accept new member projects that do not have a long-term plan to migrate away from GitHub," said Gingerich and Kuhn. "We will provide resources to support any of our member projects that choose to migrate, and help them however we can."

For the SFC, the break with GitHub was precipitated by the general availability of GitHub Copilot, an AI coding assistant tool. GitHub's decision to release a for-profit product derived from FOSS code, the SFC said, is "too much to bear." Copilot, based on OpenAI's Codex, suggests code and functions to developers as they're working. It's able to do so because it was trained "on natural language text and source code from publicly available sources, including code in public repositories on GitHub," according to GitHub. Gingerich and Kuhn see that as a problem because Microsoft and GitHub have failed to provide answers about the copyright ramifications of training its AI system on public code, about why Copilot was trained on FOSS code but not copyrighted Windows code, and whether the company can specify all the software licenses and copyright holders attached to code used in the training data set.
"We don't believe Amazon, Atlassian, GitLab, or any other for-profit hoster are perfect actors," said Gingerich and Kuhn. "However, a relative comparison of GitHub's behavior to those of its peers shows that GitHub's behavior is much worse. GitHub also has a record of ignoring, dismissing and/or belittling community complaints on so many issues, that we must urge all FOSS developers to leave GitHub as soon as they can."

Open Source

MNT Shrinks Its Open Source Reform Laptop Into a 7-Inch Pocket PC Throwback (arstechnica.com) 23

An anonymous reader quotes a report from Ars Technica: A few months ago, we reviewed the MNT Reform, which attempts to bring the dream of entirely open source hardware to an audience that doesn't want to design and build a laptop totally from scratch. Now, MNT is bringing its open-hardware ethos to a second PC, a 7-inch "Pocket Reform" laptop that recalls the design of old clamshell Pocket PCs, just like the big Reform references the design of chunky '90s ThinkPads.

The Pocket Reform borrows many of the big Reform laptop's design impulses, including a low-profile mechanical keyboard and trackball-based pointing device and a chunky, retro-throwback design. The device includes a 7-inch 1080p screen, a pair of USB-C ports (one of which is used for charging), a microSD slot for storage expansion, and a micro HDMI port for connecting to a display when you're at your desk. [...] The version of the Pocket Reform in the announcement isn't ready to launch yet, and MNT says it represents "near-final specs and design." For users interested in the Pocket Reform's imminent early beta program, there's a newsletter sign-up link at the bottom of the announcement.

One of the main complaints Ars noted about the big Reform was the "miserably slow ARM processor," which will be included in the Pocket Reform.

With that said, MNT has addressed other complaints about the big Reform by "adding reinforced metal side panels to cover the ports and a redesigned battery system that won't let the batteries fully discharge if the laptop is left unplugged."

Open Source

Linus Torvalds Is Cautiously Optimistic About Bringing Rust Into Linux Kernel's Next Release (zdnet.com) 123

slack_justyb shares a report from ZDNet: For over three decades, Linux has been written in the C programming language. Indeed, Linux is C's most outstanding accomplishment. But the last few years have seen growing momentum to make the Rust programming language Linux's second language. At the recent Open Source Summit in Austin, Texas, Linux creator Linus Torvalds said he could see Rust making it into the Linux kernel as soon as the next major release. "I'd like to see the Rust infrastructure merging to be started in the next release, but we'll see," Torvalds said after the summit. "I won't force it, and it's not like it's going to be doing anything really meaningful at that point -- it would basically be the starting point. So, no promises."

Now, you may ask: "Why are they adding Rust at all?" Rust lends itself more easily to writing secure software. Samartha Chandrashekar, an AWS product manager, said it "helps ensure thread safety and prevent memory-related errors, such as buffer overflows that can lead to security vulnerabilities." Many other developers agree with Chandrashekar. Torvalds also agrees and likes that Rust is more memory-safe. "There are real technical reasons like memory safety and why Rust is good to get in the kernel." Mind you, no one is going to be rewriting the entire 30 or so million lines of the Linux kernel into Rust. As Linux developer Nelson Elhage said in his summary of the 2020 Linux Plumber's meeting on Rust in Linux: "They're not proposing a rewrite of the Linux kernel into Rust; they are focused only on moving toward a world where new code may be written in Rust." The three areas of potential concern for Rust support are making use of the existing APIs in the kernel, architecture support, and dealing with application binary interface (ABI) compatibility between Rust and C.
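To make the buffer-overflow argument concrete, here is a small, purely illustrative C program (not kernel code; the struct and its fields are invented for this sketch). The out-of-bounds copy compiles without complaint and silently overwrites an adjacent field, the kind of latent vulnerability that safe Rust is designed to rule out, since its slice and array indexing is bounds-checked and an equivalent overrun would panic immediately instead of corrupting memory.

```c
/* Illustrative only, not kernel code: the class of memory bug that Rust's
 * safety guarantees are meant to eliminate.  In C, this out-of-bounds write
 * compiles cleanly and is undefined behavior at runtime. */
#include <stdio.h>
#include <string.h>

struct packet {
    char payload[8];
    int  is_admin;              /* sits right after the undersized buffer */
};

static void copy_payload(struct packet *p, const char *src)
{
    /* No length check: a 12-byte source overflows payload[] and spills into
     * the neighboring field.  Neither the compiler nor the runtime objects. */
    strcpy(p->payload, src);
}

int main(void)
{
    struct packet p = { .is_admin = 0 };
    copy_payload(&p, "AAAAAAAAAAA");       /* 11 chars + NUL = 12 bytes */
    printf("is_admin = %d\n", p.is_admin); /* likely nonzero: silent corruption */
    return 0;
}
```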
