Open Source

Why Are 'Supply Chain Attacks' on Open Source Libraries Getting Worse? (arstechnica.com) 44

"A rash of supply chain attacks hitting open source software over the past year shows few signs of abating, following the discovery this week of two separate backdoors slipped into a dozen libraries downloaded by hundreds of thousands of server administrators," reports Ars Technica: The compromises of Webmin and the RubyGems libraries are only the latest supply chain attacks to hit open source software. Most people don't think twice about installing software or updates from the official site of a known developer. As developers continue to make software and websites harder to exploit, black hats over the past few years have increasingly exploited this trust to spread malicious wares by poisoning code at its source...

To be fair, closed-source software also falls prey to supply-side attacks -- as evidenced by those that hit computer maker ASUS on two occasions, the malicious update to tax-accounting software M.E.Doc that seeded the NotPetya outbreak of 2017, and another backdoor that infected users of the CCleaner disk-cleanup utility that same year. But the low-hanging fruit for supply chain attacks seems to be open source projects, in part because many don't make multi-factor authentication and code signing mandatory among their large bases of contributors.

"The recent discoveries make it clear that these issues are becoming more frequent and that the security ecosystem around package publication and management isn't improving fast enough," Atredis Partners Vice President of Research and Development HD Moore told Ars. "The scary part is that each of these instances likely resulted in even more developer accounts being compromised (through captured passwords, authorization tokens, API keys, and SSH keys). The attackers likely have enough credentials at hand to do this again, repeatedly, until all credentials are reset and appropriate MFA and signing is put in place."

IBM

IBM is Moving OpenPower Foundation To The Linux Foundation (techcrunch.com) 46

IBM makes the Power Series chips, and as part of that has open sourced some of the underlying technologies to encourage wider use of these chips. The open source pieces have been part of the OpenPower Foundation. Today, the company announced it was moving the foundation under The Linux Foundation, and while it was at it, announced it was open sourcing several other important bits. From a report: Ken King, general manager for OpenPower at IBM, says that at this point in his organization's evolution, they wanted to move it under the auspices of the Linux Foundation. But IBM didn't stop there. It also announced that it was open sourcing some of the technical underpinnings of the Power Series chip to make it easier for developers and engineers to build on top of the technology. Perhaps most importantly, the company is open sourcing the Power Instruction Set Architecture (ISA). These are "the definitions developers use for ensuring hardware and software work together on Power," the company explained.

King sees open sourcing this technology as an important step for a number of reasons around licensing and governance. "The first thing is that we are taking the ability to be able to implement what we're licensing, the ISA instruction set architecture, for others to be able to implement on top of that instruction set royalty free with patent rights," he explained. The company is also putting this under an open governance workgroup at the OpenPower Foundation.

Open Source

Slackware, the Longest Active Linux Distro, Finally Has a Patreon Page (patreon.com) 49

"Slackware is the longest active Linux distribution project, founded in 1993," writes TheBAFH (Slashdot reader #68,624).

"Today there are many Linux distributions available, but I've remained dedicated to this project as I believe it still holds an important place in the Linux ecosystem," writes Patrick J. Volkerding on a new Patreon page. He adds that Slackware's users "know that Slackware can be trusted not to constantly change the way things work, so that your investment in learning Slackware lasts longer than it would with a system that's a moving target... Your support is greatly appreciated, and will make it possible for me to continue to maintain this project."

TheBAFH writes: The authenticity of the Patreon page has been confirmed by Mr. Volkerding in a post in the Slackware forum of LinuxQuestions.org. "I was going to wait to announce it until I had a few more planned updates done in -current that would be getting things closer to an initial 15.0 beta release, but since it's been spotted in the wild I'll confirm it."
Slashdot also emailed Patrick J. Volkerding at Slackware.com last summer and confirmed that that is indeed the account that he's posting from on LinuxQuestions. At the time, he was still trying to find the time to get a Patreon page set up.

"I've been trying to catch up on nearly a decade of neglecting everything other than Slackware, but I'm at least getting more caught up."
Google

Google Open-Sources Live Transcribe's Speech Engine (venturebeat.com) 14

Friday Google open-sourced "the speech engine that powers its Android speech recognition transcription tool Live Transcribe," reports Venture Beat: The company hopes doing so will let any developer deliver captions for long-form conversations. The source code is available now on GitHub.

Google released Live Transcribe in February. The tool uses machine learning algorithms to turn audio into real-time captions. Unlike Android's upcoming Live Caption feature, Live Transcribe is a full-screen experience, uses your smartphone's microphone (or an external microphone), and relies on the Google Cloud Speech API. Live Transcribe can caption real-time spoken words in over 70 languages and dialects. You can also type back into it — Live Transcribe is really a communication tool. The other main difference: Live Transcribe is available on 1.8 billion Android devices. (When Live Caption arrives later this year, it will only work on select Android Q devices.)

Anime

Anime Studio, Khara, Is Planning To Use Open-Source Blender Software (neowin.net) 20

Japanese anime studio Khara is moving to Blender, the open-source 3D creation software. "It'll begin partially using the software for its current development 'EVANGELION:3.0+1.0' but will make the full switch once that project is finished," reports Neowin. "The current project is expected to end in June next year, so after that point, its employees will start using Blender for the majority of their work." From the report: At the moment, Khara uses 3ds Max from Autodesk on a subscription basis; however, the company found that it had to reach out to small and medium-sized businesses for its projects. Due to the limitations of those companies, it's harder for them to afford 3ds Max. By switching to Blender, Khara says it can work better with external firms.

While Blender will be used for the bulk of the work, Khara does have a backup plan if there's anything Blender struggles with; Hiroyasu Kobayashi, General Manager of Digital Dpt. and Director of Board of Khara, said: "There are currently some areas where Blender cannot take care of our needs, but we can solve it with the combination with Unity. Unity is usually enough to cover 3ds Max and Maya as well. Unity can be a bridge among environments."

Open Source

Was 2007 the 'Golden Age of Open Source'? (linuxjournal.com) 42

Just a few months ago, the editor of the recently-departed Linux Journal wrote that in many ways the golden age of Linux and FOSS was 2007. "Linux was now mainstream in corporate IT, and it was much rarer to meet much resistance when you wanted to set up Linux servers, unless your company was a 100% Windows shop... FOSS companies were making a lot of money, and developers were being paid to work on Linux and FOSS full time."

He also wrote that when Linux Journal later folded (the first time), "It became clearer than ever to me that while Linux and FOSS had won the battle over the tech giants a decade before, new ones had taken their place in the meantime, and we were letting them win."

And he offered this final assessment in April: Today, Linux has wide hardware support, and a number of vendors offer hardware with Linux pre-installed and supported. The internet itself is full of FOSS projects, and one of the first things people do when they are about to start on a software project is to look on GitHub to see if anything that meets their needs already exists. Linux absolutely dominates the cloud in terms of numbers of VMs that run it, and much cloud infrastructure also runs FOSS services. Linux also is in many people's pockets and home appliances. Linux and FOSS are more ubiquitous than ever.

Linux and FOSS also are more hidden than ever. So many of those FOSS projects on GitHub ultimately are used as building blocks for proprietary software. So many companies that seem to champion FOSS by helping upstream projects they rely on also choose to keep the projects they write themselves proprietary. Although Linux dominates the cloud, more and more developers and system administrators who use the cloud do so via proprietary APIs and proprietary services. New developers and sysadmins get less exposure to Linux servers and FOSS services if they use the cloud how the providers intended. And, while Linux runs in your pocket and in your home, it's hidden underneath a huge layer of proprietary applications.

For the most part, the FOSS philosophy that defined Linux in its early days is hidden as well. Many people in the community tout FOSS only in terms of the ability to see code or as a way to avoid writing code themselves. It has become rarer for people to tout the importance of the freedoms that come along with FOSS and the problems that come from proprietary software. Indeed, most Linux application development in the cloud these days is done on Mac or Windows machines -- something that would have been considered unthinkable in the early days of Linux... I encourage everyone from all corners of the community not to take FOSS and Linux for granted. The world of readily available code and mostly open protocols you enjoy today isn't a given. If current trends continue, we could be back to a world of proprietary software, vendor lock-in and closed protocols like the world before 1994.

This new battle we find ourselves in is much more insidious. The ways that proprietary software and protocols have spread, in particular on mobile devices, has made it much more challenging for FOSS to win compared to in the past. If we want to win this battle, we need the whole community to work together toward a common goal.

Open Source

Are We In 'The Golden Age of Open Source'? (infoworld.com) 72

InfoWorld's Matt Asay argues we're in (or near) "the golden age of open source." Here and there an open source company might struggle to make a buck, but as a community of communities, open source has never been healthier. There are a few good indicators for this.

The first is that the clouds -- yes, all of them -- are open sourcing essential building blocks that expose their operations. Google rightly gets credit for moving first on this with projects like Kubernetes and TensorFlow, but the others have followed suit. For example, Microsoft Azure released Azure Functions, which "extends the existing Azure application platform with capabilities to implement code triggered by events occurring in virtually any Azure or third-party service as well as on-premises systems...." More recently, AWS released Firecracker, a lightweight, open source virtualization technology for running multi-tenant container workloads that emerged from AWS' serverless products (Lambda and Fargate). In a textbook example of how open source is supposed to work, Firecracker was derived from the Google-spawned crosvm but then spawned its own upgrade in the form of Weave Ignite, which made Firecracker much easier to manage.

These are just a few examples of the interesting open source projects emerging from the public clouds. (Across the ocean, Alibaba has been open sourcing its chip architecture, among other things.) More remains to be done, but these offer hope that the public clouds come not to bury open source, but rather to raise it...

[I]t's not hard to believe that the more companies get serious about becoming software companies, the more they're going to encourage their developers to get involved in the open source communities upon which they depend... [I]t's not just the upstarts. Old-school enterprises like Home Depot host code on GitHub, while financial services companies like Capital One go even further, sponsoring open source events to help foster community around their proliferating projects.... So, again, not everybody is doing it. Not yet. But far more organizations are involved in open source today than were back in 2008... Such involvement is happening both at the elite level (public clouds) and in more mainstream ways, ushering in a golden era of open source.

Open Source

Landmark 2.80 Release of Open Source Blender 3D With Improved UI Now Available (blender.org) 67

"In the 3D content creation space, where are lot of professional 3D software costs anywhere from 2K to 8K Dollars a license, people have always hoped that the free, open source 3D software Blender would some day be up to the job of replacing expensive commercial 3D software packages," writes Slashdot reader dryriver: This never happened, not because Blender didn't have good 3D features technically, but rather because the Blender Foundation simply did not listen to thousands of 3D artists screaming for a "more standard UI design" in Blender. Blender's eccentric GUI with reversed left-click-right-click conventions, keyboard shortcuts that don't match commercial software and other nastiness just didn't work for a lot of people.

After years of screaming, Blender finally got a much better and more familiar UI design in release 2.80, which can be downloaded here. Version 2.80 has many powerful features, but the standout feature is that after nearly 10 years of asking, 3D artists finally get a better, more standard, more sensible User Interface. This effectively means that for the first time, Blender can compete directly with expensive commercial 3D software made by industry leaders like Autodesk, Maxon, NewTek and SideFX.

Why the Blender Foundation took nearly a decade to revise the software's UI is anybody's guess.

Bug

Remember Autorun.inf Malware In Windows? Turns Out KDE Offers Something Similar (zdnet.com) 85

Long-time Slashdot reader Artem S. Tashkinov writes: A security researcher has published proof-of-concept (PoC) code for a vulnerability in the KDE software framework. A fix is not available at the time of writing. The bug was discovered by Dominik "zer0pwn" Penner and impacts the KDE Frameworks package 5.60.0 and below. The KDE Frameworks software library is at the base of the KDE desktop environment v4 and v5 (Plasma), currently included with a large number of Linux distributions.

The vulnerability occurs because of the way the KDesktopFile class (part of KDE Frameworks) handles .desktop and .directory files: maliciously crafted files of either type can be used to run arbitrary code on a user's computer. When a user opens the KDE file viewer to access the directory where these files are stored, the malicious code contained within the .desktop or .directory files executes with no further user interaction; the victim never has to open or run the files themselves.

Zero user interaction is required to trigger code execution — all you have to do is browse a directory containing a malicious file using any of KDE's file system browsing applications, such as Dolphin.
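Public write-ups of the bug point to KConfig's shell-expansion feature as the root cause: a configuration key flagged with the `[$e]` marker can embed a `$(...)` command substitution that is evaluated when the entry is parsed. An illustrative sketch of such a file follows (the embedded command is a harmless hypothetical, not the researcher's actual proof of concept):

```ini
[Desktop Entry]
Type=Service
# The [$e] marker enables shell expansion, so the $(...) below is
# executed as soon as KDE parses this entry to render the directory.
Icon[$e]=$(touch ~/kde-vuln-demo)
```

Because the file manager parses these entries just to draw icons, merely listing the directory is enough to trigger the expansion.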

When ZDNet contacted KDE for a comment Tuesday, their spokesperson provided this response.

"We would appreciate if people would contact security@kde.org before releasing an exploit into the public, rather than the other way around, so that we can decide on a timeline together."
Red Hat Software

Red Hat Joins the RISC-V Foundation (phoronix.com) 49

Red Hat has joined the RISC-V Foundation to help foster this open-source processor ISA. Phoronix reports: While we're still likely years away from seeing any serious RISC-V powered servers at least that can deliver meaningful performance, Red Hat has been active in promoting RISC-V as an open-source processor instruction set architecture and one of the most promising libre architectures we have seen over the years. Red Hat developers have already helped in working on Fedora's RISC-V support and now the IBM-owned company is helping out more and showing their commitment by joining the RISC-V Foundation. Red Hat joins the likes of Google, NVIDIA, Qualcomm, SiFive, Western Digital, IBM, and Samsung as among the many RISC-V members.
Open Source

When Open Source Software Comes With a Few Catches (wired.com) 120

As open source software grows more popular, and important, developers face an existential question: How to make money from something you give away for free? An anonymous reader shares a report: The Open Source Initiative standards body says an open source license must allow users to view the underlying source code, modify it, and share it as they see fit. Independent developers and large companies alike now routinely release software under these licenses. Many coders believe open collaboration results in better software. Some companies open their code for marketing purposes. Open source software now underpins much technology, from smartphone operating systems to government websites.

Companies that release software under open source licenses generate revenue in different ways. Some sell support, including Red Hat, which IBM acquired for $34 billion earlier this month. Others, like cloud automation company HashiCorp, sell proprietary software based on the open source components. But with the rise of cloud computing, developers see their open source code being bundled into services and sold by other companies. Amazon, for example, sells a cloud-hosted service based on the popular open source database Redis, which competes with a similar cloud-hosted service offered by Redis Labs, the sponsor of the open source project. To protect against such scenarios, companies behind popular open source projects are restricting how others can use their software. Redis Labs started the trend last year when it relicensed several add-ons for its core product under terms that essentially prohibit offering those add-ons as part of a commercial cloud computing service.

That way, Amazon and other cloud providers can't use those add-ons in their competing Redis services. Companies that want the functionality provided by those add-ons need to develop those features themselves, or get permission from Redis Labs. [...] Analytics company Confluent and database maker CockroachDB added similar terms to their licenses, preventing cloud computing companies from using some or all of their code to build competing services. Taking a slightly different tack, MongoDB relicensed its flagship database product last year under a new "Server Side Public License" (SSPL) that requires companies that sell the database system as a cloud service also release the source code of any additional software they include.

Open Source

Open Source RISC-V License Helps Alibaba Sidestep US Trade War (tomshardware.com) 221

"RISC-V is open source, so it's much more resistant to government bans," reports Tom's Hardware: The Alibaba Group Holding, China's largest e-commerce company, unveiled its first self-designed chip, Xuantie 910, based on the open source RISC-V instruction set architecture. As reported by Nikkei Asian Review, the chip will target edge computing and autonomous driving, while the RISC-V's open source license may help Alibaba side-step the U.S. trade war altogether.

Alibaba doesn't intend to manufacture the chips itself. Instead, it could outsource production to other Chinese semiconductor companies, such as Semiconductor Manufacturing International Corp. According to Nikkei, the Chinese government has been encouraging wealthy Chinese companies from various industries to enter the semiconductor industry in recent years. The government's efforts accelerated when the trade war with the U.S. started last year. China has reportedly forced foreign companies to transfer their technology and IP to Chinese companies if they wanted any chance at the local Chinese market.

"Most Chinese companies are still wary about whether Arm's architecture and Intel's architecture and technical support would remain accessible amid tech tension and further geopolitical uncertainties," Sean Yang, an analyst at research company CINNO in Shanghai, said, according to Nikkei. "It would be very helpful for China to increase long-term semiconductor sufficiency if big companies such as Alibaba jump in to build a chip (design) platform which smaller Chinese developers can just use without worrying about being cut off from supplies."

The article also notes that using RISC-V will give Alibaba "the ability to completely customize and extend the ISA of the processors built on top of it without having to get permission from any company first."
AI

AI is Supercharging the Creation of Maps Around the World (fb.com) 49

For those of us who live in places where driving directions are available at our fingertips, it might be surprising to learn that millions of miles of roads around the world have yet to be mapped. From a blog post: For more than 10 years, volunteers with the OpenStreetMap (OSM) project have worked to address that gap by meticulously adding data on the ground and reviewing public satellite images by hand and annotating features like roads, highways, and bridges. It's a painstaking manual task. But, thanks to AI, there is now an easier way to cover more areas in less time. With assistance from Map With AI (a new service that Facebook AI researchers and engineers created), a team of Facebook mappers has recently cataloged all the missing roads in Thailand and more than 90 percent of missing roads in Indonesia. Map With AI enabled them to map more than 300,000 miles of roads in Thailand in only 18 months, going from a road network that covered 280,000 miles before they began to 600,000 miles after. Doing it the traditional way -- without AI -- would have taken another three to five years, estimates Xiaoming Gao, a Facebook research scientist who helped lead the project.

"We were really excited about this achievement because it has proven Map With AI works at a large scale," Gao says. Starting today, anyone will be able to use the Map With AI service, which includes access to AI-generated road mappings in Afghanistan, Bangladesh, Indonesia, Mexico, Nigeria, Tanzania, and Uganda, with more countries rolling out over time. As part of Map With AI, Facebook is releasing our AI-powered mapping tool, called RapiD, to the OSM community. RapiD is an enhanced version of the popular OSM editing tool iD. RapiD is designed to make adding and editing roads quick and simple for anyone to use; it also includes data integrity checks to ensure that new map edits are consistent and accurate. You can find out more about RapiD at mapwith.ai.

AI

IBM Gives Cancer-Killing Drug AI Project To the Open Source Community 42

IBM has released three artificial intelligence (AI) projects tailored to take on the challenge of curing cancer to the open-source community. ZDNet reports: The first project, dubbed PaccMann -- not to be confused with the popular Pac-Man computer game -- is described as the "Prediction of anticancer compound sensitivity with Multi-modal attention-based neural networks." IBM is working on the PaccMann algorithm to automatically analyze chemical compounds and predict which are the most likely to fight cancer strains, which could potentially streamline this process. The ML algorithm exploits data on gene expression as well as the molecular structures of chemical compounds. IBM says that by identifying potential anti-cancer compounds earlier, this can cut the costs associated with drug development.

The second project is called "Interaction Network infErence from vectoR representATions of words," otherwise known as INtERAcT. This tool is a particularly interesting one given its automatic extraction of data from valuable scientific papers related to our understanding of cancer. INtERAcT aims to make the academic side of research less of a burden by automatically extracting information from these papers. At the moment, the tool is being tested on extracting data related to protein-protein interactions -- an area of study which has been marked as a potential cause of the disruption of biological processes in diseases including cancer.

The third and final project is "pathway-induced multiple kernel learning," or PIMKL. This algorithm utilizes datasets describing what we currently know when it comes to molecular interactions in order to predict the progression of cancer and potential relapses in patients. PIMKL uses what is known as multiple kernel learning to identify molecular pathways crucial for categorizing patients, giving healthcare professionals an opportunity to individualize and tailor treatment plans.
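The "multiple kernel learning" idea behind PIMKL can be sketched in a few lines: each molecular pathway contributes its own kernel (similarity) matrix over the patients, and the algorithm combines them with learned weights. The toy below is only an illustration of that combination step, not IBM's implementation; the feature subsets and weights are invented:

```python
import numpy as np

def rbf_kernel(X, gamma=1.0):
    """Radial basis function kernel matrix for the rows (samples) of X."""
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2 * X @ X.T  # pairwise squared distances
    return np.exp(-gamma * d2)

def combine_kernels(kernels, weights):
    """Weighted sum of per-pathway kernel matrices (weights normalized to 1)."""
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()
    return sum(w * K for w, K in zip(weights, kernels))

rng = np.random.default_rng(0)
X = rng.normal(size=(6, 4))          # 6 patients, 4 molecular features
# Pretend each "pathway" looks at a different subset of the features.
pathways = [[0, 1], [1, 2], [2, 3]]
kernels = [rbf_kernel(X[:, p]) for p in pathways]
K = combine_kernels(kernels, weights=[0.5, 0.3, 0.2])
print(K.shape)  # (6, 6)
```

In the real method the weights themselves are learned per pathway, and inspecting them is what tells clinicians which pathways mattered for stratifying patients.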
Graphics

'Fortnite' Creator Epic Games Supports Blender Foundation With $1.2 Million (blender.org) 43

Long-time Slashdot reader dnix writes: Apparently having a lot of people playing Fortnite is good for the open source community too. Epic Games' MegaGrants program just awarded the Blender Foundation $1.2 million over the next three years...to further the success of the free and open source 3D creation suite.
It's part of the company's $100 million "MegaGrants" program, according to the announcement. "Open tools, libraries and platforms are critical to the future of the digital content ecosystem," said Tim Sweeney, founder and CEO of Epic Games. "Blender is an enduring resource within the artistic community, and we aim to ensure its advancement to the benefit of all creators."
Bitcoin

Celo Launches Decentralized Open Source Financial Services Prototype (forbes.com) 32

Forbes notes that other financial transaction platforms hope to benefit from Facebook's struggles in launching its Libra cryptocurrency -- including Celo. The key value proposition of the assets running on top of the [Celo] platform is that they are immune to the wide swings in volatility that have plagued leading crypto assets in recent years. Many are designed to mirror the price movements of traditional currency, and most have names that reflect their fiat brethren, such as the Gemini Dollar. This is a critical need for the industry, as no asset will be able to serve as a currency if it does not maintain a consistent price. However, rather than being a centralized issuer that supports the price pegs with fiat held in banks, Celo has built a full-stack platform (meaning it developed the underlying blockchain and applications that run on top), that can offer an unlimited number of stablecoins all backed by cryptoassets held in reserve.

Furthermore, Celo is what is known as an algorithmic-based stablecoin provider. This distinction means that rather than being a centralized entity that controls issuances and redemptions, the company employs a smart-contract based stability protocol that automatically expands or contracts the supply of its collateral reserves in a fashion similar to how the Federal Reserve adjusts the U.S. monetary supply... Additionally, a key differentiator for Celo from similar projects is that for the first time its blockchain platform allows users to send/receive money to a person's phone number, IP address, email, as well as other identifiers. This feature will be critical to the long-term success for the network because it eliminates the need for counterparties in a transaction to share their public keys with each other prior to a transaction.
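The expand-and-contract mechanic described above can be illustrated with a deliberately simplified toy model. This is not Celo's actual stability protocol; the rule and step size are invented purely to show the direction of the adjustments:

```python
def adjust_supply(supply, market_price, peg=1.0, step=0.05):
    """Toy stability rule: if the coin trades above the peg, expand supply
    (more coins pushes the price down); if below the peg, contract supply.
    Returns the adjusted supply."""
    if market_price > peg:
        return supply * (1 + step)
    if market_price < peg:
        return supply * (1 - step)
    return supply

supply = 1_000_000.0
supply = adjust_supply(supply, market_price=1.10)  # above peg -> expand
print(supply)  # 1050000.0
supply = adjust_supply(supply, market_price=0.95)  # below peg -> contract
print(supply)  # 997500.0
```

A production protocol encodes a rule of this shape in smart contracts and backs it with crypto-asset reserves, but the feedback loop (price above peg mints, price below peg burns) is the core idea.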

And now... Celo is open-sourcing its entire codebase and design after two years of development. Additionally, the company is launching the first prototype of its platform, named the Alfajores Testnet, and Celo Wallet, an Android app that will allow users to manage their accounts and send/receive payments on the testnet.

This announcement and product is intended to be just the first of what will be a wide range of financial services applications designed to connect the world.

Celo's investors include LinkedIn founder Reid Hoffman and Twitter/Square CEO Jack Dorsey, the article points out, as well as some of Libra's first members, "including venerated venture capital firm Andreessen Horowitz and crypto-unicorn Coinbase."
Programming

Developer Requests Google Remove Their Logo From Re-Designed Golang Page (github.com) 113

Slashdot reader DevNull127 writes: Another very minor kerfuffle has broken out in the community for the Go programming language. When its official Twitter account asked for feedback on the new look of its web site, one developer suggested that it had been a mistake to add the Google logo to the lower-right of the home page. "A lot of people associate it with a commercial Google product."

Following the suggested procedure, he then created an issue on GitHub. ("Go is perceived by some as a pure Google project without community involvement. Adding a Google logo does not help in this discussion.") The issue received 61 upvotes (and 30 downvotes), eventually receiving a response from Google software engineer Andrew Bonventre, the engineering lead on the Go Team.

"Thanks for the issue. We spent a long time talking about it and are sensitive to this concern. It's equally important to make it clear that Google supports Go, which was missing before (Much like typescriptlang.org). Google pays for and hosts the infrastructure that golang.org runs on and we hope the current very small logo is a decent compromise." He then closed the issue.

The developer who created the issue then responded, "I get that you've discussed this internally. This is a great opportunity to discuss it with the community. I'm thankful to Google for financing the initial and ongoing development of Go but Google is not the only company investing [in] Go. I would like to move the Google logo into an separate section, together will the major stakeholders of the project."

In a later comment he added "I value Google's participation in Go and I'm not arguing to change that. Having the Google logo in the corner of each golang.org page suggests that this is a pure Google project when it is not..."

For some perspective, another Go developer had also suggested "animate the gopher's eyes on the website."

"Thanks, but we're not going to do this," responded the engineering lead on the Go Team. "We've discussed it before and it would be way too distracting."

Open Source

GitHub Removed Open Source Versions of 'Deepfakes' Porn App DeepNude (vice.com) 178

An anonymous reader quotes a report from Motherboard: GitHub recently removed code from its website that used neural networks to algorithmically strip clothing from images of women. The multiple code repositories were spun off from an app called DeepNude, a highly invasive piece of software that was specifically designed to create realistic nude images of women without their consent. The news shows how after DeepNude's creator pulled the plug on his own invention late last month following a media and public backlash, some platforms are now stopping the spread of similar tools. "We do not proactively monitor user-generated content, but we do actively investigate abuse reports. In this case, we disabled the project because we found it to be in violation of our acceptable use policy," a GitHub spokesperson told Motherboard in a statement. "We do not condone using GitHub for posting sexually obscene content and prohibit such conduct in our Terms of Service and Community Guidelines."

The "Sexually Obscene" section of GitHub's Community Guidelines states: "Don't post content that is pornographic. This does not mean that all nudity, or all code and content related to sexuality, is prohibited. We recognize that sexuality is a part of life and non-pornographic sexual content may be a part of your project, or may be presented for educational or artistic purposes. We do not allow obscene sexual content or content that may involve the exploitation or sexualization of minors."

Debian

After 25 Months, Debian 10 'buster' Released (debian.org) 158

"After 25 months of development the Debian project is proud to present its new stable version 10 (code name 'buster'), which will be supported for the next 5 years thanks to the combined work of the Debian Security team and of the Debian Long Term Support team."

An anonymous reader quotes Debian.org: In this release, GNOME defaults to using the Wayland display server instead of Xorg. Wayland has a simpler and more modern design, which has advantages for security. However, the Xorg display server is still installed by default and the default display manager allows users to choose Xorg as the display server for their next session.

Thanks to the Reproducible Builds project, over 91% of the source packages included in Debian 10 will build bit-for-bit identical binary packages. This is an important verification feature which protects users against malicious attempts to tamper with compilers and build networks. Future Debian releases will include tools and metadata so that end-users can validate the provenance of packages within the archive.

For those in security-sensitive environments, AppArmor, a mandatory access control framework for restricting programs' capabilities, is installed and enabled by default. Furthermore, all methods provided by APT (except cdrom, gpgv, and rsh) can optionally make use of "seccomp-BPF" sandboxing. The https method for APT is included in the apt package and does not need to be installed separately... Secure Boot support is included in this release for the amd64, i386 and arm64 architectures and should work out of the box on most Secure Boot-enabled machines.

The announcement touts Debian's "traditional wide architecture support," arguing that it shows Debian "once again stays true to its goal of being the universal operating system." It ships with several desktop applications and environments, including the following:
  • Cinnamon 3.8
  • GNOME 3.30
  • KDE Plasma 5.14
  • LXDE 0.99.2
  • LXQt 0.14
  • MATE 1.20
  • Xfce 4.12

"If you simply want to try Debian 10 'buster' without installing it, you can use one of the available live images which load and run the complete operating system in a read-only state via your computer's memory... Should you enjoy the operating system you have the option of installing from the live image onto your computer's hard disk."


Programming

'Kerfuffle' Erupts Around Newly-Proposed try() Feature For Go Language (thenewstack.io) 210

Matt Klein, a member of the Go steering committee, recently apologized for the angst caused to some people by "the try() kerfuffle... Change is hard, but sometimes it's for the best."

Tech columnist Mike Melanson covers the kerfuffle over the newly-proposed feature, while trying "not to over-dramatize what is happening." There is disagreement and conflicting views, but working through those views is how the open source sausage is made, is it not? Of course, in the Go community, how the core team receives those opposing views may be a point of soreness among some who vehemently opposed the vgo package versioning for Go and felt that, in the end, it was rammed through despite their objections. As one Gopher points out, it is better to debate now than summarily accept and then later deprecate...

As Go makes its way to Go 2.0, with Go 1.14 currently taking center stage for debate, there is, again, as Klein points out, some kerfuffle about a newly proposed feature called try(), which is "designed specifically to eliminate the boilerplate if statements typically associated with error handling in Go." According to the proposal, the "minimal approach addresses most common scenarios while adding very little complexity to the language" and "is easy to explain, straightforward to implement, orthogonal to other language constructs, and fully backward-compatible" as well as extensible for future needs.
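For context, the boilerplate the proposal targets looks like the following. This is a minimal sketch in today's Go (parseSum is a hypothetical example, not code from the proposal), with the proposed try() form shown in a comment, since try() was never added to the language:

```go
package main

import (
	"fmt"
	"strconv"
)

// parseSum parses two strings as integers and returns their sum,
// using the explicit if-err style that try() aimed to eliminate.
func parseSum(a, b string) (int, error) {
	x, err := strconv.Atoi(a)
	if err != nil {
		return 0, fmt.Errorf("parsing %q: %w", a, err)
	}
	y, err := strconv.Atoi(b)
	if err != nil {
		return 0, fmt.Errorf("parsing %q: %w", b, err)
	}
	return x + y, nil
}

// Under the proposal, the body could have collapsed to:
//
//	func parseSum(a, b string) (int, error) {
//		x := try(strconv.Atoi(a))
//		y := try(strconv.Atoi(b))
//		return x + y, nil
//	}
//
// where try() would return any non-nil error from the enclosing
// function automatically.

func main() {
	sum, err := parseSum("2", "40")
	fmt.Println(sum, err)

	_, err = parseSum("x", "1")
	fmt.Println(err != nil)
}
```

The readability debate centers on exactly this trade-off: the try() version is shorter, but the error-return path becomes implicit rather than visible at each call site.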

Much of the disagreement around try() comes down to whether the resultant code is more or less readable than current implementations of error handling. Beyond that, however, some say that even if try() were accepted, it has faults that would prevent them from recommending, or even allowing, its use among their teams. Meanwhile, another point of contention is offered in an open letter to the Go team about try by William Kennedy, who often writes about Go; it focuses not on style or function, but rather on whether a solution is needed at all. According to Kennedy, "the perceived error handling complaints are perhaps overblown and these changes are not what the majority of Go developers want or need," and try() may be a solution searching for a problem, and even the cause of more problems than it solves. "Since this new mechanic is going to cause severe inconsistencies in code bases, disagreements on teams, and create an impossible task for product owners to enforce consistent guidelines, things need to be slowed down and more data needs to be gathered," Kennedy writes.

He goes on to point out those very sensitivities that may have lingered from previous discussions in the Go community. "This is a serious change and it feels like it's being pushed through without a concerted effort to understand exactly what those 5% of Go developers meant when they said they wanted improved error handling...."
