AI

Analyst Mocks the Idea That It's 'The End of Programming' Again (zdnet.com) 97

January's Communications of the ACM includes an essay predicting "the end of programming" in an AI-powered future where "programming will be obsolete."

But IT analyst and ZDNet contributor Joe McKendrick remains skeptical, judging by a new essay sardonically titled "It's the end of programming as we know it — again." Over the past few decades, various movements, paradigms, or technology surges — whatever you want to call them — have roiled the software world, promising either to hand a lot of programming grunt work to end users, or to automate more of the process. CASE tools, 4GL, object-oriented programming, service-oriented architecture, microservices, cloud services, Platform as a Service, serverless computing, low-code, and no-code all have theoretically taken the onerous burdens out of software development. And, potentially, threatened the job security of developers.

Yet, here we are. Software developers are busier than ever, with demand for skills only increasing.

"I remember when the cloud first started becoming popular and companies were migrating to Office 365, everyone was saying that IT Pros will soon have no job," says Vlad Catrinescu, author at Pluralsight. "Guess what — we're still here and busier than ever."

The question is how developers' jobs will ultimately evolve. There is the possibility that artificial intelligence, applied to application development and maintenance, may finally make low-level coding a thing of the past.... Catrinescu believes that the emerging generation of automated or low-code development solutions actually "empowers IT professionals and developers to work on more challenging applications. IT departments can focus on enterprise applications and building complicated apps and automations that will add a lot of value to the enterprise."

Even the man predicting "the end of programming" in an AI-powered future also envisions new technology that "potentially opens up computing to almost anyone" (in ACM's video interview). But in ZDNet's article, Jared Ficklin, chief creative technologist and co-founder of argodesign, goes further, predicting the possibility of real-time computing:

"You could imagine asking Alexa to make you an app to help organize your kitchen. AI would recognize the features, pick the correct patterns and in real time, over the air deliver an application to your mobile phone or maybe into your wearable mobile computer."
AI

AI-Powered Software Delivery Company Predicts 'The End of Programming' (acm.org) 150

Matt Welsh is the CEO and co-founder of Fixie.ai, an AI-powered software delivery company founded by a team from Google and Apple. "I believe the conventional idea of 'writing a program' is headed for extinction," he opines in January's Communications of the ACM, "and indeed, for all but very specialized applications, most software, as we know it, will be replaced by AI systems that are trained rather than programmed."

His essay is titled "The End of Programming," and predicts a future where "programming will be obsolete." In situations where one needs a "simple" program (after all, not everything should require a model of hundreds of billions of parameters running on a cluster of GPUs), those programs will, themselves, be generated by an AI rather than coded by hand.... with humans relegated to, at best, a supervisory role.... I am not just talking about things like Github's CoPilot replacing programmers. I am talking about replacing the entire concept of writing programs with training models. In the future, CS students are not going to need to learn such mundane skills as how to add a node to a binary tree or code in C++. That kind of education will be antiquated, like teaching engineering students how to use a slide rule.
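
For context, the "mundane skill" Welsh singles out is a standard first-year exercise. A minimal sketch of binary-search-tree insertion in Python (our illustration, not anything from the essay):

```python
class Node:
    """A node in a binary search tree."""
    def __init__(self, value):
        self.value = value
        self.left = None
        self.right = None

def insert(root, value):
    """Insert value below root; return the (possibly new) root node."""
    if root is None:
        return Node(value)
    if value < root.value:
        root.left = insert(root.left, value)
    else:
        root.right = insert(root.right, value)
    return root

tree = None
for v in [5, 2, 8, 1]:
    tree = insert(tree, v)
print(tree.left.value)  # 2
```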

The engineers of the future will, in a few keystrokes, fire up an instance of a four-quintillion-parameter model that already encodes the full extent of human knowledge (and then some), ready to be given any task required of the machine. The bulk of the intellectual work of getting the machine to do what one wants will be about coming up with the right examples, the right training data, and the right ways to evaluate the training process. Suitably powerful models capable of generalizing via few-shot learning will require only a few good examples of the task to be performed. Massive, human-curated datasets will no longer be necessary in most cases, and most people "training" an AI model will not be running gradient descent loops in PyTorch, or anything like it. They will be teaching by example, and the machine will do the rest.
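
Concretely, the workflow Welsh describes looks more like prompt construction than training. A hypothetical Python sketch (the commented-out model call is invented for illustration; no real API is implied):

```python
# "Teaching by example": no gradient-descent loop, no PyTorch --
# just a handful of worked examples prepended to the new task.
examples = [
    ("I loved this film", "positive"),
    ("The service was terrible", "negative"),
    ("An instant classic", "positive"),
]

def build_prompt(examples, query):
    """Format a few labeled examples followed by the new input."""
    shots = [f"Input: {text}\nLabel: {label}" for text, label in examples]
    shots.append(f"Input: {query}\nLabel:")
    return "\n\n".join(shots)

prompt = build_prompt(examples, "I want my money back")
print(prompt)
# label = some_huge_model.complete(prompt)  # invented call, shown for shape only
```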

In this new computer science — if we even call it computer science at all — the machines will be so powerful and already know how to do so many things that the field will look like less of an engineering endeavor and more of an educational one; that is, how to best educate the machine, not unlike the science of how to best educate children in school. Unlike (human) children, though, these AI systems will be flying our airplanes, running our power grids, and possibly even governing entire countries. I would argue that the vast majority of Classical CS becomes irrelevant when our focus turns to teaching intelligent machines rather than directly programming them. Programming, in the conventional sense, will in fact be dead....

We are rapidly moving toward a world where the fundamental building blocks of computation are temperamental, mysterious, adaptive agents.... This shift in the underlying definition of computing presents a huge opportunity, and plenty of huge risks. Yet I think it is time to accept that this is a very likely future, and evolve our thinking accordingly, rather than just sit here waiting for the meteor to hit.

"I think the debate right now is primarily around the extent to which these AI models are going to revolutionize the field," Welsh says in a video interview. "It's more a question of degree rather than whether it's going to happen....

"I think we're going to change from a world in which people are primarily writing programs by hand to a world in which we're teaching AI models how to do things that we want them to do... It starts to feel more like a field that focuses on AI education and maybe even AI psychiatry. In order to solve these problems, you can't just assume that people are going to be writing the code by hand."
Programming

MIT's Newest fMRI Study: 'This is Your Brain on Code' (mit.edu) 9

Remember when MIT researchers did fMRI brain scans measuring blood flow through the brain to determine which parts were engaged when programmers evaluated code? MIT now says that a new paper (by many of the same authors) delves even deeper: Whereas the previous study looked at 20 to 30 people to determine which brain systems, on average, are relied upon to comprehend code, the new research looks at the brain activity of individual programmers as they process specific elements of a computer program. Suppose, for instance, that there's a one-line piece of code that involves word manipulation and a separate piece of code that entails a mathematical operation. "Can I go from the activity we see in the brains, the actual brain signals, to try to reverse-engineer and figure out what, specifically, the programmer was looking at?" asks Shashank Srikant, a PhD student in MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL). "This would reveal what information pertaining to programs is uniquely encoded in our brains." To neuroscientists, he notes, a physical property is considered "encoded" if they can infer that property by looking at someone's brain signals.

Take, for instance, a loop — an instruction within a program to repeat a specific operation until the desired result is achieved — or a branch, a different type of programming instruction that can cause the computer to switch from one operation to another. Based on the patterns of brain activity that were observed, the group could tell whether someone was evaluating a piece of code involving a loop or a branch. The researchers could also tell whether the code related to words or mathematical symbols, and whether someone was reading actual code or merely a written description of that code....
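
To make the contrasts concrete, the study's stimuli distinguish snippets along lines like these (our examples, not the study's actual materials):

```python
# A loop: repeat an operation until a result is reached.
total = 0
for n in range(10):
    total += n

# A branch: switch between operations depending on a condition.
if total > 20:
    size = "large"
else:
    size = "small"

# The study's other contrast, word manipulation vs. math:
shout = "hello".upper()            # string/word manipulation
hypotenuse = (3**2 + 4**2) ** 0.5  # mathematical operation
```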

The team carried out a second set of experiments, which incorporated machine learning models called neural networks that were specifically trained on computer programs. These models have been successful, in recent years, in helping programmers complete pieces of code. What the group wanted to find out was whether the brain signals seen in their study when participants were examining pieces of code resembled the patterns of activation observed when neural networks analyzed the same piece of code. And the answer they arrived at was a qualified yes. "If you put a piece of code into the neural network, it produces a list of numbers that tells you, in some way, what the program is all about," Srikant says. Brain scans of people studying computer programs similarly produce a list of numbers. When a program is dominated by branching, for example, "you see a distinct pattern of brain activity," he adds, "and you see a similar pattern when the machine learning model tries to understand that same snippet."
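
The "list of numbers" Srikant describes is an embedding: a fixed-length vector summarizing a piece of code. A toy Python stand-in that mimics only the input/output shape (real models learn the mapping from data; this hash-based version understands nothing):

```python
import hashlib

def toy_embedding(code: str, dim: int = 8) -> list:
    """Stand-in for a trained code model: map source text to a
    fixed-length vector of numbers. Real models learn this mapping;
    this hash-based version only illustrates the input/output shape."""
    digest = hashlib.sha256(code.encode()).digest()
    return [round(b / 255.0, 3) for b in digest[:dim]]

print(toy_embedding("for n in range(10): total += n"))
print(toy_embedding("if x > 0: y = 1"))
```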

But where will it all lead? They don't yet know what these recently-gleaned insights can tell us about how people carry out more elaborate plans in the real world.... Creating models of code composition, says O'Reilly, a principal research scientist at CSAIL, "is beyond our grasp at the moment." Lipkin, a BCS PhD student, considers this the next logical step — figuring out how to "combine simple operations to build complex programs and use those strategies to effectively address general reasoning tasks." He further believes that some of the progress toward that goal achieved by the team so far owes to its interdisciplinary makeup. "We were able to draw from individual experiences with program analysis and neural signal processing, as well as combined work on machine learning and natural language processing," Lipkin says. "These types of collaborations are becoming increasingly common as neuro- and computer scientists join forces on the quest towards understanding and building general intelligence."
Television

Amazon Planning Standalone App For Sports Content 48

According to The Information, Amazon is working on a standalone app for watching sports content. Reuters reports: Sports remains one of the biggest attractions for live viewing as U.S. audiences increasingly switch from pay TV subscriptions to streaming apps. The move will likely complement Amazon's effort to double down on sports programming through its Prime Video service, a key channel to attract consumers to its shopping platform.

Amazon already owns the rights to stream games including National Football League's Thursday Night Football franchise and Premier League soccer matches in the UK, setting it up to better compete with sports streaming leader Walt Disney Co. It was not clear when Amazon would roll out the sports app and whether it would go through with the plan, according to the report.
Programming

Code-Generating AI Can Introduce Security Vulnerabilities, Study Finds (techcrunch.com) 37

An anonymous reader quotes a report from TechCrunch: A recent study finds that software engineers who use code-generating AI systems are more likely to cause security vulnerabilities in the apps they develop. The paper, co-authored by a team of researchers affiliated with Stanford, highlights the potential pitfalls of code-generating systems as vendors like GitHub start marketing them in earnest. The Stanford study looked specifically at Codex, the AI code-generating system developed by San Francisco-based research lab OpenAI. (Codex powers Copilot.) The researchers recruited 47 developers -- ranging from undergraduate students to industry professionals with decades of programming experience -- to use Codex to complete security-related problems across programming languages including Python, JavaScript and C.

Codex was trained on billions of lines of public code to suggest additional lines of code and functions given the context of existing code. The system surfaces a programming approach or solution in response to a description of what a developer wants to accomplish (e.g. "Say hello world"), drawing on both its knowledge base and the current context. According to the researchers, the study participants who had access to Codex were more likely to write incorrect and "insecure" (in the cybersecurity sense) solutions to programming problems compared to a control group. Even more concerningly, they were more likely to say that their insecure answers were secure compared to the people in the control group.

Megha Srivastava, a postgraduate student at Stanford and the second co-author on the study, stressed that the findings aren't a complete condemnation of Codex and other code-generating systems. The study participants didn't have security expertise that might've enabled them to better spot code vulnerabilities, for one. That aside, Srivastava believes that code-generating systems are reliably helpful for tasks that aren't high risk, like exploratory research code, and could with fine-tuning improve in their coding suggestions. "Companies that develop their own [systems], perhaps further trained on their in-house source code, may be better off as the model may be encouraged to generate outputs more in-line with their coding and security practices," Srivastava said.
The co-authors suggest vendors use a mechanism to "refine" users' prompts to be more secure -- "akin to a supervisor looking over and revising rough drafts of code," reports TechCrunch. "They also suggest that developers of cryptography libraries ensure their default settings are secure, as code-generating systems tend to stick to default values that aren't always free of exploits."
Programming

Study Finds AI Assistants Help Developers Produce Code That's More Likely To Be Buggy (theregister.com) 50

Computer scientists from Stanford University have found that programmers who accept help from AI tools like Github Copilot produce less secure code than those who fly solo. From a report: In a paper titled, "Do Users Write More Insecure Code with AI Assistants?", Stanford boffins Neil Perry, Megha Srivastava, Deepak Kumar, and Dan Boneh answer that question in the affirmative. Worse still, they found that AI help tends to delude developers about the quality of their output. "We found that participants with access to an AI assistant often produced more security vulnerabilities than those without access, with particularly significant results for string encryption and SQL injection," the authors state in their paper.

"Surprisingly, we also found that participants provided access to an AI assistant were more likely to believe that they wrote secure code than those without access to the AI assistant." Previously, NYU researchers have shown that AI-based programming suggestions are often insecure in experiments under different conditions. The Stanford authors point to an August 2021 research paper titled "Asleep at the Keyboard? Assessing the Security of GitHub Copilot's Code Contributions," which found that given 89 scenarios, about 40 per cent of the computer programs made with the help of Copilot had potentially exploitable vulnerabilities.

That study, the Stanford authors say, is limited in scope because it only considers a constrained set of prompts corresponding to 25 vulnerabilities and just three programming languages: Python, C, and Verilog. The Stanford scholars also cite a followup study from some of the same NYU eggheads, "Security Implications of Large Language Model Code Assistants: A User Study," as the only comparable user study they're aware of. They observe, however, that their work differs because it focuses on OpenAI's codex-davinci-002 model rather than OpenAI's less powerful codex-cushman-001 model, both of which play a role in GitHub Copilot, itself a fine-tuned descendant of a GPT-3 language model.
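
For readers who haven't seen it, the SQL injection failure mode both papers flag is easy to demonstrate. A generic Python sketch of the insecure pattern and its fix (our example, not code from either study):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

user_input = "nobody' OR '1'='1"

# Insecure: interpolating input straight into the query string.
# The crafted input turns the WHERE clause into a tautology.
leaked = conn.execute(
    f"SELECT * FROM users WHERE name = '{user_input}'"
).fetchall()
print(leaked)  # [('alice', 'admin')] -- every row comes back

# Secure: a parameterized query treats the input as data, not SQL.
safe = conn.execute(
    "SELECT * FROM users WHERE name = ?", (user_input,)
).fetchall()
print(safe)    # [] -- no user is literally named that
```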

Math

Donald Knuth's 2022 'Christmas Tree' Lecture Is About Trees (thenewstack.io) 11

Like a visit from an old friend, it's Donald Knuth's annual Christmas tree lecture for 2022. "Because of the pandemic, it's been three years since Knuth has been able to honor this tradition," notes The New Stack: 2022 marks the 60th anniversary of that fateful day in 1962 when a 24-year-old Donald Knuth started writing "The Art of Computer Programming." Now approaching his 85th birthday, Knuth has become almost a legend in the world of computer programming — and he's still writing additional volumes for his massive analysis of algorithms. But every year, right around Christmas time, there's another tradition. Knuth gives a special lecture "pitched at non-specialists" for a small audience at Stanford University (where Knuth is a professor emeritus) and a larger audience online...

Hunched over a notepad (which was projected onto a screen behind him), Knuth began the 26th annual Christmas lecture by pointing out that the evening's topic had been hiding in plain sight for two decades. For the first 20 years, they'd called them the "Christmas tree" lectures, since "trees are one of the most important things to a computer scientist. And every year I learned at least two new cool things about trees..."

About five years ago they'd changed the name to just "Christmas lectures" — but the problem wasn't that trees stopped being interesting. "I still learn cool things about trees every year. But they're getting harder and harder to explain to a general audience!"

So this year's triumphant "homecoming" lecture would indeed include trees — specifically a phenomenon Knuth describes as "twintrees," along with Baxter permutations and floorplans. Knuth noted they're all topics touched on in the latest volume of The Art of Computer Programming, before jokingly reminding the audience that his book makes an excellent Christmas present.

By the end of the lecture, Knuth had written algorithms for all three mathematical concepts — then connected all three algorithms with Linux pipes to show what happens when you convert one kind of sequence into the other and then into the other.

"I get back, of course, the one I started with!"
Crime

Six Arrested After Manipulating Gas Station Pumps To Steal 30,000 Gallons of Gas (msn.com) 72

A Valero gas station sells approximately 5,000 gallons of gas a day, one employee estimates.

But local police arrested six men who, in a series of robberies, tricked the pumps out of 30,000 gallons of gasoline, reports the Mercury News, "a haul authorities estimated was worth at least $180,000." Upon further inspection of surveillance video, authorities said, police saw one of the suspects activate a gas-pump computer, allowing another suspect to pump fuel into his vehicle.... An employee from the Valero station, who declined to give their name, called the process the gas thieves used "nearly untraceable."

"You must have a deep understanding of how the pump system works," the person said. "There is a time frame anywhere from 75 seconds to two minutes for the authorization to go through the network [after sliding a credit card into a gas pump]. In this (time period), there's an opportunity to manipulate the pump ... You're able to manipulate the pump and confuse the programming to an extent that the pump starts dispensing gas...."

In a Facebook post, authorities said the three suspects had been "conspiring together in a sophisticated operation to thwart security devices and pump electronics to steal large amounts of gasoline from the business...."

Authorities say $20,000 of damage was done to gas pumps.

Thanks to Slashdot reader k6mfw for submitting the story.
Graphics

Rust-GPU Project Now Supports SPIR-V Ray-tracing (github.com) 17

For three years Stockholm-based games studio Embark has been working on the Rust-gpu project to make Rust "a first class language and ecosystem for GPU programming." The project's latest announcement? rust-gpu now supports ray-tracing.

Their original announcement explained the rationale for this years-long development effort: Historically in games GPU programming has been done through writing either HLSL, or to a lesser extent GLSL. These are simple programming languages that have evolved along with rendering APIs over the years. However, as game engines have evolved, these languages have failed to provide mechanisms for dealing with large codebases, and have generally stayed behind the curve compared to other programming languages.

In part this is because it's a niche language for a niche market, and in part this has been because the industry as a whole has sunk quite a lot of time and effort into the status quo. While overall better alternatives to both languages exist, none of them are in a place to replace HLSL or GLSL, either because they are vendor locked or because they don't support the traditional graphics pipeline. Examples of this include CUDA and OpenCL. And while attempts have been made to create languages in this space, none of them have gained any notable traction in the gamedev community.

Our hope with this project is that we push the industry forward by bringing an existing, low-level, safe, and high performance language to the GPU; namely Rust. And with it come some additional benefits that can't be overlooked: a package/module system that's one of the industry's best, built-in safety against race conditions or out-of-bounds memory access, a wide range of tools and utilities to improve programmer workflows, and many others!

Along with ray-tracing, this week they announced plans to keep rust-gpu on the same schedule as the stable Rust release, "so you can use your favorite new language features as new stable versions of Rust are being released, by just updating your rust-gpu version."

Thanks to Slashdot reader guest reader for sharing the news!
Programming

Microsoft Spooks Windows Desktop Developers By Calling WPF a 'Community Run Project' (devclass.com) 81

A Microsoft .NET Community standup has left Windows desktop developers wondering what kind of future, if any, the company has planned for its older desktop application frameworks, Windows Forms and Windows Presentation Foundation (WPF). From a report: A "what's new" slide for WPF presented by senior program manager Olia Gavrysh last week shows "Community Run Project" as the first bullet point, causing consternation among attendees. "Who's happy that WPF is now a community run project? This is soooo scary," remarked Morten Nielsen, a senior principal engineer at ESRI working on the ArcGIS runtime, for location-based analytics.

The slide was perhaps misinterpreted. It was intended as an update on what is happening with pull requests from the community, rather than meaning that WPF has been handed over to the community. Nevertheless, concerns about the future of the framework are well founded. "It's not dead. We have a team working on WPF and supporting it," said Gavrysh, but added, "we now switch to the model where we accept a lot of PRs [pull requests] from the community because we think of WPF as [a] very mature project so not that much rapid development is happening."

United States

IRS Accidentally Releases 112,000 Taxpayers' Private Data Again (bloombergtax.com) 45

Confidential data of about 112,000 taxpayers inadvertently published by the IRS over the summer was mistakenly republished in late November and remained online until early December, the IRS disclosed last week. From a report: Form 990-T data that was supposed to stay private had been taken offline but made its way back to the IRS site when a contractor uploaded an old file that still included most of the private information, a letter sent Thursday to congressional leaders said. The agency is required to make Form 990-Ts filed by nonprofit groups available online but is supposed to keep the form filed by individuals private; in both cases, the agency made that information available too.

An internal programming error caused the September release of private forms along with the ones filed by nonprofit groups, the letter said. This time, the contractor tasked with managing the database reuploaded the older file with the original data instead of a new file that filtered out the forms that needed to be kept private. The IRS shared corrected data with the contractor on Nov. 23, but the old files had not been purged from their system. A third-party researcher alerted the IRS the files were back online on Dec. 1, and the IRS ordered the contractor to take them down immediately. Roughly 104,000 of the 106,000 forms disclosed in September were redisclosed this time.
The agency is reconsidering its relationship with the contractor Accenture on this project, the report added, citing a person familiar with the matter.
Television

Streaming Services Are Ordering Fewer Series - Except for Amazon and Apple TV+ (nytimes.com) 89

"Peak TV has peaked," reports the new York Times: The never-ending supply of new programming that helped define the streaming era — spawning shows at a breakneck pace but also overwhelming viewers with too many choices — appears to finally be slowing. The number of adult scripted series ordered by TV networks and streaming companies aimed for U.S. audiences fell by 24 percent in the second half of this year, compared with the same period last year, according to Ampere Analysis, a research firm. Compared with 2019, it is a 40 percent drop. "The second half of the year has really gone off a bit of a cliff," said Fred Black, a research manager at Ampere.

It may take some time for that to become apparent to viewers — if it becomes apparent at all, given the glut. It usually takes months, and sometimes more than a year, for a TV show to premiere after a network orders it.

The drop is the result of a broader reckoning inside the entertainment industry. For years, television executives tossed off billions of dollars on TV series to help build out their streaming services and chase subscribers. The spending has been a boon to high-profile writers and producers, who captured eight- and nine-figure deals, as well as for the actors, directors and behind-the-scenes workers who kept the engine going. But Wall Street soured on the buy-at-any-cost strategy starting in the spring, when Netflix, the streaming powerhouse, announced that it had lost subscribers for the first time in a decade. Netflix's stock nose-dived, and other entertainment companies soon watched their share prices fall, too. Hollywood companies quickly shifted, putting a new emphasis on higher profits instead of raw subscriber counts.

Then, in recent months, entertainment companies became increasingly anxious about a slowing economy, the cord-cutting movement and a troublesome advertising market. Since the summer, scores of executives have abruptly been dismissed, strict cost-cutting measures have been adopted and layoffs have taken hold throughout the industry.... Netflix also cut hundreds of jobs and introduced a cheaper advertising tier, overturning the company's longtime pledge to never allow commercials on the service. Warner Bros. Discovery, a company that was formed in April, faces a debt of roughly $50 billion, and has been in severe cost-cutting mode. There have been rounds of layoffs companywide, including at HBO and HBO Max, as well as sudden cancellations. The once-popular series "Westworld" was canceled last month — a move that surprised Hollywood — and the lesser-known, raunchy dating series "FBoy Island" was cut a few weeks ago....

There are a few outliers to this year's trend: Apple TV+ and Amazon have increased the number of adult scripted series they have purchased this year. So has Disney, according to Ampere's research. (For the second half of the year, however, Disney's buying has declined compared with the same period last year.)

Programming

Stack Overflow Survey Finds 74% of Developers are 'Actively' Looking or 'Open to' a New Job (zdnet.com) 54

Stack Overflow has announced the results of its annual survey of developers. ZDNet reports: Almost three-quarters (74%) of developers are actively looking for new roles or are open to fresh opportunities, according to research.... The highest percentage of active job seekers is in the 20-24 year-old cohort (27%), with 21% for 25-34 year-olds, 17% for 35-44 year-olds, and only 12% for 45-54 year-olds.

Additionally, the percentage of younger developers actively searching for their next role increased nine points year over year, according to the survey of 2,600 developers by StackOverflow....

Some 54% of respondents to the StackOverflow survey said a better salary is the largest motivator when considering a new opportunity. The biggest factors that stop developers from looking for new jobs are flexibility (58%), salary (54%), and learning opportunities (54%). Developers also want flexibility and the option to work from home, with 46% citing starting/ending the day at a precise time or being expected to work from an office (44%) as the top drawbacks in their current roles.

"Regardless of the economy, it's clear salary is important but it's not everything," says StackOverflow CEO Prashanth Chandrasekar.

Programming

GitHub To Offer Coders Free Scanning For Leaked Keys, Tokens, and Other Secrets (techcrunch.com) 4

TechCrunch reports: Every developer knows that it's a bad idea to hardcode security credentials into source code. Yet it happens, and when it does, the consequences can be dire. Until now, GitHub made its secret scanning service available only to enterprise users who paid for GitHub Advanced Security, but starting Thursday, the Microsoft-owned company is making the service available for all public GitHub repos for free.

In 2022 alone, the company notified partners in its secret scanning partner program of more than 1.7 million potential secrets that were exposed in public repositories. The service scans repositories for over 200 known token formats and then alerts partners to potential leaks — and you can define your own regex patterns, too.... However, the rollout of the service will be gradual and it will not be available to all users until the end of January 2023.
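
Under the hood, custom patterns of this kind are ordinary regular expressions. A minimal Python sketch checking just one published format (GitHub personal access tokens begin with "ghp_"); the scanner itself is our illustration, not GitHub's code:

```python
import re

# One published token format: GitHub personal access tokens start with
# "ghp_" followed by 36 alphanumeric characters. The real service knows
# 200+ formats and accepts user-defined regexes; this checks just one.
TOKEN_RE = re.compile(r"ghp_[A-Za-z0-9]{36}")

def scan(source: str):
    """Return anything in `source` that looks like a leaked token."""
    return TOKEN_RE.findall(source)

sample = 'API_KEY = "ghp_' + "A" * 36 + '"'
print(scan(sample))  # flags the hardcoded credential
```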

TechCrunch also notes there are alternatives (including the open source GitLeaks).
Open Source

As GitHub Retires 'Atom', Open Source 'Pulsar' Continues Its Legacy (itsfoss.com) 24

In June GitHub announced they'd retire their customizable text editor Atom on December 15th — so they could focus their development efforts on the IDEs Microsoft Visual Studio Code and GitHub Codespaces. "As new cloud-based tools have emerged and evolved over the years, Atom community involvement has declined significantly," according to a post on GitHub's blog.

So while "GitHub and our community have benefited tremendously from those who have filed issues, created extensions, fixed bugs, and built new features on Atom," this now means that:

- Atom package management will stop working
- No more security updates
- Teletype will no longer work
- Deprecated redirects that supported downloading Electron symbols and headers will no longer work
- Pre-built Atom binaries can continue to be downloaded from the atom repository releases

Fortunately, in 2014 GitHub open sourced the code for Atom. And according to It's FOSS News: A community build for it is already available; however, there seems to be a new version (Pulsar) that aims to bring feature parity with the original Atom and introduce modern features and updated architecture....

They made a separate fork because the two projects have different goals: Pulsar wants to modernize everything to present a successor to Atom. Of course, the user interface is much the same. Considering Pulsar hasn't had a stable release yet, the branding can sometimes seem all over the place. However, the essentials seem to be there with the documentation, packages, and features like the ability to install packages from Git repositories....

As of now, it is too soon to say if Pulsar will become something better than what the Atom community version offers. However, it is something that we can keep an eye on.... You can head to its official download page to get the package required for your system and test it out.

Like Atom, Pulsar is cross-platform (supporting Linux, macOS, and Windows).
Facebook

John Carmack Resigns Meta VR Post, Leaves VR Industry, Criticizes Meta's 'Inefficiency' (venturebeat.com) 163

"John Carmack, the programmer who brought us Doom, Quake and Oculus/Meta virtual reality products, has resigned from his executive consultant post for virtual reality at Meta," reports VentureBeat.

"This is the end of my decade in VR," Carmack wrote in an internal post (which he later reposted on Facebook).

"I have mixed feelings." Quest 2 [Meta's VR headset] is almost exactly what I wanted to see from the beginning — mobile hardware, inside out tracking, optional PC streaming, 4k (ish) screen, cost effective. Despite all the complaints I have about our software, millions of people are still getting value out of it. We have a good product. It is successful, and successful products make the world a better place. It all could have happened a bit faster and been going better if different decisions had been made, but we built something pretty close to The Right Thing.

The issue is our efficiency.... We have a ridiculous amount of people and resources, but we constantly self-sabotage and squander effort....

It has been a struggle for me. I have a voice at the highest levels here, so it feels like I should be able to move things, but I'm evidently not persuasive enough. A good fraction of the things I complain about eventually turn my way after a year or two passes and evidence piles up, but I have never been able to kill stupid things before they cause damage, or set a direction and have a team actually stick to it. I think my influence at the margins has been positive, but it has never been a prime mover.

This was admittedly self-inflicted — I could have moved to Menlo Park after the Oculus acquisition and tried to wage battles with generations of leadership, but I was busy programming, and I assumed I would hate it, be bad at it, and probably lose anyway.

Enough complaining. I wearied of the fight and have my own startup to run, but the fight is still winnable! VR can bring value to most of the people in the world, and no company is better positioned to do it than Meta. Maybe it actually is possible to get there by just plowing ahead with current practices, but there is plenty of room for improvement.

Make better decisions and fill your products with "Give a Damn"!

Google

Google Debuts OSV-Scanner, a Go Tool For Finding Security Holes in Open Source (theregister.com) 16

Google this week released OSV-Scanner -- an open source vulnerability scanner linked to the OSV.dev database that debuted last year. From a report: Written in the Go programming language, OSV-Scanner is designed to scan open source applications to assess the security of any incorporated dependencies -- software libraries that get added to projects to provide pre-built functions so developers don't have to recreate those functions on their own. Modern applications can have a lot of dependencies. For example, researchers from Mozilla and Concordia University in Canada recently created a single-page web application with the React framework using the create-react-app command. The result was a project with seven runtime dependencies and nine development dependencies.

But each of these direct dependencies had other dependencies, known as transitive dependencies. The react package includes loose-envify as a transitive dependency -- one that itself depends on other libraries. All told, this basic single-page "Hello world" app required a total of 1,764 dependencies. As Rex Pan, a software engineer on Google's Open Source Security Team, observed on Tuesday in a blog post, vetting thousands of dependencies isn't something developers can do on their own.
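
The blow-up Pan describes is a plain graph traversal. A toy Python sketch, with a dependency table partly borrowed from the article's react/loose-envify example and partly invented:

```python
# Toy dependency table: each package lists its direct dependencies.
# The react -> loose-envify edge comes from the article; the rest
# of the names and edges are invented for illustration.
DEPS = {
    "my-app": ["react", "webpack"],
    "react": ["loose-envify"],
    "loose-envify": ["js-tokens"],
    "webpack": ["acorn"],
    "js-tokens": [],
    "acorn": [],
}

def transitive_deps(pkg):
    """Collect every package reachable from pkg's direct dependencies."""
    seen, stack = set(), list(DEPS.get(pkg, []))
    while stack:
        dep = stack.pop()
        if dep not in seen:
            seen.add(dep)
            stack.extend(DEPS.get(dep, []))
    return seen

print(sorted(transitive_deps("my-app")))
# Two direct dependencies reach five packages in total -- the same
# mechanism that takes the create-react-app project to 1,764.
```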

Privacy

FBI's Vetted Info Sharing Network 'InfraGard' Hacked (krebsonsecurity.com) 21

An anonymous reader quotes a report from KrebsOnSecurity: On Dec. 10, 2022, the relatively new cybercrime forum Breached featured a bombshell new sales thread: The user database for InfraGard, including names and contact information for tens of thousands of InfraGard members. The FBI's InfraGard program is supposed to be a vetted Who's Who of key people in private sector roles involving both cyber and physical security at companies that manage most of the nation's critical infrastructures -- including drinking water and power utilities, communications and financial services firms, transportation and manufacturing companies, healthcare providers, and nuclear energy firms. "InfraGard connects critical infrastructure owners, operators, and stakeholders with the FBI to provide education, networking, and information-sharing on security threats and risks," the FBI's InfraGard fact sheet reads.

KrebsOnSecurity contacted the seller of the InfraGard database, a Breached forum member who uses the handle "USDoD" and whose avatar is the seal of the U.S. Department of Defense. USDoD said they gained access to the FBI's InfraGard system by applying for a new account using the name, Social Security Number, date of birth and other personal details of a chief executive officer at a company that was highly likely to be granted InfraGard membership. The CEO in question -- currently the head of a major U.S. financial corporation that has a direct impact on the creditworthiness of most Americans -- did not respond to requests for comment. USDoD told KrebsOnSecurity their phony application was submitted in November in the CEO's name, and that the application included a contact email address that they controlled -- but also the CEO's real mobile phone number. "When you register they said that to be approved can take at least three months," USDoD said. "I wasn't expected to be approve[d]." But USDoD said that in early December, their email address in the name of the CEO received a reply saying the application had been approved. While the FBI's InfraGard system requires multi-factor authentication by default, users can choose between receiving a one-time code via SMS or email. "If it was only the phone I will be in [a] bad situation," USDoD said. "Because I used the person['s] phone that I'm impersonating."

USDoD said the InfraGard user data was made easily available via an Application Programming Interface (API) that is built into several key components of the website that help InfraGard members connect and communicate with each other. USDoD said after their InfraGard membership was approved, they asked a friend to code a script in Python to query that API and retrieve all available InfraGard user data. "InfraGard is a social media intelligence hub for high profile persons," USDoD said. "They even got [a] forum to discuss things." USDoD acknowledged that their $50,000 asking price for the InfraGard database may be a tad high, given that it is a fairly basic list of people who are already very security-conscious. Also, only about half of the user accounts contain an email address, and most of the other database fields -- like Social Security Number and Date of Birth -- are completely empty. [...] While the data exposed by the infiltration at InfraGard may be minimal, the user data might not have been the true end game for the intruders. USDoD said they were hoping the imposter account would last long enough for them to finish sending direct messages as the CEO to other executives using the InfraGard messaging portal.

Programming

C++ Zooms Past Java in Programming Language Popularity Contest (theregister.com) 108

"Java is no longer among the top three most popular programming languages in the TIOBE Index," reports the Register, "one of several not particularly definitive yardsticks by which such things are measured." According to Paul Jansen, CEO of Netherlands-based TIOBE Software, the rising popularity of C++ has pushed Java down a notch. The index's rankings are now:

- Python in first place
- C second
- C++ third, and
- Java fourth.

C++ stepped up to third, and Java fell to fourth. "C++ surpassed Java for the first time in the history of the TIOBE Index, which means that Java is at position 4 now," said Jansen in the December update for the TIOBE Index. "This is the first time that Java is not part of the top 3 since the beginning of the TIOBE Index in 2001."

The surge in C++, perhaps in part helped by the stable release of C++20 in December 2020, is particularly ironic in light of the language's recent dismissal by Microsoft CTO Mark Russinovich, which coincides with industry evangelism for Rust and its capacity for memory safety.

The article points out that other rankings still show a slightly higher popularity for Java. And ZDNet notes the other languages rising quickly in popularity over the last 12 months: In a year-on-year comparison in Tiobe's index, the languages now in the top 20 that made significant gains over the period are: Rust (up from 27 to 20), Objective-C (up from 29 to 19), science-specialized MATLAB (20 to 14), and Google's Go language (up from 19 to 12).
Programming

Linux 6.1 Released With Initial Support for Rust-Based Kernel Development (lwn.net) 65

"Linus has released the 6.1 kernel," reports LWN.net — and it's the one with initial support for kernel development in Rust.

Elsewhere LWN explains the specifics of this milestone: No system with a production 6.1 kernel will be running any Rust code, but this change does give kernel developers a chance to play with the language in the kernel context and get a sense for how Rust development feels....

There are other initiatives underway, including the writing of an Apple graphics driver in the Rust language. For the initial merge into the mainline kernel, though, Linus Torvalds made it clear that as little functionality as possible should be included. So those drivers and their support code were trimmed out and must wait for a future kernel release. What is there is the support needed to build a module that can be loaded into the kernel, along with a small sample module.... Torvalds asked for something that could do "hello world" and that is what we got. It is something that can be played with, but it cannot be used for any sort of real kernel programming at this point.

That situation will, hopefully, change in the near future.

Meanwhile, Linux 6.1 also includes "support for destructive BPF programs, some significant io_uring performance improvements, better user-space control over transparent huge-page creation, improved memory-tiering support."

The Register adds: Other interesting additions include more support for the made-in-China LoongArch CPU architecture, introductory work to support Wi-Fi 7 and security fixes for some flaky Wi-Fi routines in previous versions of the kernel. There's also plenty of effort to improve the performance of Linux on laptops, and enhanced power efficiency for AMD's PC-centric Ryzen silicon.
