AI

Is AI Impacting Which Programming Languages Projects Use? (github.blog) 58

"In August 2025, TypeScript surpassed both Python and JavaScript to become the most-used language on GitHub for the first time ever..." writes GitHub's senior developer advocate.

They point to this as proof that "AI isn't just speeding up coding. It's reshaping which languages, frameworks, and tools developers choose in the first place." Eighty percent of new developers on GitHub use Copilot within their first week. Those early exposures reset the baseline for what "easy" means. When AI handles boilerplate and error-prone syntax, the penalty for choosing powerful but complex languages disappears. Developers stop avoiding tools with high overhead and start picking based on utility instead.

The language adoption data shows this behavioral shift:

— TypeScript grew 66% year-over-year
— JavaScript grew 24%
— Shell scripting usage in AI-generated projects jumped 206%

That last one matters. We didn't suddenly love Bash. AI absorbed the friction that made shell scripting painful. So now we use the right tool for the job without the usual cost.

"When a task or process goes smoothly, your brain remembers," they point out. "Convenience captures attention. Reduced friction becomes a preference — and preferences at scale can shift ecosystems." And they offer these suggestions...
  • "AI performs better with strongly typed languages. Strongly typed languages give AI much clearer constraints..."
  • "Standardize before you scale. Document patterns. Publish template repositories. Make your architectural decisions explicit. AI tools will mirror whatever structures they see."
  • "Test AI-generated code harder, not less."
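That last suggestion is concrete enough to sketch. Below is a hypothetical AI-generated helper (the function and its behavior are invented for illustration) together with the kind of edge-case assertions that "testing harder" implies, covering cases a quick review tends to skip:

```python
import re

def slugify(text: str) -> str:
    """Hypothetical AI-generated helper: turn a title into a URL slug."""
    # Lowercase, collapse runs of non-alphanumerics into one hyphen,
    # then strip stray hyphens from both ends.
    slug = re.sub(r"[^a-z0-9]+", "-", text.lower())
    return slug.strip("-")

# "Test harder, not less": the edge cases, not just the happy path.
assert slugify("Hello, World!") == "hello-world"
assert slugify("  --  ") == ""            # nothing usable survives
assert slugify("A+B=C") == "a-b-c"        # symbol runs collapse to one hyphen
assert slugify("already-a-slug") == "already-a-slug"
```

The happy-path case alone would pass with many buggy implementations; it is the empty-result and symbol-run cases that distinguish working code from plausible-looking code.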

Open Source

'Open Source Registries Don't Have Enough Money To Implement Basic Security' (theregister.com) 24

Google and Microsoft contributed $5 million to launch Alpha-Omega in 2022 — a Linux Foundation project to help secure the open source supply chain. But its co-founder Michael Winser warns that open source registries are in financial peril, reports The Register, since they're still relying on non-continuous funding from grants and donations.

And it's not just because bandwidth is expensive, he said at this year's FOSDEM. "The problem is they don't have enough money to spend on the very security features that we all desperately need..." In a follow-up LinkedIn exchange after this article was posted, Winser estimated it could cost $5 million to $8 million a year to run a major registry the size of Crates.io, which gets about 125 billion downloads a year. And that figure wouldn't include any substantial bandwidth and infrastructure donations (like Fastly's for Crates.io). Adding to that bill is the growing cost of identifying malware, the proliferation of which has been amplified through the use of AI and scripts. These repositories have detected 845,000 malware packages from 2019 to January 2025 (the vast majority of those nasty packages came to npm)...

In some cases benevolent parties can cover [bandwidth] bills: Python's PyPI registry bandwidth needs for shipping copies of its 700,000+ packages (amounting to 747PB annually at a sustained rate of 189 Gbps) are underwritten by Fastly, for instance. Otherwise, the project would have to pony up about $1.8 million a month. Yet the costs Winser was most concerned about are not bandwidth or hosting; they are the security features needed to ensure the integrity of containers and packages. Alpha-Omega underwrites a "distressingly" large amount of security work around registries, he said. It's distressing because if Alpha-Omega itself were to miss a funding round, a lot of registries would be screwed. Alpha-Omega's recipients include the Python Software Foundation, Rust Foundation, Eclipse Foundation, OpenJS Foundation for Node.js and jQuery, and Ruby Central.

Donations and memberships certainly help defray costs. Volunteers do a lot of what otherwise would be very expensive work. And there are grants about...Winser did not offer a solution, though he suggested the key is to convince the corporate bean counters to consider paid registries as "a normal cost of doing business and have it show up in their opex as opposed to their [open source program office] donation budget."

The dilemma was summed up succinctly by the anonymous Slashdot reader who submitted this story.

"Free beer is great. Securing the keg costs money!"

Robotics

Man Accidentally Gains Control of 7,000 Robot Vacuums (popsci.com) 51

A software engineer tried steering his robot vacuum with a videogame controller, reports Popular Science — but ended up with "a sneak peek into thousands of people's homes." While building his own remote-control app, Sammy Azdoufal reportedly used an AI coding assistant to help reverse-engineer how the robot communicated with DJI's remote cloud servers. But he soon discovered that the same credentials that allowed him to see and control his own device also provided access to live camera feeds, microphone audio, maps, and status data from nearly 7,000 other vacuums across 24 countries.

The backend security bug effectively exposed an army of internet-connected robots that, in the wrong hands, could have turned into surveillance tools, all without their owners ever knowing. Luckily, Azdoufal chose not to exploit that. Instead, he shared his findings with The Verge, which quickly contacted DJI to report the flaw... He also claims he could compile 2D floor plans of the homes the robots were operating in. A quick look at the robots' IP addresses also revealed their approximate locations.
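The report describes credentials valid for one device working against every other device, which is the classic shape of a broken-authorization (often called IDOR) bug. A minimal sketch of that pattern, with all names and data invented for illustration:

```python
# Minimal sketch of the reported class of bug (all names and data are
# invented): the backend authenticates the caller but never checks that
# the requested device actually belongs to that caller.

DEVICES = {
    "vac-001": {"owner": "alice", "feed": "alice-living-room"},
    "vac-002": {"owner": "bob", "feed": "bob-kitchen"},
}
SESSIONS = {"token-alice": "alice"}  # token -> authenticated user

def get_feed_broken(token: str, device_id: str) -> str:
    SESSIONS[token]                    # authentication only: who is calling?
    return DEVICES[device_id]["feed"]  # no check that it is *their* device

def get_feed_fixed(token: str, device_id: str) -> str:
    user = SESSIONS[token]
    device = DEVICES[device_id]
    if device["owner"] != user:        # authorization: ownership check
        raise PermissionError("not your device")
    return device["feed"]

# A perfectly valid credential for Alice reaches Bob's feed:
assert get_feed_broken("token-alice", "vac-002") == "bob-kitchen"
```

The fix is a single ownership check per request; the damage from omitting it scales with the fleet, which is how one hobbyist's token became a window into 7,000 homes.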

DJI told Popular Science the issue was addressed "through two updates, with an initial patch deployed on February 8 and a follow-up update completed on February 10."

United States

F-35 Software Could Be Jailbroken Like an iPhone: Dutch Defense Minister (twz.com) 87

Lockheed Martin's F-35 combat aircraft is a supersonic stealth "strike fighter." But this week the military news site TWZ reports that the fighter's "computer brain," including "its cloud-based components, could be cracked to accept third-party software updates, just like 'jailbreaking' a cellphone, according to the Dutch State Secretary for Defense."

TWZ notes that the Dutch defense secretary made the remarks during an episode of BNR Nieuwsradio's "Boekestijn en de Wijk" podcast, according to a machine translation: Gijs Tuinman, who has been State Secretary for Defense in the Netherlands since 2024, does not appear to have offered any further details about what the jailbreaking process might entail. What, if any, cyber vulnerabilities this might indicate is also unclear. It is possible that he may have been speaking more notionally or figuratively about action that could be taken in the future, if necessary...

The ALIS/ODIN network is designed to handle much more than just software updates and logistical data. It is also the port used to upload mission data packages containing highly sensitive planning information, including details about enemy air defenses and other intelligence, onto F-35s before missions and to download intelligence and other data after a sortie. To date, Israel is the only country known to have successfully negotiated a deal giving it the right to install domestically-developed software onto its F-35Is, as well as otherwise operate its jets outside of the ALIS/ODIN network.

The comments "underscore larger issues surrounding the F-35 program, especially for foreign operators," the article points out. But at the same time, F-35s depend on a sophisticated mission-planning data package. "So while jailbreaking F-35's onboard computers, as well as other aspects of the ALIS/ODIN network, may technically be feasible, there are immediate questions about the ability to independently recreate the critical mission planning and other support it provides. This is also just one aspect of what is necessary to keep the jets flying, let alone operationally relevant."

"TWZ previously explored many of these same issues in detail last year, amid a flurry of reports about the possibility that F-35s have some type of discreet 'kill switch' built in that U.S. authorities could use to remotely disable the jets. Rumors of this capability are not new and remain completely unsubstantiated." At that time, we stressed that a 'kill switch' would not even be necessary to hobble F-35s in foreign service. At present, the jets are heavily dependent on U.S.-centric maintenance and logistics chains that are subject to American export controls and agreements with manufacturer Lockheed Martin. Just reliably sourcing spare parts has been a huge challenge for the U.S. military itself... F-35s would be quickly grounded without this sustainment support. [A cutoff in spare parts and support "would leave jailbroken jets quickly bricked on the ground," the article notes later.] Altogether, any kind of jailbreaking of the F-35's systems would come with a serious risk of legal action by Lockheed Martin and additional friction with the U.S. government.
Thanks to long-time Slashdot reader Koreantoast for sharing the article.

Programming

Has the AI Disruption Arrived - and Will It Just Make Software Cheaper and More Accessible? (aboard.com) 88

Programmer/entrepreneur Paul Ford is the co-founder of AI-driven business software platform Aboard. This week he wrote a guest essay for the New York Times titled "The AI Disruption Has Arrived, and It Sure Is Fun," arguing that Anthropic's Claude Code "was always a helpful coding assistant, but in November it suddenly got much better, and ever since I've been knocking off side projects that had sat in folders for a decade or longer... [W]hen the stars align and my prompts work out, I can do hundreds of thousands of dollars worth of work for fun (fun for me) over weekends and evenings, for the price of the $200-a-month Claude plan."

He elaborates on his point on the Aboard.com blog: I'm deeply convinced that it's possible to accelerate software development with AI coding — not deprofessionalize it entirely, or simplify it so that everything is prompts, but make it into a more accessible craft. Things which not long ago cost hundreds of thousands of dollars to pull off might come for hundreds of dollars, and be doable by you, or your cousin. This is a remarkable accelerant, dumped into the public square at a bad moment, with no guidance or manual — and the reaction of many people who could gain the most power from these tools is rejection and anxiety. But as I wrote....

I believe there are millions, maybe billions, of software products that don't exist but should: Dashboards, reports, apps, project trackers and countless others. People want these things to do their jobs, or to help others, but they can't find the budget. They make do with spreadsheets and to-do lists.

I don't expect to change any minds; that's not how minds work. I just wanted to make sure that I used the platform offered by the Times to say, in as cheerful a way as possible: Hey, this new power is real, and it should be in as many hands as possible. I believe everyone should have good software, and that it's more possible now than it was a few years ago.

From his guest essay: Is the software I'm making for myself on my phone as good as handcrafted, bespoke code? No. But it's immediate and cheap. And the quantities, measured in lines of text, are large. It might fail a company's quality test, but it would meet every deadline. That is what makes A.I. coding such a shock to the system... What if software suddenly wanted to ship? What if all of that immense bureaucracy, the endless processes, the mind-boggling range of costs that you need to make the computer compute, just goes?

That doesn't mean that the software will be good. But most software today is not good. It simply means that products could go to market very quickly. And for lots of users, that's going to be fine. People don't judge A.I. code the same way they judge slop articles or glazed videos. They're not looking for the human connection of art. They're looking to achieve a goal. Code just has to work... In about six months you could do a lot of things that took me 20 years to learn. I'm writing all kinds of code I never could before — but you can, too. If we can't stop the freight train, we can at least hop on for a ride.

The simple truth is that I am less valuable than I used to be. It stings to be made obsolete, but it's fun to code on the train, too. And if this technology keeps improving, then all of the people who tell me how hard it is to make a report, place an order, upgrade an app or update a record — they could get the software they deserve, too. That might be a good trade, long term.

United Kingdom

After 16 Years, 'Interim' CTO Finally Eradicating Fujitsu and Horizon From the UK's Post Office (computerweekly.com) 38

Besides running tech operations at the UK's Post Office, its interim CTO is also removing and replacing Fujitsu's Horizon system, which Computer Weekly describes as "the error-ridden software that a public inquiry linked to 13 people taking their own lives."

After over 16 years of covering the scandal they'd first discovered back in 2009, Computer Weekly now talks to CTO Paul Anastassi about his plans to finally remove every trace of the Horizon system that's been in use at Post Office branches for over 30 years — before the year 2030: "There are more than 80 components that make up the Horizon platform, and only half of those are managed by Fujitsu," said Anastassi. "The other components are internal and often with other third parties as well," he added... The plan is to introduce a modern front end that is device agnostic. "We want to get away from [the need] to have a certain device on a certain terminal in your branch. We want to provide flexibility around that...."

Anastassi is not the first person to be given the task of terminating Horizon and ending Fujitsu's contract. In 2015, the Post Office began a project to replace Fujitsu and Horizon with IBM and its technology, but after things got complex, Post Office directors went crawling back to Fujitsu. Then, after Horizon was proved in the High Court to be at fault for the account shortfalls that subpostmasters were blamed and punished for, the Post Office knew it had to change the system. This culminated in the New Branch IT (NBIT) project, but this ran into trouble and was eventually axed. This was before Anastassi's time, and before that of its new top team of executives....

Things are finally moving at pace, and by the summer of this year, two separate contracts will be signed with suppliers, signalling the beginning of the final act for Fujitsu and its Horizon system.

Anastassi has 30 years of IT management experience, the article points out, and he estimates the project will even bring "a considerable cost saving over what we currently pay for Fujitsu."

X

T2 Linux Restores XAA In Xorg, Making 2D Graphics Fast Again (t2linux.com) 55

Berlin-based T2 Linux developer René Rebe (long-time Slashdot reader ReneR) is announcing that the distribution's Xorg display server has restored the XAA acceleration architecture, "bringing fixed-function hardware 2D acceleration back to many older graphics cards that upstream left in software-rendered mode." Older fixed-function GPUs now regain smooth window movement, low CPU usage, and proper 24-bpp framebuffer support (also restored in T2). Tested hardware includes ATi Mach-64 and Rage-128, SiS, Trident, Cirrus, Matrox (Millennium/G450), Permedia2, Tseng ET6000 and even the Sun Creator/Elite 3D.

The result: vintage and retro systems and classic high-end Unix workstations that are fast and responsive again.
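For readers who want to try it: acceleration methods in Xorg are typically selected per device in xorg.conf. The exact option name and accepted values are driver-dependent, so treat the fragment below as a sketch and check your driver's man page (the driver and identifier here are examples):

```
Section "Device"
    Identifier "Card0"
    Driver     "mach64"             # e.g. the ATi Mach64 driver
    Option     "AccelMethod" "XAA"  # driver-dependent; some drivers also offer EXA
EndSection
```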

Python

How Python's Security Response Team Keeps Python Users Safe (blogspot.com) 5

This week the Python Software Foundation explained how they keep Python secure. A new blog post recognizes the volunteers and paid Python Software Foundation staff on the Python Security Response Team (PSRT), who "triage and coordinate vulnerability reports and remediations, keeping all Python users safe." Just last year the PSRT published 16 vulnerability advisories for CPython and pip, the most in a single year to date! The PSRT usually can't do this work alone; coordinators are encouraged to involve maintainers and experts on the affected projects and submodules. Involving the experts directly in the remediation process ensures fixes adhere to existing API conventions and threat models, are maintainable long-term, and have minimal impact on existing use cases. Sometimes the PSRT even coordinates with other open source projects to avoid catching the Python ecosystem off-guard by publishing a vulnerability advisory that affects multiple other projects. The most recent example of this is PyPI's ZIP archive differential attack mitigation.

This work deserves recognition and celebration just like contributions to source code and documentation. [Security Developer-in-Residence Seth Larson and PSF Infrastructure Engineer Jacob Coffee] are developing further improvements to workflows involving "GitHub Security Advisories" to record the reporter, coordinator, and remediation developers and reviewers in CVE and OSV records, to properly thank everyone involved in the otherwise-private contribution to open source projects.
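Advisories like these end up as machine-readable OSV records. Below is a trimmed, OSV-style record (the id, package name, and version numbers are invented for illustration) and a sketch of the half-open "introduced/fixed" range check that OSV tooling performs:

```python
# Illustrative OSV-style record; the id, package, and versions are made up.
# Real records follow the OSV schema (id, summary, affected, ranges, events).
advisory = {
    "id": "EXAMPLE-2025-0001",
    "summary": "Hypothetical vulnerability, for illustration only",
    "affected": [{
        "package": {"ecosystem": "PyPI", "name": "example-pkg"},
        "ranges": [{
            "type": "ECOSYSTEM",
            "events": [{"introduced": "3.0.0"}, {"fixed": "3.2.1"}],
        }],
    }],
}

def parse(v: str) -> tuple:
    # Naive dotted-version parser; real tooling uses ecosystem-aware rules.
    return tuple(int(x) for x in v.split("."))

def is_affected(record: dict, name: str, version: str) -> bool:
    """True if (name, version) falls inside any affected half-open range."""
    for aff in record["affected"]:
        if aff["package"]["name"] != name:
            continue
        for rng in aff["ranges"]:
            introduced = fixed = None
            for ev in rng["events"]:
                introduced = ev.get("introduced", introduced)
                fixed = ev.get("fixed", fixed)
            if introduced is None:
                continue
            if parse(version) >= parse(introduced) and (
                fixed is None or parse(version) < parse(fixed)
            ):
                return True
    return False
```

So `is_affected(advisory, "example-pkg", "3.1.0")` reports affected, while the fixed version `3.2.1` itself does not, because the range is half-open.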

The Internet

Fury Over Discord's Age Checks Explodes After Shady Persona Test In UK (arstechnica.com) 62

Backlash intensified against Discord's age verification rollout after it briefly disclosed a UK age-verification test involving vendor Persona, contradicting earlier claims about minimal ID storage and transparency. Ars Technica explains: One of the major complaints was that Discord planned to collect more government IDs as part of its global age verification process. It shocked many that Discord would be so bold so soon after a third-party breach of a former age check partner's services exposed 70,000 Discord users' government IDs.

Attempting to reassure users, Discord claimed that most users wouldn't have to show ID, instead relying on video selfies using AI to estimate ages, which raised separate privacy concerns. In the future, perhaps behavioral signals would override the need for age checks for most users, Discord suggested, seemingly downplaying the risk that sensitive data would be improperly stored. Discord didn't hide that it planned to continue requesting IDs for any user appealing an incorrect age assessment, and users weren't happy, since that is exactly how the prior breach happened. Responding to critics, Discord claimed that the majority of ID data was promptly deleted. Specifically, Savannah Badalich, Discord's global head of product policy, told The Verge that IDs shared during appeals "are deleted quickly -- in most cases, immediately after age confirmation."

It's unsurprising then that backlash exploded after Discord posted, and then weirdly deleted, a disclaimer on an FAQ about Discord's age assurance policies that contradicted Discord's hyped short timeline for storing IDs. An archived version of the page shows the note shared this warning: "Important: If you're located in the UK, you may be part of an experiment where your information will be processed by an age-assurance vendor, Persona. The information you submit will be temporarily stored for up to 7 days, then deleted. For ID document verification, all details are blurred except your photo and date of birth, so only what's truly needed for age verification is used."
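The blurring claim amounts to data minimization: keep only the fields age verification actually needs and blur everything else before storage. A toy sketch of that policy (all field names hypothetical, not Persona's actual processing):

```python
# Toy sketch of the stated policy (field names hypothetical): everything
# except the photo and date of birth is blurred before storage.
KEEP = {"photo", "date_of_birth"}

def minimize(id_record: dict) -> dict:
    return {k: (v if k in KEEP else "<blurred>") for k, v in id_record.items()}

record = {
    "name": "J. Doe",
    "document_number": "X123",
    "photo": "<jpeg bytes>",
    "date_of_birth": "2001-04-02",
}
assert minimize(record)["name"] == "<blurred>"
assert minimize(record)["date_of_birth"] == "2001-04-02"
```

The retention dispute is orthogonal: even a minimized record stored "for up to 7 days" is a larger exposure window than the "deleted immediately" timeline Discord had described.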

Critics felt that Discord was obscuring not just how long IDs may be stored, but also the entities collecting information. Discord did not provide details on what the experiment was testing or how many users were affected, and Persona was not listed as a partner on its platform. Asked for comment, Discord told Ars that only a small number of users was included in the experiment, which ran for less than one month. That test has since concluded, Discord confirmed, and Persona is no longer an active vendor partnering with Discord. Moving forward, Discord promised to "keep our users informed as vendors are added or updated." While Discord seeks to distance itself from Persona, Rick Song, Persona's CEO [...] told Ars that all the data of verified individuals involved in Discord's test has been deleted.
Ars also notes that hackers "quickly exposed a 'workaround' to avoid Persona's age checks on Discord" and "found a Persona frontend exposed to the open internet on a U.S. government authorized server."

The Rage, an independent publication that covers financial surveillance, reported: "In 2,456 publicly accessible files, the code revealed the extensive surveillance Persona software performs on its users, bundled in an interface that pairs facial recognition with financial reporting -- and a parallel implementation that appears designed to serve federal agencies." While Persona does not have any government contracts, the exposed service "appears to be powered by an OpenAI chatbot," The Rage noted.

Hackers warned "that OpenAI may have created an internal database for Persona identity checks that spans all OpenAI users via its internal watchlistdb," seemingly exploiting the "opportunity to go from comparing users against a single federal watchlist, to creating the watchlist of all users themselves."

Security

Cyber Stocks Slide As Anthropic Unveils 'Claude Code Security' (bloomberg.com) 29

An anonymous reader quotes a report from Bloomberg: Shares of cybersecurity software companies tumbled Friday after Anthropic PBC introduced a new security feature into its Claude AI model. CrowdStrike Holdings was among the biggest decliners, falling as much as 6.5%, while Cloudflare slumped more than 6%. Meanwhile, Zscaler dropped 3.5%, SailPoint shed 6.8%, and Okta declined 5.7%. The Global X Cybersecurity ETF fell as much as 3.8%, extending its losses on the year to 14%.

Anthropic said the new tool "scans codebases for security vulnerabilities and suggests targeted software patches for human review." The firm said the update is available in a limited research preview for now.

XBox (Games)

Phil Spencer Retiring After 38 Years At Microsoft (ign.com) 23

Xbox chief and Microsoft Gaming CEO Phil Spencer is leaving Microsoft after nearly 40 years at the company. Meanwhile, Xbox President Sarah Bond, "long thought by many both inside and outside of Microsoft to be Spencer's heir apparent, has resigned," reports IGN. From the report: The new CEO of Microsoft Gaming will be Asha Sharma, currently the President of Microsoft's CoreAI product. Finally, Xbox Game Studios head Matt Booty is being promoted to Chief Content Officer and will work closely with Sharma. "I want to thank Phil for his extraordinary leadership and partnership," Microsoft CEO Satya Nadella said in an email sent to Microsoft staff. "Over 38 years at Microsoft, including 12 years leading Gaming, Phil helped transform what we do and how we do it." [...]

Spencer was named Head of Xbox in March of 2014, when he was tasked with righting a ship that had made a number of product choices and policy decisions that rubbed core gamers the wrong way in the run-up to the launch of the Xbox One in Fall 2013. Long hailed by gamers as being one of their own, Spencer could frequently be found on Xbox Live, playing games regularly with fellow Xbox gamers and racking up a healthy Gamerscore. His first major move when put in charge was decoupling the Kinect 2.0 peripheral from the Xbox One package, thus immediately reducing the new console's price by $100 to $399, matching the day-one price of Sony's PlayStation 4. He spearheaded the much-heralded backwards compatibility movement within Xbox, the Xbox Game Pass service was born under his watch, and accessibility made major advances during his tenure in both hardware and software. Xbox Play Anywhere, which sought to let gamers play their Xbox games on any device, be it a PC, console, or handheld, isn't new but has been a big recent focal point.

Spencer's time running Xbox will perhaps be most remembered for Microsoft's $69 billion acquisition of Activision-Blizzard-King in 2022, which took almost two years to achieve regulatory approval from various agencies around the world. But Spencer began trying to solve for Xbox's dearth of first-party games in 2018, when the first wave of studio acquisitions occurred. Prior to the Activision deal, Spencer's biggest move came with the $7.5 billion acquisition of ZeniMax, parent company of Bethesda, in 2020. The deal gave Xbox total ownership of Bethesda Game Studios and its Fallout and Elder Scrolls franchises along with id Software and its Doom and Quake IPs, among many others. Questions arose from there about whether or not that meant all of Xbox's new studios would produce games exclusively for Xbox consoles, and while some games were kept off of PlayStation platforms temporarily, many weren't and most now seem to come to PS5 eventually, if not on day one.

Facebook

Several Meta Employees Have Started Calling Themselves 'AI Builders' (businessinsider.com) 16

An anonymous reader shares a report: Meta product managers are rebranding. Some are now calling themselves "AI builders," a signal that AI coding tools are changing who gets to build software inside the company. One of them, Jeremie Guedj, announced the change in a LinkedIn post last week. "I still can't believe I'm writing this: as of today, my full-time job at Meta is AI Builder," he wrote.

Guedj has spent more than a decade as a traditional product manager, a role that sets the road map and strategy for products then built by engineering teams. He said that while his title in Meta's internal systems still lists him as a product manager, his actual work is now full-time building with AI on what he calls an "AI-native team." Another Meta product manager also lists "AI Builder" on her LinkedIn profile, while at least two other Meta engineers write the term in their bios, Business Insider found.

Businesses

PayPal Discloses Data Breach That Exposed User Info For 6 Months (bleepingcomputer.com) 7

PayPal is notifying customers of a data breach after a software error in a loan application exposed their sensitive personal information, including Social Security numbers, for nearly 6 months last year. From a report: The incident affected the PayPal Working Capital (PPWC) loan app, which provides small businesses with quick access to financing. PayPal discovered the breach on December 12, 2025, and determined that customers' names, email addresses, phone numbers, business addresses, Social Security numbers, and dates of birth had been exposed since July 1, 2025.

The financial technology company said it has reversed the code change that caused the incident, blocking attackers' access to the data one day after discovering the breach. "On December 12, 2025, PayPal identified that due to an error in its PayPal Working Capital ('PPWC') loan application, the PII of a small number of customers was exposed to unauthorized individuals during the timeframe of July 1, 2025 to December 13, 2025," PayPal said in breach notification letters sent to affected users. "PayPal has since rolled back the code change responsible for this error, which potentially exposed the PII. We have not delayed this notification as a result of any law enforcement investigation."

AI

HSBC To Investors: If India Couldn't Build an Enterprise Software Challenger, Neither Can AI (x.com) 54

India's IT services giants have spent decades deploying, customizing, and maintaining the world's largest enterprise software platforms, putting hundreds of thousands of engineers in daily contact with the business logic and proprietary architectures of vendors like SAP and Oracle. None of them have built a competing product that gained meaningful traction against the U.S. incumbents, HSBC said in a note to clients, using this history to argue AI-generated code faces the same structural barriers.

The bank's analysts contend that enterprise software competition turns on factors that have little to do with the ability to write code -- sales teams, cross-licensing agreements, patented IP, first-mover lock-in, brand awareness, and go-to-market infrastructure. If a massive, low-cost, domain-expert workforce couldn't crack the market over several decades, HSBC argues, the idea that AI-generated code will do so is, in the words of Nvidia's Jensen Huang, whom the report approvingly cites, "illogical."

Security

How Private Equity Debt Left a Leading VPN Open To Chinese Hackers (financialpost.com) 26

An anonymous reader quotes a report from Bloomberg: In early 2024, the agency that oversees cybersecurity for much of the US government issued a rare emergency order -- disconnect your Connect Secure virtual private network software immediately. Chinese spies had hacked the code and infiltrated nearly two dozen organizations. The directive applied to all civilian federal agencies, but given the product's customer base, its impact was more widely felt. The software, which is made by Ivanti Inc., was something of an industry standard across government and much of the corporate world. Clients included the US Air Force, Army, Navy and other parts of the Defense Department, the Department of State, the Federal Aviation Administration, the Federal Reserve, the National Aeronautics and Space Administration, thousands of companies and more than 2,000 banks including Wells Fargo & Co. and Deutsche Bank AG, according to federal procurement records, internal documents, interviews and the accounts of former Ivanti employees who requested anonymity because they were not authorized to disclose customer information.

Soon after sending out their order, which instructed agencies to install an Ivanti-issued fix, staffers at the Cybersecurity and Infrastructure Security Agency discovered that the threat was also inside their own house. Two sensitive CISA databases -- one containing information about personnel at chemical facilities, another assessing the vulnerabilities of critical infrastructure operators -- had been compromised via the agency's own Connect Secure software. CISA had followed all its own guidance. Ivanti's fix had failed. This was a breaking point for some American national security officials, who had long expressed concerns about Connect Secure VPNs. CISA subsequently published a letter with the Federal Bureau of Investigation and the national cybersecurity agencies of the UK, Canada, Australia and New Zealand warning customers of the "significant risk" associated with continuing to use the software. According to Laura Galante, then the top cyber official in the Office of the Director of National Intelligence, the government came to a simple conclusion about the technology. "You should not be using it," she said. "There really is no other way to put it."

That attack, along with several others that successfully targeted the Ivanti software, illustrate how private equity's push into the cybersecurity market ended up compromising the quality and safety of some critical VPN products, Bloomberg has found. Last year, Bloomberg reported that Citrix Systems Inc., another top VPN maker, experienced several major hacks after its private equity owners, Elliott Investment Management and Vista Equity Partners, cut most of the company's 70-member product security team following their acquisition of the company in 2022. Some government officials and private-sector executives are now reconsidering their approach to evaluating cybersecurity software. In addition to excising private equity-owned VPNs from their networks, some factor private equity ownership into their risk assessments of key technologies.

Printer

California's New Bill Requires DOJ-Approved 3D Printers That Report on Themselves (adafruit.com) 123

California's recently-proposed AB-2047 would require 3D printers sold in the state to be DOJ-approved models equipped with "firearm blocking technology," banning non-certified machines after 2029 and criminalizing efforts to bypass the software. Adafruit notes that unlike similar legislation proposed in Washington State and New York, California's version "adds a certification bureaucracy on top: state-approved algorithms, state-approved software control processes, state-approved printer models, quarterly list updates, and civil penalties up to $25,000 per violation." From the report: Assembly Member Bauer-Kahan introduced AB-2047, the "California Firearm Printing Prevention Act," on February 17th. The bill would ban the sale or transfer of any 3D printer in California unless it appears on a state-maintained roster of approved makes and models... certified by the Department of Justice as equipped with "firearm blocking technology." Manufacturers would need to submit attestations for every make and model. The DOJ would publish a list. If your printer isn't on the list by March 1, 2029, it can't be sold. In addition, knowingly disabling or circumventing the blocking software is a misdemeanor.

[...] As Michael Weinberg wrote after the New York and Washington proposals dropped: accurately identifying gun parts from geometry alone is incredibly hard, desktop printers lack the processing power to run this kind of analysis, and the open-source firmware that runs most machines makes any blocking requirement trivially easy to bypass. The Firearms Policy Coalition flagged AB-2047 on X, and the reactions tell you everything. Jon Lareau called it "stupidity on steroids," pointing out that a simple spring-shaped part has no way of revealing its intended use. The Foundry put it plainly: "Regulating general-purpose machines is another. AB-2047 would require 3D printers to run state-approved surveillance software and criminalize modifying your own hardware."

Security

OpenClaw Security Fears Lead Meta, Other AI Firms To Restrict Its Use (wired.com) 7

An anonymous reader quotes a report from Wired: Last month, Jason Grad issued a late-night warning to the 20 employees at his tech startup. "You've likely seen Clawdbot trending on X/LinkedIn. While cool, it is currently unvetted and high-risk for our environment," he wrote in a Slack message with a red siren emoji. "Please keep Clawdbot off all company hardware and away from work-linked accounts." Grad isn't the only tech executive who has raised concerns to staff about the experimental agentic AI tool, which was briefly known as MoltBot and is now named OpenClaw. A Meta executive says he recently told his team to keep OpenClaw off their regular work laptops or risk losing their jobs. The executive told reporters he believes the software is unpredictable and could lead to a privacy breach if used in otherwise secure environments. He spoke on the condition of anonymity so he could comment frankly.

[...] Some cybersecurity professionals have publicly urged companies to take measures to strictly control how their workforces use OpenClaw. And the recent bans show how companies are moving quickly to ensure security is prioritized ahead of their desire to experiment with emerging AI technologies. "Our policy is, 'mitigate first, investigate second' when we come across anything that could be harmful to our company, users, or clients," says Grad, who is cofounder and CEO of Massive, which provides Internet proxy tools to millions of users and businesses. His warning to staff went out on January 26, before any of his employees had installed OpenClaw, he says. At another tech company, Valere, which works on software for organizations including Johns Hopkins University, an employee posted about OpenClaw on January 29 on an internal Slack channel for sharing new tech to potentially try out. The company's president quickly responded that use of OpenClaw was strictly banned, Valere CEO Guy Pistone tells WIRED. "If it got access to one of our developer's machines, it could get access to our cloud services and our clients' sensitive information, including credit card information and GitHub codebases," Pistone says. "It's pretty good at cleaning up some of its actions, which also scares me."

A week later, Pistone did allow Valere's research team to run OpenClaw on an employee's old computer. The goal was to identify flaws in the software and potential fixes to make it more secure. The research team later advised limiting who can give orders to OpenClaw and exposing it to the Internet only with a password in place for its control panel to prevent unwanted access. In a report shared with WIRED, the Valere researchers added that users have to "accept that the bot can be tricked." For instance, if OpenClaw is set up to summarize a user's email, a hacker could send a malicious email to the person instructing the AI to share copies of files on the person's computer. But Pistone is confident that safeguards can be put in place to make OpenClaw more secure. He has given a team at Valere 60 days to investigate. "If we don't think we can do it in a reasonable time, we'll forgo it," he says. "Whoever figures out how to make it secure for businesses is definitely going to have a winner."

Transportation

Europe's Labor Laws Are Strangling Its Ability To Innovate, New Analysis Argues (worksinprogress.co) 98

A new essay in Works in Progress Magazine argues that Europe's failure to produce a Tesla or a Waymo stems not from insufficient research spending or high taxes -- problems California shares in abundance -- but from labor laws that make it devastatingly expensive for companies to unwind failed bets. According to estimates the essay cites, corporate restructuring costs the equivalent of 31 months of salary per employee in Germany, 38 in France, and 62 in Spain, compared to seven in the United States.

The downstream effects are visible across Europe's flagship industries. When Audi closed its Brussels factory after cancelling the E-Tron SUV in 2024, severance ran to $718 million -- over $235,000 per employee and more than the cost of writing off the plant's physical assets. Volkswagen spent $50 billion on its electric vehicle lineup, failed to develop competitive software internally, and ultimately paid up to $5 billion for access to American startup Rivian's technology.

Between 2012 and 2016, 79% of all startup acquisitions tracked by Crunchbase took place in the US. The essay points to Denmark, Austria and Switzerland as countries that have found a middle path -- generous unemployment insurance and portable severance accounts that protect workers without penalizing employers for taking risks.

Businesses

Study of 12,000 EU Firms Finds AI's Productivity Gains Are Real (cepr.org) 61

A study of more than 12,000 European firms found that AI adoption causally increases labour productivity by 4% on average across the EU, and that it does so without reducing employment in the short run.

Researchers from the Bank for International Settlements and the European Investment Bank used an instrumental variable strategy that matched EU firms to comparable US firms by sector, size, investment intensity and other characteristics, then used the AI adoption rates of those US counterparts as a proxy for exogenous AI exposure among European firms.
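The study's actual data and code aren't reproduced here, but the two-stage least squares logic behind such an instrumental-variable design can be sketched on synthetic data. Everything below is hypothetical and illustrative: `us_peer_ai` stands in for the matched US firm's AI adoption rate (the instrument), `eu_ai` for the EU firm's own adoption (endogenous, because an unobserved confounder such as management quality drives both adoption and productivity), and the true causal effect is set to the study's headline 4%.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50_000

# Hypothetical data-generating process (not the study's actual data):
confounder = rng.normal(size=n)            # unobserved, e.g. management quality
us_peer_ai = rng.uniform(0, 1, size=n)     # instrument: assumed exogenous
eu_ai = 0.8 * us_peer_ai + 0.3 * confounder + rng.normal(scale=0.2, size=n)
productivity = 0.04 * eu_ai + 0.5 * confounder + rng.normal(scale=0.1, size=n)

def iv_2sls(y, x, z):
    """Just-identified 2SLS with an intercept: solve (Z'X) beta = Z'y."""
    X = np.column_stack([np.ones_like(x), x])
    Z = np.column_stack([np.ones_like(z), z])
    return np.linalg.solve(Z.T @ X, Z.T @ y)

# Naive OLS is badly biased upward because eu_ai correlates with the confounder.
beta_ols = np.linalg.lstsq(
    np.column_stack([np.ones(n), eu_ai]), productivity, rcond=None
)[0]
# The instrument strips out the confounded variation and recovers ~0.04.
beta_iv = iv_2sls(productivity, eu_ai, us_peer_ai)

print(f"OLS slope (confounded):   {beta_ols[1]:.3f}")
print(f"IV slope  (causal, ~0.04): {beta_iv[1]:.3f}")
```

The point of the design is visible in the two slopes: ordinary regression overstates the effect because firms that adopt AI differ systematically from those that don't, while the instrument isolates only the adoption variation predicted by the US peer's behavior.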

The productivity gains, however, skewed heavily toward medium and large companies. Among large firms, 45% had deployed AI, compared to just 24% of small firms. The study also found that complementary investments mattered enormously: an extra percentage point of spending on workforce training amplified AI's productivity effect by 5.9%, and an extra point on software and data infrastructure added 2.4%.

The Courts

NPR's Radio Host David Greene Says Google's NotebookLM Tool Stole His Voice 24

An anonymous reader quotes a report from the Washington Post: David Greene had never heard of NotebookLM, Google's buzzy artificial intelligence tool that spins up podcasts on demand, until a former colleague emailed him to ask if he'd lent it his voice. "So... I'm probably the 148th person to ask this, but did you license your voice to Google?" the former co-worker asked in a fall 2024 email. "It sounds very much like you!"

Greene, a public radio veteran who has hosted NPR's "Morning Edition" and KCRW's political podcast "Left, Right & Center," looked up the tool, listening to the two virtual co-hosts -- one male and one female -- engage in light banter. "I was, like, completely freaked out," Greene said. "It's this eerie moment where you feel like you're listening to yourself." Greene felt the male voice sounded just like him -- from the cadence and intonation to the occasional "uhhs" and "likes" that Greene had worked over the years to minimize but never eliminated. He said he played it for his wife and her eyes popped.

As emails and texts rolled in from friends, family members and co-workers, asking if the AI podcast voice was his, Greene became convinced he'd been ripped off. Now he's suing Google, alleging that it violated his rights by building a product that replicated his voice without payment or permission, giving users the power to make it say things Greene would never say. Google told The Washington Post in a statement on Thursday that NotebookLM's male podcast voice has nothing to do with Greene. Now a Santa Clara County, California, court may be asked to determine whether the resemblance is uncanny enough that ordinary people hearing the voice would assume it's his -- and if so, what to do about it.

Greene's lawsuit cites an unnamed AI forensic firm that used its software to compare the artificial voice to Greene's. It gave a confidence rating of 53-60% that Greene's voice was used to train the model, which it considers "relatively high" confidence.

"If I was David Greene I would be upset, not just because they stole my voice," but because they used it to make the podcasting equivalent of AI "slop," said Mike Pesca, host of "The Gist" podcast and a former colleague of Greene's at NPR. "They have banter, but it's very surface-level, un-insightful banter, and they're always saying, 'Yeah, that's so interesting.' It's really bad, because what do we as show hosts have except our taste in commentary and pointing our audience to that which is interesting?"
