AI

Goldman Sachs: Why AI Spending Is Not Boosting GDP 63

Goldman Sachs, in a research note Thursday (the note isn't publicly posted): Annualized revenue for public companies exposed to the build-out of AI infrastructure increased by over $340 billion from 2022 through Q4 2024 (and is projected to increase by almost $580 billion by end-2025). In contrast, annualized real investment in AI-related categories in the US GDP accounts has only risen by $42 billion over the same period. This sharp divergence has prompted questions from investors about why US GDP is not receiving a larger boost from AI.

A large share of the nominal revenue increase reported by public companies reflects cost inflation (particularly for semiconductors) and foreign revenue, neither of which should boost real US GDP. Indeed, we find that margin expansion ($30 billion) and increased revenue from other countries ($130 billion) account for around half of the publicly reported AI spending surge.

That said, the Bureau of Economic Analysis's (BEA) methodology potentially understates the impact of AI-related investment on real GDP by around $100 billion. Manufacturing shipments and net imports imply that US semiconductor supply has increased by over $35 billion since 2022, but the BEA records semiconductor purchases as intermediate inputs rather than investment (since semiconductors have historically been embedded in products that are later resold) and therefore excludes them from GDP. Cloud services used to train and support AI models are similarly mostly recorded as intermediate inputs.

Combined, we find that these factors account for most of the AI investment discrepancy, leaving only about $50 billion unexplained. Looking ahead, we see more scope for AI-related investment to provide a moderate boost to real US GDP in 2025, since AI investment should broaden into categories like data centers, servers and networking hardware, and utilities that are likely to be captured as real investment. However, we expect the bulk of investment in semiconductors and cloud computing to remain unmeasured barring changes to US national accounts methodology.
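For a rough sense of how the pieces fit together, here is a back-of-the-envelope reconciliation of the figures quoted above, written as a small C program. The grouping and rounding are ours, not Goldman's, so the residual only lands in the same ballpark as the roughly $50 billion the note leaves unexplained.

```c
/* Back-of-the-envelope reconciliation of the Goldman figures quoted above.
 * All values are in billions of US dollars and are the rounded numbers from
 * the note; the grouping below is ours, so treat the residual as approximate. */
#include <stdio.h>

int main(void) {
    double reported_revenue_increase = 340.0; /* rise in annualized AI-exposed public-company revenue, 2022 to Q4 2024 */
    double measured_gdp_investment   =  42.0; /* rise in AI-related real investment in the US GDP accounts */
    double margin_expansion          =  30.0; /* higher margins and prices, not real US output */
    double foreign_revenue           = 130.0; /* revenue booked outside the US, not US GDP */
    double unmeasured_semis_cloud    = 100.0; /* semiconductors and cloud recorded as intermediate inputs */

    double discrepancy = reported_revenue_increase - measured_gdp_investment;
    double explained   = margin_expansion + foreign_revenue + unmeasured_semis_cloud;

    printf("Discrepancy to explain: ~$%.0f billion\n", discrepancy);
    printf("Accounted for:          ~$%.0f billion\n", explained);
    printf("Residual:               ~$%.0f billion (the note calls roughly $50 billion unexplained)\n",
           discrepancy - explained);
    return 0;
}
```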
AI

Amazon Tests AI Dubbing on Prime Video Movies, Series (aboutamazon.com) 42

Amazon has launched a pilot program testing "AI-aided dubbing" for select content on Prime Video, offering translations between English and Latin American Spanish for 12 licensed movies and series including "El Cid: La Leyenda," "Mi Mama Lora" and "Long Lost." The company describes a hybrid approach where "localization professionals collaborate with AI," suggesting automated dubbing receives professional editing for accuracy. The initiative, the company said, aims to increase content accessibility as streaming services expand globally.
Google

Google is Adding More AI Overviews and a New 'AI Mode' To Search (theverge.com) 33

Google announced Wednesday it is expanding its AI Overviews to more query types and users worldwide, including those not logged into Google accounts, while introducing a new "AI Mode" chatbot feature. AI Mode, which resembles competitors like Perplexity or ChatGPT Search, will initially be limited to Google One AI Premium subscribers who enable it through the Labs section of Search.

The feature delivers AI-generated answers with supporting links interspersed throughout, powered by Google's search index. "What we're finding from people who are using AI Overviews is that they're really bringing different kinds of questions to Google," said Robby Stein, VP of product on the Search team. "They're more complex questions, that may have been a little bit harder before." Google is also upgrading AI Overviews with its Gemini 2.0 model, which Stein says will improve responses for math, coding and reasoning-based queries.
AI

OpenAI Plots Charging $20,000 a Month For PhD-Level Agents (theinformation.com) 112

OpenAI is preparing to launch a tiered pricing structure for its AI agent products, with high-end research assistants potentially costing $20,000 per month, [alternative source] according to The Information. The AI startup, which already generates approximately $4 billion in annualized revenue from ChatGPT, plans three service levels: $2,000 monthly agents for "high-income knowledge workers," $10,000 monthly agents for software development, and $20,000 monthly PhD-level research agents. OpenAI has told some investors that agent products could eventually constitute 20-25% of company revenue, the report added.
AI

Turing Award Winners Sound Alarm on Hasty AI Deployment (ft.com) 10

Reinforcement learning pioneers Andrew Barto and Richard Sutton have warned against the unsafe deployment of AI systems [alternative source] after winning computing's prestigious $1 million Turing Award Wednesday. "Releasing software to millions of people without safeguards is not good engineering practice," said Barto, professor emeritus at the University of Massachusetts, comparing it to testing a bridge by having people use it.

Barto and Sutton developed reinforcement learning in the 1980s, inspired by psychological studies of human learning. The technique, which rewards AI systems for desired behaviors, has become fundamental to advances at OpenAI and Google. Sutton, a University of Alberta professor and former DeepMind researcher, dismissed tech companies' artificial general intelligence narrative as "hype."

Both laureates also criticized President Trump's proposed cuts to federal research funding, with Barto calling it "wrong and a tragedy" that would eliminate opportunities for exploratory research like their early work.
Medicine

World's First 'Synthetic Biological Intelligence' Runs On Living Human Cells 49

Australian company Cortical Labs has launched the CL1, the world's first commercial "biological computer" that merges human brain cells with silicon hardware to form adaptable, energy-efficient neural networks. New Atlas reports: Known as a Synthetic Biological Intelligence (SBI), Cortical's CL1 system was officially launched in Barcelona on March 2, 2025, and is expected to be a game-changer for science and medical research. The human-cell neural networks that form on the silicon "chip" are essentially an ever-evolving organic computer, and the engineers behind it say it learns so quickly and flexibly that it completely outpaces the silicon-based AI chips used to train existing large language models (LLMs) like ChatGPT.

"Today is the culmination of a vision that has powered Cortical Labs for almost six years," said Cortical founder and CEO Dr Hon Weng Chong. "We've enjoyed a series of critical breakthroughs in recent years, most notably our research in the journal Neuron, through which cultures were embedded in a simulated game-world, and were provided with electrophysiological stimulation and recording to mimic the arcade game Pong. However, our long-term mission has been to democratize this technology, making it accessible to researchers without specialized hardware and software. The CL1 is the realization of that mission." He added that while this is a groundbreaking step forward, the full extent of the SBI system won't be seen until it's in users' hands.

"We're offering 'Wetware-as-a-Service' (WaaS)," he added -- customers will be able to buy the CL-1 biocomputer outright, or simply buy time on the chips, accessing them remotely to work with the cultured cell technology via the cloud. "This platform will enable the millions of researchers, innovators and big-thinkers around the world to turn the CL1's potential into tangible, real-word impact. We'll provide the platform and support for them to invest in R&D and drive new breakthroughs and research." These remarkable brain-cell biocomputers could revolutionize everything from drug discovery and clinical testing to how robotic "intelligence" is built, allowing unlimited personalization depending on need. The CL1, which will be widely available in the second half of 2025, is an enormous achievement for Cortical -- and as New Atlas saw recently with a visit to the company's Melbourne headquarters -- the potential here is much more far-reaching than Pong. [...]
AI

Users Report Emotional Bonds With Startlingly Realistic AI Voice Demo (arstechnica.com) 65

An anonymous reader quotes a report from Ars Technica: In late 2013, the Spike Jonze film Her imagined a future where people would form emotional connections with AI voice assistants. Nearly 12 years later, that fictional premise has veered closer to reality with the release of a new conversational voice model from AI startup Sesame that has left many users both fascinated and unnerved. "I tried the demo, and it was genuinely startling how human it felt," wrote one Hacker News user who tested the system. "I'm almost a bit worried I will start feeling emotionally attached to a voice assistant with this level of human-like sound."

In late February, Sesame released a demo for the company's new Conversational Speech Model (CSM) that appears to cross over what many consider the "uncanny valley" of AI-generated speech, with some testers reporting emotional connections to the male or female voice assistant ("Miles" and "Maya"). In our own evaluation, we spoke with the male voice for about 28 minutes, talking about life in general and how it decides what is "right" or "wrong" based on its training data. The synthesized voice was expressive and dynamic, imitating breath sounds, chuckles, interruptions, and even sometimes stumbling over words and correcting itself. These imperfections are intentional.

"At Sesame, our goal is to achieve 'voice presence' -- the magical quality that makes spoken interactions feel real, understood, and valued," writes the company in a blog post. "We are creating conversational partners that do not just process requests; they engage in genuine dialogue that builds confidence and trust over time. In doing so, we hope to realize the untapped potential of voice as the ultimate interface for instruction and understanding." [...] Sesame sparked a lively discussion on Hacker News about its potential uses and dangers. Some users reported having extended conversations with the two demo voices, with conversations lasting up to the 30-minute limit. In one case, a parent recounted how their 4-year-old daughter developed an emotional connection with the AI model, crying after not being allowed to talk to it again.

Firefox

Firefox 136 Released With Vertical Tabs, Official ARM64 Linux Binaries (9to5linux.com) 49

An anonymous reader quotes a report from 9to5Linux: Mozilla today published the final build of the Firefox 136 open-source web browser for all supported platforms ahead of the official March 4th, 2025 release date, so it's time to take a look at the new features and changes. Highlights of Firefox 136 include official Linux binary packages for the AArch64 (ARM64) architecture, hardware video decoding for AMD GPUs on Linux systems, a new HTTPS-First behavior for upgrading page loads to HTTPS, and Smartblock Embeds for selectively unblocking certain social media embeds blocked in the ETP Strict and Private Browsing modes.

Firefox 136 is available for download for 32-bit, 64-bit, and AArch64 (ARM64) Linux systems right now from Mozilla's FTP server. As mentioned before, Mozilla plans to officially release Firefox 136 tomorrow, March 4th, 2025, when it will roll out as an OTA (Over-the-Air) update to macOS and Windows users.
Here's a list of the general features available in this release:

- Vertical Tabs Layout
- New Browser Layout Section
- PNG Copy Support
- HTTPS-First Behavior
- Smartblock Embeds
- Solo AI Link
- Expanded Data Collection & Use Settings
- Weather Forecast on New Tab Page
- Address Autofill Expansion

A full list of changes can be found here.
Youtube

YouTube Warns Creators an AI-Generated Video of Its CEO is Being Used For Phishing Scams (theverge.com) 16

An anonymous reader shares a report: YouTube is warning creators about a new phishing scam that attempts to lure victims using an AI-generated video of its CEO Neal Mohan. The fake video has been shared privately with users and claims YouTube is making changes to its monetization policy in an attempt to steal their credentials, according to an announcement on Tuesday.

"YouTube and its employees will never attempt to contact you or share information through a private video," YouTube says. "If a video is shared privately with you claiming to be from YouTube, the video is a phishing scam." In recent weeks, there have been reports floating around Reddit about scams similar to the one described by YouTube.

Opera

Opera Adds an Automated AI Agent To Its Browser (theregister.com) 23

king*jojo shares a report from The Register: The Opera web browser now boasts "agentic AI," meaning users can ask an onboard AI model to perform tasks that require a series of in-browser actions. The AI agent, referred to as the Browser Operator, can, for example, find 12 pairs of men's size 10 Nike socks that you can buy. This is demonstrated in an Opera-made video of the process, running intermittently at 6x speed, which shows that the user has to type out the request for the undergarments rather than click around some webpages.

The AI, in the given example, works its way through eight steps in its browser chat sidebar, clicking and navigating on your behalf in the web display pane, to arrive at a Walmart checkout page with two six-packs of socks added to the user's shopping cart, ready for payment. [...] Other tasks such as finding specific concert tickets and booking flight tickets from Oslo to Newcastle are also depicted, accelerated at times from 4x to 10x, with the user left to authorize the actual purchase. Browser Operator runs more slowly than shown in the video, though that's actually helpful for a semi-capable assistant. A more casual pace allows the user to intervene at any point and take over.

AI

Judges Are Fed Up With Lawyers Using AI That Hallucinate Court Cases (404media.co) 74

An anonymous reader quotes a report from 404 Media: After a group of attorneys were caught using AI to cite cases that didn't actually exist in court documents last month, another lawyer was told to pay $15,000 for his own AI hallucinations that showed up in several briefs. Attorney Rafael Ramirez, who represented a company called HoosierVac in an ongoing case where the Mid Central Operating Engineers Health and Welfare Fund claims the company is failing to allow the union a full audit of its books and records, filed a brief in October 2024 that cited a case the judge wasn't able to locate. Ramirez "acknowledge[d] that the referenced citation was in error," withdrew the citation, and "apologized to the court and opposing counsel for the confusion," according to Judge Mark Dinsmore, U.S. Magistrate Judge for the Southern District of Indiana. But that wasn't the end of it. An "exhaustive review" of Ramirez's other filings in the case showed that he'd included made-up cases in two other briefs, too. [...]

In January, as part of a separate case against a hoverboard manufacturer and Walmart seeking damages for an allegedly faulty lithium battery, attorneys filed court documents that cited a series of cases that don't exist. In February, U.S. District Judge Kelly Rankin demanded they explain why they shouldn't be sanctioned for referencing eight non-existent cases. The attorneys contritely admitted to using AI to generate the cases without catching the errors, and called it a "cautionary tale" for the rest of the legal world. Last week, Judge Rankin issued sanctions on those attorneys, according to new records, including revoking one attorney's pro hac vice admission (a legal term meaning a lawyer can temporarily practice in a jurisdiction where they're not licensed) and removing him from the case, while the three other attorneys on the case were each fined between $1,000 and $3,000.
The judge in the Ramirez case said that he "does not aim to suggest that AI is inherently bad or that its use by lawyers should be forbidden." In fact, he noted that he's a vocal advocate for the use of technology in the legal profession.

"Nevertheless, much like a chain saw or other useful [but] potentially dangerous tools, one must understand the tools they are using and use those tools with caution," he wrote. "It should go without saying that any use of artificial intelligence must be consistent with counsel's ethical and professional obligations. In other words, the use of artificial intelligence must be accompanied by the application of actual intelligence in its execution."
Apple

Apple Unveils iPad Air With M3 Chip (apple.com) 42

Apple today announced a significant update to its iPad Air lineup, integrating the M3 chip previously reserved for higher-end devices. The new tablets, available in both 11-inch ($599) and 13-inch ($799) configurations, deliver substantial performance gains: nearly 2x faster than M1-equipped models and 3.5x faster than A14 Bionic versions.

The M3 brings Apple's advanced graphics architecture to the Air for the first time, featuring dynamic caching, hardware-accelerated mesh shading, and ray tracing. The chip includes an 8-core CPU delivering 35% faster multithreaded performance over M1, paired with a 9-core GPU offering 40% faster graphics. The Neural Engine processes AI workloads 60% faster than M1, the company said. Apple also introduced a redesigned Magic Keyboard ($269/$319) with function row and larger trackpad.
Google

Google Releases SpeciesNet, an AI Model Designed To Identify Wildlife (techcrunch.com) 15

An anonymous reader quotes a report from TechCrunch: Google has open sourced an AI model, SpeciesNet, designed to identify animal species by analyzing photos from camera traps. Researchers around the world use camera traps -- digital cameras connected to infrared sensors -- to study wildlife populations. But while these traps can provide valuable insights, they generate massive volumes of data that take days to weeks to sift through. In a bid to help, Google launched Wildlife Insights, an initiative of the company's Google Earth Outreach philanthropy program, around six years ago. Wildlife Insights provides a platform where researchers can share, identify, and analyze wildlife images online, collaborating to speed up camera trap data analysis.

Many of Wildlife Insights' analysis tools are powered by SpeciesNet, which Google claims was trained on over 65 million publicly available images and images from organizations like the Smithsonian Conservation Biology Institute, the Wildlife Conservation Society, the North Carolina Museum of Natural Sciences, and the Zoological Society of London. Google says that SpeciesNet can classify images into one of more than 2,000 labels, covering animal species, taxa like "mammalian" or "Felidae," and non-animal objects (e.g. "vehicle"). SpeciesNet is available on GitHub under an Apache 2.0 license, meaning it can be used commercially largely sans restrictions.

Education

Researchers Find Less-Educated Areas Adopting AI Writing Tools Faster 108

An anonymous reader quotes a report from Ars Technica: Since the launch of ChatGPT in late 2022, experts have debated how widely AI language models would impact the world. A few years later, the picture is getting clearer. According to new Stanford University-led research examining over 300 million text samples across multiple sectors, AI language models now assist in writing up to a quarter of professional communications, and the impact is especially large in less-educated parts of the United States. "Our study shows the emergence of a new reality in which firms, consumers and even international organizations substantially rely on generative AI for communications," wrote the researchers.

The researchers tracked large language model (LLM) adoption across industries from January 2022 to September 2024 using a dataset that included 687,241 consumer complaints submitted to the US Consumer Financial Protection Bureau (CFPB), 537,413 corporate press releases, 304.3 million job postings, and 15,919 United Nations press releases. By using a statistical detection system that tracked word usage patterns, the researchers found that roughly 18 percent of financial consumer complaints (including 30 percent of all complaints from Arkansas), 24 percent of corporate press releases, up to 15 percent of job postings, and 14 percent of UN press releases showed signs of AI assistance during that period of time.
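The study's detector works at the population level, modeling shifts in word-frequency distributions before and after ChatGPT's release rather than classifying individual documents. Purely to illustrate the word-usage idea, here is a toy marker-word counter in C; the marker list and the 5% threshold are invented for this sketch and are not from the paper.

```c
/* Toy illustration of word-usage-based detection: count a few marker words
 * whose frequency reportedly rises in LLM-assisted text. The marker list and
 * threshold are invented for illustration; the Stanford study instead
 * estimates population-level shifts in word-frequency distributions. */
#include <stdio.h>
#include <string.h>
#include <ctype.h>

/* Count non-overlapping occurrences of `word` in `text` (assumes lowercase input). */
static int count_occurrences(const char *text, const char *word) {
    int n = 0;
    size_t len = strlen(word);
    for (const char *p = text; (p = strstr(p, word)) != NULL; p += len)
        n++;
    return n;
}

int main(void) {
    const char *complaint =
        "i want to delve into this pivotal issue and underscore my commitment "
        "to a comprehensive and timely resolution of the matter";
    const char *markers[] = { "delve", "underscore", "pivotal", "comprehensive" };
    int hits = 0, words = 1;

    for (size_t i = 0; i < sizeof(markers) / sizeof(markers[0]); i++)
        hits += count_occurrences(complaint, markers[i]);

    for (const char *p = complaint; *p; p++)   /* crude word count: spaces + 1 */
        if (isspace((unsigned char)*p))
            words++;

    double rate = (double)hits / words;
    printf("marker words: %d of %d (%.1f%%) -> %s\n", hits, words, 100.0 * rate,
           rate > 0.05 ? "flag for possible AI assistance" : "no flag");
    return 0;
}
```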

The study also found that while urban areas showed higher adoption overall (18.2 percent versus 10.9 percent in rural areas), regions with lower educational attainment used AI writing tools more frequently (19.9 percent compared to 17.4 percent in higher-education areas). The researchers note that this contradicts typical technology adoption patterns where more educated populations adopt new tools fastest. "In the consumer complaint domain, the geographic and demographic patterns in LLM adoption present an intriguing departure from historical technology diffusion trends where technology adoption has generally been concentrated in urban areas, among higher-income groups, and populations with higher levels of educational attainment."
"Arkansas showed the highest adoption rate at 29.2 percent (based on 7,376 complaints), followed by Missouri at 26.9 percent (16,807 complaints) and North Dakota at 24.8 percent (1,025 complaints)," notes Ars. "In contrast, states like West Virginia (2.6 percent), Idaho (3.8 percent), and Vermont (4.8 percent) showed minimal AI writing adoption. Major population centers demonstrated moderate adoption, with California at 17.4 percent (157,056 complaints) and New York at 16.6 percent (104,862 complaints)."

The study was listed on the arXiv preprint server in mid-February.
AI

Microsoft Unveils New Voice-Activated AI Assistant For Doctors 18

Microsoft has introduced Dragon Copilot, a voice-activated AI assistant for doctors that integrates dictation and ambient listening tools to automate clinical documentation, including notes, referrals, and post-visit summaries. The tool is set to launch in May in the U.S. and Canada. CNBC reports: Microsoft acquired Nuance Communications, the company behind Dragon Medical One and DAX Copilot, for about $16 billion in 2021. As a result, Microsoft has become a major player in the fiercely competitive AI scribing market, which has exploded in popularity as health systems have been looking for tools to help address burnout. AI scribes like DAX Copilot allow doctors to draft clinical notes in real time as they consensually record their visits with patients. DAX Copilot has been used in more than 3 million patient visits across 600 health-care organizations in the last month, Microsoft said.

Dragon Copilot is accessible through a mobile app, browser or desktop, and it integrates directly with several different electronic health records, the company said. Clinicians will still be able to draft clinical notes with the assistant like they could with DAX Copilot, but they'll be able to use natural language to edit their documentation and prompt it further, Kenn Harper, general manager of Dragon products at Microsoft, told reporters on the call. For instance, a doctor could ask questions like, "Was the patient experiencing ear pain?" or "Can you add the ICD-10 codes to the assessment and plan?" Physicians can also ask broader treatment-related queries such as, "Should this patient be screened for lung cancer?" and get an answer with links to resources like the Centers for Disease Control and Prevention. [...]
AI

TSMC Pledges To Spend $100 Billion On US Chip Facilities (techcrunch.com) 67

An anonymous reader quotes a report from TechCrunch: Chipmaker TSMC said that it aims to invest "at least" $100 billion in chip manufacturing plants in the U.S. over the next four years as part of an effort to expand the company's network of semiconductor factories. President Donald Trump announced the news during a press conference Monday. TSMC's cash infusion will fund the construction of several new facilities in Arizona, C. C. Wei, chairman and CEO of TSMC, said during the briefing. "We are going to produce many AI chips to support AI progress," Wei said.

TSMC previously pledged to pour $65 billion into U.S.-based fabrication plants and has received up to $6.6 billion in grants from the CHIPS Act, a major Biden administration-era law that sought to boost domestic semiconductor production. The new investment brings TSMC's total investments in the U.S. chip industry to around $165 billion, Trump said in prepared remarks. [...] TSMC, the world's largest contract chip maker, already has several facilities in the U.S., including a factory in Arizona that began mass production late last year. But the company currently reserves its most sophisticated facilities for its home country of Taiwan.

AI

Call Centers Using AI To 'Whiten' Indian Accents 136

The world's biggest call center company is using artificial intelligence to "neutralise" Indian accents for Western customers. From a report: Teleperformance said it was applying real-time AI software on phone calls in order to increase "human empathy" between two people on the phone. The French company's customers in the UK include parts of the Government, the NHS, Vodafone and eBay.

Teleperformance has 90,000 employees in India and tens of thousands more in other countries. It is using software from Sanas, an American company that says the system helps "build a more understanding world" and reduces miscommunication. The company's website says it makes call center workers more productive and means customer service calls are resolved more quickly. The company also says it means call center workers are less likely to be abused and customers are less likely to demand to speak to a supervisor. It is already used by companies including Walmart and UPS.
AI

The US Cities Whose Workers Are Most Exposed to AI (bloomberg.com) 24

Silicon Valley, the place that did more than any other to pioneer artificial intelligence, is the most exposed to its ability to automate work. That's according to an analysis by researchers at the Brookings Institution, a think tank, which matched the tasks that OpenAI's ChatGPT-4 could do with the jobs that are most common in different US cities. From a report: The result is a sharp departure from previous rounds of automation. Whereas technologies like robotics came for middle-class jobs -- and manufacturing cities such as Detroit -- generative AI is best at the white-collar work that's highly paid and most common in "superstar" cities like San Francisco and Washington, DC.

The Brookings analysis is of the US, but the same logic would apply anywhere: The more a city's economy is oriented around white-collar knowledge work, the more exposed it is to AI. "Exposure" doesn't necessarily mean automation, stressed Mark Muro, a senior fellow at Brookings and one of the study's authors. It could also mean productivity gains.
From the Brookings report: Now, the higher-end workers and regions only mildly exposed to earlier forms of automation look to be most involved (for better or worse) with generative AI and its facility for cognitive, office-type tasks. In that vein, workers in high-skill metro areas such as San Jose, Calif.; San Francisco; Durham, N.C.; New York; and Washington D.C. appear likely to experience heavy involvement with generative AI, while those in less office-oriented metro areas such as Las Vegas; Toledo, Ohio; and Fort Wayne, Ind. appear far less susceptible. For instance, while 43% of workers in San Jose could see generative AI shift half or more of their work tasks, that share is only 31% of workers in Las Vegas.
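One way to read that matching of model capabilities to local occupations is as an employment-weighted average: score each occupation by the share of its tasks the AI model could plausibly assist with, then weight those scores by how much of a city's workforce holds each occupation. The sketch below shows only the shape of that calculation; the occupations, exposure scores, and employment shares are invented for illustration and are not Brookings' data.

```c
/* Sketch of a city-level AI-exposure index as an employment-weighted average
 * of occupational exposure scores. Occupations, scores, and employment shares
 * are invented for illustration and are not Brookings' data. */
#include <stdio.h>

struct occupation {
    const char *name;
    double employment_share;  /* share of the city's workers in this occupation */
    double task_exposure;     /* share of the occupation's tasks an LLM could assist with */
};

int main(void) {
    struct occupation city[] = {
        { "software developer", 0.20, 0.80 },
        { "financial analyst",  0.15, 0.70 },
        { "food service",       0.40, 0.10 },
        { "construction",       0.25, 0.05 },
    };
    double index = 0.0, total_share = 0.0;

    for (size_t i = 0; i < sizeof(city) / sizeof(city[0]); i++) {
        index += city[i].employment_share * city[i].task_exposure;
        total_share += city[i].employment_share;
    }
    index /= total_share;  /* normalize in case shares don't sum to exactly 1 */

    printf("city exposure index: %.2f\n", index);  /* 0 = untouched, 1 = every task exposed */
    return 0;
}
```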
Technology

Lenovo's ThinkBook Flip Puts an Extra-Tall Folding Display On a Laptop (theverge.com) 10

Lenovo today teased another of its concept devices at Mobile World Congress, the ThinkBook "codename Flip" AI PC Concept, which features a flexible 18.1-inch OLED display that can transform between three configurations: a traditional 13.1-inch clamshell, a folded 12.9-inch tablet, or a laptop with an extra-tall vertical screen.

Unlike the motorized ThinkBook Plus Gen 6 expected in June, the Flip uses the display's flexibility to fold behind itself, eliminating motors while gaining 0.4 inches of additional screen space. Users can mirror content on the rear-facing portion when folded or enjoy the full 2000x2664 resolution display in vertical orientation. The concept also features a SmartForcePad trackpad with LED-illuminated shortcut layers. While the device is still in the prototype phase, Lenovo has specs in mind: Intel Ultra 7 processor, 32GB RAM, PCIe SSD storage, and Thunderbolt 4 connectivity.
Programming

Can TrapC Fix C and C++ Memory Safety Issues? (infoworld.com) 99

"TrapC, a fork of the C language, is being developed as a potential solution for memory safety issues that have hindered the C and C++ languages," reports InfoWorld.

But also being developed is a compiler named trapc, "intended to be implemented as a cybersecurity compiler for C and C++ code," said developer Robin Rowe... Due by the end of this year, trapc will be a free, open source compiler similar to Clang... Rowe said.

TrapC has pointers that are memory-safe, addressing the memory safety issue with the two languages. With TrapC, developers write in C or C++ and compile in TrapC, for memory safety...

Rowe presented TrapC at an ISO C meeting this week. Developers can download a TrapC whitepaper and offer Rowe feedback. According to the whitepaper, TrapC's memory management is automatic and cannot leak memory. Pointers are lifetime-managed, not garbage-collected. Also, TrapC reuses a few code safety features from C++, notably member functions, constructors, destructors, and the new keyword.

"TrapC Memory Safe Pointers will not buffer overrun and will not segfault," Rowe told the ISO C Committee standards body meeting, according to the Register. "When C code is compiled using a TrapC compiler, all pointers become Memory Safe Pointers and are checked."

In short, TrapC "is a programming language forked from C, with changes to make it LangSec and Memory Safe," according to that white paper. "To accomplish that, TrapC seeks to eliminate all Undefined Behavior in the C programming language..."

"The startup TRASEC and the non-profit Fountain Abode have a TrapC compiler in development, called trapc," the whitepaper adds, and their mission is "to enable recompiling legacy C code into executables that are safe by design and secure by default, without needing much code refactoring... The TRASEC trapc cybersecurity compiler with AI code reasoning is expected to release as free open source software sometime in 2025."

In November the Register offered some background on the origins of TrapC...
