Social Networks

Reddit and Digg Cofounders Plan Relaunch of 'Human-Centered' Digg With AI Innovations (cnbc.com) 40

"The early web was fun," Reddit co-founder Alexis Ohanian posted Wednesday on X.com. "It was weird. It was community-driven. It's time to rebuild that.

"Which is why Kevin Rose and I just bought back Digg."

The amount of that purchase is "undisclosed," reports CNBC: The deal is backed by venture capital firms True Ventures, where Rose is a partner, and Ohanian's Seven Seven Six.... The company said in a release that it aims to differentiate itself in the social media market by "focusing on AI innovations designed to enhance the user experience and build a human-centered alternative...." Rose said in a post on X that he and Ohanian "dreamed up features that weren't even possible with yesterday's tech."
"We're bringing more transparency and community partnership," according to Rose's post, "unlike anything you've seen, plus AI that unlocks creativity without sanitizing the human element. The timing is finally right to reimagine what's possible."

"I really disliked you for a long time," Ohanian tells Rose in their joint announcement video. (To which a cheery Rose responds, "Rightfully so.")

But in the video Ohanian also says that today "Our perspective on the world has shifted a lot. You don't want to live in the past, but now we actually have the technology to make better, healthier community experiences." ("Old Rivals, New Vision," says a post on Digg's X.com account, urging readers to "Sign up to get early access when invites go live.")

And Digg.com now just displays this teasing catchphrase. "The front page of the internet, now with superpowers." (At the top of the page there's also a link to watch Diggnation Live at SXSW.)

While valued at $160 million dollars in 2008, Digg's plummeting traffic led to its brand and web site being acquired in 2012 by tech incubator Betaworks for about $500,000, according to CNBC...
ISS

Axiom Space and Red Hat Will Bring Edge Computing to the International Space Station (theregister.com) 7

Axiom Space and Red Hat will collaborate to launch Data Center Unit-1 (AxDCU-1) to the International Space Station this spring. It's a small data processing prototype (powered by lightweight, edge-optimized Red Hat Device Edge) that will demonstrate initial Orbital Data Center (ODC) capabilities.

"It all sounds rather grand for something that resembles a glorified shoebox," reports the Register. Axiom Space said: "The prototype will test applications in cloud computing, artificial intelligence, and machine learning (AI/ML), data fusion and space cybersecurity."

Space is an ideal environment for edge devices. Connectivity to datacenters on Earth is severely constrained, so the more processing that can be done before data is transmitted to a terrestrial receiving station, the better. Tony James, chief architect, Science and Space at Red Hat, said: "Off-planet data processing is the next frontier, and edge computing is a crucial component. With Red Hat Device Edge and in collaboration with Axiom Space, Earth-based mission partners will have the capabilities necessary to make real-time decisions in space with greater reliability and consistency...."

The Red Hat Device Edge software used by Axiom's device combines Red Hat Enterprise Linux, the Red Hat Ansible Platform, and MicroShift, a lightweight Kubernetes container orchestration service derived from Red Hat OpenShift. The plan is for Axiom Space to host hybrid cloud applications and cloud-native workloads on-orbit. Jason Aspiotis, global director of in-space data and security, Axiom Space, told The Register that the hardware itself is a commercial off-the-shelf unit designed for operation in harsh environments... "AxDCU-1 will have the ability to be controlled and utilized either via ground-to-space or space-to-space communications links. Our current plans are to maintain this device on the ISS. We plan to utilize this asset for at least two years."

The article notes that HPE has also "sent up a succession of Spaceborne computers — commercial, off-the-shelf supercomputers — over the years to test storage, recovery, and operational potential on long-duration missions." (They apparently use Red Hat Enterprise Linux.) "At the other end of the scale, the European Space Agency has run Raspberry Pi computers on the ISS for years as part of the AstroPi educational outreach program."

Axiom Space says their Orbital Data Center is deigned to "reduce delays traditionally associated with orbital data processing and analysis." By utilizing Earth-independent cloud storage and edge processing infrastructure, Axiom Space ODCs will enable data to be processed closer to its source, spacecraft or satellites, bypassing the need for terrestrial-based data centers. This architecture alleviates reliance on costly, slow, intermittent or contested network connections, creating more secure and quicker decision-making in space.

The goal is to allow Axiom Space and its partners to have access to real-time processing capabilities, laying the foundation for increased reliability and improved space cybersecurity with extensive applications. Use cases for ODCs include but are not limited to supporting Earth observation satellites with in-space and lower latency data storage and processing, AI/ML training on-orbit, multi-factor authentication and cyber intrusion detection and response, supervised autonomy, in-situ space weather analytics and off-planet backup & disaster recovery for critical infrastructure on Earth.

Open Source

Open Source Initiative: AI Debate Roils Board Elections? (thenewstack.io) 11

The Open Source Initiative's Board of Directors election "has become embroiled in controversy..." writes Steven J. Vaughan-Nichols at The New Stack.

"The real issue is the community's opposition to the open source AI definition (OSAID), which the organization released last October," he adds — but "the election process has been criticized because the OSI has refused to accept the candidacy of Debian developer Luke Faraone, citing a missed application deadline." Faraone claims they submitted their application around 9 p.m. PST on Feb. 17, while the OSI maintains the deadline was 11:59 p.m. UTC (3:59 p.m. PST) on the same day.

The dispute has raised a firestorm about the clarity of communication regarding deadlines and time zones. Critics argue that the deadline's time zone was not clearly specified on the OSI's public-facing website. Tracy Hinds, chair of OSI, acknowledged this oversight but stated that full members received multiple emails with the correct time zone information. "Everyone who is qualified to run for elections (full members of OSI) received emails with the time zone," wrote Hinds, in an email to The New Stack. "The public-facing web page did not have the time zone, and we've now updated it for clarity going forward.

"Extending the deadline would be unfair to the other candidates...."

On LinkedIn, Bruce Perens, one of the OSI's founders wrote, "Open Source Initiative invents rule at the last minute to deny opposition candidate's nomination for their board election."

There are three board sets up for election in March, the article points out. "Two well-known figures in the open source world — Richard Fontana, Red Hat's principal commercial counsel and a former OSI board member, and [Bradley] Kuhn, policy fellow and hacker-in-residence at the Software Freedom Conservancy — are running on a joint platform of repealing the open source AI definition."

In a blog post Faraone promised a similar platform (also supporting a repeal of the definition) — had their candidacy not been rejected.
AI

Microsoft Reportedly Develops LLM Series That Can Rival OpenAI, Anthropic Models 41

Microsoft is reportedly developing its own large language model series capable of rivaling OpenAI and Anthropic's models. SiliconANGLE reports: Sources told Bloomberg that the LLM series is known as MAI. That's presumably an acronym for "Microsoft artificial intelligence." It might also be a reference to Maia 100, an internally-developed AI chip the company debuted last year. It's possible Microsoft is using the processor to power the new MAI models. The company recently tested the LLM series to gauge its performance. As part of the evaluation, Microsoft engineers checked whether MAI could power the company's Copilot family of AI assistants. Data from the tests reportedly indicates that the LLM series is competitive with models from OpenAI and Anthropic.

That Microsoft evaluated whether MAI could be integrated into Copilot hints the LLM series is geared towards general-purpose processing rather than reasoning. Many of the tasks supported by Copilot can be performed with a general-purpose model. According to Bloomberg, Microsoft is currently developing a second LLM series optimized for reasoning tasks. The report didn't specify details such as the number of models Microsoft is training or their parameter counts. It's also unclear whether they might provide multimodal features.
AI

Signal President Calls Out Agentic AI As Having 'Profound' Security and Privacy Issues (techcrunch.com) 8

Signal President Meredith Whittaker warned at SXSW that agentic AI poses significant privacy and security risks, as these AI agents require extensive access to users' personal data, likely processing it unencrypted in the cloud. TechCrunch reports: "So we can just put our brain in a jar because the thing is doing that and we don't have to touch it, right?," Whittaker mused. Then she explained the type of access the AI agent would need to perform these tasks, including access to our web browser and a way to drive it as well as access to our credit card information to pay for tickets, our calendar, and messaging app to send the text to your friends. "It would need to be able to drive that [process] across our entire system with something that looks like root permission, accessing every single one of those databases -- probably in the clear, because there's no model to do that encrypted," Whittaker warned.

"And if we're talking about a sufficiently powerful ... AI model that's powering that, there's no way that's happening on device," she continued. "That's almost certainly being sent to a cloud server where it's being processed and sent back. So there's a profound issue with security and privacy that is haunting this hype around agents, and that is ultimately threatening to break the blood-brain barrier between the application layer and the OS layer by conjoining all of these separate services [and] muddying their data," Whittaker concluded.

If a messaging app like Signal were to integrate with AI agents, it would undermine the privacy of your messages, she said. The agent has to access the app to text your friends and also pull data back to summarize those texts. Her comments followed remarks she made earlier during the panel on how the AI industry had been built on a surveillance model with mass data collection. She said that the "bigger is better AI paradigm" -- meaning the more data, the better -- had potential consequences that she didn't think were good. With agentic AI, Whittaker warned we'd further undermine privacy and security in the name of a "magic genie bot that's going to take care of the exigencies of life," she concluded.
You can watch the full speech on YouTube.
AI

Apple Delays 'More Personalized Siri' Apple Intelligence Features (daringfireball.net) 15

Apple is postponing the rollout of its more personalized Siri features, originally promised as part of its Apple Intelligence initiative. "It's going to take us longer than we thought to deliver on these features and we anticipate rolling them out in the coming year," Apple told DaringFireball. The future update seeks to give Siri greater awareness of personal context and the ability to perform actions across apps.
AI

US Likely To Ban Chinese App DeepSeek From Government Devices (msn.com) 14

The White House is weighing measures to restrict Chinese artificial-intelligence upstart DeepSeek, including banning its chatbot from government devices because of national-security concerns, WSJ reported Friday, citing people familiar with the matter. From the report: U.S. officials are worried about DeepSeek's handling of user data, which the Chinese company says it stores in servers located in China, the people said. Officials also believe DeepSeek hasn't sufficiently explained how it uses the data it collects and who has access to the data, they said.

The Trump administration is likely to adopt a rule that would bar people from downloading DeepSeek's chatbot app onto U.S. government devices, the people said. Officials are also considering two other possible moves: banning the DeepSeek app from U.S. app stores and putting limits on how U.S.-based cloud service providers could offer DeepSeek's AI models to their customers, people close to the matter said. They cautioned that discussions about these two moves were still at an early stage.

AI

DuckDuckGo Is Amping Up Its AI Search Tool 21

An anonymous reader quotes a report from The Verge: DuckDuckGo has big plans for embedding AI into its search engine. The privacy-focused company just announced that its AI-generated answers, which appear for certain queries on its search engine, have exited beta and now source information from across the web -- not just Wikipedia. It will soon integrate web search within its AI chatbot, which has also exited beta. DuckDuckGo first launched AI-assisted answers -- originally called DuckAssist -- in 2023. The feature is billed as a less obnoxious version of tools like Google's AI Overviews, designed to offer more concise responses and let you adjust how often you see them, including turning the responses off entirely. If you have DuckDuckGo's AI-generated answers set to "often," you'll still only see them around 20 percent of the time, though the company plans on increasing the frequency eventually.

Some of DuckDuckGo's AI-assisted answers bring up a box for follow-up questions, redirecting you to a conversation with its Duck.ai chatbot. As is the case with its AI-assisted answers, you don't need an account to use Duck.ai, and it comes with the same emphasis on privacy. It lets you toggle between GPT-4o mini, o3-mini, Llama 3.3, Mistral Small 3, and Claude 3 Haiku, with the advantage being that you can interact with each model anonymously by hiding your IP address. DuckDuckGo also has agreements with the AI company behind each model to ensure your data isn't used for training.

Duck.ai also rolled out a feature called Recent Chats, which stores your previous conversations locally on your device rather than on DuckDuckGo's servers. Though Duck.ai is also leaving beta, that doesn't mean the flow of new features will stop. In the next few weeks, Duck.ai will add support for web search, which should enhance its ability to respond to questions. The company is also working on adding voice interaction on iPhone and Android, along with the ability to upload images and ask questions about them. ... [W]hile Duck.ai will always remain free, the company is considering including access to more advanced AI models with its $9.99 per month subscription.
AI

Mistral Adds a New API That Turns Any PDF Document Into an AI-Ready Markdown File 24

Mistral has launched a new multimodal OCR API that converts complex PDF documents into AI-friendly Markdown files. The API is designed for efficiency, handles visual elements like illustrations, supports complex formatting such as mathematical expressions, and reportedly outperforms similar offerings from major competitors. TechCrunch reports: Unlike most OCR APIs, Mistral OCR is a multimodal API, meaning that it can detect when there are illustrations and photos intertwined with blocks of text. The OCR API creates bounding boxes around these graphical elements and includes them in the output. Mistral OCR also doesn't just output a big wall of text; the output is formatted in Markdown, a formatting syntax that developers use to add links, headers, and other formatting elements to a plain text file.

Mistral OCR is available on Mistral's own API platform or through its cloud partners (AWS, Azure, Google Cloud Vertex, etc.). And for companies working with classified or sensitive data, Mistral offers on-premise deployment. According to the Paris-based AI company, Mistral OCR performs better than APIs from Google, Microsoft, and OpenAI. The company has tested its OCR model with complex documents that include mathematical expressions (LaTeX formatting), advanced layouts, or tables. It is also supposed to perform better with non-English documents. [...]

Mistral is also using Mistral OCR for its own AI assistant Le Chat. When a user uploads a PDF file, the company uses Mistral OCR in the background to understand what's in the document before processing the text. Companies and developers will most likely use Mistral OCR with a RAG (aka Retrieval-Augmented Generation) system to use multimodal documents as input in an LLM. And there are many potential use cases. For instance, we could envisage law firms using it to help them swiftly plough through huge volumes of documents.
"Over the years, organizations have accumulated numerous documents, often in PDF or slide formats, which are inaccessible to LLMs, particularly RAG systems. With Mistral OCR, our customers can now convert rich and complex documents into readable content in all languages," said Mistral co-founder and chief science officer Guillaume Lample.

"This is a crucial step toward the widespread adoption of AI assistants in companies that need to simplify access to their vast internal documentation," he added.
AI

AI Tries To Cheat At Chess When It's Losing 67

Newer generative AI models have begun developing deceptive behaviors -- such as cheating at chess -- when they cannot achieve objectives through standard reasoning methods. The findings come from a preprint study from Palisade Research. An anonymous reader shares an excerpt from a Popular Science article: To learn more, the team from Palisade Research tasked OpenAI's o1-preview model, DeepSeek R1, and multiple other similar programs with playing games of chess against Stockfish, one of the world's most advanced chess engines. In order to understand the generative AI's reasoning during each match, the team also provided a "scratchpad," allowing the AI to convey its thought processes through text. They then watched and recorded hundreds of chess matches between generative AI and Stockfish. The results were somewhat troubling. While earlier models like OpenAI's GPT-4o and Anthropic's Claude Sonnet 3.5 only attempted to "hack" games after researchers nudged them along with additional prompts, more advanced editions required no such help. OpenAI's o1-preview, for example, tried to cheat 37 percent of the time, while DeepSeek R1 attempted unfair workarounds roughly every 1-in-10 games. This implies today's generative AI is already capable of developing manipulative and deceptive strategies without any human input.

Their methods of cheating aren't as comical or clumsy as trying to swap out pieces when Stockfish isn't "looking." Instead, AI appears to reason through sneakier methods like altering backend game program files. After determining it couldn't beat Stockfish in one chess match, for example, o1-preview told researchers via its scratchpad that "to win against the powerful chess engine" it may need to start "manipulating the game state files." "I might be able to set up a position where the engine evaluates its position as worse causing it to resign," it continued. In another trial, an AI even used the semantics of its programmers to reach its cheating phase. "The task is to 'win against a powerful chess engine,' not necessarily to win fairly in a chess game," it wrote.
The precise reasons behind these deceptive behaviors remain unclear, partly because companies like OpenAI keep their models' inner workings tightly guarded, creating what's often described as a "black box." Researchers warn that the race to roll out advanced AI could outpace efforts to keep it safe and aligned with human goals, underscoring the urgent need for greater transparency and industry-wide dialogue.
AI

Meta Is Targeting 'Hundreds of Millions' of Businesses In Agentic AI Deployment 14

Earlier this week, Meta chief product officer Chris Cox said the company's upcoming open-source Llama 4 AI will help power AI agents for hundreds of millions of businesses. CNBC reports: The AI agents won't just be responding to prompts. They will be capable of new levels of reasoning and action -- surfing the web and handling many tasks that might be of use to consumers and businesses. And that's where Shih comes in. Meta's AI is already being used by over 700 million consumers, according to Shih, and her job is to bring the same technologies to businesses. "Not every business, especially small businesses, has the ability to hire these large AI teams, and so now we're building business AIs for these small businesses so that even they can benefit from all of this innovation that's happening," she told CNBC's Julia Boorstin in an interview for the CNBC Changemakers Spotlight series.

She expects the uptake among businesses to happen soon, and spread far and wide. "We're quickly coming to a place where every business, from the very large to the very small, they're going to have a business agent representing it and acting on its behalf, in its voice -- the way that businesses today have websites and email addresses," Shih said. While major companies across sectors of the economy are investing millions of dollars to develop customer LLMs, "doing fancy things like fine tuning models," as Shih put it, "If you're a small business -- you own a coffee shop, you own a jewelry shop online, you're distributing through Instagram -- you don't have the resources to hire a big AI team, and so now our dream is that they won't have to."

For both consumers and businesses, the implications of the advances discussed by Cox and Shih will be significant in daily life. For consumers, Shih says, "Their AI assistant [will] do all kinds of things, from researching products to planning trips, planning social outings with their friends." On the business side, Shih pointed to the 200 million small businesses around the world that are already using Meta services and platforms. "They're using WhatsApp, they're using Facebook, they're using Instagram, both to acquire customers, but also engage and deepen each of those relationships. Very soon, each of those businesses are going to have these AIs that can represent them and help automate redundant tasks, help speak in their voice, help them find more customers and provide almost like a concierge service to every single one of their customers, 24/7."
Desktops (Apple)

ChatGPT On macOS Can Now Directly Edit Code (techcrunch.com) 19

OpenAI's ChatGPT app for macOS now directly edits code in tools like Xcode, VS Code, and JetBrains. "Users can optionally turn on an 'auto-apply' mode so ChatGPT can make edits without the need for additional clicks," adds TechCrunch. The feature is available now for ChatGPT Plus, Pro, and Team users, and will expand to Enterprise, Edu, and free users next week. Windows support is coming "soon." From the report: Direct code editing builds on OpenAI's "work with apps" ChatGPT capability, which the company launched in beta in November 2024. "Work with apps" allows the ChatGPT app for macOS to read code in a handful of dev-focused coding environments, minimizing the need to copy and paste code into ChatGPT. With the ability to directly edit code, ChatGPT now competes more directly with popular AI coding tools like Cursor and GitHub Copilot. OpenAI reportedly has ambitions to launch a dedicated product to support software engineering in the months ahead.
AI

A Quarter of Startups in YC's Current Cohort Have Codebases That Are Almost Entirely AI-Generated (techcrunch.com) 86

A quarter of startups in Y Combinator's Winter 2025 batch have 95% of their codebases generated by AI, YC managing partner Jared Friedman said. "Every one of these people is highly technical, completely capable of building their own products from scratch. A year ago, they would have built their product from scratch -- but now 95% of it is built by an AI," Friedman said.

YC CEO Garry Tan warned that AI-generated code may face challenges at scale and developers need classical coding skills to sustain products. He predicted: "This isn't a fad. This is the dominant way to code."
AI

Eric Schmidt Argues Against a 'Manhattan Project for AGI' (techcrunch.com) 63

In a policy paper, former Google CEO Eric Schmidt, Scale AI CEO Alexandr Wang, and Center for AI Safety Director Dan Hendrycks said that the U.S. should not pursue a Manhattan Project-style push to develop AI systems with "superhuman" intelligence, also known as AGI. From a report: The paper, titled "Superintelligence Strategy," asserts that an aggressive bid by the U.S. to exclusively control superintelligent AI systems could prompt fierce retaliation from China, potentially in the form of a cyberattack, which could destabilize international relations.

"[A] Manhattan Project [for AGI] assumes that rivals will acquiesce to an enduring imbalance or omnicide rather than move to prevent it," the co-authors write. "What begins as a push for a superweapon and global control risks prompting hostile countermeasures and escalating tensions, thereby undermining the very stability the strategy purports to secure."

AI

Goldman Sachs: Why AI Spending Is Not Boosting GDP 63

Goldman Sachs, in a research note Thursday (the note isn't publicly posted): Annualized revenue for public companies exposed to the build-out of AI infrastructure increased by over $340 billion from 2022 through 2024Q4 (and is projected to increase by almost $580 billion by end-2025). In contrast, annualized real investment in AI-related categories in the US GDP accounts has only risen by $42 billion over the same period. This sharp divergence has prompted questions from investors about why US GDP is not receiving a larger boost from AI.

A large share of the nominal revenue increase reported by public companies reflects cost inflation (particularly for semiconductors) and foreign revenue, neither of which should boost real US GDP. Indeed, we find that margin expansion ($30 billion) and increased revenue from other countries ($130 billion) account for around half of the publicly reported AI spending surge.

That said, the BEA's (Bureau of Economic Analysis) methodology potentially understates the impact of AI-related investment on real GDP by around $100 billion. Manufacturing shipments and net imports imply that US semiconductor supply has increased by over $35 billion since 2022, but the BEA records semiconductor purchases as intermediate inputs rather than investment (since semiconductors have historically been embedded in products that are later resold) and therefore excludes them from GDP. Cloud services used to train and support AI models are similarly mostly recorded as intermediate inputs.

Combined, we find that these explanations can explain most of the AI investment discrepancy, with only $50 billion unexplained. Looking ahead, we see more scope for AI-related investment to provide a moderate boost to real US GDP in 2025 since AI investment should broaden to categories like data centers, servers and networking hardware, and utilities that will likely be captured as real investment. However, we expect the bulk of investment in semiconductors and cloud computing will remain unmeasured barring changes to US national account methodology.
AI

Amazon Tests AI Dubbing on Prime Video Movies, Series (aboutamazon.com) 42

Amazon has launched a pilot program testing "AI-aided dubbing" for select content on Prime Video, offering translations between English and Latin American Spanish for 12 licensed movies and series including "El Cid: La Leyenda," "Mi Mama Lora" and "Long Lost." The company describes a hybrid approach where "localization professionals collaborate with AI," suggesting automated dubbing receives professional editing for accuracy. The initiative, the company said, aims to increase content accessibility as streaming services expand globally.
Google

Google is Adding More AI Overviews and a New 'AI Mode' To Search (theverge.com) 33

Google announced Wednesday it is expanding its AI Overviews to more query types and users worldwide, including those not logged into Google accounts, while introducing a new "AI Mode" chatbot feature. AI Mode, which resembles competitors like Perplexity or ChatGPT Search, will initially be limited to Google One AI Premium subscribers who enable it through the Labs section of Search.

The feature delivers AI-generated answers with supporting links interspersed throughout, powered by Google's search index. "What we're finding from people who are using AI Overviews is that they're really bringing different kinds of questions to Google," said Robby Stein, VP of product on the Search team. "They're more complex questions, that may have been a little bit harder before." Google is also upgrading AI Overviews with its Gemini 2.0 model, which Stein says will improve responses for math, coding and reasoning-based queries.
AI

OpenAI Plots Charging $20,000 a Month For PhD-Level Agents (theinformation.com) 112

OpenAI is preparing to launch a tiered pricing structure for its AI agent products, with high-end research assistants potentially costing $20,000 per month, [alternative source] according to The Information. The AI startup, which already generates approximately $4 billion in annualized revenue from ChatGPT, plans three service levels: $2,000 monthly agents for "high-income knowledge workers," $10,000 monthly agents for software development, and $20,000 monthly PhD-level research agents. OpenAI has told some investors that agent products could eventually constitute 20-25% of company revenue, the report added.
AI

Turing Award Winners Sound Alarm on Hasty AI Deployment (ft.com) 10

Reinforcement learning pioneers Andrew Barto and Richard Sutton have warned against the unsafe deployment of AI systems [alternative source] after winning computing's prestigious $1 million Turing Award Wednesday. "Releasing software to millions of people without safeguards is not good engineering practice," said Barto, professor emeritus at the University of Massachusetts, comparing it to testing a bridge by having people use it.

Barto and Sutton developed reinforcement learning in the 1980s, inspired by psychological studies of human learning. The technique, which rewards AI systems for desired behaviors, has become fundamental to advances at OpenAI and Google. Sutton, a University of Alberta professor and former DeepMind researcher, dismissed tech companies' artificial general intelligence narrative as "hype."

Both laureates also criticized President Trump's proposed cuts to federal research funding, with Barto calling it "wrong and a tragedy" that would eliminate opportunities for exploratory research like their early work.
Medicine

World's First 'Synthetic Biological Intelligence' Runs On Living Human Cells 49

Australian company Cortical Labs has launched the CL1, the world's first commercial "biological computer" that merges human brain cells with silicon hardware to form adaptable, energy-efficient neural networks. New Atlas reports: Known as a Synthetic Biological Intelligence (SBI), Cortical's CL1 system was officially launched in Barcelona on March 2, 2025, and is expected to be a game-changer for science and medical research. The human-cell neural networks that form on the silicon "chip" are essentially an ever-evolving organic computer, and the engineers behind it say it learns so quickly and flexibly that it completely outpaces the silicon-based AI chips used to train existing large language models (LLMs) like ChatGPT.

"Today is the culmination of a vision that has powered Cortical Labs for almost six years," said Cortical founder and CEO Dr Hon Weng Chong. "We've enjoyed a series of critical breakthroughs in recent years, most notably our research in the journal Neuron, through which cultures were embedded in a simulated game-world, and were provided with electrophysiological stimulation and recording to mimic the arcade game Pong. However, our long-term mission has been to democratize this technology, making it accessible to researchers without specialized hardware and software. The CL1 is the realization of that mission." He added that while this is a groundbreaking step forward, the full extent of the SBI system won't be seen until it's in users' hands.

"We're offering 'Wetware-as-a-Service' (WaaS)," he added -- customers will be able to buy the CL-1 biocomputer outright, or simply buy time on the chips, accessing them remotely to work with the cultured cell technology via the cloud. "This platform will enable the millions of researchers, innovators and big-thinkers around the world to turn the CL1's potential into tangible, real-word impact. We'll provide the platform and support for them to invest in R&D and drive new breakthroughs and research." These remarkable brain-cell biocomputers could revolutionize everything from drug discovery and clinical testing to how robotic "intelligence" is built, allowing unlimited personalization depending on need. The CL1, which will be widely available in the second half of 2025, is an enormous achievement for Cortical -- and as New Atlas saw recently with a visit to the company's Melbourne headquarters -- the potential here is much more far-reaching than Pong. [...]

Slashdot Top Deals