Security

AI Hallucinations Lead To a New Cyber Threat: Slopsquatting 51

Researchers have uncovered a new supply chain attack called Slopsquatting, in which threat actors exploit hallucinated, non-existent package names generated by AI coding tools like GPT-4 and CodeLlama. These believable yet fake packages, representing almost 20% of the samples tested, can be registered by attackers to distribute malicious code. CSO Online reports: Slopsquatting, as researchers are calling it, is a term first coined by Seth Larson, a security developer-in-residence at the Python Software Foundation (PSF), for its resemblance to the typosquatting technique. Instead of relying on a user's mistake, as in typosquats, threat actors rely on an AI model's mistake. Of the packages recommended in test samples, a significant share -- 19.7% (205,000 packages) -- were found to be fakes. Open-source models -- like DeepSeek and WizardCoder -- hallucinated more frequently, at 21.7% on average, compared to commercial ones (5.2%) like GPT-4. Researchers found CodeLlama (hallucinating in over a third of outputs) to be the worst offender, and GPT-4 Turbo (just 3.59% hallucinations) to be the best performer.

These package hallucinations are particularly dangerous as they were found to be persistent, repetitive, and believable. When researchers reran 500 prompts that had previously produced hallucinated packages, 43% of hallucinations reappeared every time in 10 successive re-runs, with 58% of them appearing in more than one run. The study concluded that this persistence indicates "that the majority of hallucinations are not just random noise, but repeatable artifacts of how the models respond to certain prompts." This increases their value to attackers, it added. Additionally, these hallucinated package names were observed to be "semantically convincing." Thirty-eight percent of them had moderate string similarity to real packages, suggesting a similar naming structure. "Only 13% of hallucinations were simple off-by-one typos," Socket added.
The research can be found in a paper on arXiv.org (PDF).
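A practical mitigation is to verify that an AI-suggested dependency actually exists before installing it. Here's a minimal sketch against PyPI's public JSON API (the package names in the loop are placeholders; note that existence alone is no defense once attackers register hallucinated names, so checking release age and download history is a sensible extension):

```python
# Pre-install sanity check against the PyPI JSON API: reject package names
# that don't resolve, a cheap first guard against slopsquatted dependencies.
import requests

def package_exists(name: str) -> bool:
    resp = requests.get(f"https://pypi.org/pypi/{name}/json", timeout=10)
    return resp.status_code == 200

for pkg in ["requests", "definitely-hallucinated-pkg"]:  # placeholder names
    print(pkg, "->", "found" if package_exists(pkg) else "NOT on PyPI")
```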
Microsoft

Microsoft Implements Stricter Performance Management System With Two-Year Rehire Ban (businessinsider.com) 52

Microsoft is intensifying performance scrutiny through new policies that target underperforming employees, according to an internal email from Chief People Officer Amy Coleman. The company has introduced a formalized Performance Improvement Plan (PIP) system that gives struggling employees two options: accept improvement targets or exit the company with a Global Voluntary Separation Agreement.

The policy establishes a two-year rehire blackout period for employees who leave with low performance ratings (zero to 60% in Microsoft's 0-200 scale) or during a PIP process. These employees are also barred from internal transfers while still at the company.

Coming months after Microsoft terminated 2,000 underperformers without severance, the company is also developing AI-supported tools to help managers "prepare for constructive or challenging conversations" through interactive practice environments. "Our focus remains on enabling high performance to achieve our priorities spanning security, quality, and leading AI," Coleman wrote, emphasizing that these changes aim to create "a globally consistent and transparent experience" while fostering "accountability and growth."
AI

Cursor AI's Own Support Bot Hallucinated Its Usage Policy (theregister.com) 9

Cursor AI users recently encountered an ironic AI failure when the platform's support bot falsely claimed a non-existent login restriction policy. Co-founder Michael Truell apologized for the issue, clarified that no such policy exists, and attributed the mishap to AI hallucination and a session management bug. The Register reports: Users of the Cursor editor, designed to generate and fix source code in response to user prompts, have sometimes been booted from the software when trying to use the app in multiple sessions on different machines. Some folks who inquired about the inability to maintain multiple logins for the subscription service across different machines received a reply from the company's support email indicating this was expected behavior. But the person on the other end of that email wasn't a person at all, but an AI support bot. And it evidently made that policy up.

In an effort to placate annoyed users this week, Michael Truell, co-founder of Cursor creator Anysphere, published a note to Reddit to apologize for the snafu. "Hey! We have no such policy," he wrote. "You're of course free to use Cursor on multiple machines. Unfortunately, this is an incorrect response from a front-line AI support bot. We did roll out a change to improve the security of sessions, and we're investigating to see if it caused any problems with session invalidation." Truell added that Cursor provides an interface for viewing active sessions in its settings and apologized for the confusion.

In a post to the Hacker News discussion of the SNAFU, Truell again apologized and acknowledged that something had gone wrong. "We've already begun investigating, and some very early results: Any AI responses used for email support are now clearly labeled as such. We use AI-assisted responses as the first filter for email support." He said the developer who raised this issue had been refunded. The session logout issue, now fixed, appears to have been the result of a race condition that arises on slow connections and spawns unwanted sessions.
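Anysphere hasn't published the bug's details, but a classic check-then-act race is one plausible shape for how slow connections could spawn duplicate sessions. A toy sketch, purely illustrative:

```python
# Illustrative check-then-act race (hypothetical, not Cursor's actual code):
# two slow, concurrent logins each see "no session yet" and both create one.
import threading
import time

sessions = []  # shared session store

def login(user):
    if user not in sessions:   # check...
        time.sleep(0.1)        # slow-connection window between check and act
        sessions.append(user)  # ...then act

threads = [threading.Thread(target=login, args=("alice",)) for _ in range(2)]
for t in threads: t.start()
for t in threads: t.join()
print(sessions)  # frequently ['alice', 'alice'] -- an unwanted extra session
```

Serializing the check and the write (a lock, or a unique constraint in the session store) removes the window.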

Software

Over 100 Public Software Companies Getting 'Squeezed' by AI, Study Finds (businessinsider.com) 37

Over 100 mid-market software companies are caught in a dangerous "squeeze" between AI-native startups and tech giants, according to a new AlixPartners study released Monday. The consulting firm warns many face "threats to their survival over the next 24 months" as generative AI fundamentally reshapes enterprise software.

The squeeze reflects a dramatic shift: AI agents are evolving from mere assistants to becoming applications themselves, potentially rendering traditional SaaS architecture obsolete. High-growth companies in this sector plummeted from 57% in 2023 to 39% in 2024, with further decline expected. Customer stickiness is also deteriorating, with median net dollar retention falling from 120% in 2021 to 108% in Q3 2024.
Books

Should the Government Have Regulated the Early Internet - or Our Future AI? (hedgehogreview.com) 45

In February tech journalist Nicholas Carr published Superbloom: How Technologies of Connection Tear Us Apart.

A University of Virginia academic journal says the book "appraises the past and present" of information technology while issuing "a warning about its future." Specifically, Carr argues that the government ignored historical precedent by declining to regulate the early internet in the 1990s. But as he goes on to remind us, the early 1990s were also when the triumphalism of America's Cold War victory, combined with the utopianism of Silicon Valley, convinced a generation of decision-makers that "an unfettered market seemed the best guarantor of growth and prosperity" and "defending the public interest now meant little more than expanding consumer choice." So rather than try to anticipate the dangers and excesses of commercialized digital media, Congress gave it free rein in the Telecommunications Act of 1996, which, as Carr explains,

"...erased the legal and ethical distinction between interpersonal communication and broadcast communications that had governed media in the twentieth century. When Google introduced its Gmail service in 2004, it announced, with an almost imperial air of entitlement, that it would scan the contents of all messages and use the resulting data for any purpose it wanted. Our new mailman would read all our mail."

As for the social-media platforms, Section 230 of the Act shields them from liability for all but the most egregiously illegal content posted by users, while explicitly encouraging them to censor any user-generated content they deem offensive, "whether or not such material is constitutionally protected" (emphasis added). Needless to say, this bizarre abdication of responsibility has led to countless problems, including what one observer calls a "sociopathic rendition of human sociability." For Carr, this is old news, but he warns us once again that the compulsion "to inscribe ourselves moment by moment on the screen, to reimagine ourselves as streams of text and image...[fosters] a strange, needy sort of solipsism. We socialize more than ever, but we're also at a further remove from those we interact with."

Carr's book suggests "frictional design" to slow posting (and reposting) on social media might "encourage civil behavior" — but then decides it's too little, too late, because our current frictionless efficiency "has burrowed its way too deeply into society and the social mind."

Based on all of this, the article's author looks ahead to the next revolution — AI — and concludes "I do not think it wise to wait until these kindly bots are in place before deciding how effective they are. Better to roll them off the nearest cliff today..."
Space

Space Investor Sees Opportunities in Defense-Related Startups and AI-Driven Systems (yahoo.com) 12

Chad Anderson is the founder/managing partner of the early-stage VC Space Capital (and an investor in SpaceX, along with dozens of other space companies). Space Capital produces quarterly reports on the space economy, and he says today, unlike 2021, "the froth is gone. But so is the hype. What's left is a more grounded — and investable — space economy."

On Yahoo Finance he shares several of the report's insights — including the emergence of "investable opportunities across defense-oriented startups in space domain awareness, AI-driven command systems, and hardened infrastructure." The same geopolitical instability that's undermining public markets is driving national urgency around space resilience. China's simulated space "dogfights" prompted the US Department of Defense to double down on orbital supremacy, with the proposed "Golden Dome" missile shield potentially unleashing a new wave of federal spending...

Defense tech is on fire, but commercial location-based services and logistics are freezing over. Companies like Shield AI and Saronic raised monster rounds, while others are relying on bridge financings to stay afloat...

Q1 also saw a breakout quarter for geospatial artificial intelligence (GeoAI). Software developer Niantic launched a spatial computing platform. SkyWatch partnered with GIS software supplier Esri. Planet Labs collaborated with Anthropic. And Xona Space Systems inked a deal with Trimble to boost precision GPS. This is the next leg of the space economy, where massive volumes of satellite data are finally made useful through machine learning, semantic indexing, and real-time analytics.

Distribution-layer companies are doing more with less. They remain underfunded relative to infrastructure and applications but are quietly powering the most critical systems, such as resilient communications, battlefield networks, and edge-based geospatial analysis. Don't let the low round count fool you; innovation here is quietly outpacing capital.

The article includes several predictions, insights, and possible trends (going beyond the fact that defense spending "will carry the sector...")
  • "AI's integration into space (across geospatial intelligence, satellite communications, and sensor fusion) is not a novelty. It's a competitive necessity."
  • "Focusing solely on rockets and orbital assets misses where much of the innovation and disruption is occurring: the software-defined layers that sit atop the physical backbone..."
  • "For years, SpaceX faced little serious competition, but that's starting to change." [He cites Blue Origin's progress toward approval for launching U.S. military satellites, and how Rocket Lab and Stoke Space "have also joined the competition for lucrative government launch contracts." Even Relativity Space may make a comeback, with former GOogle CEO Eric Schmidt acquiring a controlling stake.]
  • "An infrastructure reset is coming. The imminent ramp-up of SpaceX's Starship could collapse the cost structure for the infrastructure layer. When that happens, legacy providers with fixed-cost-heavy business models will be at risk. Conversely, capital-light innovators in station design, logistics, and in-orbit servicing could suddenly be massively undervalued."

AI

Can You Run the Llama 2 LLM on DOS? (yeokhengmeng.com) 26

Slashdot reader yeokm1 is the Singapore-based embedded security researcher whose side projects include installing Linux on a 1993 PC and building a ChatGPT client for MS-DOS.

He's now sharing his latest project — installing Llama 2 on DOS: Conventional wisdom states that running LLMs locally will require computers with high-performance specifications, especially GPUs with lots of VRAM. But is this actually true?

Thanks to an open-source llama2.c project [originally created by Andrej Karpathy], I ported it to work so vintage machines running DOS can actually inference with Llama 2 LLM models. Of course there are severe limitations but the results will surprise you.

"Everything is open sourced with the executable available here," according to the blog post. (They even addressed an early "gotcha" with DOS filenames being limited to eight characters.)

"As expected, the more modern the system, the faster the inference speed..." it adds. "Still, I'm amazed what can still be accomplished with vintage systems."
AI

Famed AI Researcher Launches Controversial Startup to Replace All Human Workers Everywhere (techcrunch.com) 177

TechCrunch looks at Mechanize, an ambitious new startup "whose founder — and the non-profit AI research organization he founded called Epoch — is being skewered on X..." Mechanize was launched on Thursday via a post on X by its founder, famed AI researcher Tamay Besiroglu. The startup's goal, Besiroglu wrote, is "the full automation of all work" and "the full automation of the economy."

Does that mean Mechanize is working to replace every human worker with an AI agent bot? Essentially, yes. The startup wants to provide the data, evaluations, and digital environments to make worker automation of any job possible. Besiroglu even calculated Mechanize's total addressable market by aggregating all the wages humans are currently paid. "The market potential here is absurdly large: workers in the US are paid around $18 trillion per year in aggregate. For the entire world, the number is over three times greater, around $60 trillion per year," he wrote.

Besiroglu did, however, clarify to TechCrunch that "our immediate focus is indeed on white-collar work" rather than manual labor jobs that would require robotics...

Besiroglu argues to the naysayers that having agents do all the work will actually enrich humans, not impoverish them, through "explosive economic growth." He points to a paper he published on the topic. "Completely automating labor could generate vast abundance, much higher standards of living, and new goods and services that we can't even imagine today," he told TechCrunch.

TechCrunch wonders how jobless humans will afford goods — and whether wealth will simply concentrate around whoever owns the agents.

But they do concede that Besiroglu may be right that "If each human worker has a personal crew of agents which helps them produce more work, economic abundance could follow..."
AI

Open Source Advocate Argues DeepSeek is 'a Movement... It's Linux All Over Again' (infoworld.com) 33

Matt Asay answered questions from Slashdot readers in 2010 (as the then-COO of Canonical). He currently runs developer relations at MongoDB (after holding similar positions at AWS and Adobe).

This week he contributed an opinion piece to InfoWorld arguing that DeepSeek "may have originated in China, but it stopped being Chinese the minute it was released on Hugging Face with an accompanying paper detailing its development." Soon after, a range of developers, including the Beijing Academy of Artificial Intelligence (BAAI), scrambled to replicate DeepSeek's success, but this time as open source software. BAAI, for its part, launched OpenSeek, an ambitious effort to take DeepSeek's open-weight models and create a project that surpasses DeepSeek while uniting "the global open source communities to drive collaborative innovation in algorithms, data, and systems."

If that sounds cool to you, it didn't to the U.S. government, which promptly put BAAI on its "baddie" list. Someone needs to remind U.S. (and global) policymakers that no single country, company, or government can contain community-driven open source... DeepSeek didn't just have a moment. It's now very much a movement, one that will frustrate all efforts to contain it. DeepSeek, and the open source AI ecosystem surrounding it, has rapidly evolved from a brief snapshot of technological brilliance into something much bigger — and much harder to stop. Tens of thousands of developers, from seasoned researchers to passionate hobbyists, are now working on enhancing, tuning, and extending these open source models in ways no centralized entity could manage alone.

For example, it's perhaps not surprising that Hugging Face is actively attempting to reverse engineer and publicly disseminate DeepSeek's R1 model. Hugging Face, while important, is just one company, just one platform. But Hugging Face has attracted hundreds of thousands of developers who actively contribute to, adapt, and build on open source models, driving AI innovation at a speed and scale unmatched even by the most agile corporate labs.

Hugging Face by itself could be stopped. But the communities it enables and accelerates cannot. Through the influence of Hugging Face and many others, variants of DeepSeek models are already finding their way into a wide range of applications. Companies like Perplexity are embedding these powerful open source models into consumer-facing services, proving their real-world utility. This democratization of technology ensures that cutting-edge AI capabilities are no longer locked behind the walls of large corporations or elite government labs but are instead openly accessible, adaptable, and improvable by a global community.

"It's Linux all over again..." Asay writes at one point. "What started as the passion project of a lone developer quickly blossomed into an essential, foundational technology embraced by enterprises worldwide," winning out "precisely because it captivated developers who embraced its promise and contributed toward its potential."

We are witnessing a similar phenomenon with DeepSeek and the broader open source AI ecosystem, but this time it's happening much, much faster...

Organizations that cling to proprietary approaches (looking at you, OpenAI!) or attempt to exert control through restrictive policies (you again, OpenAI!) are not just swimming upstream — they're attempting to dam an ocean. (Yes, OpenAI has now started to talk up open source, but it's a long way from releasing a DeepSeek/OpenSeek equivalent on GitHub.)

AI

US Chipmakers Fear Ceding China's AI Market to Huawei After New Trump Restrictions (msn.com) 99

The Trump administration is "taking measures to restrict the sale of AI chips by Nvidia, Advanced Micro Devices and Intel," especially in China, reports the New York Times. But that's triggered a series of dominoes. "In the two days after the limits became public, shares of Nvidia, the world's leading AI chipmaker, fell 8.4%. AMD's shares dropped 7.4%, and Intel's were down 6.8%." (AMD expects up to $800 million in charges after the move, according to CNBC, while Nvidia said it would take a quarterly charge of about $5.5 billion.)

The Times notes hopeful remarks Thursday from Jensen Huang, CEO of Nvidia, during a meeting with the China Council for the Promotion of International Trade. "We're going to continue to make significant effort to optimize our products that are compliant within the regulations and continue to serve China's market." But America's chipmakers also have a greater fear, according to the article: "that their retreat could turn the Chinese tech giant Huawei into a global chip-making powerhouse." "For the U.S. semiconductor industry, China is gone," said Handel Jones, a semiconductor consultant at International Business Strategies, which advises electronics companies. He projects that Chinese companies will have a majority share of chips in every major category in China by 2030... Huang's message spoke to one of his biggest fears. For years, he has worried that Huawei, China's telecommunications giant, will become a major competitor in AI. He has warned U.S. officials that blocking U.S. companies from competing in China would accelerate Huawei's rise, said three people familiar with those meetings who spoke on the condition of anonymity.

If Huawei gains ground, Huang and others at Nvidia have painted a dark picture of a future in which China will use the company's chips to build AI data centers across the world for the Belt and Road Initiative, a strategic effort to increase Beijing's influence by paying for infrastructure projects around the world, a person familiar with the company's thinking said...

Nvidia's previous generation of chips performs about 40% better than Huawei's best product, said Gregory C. Allen, who has written about Huawei in his role as director of the Wadhwani AI Center at the Center for Strategic and International Studies. But that gap could dwindle if Huawei scoops up the business of its American rivals, Allen said. Nvidia was expected to make more than $16 billion in sales this year from the H20 in China before the restriction. Huawei could use that money to hire more experienced engineers and make higher-quality chips. Allen said the U.S. government's restrictions also could help Huawei bring on customers like DeepSeek, a leading Chinese AI startup. Working with those companies could help Huawei improve the software it develops to control its chips. Those kinds of tools have been one of Nvidia's strengths over the years.

TechRepublic identifies this key quote from an earlier article: "This kills NVIDIA's access to a key market, and they will lose traction in the country," Patrick Moorhead, a tech analyst with Moor Insights & Strategy, told The New York Times. He added that Chinese companies will buy from local rival Huawei instead.
AI

Could AI and Automation Find Better Treatments for Cancer - and Maybe Aging? (cnn.com) 28

CNN looks at "one field that's really benefitting" from the use of AI: "the discovery of new medicines".

The founder/CEO of London-based LabGenius says their automated robotic system can assemble "thousands of different DNA constructs, each of which encodes a completely unique therapeutic molecule that we'll then test in the lab. This is something that historically would've had to have been done by hand." In short, CNN says, their system lets them "design and conduct experiments, and learn from them in a circular process that creates molecular antibodies at a rate far faster than a human researcher."

While many cancer treatments have debilitating side effects, CNN notes that LabGenius "reengineers therapeutic molecules so they can selectively target just the diseased cells." But more importantly, their founder says they've now discovered "completely novel molecules with over 400x improvement in [cell] killing selectivity."

A senior lecturer at Imperial College London tells CNN that LabGenius seems to have created an efficient process with seamless connections, identifying a series of antibodies that look like they can target cancer cells very selectively, "that's as good as any results I've ever seen for this." (Although the final proof will be what happens when they test them on patients...) "And that's the next step for LabGenius," says CNN. "They aim to have their first therapeutics entering clinics in 2027."

Finally, CNN asks: if it succeeds, is there potential beyond cancer treatment? "If you take one step further," says the company's CEO/founder, "you could think about knocking out senescent cells or aging cells as a way to treat the underlying cause of aging."
Space

High School Student Discovers 1.5M New Astronomical Objects by Developing an AI Algorithm (smithsonianmag.com) 21

For combining machine learning with astronomy, high school senior Matteo Paz won $250,000 in the Regeneron Science Talent Search, reports Smithsonian magazine: The young scientist's tool processed 200 billion data entries from NASA's now-retired Near-Earth Object Wide-field Infrared Survey Explorer (NEOWISE) telescope. His model revealed 1.5 million previously unknown potential celestial bodies.... [H]e worked on an A.I. model that sorted through the raw data in search of tiny changes in infrared radiation, which could indicate the presence of variable objects.
Working with a mentor at the Planet Finder Academy at Caltech, Paz eventually flagged 1.5 million potential new objects, according to the article, including supernovas and black holes.
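The article doesn't spell out the model's internals, but the core signal it mines is simple to state: a variable object's brightness scatter exceeds what measurement noise alone explains. A toy sketch of that test:

```python
# Toy variability test (illustrative, not Paz's actual model): flag a light
# curve as a candidate variable when its scatter is too large to be noise.
import numpy as np

def is_variable(flux, flux_err, threshold=3.0):
    # Reduced chi-square of the data against a constant-brightness model
    mean = np.average(flux, weights=1 / flux_err**2)
    chi2 = np.sum(((flux - mean) / flux_err) ** 2) / (len(flux) - 1)
    return chi2 > threshold

flux = np.array([10.0, 10.2, 9.9, 14.8, 10.1])  # one flaring epoch
err = np.full_like(flux, 0.2)                   # photometric uncertainty
print(is_variable(flux, err))  # -> True
```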

And that mentor says other Caltech researchers are using Paz's catalog of potential variable objects to study binary star systems.

Thanks to long-time Slashdot reader schwit1 for sharing the article.
AI

As Russia and China 'Seed Chatbots With Lies', Any Bad Actor Could Game AI the Same Way (detroitnews.com) 61

"Russia is automating the spread of false information to fool AI chatbots," reports the Washington Post. (When researchers checked 10 chatbots, a third of the responses repeated false pro-Russia messaging.)

The Post argues that this tactic offers "a playbook to other bad actors on how to game AI to push content meant to inflame, influence and obfuscate instead of inform," and calls it "a fundamental weakness of the AI industry." Chatbot answers depend on the data fed into them. A guiding principle is that the more the chatbots read, the more informed their answers will be, which is why the industry is ravenous for content. But mass quantities of well-aimed chaff can skew the answers on specific topics. For Russia, that is the war in Ukraine. But for a politician, it could be an opponent; for a commercial firm, it could be a competitor. "Most chatbots struggle with disinformation," said Giada Pistilli, principal ethicist at open-source AI platform Hugging Face. "They have basic safeguards against harmful content but can't reliably spot sophisticated propaganda, [and] the problem gets worse with search-augmented systems that prioritize recent information."

Early commercial attempts to manipulate chat results also are gathering steam, with some of the same digital marketers who once offered search engine optimization — or SEO — for higher Google rankings now trying to pump up mentions by AI chatbots through "generative engine optimization" — or GEO.

Our current situation "plays into the hands of those with the most means and the most to gain: for now, experts say, that is national governments with expertise in spreading propaganda." Russia and, to a lesser extent, China have been exploiting that advantage by flooding the zone with fables. But anyone could do the same, burning up far fewer resources than previous troll farm operations... In a twist that befuddled researchers for a year, almost no human beings visit the sites, which are hard to browse or search. Instead, their content is aimed at crawlers, the software programs that scour the web and bring back content for search engines and large language models. While those AI ventures are trained on a variety of datasets, an increasing number are offering chatbots that search the current web. Those are more likely to pick up something false if it is recent, and even more so if hundreds of pages on the web are saying much the same thing...

The gambit is even more effective because the Russian operation managed to get links to the Pravda network stories edited into Wikipedia pages and public Facebook group postings, probably with the help of human contractors. Many AI companies give special weight to Facebook and especially Wikipedia as accurate sources. (Wikipedia said this month that its bandwidth costs have soared 50 percent in just over a year, mostly because of AI crawlers....) Last month, other researchers set out to see whether the gambit was working. Finnish company Check First scoured Wikipedia and turned up nearly 2,000 hyperlinks on pages in 44 languages that pointed to 162 Pravda websites. It also found that some false information promoted by Pravda showed up in chatbot answers.
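Check First hasn't published its tooling, but the audit it describes is straightforward to picture: fetch pages and count outbound links that hit a watchlist of domains. A minimal sketch (the domain and page lists below are placeholders):

```python
# Minimal link audit (illustrative): count anchors on given pages whose
# href points at any domain on a watchlist.
import requests
from bs4 import BeautifulSoup

watchlist = ["example-pravda-domain.com"]          # placeholder domains
pages = ["https://en.wikipedia.org/wiki/Example"]  # placeholder pages

hits = []
for url in pages:
    html = requests.get(url, timeout=30).text
    for a in BeautifulSoup(html, "html.parser").find_all("a", href=True):
        if any(domain in a["href"] for domain in watchlist):
            hits.append((url, a["href"]))

print(f"{len(hits)} watchlisted links found")
```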

"They do even better in such places as China," the article points out, "where traditional media is more tightly controlled and there are fewer sources for the bots." (The nonprofit American Sunlight Project calls the process "LLM grooming".)

The article quotes a top Kremlin propagandist as bragging in January that "we can actually change worldwide AI."
Robotics

China Pits Humanoid Robots Against Humans In Half-Marathon (msn.com) 25

An anonymous reader quotes a report from Reuters: Twenty-one humanoid robots joined thousands of runners at the Yizhuang half-marathon in Beijing on Saturday, the first time these machines have raced alongside humans over a 21-km (13-mile) course. The robots from Chinese manufacturers such as DroidVP and Noetix Robotics came in all shapes and sizes, some shorter than 120 cm (3.9 ft), others as tall as 1.8 m (5.9 ft). One company boasted that its robot looked almost human, with feminine features and the ability to wink and smile.

Some firms tested their robots for weeks before the race. Beijing officials have described the event as more akin to a race car competition, given the need for engineering and navigation teams. "The robots are running very well, very stable ... I feel I'm witnessing the evolution of robots and AI," said spectator He Sishu, who works in artificial intelligence. The robots were accompanied by human trainers, some of whom had to physically support the machines during the race.

A few of the robots wore running shoes, with one donning boxing gloves and another wearing a red headband with the words "Bound to Win" in Chinese. The winning robot was Tiangong Ultra, from the Beijing Innovation Center of Human Robotics, with a time of 2 hours and 40 minutes. The men's winner of the race had a time of 1 hour and 2 minutes. [...] Some robots, like Tiangong Ultra, completed the race, while others struggled from the beginning. One robot fell at the starting line and lay flat for a few minutes before getting up and taking off. One crashed into a railing after running a few metres, causing its human operator to fall over.
You can watch a recording of the race in its entirety on YouTube.
Data Storage

China Develops Flash Memory 10,000x Faster With 400-Picosecond Speed (interestingengineering.com) 91

Longtime Slashdot reader hackingbear shares a report from Interesting Engineering: A research team at Fudan University in Shanghai, China has built the fastest semiconductor storage device ever reported, a nonvolatile flash memory dubbed "PoX" that programs a single bit in 400 picoseconds (0.0000000004 s) -- roughly 2.5 billion operations per second. Conventional static and dynamic RAM (SRAM, DRAM) write data in 1-10 nanoseconds but lose everything when power is cut, while current flash chips typically need microseconds to milliseconds per write -- far too slow for modern AI accelerators that shunt terabytes of parameters in real time.
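(A quick sanity check on those figures: one bit per 400 picoseconds is 1 / (4 x 10^-10 s) = 2.5 x 10^9 writes per second, and against a conventional flash write in the low microseconds, e.g. 4 us / 400 ps = 10,000, which matches the headline speedup.)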

The Fudan group, led by Prof. Zhou Peng at the State Key Laboratory of Integrated Chips and Systems, re-engineered flash physics by replacing silicon channels with two-dimensional Dirac graphene and exploiting its ballistic charge transport. Combining ultralow energy with picosecond write speeds could eliminate separate high-speed SRAM caches and remove the longstanding memory bottleneck in AI inference and training hardware, where data shuttling, not arithmetic, now dominates power budgets. The team [which is now scaling the cell architecture and pursuing array-level demonstrations] did not disclose endurance figures or fabrication yield, but the graphene channel suggests compatibility with existing 2D-material processes that global fabs are already exploring.
The result is published in the journal Nature.
Music

A Musician's Brain Matter Is Still Making Music Three Years After His Death (popularmechanics.com) 29

An anonymous reader quotes a report from Popular Mechanics: American composer Alvin Lucier was well-known for his experimental works that tested the boundaries of music and art. A longtime professor at Wesleyan University (before retiring in 2011), Alvin passed away in 2021 at the age of 90. However, that wasn't the end of his lifelong musical odyssey. Earlier this month, at the Art Gallery of Western Australia, a new art installation titled Revivification used Lucier's "brain matter" -- hooked up to an electrode mesh connected to twenty large brass plates -- to create electrical signals that triggered a mallet to strike the varying plates, creating a kind of post-mortem musical piece. Conceptualized in collaboration with Lucier himself before his death, the artists solicited the help of researchers from Harvard Medical School, who grew a mini-brain from Lucier's white blood cells. The team created stem cells from these white blood cells, and due to their pluripotency, the cells developed into cerebral organoids somewhat similar to developing human brains. "At a time when generative AI is calling into question human agency, this project explores the challenges of locating creativity and artistic originality," the team behind Revivification told The Art Newspaper. "Revivification is an attempt to shine light on the sometimes dark possibilities of extending a person's presence beyond the seemed finality of death."

"The central question we want people to ask is: could there be a filament of memory that persists through this biological transformation? Can Lucier's creative essence persist beyond his death?" the team said.
AI

OpenAI Puzzled as New Models Show Rising Hallucination Rates 98

OpenAI's latest reasoning models, o3 and o4-mini, hallucinate more frequently than the company's previous AI systems, according to both internal testing and third-party research. On OpenAI's PersonQA benchmark, o3 hallucinated 33% of the time -- double the rate of older models o1 (16%) and o3-mini (14.8%). The o4-mini performed even worse, hallucinating 48% of the time. Nonprofit AI lab Transluce discovered o3 fabricating processes it claimed to use, including running code on a 2021 MacBook Pro "outside of ChatGPT." Stanford adjunct professor Kian Katanforoosh noted his team found o3 frequently generates broken website links.
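For context on what a figure like "33%" measures: a PersonQA-style benchmark boils down to grading each model answer against known facts and reporting the failing fraction. A toy sketch (OpenAI's actual grading pipeline isn't public; the exact-match judge below stands in for a real fact checker):

```python
# Toy hallucination-rate tally: an answer counts as a hallucination when the
# grading function says it isn't supported by the gold reference.
def hallucination_rate(answers, gold, is_supported):
    misses = sum(1 for a, g in zip(answers, gold) if not is_supported(a, g))
    return misses / len(answers)

answers = ["Paris", "Berlin"]  # model outputs (toy data)
gold = ["Paris", "Madrid"]     # reference facts (toy data)
rate = hallucination_rate(answers, gold, lambda a, g: a == g)
print(f"{rate:.0%}")  # -> 50%
```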

OpenAI says in its technical report that "more research is needed" to understand why hallucinations worsen as reasoning models scale up.
Movies

Netflix Revenue Rises To $10.5 Billion Following Price Hike (theverge.com) 15

Netflix's Q1 revenue rose to $10.5 billion, a 13% increase from last year, while net income grew to $2.9 billion. The company says it expects more growth in the coming months when it sees "the full quarter benefit from recent price changes and continued growth in membership and advertising revenue." The Verge reports: Netflix raised the prices across most of its plans in January, with its premium plan hitting $24.99 per month. It also increased the price of its Extra Member option -- its solution to password sharing -- to $8.99 per month. Though Netflix already rolled out the increase in the US, UK, and Argentina, the streamer now plans to do the same in France. This is the first quarter that Netflix didn't reveal how many subscribers it gained or lost. It decided to only report "major subscriber milestones" last year, as other streams of revenue, like advertising, continue to grow. Netflix last reported having 300 million global subscribers in January.

During an earnings call on Thursday, Netflix co-CEO Greg Peters said the company expects to "roughly double" advertising revenue in 2025. The company launched its own advertising technology platform earlier this month. There are some changes coming to Netflix, too, as Peters confirmed that its homepage redesign for its TV app will roll out "later this year." He also hinted at adding an "interactive" search feature using "generative technologies," which sounds a lot like the AI feature Bloomberg reported on last week.
Further reading: Netflix CEO Counters Cameron's AI Cost-Cutting Vision: 'Make Movies 10% Better'
AI

Study Finds 50% of Workers Use Unapproved AI Tools 18

An anonymous reader quotes a report from SecurityWeek: An October 2024 study by Software AG suggests that half of all employees are Shadow AI users, and most of them wouldn't stop even if it was banned. The problem is the ease of access to AI tools, and a work environment that increasingly advocates the use of AI to improve corporate efficiency. It is little wonder that employees seek their own AI tools to improve their personal efficiency and maximize the potential for promotion. It is frictionless, says Michael Marriott, VP of marketing at Harmonic Security. 'Using AI at work feels like second nature for many knowledge workers now. Whether it's summarizing meeting notes, drafting customer emails, exploring code, or creating content, employees are moving fast.' If the official tools aren't easy to access or if they feel too locked down, they'll use whatever's available, which is often via an open tab on their browser.

There is also almost never any malicious intent (absent, perhaps, the mistaken employment of rogue North Korean IT workers); merely a desire to do and be better. If this involves using unsanctioned AI tools, employees will likely not disclose their actions. The reasons may be complex but combine elements of a reluctance to admit that their efficiency is AI assisted rather than natural, and knowledge that use of personal shadow AI might be discouraged. The result is that enterprises often have little knowledge of the extent of shadow AI use or the risks it may present.
According to an analysis from Harmonic, ChatGPT is the dominant gen-AI model used by employees, with 45% of data prompts originating from personal accounts (such as Gmail). Image files accounted for 68.3%. The report also notes that 7% of employees were using Chinese AI models like DeepSeek, Baidu Chat and Qwen.

"Overall, there has been a slight reduction in sensitive prompt frequency from Q4 2024 (down from 8.5% to 6.7% in Q1 2025)," reports SecurityWeek. "However, there has been a shift in the risk categories that are potentially exposed. Customer data (down from 45.8% to 27.8%), employee data (from 26.8% to 14.3%) and security (6.9% to 2.1%) have all reduced. Conversely, legal and financial data (up from 14.9% to 30.8%) and sensitive code (5.6% to 10.1%) have both increased. PII is a new category introduced in Q1 2025 and was tracked at 14.9%."
AI

Actors Who Sold AI Avatars Stuck In Black Mirror-Esque Dystopia (arstechnica.com) 16

Some actors who sold their likenesses to AI video companies like Synthesia now regret the decision, after finding their digital avatars used in misleading, embarrassing, or politically charged content. Ars Technica reports: Among them is a 29-year-old New York-based actor, Adam Coy, who licensed rights to his face and voice to a company called MCM for one year for $1,000 without thinking, "am I crossing a line by doing this?" His partner's mother later found videos where he appeared as a doomsayer predicting disasters, he told the AFP. South Korean actor Simon Lee's AI likeness was similarly used to spook naive Internet users but in a potentially more harmful way. He told the AFP that he was "stunned" to find his AI avatar promoting "questionable health cures on TikTok and Instagram," feeling ashamed to have his face linked to obvious scams. [...]

Even a company publicly committed to ethically developing AI avatars and preventing their use in harmful content like Synthesia can't guarantee that its content moderation will catch everything. A British actor, Connor Yeates, told the AFP that his video was "used to promote Ibrahim Traore, the president of Burkina Faso who took power in a coup in 2022" in violation of Synthesia's terms. [...] Yeates was paid about $5,000 for a three-year contract with Synthesia that he signed simply because he doesn't "have rich parents and needed the money." But he likely couldn't have foreseen his face being used for propaganda, as even Synthesia didn't anticipate that outcome.

Others may not like their AI avatar videos but consider the financial reward high enough to make up for the sting. Coy confirmed that money motivated his decision, and while he found it "surreal" to be depicted as a con artist selling a dystopian future, that didn't stop him from concluding that "it's decent money for little work." Potentially improving the climate for actors, Synthesia is forming a talent program that it claims will give actors a voice in decision-making about AI avatars. "By involving actors in decision-making processes, we aim to create a culture of mutual respect and continuous improvement," Synthesia's blog said.
