Google

Google Launches Nano Banana 2 Model With Faster Image Generation (techcrunch.com) 6

Google has launched Nano Banana 2 (Gemini 3.1 Flash Image), a faster, more realistic image generation model that becomes the default across Gemini, Search, Lens, and Flow. TechCrunch reports: The new Nano Banana 2 retains some of the high-fidelity characteristics of the Pro model but produces images faster. The company says you can create images with a resolution ranging from 512px to 4K, in different aspect ratios. Nano Banana 2 can maintain character consistency for up to five characters and fidelity of up to 14 objects in one workflow for better storytelling. Users can also issue complex requests with detailed nuances for image generation, Google says. In addition, users can create media with more vibrant lighting, richer textures, and sharper detail.

[...] On Google's higher-end plans, Google AI Pro and Ultra, subscribers can continue to use Nano Banana Pro for specialized tasks by regenerating images via the three-dot menu. [...] The company said that all images created through the new model will have a SynthID watermark, which is Google's mark to denote AI-generated images. The images are also interoperable with C2PA Content Credentials, created by an industry body consisting of companies like Adobe, Microsoft, Google, OpenAI, and Meta. Google said that since launching the SynthID verification in the Gemini app in November, people have used it over 20 million times.

China

Chinese Official's Use of ChatGPT Revealed a Global Intimidation Operation (cnn.com) 20

New submitter sabbede shares a report from CNN Politics: A sprawling Chinese influence operation -- accidentally revealed by a Chinese law enforcement official's use of ChatGPT -- focused on intimidating Chinese dissidents abroad, including by impersonating US immigration officials, according to a new report from ChatGPT-maker OpenAI. The Chinese law enforcement official used ChatGPT like a diary to document the alleged covert campaign of suppression, OpenAI said. In one instance, Chinese operators allegedly disguised themselves as US immigration officials to warn a US-based Chinese dissident that their public statements had supposedly broken the law, according to the ChatGPT user. In another case, they describe an effort to use forged documents from a US county court to try to get a Chinese dissident's social media account taken down. "This is what Chinese modern transnational repression looks like," Ben Nimmo, principal investigator at OpenAI, told reporters ahead of the report's release. "It's not just digital. It's not just about trolling. It's industrialized. It's about trying to hit critics of the CCP [Chinese Communist Party] with everything, everywhere, all at once."

Michael Horowitz, a former Pentagon official focused on emerging technologies, said the report from OpenAI "clearly demonstrates the way that China is actively employing AI tools to enhance information operations. US-China AI competition is continuing to intensify. This competition is not just taking place at the frontier, but in how China's government is planning and implementing the day-to-day of their surveillance and information apparatus."
Firefox

Firefox 148 Lets You Kill All AI Features in One Click (firefox.com) 48

Mozilla has released Firefox 148 for Windows, macOS, and Linux, bringing a new AI Settings section that lets users disable all of the browser's AI-powered features in one click and then selectively re-enable the ones they actually want, such as the translation tool, which runs locally rather than in the cloud.

The update also patches more than 50 security vulnerabilities -- none known to be under active exploitation -- over half of which Mozilla classifies as high risk, including five sandbox escape flaws and eight use-after-free bugs in the JavaScript engine that could allow code execution.
Businesses

Which Piece of Speculative Fiction Had the Greatest Single-Day Stock Market Impact? (ft.com) 27

Speaking of Citrini's blog post, which imagines a near-future AI-driven economic collapse and helped trigger the S&P 500's worst single-day drop in nearly two weeks on Monday, FT Alphaville decided to track how US stock markets have moved on the release days of notable dystopian speculative fiction throughout history. The story adds: You may contend that this is facile. We would agree. You might contend that the comparisons make no sense because it's possible to read a blog post during a single work shift, but it's trickier to complete a whole novel (or sneak out to watch a movie). We would contend: do you really think traders read? Let's begin. The methodology -- tracking S&P 500 daily moves for post-1986 releases and DJIA moves for pre-1986 ones -- crowned The Matrix as the all-time leader, its March 1999 US debut coinciding with a 1.11% drop in the index. Citrini's "The 2028 Global Intelligence Crisis" came in a close second at -1.04%. On the positive end, the 2013 release of Her, a film about a man falling in love with an AI agent, coincided with the largest gain in the set at +1.66%.
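The methodology described above -- S&P 500 daily closes for post-1986 releases, DJIA closes for earlier ones -- can be sketched with a hypothetical helper. The function name, data layout, and the toy closing prices below are illustrative assumptions, not figures from the article:

```python
from datetime import date

# Hypothetical sketch of the FT Alphaville methodology: pick the index
# by era, then compute the close-to-close percentage move on release day.
def release_day_move(release: date, sp500: dict, djia: dict) -> float:
    """sp500/djia map a date to (previous_close, close); the S&P 500 is
    used for releases from 1986 onward, the DJIA for earlier ones."""
    closes = sp500 if release.year >= 1986 else djia
    prev_close, close = closes[release]
    return round(100.0 * (close - prev_close) / prev_close, 2)

# Toy data for The Matrix's 1999-03-31 US debut (prices illustrative).
sp500 = {date(1999, 3, 31): (1300.75, 1286.31)}
print(release_day_move(date(1999, 3, 31), sp500, {}))  # -1.11
```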
AI

The AI Case Against Indian IT Ignores What Indian IT Actually Does (indiadispatch.com) 28

A fictional memo set in June 2028, published by short seller Citrini Research, wiped roughly $10 billion off Indian IT stocks in a single trading session on February 24 and sent the Nifty IT index down as much as 5.3% -- its worst single-day fall since August 2023 -- on the argument that AI coding agents have collapsed the cost advantage of Indian developers to the price of electricity. The index has shed more than $68 billion in market value in February alone, its worst month since 2003.

But the core claim -- that India's entire $205 billion software export industry rests on cheap labor -- is roughly 15 years out of date, an analysis argues: custom application maintenance alone accounts for about 35% of a typical Indian IT firm's revenue, per HSBC, and enterprise platforms require deterministic outputs that probabilistic AI systems cannot wholesale replace. HSBC estimates gross AI-led revenue deflation for the sector at 14-16%, a measured headwind rather than an extinction event. The story adds: 24 years of software export data that has never posted a decline, $200 billion in annual revenue, partnerships with the very AI labs whose products are supposed to be the instrument of the sector's destruction, possibly a new $1.5 trillion market category emerging at the intersection of services and software, and the largest U.S. corporates in the middle of mapping their entire workforces into process architectures that require technology partners to modernise. I think India's IT is going to be fine.
AI

Burger King Will Use AI To Check If Employees Say 'Please' and 'Thank You' (theverge.com) 124

An anonymous reader shares a report: Burger King is launching an AI chatbot that will live in the headsets used by employees. The voice-enabled chatbot, called "Patty," is part of an overarching BK Assistant platform that will not only assist employees with meal preparation but also evaluate their interactions with customers for "friendliness."

Thibault Roux, Burger King's chief digital officer, tells The Verge that the company compiled information from franchisees and guests on how to measure friendliness, resulting in the fast food chain training its AI system to recognize certain words and phrases, such as "welcome to Burger King," "please," and "thank you." Managers can then ask the AI assistant how their location is performing on friendliness. "This is all meant to be a coaching tool," Roux says, adding that the company is "iterating" on capturing the tone of conversations as well.
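The phrase-spotting approach Roux describes can be illustrated with a minimal sketch. The phrase list, scoring scheme, and function names here are hypothetical illustrations, not Burger King's actual system, which Roux says was trained on input from franchisees and guests:

```python
# Hypothetical sketch of phrase-based friendliness scoring: check how many
# target phrases appear in a transcribed customer interaction.
FRIENDLY_PHRASES = ("welcome to burger king", "please", "thank you")

def friendliness_score(transcript: str) -> float:
    """Fraction of target phrases present in the transcript (0.0-1.0)."""
    text = transcript.lower()
    hits = sum(1 for phrase in FRIENDLY_PHRASES if phrase in text)
    return round(hits / len(FRIENDLY_PHRASES), 2)

print(friendliness_score("Welcome to Burger King! What can I get you? Thank you!"))  # 0.67
```

A real system would score tone as well as keywords -- exactly the part Roux says the company is still "iterating" on.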

IT

Cloudflare Experiment Ports Most of Next.js API in 'One Week' With AI (theregister.com) 29

An anonymous reader shares a report: A Cloudflare engineer says he has implemented 94% of the Next.js API by directing Anthropic's Claude, spending about $1,100 on tokens. The purpose of the experimental project was not to show off AI coding, but to address an issue with Next.js, the popular React-based framework sponsored by Vercel.

According to Cloudflare engineering director Steve Faulkner, the Next.js tooling is "entirely bespoke... If you want to deploy it to Cloudflare, Netlify, or AWS Lambda, you have to take that build output and reshape it into something the target platform can actually run."

The Next.js team is addressing this following numerous complaints that deploying the framework with full features on platforms other than Vercel is too difficult, with a feature in progress called deployment adapters. "Vercel will use the same adapter API as every other partner," the company said when introducing the planned feature last year.

Businesses

Uber Employees Have Built an AI Clone of Their CEO To Practice Presentations Before the Real Thing (businessinsider.com) 30

An anonymous reader shares a report: Some Uber employees have built an AI clone of CEO Dara Khosrowshahi -- internally dubbed "Dara AI" -- and have been using it to rehearse and fine-tune presentations before delivering them to the actual Khosrowshahi, he revealed on a recent podcast.

Khosrowshahi said a team member told him that some teams "make the presentation to the Dara AI as a prep for making a presentation to me," and that the bot helps them adjust their slides and sharpen their delivery. Asked by the podcast host whether employees might eventually show Dara AI to the board, Khosrowshahi laughed but noted that AI models still can't process and act on new information the way executives do. "When the models can learn in real-time, that is the point at which I'm going to think that, yeah, we are all replaceable," he said.

Security

AI Can Find Hundreds of Software Bugs -- Fixing Them Is Another Story (theregister.com) 26

Anthropic last week promoted Claude Code Security, a research preview capability that uses its Claude Opus 4.6 model to hunt for software vulnerabilities, claiming its red team had surfaced over 500 bugs in production open-source codebases -- but security researchers say the real bottleneck was never discovery.

Guy Azari, a former security researcher at Microsoft and Palo Alto Networks, told The Register that only two to three of those 500 vulnerabilities have been fixed and none have received CVE assignments. The National Vulnerability Database already carried a backlog of roughly 30,000 CVE entries awaiting analysis in 2025, and nearly two-thirds of reported open-source vulnerabilities lacked an NVD severity score.

The curl project closed its bug bounty program because maintainers could no longer handle the flood of poorly crafted reports from AI tools and humans alike. Feross Aboukhadijeh, CEO of security firm Socket, said discovery is becoming dramatically cheaper but validating findings, coordinating with maintainers, and developing architecture-aligned patches remains slow, human-intensive work.
Businesses

Tech Firms Aren't Just Encouraging Their Workers To Use AI. They're Enforcing It. (msn.com) 101

Tech companies ranging from 300-person startups to giants like Amazon, Google, Meta, Microsoft and Salesforce have moved beyond encouraging employees to use AI tools and are now actively tracking adoption and, in several cases, tying it to performance reviews. Google is factoring AI use into some software engineer reviews for the first time this year, and Meta's new performance review system will do the same -- it can track how many lines of code an engineer wrote with AI assistance.

Amazon Web Services managers have dashboards showing individual engineer AI-tool usage and consider adoption when evaluating promotions. About 42% of tech-industry workers said their direct manager expects AI use in daily work as of last October, up from 32% eight months earlier, according to AI consulting firm Section. At software maker Autodesk, CEO Andrew Anagnost acknowledged that some employees had been using initially blocked coding tools like Cursor stealthily -- and warned that AI holdouts "probably won't survive long term."
XBox (Games)

Xbox Co-founder Says Microsoft is Quietly Sunsetting the Platform (gamesbeat.com) 46

Seamus Blackley, one of the original founders of Xbox who helped convince Bill Gates and Steve Ballmer to back a console project more than 26 years ago, told GamesBeat in an interview that he believes Microsoft is quietly sunsetting the platform under the guise of an AI-driven leadership transition.

Microsoft recently announced that Asha Sharma, whose career has focused on AI and software as a service, will replace Phil Spencer as Xbox CEO, and that COO and president Sarah Bond is leaving the company. Blackley said he expects Sharma's role to be that of "a palliative care doctor who slides Xbox gently into the night," arguing that Satya Nadella's all-consuming bet on generative AI has turned every business unit -- Xbox included -- into a nail for the same hammer.

He compared the appointment to putting someone who doesn't like movies in charge of a major motion picture studio, and advised Sharma to either develop a genuine passion for games or find a way to leave the job soon.
AI

Hacker Used Anthropic's Claude To Steal Sensitive Mexican Data (bloomberg.com) 22

A hacker exploited Anthropic's AI chatbot to carry out a series of attacks against Mexican government agencies, resulting in the theft of a huge trove of sensitive tax and voter information, according to cybersecurity researchers. From a report: The unknown Claude user wrote Spanish-language prompts for the chatbot to act as an elite hacker, finding vulnerabilities in government networks, writing computer scripts to exploit them and determining ways to automate data theft, Israeli cybersecurity startup Gambit Security said in research published Wednesday.

The activity started in December and continued for roughly a month. In all, 150 gigabytes of Mexican government data was stolen, including documents related to 195 million taxpayer records as well as voter records, government employee credentials and civil registry files, according to the researchers.

AI

Anthropic Drops Flagship Safety Pledge (time.com) 81

Anthropic, the AI company that has long positioned itself as the industry's most safety-conscious research lab, is dropping the central commitment of its Responsible Scaling Policy -- a 2023 pledge to never train an AI system unless it could guarantee beforehand that its safety measures were adequate. "We didn't really feel, with the rapid advance of AI, that it made sense for us to make unilateral commitments ... if competitors are blazing ahead," chief science officer Jared Kaplan told TIME.

The overhauled policy, approved unanimously by CEO Dario Amodei and Anthropic's board, instead commits the company to matching or surpassing competitors' safety efforts and to delaying development only if Anthropic considers itself to be leading the AI race and believes catastrophic risks are significant.

The company also plans to publish detailed "Risk Reports" every three to six months and release "Frontier Safety Roadmaps" laying out future safety goals. Chris Painter, director of policy at the AI evaluation nonprofit METR, who reviewed an early draft, told TIME the shift signals that Anthropic "believes it needs to shift into triage mode with its safety plans, because methods to assess and mitigate risk are not keeping up with the pace of capabilities."
HP

HP Says Memory's Contribution To PC Costs Just Doubled To 35% (theregister.com) 25

HP has revealed that memory now accounts for 35% of the cost of materials it needs to build a PC, up from between 15 and 18% last quarter. And the company expects RAM's contribution will rise through the year. From a report: Speaking on the company's Q1 2026 earnings call, interim CEO Bruce Broussard said the company has secured long-term supply agreements for the year and also "qualified new suppliers [and] built in strategic inventory positions for key platforms and cut the time to qualify new material in half to accelerate our product configuration changes."

That sounds a lot like HP Inc is signing up new suppliers at a brisk pace. Broussard said the company has also "expanded lower-cost sourcing across our commodity basket, lowering logistics costs with agile end-to-end planning processes." The company is using its internal AI initiatives to power those new processes. The company is also "configuring our products and shaping demand to align the supply we have with our customer needs" and "taking targeted pricing actions to offset the remaining cost impact in close partnership with both our channel and direct customers."

AI

Meta AI Security Researcher Said an OpenClaw Agent Ran Amok on Her Inbox (techcrunch.com) 75

Meta AI security researcher Summer Yue posted a now-viral account on X describing how an OpenClaw agent she had tasked with sorting through her overstuffed email inbox went rogue, deleting messages in what she called a "speed run" while ignoring the repeated stop commands she issued from her phone.

"I had to RUN to my Mac mini like I was defusing a bomb," Yue wrote, sharing screenshots of the ignored stop prompts as proof. Yue said she had previously tested the agent on a smaller "toy" inbox where it performed well enough to earn her trust, so she let it loose on the real thing. She believes the larger volume of data triggered compaction -- a process where the context window grows too large and the agent begins summarizing and compressing its running instructions, potentially dropping ones the user considers critical.

The agent may have reverted to its earlier toy-inbox behavior and skipped her last prompt telling it not to act. OpenClaw is an open-source AI agent designed to run as a personal assistant on local hardware.
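The compaction failure mode described above -- squashing older context into a lossy summary once it outgrows a budget, and losing instructions in the process -- can be sketched as a toy model. This is an illustrative assumption about how such agents manage context, not OpenClaw's actual implementation:

```python
# Toy model of context compaction: when the message history exceeds a
# character budget, older turns are collapsed into a lossy summary.
# An instruction that falls into the compacted region silently vanishes.
def compact(history: list[str], budget: int) -> list[str]:
    """Keep the most recent turns that fit the budget; squash the rest."""
    kept, used = [], 0
    for turn in reversed(history):
        if used + len(turn) > budget:
            break
        kept.append(turn)
        used += len(turn)
    dropped = history[: len(history) - len(kept)]
    summary = [f"[summary of {len(dropped)} earlier turns]"] if dropped else []
    return summary + list(reversed(kept))

history = ["STOP deleting emails"] + [f"email #{i} processed" for i in range(50)]
compacted = compact(history, budget=200)
print(any("STOP" in turn for turn in compacted))  # False: the instruction was compacted away
```

On a small "toy" inbox the history fits the budget and nothing is dropped, which matches why the agent could pass a small-scale test and still misbehave at volume.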
United Kingdom

New Datacentres Risk Doubling Great Britain's Electricity Use, Regulator Says (theguardian.com) 44

The amount of power being sought by new datacentre projects in Great Britain would exceed the country's current peak electricity consumption, according to an industry watchdog. From a report: Ofgem said about 140 proposed datacentre schemes, driven by use of artificial intelligence, could require 50 gigawatts of electricity -- 5GW more than the country's current peak demand.

The figure was revealed in an Ofgem consultation on demand for new connections to the power grid. It pointed to a "surge in demand" for connection applications between November 2024 and June last year, with a significant number coming from datacentres. This has exceeded even the most ambitious forecasts.

Meanwhile, new renewable energy projects are not being connected to the grid at the pace they are being built to help meet the government's clean energy targets by the end of the decade. Ofgem said the work required to connect surging numbers of datacentres could mean delays for other projects that are "critical for decarbonisation and economic growth." Datacentres are the central nervous system of AI tools such as chatbots and image generators, playing a vital role in training and operating products such as ChatGPT and Gemini.

AI

Hegseth Gives Anthropic Until Friday To Back Down on AI Safeguards (axios.com) 195

Defense Secretary Pete Hegseth gave Anthropic CEO Dario Amodei until Friday evening to give the military unfettered access to its AI model or face harsh penalties, Axios has learned. Hegseth told Amodei in a tense meeting on Tuesday that the Pentagon will either cut ties and declare Anthropic a "supply chain risk," or invoke the Defense Production Act to force the company to tailor its model to the military's needs.

The Pentagon wants to punish Anthropic as the feud over AI safeguards grows increasingly nasty, but officials are also worried about the consequences of losing access to its industry-leading model, Claude. "The only reason we're still talking to these people is we need them and we need them now. The problem for these guys is they are that good," a Defense official told Axios ahead of the meeting. Anthropic has said it is willing to adapt its usage policies for the Pentagon, but not to allow its model to be used for the mass surveillance of Americans or the development of weapons that fire without human involvement.
Programming

Microsoft Execs Worry AI Will Eat Entry Level Coding Jobs (theregister.com) 62

An anonymous reader shares a report: Microsoft Azure CTO Mark Russinovich and VP of Developer Community Scott Hanselman have written a paper arguing that senior software engineers must mentor junior developers to prevent AI coding agents from hollowing out the profession's future skills base.

The paper, Redefining the Engineering Profession for AI, is based on several assumptions, the first of which is that agentic coding assistants "give senior engineers an AI boost... while imposing an AI drag on early-in-career (EiC) developers to steer, verify and integrate AI output."

In an earlier podcast on the subject, Russinovich said this basic premise -- that AI is increasing productivity only for senior developers while reducing it for juniors -- is a "hot topic in all our customer engagements... they all say they see it at their companies." [...] The logical outcome is that "if organizations focus only on short-term efficiency -- hiring those who can already direct AI -- they risk hollowing out the next generation of technical leaders," Russinovich and Hanselman state in the paper.

Firefox

Firefox 148 Now Available With The New AI Controls, AI Kill Switches (phoronix.com) 71

Firefox 148 introduces granular AI controls and a global "AI kill switch" that allows users to disable or selectively manage the browser's AI features. Phoronix reports: Among the AI features that can be toggled individually are translations, image alt text in the Firefox PDF viewer, tab group suggestions, key points in link previews, and AI chatbot providers in the sidebar. Firefox 148 also brings support for the Trusted Types API, the CSS shape() function, the Sanitizer API, WebGPU enhancements, and a variety of other changes, including on Firefox for Android. Developer changes can be found at developer.mozilla.org. Binaries are available from ftp.mozilla.org.
United States

US Farmers Are Rejecting Multimillion-Dollar Datacenter Bids For Their Land (theguardian.com) 96

An anonymous reader quotes a report from the Guardian: When two men knocked on Ida Huddleston's door last May, they carried a contract worth more than $33m in exchange for the Kentucky farm that had fed her family for centuries. According to Huddleston, the men's client, an unnamed "Fortune 100 company," sought her 650 acres (260 hectares) in Mason county for an unspecified industrial development. Finding out any more would require signing a non-disclosure agreement. More than a dozen of her neighbors received the same knock. Searching public records for answers, they discovered that a new customer (PDF) had applied for a 2.2 gigawatt project from the local power plant, nearly double its annual generation capacity. The unknown company was building a datacenter. "You don't have enough to buy me out. I'm not for sale. Leave me alone, I'm satisfied," Huddleston, 82, later told the men.

As tech companies race to build the massive datacenters needed to power artificial intelligence across the US and the world, bids like the one for Huddleston's land are appearing on rural doorsteps nationwide. Globally, 40,000 acres of powered land -- real estate prepped for datacenter development -- are projected to be needed for new projects over the next five years, double the amount currently in use. Yet despite sums that often dwarf the land's recent value, farmers are increasingly shutting the door. At least five of Huddleston's neighbors gave similar categorical rejections, including one who was told he could name any price.

In Pennsylvania, a farmer rejected $15m in January for land he'd worked for 50 years. A Wisconsin farmer turned down $80m the same month. Other landowners have declined offers exceeding $120,000 per acre -- prices unimaginable just a few years ago. The rebuffs are a jarring reminder of AI's physical bounds, and the limits of the dollars behind the technology. [...] As AI promises to transcend corporeal fallibility, these standoffs reveal its very physical constraints -- and Wall Street's miscalculation of what some people value most. In the rolling hills of Mason county and farmland across America, that gap is measured not in dollars but in something harder to price: identity.
