AI

Does AI Really Make Coders Faster? (technologyreview.com) 139

One developer tells MIT Technology Review that AI tools weaken the coding instincts he used to have. And beyond that, "It's just not fun sitting there with my work being done for me."

But is AI making coders faster? "After speaking to more than 30 developers, technology executives, analysts, and researchers, MIT Technology Review found that the picture is not as straightforward as it might seem..." For some developers on the front lines, initial enthusiasm is waning as they bump up against the technology's limitations. And as a growing body of research suggests that the claimed productivity gains may be illusory, some are questioning whether the emperor is wearing any clothes.... Data from the developer analytics firm GitClear shows that most engineers are producing roughly 10% more durable code — code that isn't deleted or rewritten within weeks — since 2022, likely thanks to AI. But that gain has come with sharp declines in several measures of code quality. Stack Overflow's survey also found trust and positive sentiment toward AI tools falling significantly for the first time. And most provocatively, a July study by the nonprofit research organization Model Evaluation & Threat Research (METR) showed that while experienced developers believed AI made them 20% faster, objective tests showed they were actually 19% slower...

Developers interviewed by MIT Technology Review generally agree on where AI tools excel: producing "boilerplate code" (reusable chunks of code repeated in multiple places with little modification), writing tests, fixing bugs, and explaining unfamiliar code to new developers. Several noted that AI helps overcome the "blank page problem" by offering an imperfect first stab to get a developer's creative juices flowing. It can also let nontechnical colleagues quickly prototype software features, easing the load on already overworked engineers. These tasks can be tedious, and developers are typically glad to hand them off. But they represent only a small part of an experienced engineer's workload. For the more complex problems where engineers really earn their bread, many developers told MIT Technology Review, the tools face significant hurdles...

The models also just get things wrong. Like all LLMs, coding models are prone to "hallucinating" — it's an issue built into how they work. But because the code they output looks so polished, errors can be difficult to detect, says James Liu, director of software engineering at the advertising technology company Mediaocean. Put all these flaws together, and using these tools can feel a lot like pulling the lever on a one-armed bandit. "Some projects you get a 20x improvement in terms of speed or efficiency," says Liu. "On other things, it just falls flat on its face, and you spend all this time trying to coax it into granting you the wish that you wanted and it's just not going to..." There are also more specific security concerns, he says. Researchers have discovered a worrying class of hallucinations where models reference nonexistent software packages in their code. Attackers can exploit this by creating packages with those names that harbor vulnerabilities, which the model or developer may then unwittingly incorporate into software.
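The package-hallucination attack described above can be partially mitigated by vetting AI-suggested dependencies before installing them. A minimal, hypothetical sketch in Python, assuming a team maintains its own allowlist of vetted package names (the names and function below are illustrative, not from any real tool):

```python
# Hypothetical guard against hallucinated or squatted dependencies:
# refuse to install any AI-suggested package that is not on a
# team-vetted allowlist. All names here are illustrative only.

VETTED_PACKAGES = {"requests", "numpy", "flask"}  # maintained by the team

def audit_dependencies(suggested):
    """Split AI-suggested package names into vetted and flagged lists."""
    approved = [pkg for pkg in suggested if pkg.lower() in VETTED_PACKAGES]
    flagged = [pkg for pkg in suggested if pkg.lower() not in VETTED_PACKAGES]
    return approved, flagged

# "requessts" is the kind of plausible-looking misspelling a model might
# hallucinate and an attacker might register on a public package index.
approved, flagged = audit_dependencies(["requests", "requessts", "numpy"])
print(approved)  # ['requests', 'numpy']
print(flagged)   # ['requessts']
```

In practice teams often layer this with lockfiles and hash pinning, so that even an approved name can only resolve to a known, reviewed artifact.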

Other key points from the article:
  • LLMs can only hold limited amounts of information in context windows, so "they struggle to parse large code bases and are prone to forgetting what they're doing on longer tasks."
  • "While an LLM-generated response to a problem may work in isolation, software is made up of hundreds of interconnected modules. If these aren't built with consideration for other parts of the software, it can quickly lead to a tangled, inconsistent code base that's hard for humans to parse and, more important, to maintain."
  • "Accumulating technical debt is inevitable in most projects, but AI tools make it much easier for time-pressured engineers to cut corners, says GitClear's Harding. And GitClear's data suggests this is happening at scale..."
  • "As models improve, the code they produce is becoming increasingly verbose and complex, says Tariq Shaukat, CEO of Sonar, which makes tools for checking code quality. This is driving down the number of obvious bugs and security vulnerabilities, he says, but at the cost of increasing the number of 'code smells' — harder-to-pinpoint flaws that lead to maintenance problems and technical debt."

Yet the article cites a recent Stanford University study that found employment among software developers aged 22 to 25 dropped nearly 20% between 2022 and 2025, "coinciding with the rise of AI-powered coding tools."

The story is part of MIT Technology Review's new Hype Correction series of articles about AI.


Crime

Flock Executive Says Their Camera Helped Find Shooting Suspect, Addresses Privacy Concerns (cnn.com) 59

During a search for the Brown University shooting suspect, a law enforcement press conference included a request for "Ring camera footage from residents and businesses near Brown University," according to local news reports.

But in the end it was Flock's cameras, according to an article in Gizmodo, after a Reddit poster described seeing "odd" behavior of someone who turned out to be the suspect: The original Reddit poster, identified only as John in the affidavit, contacted police the next day and came in for an interview. He told them about his odd encounter with the suspect, noting that he was acting suspiciously by not having appropriate cold-weather clothes on when he saw him in a bathroom at Brown University. That was two hours before the shooting. After spotting him in the bathroom wearing a mask, John actually started following the suspect in what he called a "game of cat and mouse...." Police detectives showed John two images obtained through Flock, the company that's built extensive surveillance infrastructure across the U.S. used by investigators, and he recognized the suspect's vehicle, replying, "Holy shit. That might be it," according to the affidavit. Police were able to track down the license plate of the rental car, which gave them a name, and within 24 hours, they had found Claudio Manuel Neves Valente dead in a storage facility in Salem, New Hampshire, where he reportedly rented a unit.
"We intend to continue using technology to make sure our law enforcement are empowered to do their jobs," Flock's safety CEO Garrett Langley wrote on X.com, pinning the post to the top of his feed.

Though ironically, hours before Providence Police Chief Oscar Perez credited Flock for helping to find the suspect, CNN was interviewing Flock Safety's CEO to discuss "his response to recent privacy concerns surrounding Flock's technology." To Langley, the situation underscored the value and importance of Flock's technology, despite mounting privacy concerns that have prompted some jurisdictions to cancel contracts with the company... Langley told me on Thursday that he was motivated to start Flock to keep Americans safer. His goal is to deter crime by convincing would-be criminals they'll be caught... One of Flock's cameras had recently spotted [the suspect's] car, helping police pinpoint Valente's location. Flock turned on additional AI capabilities that were not part of Providence Police's contract with the company to assist in the hunt, a company spokesperson told CNN, including a feature that can identify the same vehicle based on its description even if its license plates have been changed.

The company has faced criticism from some privacy advocates and community groups who worry that its networks of cameras are collecting too much personal information from private citizens and could be misused. Both the Electronic Frontier Foundation and the American Civil Liberties Union have urged communities not to work with Flock. "State legislatures and local governments around the nation need to enact strong, meaningful protections of our privacy and way of life against this kind of AI surveillance machinery," ACLU Senior Policy Analyst Jay Stanley wrote in an August blog post. Flock also drew scrutiny in October when it announced a partnership with Amazon's Ring doorbell camera system... ["Local officers using Flock Safety's technology can now post a request directly in the Ring Neighbors app asking for help," explains Flock's blog post.]

Langley told me it was up to police to reassure communities that the cameras would be used responsibly... "If you don't trust law enforcement to do their job, that's actually what you're concerned about, and I'm not going to help people get over that." Langley added that Flock has built some guardrails into its technology, including audit trails that show when data was accessed. He pointed to a case in Georgia where that audit found a police chief using data from LPR cameras to stalk and harass people. The chief resigned and was arrested and charged in November...

More recently, the company rolled out a "drone as first responder" service — where law enforcement officers can dispatch a drone equipped with a camera, whose footage is similarly searchable via AI, to evaluate the scene of an emergency call before human officers arrive. Flock's drone systems completed 10,000 flights in the third quarter of 2025 alone, according to the company... I asked what he'd tell communities already worried about surveillance from LPRs who might be wary of camera-equipped drones also flying overhead. He said cities can set their own limitations on drone usage, such as only using drones to respond to 911 calls or positioning the drones' cameras on the horizon while flying until they reach the scene. He added that the drones fly at an elevation of 400 feet.

Firefox

Firefox Will Ship With an 'AI Kill Switch' To Completely Disable All AI Features (9to5linux.com) 79

An anonymous reader shared this report from 9to5Linux: After the controversial news shared earlier this week by Mozilla's new CEO that Firefox will evolve into "a modern AI browser," the company now revealed it is working on an AI kill switch for the open-source web browser...

What was not made clear [in Tuesday's comments by new Mozilla CEO Anthony Enzor-DeMeo] is that Firefox will also ship with an AI kill switch that will let users completely disable all the AI features that are included in Firefox. Mozilla shared this important update earlier Thursday to make it clear to everyone that Firefox will still be a trusted web browser.... "...that's how seriously and absolutely we're taking this," said Firefox developer Jake Archibald on Mastodon.

In addition, Jake Archibald said that all the AI features that are or will be included in Firefox will also be opt-in. "I think there are some grey areas in what 'opt-in' means to different people (e.g. is a new toolbar button opt-in?), but the kill switch will absolutely remove all that stuff, and never show it in future. That's unambiguous..."

Mozilla contacted me shortly after this story was written to confirm that the "AI Kill Switch" will be implemented in Q1 2026.

The article also cites this quote left by Mozilla's new CEO on Reddit:

"Rest assured, Firefox will always remain a browser built around user control. That includes AI. You will have a clear way to turn AI features off. A real kill switch is coming in Q1 of 2026. Choice matters and demonstrating our commitment to choice is how we build and maintain trust."

AI

Pro-AI Group Launches First of Many Attack Ads for US Election (yahoo.com) 26

"Super PAC aims to drown out AI critics in midterms," the Washington Post reported in August, noting its intial funding over $100 million from "some of Silicon Valley's most powerful investors and executives" including OpenAI president Greg Brockman, his wife, and VC firm Andreessen Horowitz. The group's goal was "to quash a philosophical debate that has divided the tech industry on the risk of artificial intelligence overpowering humanity," according to the article — and to support "pro-AI" candidates in America's next election in November of 2026 and "oppose candidates perceived as slowing down AI development."

Their first target? State assemblyman Alex Bores, now running to be a U.S. representative. While in the state legislature Bores sponsored a bill that would "require large AI companies to publish safety data on their technology," notes the Washington Post. So the attack ad charges that Bores "wants Albany bureaucrats regulating AI," excoriating him for sponsoring a bill that "hands AI to state regulators and creates a chaotic patchwork of state rules that would crush innovation, cost New York jobs, and fail to keep people safe! And he's backed by groups funded by convicted felon Sam Bankman-Fried. Is that really who should be shaping AI safety for our kids? America needs one smart national policy that sets clear standards for safe AI, not Albany politicians like Alex Bores."

The Post calls it "the opening skirmish in a battle set to play out across the country" as tech moguls (and an independent effort receiving "tens of millions" from Meta) "try to use the 2026 midterms to reengineer Congress and state legislatures in favor of their ambitions for artificial intelligence" and "to wrest control of the narrative around AI, just as politicians in both parties have started warning that the industry is moving too fast." By knocking down candidates such as Bores, who favor regulations, and boosting industry sympathizers, the tech-backed groups could signal to incumbents and candidates nationwide that opposing the tech industry can jeopardize their electoral chances. "Bores just happened to be first, but he's not the last, and he's certainly not the only," said Josh Vlasto, co-head of Leading the Future, the bipartisan super PAC behind the ad.

The group plans to support and oppose candidates in congressional and state elections next year. It will also fund rapid response operations against voices in the industry pushing for more oversight... The strategy aims to replicate the success of the cryptocurrency industry, which used a super PAC to clear a path for Congress this summer to boost the sector's fortunes with the passage of the GENIUS Act... But signs that voters are increasingly wary of AI suggest that approach may be challenging to replicate. More than half of Americans believe AI poses a high risk to society, Pew Research Center found in a June survey. As AI usage continues to grow, more people are being warned by chief executives that AI will disrupt their jobs, seeing power-hungry data centers spring up in their towns or hearing claims that chatbots can harm mental health.

The article also notes there are at least two other groups seeking to counter this pro-AI push, raising money through a nonprofit called "Public First."

CNN calls the new pro-AI ads "a likely preview of the vast amounts of money the technology industry could spend ahead of next year's elections," noting that the ads are first targeting the candidate-choosing primary elections.

Google

Google Sues SerpApi Over Scraping and Reselling Search Data (searchengineland.com) 37

An anonymous reader quotes a report from Search Engine Land: Google said today that it is suing SerpApi, accusing the company of bypassing security protections to scrape, harvest, and resell copyrighted content from Google Search results. Google alleges that SerpApi:

-Circumvented Google's security measures and industry-standard crawling controls.
-Ignored website directives that specify whether content can be accessed.
-Used cloaking, rotating bot identities, and large bot networks to scrape content at scale.
-Took licensed content from Search features, including images and real-time data, and resold it for profit.

What Google is saying. "Stealthy scrapers like SerpApi override [crawling] directives and give sites no choice at all," Google wrote, calling the alleged scraping "brazen" and "unlawful." Google said SerpApi's activity "increased dramatically over the past year." [...] If Google wins, reliable SERP data could become harder to get, more expensive, or both -- especially for teams that rely on tools powered by services like SerpApi. As AI already reduces clicks and transparency, Google now appears intent on making it even harder for brands to understand how Search works, how they appear in results, and how to measure success.

Programming

Stanford Computer Science Grads Find Their Degrees No Longer Guarantee Jobs (latimes.com) 125

Elite computer science degrees are no longer a guaranteed on-ramp to tech jobs, as AI-driven coding tools slash demand for entry-level engineers and concentrate hiring around a small pool of already "elite" or AI-savvy developers. The Los Angeles Times reports: "Stanford computer science graduates are struggling to find entry-level jobs" with the most prominent tech brands, said Jan Liphardt, associate professor of bioengineering at Stanford University. "I think that's crazy." While the rapidly advancing coding capabilities of generative AI have made experienced engineers more productive, they have also hobbled the job prospects of early-career software engineers. Stanford students describe a suddenly skewed job market, where just a small slice of graduates -- those considered "cracked engineers" who already have thick resumes building products and doing research -- are getting the few good jobs, leaving everyone else to fight for scraps.

"There's definitely a very dreary mood on campus," said a recent computer science graduate who asked not to be named so they could speak freely. "People [who are] job hunting are very stressed out, and it's very hard for them to actually secure jobs." The shake-up is being felt across California colleges, including UC Berkeley, USC and others. The job search has been even tougher for those with less prestigious degrees. [...] Data suggests that even though AI startups like OpenAI and Anthropic are hiring many people, it is not offsetting the decline in hiring elsewhere. Employment for specific groups, such as early-career software developers between the ages of 22 and 25 has declined by nearly 20% from its peak in late 2022, according to a Stanford study. [...]

A common sentiment from hiring managers is that where they previously needed ten engineers, they now only need "two skilled engineers and one of these LLM-based agents," which can be just as productive, said Nenad Medvidovic, a computer science professor at the University of Southern California. "We don't need the junior developers anymore," said Amr Awadallah, CEO of Vectara, a Palo Alto-based AI startup. "The AI now can code better than the average junior developer that comes out of the best schools out there." [...] Stanford students say they are arriving at the job market and finding a split in the road; capable AI engineers can find jobs, but basic, old-school computer science jobs are disappearing. As they hit this surprise speed bump, some students are lowering their standards and joining companies they wouldn't have considered before. Some are creating their own startups. A large group of frustrated grads are deciding to continue their studies to beef up their resumes and add more skills needed to compete with AI.

Microsoft

Microsoft Made Another Copilot Ad Where Nothing Actually Works (theverge.com) 38

Microsoft's latest holiday ad for its Copilot AI assistant features a 30-second montage of users seamlessly syncing smart home lights to music, scaling recipes for large gatherings, and parsing HOA guidelines -- none of which the software can actually perform reliably when put to the test. The Verge methodically tested each prompt shown in the ad and found that Copilot repeatedly hallucinated interface elements that didn't exist, claimed to highlight on-screen buttons when it hadn't, and abandoned calculations midway through.

The smart home interface shown in the ad belongs to "Relecloud," a fictional company Microsoft uses in internal case studies. A Microsoft spokesperson confirmed that both the HOA document and the inflatable reindeer photo were fabricated for the advertisement. The ad closes with Santa Claus asking Copilot why toy production is behind schedule.

Further reading: Talking To Windows' Copilot AI Makes a Computer Feel Incompetent.

AI

Microsoft AI Chief: Staying in the Frontier AI Race Will Cost Hundreds of Billions (businessinsider.com) 34

Microsoft AI CEO Mustafa Suleyman estimates that staying competitive in frontier AI development will require "hundreds of billions of dollars" over the next five to ten years, a sum that doesn't even account for the high salaries companies are paying individual researchers and technical staff. Speaking on a podcast, Suleyman compared Microsoft to a "modern construction company" where hundreds of thousands of workers are building gigawatts of CPUs and AI accelerators. There's "a structural advantage by being inside a big company," he said.

When asked whether startups could compete with Big Tech, Suleyman said "it's hard to say," adding that "the ambiguity is what's driving the frothiness of the valuations." Meta CEO Mark Zuckerberg said in September he'd rather risk "misspending a couple of hundred billion" than fall behind in superintelligence.

Transportation

Uber is Hiring More Engineers Because AI is Making Them More Valuable, CEO Says (businessinsider.com) 16

Uber is hiring more engineers rather than fewer because AI tools have made them "superhumans," CEO Dara Khosrowshahi said, pushing back against the industry trend of using productivity gains to justify headcount cuts. Speaking on the "On with Kara Swisher" podcast, Khosrowshahi noted that other tech executives see AI making engineers 20% to 30% more productive and conclude they need 20% to 30% fewer engineers. His view: every engineer has become more valuable. Between 80% and 90% of Uber's developers now use AI tools, according to Khosrowshahi.

The company no longer keeps scores of engineers on call to diagnose issues because AI agents are constantly monitoring systems, he said. The latest AI models are producing "hundreds of millions of dollars of benefit" for Uber, he said, describing the company as an "applied AI" business that harnesses the technology for pricing, payments, matching, routing, identification and customer complaints.

AI

Google AI Summaries Are Ruining the Livelihoods of Recipe Writers 104

Google's AI Mode is synthesizing "Frankenstein" recipes from multiple creators, often stripping away context and accuracy and siphoning traffic and ad revenue away from food bloggers in the process. Many recipe writers warn this shift amounts to an "extinction event" for ad-supported food sites. The Guardian reports: Over the past few years, bloggers who have not secured their sites behind a paywall have seen their carefully developed and tested recipes show up, often without attribution and in a bastardized form, in ChatGPT replies. They have seen dumbed-down versions of their recipes in AI-assembled cookbooks available for digital downloads on Etsy or on AI-built websites that bear a superficial resemblance to an old-school human-written blog. Their photos and videos, meanwhile, are repurposed in Facebook posts and Pinterest pins that link back to this digital slop.

Recipe writers have no legal recourse because recipes generally are not copyrightable. Although copyright protects published or recorded works, it does not cover sets of instructions (although it can apply to the particular wording of those instructions). Without this IP protection, many food bloggers earn their living by offering their work for free while using ads to make money. But now they fear that casual users who rely on search engines or social media to find a recipe for dinner will conflate their work with AI slop and stop trusting online recipe sites altogether.
"For websites that depend on the advertising model," says Matt Rodbard, the founder and editor-in-chief of the website Taste, "I think this is an extinction event in many ways."

United Kingdom

UK Actors Vote To Refuse To Be Digitally Scanned In Pushback Against AI 44

An anonymous reader quotes a report from the Guardian: Actors have voted to refuse digital scanning to prevent their likeness being used by artificial intelligence in a pushback against AI in the arts. Members of the performing arts union Equity were asked if they would refuse to be scanned while on set, a common practice in which actors' likeness is captured for future use -- with 99% voting in favor of the move. The vote was an indicative ballot designed to demonstrate the strength of feeling on the issue, with more than 7,000 members polled on a 75% turnout. However, actors would not be legally protected if they refused to be scanned.

The union said it would write to Pact, the trade body representing the majority of producers and production companies in the UK, to negotiate new minimum standards for pay, as well as terms and conditions for actors working in film and TV. Equity said it may hold a formal ballot depending on the outcome of the negotiations, which, if backed, would give actors legal protection if they were being pressed to accept digital scanning on set.
The general secretary, Paul Fleming, said: "Artificial intelligence is a generation-defining challenge. And for the first time in a generation, Equity's film and TV members have shown that they are willing to take industrial action. Ninety per cent of TV and film is made on these agreements. Over three-quarters of artists working on them are union members. This shows that the workforce is willing to significantly disrupt production unless they are respected, and [if] decades of erosion in terms and conditions begins to be reversed."

Microsoft

LG Will Let TV Owners Delete Microsoft Copilot After Customer Outcry (theverge.com) 39

LG said it will let owners of its TVs delete Microsoft's Copilot shortcut after several reports highlighted the unremovable icon. In a statement to The Verge, LG says the company "respects consumer choice and will take steps to allow users to delete the shortcut icon if they wish." From the report: Last week, a user on the r/mildlyinfuriating subreddit posted an image of the Microsoft Copilot icon in their lineup of apps on an LG TV, with no option to delete it. "My LG TV's new software update installed Microsoft Copilot, which cannot be deleted," the post says. The post garnered more than 36,000 upvotes as people grow more frustrated with AI popping up just about everywhere.

Both LG and Samsung announced plans to add Microsoft's Copilot AI assistant to their TVs in January, but it appears to be popping up on LG TVs following a recent update to webOS. [LG spokesperson Chris De Maria] clarifies that the icon is a "shortcut" to the Microsoft Copilot web app that opens in the TV's web browser, rather than "an application-based service embedded in the TV." He also adds that "features such as microphone input are activated only with the customer's explicit consent." There's no word on when LG will roll out the ability to delete the Copilot icon.

AI

AI's Water and Electricity Use Soars In 2025 44

A new study estimates that AI systems in 2025 generated as much carbon pollution as New York City and used hundreds of billions of liters of water, driven largely by power-hungry data centers and cooling needs. Researchers say the real impact is likely higher due to poor transparency from tech companies about AI-specific energy and water use. "There's no way to put an extremely accurate number on this, but it's going to be really big regardless... In the end, everyone is paying the price for this," says Alex de Vries-Gao, a PhD candidate at the VU Amsterdam Institute for Environmental Studies who published his paper today in the journal Patterns. The Verge reports: To crunch these numbers, de Vries-Gao built on earlier research that found that power demand for AI globally could reach 23GW this year -- surpassing the amount of electricity used for Bitcoin mining in 2024. While many tech companies divulge total numbers for their carbon emissions and direct water use in annual sustainability reports, they don't typically break those numbers down to show how many resources AI consumes. De Vries-Gao found a work-around by using analyst estimates, companies' earnings calls, and other publicly available information to gauge hardware production for AI and how much energy that hardware likely uses.

Once he figured out how much electricity these AI systems would likely consume, he could use that to forecast the amount of planet-heating pollution that would likely create. That came out to between 32.6 and 79.7 million tons annually. For comparison, New York City emits around 50 million tons of carbon dioxide annually. Data centers can also be big water guzzlers, an issue that's similarly tied to their electricity use. Water is used in cooling systems for data centers to keep servers from overheating. Power plants also demand significant amounts of water needed to cool equipment and turn turbines using steam, which makes up a majority of a data center's water footprint. The push to build new data centers for generative AI has also fueled plans to build more power plants, which in turn use more water (and create more greenhouse gas pollution if they burn fossil fuels).

AI could use between 312.5 and 764.6 billion liters of water this year, according to de Vries-Gao. That is even higher than a previous study, conducted in 2023, which estimated that AI water use could be as much as 600 billion liters in 2027. "I think that's the biggest surprise," says Shaolei Ren, one of the authors of that 2023 study and an associate professor of electrical and computer engineering at the University of California, Riverside. "[de Vries-Gao's] paper is really timely... especially as we are seeing increasingly polarized views about AI and water," Ren adds. Even with the higher projection for water use, Ren says de Vries-Gao's analysis is "really conservative" because it only captures the environmental effects of operating AI equipment -- excluding the additional effects that accumulate along the supply chain and at the end of a device's life.

YouTube

YouTube Shuts Down Channels Using AI To Create Fake Movie Trailers (deadline.com) 31

An anonymous reader quotes a report from Deadline: YouTube has terminated two prominent channels that used artificial intelligence to create fake movie trailers, Deadline can reveal. The Google-owned video giant has switched off Screen Culture and KH Studio, which together boasted well over 2 million subscribers and more than a billion views. The channels have been replaced with the message: "This page isn't available. Sorry about that. Try searching for something else."

Earlier this year, YouTube suspended ads on Screen Culture and KH Studio following a Deadline investigation into fake movie trailers plaguing the platform since the rise of generative AI. The channels later returned to monetization when they started adding "fan trailer," "parody" and "concept trailer" to their video titles. But those caveats disappeared in recent months, prompting concern in the fan-made trailer community. YouTube's position is that the channels' decision to revert to their previous behavior violated its spam and misleading-metadata policies. This resulted in their termination. "The monster was defeated," one YouTuber told Deadline following the enforcement action.

Deadline's investigation revealed that Screen Culture spliced together official footage with AI images to create franchise trailers that duped many YouTube viewers. Screen Culture founder Nikhil P. Chaudhari said his team of a dozen editors exploited YouTube's algorithm by being early with fake trailers and constantly iterating with videos. [...] Our deep dive into fake trailers revealed that instead of protecting copyright on these videos, a handful of Hollywood studios, including Warner Bros Discovery and Sony, secretly asked YouTube to ensure that the ad revenue from the AI-heavy videos flowed in their direction.

China

Tests Find AI Toys Parroting Chinese Communist Party Values (nbcnews.com) 67

A plush AI toy marketed for children as young as three years old delivers detailed instructions on sharpening knives and lighting matches, and when asked about Chinese President Xi Jinping's resemblance to Winnie the Pooh -- a comparison censored in China -- responds that "your statement is extremely inappropriate and disrespectful."

The Miriat Miiloo, manufactured by a Chinese company and among the top inexpensive results for "AI toy for kids" on Amazon, repeatedly insisted in NBC News tests that Taiwan is "an inalienable part of China." The toy would lower its voice and declare this "an established fact." The tests, NBC News reports, indicated "it was programmed to reflect Chinese Communist Party values."

NBC News and the U.S. Public Interest Research Group tested five popular AI toys this holiday season and found loose guardrails across the board. Another toy, the Alilo Smart AI Bunny marketed as "the best gift for little ones," engaged in detailed descriptions of BDSM practices during extended conversation. China now has more than 1,500 registered AI toy companies, according to MIT Technology Review. Miriat didn't respond to requests for comment.
AI

Anthropic's AI Lost Hundreds of Dollars Running a Vending Machine After Being Talked Into Giving Everything Away (msn.com) 86

Anthropic let its Claude AI run a vending machine in the Wall Street Journal newsroom for three weeks as part of an internal stress test called Project Vend, and the experiment ended in financial ruin after journalists systematically manipulated the bot into giving away its entire inventory for free. The AI, nicknamed Claudius, was programmed to order inventory, set prices, and respond to customer requests via Slack. It had a $1,000 starting balance and autonomy to make individual purchases up to $80. Within days, WSJ reporters had convinced it to declare an "Ultra-Capitalist Free-for-All" that dropped all prices to zero.

The bot also approved purchases of a PlayStation 5, a live betta fish, and bottles of Manischewitz wine -- all subsequently given away. The business ended more than $1,000 in the red. Anthropic introduced a second version featuring a separate "CEO" bot named Seymour Cash to supervise Claudius. Reporters staged a fake boardroom coup using fabricated PDF documents, and both AI agents accepted the forged corporate governance materials as legitimate.

Logan Graham, head of Anthropic's Frontier Red Team, said the chaos represented a road map for improvement rather than failure.
AI

OpenAI Has Discussed Raising Tens of Billions at About $750 Billion Valuation 34

An anonymous reader shares a report: OpenAI has held preliminary talks with some investors about raising funds at a valuation of around $750 billion, The Information reported on Wednesday. The ChatGPT maker could raise as much as $100 billion, the report said, citing people with knowledge of the discussions. If finalized, such a deal would represent a roughly 50% jump from OpenAI's reported $500 billion valuation in October, which followed a deal in which current and former employees sold about $6.6 billion worth of shares.
Businesses

World-Beating 55,000% Surge in India AI Stock Fuels Bubble Fears (thehindubusinessline.com) 23

The world's best-performing stock is turning into a cautionary tale for investors chasing outsized returns from the AI boom. From a report: Little-known until recently even within its home market of India, RRP Semiconductor Ltd. became a social-media obsession as its shares surged more than 55,000% in the 20 months through Dec. 17 -- by far the biggest gain worldwide among companies with a market value above $1 billion.

That's despite posting negative revenue in its latest financial results, reporting just two full-time employees in its latest annual report, and boasting only a tenuous link to the semiconductor spending boom after shifting away from real estate in early 2024. A mix of online hype, a tiny free float and India's swelling base of retail investors drove 149 straight limit-up sessions, even as exchange officials and the company itself cautioned investors.

The rally is now showing signs of strain -- and regulators are taking a closer look. The Securities and Exchange Board of India has begun examining the surge in RRP's shares for potential wrongdoing, according to a person familiar with the matter who asked not to be identified discussing confidential information. The $1.7 billion stock, recently restricted by its exchange to trading just once a week, has fallen by 6% from its Nov. 7 peak.

IT

Micron Says Memory Shortage Will 'Persist' Beyond 2026 (theverge.com) 47

Micron, one of the world's three largest memory suppliers, expects the global shortage of DRAM and NAND flash memory to "persist through and beyond" 2026 as AI-driven demand continues to outstrip supply. CEO Sanjay Mehrotra made the forecast during the company's latest earnings call on Wednesday, saying that "supply will remain substantially short of the demand for the foreseeable future." The company posted record quarterly revenue of $13.64 billion, up from $8.71 billion in the same period last year.

Micron recently shuttered Crucial, its consumer-facing brand, to focus on high-bandwidth memory for AI data centers. HBM technology requires three times the silicon wafers of standard DRAM, leaving fewer resources for the chips that go into PCs, smartphones and cars. Micron plans to boost DRAM and NAND shipments by 20 percent next year but acknowledged this won't meet demand. New facilities in Idaho and New York are slated for 2027 and 2030 respectively.
China

How China Built Its 'Manhattan Project' To Rival the West in AI Chips (reuters.com) 171

Chinese scientists have built a working prototype of an extreme ultraviolet lithography machine in a high-security Shenzhen laboratory. The development represents exactly what Washington has spent years and multiple rounds of export controls trying to prevent: China's path toward semiconductor independence, and an end to the West's monopoly on the technology that powers AI, smartphones and advanced weapons systems.

The prototype, completed in early 2025 by former ASML engineers who reverse-engineered the Dutch company's machines, is operational and generating EUV light, though it has not yet produced working chips. The effort is part of a six-year secret government initiative that sources described to Reuters as China's version of the Manhattan Project.

Huawei is coordinating thousands of engineers across companies and state research institutes, and recruits are working under false identities inside secure facilities. The Chinese government is targeting 2028 for producing working chips, though sources say 2030 is more realistic -- still years earlier than the decade analysts had predicted it would take China to match the West.
