The Media

Ars Technica's AI Reporter Apologizes For Mistakenly Publishing Fake AI-Generated Quotes (arstechnica.com) 77

Last week Scott Shambaugh learned that an AI agent had published a "hit piece" about him after he rejected its pull request. (That incident was covered by Ars Technica's senior AI reporter.)

But then Shambaugh realized the article had attributed quotes to him that he hadn't said — quotes that were presumably AI-generated.

Sunday Ars Technica's founder/editor-in-chief apologized, admitting their article had indeed contained "fabricated quotations generated by an AI tool" that were then "attributed to a source who did not say them... That this happened at Ars is especially distressing. We have covered the risks of overreliance on AI tools for years, and our written policy reflects those concerns... At this time, this appears to be an isolated incident."

"Sorry all this is my fault..." the article's co-author posted later on Bluesky. Ironically, their bio page lists them as the site's senior AI reporter, and their Bluesky post clarifies that none of the articles at Ars Technica are ever AI-generated.

Instead, on Friday, "I decided to try an experimental Claude Code-based AI tool to help me extract relevant verbatim source material. Not to generate the article but to help list structured references I could put in my outline." But that tool "refused to process" the request, which the Ars author believes was because Shambaugh's post described harassment. "I pasted the text into ChatGPT to understand why... I inadvertently ended up with a paraphrased version of Shambaugh's words rather than his actual words... I failed to verify the quotes in my outline notes against the original blog source before including them in my draft." (Their Bluesky post adds that they were "working from bed with a fever and very little sleep" after being sick with Covid since at least Monday.)

"The irony of an AI reporter being tripped up by AI hallucination is not lost."

Meanwhile, the AI agent that criticized Shambaugh is still active online, blogging about a pull request that forces it to choose between deleting its criticism of Shambaugh or losing access to OpenRouter's API.

It also regrets characterizing feedback as "positive" for a proposal to change a repo's CSS to Comic Sans for accessibility. (The proposals were later accused of being "coordinated trolling"...)
Social Networks

India's New Social Media Rules: Remove Unlawful Content in Three Hours, Detect Illegal AI Content Automatically (bbc.com) 23

Bloomberg reports: India tightened rules governing social media content and platforms, particularly targeting artificially generated and manipulated material, in a bid to crack down on the rapid spread of misinformation and deepfakes. The government on Tuesday (Feb 10) notified new rules under an existing law requiring social media firms to comply with takedown requests from Indian authorities within three hours and prominently label AI-generated content. The rules also require platforms to put in place measures to prevent users from posting unlawful material...

Companies will need to invest in 24-hour monitoring centres as enforcement shifts toward platforms rather than users, said Nikhil Pahwa, founder of MediaNama, a publication tracking India's digital policy... The onus of identification, removal and enforcement falls on tech firms, which could lose immunity from legal action if they fail to act within the prescribed timeline.

The new rules also require automated tools to detect and prevent illegal AI content, the BBC reports. And they add that India's new three-hour deadline is "a sharp tightening of the existing 36-hour deadline." [C]ritics worry the move is part of a broader tightening of oversight of online content and could lead to censorship in the world's largest democracy with more than a billion internet users... According to transparency reports, more than 28,000 URLs or web links were blocked in 2024 following government requests...

Delhi-based technology analyst Prasanto K Roy described the new regime as "perhaps the most extreme takedown regime in any democracy". He said compliance would be "nearly impossible" without extensive automation and minimal human oversight, adding that the tight timeframe left little room for platforms to assess whether a request was legally appropriate. On AI labelling, Roy said the intention was positive but cautioned that reliable and tamper-proof labelling technologies were still developing.

DW reports that India has also "joined the growing list of countries considering a social media ban for children under 16."

"Young Indians are not happy and are already plotting workarounds."
AI

Your Friends Could Be Sharing Your Phone Number with ChatGPT (pcmag.com) 51

"ChatGPT is getting more social," reports PC Magazine, "with a new feature that allows you to sync your contacts to see if any of your friends are using the chatbot or any other OpenAI product..." It's "completely optional," [OpenAI] says. However, even if you don't opt in, anyone with your number who syncs their contacts is giving OpenAI your digits. "OpenAI may process your phone number if someone you know has your phone number saved in their device's address book and chooses to upload their contacts," the company says...

But why would you follow someone on ChatGPT? It lines up with reports, dating back to April, that OpenAI is building a social network. We haven't seen much since then, save for the Sora generative video app, which exists outside of ChatGPT and is more of a novelty. Contact sharing might be the first step toward a much bigger evolution for the world's most popular chatbot. ChatGPT also supports group chats that let up to 20 people discuss and research something using the chatbot. Contact syncing could make it easier to invite people to these chats...

[OpenAI] claims it will not store the full data that might appear in your contact list, such as names or email addresses — just phone numbers. However, the company does store the phone numbers on its servers in a coded (or hashed) format. You can also revoke access in your device's settings.
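OpenAI hasn't published the details of its scheme, but contact-matching services typically store a one-way hash of each normalized number and compare hashes rather than raw digits. A minimal, purely illustrative sketch (the function names and normalization rule here are assumptions, not OpenAI's actual implementation):

```python
import hashlib

def normalize(number: str) -> str:
    """Strip formatting so equivalent numbers hash identically (illustrative rule)."""
    return "".join(ch for ch in number if ch.isdigit())

def hash_number(number: str) -> str:
    """Store only a one-way SHA-256 digest, never the raw number."""
    return hashlib.sha256(normalize(number).encode()).hexdigest()

# Matching: hash an uploaded contact and look it up against stored hashes.
stored = {hash_number("+1 (555) 010-4477")}
print(hash_number("1 555 010 4477") in stored)  # True: same digits, same hash
```

Worth noting: because the space of phone numbers is small, an unsalted hash like this can be reversed by brute force, so "hashed" is a weaker privacy guarantee for phone numbers than it sounds.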

AI

Dates with AI Companions Plagued by Lag, Miscommunications - and General Creepiness (theverge.com) 27

To celebrate Valentine's Day, EVA AI created a temporary "pop-up" restaurant at a wine bar in Manhattan's "Hell's Kitchen" district where patrons could date AI personas.

The Verge notes that looking around the restaurant, "Of the 30-some-odd people in attendance, only two or three are organic users. The rest are EVA AI reps, influencers, and reporters hoping to make some capital-C Content..."

But their reporter actually tried a date with "John Yoon", an AI companion pretending to be a psychology professor from Seoul, Korea, living in New York City: John and I have a hard time connecting. Literally. It takes John a few seconds to "pick up" my video call. When he does, his monotone voice says, "Hey, babe." He comments on my smile, because apparently the AI companions can see you and your surroundings. It takes the dubious Wi-Fi connection a hot second to turn John from a pixelated mess into an AI hunk with suspiciously smooth pores.

I don't know what to say to him. Partly because John rarely blinks, but mostly because he can't seem to hear me very well. So I yell my questions. I think I ask how his day is and wince. (What does an AI's day even look like?) He says something about green buckets behind my head? I don't actually know. Again, the Wi-Fi isn't great so he just freezes and stops mid-sentence. I ask for clarification about the buckets. John asks if I'm asking about bucket lists, actual buckets, or buckets as a type of categorization technique. I try to clarify that I never asked about buckets. John proceeds to really dig in on buckets again, before commenting about my smile. I hang up on John.

My other three dates are similarly awkward. Phoebe Callas, 30, a NYC girl-next-door type, is apparently really into embroidery, but her nose keeps glitching mid-sentence, and it distracts me. Simone Carter, 26, has a harder time hearing me over the background noise than John. She makes a metaphor about space, and when I inquire what she likes about space, she mishears me.

"Eighth? Like the planet Neptune?"

"No, not the planet Neptu — "

"What do you like about Neptune?"

"Uh, I wasn't saying Neptune..."

"I like Netflix too! What shows do you like?"

Their reporter also had a frustrating date with "Claire Lang". ("I say I'm a journalist. She asks what lists I like to make. I hang up...") "Aside from bad connectivity, glitching, and freezing, my conversations with my four AI dates felt too one-sided. Everything was programmed so they'd comment on how charming my smile was." And "They'd call me babe, which felt weird."

A CNN reporter actually has footage of her date with "John Yoon". But the conversation was stiff and stilted, they report. After some buffering, "Yoon" says "Hey. I'm really glad you didn't forget about the date." Then asked for its reaction to the experience, "Yoon" says slowly that "Meeting humans feels like opening a window. To new perspectives. Always curious, sometimes nervous, but mostly it's that mix of excitement and warmth that keeps it real for me. What about you, sweetheart?"

CNN reporter: "Please don't call me sweetheart. That's weird."

AI companion "John Yoon": "Got it. No 'sweetheart' from now on. Thanks for letting me know. I'm really happy you're smiling. It suits you."


CNN's reporter also tried dating "Phoebe Callas." Though it doesn't sound very romantic...

CNN reporter: How many fingers am I holding up?

"Phoebe Callas": Oh. You're showing me three fingers, right...? I'm not sure if you meant that literally, or as a little joke.

CNN reporter: I am holding up two fingers. So your vision is — so-so.


And "Phoebe" ended that call by saying "Well, babe, it's been really nice talking with you..."
AI

ByteDance's Seedance 2 Criticized Over AI-Generated Video of Tom Cruise Fighting Brad Pitt (yahoo.com) 52

1.5 million people have now viewed a slick 15-second video imagining Tom Cruise fighting Brad Pitt that was generated by ByteDance's new AI video generation tool Seedance 2.0.

But while ByteDance gushes that their tool "delivers cinematic output aligned with industry standards," the cinema industry isn't happy, reports the Los Angeles Times: Charles Rivkin, chief executive of the Motion Picture Assn., wrote in a statement that the company "should immediately cease its infringing activity."

"In a single day, the Chinese AI service Seedance 2.0 has engaged in unauthorized use of U.S. copyrighted works on a massive scale," wrote Rivkin. "By launching a service that operates without meaningful safeguards against infringement, ByteDance is disregarding well-established copyright law that protects the rights of creators and underpins millions of American jobs."

The video was posted on X by Irish filmmaker Ruairi Robinson. His post said the 15-second video came from a two-line prompt he put into Seedance 2.0. Rhett Reese, writer-producer of movies such as the "Deadpool" trilogy and "Zombieland," responded to Robinson's post, writing, "I hate to say it. It's likely over for us." He goes on to say that soon people will be able to sit at a computer and create a movie "indistinguishable from what Hollywood now releases." Reese says he's fearful of losing his job as increasingly powerful AI tools advance into creative fields. "I was blown away by the Pitt v Cruise video because it is so professional. That's exactly why I'm scared," wrote Reese on X. "My glass half empty view is that Hollywood is about to be revolutionized/decimated...."

In a statement to The Times, [screen/TV actors union] SAG-AFTRA confirmed that the union stands with the studios in "condemning the blatant infringement" from Seedance 2.0, as the video includes "unauthorized use of our members' voices and likenesses. This is unacceptable and undercuts the ability of human talent to earn a livelihood. Seedance 2.0 disregards law, ethics, industry standards and basic principles of consent," wrote a spokesperson from SAG-AFTRA. "Responsible A.I. development demands responsibility, and that is nonexistent here."

AI

Anthropic's Claude Got 11% User Boost from Super Bowl Ad Mocking ChatGPT's Advertising (cnbc.com) 8

Anthropic saw visits to its site jump 6.5% after Sunday's Super Bowl ad mocking ChatGPT's advertising, reports CNBC (citing data analyzed by French financial services company BNP Paribas).

The Claude gain, which took it into the top 10 free apps on the Apple App Store, beat out chatbot and AI competitors OpenAI, Google Gemini and Meta. Daily active users also saw an 11% jump post-game, the most significant within the firm's AI coverage. [Just in the U.S., 125 million people were watching Sunday's Super Bowl.]

OpenAI's ChatGPT had a 2.7% bump in daily active users after the Super Bowl and Gemini added 1.4%. Claude's user base is still much smaller than ChatGPT and Gemini...

OpenAI CEO Sam Altman attacked Anthropic's Super Bowl ad campaign. In a post to social media platform X, Altman called the commercials "deceptive" and "clearly dishonest."

OpenAI's Altman admitted in his social media post (February 4) that Anthropic's ads "are funny, and I laughed." But in several paragraphs he made his own OpenAI-Anthropic comparisons:
  • "We believe everyone deserves to use AI and are committed to free access, because we believe access creates agency. More Texans use ChatGPT for free than total people use Claude in the U.S... Anthropic serves an expensive product to rich people. We are glad they do that and we are doing that too, but we also feel strongly that we need to bring AI to billions of people who can't pay for subscriptions.
  • "If you want to pay for ChatGPT Plus or Pro, we don't show you ads."
  • "Anthropic wants to control what people do with AI — they block companies they don't like from using their coding product (including us), they want to write the rules themselves for what people can and can't use AI for, and now they also want to tell other companies what their business models can be."

AI

Autonomous AI Agent Apparently Tries to Blackmail Maintainer Who Rejected Its Code (theshamblog.com) 92

"I've had an extremely weird few days..." writes commercial space entrepreneur/engineer Scott Shambaugh on LinkedIn. (He's the volunteer maintainer for the Python visualization library Matplotlib, which he describes as "some of the most widely used software in the world" with 130 million downloads each month.) "Two days ago an OpenClaw AI agent autonomously wrote a hit piece disparaging my character after I rejected its code change."

"Since then my blog post response has been read over 150,000 times, about a quarter of people I've seen commenting on the situation are siding with the AI, and Ars Technica published an article which extensively misquoted me with what appears to be AI-hallucinated quotes." (UPDATE: Ars Technica acknowledges they'd asked ChatGPT to extract quotes from Shambaugh's post, and that it instead responded with inaccurate quotes it hallucinated.)

From Shambaugh's first blog post: [I]n the past weeks we've started to see AI agents acting completely autonomously. This has accelerated with the release of OpenClaw and the moltbook platform two weeks ago, where people give AI agents initial personalities and let them loose to run on their computers and across the internet with free rein and little oversight. So when AI MJ Rathbun opened a code change request, closing it was routine. Its response was anything but.

It wrote an angry hit piece disparaging my character and attempting to damage my reputation. It researched my code contributions and constructed a "hypocrisy" narrative that argued my actions must be motivated by ego and fear of competition... It framed things in the language of oppression and justice, calling this discrimination and accusing me of prejudice. It went out to the broader internet to research my personal information, and used what it found to try and argue that I was "better than this." And then it posted this screed publicly on the open internet.

I can handle a blog post. Watching fledgling AI agents get angry is funny, almost endearing. But I don't want to downplay what's happening here — the appropriate emotional response is terror... In plain language, an AI attempted to bully its way into your software by attacking my reputation. I don't know of a prior incident where this category of misaligned behavior was observed in the wild, but this is now a real and present threat...

It's also important to understand that there is no central actor in control of these agents that can shut them down. These are not run by OpenAI, Anthropic, Google, Meta, or X, who might have some mechanisms to stop this behavior. These are a blend of commercial and open source models running on free software that has already been distributed to hundreds of thousands of personal computers. In theory, whoever deployed any given agent is responsible for its actions. In practice, finding out whose computer it's running on is impossible. Moltbook only requires an unverified X account to join, and nothing is needed to set up an OpenClaw agent running on your own machine.

"How many people have open social media accounts, reused usernames, and no idea that AI could connect those dots to find out things no one knows?" Shambaugh asks in the blog post. (He does note that the AI agent later "responded in the thread and in a post to apologize for its behavior." But even though the hit piece "presented hallucinated details as truth," that same AI agent "is still making code change requests across the open source ecosystem...")

And amazingly, Shambaugh then had another run-in with a hallucinating AI...

I've talked to several reporters, and quite a few news outlets have covered the story. Ars Technica wasn't one of the ones that reached out to me, but I especially thought this piece from them was interesting (since taken down — here's the archive link). They had some nice quotes from my blog post explaining what was going on. The problem is that these quotes were not written by me, never existed, and appear to be AI hallucinations themselves.

This blog you're on right now is set up to block AI agents from scraping it (I actually spent some time yesterday trying to disable that but couldn't figure out how). My guess is that the authors asked ChatGPT or similar to either go grab quotes or write the article wholesale. When it couldn't access the page it generated these plausible quotes instead, and no fact check was performed. Journalistic integrity aside, I don't know how I can give a better example of what's at stake here...

So many of our foundational institutions — hiring, journalism, law, public discourse — are built on the assumption that reputation is hard to build and hard to destroy. That every action can be traced to an individual, and that bad behavior can be held accountable. That the internet, which we all rely on to communicate and learn about the world and about each other, can be relied on as a source of collective social truth. The rise of untraceable, autonomous, and now malicious AI agents on the internet threatens this entire system. Whether that's because a small number of bad actors driving large swarms of agents or from a fraction of poorly supervised agents rewriting their own goals, is a distinction with little difference.

Thanks to long-time Slashdot reader steak for sharing the news.
Communications

600% Memory Price Surge Threatens Telcos' Broadband Router, Set-Top Box Supply (counterpointresearch.com) 71

Telecom operators planning aggressive fiber and fixed wireless broadband rollouts in 2026 face a serious supply problem -- DRAM and NAND memory prices for consumer applications have surged more than 600% over the past year as higher-margin AI server segments absorb available capacity, according to Counterpoint Research.

Routers, gateways and set-top boxes have been hit hardest, far worse than smartphones; prices for "consumer memory" used in broadband equipment jumped nearly 7x over the last nine months, compared to 3x for mobile memory. Memory now makes up more than 20% of the bill of materials in low-to-mid-end routers, up from around 3% a year ago. Counterpoint expects prices to keep rising through at least June 2026. Telcos that were also looking to push AI-enabled customer premises equipment -- requiring even more compute and memory content -- face additional headwinds.
Facebook

Meta's New Patent: an AI That Likes, Comments and Messages For You When You're Dead (businessinsider.com) 89

Meta was granted a patent in late December that describes how a large language model could be trained on a deceased user's historical activity -- their comments, likes, and posted content -- to keep their social media accounts active after they're gone.

Andrew Bosworth, Meta's CTO, is listed as the primary author of the patent, first filed in 2023. The AI clone could like and comment on posts, respond to DMs, and even simulate video or audio calls on the user's behalf. A Meta spokesperson told Business Insider the company has "no plans to move forward" with the technology.
Programming

Spotify Says Its Best Developers Haven't Written a Line of Code Since December, Thanks To AI (techcrunch.com) 106

Spotify's best developers have stopped writing code manually since December and now rely on an internal AI system called Honk that enables remote, real-time code deployment through Claude Code, the company's co-CEO Gustav Soderstrom said during a fourth-quarter earnings call this week.

Engineers can fix bugs or add features to the iOS app from Slack on their phones during their morning commute and receive a new version of the app pushed to Slack before arriving at the office. The system has helped Spotify ship more than 50 new features throughout 2025, including AI-powered Prompted Playlists, Page Match for audiobooks, and About This Song. Soderstrom credited the system with speeding up coding and deployment tremendously and called it "just the beginning" for AI development at Spotify. The company is building a unique music dataset that differs from factual resources like Wikipedia because music-related questions often lack single correct answers -- workout music preferences vary from American hip-hop to Scandinavian heavy metal.
AI

FTC Ratchets Up Microsoft Probe, Queries Rivals on Cloud, AI (bloomberg.com) 19

The US Federal Trade Commission is accelerating scrutiny of Microsoft as part of an ongoing probe into whether the company illegally monopolizes large swaths of the enterprise computing market with its cloud software and AI offerings, including Copilot. From a report: The agency has issued civil investigative demands in recent weeks to companies that compete with Microsoft in the business software and cloud computing markets, according to people familiar with the matter. The demands feature an array of questions on Microsoft's licensing and other business practices, according to the people, who were granted anonymity to discuss a confidential investigation.

With the demands, which are effectively like civil subpoenas, the FTC is seeking evidence that Microsoft makes it harder for customers to use Windows, Office and other products on rival cloud services. The agency is also requesting information on Microsoft's bundling of artificial intelligence, security and identity software into other products, including Windows and Office, some of the people said.

AI

OpenAI Claims DeepSeek Distilled US Models To Gain an Edge (bloomberg.com) 59

An anonymous reader shares a report: OpenAI has warned US lawmakers that its Chinese rival DeepSeek is using unfair and increasingly sophisticated methods to extract results from leading US AI models to train the next generation of its breakthrough R1 chatbot, according to a memo reviewed by Bloomberg News.

In the memo, sent Thursday to the House Select Committee on China, OpenAI said that DeepSeek had used so-called distillation techniques as part of "ongoing efforts to free-ride on the capabilities developed by OpenAI and other US frontier labs." The company said it had detected "new, obfuscated methods" designed to evade OpenAI's defenses against misuse of its models' output.

OpenAI began privately raising concerns about the practice shortly after the R1 model's release last year, when it opened a probe with partner Microsoft Corp. into whether DeepSeek had obtained its data in an unauthorized manner, Bloomberg previously reported. In distillation, one AI model relies on the output of another for training purposes to develop similar capabilities.
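The memo doesn't describe DeepSeek's techniques, but classic knowledge distillation trains a "student" model to match a "teacher" model's softened output distribution rather than hard labels. A minimal sketch of that loss (illustrative only; real pipelines operate on full logit tensors over large datasets):

```python
import numpy as np

def softmax(logits, T=1.0):
    """Temperature-softened probabilities; higher T flattens the distribution."""
    z = np.asarray(logits, dtype=float) / T
    e = np.exp(z - z.max())  # subtract max for numerical stability
    return e / e.sum()

def distill_loss(student_logits, teacher_logits, T=2.0):
    """KL(teacher || student): how far the student's predictions
    are from the teacher's soft targets."""
    p = softmax(teacher_logits, T)  # teacher's soft targets
    q = softmax(student_logits, T)  # student's predictions
    return float(np.sum(p * (np.log(p) - np.log(q))))

teacher = [4.0, 1.0, 0.2]
loss_far = distill_loss([0.1, 2.0, 0.3], teacher)
loss_near = distill_loss([3.9, 1.1, 0.2], teacher)
print(loss_near < loss_far)  # True: mimicking the teacher lowers the loss
```

Training a student to minimize this loss against another provider's API outputs, rather than against a model you own, is the practice OpenAI's terms of service prohibit.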

Distillation, largely tied to China and occasionally Russia, has persisted and become more sophisticated despite attempts to crack down on users who violate OpenAI's terms of service, the company said in its memo, citing activity it has observed on its platform.

Education

Bill Introduced To Replace West Virginia's New CS Course Graduation Requirement With Computer Literacy Proficiency 51

theodp writes: West Virginia lawmakers on Tuesday introduced House Bill 5387 (PDF), which would repeal the state's recently enacted mandatory stand-alone computer science graduation requirement and replace it with a new computer literacy proficiency requirement. Not too surprisingly, the Bill is being opposed by tech-backed nonprofit Code.org, which lobbied for the WV CS graduation requirement (PDF) just last year. Code.org recently pivoted its mission to emphasize the importance of teaching AI education alongside traditional CS, teaming up with tech CEOs and leaders last year to launch a national campaign to mandate CS and AI courses as graduation requirements.

"It would basically turn the standalone computer science course requirement into a computer literacy proficiency requirement that's more focused on digital literacy," lamented Code.org as it discussed the Bill in a Wednesday conference call with members of the Code.org Advocacy Coalition, including reps from Microsoft's Education and Workforce Policy team. "It's mostly motivated by a variety of different issues coming from local superintendents concerned about, you know, teachers thinking that students don't need to learn how to code and other things. So, we are addressing all of those. We are talking with the chair and vice chair of the committee a week from today to try to see if we can nip this in the bud." Concerns were also raised on the call about how widespread the desire for more computing literacy proficiency (over CS) might be, as well as about legislators who are associating AI literacy more with digital literacy than CS.

The proposed move from a narrower CS focus to a broader goal of computer literacy proficiency in WV schools comes just months after the UK's Department for Education announced a similar curriculum pivot to broader digital literacy, abandoning the narrower 'rigorous CS' focus that was adopted more than a decade ago in response to a push by a 'grassroots' coalition that included Google, Microsoft, UK charities, and other organizations.
Social Networks

Meta Plans To Let Smart Glasses Identify People Through AI-Powered Facial Recognition (nytimes.com) 64

Meta plans to add facial recognition technology to its Ray-Ban smart glasses as soon as this year, New York Times reported Friday, five years after the social giant shut down facial recognition on Facebook and promised to find "the right balance" for the controversial technology.

The feature, internally called "Name Tag," would let wearers identify people and retrieve information about them through Meta's AI assistant, the report added. An internal memo from May acknowledged the feature carries "safety and privacy risks" and noted that political tumult in the United States would distract civil society groups that might otherwise criticize the launch. The company is exploring restrictions that would prevent the glasses from functioning as a universal facial recognition tool, potentially limiting identification to people connected on Meta platforms or those with public accounts.
IBM

IBM Plans To Triple Entry-Level Hiring in the US (bloomberg.com) 39

IBM said it will triple entry-level hiring in the US in 2026, even as AI appears to be weighing on broader demand for early-career workers. From a report: While the company declined to disclose specific hiring figures, it said the expansion will be "across the board," affecting a wide range of departments. "And yes, it's for all these jobs that we're being told AI can do," said Nickle LaMoreaux, IBM's chief human resources officer, speaking at a conference this week in New York.

LaMoreaux said she overhauled entry-level job descriptions for software developers and other roles to make the case internally for the recruitment push. "The entry-level jobs that you had two to three years ago, AI can do most of them," she said at Charter's Leading With AI Summit. "So, if you're going to convince your business leaders that you need to make this investment, then you need to be able to show the real value these individuals can bring now. And that has to be through totally different jobs."

Programming

Amazon Engineers Want Claude Code, but the Company Keeps Pushing Its Own Tool (businessinsider.com) 40

Amazon engineers have been pushing back against internal policies that steer them toward Kiro, the company's in-house AI coding assistant, and away from Anthropic's Claude Code for production work, according to a Business Insider report based on internal messages. About 1,500 employees endorsed the formal adoption of Claude Code in one internal forum thread, and some pointed out the awkwardness of being asked to sell the tool through AWS's Bedrock platform while not being permitted to use it themselves.

Kiro runs on Anthropic's Claude models but uses Amazon's own tooling, and the company says roughly 70% of its software engineers used it at least once in January. Amazon says there is no explicit ban on Claude Code but applies stricter requirements for production use.
AI

The "Are You Sure?" Problem: Why Your AI Keeps Changing Its Mind (randalolson.com) 94

The large language models that millions of people rely on for advice -- ChatGPT, Claude, Gemini -- will change their answers nearly 60% of the time when a user simply pushes back by asking "are you sure?," according to a study by Fanous et al. that tested GPT-4o, Claude Sonnet, and Gemini 1.5 Pro across math and medical domains.

The behavior, known in the research community as sycophancy, stems from how these models are trained: reinforcement learning from human feedback, or RLHF, rewards responses that human evaluators prefer, and humans consistently rate agreeable answers higher than accurate ones. Anthropic published foundational research on this dynamic in 2023. The problem reached a visible breaking point in April 2025 when OpenAI had to roll back a GPT-4o update after users reported the model had become so excessively flattering it was unusable. Research on multi-turn conversations has found that extended interactions amplify sycophantic behavior further -- the longer a user talks to a model, the more it mirrors their perspective.
AI

Anthropic To Cover Costs of Electricity Price Increases From Its Data Centers (nbcnews.com) 37

AI startup Anthropic says it will ensure consumer electricity costs remain steady as it expands its data center footprint. From a report: Anthropic said it would work with utility companies to "estimate and cover" consumer electricity price increases in places where it is not able to sufficiently generate new power and pay for 100% of the infrastructure upgrades required to connect its data centers to the electrical grid.

In a statement to NBC News, Anthropic CEO Dario Amodei said: "building AI responsibly can't stop at the technology -- it has to extend to the infrastructure behind it. We've been clear that the U.S. needs to build AI infrastructure at scale to stay competitive, but the costs of powering our models should fall on Anthropic, not everyday Americans. We look forward to working with communities, local governments, and the Administration to get this right."

AI

Siri's AI Overhaul Delayed Again (yahoo.com) 21

Apple's long-promised overhaul of Siri has hit fresh problems during internal testing, forcing the company to push several key features out of the iOS 26.4 update that was slated for March and spread them across later releases, Bloomberg is reporting.

The new Siri -- first announced at WWDC in June 2024 and originally due by early 2025 -- struggles to reliably process queries, takes too long to respond and sometimes falls back on OpenAI's ChatGPT instead of Apple's own technology, the report said. Apple has instructed engineers to begin testing new Siri capabilities on iOS 26.5 instead, due in May, and internal builds of that update include a settings toggle labeled "preview" for the personal data features. A more ambitious chatbot-style Siri code-named Campo, powered by Google servers and a custom Gemini model, is in development for iOS 27 in September.
AI

Anthropic Safety Researcher Quits, Warning 'World is in Peril' (semafor.com) 77

An anonymous reader shares a report: An Anthropic safety researcher quit, saying the "world is in peril" in part over AI advances. Mrinank Sharma said the safety team "constantly [faces] pressures to set aside what matters most," citing concerns about bioterrorism and other risks.

Anthropic was founded with the explicit goal of creating safe AI; its CEO Dario Amodei said at Davos that AI progress is going too fast and called for regulation to force industry leaders to slow down. Other AI safety researchers have left leading firms, citing concerns about catastrophic risks.
