The Internet

The Internet Archive Now Captures AI-Generated Content (Including Google's AI Overviews) (cnn.com) 4

CNN profiled the non-profit Internet Archive today — and included this tidbit about how they archive parts of the internet that are now "tucked in conversations with AI chatbots." The rise of artificial intelligence and AI chatbots means the Internet Archive is changing how it records the history of the internet. In addition to web pages, the Internet Archive now captures AI-generated content, like ChatGPT answers and those summaries that appear at the top of Google search results. The Internet Archive team, which is made up of librarians and software engineers, is experimenting with ways to preserve how people get their news from chatbots by coming up with hundreds of questions and prompts each day based on the news, and recording both the queries and outputs, [says Wayback Machine Director Mark Graham].
It sounds like a fun place to work... Archivists use bespoke machines to digitize books page by page, livestreaming their work on YouTube for all to see (alongside some lo-fi music). Record players churn out vintage tunes from the 1920s and 1940s, and the building houses every type of media console for any type of content imaginable, from microfilm to CDs and satellite television. (The Internet Archive preserves music, television, books and video games, too)... "There are a lot of people that are just passionate about the cause. There's a cyberpunk atmosphere," Annie Rauwerda, a Wikipedia editor and social media influencer, said at a party thrown at the Internet Archive's headquarters to celebrate reaching a trillion pages. "The internet (feels) quite corporate when I use it a lot these days, but you wouldn't know from the people here."
AI

Could Firefox Be the Browser That Protects the Privacy of AI Users? (anildash.com) 54

Tech entrepreneur/blogger Anil Dash has been critical of AI browsers like ChatGPT Atlas. (He's written that Atlas "substitutes its own AI-generated content for the web, but it looks like it's showing you the web," while its prompt-based/command-line interface resembles a clunky text adventure, and its true purpose seems to be ingesting more training data.)

And at the Mozilla Festival in Spain, "Virtually everyone shared some version of what I'd articulated as the majority view on AI, which is approximately that LLMs can be interesting as a technology, but that Big Tech, and especially Big AI, are decidedly awful and people are very motivated to stop them from committing their worst harms upon the vulnerable."

But... Another reality that people were a little more quiet in acknowledging, and sometimes reluctant to engage with out loud, is the reality that hundreds of millions of people are using the major AI tools every day... I don't know why today's Firefox users, even if they're the most rabid anti-AI zealots in the world, don't say, "well, even if I hate AI, I want to make sure Firefox is good at protecting the privacy of AI users so I can recommend it to my friends and family who use AI"...

My personal wishlist would be pretty simple:

* Just give people the "shut off all AI features" button. It's a tiny percentage of people who want it, but they're never going to shut up about it, and they're convinced they're the whole world and they can't distinguish between being mad at big companies and being mad at a technology, so give them a toggle switch and write up a blog post explaining how extraordinarily expensive it is to maintain a configuration option over the lifespan of a global product.

* Market Firefox as "The best AI browser for people who hate Big AI". Regular users have no idea how creepy the Big AI companies are — they've just heard their local news talk about how AI is the inevitable future. If Mozilla can warn me how to protect my privacy from ChatGPT, then it can also mention that ChatGPT tells children how to self-harm, and should be aggressive in engaging with the community on how to build tools that help mitigate those kinds of harms — how do we catalyze that innovation?

* Remind people that there isn't "a Firefox" — everyone is Firefox. Whether it's Zen, or your custom build of Firefox with your favorite extensions and skins, it's all part of the same story. Got a local LLM that runs entirely as a Firefox extension? Great! That should be one of the many Firefoxes, too. Right now, so much of the drama and heightened emotions and tension are coming from people's (well... dudes') egos about there being One True Firefox, and wanting to be the one who controls what's in that version, as an expression of one set of values. This isn't some blood-feud fork, there can just be a lot of different choices for different situations. Make it all work.

United States

Are Data Centers Raising America's Electricity Prices? (cnbc.com) 71

Residential utility bills in America "rose 6% on average nationwide in August compared with the same period in the previous year," reports CNBC, citing statistics from the U.S. Energy Information Administration: The reasons for price increases are often complex and vary by region. But in at least three states with high concentrations of data centers, electric bills climbed much faster than the national average during that period. Prices, for example, surged by 13% in Virginia, 16% in Illinois and 12% in Ohio.

The tech companies and AI labs are building data centers that consume a gigawatt or more of electricity in some cases, equivalent to more than 800,000 homes, the size of a city essentially... "The techlash is real," said Abraham Silverman, who served as general counsel for New Jersey's public utility board from 2019 until 2023 under outgoing Democratic Gov. Phil Murphy. "Data centers aren't always great neighbors," said Silverman, now a researcher at Johns Hopkins University. "They tend to be loud, they can be dirty and there's a number of communities, particularly in places with really high concentrations of data centers, that just don't want more data centers..." [C]apacity prices get passed down to consumers in their utility bills, Silverman said. The data center load in PJM [America's largest grid, serving 13 states] is also impacting prices in states that are not industry leaders such as New Jersey, where prices jumped about 20% year over year...

There are other reasons for rising electricity prices, Silverman said. The aging electric grid needs upgrades at a time of broad inflation and the cost of building new transmission lines has gone up by double digits, he said. The utilities also point to rising demand from the expansion of domestic manufacturing and the broader electrification of the economy, such as electric vehicles and the adoption of electric heat pumps in some regions...

In other states, however, the relationship between rising electricity prices and data centers is less clear. Texas, for example, is second only to Virginia with more than 400 data centers. But prices in the Lone Star State increased about 4% year over year in August, lower than the national average. Texas operates its own grid, ERCOT, with a relatively fast process that can connect new electric supply to the grid in around three years, according to a February 2024 report from the Brattle Group. California, meanwhile, has the third most data centers in the nation and the second highest residential electricity prices, nearly 80% above the national average. But prices in the Golden State increased about 1% in August 2024 over the prior year period, far below the average hike nationwide. One of the reasons California's electricity rates are so much higher than most of the country is the costs associated with preventing wildfires.

Programming

Security Researchers Spot 150,000 Function-less npm Packages in Automated 'Token Farming' Scheme (theregister.com) 11

An anonymous reader shared this report from The Register: Yet another supply chain attack has hit the npm registry in what Amazon describes as "one of the largest package flooding incidents in open source registry history" — but with a twist. Instead of injecting credential-stealing code or ransomware into the packages, this one is a token farming campaign.

Amazon Inspector security researchers, using a new detection rule and AI assistance, originally spotted the suspicious npm packages in late October, and, by November 7, the team had flagged thousands. By November 12, they had uncovered more than 150,000 malicious packages across "multiple" developer accounts. These were all linked to a coordinated tea.xyz token farming campaign, we're told. This is a decentralized protocol designed to reward open-source developers for their contributions using the TEA token, a utility asset used within the tea ecosystem for incentives, staking, and governance.

Unlike the spate of package poisoning incidents over recent months, this one didn't inject traditional malware into the open source code. Instead, the miscreants created a self-replicating attack, infecting the packages with code to automatically generate and publish, thus earning cryptocurrency rewards on the backs of legitimate open source developers. The code also included tea.yaml files that linked these packages to attacker-controlled blockchain wallet addresses.
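The pattern described above (a package that claims a crypto wallet via tea.yaml but ships essentially no code) lends itself to a simple heuristic. Below is an illustrative sketch of such a check; it is not Amazon's actual detection rule, and the package layout and function name are hypothetical:

```python
# Illustrative heuristic (not Amazon's actual detection rule): flag an npm
# package as a likely tea.xyz token-farming shell when it bundles a tea.yaml
# (linking it to a reward wallet) but contains almost no functional code.
def looks_like_tea_farming(files: dict[str, str]) -> bool:
    """files maps filename -> contents for an unpacked npm package."""
    has_tea_yaml = any(name.endswith("tea.yaml") for name in files)
    # Count non-trivial lines of JavaScript across the whole package.
    js_lines = sum(
        1
        for name, body in files.items()
        if name.endswith(".js")
        for line in body.splitlines()
        if line.strip() and not line.strip().startswith("//")
    )
    # A reward-farming shell: claims a wallet but ships almost no code.
    return has_tea_yaml and js_lines < 5

suspicious = {
    "package/tea.yaml": "rewards:\n  wallet: '0xABC...'\n",
    "package/index.js": "module.exports = {};\n",
}
print(looks_like_tea_farming(suspicious))  # True for this toy example
```

A real registry-side rule would of course also weigh publish rate, account age, and cross-package duplication, per the coordinated-campaign signals the researchers describe.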

At the moment, Tea tokens have no value, points out CSO Online. "But it is suspected that the threat actors are positioning themselves to receive real cryptocurrency tokens when the Tea Protocol launches its Mainnet, where Tea tokens will have actual monetary value and can be traded..." In an interview on Friday, an executive at software supply chain management provider Sonatype, which wrote about the campaign in April 2024, told CSO that the number has now grown to 153,000. "It's unfortunate that the worm isn't under control yet," said Sonatype CTO Brian Fox. And while this payload merely steals tokens, other threat actors are paying attention, he predicted. "I'm sure somebody out there in the world is looking at this massively replicating worm and wondering if they can ride that, not just to get the Tea tokens but to put some actual malware in there, because if it's replicating that fast, why wouldn't you?"

When Sonatype wrote about the campaign just over a year ago, it found a mere 15,000 packages that appeared to come from a single person. With the swollen numbers reported this week, Amazon researchers wrote that it's "one of the largest package flooding incidents in open source registry history, and represents a defining moment in supply chain security...." For now, says Sonatype's Fox, the scheme wastes the time of npm administrators, who are trying to expel over 100,000 packages. But Fox and Amazon point out the scheme could inspire others to take advantage of other reward-based systems for financial gain, or to deliver malware.

After deploying a new detection rule "paired with AI", Amazon's security researchers write, "within days, the system began flagging packages linked to the tea.xyz protocol... By November 7, the researchers flagged thousands of packages and began investigating what appeared to be a coordinated campaign. The next day, after validating the evaluation results and analyzing the patterns, they reached out to OpenSSF to share their findings and coordinate a response."
Their blog post thanks the Open Source Security Foundation (OpenSSF) for rapid collaboration, while calling the incident "a defining moment in supply chain security..."
AI

Copy-and-Paste Now Exceeds File Transferring as the Top Corporate Data Exfiltration Vector (scworld.com) 32

Slashdot reader spatwei writes: It is now more common for data to leave companies through copying and pasting than through file transfers and uploads, LayerX revealed in its Browser Security Report 2025.

This shift is largely due to generative AI (genAI), with 77% of employees pasting data into AI prompts, and 32% of all copy-pastes from corporate accounts to non-corporate accounts occurring within genAI tools.

'Traditional governance built for email, file-sharing, and sanctioned SaaS didn't anticipate that copy/paste into a browser prompt would become the dominant leak vector,' LayerX CEO Or Eshed wrote in a blog post summarizing the report.

"GenAI now accounts for 11% of enterprise application usage," notes this article from SC World, "with adoption rising faster than many data loss protection (DLP) controls can keep up. Overall, 45% of employees actively use AI tools, with 67% of these tools being accessed via personal accounts and ChatGPT making up 92% of all use..."

"With the rise of AI-driven browsers such as OpenAI's Atlas and Perplexity's Comet, governance of AI tools' access to corporate data becomes even more urgent," the LayerX report notes.
Supercomputing

A Quantum Error Correction Breakthrough? (harvard.edu) 39

The dream of quantum computers has been hampered by the challenge of error correction, writes the Harvard Gazette, since qubits "are inherently susceptible to slipping out of their quantum states and losing their encoded information."

But in a newly-published paper, a research team "combined various methods to create complex circuits with dozens of error correction layers" that "suppresses errors below a critical threshold — the point where adding qubits further reduces errors rather than increasing them." "For the first time, we combined all essential elements for a scalable, error-corrected quantum computation in an integrated architecture," said Mikhail Lukin, co-director of the Quantum Science and Engineering Initiative, Joshua and Beth Friedman University Professor, and senior author of the new paper. "These experiments — by several measures the most advanced that have been done on any quantum platform to date — create the scientific foundation for practical large-scale quantum computation..."

"There are still a lot of technical challenges remaining to get to [a] very large-scale computer with millions of qubits, but this is the first time we have an architecture that is conceptually scalable," said lead author Dolev Bluvstein, Ph.D. '25, who did the research during his graduate studies at Harvard and is now an assistant professor at Caltech. "It's going to take a lot of effort and technical development, but it's becoming clear that we can build fault-tolerant quantum computers...."

Hartmut Neven, vice president of engineering at the Google Quantum AI team, said the new paper came amid an "incredibly exciting" race between qubit platforms. "This work represents a significant advance toward our shared goal of building a large-scale, useful quantum computer," he said... With recent advances, Lukin believes the core elements for building quantum computers are falling into place. "This big dream that many of us had for several decades, for the first time, is really in direct sight," he said.

"In theory, a system of 300 quantum bits can store more information than the number of particles in the known universe..." the article points out.

"The new paper represents an important advance in a three-decade pursuit of quantum error correction."

Thanks to long-time Slashdot reader schwit1 for sharing the article.
AI

Fear Drives the AI 'Cold War' Between America and China (msn.com) 28

A new "cold war" between America and China is "pushing leaders to sideline concerns about the dangers of powerful AI models," reports the Wall Street Journal, "including the spread of disinformation and other harmful content, and the development of superintelligent AI systems misaligned with human values..."

"Both countries are driven as much by fear as by hope of progress." In Washington and Silicon Valley, warnings abound that China's "authoritarian AI," left unchecked, will erode American tech supremacy. Beijing is gripped by the conviction that a failure to keep pace in AI will make it easier for the U.S. to cut short China's resurgence as a global power. Both countries believe market share for their companies across the world is up for grabs — and with it, the potential to influence large swaths of the global population.

The U.S. still has a clear lead, producing the most powerful AI models. China can't match it in advanced chips and has no answer for the financial firepower of private American investors, who funded AI startups to the tune of $104 billion in the first half of 2025, and are gearing up for more. But it has a massive population of capable engineers, lower costs and a state-led development model that often moves faster than the U.S., all of which Beijing is working to harness to tip the contest in its direction. A new "whole of society" campaign looks to accelerate the construction of computing clusters in areas like Inner Mongolia, where vast solar and wind farms provide plentiful cheap energy, and connect hundreds of data centers to create a shared compute pool — some describe it as a "national cloud" — by 2028. China is also funneling hundreds of billions of dollars into its power grid to support AI training and adoption...

"Our lead is probably in the 'months but not years' realm," said Chris McGuire, who helped design U.S. export controls on AI chips while serving on the National Security Council under the Biden administration. Chinese AI models currently rank at or near the top in every task from coding to video generation, with the exception of search, according to Chatbot Arena, a popular crowdsourced ranking platform. China's manufacturing sector, meanwhile, is rocketing past the U.S. in bringing AI into the physical world through robotaxis, autonomous drones and humanoid robots. Given China's progress, McGuire said, the U.S. is "very lucky" to have its advantage in chips...

If AI surpasses human intelligence and acquires the ability to improve itself, it could confer unshakable scientific, economic and military superiority on the country that controls it. Short of that, AI's ability to automate tedious tasks and process vast amounts of data quickly promises to supercharge everything from cancer diagnoses to missile defense. With so much at stake, hacking and cyber espionage are likely to get worse, as AI gives hackers more powerful tools, while increasing incentives for state-backed groups to try to steal AI-related intellectual property. As distrust grows, Washington and Beijing will also find it hard, if not impossible, to cooperate in areas like preventing extremist groups from using AI in destructive ways, such as building bioweapons. "The costs of the AI Cold War are already high and will go much higher," said Paul Triolo, a former U.S. government analyst and current technology policy lead at business consulting firm DGA-Albright Stonebridge Group. "A U.S.-China AI arms race becomes a self-fulfilling prophecy, with neither side able to trust that the other would observe any restrictions on advanced AI capability development...."

The article includes an interesting observation from Helen Toner, director of strategy for Georgetown's Center for Security and Emerging Technology and a former OpenAI board member. Toner points out "We don't actually know" if boosting computing power with better chips will continue producing more-powerful AI models.

So "If performance plateaus," the Journal writes, "despite all the spending by OpenAI and others — a growing concern in Silicon Valley — China has a chance to compete."
AI

While Meta Crawls the Web for AI Training Data, Bruce Ediger Pranks Them with Endless Bad Data (bruceediger.com) 43

From the personal blog of interface expert Bruce Ediger: Early in March 2025, I noticed that a web crawler with a user agent string of

meta-externalagent/1.1 (+https://developers.facebook.com/docs/sharing/webmasters/crawler)

was hitting my blog's machine at an unreasonable rate.

I followed the URL and discovered this is what Meta uses to gather premium, human-generated content to train its LLMs. I found the rate of requests to be annoying.

I already have a PHP program that creates the illusion of an infinite website. I decided to answer any HTTP request that had "meta-externalagent" in its user agent string with the contents of a bork.php generated file...
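The mechanism Ediger describes can be sketched compactly. This is a minimal illustration in Python rather than his actual PHP (bork.php and its output are stand-ins): serve endless deterministic filler, full of links to more fake pages, to one crawler's user-agent string, and real content to everyone else.

```python
# Minimal sketch of an "infinite website" crawler trap: requests whose
# User-Agent contains "meta-externalagent" get generated filler pages that
# link to more generated pages; everyone else gets the real site.
import random
from http.server import BaseHTTPRequestHandler, HTTPServer

WORDS = ["condiments", "underwear", "celebs", "vintage", "premium"]

def bork_page(path: str) -> str:
    """Generate a plausible-looking page full of links to more fake pages."""
    random.seed(path)  # same path -> same page, like a real (huge) site
    body = " ".join(random.choice(WORDS) for _ in range(200))
    links = "".join(
        f'<a href="/{random.randrange(10**9)}">more</a> ' for _ in range(20)
    )
    return f"<html><body><p>{body}</p>{links}</body></html>"

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        agent = self.headers.get("User-Agent", "")
        if "meta-externalagent" in agent:
            page = bork_page(self.path).encode()
            self.send_response(200)  # swap in 404 to shoo the crawler instead
            self.send_header("Content-Type", "text/html")
            self.end_headers()
            self.wfile.write(page)
        else:
            self.send_response(200)
            self.end_headers()
            self.wfile.write(b"real content")

def serve(port: int = 8080) -> None:
    HTTPServer(("localhost", port), Handler).serve_forever()

# serve()  # blocks forever; left commented so the sketch stays importable
```

Seeding the generator with the request path makes each fake URL stable across visits, so the crawler sees a consistent (but boundless) site.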

This worked brilliantly. Meta ramped up to requesting 270,000 URLs on May 30 and 31, 2025...

After about 3 months, I got scared that Meta's insatiable consumption of Super Great Pages about condiments, underwear and circa 2010 C-List celebs would start costing me money. So I switched to giving "meta-externalagent" a 404 status code. I decided to see how long it would take one of the highest valued companies in the world to decide to go away.

The answer is 5 months.

AI

She Used ChatGPT To Win the Virginia Lottery, Then Donated Every Dollar 84

An anonymous reader quotes a report from the Washington Post: Winning the lottery isn't what brought Carrie Edwards her 15 minutes of fame. It was giving it all away. Standing alone in her kitchen one day in September, the Virginia woman was thunderstruck to discover she had won $150,000 in a Powerball drawing. As she was absorbing her windfall, she said, "I just heard as loud as you can hear God or whoever you believe in the universe just say, this is -- it's not your money." Then came a decision: She would donate it all to her three most cherished charities (source paywalled; alternative source). [...] Her journey to the lucky prize started when she walked into a 7-Eleven with a friend who wanted to buy two Powerball tickets. The jackpot for the Sept. 6 drawing was topping $1.7 billion, the second-largest amount ever. Edwards, 68, hardly ever played the lottery, but her friend was an active player who gave her two pieces of advice: Always buy a paper ticket, rather than getting them online. And the Powerball multiplier is a scam, don't do it. She ignored him on both counts.

She created a Virginia Lottery account on her phone. Then, instead of the typical strategies of using family birthdays and lucky numbers, she went to ChatGPT -- which she had only recently started using for research -- and asked, "Do you have any winning numbers for me?" "Luck is luck," replied the chatbot. Then it gave numbers that she plugged in -- paying the extra dollar for the Power Play to multiply anything she might win. She initially thought luck wasn't on her side when she didn't win the massive jackpot. But what she didn't realize is that she'd picked the "draw two" option, meaning her numbers were reentered for the next drawing. When she got a notification on her phone that she had won, she said, she thought it was a scam, or maybe she'd won something small, like $10. Just to satisfy her curiosity, she logged into her account and saw that she had matched four of the five numbers plus the Powerball in that second drawing. It would have been a $50,000 payout, but the multiplier tripled her winnings.
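The prize arithmetic in the story checks out. Under its numbers (the 4-of-5-plus-Powerball tier pays $50,000, and the Power Play multiplier drawn that night was 3x):

```python
# Prize arithmetic from the story: matching 4 of 5 white balls plus the
# Powerball pays $50,000, and the Power Play multiplier drawn that night
# (3x) scales that non-jackpot prize.
BASE_PRIZE = 50_000   # 4 + Powerball tier
MULTIPLIER = 3        # Power Play drawn for that night, per the story

winnings = BASE_PRIZE * MULTIPLIER
print(winnings)  # 150000, the amount Edwards won and donated
```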
AI

Sam Altman Celebrates ChatGPT Finally Following Em Dash Formatting Rules 74

An anonymous reader quotes a report from Ars Technica: On Thursday evening, OpenAI CEO Sam Altman posted on X that ChatGPT has started following custom instructions to avoid using em dashes. "Small-but-happy win: If you tell ChatGPT not to use em-dashes in your custom instructions, it finally does what it's supposed to do!" he wrote.

The post, which came two days after the release of OpenAI's new GPT-5.1 AI model, received mixed reactions from users who have struggled for years with getting the chatbot to follow specific formatting preferences. And this "small win" raises a very big question: If the world's most valuable AI company has struggled with controlling something as simple as punctuation use after years of trying, perhaps what people call artificial general intelligence (AGI) is farther off than some in the industry claim.
"The fact that it's been 3 years since ChatGPT first launched, and you've only just now managed to make it obey this simple requirement, says a lot about how little control you have over it, and your understanding of its inner workings," wrote one X user in a reply. "Not a good sign for the future."
Businesses

Krafton Launches Voluntary Resignation Program Weeks After Declaring 'AI-First Company' Future (pcgamer.com) 24

An anonymous reader shares a report: In October, PUBG and Subnautica 2 publisher Krafton announced that it would be undergoing a "complete reorganization" to become an "AI-first" company, planning to invest over 130 billion won ($88 million) in agentic AI infrastructure and deployment beginning in 2026. This week, as it boasts record-breaking quarterly profits, the Korean publisher has followed that strategic shift by launching a voluntary resignation program for its domestic employees, according to Business Korea reporting.

The program, announced internally, offers substantial buyouts for domestic Krafton employees based on their length of employment at the publisher. Severance packages range from 6 months' salary for employees with one year or less of service to 36 months' salary for employees who've worked at Krafton for over 11 years. The voluntary resignation program follows a November 4 earnings call in which Krafton announced a record quarterly profit of $717 million. During the call, Krafton CFO Bae Dong-geun indicated that Krafton had also halted hiring for new positions, telling investors that "excluding organizations developing original intellectual property and AI-related personnel, we have frozen hiring company-wide."

Social Networks

Jack Dorsey Funds diVine, a Vine Reboot That Includes Vine's Video Archive (techcrunch.com) 20

An anonymous reader quotes a report from TechCrunch: As generative AI content starts to fill our social apps, a project to bring back Vine's six-second looping videos is launching with Twitter co-founder Jack Dorsey's backing. On Thursday, a new app called diVine will give access to more than 100,000 archived Vine videos, restored from an older backup that was created before Vine's shutdown. The app won't just exist as a walk down memory lane; it will also allow users to create profiles and upload their own new Vine videos. However, unlike on traditional social media, where AI content is often haphazardly labeled, diVine will flag suspected generative AI content and prevent it from being posted. According to TechCrunch, a volunteer preservation group called the Archive Team saved Vine's content when it shut down in 2016. The only problem was that everything was stored in massive 40-50 GB binary blob files that were basically unusable for casual viewing.

Evan Henshaw-Plath (who goes by the name Rabble), an early Twitter employee and member of Jack Dorsey's nonprofit "and Other Stuff," dug into those backup files to try and salvage as much as he could. He spent months writing big-data extraction scripts, reverse-engineering how the archived binaries were structured, and reconstructing the original video files, old user info, view counts, and more. "I wasn't able to get all of them out, but I was able to get a lot out and basically reconstruct these Vines and these Vine users, and give each person a new user [profile] on this open network," he said.
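One plausible building block for scripts like those described above is signature-based file carving. This is an illustrative sketch, not Rabble's actual code: every MP4 begins with an `ftyp` box whose 4-byte big-endian length immediately precedes the `ftyp` tag, so scanning a raw blob for that tag yields candidate file-start offsets.

```python
# Illustrative file-carving sketch (not Rabble's actual extraction scripts):
# locate candidate MP4 starts inside a raw binary blob by finding 'ftyp'
# boxes, whose 4-byte big-endian size field sits just before the tag.
import struct

def find_mp4_offsets(blob: bytes) -> list[int]:
    """Return candidate start offsets of MP4 files inside a raw blob."""
    offsets = []
    pos = blob.find(b"ftyp")
    while pos != -1:
        if pos >= 4:
            (box_size,) = struct.unpack(">I", blob[pos - 4 : pos])
            # An ftyp box is small; a sane size filters out random matches.
            if 8 <= box_size <= 64:
                offsets.append(pos - 4)  # file starts at the size field
        pos = blob.find(b"ftyp", pos + 1)
    return offsets

# Two fake "MP4 headers" embedded in junk bytes:
blob = (b"junk" * 5 + struct.pack(">I", 24) + b"ftypisom" + b"\x00" * 40
        + struct.pack(">I", 20) + b"ftypmp42" + b"\x00" * 10)
print(find_mp4_offsets(blob))  # -> [20, 72]
```

A full recovery pipeline would then parse the subsequent `moov`/`mdat` boxes to find each file's end, which is where the months of reverse engineering come in.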

Rabble estimates that through this process he was able to successfully recover 150,000-200,000 Vine videos from around 60,000 creators. diVine then rebuilt user profiles on top of the decentralized Nostr protocol so creators can reclaim their accounts, request takedowns, or upload missing videos.

You can check out the app for yourself at diVine.video. It's available in beta form on both iOS and Android.
AI

LinkedIn Is Making It Easier To Search For People With AI 20

LinkedIn is rolling out an AI-powered people search tool that lets users find connections by describing what they need instead of relying on names or titles. For example, you can enter a more descriptive search, such as "Northwestern alumni who work in entertainment marketing," or even pose a question, like "Who can help me understand the US work visa system." The Verge reports: LinkedIn senior director of product management Rohan Rajiv tells The Verge that the platform will rank results based on the connections you might have with someone, as well as their relevance to your search. [...] LinkedIn is rolling out AI-powered people search to Premium users in the US starting today, but the platform plans on bringing it to all users soon.
Security

Chinese Hackers Used Anthropic's AI To Automate Cyberattacks (msn.com) 15

China's state-sponsored hackers used AI technology from Anthropic to automate break-ins of major corporations and foreign governments during a September hacking campaign, the company said Thursday. From a report: The effort focused on dozens of targets and involved a level of automation that Anthropic's cybersecurity investigators had not previously seen, according to Jacob Klein, the company's head of threat intelligence.

Hackers have been using AI for years now to conduct individual tasks such as crafting phishing emails or scanning the internet for vulnerable systems, but in this instance 80% to 90% of the attack was automated, with humans only intervening in a handful of decision points, Klein said.

The hackers conducted their attacks "literally with the click of a button, and then with minimal human interaction," Klein said. Anthropic disrupted the campaigns and blocked the hackers' accounts, but not before as many as four intrusions were successful. In one case, the hackers directed Anthropic's Claude AI tools to query internal databases and extract data independently.

Mozilla

Mozilla Launches AI Window for Firefox (mozilla.org) 42

Mozilla announced on Thursday that it is building an AI Window for Firefox, a new opt-in browsing mode that will let users interact with an AI assistant and chatbot. The feature will become one of three browsing experiences in Firefox alongside the existing classic and private windows. Users will be able to select which AI model they want to use in the AI Window, according to a post on the Mozilla Connect forum.

The company opened a waitlist for users who want to receive updates and be among the first to test the feature. Mozilla described the AI Window as an "intelligent and user-controlled space" that it is developing in the open through community feedback. Users who try the feature and decide against it can switch it off entirely.
AI

Reddit Cofounder Had a Bad Feeling About Giving Data To Sam Altman 29

Reddit cofounder Alexis Ohanian said he had serious doubts a decade ago about sharing the platform's data with Sam Altman. Ohanian recounted on the "Brew Markets" podcast that between 2015 and 2016, Altman asked Reddit to let him "aggressively scrape" the site's content. Altman had recently helped Reddit raise $50 million in a Series B round and was launching OpenAI as a nonprofit.

Ohanian described Altman as "very smart" and "incredibly cunning" but questioned whether he was "the most philanthropically minded guy." The Reddit cofounder said he "felt in my bones" the company should refuse the request and debated internally about it against Steve Huffman. Ohanian said he "lost that debate." Reddit and OpenAI announced a formal licensing deal in 2024.
AI

Russia's AI Robot Falls Seconds After Being Unveiled 112

Russia's first AI humanoid robot, AIdol, fell just seconds after its debut at a technology event in Moscow on Tuesday. "The robot was being led on stage to the soundtrack from the film 'Rocky,' before it suddenly lost its balance and fell," reports the BBC. "Assistants could then be seen scrambling to cover it with a cloth -- which ended up tangling in the process." Developers of AIdol blamed poor lighting and calibration issues for the collapse, saying the robot's stereo cameras are sensitive to light and the hall was dark.
Music

AI-Generated Song Tops Country Music Chart (go.com) 68

Slashdot readers Tablizer and fjo3 share news that an AI-generated country song has topped the U.S. sales chart for the first time this week. ABC News reports: The new country tune, "Walk My Walk" by Breaking Rust, recently hit No. 1 on Billboard's Country Digital Song Sales chart, reaching over 3 million streams on Spotify in less than a month. That success has garnered mixed reactions from music fans and artists alike, particularly on TikTok, where hundreds of users have posted videos addressing the tune and others discussing the music in the comments.

Billboard has acknowledged Breaking Rust is an AI act and said it is one of at least six to chart in the past few months alone. "Ultimately, this feels like an experiment to see just how far something like this can go and what happens in the future and in other disciplines of art as well," senior entertainment reporter Kelley L. Carter told ABC News. "AI artists won't require things that a real human artist will require, and once companies start considering it and looking at bottom lines, I think that's when artists should rightly be concerned about it," she added.

Businesses

Anthropic To Spend $50 Billion On US AI Infrastructure (cnbc.com) 20

An anonymous reader quotes a report from CNBC: Anthropic announced plans Wednesday to spend $50 billion on a U.S. artificial intelligence infrastructure build-out, starting with custom data centers in Texas and New York. The facilities, which will be designed to support the company's rapid enterprise growth and its long-term research agenda, will be developed in partnership with Fluidstack.

Fluidstack is an AI cloud platform that supplies large-scale graphics processing unit, or GPU, clusters to clients like Meta, Midjourney and Mistral. Additional sites are expected to follow, with the first locations going live in 2026. The project is expected to create 800 permanent jobs and more than 2,000 construction roles. The investment positions Anthropic as a major domestic player in physical AI infrastructure at a moment when policymakers are increasingly focused on U.S.-based compute capacity and technological sovereignty.
"We're getting closer to AI that can accelerate scientific discovery and help solve complex problems in ways that weren't possible before. Realizing that potential requires infrastructure that can support continued development at the frontier," said CEO Dario Amodei. "These sites will help us build more capable AI systems that can drive those breakthroughs, while creating American jobs."
AI

OpenAI's GPT-5.1 Brings Smarter Reasoning and More Personality Presets To ChatGPT (openai.com) 20

OpenAI today released GPT-5.1, an update to its flagship model line. The update includes two versions: GPT-5.1 Instant, which OpenAI says adds adaptive reasoning capabilities and improved instruction following, and GPT-5.1 Thinking, which adjusts its processing time based on query complexity.

The Thinking model responds roughly twice as fast on simple tasks and takes roughly twice as long on complex problems compared to its predecessor. The company began rolling out both models to paid subscribers and plans to extend access to free users in coming days. OpenAI added three personality presets -- Professional, Candid, and Quirky -- to its existing customization options. The previous GPT-5 models will remain available through a legacy dropdown menu for three months.
