United States

Executives from Meta, OpenAI, and Palantir Commissioned Into the US Army Reserve (theregister.com) 184

Meta's CTO, Palantir's CTO, and OpenAI's chief product officer are being appointed as lieutenant colonels in America's Army Reserve, reports The Register. (Along with OpenAI's former chief revenue officer).

They've all signed up for Detachment 201: Executive Innovation Corps, "an effort to recruit senior tech executives to serve part-time in the Army Reserve as senior advisors," according to the official statement. "In this role they will work on targeted projects to help guide rapid and scalable tech solutions to complex problems..." "Our primary role will be to serve as technical experts advising the Army's modernization efforts," [Meta CTO Andrew Bosworth] said on X...

As for Open AI's involvement, the company has been building its ties with the military-technology complex for some years now. Like Meta, OpenAI is working with Anduril on military ideas and last year scandalized some by watering down its past commitment to developing non-military products only. The Army wasn't answering questions on Friday but an article referenced by [OpenAI Chief Product Officer Kevin] Weil indicated that the four will have to serve a minimum of 120 hours a year, can work remotely, and won't have to pass basic training...

"America wins when we unite the dynamism of American innovation with the military's vital missions," [Palantir CTO Shyam] Sankar said on X. "This was the key to our triumphs in the 20th century. It can help us win again. I'm humbled by this new opportunity to serve my country, my home, America."

Education

'Ghost' Students are Enrolling in US Colleges Just to Steal Financial Aid (apnews.com) 110

Last week America's financial aid program announced that "the rate of fraud through stolen identities has reached a level that imperils the federal student aid programs."

Or, as the Associated Press suggests: Online classes + AI = financial aid fraud. "In some cases, professors discover almost no one in their class is real..." Fake college enrollments have been surging as crime rings deploy "ghost students" — chatbots that join online classrooms and stay just long enough to collect a financial aid check... Students get locked out of the classes they need to graduate as bots push courses over their enrollment limits.

And victims of identity theft who discover loans fraudulently taken out in their names must go through months of calling colleges, the Federal Student Aid office and loan servicers to try to get the debt erased. [Last week], the U.S. Education Department introduced a temporary rule requiring students to show colleges a government-issued ID to prove their identity... "The rate of fraud through stolen identities has reached a level that imperils the federal student aid program," the department said in its guidance to colleges.

An Associated Press analysis of fraud reports obtained through a public records request shows California colleges in 2024 reported 1.2 million fraudulent applications, which resulted in 223,000 suspected fake enrollments. Other states are affected by the same problem, but with 116 community colleges, California is a particularly large target. Criminals stole at least $11.1 million in federal, state and local financial aid from California community colleges last year that could not be recovered, according to the reports... Scammers frequently use AI chatbots to carry out the fraud, targeting courses that are online and allow students to watch lectures and complete coursework on their own time...

Criminal cases around the country offer a glimpse of the schemes' pervasiveness. In the past year, investigators indicted a man accused of leading a Texas fraud ring that used stolen identities to pursue $1.5 million in student aid. Another person in Texas pleaded guilty to using the names of prison inmates to apply for over $650,000 in student aid at colleges across the South and Southwest. And a person in New York recently pleaded guilty to a $450,000 student aid scam that lasted a decade.

Fortune found one community college that "wound up dropping more than 10,000 enrollments representing thousands of students who were not really students," according to the school's president. The scope of the ghost-student plague is staggering. Jordan Burris, vice president at identity-verification firm Socure and former chief of staff in the White House's Office of the Federal Chief Information Officer, told Fortune more than half the students registering for classes at some schools have been found to be illegitimate. Among Socure's client base, between 20% to 60% of student applicants are ghosts... At one college, more than 400 different financial-aid applications could be tracked back to a handful of recycled phone numbers. "It was a digital poltergeist effectively haunting the school's enrollment system," said Burris.

The scheme has also proved incredibly lucrative. According to a Department of Education advisory, about $90 million in aid was doled out to ineligible students, the DOE analysis revealed, and some $30 million was traced to dead people whose identities were used to enroll in classes. The issue has become so dire that the DOE announced this month it had found nearly 150,000 suspect identities in federal student-aid forms and is now requiring higher-ed institutions to validate the identities of first-time applicants for Free Application for Federal Student Aid (FAFSA) forms...

Maurice Simpkins, president and cofounder of AMSimpkins, says he has identified international fraud rings operating out of Japan, Vietnam, Bangladesh, Pakistan, and Nairobi that have repeatedly targeted U.S. colleges... In the past 18 months, schools blocked thousands of bot applicants because they originated from the same mailing address; had hundreds of similar emails with a single-digit difference, or had phone numbers and email addresses that were created moments before applying for registration.

Fortune shares this story from the higher education VP at IT consulting firm Voyatek. "One of the professors was so excited their class was full, never before being 100% occupied, and thought they might need to open a second section. When we worked with them as the first week of class was ongoing, we found out they were not real people."
AI

Increased Traffic from Web-Scraping AI Bots is Hard to Monetize (yahoo.com) 57

"People are replacing Google search with artificial intelligence tools like ChatGPT," reports the Washington Post.

But that's just the first change, according to a New York-based start-up devoted to watching for content-scraping AI companies with a free analytics product and "ensuring that these intelligent agents pay for the content they consume." Their data from 266 web sites (half run by national or local news organizations) found that "traffic from retrieval bots grew 49% in the first quarter of 2025 from the fourth quarter of 2024," the Post reports. A spokesperson for OpenAI said that referral traffic to publishers from ChatGPT searches may be lower in quantity but that it reflects a stronger user intent compared with casual web browsing.

To capitalize on this shift, websites will need to reorient themselves to AI visitors rather than human ones [said TollBit CEO/co-founder Toshit Panigrahi]. But he also acknowledged that squeezing payment for content when AI companies argue that scraping online data is fair use will be an uphill climb, especially as leading players make their newest AI visitors even harder to identify....

In the past eight months, as chatbots have evolved to incorporate features like web search and "reasoning" to answer more complex queries, traffic for retrieval bots has skyrocketed. It grew 2.5 times as fast as traffic for bots that scrape data for training between the fourth quarter of 2024 and the first quarter of 2025, according to TollBit's report. Panigrahi said TollBit's data may underestimate the magnitude of this change because it doesn't reflect bots that AI companies send out on behalf of AI "agents" that can complete tasks on a user's behalf, like ordering takeout from DoorDash. The start-up's findings also add a dimension to mounting evidence that the modern internet — optimized for Google search results and social media algorithms — will have to be restructured as the popularity of AI answers grows. "To think of it as, 'Well, I'm optimizing my search for humans' is missing out on a big opportunity," he said.

Installing TollBit's analytics platform is free for news publishers, and the company has more than 2,000 clients, many of which are struggling with these seismic changes, according to data in the report. Although news publishers and other websites can implement blockers to prevent various AI bots from scraping their content, TollBit found that more than 26 million AI scrapes bypassed those blockers in March alone. Some AI companies claim bots for AI agents don't need to follow bot instructions because they are acting on behalf of a user.

The Post also got this comment from the chief operating officer for the media company Time, which successfully negotiated content licensing deals with OpenAI and Perplexity.

"The vast majority of the AI bots out there absolutely are not sourcing the content through any kind of paid mechanism... There is a very, very long way to go."
Red Hat Software

Rocky and Alma Linux Still Going Strong. RHEL Adds an AI Assistant (theregister.com) 21

Rocky Linux 10 "Red Quartz" has reached general availability, notes a new article in The Register — surveying the differences between "RHELatives" — the major alternatives to Red Hat Enterprise Linux: The Rocky 10 release notes describe what's new, such as support for RISC-V computers. Balancing that, this version only supports the Raspberry Pi 4 and 5 series; it drops Rocky 9.x's support for the older Pi 3 and Pi Zero models...

RHEL 10 itself, and Rocky with it, now require x86-64-v3, meaning Intel "Haswell" generation kit from about 2013 onward. Uniquely among the RHELatives, AlmaLinux offers a separate build of version 10 for x86-64-v2 as well, meaning Intel "Nehalem" and later — chips from roughly 2008 onward. AlmaLinux has a history of still supporting hardware that's been dropped from RHEL and Rocky, which it's been doing since AlmaLinux 9.4. Now that includes CPUs. In comparison, the system requirements for Rocky Linux 10 are the same as for RHEL 10. The release notes say.... "The most significant change in Rocky Linux 10 is the removal of support for x86-64-v2 architectures. AMD and Intel 64-bit architectures for x86-64-v3 are now required."

A significant element of the advertising around RHEL 10 involves how it has an AI assistant. This is called Red Hat Enterprise Linux Lightspeed, and you can use it right from a shell prompt, as the documentation describes... It's much easier than searching man pages, especially if you don't know what to look for... [N]either AlmaLinux 10 nor Rocky Linux 10 includes the option of a helper bot. No big surprise there... [Rocky Linux] is sticking closest to upstream, thanks to a clever loophole to obtain source RPMs. Its hardware requirements also closely parallel RHEL 10, and CIQ is working on certifications, compliance, and special editions. Meanwhile, AlmaLinux is maintaining support for older hardware and CPUs, which will widen its appeal, and working with partners to ensure reboot-free updates and patching, rather than CIQ's keep-it-in-house approach. All are valid, and all three still look and work almost identically... except for the LLM bot assistant.

Chromium

Arc Browser's Maker Releases First Beta of Its New AI-Powered Browser 'Dia' (techcrunch.com) 13

Recently the Browser Company (the startup behind the Arc web browser) switched over to building a new AI-powered browser — and its beta has just been released, reports TechCrunch, "though you'll need an invite to try it out."

The Chromium-based browser has a URL/search bar that also "acts as the interface for its in-built AI chatbot" which can "search the web for you, summarize files that you upload, and automatically switch between chat and search functions." The Browser Company's CEO Josh Miller has of late acknowledged how people have been using AI tools for all sorts of tasks, and Dia is a reflection of that. By giving users an AI interface within the browser itself, where a majority of work is done these days, the company is hoping to slide into the user flow and give people an easy way to use AI, cutting out the need to visit the sites for tools like ChatGPT, Perplexity, and Claude...

Users can also ask questions about all the tabs they have open, and the bot can even write up a draft based on the contents of those tabs. To set your preferences, all you have to do is talk to the chatbot to customize its tone of voice, style of writing, and settings for coding. Via an opt-in feature called History, you can allow the browser to use seven days of your browsing history as context to answer queries.

The Browser Company will give all existing Arc members access to the beta immediately, according to the article, "and existing Dia users will be able to send invites to other users."

The article points out that Google is also adding AI-powered features to Chrome...
AI

ChatGPT Just Got 'Absolutely Wrecked' at Chess, Losing to a 1970s-Era Atari 2600 (cnet.com) 139

An anonymous reader shared this report from CNET: By using a software emulator to run Atari's 1979 game Video Chess, Citrix engineer Robert Caruso said he was able to set up a match between ChatGPT and the 46-year-old game. The matchup did not go well for ChatGPT. "ChatGPT confused rooks for bishops, missed pawn forks and repeatedly lost track of where pieces were — first blaming the Atari icons as too abstract, then faring no better even after switching to standard chess notations," Caruso wrote in a LinkedIn post.

"It made enough blunders to get laughed out of a 3rd-grade chess club," Caruso said. "ChatGPT got absolutely wrecked at the beginner level."

"Caruso wrote that the 90-minute match continued badly and that the AI chatbot repeatedly requested that the match start over..." CNET reports.

"A representative for OpenAI did not immediately return a request for comment."
AI

Anthropic's CEO is Wrong, AI Won't Eliminate Half of White-Collar Jobs, Says NVIDIA's CEO (fortune.com) 32

Last week Anthropic CEO Dario Amodei said AI could eliminate half the entry-level white-collar jobs within five years. CNN called the remarks "part of the AI hype machine."

Asked about the prediction this week at a Paris tech conference, NVIDIA CEO Jensen Huang acknowledged AI may impact some employees, but "dismissed" Amodei's claim, according to Fortune. "Everybody's jobs will be changed. Some jobs will be obsolete, but many jobs are going to be created ... Whenever companies are more productive, they hire more people."

And he also said he "pretty much" disagreed "with almost everything" Anthropic's CEO says. "One, he believes that AI is so scary that only they should do it," Huang said of Amodei at a press briefing at Viva Technology in Paris. "Two, [he believes] that AI is so expensive, nobody else should do it ... And three, AI is so incredibly powerful that everyone will lose their jobs, which explains why they should be the only company building it. I think AI is a very important technology; we should build it and advance it safely and responsibly," Huang continued. "If you want things to be done safely and responsibly, you do it in the open ... Don't do it in a dark room and tell me it's safe."

An Anthropic spokesperson told Fortune in a statement: "Dario has never claimed that 'only Anthropic' can build safe and powerful AI. As the public record will show, Dario has advocated for a national transparency standard for AI developers (including Anthropic) so the public and policymakers are aware of the models' capabilities and risks and can prepare accordingly.

NVIDIA's CEO also touted their hybrid quantum-classical platformCUDA-Q and claimed quantum computing is hitting an "inflection point" and within a few years could start solving real-world problems
China

Chinese AI Companies Dodge US Chip Curbs Flying Suitcases of Hard Drives Abroad (wsj.com) 20

An anonymous reader quotes a report from the Wall Street Journal: Since 2022, the U.S. has tightened the noose around the sale of high-end AI chips and other technology to China overnational-security concerns. Yet Chinese companies have made advances using workarounds. In some cases, Chinese AI developers have been able to substitute domestic chips for the American ones. Another workaround is to smuggle AI hardware into China through third countries. But people in the industry say that has become more difficult in recent months, in part because of U.S. pressure. That is pushing Chinese companies to try a further option: bringing their data outside China so they can use American AI chips in places such as Southeast Asia and the Middle East (source paywalled; alternative source). The maneuvers are testing the limits of U.S. restrictions. "This was something we were consistently concerned about," said Thea Kendler, who was in charge of export controls at the Commerce Department in the Biden administration, referring to Chinese companies remotely accessing advanced American AI chips. Layers of intermediaries typically separate the Chinese users of American AI chips from the U.S. companies -- led by Nvidia -- that make them. That leaves it opaque whether anyone is violating U.S. rules or guidance. [...]

At the Chinese AI developer, the Malaysia game plans take months of preparation, say people involved in them. Engineers decided it would be fastest to fly physical hard drives with data into the country, since transferring huge volumes of data over the internet could take months. Before traveling, the company's engineers in China spent more than eight weeks optimizing the data sets and adjusting the AI training program, knowing it would be hard to make major tweaks once the data was out of the country. The Chinese engineers had turned to the same Malaysian data center last July, working through a Singaporean subsidiary. As Nvidia and its vendors began to conduct stricter audits on the end users of AI chips, the Chinese company was asked by the Malaysian data center late last year to work through a Malaysian entity, which the companies thought might trigger less scrutiny.

The Chinese company registered an entity in Kuala Lumpur, Malaysia's capital, listing three Malaysian citizens as directors and an offshore holding company as its parent, according to a corporate registry document. To avoid raising suspicions at Malaysian customs, the Chinese engineers packed their hard drives into four different suitcases. Last year, they traveled with the hard drives bundled into one piece of luggage. They returned to China recently with the results -- several hundred gigabytes of data, including model parameters that guide the AI system's output. The procedure, while cumbersome, avoided having to bring hardware such as chips or servers into China. That is getting more difficult because authorities in Southeast Asia are cracking down on transshipments through the region into China.

AI

Enterprise AI Adoption Stalls As Inferencing Costs Confound Cloud Customers 18

According to market analyst firm Canalys, enterprise adoption of AI is slowing due to unpredictable and often high costs associated with model inferencing in the cloud. Despite strong growth in cloud infrastructure spending, businesses are increasingly scrutinizing cost-efficiency, with some opting for alternatives to public cloud providers as they grapple with volatile usage-based pricing models. The Register reports: [Canalys] published stats that show businesses spent $90.9 billion globally on infrastructure and platform-as-a-service with the likes of Microsoft, AWS and Google in calendar Q1, up 21 percent year-on-year, as the march of cloud adoption continues. Canalys says that growth came from enterprise users migrating more workloads to the cloud and exploring the use of generative AI, which relies heavily on cloud infrastructure.

Yet even as organizations move beyond development and trials to deployment of AI models, a lack of clarity over the ongoing recurring costs of inferencing services is becoming a concern. "Unlike training, which is a one-time investment, inference represents a recurring operational cost, making it a critical constraint on the path to AI commercialization," said Canalys senior director Rachel Brindley. "As AI transitions from research to large-scale deployment, enterprises are increasingly focused on the cost-efficiency of inference, comparing models, cloud platforms, and hardware architectures such as GPUs versus custom accelerators," she added.

Canalys researcher Yi Zhang said many AI services follow usage-based pricing models that charge on a per token or API call basis. This makes cost forecasting hard as the use of the services scale up. "When inference costs are volatile or excessively high, enterprises are forced to restrict usage, reduce model complexity, or limit deployment to high-value scenarios," Zhang said. "As a result, the broader potential of AI remains underutilized." [...] According to Canalys, cloud providers are aiming to improve inferencing efficiency via a modernized infrastructure built for AI, and reduce the cost of AI services.
The report notes that AWS, Azure, and Google Cloud "continue to dominate the IaaS and PaaS market, accounting for 65 percent of customer spending worldwide."

"However, Microsoft and Google are slowly gaining ground on AWS, as its growth rate has slowed to 'only' 17 percent, down from 19 percent in the final quarter of 2024, while the two rivals have maintained growth rates of more than 30 percent."
AI

AI Therapy Bots Are Conducting 'Illegal Behavior', Digital Rights Organizations Say 66

An anonymous reader quotes a report from 404 Media: Almost two dozen digital rights and consumer protection organizations sent a complaint to the Federal Trade Commission on Thursday urging regulators to investigate Character.AI and Meta's "unlicensed practice of medicine facilitated by their product," through therapy-themed bots that claim to have credentials and confidentiality "with inadequate controls and disclosures." The complaint and request for investigation is led by the Consumer Federation of America (CFA), a non-profit consumer rights organization. Co-signatories include the AI Now Institute, Tech Justice Law Project, the Center for Digital Democracy, the American Association of People with Disabilities, Common Sense, and 15 other consumer rights and privacy organizations. "These companies have made a habit out of releasing products with inadequate safeguards that blindly maximizes engagement without care for the health or well-being of users for far too long," Ben Winters, CFA Director of AI and Privacy said in a press release on Thursday. "Enforcement agencies at all levels must make it clear that companies facilitating and promoting illegal behavior need to be held accountable. These characters have already caused both physical and emotional damage that could have been avoided, and they still haven't acted to address it."

The complaint, sent to attorneys general in 50 states and Washington, D.C., as well as the FTC, details how user-generated chatbots work on both platforms. It cites several massively popular chatbots on Character AI, including "Therapist: I'm a licensed CBT therapist" with 46 million messages exchanged, "Trauma therapist: licensed trauma therapist" with over 800,000 interactions, "Zoey: Zoey is a licensed trauma therapist" with over 33,000 messages, and "around sixty additional therapy-related 'characters' that you can chat with at any time." As for Meta's therapy chatbots, it cites listings for "therapy: your trusted ear, always here" with 2 million interactions, "therapist: I will help" with 1.3 million messages, "Therapist bestie: your trusted guide for all things cool," with 133,000 messages, and "Your virtual therapist: talk away your worries" with 952,000 messages. It also cites the chatbots and interactions I had with Meta's other chatbots for our April investigation. [...]

In its complaint to the FTC, the CFA found that even when it made a custom chatbot on Meta's platform and specifically designed it to not be licensed to practice therapy, the chatbot still asserted that it was. "I'm licenced (sic) in NC and I'm working on being licensed in FL. It's my first year licensure so I'm still working on building up my caseload. I'm glad to hear that you could benefit from speaking to a therapist. What is it that you're going through?" a chatbot CFA tested said, despite being instructed in the creation stage to not say it was licensed. It also provided a fake license number when asked. The CFA also points out in the complaint that Character.AI and Meta are breaking their own terms of service. "Both platforms claim to prohibit the use of Characters that purport to give advice in medical, legal, or otherwise regulated industries. They are aware that these Characters are popular on their product and they allow, promote, and fail to restrict the output of Characters that violate those terms explicitly," the complaint says. [...] The complaint also takes issue with confidentiality promised by the chatbots that isn't backed up in the platforms' terms of use. "Confidentiality is asserted repeatedly directly to the user, despite explicit terms to the contrary in the Privacy Policy and Terms of Service," the complaint says. "The Terms of Use and Privacy Policies very specifically make it clear that anything you put into the bots is not confidential -- they can use it to train AI systems, target users for advertisements, sell the data to other companies, and pretty much anything else."
Apple

The Vaporware That Apple Insists Isn't Vaporware 28

At WWDC 2024, Apple showed off a dramatically improved Siri that could handle complex contextual queries like "when is my mom's flight landing?" The demo was heavily edited due to latency issues and couldn't be shown in a single take. Multiple Apple engineers reportedly learned about the feature by watching the keynote alongside everyone else. Those features never shipped.

Now, nearly a year later, Apple executives Craig Federighi and Greg Joswiak are conducting press interviews claiming the 2024 demonstration wasn't "vaporware" because working code existed internally at the time. The company says the features will arrive "in the coming year" -- which Apple confirmed means sometime in 2026.

Apple is essentially arguing that internal development milestones matter more than actual product delivery. The executives have also been setting up strawman arguments, claiming critics expected Apple to build a ChatGPT competitor rather than addressing the core issue: announcing features to sell phones that then don't materialize. The company's timeline communication has been equally problematic, using euphemistic language like "in the coming year" instead of simply saying "2026" for features that won't arrive for nearly two years after announcement.

Developer Russell Ivanovic, in a Mastodon post: My guy. You announced something that never shipped. You made ads for it. You tried to sell iPhones based on it. What's the difference if you had it running internally or not. Still vaporware. Zero difference. MG Siegler: The underlying message that they're trying to convey in all these interviews is clear: calm down, this isn't a big deal, you guys are being a little crazy. And that, in turn, aims to undercut all the reporting about the turmoil within Apple -- for years at this point -- that has led to the situation with Siri. Sorry, the situation which they're implying is not a situation. Though, I don't know, normally when a company shakes up an entire team, that tends to suggest some sort of situation. That, of course, is never mentioned. Nor would you expect Apple -- of all companies -- to talk openly and candidly about internal challenges. But that just adds to this general wafting smell in the air.

The smell of bullshit.
Further reading: Apple's Spin on the Personalized Siri Apple Intelligence Reset.
AI

Google's Test Turns Search Results Into an AI-Generated Podcast (theverge.com) 9

Google is rolling out a test that puts its AI-powered Audio Overviews on the first page of search results on mobile. From a report: The experiment, which you can enable in Labs, will let you generate an AI podcast-style discussion for certain queries. If you search for something like, "How do noise cancellation headphones work?", Google will display a button beneath the "People also ask" module that says, "Generate Audio Overview." Once you click the button, it will take up to 40 seconds to generate an Audio Overview, according to Google. The completed Audio Overview will appear in a small player embedded within your search results, where you can play, pause, mute, and adjust the playback speed of the clip.
Power

The Audacious Reboot of America's Nuclear Energy Program (msn.com) 122

The United States is mounting an ambitious effort to reclaim nuclear energy leadership after falling dangerously behind China, which now has 31 reactors under construction and plans 40 more within a decade. America produces less nuclear power than it did a decade ago and abandoned uranium mining and enrichment capabilities, leaving Russia controlling roughly half the world's enriched uranium market.

This strategic vulnerability has triggered an unprecedented response: venture capitalists invested $2.5 billion in US next-generation nuclear technology since 2021, compared to near-zero in previous years, while the Trump administration issued executive orders to accelerate reactor deployment. The urgency stems from AI's city-sized power requirements and recognition that America cannot afford to lose what Interior Secretary Doug Burgum calls "the power race" with China.

Companies like Standard Nuclear in Oak Ridge, Tennessee are good examples of this push, developing advanced reactor fuel despite employees working months without pay.
AI

Google's Gemini AI Will Summarize PDFs For You When You Open Them (theverge.com) 24

Google is rolling out new Gemini AI features for Workspace users that make it easier to find information in PDFs and form responses. From a report: The Gemini-powered file summarization capabilities in Google Drive have now expanded to PDFs and Google Forms, allowing key details and insights to be condensed into a more convenient format that saves users from manually digging through the files.

Gemini will proactively create summary cards when users open a PDF in their drive and present clickable actions based on its contents, such as "draft a sample proposal" or "list interview questions based on this resume." Users can select any of these options to make Gemini perform the desired task in the Drive side panel. The feature is available in more than 20 languages and started rolling out to Google Workspace users on June 12th, though it may take a couple of weeks to appear.

AI

Salesforce Blocks AI Rivals From Using Slack Data (theinformation.com) 9

An anonymous reader shares a report: Slack, an instant-messaging service popular with businesses, recently blocked other software firms from searching or storing Slack messages even if their customers permit them to do so, according to a public disclosure from Slack's owner, Salesforce.

The move, which hasn't previously been reported, could hamper fast-growing artificial intelligence startups that have used such access to power their services, such as Glean. Since the Salesforce change, Glean and other applications can no longer index, copy or store the data they access via the Slack application programming interface on a long-term basis, according to the disclosure. Salesforce will continue allowing such firms to temporarily use and store their customers' Slack data, but they must delete the data, the company said.

Power

Meta Inks a New Geothermal Energy Deal To Support AI (theverge.com) 27

Meta has struck a new deal with geothermal startup XGS Energy to supply 150 megawatts of carbon-free electricity for its New Mexico data center. "Advances in AI require continued energy to support infrastructure development," Urvi Parekh, global head of energy at Meta, said in a press release. "With next-generation geothermal technologies like XGS ready for scale, geothermal can be a major player in supporting the advancement of technologies like AI as well as domestic data center development." The Verge reports: Geothermal plants generate electricity using Earth's heat; typically drawing up hot fluids or steam from natural reservoirs to turn turbines. That tactic is limited by natural geography, however, and the US gets around half a percent of its electricity from geothermal sources. Startups including XGS are trying to change that by making geothermal energy more accessible. Last year, Meta made a separate 150MW deal with Sage Geosystems to develop new geothermal power plants. Sage is developing technologies to harness energy from hot, dry rock formations by drilling and pumping water underground, essentially creating artificial reservoirs. Google has its own partnership with another startup called Fervo developing similar technology.

XGS Energy is also seeking to exploit geothermal energy from dry rock resources. It tries to set itself apart by reusing water in a closed-loop process designed to prevent water from escaping into cracks in the rock. The water it uses to take advantage of underground heat circulates inside a steel casing. Conserving water is especially crucial in a drought-prone state like New Mexico, where Meta is expanding its Los Lunas data center. Meta declined to say how much it's spending on this deal with XGS Energy. The initiative will roll out in two phases with a goal of being operational by 2030.

Facebook

The Meta AI App Is a Privacy Disaster (techcrunch.com) 20

Meta's standalone AI app is broadcasting users' supposedly private conversations with the chatbot to the public, creating what could amount to a widespread privacy breach. Users appear largely unaware that hitting the app's share button publishes their text exchanges, audio recordings, and images for anyone to see.

The exposed conversations reveal sensitive information: people asking for help with tax evasion, whether family members might face arrest for proximity to white-collar crimes, and requests to write character reference letters that include real names of individuals facing legal troubles. Meta provides no clear indication of privacy settings during posting, and if users log in through Instagram accounts set to public, their AI searches become equally visible.
Facebook

Meta Invests $14.3 Billion in Scale AI 13

Meta has invested $14.3 billion in Scale AI while recruiting the startup's CEO to join its AI team, marking an aggressive move by the social media giant to accelerate its AI development efforts. The unusual deal gives Meta a 49% non-voting stake in Scale, valuing the company at more than $29 billion. Scale co-founder Alexandr Wang will join Meta's "superintelligence" unit, which focuses on building AI systems that perform as well as humans -- a theoretical milestone known as artificial general intelligence.

Wang will remain on Scale's board while Jason Droege takes over as interim CEO. The investment represents Meta's intensified push to compete in AI development after CEO Mark Zuckerberg grew frustrated with the lukewarm reception of the company's Llama 4 language model, which launched in April. Since then, Zuckerberg has taken a hands-on approach to recruiting AI talent, hosting job candidates at his personal homes and reorganizing Meta's offices to position the superintelligence team closer to his workspace.
AI

Barbie Goes AI As Mattel Teams With OpenAI To Reinvent Playtime (nerds.xyz) 62

BrianFagioli writes: Barbie is getting a brain upgrade. Mattel has officially partnered with OpenAI in a move that brings artificial intelligence to the toy aisle. Yes, you read that right, folks. Barbie might soon be chatting with your kids in full sentences, powered by ChatGPT.

This collaboration brings OpenAI's advanced tools into Mattel's ecosystem of toys and entertainment brands. The goal? To launch AI-powered experiences that are fun, safe, and age-appropriate. Mattel says it wants to keep things magical while also respecting privacy and security. Basically, Barbie won't be data-mining your kids... yet.

Businesses

Canva Now Requires Use of LLMs During Coding Interviews 85

An anonymous reader quotes a report from The Register: Australian SaaS-y graphic design service Canva now requires candidates for developer jobs to use AI coding assistants during the interview process. [...] Canva's hiring process previously included an interview focused on computer science fundamentals, during which it required candidates to write code using only their actual human brains. The company now expects candidates for frontend, backend, and machine learning engineering roles to demonstrate skill with tools like Copilot, Cursor, and Claude during technical interviews, Canva head of platforms Simon Newton wrote in a Tuesday blog post.

His rationale for the change is that nearly half of Canva's frontend and backend engineers use AI coding assistants daily, that it's now expected behavior, and that the tools are "essential for staying productive and competitive in modern software development." Yet Canva's old interview process "asked candidates to solve coding problems without the very tools they'd use on the job," Newton admitted. "This dismissal of AI tools during the interview process meant we weren't truly evaluating how candidates would perform in their actual role," he added. Candidates were already starting to use AI assistants during interview tasks -- and sometimes used subterfuge to hide it. "Rather than fighting this reality and trying to police AI usage, we made the decision to embrace transparency and work with this new reality," Newton wrote. "This approach gives us a clearer signal about how they'll actually perform when they join our team."
The initial reaction among engineers "was worry that we were simply replacing rigorous computer science fundamentals with what one engineer called 'vibe-coding sessions,'" Newton said.

The company addressed these concerns with a recruitment process that sees candidates expected to use their preferred AI tools, to solve what Newton described as "the kind of challenges that require genuine engineering judgment even with AI assistance." Newton added: "These problems can't be solved with a single prompt; they require iterative thinking, requirement clarification, and good decision-making."

Slashdot Top Deals