Android

Google Launches NotebookLM App For Android and iOS 26

Google has launched the NotebookLM app for Android and iOS, offering a native mobile experience with offline support, audio overviews, and integration into the system share sheet for adding sources like PDFs and YouTube videos. 9to5Google reports: This native experience starts on a homepage of your notebooks with filters at the top for Recent, Shared, Title, and Downloaded. The app features a light and dark mode based on your device's system theme with no manual toggle. Each colorful card features the notebook name, emoji, number of sources, and date, as well as a play button for Audio Overviews. There's background playback and offline support for the podcast-style experience (the fullscreen player has a nice glow), while you can "Join" the AI hosts (in beta) to ask follow-up questions.

You get a "Create new" button at the bottom of the list to add PDFs, websites, YouTube videos, and text. Notably, the NotebookLM app will appear in the Android and iOS share sheet to quickly add sources. When you open a notebook, there's a bottom bar for the list of Sources, Chat Q&A, and Studio. It's similar to the current mobile website, with the native client letting users ditch the Progressive Web App. Out of the gate, there are phone and (straightforward) tablet interfaces.
You can download the app for iOS and Android from their respective app stores.
Cloud

xAI's Grok 3 Comes To Microsoft Azure (techcrunch.com) 11

An anonymous reader quotes a report from TechCrunch: Microsoft on Monday became one of the first hyperscalers to provide managed access to Grok, the AI model developed by billionaire Elon Musk's AI startup, xAI. Available through Microsoft's Azure AI Foundry platform, Grok -- specifically Grok 3 and Grok 3 mini -- will "have all the service-level agreements Azure customers expect from any Microsoft product," says Microsoft. They'll also be billed directly by Microsoft, as is the case with the other models hosted in Azure AI Foundry. [...] The Grok 3 and Grok 3 mini models in Azure AI Foundry are decidedly more locked down than the Grok models on X. They also come with additional data integration, customization, and governance capabilities not necessarily offered by xAI through its API.
AI

AI is More Persuasive Than People in Online Debates (nature.com) 85

Chatbots are more persuasive in online debates than people -- especially when they are able to personalize their arguments using information about their opponent. From a report: The finding, published in Nature Human Behaviour on 19 May, highlights how large language models (LLMs) could be used to influence people's opinions, for example in political campaigns or targeted advertising.

"Obviously as soon as people see that you can persuade people more with LLMs, they're going to start using them," says study co-author Francesco Salvi, a computational scientist at the Swiss Federal Institute of Technology in Lausanne (EPFL). "I find it both fascinating and terrifying." Research has already shown that artificial intelligence (AI) chatbots can make people change their minds, even about conspiracy theories, but it hasn't been clear how persuasive they are in comparison to humans.
GPT-4 was 64.4% more persuasive than humans in one-to-one debates, the study found.
AI

LinkedIn Executive Warns AI Threatens Entry-Level Jobs as Graduate Unemployment Rises (nytimes.com) 36

AI is eroding entry-level positions across multiple industries, threatening the traditional career ladder for young professionals, LinkedIn's chief economic opportunity officer warned Monday. College graduate unemployment has risen 30% since September 2022, compared to 18% for workers overall, according to LinkedIn data. The company's research shows Generation Z workers expressing greater pessimism about their futures than any other age group.

"Breaking first is the bottom rung of the career ladder," wrote Aneesh Raman in a New York Times column, citing examples across technology, law, and retail where AI is replacing tasks traditionally assigned to junior workers. A LinkedIn survey of 3,000 executives found 63% believe AI will eventually handle mundane entry-level tasks, with professionals holding advanced degrees likely facing greater disruption than those without.

Some firms are adapting by redesigning roles. KPMG now assigns recent graduates tax work previously reserved for more experienced employees, while Macfarlanes has early-career lawyers interpreting complex contracts once handled by senior colleagues. Though economic uncertainty also impacts hiring, Raman warned that delayed career entry can cost young workers approximately $22,000 in earnings over a decade.
Microsoft

Microsoft's Plan To Fix the Web: Letting Every Website Run AI Search for Cheap (theverge.com) 22

Microsoft has announced NLWeb, an open protocol designed to democratize AI-powered search capabilities for websites and apps. Developed by Microsoft technical fellow Ramanathan V. Guha, who previously created RSS and Schema.org, NLWeb allows site owners to implement ChatGPT-style natural language search with minimal code. The protocol enables websites to process complex queries like "spicy and crunchy appetizers for Diwali" or "jackets warm enough for Quebec," requiring only an AI model, some code, and the site's own data.

During his demonstration to news outlet The Verge, Guha showed how NLWeb remembers user preferences, such as dietary restrictions, for future interactions. "It's a protocol, and the protocol is a way of asking a natural-language question, and the answer comes back in structured form," explained Guha, who argues the approach is significantly cheaper than traditional search methods that require extensive web crawling and indexing. Microsoft is partnering with publishers and companies including TripAdvisor, Eventbrite, and Shopify to implement NLWeb, though Guha acknowledges the challenge of achieving widespread adoption in a web that historically tends toward centralization.
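The article doesn't spell out NLWeb's wire format, but the shape Guha describes (a natural-language question goes in, a structured answer comes back) can be sketched roughly as follows. The field names, the toy keyword ranker standing in for the site's AI model, and the sample catalog are all illustrative assumptions, not NLWeb's actual schema:

```python
# Hypothetical sketch of a natural-language search exchange: the request
# carries a free-form question plus remembered preferences, and the answer
# is a structured, ranked list. Names here are illustrative only.
from dataclasses import dataclass

@dataclass
class NLQuery:
    question: str    # free-form natural language, e.g. "spicy and crunchy appetizers"
    context: dict    # remembered preferences, e.g. dietary restrictions

@dataclass
class NLResult:
    title: str
    url: str
    score: float     # relevance as judged by the site's own model

def answer(query: NLQuery, catalog: list[dict]) -> list[NLResult]:
    """Toy ranker standing in for the site's AI model: keyword overlap."""
    terms = set(query.question.lower().split())
    scored = []
    for item in catalog:
        overlap = len(terms & set(item["title"].lower().split()))
        if overlap:
            scored.append(NLResult(item["title"], item["url"], float(overlap)))
    return sorted(scored, key=lambda r: -r.score)
```

The point of the structured response is that a site (or an agent calling the site) can act on the answer programmatically instead of scraping a rendered page, which is where the claimed savings over crawl-and-index search come from.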
AI

How Miami Schools Are Leading 100,000 Students Into the A.I. Future 63

Miami-Dade County Public Schools, the nation's third-largest school district, is now deploying Google's Gemini chatbots to more than 105,000 high school students -- marking the largest U.S. school district AI deployment to date. This represents a dramatic reversal from just two years ago when the district blocked such tools over cheating and misinformation concerns.

The initiative follows President Trump's recent executive order promoting AI integration "in all subject areas" from kindergarten through 12th grade. District officials spent months testing various chatbots for accuracy, privacy, and safety before selecting Google's platform.
Australia

New South Wales Education Department Caught Unaware After Microsoft Teams Began Collecting Students' Biometric Data (theguardian.com) 47

New submitter optical_phiber writes: In March 2025, the New South Wales (NSW) Department of Education discovered that Microsoft Teams had begun collecting students' voice and facial biometric data without their prior knowledge. This occurred after Microsoft enabled a Teams feature called 'voice and face enrollment' by default, which creates biometric profiles to enhance meeting experiences and transcriptions via its Copilot AI tool.

The NSW department learned of the data collection a month after it began and promptly disabled the feature and deleted the data within 24 hours. However, the department did not disclose how many individuals were affected or whether they were notified. Despite Microsoft's policy of retaining data only while the user is enrolled and deleting it within 90 days of account deletion, privacy experts have raised serious concerns. Rys Farthing of Reset Tech Australia criticized the unnecessary collection of children's data, warning of the long-term risks and calling for stronger protections.

Power

Danes Are Finally Going Nuclear. They Have To, Because of All Their Renewables (telegraph.co.uk) 178

"The Danish government plans to evaluate the prospect of beginning a nuclear power programme," reports the Telegraph, noting that this week Denmark lifted a nuclear power ban imposed 40 years ago. Unlike its neighbours in Sweden and Germany, Denmark has never had a civil nuclear power programme. It has only ever had three small research reactors, the last of which closed in 2001. Most of the renewed interest in nuclear seen around the world stems from the expected growth in electricity demand from AI data centres, but Denmark is different. The Danes are concerned about possible blackouts similar to the one that struck Iberia recently. Like Spain and Portugal, Denmark is heavily dependent on weather-based renewable energy which is not very compatible with the way power grids operate... ["The spinning turbines found in fossil-fuelled energy systems provide inertia and act as a shock absorber to stabilise the grid during sudden changes in supply or demand," explains a diagram in the article, while solar and wind energy provide no inertia.]

The Danish government is worried about how it will continue to decarbonise its power grid if it closes all of its fossil fuel generators leaving minimal inertia. There are only three realistic routes to decarbonisation that maintain physical inertia on the grid: hydropower, geothermal energy and nuclear. Hydro and geothermal depend on geographic and geological features that not every country possesses. While renewable energy proponents argue that new types of inverters could provide synthetic inertia, trials have so far not been particularly successful and there are economic challenges that are difficult to resolve.
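The "shock absorber" role of inertia can be made concrete with the textbook per-unit swing equation for a synchronous grid (standard simplified form, ignoring damping; not taken from the article):

```latex
% Per-unit swing equation: how fast grid frequency f moves after an imbalance
\frac{2H}{f_0}\,\frac{df}{dt} = P_m - P_e
```

Here H is the stored rotational energy of the spinning machines relative to their rating (in seconds), f_0 the nominal frequency, and P_m - P_e the gap between mechanical input and electrical load. When a generator trips, P_m - P_e goes negative and frequency falls at a rate inversely proportional to H, buying time for controls to respond; inverter-connected solar and wind contribute essentially no H, which is why a grid dominated by them loses this buffer against sudden changes.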

Denmark is realising that in the absence of large-scale hydroelectric or geothermal energy, it may have little choice other than to re-visit nuclear power if it is to maintain a stable, low carbon electricity grid.

Thanks to long-time Slashdot reader schwit1 for sharing the news.
AI

Why We're Unlikely to Get Artificial General Intelligence Any Time Soon (msn.com) 261

OpenAI CEO Sam Altman believes Artificial General Intelligence could arrive within the next few years. But the speculations of some technologists "are getting ahead of reality," writes the New York Times, adding that many scientists "say no one will reach AGI without a new idea — something beyond the powerful neural networks that merely find patterns in data. That new idea could arrive tomorrow. But even then, the industry would need years to develop it." "The technology we're building today is not sufficient to get there," said Nick Frosst, a founder of the AI startup Cohere who previously worked as a researcher at Google and studied under the most revered AI researcher of the last 50 years. "What we are building now are things that take in words and predict the next most likely word, or they take in pixels and predict the next most likely pixel. That's very different from what you and I do." In a recent survey of the Association for the Advancement of Artificial Intelligence, a 40-year-old academic society that includes some of the most respected researchers in the field, more than three-quarters of respondents said the methods used to build today's technology were unlikely to lead to AGI.

Opinions differ in part because scientists cannot even agree on a way of defining human intelligence, arguing endlessly over the merits and flaws of IQ tests and other benchmarks. Comparing our own brains to machines is even more subjective. This means that identifying AGI is essentially a matter of opinion.... And scientists have no hard evidence that today's technologies are capable of performing even some of the simpler things the brain can do, like recognizing irony or feeling empathy. Claims of AGI's imminent arrival are based on statistical extrapolations — and wishful thinking. According to various benchmark tests, today's technologies are improving at a consistent rate in some notable areas, like math and computer programming. But these tests describe only a small part of what people can do.

Humans know how to deal with a chaotic and constantly changing world. Machines struggle to master the unexpected — the challenges, small and large, that do not look like what has happened in the past. Humans can dream up ideas that the world has never seen. Machines typically repeat or enhance what they have seen before. That is why Frosst and other sceptics say pushing machines to human-level intelligence will require at least one big idea that the world's technologists have not yet dreamed up. There is no way of knowing how long that will take. "A system that's better than humans in one way will not necessarily be better in other ways," Harvard University cognitive scientist Steven Pinker said. "There's just no such thing as an automatic, omniscient, omnipotent solver of every problem, including ones we haven't even thought of yet. There's a temptation to engage in a kind of magical thinking. But these systems are not miracles. They are very impressive gadgets."

While Google's AlphaGo could beat humans in a game with "a small, limited set of rules," the article points out that the real world "is bounded only by the laws of physics. Modelling the entirety of the real world is well beyond today's machines, so how can anyone be sure that AGI — let alone superintelligence — is just around the corner?" And they offer this alternative perspective from Matteo Pasquinelli, a professor of the philosophy of science at Ca' Foscari University in Venice, Italy.

"AI needs us: living beings, producing constantly, feeding the machine. It needs the originality of our ideas and our lives."
AI

When a Company Does Job Interviews with a Malfunctioning AI - and Then Rejects You (slate.com) 51

IBM laid off "a couple hundred" HR workers and replaced them with AI agents. "It's becoming a huge thing," says Mike Peditto, a Chicago-area consultant with 15 years of experience advising companies on hiring practices. He tells Slate "I do think we're heading to where this will be pretty commonplace." Although A.I. job interviews have been happening since at least 2023, the trend has received a surge of attention in recent weeks thanks to several viral TikTok videos in which users share videos of their A.I. bots glitching. Although some of the videos were fakes posted by a creator whose bio warns that his content is "all satire," some are authentic — like that of Kendiana Colin, a 20-year-old student at Ohio State University who had to interact with an A.I. bot after she applied for a summer job at a stretching studio outside Columbus. In a clip she posted online earlier this month, Colin can be seen conducting a video interview with a smiling white brunette named Alex, who can't seem to stop saying the phrase "vertical-bar Pilates" in an endless loop...

Representatives at Apriora, the startup company founded in 2023 whose software Colin was forced to engage with, did not respond to a request for comment. But founder Aaron Wang told Forbes last year that the software allowed companies to screen more talent for less money... (Apriora's website claims that the technology can help companies "hire 87 percent faster" and "interview 93 percent cheaper," but it's not clear where those stats come from or what they actually mean.)

Colin (first interviewed by 404 Media) calls the experience dehumanizing — wondering why she was told to dress professionally, since "They had me going the extra mile just to talk to a robot." And after the interview, the robot — and the company — then ghosted her with no further contact. "It was very disrespectful and a waste of time."

Houston resident Leo Humphries also "donned a suit and tie in anticipation for an interview" in which the virtual recruiter immediately got stuck repeating the same phrase. Although Humphries tried in vain to alert the bot that it was broken, the interview ended only when the A.I. program thanked him for "answering the questions" and offering "great information" — despite his not being able to provide a single response. In a subsequent video, Humphries said that within an hour he had received an email, addressed to someone else, that thanked him for sharing his "wonderful energy and personality" but let him know that the company would be moving forward with other candidates.
Youtube

YouTube Announces Gemini AI Feature to Target Ads When Viewers are Most Engaged (techcrunch.com) 123

A new YouTube tool will let advertisers use Google's Gemini AI model to target ads to viewers when they're most engaged, reports CNBC: Peak Points has the potential to enable more impressions and a higher click-through rate on YouTube, a primary metric that determines how creators earn money on the video platform... Peak Points is currently in a pilot program and will be rolling out over the rest of the year.
The product "aims to benefit advertisers by using a tactic that aims to grab users' attention right when they're most invested in the content," reports TechCrunch: This approach appears to be similar to a strategy called emotion-based targeting, where advertisers place ads that align with the emotions evoked by the video. It's believed that when viewers experience heightened emotional states, it leads to better recall of the ads. However, viewers may find these interruptions frustrating, especially when they're deeply engaged in the emotional arc of a video and want the ad to be over quickly to resume watching.

In related news, YouTube announced another ad format that may be more appealing to users. The platform debuted a shoppable product feed where users can browse and purchase items during an ad.

Programming

Stack Overflow Seeks Realignment 'To Support the Builders of the Future in an AI World' (devclass.com) 58

"The world has changed," writes Stack Overflow's blog. "Fast. Artificial intelligence is reshaping how we build, learn, and solve problems. Software development looks dramatically different than it did even a few years ago — and the pace of change is only accelerating."

And they believe their brand "at times" lost "fidelity and clarity. It's very much been always added to and not been thought of holistically. So, it's time for our brand to evolve too," they write, hoping to articulate a perspective "forged in the fires of community, powered by collaboration, shaped by AI, and driven by people."

The developer news site DevClass notes the change happens "as the number of posts to its site continues a dramatic decline thanks to AI-driven alternatives." According to a quick query on the official data explorer, the sum of questions and answers posted in April 2025 was down by over 64 percent from the same month in 2024, and plunged more than 90 percent from April 2020, when traffic was near its peak...
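The quoted drops are plain relative declines; a quick sketch with illustrative monthly totals (placeholders chosen only to be consistent with the article's figures, not the real numbers from Stack Exchange's data explorer):

```python
# Relative decline between two monthly post totals, as a percentage.
# The totals below are hypothetical, picked to match the quoted declines.
def decline(earlier: float, later: float) -> float:
    return 100.0 * (earlier - later) / earlier

april_2020, april_2024, april_2025 = 100_000, 28_000, 9_500  # hypothetical

print(f"vs 2024: {decline(april_2024, april_2025):.1f}%")  # "down by over 64 percent"
print(f"vs 2020: {decline(april_2020, april_2025):.1f}%")  # "plunged more than 90 percent"
```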

Although declining traffic is a sign of Stack Overflow's reduced significance in the developer community, the company's business is not equally affected so far. Stack Exchange is a business owned by investment company Prosus, and the Stack Exchange products include private versions of its site (Stack Overflow for Teams) as well as advertising and recruitment. According to the Prosus financial results, in the six months ended September 2024, Stack Overflow increased its revenue and reduced its losses. The company's search for a new direction, though, confirms that the fast-disappearing developer engagement with Stack Overflow poses an existential challenge to the organization.

DevClass says Stack Overflow's parent company "is casting about for new ways to provide value (and drive business) in this context..." The company has already experimented with various new services, via its Labs research department, including an AI Answer Assistant and Question Assistant, as well as a revamped jobs site in association with recruitment site Indeed, Discussions for technical debate, and extensions for GitHub Copilot, Slack, and Visual Studio Code.
From the official announcement on Stack Overflow's blog: This rebrand isn't just a fresh coat of paint. It's a realignment with our purpose: to support the builders of the future in an AI world — with clarity, speed, and humanity. It's about showing up in a way that reflects who we are today, and where we're headed tomorrow.
"We have appointed an internal steering group and we have engaged with an external expert partner in this area to help bring about the required change," notes a post in Stack Exchange's "meta" area. This isn't just about a visual update or marketing exercise — it's going to bring about a shift in how we present ourselves to the world which you will feel everywhere from the design to the copywriting, so that we can better achieve our goals and shared mission. As the emergence of AI has called into question the role of Stack Overflow and the Stack Exchange Network, one of the desired outputs of the rebrand process is to clarify our place in the world.

We've done work toward this already — our recent community AMA is an example of this — but we want to ensure that this comes across in our brand and identity as well. We want the community to be involved and have a strong voice in the process of renewing and refreshing our brand. Remember, Stack Overflow started with a public discussion about what to name it!

And another post from two months ago says Stack Exchange is exploring early ideas for expanding beyond the "single lane" Q&A highway: "Our goal right now is to better understand the problems, opportunities, and needs before deciding on any specific changes..."

The vision is to potentially enable:

- A slower lane, with high-quality durable knowledge that takes time to create and curate, like questions and answers.

- A medium lane, for more flexible engagement, with features like Discussions or more flexible Stack Exchanges, where users can explore ideas or share opinions.

- A fast lane for quick, real-time interaction, with features like Chat that can bring the community together to discuss topics instantly.

With this in mind, we're seeking your feedback on the current state of Chat, what's most important to you, and how you see Chat fitting into the future.

In a post in Stack Exchange's "meta" area, brand design director David Longworth says the "tension mentioned between Stack Overflow and Stack Exchange" is probably the most relevant to the rebranding.

But he posted later that "There's a lot of people behind the scenes on this who care deeply about getting this right! Thank you on behalf of myself and the team."
AI

Is the Altruistic OpenAI Gone? (msn.com) 51

"The altruistic OpenAI is gone, if it ever existed," argues a new article in the Atlantic, based on interviews with more than 90 current and former employees, including executives. It notes that shortly before Altman's ouster (and rehiring) he was "seemingly trying to circumvent safety processes for expediency," with OpenAI co-founder/chief scientist Ilya Sutskever telling three board members "I don't think Sam is the guy who should have the finger on the button for AGI." (The board had already discovered Altman "had not been forthcoming with them about a range of issues" including a breach in the Deployment Safety Board's protocols.)

Adapted from the upcoming book, Empire of AI, the article first revisits the summer of 2023, when Sutskever ("the brain behind the large language models that helped build ChatGPT") met with a group of new researchers: Sutskever had long believed that artificial general intelligence, or AGI, was inevitable — now, as things accelerated in the generative-AI industry, he believed AGI's arrival was imminent, according to Geoff Hinton, an AI pioneer who was his Ph.D. adviser and mentor, and another person familiar with Sutskever's thinking.... To people around him, Sutskever seemed consumed by thoughts of this impending civilizational transformation. What would the world look like when a supreme AGI emerged and surpassed humanity? And what responsibility did OpenAI have to ensure an end state of extraordinary prosperity, not extraordinary suffering?

By then, Sutskever, who had previously dedicated most of his time to advancing AI capabilities, had started to focus half of his time on AI safety. He appeared to people around him as both boomer and doomer: more excited and afraid than ever before of what was to come. That day, during the meeting with the new researchers, he laid out a plan. "Once we all get into the bunker — " he began, according to a researcher who was present.

"I'm sorry," the researcher interrupted, "the bunker?"

"We're definitely going to build a bunker before we release AGI," Sutskever replied. Such a powerful technology would surely become an object of intense desire for governments globally. The core scientists working on the technology would need to be protected. "Of course," he added, "it's going to be optional whether you want to get into the bunker." Two other sources I spoke with confirmed that Sutskever commonly mentioned such a bunker. "There is a group of people — Ilya being one of them — who believe that building AGI will bring about a rapture," the researcher told me. "Literally, a rapture...."

But by the middle of 2023 — around the time he began speaking more regularly about the idea of a bunker — Sutskever was no longer just preoccupied by the possible cataclysmic shifts of AGI and superintelligence, according to sources familiar with his thinking. He was consumed by another anxiety: the erosion of his faith that OpenAI could even keep up its technical advancements to reach AGI, or bear that responsibility with Altman as its leader. Sutskever felt Altman's pattern of behavior was undermining the two pillars of OpenAI's mission, the sources said: It was slowing down research progress and eroding any chance at making sound AI-safety decisions.

"For a brief moment, OpenAI's future was an open question. It might have taken a path away from aggressive commercialization and Altman. But this is not what happened," the article concludes. Instead there was "a lack of clarity from the board about their reasons for firing Altman." There was fear about a failure to realize their potential (and some employees feared losing a chance to sell millions of dollars' worth of their equity).

"Faced with the possibility of OpenAI falling apart, Sutskever's resolve immediately started to crack... He began to plead with his fellow board members to reconsider their position on Altman." And in the end "Altman would come back; there was no other way to save OpenAI." To me, the drama highlighted one of the most urgent questions of our generation: How do we govern artificial intelligence? With AI on track to rewire a great many other crucial functions in society, that question is really asking: How do we ensure that we'll make our future better, not worse? The events of November 2023 illustrated in the clearest terms just how much a power struggle among a tiny handful of Silicon Valley elites is currently shaping the future of this technology. And the scorecard of this centralized approach to AI development is deeply troubling. OpenAI today has become everything that it said it would not be....
The author believes OpenAI "has grown ever more secretive, not only cutting off access to its own research but shifting norms across the industry to no longer share meaningful technical details about AI models..."

"At the same time, more and more doubts have risen about the true economic value of generative AI, including a growing body of studies that have shown that the technology is not translating into productivity gains for most workers, while it's also eroding their critical thinking."
Programming

Curl Warns GitHub About 'Malicious Unicode' Security Issue (daniel.haxx.se) 69

A Curl contributor replaced an ASCII letter with a Unicode alternative in a pull request, writes Curl lead developer/founder Daniel Stenberg. And not a single human reviewer on the team (or any of their CI jobs) noticed.

The change "looked identical to the ASCII version, so it was not possible to visually spot this..." The impact of changing one or more letters in a URL can of course be devastating depending on conditions... [W]e have implemented checks to help us poor humans spot things like this. To detect malicious Unicode. We have added a CI job that scans all files and validates every UTF-8 sequence in the git repository.

In the curl git repository most files and most content are plain old ASCII so we can "easily" whitelist a small set of UTF-8 sequences and some specific files, the rest of the files are simply not allowed to use UTF-8 at all as they will then fail the CI job and turn up red. In order to drive this change home, we went through all the test files in the curl repository and made sure that all the UTF-8 occurrences were instead replaced by other kinds of escape sequences. Some of them were also used more or less by mistake and could easily be replaced by their ASCII counterparts.
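Stenberg's post doesn't reproduce curl's actual CI script; as a minimal sketch of the idea, a scanner can walk the repository, skip an explicit allowlist, and flag any line containing a byte outside ASCII. The allowlisted path below is hypothetical:

```python
# Minimal sketch (not curl's actual script): reject any file containing a
# non-ASCII byte unless the file is explicitly allowlisted.
from pathlib import Path

ALLOWLISTED = {"docs/THANKS"}  # hypothetical: files permitted to contain UTF-8

def non_ascii_lines(path: Path):
    """Yield 1-based line numbers of lines containing a byte outside ASCII."""
    for lineno, line in enumerate(path.read_bytes().splitlines(), start=1):
        if any(b > 0x7F for b in line):
            yield lineno

def scan(repo_root: str) -> int:
    """Return the number of offending lines found under repo_root."""
    failures = 0
    for path in Path(repo_root).rglob("*"):
        if not path.is_file() or ".git" in path.parts:
            continue
        rel = path.relative_to(repo_root).as_posix()
        if rel in ALLOWLISTED:
            continue
        for lineno in non_ascii_lines(path):
            print(f"{rel}:{lineno}: non-ASCII byte found")
            failures += 1
    return failures

# A CI job would run scan(".") and fail the build when it returns nonzero.
```

A stricter real-world check would also screen for confusable code points (homoglyphs such as Cyrillic letters that render identically to Latin ones), since that is exactly the class of character used in the pull request described here.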

The next time someone tries this stunt on us it could be someone with less good intentions, but now ideally our CI will tell us... We want and strive to be proactive and tighten everything before malicious people exploit some weakness somewhere but security remains this never-ending race where we can only do the best we can and while the other side is working in silence and might at some future point attack us in new creative ways we had not anticipated. That future unknown attack is a tricky thing.

In the original blog post Stenberg complained he got "barely no responses" from GitHub (joking "perhaps they are all just too busy implementing the next AI feature we don't want.") But hours later he posted an update.

"GitHub has told me they have raised this as a security issue internally and they are working on a fix."
AI

Walmart Prepares for a Future Where AI Shops for Consumers 73

Walmart is preparing for a future where AI agents shop on behalf of consumers by adapting its systems to serve both humans and autonomous bots. As major players like Visa and PayPal also invest in agentic commerce, Walmart is positioning itself as a leader by developing its own AI agents and supporting broader industry integration. PYMNTS reports: Instead of scrolling through ads or comparing product reviews, future consumers may rely on digital assistants, like OpenAI's Operator, to manage their shopping lists, from replenishing household essentials to selecting the best TV based on personal preferences, according to the report (paywalled). "It will be different," Walmart U.S. Chief Technology Officer Hari Vasudev said, per the report. "Advertising will have to evolve." The emergence of AI-generated summaries in search results has already altered the way consumers gather product information, the report said. However, autonomous shopping agents represent a bigger transformation. These bots could not only find products but also finalize purchases, including payments, without the user ever lifting a finger. [...]

Retail experts say agentic commerce will require companies to overhaul how they market and present their products online, the WSJ report said. They may need to redesign product pages and pricing strategies to cater to algorithmic buyers. The customer relationship could shift away from retailers if purchases are completed through third-party agents. [...] To prepare, Walmart is developing its own AI shopping agents, accessible through its website and app, according to the WSJ report. These bots can already handle basic tasks like reordering groceries, and they're being trained to respond to broader prompts, such as planning a themed birthday party. Walmart is working toward a future in which outside agents can seamlessly communicate with the retailer's own systems -- something Vasudev told the WSJ he expects to be governed by industry-wide protocols that are still under development. [...]

Third-party shopping bots may also act independently, crawling retailers' websites much like consumers browse stores without engaging sales associates, the WSJ report said. In those cases, the retailer has little control over how its products are evaluated. Whether consumers instruct their AI to shop specifically at Walmart or ask for the best deal available, the outcomes will increasingly be shaped by algorithms, per the report. Operator, for example, considers search ranking, sponsored content and user preferences when making recommendations. That's a far cry from how humans shop. Bots don't respond to eye-catching visuals or emotionally driven branding in the same way people do. This means retailers must optimize their content not just for people but for machine readers as well, the report said. Pricing strategies could also shift as companies may need to make rapid pricing decisions and determine whether it's worth offering AI agents exclusive discounts to keep them from choosing a competitor's lower-priced item, according to the report.
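One existing convention for making product pages legible to "machine readers" is schema.org structured data, which crawlers and shopping agents can parse without scraping visual layout. The sketch below is illustrative only; the function name and the specific product values are hypothetical, though the `Product` and `Offer` types and their fields are part of the real schema.org vocabulary.

```python
import json


def product_jsonld(name, sku, price, currency="USD", availability="InStock"):
    """Build a schema.org Product description as a JSON-LD dictionary.

    An AI shopping agent can read price and stock status directly from
    this markup instead of parsing the rendered page.
    """
    return {
        "@context": "https://schema.org",
        "@type": "Product",
        "name": name,
        "sku": sku,
        "offers": {
            "@type": "Offer",
            "price": f"{price:.2f}",
            "priceCurrency": currency,
            "availability": f"https://schema.org/{availability}",
        },
    }


# Hypothetical product; the JSON-LD would be embedded in the page's <head>.
markup = product_jsonld("55-inch 4K TV", "TV-1234", 379.99)
print(json.dumps(markup, indent=2))
```

Because agents compare such fields algorithmically, keeping them accurate and current matters more than the page's visual presentation.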
Cloud

UK Needs More Nuclear To Power AI, Says Amazon Boss 66

In an exclusive interview with the BBC, AWS CEO Matt Garman said the UK must expand nuclear energy to meet the soaring electricity demands of AI-driven data centers. From the report: Amazon Web Services (AWS), which is part of the retail giant Amazon, plans to spend £8 billion on new data centers in the UK over the next four years. Matt Garman, chief executive of AWS, told the BBC nuclear is a "great solution" to data centres' energy needs as "an excellent source of zero carbon, 24/7 power." AWS is the single largest corporate buyer of renewable energy in the world and has funded more than 40 renewable solar and wind farm projects in the UK.

The UK's 500 data centres currently consume 2.5% of all electricity in the UK, while Ireland's 80 data centres hoover up 21% of the country's total power, with those numbers projected to hit 6% and 30% respectively by 2030. The body that runs the UK's power grid estimates that by 2050 data centres alone will use nearly as much energy as all industrial users consume today.

In an exclusive interview with the BBC, Matt Garman said that future energy needs were central to AWS's planning process. "It's something we plan many years out," he said. "We invest ahead. I think the world is going to have to build new technologies. I believe nuclear is a big part of that, particularly as we look 10 years out."
AI

MIT Asks arXiv To Take Down Preprint Paper On AI and Scientific Discovery 19

MIT has formally requested the withdrawal of a preprint paper on AI and scientific discovery due to serious concerns about the integrity and validity of its data and findings. It didn't provide specific details on what it believes is wrong with the paper. From a post: "Earlier this year, the COD conducted a confidential internal review based upon allegations it received regarding certain aspects of this paper. While student privacy laws and MIT policy prohibit the disclosure of the outcome of this review, we are writing to inform you that MIT has no confidence in the provenance, reliability or validity of the data and has no confidence in the veracity of the research contained in the paper. Based upon this finding, we also believe that the inclusion of this paper in arXiv may violate arXiv's Code of Conduct.

"Our understanding is that only authors of papers appearing on arXiv can submit withdrawal requests. We have directed the author to submit such a request, but to date, the author has not done so. Therefore, in an effort to clarify the research record, MIT respectfully request that the paper be marked as withdrawn from arXiv as soon as possible." Preprints, by definition, have not yet undergone peer review. MIT took this step in light of the publication's prominence in the research conversation and because it was a formal step it could take to mitigate the effects of misconduct. The author is no longer at MIT. [...]

"We are making this information public because we are concerned that, even in its non-published form, the paper is having an impact on discussions and projections about the effects of AI on science. Ensuring an accurate research record is important to MIT. We therefore would like to set the record straight and share our view that at this point the findings reported in this paper should not be relied on in academic or public discussions of these topics."
The paper in question, titled "Artificial Intelligence, Scientific Discovery, and Product Innovation" and authored by Aidan Toner-Rodgers, investigated the effects of introducing an AI-driven materials discovery tool to 1,018 scientists in a U.S. R&D lab. The study reported that AI-assisted researchers discovered 44% more materials, filed 39% more patents, and achieved a 17% increase in product innovation. These gains were primarily attributed to AI automating 57% of idea-generation tasks, allowing top-performing scientists to focus on evaluating AI-generated suggestions effectively. However, the benefits were unevenly distributed; lower-performing scientists saw minimal improvements, and 82% of participants reported decreased job satisfaction due to reduced creativity and skill utilization.

The Wall Street Journal reported on MIT's statement.
AI

OpenAI Launches Codex, an AI Coding Agent, In ChatGPT 12

OpenAI has launched Codex, a powerful AI coding agent in ChatGPT that autonomously handles tasks like writing features, fixing bugs, and testing code in a cloud-based environment. TechCrunch reports: Codex is powered by codex-1, a version of the company's o3 AI reasoning model optimized for software engineering tasks. OpenAI says codex-1 produces "cleaner" code than o3, adheres more precisely to instructions, and will iteratively run tests on its code until passing results are achieved.

The Codex agent runs in a sandboxed, virtual computer in the cloud. By connecting with GitHub, Codex's environment can come preloaded with your code repositories. OpenAI says the AI coding agent will take anywhere from one to 30 minutes to write simple features, fix bugs, answer questions about your codebase, and run tests, among other tasks. Codex can handle multiple software engineering tasks simultaneously, says OpenAI, and it doesn't prevent users from accessing their computer and browser while it's running.
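OpenAI has not published codex-1's internals, but the "iteratively run tests until passing" behavior it describes is a simple generate-and-check loop. The toy sketch below illustrates only that control flow; the function names are made up, and plain Python callables stand in for model-generated patches and a real test suite.

```python
def iterate_until_passing(candidates, run_tests, max_attempts=5):
    """Try candidate patches in order until the test suite passes.

    `candidates` stands in for successive model outputs; `run_tests`
    stands in for executing the project's test suite against a patch.
    Returns the first passing patch and the attempt count, or
    (None, max_attempts) if nothing passed.
    """
    for attempt, patch in enumerate(candidates, start=1):
        if attempt > max_attempts:
            break
        if run_tests(patch):
            return patch, attempt
    return None, max_attempts


# Toy stand-ins: each "patch" is a function; the check wants f(3) == 6.
patches = [lambda x: x + 2, lambda x: x * 2]  # second candidate is correct
passes = lambda patch: patch(3) == 6

best, tries = iterate_until_passing(patches, passes)
```

In the sandboxed-VM setting the report describes, `run_tests` would execute the repository's actual test suite, which is why preloading the GitHub repo into the environment matters.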

Codex is rolling out starting today to ChatGPT Pro, Enterprise, and Team subscribers. OpenAI says users will have "generous access" to Codex to start, but in the coming weeks, the company will implement rate limits for the tool. Users will then have the option to purchase additional credits to use Codex, an OpenAI spokesperson tells TechCrunch. OpenAI plans to expand Codex access to ChatGPT Plus and Edu users soon.
United States

MIT Says It No Longer Stands Behind Student's AI Research Paper (msn.com) 28

MIT said Friday it can no longer stand behind a widely circulated paper on AI written by a doctoral student in its economics program. The paper said that the introduction of an AI tool in a materials-science lab led to gains in new discoveries, but had more ambiguous effects on the scientists who used it. WSJ: MIT didn't name the student in its statement Friday, but it did name the paper. That paper, by Aidan Toner-Rodgers, was covered by The Wall Street Journal and other media outlets. In a press release, MIT said it "has no confidence in the provenance, reliability or validity of the data and has no confidence in the veracity of the research contained in the paper."

The university said the author of the paper is no longer at MIT. The paper said that after an AI tool was implemented at a large materials-science lab, researchers discovered significantly more materials -- a result that suggested that, in certain settings, AI could substantially improve worker productivity. But it also showed that most of the productivity gains went to scientists who were already highly effective, and that overall the AI tool made scientists less happy about their work.

AI

US, UAE Unveil Plan For New 5GW AI Campus In Abu Dhabi (patentlyapple.com) 30

An anonymous reader quotes a report from Patently Apple: It's being reported in the Gulf region that a new 5GW UAE-US AI Campus in Abu Dhabi was unveiled on Thursday at Qasr Al Watan in the presence of President His Highness Sheikh Mohamed bin Zayed Al Nahyan and US President Donald Trump, who is on a state visit to the UAE. The new AI campus -- the largest of its kind outside the United States -- will host US hyperscalers and large enterprises, enabling them to leverage regional compute resources with the capability to serve the Global South. The UAE-US AI Campus will feature 5GW of capacity for AI data centers in Abu Dhabi, offering a regional platform through which US hyperscalers can provide low-latency services to nearly half of the global population.

Upon completion, the facility will utilize nuclear, solar, and gas power to minimize carbon emissions. It will also house a science park focused on advancing innovation in artificial intelligence. The campus will be built by G42 and operated in partnership with several US companies including NVIDIA, OpenAI, SoftBank, Cisco and others. The initiative is part of the newly established US-UAE AI Acceleration Partnership, a bilateral framework designed to deepen collaboration on artificial intelligence and advanced technologies. The UAE and US will jointly regulate access to the compute resources, which are reserved for US hyperscalers and approved cloud service providers.
An official press release from the White House can be found here.
