Earth

Tech Giants' Indirect Operational Emissions Rose 50% Since 2020 (reuters.com) 40

An anonymous reader quotes a report from Reuters: Indirect carbon emissions from the operations of four of the leading AI-focused tech companies rose on average to 150% of their 2020 levels by 2023 -- a 50% increase -- due to the demands of power-hungry data centers, a United Nations report (PDF) said on Thursday. The use of artificial intelligence by Amazon, Microsoft, Alphabet and Meta drove up their global indirect emissions because of the vast amounts of energy required to power data centers, the report by the International Telecommunication Union (ITU), the U.N. agency for digital technologies, said.

Indirect emissions include those generated by purchased electricity, steam, heating and cooling consumed by a company. Amazon's operational carbon emissions grew the most, rising to 182% of their 2020 level in 2023, followed by Microsoft at 155%, Meta at 145% and Alphabet at 138%, according to the report. The ITU tracked the greenhouse gas emissions of 200 leading digital companies between 2020 and 2023. [...] As investment in AI increases, carbon emissions from the top-emitting AI systems are predicted to reach up to 102.6 million tons of carbon dioxide equivalent per year, the report stated.

The data centers that are needed for AI development could also put pressure on existing energy infrastructure. "The rapid growth of artificial intelligence is driving a sharp rise in global electricity demand, with electricity use by data centers increasing four times faster than the overall rise in electricity consumption," the report found. It also highlighted that although a growing number of digital companies had set emissions targets, those ambitions had not yet fully translated into actual reductions of emissions.
UPDATE: The headline has been revised to clarify that four leading AI-focused tech companies saw their operational emissions rise to 150% of their 2020 levels by 2023 -- a 50% increase, not a 150% one.
Google

News Sites Are Getting Crushed by Google's New AI Tools (wsj.com) 134

"It is true, Google AI is stomping on the entire internet," writes Slashdot reader TheWho79, sharing a report from the Wall Street Journal. "From HuffPost to the Atlantic, publishers prepare to pivot or shut the doors. ... Even highly regarded old school bullet-proof publications like Washington Post are getting hit hard." From the report: Traffic from organic search to HuffPost's desktop and mobile websites fell by just over half in the past three years, and by nearly that much at the Washington Post, according to digital market data firm Similarweb. Business Insider cut about 21% of its staff last month, a move CEO Barbara Peng said was aimed at helping the publication "endure extreme traffic drops outside of our control." Organic search traffic to its websites declined by 55% between April 2022 and April 2025, according to data from Similarweb.

At a companywide meeting earlier this year, Nicholas Thompson, chief executive of the Atlantic, said the publication should assume traffic from Google would drop toward zero and the company needed to evolve its business model. [...] "Google is shifting from being a search engine to an answer engine," Thompson said in an interview with The Wall Street Journal. "We have to develop new strategies."

The rapid development of click-free answers in search "is a serious threat to journalism that should not be underestimated," said William Lewis, the Washington Post's publisher and chief executive. Lewis is former CEO of the Journal's publisher, Dow Jones. The Washington Post is "moving with urgency" to connect with previously overlooked audiences, pursue new revenue sources, and prepare for a "post-search era," he said.

At the New York Times, the share of traffic coming from organic search to the paper's desktop and mobile websites slid to 36.5% in April 2025 from almost 44% three years earlier, according to Similarweb. The Wall Street Journal's traffic from organic search was up in April compared with three years prior, Similarweb data show, though as a share of overall traffic it declined to 24% from 29%.
Further reading: Google's AI Mode Is 'the Definition of Theft,' Publishers Say
Security

Trump Quietly Throws Out Biden's Cyber Policies (axios.com) 109

An anonymous reader quotes a report from Axios: President Trump quietly took a red pen to much of the Biden administration's cyber legacy in a little-noticed move late Friday. Under an executive order signed just before the weekend, Trump is tossing out some of the major touchstones of Biden's cyber policy legacy -- while keeping a few others. The order preserves efforts around post-quantum cryptography, advanced encryption standards, and border gateway protocol security, along with the Cyber Trust Mark program -- an Energy Star-type labeling initiative for consumer smart devices. But hallmark programs tied to software bills of materials, zero-trust implementation, and space contractor cybersecurity requirements have been either rescinded or left in limbo. The new executive order amends both the Biden cyber executive order signed in January and an Obama administration order.

Each of the following Biden-era programs is now out the door or significantly rolled back:
- A broad requirement for federal software vendors to provide a software bill of materials -- essentially an ingredient list of code components -- is gone.
- Biden-era efforts to encourage federal agencies to accept digital identity documents and help states develop mobile driver's licenses were revoked.
- Several AI cybersecurity research mandates, including those focused on AI-generated code security and AI-driven patch management pilots, have been scrapped or deprioritized.
- The requirement that software contractors formally attest they followed secure development practices -- and submit those attestations to a federal repository -- has been cut. Instead, the National Institute of Standards and Technology will now coordinate a new industry consortium to review software security guidelines.

AI

Starbucks To Roll Out Microsoft Azure OpenAI Assistant For Baristas 37

Starbucks is piloting a generative AI assistant called "Green Dot Assist" to streamline barista tasks and improve service speed, with plans for a broader rollout in fiscal 2026. The assistant is built on Microsoft Azure's OpenAI platform. CNBC reports: Instead of flipping through manuals or accessing Starbucks' intranet, baristas will be able to use a tablet behind the counter equipped with Green Dot Assist to get answers to a range of questions, from how to make an iced shaken espresso to troubleshooting equipment errors. Baristas can either type or verbally ask their queries in conversational language.

As the AI assistant evolves, Starbucks has even bigger plans for its next generation. Those ideas include automatically creating a ticket with IT for equipment issues or generating suggestions for a substitute when a barista calls out of work, according to [Starbucks Chief Technology Officer Deb Hall Lefevre]. [...] Lefevre said tenured baristas have been learning to use the new point-of-sale (POS) system in as little as an hour. Plus, the technology can offer personalized recommendations and loyal customers' repeat orders, helping Starbucks achieve the personalized touch it's looking to bring back to its cafes.
"It's just another example of how innovation technology is coming into service of our partners and making sure that we're doing all we can to simplify the operations, make their jobs just a little bit easier, maybe a little bit more fun, so that they can do what they do best," Lefevre told CNBC.
Social Networks

Bluesky's Decline Stems From Never Hearing From the Other Side (washingtonpost.com) 183

Bluesky's user engagement has fallen roughly 50% since peaking in mid-November, according to a recent Pew Research Center analysis, as progressive groups' efforts to migrate users from Elon Musk's X platform show signs of failure. The research found that while many news influencers maintain Bluesky accounts, two-thirds post irregularly compared to more than 80% who still post daily to X. A Washington Post columnist tries to make sense of it: The people who have migrated to Bluesky tend to be those who feel the most visceral disgust for Musk and Trump, plus a smattering of those who are merely curious and another smattering who are tired of the AI slop and unregenerate racism that increasingly pollutes their X feeds. Because the Musk and Trump haters are the largest and most passionate group, the result is something of an echo chamber where it's hard to get positive engagement unless you're saying things progressives want to hear -- and where the negative engagement on things they don't want to hear can be intense. That's true even for content that isn't obviously political: Ethan Mollick, a professor at the University of Pennsylvania's Wharton School who studies AI, recently announced that he'll be limiting his Bluesky posting because AI discussions on the platform are too "fraught."

All this is pretty off-putting for folks who aren't already rather progressive, and that creates a threefold problem for the ones who dream of getting the old band back together. Most obviously, it makes it hard for the platform to build a large enough userbase for the company to become financially self-sustaining, or for liberals to amass the influence they wielded on old Twitter. There, they accumulated power by shaping the contours of a conversation that included a lot of non-progressives. On Bluesky, they're mostly talking among themselves.

AI

Gabbard Says AI is Speeding Up Intel Work, Including the Release of the JFK Assassination Files (apnews.com) 39

AI is speeding up the work of America's intelligence services, Director of National Intelligence Tulsi Gabbard said Tuesday. From a report: Speaking at a technology conference, Gabbard said AI programs, when used responsibly, can save money and free up intelligence officers to focus on gathering and analyzing information. The sometimes slow pace of intelligence work frustrated her as a member of Congress, Gabbard said, and continues to be a challenge. AI can run human resource programs, for instance, or scan sensitive documents ahead of potential declassification, Gabbard said. Her office has released tens of thousands of pages of material related to the assassinations of President John F. Kennedy and his brother, New York Sen. Robert F. Kennedy, on the orders of President Donald Trump.

Experts had predicted the process could take many months or even years, but AI accelerated the work by scanning the documents to see if they contained any material that should remain classified, Gabbard said during her remarks at the Amazon Web Services Summit in Washington. "We have been able to do that through the use of AI tools far more quickly than what was done previously -- which was to have humans go through and look at every single one of these pages," Gabbard said.

AI

Apple's Upgraded AI Models Underwhelm On Performance (techcrunch.com) 24

Apple's latest AI models continue to lag behind competitors, according to benchmark results the company disclosed this week. The tech giant's newest "Apple On-Device" model, which runs locally on iPhones and other devices, performed only "comparably" to similarly sized models from Google and Alibaba in human evaluations of text generation quality -- not better, despite being Apple's most recent release.

The performance gap widens with Apple's more powerful "Apple Server" model, designed for data center deployment. Human testers rated it behind OpenAI's year-old GPT-4o in text generation tasks. In image analysis tests, evaluators preferred Meta's Llama 4 Scout model over Apple Server, a particularly notable result given that Llama 4 Scout itself underperforms leading models from Google, Anthropic, and OpenAI on various benchmarks.
Network

Cisco Updates Networking Products in Bid To Tap AI-Fueled Demand (bloomberg.com) 8

Cisco is updating its networking and security products to make AI networks speedier and more secure, part of a broader push to capitalize on the AI spending boom. From a report: A new generation of switches -- networking equipment that links computer systems -- will offer a 10-fold improvement in performance, the company said on Tuesday. That will help prevent AI applications from suffering bottlenecks when transferring data, Cisco said. Networking speed has become a bigger issue as data center operators try to manage a flood of AI information -- both in the cloud and within the companies' own facilities. Slowdowns can hinder AI models, Cisco President and Chief Product Officer Jeetu Patel said in an interview. That applies to the development phase -- known as training -- and the operation of the models, a stage called inference. A massive build-out of data centers has made Cisco more relevant, he said. "AI is going to be network-bound, both on training and inference," Patel said. Having computer processors sit idle during training because of slow networks is "just throwing away money."
Businesses

New Grads Join Worst Entry-Level Job Market in Years (deccanherald.com) 84

The Class of 2025 is encountering the worst entry-level job market in years with unemployment among recent degree-holders aged 22 to 27 reaching 5.8% this spring -- the highest level in approximately four years and well above the national average. According to Federal Reserve Bank of New York data, 85% of the unemployment rate increase since mid-2023 stems from new labor market entrants struggling to find work.

Corporate hiring freezes implemented under threats of President Trump's tariffs, combined with AI replacing traditional entry-level positions, have severely constrained opportunities for new graduates. More than 60% of executives surveyed on LinkedIn indicate that AI will eventually assume tasks currently assigned to entry-level employees, particularly mundane and manual roles.

The impact varies significantly by major, with computer engineering graduates -- once highly sought-after -- now facing a 7.5% unemployment rate, the third-highest among recent graduates. Employment in computer science and mathematical jobs for those under 27 has dropped 8% since 2022, even as it grew 0.8% for older workers.
AI

OpenAI Taps Google in Unprecedented Cloud Deal Despite AI Rivalry (reuters.com) 6

OpenAI plans to add Alphabet's Google cloud service to meet its growing needs for computing capacity, Reuters reported Tuesday, marking a surprising collaboration between two prominent competitors in the AI race. From the report: The deal, which has been under discussion for a few months, was finalized in May, one of the sources added. It underscores how massive computing demands to train and deploy AI models are reshaping the competitive dynamics in AI, and marks OpenAI's latest move to diversify its compute sources beyond its major supporter Microsoft, including its high-profile Stargate data center project.

It is a win for Google's cloud unit, which will supply additional computing capacity to OpenAI's existing infrastructure for training and running its AI models, said the sources, who requested anonymity to discuss private matters. The move also comes as OpenAI's ChatGPT poses the biggest threat to Google's dominant search business in years, with Google executives recently saying that the AI race may not be winner-take-all.

Python

New Code.org Curriculum Aims To Make Schoolkids Python-Literate and AI-Ready 50

Longtime Slashdot reader theodp writes: The old Code.org curriculum page for middle and high school students has been changed to include a new Python Lab in the tech-backed nonprofit's K-12 offerings. Elsewhere on the site, a Computer Science and AI Foundations curriculum is described that includes units on 'Foundations of AI Programming [in Python]' and 'Insights from Data and AI [aka Data Science].' A more-detailed AI Foundations Syllabus 25-26 document promises a second semester of material is coming soon: "This semester offers an innovative approach to teaching programming by integrating learning with and about artificial intelligence (AI). Using Python as the primary language, students build foundational programming skills while leveraging AI tools to enhance computational thinking and problem-solving. The curriculum also introduces students to the basics of creating AI-powered programs, exploring machine learning, and applying data science principles."

Newly-posted videos on Code.org's YouTube channel appear to be intended to support the new Python-based CS & AI course. "Python is extremely versatile," explains a Walmart data scientist to open the video for Data Science: Using Python. "So, first of all, Python is one of the very few languages that can handle numbers very, very well." A researcher at the Univ. of Washington's Institute for Health Metrics and Evaluation (IHME) adds, "Python is the gold standard and what people expect data scientists to know [...] Key to us being able to handle really big data sets is our use of Python and cluster computing." Adding to the Python love, an IHME data analyst explains, "Python is a great choice for large databases because there's a lot of support for Python libraries."
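The numeric-handling claims in those testimonials are easy to demonstrate even with nothing beyond Python's standard library. A minimal sketch (the dataset here is invented purely for illustration):

```python
import statistics

# Invented sample dataset: daily page-load times in milliseconds.
load_times_ms = [120, 135, 128, 150, 142, 131, 127, 160, 138, 129]

mean = statistics.mean(load_times_ms)      # arithmetic mean
median = statistics.median(load_times_ms)  # middle value of the sorted data
stdev = statistics.stdev(load_times_ms)    # sample standard deviation

print(f"mean={mean:.1f}ms median={median:.1f}ms stdev={stdev:.1f}ms")
```

Real data-science workloads of the kind the video's speakers describe would swap the stdlib for libraries like pandas and NumPy, but the point stands: descriptive statistics are a few readable lines.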

Code.org is currently recruiting teachers to attend its CS and AI Foundations Professional Learning program this summer, which is being taught by Code.org's national network of university and nonprofit regional partners (teachers who sign up have a chance to win $250 in DonorsChoose credits for their classrooms). A flyer for a five-day Michigan Professional Development program to prepare teachers for a pilot of the Code.org CS & AI course touts the new curriculum as "an alternative to the AP [Computer Science] pathway" (teachers are offered scholarships covering registration, lodging, meals, and workshop materials).

Interestingly, Code.org's embrace of Python and Data Science comes as the nonprofit changes its mission to 'make CS and AI a core part of K-12 education' and launches a new national campaign with tech leaders to make CS and AI a graduation requirement. Prior to AI changing the education conversation, Code.org in 2021 boasted that it had lined up a consortium of tech giants, politicians, and educators to push its new $15 million Amazon-bankrolled Java AP CS A curriculum into K-12 classrooms. Just three years later, however, Amazon CEO Andy Jassy was boasting to investors that Amazon had turned to AI to automatically do Java coding that he claimed would have otherwise taken human coders 4,500 developer-years to complete.
Facebook

Meta Is Creating a New AI Lab To Pursue 'Superintelligence' 77

Meta is preparing to unveil a new AI research lab dedicated to pursuing "superintelligence," a hypothetical AI system that exceeds the powers of the human brain, as the tech giant jockeys to stay competitive in the technology race, the New York Times reported Tuesday, citing four people with knowledge of the company's plans. From the report: Meta has tapped Alexandr Wang, 28, the founder and chief executive of the AI start-up Scale AI, to join the new lab, the people said, and has been in talks to invest billions of dollars in his company as part of a deal that would also bring other Scale employees to the company.

Meta has offered seven- to nine-figure compensation packages to dozens of researchers from leading AI companies such as OpenAI and Google, with some agreeing to join, according to the people. The new lab is part of a larger reorganization of Meta's AI efforts, the people said. The company, which owns Facebook, Instagram and WhatsApp, has recently grappled with internal management struggles over the technology, as well as employee churn and several product releases that fell flat, two of the people said.
Businesses

Private Equity CEO Predicts AI Will Leave 60% of Finance Conference Attendees Jobless (entrepreneur.com) 73

Robert F. Smith, CEO of Vista Equity Partners, told attendees at the SuperReturn International 2025 conference in Berlin last week that 60% of the 5,500 finance professionals present will be "looking for work" next year due to AI disruption.

Smith predicted that while 40% of attendees will adopt AI agents -- programs that autonomously perform complex, multi-step tasks -- the remaining majority will need to find new employment as AI transforms the sector. "All of the jobs currently carried out by one billion knowledge workers today would change due to AI," Smith said, clarifying that while jobs won't disappear entirely, they will fundamentally transform.
AI

Ohio State University Says All Students Will Be Required To Train and 'Be Fluent' In AI (theguardian.com) 73

Ohio State University is launching a campus-wide AI fluency initiative requiring all students to integrate AI into their studies, aiming to make them proficient in both their major and the responsible use of AI. "Ohio State has an opportunity and responsibility to prepare students to not just keep up, but lead in this workforce of the future," said the university's president, Walter "Ted" Carter Jr. He added: "Artificial intelligence is transforming the way we live, work, teach and learn. In the not-so-distant future, every job, in every industry, is going to be [affected] in some way by AI." The Guardian reports: The university said its program will prioritize the incoming freshman class and onward, in order to make every Ohio State graduate "fluent in AI and how it can be responsibly applied to advance their field." [...] Steven Brown, an associate professor of philosophy at the university, told NBC News that after students turned in the first batch of AI-assisted papers he found "a lot of really creative ideas."

"My favorite one is still a paper on karma and the practice of returning shopping carts," Brown said. Brown said that banning AI from classwork is "shortsighted," and he encouraged his students to discuss ethics and philosophy with AI chatbots. "It would be a disaster for our students to have no idea how to effectively use one of the most powerful tools that humanity has ever created," Brown said. "AI is such a powerful tool for self-education that we must rapidly adapt our pedagogy or be left in the dust."

Separately, Ohio's AI in Education Coalition is working to develop a comprehensive strategy to ensure that the state's K-12 education system is prepared for and can help lead the AI revolution. "AI technology is here to stay," then-lieutenant governor Jon Husted said last year while announcing an AI toolkit for Ohio's K-12 school districts that he added would ensure the state "is a leader in responding to the challenges and opportunities made possible by artificial intelligence."

AI

Apple Lets Developers Tap Into Its Offline AI Models (techcrunch.com) 14

An anonymous reader quotes a report from TechCrunch: Apple is launching what it calls the Foundation Models framework, which the company says will let developers tap into its AI models in an offline, on-device fashion. Onstage at WWDC 2025 on Monday, Apple VP of software engineering Craig Federighi said that the Foundation Models framework will let apps use on-device AI models created by Apple to drive experiences. These models ship as a part of Apple Intelligence, Apple's family of models that power a number of iOS features and capabilities.

"For example, if you're getting ready for an exam, an app like Kahoot can create a personalized quiz from your notes to make studying more engaging," Federighi said. "And because it happens using on-device models, this happens without cloud API costs [] We couldn't be more excited about how developers can build on Apple intelligence to bring you new experiences that are smart, available when you're offline, and that protect your privacy."

In a blog post, Apple says that the Foundation Models framework has native support for Swift, Apple's programming language for building apps for its various platforms. The company claims developers can access Apple Intelligence models with as few as three lines of code. Guided generation, tool calling, and more are all built into the Foundation Models framework, according to Apple. Automattic is already using the framework in its Day One journaling app, Apple says, while mapping app AllTrails is tapping the framework to recommend different hiking routes.

AI

China Shuts Down AI Tools During Nationwide College Exams 27

According to Bloomberg, several major Chinese AI companies, including Alibaba, ByteDance, and Tencent, have temporarily disabled certain chatbot features during the gaokao college entrance exams to prevent cheating. "Popular AI apps, including Alibaba's Qwen and ByteDance's Doubao, have stopped picture recognition features from responding to questions about test papers, while Tencent's Yuanbao and Moonshot's Kimi have suspended photo-recognition services entirely during exam hours," adds The Verge. From the report: The rigorous multi-day "gaokao" exams are sat by more than 13.3 million Chinese students between June 7-10th, each fighting to secure one of the limited spots at universities across the country. Students are already banned from using devices like phones and laptops during the hours-long tests, so the disabling of AI chatbots serves as an additional safety net to prevent cheating during exam season.

When asked to explain the suspension, Bloomberg reports the Yuanbao and Kimi chatbots responded that functions had been disabled "to ensure the fairness of the college entrance examinations." Similarly, the DeepSeek AI tool that went viral earlier this year is also blocking its service during specific hours "to ensure fairness in the college entrance examination," according to The Guardian.
The Guardian notes that the news is being driven by students on the Chinese social media platform Weibo. "The gaokao entrance exam incites fierce competition as it's the only means to secure a college placement in China, driving concerns that students may try to improve their chances with AI tools," notes The Verge.
Facebook

Meta in Talks for Scale AI Investment That Could Top $10 Billion (bloomberg.com) 8

An anonymous reader shares a report: Meta is in talks to make a multibillion-dollar investment into AI startup Scale AI, according to people familiar with the matter. The financing could exceed $10 billion in value, some of the people said, making it one of the largest private company funding events of all time.

[...] Scale AI, whose customers include Microsoft and OpenAI, provides data labeling services to help companies train machine-learning models and has become a key beneficiary of the generative AI boom. The startup was last valued at about $14 billion in 2024, in a funding round that included backing from Meta and Microsoft.

Apple

Apple Researchers Challenge AI Reasoning Claims With Controlled Puzzle Tests 71

Apple researchers have found that state-of-the-art "reasoning" AI models like OpenAI's o3-mini, Gemini (with thinking mode enabled), Claude 3.7, and DeepSeek-R1 face complete performance collapse [PDF] beyond certain complexity thresholds when tested on controllable puzzle environments. The finding raises questions about the true reasoning capabilities of large language models.

The study, which examined models using Tower of Hanoi, checker jumping, river crossing, and blocks world puzzles rather than standard mathematical benchmarks, found three distinct performance regimes that contradict conventional assumptions about AI reasoning progress.

At low complexity levels, standard language models surprisingly outperformed their reasoning-enhanced counterparts while using fewer computational resources. At medium complexity, reasoning models demonstrated advantages, but both model types experienced complete accuracy collapse at high complexity levels. Most striking was the counterintuitive finding that reasoning models actually reduced their computational effort as problems became more difficult, despite operating well below their token generation limits.

Even when researchers provided explicit solution algorithms, requiring only step-by-step execution rather than creative problem-solving, the models' performance failed to improve significantly. The researchers noted fundamental inconsistencies in how models applied learned strategies across different problem scales, with some models successfully handling 100-move sequences in one puzzle type while failing after just five moves in simpler scenarios.
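Tower of Hanoi illustrates why these puzzles make "controllable" test environments: difficulty is a single dial (the disk count), and the optimal solution length grows exponentially with it, so each extra disk doubles the number of steps a model must execute without a single slip. A minimal sketch of that scaling (not the paper's actual harness):

```python
def hanoi_moves(n, src="A", aux="B", dst="C"):
    """Return the optimal move sequence for n disks as (disk, from, to) tuples."""
    if n == 0:
        return []
    # Move n-1 disks out of the way, move the largest, then re-stack.
    return (hanoi_moves(n - 1, src, dst, aux)
            + [(n, src, dst)]
            + hanoi_moves(n - 1, aux, src, dst))

# Optimal solution length is 2^n - 1 moves.
for n in (3, 5, 10):
    print(n, len(hanoi_moves(n)))  # 3 -> 7, 5 -> 31, 10 -> 1023
```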
Medicine

The Medical Revolutions That Prevented Millions of Cancer Deaths (vox.com) 76

Vox publishes a story about "the quiet revolutions that have prevented millions of cancer deaths..."

"The age-adjusted death rate in the US for cancer has declined by about a third since 1991, meaning people of a given age have about a third lower risk of dying from cancer than people of the same age more than three decades ago... " The dramatic bend in the curve of cancer deaths didn't happen by accident — it's the compound interest of three revolutions. While anti-smoking policy has been the single biggest lifesaver, other interventions have helped reduce people's cancer risk. One of the biggest successes is the HPV vaccine. A study last year found that death rates of cervical cancer — which can be caused by HPV infections — in US women ages 20-39 had dropped 62 percent from 2012 to 2021, thanks largely to the spread of the vaccine. Other cancers have been linked to infections, and there is strong research indicating that vaccination can have positive effects on reducing cancer incidence.

The next revolution is better and earlier screening. It's generally true that the earlier cancer is caught, the better the chances of survival... According to one study, incidences of late-stage colorectal cancer in Americans over 50 declined by a third between 2000 and 2010 in large part because rates of colonoscopies almost tripled in that same time period. And newer screening methods, often employing AI or using blood-based tests, could make preliminary screening simpler, less invasive and therefore more readily available. If 20th-century screening was about finding physical evidence of something wrong — the lump in the breast — 21st-century screening aims to find cancer before symptoms even arise.

Most exciting of all are frontier developments in treating cancer... From drugs like lenalidomide and bortezomib in the 2000s, which helped double median myeloma survival, to the spread of monoclonal antibodies, real breakthroughs in treatments have meaningfully extended people's lives — not just by months, but years. Perhaps the most promising development is CAR-T therapy, a form of immunotherapy. Rather than attempting to kill the cancer directly, immunotherapies turn a patient's own T-cells into guided missiles. In a recent study of 97 patients with multiple myeloma, many of whom were facing hospice care, a third of those who received CAR-T therapy had no detectable cancer five years later. It was the kind of result that doctors rarely see.

The article begins with some recent quotes from Jon Gluck, who was told after a cancer diagnosis that he had as little as 18 months left to live — 22 years ago...
AI

'AI Is Not Intelligent': The Atlantic Criticizes 'Scam' Underlying the AI Industry (msn.com) 206

The Atlantic makes the case that "the foundation of the AI industry is a scam" and that AI "is not what its developers are selling it as: a new class of thinking — and, soon, feeling — machines." [OpenAI CEO Sam] Altman brags about ChatGPT-4.5's improved "emotional intelligence," which he says makes users feel like they're "talking to a thoughtful person." Dario Amodei, the CEO of the AI company Anthropic, argued last year that the next generation of artificial intelligence will be "smarter than a Nobel Prize winner." Demis Hassabis, the CEO of Google's DeepMind, said the goal is to create "models that are able to understand the world around us." These statements betray a conceptual error: Large language models do not, cannot, and will not "understand" anything at all. They are not emotionally intelligent or smart in any meaningful or recognizably human sense of the word. LLMs are impressive probability gadgets that have been fed nearly the entire internet, and produce writing not by thinking but by making statistically informed guesses about which lexical item is likely to follow another.
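The "probability gadget" description can be made concrete with a toy bigram model, which picks the next word purely from co-occurrence counts -- a deliberately tiny stand-in for what LLMs do over billions of parameters and far longer contexts:

```python
from collections import Counter, defaultdict

# A toy corpus; real models train on nearly the entire internet.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def most_likely_next(word):
    """Return the statistically most frequent successor of `word`."""
    return follows[word].most_common(1)[0][0]

print(most_likely_next("the"))  # 'cat' -- it follows "the" twice; 'mat' and 'fish' once each
```

No representation of meaning is involved anywhere; the model only tallies which token tends to follow which.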
A sociologist and linguist even teamed up for a new book called The AI Con: How to Fight Big Tech's Hype and Create the Future We Want, the article points out: The authors observe that large language models take advantage of the brain's tendency to associate language with thinking: "We encounter text that looks just like something a person might have said and reflexively interpret it, through our usual process of imagining a mind behind the text. But there is no mind there, and we need to be conscientious to let go of that imaginary mind we have constructed."

Several other AI-related social problems, also springing from human misunderstanding of the technology, are looming. The uses of AI that Silicon Valley seems most eager to promote center on replacing human relationships with digital proxies. Consider the ever-expanding universe of AI therapists and AI-therapy adherents, who declare that "ChatGPT is my therapist — it's more qualified than any human could be." Witness, too, how seamlessly Mark Zuckerberg went from selling the idea that Facebook would lead to a flourishing of human friendship to, now, selling the notion that Meta will provide you with AI friends to replace the human pals you have lost in our alienated social-media age....

The good news is that nothing about this is inevitable: According to a study released in April by the Pew Research Center, although 56 percent of "AI experts" think artificial intelligence will make the United States better, only 17 percent of American adults think so. If many Americans don't quite understand how artificial "intelligence" works, they also certainly don't trust it. This suspicion, no doubt provoked by recent examples of Silicon Valley con artistry, is something to build on.... If people understand what large language models are and are not; what they can and cannot do; what work, interactions, and parts of life they should — and should not — replace, they may be spared its worst consequences.
