Facebook

Meta Is Creating a New AI Lab To Pursue 'Superintelligence' 77

Meta is preparing to unveil a new AI research lab dedicated to pursuing "superintelligence," a hypothetical A.I. system that exceeds the powers of the human brain, as the tech giant jockeys to stay competitive in the technology race, the New York Times reported Tuesday, citing four people with knowledge of the company's plans. From the report: Meta has tapped Alexandr Wang, 28, the founder and chief executive of the A.I. start-up Scale AI, to join the new lab, the people said, and has been in talks to invest billions of dollars in his company as part of a deal that would also bring other Scale employees to the company.

Meta has offered seven- to nine-figure compensation packages to dozens of researchers from leading A.I. companies such as OpenAI and Google, with some agreeing to join, according to the people. The new lab is part of a larger reorganization of Meta's A.I. efforts, the people said. The company, which owns Facebook, Instagram and WhatsApp, has recently grappled with internal management struggles over the technology, as well as employee churn and several product releases that fell flat, two of the people said.
Businesses

Private Equity CEO Predicts AI Will Leave 60% of Finance Conference Attendees Jobless (entrepreneur.com) 73

Robert F. Smith, CEO of Vista Equity Partners, told attendees at the SuperReturn International 2025 conference in Berlin last week that 60% of the 5,500 finance professionals present will be "looking for work" next year due to AI disruption.

Smith predicted that while 40% of attendees will adopt AI agents -- programs that autonomously perform complex, multi-step tasks -- the remaining majority will need to find new employment as AI transforms the sector. "All of the jobs currently carried out by one billion knowledge workers today would change due to AI," Smith said, clarifying that while jobs won't disappear entirely, they will fundamentally transform.
AI

Ohio State University Says All Students Will Be Required To Train and 'Be Fluent' In AI (theguardian.com) 73

Ohio State University is launching a campus-wide AI fluency initiative requiring all students to integrate AI into their studies, aiming to make them proficient in both their major and the responsible use of AI. "Ohio State has an opportunity and responsibility to prepare students to not just keep up, but lead in this workforce of the future," said the university's president, Walter "Ted" Carter Jr. He added: "Artificial intelligence is transforming the way we live, work, teach and learn. In the not-so-distant future, every job, in every industry, is going to be [affected] in some way by AI." The Guardian reports: The university said its program will prioritize the incoming freshman class and onward, in order to make every Ohio State graduate "fluent in AI and how it can be responsibly applied to advance their field." [...] Steven Brown, an associate professor of philosophy at the university, told NBC News that after students turned in the first batch of AI-assisted papers he found "a lot of really creative ideas."

"My favorite one is still a paper on karma and the practice of returning shopping carts," Brown said. Brown said that banning AI from classwork is "shortsighted," and he encouraged his students to discuss ethics and philosophy with AI chatbots. "It would be a disaster for our students to have no idea how to effectively use one of the most powerful tools that humanity has ever created," Brown said. "AI is such a powerful tool for self-education that we must rapidly adapt our pedagogy or be left in the dust."

Separately, Ohio's AI in Education Coalition is working to develop a comprehensive strategy to ensure that the state's K-12 education system is prepared for and can help lead the AI revolution. "AI technology is here to stay," then lieutenant governor Jon Husted said last year while announcing an AI toolkit for Ohio's K-12 school districts that he added would ensure the state "is a leader in responding to the challenges and opportunities made possible by artificial intelligence."

AI

Apple Lets Developers Tap Into Its Offline AI Models (techcrunch.com) 14

An anonymous reader quotes a report from TechCrunch: Apple is launching what it calls the Foundation Models framework, which the company says will let developers tap into its AI models in an offline, on-device fashion. Onstage at WWDC 2025 on Monday, Apple VP of software engineering Craig Federighi said that the Foundation Models framework will let apps use on-device AI models created by Apple to drive experiences. These models ship as a part of Apple Intelligence, Apple's family of models that power a number of iOS features and capabilities.

"For example, if you're getting ready for an exam, an app like Kahoot can create a personalized quiz from your notes to make studying more engaging," Federighi said. "And because it happens using on-device models, this happens without cloud API costs [...] We couldn't be more excited about how developers can build on Apple Intelligence to bring you new experiences that are smart, available when you're offline, and that protect your privacy."

In a blog post, Apple says that the Foundation Models framework has native support for Swift, Apple's programming language for building apps for its various platforms. The company claims developers can access Apple Intelligence models with as few as three lines of code. Guided generation, tool calling, and more are all built into the Foundation Models framework, according to Apple. Automattic is already using the framework in its Day One journaling app, Apple says, while mapping app AllTrails is tapping the framework to recommend different hiking routes.

AI

China Shuts Down AI Tools During Nationwide College Exams 27

According to Bloomberg, several major Chinese AI companies, including Alibaba, ByteDance, and Tencent, have temporarily disabled certain chatbot features during the gaokao college entrance exams to prevent cheating. "Popular AI apps, including Alibaba's Qwen and ByteDance's Doubao, have stopped picture recognition features from responding to questions about test papers, while Tencent's Yuanbao and Moonshot's Kimi have suspended photo-recognition services entirely during exam hours," adds The Verge. From the report: The rigorous multi-day "gaokao" exams are sat by more than 13.3 million Chinese students between June 7-10th, each fighting to secure one of the limited spots at universities across the country. Students are already banned from using devices like phones and laptops during the hours-long tests, so the disabling of AI chatbots serves as an additional safety net to prevent cheating during exam season.

When asked to explain the suspension, Bloomberg reports the Yuanbao and Kimi chatbots responded that functions had been disabled "to ensure the fairness of the college entrance examinations." Similarly, the DeepSeek AI tool that went viral earlier this year is also blocking its service during specific hours "to ensure fairness in the college entrance examination," according to The Guardian.
The Guardian notes that the story is being driven by students posting on the Chinese social media platform Weibo. "The gaokao entrance exam incites fierce competition as it's the only means to secure a college placement in China, driving concerns that students may try to improve their chances with AI tools," notes The Verge.
Facebook

Meta in Talks for Scale AI Investment That Could Top $10 Billion (bloomberg.com) 8

An anonymous reader shares a report: Meta is in talks to make a multibillion-dollar investment into AI startup Scale AI, according to people familiar with the matter. The financing could exceed $10 billion in value, some of the people said, making it one of the largest private company funding events of all time.

[...] Scale AI, whose customers include Microsoft and OpenAI, provides data labeling services to help companies train machine-learning models and has become a key beneficiary of the generative AI boom. The startup was last valued at about $14 billion in 2024, in a funding round that included backing from Meta and Microsoft.

Apple

Apple Researchers Challenge AI Reasoning Claims With Controlled Puzzle Tests 71

Apple researchers have found that state-of-the-art "reasoning" AI models like OpenAI's o3-mini, Gemini (with thinking mode enabled), Claude 3.7, and DeepSeek-R1 face complete performance collapse [PDF] beyond certain complexity thresholds when tested on controllable puzzle environments. The finding raises questions about the true reasoning capabilities of large language models.

The study, which examined models using Tower of Hanoi, checker jumping, river crossing, and blocks world puzzles rather than standard mathematical benchmarks, found three distinct performance regimes that contradict conventional assumptions about AI reasoning progress.

At low complexity levels, standard language models surprisingly outperformed their reasoning-enhanced counterparts while using fewer computational resources. At medium complexity, reasoning models demonstrated advantages, but both model types experienced complete accuracy collapse at high complexity levels. Most striking was the counterintuitive finding that reasoning models actually reduced their computational effort as problems became more difficult, despite operating well below their token generation limits.

Even when researchers provided explicit solution algorithms, requiring only step-by-step execution rather than creative problem-solving, the models' performance failed to improve significantly. The researchers noted fundamental inconsistencies in how models applied learned strategies across different problem scales, with some models successfully handling 100-move sequences in one puzzle type while failing after just five moves in simpler scenarios.
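The Tower of Hanoi puzzle in the study is what makes the "explicit algorithm" result so striking: its complete move list can be generated mechanically by a few lines of recursion, requiring only faithful step-by-step execution. A minimal sketch of that recurrence (illustrative only, not the paper's test harness):

```python
# Tower of Hanoi: moving n disks takes exactly 2**n - 1 moves, and the
# full move sequence follows mechanically from one recurrence.
# Illustrative sketch only; not the Apple paper's evaluation code.

def hanoi_moves(n, source="A", target="C", spare="B"):
    """Return the exact move sequence for n disks as (disk, from, to) tuples."""
    if n == 0:
        return []
    return (
        hanoi_moves(n - 1, source, spare, target)   # clear the n-1 smaller disks
        + [(n, source, target)]                     # move the largest disk
        + hanoi_moves(n - 1, spare, target, source) # restack the smaller disks
    )

print(len(hanoi_moves(7)))  # 2**7 - 1 = 127 moves
```

Executing this faithfully for, say, 10 disks means emitting 1,023 moves without a single slip, which is the kind of long, exact execution at which the tested models collapsed.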
Medicine

The Medical Revolutions That Prevented Millions of Cancer Deaths (vox.com) 76

Vox publishes a story about "the quiet revolutions that have prevented millions of cancer deaths..."

"The age-adjusted death rate in the US for cancer has declined by about a third since 1991, meaning people of a given age have about a third lower risk of dying from cancer than people of the same age more than three decades ago... " The dramatic bend in the curve of cancer deaths didn't happen by accident — it's the compound interest of three revolutions. While anti-smoking policy has been the single biggest lifesaver, other interventions have helped reduce people's cancer risk. One of the biggest successes is the HPV vaccine. A study last year found that death rates of cervical cancer — which can be caused by HPV infections — in US women ages 20-39 had dropped 62 percent from 2012 to 2021, thanks largely to the spread of the vaccine. Other cancers have been linked to infections, and there is strong research indicating that vaccination can have positive effects on reducing cancer incidence.

The next revolution is better and earlier screening. It's generally true that the earlier cancer is caught, the better the chances of survival... According to one study, incidences of late-stage colorectal cancer in Americans over 50 declined by a third between 2000 and 2010 in large part because rates of colonoscopies almost tripled in that same time period. And newer screening methods, often employing AI or using blood-based tests, could make preliminary screening simpler, less invasive and therefore more readily available. If 20th-century screening was about finding physical evidence of something wrong — the lump in the breast — 21st-century screening aims to find cancer before symptoms even arise.

Most exciting of all are frontier developments in treating cancer... From drugs like lenalidomide and bortezomib in the 2000s, which helped double median myeloma survival, to the spread of monoclonal antibodies, real breakthroughs in treatments have meaningfully extended people's lives — not just by months, but years. Perhaps the most promising development is CAR-T therapy, a form of immunotherapy. Rather than attempting to kill the cancer directly, immunotherapies turn a patient's own T-cells into guided missiles. In a recent study of 97 patients with multiple myeloma, many of whom were facing hospice care, a third of those who received CAR-T therapy had no detectable cancer five years later. It was the kind of result that doctors rarely see.

The article begins with some recent quotes from Jon Gluck, who was told after a cancer diagnosis that he had as little as 18 months left to live — 22 years ago...
AI

'AI Is Not Intelligent': The Atlantic Criticizes 'Scam' Underlying the AI Industry (msn.com) 206

The Atlantic makes the case that "the foundation of the AI industry is a scam" and that AI "is not what its developers are selling it as: a new class of thinking — and, soon, feeling — machines." [OpenAI CEO Sam] Altman brags about ChatGPT-4.5's improved "emotional intelligence," which he says makes users feel like they're "talking to a thoughtful person." Dario Amodei, the CEO of the AI company Anthropic, argued last year that the next generation of artificial intelligence will be "smarter than a Nobel Prize winner." Demis Hassabis, the CEO of Google's DeepMind, said the goal is to create "models that are able to understand the world around us." These statements betray a conceptual error: Large language models do not, cannot, and will not "understand" anything at all. They are not emotionally intelligent or smart in any meaningful or recognizably human sense of the word. LLMs are impressive probability gadgets that have been fed nearly the entire internet, and produce writing not by thinking but by making statistically informed guesses about which lexical item is likely to follow another.
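The mechanism the Atlantic describes, predicting the next token from observed frequencies, can be illustrated with a toy counting model. Real LLMs use learned neural representations over subword tokens rather than raw word counts, but the objective has the same shape:

```python
from collections import Counter, defaultdict

# Toy next-token predictor: count which word follows which in a tiny
# corpus, then predict the statistically most likely successor.
# Purely illustrative of the "probability gadget" idea, not an LLM.

corpus = "the cat sat on the mat and the cat slept".split()

successors = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    successors[prev][nxt] += 1

def most_likely_next(word):
    """Return the most frequent word observed after `word`."""
    return successors[word].most_common(1)[0][0]

print(most_likely_next("the"))  # "cat" follows "the" twice, "mat" only once
```

Nothing in this loop models meaning; it only tallies co-occurrence, which is the Atlantic's point scaled down to ten words.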
A sociologist and linguist even teamed up for a new book called The AI Con: How to Fight Big Tech's Hype and Create the Future We Want, the article points out: The authors observe that large language models take advantage of the brain's tendency to associate language with thinking: "We encounter text that looks just like something a person might have said and reflexively interpret it, through our usual process of imagining a mind behind the text. But there is no mind there, and we need to be conscientious to let go of that imaginary mind we have constructed."

Several other AI-related social problems, also springing from human misunderstanding of the technology, are looming. The uses of AI that Silicon Valley seems most eager to promote center on replacing human relationships with digital proxies. Consider the ever-expanding universe of AI therapists and AI-therapy adherents, who declare that "ChatGPT is my therapist — it's more qualified than any human could be." Witness, too, how seamlessly Mark Zuckerberg went from selling the idea that Facebook would lead to a flourishing of human friendship to, now, selling the notion that Meta will provide you with AI friends to replace the human pals you have lost in our alienated social-media age....

The good news is that nothing about this is inevitable: According to a study released in April by the Pew Research Center, although 56 percent of "AI experts" think artificial intelligence will make the United States better, only 17 percent of American adults think so. If many Americans don't quite understand how artificial "intelligence" works, they also certainly don't trust it. This suspicion, no doubt provoked by recent examples of Silicon Valley con artistry, is something to build on.... If people understand what large language models are and are not; what they can and cannot do; what work, interactions, and parts of life they should — and should not — replace, they may be spared its worst consequences.

XBox (Games)

Microsoft Announces Upcoming Windows-Powered Handheld Xbox Device: the 'ROG Xbox Ally' (engadget.com) 44

Nintendo's new Switch 2 console sold a record 3 million units after its launch Thursday. But today Microsoft announced its own upcoming handheld gaming device that's Xbox-branded (and Windows-powered).

Working with ASUS' ROG division, they built a device that weighs more than the Nintendo Switch 2, and "is marginally heavier than the Steam Deck," reports Engadget. But "at least those grips look more ergonomic than those on the Nintendo Switch 2 (which is already cramping my hands) or even the Steam Deck." There are two variants of the handheld: the ROG Xbox Ally and ROG Xbox Ally X. Microsoft didn't reveal pricing, but the handhelds are coming this holiday... Critically, Microsoft and ROG aren't locking the devices to only playing Xbox games (though you can do that natively, via the cloud or by accessing an Xbox console remotely). You'll be able to play games from Battle.net and "other leading PC storefronts" too. Obviously, there's Game Pass integration here, as well as support for the Xbox Play Anywhere initiative, which enables you to play games with synced progress across a swathe of devices after buying them once...

There's a dedicated physical Xbox button that can bring up a Game Bar overlay, which seemingly makes it easy to switch between apps and games, tweak settings, start chatting with friends and more... You'll be able to mod games on either system as well.

The Xbox Ally is powered by the AMD Ryzen Z2 A processor, and has 16GB of RAM and 512GB of SSD storage. The Xbox Ally X is the more powerful model. It has an AMD Ryzen AI Z2 Extreme processor, 24GB of RAM and 1TB of storage. They each have a microSD card reader, so you won't need to worry about shelling out for proprietary storage options to have extra space for your games... Both systems boast "HD haptics..." Both systems should be capable of outputting video to a TV or monitor, as they have two USB-C ports with support for DisplayPort 2.1 and Power Delivery 3.0.

"Microsoft has needed to respond to SteamOS ever since the Steam Deck launched three years ago," argues The Verge, "and it has steadily been tweaking its Xbox app and the Xbox Game Bar on Windows to make both more handheld-friendly..." But there was always a bigger overhaul of Windows required, and we're starting to see parts of that today. "The reality is that we've made tremendous progress on this over the last couple of years, and this is really the device that galvanized those teams and got everybody marching and working towards a moment that we're just really excited to put into the hands of players," says Roanne Sones, corporate vice president of gaming Devices and ecosystem at Xbox, in a briefing with The Verge...

I'll need to try this new interface fully to really get a feel for the Windows changes here, but Microsoft is promising that this isn't just lipstick on top of Windows. "This isn't surface-level changes, we've made significant improvements," says Potvin. "Some of our early testing with the components we've turned off in Windows, we get about 2GB of memory going back to the games while running in the full-screen experience."

Facebook

Mozilla Criticizes Meta's 'Invasive' Feed of Users' AI Prompts, Demands Its Shutdown (mozillafoundation.org) 37

In late April Meta introduced its Meta AI app, which included something called a Discover feed. ("You can see the best prompts people are sharing, or remix them to make them your own.")

But while Meta insisted "you're in control: nothing is shared to your feed unless you choose to post it" — just two days later Business Insider noticed that "clearly, some people don't realize they're sharing personal stuff." To be clear, your AI chats are not public by default — you have to choose to share them individually by tapping a share button. Even so, I get the sense that some people don't really understand what they're sharing, or what's going on.

Like the woman with the sick pet turtle. Or another person who was asking for advice about what legal measures he could take against his former employer after getting laid off. Or a woman asking about the effects of folic acid for a woman in her 60s who has already gone through menopause. Or someone asking for help with their Blue Cross health insurance bill... Perhaps these people knew they were sharing on a public feed and wanted to do so. Perhaps not. This leaves us with an obvious question: What's the point of this, anyway? Even if you put aside the potential accidental oversharing, what's the point of seeing a feed of people's AI prompts at all?

Now Mozilla has issued their own warning. "Meta is quietly turning private AI chats into public content," warns a new post this week from the Mozilla Foundation, "and too many people don't realize it's happening." That's why the Mozilla community is demanding that Meta:

- Shut down the Discover feed until real privacy protections are in place.

- Make all AI interactions private by default with no public sharing option unless explicitly enabled through informed consent.

- Provide full transparency about how many users have unknowingly shared private information.

- Create a universal, easy-to-use opt-out system for all Meta platforms that prevents user data from being used for AI training.

- Notify all users whose conversations may have been made public, and allow them to delete their content permanently.

Meta is blurring the line between private and public — and it's happening at the cost of our privacy. People have the right to know when they're speaking in public, especially when they believe they're speaking in private.

If you agree, add your name to demand Meta shut down its invasive AI feed — and guarantee that no private conversations are made public without clear, explicit, and informed opt-in consent.

AI

After 'AI-First' Promise, Duolingo CEO Admits 'I Did Not Expect the Blowback' (ft.com) 46

Last month, Duolingo CEO Luis von Ahn "shared on LinkedIn an email he had sent to all staff announcing Duolingo was going 'AI-first'," remembers the Financial Times.

"I did not expect the amount of blowback," he admits.... He attributes this anger to a general "anxiety" about technology replacing jobs. "I should have been more clear to the external world," he reflects on a video call from his office in Pittsburgh. "Every tech company is doing similar things [but] we were open about it...."

Since the furore, von Ahn has reassured customers that AI is not going to replace the company's workforce. There will be a "very small number of hourly contractors who are doing repetitive tasks that we no longer need", he says. "Many of these people are probably going to be offered contractor jobs for other stuff." Duolingo is still recruiting if it is satisfied the role cannot be automated. Graduates who make up half the people it hires every year "come with a different mindset" because they are using AI at university.

The thrust of the AI-first strategy, the 46-year-old says, is overhauling work processes... He wants staff to explore whether their tasks "can be entirely done by AI or with the help of AI. It's just a mind shift that people first try AI. It may be that AI doesn't actually solve the problem you're trying to solve ... that's fine." The aim is to automate repetitive tasks to free up time for more creative or strategic work.

Examples where it is making a difference include technology and illustration. Engineers will spend less time writing code. "Some of it they'll need to but we want it to be mediated by AI," von Ahn says... Similarly, designers will have more of a supervisory role, with AI helping to create artwork that fits Duolingo's "very specific style". "You no longer do the details and are more of a creative director. For the vast majority of jobs, this is what's going to happen...." [S]ocietal implications for AI, such as the ethics of stealing creators' copyright, are "a real concern". "A lot of times you don't even know how [the large language model] was trained. We should be careful." When it comes to artwork, he says Duolingo is "ensuring that the entirety of the model is trained just with our own illustrations".

AI

'Welcome to Campus. Here's Your ChatGPT.' (nytimes.com) 68

The New York Times reports: California State University announced this year that it was making ChatGPT available to more than 460,000 students across its 23 campuses to help prepare them for "California's future A.I.-driven economy." Cal State said the effort would help make the school "the nation's first and largest A.I.-empowered university system..." Some faculty members have already built custom chatbots for their students by uploading course materials like their lecture notes, slides, videos and quizzes into ChatGPT.
And other U.S. campuses including the University of Maryland are also "working to make A.I. tools part of students' everyday experiences," according to the article. It's all part of an OpenAI initiative "to overhaul college education — by embedding its artificial intelligence tools in every facet of campus life."

The Times calls it "a national experiment on millions of students." If the company's strategy succeeds, universities would give students A.I. assistants to help guide and tutor them from orientation day through graduation. Professors would provide customized A.I. study bots for each class. Career services would offer recruiter chatbots for students to practice job interviews. And undergrads could turn on a chatbot's voice mode to be quizzed aloud ahead of a test. OpenAI dubs its sales pitch "A.I.-native universities..." To spread chatbots on campuses, OpenAI is selling premium A.I. services to universities for faculty and student use. It is also running marketing campaigns aimed at getting students who have never used chatbots to try ChatGPT...

OpenAI's campus marketing effort comes as unemployment has increased among recent college graduates — particularly in fields like software engineering, where A.I. is now automating some tasks previously done by humans. In hopes of boosting students' career prospects, some universities are racing to provide A.I. tools and training...

[Leah Belsky, OpenAI's vice president of education] said a new "memory" feature, which retains and can refer to previous interactions with a user, would help ChatGPT tailor its responses to students over time and make the A.I. "more valuable as you grow and learn." Privacy experts warn that this kind of tracking feature raises concerns about long-term tech company surveillance. In the same way that many students today convert their school-issued Gmail accounts into personal accounts when they graduate, Ms. Belsky envisions graduating students bringing their A.I. chatbots into their workplaces and using them for life.

"It would be their gateway to learning — and career life thereafter," Ms. Belsky said.

United Kingdom

Could UK Lawyers Face Life in Prison for Citing Fake AI-Generated Cases? (apnews.com) 45

The Associated Press reports that on Friday, U.K. High Court justice Victoria Sharp and fellow judge Jeremy Johnson ruled on the possibility of false information being submitted to the court. Concerns had been raised by lower-court judges about "suspected use by lawyers of generative AI tools to produce written legal arguments or witness statements which are not then checked." In a ruling written by Sharp, the judges said that in a 90 million pound ($120 million) lawsuit over an alleged breach of a financing agreement involving the Qatar National Bank, a lawyer cited 18 cases that did not exist. The client in the case, Hamad Al-Haroun, apologized for unintentionally misleading the court with false information produced by publicly available AI tools, and said he was responsible, rather than his solicitor Abid Hussain. But Sharp said it was "extraordinary that the lawyer was relying on the client for the accuracy of their legal research, rather than the other way around."

In the other incident, a lawyer cited five fake cases in a tenant's housing claim against the London Borough of Haringey. Barrister Sarah Forey denied using AI, but Sharp said she had "not provided to the court a coherent explanation for what happened." The judges referred the lawyers in both cases to their professional regulators, but did not take more serious action.

Sharp said providing false material as if it were genuine could be considered contempt of court or, in the "most egregious cases," perverting the course of justice, which carries a maximum sentence of life in prison.

AI

AI Firms Say They Can't Respect Copyright. But A Nonprofit's Researchers Just Built a Copyright-Respecting Dataset (msn.com) 100

Is copyrighted material a requirement for training AI? asks the Washington Post. That's what top AI companies are arguing, and "Few AI developers have tried the more ethical route — until now.

"A group of more than two dozen AI researchers have found that they could build a massive eight-terabyte dataset using only text that was openly licensed or in public domain. They tested the dataset quality by using it to train a 7 billion parameter language model, which performed about as well as comparable industry efforts, such as Llama 2-7B, which Meta released in 2023." A paper published Thursday detailing their effort also reveals that the process was painstaking, arduous and impossible to fully automate. The group built an AI model that is significantly smaller than the latest offered by OpenAI's ChatGPT or Google's Gemini, but their findings appear to represent the biggest, most transparent and rigorous effort yet to demonstrate a different way of building popular AI tools....

As it turns out, the task involves a lot of humans. That's because of the technical challenges of data not being formatted in a way that's machine readable, as well as the legal challenges of figuring out what license applies to which website, a daunting prospect when the industry is rife with improperly licensed data. "This isn't a thing where you can just scale up the resources that you have available" like access to more computer chips and a fancy web scraper, said Stella Biderman [executive director of the nonprofit research institute Eleuther AI]. "We use automated tools, but all of our stuff was manually annotated at the end of the day and checked by people. And that's just really hard."
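The filter-then-review pattern Biderman describes, automated tools making the easy calls and humans checking everything ambiguous, can be sketched as a simple triage function. The license names and record fields below are hypothetical illustrations, not EleutherAI's actual schema or code:

```python
# Hypothetical sketch of the triage pattern described above: clearly
# open-licensed documents pass, clearly closed ones are rejected, and
# anything with a missing or improperly declared license is queued for
# manual annotation. Field names and license tags are illustrative only.

OPENLY_LICENSED = {"public-domain", "cc0", "cc-by", "cc-by-sa", "mit"}
CLOSED = {"all-rights-reserved", "proprietary"}

def triage(doc):
    """Route one document record to include / exclude / human-review."""
    license_tag = (doc.get("license") or "").lower()
    if license_tag in OPENLY_LICENSED:
        return "include"
    if license_tag in CLOSED:
        return "exclude"
    return "human-review"   # the expensive path the researchers describe

docs = [
    {"id": 1, "license": "CC-BY"},
    {"id": 2, "license": None},          # no machine-readable license
    {"id": 3, "license": "proprietary"},
]
print([triage(d) for d in docs])  # ['include', 'human-review', 'exclude']
```

The point of the sketch is the third branch: at web scale, the "human-review" bucket is enormous, which is why the paper calls the process impossible to fully automate.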

Still, the group managed to unearth new datasets that can be used ethically. Those include a set of 130,000 English language books in the Library of Congress, which is nearly double the size of the popular-books dataset Project Gutenberg. The group's initiative also builds on recent efforts to develop more ethical, but still useful, datasets, such as FineWeb from Hugging Face, the open-source repository for machine learning... Still, Biderman remained skeptical that this approach could find enough content online to match the size of today's state-of-the-art models... Biderman said she didn't expect companies such as OpenAI and Anthropic to start adopting the same laborious process, but she hoped it would encourage them to at least rewind back to 2021 or 2022, when AI companies still shared a few sentences of information about what their models were trained on.

"Even partial transparency has a huge amount of social value and a moderate amount of scientific value," she said.

AI

Anthropic's AI is Writing Its Own Blog - Oh Wait. No It's Not (techcrunch.com) 2

"Everyone has a blog these days, even Claude," Anthropic wrote this week on a page titled "Claude Explains."

"Welcome to the small corner of the Anthropic universe where Claude is writing on every topic under the sun."

Not any more. After blog posts titled "Improve code maintainability with Claude" and "Rapidly develop web applications with Claude" — Anthropic suddenly removed the whole page sometime after Wednesday. But TechCrunch explains the whole thing was always less than it seemed, and "One might be easily misled into thinking that Claude is responsible for the blog's copy end-to-end." According to a spokesperson, the blog is overseen by Anthropic's "subject matter experts and editorial teams," who "enhance" Claude's drafts with "insights, practical examples, and [...] contextual knowledge."

"This isn't just vanilla Claude output — the editorial process requires human expertise and goes through iterations," the spokesperson said. "From a technical perspective, Claude Explains shows a collaborative approach where Claude [creates] educational content, and our team reviews, refines, and enhances it...." Anthropic says it sees Claude Explains as a "demonstration of how human expertise and AI capabilities can work together," starting with educational resources. "Claude Explains is an early example of how teams can use AI to augment their work and provide greater value to their users," the spokesperson said. "Rather than replacing human expertise, we're showing how AI can amplify what subject matter experts can accomplish [...] We plan to cover topics ranging from creative writing to data analysis to business strategy...."

The Anthropic spokesperson noted that the company is still hiring across marketing, content, and editorial, and "many other fields that involve writing," despite the company's dip into AI-powered blog drafting. Take that for what you will.

iOS

What To Expect From Apple's WWDC (arstechnica.com) 26

Apple's 2025 Worldwide Developers Conference (WWDC) kicks off next week, on June 9th, showcasing the company's latest software and new technologies. That includes the next version of iOS, which is rumored to have the most significant design overhaul since the introduction of iOS 7. Here's an overview of what to expect:

Major Software Redesigns
Apple plans to shift its operating system naming to reflect the release year, moving from sequential numbers to year-based identifiers. Consequently, the upcoming releases will be labeled as iOS 26, macOS 26, watchOS 26, etc., streamlining the versioning across platforms.

iOS 26 is anticipated to feature a glossy, glass-like interface inspired by visionOS, incorporating translucent elements and rounded buttons. This design language is expected to extend across iPadOS, macOS, watchOS, and tvOS, promoting a cohesive user experience across devices. Core applications like Phone, Safari, and Camera are slated for significant redesigns, too. For instance, Safari may introduce a translucent, "glassy" address bar, aligning with the new visual aesthetics.

While AI is not expected to be the main focus, since Siri's revamp is reportedly not yet ready, some AI-related updates are rumored. The Shortcuts app may gain "Apple Intelligence," enabling users to create shortcuts using natural language. It's also possible that Gemini will be offered as an option for AI functionalities on the iPhone, similar to ChatGPT.

Other App and Feature Updates
The lock screen might display charging estimates, indicating how long it will take for the phone to fully charge. There's a rumor about bringing live translation features to AirPods. The Messages app could receive automatic translations and call support; the Music app might introduce full-screen animated lock screen art; and Apple Notes may get markdown support. Users may also only need to log into a captive Wi-Fi portal once, and all their devices will automatically be logged in.

Significant updates are expected for Apple Home. There's speculation about the potential announcement of a "HomePad" with a screen, Apple's competitor to devices like Google's Nest Hub. A new dedicated Apple gaming app is also anticipated to replace Game Center.
If you're expecting new hardware, don't hold your breath: the event is expected to focus primarily on software. macOS 26 may also drop support for several older Intel-based Macs, including the 2018 MacBook Pro and the 2019 iMac, as Apple continues its transition toward exclusive support for Apple Silicon devices.

Sources:
Apple WWDC 2025 Rumors and Predictions! (Waveform)
WWDC 2025 Overview (MacRumors)
WWDC 2025: What to expect from this year's conference (TechCrunch)
What to expect from Apple's Worldwide Developers Conference next week (Ars Technica)
Apple's WWDC 2025: How to Watch and What to Expect (Wired)
AI

Trump AI Czar Sacks on Universal Basic Income: 'It's Not Going To Happen' (businessinsider.com) 361

David Sacks, President Trump's AI policy advisor, has dismissed the prospect of implementing a universal basic income program, declaring "it's not going to happen" during his tenure. He said: "The future of AI has become a Rorschach test where everyone sees what they want. The Left envisions a post-economic order in which people stop working and instead receive government benefits. In other words, everyone on welfare. This is their fantasy; it's not going to happen."
Businesses

Klarna CEO Says Company Will Use Humans To Offer VIP Customer Service (techcrunch.com) 24

An anonymous reader quotes a report from TechCrunch: "My wife taught me something," Klarna CEO Sebastian Siemiatkowski told the crowd at London SXSW. He was addressing the headlines about the company looking to hire human workers after previously saying Klarna used artificial intelligence to do work that would equate to 700 workers. "Two things can be true at the same time," he said. Siemiatkowski said it's true that the company looked to stop hiring human workers a few years ago and rolled out AI agents that have helped reduce the cost of customer support and increase the company's revenue per employee. The company had 5,500 workers two years ago, and that number now stands at around 3,000, he said, adding that as the company's salary costs have gone down, Klarna now seeks to reinvest a majority of that money into employee cash and equity compensation.

But, he insisted, this doesn't mean there isn't an opportunity for humans to work at his company. "We think offering human customer service is always going to be a VIP thing," he said, comparing it to how people pay more for clothing stitched by hand rather than machines. "So we think that two things can be done at the same time. We can use AI to automatically take away boring jobs, things that are manual work, but we are also going to promise our customers to have a human connection."

United Kingdom

UK Tech Job Openings Climb 21% To Pre-Pandemic Highs (theregister.com) 17

UK tech job openings have surged 21% to pre-pandemic levels, driven largely by a 200% spike in demand for AI skills. London accounted for 80% of the AI-related postings. The Register reports: Accenture collected data from LinkedIn in the first and second week of February 2025, and supplemented the results with a survey of more than 4,000 respondents conducted by research firm YouGov between July and August 2024. The research found a 53 percent annual increase in those describing themselves as having tech skills, amounting to 1.69 million people reporting skills in disciplines including cyber, data, and robotics. [...]

The research found that London-based companies said they would allocate a fifth of their tech budgets to AI this year, compared with 13 percent of companies based in North East England, Scotland, and Wales. Growth in revenue per employee increased during the period when LLMs emerged, from 7 percent annually between 2018 and 2022 to 27 percent between 2018 and 2024. Meanwhile, growth in the same measure fell slightly in industries less affected by AI, such as mining and hospitality, the researchers said.
