AI

Citigroup Mandates AI Training For 175,000 Employees To Help Them 'Reinvent Themselves' (fortune.com)

Citigroup has rolled out mandatory AI training for all 175,000 of its employees across 80 locations worldwide, a sweeping initiative that CEO Jane Fraser describes as helping workers "reinvent themselves" before the technology permanently alters what they do for a living.

The $205 billion bank sent out an internal memo last year specifically requiring staffers to learn prompting skills. Fraser told the Washington Post at Davos that AI "will change the nature of what people do every day" and "will take some jobs away." The adaptive training platform lets experts complete the course in under 10 minutes while beginners need about 30 minutes. Citi reported last year that employees had entered more than 6.5 million prompts into its built-in AI tools, and Q4 2025 data shows a 70% adoption rate for the bank's proprietary AI tools.
Mozilla

Mozilla is Building an AI 'Rebel Alliance' To Take on Industry Heavyweights OpenAI, Anthropic (cnbc.com)

Mozilla, the nonprofit organization behind the Firefox browser that has spent two decades battling tech giants over control of the internet, is now turning its attention to AI and deploying roughly $1.4 billion in reserves to fund what president Mark Surman calls a "rebel alliance" of startups focused on AI safety, transparency and governance.

The organization released a report Tuesday outlining its strategy to counter the growing dominance of OpenAI and Anthropic, which have raised more than $60 billion and $30 billion respectively from investors and now command valuations of $500 billion and $350 billion. Mozilla Ventures, a fund launched in 2022 with an initial $35 million commitment, has invested in more than 55 companies to date and is exploring raising additional capital.

Surman, who runs the organization from a farm outside Toronto, acknowledged the financial mismatch but said Mozilla is playing the long game. By 2028, he wants Mozilla to be funding a "mainstream" open-source AI ecosystem for developers. The effort faces headwinds from the Trump administration, which has criticized AI safety efforts as "woke AI" and signed an executive order establishing a task force to challenge state AI regulations.
Facebook

Instagram, Facebook and WhatsApp To Test Premium Subscriptions (techcrunch.com)

An anonymous reader shares a report: Meta plans to test new subscriptions that give people access to exclusive features on its apps, the company told TechCrunch on Monday. The tech giant said the new subscriptions will unlock more productivity and creativity, along with expanded AI capabilities.

In the coming months, Meta said it will offer a premium experience on Instagram, Facebook, and WhatsApp that gives users access to special features and more control over how they share and connect, while keeping the core experiences free. Meta doesn't appear to be locked into one strategy, noting that it will test a variety of subscription features and bundles, and that each app subscription will have a distinct set of exclusive features.

Earth

Doomsday Clock Ticks To 85 Seconds Before Midnight, Its Closest Ever (reuters.com)

The Bulletin of the Atomic Scientists on Tuesday set its symbolic Doomsday Clock to 85 seconds before midnight -- the closest the timepiece has ever been to the theoretical point of annihilation since scientists created it during the Cold War in 1947.

The clock now stands four seconds nearer than last year's setting, and this marks the third time in four years that the Bulletin has moved it closer to midnight. The Chicago-based nonprofit pointed to aggressive behavior by nuclear powers Russia, China and the United States, fraying nuclear arms control frameworks, ongoing conflicts in Ukraine and the Middle East, unregulated AI integration into military systems, and climate change.

"In terms of nuclear risks, nothing in 2025 trended in the right direction," said Alexandra Bell, the Bulletin's president and CEO. The last remaining nuclear arms pact between the US and Russia, the New START treaty, expires on February 5.
AI

Pinterest Cuts Up To 15% of Jobs To Redirect Resources To AI (reuters.com)

Pinterest said on Tuesday it would trim its workforce by less than 15% and reduce office space, as the social media company looks to reallocate resources to AI-focused roles and initiatives. From a report: The announcement comes as the company competes with TikTok and Meta-owned Facebook and Instagram for digital advertising budgets, as these platforms continue to draw marketers with their extensive user base.

Pinterest had 5,205 full-time employees as of September 2025. The latest job cut would translate to fewer than 780 positions. Top executives at the World Economic Forum's annual meeting said that while jobs would disappear, new ones would spring up, with two telling Reuters that AI would be used as an excuse by companies which were planning layoffs anyway. Last week, design software maker Autodesk also announced a 7% job cut to redirect investments to its cloud platform and AI efforts.

Books

How Anthropic Built Claude: Buy Books, Slice Spines, Scan Pages, Recycle the Remains (msn.com)

Court documents unsealed last week in a copyright lawsuit against Anthropic reveal that the AI company ran an operation called "Project Panama" to buy millions of physical books, slice off their spines, scan the pages to train its Claude chatbot, and then send the remains to recycling companies.

The company spent tens of millions of dollars on the effort and hired Tom Turvey, a Google executive who had worked on the legally contested Google Books project two decades earlier. Anthropic bought books in batches of tens of thousands from retailers including Better World Books and World of Books. A vendor document noted the company was seeking to scan between 500,000 and two million books.

Before Project Panama, Anthropic co-founder Ben Mann downloaded books from LibGen, a shadow library of pirated material, over 11 days in June 2021. He later shared a link to the Pirate Library Mirror site with colleagues, writing "this is awesome!!!" Meta employees similarly downloaded books from torrent platforms after approval from Mark Zuckerberg, court filings allege, though one engineer wrote that "torrenting from a corporate laptop doesn't feel right." Anthropic settled for $1.5 billion in August without admitting wrongdoing.
AI

Microsoft's Latest AI Chip Claims Performance Edge Over Amazon and Google (geekwire.com)

An anonymous reader quotes a report from GeekWire: Microsoft on Monday announced Maia 200, the second generation of its custom AI chip, claiming it's the most powerful first-party silicon from any major cloud provider. The company says Maia 200 delivers three times the performance of Amazon's latest Trainium chip on certain benchmarks, and exceeds Google's most recent tensor processing unit (TPU) on others. The chip is already running workloads at Microsoft's data center near Des Moines, Iowa. Microsoft says Maia 200 is powering OpenAI's GPT-5.2 models, Microsoft 365 Copilot, and internal projects from its Superintelligence team. A second deployment at a data center near Phoenix is planned next.

It's part of the larger trend among cloud giants to build their own custom silicon for AI rather than rely solely on Nvidia. [...] The company says Maia 200 offers 30% better performance-per-dollar than its current hardware. Maia 200 also builds on the first-generation chip with a more specific focus on inference, the process of running AI models after they've been trained. [...] Microsoft is also opening the door to outside developers. The company announced a software development kit that will let AI startups and researchers optimize their models for Maia 200. Developers and academics can sign up for an early preview starting today.

AI

DOT Plans To Use Google Gemini AI To Write Regulations (propublica.org)

The Trump administration is planning to use AI to write federal transportation regulations, ProPublica reported on Monday, citing U.S. Department of Transportation records and interviews with six agency staffers. From the report: The plan was presented to DOT staff last month at a demonstration of AI's "potential to revolutionize the way we draft rulemakings," agency attorney Daniel Cohen wrote to colleagues. The demonstration, Cohen wrote, would showcase "exciting new AI tools available to DOT rule writers to help us do our job better and faster."

Discussion of the plan continued among agency leadership last week, according to meeting notes reviewed by ProPublica. Gregory Zerzan, the agency's general counsel, said at that meeting that President Donald Trump is "very excited about this initiative." Zerzan seemed to suggest that the DOT was at the vanguard of a broader federal effort, calling the department the "point of the spear" and "the first agency that is fully enabled to use AI to draft rules." Zerzan appeared interested mainly in the quantity of regulations that AI could produce, not their quality. "We don't need the perfect rule on XYZ. We don't even need a very good rule on XYZ," he said, according to the meeting notes. "We want good enough." Zerzan added, "We're flooding the zone."

These developments have alarmed some at DOT. The agency's rules touch virtually every facet of transportation safety, including regulations that keep airplanes in the sky, prevent gas pipelines from exploding and stop freight trains carrying toxic chemicals from skidding off the rails. Why, some staffers wondered, would the federal government outsource the writing of such critical standards to a nascent technology notorious for making mistakes? The answer from the plan's boosters is simple: speed. Writing and revising complex federal regulations can take months, sometimes years. But, with DOT's version of Google Gemini, employees could generate a proposed rule in a matter of minutes or even seconds, two DOT staffers who attended the December demonstration remembered the presenter saying.

United States

New California Law Means Big Changes For Photos of Homes in Real Estate Listings (sfchronicle.com)

California house hunters now have legal protection against the kind of real estate photo trickery that has long plagued the home-buying process, as a new state law requiring disclosure of digitally altered listing images took effect on January 1.

Assembly Bill 723 mandates that real estate agents and brokers include a "reasonably conspicuous" statement whenever photos have been altered using editing software or AI to add, remove, or change elements like furniture, appliances, flooring, views or landscaping. Agents must also provide access to the original, unaltered image through a QR code, link, or placement next to the altered photo.

The law does not cover wide-angle lenses -- a perennial complaint among buyers who find rooms smaller than they appeared -- nor does it apply to routine adjustments like cropping, color correction or exposure. California is the first state to require such disclosures, though Wisconsin passed a similar law in December that takes effect next year.
The Internet

How a 15,000-Person Island Stumbled Into a $70 Million AI Windfall (sherwood.news)

An anonymous reader shares a report: From Sandisk shareholders to vibe coders, AI is making -- and breaking -- fortunes at a rapid pace. One unlikely beneficiary has been the British Overseas Territory of Anguilla, which lucked into a future fortune when internet naming authorities (IANA, the predecessor of ICANN) gave the island the ".ai" top-level domain in the mid-1990s. Indeed, since ChatGPT's launch at the end of 2022, the gold rush for websites to associate themselves with the burgeoning AI technology has seen a flood of revenue for the island of just ~15,000 people.

In 2023, Anguilla generated 87 million East Caribbean dollars (~$32 million) from domain name sales, some 22% of its total government revenue that year, with 354,000 ".ai" domains registered. As of January 2, 2026, the number of ".ai" domains surpassed 1 million, per data from Domain Name Stat -- suggesting that the nation's revenue from ".ai" has likely soared, too. This is confirmed in the government's 2026 budget address, in which Cora Richardson Hodge, the premier of Anguilla, said, "Revenue from domain name registration continues to exceed expectations."

News

Saudi Arabia To Scale Back Neom Megaproject (ft.com)

Saudi Arabia is preparing to significantly scale back Neom, Crown Prince Mohammed bin Salman's flagship development that sprawls across a Belgium-sized stretch of Red Sea coastline and was once billed as the world's largest construction site. Financial Times is reporting that Prince Mohammed, who chairs the project, now envisions something "far smaller" as a year-long review nears completion. The Line, a futuristic 170-kilometer linear city that served as Neom's centerpiece, will be radically reimagined as a result, the report added.

Architects are already working on a more modest design that would repurpose infrastructure built over the past few years. Neom could pivot toward becoming a data center hub, taking advantage of seawater cooling from its coastal location as Saudi Arabia pushes to become a leading AI player. The Trojena ski resort is also being downsized and will no longer host the 2029 Asian Winter Games as originally planned. Construction largely stalled after longtime CEO Nadhmi al-Nasr abruptly departed in November 2024.
United Kingdom

AI is Hitting UK Harder Than Other Big Economies, Study Finds (theguardian.com)

The UK is losing more jobs than it is creating because of AI and is being hit harder than rival large economies, new research suggests. From a report: British companies reported that AI had resulted in net job losses over the past 12 months, down 8% -- the highest rate among the leading economies surveyed, including the US, Japan, Germany and Australia, according to a study by the investment bank Morgan Stanley. The research surveyed companies using AI for at least a year across five industries: consumer staples and retail, real estate, transport, healthcare equipment and cars.

It found that British businesses reported an average 11.5% increase in productivity aided by AI. US businesses reported similar gains, but created more jobs than they cut. It suggests UK workers are being hit particularly hard by the rise of AI, as higher costs and taxes also weigh on the job market. Unemployment is at a four-year high, as rises in the minimum wage and employer national insurance contributions squeeze hiring.
Games

Angry Gamers Are Forcing Studios To Scrap or Rethink New Releases (msn.com)

The video game industry is experiencing something that most consumer-facing businesses would consider remarkable: organized online campaigns from players are actually forcing studios to cancel projects or publicly walk back any association with AI-generated content.

Running With Scissors, the publisher behind the Postal shooter franchise, recently scrapped a title after players accused its trailer of containing AI-generated graphics. Goonswarm Games, the developer behind the canceled project, subsequently shut down entirely and cited six years of lost work alongside what it described as a flood of threats and accusations.

Sandfall Interactive's "Clair Obscur: Expedition 33" had its Indie Game Awards Game of the Year honor rescinded after the developer said it had considered AI-generated images, even though the final release contained none. Larian Studios, the developer behind Baldur's Gate 3, faced immediate backlash after CEO Swen Vincke mentioned in an interview that the company was using generative AI to "explore ideas" for an upcoming release. Vincke later clarified on X that artists use AI only for reference images the way they would use "art books or Google," and Larian executives eventually stated on Reddit that AI would play no role in final artwork.
GNU is Not Unix

Richard Stallman Was Asked: Is Software Piracy Wrong? (slashdot.org)

On Friday, 72-year-old Richard Stallman made a two-hour-and-20-minute appearance at the Georgia Institute of Technology, talking about everything from AI and connected cars to smartphones, age verification laws, and his favorite Linux distro. But early on, Stallman also told the audience how "I despise DRM...I don't want any copy of anything with DRM. Whatever it is, I never want it so badly that I would bow down to DRM." (So he doesn't use Spotify or Netflix...)

This led to an interesting moment when someone asked him later if we have an ethical obligation to avoid piracy. First, Stallman swapped in his preferred phrase, "forbidden sharing"...

"I won't use the word piracy to refer to sharing. Sharing is good and it should be lawful. Those laws are wrong. Copyright as it is now is an injustice."

Stallman said "I don't hesitate to share copies of anything," but added that "I don't have copies of non-free software, because I'm disgusted by it." After a pause, he added this. "Just because there is a law to give some people unjust power, that doesn't mean breaking that law becomes wrong....

"Dividing people by forbidding them to help each other is nasty."

And later Stallman was asked how he watches movies, if he's opposed to DRM-heavy sites like Netflix, and the DRM in Blu-ray discs? "The only way I can see a movie is if I get a file — you know, like an MP4 file or MKV file. And I would get that, I suppose, by copying from somebody else."

"Sharing is good. Stopping people from sharing is evil."
The Media

Is Google Prioritizing YouTube and X Over News Publishers on Discover? (pressgazette.co.uk)

Earlier this month, the media site Press Gazette reported that Google "is increasingly prioritising AI summaries, X posts and Youtube videos" on its "Discover" feed (which appears on the leftmost homescreen page of many Android phones and the Google app's homepage).

"The changes could be devastating for publishers who rely heavily on Discover for referral traffic. And it looks set to accelerate a global trend of declining traffic to publishers from both Google search and Discover." Xavi Beumala from website analytics platform Marfeel warned in a research update: "Google Discover is no longer a publisher-first surface. It's becoming an AI platform with YouTube and X absorbing real estate that once went to newsrooms..." [They warn later that "This is not a marginal UI experiment. It is a reallocation of feed real estate away from links and toward inline Youtube plays and generated summaries."] Google says it prioritises "helpful, reliable, people-first content". Unlike Google News, there is no requirement that Google Discover showcases bona fide publisher websites.

In recent months fake news stories published by fraudulent website publishers have been promoted on Google Discover, reaping tens of millions of clicks. Google said it was working on a "fix" for this issue...

Facebook, Instagram and TikTok content may also start flowing into the Discover feed in future. When Google announced the addition of posts from X, Instagram and YouTube Shorts in September, it said there would be "more platforms to come".

Google

Google Discover Replaces News Headlines With Sometimes Inaccurate AI-Generated Alternatives (theverge.com)

An anonymous reader shared this report from The Verge: In early December, I brought you the news that Google has begun replacing Verge headlines, and those of our competitors, with AI clickbait nonsense in its content feed [which appears on the leftmost homescreen page of many Android phones and the Google app's homepage]. Google appeared to be backing away from the experiment, but now tells The Verge that its AI headlines in Google Discover are a feature, one that "performs well for user satisfaction." I once again see lots of misleading claims every time I check my phone...

For example, Google's AI claimed last week that "US reverses foreign drone ban," citing and linking to this PCMag story for the news. That's not just false — PCMag took pains to explain that it's false in the story that Google links to...! What does the author of that PCMag story think? "It makes me feel icky," Jim Fisher tells me over the phone. "I'd encourage people to click on stories and read them, and not trust what Google is spoon-feeding them." He says Google should be using the headline that humans wrote, and if Google needs a summary, it can use the ones that publications already submit to help search engines parse our work.

Google claims it's not rewriting headlines. It characterizes these new offerings as "trending topics," even though each "trending topic" presents itself as one of our stories, links to our stories, and uses our images, all without competent fact-checking to ensure the AI is getting them right... The AI is also no longer restricted to roughly four words per headline, so I no longer see nonsense headlines like "Microsoft developers using AI" or "AI tag debate heats." (Instead, I occasionally see tripe like "Fares: Need AAA & AA Games" or "Dispatch sold millions; few avoided romance.")

But Google's AI has no clue what parts of these stories are new, relevant, significant, or true, and it can easily confuse one story for another. On December 26th, Google told me that "Steam Machine price & HDMI details emerge." They hadn't. On January 11th, Google proclaimed that "ASUS ROG Ally X arrives." (It arrived in 2024; the new Xbox Ally arrived months ago.) On January 20th, it wrote that "Glasses-free 3D tech wows," introducing readers to "New 3D tech called Immensity from Leia" — but linking to this TechRadar story about an entirely different company called Visual Semiconductor...

Google declined our request for an interview to more fully explain the idea.

The site Android Police spotted more inaccurate headlines in December: A story from 9to5Google, which was actually titled 'Don't buy a Qi2 25W wireless charger hoping for faster speeds — just get the 'slower' one instead' was retitled as 'Qi2 slows older Pixels.' Similarly, Ars Technica's 'Valve's Steam Machine looks like a console, but don't expect it to be priced like one' was changed to 'Steam Machine price revealed.' At the time, we believed that the inaccuracies were due to the feature being unstable and in early testing.... Now, Google has stopped calling Discover's replacement of human-written headlines an "experiment."

"Google buries a 'Generated with AI, which can make mistakes' message under the 'See more' button in the summary," reports 9to5Google, "making it look like this is the publisher's intended headline." While it is obvious that Google has refined this feature over the past couple of months, it doesn't take long to still find plenty of misleading headlines throughout Discover... Another article from NotebookCheck about an Anker power bank with a retractable cable was given a headline that's about another product entirely. A pair of headlines from Tom's Hardware and PCMag, meanwhile, show the two sides of using AI for this purpose. The Tom's Hardware headline, "Free GPU & Amazon Scams," isn't representative of the actual article, which is about someone who bought a GPU from Amazon, canceled their order, and the retailer shipped it anyway. There's nothing about "Amazon Scams" in the article.
GNU is Not Unix

Richard Stallman Critiques AI, Connected Cars, Smartphones, and DRM (youtube.com)

Richard Stallman spoke Friday at Atlanta's Georgia Institute of Technology, continuing his activism for free software while also addressing today's new technologies.

Speaking about AI, Stallman warned that "nowadays, people often use the term artificial intelligence for things that aren't intelligent at all..." He makes a point of calling large language models "generators" because "They generate text and they don't understand really what that text means." (And they also make mistakes "without batting a virtual eyelash. So you can't trust anything that they generate.") Stallman says "Every time you call them AI, you are endorsing the claim that they are intelligent and they're not. So let's refuse to do that."

"So I've come up with the term Pretend Intelligence. We could call it PI. And if we start saying this more often, we might help overcome this marketing hype campaign that wants people to trust those systems, and trust their lives and all their activities to the control of those systems and the big companies that develop and control them."

"By the way, as far as I can tell, none of them is free software."

When it comes to today's cars, Stallman says they contain "malicious functionalities... Cars should not be connected. They should not upload anything." (He adds that "I am hoping to find a skilled mechanic to work with me in a project to make disconnected cars.")

And later Stallman calls the smartphone "an Orwellian tracking and surveillance device," saying he refuses to own one. (An advantage of free software is that it allows the removal of malicious functionalities.)

Stallman spoke for about 53 minutes — but then answered questions for nearly 90 minutes longer. Here are some of the highlights...
AI

The Risks of AI in Schools Outweigh the Benefits, Report Says (npr.org)

This month saw results from a yearlong global study of "potential negative risks that generative AI poses to students". The study (by the Brookings Institution's Center for Universal Education) also suggests how to prevent risks and maximize benefits: After interviews, focus groups, and consultations with over 500 students, teachers, parents, education leaders, and technologists across 50 countries, a close review of over 400 studies, and a Delphi panel, we find that at this point in its trajectory, the risks of utilizing generative AI in children's education overshadow its benefits.

"At the top of Brookings' list of risks is the negative effect AI can have on children's cognitive growth," reports NPR — "how they learn new skills and perceive and solve problems." The report describes a kind of doom loop of AI dependence, where students increasingly off-load their own thinking onto the technology, leading to the kind of cognitive decline or atrophy more commonly associated with aging brains... As one student told the researchers, "It's easy. You don't need to (use) your brain." The report offers a surfeit of evidence to suggest that students who use generative AI are already seeing declines in content knowledge, critical thinking and even creativity. And this could have enormous consequences if these young people grow into adults without learning to think critically...

Survey responses revealed deep concern that use of AI, particularly chatbots, "is undermining students' emotional well-being, including their ability to form relationships, recover from setbacks, and maintain mental health," the report says. One of the many problems with kids' overuse of AI is that the technology is inherently sycophantic — it has been designed to reinforce users' beliefs... Winthrop offers an example of a child interacting with a chatbot, "complaining about your parents and saying, 'They want me to wash the dishes — this is so annoying. I hate my parents.' The chatbot will likely say, 'You're right. You're misunderstood. I'm so sorry. I understand you.' Versus a friend who would say, 'Dude, I wash the dishes all the time in my house. I don't know what you're complaining about. That's normal.' That right there is the problem."

AI did have some advantages, the article points out: The report says another benefit of AI is that it allows teachers to automate some tasks: "generating parent emails ... translating materials, creating worksheets, rubrics, quizzes, and lesson plans" — and more. The report cites multiple research studies that found important time-saving benefits for teachers, including one U.S. study that found that teachers who use AI save an average of nearly six hours a week and about six weeks over the course of a full school year...

AI can also help make classrooms more accessible for students with a wide range of learning disabilities, including dyslexia. But "AI can massively increase existing divides" too, [warns Rebecca Winthrop, one of the report's authors and a senior fellow at Brookings]. That's because the free AI tools that are most accessible to students and schools can also be the least reliable and least factually accurate... "[T]his is the first time in ed-tech history that schools will have to pay more for more accurate information. And that really hurts schools without a lot of resources."

The report calls for more research — and makes several recommendations (including "holistic" learning and "AI tools that teach, not tell.") But this may be its most important recommendation. "Provide a clear vision for ethical AI use that centers human agency..."

"We find that AI has the potential to benefit or hinder students, depending on how it is used."
AI

Google's 'AI Overviews' Cite YouTube For Health Queries More Than Any Medical Sites, Study Suggests (theguardian.com)

An anonymous reader shared this report from the Guardian: Google's search feature AI Overviews cites YouTube more than any medical website when answering queries about health conditions, according to research that raises fresh questions about a tool seen by 2 billion people each month.

The company has said its AI summaries, which appear at the top of search results and use generative AI to answer questions from users, are "reliable" and cite reputable medical sources such as the Centers for Disease Control and Prevention and the Mayo Clinic. However, a study that analysed responses to more than 50,000 health queries, captured using Google searches from Berlin, found the top cited source was YouTube. The video-sharing platform is the world's second most visited website, after Google itself, and is owned by Google. Researchers at SE Ranking, a search engine optimisation platform, found YouTube made up 4.43% of all AI Overview citations. No hospital network, government health portal, medical association or academic institution came close to that number, they said. "This matters because YouTube is not a medical publisher," the researchers wrote. "It is a general-purpose video platform...."

In one case that experts said was "dangerous" and "alarming", Google provided bogus information about crucial liver function tests that could have left people with serious liver disease wrongly thinking they were healthy. The company later removed AI Overviews for some but not all medical searches... Hannah van Kolfschooten, a researcher specialising in AI, health and law at the University of Basel who was not involved with the research, said: "This study provides empirical evidence that the risks posed by AI Overviews for health are structural, not anecdotal. It becomes difficult for Google to argue that misleading or harmful health outputs are rare cases.

"Instead, the findings show that these risks are embedded in the way AI Overviews are designed. In particular, the heavy reliance on YouTube rather than on public health authorities or medical institutions suggests that visibility and popularity, rather than medical reliability, is the central driver for health knowledge."

AI

AI Luminaries Clash At Davos Over How Close Human-Level Intelligence Really Is (yahoo.com)

An anonymous reader shared this report from Fortune: The large language models (LLMs) that have captivated the world are not a path to human-level intelligence, two AI experts asserted in separate remarks at Davos. Demis Hassabis, the Nobel Prize-winning CEO of Google DeepMind, and the executive who leads the development of Google's Gemini models, said today's AI systems, as impressive as they are, are "nowhere near" human-level artificial general intelligence, or AGI. [Though the article notes that Hassabis later predicted there was a 50% chance AGI might be achieved within the decade.] Yann LeCun — an AI pioneer who won a Turing Award, computer science's most prestigious prize, for his work on neural networks — went further, saying that the LLMs that underpin all of the leading AI models will never be able to achieve humanlike intelligence and that a completely different approach is needed... ["The reason ... LLMs have been so successful is because language is easy," LeCun said later.]

Their views differ starkly from the position of top executives at Google's leading AI rivals, OpenAI and Anthropic, who assert that their AI models are about to rival human intelligence. Dario Amodei, the CEO of Anthropic, told an audience at Davos that AI models would replace the work of all software developers within a year and would reach "Nobel-level" scientific research in multiple fields within two years. He said 50% of white-collar jobs would disappear within five years. OpenAI CEO Sam Altman (who was not at Davos this year) has said we are already beginning to slip past human-level AGI toward "superintelligence," or AI that would be smarter than all humans combined...

The debate over AGI may be somewhat academic for many business leaders. The more pressing question, says Cognizant CEO Ravi Kumar, is whether companies can capture the enormous value that AI already offers. According to Cognizant research released ahead of Davos, current AI technology could unlock approximately $4.5 trillion in U.S. labor productivity — if businesses can implement it effectively.
