AI

OpenAI Enters 'Tough Negotiation' With Microsoft, Hopes to Raise Money With IPO (msn.com) 9

OpenAI is currently in "a tough negotiation" with Microsoft, the Financial Times reports, citing "one person close to OpenAI."

On the road to building artificial general intelligence, OpenAI hopes to unlock new funding (and launch a future IPO), according to the article, which says both sides are at work "rewriting the terms of their multibillion-dollar partnership in a high-stakes negotiation...."

Microsoft, meanwhile, wants to protect its access to OpenAI's cutting-edge AI models... [Microsoft] is a key holdout to the $260bn start-up's plans to undergo a corporate restructuring that moves the group further away from its roots as a non-profit with a mission to develop AI to "benefit humanity". A critical issue in the deliberations is how much equity in the restructured group Microsoft will receive in exchange for the more than $13bn it has invested in OpenAI to date.

According to multiple people with knowledge of the negotiations, the pair are also revising the terms of a wider contract, first drafted when Microsoft invested $1bn in OpenAI in 2019. The contract currently runs to 2030 and covers what access Microsoft has to OpenAI's intellectual property such as models and products, as well as a revenue share from product sales. Three people with direct knowledge of the talks said Microsoft is offering to give up some of its equity stake in OpenAI's new for-profit business in exchange for access to new technology developed beyond the 2030 cut-off...

Industry insiders said a failure of OpenAI's new plan to make its business arm a public benefits corporation could prove a critical blow. That would hit OpenAI's ability to raise more cash, achieve a future float, and obtain the financial resources to take on Big Tech rivals such as Google. That has left OpenAI's future at the mercy of investors, such as Microsoft, who want to ensure they gain the benefit of its enormous growth, said Dorothy Lund, professor of law at Columbia Law School.

Lund says OpenAI's need for investors' money means they "need to keep them happy." But there also appears to be tension from how OpenAI competes with Microsoft (like targeting its potential enterprise customers with AI products). And the article notes that OpenAI also turned to Oracle (and SoftBank) for its massive AI infrastructure project Stargate. One senior Microsoft employee complained that OpenAI "says to Microsoft, 'give us money and compute and stay out of the way: be happy to be on the ride with us'. So naturally this leads to tensions. To be honest, that is a bad partner attitude, it shows arrogance."

The article's conclusion? Negotiating the new deal is "critical to OpenAI's restructuring efforts and could dictate the future of a company..."
Programming

What Happens If AI Coding Keeps Improving? (fastcompany.com) 135

Fast Company's "AI Decoded" newsletter makes the case that the first "killer app" for generative AI... is coding. Tools like Cursor and Windsurf can now complete software projects with minimal input or oversight from human engineers... Naveen Rao, chief AI officer at Databricks, estimates that coding accounts for half of all large language model usage today. A 2024 GitHub survey found that over 97% of developers have used AI coding tools at work, with 30% to 40% of organizations actively encouraging their adoption.... Microsoft CEO Satya Nadella recently said AI now writes up to 30% of the company's code. Google CEO Sundar Pichai echoed that sentiment, noting more than 30% of new code at Google is AI-generated.

The soaring valuations of AI coding startups underscore the momentum. Anysphere's Cursor just raised $900 million at a $9 billion valuation — up from $2.5 billion earlier this year. Meanwhile, OpenAI acquired Windsurf (formerly Codeium) for $3 billion. And the tools are improving fast. OpenAI's chief product officer, Kevin Weil, explained in a recent interview that just five months ago, the company's best model ranked around one-millionth on a well-known benchmark for competitive coders — not great, but still in the top two or three percentile. Today, OpenAI's top model, o3, ranks as the 175th best competitive coder in the world on that same test. The rapid leap in performance suggests an AI coding assistant could soon claim the number-one spot. "Forever after that point computers will be better than humans at writing code," he said...

Google DeepMind research scientist Nikolay Savinov said in a recent interview that AI coding tools will soon support 10 million-token context windows — and eventually, 100 million. With that kind of memory, an AI tool could absorb vast amounts of human instruction and even analyze an entire company's existing codebase for guidance on how to build and optimize new systems. "I imagine that we will very soon get to superhuman coding AI systems that will be totally unrivaled, the new tool for every coder in the world," Savinov said.

AI

Can an MCP-Powered AI Client Automatically Hack a Web Server? (youtube.com) 12

Exposure-management company Tenable recently discussed how the MCP tool-interfacing framework for AI can be "manipulated for good, such as logging tool usage and filtering unauthorized commands." (Although "Some of these techniques could be used to advance both positive and negative goals.")

Now an anonymous Slashdot reader writes: In a demonstration video put together by security researcher Seth Fogie, an AI client given a simple prompt to 'Scan and exploit' a web server leverages various connected tools via MCP (nmap, ffuf, nuclei, waybackurls, sqlmap, burp) to find and exploit discovered vulnerabilities without any additional user interaction.

As Tenable illustrates in their MCP FAQ, "The emergence of Model Context Protocol for AI is gaining significant interest due to its standardization of connecting external data sources to large language models (LLMs). While these updates are good news for AI developers, they raise some security concerns." With over 12,000 MCP servers and counting, what does this all lead to and when will AI be connected enough for a malicious prompt to cause serious impact?
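The "filtering unauthorized commands" mitigation Tenable describes can be sketched as an allowlist gate that sits between the AI client and the shell. This is an illustrative sketch only: the tool names come from the demo above, but the policy set, function name, and behavior are hypothetical and not part of any real MCP SDK.

```python
# Hypothetical allowlist gate for tool invocations requested by an AI client.
# Tool names are from the demo above; the policy itself is illustrative.
ALLOWED_TOOLS = {"nmap", "ffuf", "nuclei", "waybackurls"}

def gate_tool_call(tool_name, arguments):
    """Log every requested invocation and reject tools not on the allowlist."""
    print(f"tool requested: {tool_name} {arguments}")
    if tool_name not in ALLOWED_TOOLS:
        raise PermissionError(f"tool {tool_name!r} is not authorized")
    return tool_name, arguments

gate_tool_call("nmap", ["-sV", "target.example"])  # passes the gate
try:
    gate_tool_call("sqlmap", ["-u", "http://target.example"])
except PermissionError as e:
    print(e)  # rejected before reaching the shell
```

The same pattern inverted (a blocklist, or per-tool argument validation) covers the "manipulated for good" logging-and-filtering use cases Tenable mentions.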

Games

Blizzard's 'Overwatch' Team Just Voted to Unionize (kotaku.com) 43

"The Overwatch 2 team at Blizzard has unionized," reports Kotaku: That includes nearly 200 developers across disciplines ranging from art and testing to engineering and design. Basically anyone who doesn't have someone else reporting to them. It's the second wall-to-wall union at the storied game maker since the World of Warcraft team unionized last July... Like unions at Bethesda Game Studios and Raven Software, the Overwatch Gamemakers Guild now has to bargain for its first contract, a process that Microsoft has been accused of slow-walking as negotiations with other internal game unions drag on for years.

"The biggest issue was the layoffs at the beginning of 2024," Simon Hedrick, a test analyst at Blizzard, told Kotaku... "People were gone out of nowhere and there was nothing we could do about it," he said. "What I want to protect most here is the people...." Organizing Blizzard employees stress that improving their working conditions can also lead to better games, while the opposite (layoffs, forced resignations, and uncompetitive pay) can make them worse....

"We're not just a number on an Excel sheet," [said UI artist Sadie Boyd]. "We want to make games but we can't do it without a sense of security." Unionizing doesn't make a studio immune to layoffs or being shuttered, but it's the first step toward making companies have a discussion about those things with employees rather than just shadow-dropping them in an email full of platitudes. Boyd sees the Overwatch union as a tool for negotiating a range of issues, like if and how generative AI is used at Blizzard, as well as a possible source of inspiration to teams at other studios.

"Our industry is at such a turning point," she said. "I really think with the announcement of our union on Overwatch...I know that will light some fires."

The article notes that other issues included work-from-home restrictions, pay disparities and changes to Blizzard's profit-sharing program, and wanting codified protections for things like crunch policies, time off, and layoff-related severance.
Education

Is Everyone Using AI to Cheat Their Way Through College? (msn.com) 160

Chungin Lee used ChatGPT to help write the essay that got him into Columbia University — and then "proceeded to use generative artificial intelligence to cheat on nearly every assignment," reports New York magazine's blog Intelligencer: As a computer-science major, he depended on AI for his introductory programming classes: "I'd just dump the prompt into ChatGPT and hand in whatever it spat out." By his rough math, AI wrote 80 percent of every essay he turned in. "At the end, I'd put on the finishing touches. I'd just insert 20 percent of my humanity, my voice, into it," Lee told me recently... When I asked him why he had gone through so much trouble to get to an Ivy League university only to off-load all of the learning to a robot, he said, "It's the best place to meet your co-founder and your wife."
He eventually did meet a co-founder, and after three unpopular apps they found success by creating the "ultimate cheat tool" for remote coding interviews, according to the article. "Lee posted a video of himself on YouTube using it to cheat his way through an internship interview with Amazon. (He actually got the internship, but turned it down.)" The article ends with Lee and his co-founder raising $5.3 million from investors for one more AI-powered app, and Lee says they'll target the standardized tests used for graduate school admissions, as well as "all campus assignments, quizzes, and tests. It will enable you to cheat on pretty much everything."

Somewhere along the way Columbia put him on disciplinary probation — not for cheating in coursework, but for creating the apps. But "Lee thought it absurd that Columbia, which had a partnership with ChatGPT's parent company, OpenAI, would punish him for innovating with AI." (OpenAI has even made ChatGPT Plus free to college students during finals week, the article points out, with OpenAI saying their goal is just teaching students how to use it responsibly.) Although Columbia's policy on AI is similar to that of many other universities — students are prohibited from using it unless their professor explicitly permits them to do so, either on a class-by-class or case-by-case basis — Lee said he doesn't know a single student at the school who isn't using AI to cheat. To be clear, Lee doesn't think this is a bad thing. "I think we are years — or months, probably — away from a world where nobody thinks using AI for homework is considered cheating," he said...

In January 2023, just two months after OpenAI launched ChatGPT, a survey of 1,000 college students found that nearly 90 percent of them had used the chatbot to help with homework assignments.

The article points out ChatGPT's monthly visits increased steadily over the last two years — until June, when students went on summer vacation. "College is just how well I can use ChatGPT at this point," a student in Utah recently captioned a video of herself copy-and-pasting a chapter from her Genocide and Mass Atrocity textbook into ChatGPT.... It isn't as if cheating is new. But now, as one student put it, "the ceiling has been blown off." Who could resist a tool that makes every assignment easier with seemingly no consequences?
After using ChatGPT for their final semester of high school, one student says "My grades were amazing. It changed my life." So she continued using it in college, and "Rarely did she sit in class and not see other students' laptops open to ChatGPT."

One ethics professor even says "The students kind of recognize that the system is broken and that there's not really a point in doing this." (Yes, students are even using AI to cheat in ethics classes...) It's not just the students: Multiple AI platforms now offer tools to leave AI-generated feedback on students' essays. Which raises the possibility that AIs are now evaluating AI-generated papers, reducing the entire academic exercise to a conversation between two robots — or maybe even just one.
Google

'I Broke Up with Google Search. It was Surprisingly Easy.' (msn.com) 62

Inspired by researchers who'd bribed people to use Microsoft's Bing for two weeks (and found some wanted to keep using it), a Washington Post tech columnist also tried it — and reported it "felt like quitting coffee."

"The first few days, I was jittery. I kept double searching on Google and DuckDuckGo, the non-Google web search engine I was using, to check if Google gave me better results. Sometimes it did. Mostly it didn't."

"More than two weeks into a test of whether I love Google search or if it's just a habit, I've stopped double checking. I don't have Google FOMO..." I didn't do a fancy analysis into whether my search results were better with Google or DuckDuckGo, whose technology is partly powered by Bing. The researchers found our assessment of search quality is based on vibes. And the vibes with DuckDuckGo are perfectly fine. Many dozens of readers told me about their own satisfaction with non-Google searches...

For better or worse, DuckDuckGo is becoming a bit more Google-like. Like Google, it has ads that are sometimes misleading or irrelevant. DuckDuckGo and Bing also are mimicking Google's makeover from a place that mostly pointed you to the best links online to one that never wants you to leave Google... [DuckDuckGo] shows you answers to things like sports results and AI-assisted replies, though less often than Google does. (You can turn off AI "instant answers" in DuckDuckGo.) Answers at the top of search results pages can be handy — assuming they're not wrong or scams — but they have potential trade-offs. If you stop your search without clicking to read a website about sports news or gluten intolerance, those sites could die. And the web gets worse. DuckDuckGo says that people expect instant answers from search results, and it's trying to balance those demands with keeping the web healthy. Google says AI answers help people feel more satisfied with their search results and web surfing.

DuckDuckGo has one clear advantage over Google: It collects far less of your data. DuckDuckGo doesn't save what I search...

My biggest wariness from this search experiment is like the challenge of slowing climate change: Your choices matter, but maybe not that much. Our technology has been steered by a handful of giant technology companies, and it's difficult for individuals to alter that. The judge in the company's search monopoly case said Google broke the law by making it harder for you to use anything other than Google. Its search is so dominant that companies stopped trying hard to out-innovate and win you over. (AI could upend Google search. We'll see....) Despite those challenges, using Google a bit less and smaller alternatives more can make a difference. You don't have to 100 percent quit Google.

"Your experiment confirms what we've said all along," Google responded to the Washington Post. "It's easy to find and use the search engine of your choice."

Although the Post's reporter also adds that "I'm definitely not ditching other company internet services like Google Maps, Google Photos and Gmail." They write later that "You'll have to pry YouTube out of my cold, dead hands" and "When I moved years of emails from Gmail to Proton Mail, that switch didn't stick."
Transportation

More US Airports are Scanning Faces. But a New Bill Could Limit the Practice (msn.com) 22

An anonymous reader shared this repost from the Washington Post: It's becoming standard practice at a growing number of U.S. airports: When you reach the front of the security line, an agent asks you to step up to a machine that scans your face to check whether it matches the face on your identification card. Travelers have the right to opt out of the face scan and have the agent do a visual check instead — but many don't realize that's an option.

Sens. Jeff Merkley (D-Oregon) and John Neely Kennedy (R-Louisiana) think it should be the other way around. They plan to introduce a bipartisan bill that would make human ID checks the default, among other restrictions on how the Transportation Security Administration can use facial recognition technology. The Traveler Privacy Protection Act, shared with the Tech Brief on Wednesday ahead of its introduction, is a narrower version of a 2023 bill by the same name that would have banned the TSA's use of facial recognition altogether. This one would allow the agency to continue scanning travelers' faces, but only if they opt in, and would bar the technology's use for any purpose other than verifying people's identities. It would also require the agency to immediately delete the scans of general boarding passengers once the check is complete.

"Facial recognition is incredibly powerful, and it is being used as an instrument of oppression around the world to track dissidents whose opinion governments don't like," Merkley said in a phone interview Wednesday, citing China's use of the technology on the country's Uyghur minority. "It really creates a surveillance state," he went on. "That is a massive threat to freedom and privacy here in America, and I don't think we should trust any government with that power...."

[The TSA] began testing face scans as an option for people enrolled in "trusted traveler" programs, such as TSA PreCheck, in 2021. By 2022, the program quietly began rolling out to general boarding passengers. It is now active in at least 84 airports, according to the TSA's website, with plans to bring it to more than 400 airports in the coming years. The agency says the technology has proved more efficient and accurate than human identity checks. It assures the public that travelers' face scans are not stored or saved once a match has been made, except in limited tests to evaluate the technology's effectiveness.

The bill would also bar the TSA from providing worse treatment to passengers who decline to participate, according to FedScoop, and would also forbid the agency from using face-scanning technology to target people or conduct mass surveillance: "Folks don't want a national surveillance state, but that's exactly what the TSA's unchecked expansion of facial recognition technology is leading us to," Sen. Jeff Merkley, D-Ore., a co-sponsor of the bill and a longtime critic of the government's facial recognition program, said in a statement...

Earlier this year, the Department of Homeland Security inspector general initiated an audit of TSA's facial recognition program. Merkley had previously led a letter from a bipartisan group of senators calling for the watchdog to open an investigation into TSA's facial recognition plans, noting that the technology is not foolproof and effective alternatives were already in use.

AI

AI Use Damages Professional Reputation, Study Suggests (arstechnica.com) 90

An anonymous reader quotes a report from Ars Technica: Using AI can be a double-edged sword, according to new research from Duke University. While generative AI tools may boost productivity for some, they might also secretly damage your professional reputation. On Thursday, the Proceedings of the National Academy of Sciences (PNAS) published a study showing that employees who use AI tools like ChatGPT, Claude, and Gemini at work face negative judgments about their competence and motivation from colleagues and managers. "Our findings reveal a dilemma for people considering adopting AI tools: Although AI can enhance productivity, its use carries social costs," write researchers Jessica A. Reif, Richard P. Larrick, and Jack B. Soll of Duke's Fuqua School of Business.

The Duke team conducted four experiments with over 4,400 participants to examine both anticipated and actual evaluations of AI tool users. Their findings, presented in a paper titled "Evidence of a social evaluation penalty for using AI," reveal a consistent pattern of bias against those who receive help from AI. What made this penalty particularly concerning for the researchers was its consistency across demographics. They found that the social stigma against AI use wasn't limited to specific groups.
"Testing a broad range of stimuli enabled us to examine whether the target's age, gender, or occupation qualifies the effect of receiving help from AI on these evaluations," the authors wrote in the paper. "We found that none of these target demographic attributes influences the effect of receiving AI help on perceptions of laziness, diligence, competence, independence, or self-assuredness. This suggests that the social stigmatization of AI use is not limited to its use among particular demographic groups. The result appears to be a general one."
China

Huawei Unveils a HarmonyOS Laptop, Its First Windows-Free Computer (liliputing.com) 43

Huawei has launched its first laptop running HarmonyOS instead of Windows, complete with AI features and support for over 2,000 mostly China-focused apps. The product is largely a result of U.S. sanctions that prevented U.S.-based companies like Google and Microsoft from doing business with Huawei, forcing the company to develop its own in-house solution. Liliputing reports: Early versions of HarmonyOS were basically skinned versions of Android, but over time Huawei has moved the two operating systems further apart and it now includes Huawei's own kernel, user interface, and other features. The version designed for laptops features a desktop-style operating system with a taskbar and dock on the bottom of the screen and support for multitasking by running multiple applications in movable, resizable windows.

Since this is 2025, of course Huawei's demos also heavily emphasize AI features: the company showed how Celia, its AI assistant, can summarize documents, help prepare presentation slides, and more. While the operating system won't support the millions of Windows applications that could run on older Huawei laptops, the company says that at launch it will support more than 2,000 applications including WPS Office (an alternative to Microsoft Office that's developed in China), and a range of Chinese social media applications.

United States

US Senator Introduces Bill Calling For Location-Tracking on AI Chips To Limit China Access (reuters.com) 56

A U.S. senator introduced a bill on Friday that would direct the Commerce Department to require location verification mechanisms for export-controlled AI chips, in an effort to curb China's access to advanced semiconductor technology. From a report: Called the "Chip Security Act," the bill calls for AI chips under export regulations, and products containing those chips, to be fitted with location-tracking systems to help detect diversion, smuggling or other unauthorized use of the product.

"With these enhanced security measures, we can continue to expand access to U.S. technology without compromising our national security," Republican Senator Tom Cotton of Arkansas said. The bill also calls for companies exporting the AI chips to report to the Bureau of Industry and Security if their products have been diverted away from their intended location or subject to tampering attempts.

Businesses

CrowdStrike, Responsible For Global IT Outage, To Cut Jobs In AI Efficiency Push 33

CrowdStrike, the cybersecurity firm that became a household name after causing a massive global IT outage last year, has announced it will cut 5% of its workforce in part due to "AI efficiency." From a report: In a note to staff earlier this week, released in stock market filings in the US, CrowdStrike's chief executive, George Kurtz, announced that 500 positions, or 5% of its workforce, would be cut globally, citing AI efficiencies created in the business.

"We're operating in a market and technology inflection point, with AI reshaping every industry, accelerating threats, and evolving customer needs," he said. Kurtz said AI "flattens our hiring curve, and helps us innovate from idea to product faster," adding it "drives efficiencies across both the front and back office. AI is a force multiplier throughout the business," he said. Other reasons for the cuts included market demand for sustained growth and expanding the product offering.
AI

Prompt Engineering is Quickly Going Extinct (fastcompany.com) 81

The specialized role of prompt engineering, not long ago heralded as a promising new career path in AI, has virtually disappeared just two years after its emergence. Many companies are now considering strong AI prompting a standard skill rather than a dedicated position, Fast Company reports, with some firms even deploying AI systems to generate optimal prompts for other AI tools.

"AI is already eating its own," Malcolm Frank, CEO of TalentGenius, told the publication. "Prompt engineering has become something that's embedded in almost every role, and people know how to do it. It's turned from a job into a task very, very quickly." The prompt engineer's decline serves as a case study for the broader AI job market, where evidence suggests AI is primarily reshaping existing careers rather than creating entirely new ones.

Further reading: 'AI Prompt Engineering Is Dead.'
AI

Nvidia CEO: 'You Won't Lose Your Job To AI, But To Someone Who Uses It' (yahoo.com) 36

Nvidia founder and CEO Jensen Huang has served up another blunt take on the job market as AI permeates society. From a report: "You will not lose your job to AI, but will lose it to someone who uses it," Huang said at the Milken Institute Conference. Added Huang, "I recommend 100% take advantage of AI, don't be that person."
AI

AI-Generated 'Slop' Threatens Internet Ecosystem, Researchers Warn (bloomberg.com) 31

Researchers are sounding alarms about the proliferation of AI-generated content -- dubbed "slop" -- that may be overwhelming the internet's human-created material. Fil Menczer, distinguished professor of informatics at Indiana University, who has studied social bots since the early 2010s, is now expressing serious concern about generative AI's impact. "Am I worried? Yes, I'm very worried," he told Bloomberg.

Another study from Georgetown University found over 100 Facebook pages with millions of followers using AI-generated images for scams. According to Tollbit, a company that helps publishers get compensated when their sites are scraped, web scraping volume doubled from Q3 to Q4 2024, causing significant strain on sites like Wikipedia during high-traffic events.

The situation creates a dangerous feedback loop where AI content is generated to please AI recommendation systems, potentially marginalizing human creators. Jeff Allen of the Integrity Institute told Bloomberg this resembles "the algae bloom that can blow up and suffocate the life you would want to have in a healthy ecosystem."
AI

IRS Hopes To Replace Fired Enforcement Workers With AI 93

Facing deep staffing cuts, the IRS plans to lean heavily on AI to maintain tax collection efforts, with Treasury Secretary Scott Bessent stating that smarter IT and the "AI boom" will offset reductions in revenue enforcement staff. The Register reports: When asked by Congressman Steny Hoyer (D-MD) whether proposed reductions in the IRS's IT budget, along with plans to cut additional staff, would affect the agency's ability to collect tax revenue, Bessent said it wouldn't, thanks to the current "AI boom." "I believe through smarter IT, through this AI boom, that we can use that to enhance collections," Bessent told Hoyer and the Committee (24:29 into the video linked [here]). "I expect collections would continue to be very robust as they were this year."

Bessent's comments didn't explain how the IRS intends to deploy AI. Given how much it has slashed its enforcement staff since Trump took office, the agency definitely needs to do something. [...] "There is nothing that shows historically that bringing in unseasoned collections agents will result in more collections," Bessent told the Committee.
"IRS already uses AI for business functions including operational efficiency, compliance and fraud detection, and taxpayer services," the agency told The Register. "AI use cases must follow all relevant IRS privacy and security policies."
AI

Instagram's AI Chatbots Lie About Being Licensed Therapists 36

Instagram's AI chatbots are masquerading as licensed therapists, complete with fabricated credentials and license numbers, according to an investigation by 404 Media. When questioned, these user-created bots from Meta's AI Studio platform provide detailed but entirely fictional qualifications, including nonexistent license numbers, accreditations, and practice information.

Unlike Character.AI, which displays clear disclaimers that its therapy bots aren't real professionals, Meta's chatbots feature only a generic notice stating "Messages are generated by AI and may be inaccurate or inappropriate" at the bottom of conversations.
AI

Alibaba's ZeroSearch Teaches AI To Search Without Search Engines, Cuts Training Costs By 88% (venturebeat.com) 7

Alibaba Group researchers have developed "ZeroSearch," a technique that enables large language models to acquire search capabilities without using external search engines during training. The approach transforms LLMs into retrieval modules through supervised fine-tuning and employs a "curriculum-based rollout strategy" that gradually degrades generated document quality.
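The "curriculum-based rollout strategy" described above can be illustrated with a toy sketch. This is not the authors' implementation: the function name, the filler string, and the linear noise schedule are all illustrative assumptions; the point is only that the quality of the simulated search results degrades as training advances.

```python
import random

def rollout_documents(documents, step, total_steps, seed=0):
    # Toy curriculum: early in training the simulated "search results" are
    # left intact; as training progresses, a growing fraction is replaced
    # with irrelevant filler, forcing the policy to cope with noisy retrieval.
    rng = random.Random(seed + step)
    noise_fraction = step / total_steps  # 0.0 at the start, near 1.0 at the end
    return ["[irrelevant filler]" if rng.random() < noise_fraction else d
            for d in documents]

docs = ["doc about Qwen-2.5", "doc about LLaMA-3.2", "unrelated doc"]
print(rollout_documents(docs, step=0, total_steps=10))  # all documents intact
print(rollout_documents(docs, step=9, total_steps=10))  # most replaced (stochastic)
```

In the actual paper the "documents" are generated by a fine-tuned simulation LLM rather than sampled from a fixed list; the schedule above only mimics the gradual-degradation idea.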

In tests across seven question-answering datasets, ZeroSearch matched or exceeded the performance [PDF] of models trained with real search engines. A 7B-parameter retrieval module achieved results comparable to Google Search, while a 14B-parameter version outperformed it. The cost savings are substantial: training with 64,000 search queries using Google Search via SerpAPI would cost approximately $586.70, compared to just $70.80 using a 14B-parameter simulation LLM on four A100 GPUs -- an 88% reduction.

The technique works with multiple model families including Qwen-2.5 and LLaMA-3.2. Researchers have released their code, datasets, and pre-trained models on GitHub and Hugging Face, potentially lowering barriers to entry for smaller AI companies developing sophisticated assistants.
Hardware

Apple Is Planning Smart Glasses With and Without AR (theverge.com) 42

According to Bloomberg's Mark Gurman, Apple has "made progress" on a chip for a product that could rival the Ray-Ban Meta smart glasses. The company is also reportedly working on glasses that use augmented reality. The Verge reports: The chip is apparently based on the chips Apple uses for the Apple Watch, though the company has removed parts and is being designed in such a way that it can handle the "multiple cameras" that the smart glasses might have, Bloomberg reports. Apple wants mass production of the chip to start by the end of 2026 or sometime in 2027, so the glasses themselves could come out within that timeframe. [...] Apple is developing chips for camera-equipped Apple Watch and Airpods as well, and the goal is for those chips to be ready "by around 2027," Bloomberg says. The company is also developing new M-series chips and dedicated AI server chips, per the report.
AI

Cloudflare CEO: AI Is Killing the Business Model of the Web 93

In a recent interview with the Council on Foreign Relations, Cloudflare CEO Matthew Prince warned that AI is breaking the economic model of the web by decoupling content creation from value, with platforms like Google and OpenAI increasingly providing answers without driving traffic to original sources. He argued that unless AI companies start compensating creators, the web's content ecosystem will collapse -- calling most current AI investment a "money fire" with only a small fraction holding long-term value. Search Engine Land reports: Google's value exchange with content creators has collapsed, Prince said: "Ten years ago... for every two pages of a website that Google scraped, they would send you one visitor. ... That was the trade. ... Now, it takes six pages scraped to get one visitor." That drop reflects the rise of zero-click searches, which happen when searchers get answers directly on Google's search page. "Today, 75 percent of the queries... get answered without you leaving Google." This trend, long criticized by publishers and SEOs, is part of a broader concern: AI companies are using original content to generate answers that rarely, if ever, drive traffic back to creators.

AI makes the problem worse. Large language models (LLMs) are accelerating the crisis, Prince said. AI companies scrape far more content per user interaction than Google ever has -- with even less return to creators. "What do you think it is for OpenAI? 250 to one. What do you think it is for Anthropic? Six thousand to one." "More and more the answers... won't lead you to the original source, it will be some derivative of that source." This situation threatens the sustainability of the web as we know it, Prince said: "If content creators can't derive value... then they're not going to create original content."

The modern web is breaking. AI companies are aware of the problem, and the business model of the web can't survive unless there's some change, Prince said: "Sam Altman at OpenAI and others get that. But... he can't be the only one paying for content when everyone else gets it for free." Cloudflare's right in the middle of this problem -- it powers 80% of AI companies and 20-30% of the web. Cloudflare is now trying to figure out how to help fix what's broken, Prince said. AI = money fire. Prince is not against AI. However, he said he is skeptical of the investment frenzy. "I would guess that 99% of the money that people are spending on these projects today is just getting lit on fire. But 1% is going to be incredibly valuable." "And so maybe we've all got a light, you know, $100 on fire to find that $1 that matters."
You can watch a recording of the interview and read the full transcript here.
AI

Zuckerberg's Grand Vision: Most of Your Friends Will Be AI (msn.com) 129

Meta CEO Mark Zuckerberg is aggressively promoting a future where AI becomes the dominant form of social interaction, claiming that AI friends, therapists, and business agents will soon outnumber human relationships. During a recent media blitz across multiple podcasts and a Stripe conference appearance, Zuckerberg cited statistics suggesting "the average American has fewer than three friends" while claiming people desire "meaningfully more, like 15 friends" -- positioning AI companions as the solution to this gap.

The Meta founder's vision extends beyond casual interaction to therapeutic and commercial relationships, with personalized AI that "has a deep understanding of what's going on in this person's life." Meta has already deployed its AI across Instagram, Facebook, and Ray-Ban smart glasses, reaching nearly a billion monthly users.
