Firefox

Firefox Finally Delivers Tab Groups Feature (mozilla.org) 47

Firefox has launched its long-awaited tab groups feature, responding to the most upvoted request in Mozilla Connect's three-year history. The feature lets users drag and drop tabs into groups that can be named and color-coded.

Mozilla is now developing an AI-powered "smart tab groups" feature that automatically suggests organization based on open tabs. Unlike competitors, the company said, Firefox processes this data locally, keeping tab information on the user's device rather than sending it to cloud servers.
Programming

AI-Generated Code Creates Major Security Risk Through 'Package Hallucinations' (arstechnica.com) 34

A new study [PDF] reveals AI-generated code frequently references non-existent third-party libraries, creating opportunities for supply-chain attacks. Researchers analyzed 576,000 code samples from 16 popular large language models and found 19.7% of package dependencies -- 440,445 in total -- were "hallucinated."

These non-existent dependencies exacerbate dependency confusion attacks, where malicious packages with identical names to legitimate ones can infiltrate software. Open source models hallucinated at nearly 22%, compared to 5% for commercial models. "Once the attacker publishes a package under the hallucinated name, containing some malicious code, they rely on the model suggesting that name to unsuspecting users," said lead researcher Joseph Spracklen. Alarmingly, 43% of hallucinations repeated across multiple queries, making them predictable targets.
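One mechanical safeguard, not something the study prescribes but a natural response to it, is to verify that every model-suggested dependency actually exists on the registry before installing it. Below is a minimal sketch for a Python project, assuming PyPI's public JSON endpoint (https://pypi.org/pypi/<name>/json, which returns 404 for unknown packages); the suggested package names are illustrative.

import urllib.error
import urllib.request

def package_exists_on_pypi(name: str) -> bool:
    """Return True if `name` is a published PyPI package, False if it is not."""
    url = f"https://pypi.org/pypi/{name}/json"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return resp.status == 200
    except urllib.error.HTTPError as err:
        if err.code == 404:  # unknown package: a possible hallucination
            return False
        raise  # other failures (rate limiting, outages) should surface, not pass silently

# Hypothetical dependency list emitted by a code-generation model
suggested = ["requests", "numpy", "fastjson-utils-pro"]
for pkg in suggested:
    verdict = "exists" if package_exists_on_pypi(pkg) else "NOT on PyPI -- do not install blindly"
    print(f"{pkg}: {verdict}")

Existence alone is not proof of safety, since an attacker may already have registered the hallucinated name, but the check at least surfaces dependencies that nobody has vetted.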
Privacy

India Court Orders Proton Mail Block On Security Grounds (livelaw.in) 20

The Karnataka High Court on Tuesday directed India's government to block Switzerland-based email service Proton Mail, citing national security concerns and law enforcement challenges. Justice M Nagaprasanna ordered authorities to initiate proceedings under Section 69A of the Information Technology Act to ban the service, while mandating immediate blocking of "offending URLs" until final decisions are made.

The ruling followed a petition from M Moser Design Associates India, which claimed its female employees were targeted with obscene emails containing "AI-generated deepfake images" sent via Proton Mail. Petitioners argued Proton Mail operates servers outside India, making it inaccessible to law enforcement. The court noted several bomb threats to Indian schools were sent using the service, which has already been banned in Russia and Saudi Arabia. Additional Solicitor General Aravind Kamath, representing the government, said authorities would comply with the court's direction.
AI

Reddit Issuing 'Formal Legal Demands' Against Researchers Who Conducted Secret AI Experiment on Users 36

An anonymous reader shares a report: Reddit's top lawyer, Ben Lee, said the company is considering legal action against researchers from the University of Zurich who ran what he called an "improper and highly unethical experiment" by surreptitiously deploying AI chatbots in a popular debate subreddit. The University of Zurich told 404 Media that the experiment results will not be published and said the university is investigating how the research was conducted.

As we reported Monday, researchers at the University of Zurich ran an "unauthorized" and secret experiment on Reddit users in the r/changemyview subreddit in which dozens of AI bots engaged in debates with users about controversial issues. In some cases, the bots generated responses which claimed they were rape survivors, worked with trauma patients, or were Black people who were opposed to the Black Lives Matter movement. The researchers used a separate AI to mine the posting history of the people they were responding to in an attempt to determine personal details about them that they believed would make their bots more effective, such as their age, race, gender, location, and political beliefs.
AI

OpenAI-Microsoft Alliance Fractures as AI Titans Chart Separate Paths (wsj.com) 14

The once-celebrated partnership between OpenAI's Sam Altman and Microsoft's Satya Nadella is deteriorating amid fundamental disagreements over computing resources, model access, and AI capabilities, according to WSJ. The relationship that Altman once called "the best partnership in tech" has grown strained as both companies prepare for independent futures.

Tensions center on several critical areas: Microsoft's provision of computing power, OpenAI's willingness to share model access, and conflicting views on achieving humanlike intelligence. Altman has expressed confidence OpenAI can build models with humanlike intelligence soon -- a milestone Nadella publicly dismissed as "nonsensical benchmark hacking" during a February podcast.

The companies retain significant leverage over each other. Microsoft can block OpenAI's conversion to a for-profit entity, potentially costing the startup billions if not completed this year. Meanwhile, OpenAI's board can trigger contract clauses preventing Microsoft from accessing its most advanced technology.

After Altman's brief ouster in 2023 -- dubbed "the blip" within OpenAI -- Nadella pursued an "insurance policy" by hiring DeepMind co-founder Mustafa Suleyman for $650 million to develop competing models. The personal relationship has also cooled, with the executives now communicating primarily through scheduled weekly calls rather than frequent text exchanges.
AI

Duolingo Will Replace Contract Workers With AI 70

According to an email posted on Duolingo's LinkedIn, the language learning app will "gradually stop using contractors to do work that AI can handle." Co-founder and CEO Luis von Ahn also said the company will be "AI-first." The Verge reports: According to von Ahn, being "AI-first" means the company will "need to rethink much of how we work" and that "making minor tweaks to systems designed for humans won't get us there." As part of the shift, the company will roll out "a few constructive constraints," including the changes to how it works with contractors, looking for AI use in hiring and in performance reviews, and a rule that "headcount will only be given if a team cannot automate more of their work."

von Ahn says that "Duolingo will remain a company that cares deeply about its employees" and that "this isn't about replacing Duos with AI." Instead, he says that the changes are "about removing bottlenecks" so that employees can "focus on creative work and real problems, not repetitive tasks."

"AI isn't just a productivity boost," von Ahn says. "It helps us get closer to our mission. To teach well, we need to create a massive amount of content, and doing that manually doesn't scale. One of the best decisions we made recently was replacing a slow, manual content creation process with one powered by AI. Without AI, it would take us decades to scale our content to more learners. We owe it to our learners to get them this content ASAP."
AI

OpenAI Upgrades ChatGPT Search With Shopping Features (techcrunch.com) 29

OpenAI has upgraded ChatGPT's search tool to include shopping features, allowing users to receive personalized product recommendations, view images and reviews, and access direct purchase links using natural language queries. TechCrunch reports: When ChatGPT users search for products, the chatbot will now offer a few recommendations, present images and reviews for those items, and include direct links to webpages where users can buy the products. OpenAI says users can ask hyper-specific questions in natural language and receive customized results. To start, OpenAI is experimenting with categories including fashion, beauty, home goods, and electronics. OpenAI is rolling out the feature in the default AI model for ChatGPT, GPT-4o, today for ChatGPT Pro, Plus, and Free users, as well as logged-out users around the globe.

[...] OpenAI claims its search product is growing rapidly. Users made more than a billion web searches in ChatGPT last week, the company told TechCrunch. OpenAI says it's determining ChatGPT shopping results independently, and notes that ads are not part of this upgrade to ChatGPT search. The shopping results will be based on structured metadata from third parties, such as pricing, product descriptions, and reviews, according to OpenAI. The company won't receive a kickback from purchases made through ChatGPT search. [...] Soon, OpenAI says it will integrate its memory feature with shopping for Pro and Plus users, meaning ChatGPT will reference a user's previous chats to make highly personalized product recommendations. The company previously updated ChatGPT to reference memory when making web searches broadly. However, these memory features won't be available to users in the EU, the U.K., Switzerland, Norway, Iceland, and Liechtenstein.
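As an illustration only (OpenAI has not published its schema), the "structured metadata" mentioned above is the sort of record merchants already expose in schema.org-style product markup; here is a toy Python sketch of reading such a record, with the record itself and the field names invented for the example.

import json

# Toy product record carrying the fields the article names: pricing, description, reviews.
# Field names follow schema.org's Product/Offer/AggregateRating vocabulary.
record = json.loads("""
{
  "name": "Example Espresso Machine",
  "description": "15-bar pump espresso machine with a milk frother.",
  "offers": {"price": 129.99, "priceCurrency": "USD", "url": "https://example.com/espresso"},
  "aggregateRating": {"ratingValue": 4.4, "reviewCount": 1823}
}
""")

def summarize(product: dict) -> str:
    """Collapse the structured fields into the kind of line a shopping answer could cite."""
    offer = product["offers"]
    rating = product["aggregateRating"]
    return (f"{product['name']}: {offer['priceCurrency']} {offer['price']:.2f}, "
            f"rated {rating['ratingValue']}/5 across {rating['reviewCount']} reviews "
            f"({offer['url']})")

print(summarize(record))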

China

China's Huawei Develops New AI Chip, Seeking To Match Nvidia (wsj.com) 55

Huawei is gearing up to test its newest and most powerful AI processor, which the company hopes could replace some higher-end products of U.S. chip giant Nvidia. From a WSJ report: Huawei has approached some Chinese tech companies about testing the technical feasibility of the new chip, called the Ascend 910D, people familiar with the matter said. The company is slated to receive the first batch of samples of the processor as soon as late May, some of the people said.

The development is still at an early stage, and a series of tests will be needed to assess the chip's performance and get it ready for customers, the people said. Huawei hopes that the latest iteration of its Ascend AI processors will be more powerful than Nvidia's H100, a popular chip used for AI training that was released in 2022, said one of the people. Previous versions are called 910B and 910C.

AI

Unauthorized AI Bot Experiment Infiltrated Reddit To Test Persuasion Capabilities (404media.co) 82

Researchers claiming affiliation with the University of Zurich secretly deployed AI-powered bots in a popular Reddit forum to test whether AI could change users' minds on contentious topics. The unauthorized experiment, which targeted the r/changemyview subreddit, involved bots making over 1,700 comments across several months while adopting fabricated identities including a sexual assault survivor, a Black man opposing Black Lives Matter, and a domestic violence shelter worker.

The researchers "personalized" comments by analyzing users' posting histories to infer demographic information. The researchers, who remain anonymous despite inquiries, claimed their bots were "consistently well-received," garnering over 20,000 upvotes and 137 "deltas" -- awards indicating successful opinion changes. Hundreds of bot comments were deleted following the disclosure.
IBM

IBM Pledges $150 Billion US Investment (reuters.com) 42

IBM announced plans to invest $150 billion in the United States over the next five years, with more than $30 billion earmarked specifically for research and development of mainframes and quantum computing technology. The investment follows similar commitments from tech giants including Apple and Nvidia -- each pledging approximately $500 billion -- in the wake of President Trump's election and tariff threats.

"We have been focused on American jobs and manufacturing since our founding 114 years ago," said IBM CEO Arvind Krishna in a statement. The company currently manufactures its mainframe systems in upstate New York and plans to continue designing and assembling quantum computers domestically. The announcement comes amid challenging circumstances for IBM, which recently saw 15 government contracts shelved under the Trump administration's cost-cutting initiatives.

Further reading: IBM US Cuts May Run Deeper Than Feared - and the Jobs Are Heading To India;
IBM Now Has More Employees In India Than In the US (2017).
Chrome

'Don't Make Google Sell Chrome' (hey.com) 180

Ruby on Rails creator and Basecamp CTO David Heinemeier Hansson makes a case for why Google shouldn't be forced to sell Chrome: First, Chrome won the browser war fair and square by building a better surfboard for the internet. This wasn't some opportune acquisition. This was the result of grand investments, great technical prowess, and markets doing what they're supposed to do: rewarding the best. Besides, we have a million alternatives. Firefox still exists, so does Safari, so does the billion Chromium-based browsers like Brave and Edge. And we finally even have new engines on the way with the Ladybird browser.

Look, Google's trillion-dollar business depends on a thriving web that can be searched by Google.com, that can be plastered in AdSense, and that now can feed the wisdom of AI. Thus, Google's incredible work to further the web isn't an act of charity, it's of economic self-interest, and that's why it works. Capitalism doesn't run on benevolence, but incentives.

We want an 800-pound gorilla in the web's corner! Because Apple would love nothing better (despite the admirable work to keep up with Chrome by Team Safari) to see the web's capacity as an application platform diminished. As would every other owner of a proprietary application platform. Microsoft fought the web tooth and nail back in the 90s because they knew that a free, open application platform would undermine lock-in -- and it did!

AI

AI Helps Unravel a Cause of Alzheimer's Disease and Identify a Therapeutic Candidate (ucsd.edu) 40

"A new study found that a gene recently recognized as a biomarker for Alzheimer's disease is actually a cause of it," announced the University of California, San Diego, "due to its previously unknown secondary function."

"Researchers at the University of California San Diego used artificial intelligence to help both unravel this mystery of Alzheimer's disease and discover a potential treatment that obstructs the gene's moonlighting role."

A team led by Sheng Zhong, a professor in the university's bioengineering department, had previously discovered a potential blood biomarker for early detection of Alzheimer's disease (called PHGDH). But now they've discovered a correlation: the more protein and RNA that it produces, the more advanced the disease. And after more research they ended up with "a therapeutic candidate with demonstrated efficacy that has the potential of being further developed into clinical tests..." That correlation has since been verified in multiple cohorts from different medical centers, according to Zhong... [T]he researchers established that PHGDH is indeed a causal gene to spontaneous Alzheimer's disease. In further support of that finding, the researchers determined — with the help of AI — that PHGDH plays a previously undiscovered role: it triggers a pathway that disrupts how cells in the brain turn genes on and off. And such a disturbance can cause issues, like the development of Alzheimer's disease....

With AI, they could visualize the three-dimensional structure of the PHGDH protein. Within that structure, they discovered that the protein has a substructure... Zhong said, "It really demanded modern AI to formulate the three-dimensional structure very precisely to make this discovery." After discovering the substructure, the team then demonstrated that with it, the protein can activate two critical target genes. That throws off the delicate balance, leading to several problems and eventually the early stages of Alzheimer's disease. In other words, PHGDH has a previously unknown role, independent of its enzymatic function, that through a novel pathway leads to spontaneous Alzheimer's disease...

Now that the researchers uncovered the mechanism, they wanted to figure out how to intervene and thus possibly identify a therapeutic candidate, which could help target the disease.... Given that PHGDH is such an important enzyme, there are past studies on its possible inhibitors. One small molecule, known as NCT-503, stood out to the researchers because it is not quite effective at impeding PHGDH's enzymatic activity (the production of serine), which they did not want to change. NCT-503 is also able to penetrate the blood-brain-barrier, which is a desirable characteristic. They turned to AI again for three-dimensional visualization and modeling. They found that NCT-503 can access that DNA-binding substructure of PHGDH, thanks to a binding pocket. With more testing, they saw that NCT-503 does indeed inhibit PHGDH's regulatory role.

When the researchers tested NCT-503 in two mouse models of Alzheimer's disease, they saw that it significantly alleviated Alzheimer's progression. The treated mice demonstrated substantial improvement in their memory and anxiety tests...

The next steps will be to optimize the compound and subject it to FDA IND-enabling studies.

The research team published their results on April 23 in the journal Cell.
Math

Could a 'Math Genius' AI Co-author Proofs Within Three Years? (theregister.com) 71

A new DARPA project called expMath "aims to jumpstart math innovation with the help of AI," writes The Register. America's "Defense Advanced Research Projects Agency" believes mathematics isn't advancing fast enough, according to their article... So to accelerate — or "exponentiate" — the rate of mathematical research, DARPA this week held a Proposers Day event to engage with the technical community in the hope that attendees will prepare proposals to submit once the actual Broad Agency Announcement solicitation goes out...

[T]he problem is that AI just isn't very smart. It can do high school-level math but not high-level math. [One slide from DARPA program manager Patrick Shafto noted that OpenAI o1 "continues to abjectly fail at basic math despite claims of reasoning capabilities."] Nonetheless, expMath's goal is to make AI models capable of:

- auto decomposition — automatically decompose natural language statements into reusable natural language lemmas (a proven statement used to prove other statements); and
- auto(in)formalization — translate the natural language lemma into a formal proof and then translate the proof back to natural language.
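For a concrete picture of what the formalization half of that second goal produces, here is a toy example (ours, not DARPA's): the natural-language lemma "the sum of two even numbers is even" written as a machine-checkable Lean 4 theorem, with Even defined locally so the snippet needs no external libraries.

-- Natural-language lemma: "the sum of two even numbers is even."
-- Formalized statement and proof; `omega` is Lean's built-in linear-arithmetic tactic.

def Even (n : Nat) : Prop := ∃ k, n = 2 * k

theorem even_add_even {m n : Nat} (hm : Even m) (hn : Even n) : Even (m + n) :=
  match hm, hn with
  | ⟨a, ha⟩, ⟨b, hb⟩ =>
    -- witness k := a + b, since m + n = 2 * a + 2 * b = 2 * (a + b)
    ⟨a + b, by omega⟩

Informalization would be the reverse step: rendering a proof term like this back into readable prose for a mathematician to review.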

"How must faster with technology advance with AI agents solving new mathematical proofs?" asks former DARPA research scientist Robin Rowe (also long-time Slashdot reader robinsrowe): DARPA says that "The goal of Exponentiating Mathematics is to radically accelerate the rate of progress in pure mathematics by developing an AI co-author capable of proposing and proving useful abstractions."
Rowe is cited in the article as the founder/CEO of an AI research institute named "Fountain Adobe". (He tells The Register that "It's an indication of DARPA's concern about how tough this may be that it's a three-year program. That's not normal for DARPA.") Rowe is optimistic. "I think we're going to kill it, honestly. I think it's not going to take three years. But I think it might take three years to do it with LLMs. So then the question becomes, how radical is everybody willing to be?"
"We will robustly engage with the math and AI communities toward fundamentally reshaping the practice of mathematics by mathematicians," explains the project's home page. They've already uploaded an hour-long video of their Proposers Day event.

"It's very unclear that current AI systems can succeed at this task..." program manager Shafto says in a short video introducing the project. But... "There's a lot of enthusiasm in the math community for the possibility of changes in the way mathematics is practiced. It opens up fundamentally new things for mathematicians. But of course, they're not AI researchers. One of the motivations for this program is to bring together two different communities — the people who are working on AI for mathematics, and the people who are doing mathematics — so that we're solving the same problem.

"At its core, it's a very hard and rather technical problem. And this is DARPA's bread-and-butter, is to sort of try to change the world. And I think this has the potential to do that."

AI

Consumers Aren't Flocking to Microsoft's AI Tool 'Copilot' (xda-developers.com) 100

Microsoft Copilot "isn't doing as well as the company would like," reports XDA-Developers.com (citing a report from startup/VC industry site Newcomer). The Redmond giant has invested billions of dollars and a lot of manpower into making it happen, but as a recent report claims, people just don't care. In fact, if the report is to be believed, Microsoft's rise in the AI scene has already come to a screeching halt:

At Microsoft's annual executive huddle last month, the company's chief financial officer, Amy Hood, put up a slide that charted the number of users for its Copilot consumer AI tool over the past year. It was essentially a flat line, showing around 20 million weekly users. On the same slide was another line showing ChatGPT's growth over the same period, arching ever upward toward 400 million weekly users. OpenAI's iconic chatbot was soaring, while Microsoft's best hope for a mass-adoption AI tool was idling. It was a sobering chart for Microsoft's consumer AI team...

That's right; Microsoft Copilot's weekly user base is only 5% of the number of people who use ChatGPT, and it's not increasing. It's also worth noting that there are approximately 1.5 billion Windows users worldwide, which means just over 1% of them are using Copilot, a tool that's now a Windows default app....

It's not a huge surprise that Copilot is faltering. Despite Microsoft's CEO claiming that Copilot will become "the next Start button", the company has had to backtrack on the Copilot key and allow people to customise it to do something else, including restoring its original function as the Menu key.

They also note earlier reports that Intel's AI PC chips aren't selling well.
AI

Google's DeepMind UK Team Reportedly Seeks to Unionize (techcrunch.com) 36

"Google's DeepMind UK team reportedly seeks to unionize," reports TechCrunch: Around 300 London-based members of Google's AI-focused DeepMind team are seeking to unionize with the Communication Workers Union, according to a Financial Times report that cites three people involved with the unionization effort.

These DeepMind employees are reportedly unhappy about Google's decision to remove a pledge not to use AI for weapons or surveillance from its website. They're also concerned about the company's work with the Israeli military, including a $1.2 billion cloud computing contract that has prompted protests elsewhere at Google.

At least five DeepMind employees quit, according to the report (out of 2,000 total U.K. staff members).

"A small group of around 200 employees of Google and its parent company Alphabet previously announced that they were unionizing," the article adds, "though as a union representing just a tiny slice of the total Google workforce, it lacked the ability to collectively bargain."
IT

WSJ: Tech-Industry Workers Now 'Miserable', Fearing Layoffs, Working Longer Hours (msn.com) 166

"Not so long ago, working in tech meant job security, extravagant perks and a bring-your-whole-self-to-the-office ethos rare in other industries," writes the Wall Street Journal.

But now tech work "looks like a regular job," with workers "contending with the constant fear of layoffs, longer hours and an ever-growing list of responsibilities for the same pay." Now employees find themselves doing the work of multiple laid-off colleagues. Some have lost jobs only to be rehired into positions that aren't eligible for raises or stock grants. Changing jobs used to be a surefire way to secure a raise; these days, asking for more money can lead to a job offer being withdrawn.

The shift in tech has been building slowly. For years, demand for workers outstripped supply, a dynamic that peaked during the Covid-19 pandemic. Big tech companies like Meta and Salesforce admitted they brought on too many employees. The ensuing downturn included mass layoffs that started in 2022...

[S]ome longtime tech employees say they no longer recognize the companies they work for. Management has become more focused on delivering the results Wall Street expects. Revenue remains strong for tech giants, but they're pouring resources into costly AI infrastructure, putting pressure on cash flow. With the industry all grown up, a heads-down, keep-quiet mentality has taken root, workers say... Tech workers are still well-paid compared with other sectors, but currently there's a split in the industry. Those working in AI — and especially those with Ph.D.s — are seeing their compensation packages soar. But those without AI experience are finding they're better off staying where they are, because companies aren't paying what they were a few years ago.

Other excerpts from the Wall Street Journal's article:
  • "I'm hearing of people having 30 direct reports," says David Markley, who spent seven years at Amazon and is now an executive coach for workers at large tech companies. "It's not because the companies don't have the money. In a lot of ways, it's because of AI and the narratives out there about how collapsing the organization is better...."
  • Google co-founder Sergey Brin told a group of employees in February that 60 hours a week was the sweet spot of productivity, in comments reported earlier by the New York Times.
  • One recruiter at Meta who had been laid off by the company was rehired into her old role last year, but with a catch: She's now classified as a "short-term employee." Her contract is eligible for renewal, but she doesn't get merit pay increases, promotions or stock. The recruiter says she's responsible for a volume of work that used to be spread among several people. The company refers to being loaded with such additional responsibilities as "agility."
  • More than 50,000 tech workers from over 100 companies have been laid off in 2025, according to Layoffs.fyi, a website that tracks job cuts and crowdsources lists of laid off workers...

Even before those 50,000 layoffs in 2025, Silicon Valley's Mercury News was citing some interesting statistics from economic research/consulting firm Beacon Economics. In 2020, 2021 and 2022, the San Francisco Bay Area added 74,700 tech jobs. But then in 2023 and 2024 the industry slashed even more tech jobs -- 80,200 -- for a net loss (over five years) of 5,500.

So is there really a cutback in perks and a fear of layoffs that's casting a pall over the industry? Share your own thoughts and experiences in the comments. Do you agree with the picture that's being painted by the Wall Street Journal?

The Journal told its readers that tech workers are now "just like the rest of us: miserable at work."
Education

Canadian University Cancels Coding Competition Over Suspected AI Cheating (uwaterloo.ca) 40

The university blamed it on "the significant number of students" who violated their coding competition's rules. Long-time Slashdot reader theodp quotes this report from The Logic: Finding that many students violated rules and submitted code not written by themselves, the University of Waterloo's Centre for Computing and Math decided not to release results from its annual Canadian Computing Competition (CCC), which many students rely on to bolster their chances of being accepted into Waterloo's prestigious computing and engineering programs, or land a spot on teams to represent Canada in international competitions.

"It is clear that many students submitted code that they did not write themselves, relying instead on forbidden external help," the CCC co-chairs explained in a statement. "As such, the reliability of 'ranking' students would neither be equitable, fair, or accurate."

"It is disappointing that the students who violated the CCC Rules will impact those students who are deserving of recognition," the univeresity said in its statement. They added that they are "considering possible ways to address this problem for future contests."
AI

NYT Asks: Should We Start Taking the Welfare of AI Seriously? (msn.com) 105

A New York Times technology columnist has a question.

"Is there any threshold at which an A.I. would start to deserve, if not human-level rights, at least the same moral consideration we give to animals?" [W]hen I heard that researchers at Anthropic, the AI company that made the Claude chatbot, were starting to study "model welfare" — the idea that AI models might soon become conscious and deserve some kind of moral status — the humanist in me thought: Who cares about the chatbots? Aren't we supposed to be worried about AI mistreating us, not us mistreating it...?

But I was intrigued... There is a small body of academic research on A.I. model welfare, and a modest but growing number of experts in fields like philosophy and neuroscience are taking the prospect of A.I. consciousness more seriously, as A.I. systems grow more intelligent.... Tech companies are starting to talk about it more, too. Google recently posted a job listing for a "post-AGI" research scientist whose areas of focus will include "machine consciousness." And last year, Anthropic hired its first AI welfare researcher, Kyle Fish... [who] believes that in the next few years, as AI models develop more humanlike abilities, AI companies will need to take the possibility of consciousness more seriously....

Fish isn't the only person at Anthropic thinking about AI welfare. There's an active channel on the company's Slack messaging system called #model-welfare, where employees check in on Claude's well-being and share examples of AI systems acting in humanlike ways. Jared Kaplan, Anthropic's chief science officer, said in a separate interview that he thought it was "pretty reasonable" to study AI welfare, given how intelligent the models are getting. But testing AI systems for consciousness is hard, Kaplan warned, because they're such good mimics. If you prompt Claude or ChatGPT to talk about its feelings, it might give you a compelling response. That doesn't mean the chatbot actually has feelings — only that it knows how to talk about them...

[Fish] said there were things that AI companies could do to take their models' welfare into account, in case they do become conscious someday. One question Anthropic is exploring, he said, is whether future AI models should be given the ability to stop chatting with an annoying or abusive user if they find the user's requests too distressing.

Microsoft

Devs Sound Alarm After Microsoft Subtracts C/C++ Extension From VS Code Forks (theregister.com) 42

Some developers are "crying foul" after Microsoft's C/C++ extension for Visual Studio Code stopped working with VS Code derivatives like VS Codium and Cursor, reports The Register. The move has prompted Cursor to transition to open-source alternatives, while some developers are calling for a regulatory investigation into Microsoft's alleged anti-competitive behavior. From the report: In early April, programmers using VS Codium, an open-source fork of Microsoft's MIT-licensed VS Code, and Cursor, a commercial AI code assistant built from the VS Code codebase, noticed that the C/C++ extension stopped working. The extension adds C/C++ language support, such as Intellisense code completion and debugging, to VS Code. The removal of these capabilities from competing tools breaks developer workflows, hobbles the editor, and arguably hinders competition. The breaking change appears to have occurred with the release of v1.24.5 on April 3, 2025.

Following the April update, attempts to install the C/C++ extension outside of VS Code generate this error message: "The C/C++ extension may be used only with Microsoft Visual Studio, Visual Studio for Mac, Visual Studio Code, Azure DevOps, Team Foundation Server, and successor Microsoft products and services to develop and test your applications." Microsoft has forbidden the use of its extensions outside of its own software products since at least September 2020, when the current licensing terms were published. But it hasn't enforced those terms in its C/C++ extension with an environment check in its binaries until now. [...]
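The check itself lives in closed-source binaries, so the exact logic isn't public; as a rough, language-agnostic illustration only (written here in Python, with invented names), an allowlist-style host check amounts to something like this:

# Illustrative only: the extension's real check is in its native binaries and is not public.
# The allowed host names are taken from the error message quoted above.
ALLOWED_HOSTS = {
    "Visual Studio",
    "Visual Studio for Mac",
    "Visual Studio Code",
}

def host_is_licensed(host_app_name: str) -> bool:
    """Return True only for host applications named in the extension's license terms."""
    return host_app_name in ALLOWED_HOSTS

for host in ("Visual Studio Code", "VSCodium", "Cursor"):
    verdict = "activates" if host_is_licensed(host) else "refuses to activate"
    print(f"{host}: extension {verdict}")

Forks typically identify themselves with their own product names, which is why even a simple allowlist can break them, and why Cursor is moving to open-source alternatives instead.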

Developers discussing the issue in Cursor's GitHub repo have noted that Microsoft recently rolled out a competing AI software agent capability, dubbed Agent Mode, within its Copilot software. One such developer who contacted us anonymously told The Register they sent a letter about the situation to the US Federal Trade Commission, asking them to probe Microsoft for unfair competition -- alleging self-preferencing, bundling Copilot without a removal option, and blocking rivals like Cursor to lock users into its AI ecosystem.

Intel

Intel's AI PC Chips Aren't Selling Well (tomshardware.com) 56

Intel is grappling with an unexpected market shift as customers eschew its new AI-focused processors for cheaper previous-generation chips. The company revealed during its recent earnings call that demand for older Raptor Lake processors has surged while its newer, more expensive Lunar Lake and Meteor Lake AI PC chips struggle to gain traction.

This surprising trend, first reported by Tom's Hardware, has created a production capacity shortage for Intel's 'Intel 7' process node that will "persist for the foreseeable future," despite the fact that current-generation chips utilize TSMC's newer nodes. "Customers are demanding system price points that consumers really want," explained Intel executive Michelle Johnston Holthaus, noting that economic concerns and tariffs have affected inventory decisions.
