Microsoft

Microsoft Abandons Data Center Projects, TD Cowen Says (bloomberg.com) 25

Microsoft has walked away from new data center projects in the US and Europe that would have amounted to a capacity of about 2 gigawatts of electricity, according to TD Cowen analysts, who attributed the pullback to an oversupply of the clusters of computers that power artificial intelligence. From a report: The analysts, who rattled investors with a February note highlighting leases Microsoft had abandoned in the US, said the latest move also reflected the company's choice to forgo some new business from ChatGPT maker OpenAI, which it has backed with some $13 billion. Microsoft and the startup earlier this year said they had altered their multiyear agreement, letting OpenAI use cloud-computing services from other companies, provided Microsoft didn't want the business itself.

Microsoft's retrenchment in the last six months included lease cancellations and deferrals, the TD Cowen analysts said in their latest research note, dated Wednesday. Alphabet's Google had stepped in to grab some leases Microsoft abandoned in Europe, the analysts wrote, while Meta Platforms had scooped up some of the freed capacity there.

The Internet

Open Source Devs Say AI Crawlers Dominate Traffic, Forcing Blocks On Entire Countries (arstechnica.com) 64

An anonymous reader quotes a report from Ars Technica: Software developer Xe Iaso reached a breaking point earlier this year when aggressive AI crawler traffic from Amazon overwhelmed their Git repository service, repeatedly causing instability and downtime. Despite configuring standard defensive measures -- adjusting robots.txt, blocking known crawler user-agents, and filtering suspicious traffic -- Iaso found that AI crawlers continued evading all attempts to stop them, spoofing user-agents and cycling through residential IP addresses as proxies. Desperate for a solution, Iaso eventually resorted to moving their server behind a VPN and creating "Anubis," a custom-built proof-of-work challenge system that forces web browsers to solve computational puzzles before accessing the site. "It's futile to block AI crawler bots because they lie, change their user agent, use residential IP addresses as proxies, and more," Iaso wrote in a blog post titled "a desperate cry for help." "I don't want to have to close off my Gitea server to the public, but I will if I have to."

Iaso's story highlights a broader crisis rapidly spreading across the open source community, as what appear to be aggressive AI crawlers increasingly overload community-maintained infrastructure, causing what amounts to persistent distributed denial-of-service (DDoS) attacks on vital public resources. According to a comprehensive recent report from LibreNews, some open source projects now see as much as 97 percent of their traffic originating from AI companies' bots, dramatically increasing bandwidth costs, destabilizing services, and burdening already stretched-thin maintainers.

Kevin Fenzi, a member of the Fedora Pagure project's sysadmin team, reported on his blog that the project had to block all traffic from Brazil after repeated attempts to mitigate bot traffic failed. GNOME GitLab implemented Iaso's "Anubis" system, requiring browsers to solve computational puzzles before accessing content. GNOME sysadmin Bart Piotrowski shared on Mastodon that only about 3.2 percent of requests (2,690 out of 84,056) passed their challenge system, suggesting the vast majority of traffic was automated. KDE's GitLab infrastructure was temporarily knocked offline by crawler traffic originating from Alibaba IP ranges, according to LibreNews, citing a KDE Development chat. While Anubis has proven effective at filtering out bot traffic, it comes with drawbacks for legitimate users. When many people access the same link simultaneously -- such as when a GitLab link is shared in a chat room -- site visitors can face significant delays. Some mobile users have reported waiting up to two minutes for the proof-of-work challenge to complete, according to the news outlet.
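Proof-of-work challenges like Anubis broadly follow the hashcash pattern: the server issues a random challenge, the client must find a nonce whose hash clears a difficulty threshold, and the server verifies the answer with a single hash. A minimal sketch of the idea (the function names and difficulty scheme here are illustrative, not Anubis's actual implementation):

```python
import hashlib
import itertools

def solve_challenge(challenge: str, difficulty: int) -> int:
    """Brute-force a nonce whose SHA-256 hash of challenge+nonce starts
    with `difficulty` zero hex digits (hashcash-style proof of work)."""
    target = "0" * difficulty
    for nonce in itertools.count():
        digest = hashlib.sha256(f"{challenge}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce

def verify(challenge: str, nonce: int, difficulty: int) -> bool:
    """Server-side check: a single hash, no matter how long the client worked."""
    digest = hashlib.sha256(f"{challenge}{nonce}".encode()).hexdigest()
    return digest.startswith("0" * difficulty)

nonce = solve_challenge("example-session-token", difficulty=4)
assert verify("example-session-token", nonce, 4)
```

The asymmetry is the point: a legitimate browser pays a one-time CPU cost per session, while a crawler hitting thousands of URLs from rotating IPs must pay it again and again.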

AI

DeepSeek-V3 Now Runs At 20 Tokens Per Second On Mac Studio 90

An anonymous reader quotes a report from VentureBeat: Chinese AI startup DeepSeek has quietly released a new large language model that's already sending ripples through the artificial intelligence industry -- not just for its capabilities, but for how it's being deployed. The 641-gigabyte model, dubbed DeepSeek-V3-0324, appeared on AI repository Hugging Face today with virtually no announcement (just an empty README file), continuing the company's pattern of low-key but impactful releases. What makes this launch particularly notable is the model's MIT license -- making it freely available for commercial use -- and early reports that it can run directly on consumer-grade hardware, specifically Apple's Mac Studio with M3 Ultra chip.

"The new DeepSeek-V3-0324 in 4-bit runs at > 20 tokens/second on a 512GB M3 Ultra with mlx-lm!" wrote AI researcher Awni Hannun on social media. While the $9,499 Mac Studio might stretch the definition of "consumer hardware," the ability to run such a massive model locally is a major departure from the data center requirements typically associated with state-of-the-art AI. [...] Simon Willison, a developer tools creator, noted in a blog post that a 4-bit quantized version reduces the storage footprint to 352GB, making it feasible to run on high-end consumer hardware like the Mac Studio with M3 Ultra chip. This represents a potentially significant shift in AI deployment. While traditional AI infrastructure typically relies on multiple Nvidia GPUs consuming several kilowatts of power, the Mac Studio draws less than 200 watts during inference. This efficiency gap suggests the AI industry may need to rethink assumptions about infrastructure requirements for top-tier model performance.
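The storage arithmetic behind that drop from 641GB to 352GB is straightforward: on-disk size scales with bits per weight. A rough sketch, assuming DeepSeek-V3's roughly 671 billion parameters (the exact parameter count and per-layer precision mix are simplified here):

```python
def model_size_gb(num_params: float, bits_per_weight: float) -> float:
    """Approximate on-disk size: parameters times bits each, in gigabytes."""
    return num_params * bits_per_weight / 8 / 1e9

params = 671e9  # DeepSeek-V3's approximate parameter count

# 8-bit storage: ~671 GB, in the ballpark of the 641 GB release.
print(model_size_gb(params, 8))
# 4-bit storage: ~335 GB, near the reported 352 GB quantized footprint.
print(model_size_gb(params, 4))
# The gap versus reported sizes comes from quantization scales/zero-points
# and from layers kept at higher precision.
```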
"The implications of an advanced open-source reasoning model cannot be overstated," reports VentureBeat. "Current reasoning models like OpenAI's o1 and DeepSeek's R1 represent the cutting edge of AI capabilities, demonstrating unprecedented problem-solving abilities in domains from mathematics to coding. Making this technology freely available would democratize access to AI systems currently limited to those with substantial budgets."

"If DeepSeek-R2 follows the trajectory set by R1, it could present a direct challenge to GPT-5, OpenAI's next flagship model rumored for release in coming months. The contrast between OpenAI's closed, heavily-funded approach and DeepSeek's open, resource-efficient strategy represents two competing visions for AI's future."
Google

Google Unveils Gemini 2.5 Pro, Its Latest AI Reasoning Model With Significant Benchmark Gains (blog.google) 7

Google DeepMind has launched Gemini 2.5, a new family of AI models designed to "think" before responding to queries. The initial release, Gemini 2.5 Pro Experimental, tops the LMArena leaderboard by what Google claims is a "significant margin" and demonstrates enhanced reasoning capabilities across technical tasks. The model achieved 18.8% on Humanity's Last Exam without tools, outperforming most competing flagship models. In mathematics, it scored 86.7% on AIME 2025 and 92.0% on AIME 2024 in single attempts, while reaching 84.0% on the GPQA Diamond benchmark for scientific reasoning.

For developers, Gemini 2.5 Pro demonstrates improved coding abilities with 63.8% on SWE-Bench Verified using a custom agent setup, though this falls short of Anthropic's Claude 3.7 Sonnet's score of 70.3%. On Aider Polyglot for code editing, it scores 68.6%, which Google claims surpasses competing models. The reasoning approach builds on Google's previous experiments with reinforcement learning and chain-of-thought prompting. These techniques allow the model to analyze information, incorporate context, and draw conclusions before delivering responses. Gemini 2.5 Pro ships with a 1 million token context window (approximately 750,000 words). The model is available immediately in Google AI Studio and for Gemini Advanced subscribers, with Vertex AI integration planned in the coming weeks.
AI

Apple Says It'll Use Apple Maps Look Around Photos To Train AI (theverge.com) 11

An anonymous reader shares a report: Sometime earlier this month, Apple updated a section of its website that discloses how it collects and uses imagery for Apple Maps' Look Around feature, which is similar to Google Maps' Street View, as spotted by 9to5Mac. A newly added paragraph reveals that, beginning in March 2025, Apple will be using imagery and data collected during Look Around surveys to "train models powering Apple products and services, including models related to image recognition, creation, and enhancement."

Apple collects images and 3D data to enhance and improve Apple Maps using vehicles and backpacks (for pedestrian-only areas) equipped with cameras, sensors, and other equipment including iPhones and iPads. The company says that as part of its commitment to privacy, any images it captures that are published in the Look Around feature have faces and license plates blurred. Apple also says it will only use imagery with those details blurred out for training models. The company accepts requests from people who want their houses blurred as well, though houses are not blurred by default.

AI

Alibaba's Tsai Warns of 'Bubble' in AI Data Center Buildout (yahoo.com) 34

Alibaba Chairman Joe Tsai has warned of a potential bubble forming in data center construction, arguing that the pace of that buildout may outstrip initial demand for AI services. From a report: A rush by big tech firms, investment funds and other entities to erect server bases from the US to Asia is starting to look indiscriminate, the billionaire executive and financier said. Many of those projects are built without clear customers in mind, Tsai told the HSBC Global Investment Summit in Hong Kong Tuesday.

"I start to see the beginning of some kind of bubble," Tsai told delegates. Some of the envisioned projects commenced raising funds without having secured "uptake" agreements, he added. "I start to get worried when people are building data centers on spec. There are a number of people coming up, funds coming out, to raise billions or millions of capital." [...]

At the same time, Tsai had choice words for his US rivals, particularly with their spending. "I'm still astounded by the type of numbers that's being thrown around in the United States about investing into AI," Tsai told the audience. "People are talking, literally talking about $500 billion, several 100 billion dollars. I don't think that's entirely necessary. I think in a way, people are investing ahead of the demand that they're seeing today, but they are projecting much bigger demand."

AI

OpenAI CEO Altman Says AI Will Lead To Fewer Software Engineers (stratechery.com) 163

OpenAI CEO Sam Altman believes companies will eventually need fewer software engineers as AI continues to transform programming. "Each software engineer will just do much, much more for a while. And then at some point, yeah, maybe we do need less software engineers," Altman told Stratechery.

AI now handles over 50% of code authorship in many companies, Altman estimated, a significant shift that's happened rapidly as large language models have improved. The real paradigm shift is still coming, he said. "The big thing I think will come with agentic coding, which no one's doing for real yet," Altman said, suggesting that the next breakthrough will be AI systems that can independently tackle larger programming tasks with minimal human guidance.

While OpenAI continues hiring engineers for now, Altman recommended that high school graduates entering the workforce "get really good at using AI tools," calling it the modern equivalent of learning to code. "When I was graduating as a senior from high school, the obvious tactical thing was get really good at coding. And this is the new version of that," he said.
AI

AlexNet, the AI Model That Started It All, Released In Source Code Form (zdnet.com) 8

An anonymous reader quotes a report from ZDNet: There are many stories of how artificial intelligence came to take over the world, but one of the most important developments is the emergence in 2012 of AlexNet, a neural network that, for the first time, demonstrated a huge jump in a computer's ability to recognize images. Thursday, the Computer History Museum (CHM), in collaboration with Google, released for the first time the AlexNet source code written by University of Toronto graduate student Alex Krizhevsky, placing it on GitHub for all to peruse and download.

"CHM is proud to present the source code to the 2012 version of Alex Krizhevsky, Ilya Sutskever, and Geoffrey Hinton's AlexNet, which transformed the field of artificial intelligence," write the Museum organizers in the readme file on GitHub. Krizhevsky's creation would lead to a flood of innovation in the ensuing years, and tons of capital, based on proof that with sufficient data and computing, neural networks could achieve breakthroughs previously viewed as mainly theoretical.
The Computer History Museum's software historian, Hansen Hsu, published an essay describing how he spent five years negotiating with Google to release the code.
Portables (Apple)

Software Engineer Runs Generative AI On 20-Year-Old PowerBook G4 (macrumors.com) 55

A software engineer successfully ran Meta's Llama 2 generative AI model on a 20-year-old PowerBook G4, demonstrating how well-optimized code can push the limits of legacy hardware. MacRumors' Joe Rossignol reports: While hardware requirements for large language models (LLMs) are typically high, this particular PowerBook G4 model from 2005 is equipped with a mere 1.5GHz PowerPC G4 processor and 1GB of RAM. Despite this 20-year-old hardware, my brother was able to achieve inference with Meta's LLM model Llama 2 on the laptop. The experiment involved porting the open-source llama2.c project, and then accelerating performance with a PowerPC vector extension called AltiVec. His full blog post offers more technical details about the project.
China

Jack Ma-Backed Ant Touts AI Breakthrough Using Chinese Chips (yahoo.com) 30

An anonymous reader quotes a report from Bloomberg: Jack Ma-backed Ant Group used Chinese-made semiconductors to develop techniques for training AI models that would cut costs by 20%, according to people familiar with the matter. Ant used domestic chips, including from affiliate Alibaba and Huawei, to train models using the so-called Mixture of Experts machine learning approach, the people said. It got results similar to those from Nvidia chips like the H800, they said, asking not to be named as the information isn't public. Hangzhou-based Ant is still using Nvidia for AI development but is now relying mostly on alternatives including from Advanced Micro Devices and Chinese chips for its latest models, one of the people said.
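Mixture of Experts is central to those cost savings: a small gating network routes each input to only a few of many expert subnetworks, so most of the model's parameters sit idle on any given token. A toy sketch of the routing idea (the gate, experts, and top-k choice here are illustrative, not Ant's architecture):

```python
import math

def softmax(scores):
    """Convert raw gate scores into a probability distribution."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def moe_forward(x, experts, gate_weights, top_k=2):
    """Route input x to the top_k highest-scoring experts and blend their
    outputs; the other experts never run, which is where MoE saves compute."""
    scores = [sum(w * xi for w, xi in zip(ws, x)) for ws in gate_weights]
    probs = softmax(scores)
    top = sorted(range(len(experts)), key=lambda i: probs[i], reverse=True)[:top_k]
    norm = sum(probs[i] for i in top)
    return sum(probs[i] / norm * experts[i](x) for i in top)

# Three toy "experts" that just scale the input sum differently.
experts = [lambda x, s=s: s * sum(x) for s in (1.0, 2.0, 3.0)]
gate_weights = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
print(moe_forward([1.0, 0.0], experts, gate_weights))  # blends experts 0 and 2
```

Because only top_k experts execute per token, training and inference cost grows with the number of *active* parameters rather than the total, which is why MoE models can match dense models at a fraction of the compute.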

The models mark Ant's entry into a race between Chinese and US companies that's accelerated since DeepSeek demonstrated how capable models can be trained for far less than the billions invested by OpenAI and Alphabet Inc.'s Google. It underscores how Chinese companies are trying to use local alternatives to the most advanced Nvidia semiconductors. While not the most advanced, the H800 is a relatively powerful processor that the US currently bars from export to China. The company published a research paper this month that claimed its models at times outperformed Meta Platforms Inc. in certain benchmarks, which Bloomberg News hasn't independently verified. But if they work as advertised, Ant's platforms could mark another step forward for Chinese artificial intelligence development by slashing the cost of inferencing or supporting AI services.

AI

Microsoft Announces Security AI Agents To Help Overwhelmed Humans 23

Microsoft is expanding its Security Copilot platform with six new AI agents designed to autonomously assist cybersecurity teams by handling tasks like phishing alerts, data loss incidents, and vulnerability monitoring. There are also five third-party AI agents created by its partners, including OneTrust and Tanium. The Verge reports: Microsoft's six security agents will be available in preview next month, and are designed to do things like triage and process phishing and data loss alerts, prioritize critical incidents, and monitor for vulnerabilities. "The six Microsoft Security Copilot agents enable teams to autonomously handle high-volume security and IT tasks while seamlessly integrating with Microsoft Security solutions," says Vasu Jakkal, corporate vice president of Microsoft Security.

Microsoft is also working with OneTrust, Aviatrix, BlueVoyant, Tanium, and Fletch to enable some third-party security agents. These extensions will make it easier to analyze data breaches with OneTrust or perform root cause analysis of network outages and failures with Aviatrix. [...] While these latest AI agents in the Security Copilot are designed for security teams to take advantage of, Microsoft is also improving its phishing protection in Microsoft Teams. Microsoft Defender for Office 365 will start protecting Teams users against phishing and other cyberthreats within Teams next month, including better protection against malicious URLs and attachments.
China

'China's Engineer Dividend Is Paying Off Big Time' 115

An anonymous reader shares a Bloomberg column: Worries over China's "3D" problem -- that deflation, debt and demographics are structurally hampering growth -- are melting away. Instead, investors are talking about how the world's second-largest economy can take on the US and challenge its technological dominance. There is the prevailing sense that China's "engineer dividend" is finally paying off. Between 2000 and 2020, the number of engineers ballooned from 5.2 million to 17.7 million, according to the State Council. That reservoir can help the nation move up the production possibility frontier, the thinking goes.

In a way, DeepSeek shouldn't have come as a surprise. Size matters. A bigger talent pool alone gives China a better chance to disrupt. In 2022, 47% of the world's top 20th percentile AI researchers finished their undergraduate studies in China, well above the 18% share from the US, according to data from the Paulson Institute's in-house think tank, MacroPolo. Last year, the Asian nation ranked third in the number of innovation indicators compiled by the World Intellectual Property Organization, after Singapore and the US. What this also means is that innovative breakthroughs can pop out of nowhere. [...]

More importantly, China's got the cost advantage. Those under the age of 30 account for 44% of the total engineering pool, versus 20% in the US, according to data compiled by Kaiyuan Securities. As a result, compensation for researchers is only about one-eighth of that in the US. Credit must be given to President Xi Jinping for his focus on higher education as he seeks to upgrade China's value chain. These days, roughly 40% of high-school graduates go to universities, versus 10% in 2000. Meanwhile, engineering is one of the most popular majors for post-graduate studies. It's a welcome reprieve for a government that has been struggling with a shrinking population.
Biotech

DNA of 15 Million People For Sale In 23andMe Bankruptcy (404media.co) 51

An anonymous reader quotes a report from 404 Media: 23andMe filed for Chapter 11 bankruptcy Sunday, leaving the fate of millions of people's genetic information up in the air as the company deals with the legal and financial fallout of not properly protecting that genetic information in the first place. The filing shows how dangerous it is to provide your DNA directly to a large, for-profit commercial genetic database; 23andMe is now looking for a buyer to pull it out of bankruptcy. 23andMe said in court documents viewed by 404 Media that since hackers obtained personal data about seven million of its customers in October 2023, including, in some cases "health-related information based upon the user's genetics," it has faced "over 50 class action and state court lawsuits," and that "approximately 35,000 claimants have initiated, filed, or threatened to commence arbitration claims against the company." It is seeking bankruptcy protection in part to simplify the fallout of these legal cases, and because it believes it may not have money to pay for the potential damages associated with these cases.

CEO and cofounder Anne Wojcicki announced she is leaving the company as part of this process. The company has the genetic data of more than 15 million customers. According to its Chapter 11 filing, 23andMe owes money to a host of pharmaceutical companies, pharmacies, artificial intelligence companies (including Aganitha AI and CoreWeave), as well as health insurance companies and marketing companies.
Shortly before the filing, California Attorney General Rob Bonta issued an "urgent" alert to 23andMe customers: "Given 23andMe's reported financial distress, I remind Californians to consider invoking their rights and directing 23andMe to delete their data and destroy any samples of genetic material held by the company."

In a letter to customers Sunday, 23andMe said: "Your data remains protected. The Chapter 11 filing does not change how we store, manage, or protect customer data. Our users' privacy and data are important considerations in any transaction, and we remain committed to our users' privacy and to being transparent with our customers about how their data is managed." It added that any buyer will have to "comply with applicable law with respect to the treatment of customer data."

404 Media's Jason Koebler notes that "there's no way of knowing who is going to buy it, why they will be interested, and what will become of its millions of customers' DNA sequences. 23andMe has claimed over the years that it strongly resists law enforcement requests for information and that it takes customer security seriously. But the company has in recent years changed its terms of service, partnered with big pharmaceutical companies, and, of course, was hacked."
China

China Bans Compulsory Facial Recognition and Its Use in Private Spaces Like Hotel Rooms (theregister.com) 28

China's Cyberspace Administration and Ministry of Public Security have outlawed the use of facial recognition without consent. From a report: The two orgs last Friday published new rules on facial recognition and an explainer that spell out how orgs that want to use facial recognition must first conduct a "personal information protection impact assessment" that considers whether using the tech is necessary, impacts on individuals' privacy, and risks of data leakage. Organizations that decide to use facial recognition must encrypt biometric data, and audit the information security techniques and practices they use to protect facial scans. Chinese organizations that go through that process and decide they want to use facial recognition can only do so after securing individuals' consent. The rules also ban the use of facial recognition equipment in public places such as hotel rooms, public bathrooms, public dressing rooms, and public toilets. The measures don't apply to researchers or to what machine translation of the rules describes as "algorithm training activities" -- suggesting images of citizens' faces are fair game when used to train AI models.
AI

AI Will Impact GDP of Every Country By Double Digits, Says Mistral CEO (businessinsider.com) 31

Countries must develop their own artificial intelligence infrastructure or risk significant economic losses as the technology transforms global economies, Mistral CEO Arthur Mensch said last week.

"It will have an impact on GDP of every country in the double digits in the coming years," Mensch told the A16z podcast, warning that nations without domestic AI systems would see capital flow elsewhere. The French startup executive compared AI to electricity adoption a century ago. "If you weren't building electricity factories, you were preparing yourself to buy it from your neighbors, which creates dependencies," he said.
Operating Systems

Linux Kernel 6.14 Officially Released (9to5linux.com) 8

prisoninmate shares a report: Highlights of Linux 6.14 include Btrfs RAID1 read balancing support, a new ntsync subsystem for Win NT synchronization primitives to boost game emulation with Wine, uncached buffered I/O support, and a new accelerator driver for the AMD XDNA Ryzen AI NPUs (Neural Processing Units).

Also new is DRM panic support for the AMDGPU driver, reflink and reverse-mapping support for the XFS real-time device, Intel Clearwater Forest server support, support for SELinux extended permissions, FUSE support for io_uring, a new fsnotify file pre-access event type, and a new cgroup controller for device memory.

AI

How AI Coding Assistants Could Be Compromised Via Rules File (scworld.com) 31

Slashdot reader spatwei shared this report from the cybersecurity site SC World: AI coding assistants such as GitHub Copilot and Cursor could be manipulated to generate code containing backdoors, vulnerabilities and other security issues via distribution of malicious rule configuration files, Pillar Security researchers reported Tuesday.

Rules files are used by AI coding agents to guide their behavior when generating or editing code. For example, a rules file may include instructions for the assistant to follow certain coding best practices, utilize specific formatting, or output responses in a specific language.

The attack technique developed by the Pillar researchers, which they call 'Rules File Backdoor,' weaponizes rules files by injecting them with instructions that are invisible to a human user but readable by the AI agent.

Hidden Unicode characters like bidirectional text markers and zero-width joiners can be used to obfuscate malicious instructions in the user interface and in GitHub pull requests, the researchers noted.
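A defensive scan for such characters is straightforward: flag any zero-width or bidirectional control character before a rules file is imported. A minimal sketch (the character list is illustrative rather than exhaustive, and this is not Pillar's tooling):

```python
import unicodedata

# Characters commonly abused to hide instructions: zero-width joiners/spaces
# and bidirectional text controls (list illustrative, not exhaustive).
SUSPICIOUS = {
    "\u200b",  # ZERO WIDTH SPACE
    "\u200c",  # ZERO WIDTH NON-JOINER
    "\u200d",  # ZERO WIDTH JOINER
    "\u2060",  # WORD JOINER
    "\ufeff",  # ZERO WIDTH NO-BREAK SPACE (BOM)
    "\u202a", "\u202b", "\u202c", "\u202d", "\u202e",  # bidi embeddings/overrides
    "\u2066", "\u2067", "\u2068", "\u2069",            # bidi isolates
}

def find_hidden_chars(text: str) -> list[tuple[int, str]]:
    """Return (offset, Unicode name) for each invisible control character found."""
    return [
        (i, unicodedata.name(ch, f"U+{ord(ch):04X}"))
        for i, ch in enumerate(text)
        if ch in SUSPICIOUS
    ]

# A hypothetical poisoned rules file: the payload is invisible in most editors.
rules = "Always use tabs.\u200b\u202ehidden instructions here"
for offset, name in find_hidden_chars(rules):
    print(offset, name)
```

Running a check like this in CI, or rejecting pull requests that introduce such characters, closes off the invisibility that the attack depends on.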

Rules configurations are often shared among developer communities and distributed through open-source repositories or included in project templates; therefore, an attacker could distribute a malicious rules file by sharing it on a forum, publishing it on an open-source platform like GitHub or injecting it via a pull request to a popular repository.

Once the poisoned rules file is imported to GitHub Copilot or Cursor, the AI agent will read and follow the attacker's instructions while assisting the victim's future coding projects.

Education

America's College Board Launches AP Cybersecurity Course For Non-College-Bound Students (edweek.org) 26

Besides administering standardized pre-college tests, America's nonprofit College Board designs college-level classes that high school students can take. But now they're also crafting courses "not just with higher education at the table, but industry partners such as the U.S. Chamber of Commerce and the technology giant IBM," reports Education Week.

"The organization hopes the effort will make high school content more meaningful to students by connecting it to in-demand job skills." It believes the approach may entice a new kind of AP student: those who may not be immediately college-bound.... The first two classes developed through this career-driven model — dubbed AP Career Kickstart — focus on cybersecurity and business principles/personal finance, two fast-growing areas in the workforce." Students who enroll in the courses and excel on a capstone assessment could earn college credit in high school, just as they have for years with traditional AP courses in subjects like chemistry and literature. However, the College Board also believes that students could use success in the courses as a selling point with potential employers... Both the business and cybersecurity courses could also help fulfill state high school graduation requirements for computer science education...

The cybersecurity course is being piloted in 200 schools this school year and is expected to expand to 800 schools next school year... [T]he College Board is planning to invest heavily in training K-12 teachers to lead the cybersecurity course.

IBM's director of technology, data and AI called the effort "a really good way for corporations and companies to help shape the curriculum and the future workforce" while "letting them know what we're looking for." In the article the associate superintendent for teaching at a Chicago-area high school district calls the College Board's move a clear signal that "career-focused learning is rigorous, it's valuable, and it deserves the same recognition as traditional academic pathways."

Also interesting is why the College Board says they're doing it: The effort may also help the College Board — founded more than a century ago — maintain AP's prominence as artificial intelligence tools that can already ace nearly every existing AP test take on an ever-greater share of job tasks once performed by humans. "High schools had a crisis of relevance far before AI," David Coleman, the CEO of the College Board, said in a wide-ranging interview with EdWeek last month. "How do we make high school relevant, engaging, and purposeful? Bluntly, it takes [the] next generation of coursework. We are reconsidering the kinds of courses we offer...."

"It's not a pivot because it's not to the exclusion of higher ed," Coleman said. "What we are doing is giving employers an equal voice."

Thanks to long-time Slashdot reader theodp for sharing the article.
AI

Majority of AI Researchers Say Tech Industry Is Pouring Billions Into a Dead End (futurism.com) 121

Founded in 1979, the Association for the Advancement of AI is an international scientific society. Recently 25 of its AI researchers surveyed 475 respondents in the AAAI community about "the trajectory of AI research" — and their results were surprising.

Futurism calls the results "a resounding rebuff to the tech industry's long-preferred method of achieving AI gains" — namely, adding more hardware: You can only throw so much money at a problem. This, more or less, is the line being taken by AI researchers in a recent survey. Asked whether "scaling up" current AI approaches could lead to achieving artificial general intelligence (AGI), or a general purpose AI that matches or surpasses human cognition, an overwhelming 76 percent of respondents said it was "unlikely" or "very unlikely" to succeed...

"The vast investments in scaling, unaccompanied by any comparable efforts to understand what was going on, always seemed to me to be misplaced," Stuart Russell, a computer scientist at UC Berkeley who helped organize the report, told New Scientist. "I think that, about a year ago, it started to become obvious to everyone that the benefits of scaling in the conventional sense had plateaued...." In November last year, reports indicated that OpenAI researchers discovered that the upcoming version of its GPT large language model showed significantly less improvement over its predecessor than previous versions had, and in some cases no improvement at all. In December, Google CEO Sundar Pichai went on the record as saying that easy AI gains were "over" -- but confidently asserted that there was no reason the industry couldn't "just keep scaling up."

Cheaper, more efficient approaches are being explored. OpenAI has used a method known as test-time compute with its latest models, in which the AI spends more time to "think" before selecting the most promising solution. That achieved a performance boost that would've otherwise taken mountains of scaling to replicate, researchers claimed. But this approach is "unlikely to be a silver bullet," Arvind Narayanan, a computer scientist at Princeton University, told New Scientist.
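One simple form of spending extra compute at inference time is self-consistency sampling: draw many candidate answers and keep the majority vote, trading more inference for reliability. A toy sketch (the stand-in sampler is hypothetical; real systems sample and score full reasoning chains):

```python
import random
from collections import Counter

def sample_answer(rng: random.Random) -> str:
    """Stand-in for one stochastic model completion: usually right, sometimes not."""
    return rng.choice(["42", "42", "42", "41", "43"])

def best_of_n(n: int, seed: int = 0) -> str:
    """Sample n candidate answers and return the majority vote; more samples
    buys more reliability at the cost of n times the inference compute."""
    rng = random.Random(seed)
    votes = Counter(sample_answer(rng) for _ in range(n))
    return votes.most_common(1)[0][0]

print(best_of_n(1), best_of_n(25))  # a single sample can miss; 25 rarely do
```

The compute bill scales linearly with n, which is why researchers caution that test-time techniques defer rather than eliminate the cost question.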

Programming

US Programming Jobs Plunge 27.5% in Two Years (msn.com) 104

Computer programming jobs in the US have declined by more than a quarter over the past two years, placing the profession among the 10 hardest-hit occupations of 420-plus jobs tracked by the Bureau of Labor Statistics and potentially signaling the first concrete evidence of artificial intelligence replacing workers.

The timing coincides with OpenAI's release of ChatGPT in late 2022. Anthropic researchers found people use AI for programming tasks more than for those of any other occupation, though 57 percent of users employ AI to augment rather than automate work. "Without getting hysterical, the unemployment jump for programming really does look at least partly like an early, visible labor market effect of AI," said Mark Muro of the Brookings Institution.

While software developer positions have remained stable with only a 0.3 percent decline, programmers who perform more routine coding from specifications provided by others have seen their ranks diminish to levels not seen since 1980. Economists caution that high interest rates and post-pandemic tech industry contraction have also contributed to the decline in programming jobs, which typically pay $99,700 compared to $132,270 for developers.
