Education

Microsoft: Computer Programming Is Dying, Long Live AI Literacy 104

theodp writes: On Tuesday, Microsoft GM of Education and Workforce Policy (and former Code.org Chief Academic Officer) Pat Yongpradit posted an obituary of sorts for coders. "Computer programmers and software developers are codified differently in the BLS [Bureau of Labor Statistics] data," Yongpradit wrote. "The modern AI-infused world needs less computer programmers (coders) and more software developers (more holistic and higher level). So when folks say that there is less hiring of computer programmers, they are right. But there will be more hiring of software developers, especially those who have adopted an AI-forward mindset and skillset. [...] The number of just pure computer programming roles has already been declining due to reasons like outsourcing, AI will just accelerate the decline."

On Wednesday, Yongpradit's colleague Allyson Knox, Senior Director of Education and Workforce Policy at Microsoft, put another AI nail in the coder coffin, testifying before the House Committee on Education and the Workforce's Subcommittee on Early Childhood, Elementary, and Secondary Education on Building an AI-ready America: Teaching in the Age of AI. "Thank you to Chairman Tim Walberg, Ranking Member Bobby Scott, Chair Kevin Kiley, Ranking Member Suzanne Bonamici and members of the Subcommittee for the opportunity to share Microsoft's perspective and that of the educators and parents we hear from every day across the country," Knox wrote in a LinkedIn post.

"Three themes continue to emerge throughout these discussions: 1. Educators want support to build AI literacy and critical thinking skills. 2. Schools need guidance and guardrails to ensure student data is protected and adults remain in control. 3. Teachers want classroom-ready tools, and a voice in shaping them. If we focus on these priorities, we can help ensure AI expands opportunity for every student across the United States."

Yongpradit and Knox report up to Microsoft President Brad Smith, who last July told Code.org CEO Hadi Partovi it was time for the tech-backed nonprofit to "switch hats" from coding to AI as Microsoft announced a new $4 billion initiative to advance AI education. Smith's thoughts on the extraordinary promise of AI in education were cited by Knox in her 2026 Congressional testimony. Interestingly, Knox argued for the importance of computer programming literacy in her 2013 Congressional testimony at a hearing on Our Nation of Builders: Training the Builders of the Future. "Congress needs to come up with fresh ideas on how we can continue to train the next generation of builders, programmers, manufacturers, technicians and entrepreneurs," said Rep. Lee Terry to open the discussion.

So, are reports of computer programming's imminent death greatly exaggerated?
Television

Your Smart TV May Be Crawling the Web for AI (theverge.com) 42

Bright Data, a company that operates one of the world's largest residential proxy networks, has been running an SDK inside smart TV apps that turns those devices into nodes for web crawling -- collecting data used by AI companies, among other clients -- and most consumers have had no idea it was happening.

The company has published more than 200 first-party apps to LG's app store alone and still lists Samsung's Tizen OS and LG's webOS as supported platforms, though LG says the SDK is "not officially supported" and its operation on webOS "is not guaranteed." Google, Amazon, and Roku have all since adopted policies restricting or banning background proxy SDKs, and Bright Data no longer supports those platforms.

Several Roku apps still running the SDK disappeared from the store after the Verge journalist behind this reporting contacted the company.
AI

OpenAI Raises $110 Billion in the Largest Private Funding Round Ever (openai.com) 20

OpenAI has closed what is now the largest private financing in history -- a $110 billion round at a $730 billion pre-money valuation that more than doubles the $40 billion raise it completed just a year ago, itself a record for a private tech company at the time.

Amazon invested $50 billion, SoftBank put in $30 billion, and Nvidia committed $30 billion; additional investors are expected to join as the round progresses. The valuation is a sharp jump from the $500 billion OpenAI commanded in a secondary financing in October, and the round dwarfs recent raises by rivals Anthropic ($30 billion) and xAI ($20 billion).

The company has been telling investors it is now targeting roughly $600 billion in total compute spend by 2030, a more measured figure than the $1.4 trillion in infrastructure commitments CEO Sam Altman had touted months earlier. OpenAI is projecting more than $280 billion in total revenue by 2030, split roughly equally between consumer and enterprise. ChatGPT now has over 900 million weekly active users and more than 50 million paying subscribers.
AI

Memory Price Hikes Will Kill Off Budget PCs and Smartphones, Analyst Warns 62

An anonymous reader quotes a report from The Register: Ballooning memory prices are forecast to kill off entry-level PCs, leading to a decline in global shipments this year -- and a similar effect is going to hit smartphones. Analyst biz Gartner is projecting a drop in PC shipments of more than 10 percent during 2026, and a decline of around 8 percent for smartphones, all due to the AI-driven memory shortage. Some types of memory have doubled or quadrupled in price since last year, and Gartner believes the DRAM and NAND flash used in PCs and phones are set for a further 130 percent rise by the end of 2026.
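Those percentages compound rather than add. A minimal sketch, using illustrative numbers rather than Gartner's actual figures, shows how a part that has already doubled and then rises a further 130 percent ends up at several times its original price:

```python
def compounded_multiplier(increases):
    """Multiply out a sequence of fractional price increases."""
    total = 1.0
    for pct in increases:
        total *= 1 + pct
    return total

# Memory that doubled last year (+100%), then rises another 130%:
print(compounded_multiplier([1.00, 1.30]))  # 4.6x the original price
```

On these assumed inputs, the combined effect is a 4.6x price, which is why Gartner sees no room left in a sub-$500 bill of materials.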

The upshot of this is that the budget PC will disappear, simply because vendors won't be able to build them at a price that will satisfy cost-conscious buyers, according to Gartner research director Ranjit Atwal. "Because the price of memory is increasing so much, vendors lose the ability to provide entry-level PCs -- those below about $500," he told The Register. PC makers could just raise the price of their cheap and cheerful boxes to above that level to compensate for the memory hike, but price-sensitive buyers simply won't bite, he added.

Another factor expected to add to the declining fortunes of the PC industry this year is AI devices -- systems equipped with special hardware for accelerating AI tasks, typically via a neural processing unit (NPU) embedded in the CPU. These systems were predicted to take the market by storm, but they require more memory to support AI processing and vendors like to mark them up to a premium price. "Historically, downgrading specifications was the way to go when prices were being squeezed, but that's difficult here," Atwal said. "The thinking was that the average price [of AI PCs] would fall this year, and lead to more adoption, but that's not happening." The lack of killer applications isn't helping either.
The Military

Anthropic CEO Says AI Company 'Cannot In Good Conscience Accede' To Pentagon (apnews.com) 84

An anonymous reader quotes a report from the Associated Press: Anthropic CEO Dario Amodei said Thursday the artificial intelligence company "cannot in good conscience accede" to the Pentagon's demands to allow wider use of its technology. The maker of the AI chatbot Claude said in a statement that it's not walking away from negotiations, but that new contract language received from the Defense Department "made virtually no progress on preventing Claude's use for mass surveillance of Americans or in fully autonomous weapons."

The Pentagon's top spokesman has reiterated that the military wants to use Anthropic's artificial intelligence technology in legal ways and will not let the company dictate any limits ahead of a Friday deadline to agree to its demands. Sean Parnell said Thursday on social media that the Pentagon "has no interest in using AI to conduct mass surveillance of Americans (which is illegal) nor do we want to use AI to develop autonomous weapons that operate without human involvement."

Anthropic's policies prevent its models, such as its chatbot Claude, from being used for those purposes. It's the last of its peers -- the Pentagon also has contracts with Google, OpenAI and Elon Musk's xAI -- to not supply its technology to a new U.S. military internal network. Parnell said the Pentagon wants to "use Anthropic's model for all lawful purposes" but didn't offer details on what that entailed. He said opening up use of the technology would prevent the company from "jeopardizing critical military operations." "We will not let ANY company dictate the terms regarding how we make operational decisions," he said.
In a post on X, Parnell said Anthropic will "have until 5:01 PM ET on Friday to decide. Otherwise, we will terminate our partnership with Anthropic and deem them a supply chain risk for DOD."
Businesses

Jack Dorsey's Block Cuts Nearly Half of Its Staff In AI Gamble (theverge.com) 34

Jack Dorsey's Block is cutting more than 4,000 jobs, or nearly half its workforce, as part of a deliberate shift toward becoming a smaller, "intelligence-native" company built around AI. The Verge reports: "We're not making this decision because we're in trouble," Dorsey says. "Our business is strong. Gross profit continues to grow, we continue to serve more and more customers, and profitability is improving. But something has changed. We're already seeing that the intelligence tools we're creating and using, paired with smaller and flatter teams, are enabling a new way of working which fundamentally changes what it means to build and run a company. And that's accelerating rapidly."

Dorsey opted to do a big layoff instead of gradual cuts because "I'd rather take a hard, clear action now and build from a position we believe in than manage a slow reduction of people toward the same outcome." The layoffs were announced on Thursday as part of the company's Q4 2025 earnings. In a shareholder letter (PDF), Dorsey says that "We believe Block will be significantly more valuable as a smaller, faster, intelligence-native company. Everything we do from here is in service of that."

See Also: Jack Dorsey's Block Accused of 'AI-Washing' to Excuse Laying Off Nearly Half Its Workforce (entrepreneur.com)
Education

What's the Point of School When AI Can Do Your Homework? 153

An anonymous reader quotes a report from 404 Media: There's a new agentic AI called Einstein that will, according to its developers, live the life of a student for them. Einstein's website claims that the AI will attend lectures for you, write your papers, and even log into EdTech platforms like Canvas to take tests and participate in discussions. Educators told me that Einstein is just one of many AI tools that can do homework for students, but that it should be seen as a warning to schools, which students increasingly view as a place to gain a diploma and status rather than as valuable for the education itself.

If an AI can go to school for you, what's the point of going to school? For Advait Paliwal, Brown dropout and co-creator of Einstein, there isn't one. "I think about horses," he said. "They used to pull carriages, but when cars came around, I'd argue horses became a lot more free," he said. "They can do whatever they want now. It would be weird if horses revolted and said 'no, I want to pull carriages, this is my purpose in life.'" But humans aren't horses. "This is much bigger than Einstein," Matthew Kirschenbaum told 404 Media. "Einstein is symptomatic. I doubt we'll be talking about Einstein, as such, in a year. But it's symptomatic of what's about to descend on higher ed and secondary ed as well."

[...] The attractiveness of agentic AIs is a symptom of a decades-long trend in higher education. "Universities by and large adopted a transactive model of education," Kirschenbaum said. "Students see their diploma as a credential. They pay tuition and at the end of four years, sometimes five years, they receive the credential and, in theory at least, that is then the springboard to economic stability and prosperity." Paliwal seems to agree. He told 404 Media that he attempted to change the university from the inside while working as a TA, but felt stymied by politics. "The only way to force these institutions to evolve is to bring reality to their face. And usually the loudest critics are the ones who can't do their own job well and live in fear of automation," he said.
"I think we really need to question what learning even is and whether traditional educational institutions are actually helping or harming us," said Paliwal. "We're seeing a rise in unemployment across degree holders because of AI, and that makes me question whether this is really what humans are born to do. We've been brainwashed as a society into valuing ourselves by the output of our productive work, and I think humanity is a lot more beautiful than that. Is it really education if we're just memorizing things to perform a task well?"

Kirschenbaum added: "What we're finding is that if forms of education can be transacted then we've just about arrived at the point where autonomous software AI agents are capable of performing the transaction on your behalf," he said. "And so the whole educational paradigm has come back to essentially bite itself in the ass."
Google

Google Launches Nano Banana 2 Model With Faster Image Generation (techcrunch.com) 6

Google has launched Nano Banana 2 (Gemini 3.1 Flash Image), a faster, more realistic image generation model that becomes the default across Gemini, Search, Lens, and Flow. TechCrunch reports: The new Nano Banana 2 retains some of the high-fidelity characteristics of the Pro model but produces images faster. The company says you can create images with a resolution ranging from 512px to 4K, in different aspect ratios. Nano Banana 2 can maintain character consistency for up to five characters and fidelity of up to 14 objects in one workflow for better storytelling. Users can also issue complex requests with detailed nuances for image generation, Google says. In addition, users can create media with more vibrant lighting, richer textures, and sharper detail.

[...] On Google's higher-end plans, Google AI Pro and Ultra, subscribers can continue to use Nano Banana Pro for specialized tasks by regenerating images via the three-dot menu. [...] The company said that all images created through the new model will have a SynthID watermark, which is Google's mark to denote AI-generated images. The images are also interoperable with C2PA Content Credentials, created by an industry body consisting of companies like Adobe, Microsoft, Google, OpenAI, and Meta. Google said that since launching the SynthID verification in the Gemini app in November, people have used it over 20 million times.

China

Chinese Official's Use of ChatGPT Revealed a Global Intimidation Operation (cnn.com) 20

New submitter sabbede shares a report from CNN Politics: A sprawling Chinese influence operation -- accidentally revealed by a Chinese law enforcement official's use of ChatGPT -- focused on intimidating Chinese dissidents abroad, including by impersonating US immigration officials, according to a new report from ChatGPT-maker OpenAI. The Chinese law enforcement official used ChatGPT like a diary to document the alleged covert campaign of suppression, OpenAI said. In one instance, Chinese operators allegedly disguised themselves as US immigration officials to warn a US-based Chinese dissident that their public statements had supposedly broken the law, according to the ChatGPT user. In another case, they describe an effort to use forged documents from a US county court to try to get a Chinese dissident's social media account taken down. "This is what Chinese modern transnational repression looks like," Ben Nimmo, principal investigator at OpenAI, told reporters ahead of the report's release. "It's not just digital. It's not just about trolling. It's industrialized. It's about trying to hit critics of the CCP [Chinese Communist Party] with everything, everywhere, all at once."

Michael Horowitz, a former Pentagon official focused on emerging technologies, said the report from OpenAI "clearly demonstrates the way that China is actively employing AI tools to enhance information operations. US-China AI competition is continuing to intensify. This competition is not just taking place at the frontier, but in how China's government is planning and implementing the day-to-day of their surveillance and information apparatus."
Firefox

Firefox 148 Lets You Kill All AI Features in One Click (firefox.com) 48

Mozilla has released Firefox 148 for Windows, macOS and Linux, bringing a new AI Settings section that lets users disable all of the browser's AI-powered features in one click and then selectively re-enable the ones they actually want, such as the translation tool, which runs locally rather than in the cloud.

The update also patches more than 50 security vulnerabilities -- none known to be under active exploitation -- over half of which Mozilla classifies as high risk, including five sandbox escape flaws and eight use-after-free bugs in the JavaScript engine that could allow code execution.
Businesses

Which Piece of Speculative Fiction Had the Greatest Single-Day Stock Market Impact? (ft.com) 27

Speaking of Citrini's blog post, which imagines a near-future AI-driven economic collapse, and which helped trigger the S&P 500's worst single-day drop in nearly two weeks on Monday, FT Alphaville decided to track how US stock markets have moved on the release days of notable dystopian speculative fiction throughout history. The story adds: You may contend that this is facile. We would agree. You might contend that the comparisons make no sense because it's possible to read a blog post during a single work shift, but it's trickier to complete a whole novel (or sneak out to watch a movie). We would contend: do you really think traders read? Let's begin. The methodology -- tracking S&P 500 daily moves for post-1986 releases and DJIA moves for pre-1986 ones -- crowned The Matrix as the all-time leader, its March 1999 US debut coinciding with a 1.11% drop in the index. Citrini's "The 2028 Global Intelligence Crisis" came in a close second at -1.04%. On the positive end, the 2013 release of Her, a film about a man falling in love with an AI agent, coincided with the largest gain in the set at +1.66%.
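The methodology reduces to looking up a one-day percent change on each release date. A minimal sketch of that calculation, using made-up closing prices rather than FT's actual data:

```python
# Illustrative only: these closes and dates are invented to show the
# calculation, not taken from FT Alphaville's dataset.
import pandas as pd

closes = pd.Series(
    [1282.80, 1268.59, 1281.66],  # hypothetical S&P 500 closes
    index=pd.to_datetime(["1999-03-30", "1999-03-31", "1999-04-01"]),
)

# Daily percent move: (close_t / close_{t-1} - 1) * 100
daily_moves = closes.pct_change() * 100

# Look up the move on a release date:
release_day = pd.Timestamp("1999-03-31")
print(round(daily_moves[release_day], 2))  # -1.11 with these numbers
```

With real data the only extra steps are fetching historical closes and mapping each title to its US release date.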
AI

The AI Case Against Indian IT Ignores What Indian IT Actually Does (indiadispatch.com) 28

A fictional memo set in June 2028, published by short seller Citrini Research, wiped roughly $10 billion off Indian IT stocks in a single trading session on February 24 and sent the Nifty IT index down as much as 5.3% -- its worst single-day fall since August 2023 -- on the argument that AI coding agents have collapsed the cost advantage of Indian developers to the price of electricity. The index has shed more than $68 billion in market value in February alone, its worst month since 2003.

But the core claim that India's entire $205 billion software export industry rests on cheap labor is roughly 15 years out of date, an analysis argues: custom application maintenance alone accounts for about 35% of a typical Indian IT firm's revenue, per HSBC, and enterprise platforms require deterministic outputs that probabilistic AI systems cannot wholesale replace. HSBC estimates gross AI-led revenue deflation for the sector at 14-16%, a measured headwind rather than an extinction event. The story adds: 24 years of software export data that has never posted a decline, $200 billion in annual revenue, partnerships with the very AI labs whose products are supposed to be the instrument of the sector's destruction, possibly a new $1.5 trillion market category emerging at the intersection of services and software, and the largest U.S. corporates in the middle of mapping their entire workforces into process architectures that require technology partners to modernise. I think India's IT is going to be fine.
AI

Burger King Will Use AI To Check If Employees Say 'Please' and 'Thank You' (theverge.com) 124

An anonymous reader shares a report: Burger King is launching an AI chatbot that will live in the headsets used by employees. The voice-enabled chatbot, called "Patty," is part of an overarching BK Assistant platform that will not only assist employees with meal preparation but also evaluate their interactions with customers for "friendliness."

Thibault Roux, Burger King's chief digital officer, tells The Verge that the company compiled information from franchisees and guests on how to measure friendliness, resulting in the fast food chain training its AI system to recognize certain words and phrases, such as "welcome to Burger King," "please," and "thank you." Managers can then ask the AI assistant how their location is performing on friendliness. "This is all meant to be a coaching tool," Roux says, adding that the company is "iterating" on capturing the tone of conversations as well.
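At its simplest, the phrase-recognition side of such a system amounts to spotting courtesy phrases in an interaction transcript. A hedged sketch of that idea -- the phrase list and scoring here are invented for illustration and are not Burger King's actual system, which works on live audio:

```python
# Hypothetical phrase list; the article names these as examples.
COURTESY_PHRASES = ["welcome to burger king", "please", "thank you"]

def friendliness_hits(transcript: str) -> dict:
    """Count occurrences of each courtesy phrase in a transcript."""
    text = transcript.lower()
    return {phrase: text.count(phrase) for phrase in COURTESY_PHRASES}

order = "Welcome to Burger King! ... Please pull forward. Thank you!"
print(friendliness_hits(order))
# {'welcome to burger king': 1, 'please': 1, 'thank you': 1}
```

Aggregating such counts across shifts is what would let a manager ask how a location is "performing on friendliness"; capturing tone, as Roux notes, is the genuinely hard part.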

IT

Cloudflare Experiment Ports Most of Next.js API in 'One Week' With AI (theregister.com) 29

An anonymous reader shares a report: A Cloudflare engineer says he has implemented 94% of the Next.js API by directing Anthropic's Claude, spending about $1,100 on tokens. The purpose of the experimental project was not to show off AI coding, but to address an issue with Next.js, the popular React-based framework sponsored by Vercel.

According to Cloudflare engineering director Steve Faulkner, the Next.js tooling is "entirely bespoke... If you want to deploy it to Cloudflare, Netlify, or AWS Lambda, you have to take that build output and reshape it into something the target platform can actually run."

The Next.js team is addressing this following numerous complaints that deploying the framework with full features on platforms other than Vercel is too difficult, with a feature in progress called deployment adapters. "Vercel will use the same adapter API as every other partner," the company said when introducing the planned feature last year.

Businesses

Uber Employees Have Built an AI Clone of Their CEO To Practice Presentations Before the Real Thing (businessinsider.com) 30

An anonymous reader shares a report: Some Uber employees have built an AI clone of CEO Dara Khosrowshahi -- internally dubbed "Dara AI" -- and have been using it to rehearse and fine-tune presentations before delivering them to the actual Khosrowshahi, he revealed on a recent podcast.

Khosrowshahi said a team member told him that some teams "make the presentation to the Dara AI as a prep for making a presentation to me," and that the bot helps them adjust their slides and sharpen their delivery. Asked by the podcast host whether employees might eventually show Dara AI to the board, Khosrowshahi laughed but noted that AI models still can't process and act on new information the way executives do. "When the models can learn in real-time, that is the point at which I'm going to think that, yeah, we are all replaceable," he said.

Security

AI Can Find Hundreds of Software Bugs -- Fixing Them Is Another Story (theregister.com) 26

Anthropic last week promoted Claude Code Security, a research preview capability that uses its Claude Opus 4.6 model to hunt for software vulnerabilities, claiming its red team had surfaced over 500 bugs in production open-source codebases -- but security researchers say the real bottleneck was never discovery.

Guy Azari, a former security researcher at Microsoft and Palo Alto Networks, told The Register that only two to three of those 500 vulnerabilities have been fixed and none have received CVE assignments. The National Vulnerability Database already carried a backlog of roughly 30,000 CVE entries awaiting analysis in 2025, and nearly two-thirds of reported open-source vulnerabilities lacked an NVD severity score.

The curl project closed its bug bounty program because maintainers could no longer handle the flood of poorly crafted reports from AI tools and humans alike. Feross Aboukhadijeh, CEO of security firm Socket, said discovery is becoming dramatically cheaper but validating findings, coordinating with maintainers, and developing architecture-aligned patches remains slow, human-intensive work.
Businesses

Tech Firms Aren't Just Encouraging Their Workers To Use AI. They're Enforcing It. (msn.com) 101

Tech companies ranging from 300-person startups to giants like Amazon, Google, Meta, Microsoft and Salesforce have moved beyond encouraging employees to use AI tools and are now actively tracking adoption and, in several cases, tying it to performance reviews. Google is factoring AI use into some software engineer reviews for the first time this year, and Meta's new performance review system will do the same -- it can track how many lines of code an engineer wrote with AI assistance.

Amazon Web Services managers have dashboards showing individual engineers' AI-tool usage and consider adoption when evaluating promotions. About 42% of tech-industry workers said their direct manager expects AI use in daily work as of last October, up from 32% eight months earlier, according to AI consulting firm Section. At software maker Autodesk, CEO Andrew Anagnost acknowledged that some employees had been stealthily using coding tools like Cursor that the company had initially blocked -- and warned that AI holdouts "probably won't survive long term."
XBox (Games)

Xbox Co-founder Says Microsoft is Quietly Sunsetting the Platform (gamesbeat.com) 46

Seamus Blackley, one of the original founders of Xbox who helped convince Bill Gates and Steve Ballmer to back a console project more than 26 years ago, told GamesBeat in an interview that he believes Microsoft is quietly sunsetting the platform under the guise of an AI-driven leadership transition.

Microsoft recently announced that Asha Sharma, whose career has focused on AI and software as a service, will replace Phil Spencer as Xbox CEO, and that COO and president Sarah Bond is leaving the company. Blackley said he expects Sharma's role to be that of "a palliative care doctor who slides Xbox gently into the night," arguing that Satya Nadella's all-consuming bet on generative AI has turned every business unit -- Xbox included -- into a nail for the same hammer.

He compared the appointment to putting someone who doesn't like movies in charge of a major motion picture studio, and advised Sharma to either develop a genuine passion for games or find a way to leave the job soon.
AI

Hacker Used Anthropic's Claude To Steal Sensitive Mexican Data (bloomberg.com) 22

A hacker exploited Anthropic's AI chatbot to carry out a series of attacks against Mexican government agencies, resulting in the theft of a huge trove of sensitive tax and voter information, according to cybersecurity researchers. From a report: The unknown Claude user wrote Spanish-language prompts for the chatbot to act as an elite hacker, finding vulnerabilities in government networks, writing computer scripts to exploit them and determining ways to automate data theft, Israeli cybersecurity startup Gambit Security said in research published Wednesday.

The activity started in December and continued for roughly a month. In all, 150 gigabytes of Mexican government data was stolen, including documents related to 195 million taxpayer records as well as voter records, government employee credentials and civil registry files, according to the researchers.

AI

Anthropic Drops Flagship Safety Pledge (time.com) 81

Anthropic, the AI company that has long positioned itself as the industry's most safety-conscious research lab, is dropping the central commitment of its Responsible Scaling Policy -- a 2023 pledge to never train an AI system unless it could guarantee beforehand that its safety measures were adequate. "We didn't really feel, with the rapid advance of AI, that it made sense for us to make unilateral commitments ... if competitors are blazing ahead," chief science officer Jared Kaplan told TIME.

The overhauled policy, approved unanimously by CEO Dario Amodei and Anthropic's board, instead commits the company to matching or surpassing competitors' safety efforts and to delaying development only if Anthropic considers itself to be leading the AI race and believes catastrophic risks are significant.

The company also plans to publish detailed "Risk Reports" every three to six months and release "Frontier Safety Roadmaps" laying out future safety goals. Chris Painter, director of policy at the AI evaluation nonprofit METR, who reviewed an early draft, told TIME the shift signals that Anthropic "believes it needs to shift into triage mode with its safety plans, because methods to assess and mitigate risk are not keeping up with the pace of capabilities."
