IT

Lenovo Stockpiling PC Memory Due To 'Unprecedented' AI Squeeze (bloomberg.com) 19

Lenovo is stockpiling memory and other critical components to navigate a supply crunch brought on by the boom in AI. From a report: The world's biggest PC maker is holding on to component inventories that are roughly 50% higher than usual, Chief Financial Officer Winston Cheng told Bloomberg TV on Monday [non-paywalled source]. The frenzy to build and fill AI data centers with advanced hardware is raising prices for producers of consumer electronics, but Lenovo also sees an opportunity to capitalize on its stockpile.
IOS

Apple iOS 27 to Be No-Frills 'Snow Leopard' Update, Other Than New AI (bloomberg.com) 17

Apple's next major iPhone software update will prioritize stability and performance over flashy new features, according to Bloomberg's Mark Gurman, who reports that iOS 27 is being developed as a "Snow Leopard-style" release [non-paywalled source] focused on fixing bugs, removing bloat and improving underlying code after this year's sweeping Liquid Glass design overhaul in iOS 26.

Engineering teams are currently combing through Apple's operating systems to eliminate unnecessary code and address quality issues that users have reported since iOS 26's September release. Those complaints include device overheating, unexplained battery drain, user interface glitches, keyboard failures, cellular connectivity problems, app crashes, and sluggish animations.

iOS 27 won't be feature-free. Apple plans several AI additions: a health-focused AI agent tied to a Health+ subscription, expanded AI-powered web search meant to compete with ChatGPT and Perplexity, and deeper AI integration across apps. The company has also been internally testing a chatbot app called Veritas as a proving ground for its re-architected Siri, though a standalone chatbot product isn't currently planned.
Games

Ubisoft Shows Off New AI-Powered FPS And Hopes You've Forgotten About Its Failed NFTs (kotaku.com) 25

Ubisoft has revealed Teammates, a first-person shooter built around AI-powered squadmates that the company is calling its "first playable generative AI research project" -- not long after the publisher went all-in on NFTs and the metaverse only to largely move on from both. Built in the Snowdrop Engine that powers The Division 2 and Star Wars Outlaws, the game features an AI assistant named Jaspar and two AI squadmates called Pablo and Sofia. Players can issue natural voice commands to direct the squadmates in combat or puzzle-solving, while Jaspar handles mission tracking and guidance. The project comes from the same team behind Ubisoft's Neo NPCs, demonstrated at GDC 2024.
Google

How Google Finally Leapfrogged Rivals With New Gemini Rollout (msn.com) 38

An anonymous reader shares a report: With the release of its third version last week, Google's Gemini large language model surged past ChatGPT and other competitors to become the most capable AI chatbot, as determined by consensus industry-benchmark tests. [...] Aaron Levie, chief executive of the cloud content management company Box, got early access to Gemini 3 several days ahead of the launch. The company ran its own evaluations of the model over the weekend to see how well it could analyze large sets of complex documents. "At first we kind of had to squint and be like, 'OK, did we do something wrong in our eval?' because the jump was so big," he said. "But every time we tested it, it came out double-digit points ahead."

[...] Google has been scrambling to get an edge in the AI race since the launch of ChatGPT three years ago, which stoked fears among investors that the company's iconic search engine would lose significant traffic to chatbots. The company struggled for months to get traction. Chief Executive Sundar Pichai and other executives have since worked to overhaul the company's AI development strategy by breaking down internal silos, streamlining leadership and consolidating work on its models, employees say. Sergey Brin, one of Google's co-founders, resumed a day-to-day role at the company helping to oversee its AI-development efforts.

AI

How An MIT Student Awed Top Economists With His AI Study - Until It All Fell Apart (msn.com) 80

In May MIT announced "no confidence" in a preprint paper on how AI increased scientific discovery, asking arXiv to withdraw it. The paper, authored by 27-year-old grad student Aidan Toner-Rodgers, had claimed an AI-driven materials discovery tool helped 1,018 scientists at a U.S. R&D lab.

But within weeks his academic mentors "were asking an unthinkable question," reports the Wall Street Journal. Had Toner-Rodgers made it all up? Toner-Rodgers's illusory success seems in part thanks to the dynamics he has now upset: an academic culture at MIT where high levels of trust, integrity and rigor are all — for better or worse — assumed. He focused on AI, a field where peer-reviewed research is still in its infancy and the hunger for data is insatiable. What has stunned his former colleagues and mentors is the sheer breadth of his apparent deception. He didn't just tweak a few variables. It appears he invented the entire study. In the aftermath, MIT economics professors have been discussing ways to raise standards for graduate students' research papers, including scrutinizing raw data, and students are going out of their way to show their work isn't counterfeit, according to people at the school.

Since parting with the university, Toner-Rodgers has told other students that his paper's problems were essentially a mere issue with data rights. According to him, he had indeed burrowed into a trove of data from a large materials-science company, as his paper said he did. But instead of getting formal permission to use the data, he faked a data-use agreement after the company wanted to pull out, he told other students via a WhatsApp message in May... On Jan. 31, Corning filed a complaint with the World Intellectual Property Organization against the registrar of the domain name corningresearch.com. Someone who controlled that domain name could potentially create email addresses or webpages that gave the impression they were affiliated with the company. WIPO soon found that Toner-Rodgers had apparently registered the domain name, according to the organization's written decision on the case. Toner-Rodgers never responded to the complaint, and Corning won the transfer of the domain name. WIPO declined to comment...

In the WhatsApp chat in May, in which Toner-Rodgers told other students he had faked the data-use agreement, he wrote, "This was a huge and embarrassing act of dishonesty on my part, and in hindsight it clearly would've been better to just abandon the paper." Both Corning and 3M told the Journal that they didn't roll out the experiment Toner-Rodgers described, and that they didn't share data with him.

AI

'We Could've Asked ChatGPT': UK Students Fight Back Over Course Taught By AI (theguardian.com) 55

An anonymous reader shared this report from the Guardian: James and Owen were among 41 students who took a coding module at the University of Staffordshire last year, hoping to change careers through a government-funded apprenticeship programme designed to help them become cybersecurity experts or software engineers. But after a term of AI-generated slides being read, at times, by an AI voiceover, James said he had lost faith in the programme and the people running it, worrying he had "used up two years" of his life on a course that had been done "in the cheapest way possible".

"If we handed in stuff that was AI-generated, we would be kicked out of the uni, but we're being taught by an AI," said James during a confrontation with his lecturer recorded as a part of the course in October 2024. James and other students confronted university officials multiple times about the AI materials. But the university appears to still be using AI-generated materials to teach the course. This year, the university uploaded a policy statement to the course website appearing to justify the use of AI, laying out "a framework for academic professionals leveraging AI automation" in scholarly work and teaching...

For students, AI teaching appears to be less transformative than it is demoralising. In the US, students post negative online reviews about professors who use AI. In the UK, undergraduates have taken to Reddit to complain about their lecturers copying and pasting feedback from ChatGPT or using AI-generated images in courses.

"I feel like a bit of my life was stolen," James told the Guardian (which also quotes an unidentified student saying they felt "robbed of knowledge and enjoyment".) But the article also points out that a survey last year of 3,287 higher-education teaching staff by edtech firm Jisc found that nearly a quarter were using AI tools in their teaching.
Mozilla

Mozilla Announces 'TABS API' For Developers Building AI Agents (omgubuntu.co.uk) 10

"Fresh from announcing it is building an AI browsing mode in Firefox and laying the groundwork for agentic interactions in the Firefox 145 release, the corp arm of Mozilla is now flexing its AI muscles in the direction of those more likely to care," writes the blog OMG Ubuntu: If you're a developer building AI agents, you can sign up to get early access to Mozilla's TABS API, a "powerful web content extraction and transformation toolkit designed specifically for AI agent builders"... The TABS API enables devs to create agents to automate web interactions, like clicking, scrolling, searching, and submitting forms "just like a human". Real-time feedback and adaptive behaviours will, Mozilla say, offer "full control of the web, without the complexity."

As TABS is not powered by a Mozilla-backed LLM you'll need to connect it to your choice of third-party LLM for any relevant processing... Developers get 1,000 requests monthly on the free tier, which seems reasonable for prototyping personal projects. Complex agentic workloads may require more. Though pricing is yet to be locked in, the TABS API website suggests it'll cost ~$5 per 1000 requests. Paid plans will offer additional features too, like lower latency and, somewhat ironically, CAPTCHA solving so AI can 'prove' it's not a robot on pages gated to prevent automated activities.

Google, OpenAI, and other major AI vendors offer their own agentic APIs. Mozilla is pitching up late, but it plans to play differently. It touts a "strong focus on data minimisation and security", with scraped data treated ephemerally — i.e., not kept. As a distinction, that matters. AI agents can be given complex online tasks that involve all sorts of personal or sensitive data being fetched and worked with.... If you're minded to make one, perhaps without a motivation to asset-strip the common good, Mozilla's TABS API looks like a solid place to start.

Programming

Microsoft and GitHub Preview New Tool That Identifies, Prioritizes, and Fixes Vulnerabilities With AI (thenewstack.io) 18

"Security, development, and AI now move as one," says Microsoft's director of cloud/AI security product marketing.

Microsoft and GitHub "have launched a native integration between Microsoft Defender for Cloud and GitHub Advanced Security that aims to address what one executive calls decades of accumulated security debt in enterprise codebases..." according to The New Stack: The integration, announced this week in San Francisco at the Microsoft Ignite 2025 conference and now available in public preview, connects runtime intelligence from production environments directly into developer workflows. The goal is to help organizations prioritize which vulnerabilities actually matter and use AI to fix them faster. "Throughout my career, I've seen vulnerability trends going up and to the right. It didn't matter how good of a detection engine and how accurate our detection engine was, people just couldn't fix things fast enough," said Marcelo Oliveira, VP of product management at GitHub, who has spent nearly a decade in application security. "That basically resulted in decades of accumulation of security debt into enterprise code bases." According to industry data, critical and high-severity vulnerabilities constitute 17.4% of security backlogs, with a mean time to remediation of 116 days, said Andrew Flick, senior director of developer services, languages and tools at Microsoft, in a blog post. Meanwhile, applications face attacks as frequently as once every three minutes, Oliveira said.

The integration represents the first native link between runtime intelligence and developer workflows, said Elif Algedik, director of product marketing for cloud and AI security at Microsoft, in a blog post... The problem, according to Flick, comes down to three challenges: security teams drowning in alert fatigue while AI rapidly introduces new threat vectors that they have little time to understand; developers lacking clear prioritization while remediation takes too long; and both teams relying on separate, nonintegrated tools that make collaboration slow and frustrating... The new integration works bidirectionally. When Defender for Cloud detects a vulnerability in a running workload, that runtime context flows into GitHub, showing developers whether the vulnerability is internet-facing, handling sensitive data or actually exposed in production. This is powered by what GitHub calls the Virtual Registry, which creates code-to-runtime mapping, Flick said...

In the past, this alert would age in a dashboard while developers worked on unrelated fixes because they didn't know this was the critical one, he said. Now, a security campaign can be created in GitHub, filtering for runtime risk like internet exposure or sensitive data, notifying the developer to prioritize this issue.

GitHub Copilot "now automatically checks dependencies, scans for first-party code vulnerabilities and catches hardcoded secrets before code reaches developers," the article points out — but GitHub's VP of product management says this takes things even further.

"We're not only helping you fix existing vulnerabilities, we're also reducing the number of vulnerabilities that come into the system when the level of throughput of new code being created is increasing dramatically with all these agentic coding agent platforms."
The Internet

How the Internet Rewired Work - and What That Tells Us About AI's Likely Impact (msn.com) 105

"The internet did transform work — but not the way 1998 thought..." argues the Wall Street Journal. "The internet slipped inside almost every job and rewired how work got done."

So while the number of single-task jobs like travel agent dropped, most jobs "are bundles of judgment, coordination and hands-on work," and instead the internet brought "the quiet transformation of nearly every job in the economy... Today, just 10% of workers make minimal use of the internet on the job — roles like butcher and carpet installer." [T]he bigger story has been additive. In 1998, few could conceive of social media — let alone 65,000 social-media managers — and 200,000 information-security analysts would have sounded absurd when data still lived on floppy disks... Marketing shifted from campaign bursts to always-on funnels and A/B testing. Clinics embedded e-prescribing and patient portals, reshaping front-office and clinical handoffs. The steps, owners and metrics shifted. Only then did the backbone scale: We went from server closets wedged next to the mop sink to data centers and cloud regions, from lone system administrators to fulfillment networks, cybersecurity and compliance.

That is where many unexpected jobs appeared. Networked machines and web-enabled software quietly transformed back offices as much as our on-screen lives. Similarly, as e-commerce took off, internet-enabled logistics rewired planning roles — logisticians, transportation and distribution managers — and unlocked a surge in last-mile work. The build-out didn't just hire coders; it hired coordinators, pickers, packers and drivers. It spawned hundreds of thousands of warehouse and delivery jobs — the largest pockets of internet-driven job growth, and yet few had them on their 1998 bingo card... Today, the share of workers in professional and managerial occupations has more than doubled since the dawn of the digital era.

So what does that tell us about AI? Our mental model often defaults to an industrial image — John Henry versus the steam drill — where jobs are one dominant task, and automation maps one-to-one: Automate the task, eliminate the job. The internet revealed a different reality: Modern roles are bundles. Technologies typically hit routine tasks first, then workflows, and only later reshape jobs, with second-order hiring around the backbone. That complexity is what made disruption slower and more subtle than anyone predicted. AI fits that pattern more than it breaks it... [LLMs] can draft briefs, summarize medical notes and answer queries. Those are tasks — important ones — but still parts of larger roles. They don't manage risk, hold accountability, reassure anxious clients or integrate messy context across teams. Expect a rebalanced division of labor: The technical layer gets faster and cheaper; the human layer shifts toward supervision, coordination, complex judgment, relationship work and exception handling.

What to expect from AI, then, is messy, uneven reshuffling in stages. Some roles will contract sharply — and those contractions will affect real people. But many occupations will be rewired in quieter ways. Productivity gains will unlock new demand and create work that didn't exist, alongside a build-out around data, safety, compliance and infrastructure.

AI is unprecedented; so was the internet. The real risk is timing: overestimating job losses, underestimating the long, quiet rewiring already under way, and overlooking the jobs created in the backbone. That was the internet's lesson. It's likely to be AI's as well.

Windows

Microsoft Warns Its Windows AI Feature Brings Data Theft and Malware Risks, and 'Occasionally May Hallucinate' (itsfoss.com) 65

"Copilot Actions on Windows 11" is currently available in Insider builds (version 26220.7262) as part of Copilot Labs, according to a recent report, "and is off by default, requiring admin access to set it up."

But maybe it's off for a good reason... besides the fact that it can access any apps installed on your system: In a support document, Microsoft admits that features like Copilot Actions introduce "novel security risks." They warn about cross-prompt injection (XPIA), where malicious content in documents or UI elements can override the AI's instructions. The result? "Unintended actions like data exfiltration or malware installation."

Yeah, you read that right. Microsoft is shipping a feature that could be tricked into installing malware on your system. Microsoft's own warning hits hard: "We recommend that you only enable this feature if you understand the security implications." When you try to enable these experimental features, Windows shows you a warning dialog that you have to acknowledge. ["This feature is still being tested and may impact the performance or security of your device."]

Even with these warnings, the level of access Copilot Actions demands is concerning. When you enable the feature, it gets read and write access to your Documents, Downloads, Desktop, Pictures, Videos, and Music folders... Microsoft says they are implementing safeguards. All actions are logged, users must approve data access requests, the feature operates in isolated workspaces, and the system uses audit logs to track activity.

But you are still giving an AI system that can "hallucinate and produce unexpected outputs" (Microsoft's words, not mine) full access to your personal files.

To address this, Ars Technica notes, Microsoft added this helpful warning to its support document this week. "As these capabilities are introduced, AI models still face functional limitations in terms of how they behave and occasionally may hallucinate and produce unexpected outputs."

But Microsoft didn't describe "what actions they should take to prevent their devices from being compromised. I asked Microsoft to provide these details, and the company declined..."
Programming

Amazon's AI-Powered IDE Kiro Helps Vibe Coders with 'Spec Mode' (geekwire.com) 20

A promotional video for Amazon's Kiro software development system took a unique approach, writes GeekWire. "Instead of product diagrams or keynote slides, a crew from Seattle's Packrat creative studio used action figures on a miniature set to create a stop-motion sequence..."

"Can the software development hero conquer the 'AI Slop Monster' to uncover the gleaming, fully functional robot buried beneath the coding chaos?" Kiro (pronounced KEE-ro) is Amazon's effort to rethink how developers use AI. It's an integrated development environment that attempts to tame the wild world of vibe coding... But rather than simply generating code from prompts [in "vibe mode"], Kiro breaks down requests into formal specifications, design documents, and task lists [in "spec mode"]. This spec-driven development approach aims to solve a fundamental problem with vibe coding: AI can quickly generate prototypes, but without structure or documentation, that code becomes unmaintainable...

The market for AI-powered development tools is booming. Gartner expects AI code assistants to become ubiquitous, forecasting that 90% of enterprise software engineers will use them by 2028, up from less than 14% in early 2024... Amazon launched Kiro in preview in July, to a strong response. Positive early reviews were tempered by frustration from users unable to gain access. Capacity constraints have since been resolved, and Amazon says more than 250,000 developers used Kiro in the first three months...

Now, the company is taking Kiro out of preview into general availability, rolling out new features and opening the tool more broadly to development teams and companies... During the preview period, Kiro handled more than 300 million requests and processed trillions of tokens as developers explored its capabilities, according to stats provided by the company. Rackspace used Kiro to complete what they estimated as 52 weeks of software modernization in three weeks, according to Amazon executives. SmugMug and Flickr are among other companies espousing the virtues of Kiro's spec-driven development approach. Early users are posting in glowing terms about the efficiencies they're seeing from adopting the tool... startups in most countries can apply for up to 100 free Pro+ seats for a year's worth of Kiro credits.

Kiro offers property-based testing "to verify that generated code actually does what developers specified," according to the article — plus a checkpointing system that "lets developers roll back changes or retrace an agent's steps when an idea goes sideways..."

"And yes, they've been using Kiro to build Kiro, which has allowed them to move much faster."
Facebook

Meta Plans New AI-Powered 'Morning Brief' Drawn From Facebook and 'External Sources' (msn.com) 14

Meta "is testing a new product that would give Facebook users a personalized daily briefing powered by the company's generative AI technology," reports the Washington Post. They cite records they've reviewed showing that Meta "would analyze Facebook content and external sources to push custom updates to its users." The company plans to test the product with a small group of Facebook users in select cities such as New York and San Francisco, according to a person familiar with the project who spoke on the condition of anonymity to discuss private company matters...

Meta's foray into pushing updates for consumers follows years of controversy over its relationship with publishers. The tech company has waffled between prominently featuring content from mainstream news sources on Facebook to pulling news links altogether as regulators pushed the tech giant to pay publishers for content on its platforms. More recently, publishers have sued Meta, alleging it infringed on their copyrighted works to train its AI models.

AI

Analyzing 47,000 ChatGPT Conversations Shows Echo Chambers, Sensitive Data - and Unpredictable Medical Advice (yahoo.com) 33

For nearly three years OpenAI has touted ChatGPT as a "revolutionary" (and work-transforming) productivity tool, reports the Washington Post.

But after analyzing 47,000 ChatGPT conversations, the Post found that users "are overwhelmingly turning to the chatbot for advice and companionship, not productivity tasks." The Post analyzed a collection of thousands of publicly shared ChatGPT conversations from June 2024 to August 2025. While ChatGPT conversations are private by default, the conversations analyzed were made public by users who created shareable links to their chats that were later preserved in the Internet Archive and downloaded by The Post. It is possible that some people didn't know their conversations would become publicly preserved online. This unique data gives us a glimpse into an otherwise black box...

Overall, about 10 percent of the chats appeared to show people talking about their emotions, role-playing, or seeking social interactions with the chatbot. Some users shared highly private and sensitive information with the chatbot, such as information about their family in the course of seeking legal advice. People also sent ChatGPT hundreds of unique email addresses and dozens of phone numbers in the conversations... Lee Rainie, director of the Imagining the Digital Future Center at Elon University, said that it appears ChatGPT "is trained to further or deepen the relationship." In some of the conversations analyzed, the chatbot matched users' viewpoints and created a personalized echo chamber, sometimes endorsing falsehoods and conspiracy theories.

Four of ChatGPT's answers about health problems got a failing score from a chair of medicine at the University of California San Francisco, the Post points out. But four other answers earned a perfect score.
Advertising

Google Starts Testing Ads In AI Mode 13

Google has begun testing sponsored ads inside its Gemini-powered AI Mode, placing labeled "sponsored" links at the bottom of AI-generated responses. Engadget reports: [A] Google spokesperson says the result shown is akin to similar tests it's been running this year. "People seeing ads in AI Mode in the wild is simply part of Google's ongoing tests, which we've been running for several months," the spokesperson said. The push to start offering ads in AI Mode was announced in May. The company also told 9to5Google that there are no current plans to fully update AI Mode to incorporate ads. For now, the software seems to be prioritizing organic links over sponsored links, but we all know how insidious ads can be once the floodgates open...
AI

Malaysia's Palm Oil Estates Are Turning Into Data Centers 17

An anonymous reader quotes a report from Bloomberg: Malaysia's palm oil giants, long blamed for razing rainforests, fueling toxic haze and driving orangutans to the brink of extinction, are recasting themselves as unlikely champions in a different, potentially greener race: the quest to lure the world's AI data centers to the Southeast Asian country (source paywalled; alternative source). Palm oil companies are earmarking some of the vast tracts of land they own for industrial parks studded with data centers and solar panels, the latter meant to feed the insatiable energy appetites of the former. The logic is simple: data centers are power and land hogs. By 2035, they could demand at least five gigawatts of electricity in Malaysia -- almost 20% of the country's current generation capacity and roughly enough to power a major city like Miami. Malaysia also needs space to house server farms, and palm oil giants control more land than any other private entity in the country.

The country has been at the heart of a regional data center boom. Last year, it was the fastest-growing data center market in the Asia-Pacific region and roughly 40% of all planned capacity in Southeast Asia is now slated for Malaysia, according to industry consultant DC Byte. Over the past four years, $34 billion in data center investments has poured into the country -- Alphabet's Google committed $2 billion, Microsoft announced a $2.2 billion investment and Amazon is spending $6.2 billion, to name a few. The government aims for 81 data centers by 2035. The rush is partly a spillover from Singapore, where a years-long moratorium on new centers forced operators to look north. Johor, just across the causeway, is now a hive of construction cranes and server farms -- including for firms such as Singapore Telecommunications, Nvidia and ByteDance. But delivering on government promises of renewable power is proving harder.

The strains are already being felt in Malaysia's data center capital. Sedenak Tech Park, one of Johor's flagship sites, is telling potential tenants they'll need to wait until the fourth quarter of 2026 for promised water and power hookups under its second-phase expansion, according to DC Byte. The vacancy rate in Johor's live facilities is just 1.1%, according to real estate consultant Knight Frank. Despite its rapid growth, the market is nowhere near saturation, with six gigawatts of capacity expected to be built out over time, said Knight Frank's head of data centers for Asia Pacific, Fred Fitzalan Howard. That potential bottleneck has incentivized palm oil majors such as SD Guthrie Bhd. to pitch themselves as both landowners and green-power suppliers.
The $8.9 billion palm oil producer, SD Guthrie, is the world's largest palm oil planter by acreage, with more than 340,000 hectares in Malaysia. "SD Guthrie is pivoting to solar farms and industrial parks, betting that tech giants hungry for server space will prefer sites with ready access to renewable energy," reports Bloomberg. "The company has reserved 10,000 hectares for such projects over the next decade, starting with clearing old rubber estates and low-yielding palm plots in areas near data center and semiconductor investment hubs."

"The company's calculation is based on this: one megawatt of solar requires about 1.5 hectares. Helmy said SD Guthrie wants one gigawatt in operation within three years, enough to power up to 10 hyperscale data centers used for AI computing. The new business is expected to make up about a third of its profits by the end of the decade."
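Bloomberg's figures imply a simple land budget. A back-of-envelope check using only the numbers quoted above (the per-data-center share is my own division, not a figure from the report):

```python
HA_PER_MW = 1.5                    # quoted: ~1.5 hectares per MW of solar

# Land needed for the three-year goal of 1 GW in operation
target_mw = 1000
land_needed_ha = target_mw * HA_PER_MW
print(land_needed_ha)              # 1500.0 hectares -- well under the
                                   # 10,000 ha reserved for such projects

# If 1 GW powers "up to 10 hyperscale data centers", each draws roughly:
print(target_mw / 10)              # 100.0 MW per data center
```

By this arithmetic the 10,000 reserved hectares could host several gigawatts of solar beyond the initial 1 GW goal, leaving room for the industrial parks themselves.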
Google

Google Must Double AI Serving Capacity Every 6 Months To Meet Demand 57

Google's AI infrastructure chief told employees the company must double its AI serving capacity every six months to meet demand. In a presentation earlier this month titled "AI Infrastructure," Amin Vahdat, a vice president at Google Cloud, included a slide on "AI compute demand" that said: "Now we must double every 6 months.... the next 1000x in 4-5 years." CNBC reports: The presentation was delivered a week after Alphabet reported better-than-expected third-quarter results and raised its capital expenditures forecast for the second time this year, to a range of $91 billion to $93 billion, followed by a "significant increase" in 2026. Hyperscaler peers Microsoft, Amazon and Meta also boosted their capex guidance, and the four companies now expect to collectively spend more than $380 billion this year.
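The slide's two figures are mutually consistent: doubling every six months is two doublings per year, and a 1000x increase takes about ten doublings. A quick sanity check (my arithmetic, not from the report):

```python
import math

# "Double every 6 months" = 2 doublings per year.
# How long until capacity has grown 1000x?
doublings_needed = math.log2(1000)      # doublings for a 1000x increase
years_needed = doublings_needed / 2     # at 2 doublings per year

print(round(doublings_needed, 2))  # 9.97
print(round(years_needed, 1))      # 5.0
```

So "the next 1000x in 4-5 years" is exactly what a six-month doubling cadence delivers.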

Google's "job is of course to build this infrastructure but it's not to outspend the competition, necessarily," Vahdat said. "We're going to spend a lot," he said, adding that the real goal is to provide infrastructure that is far "more reliable, more performant and more scalable than what's available anywhere else." In addition to infrastructure build-outs, Vahdat said Google bolsters capacity with more efficient models and through its custom silicon. Last week, Google announced the public launch of its seventh generation Tensor Processing Unit called Ironwood, which the company says is nearly 30 times more power efficient than its first Cloud TPU from 2018.

Vahdat said the company has a big advantage with DeepMind, which has research on what AI models can look like in future years. Google needs to "be able to deliver 1,000 times more capability, compute, storage networking for essentially the same cost and increasingly, the same power, the same energy level," Vahdat said. "It won't be easy but through collaboration and co-design, we're going to get there."
China

Tech Company CTO and Others Indicted For Exporting Nvidia Chips To China (arstechnica.com) 11

An anonymous reader quotes a report from Ars Technica: The US crackdown on chip exports to China has continued with the arrests of four people accused of a conspiracy to illegally export Nvidia chips. Two US citizens and two nationals of the People's Republic of China (PRC), all of whom live in the US, were charged in an indictment (PDF) unsealed on Wednesday in US District Court for the Middle District of Florida. The indictment alleges a scheme to send Nvidia "GPUs to China by falsifying paperwork, creating fake contracts, and misleading US authorities," John Eisenberg, assistant attorney general for the Justice Department's National Security Division, said in a press release yesterday.

The four arrestees are Hon Ning Ho (aka Mathew Ho), a US citizen who was born in Hong Kong and lives in Tampa, Florida; Brian Curtis Raymond, a US citizen who lives in Huntsville, Alabama; Cham Li (aka Tony Li), a PRC national who lives in San Leandro, California; and Jing Chen (aka Harry Chen), a PRC national who lives in Tampa on an F-1 non-immigrant student visa. The suspects face a raft of charges for conspiracy to violate the Export Control Reform Act of 2018, smuggling, and money laundering. If convicted and given the maximum sentences, they could serve decades in prison and be forced to forfeit their financial gains. The indictment says that Chinese companies paid the conspirators nearly $3.9 million.
One of the suspects was briefly the CTO of Corvex, a Virginia-based AI cloud computing company that is planning to go public. Corvex told CNBC yesterday that it "had no part in the activities cited in the Department of Justice's indictment," and that "the person in question is not an employee of Corvex. Previously a consultant to the company, he was transitioning into an employee role but that offer has been rescinded."
AI

AI Nutrition Tracking Stinks (theverge.com) 33

AI nutrition tracking features in popular fitness apps are producing wildly inaccurate calorie and macro counts despite promises to simplify food logging through automated photo analysis. The Verge tested AI-powered nutrition tools in Ladder, Oura Advisor, January and MyFitnessPal. Ladder's AI estimated the outlet's carefully measured 355-calorie breakfast at 780 calories and got the macro breakdown wrong even after the reviewer manually edited entries to include exact brands and amounts.

Oura Advisor routinely mistook matcha protein shakes for green smoothies. January misidentified barbecue sauce as teriyaki sauce and failed to detect mushrooms in a chicken dish. None of the apps could identify healthier ingredient swaps or accurately log ethnic foods. Oura classified a mix of edamame, quinoa and brown rice as mashed potatoes and white rice. Ladder logged dal makhani curry as chicken soup. The AI features require extensive manual corrections that negate any time savings from automated logging, the publication concluded in its scathing review.
Power

Meta Enters Power Trading To Support Its AI Energy Needs (bloomberg.com) 12

Meta is venturing into the complex world of electricity trading, betting it can accelerate the construction of new US power plants that are vital to its AI ambitions. From a report: The foray into power trading comes after Meta heard from investors and plant developers that too few power buyers were willing to make the early, long-term commitments required to spur investment, according to Urvi Parekh, the company's head of global energy. Trading electricity will give the company the flexibility to enter more of those longer contracts.

Plant developers "want to know that the consumers of power are willing to put skin in the game," Parekh said in an interview. "Without Meta taking a more active voice in the need to expand the amount of power that's on the system, it's not happening as quickly as we would like."

Microsoft

Microsoft's AI-Powered Copy and Paste Can Now Use On-Device AI (theverge.com) 45

An anonymous reader shares a report: Microsoft is upgrading its Advanced Paste tool in PowerToys for Windows 11, allowing you to use an on-device AI model to power some of its features. With the 0.96 update, you can route requests through Microsoft's Foundry Local tool or the open-source Ollama, both of which run AI models on your device's neural processing unit (NPU) instead of connecting to the cloud.

That means you won't need to purchase API credits to perform certain actions, like having AI translate or summarize the text copied to your clipboard. Plus, you can keep your data on your device.
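Ollama exposes its local runtime over an HTTP API on the machine's loopback interface, which is what makes this kind of on-device routing possible. Below is a minimal sketch of such a local request; the model name `llama3.2` and the clipboard text are illustrative assumptions, not anything PowerToys itself uses:

```python
import json
import urllib.request

# Ollama listens on localhost:11434 by default, so the clipboard text
# never leaves the machine. "llama3.2" is a placeholder; use any model
# you have pulled locally with `ollama pull`.
payload = {
    "model": "llama3.2",
    "prompt": "Summarize this clipboard text:\n\nMeeting moved to 3pm Friday.",
    "stream": False,  # return one JSON object instead of a token stream
}

req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

try:
    with urllib.request.urlopen(req, timeout=60) as resp:
        # The non-streaming response carries the model output in "response".
        print(json.load(resp)["response"])
except OSError:
    print("Ollama is not running locally; start it with `ollama serve`.")
```

The tradeoff is the usual one for local inference: no per-request API cost and no data leaving the device, in exchange for whatever latency your NPU or GPU can manage on the chosen model.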
