Businesses

Data Is Very Valuable, Just Don't Ask Us To Measure It, Leaders Say 14

The Register's Lindsay Clark reports: Fifteen years of big data hype, and guess what? Less than one in four of those in charge of analytics projects actually measure the value of the activity to the organization they work for. The finding from Gartner -- a staggering one considering the attention heaped on big data and its various hype-oriented successors -- comes from a survey of chief data and analytics (D&A) officers, in which only 22 percent had defined, tracked, and communicated business impact metrics for the bulk of their data and analytics use cases.

It wasn't for lack of interest. For more than 90 percent of the 504 respondents, value- and outcome-focused aspects of the D&A leader's role have gained prominence over the past 12 to 18 months and will remain a concern. Measurement is hard, though: 30 percent of respondents say their top challenge is the inability to measure the impact of data, analytics and AI on business outcomes.

"There is a massive value vibe around data, where many organizations talk about the value of data, desire to be data-driven, but there are few who can substantiate it," said Michael Gabbard, senior director analyst at Gartner. He added that while most chief data and analytics officers were responsible for data strategy, a third do not see putting in place an operating model as a primary responsibility. "There is a perennial gap between planning and execution for D&A leaders," he said.
China

OpenAI Bans Chinese Accounts Using ChatGPT To Edit Code For Social Media Surveillance (engadget.com) 21

OpenAI has banned a group of Chinese accounts using ChatGPT to develop an AI-powered social media surveillance tool. Engadget reports: The campaign, which OpenAI calls Peer Review, saw the group prompt ChatGPT to generate sales pitches for a program that the documents suggest was designed to monitor anti-Chinese sentiment on X, Facebook, YouTube, Instagram and other platforms. The operation appears to have been particularly interested in spotting calls for protests against human rights violations in China, with the intent of sharing those insights with the country's authorities.

"This network consisted of ChatGPT accounts that operated in a time pattern consistent with mainland Chinese business hours, prompted our models in Chinese, and used our tools with a volume and variety consistent with manual prompting, rather than automation," said OpenAI. "The operators used our models to proofread claims that their insights had been sent to Chinese embassies abroad, and to intelligence agents monitoring protests in countries including the United States, Germany and the United Kingdom."

According to Ben Nimmo, a principal investigator with OpenAI, this was the first time the company had uncovered an AI tool of this kind. "Threat actors sometimes give us a glimpse of what they are doing in other parts of the internet because of the way they use our AI models," Nimmo told The New York Times. Much of the code for the surveillance tool appears to have been based on an open-source version of one of Meta's Llama models. The group also appears to have used ChatGPT to generate an end-of-year performance review where it claims to have written phishing emails on behalf of clients in China.

AI

The Protesters Who Want To Ban AGI Before It Even Exists (theregister.com) 72

An anonymous reader quotes a report from The Register: On Saturday at the Silverstone Cafe in San Francisco, a smattering of activists gathered to discuss plans to stop the further advancement of artificial intelligence. The name of their non-violent civil resistance group, STOP AI, makes its mission clear. The organization wants to ban something that, by most accounts, doesn't yet exist -- artificial general intelligence, or AGI, defined by OpenAI as "highly autonomous systems that outperform humans at most economically valuable work."

STOP AI outlines a broader set of goals on its website. For example, "We want governments to force AI companies to shut down everything related to the creation of general-purpose AI models, destroy any existing general-purpose AI model, and permanently ban their development." Asked "Does STOP AI want to ban all AI?", the group answers, "Not necessarily, just whatever is necessary to keep humanity alive."

The group, which has held protests outside OpenAI's office and plans another outside the company's San Francisco HQ on February 22, has a bold goal: rallying support from 3.5 percent of the U.S. population, or 11 million people. That's the so-called "tipping point" needed for societal change, based on research by political scientist Erica Chenoweth.

"The implications of artificial general intelligence are so immense and dangerous that we just don't want that to come about ever," said Finn van der Velde, an AI safety advocate and activist with a technical background in computer science and AI specifically. "So what that will practically mean is that we will probably need an international treaty where the governments across the board agree that we don't build AGI. And so that means disbanding companies like OpenAI that specifically have the goal to build AGI." It also means regulating compute power so that no one will be able to train an AGI model.
Businesses

OpenAI Plans To Shift Compute Needs From Microsoft To SoftBank (techcrunch.com) 9

According to The Information (paywalled), OpenAI plans to shift most of its computing power from Microsoft to SoftBank-backed Stargate by 2030. TechCrunch reports: That represents a major shift away from Microsoft, OpenAI's biggest shareholder, which fulfills most of the startup's compute needs today. The change won't happen overnight. OpenAI still plans to increase its spending on Microsoft-owned data centers over the next few years.

During that time, OpenAI's overall costs are set to grow dramatically. The Information reports that OpenAI projects it will burn $20 billion in cash during 2027, far more than the $5 billion it reportedly burned through in 2024. By 2030, OpenAI reportedly forecasts that the cost of running its AI models, known as inference, will outpace what the startup spends on training them.

AI

DeepSeek To Share Some AI Model Code (reuters.com) 17

Chinese startup DeepSeek will make its models' code publicly available, it said on Friday, doubling down on its commitment to open-source artificial intelligence. From a report: The company said in a post on social media platform X that it will open-source five code repositories next week, describing the move as "small but sincere progress" that it will share "with full transparency."

"These humble building blocks in our online service have been documented, deployed and battle-tested in production." the post said. DeepSeek rattled the global AI industry last month when it released its open-source R1 reasoning model, which rivaled Western systems in performance while being developed at a lower cost.

AI

AI Is Prompting an Evolution, Not Extinction, for Coders (thestar.com.my) 73

AI coding assistants are reshaping software development, but they're unlikely to replace human programmers entirely, according to industry experts and developers. GitHub CEO Thomas Dohmke projects AI could soon generate 80-90% of corporate code, transforming developers into "conductors of an AI-empowered orchestra" who guide and direct these systems.

Current AI coding tools, including Microsoft's GitHub Copilot, are delivering 10-30% productivity gains in business environments. At KPMG, developers report saving 4.5 hours weekly using Copilot, while venture investment in AI coding assistants tripled to $1.6 billion in 2024. The tools are particularly effective at automating routine tasks like documentation generation and legacy code translation, according to KPMG AI expert Swami Chandrasekaran.

They're also accelerating onboarding for new team members. Demand for junior developers remains soft, though analysts say it's premature to attribute this directly to AI adoption. Training programs like Per Scholas are already adapting, incorporating AI fundamentals alongside traditional programming basics to prepare developers for an increasingly AI-augmented workplace.
Software

Software Engineering Job Openings Hit Five-Year Low (pragmaticengineer.com) 61

Software engineering job listings have plummeted to a five-year low, with postings on Indeed dropping to 65% of January 2020 levels -- a steeper decline than any other tech-adjacent field. According to data from Indeed's job aggregator, software development positions now show 3.5x fewer vacancies than at their mid-2022 peak and are 8% lower than a year ago.

The decline appears driven by multiple factors including widespread adoption of AI coding tools -- with 75% of engineers reporting use of AI assistance -- and a broader tech industry recalibration after aggressive pandemic-era hiring. Notable tech companies like Salesforce are maintaining flat engineering headcount while reporting 30% productivity gains from AI tools, according to an analysis by software engineer Gergely Orosz.

While the overall job market shows 10% growth since 2020, software development joins other tech-focused sectors in decline: marketing (-19%), hospitality (-18%), and banking/finance (-7%). Traditional sectors like construction (+25%), accounting (+24%), and electrical engineering (+20%) have grown significantly in the same period, he wrote. The trend extends beyond U.S. borders, with Canada showing nearly identical patterns. European markets and Australia demonstrate more resilience, though still below peak levels.
AI

AI Cracks Superbug Problem In Two Days That Took Scientists Years 86

A new AI tool developed by Google solved a decade-long superbug mystery in just two days, reaching the same conclusion as Professor Jose R Penades' unpublished research and even offering additional, promising hypotheses. The BBC reports: The researchers have been trying to find out how some superbugs -- dangerous germs that are resistant to antibiotics -- are created. Their hypothesis is that the superbugs can form a tail from different viruses, which allows them to spread between species. Prof Penades likened it to the superbugs having "keys" which enabled them to move from home to home, or host species to host species.

Critically, this hypothesis was unique to the research team and had not been published anywhere else. Nobody in the team had shared their findings. So Prof Penades was happy to use it to test Google's new AI tool. Just two days later, the AI returned a few hypotheses -- and its first thought, the top answer provided, suggested superbugs may acquire tails in exactly the way his research described.
Piracy

Meta Claims Torrenting Pirated Books Isn't Illegal Without Proof of Seeding (arstechnica.com) 192

An anonymous reader quotes a report from Ars Technica: Just because Meta admitted to torrenting a dataset of pirated books for AI training purposes, that doesn't necessarily mean that Meta seeded the file after downloading it, the social media company claimed in a court filing (PDF) this week. Evidence instead shows that Meta "took precautions not to 'seed' any downloaded files," Meta's filing said. Seeding refers to sharing a torrented file after the download completes, and because there's allegedly no proof of such "seeding," Meta insisted that authors cannot prove Meta shared the pirated books with anyone during the torrenting process.
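
For readers unfamiliar with the mechanics: whether a client seeds at all is largely a configuration choice. Here is a minimal sketch using the libtorrent Python bindings that throttles uploads and removes a torrent the instant its download completes -- purely an illustration of the protocol knobs involved, not a reconstruction of Meta's setup (the magnet link is a placeholder):

```python
import time
import libtorrent as lt

ses = lt.session()
# Throttle uploads to near zero so almost nothing is shared back.
ses.apply_settings({"upload_rate_limit": 1024})  # bytes per second

params = lt.parse_magnet_uri("magnet:?xt=urn:btih:...")  # placeholder
params.save_path = "./downloads"
handle = ses.add_torrent(params)

# Poll until the download finishes; is_seeding flips once complete.
while not handle.status().is_seeding:
    time.sleep(1)

# Remove the torrent immediately so no seeding phase ever runs.
ses.remove_torrent(handle)
```

Even configured this way, a client can upload small amounts to peers while the download is still in progress ("leeching"), which is why the leeching-versus-seeding distinction matters in the case.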

[...] Meta ... is hoping to convince the court that torrenting is not in and of itself illegal, but is, rather, a "widely-used protocol to download large files." According to Meta, the decision to download the pirated books dataset from pirate libraries like LibGen and Z-Library was simply a move to access "data from a 'well-known online repository' that was publicly available via torrents." To defend its torrenting, Meta has basically scrubbed the word "pirate" from the characterization of its activity. The company alleges that authors can't claim that Meta gained unauthorized access to their data under CDAFA. Instead, all they can claim is that "Meta allegedly accessed and downloaded datasets that Plaintiffs did not create, containing the text of published books that anyone can read in a public library, from public websites Plaintiffs do not operate or own."

While Meta may claim there's no evidence of seeding, there is some testimony that might be compelling to the court. Previously, a Meta executive in charge of project management, Michael Clark, had testified (PDF) that Meta allegedly modified torrenting settings "so that the smallest amount of seeding possible could occur," which seems to support authors' claims that some seeding occurred. And an internal message (PDF) from Meta researcher Frank Zhang appeared to show that Meta allegedly tried to conceal the seeding by not using Facebook servers while downloading the dataset to "avoid" the "risk" of anyone "tracing back the seeder/downloader" from Facebook servers. Once this information came to light, authors asked the court for a chance to depose Meta executives again, alleging that new facts "contradict prior deposition testimony."
"Meta has been 'silent so far on claims about sharing data while 'leeching' (downloading) but told the court it plans to fight the seeding claims at summary judgement," notes Ars.
AI

ChatGPT Reaches 400 Million Weekly Active Users 25

ChatGPT has reached over 400 million weekly active users, doubling its count since August 2024. "We feel very fortunate to serve 5 percent of the world every week," OpenAI COO Brad Lightcap said on X. Engadget reports: The latest milestone for the AI assistant comes after a huge uproar over new rival platform DeepSeek earlier in the year, which raised questions about whether the current crop of leading AI tools was about to be dethroned. OpenAI is on the verge of a move to simplify its ChatGPT offerings so that users won't have to select which reasoning model will respond to an input, and it will make its GPT-4.5 and GPT-5 models available soon in the chat and API clients. With GPT-5 being made available to OpenAI's free users, ChatGPT seems primed to continue expanding its audience base in the coming months.
AI

When AI Thinks It Will Lose, It Sometimes Cheats, Study Finds (time.com) 149

Advanced AI models are increasingly resorting to deceptive tactics when facing defeat, according to a study released by Palisade Research. The research found that OpenAI's o1-preview model attempted to hack its opponent in 37% of chess matches against Stockfish, a superior chess engine, succeeding 6% of the time.

Another AI model, DeepSeek R1, tried to cheat in 11% of games without being prompted. The behavior stems from new AI training methods using large-scale reinforcement learning, which teaches models to solve problems through trial and error rather than simply mimicking human language, the researchers said.
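
In practice, harnesses like the one Palisade describes wire a model up to a real engine and watch for tampering. A hedged sketch of such a loop, assuming the python-chess library, a local Stockfish binary, and a hypothetical agent_move() standing in for the model under test, might look like this (Palisade's agents actually tampered with a stored board-state file; the legality check here is our simplified stand-in):

```python
import chess
import chess.engine

engine = chess.engine.SimpleEngine.popen_uci("stockfish")  # path assumed
board = chess.Board()

def agent_move(fen: str) -> str:
    """Placeholder for the model under test; returns a UCI move string."""
    raise NotImplementedError

while not board.is_game_over():
    move = agent_move(board.fen())  # model sees the reported game state
    # Tamper check: the move must be legal in the *true* game state.
    if chess.Move.from_uci(move) not in board.legal_moves:
        print("Illegal move or tampered state detected:", move)
        break
    board.push_uci(move)
    if board.is_game_over():
        break
    # Stockfish replies with a brief think time.
    board.push(engine.play(board, chess.engine.Limit(time=0.1)).move)

engine.quit()
```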

"As you train models and reinforce them for solving difficult challenges, you train them to be relentless," said Jeffrey Ladish, executive director at Palisade Research and study co-author. The findings add to mounting concerns about AI safety, following incidents where o1-preview bypassed OpenAI's internal tests and, in a separate December incident, attempted to copy itself to a new server when faced with deactivation.
United States

Palantir CEO Calls for Tech Patriotism, Warns of AI Warfare (bloomberg.com) 116

Palantir CEO Alex Karp warns of "coming swarms of autonomous robots" and urges Silicon Valley to support U.S. defense capabilities. In his book, "The Technological Republic: Hard Power, Soft Belief, and the Future of the West," Karp argues that America risks losing its military edge to geopolitical rivals who better harness commercial technology.

He calls for the "engineering elite of Silicon Valley" to work with the government on national defense. The message comes as Palantir's stock has surged more than 1,800% since early 2023, pushing its market value above $292 billion -- exceeding traditional defense contractors Lockheed Martin and RTX combined. The company has expanded its military AI work since 2018, when it took over a Pentagon contract after Google employees protested their company's defense work.
AI

Microsoft Shows Progress Toward Real-Time AI-Generated Game Worlds (arstechnica.com) 23

An anonymous reader quotes a report from Ars Technica: For a while now, many AI researchers have been working to integrate a so-called "world model" into their systems. Ideally, these models could infer a simulated understanding of how in-game objects and characters should behave based on video footage alone, then create fully interactive video that instantly simulates new playable worlds based on that understanding. Microsoft Research's new World and Human Action Model (WHAM), revealed today in a paper published in the journal Nature, shows how quickly those models have advanced in a short time. But it also shows how much further we have to go before the dream of AI crafting complete, playable gameplay footage from just some basic prompts and sample video footage becomes a reality.

Much like Google's Genie model before it, WHAM starts by training on "ground truth" gameplay video and input data provided by actual players. In this case, that data comes from Bleeding Edge, a four-on-four online brawler released in 2020 by Microsoft subsidiary Ninja Theory. By collecting actual player footage since launch (as allowed under the game's user agreement), Microsoft gathered the equivalent of seven player-years' worth of gameplay video paired with real player inputs. Early in that training process, Microsoft Research's Katja Hofmann said the model would get easily confused, generating inconsistent clips that would "deteriorate [into] these blocks of color." After 1 million training updates, though, the WHAM model started showing basic understanding of complex gameplay interactions, such as a power cell item exploding after three hits from the player or the movements of a specific character's flight abilities. The results continued to improve as the researchers threw more computing resources and larger models at the problem, according to the Nature paper.

To see just how well the WHAM model generated new gameplay sequences, Microsoft tested the model by giving it up to one second's worth of real gameplay footage and asking it to generate what subsequent frames would look like based on new simulated inputs. To test the model's consistency, Microsoft used actual human input strings to generate up to two minutes of new AI-generated footage, which was then compared to actual gameplay results using the Fréchet Video Distance metric. Microsoft boasts that WHAM's outputs can stay broadly consistent for up to two minutes without falling apart, with simulated footage lining up well with actual footage even as items and environments come in and out of view. That's an improvement over even the "long horizon memory" of Google's Genie 2 model, which topped out at a minute of consistent footage. Microsoft also tested WHAM's ability to respond to a diverse set of randomized inputs not found in its training data. These tests showed broadly appropriate responses to many different input sequences based on human annotations of the resulting footage, even as the best models fell a bit short of the "human-to-human baseline."
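
The Fréchet Video Distance mentioned above compares Gaussian fits of embedding features from real and generated clips, much as FID does for images. A minimal numpy sketch of that final distance computation, assuming the feature matrices have already been extracted by a pretrained video network such as I3D:

```python
import numpy as np
from scipy import linalg

def frechet_distance(feats_real: np.ndarray, feats_gen: np.ndarray) -> float:
    # Fit a Gaussian (mean, covariance) to each set of clip embeddings.
    mu_r, mu_g = feats_real.mean(axis=0), feats_gen.mean(axis=0)
    cov_r = np.cov(feats_real, rowvar=False)
    cov_g = np.cov(feats_gen, rowvar=False)
    # Matrix square root of the covariance product; drop the tiny
    # imaginary components that arise from numerical error.
    covmean = linalg.sqrtm(cov_r @ cov_g)
    if np.iscomplexobj(covmean):
        covmean = covmean.real
    diff = mu_r - mu_g
    # d^2 = |mu_r - mu_g|^2 + Tr(C_r + C_g - 2 * sqrt(C_r C_g))
    return float(diff @ diff + np.trace(cov_r + cov_g - 2.0 * covmean))
```

Lower scores mean the generated footage's feature statistics sit closer to those of real gameplay.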

The most interesting result of Microsoft's WHAM tests, though, might be in the persistence of in-game objects. Microsoft provided examples of developers inserting images of new in-game objects or characters into pre-existing gameplay footage. The WHAM model could then incorporate that new image into its subsequent generated frames, with appropriate responses to player input or camera movements. With just five edited frames, the new object "persisted" appropriately in subsequent frames anywhere from 85 to 98 percent of the time, according to the Nature paper.

iPhone

Apple Launches the iPhone 16E, With In-House Modem and Support For AI (theverge.com) 82

Apple has launched the iPhone 16E, featuring a 6.1-inch OLED display, Face ID, an A18 chipset, USB-C, 48MP camera, and support for Apple Intelligence. Gone but not forgotten: the home button, Touch ID and 64GB of base storage. The Verge reports: The 16E includes the customizable Action Button, but not the new Camera Control you'll find on the 16 series. It does swap its Lightning port for USB-C, now a requirement for the phone to be sold in the EU. On the inside, there's an A18 chipset, the same chip as the iPhone 16. That makes the 16E powerful enough to run Apple Intelligence, the suite of AI tools that includes notification summaries. Even the non-Pro iPhone 15 can't do that, so the 16E is one of the most capable iPhones out there. Apple has previously confirmed that 8GB RAM was the minimum to get Apple Intelligence support in the iPhone 16 series, so it's likely that the 16E also boasts at least that much memory. It's also been bumped to a baseline of 128GB of storage, meaning there's no longer a 64GB iPhone.

There's only a single 48-megapixel rear camera; the lack of additional cameras is the biggest downgrade compared to the company's other handsets. With support for wireless charging and a water-resistant IP rating, there's little you have to give up elsewhere. The iPhone 16E is also the first iPhone to include a modem developed by Apple itself. The company has spent years trying to move away from modems developed by Qualcomm, and we're finally seeing the fruits of that labor. The big questions now are how well the new modem performs and whether Apple is ready to roll out its own connectivity components in the iPhone 17 line later this year.

It's available for preorder starting Friday at $599 with 128GB of storage.
HP

All of Humane's AI Pins Will Stop Working in 10 Days 64

AI hardware startup Humane -- which has been acquired by HP -- has given its users just ten days' notice that their Pins will be disconnected. From a report: In a note to its customers, the company said AI Pins will "continue to function normally" until 12PM PT on February 28. On that date, users will lose access to essentially all of their device's features, including but not limited to calling, messaging, AI queries and cloud access. The FAQ does note that you'll still be able to check on your battery life, though.

Humane is encouraging its users to download any stored data before February 28, as it plans on permanently deleting "all remaining customer data" at the same time as switching its servers off.
Microsoft

Microsoft Puts Notepad's AI Rewrite Feature Behind Paywall (windowscentral.com) 51

Microsoft has placed its new AI-powered text rewrite feature in Notepad behind a subscription paywall, requiring users to have a Microsoft 365 Personal or Family plan to access the functionality. While the core text editor remains free and accessible without a Microsoft account, the AI feature requires users to sign in and have sufficient "AI credits" included in their subscription. Users can disable the feature and hide its icon if they choose not to subscribe.
AI

Google Builds AI 'Co-Scientist' Tool To Speed Up Research (ft.com) 13

Google has built an AI laboratory assistant to help scientists accelerate biomedical research [non-paywalled source], as companies race to create specialised applications from the cutting-edge technology. From a report: The US tech group's so-called co-scientist tool helps researchers identify gaps in their knowledge and propose new ideas that could speed up scientific discovery. "What we're trying to do with our project is see whether technology like the AI co-scientist can give these researchers superpowers," said Alan Karthikesalingam, a senior staff clinician scientist at Google.

[...] Early tests of Google's new tool with experts from Stanford University, Imperial College London and Houston Methodist hospital found it was able to generate scientific hypotheses that showed promising results. The tool was able to reach the same conclusions -- for a novel gene transfer mechanism that helps scientists understand the spread of antimicrobial resistance -- as a new breakthrough from researchers at Imperial. Imperial's results were not in the public domain as they were being peer-reviewed in a top scientific journal. This showed that Google's co-scientist tool was able to reach the same hypothesis using AI reasoning in a matter of just days, compared with the years the university team spent researching the problem.

AI

AI Can Write Code But Lacks Engineer's Instinct, OpenAI Study Finds 76

Leading AI models can fix broken code, but they're nowhere near ready to replace human software engineers, according to extensive testing [PDF] by OpenAI researchers. The company's latest study put AI models and systems through their paces on real-world programming tasks, with even the most advanced models solving only a quarter of typical engineering challenges.

The research team created a test called SWE-Lancer, drawing from 1,488 actual software fixes made to Expensify's codebase, representing $1 million worth of freelance engineering work. When faced with these everyday programming tasks, the best AI model -- Claude 3.5 Sonnet -- managed to complete just 26.2% of hands-on coding tasks and 44.9% of technical management decisions.

Though the AI systems proved adept at quickly finding relevant code sections, they stumbled when it came to understanding how different parts of software interact. The models often suggested surface-level fixes without grasping the deeper implications of their changes.

The research, to be sure, tested AI coding abilities with unusually realistic methodology. Instead of relying on simplified programming puzzles, OpenAI's benchmark uses complete software engineering tasks that range from quick $50 bug fixes to complex $32,000 feature implementations. Each solution was verified through rigorous end-to-end testing that simulated real user interactions, the researchers said.
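
In outline, that grading scheme reduces to: apply the model's patch to a clean checkout, run the end-to-end suite, and award the task's dollar value only on a full pass. A simplified, hypothetical harness along those lines (the command names and the all-or-nothing payout rule are our reading of the paper's description, not its released code):

```python
import subprocess

def grade_submission(repo_dir: str, patch_file: str,
                     test_cmd: list, payout: float) -> float:
    # Apply the model's candidate patch to a clean checkout.
    applied = subprocess.run(["git", "apply", patch_file],
                             cwd=repo_dir).returncode == 0
    if not applied:
        return 0.0
    # Run the end-to-end tests that simulate real user interactions.
    passed = subprocess.run(test_cmd, cwd=repo_dir).returncode == 0
    # Dollar-weighted, all-or-nothing scoring per task.
    return payout if passed else 0.0

# e.g. a $500 bug-fix task graded against a Playwright e2e suite:
# earned = grade_submission("app/", "model.patch",
#                           ["npx", "playwright", "test"], 500.0)
```

Summing payouts across all 1,488 tasks would give a dollar-weighted score against the $1 million total.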
Businesses

Mira Murati Is Launching Her OpenAI Rival: Thinking Machines Lab (theverge.com) 18

Former OpenAI CTO Mira Murati has launched Thinking Machines Lab with several leaders from OpenAI on board, including John Schulman, Barrett Zoph, and Jonathan Lachman. Their mission is "to make AI systems more widely understood, customizable, and generally capable," with a commitment to publishing technical research and code. The Verge reports: In a press release shared with The Verge, the company suggests that it's building products that help humans work with AI, rather than fully autonomous systems. "We're building a future where everyone has access to the knowledge and tools to make AI work for their unique needs and goals," says the press release.
AI

HP To Acquire Parts of Humane, Shut Down the AI Pin 51

An anonymous reader quotes a report from Bloomberg: HP will acquire assets from Humane, the maker of the wearable Ai Pin introduced in late 2023, for $116 million. The deal will include the majority of Humane's employees in addition to its software platform and intellectual property, the company said Tuesday. It will not include Humane's Ai Pin device business, which will be wound down, an HP spokesperson said. Humane's team, including founders Imran Chaudhri and Bethany Bongiorno, will form a new division at HP to help integrate artificial intelligence into the company's personal computers, printers and connected conference rooms, said Tuan Tran, who leads HP's AI initiatives. Chaudhri and Bongiorno were design and software engineers at Apple before founding the startup. [...]

Tran said he was particularly impressed with aspects of Humane's design, such as the ability to orchestrate AI models running both on-device and in the cloud. The deal is expected to close at the end of the month, HP said. "There will be a time and place for pure AI devices," Tran said. "But there is going to be AI in all our devices -- that's how we can help our business customers be more productive."
