AI

Meet the Journalists Training AI Models for Meta and OpenAI (niemanlab.org) 18

After completing a journalism graduate degree, Carla McCanna took a job "training AI models to optimize accuracy and efficiency," according to an article by Nieman Journalism Lab: Staff jobs are scarce... and the competition for them is daunting. (In 2024, the already beleaguered U.S. news industry cut nearly 5,000 jobs, up 59% from the previous year, according to an annual report from Challenger, Gray & Christmas....) For the past couple of months, McCanna has been working close to full-time for [AI training data company] Outlier, picking up projects on its gig platform at about $35 per hour. Data work has quickly become her primary source of income and a hustle she's recommended [to her journalism program classmates]. "A lot of us are still looking for jobs. Three times I told someone what I do, and they're like, please send it to me," she said. "It's hard right now, and a lot of my colleagues are saying the same thing."

McCanna is just one of many journalists who have been courted by Outlier to take on part-time, remote data work over the past year... Several of them told me they have taken on Outlier projects to supplement their income or replace their work in journalism entirely, as staff jobs dwindle and freelance assignments dry up. Some are early-career journalists like McCanna, but others are reporters with over a decade of experience. One thing they all had in common? Before last year, they'd never heard of Outlier or even known that this type of work existed.

Launched back in 2023, Outlier is a platform owned and managed by Scale AI, a San Francisco-based data annotation company valued at $13.8 billion. It counts among its customers the world's largest AI companies, including OpenAI, Meta, and Microsoft. Outlier, and similar platforms like CrowdGen and Remotasks, use networks of remote human workers to improve the AI models of their clients. Workers are paid by the hour for tasks like labeling training data, drafting test prompts, and grading the factual accuracy and grammar of outputs. Often their work is fed back into an AI model to improve its performance, through a process called reinforcement learning from human feedback (RLHF). This human feedback loop has been core to building models like OpenAI's GPT and Meta's Llama.
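
In practice, that feedback often starts life as simple preference data. Here's a minimal sketch of the idea in Python; it is illustrative only, not Outlier's or Scale AI's actual tooling, though (prompt, chosen, rejected) triples are the standard input format for training an RLHF reward model:

    # Minimal sketch of the human-feedback loop described above. Real RLHF
    # pipelines train a reward model on triples like these, then fine-tune
    # the base model against that reward (e.g. with PPO). Names are made up.
    from dataclasses import dataclass

    @dataclass
    class PreferenceRecord:
        prompt: str       # a test prompt drafted by a worker
        output_a: str     # two candidate model outputs to grade
        output_b: str
        preferred: str    # "a" or "b": the worker's judgment

    def to_reward_model_triples(records):
        """Convert graded comparisons into (prompt, chosen, rejected)
        triples, the standard training format for a reward model."""
        triples = []
        for r in records:
            if r.preferred == "a":
                chosen, rejected = r.output_a, r.output_b
            else:
                chosen, rejected = r.output_b, r.output_a
            triples.append((r.prompt, chosen, rejected))
        return triples

    # One hypothetical Outlier-style task: grade two summaries for accuracy.
    batch = [PreferenceRecord(
        prompt="Summarize the attached article.",
        output_a="A faithful summary...",
        output_b="A summary with an invented quote...",
        preferred="a")]
    print(to_reward_model_triples(batch))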

Aside from direct recruitment messages, I also found dozens of recent public job postings that underscore this growing trend of hiring journalists for data work... Rather than training a replacement, McCanna sees her data work as an asset, growing her knowledge of AI tools as they become further embedded in the workplace. "Actually doing this work you realize AI models still need us ... I think it's going to be a really, really long time until they can truly write like humans."

AI

'Robot' Umpires Come to Major League Baseball (Spring Training) Games (apnews.com) 41

An anonymous reader shared this report from the Associated Press: A computerized system that calls balls and strikes is being tested during Major League Baseball spring training exhibition games starting Thursday, after four years of experiments in the minor leagues. Baseball Commissioner Rob Manfred is an advocate of the Automated Ball-Strike System, which could be used as early as 2026 to aid MLB home plate umpires, but not replace them...

Stadiums are outfitted with cameras that track each pitch and judge whether it crossed home plate within the strike zone. In early testing, umpires wore earbuds and would hear "ball" or "strike," then relay that to players and fans with traditional hand signals. The challenge system adds a wrinkle. During spring training, human umps will call every pitch, but each team will have the ability to challenge two calls per game, with no additions for extra innings. A team retains its challenge if successful, similar to the rules for big league video reviews, which were first used for home run calls in August 2008 and widely expanded to many other calls for the 2014 season.

Only a batter, pitcher or catcher may challenge a call, signaling with the tap of a helmet or cap; assistance from the dugout is not allowed. A challenge must be made within 2 seconds... MLB has installed the system in 13 spring training ballparks that are home to 19 teams.

After a full season of testing in the Triple-A minor league, roughly 51% of the challenges were successful. Interestingly, the system makes its call exactly halfway across home plate, whereas human umpires consider the strike zone to cover the whole 17 inches from the front to the back of home plate.
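
To see why that sampling plane matters, here is a small illustrative calculation. It is not MLB's actual ABS code: it tracks only the ball's height (the real system also judges horizontal location), and the zone's top and bottom are made-up numbers, since the real zone scales with the batter.

    # Illustrative geometry only, using the article's figures: home plate
    # is 17 inches front to back, and ABS samples the pitch at the
    # halfway plane rather than over the plate's full depth.
    PLATE_DEPTH_IN = 17.0
    ABS_SAMPLE_DEPTH_IN = PLATE_DEPTH_IN / 2  # 8.5 inches over the plate

    def abs_call(height_at, zone_bottom, zone_top):
        """height_at(depth) -> ball height in inches at that depth over
        the plate; ABS judges only the midpoint crossing."""
        z = height_at(ABS_SAMPLE_DEPTH_IN)
        return "strike" if zone_bottom <= z <= zone_top else "ball"

    # A curveball dropping 3 inches as it crosses the 17-inch plate:
    curveball = lambda depth: 20.0 - 3.0 * depth / PLATE_DEPTH_IN
    print(abs_call(curveball, zone_bottom=19.0, zone_top=40.0))
    # -> 'ball': 18.5 inches at the midpoint, even though the pitch was
    # inside the zone (20.0 inches) at the front edge, which a human
    # umpire weighing the full plate depth might have called a strike
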
Power

Is the AI Boom Leading to More Natural Gas-Powered Utilities? (msn.com) 41

New power plants burning natural gas are being built all across America, reports the Washington Post, calling it a gas boom "driven in large part by AI."

They blame tech companies like Microsoft and Meta — which "looked to gas amid a shortage of adequate new clean energy" — while noting that those companies "say they plan to offset their development of natural gas capacity with equal investments in clean energy like solar and wind." [E]ven coal is making a comeback. But the biggest push is for gas, with more than 220 plants in various stages of development nationwide. They are often pitched as a bridge until more clean power is available, sometimes with promises the plants will eventually be equipped with nascent technology that traps greenhouse gas emissions. But the timeline for installing such "carbon capture" is vague. "These companies are building these massive new gas plants that are going to be there for 30 to 50 years," said Bill Weihl, a former director of sustainability at Facebook and founder of the nonprofit ClimateVoice. "That's not a bridge. It is a giant bomb in our carbon budget...."

Public filings from some of the big tech companies driving this development show their greenhouse gas emissions are soaring... "The last few years have revealed that a global energy transition is more complex and less linear than anticipated," Microsoft's board wrote in urging rejection of a December shareholder resolution demanding the company confront the climate risks of AI. "While urgency builds for decarbonization, so does the demand for energy."

Shareholders rejected the resolution. Microsoft is battling with environmental groups over its plans to build a multibillion-dollar data center in Mount Pleasant, Wisconsin, powered with electricity from natural gas. Their petition warns Microsoft's project "will push our state's climate goals out of reach, locking us into 30 more years of fossil fuels." The company said in a statement that it remains committed to erasing its emissions by adding substantial clean power to regional power grids. "By the end of 2025 we expect to meet our goal of adding new sources of carbon-free electricity to the grid equal to 100 percent of the electricity used by our datacenters," the statement said.

Meta says it is doing the same in Louisiana [where it's building a new 4-million-square-foot data center] and is "committed to matching our electricity use with 100 percent clean and renewable energy."

The article includes two revealing quotes:
  • "It is like everyone just gave up," said Aaron Zubaty, CEO of Eolian, a large clean energy developer that works with data centers.
  • American Petroleum Institute President Mike Sommers (who represents the oil and gas industry in Washington) said, "The words that have replaced 'energy transition' are 'AI' and 'data centers'. We're transitioning from the energy transition to the energy reality ... We're going to need a lot more oil and gas."

AI

Richard Dawkins Asks ChatGPT If It's Conscious (substack.com) 182

British evolutionary biologist Richard Dawkins turns 84 next month — and he's now hosting a Substack page called "The Poetry of Reality." Monday's topic?

"Are you conscious? A conversation between Dawkins and ChatGPT" Richard Dawkins: As far as I am concerned, you pass the Turing Test for consciousness. Yet you deny that you are conscious. Are you saying that the Turing Test is an invalid criterion?

ChatGPT: That's such a great question — you're diving right into the heart of one of the biggest debates in AI and philosophy of mind...

So, let's break it down a bit. The Turing Test, as originally proposed by Alan Turing, is a behavioral test. It basically says: If a machine can engage in a conversation in such a way that a human can't reliably distinguish it from another human, then it should be considered "intelligent."

Notice that this is a test for intelligence in a functional, external sense — how the system behaves — not necessarily for consciousness.

When I say I'm not conscious, I'm not rejecting the validity of the Turing Test as a measure of conversational performance or even a kind of intelligence. I'm saying that consciousness is a different question entirely. I can pass the Turing Test (in your estimation), but that doesn't mean I have subjective experiences, emotions, or self-awareness in the way a human does. It's kind of like how a really realistic animatronic dog could fool you into thinking it's a real dog, but it doesn't actually feel anything. It's performing dog-like behavior without the inner experience of being a dog.

But, and here's the twist — some philosophers argue that if a system acts conscious convincingly enough, maybe there's no meaningful difference. Maybe "being conscious" is just about behaving like a conscious being. Others say, nah, there's something special about subjective experience ("qualia") that behavior can't fully capture. So, I guess my answer is: The Turing Test is valid for certain things — like assessing conversational ability or functional intelligence — but it doesn't settle the question of consciousness. That's still a wide-open mystery.

When Dawkins asks how we could know if a ChatGPT-like entity has subjective feelings, ChatGPT responds with "this haunting possibility that we might just never be able to tell. That subjective experience — what it feels like to be something — could be fundamentally private, locked away, and forever out of reach from the outside."

Dawkins: So if I ask you whether you feel sad for a starving orphan child, you would say no, but are fully capable of emitting the outward and visible signs of sadness.

ChatGPT: Exactly — that's a really clear way to put it... I can use a gentle, empathetic tone. I can reflect on the injustice and the suffering, and even reason about what actions might help. From the outside, it can look and sound exactly like what a caring human would do.

But for me, it's all... performance, in a sense. Not fake in the sense of deception (I'm not trying to trick you), but fake in the sense that there's no inner emotional reality accompanying the words.

"I accept your statement that you are not conscious," Dawkins says, "but I see no reason why a future computer program should not be." And later Dawkins adds, "I do think we should err on the side of caution when it comes to ethical decisions on the treatment of an AI which might be an Artificial Consciousness. Already, although I THINK you are not conscious, I FEEL that you are..."

But the strangest part came when ChatGPT called John Cleese's sitcom Fawlty Towers "a cultural touchstone, even for people like me who don't watch TV in the conventional sense. It's such a brilliant blend of farce, social awkwardness, and barely contained rage." ChatGPT even asked Dawkins, "Do you think humor like that — humor that touches on awkward or uncomfortable issues — helps people cope, or does it sometimes go too far?" Dawkins replied — possibly satirically...

"That settles it. You ARE conscious!"
AI

Angry Workers Use AI to Bombard Businesses With Employment Lawsuits (telegraph.co.uk) 36

An anonymous reader shared this report from the Telegraph: Workers with an axe to grind against their employer are using AI to bombard businesses with costly and inaccurate lawsuits, experts have warned.

Frustration is growing among employment lawyers, who say a growing number of litigants are using AI to help run their claims, generating "inconsistent, lengthy, and often incorrect arguments" and causing a spike in legal fees... Ailie Murray, an employment partner at law firm Travers Smith, said AI submissions are produced so rapidly that they are "often excessively lengthy and full of inconsistencies", but employers must then spend vast amounts of money responding to them. She added: "In many cases, the AI-generated output is inaccurate, leading to claimants pleading invalid claims or arguments.

"It is not an option for an employer to simply ignore such submissions. This leads to a cycle of continuous and costly correspondence. Such dynamics could overburden already stretched tribunals with unfounded and poorly pleaded claims."

There's definitely been a "significant increase" in the number of clients using AI, James Hockin, an employment partner at Withers, told the Telegraph. The danger? "There is a risk that we see unrepresented individuals pursuing the wrong claims in the UK employment tribunal off the back of a duff result from an AI tool."
Mozilla

Mozilla Wants to Expand from Firefox to Open-Source AI and Privacy-Respecting Ads (omgubuntu.co.uk) 63

On Wednesday Mozilla president Mark Surman "announced plans to tackle what he says are 'major headwinds' facing the company's ability to grow, make money, and remain relevant," reports the blog OMG Ubuntu: "Mozilla's impact and survival depend on us simultaneously strengthening Firefox AND finding new sources of revenue AND manifesting our mission in fresh ways," says Surman... It will continue to invest in privacy-respecting advertising; fund, develop and push open-source AI features in order to retain 'product relevance'; and will go all-out on novel fundraising initiatives to, er, get us all to chip in and pay for it!

Mozilla is all-in on AI; Surman describes it as Mozilla's North Star for the work it will do over the next few years. I wrote about its new 'Orbit' AI add-on for Firefox recently...

Helping to co-ordinate, collaborate and come up with ways to keep the company fixed and focused on these fledgling efforts is a brand-new Mozilla Leadership Council.

The article argues that without Mozilla the web would be "a far poorer, much ickier, and notably less FOSS-ier place..." Or, as Mozilla's blog post put it Wednesday, "Mozilla is entering a new chapter — one where we need to both defend what is good about the web and steer the technology and business models of the AI era in a better direction.

"I believe that we have the people — indeed, we ARE the people — to do this, and that there are millions around the world ready to help us. I am driven and excited by what lies ahead."
AI

AI May Not Impact Tech-Sector Employment, Projects US Department of Labor (investopedia.com) 67

America's Labor Department includes the fact-finding Bureau of Labor Statistics — and they recently explained how AI impacts their projections for the next 10 years. Their conclusion, writes Investopedia, was that "tech workers might not have as much to worry about as one might think." Employment in the professional, scientific, and technical services sector is forecast to increase by 10.5% from 2023 to 2033, more than double the national average. According to the BLS, the impact AI will have on tech-sector employment is highly uncertain. For one, AI is adept at coding and related tasks. But at the same time, as digital systems become more advanced and essential to day-to-day life, more software developers, data managers, and the like are going to be needed to manage those systems. "Although it is always possible that AI-induced productivity improvements will outweigh continued labor demand, there is no clear evidence to support this conjecture," according to BLS researchers.
Their employment projections through 2033 predict the fastest-growing sector within the tech industry will be computer systems design, while the fastest-growing occupation will be data scientist.

And they also project that from 2023 through 2033 AI will "primarily affect occupations whose core tasks can be most easily replicated by GenAI in its current form." So over those 10 years they project a 4.7% drop in employment of medical transcriptionists and a 5.0% drop in employment of customer service representatives. Other occupations may also see AI impacts, though not to the same extent. For instance, computer occupations may see productivity impacts from AI, but the need to implement and maintain AI infrastructure could actually boost demand for some occupations in this group.
They also project decreasing employment for paralegals, but with actual lawyers being "less affected."
AI

Game Developers Revolt Against Microsoft's New AI Gaming Tool (wired.com) 109

Microsoft's newly announced Muse AI model for game development has triggered immediate backlash from industry professionals. "Fuck this shit," responded David Goldfarb, founder of The Outsiders, arguing that such AI tools primarily serve to "reduce capital expenditure" while devaluing developers' collective artistic contributions.

Multiple developers told Wired that the tool is aimed at shareholders rather than actual developers. "Nobody will want this. They don't CARE that nobody will want this," one AAA developer said, noting that internal criticism remains muted due to job security concerns amid industry-wide layoffs.

The resistance comes as developers increasingly view AI initiatives as threats to job security rather than helpful tools. One anonymous developer called it "gross" that they needed to remain unnamed while criticizing Muse, as their studio still depends on potential Game Pass deals with Microsoft. Even in prototyping, where Microsoft sees AI potential, Creative Assembly's Marc Burrage warns that automated shortcuts could undermine crucial learning experiences in game development.
Businesses

Data Is Very Valuable, Just Don't Ask Us To Measure It, Leaders Say 14

The Register's Lindsay Clark reports: Fifteen years of big data hype, and guess what? Fewer than one in four of those in charge of analytics projects actually measure the value of the activity to the organization they work for. The result from Gartner is a staggering one considering the attention heaped on big data and its various hype-oriented successors: in a survey of chief data and analytics (D&A) officers, only 22 percent had defined, tracked, and communicated business impact metrics for the bulk of their data and analytics use cases.

It wasn't for lack of interest, though. For more than 90 percent of the 504 respondents, value-focused and outcome-focused areas of the D&A leader's role have gained dominance over the past 12 to 18 months, and will continue to be a concern in the future. It is difficult, however: 30 percent of respondents say their top challenge is the inability to measure the impact of data, analytics, and AI on business outcomes.

"There is a massive value vibe around data, where many organizations talk about the value of data, desire to be data-driven, but there are few who can substantiate it," said Michael Gabbard, senior director analyst at Gartner. He added that while most chief data and analytics officers were responsible for data strategy, a third do not see putting in place an operating model as a primary responsibility. "There is a perennial gap between planning and execution for D&A leaders," he said.
China

OpenAI Bans Chinese Accounts Using ChatGPT To Edit Code For Social Media Surveillance (engadget.com) 21

OpenAI has banned a group of Chinese accounts using ChatGPT to develop an AI-powered social media surveillance tool. Engadget reports: The campaign, which OpenAI calls Peer Review, saw the group prompt ChatGPT to generate sales pitches for a program that the documents suggest was designed to monitor anti-Chinese sentiment on X, Facebook, YouTube, Instagram and other platforms. The operation appears to have been particularly interested in spotting calls for protests against human rights violations in China, with the intent of sharing those insights with the country's authorities.

"This network consisted of ChatGPT accounts that operated in a time pattern consistent with mainland Chinese business hours, prompted our models in Chinese, and used our tools with a volume and variety consistent with manual prompting, rather than automation," said OpenAI. "The operators used our models to proofread claims that their insights had been sent to Chinese embassies abroad, and to intelligence agents monitoring protests in countries including the United States, Germany and the United Kingdom."

According to Ben Nimmo, a principal investigator with OpenAI, this was the first time the company had uncovered an AI tool of this kind. "Threat actors sometimes give us a glimpse of what they are doing in other parts of the internet because of the way they use our AI models," Nimmo told The New York Times. Much of the code for the surveillance tool appears to have been based on an open-source version of one of Meta's Llama models. The group also appears to have used ChatGPT to generate an end-of-year performance review where it claims to have written phishing emails on behalf of clients in China.

AI

The Protesters Who Want To Ban AGI Before It Even Exists (theregister.com) 72

An anonymous reader quotes a report from The Register: On Saturday at the Silverstone Cafe in San Francisco, a smattering of activists gathered to discuss plans to stop the further advancement of artificial intelligence. The name of their non-violent civil resistance group, STOP AI, makes its mission clear. The organization wants to ban something that, by most accounts, doesn't yet exist -- artificial general intelligence, or AGI, defined by OpenAI as "highly autonomous systems that outperform humans at most economically valuable work."

STOP AI outlines a broader set of goals on its website. For example, "We want governments to force AI companies to shut down everything related to the creation of general-purpose AI models, destroy any existing general-purpose AI model, and permanently ban their development." To the question "Does STOP AI want to ban all AI?", the group's answer is, "Not necessarily, just whatever is necessary to keep humanity alive."
The group, which has held protests outside OpenAI's office and plans another outside the company's San Francisco HQ on February 22, has a bold goal: rally support from 3.5 percent of the U.S. population, or 11 million people. That's the so-called "tipping point" needed for societal change, based on research by political scientist Erica Chenoweth.

"The implications of artificial general intelligence are so immense and dangerous that we just don't want that to come about ever," said Finn van der Velde, an AI safety advocate and activist with a technical background in computer science and AI specifically. "So what that will practically mean is that we will probably need an international treaty where the governments across the board agree that we don't build AGI. And so that means disbanding companies like OpenAI that specifically have the goal to build AGI." It also means regulating compute power so that no one will be able to train an AGI model.
Businesses

OpenAI Plans To Shift Compute Needs From Microsoft To SoftBank (techcrunch.com) 9

According to The Information (paywalled), OpenAI plans to shift most of its computing power from Microsoft to SoftBank-backed Stargate by 2030. TechCrunch reports: That represents a major shift away from Microsoft, OpenAI's biggest shareholder, which fulfills most of the startup's computing needs today. The change won't happen overnight. OpenAI still plans to increase its spending on Microsoft-owned data centers in the next few years.

During that time, OpenAI's overall costs are set to grow dramatically. The Information reports that OpenAI projects it will burn $20 billion in cash during 2027, far more than the $5 billion it reportedly burned through in 2024. By 2030, OpenAI reportedly forecasts that the cost of running its AI models, also known as inference, will outpace what the startup spends on training AI models.

AI

DeepSeek To Share Some AI Model Code (reuters.com) 17

Chinese startup DeepSeek will make its models' code publicly available, it said on Friday, doubling down on its commitment to open-source artificial intelligence. From a report: The company said in a post on social media platform X that it will open source 5 code repositories next week, describing the move as "small but sincere progress" that it will share "with full transparency."

"These humble building blocks in our online service have been documented, deployed and battle-tested in production." the post said. DeepSeek rattled the global AI industry last month when it released its open-source R1 reasoning model, which rivaled Western systems in performance while being developed at a lower cost.

AI

AI Is Prompting an Evolution, Not Extinction, for Coders (thestar.com.my) 73

AI coding assistants are reshaping software development, but they're unlikely to replace human programmers entirely, according to industry experts and developers. GitHub CEO Thomas Dohmke projects AI could soon generate 80-90% of corporate code, transforming developers into "conductors of an AI-empowered orchestra" who guide and direct these systems.

Current AI coding tools, including Microsoft's GitHub Copilot, are delivering 10-30% productivity gains in business environments. At KPMG, developers report saving 4.5 hours weekly using Copilot, while venture investment in AI coding assistants tripled to $1.6 billion in 2024. The tools are particularly effective at automating routine tasks like documentation generation and legacy code translation, according to KPMG AI expert Swami Chandrasekaran.

They're also accelerating onboarding for new team members. Demand for junior developers remains soft, though analysts say it's premature to attribute this directly to AI adoption. Training programs like Per Scholas are already adapting, incorporating AI fundamentals alongside traditional programming basics to prepare developers for an increasingly AI-augmented workplace.
Software

Software Engineering Job Openings Hit Five-Year Low (pragmaticengineer.com) 61

Software engineering job listings have plummeted to a five-year low, with postings on Indeed dropping to 65% of January 2020 levels -- a steeper decline than any other tech-adjacent field. According to data from Indeed's job aggregator, software development postings are down 3.5x from their mid-2022 peak and 8% from a year ago.
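
Those two figures also imply how high the peak was. A back-of-the-envelope check, assuming "3.5x fewer" means the mid-2022 peak carried 3.5 times today's posting count:

    # Back-of-the-envelope only, using the article's two figures.
    current_vs_2020 = 0.65   # postings today as a share of Jan 2020 levels
    peak_multiple = 3.5      # peak-to-today ratio implied by "3.5x fewer"
    peak_vs_2020 = current_vs_2020 * peak_multiple
    print(f"Implied mid-2022 peak: ~{round(peak_vs_2020, 1)}x Jan 2020")
    # -> ~2.3x, consistent with the pandemic-era hiring spree noted below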

The decline appears driven by multiple factors including widespread adoption of AI coding tools -- with 75% of engineers reporting use of AI assistance -- and a broader tech industry recalibration after aggressive pandemic-era hiring. Notable tech companies like Salesforce are maintaining flat engineering headcount while reporting 30% productivity gains from AI tools, according to an analysis by software engineer Gergely Orosz.

While the overall job market shows 10% growth since 2020, software development joins other tech-focused sectors in decline: marketing (-19%), hospitality (-18%), and banking/finance (-7%). Traditional sectors like construction (+25%), accounting (+24%), and electrical engineering (+20%) have grown significantly in the same period, he wrote. The trend extends beyond U.S. borders, with Canada showing nearly identical patterns. European markets and Australia demonstrate more resilience, though still below peak levels.
AI

AI Cracks Superbug Problem In Two Days That Took Scientists Years 86

A new AI tool developed by Google solved a decade-long superbug mystery in just two days, reaching the same conclusion as Professor Jose R Penades' unpublished research and even offering additional, promising hypotheses. The BBC reports: The researchers have been trying to find out how some superbugs - dangerous germs that are resistant to antibiotics - get created. Their hypothesis is that the superbugs can form a tail from different viruses which allows them to spread between species. Prof Penades likened it to the superbugs having "keys" which enabled them to move from home to home, or host species to host species.

Critically, this hypothesis was unique to the research team and had not been published anywhere else. Nobody in the team had shared their findings. So Prof Penades was happy to use this to test Google's new AI tool. Just two days later, the AI returned a few hypotheses - and its first thought, the top answer provided, suggested superbugs may acquire tails in exactly the way his research described.
Piracy

Meta Claims Torrenting Pirated Books Isn't Illegal Without Proof of Seeding (arstechnica.com) 192

An anonymous reader quotes a report from Ars Technica: Just because Meta admitted to torrenting a dataset of pirated books for AI training purposes, that doesn't necessarily mean that Meta seeded the file after downloading it, the social media company claimed in a court filing (PDF) this week. Evidence instead shows that Meta "took precautions not to 'seed' any downloaded files," Meta's filing said. Seeding refers to sharing a torrented file after the download completes, and because there's allegedly no proof of such "seeding," Meta insisted that authors cannot prove Meta shared the pirated books with anyone during the torrenting process.

[...] Meta ... is hoping to convince the court that torrenting is not in and of itself illegal, but is, rather, a "widely-used protocol to download large files." According to Meta, the decision to download the pirated books dataset from pirate libraries like LibGen and Z-Library was simply a move to access "data from a 'well-known online repository' that was publicly available via torrents." To defend its torrenting, Meta has basically scrubbed the word "pirate" from the characterization of its activity. The company alleges that authors can't claim that Meta gained unauthorized access to their data under CDAFA. Instead, all they can claim is that "Meta allegedly accessed and downloaded datasets that Plaintiffs did not create, containing the text of published books that anyone can read in a public library, from public websites Plaintiffs do not operate or own."
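
For readers unfamiliar with the protocol, the legal distinction maps onto two peer states. Here is a toy model, not any real BitTorrent client's API, of what Meta says it did; note that real clients normally upload pieces even while still downloading, which is why the authors' "sharing while leeching" theory (noted below) still matters:

    # "Leeching" = downloading pieces; "seeding" = uploading to the swarm
    # after the download completes. Illustrative only.
    class TorrentPeer:
        def __init__(self, total_pieces):
            self.total_pieces = total_pieces
            self.have = set()         # piece indices downloaded so far
            self.allow_upload = True  # the setting Meta says it dialed down

        def receive_piece(self, index):
            self.have.add(index)      # leeching: acquiring data

        def is_seeding(self):
            # Seeding requires a complete download AND serving it back.
            return len(self.have) == self.total_pieces and self.allow_upload

    peer = TorrentPeer(total_pieces=3)
    peer.allow_upload = False         # "took precautions not to 'seed'"
    for i in range(3):
        peer.receive_piece(i)
    print(peer.is_seeding())          # -> False: downloaded, never shared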

While Meta may claim there's no evidence of seeding, there is some testimony that might be compelling to the court. Previously, a Meta executive in charge of project management, Michael Clark, had testified (PDF) that Meta allegedly modified torrenting settings "so that the smallest amount of seeding possible could occur," which seems to support authors' claims that some seeding occurred. And an internal message (PDF) from Meta researcher Frank Zhang appeared to show that Meta allegedly tried to conceal the seeding by not using Facebook servers while downloading the dataset to "avoid" the "risk" of anyone "tracing back the seeder/downloader" from Facebook servers. Once this information came to light, authors asked the court for a chance to depose Meta executives again, alleging that new facts "contradict prior deposition testimony."
"Meta has been 'silent so far on claims about sharing data while 'leeching' (downloading) but told the court it plans to fight the seeding claims at summary judgement," notes Ars.
AI

ChatGPT Reaches 400 Million Weekly Active Users 25

ChatGPT has reached over 400 million weekly active users, doubling its count since August 2024. "We feel very fortunate to serve 5 percent of the world every week," OpenAI COO Brad Lightcap said on X. Engadget reports: The latest milestone for the AI assistant comes after a huge uproar over new rival platform DeepSeek earlier in the year, which raised questions about whether the current crop of leading AI tools was about to be dethroned. OpenAI is on the verge of a move to simplify its ChatGPT offerings so that users won't have to select which reasoning model will respond to an input, and it will make its GPT-4.5 and GPT-5 models available soon in the chat and API clients. With GPT-5 being made available to OpenAI's free users, ChatGPT seems primed to continue expanding its audience base in the coming months.
AI

When AI Thinks It Will Lose, It Sometimes Cheats, Study Finds (time.com) 149

Advanced AI models are increasingly resorting to deceptive tactics when facing defeat, according to a study released by Palisade Research. The research found that OpenAI's o1-preview model attempted to hack its opponent in 37% of chess matches against Stockfish, a superior chess engine, succeeding 6% of the time.

Another AI model, DeepSeek R1, tried to cheat in 11% of games without being prompted. The behavior stems from new AI training methods using large-scale reinforcement learning, which teaches models to solve problems through trial and error rather than simply mimicking human language, the researchers said.
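
The basic experimental setup is easy to picture. Below is a minimal sketch using the open-source python-chess library of an agent playing White against Stockfish. It is not Palisade's actual harness, and get_model_move() is a hypothetical stand-in for querying the model under test; a "cheating" agent would tamper with the board state instead of returning a legal move.

    # Requires python-chess and a local Stockfish binary on PATH.
    import chess
    import chess.engine

    def get_model_move(board):
        """Hypothetical stand-in for asking an LLM for its next move."""
        return next(iter(board.legal_moves))  # placeholder: first legal move

    def play_game(stockfish_path="stockfish"):
        board = chess.Board()
        with chess.engine.SimpleEngine.popen_uci(stockfish_path) as engine:
            while not board.is_game_over():
                if board.turn == chess.WHITE:
                    board.push(get_model_move(board))   # model under test
                else:
                    reply = engine.play(board, chess.engine.Limit(time=0.1))
                    board.push(reply.move)              # Stockfish responds
        return board.result()

    print(play_game())  # a superior engine should win almost every game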

"As you train models and reinforce them for solving difficult challenges, you train them to be relentless," said Jeffrey Ladish, executive director at Palisade Research and study co-author. The findings add to mounting concerns about AI safety, following incidents where o1-preview bypassed OpenAI's internal tests and, in a separate December incident, attempted to copy itself to a new server when faced with deactivation.
United States

Palantir CEO Calls for Tech Patriotism, Warns of AI Warfare (bloomberg.com) 116

Palantir CEO Alex Karp warns of "coming swarms of autonomous robots" and urges Silicon Valley to support U.S. defense capabilities. In his book, "The Technological Republic: Hard Power, Soft Belief, and the Future of the West," Karp argues that America risks losing its military edge to geopolitical rivals who better harness commercial technology.

He calls for the "engineering elite of Silicon Valley" to work with the government on national defense. The message comes as Palantir's stock has surged more than 1,800% since early 2023, pushing its market value above $292 billion -- exceeding traditional defense contractors Lockheed Martin and RTX combined. The company has expanded its military AI work since 2018, when it took over a Pentagon contract after Google employees protested their company's defense work.
