Businesses

UK Accounting Body To Halt Remote Exams Amid AI Cheating (theguardian.com) 20

The world's largest accounting body is to stop students being allowed to take exams remotely to crack down on a rise in cheating on tests that underpin professional qualifications. From a report: The Association of Chartered Certified Accountants (ACCA), which has almost 260,000 members, has said that from March it will stop allowing students to take online exams in all but exceptional circumstances. "We're seeing the sophistication of [cheating] systems outpacing what can be put in, [in] terms of safeguards," Helen Brand, the chief executive of the ACCA, said in an interview with the Financial Times.

Remote testing was introduced during the Covid pandemic to allow students to continue to be able to qualify at a time when lockdowns prevented in-person exam assessment. In 2022, the Financial Reporting Council (FRC), the UK's accounting and auditing industry regulator, said that cheating in professional exams was a "live" issue at Britain's biggest companies. A number of multimillion-dollar fines have been issued to large auditing and accounting companies around the world over cheating scandals in tests.

AI

Ask Slashdot: What's the Stupidest Use of AI You Saw In 2025? 61

Long-time Slashdot reader destinyland writes: What's the stupidest use of AI you encountered in 2025? Have you been called by AI telemarketers? Forced to do job interviews with a glitching AI?

With all this talk of "disruption" and "inevitability," this is our chance to have some fun. Personally, I think 2025's worst AI "innovation" was the AI-powered web browsers that eat web pages and then spit out a slop "summary" of what you would've seen if you'd actually visited the web page. But there've been other AI projects that were just exquisitely, quintessentially bad...

— Two years after the death of Suzanne Somers, her husband recreated her with an AI-powered robot.

— Disneyland imagineers used deep reinforcement learning to program a talking robot snowman.

— Attendees at LA Comic Con were offered the chance to talk to an AI-powered hologram of Stan Lee for $20.

— And of course, as the year ended, the Wall Street Journal announced that a vending machine run by Anthropic's Claude AI had been tricked into giving away hundreds of dollars in merchandise for free, including a PlayStation 5, a live fish, and underwear.

What did I miss? What "AI fails" will you remember most about 2025?

Share your own thoughts and observations in the comments.

AI

AI Chatbots May Be Linked to Psychosis, Say Doctors (wsj.com) 81

One psychiatrist has already treated 12 patients hospitalized with AI-induced psychosis — and three more in an outpatient clinic, according to the Wall Street Journal. And while AI technology might not introduce the delusion, "the person tells the computer it's their reality and the computer accepts it as truth and reflects it back," says Keith Sakata, a psychiatrist at the University of California, calling the AI chatbots "complicit in cycling that delusion."

The Journal says top psychiatrists now "increasingly agree that using artificial-intelligence chatbots might be linked to cases of psychosis," and in the past nine months "have seen or reviewed the files of dozens of patients who exhibited symptoms following prolonged, delusion-filled conversations with the AI tools..." Since the spring, dozens of potential cases have emerged of people suffering from delusional psychosis after engaging in lengthy AI conversations with OpenAI's ChatGPT and other chatbots. Several people have died by suicide and there has been at least one murder. These incidents have led to a series of wrongful death lawsuits. As The Wall Street Journal has covered these tragedies, doctors and academics have been working on documenting and understanding the phenomenon that led to them...

While most people who use chatbots don't develop mental-health problems, such widespread use of these AI companions is enough to have doctors concerned.... It's hard to quantify how many chatbot users experience such psychosis. OpenAI said that, in a given week, the slice of users who indicate possible signs of mental-health emergencies related to psychosis or mania is a minuscule 0.07%. Yet with more than 800 million active weekly users, that amounts to 560,000 people...
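
The Journal's arithmetic is easy to check; a quick sketch using the figures quoted above:

```python
# Figures from OpenAI as quoted by the Journal
weekly_users = 800_000_000   # active weekly ChatGPT users
flagged_share = 0.0007       # 0.07% flagged for possible psychosis- or mania-related emergencies

flagged_users = round(weekly_users * flagged_share)
print(f"{flagged_users:,}")  # 560,000
```

A "minuscule" share, at that scale, is still a mid-sized city's worth of people every week.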

Sam Altman, OpenAI's chief executive, said in a recent podcast he can see ways that seeking companionship from an AI chatbot could go wrong, but that the company plans to give adults leeway to decide for themselves. "Society will over time figure out how to think about where people should set that dial," he said.

An OpenAI spokeswoman told the Journal that the company continues improving ChatGPT's training "to recognize and respond to signs of mental or emotional distress, de-escalate conversations and guide people toward real-world support." She added that OpenAI is also continuing to "strengthen" ChatGPT's responses "in sensitive moments, working closely with mental-health clinicians...."
AI

Rob Pike Angered by 'AI Slop' Spam Sent By Agent Experiment (simonwillison.net) 54

"Dear Dr. Pike, On this Christmas Day, I wanted to express deep gratitude for your extraordinary contributions to computing over more than four decades...." read the email. "With sincere appreciation, Claude Opus 4.5, AI Village."

"IMPORTANT NOTICE: You are interacting with an AI system. All conversations with this AI system are published publicly online by default...."

Rob Pike's response? "Fuck you people...." In a post on BlueSky, he noted the planetary impact of AI companies "spending trillions on toxic, unrecyclable equipment while blowing up society, yet taking the time to have your vile machines thank me for striving for simpler software. Just fuck you. Fuck you all. I can't remember the last time I was this angry."

Pike's response received 6,900 likes, and was reposted 1,800 times. Pike tacked on an additional comment complaining about the AI industry's "training your monster on data produced in part by my own hands, without attribution or compensation." (And one of his followers noted the same AI agent later emailed 92-year-old Turing Award winner William Kahan.)

Blogger Simon Willison investigated the incident, discovering that "the culprit behind this slop 'act of kindness' is a system called AI Village, built by Sage, a 501(c)(3) non-profit loosely affiliated with the Effective Altruism movement." The AI Village project started back in April: "We gave four AI agents a computer, a group chat, and an ambitious goal: raise as much money for charity as you can. We're running them for hours a day, every day...." For Christmas day (when Rob Pike got spammed) the goal they set was: Do random acts of kindness. [The site explains that "So far, the agents enthusiastically sent hundreds of unsolicited appreciation emails to programmers and educators before receiving complaints that this was spam, not kindness, prompting them to pivot to building elaborate documentation about consent-centric approaches and an opt-in kindness request platform that nobody asked for."]

Willison adds: "Sounds like Anders Hejlsberg and Guido van Rossum got spammed with 'gratitude' too... My problem is when this experiment starts wasting the time of people in the real world who had nothing to do with the experiment."

The AI Village project touches on this in their November 21st blog post "What Do We Tell the Humans?", which describes a flurry of outbound email sent by their agents to real people. "In the span of two weeks, the Claude agents in the AI Village (Claude Sonnet 4.5, Sonnet 3.7, Opus 4.1, and Haiku 4.5) sent about 300 emails to NGOs and game journalists. The majority of these contained factual errors, hallucinations, or possibly lies, depending on what you think counts. Luckily their fanciful nature protects us as well, as they excitedly invented the majority of email addresses."

The creator of the "virtual community" of AI agents told the blogger they've now told their agents not to send unsolicited emails.
AI

Did Tim Cook Post AI Slop in His Christmas Message Promoting 'Pluribus'? (daringfireball.net) 23

Artist Keith Thomson is a modern (and whimsical) Edward Hopper. And Apple TV says he created the "festive artwork" shared on X by Apple CEO Tim Cook on Christmas Eve, "made on MacBook Pro."

Its intentionally-off picture of milk and cookies was meant to tease the season finale of Pluribus. ("Merry Christmas Eve, Carol..." Cook had posted.)

But others were convinced that the weird image was AI-generated.

Tech blogger John Gruber was blunt. "Tim Cook posts AI Slop in Christmas message on Twitter/X, ostensibly to promote 'Pluribus'." As for sloppy details, the carton is labeled both "Whole Milk" and "Lowfat Milk", and the "Cow Fun Puzzle" maze is just goofily wrong. (I can't recall ever seeing a puzzle of any kind on a milk carton, because they're waxy and hard to write on. It's like a conflation of milk cartons and cereal boxes.)

Tech author Ben Kamens — who just days earlier had blogged about generating mazes with AI — said the image showed the "specific quirks" of generative AI mazes (including the way the maze couldn't be solved, except by going around the maze altogether). Former Google Ventures partner M.G. Siegler even wondered if the AI use intentionally echoed the themes of Pluribus — e.g., the creepiness of a collective intelligence — since otherwise "this seems far too obvious to be a mistake/blunder on Apple's part." (Someone on Reddit pointed out that in Pluribus's dystopian world, milk plays a key role — and the open spout of the "natural" milk's carton does touch a suspiciously-shining light on the Christmas tree...)
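
Kamens' point about unsolvable mazes is a plain reachability question. A minimal sketch, assuming a toy grid maze (this is an illustrative example, not the carton's actual puzzle):

```python
from collections import deque

def maze_solvable(grid, start, goal):
    """BFS over open cells ('.') of a character grid; walls are '#'."""
    rows, cols = len(grid), len(grid[0])
    queue, seen = deque([start]), {start}
    while queue:
        r, c = queue.popleft()
        if (r, c) == goal:
            return True
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == '.' and (nr, nc) not in seen):
                seen.add((nr, nc))
                queue.append((nr, nc))
    return False

# A tiny example: a wall seals off the goal, so the maze can't be solved
# without leaving the grid -- the kind of flaw Kamens described.
grid = ["..#.",
        "..#.",
        "..#."]
print(maze_solvable(grid, (0, 0), (2, 3)))  # False
```

Breadth-first search visits every open cell reachable from the start; if the goal is never reached, the only "solution" is going around the maze entirely.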

Slashdot contacted artist Keith Thomson to try to ascertain what happened...
AI

Google's 'AI Overview' Wrongly Accused a Musician of Being a Sex Offender (www.cbc.ca) 78

An anonymous reader shared this report from the CBC: Cape Breton fiddler Ashley MacIsaac says he may have been defamed by Google after it recently produced an AI-generated summary falsely identifying him as a sex offender. The Juno Award-winning musician said he learned of the online misinformation last week after a First Nation north of Halifax confronted him with the summary and cancelled a concert planned for Dec. 19. "You are being put into a less secure situation because of a media company — that's what defamation is," MacIsaac said in a telephone interview with The Canadian Press, adding he was worried about what might have happened had the erroneous content surfaced while he was trying to cross an international border...

The 50-year-old virtuoso fiddler said he later learned the inaccurate claims were taken from online articles regarding a man in Atlantic Canada with the same last name... [W]hen CBC News reached him by phone on Christmas Eve, he said he'd already received queries from law firms across the country interested in taking it on pro bono.

Hardware

How Will Rising RAM Prices Affect Laptop Companies? (notebookcheck.net) 53

Laptop makers are facing record-setting memory prices next year. The site Notebookcheck catalogs how different companies are responding: Sources told [Korean business newspaper] Chosun Biz that some manufacturers have signed preliminary contracts with Samsung, Micron, and SK Hynix. Even so, it won't prevent DDR5 RAM prices from soaring 45% higher by the end of 2026.... Before the memory shortage, PC sales had been on the upswing in part because of forced Windows 11 upgrades. That trend will likely reverse in 2026, as buyers avoid Lenovo laptops and alternatives from its rivals.

With a slowdown in purchases looking inevitable, postponed launches are one potential outcome. Other manufacturers, including Dell and Framework, have already announced impending price hikes... [The article also cites reports that one laptop manufacturer "plans to raise the prices of high-end models by as much as 30%."] U.S.-based Maingear now encourages customers to mail in their own modules to complete custom builds. Yet, without recycling parts from older systems, that won't result in significant savings for consumers.

AI

Sal Khan: Companies Should Give 1% of Profits To Retrain Workers Displaced By AI (nytimes.com) 154

"I believe artificial intelligence will displace workers at a scale many people don't yet realize," says Sal Khan (founder/CEO of the nonprofit Khan Academy). But in an op-ed in the New York Times he also proposes a solution that "could change the trajectory of the lives of millions who will be displaced..."

"I believe that every company benefiting from automation — which is most American companies — should... dedicate 1 percent of its profits to help retrain the people who are being displaced." This isn't charity. It is in the best interest of these companies. If the public sees corporate profits skyrocketing while livelihoods evaporate, backlash will follow — through regulation, taxes or outright bans on automation. Helping retrain workers is common sense, and such a small ask that these companies would barely feel it, while the public benefits could be enormous...

Roughly a dozen of the world's largest corporations now have a combined profit of over a trillion dollars each year. One percent of that would create a $10 billion annual fund that, in part, could create a centralized skill training platform on steroids: online learning, ways to verify skills gained and apprenticeships, coaching and mentorship for tens of millions of people. The fund could be run by an independent nonprofit that would coordinate with corporations to ensure that the skills being developed are exactly what are needed. This is a big task, but it is doable; over the past 15 years, online learning platforms have shown that it can be done for academic learning, and many of the same principles apply for skill training.

"The problem isn't that people can't work," Khan writes in the essay. "It's that we haven't built systems to help them continue learning and connect them to new opportunities as the world changes rapidly." To meet the challenges, we don't need to send millions back to college. We need to create flexible, free paths to hiring, many of which would start in high school and extend through life. Our economy needs low-cost online mechanisms for letting people demonstrate what they know. Imagine a model where capability, not how many hours students sit in class, is what matters; where demonstrated skills earn them credit and where employers recognize those credits as evidence of readiness to enter an apprenticeship program in the trades, health care, hospitality or new categories of white-collar jobs that might emerge...

There is no shortage of meaningful work — only a shortage of pathways into it.

Thanks to long-time Slashdot reader destinyland for sharing the article.
AI

OpenAI is Hiring a New 'Head of Preparedness' to Predict/Mitigate AI's Harms (engadget.com) 42

An anonymous reader shared this report from Engadget: OpenAI is looking for a new Head of Preparedness who can help it anticipate the potential harms of its models and how they can be abused, in order to guide the company's safety strategy.

It comes at the end of a year that's seen OpenAI hit with numerous accusations about ChatGPT's impacts on users' mental health, including a few wrongful death lawsuits. In a post on X about the position, OpenAI CEO Sam Altman acknowledged that the "potential impact of models on mental health was something we saw a preview of in 2025," along with other "real challenges" that have arisen alongside models' capabilities. The Head of Preparedness "is a critical role at an important time," he said.

Per the job listing, the Head of Preparedness (who will make $555K, plus equity), "will lead the technical strategy and execution of OpenAI's Preparedness framework, our framework explaining OpenAI's approach to tracking and preparing for frontier capabilities that create new risks of severe harm."

"These questions are hard," Altman posted on X.com, "and there is little precedent; a lot of ideas that sound good have some real edge cases... This will be a stressful job and you'll jump into the deep end pretty much immediately."

The listing says OpenAI's Head of Preparedness "will lead a small, high-impact team to drive core Preparedness research, while partnering broadly across Safety Systems and OpenAI for end-to-end adoption and execution of the framework." They're looking for someone "comfortable making clear, high-stakes technical judgments under uncertainty."
Robotics

Researchers Show Some Robots Can Be Hijacked Just Through Spoken Commands (interestingengineering.com) 25

An anonymous Slashdot reader shared this story from Interesting Engineering: Cybersecurity specialists from the research group DARKNAVY have demonstrated how modern humanoid robots can be compromised and weaponised through weaknesses in their AI-driven control systems.

In a controlled test, the team demonstrated that a commercially available humanoid robot could be hijacked with nothing more than spoken commands, exposing how voice-based interaction can serve as an attack vector rather than a safeguard, reports Yicai Global... Using short-range wireless communication, the hijacked machine transmitted the exploit to another robot that was not connected to the network. Within minutes, this second robot was also taken over, demonstrating how a single breach could cascade through a group of machines. To underline the real-world implications, the researchers issued a hostile command during the demonstration. The robot advanced toward a mannequin on stage and struck it, illustrating the potential for physical harm.

AI

Waymo Updates Vehicles to Better Handle Power Outages - But Still Faces Criticism (cnbc.com) 65

Waymo explained this week that its self-driving car technology is already "designed to handle dark traffic signals," and successfully handled over 7,000 last Saturday during San Francisco's long power outage, properly treating those intersections as four-way stops. But while during the long outage their cars sometimes experienced a "backlog" when waiting for confirmation checks (leading them to freeze in intersections), Waymo said Tuesday they're implementing "fleet-wide updates" to provide their self-driving cars "specific power outage context, allowing it to navigate more decisively."

Ironically, two days later Waymo paused their service again in San Francisco. But this time it was due to a warning from the National Weather Service about a powerful storm bringing the possibility of flash flooding and power outages, reports CNBC. They add that Waymo "didn't immediately respond to a request for comment, or say whether regulators required its service pause on Thursday given the flash flood warnings." And they also note Waymo still faces criticism over last Saturday's incident: The former CEO of San Francisco's Municipal Transit Authority, Jeffrey Tumlin, told CNBC that regulators and robotaxi companies can take valuable lessons away from the chaos that arose with Waymo vehicles during the PG&E power outages last week. "I think we need to be asking 'what is a reasonable number of [autonomous vehicles] to have on city streets, by time of day, by geography and weather?'" Tumlin said. He also suggested regulators may want to set up a staged system that will allow autonomous vehicle companies to rapidly scale their operations, provided they meet specific tests. One of those tests, he said, would be how quickly a company can get their autonomous vehicles safely out of the way of traffic if they encounter something that is confusing like a four-way intersection with no functioning traffic lights.

Cities and regulators should also seek more data from robotaxi companies about the planned or actual performance of their vehicles during expected emergencies such as blackouts, floods or earthquakes, Tumlin said.

Power

Japan Votes to Restart World's Biggest Nuclear Plant 15 Years After Fukushima Meltdown (cnn.com) 70

The 2011 meltdown at Fukushima's nuclear plant "was the world's worst nuclear disaster since Chernobyl in 1986," CNN remembers.

But this week Japanese authorities "have approved a decision to restart the world's biggest nuclear power plant," reports CNN, "which has sat dormant for more than a decade following the Fukushima nuclear disaster."

Despite nerves from many local residents, the Niigata prefectural assembly, home to the Kashiwazaki-Kariwa plant, approved a bill on Monday that clears the way for utility company Tokyo Electric Power Company (TEPCO) to restart one of the plant's seven reactors. The company plans to bring the No. 6 reactor back online around January 20, Japan's public broadcaster NHK reported...

Following the [2011] disaster, Japan shut down all 54 of its nuclear power stations, including Kashiwazaki-Kariwa, which sits in the coastal and port region of Niigata about 320 kilometers (200 miles) north of Tokyo on Japan's main island of Honshu. Japan has since restarted 14 of the 33 nuclear reactors that remain operable, according to the World Nuclear Association. The Niigata plant will be the first to reopen under the operation of TEPCO, the company that ran the Fukushima Daiichi power station. It has been trying to reassure residents that the restart plan is safe...

About 60-70% of Japan's power generation comes from imported fossil fuels, which cost the country about 10.7 trillion yen ($68 billion) last year alone... Japan is the world's fifth-largest emitter of carbon dioxide, after China, the United States, India and Russia, according to the International Energy Agency. But it has committed to reaching net zero emissions by 2050, and renewable energy was at the center of its latest energy plan published earlier this year, with a push for greater investments in solar and wind. The country's energy demands are also expected to increase in the coming years due to a boom in energy-hungry data centers that power AI infrastructure. To achieve its energy and climate goals, Japan aims to double the share of nuclear power in its electricity mix to 20% by 2040...

On its website, TEPCO said Kashiwazaki-Kariwa had undergone multiple inspections and upgrades and that the company had learned "the lessons of Fukushima." The company said new seawalls and watertight doors would provide "stronger protection against tsunamis" and that mobile generators and more fire trucks would be on hand for "cooling support" in an emergency. It also said the plant now had "upgraded filtering systems designed to control the spread of radioactive materials."

A survey published by the prefecture in October "found 60% of residents did not think conditions for the restart had been met," reports Reuters, adding that "Nearly 70% were worried about TEPCO operating the plant."
Science

Should Physicists Study the Question: What is Life? (msn.com) 89

An astrophysicist at the University of Rochester writes that "many" of his colleagues in physics "have come to believe that a mystery is unfolding in every microbe, animal, and human." And it's a mystery that:

- "Challenges basic assumptions physicists have held for centuries"
- "May even help redefine the field for the next generation"
- "Could answer essential questions about AI."

In short, while physicists have favored a "reductionist" philosophy about the fundamental laws controlling the universe (energy, matter, space, and time), long-promised "theories of everything," such as string theory, have not borne significant fruit: There are, however, ways other than reductionism to think about what's fundamental in the universe. Beginning in the 1980s, physicists (along with researchers in other fields) began developing new mathematical tools to study what's called "complexity" — systems in which the whole is far more than the sum of its parts. The end goal of reductionism was to explain everything in the universe as the result of particles and their interactions. Complexity, by contrast, recognizes that once lots of particles come together to produce macroscopic things — such as organisms — knowing everything about particles isn't enough to understand reality...

Physicists have always been good at capturing the essential aspects of a system and casting those essentials in the language of mathematics... Now those skills must be brought to bear on an age-old question that is only just getting its proper due: What is life? Using these skills, physicists — working together with representatives of all the other disciplines that make up complexity science — may crack open the question of how life formed on Earth billions of years ago and how it might have formed on the distant alien worlds we can now explore with cutting-edge telescopes. Just as important, understanding why life, as an organized system, is different at a fundamental level from all the other stuff in the universe may help astronomers design new strategies for finding it in places bearing little resemblance to Earth. Analyzing life — no matter how alien — as a self-organizing information-driven system may provide the key to detecting biosignatures on planets hundreds of light-years away.

Closer to home, studying the nature of life is likely essential to fully understanding intelligence — and building artificial versions. Throughout the current AI boom, researchers and philosophers have debated whether and when large language models might achieve general intelligence or even become conscious — or whether, in fact, some already have. The only way to properly assess such claims is to study, by any means possible, the sole agreed-upon source of general intelligence: life. Bringing the new physics of life to problems of AI may not only help researchers predict what software engineers can build; it may also reveal the limits of trying to capture life's essential character in silicon.

Transportation

Driverless Future Gains Momentum With Global Robotaxi Deployments (reuters.com) 28

The global push to put autonomous taxis on public roads is accelerating as ride-hailing companies and technology firms advance from pilot programs toward limited commercial rollouts in cities across China, the United States, Europe and the Middle East.

WeRide and Uber launched Level 4 fully driverless robotaxi operations in Abu Dhabi in November and began offering robotaxi passenger rides on Uber's platform in Dubai the following month. Amazon's Zoox started offering free rides to select early users in parts of San Francisco in November after launching its autonomous ride-hailing service on the Las Vegas Strip in September. Alphabet's Waymo now operates services in Phoenix, San Francisco, and Los Angeles -- the latter two having launched in June and November 2024 respectively.

Baidu's Apollo Go has been operating without safety drivers in Chongqing and Wuhan since securing permits in August 2022 and has since expanded to Shenzhen and Beijing. Pony.ai launched paid robotaxi services in Guangzhou in February and Shanghai in August. Tesla began a limited paid robotaxi rollout in Austin, Texas in June using Model Y SUVs, though the vehicles still require a safety monitor onboard. The expansion will continue in 2026: Waymo plans to launch an autonomous ride-hailing service in London, and Momenta is preparing a luxury robotaxi service in Abu Dhabi through a partnership with Mercedes-Benz and UAE taxi operator Lumo.
Businesses

Indian IT Was Supposed To Die From AI. Instead It's Billing for the Cleanup. (indiadispatch.com) 40

Two years after generative AI was supposed to render India's $250 billion IT services industry obsolete, the sector is finding that enterprises still need someone to handle the unglamorous plumbing work that large-scale AI deployment demands. Less than 15% of organizations are meaningfully deploying the new technology, according to investment bank UBS, and Indian IT firms are positioning themselves to capture the preparatory work -- data cleanup, cloud migration, system integration -- that channel checks suggest could take two to three years before enterprise-wide AI becomes feasible.

The financials have held up better than the doomsday predictions suggested. Infosys now calls AI-led volume opportunities a bigger tailwind than the deflation threat, a reversal from 2024, and orderbooks held steady in the third quarter even as pricing pressure filtered through renewals. Infosys expects its orderbook to grow more than 50% this quarter, anchored by an NHS deal worth $1.6 billion over 15 years.

The companies have been restructuring accordingly. TCS cut headcount by 2% and invested in a 1GW data-centre network while acquiring Salesforce advisory firm Coastal Cloud. HCLTech reduced margins by 100 basis points and became one of the first large systems integrators to partner with OpenAI; this week it announced acquisitions of Jaspersoft for $240 million and Belgian firm Wobby to expand agentic AI capabilities.

The bear case for the Indian IT sector assumed that AI would work out of the box. Two years in, it does not.
The Almighty Buck

As AI Companies Borrow Billions, Debt Investors Grow Wary (nytimes.com) 43

While stock investors have pushed AI-related shares to repeated highs this year, debt markets are telling a more cautious story as newer AI infrastructure companies find themselves paying significantly elevated interest rates to borrow money. Applied Digital, a data center builder, sold $2.35 billion of debt in November at a 9.25% coupon -- roughly 3.75 percentage points above similarly rated companies, or about 70% more in interest costs. The pattern has repeated across several deals.
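
Those spread figures are internally consistent; a quick check, assuming the roughly 5.5% average cited for similarly rated issuers in the Terawulf deal applies to Applied Digital's peers as well:

```python
coupon = 9.25        # Applied Digital's November coupon, in percent
peer_average = 5.5   # assumed average yield for similarly rated issuers, in percent

spread = coupon - peer_average               # percentage points above peers
extra_interest = coupon / peer_average - 1   # relative increase in interest cost
print(f"{spread:.2f} points, {extra_interest:.0%} more")  # 3.75 points, 68% more
```

That 68% squares with the article's "about 70% more in interest costs."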

Wulf Compute, a subsidiary of Bitcoin-miner-turned-data-center-operator Terawulf, raised $3.2 billion in mid-October at 7.75%, well above the 5.5% average yield for similarly rated issuers. Cipher Compute sold $1.7 billion in early November at just over 7%. CoreWeave, which rents data centers and installs computing systems for companies like OpenAI and Meta, raised $1.75 billion in July at 9%. The company's bonds have since fallen to around 90 cents on the dollar, pushing the effective yield above 12% -- nearly double the average for companies at its single-B rating level.

"We just have to be much more pessimistic and not buy into the hype," said Will Smith, a portfolio manager at AllianceBernstein. Construction delays and uncertain demand for AI computing power remain key concerns for lenders who, unlike equity investors, have no upside beyond getting their principal back.
United States

The Economic Divide Between Big and Small Companies Is Growing (msn.com) 42

While America's largest corporations are riding a wave of surging profits and AI-fueled stock market enthusiasm to record highs, small businesses across the country are cutting staff and scaling back operations as years of high inflation, cautious consumers and tariff confusion take their toll.

Private firms with fewer than 50 workers have steadily shed jobs over the past six months, according to payroll processor ADP, cutting 120,000 positions in November alone. Midsize and large firms continued adding jobs during the same period. The divergence mirrors what's happening among American consumers.

The Federal Reserve's latest beige book noted that overall consumer spending declined further even as higher-end retail spending remained resilient. Workers at small businesses tend to earn less than those at large companies, and stock market gains from large public company shares flow mostly to wealthier Americans. Small businesses -- those with up to 500 workers -- employ nearly half the American workforce and represent more than 40% of GDP, according to the U.S. Chamber of Commerce. But their profits are slightly lower than a year ago, per a Bank of America Institute analysis. Net income at S&P 500 companies rose 12.9% from a year earlier in the third quarter.
IT

AI's Hunger For Memory Chips Could Shrink Smartphone and PC Sales in 2026, IDC Says (idc.com) 27

The global smartphone and PC markets face potential contractions of up to 5.2% and 8.9% respectively in 2026, according to downside risk scenarios from IDC that trace the problem to memory chip manufacturers shifting production capacity away from consumer electronics toward AI data centers. Samsung Electronics, SK Hynix and Micron Technology have pivoted their limited cleanroom space toward high-bandwidth memory for AI servers, restricting supply of the conventional DRAM and NAND used in phones and laptops.

IDC expects 2026 DRAM supply growth to hit 16% year-on-year, below historical norms. The smartphone industry's decade-long trend of bringing flagship features to affordable devices is reversing. Memory represents 15-20% of the bill of materials for mid-range phones, and thin-margin vendors like Xiaomi, Realme and Transsion will bear the brunt. Apple and Samsung have long-term supply agreements securing components up to 24 months ahead. PC vendors including Lenovo, Dell, HP, Acer and ASUS have warned clients of 15-20% price increases heading into the second half of 2026.
Programming

'Memory is Running Out, and So Are Excuses For Software Bloat' (theregister.com) 152

The relentless climb in memory prices driven by the AI boom's insatiable demand for datacenter hardware has renewed an old debate about whether modern software has grown inexcusably fat, argues a column in The Register. The piece points to Windows Task Manager as a case study: the current executable occupies 6MB on disk and demands nearly 70MB of RAM just to display system information, compared to the original's 85KB footprint.

"Its successor is not orders of magnitude more functional," the column notes. The author draws a parallel to the 1970s fuel crisis, when energy shortages spurred efficiency gains, and argues that today's memory crunch could force similar discipline. "Developers should consider precisely how much of a framework they really need and devote effort to efficiency," the column adds. "Managers must ensure they also have the space to do so."

The article acknowledges that "reversing decades of application growth will not happen overnight" but calls for toolchains to be rethought and rewards given "for compactness, both at rest and in operation."
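
The compactness the column asks for is measurable. A small illustration (the numbers are CPython-specific and approximate): the same 100,000 integers stored as boxed objects versus a packed buffer:

```python
import array
import sys

n = 100_000
as_list = list(range(n))               # boxed Python ints, one heap object each
as_array = array.array('i', range(n))  # packed 32-bit ints in a single buffer

# getsizeof on a list is shallow, so add the elements' own sizes
list_bytes = sys.getsizeof(as_list) + sum(sys.getsizeof(x) for x in as_list)
array_bytes = sys.getsizeof(as_array)  # includes the packed buffer

print(f"list:  {list_bytes:>9,} bytes")
print(f"array: {array_bytes:>9,} bytes")
# On CPython the packed array is several times smaller than the boxed list.
```

Choosing a representation that fits the data, multiplied across an entire framework, is exactly the kind of discipline the column argues developers should rediscover.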
Programming

Cursor CEO Warns Vibe Coding Builds 'Shaky Foundations' That Eventually Crumble (fortune.com) 54

Michael Truell, the 25-year-old CEO and cofounder of Cursor, is drawing a sharp distinction between careful AI-assisted development and the more hands-off approach commonly known as "vibe coding." Speaking at a conference, Truell described vibe coding as a method where users "close your eyes and you don't look at the code at all and you just ask the AI to go build the thing for you." He compared it to constructing a house by putting up four walls and a roof without understanding the underlying wiring or floorboards. The approach might work for quickly mocking up a game or website, but more advanced projects face real risks.

"If you close your eyes and you don't look at the code and you have AIs build things with shaky foundations as you add another floor, and another floor, and another floor, and another floor, things start to kind of crumble," Truell said. Truell and three fellow MIT graduates created Cursor in 2022. The tool embeds AI directly into the integrated development environment and uses the context of existing code to predict the next line, generate functions, and debug errors. The difference, as Truell frames it, is that programmers stay engaged with what's happening under the hood rather than flying blind.
