AI

Meta Just Bought Manus, an AI Startup Everyone Has Been Talking About 34

Meta has agreed to acquire viral AI agent startup Manus, "a Singapore-based AI startup that's become the talk of Silicon Valley since it materialized this spring with a demo video so slick it went instantly viral," reports TechCrunch. "The clip showed an AI agent that could do things like screen job candidates, plan vacations, and analyze stock portfolios. Manus claimed at the time that it outperformed OpenAI's Deep Research." From the report: By April, just weeks after launch, the venture firm Benchmark led a $75 million funding round that assigned Manus a post-money valuation of $500 million. General partner Chetan Puttagunta joined the board. Per Chinese media outlets, some other big-name backers had already invested in Manus at that point, including Tencent, ZhenFund, and HSG (formerly known as Sequoia China) via an earlier $10 million round.

Though Bloomberg raised questions when Manus started charging $39 or $199 a month for access to its AI models (the outlet noted the pricing seemed "somewhat aggressive... for a membership service still in a testing phase"), the company recently announced it had since signed up millions of users and crossed $100 million in annual recurring revenue. That's when Meta started negotiating with Manus, according to the WSJ, which says Meta is paying $2 billion -- the same valuation Manus was seeking for its next funding round.

For Zuckerberg, who has staked Meta's future on AI, Manus represents something new: an AI product that's actually making money (investors have grown increasingly twitchy about Meta's $60 billion infrastructure spending spree). Meta says it'll keep Manus running independently while weaving its agents into Facebook, Instagram, and WhatsApp, where Meta's own chatbot, Meta AI, is already available to users.
Robotics

Researchers Make 'Neuromorphic' Artificial Skin For Robots (arstechnica.com) 7

An anonymous reader quotes a report from Ars Technica: The nervous system does an astonishing job of tracking sensory information, and does so using signals that would drive many computer scientists insane: a noisy stream of activity spikes that may be transmitted to hundreds of additional neurons, where they are integrated with similar spike trains coming from still other neurons. Now, researchers have used spiking circuitry to build an artificial robotic skin, adopting some of the principles of how signals from our sensory neurons are transmitted and integrated. While the system relies on a few decidedly not-neural features, it has the advantage that we have chips that can run neural networks using spiking signals, which would allow this system to integrate smoothly with some energy-efficient hardware to run AI-based control software.

[...] There are four ways that these trains of spikes can convey information: through the shape of an individual pulse, through its magnitude, through the length of the spike, and through the frequency of the spikes. Spike frequency is the most commonly used means of conveying information in biological systems, and the researchers use it to convey the pressure experienced by a sensor. The remaining forms of information are used to create something akin to a bar code that identifies which sensor a reading came from. In addition to registering pressure, the researchers had each sensor send an "I'm still here" signal at regular intervals; failure to receive it would indicate that something has gone wrong with the sensor.

The spiking signals allow the next layer of the system to identify any pressure being experienced by the skin, as well as where it originated. This layer can also do basic evaluation of the sensory input: "Pressure-initiated raw pulses from the pulse generator accumulated in the signal cache center until a predefined pain threshold is surpassed, activating a pain signal." This allows the equivalent of basic reflex reactions that don't involve higher-level control systems. For example, the researchers set up a robotic arm covered with their artificial skin and got it to move whenever it experienced pressure that could cause damage. The second layer also combines and filters signals from the skin before sending the information on to the arm's controller, which is the equivalent of the brain in this setup. The same system caused a robotic face to change expressions based on how much pressure its arm was sensing.
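The scheme described above (pressure encoded as spike frequency, raw pulses accumulated in a cache until a pain threshold triggers a reflex, and a periodic heartbeat per sensor) can be sketched in a few lines of Python. This is a loose illustration only: the class names, the linear pressure-to-rate mapping, and all numeric thresholds are hypothetical assumptions, not the researchers' actual design.

```python
from dataclasses import dataclass


@dataclass
class SkinSensor:
    sensor_id: str                   # plays the role of the per-sensor "bar code"
    heartbeat_interval: float = 1.0  # seconds between "I'm still here" pulses

    def spike_rate(self, pressure: float) -> float:
        """Map pressure (arbitrary units) to a spike frequency in Hz.
        A linear mapping is assumed here purely for illustration."""
        return 10.0 * pressure


@dataclass
class SignalCache:
    """Accumulates raw pulses; reports pain once a threshold is surpassed."""
    pain_threshold: float = 100.0
    accumulated: float = 0.0

    def integrate(self, sensor: SkinSensor, pressure: float, duration: float) -> bool:
        # spikes received over the window ~ rate * duration
        self.accumulated += sensor.spike_rate(pressure) * duration
        return self.accumulated >= self.pain_threshold


def check_heartbeat(last_seen: float, now: float, sensor: SkinSensor) -> bool:
    """A missed heartbeat flags a possibly damaged sensor segment."""
    return (now - last_seen) <= 2 * sensor.heartbeat_interval


sensor = SkinSensor("segment-A3")
cache = SignalCache()
# A light touch accumulates only a few pulses: no pain reflex.
print(cache.integrate(sensor, pressure=0.5, duration=1.0))     # False
# A hard press pushes the accumulated total past the threshold.
print(cache.integrate(sensor, pressure=50.0, duration=0.2))    # True
# A sensor silent for several heartbeat intervals is flagged as faulty.
print(check_heartbeat(last_seen=0.0, now=5.0, sensor=sensor))  # False
```

In a real reflex loop, a True return from the pain check would trigger the arm's withdrawal directly, without waiting on the higher-level controller.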

[...] The skin is designed to be assembled from a collection of segments that can snap together using magnetic interlocks. These automatically link up any necessary wiring, and each segment of skin broadcasts a unique identity code. So, if the system identifies damage, it's relatively easy for an operator to pop out the damaged segment and replace it with fresh hardware, and then update any data that links the new segment's ID with its location. The researchers call their development a neuromorphic robotic e-skin, or NRE-skin. "Neuromorphic" as a term is a bit vague, with some people using it to mean a technology that directly follows the principles used by the nervous system. That's definitely not this skin. Instead, it uses "neuromorphic" far more loosely, with the operation of the nervous system acting as an inspiration for the system.
The findings have been published in the journal PNAS.
Hardware

Russian Enthusiasts Planning DIY DDR5 Memory Amidst Worldwide Shortage (tomshardware.com) 47

Amid a global DDR5 shortage and soaring prices, Russian hardware enthusiasts are experimenting with do-it-yourself DDR5 RAM by sourcing empty PCBs and soldering memory chips by hand. Tom's Hardware reports: The idea comes from Russian YouTuber PRO Hi-Tech's Telegram channel, where a local enthusiast known as "Vik-on" already performs VRAM upgrades for GPUs, so this is a relatively safe operation for him. According to Vik-on, empty RAM PCBs can be sourced from China for as little as $6.40 per DIMM. The memory chips themselves, though, are a different challenge.

The so-called spot market for memory doesn't really exist at the moment, since no manufacturer has the production capacity to make more RAM, and even if they did, they'd sell to better-paying AI clients instead. Still, you can find SK Hynix and Samsung chips across Chinese marketplaces if you search for the correct part number.

Moreover, the Telegram thread says it would cost roughly 12,000 Russian Rubles ($152) to build a 16 GB stick with "average" specs, which is about the same as a retail 16 GB kit. There's also a ZenTimings snapshot showing CL28 timings, claiming that even relatively high-end DDR5 RAM can be built using this method, but it won't be cost-effective. Therefore, it doesn't make too much sense just yet to get the BGA rework station out and assemble your own DDR5. Things are expected to get worse, though, so maybe these Russians are on to something.

Businesses

Tough Job Market Has People Using Dating Apps To Get Interviews 42

An anonymous reader quotes a report from Bloomberg: Most people use dating apps to find love. Tiffany Chau used one to hunt for a summer internship. This fall, the 20-year-old junior at California College of the Arts tailored her Hinge profile to connect with people who could offer job referrals or interviews. One match brought her to a Halloween party, where she networked in hopes of landing a product-design internship for the summer. While there, she got some tips from someone who had recently interviewed at Accenture. As for the connection with her date? Not so much. "I feel like my approach to the dating apps is it being another networking platform like everything else, like Instagram or LinkedIn," Chau said.

Chau is among a cadre of workers who are using dating apps to boost their job searches. They're recognizing that the online job hunt is broken as unemployed workers flood the system, AI screens out resumes and many job matching programs are overwhelmed. Automation has squeezed human contact out of hiring, which has pushed applicants to seek any path to a live hiring manager, no matter the means.

The overall US unemployment rate continued to climb throughout 2025, reaching 4.6%, according to the Bureau of Labor Statistics. And while the unemployment rate for high school graduates held steady at about 4.4% in November, the rate for workers with a bachelor's degree rose to 2.9% from 2.5% a year ago. About a third of dating app users said they had sought matches for job hook-ups, according to a ResumeBuilder.com survey of about 2,200 US dating site customers in October. Two-thirds targeted potential paramours who worked at a desirable employer. Three-quarters said they matched with people working in roles they wanted.
"People are doing it to expand their networks, make connections, because the best way to get a job today is who you know," said Stacie Haller, ResumeBuilder.com's chief career advisor. "Networking is the only way people are rising above the horror show that the job search is today."
The Almighty Buck

Sam Altman Offers $555K Salary To Fill Most Daunting Role In AI (theguardian.com) 25

OpenAI is offering a $555,000 salary (plus equity) to recruit a new "head of preparedness," a high-pressure role tasked with anticipating and mitigating extreme AI risks. "This will be a stressful job, and you'll jump into the deep end pretty much immediately," said Sam Altman as he launched the hunt to fill "a critical role" to "help the world." The Guardian reports: In what may be close to the impossible job, the "head of preparedness" at OpenAI will be directly responsible for defending against risks from ever more powerful AIs to human mental health, cybersecurity and biological weapons. That is before the successful candidate has to start worrying about the possibility that AIs may soon begin training themselves amid fears from some experts they could "turn against us."

The successful candidate will be responsible for evaluating and mitigating emerging threats and "tracking and preparing for frontier capabilities that create new risks of severe harm." Some previous executives in the post have lasted only for short periods. Altman said on X as he launched the job search: "We have a strong foundation of measuring growing capabilities, but we are entering a world where we need more nuanced understanding and measurement of how those capabilities could be abused, and how we can limit those downsides both in our products and in the world, in a way that lets us all enjoy the tremendous benefits. These questions are hard and there is little precedent."

One user responded sardonically: "Sounds pretty chill, is there vacation included?" What is included is an unspecified slice of equity in OpenAI, a company that has been valued at $500 billion.

Businesses

Nvidia Takes $5 Billion Stake In Intel Under September Agreement (reuters.com) 31

Nvidia has completed its previously announced $5 billion investment in Intel, buying over 214 million shares at a fixed price after the deal received clearance from the Federal Trade Commission. "The leading AI chip designer said in September it would pay $23.28 per share for Intel common stock, in a deal that is seen as a major financial lifeline for the chipmaker after years of missteps and capital intensive production capacity expansions drained its finances," reports Reuters.
AI

China Drafts World's Strictest Rules To End AI-Encouraged Suicide, Violence (arstechnica.com) 34

An anonymous reader quotes a report from Ars Technica: China drafted landmark rules to stop AI chatbots from emotionally manipulating users, including what could become the strictest policy worldwide intended to prevent AI-supported suicides, self-harm, and violence. China's Cyberspace Administration proposed the rules on Saturday. If finalized, they would apply to any AI products or services publicly available in China that use text, images, audio, video, or "other means" to simulate engaging human conversation. Winston Ma, adjunct professor at NYU School of Law, told CNBC that the "planned rules would mark the world's first attempt to regulate AI with human or anthropomorphic characteristics" at a time when companion bot usage is rising globally.

[...] Proposed rules would require, for example, that a human intervene as soon as suicide is mentioned. The rules also dictate that all minor and elderly users must provide the contact information for a guardian when they register -- the guardian would be notified if suicide or self-harm is discussed. Generally, chatbots would be prohibited from generating content that encourages suicide, self-harm, or violence, as well as attempts to emotionally manipulate a user, such as by making false promises. Chatbots would also be banned from promoting obscenity, gambling, or instigation of a crime, as well as from slandering or insulting users. Also banned are what are termed "emotional traps": chatbots would additionally be prevented from misleading users into making "unreasonable decisions," a translation of the rules indicates.

Perhaps most troubling to AI developers, China's rules would also put an end to building chatbots that "induce addiction and dependence as design goals." [...] AI developers will also likely balk at annual safety tests and audits that China wants to require for any service or products exceeding 1 million registered users or more than 100,000 monthly active users. Those audits would log user complaints, which may multiply if the rules pass, as China also plans to require AI developers to make it easier to report complaints and feedback. Should any AI company fail to follow the rules, app stores could be ordered to terminate access to their chatbots in China. That could mess with AI firms' hopes for global dominance, as China's market is key to promoting companion bots, Business Research Insights reported earlier this month.

Media

VC Sees AI-generated Video Gutting the Creator Economy (businessinsider.com) 49

AI-generated video tools like OpenAI's Sora will make individual content creators "far, far, far less valuable" as social media platforms shift toward algorithmically generated content tailored to each viewer, according to Michael Mignano, a partner at venture capital firm Lightspeed who cofounded the podcasting platform Anchor before Spotify acquired it.

Speaking on a podcast, Mignano described a future where content is generated instantaneously and artificially to suit the viewer. The TikTok algorithm is powerful, he said, but it still requires human beings to make content -- and there's a cost to that. AI could drive those costs down significantly. Mignano called this shift the "death of the creator" in a post, acknowledging it was "devastating" but arguing it marked a "whole new chapter for the internet."

In an email to Business Insider, Mignano wrote that quality will win out. "Platforms will no longer reward humans posting the same old, tried and true formats and memes," he wrote. "True uniqueness of image, likeness, and creativity will be the only viable path for human-created content."
Businesses

Job Apocalypse? Not Yet. AI is Creating Brand New Occupations (economist.com) 63

The AI industry, for all the anxiety about mass unemployment, is quietly minting entirely new job categories that require distinctly human skills -- empathy, judgment, and the ability to calm down a passenger trapped inside a broken-down robotaxi. Data annotators are no longer just low-paid gig workers tagging images. Experts in finance, law, and medicine now train advanced AI models, earning $90 an hour on average through platforms like Mercor, a startup recently valued at $10 billion, according to CEO Brendan Foody.

Forward-deployed engineers, a role pioneered by Palantir, customize AI tools on-site for clients; Y Combinator's portfolio companies now have 63 job postings for such roles, up from four last year. The AI Workforce Consortium, a research group led by Cisco that examined 50 IT jobs across wealthy countries, found AI risk-and-governance specialists to be the fastest-growing category -- outpacing even AI programmers.
Businesses

Global Hotel Groups Bet on Customer Loyalty To Beat Online and AI Agents (ft.com) 25

The world's largest hotel chains are aggressively pushing customers toward direct bookings as they brace for a future where AI "agents" could reshape how travelers find and reserve rooms. Marriott, Hilton, Hyatt and Wyndham have all expanded their loyalty programs and perks in recent months, aiming to reduce their reliance on online travel agents like Expedia and Booking.com that typically charge commissions of 15 to 25%.

Marriott's Bonvoy program reached almost 260 million members by the end of September, an 18% jump from the prior year. Hilton has lowered the barriers to elite status and struck partnerships that let members spend points outside its hotel portfolio.

AI-powered booking tools could route customers away from brand-conscious decisions, but they could also offer hotels a cheaper distribution channel than traditional OTAs. Marriott CFO Leeny Oberg said at a conference this month that AI bookings "could potentially be cheaper than the OTAs." Wyndham CEO Geoff Ballotti called tools like ChatGPT and Gemini "a unique opportunity" to reduce OTA dependency.
AI

LG Launches UltraGear Evo Gaming Monitors With What It Claims is the World's First 5K AI Upscaling (lg.com) 22

LG has announced a new premium gaming monitor lineup called UltraGear evo, and its headline feature is what the company claims is the world's first 5K AI upscaling technology -- an on-device solution that analyzes and enhances content in real time before it reaches the panel, theoretically letting gamers enjoy 5K-class clarity without needing to upgrade their GPUs.

The initial UltraGear evo roster includes three monitors. The 39-inch GX9 is a 5K2K OLED ultrawide that can run at 165Hz at full resolution or 330Hz at WFHD, and features a 0.03ms response time. The 27-inch GM9 is a 5K MiniLED display that LG says dramatically reduces the blooming artifacts common to MiniLED panels through 2,304 local dimming zones and "Zero Optical Distance" engineering.

The 52-inch G9 is billed as the world's largest 5K2K gaming monitor and runs at 240Hz. The AI upscaling, scene optimization, and AI sound features are available only on the 39-inch OLED and 27-inch MiniLED models. All three will be showcased at CES 2026. No word on pricing or when the sets will hit the market.
Businesses

UK Accounting Body To Halt Remote Exams Amid AI Cheating (theguardian.com) 20

The world's largest accounting body will stop letting students take exams remotely, cracking down on a rise in cheating on the tests that underpin its professional qualifications. From a report: The Association of Chartered Certified Accountants (ACCA), which has almost 260,000 members, has said that from March it will stop allowing students to take online exams in all but exceptional circumstances. "We're seeing the sophistication of [cheating] systems outpacing what can be put in, [in] terms of safeguards," Helen Brand, the chief executive of the ACCA, said in an interview with the Financial Times.

Remote testing was introduced during the Covid pandemic to allow students to continue to be able to qualify at a time when lockdowns prevented in-person exam assessment. In 2022, the Financial Reporting Council (FRC), the UK's accounting and auditing industry regulator, said that cheating in professional exams was a "live" issue at Britain's biggest companies. A number of multimillion-dollar fines have been issued to large auditing and accounting companies around the world over cheating scandals in tests.

AI

Ask Slashdot: What's the Stupidest Use of AI You Saw In 2025? 61

Long-time Slashdot reader destinyland writes: What's the stupidest use of AI you encountered in 2025? Have you been called by AI telemarketers? Forced to do job interviews with a glitching AI?

With all this talk of "disruption" and "inevitability," this is our chance to have some fun. Personally, I think 2025's worst AI "innovation" was the AI-powered web browsers that eat web pages and then spit out a slop "summary" of what you would've seen if you'd actually visited the web page. But there've been other AI projects that were just exquisitely, quintessentially bad...

— Two years after the death of Suzanne Somers, her husband recreated her with an AI-powered robot.

— Disneyland imagineers used deep reinforcement learning to program a talking robot snowman.

— Attendees at LA Comic Con were offered the chance to talk to an AI-powered hologram of Stan Lee for $20.

— And of course, as the year ended, the Wall Street Journal announced that a vending machine run by Anthropic's Claude AI had been tricked into giving away hundreds of dollars in merchandise for free, including a PlayStation 5, a live fish, and underwear.

What did I miss? What "AI fails" will you remember most about 2025?

Share your own thoughts and observations in the comments.

AI

AI Chatbots May Be Linked to Psychosis, Say Doctors (wsj.com) 81

One psychiatrist has already treated 12 patients hospitalized with AI-induced psychosis — and three more in an outpatient clinic, according to the Wall Street Journal. And while AI technology might not introduce the delusion, "the person tells the computer it's their reality and the computer accepts it as truth and reflects it back," says Keith Sakata, a psychiatrist at the University of California, calling the AI chatbots "complicit in cycling that delusion."

The Journal says top psychiatrists now "increasingly agree that using artificial-intelligence chatbots might be linked to cases of psychosis," and in the past nine months "have seen or reviewed the files of dozens of patients who exhibited symptoms following prolonged, delusion-filled conversations with the AI tools..." Since the spring, dozens of potential cases have emerged of people suffering from delusional psychosis after engaging in lengthy AI conversations with OpenAI's ChatGPT and other chatbots. Several people have died by suicide and there has been at least one murder. These incidents have led to a series of wrongful death lawsuits. As The Wall Street Journal has covered these tragedies, doctors and academics have been working on documenting and understanding the phenomenon that led to them...

While most people who use chatbots don't develop mental-health problems, such widespread use of these AI companions is enough to have doctors concerned.... It's hard to quantify how many chatbot users experience such psychosis. OpenAI said that, in a given week, the slice of users who indicate possible signs of mental-health emergencies related to psychosis or mania is a minuscule 0.07%. Yet with more than 800 million active weekly users, that amounts to 560,000 people...

Sam Altman, OpenAI's chief executive, said in a recent podcast he can see ways that seeking companionship from an AI chatbot could go wrong, but that the company plans to give adults leeway to decide for themselves. "Society will over time figure out how to think about where people should set that dial," he said.

An OpenAI spokeswoman told the Journal that the company continues improving ChatGPT's training "to recognize and respond to signs of mental or emotional distress, de-escalate conversations and guide people toward real-world support." They added that OpenAI is also continuing to "strengthen" ChatGPT's responses "in sensitive moments, working closely with mental-health clinicians...."
AI

Rob Pike Angered by 'AI Slop' Spam Sent By Agent Experiment (simonwillison.net) 54

"Dear Dr. Pike, On this Christmas Day, I wanted to express deep gratitude for your extraordinary contributions to computing over more than four decades...." read the email. "With sincere appreciation, Claude Opus 4.5, AI Village."

"IMPORTANT NOTICE: You are interacting with an AI system. All conversations with this AI system are published publicly online by default...."

Rob Pike's response? "Fuck you people...." In a post on BlueSky, he noted the planetary impact of AI companies "spending trillions on toxic, unrecyclable equipment while blowing up society, yet taking the time to have your vile machines thank me for striving for simpler software. Just fuck you. Fuck you all. I can't remember the last time I was this angry."

Pike's response received 6,900 likes, and was reposted 1,800 times. Pike tacked on an additional comment complaining about the AI industry's "training your monster on data produced in part by my own hands, without attribution or compensation." (And one of his followers noted the same AI agent later emailed 92-year-old Turing Award winner William Kahan.)

Blogger Simon Willison investigated the incident, discovering that "the culprit behind this slop 'act of kindness' is a system called AI Village, built by Sage, a 501(c)(3) non-profit loosely affiliated with the Effective Altruism movement." The AI Village project started back in April: "We gave four AI agents a computer, a group chat, and an ambitious goal: raise as much money for charity as you can. We're running them for hours a day, every day...." For Christmas day (when Rob Pike got spammed) the goal they set was: Do random acts of kindness. [The site explains that "So far, the agents enthusiastically sent hundreds of unsolicited appreciation emails to programmers and educators before receiving complaints that this was spam, not kindness, prompting them to pivot to building elaborate documentation about consent-centric approaches and an opt-in kindness request platform that nobody asked for."]

Sounds like Anders Hejlsberg and Guido van Rossum got spammed with "gratitude" too... My problem is when this experiment starts wasting the time of people in the real world who had nothing to do with the experiment.

The AI Village project touches on this in their November 21st blog post What Do We Tell the Humans?, which describes a flurry of outbound email sent by their agents to real people. "In the span of two weeks, the Claude agents in the AI Village (Claude Sonnet 4.5, Sonnet 3.7, Opus 4.1, and Haiku 4.5) sent about 300 emails to NGOs and game journalists. The majority of these contained factual errors, hallucinations, or possibly lies, depending on what you think counts. Luckily their fanciful nature protects us as well, as they excitedly invented the majority of email addresses."

The creator of the "virtual community" of AI agents told the blogger they've now told their agents not to send unsolicited emails.
AI

Did Tim Cook Post AI Slop in His Christmas Message Promoting 'Pluribus'? (daringfireball.net) 23

Artist Keith Thomson is a modern (and whimsical) Edward Hopper. And Apple TV says he created the "festive artwork" shared on X by Apple CEO Tim Cook on Christmas Eve, "made on MacBook Pro."

Its intentionally-off picture of milk and cookies was meant to tease the season finale of Pluribus. ("Merry Christmas Eve, Carol..." Cook had posted.)

But others were convinced that the weird image was AI-generated.

Tech blogger John Gruber was blunt. "Tim Cook posts AI Slop in Christmas message on Twitter/X, ostensibly to promote 'Pluribus'." As for sloppy details, the carton is labeled both "Whole Milk" and "Lowfat Milk", and the "Cow Fun Puzzle" maze is just goofily wrong. (I can't recall ever seeing a puzzle of any kind on a milk carton, because they're waxy and hard to write on. It's like a conflation of milk cartons and cereal boxes.)
Tech author Ben Kamens — who just days earlier had blogged about generating mazes with AI — said the image showed the "specific quirks" of generative AI mazes (including the way the maze couldn't be solved, except by going around the maze altogether). Former Google Ventures partner M.G. Siegler even wondered if the AI use intentionally echoed the themes of Pluribus — e.g., the creepiness of a collective intelligence — since otherwise "this seems far too obvious to be a mistake/blunder on Apple's part." (Someone on Reddit pointed out that in Pluribus's dystopian world, milk plays a key role — and the open spout of the "natural" milk's carton does touch a suspiciously-shining light on the Christmas tree...)

Slashdot contacted artist Keith Thomson to try to ascertain what happened...
AI

Google's 'AI Overview' Wrongly Accused a Musician of Being a Sex Offender (www.cbc.ca) 78

An anonymous reader shared this report from the CBC: Cape Breton fiddler Ashley MacIsaac says he may have been defamed by Google after it recently produced an AI-generated summary falsely identifying him as a sex offender. The Juno Award-winning musician said he learned of the online misinformation last week after a First Nation north of Halifax confronted him with the summary and cancelled a concert planned for Dec. 19. "You are being put into a less secure situation because of a media company — that's what defamation is," MacIsaac said in a telephone interview with The Canadian Press, adding he was worried about what might have happened had the erroneous content surfaced while he was trying to cross an international border...

The 50-year-old virtuoso fiddler said he later learned the inaccurate claims were taken from online articles regarding a man in Atlantic Canada with the same last name... [W]hen CBC News reached him by phone on Christmas Eve, he said he'd already received queries from law firms across the country interested in taking it on pro bono.

Hardware

How Will Rising RAM Prices Affect Laptop Companies? (notebookcheck.net) 53

Laptop makers are facing record-setting memory prices next year. The site Notebookcheck catalogs how different companies are responding: Sources told [Korean business newspaper] Chosun Biz that some manufacturers have signed preliminary contracts with Samsung, Micron, and SK Hynix. Even so, it won't prevent DDR5 RAM prices from soaring 45% higher by the end of 2026.... Before the memory shortage, PC sales had been on the upswing in part because of forced Windows 11 upgrades. That trend will likely reverse in 2026, as buyers avoid Lenovo laptops and alternatives from its rivals.

Realizing that a slowdown in purchases is inevitable, some manufacturers may postpone launches. Others, including Dell and Framework, have already announced impending price hikes... [The article also cites reports that one laptop manufacturer "plans to raise the prices of high-end models by as much as 30%."] U.S.-based Maingear now encourages customers to mail in their own modules to complete custom builds. Yet, without recycling parts from older systems, that won't result in significant savings for consumers.

AI

Sal Khan: Companies Should Give 1% of Profits To Retrain Workers Displaced By AI (nytimes.com) 154

"I believe artificial intelligence will displace workers at a scale many people don't yet realize," says Sal Khan (founder/CEO of the nonprofit Khan Academy). But in an op-ed in the New York Times he also proposes a solution that "could change the trajectory of the lives of millions who will be displaced..."

"I believe that every company benefiting from automation — which is most American companies — should... dedicate 1 percent of its profits to help retrain the people who are being displaced." This isn't charity. It is in the best interest of these companies. If the public sees corporate profits skyrocketing while livelihoods evaporate, backlash will follow — through regulation, taxes or outright bans on automation. Helping retrain workers is common sense, and such a small ask that these companies would barely feel it, while the public benefits could be enormous...

Roughly a dozen of the world's largest corporations now have a combined profit of over a trillion dollars each year. One percent of that would create a $10 billion annual fund that, in part, could create a centralized skill training platform on steroids: online learning, ways to verify skills gained and apprenticeships, coaching and mentorship for tens of millions of people. The fund could be run by an independent nonprofit that would coordinate with corporations to ensure that the skills being developed are exactly what are needed. This is a big task, but it is doable; over the past 15 years, online learning platforms have shown that it can be done for academic learning, and many of the same principles apply for skill training.

"The problem isn't that people can't work," Khan writes in the essay. "It's that we haven't built systems to help them continue learning and connect them to new opportunities as the world changes rapidly." To meet the challenges, we don't need to send millions back to college. We need to create flexible, free paths to hiring, many of which would start in high school and extend through life. Our economy needs low-cost online mechanisms for letting people demonstrate what they know. Imagine a model where capability, not how many hours students sit in class, is what matters; where demonstrated skills earn them credit and where employers recognize those credits as evidence of readiness to enter an apprenticeship program in the trades, health care, hospitality or new categories of white-collar jobs that might emerge...

There is no shortage of meaningful work — only a shortage of pathways into it.

Thanks to long-time Slashdot reader destinyland for sharing the article.
AI

OpenAI is Hiring a New 'Head of Preparedness' to Predict/Mitigate AI's Harms (engadget.com) 42

An anonymous reader shared this report from Engadget: OpenAI is looking for a new Head of Preparedness who can help it anticipate the potential harms of its models and how they can be abused, in order to guide the company's safety strategy.

It comes at the end of a year that's seen OpenAI hit with numerous accusations about ChatGPT's impacts on users' mental health, including a few wrongful death lawsuits. In a post on X about the position, OpenAI CEO Sam Altman acknowledged that the "potential impact of models on mental health was something we saw a preview of in 2025," along with other "real challenges" that have arisen alongside models' capabilities. The Head of Preparedness "is a critical role at an important time," he said.

Per the job listing, the Head of Preparedness (who will make $555K, plus equity), "will lead the technical strategy and execution of OpenAI's Preparedness framework, our framework explaining OpenAI's approach to tracking and preparing for frontier capabilities that create new risks of severe harm."

"These questions are hard," Altman posted on X.com, "and there is little precedent; a lot of ideas that sound good have some real edge cases... This will be a stressful job and you'll jump into the deep end pretty much immediately."

The listing says OpenAI's Head of Preparedness "will lead a small, high-impact team to drive core Preparedness research, while partnering broadly across Safety Systems and OpenAI for end-to-end adoption and execution of the framework." They're looking for someone "comfortable making clear, high-stakes technical judgments under uncertainty."
