AI

Meta Announces New Smartglasses Features, Delays International Rollout Claiming 'Unprecedented' Demand (cnbc.com) 30

This week Meta announced several new features for "Meta Ray-Ban Display" smartglasses:

- A new teleprompter feature for the smart glasses (arriving in a phased rollout)

- The ability to send messages on WhatsApp and Messenger by writing with your finger on any surface (available to those who sign up for an "early access" program).

- "Pedestrian navigation" for 32 cities. ("The 28 cities we launched Meta Ray-Ban Display with, plus Denver, Las Vegas, Portland, and Salt Lake City," and with more cities coming soon.)


But Meta also warned that the Ray-Ban Display "is a first-of-its-kind product with extremely limited inventory," saying it's delaying the international expansion of sales due to inventory constraints and to "unprecedented" demand in the U.S. CNBC reports: "Since launching last fall, we've seen an overwhelming amount of interest, and as a result, product waitlists now extend well into 2026," Meta wrote in a blog post. Due to "limited" inventory, the company said it will pause plans to launch in the U.K., France, Italy and Canada early this year and concentrate on U.S. orders as it reassesses international availability...

Meta is one of several technology companies moving into the smart glasses market. Alphabet announced a $150 million partnership with Warby Parker in May, and ChatGPT maker OpenAI is reportedly working on AI glasses with former Apple design chief Jony Ive.

Government

More US States Are Preparing Age-Verification Laws for App Stores (politico.com) 57

Yes, a federal judge blocked Texas' attempt at an app store age-verification law. But this year Silicon Valley giants including Google and Apple "are expected to fight hard against similar legislation," reports Politico, "because of the vast legal liability it imposes on app stores and developers." In Texas, Utah and Louisiana, parent advocates have linked up with conservative "pro-family" groups to pass laws forcing mobile app stores to verify user ages and require parental sign-off. If those rules hold up in court, companies like Google and Apple, which run the two largest app stores, would face massive legal liability...

California has taken a different approach, passing its own age-verification law last year that puts liability on device manufacturers instead of app stores. That model has been better received by the tech lobby, and is now competing with the app-based approach in states like Ohio. In Washington D.C., a GOP-led bill modeled off of Texas' law is wending its way through Capitol Hill. And more states are expected to join the fray, including Michigan and South Carolina.

Joel Thayer, president of the conservative Digital Progress Institute and a key architect of the Texas law, said states are only accelerating their push. He explicitly linked the age-verification debate to AI, arguing it's "terrifying" to think companies could build new AI products by scraping data from children's apps. Thayer also pointed to the Trump administration's recent executive order aimed at curbing state regulation of AI, saying it has galvanized lawmakers. "We're gonna see more states pushing this stuff," Thayer said. "What really put fuel in the fire is the AI moratorium for states. I think states have been reinvigorated to fight back on this."

He told Politico that the issue will likely be decided by the U.S. Supreme Court, which in June upheld Texas legislation requiring age verification for online content. Thayer said states need a ruling from the nation's highest court to "triangulate exactly what the eff is going on with the First Amendment in the tech world.

"They're going to have to resolve the question at some point."
AI

Meta Signs Deals With Three Nuclear Companies For 6+ GW of Power 28

Meta has signed long-term nuclear power deals totaling more than 6 gigawatts to fuel its data centers: "one from a startup, one from a smaller energy company, and one from a larger company that already operates several nuclear reactors in the U.S," reports TechCrunch. From the report: Oklo and TerraPower, two companies developing small modular reactors (SMRs), each signed agreements with Meta to build multiple reactors, while Vistra is selling capacity from its existing power plants. [...] The deals are the result of a request for proposals that Meta issued in December 2024, in which Meta sought partners that could add between 1 and 4 gigawatts of generating capacity by the early 2030s. Much of the new power will flow through the PJM Interconnection, a grid that covers 13 Mid-Atlantic and Midwestern states and has become saturated with data centers.

The 20-year agreement with Vistra will have the most immediate impact on Meta's energy needs. The tech company will buy a total of 2.1 gigawatts from two existing nuclear power plants, Perry and Davis-Besse in Ohio. As part of the deal, Vistra will also add capacity to those power plants and to its Beaver Valley power plant in Pennsylvania. Together, the upgrades will generate an additional 433 MW and are scheduled to come online in the early 2030s.

Meta is also buying 1.2 gigawatts from the younger provider, Oklo. Under its deal with Meta, Oklo hopes to start supplying power to the grid as early as 2030. The SMR company went public via SPAC in 2024, and while Oklo has landed a large deal with data center operator Switch, it has struggled to get its reactor design approved by the Nuclear Regulatory Commission. If Oklo can deliver on its timeline, the new reactors would be built in Pike County, Ohio. The startup's Aurora Powerhouse reactors each produce 75 megawatts of electricity, and it will need to build more than a dozen to fulfill Meta's order. TerraPower, a startup co-founded by Bill Gates, is aiming to start sending electricity to Meta as early as 2032.
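The reactor count implied by those figures is easy to verify. A quick sanity check (figures from the report; variable names are ours):

```python
# Back-of-the-envelope check of the Oklo figures reported above.
deal_mw = 1200        # Meta's 1.2 GW order
reactor_mw = 75       # one Aurora Powerhouse unit, per the report
units_needed = -(-deal_mw // reactor_mw)  # ceiling division
print(units_needed)   # 16 -- consistent with "more than a dozen"
```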
AI

AI Models Are Starting To Learn By Asking Themselves Questions (wired.com) 82

An anonymous reader quotes a report from Wired: [P]erhaps AI can, in fact, learn in a more human way -- by figuring out interesting questions to ask itself and attempting to find the right answer. A project from Tsinghua University, the Beijing Institute for General Artificial Intelligence (BIGAI), and Pennsylvania State University shows that AI can learn to reason in this way by playing with computer code. The researchers devised a system called Absolute Zero Reasoner (AZR) that first uses a large language model to generate challenging but solvable Python coding problems. It then uses the same model to solve those problems before checking its work by trying to run the code. And finally, the AZR system uses successes and failures as a signal to refine the original model, augmenting its ability to both pose better problems and solve them.

The team found that their approach significantly improved the coding and reasoning skills of both 7 billion and 14 billion parameter versions of the open source language model Qwen. Impressively, the model even outperformed some models that had received human-curated data. [...] A key challenge is that for now the system only works on problems that can easily be checked, like those that involve math or coding. As the project progresses, it might be possible to use it on agentic AI tasks like browsing the web or doing office chores. This might involve having the AI model try to judge whether an agent's actions are correct. One fascinating possibility of an approach like Absolute Zero is that it could, in theory, allow models to go beyond human teaching. "Once we have that it's kind of a way to reach superintelligence," [said Zilong Zheng, a researcher at BIGAI who worked on the project].
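The propose-solve-verify loop described above can be sketched in a few lines. This is a toy rendition under stated assumptions: in the real system one LLM plays both proposer and solver, while here stub functions (names are ours, not the paper's) stand in so the control flow is runnable, and the "model update" is reduced to collecting a reward signal.

```python
import random

def propose_task(rng):
    """Proposer: emit a (program, input) pair; the program's output defines the task."""
    a, b = rng.randint(1, 9), rng.randint(1, 9)
    program = f"def f(x, y):\n    return x * {a} + y * {b}"
    return program, (rng.randint(0, 5), rng.randint(0, 5))

def run(program, args):
    """Verifier: execute the code to get ground truth -- no human labels needed."""
    env = {}
    exec(program, env)
    return env["f"](*args)

def solve_task(program, args, rng):
    """Solver: in AZR this is the same LLM predicting the program's output."""
    truth = run(program, args)
    # Stub solver: guesses correctly 70% of the time.
    return truth if rng.random() < 0.7 else truth + 1

def training_step(rng):
    """One self-play round: propose, solve, verify, score."""
    program, args = propose_task(rng)
    prediction = solve_task(program, args, rng)
    reward = 1.0 if prediction == run(program, args) else 0.0
    return reward  # in AZR, rewards refine the shared model's weights

rng = random.Random(0)
rewards = [training_step(rng) for _ in range(1000)]
print(f"solver accuracy over 1000 self-generated tasks: {sum(rewards) / 1000:.2f}")
```

The key property the sketch preserves is that correctness comes from running the code, so the loop needs no curated training data.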

AI

AI Is Intensifying a 'Collapse' of Trust Online, Experts Say (nbcnews.com) 60

Experts interviewed by NBC News warn that the rapid spread of AI-generated images and videos is accelerating an online trust breakdown, especially during fast-moving news events where context is scarce. From the report: President Donald Trump's Venezuela operation almost immediately spurred the spread of AI-generated images, old videos and altered photos across social media. On Wednesday, after an Immigration and Customs Enforcement officer fatally shot a woman in her car, many online circulated a fake, most likely AI-edited image of the scene that appears to be based on real video. Others used AI in attempts to digitally remove the mask of the ICE officer who shot her.

The confusion around AI content comes as many social media platforms, which pay creators for engagement, have given users incentives to recycle old photos and videos to ramp up emotion around viral news moments. The amalgam of misinformation, experts say, is creating a heightened erosion of trust online -- especially when it mixes with authentic evidence. "As we start to worry about AI, it will likely, at least in the short term, undermine our trust default -- that is, that we believe communication until we have some reason to disbelieve," said Jeff Hancock, founding director of the Stanford Social Media Lab. "That's going to be the big challenge, is that for a while people are really going to not trust things they see in digital spaces."

Though AI is the latest technology to spark concern about surging misinformation, similar trust breakdowns have cycled through history, from election misinformation in 2016 to the mass production of propaganda after the printing press was invented in the 1400s. Before AI, there was Photoshop, and before Photoshop, there were analog image manipulation techniques. Fast-moving news events are where manipulated media have the biggest effect, because they fill in for the broad lack of information, Hancock said.

"In terms of just looking at an image or a video, it will essentially become impossible to detect if it's fake. I think that we're getting close to that point, if we're not already there," said Hancock. "The old sort of AI literacy ideas of 'let's just look at the number of fingers' and things like that are likely to go away."

Renee Hobbs, a professor of communication studies at the University of Rhode Island, added: "If constant doubt and anxiety about what to trust is the norm, then actually, disengagement is a logical response. It's a coping mechanism. And then when people stop caring about whether something's true or not, then the danger is not just deception, but actually it's worse than that. It's the whole collapse of even being motivated to seek truth."
Microsoft

Microsoft May Soon Allow IT Admins To Uninstall Copilot (bleepingcomputer.com) 41

Microsoft is testing a new Windows policy that lets IT administrators uninstall Microsoft Copilot from managed devices. The change rolls out via Windows Insider builds and works through standard management tools like Intune and SCCM. BleepingComputer reports: The new policy will apply to devices where both the Microsoft 365 Copilot and Microsoft Copilot apps are installed, the Microsoft Copilot app was not installed by the user, and the Microsoft Copilot app has not been launched in the last 28 days. "Admins can now uninstall Microsoft Copilot for a user in a targeted way by enabling a new policy titled RemoveMicrosoftCopilotApp," the Windows Insider team said.

"If this policy is enabled, the Microsoft Copilot app will be uninstalled, once. Users can still re-install if they choose to. This policy is available on Enterprise, Pro, and EDU SKUs. To enable this policy, open the Group policy editor and go to: User Configuration -> Administrative Templates -> Windows AI -> Remove Microsoft Copilot App."

The Internet

Google: Don't Make 'Bite-Sized' Content For LLMs If You Care About Search Rank (arstechnica.com) 22

An anonymous reader quotes a report from Ars Technica: Search engine optimization, or SEO, is a big business. While some SEO practices are useful, much of the day-to-day SEO wisdom you see online amounts to superstition. An increasingly popular approach geared toward LLMs called "content chunking" may fall into that category. In the latest installment of Google's Search Off the Record podcast, John Mueller and Danny Sullivan say that breaking content down into bite-sized chunks for LLMs like Gemini is a bad idea.

You've probably seen websites engaging in content chunking and scratched your head, and for good reason -- this content isn't made for you. The idea is that if you split information into smaller paragraphs and sections, it is more likely to be ingested and cited by gen AI bots like Gemini. So you end up with short paragraphs, sometimes with just one or two sentences, and lots of subheads formatted like questions one might ask a chatbot.

According to Google's Danny Sullivan, this is a misconception, and Google doesn't use such signals to improve ranking. "One of the things I keep seeing over and over in some of the advice and guidance and people are trying to figure out what do we do with the LLMs or whatever, is that turn your content into bite-sized chunks, because LLMs like things that are really bite size, right?" said Sullivan. "So... we don't want you to do that."

The conversation, which begins around the podcast's 18-minute mark, goes on to illustrate the folly of jumping on the latest SEO trend. Sullivan notes that he has consulted engineers at Google before making this proclamation. Apparently, the best way to rank on Google continues to be creating content for humans rather than machines. That ensures long-term search exposure, because the behavior of human beings -- what they choose to click on -- is an important signal for Google.

Technology

CES Worst In Show Awards Call Out the Tech Making Things Worse (ifixit.com) 41

Longtime Slashdot reader chicksdaddy writes: CES, the Consumer Electronics Show, isn't just about shiny new gadgets. As AP reports, this year brought back the fifth annual Worst in Show anti-awards, calling out the most harmful, wasteful, invasive, and unfixable tech at the Las Vegas show. The coalition behind the awards -- including Repair.org, iFixit, EFF, PIRG, Secure Repairs, and others -- put the spotlight on products that miss the point of innovation and make life worse for users.

2026 Worst in Show winners include:

Overall (and Repairability): Samsung's AI-packed Family Hub Fridge -- over-engineered, hard to fix, and trying to do everything but keep food cold.
Privacy: Amazon Ring AI -- expanding surveillance with features like facial recognition and mobile towers.
Security: Merach UltraTread treadmill -- an AI fitness coach that also hoovers up sensitive data with weak security guarantees, including a privacy policy that declares the company "cannot guarantee the security of your personal information" (!!).
Environmental Impact: Lollipop Star -- a single-use, music-playing electronic lollipop that epitomizes needless e-waste.
Enshittification: Bosch eBike Flow App -- pushing lock-in and digital restrictions that make gear worse over time.
"Who Asked For This?": Bosch Personal AI Barista -- a voice-assistant coffee maker that nobody really wanted.
People's Choice: Lepro Ami AI Companion -- an overhyped "soulmate" cam that creeps more than it comforts.

The message? Not all tech is progress. Some products add needless complexity, threaten privacy, or throw sustainability out the window -- and the industry's watchdogs are calling them out.

IT

Torvalds Tells Kernel Devs To Stop Debating AI Slop - Bad Actors Won't Follow the Rules Anyway (theregister.com) 53

Linus Torvalds has weighed in on an ongoing debate within the Linux kernel development community about whether documentation should explicitly address AI-generated code contributions, and his position is characteristically blunt: stop making it an issue. The Linux creator was responding to Oracle-affiliated kernel developer Lorenzo Stoakes, who had argued that treating LLMs as "just another tool" ignores the threat they pose to kernel quality. "Thinking LLMs are 'just another tool' is to say effectively that the kernel is immune from this," Stoakes wrote.

Torvalds disagreed sharply. "There is zero point in talking about AI slop," he wrote. "Because the AI slop people aren't going to document their patches as such." He called such discussions "pointless posturing" and said that kernel documentation is "for good actors." The exchange comes as a team led by Intel's Dave Hansen works on guidelines for tool-generated contributions. Stoakes had pushed for language letting maintainers reject suspected AI slop outright, arguing the current draft "tries very hard to say 'NOP.'" Torvalds made clear he doesn't want kernel documentation to become a political statement on AI. "I strongly want this to be that 'just a tool' statement," he wrote.
Businesses

Lawsuit Over OpenAI For-Profit Conversion Can Head To Trial, US Judge Says 32

Longtime Slashdot reader schwit1 shares a report from Reuters: Billionaire entrepreneur Elon Musk persuaded a judge on Wednesday to allow a jury trial on his allegations that ChatGPT maker OpenAI violated its founding mission in its high-profile restructuring to a for-profit entity. Musk was a co-founder of OpenAI in 2015 but left in 2018 and now runs an AI company that competes with it.

U.S. District Judge Yvonne Gonzalez Rogers in Oakland, California, said at a hearing that there was "plenty of evidence" suggesting OpenAI's leaders made assurances that its original nonprofit structure was going to be maintained. The judge said there were enough disputed facts to let a jury consider the claims at a trial scheduled for March, rather than decide the issues herself. She said she would issue a written order after the hearing that addresses OpenAI's bid to throw out the case.

[...] Musk contends he contributed about $38 million, roughly 60% of OpenAI's early funding, along with strategic guidance and credibility, based on assurances that the organization would remain a nonprofit dedicated to the public benefit. The lawsuit accuses OpenAI co-founders Sam Altman and Greg Brockman of plotting a for-profit switch to enrich themselves, culminating in multibillion-dollar deals with Microsoft and a recent restructuring.
OpenAI, Altman and Brockman have denied the claims, and they called Musk "a frustrated commercial competitor seeking to slow down a mission-driven market leader."

Microsoft is also a defendant and has urged the judge to toss Musk's lawsuit. A lawyer for Microsoft said there was no evidence that the company "aided and abetted" OpenAI.

OpenAI in a statement after the hearing said: "Mr Musk's lawsuit continues to be baseless and a part of his ongoing pattern of harassment, and we look forward to demonstrating this at trial."
Google

Google Is Adding an 'AI Inbox' To Gmail That Summarizes Emails 46

An anonymous reader quotes a report from Wired: Google is putting even more generative AI tools into Gmail as part of its goal to further personalize user inboxes and streamline searches. On Thursday, the company announced a new "AI Inbox" tab, currently in a beta testing phase, that reads every message in a user's Gmail and suggests a list of to-dos and key topics, based on what it summarizes. In Google's example of what this AI Inbox could look like in Gmail, the new tab takes context from a user's messages and suggests they reschedule their dentist appointment, reply to a request from their child's sports coach, and pay an upcoming fee before the deadline. Also under the AI Inbox tab is a list of important topics worth browsing, nestled beneath the action items at the top. Each suggested to-do and topic links back to the original email for more context and for verification.

[...] For users who are concerned about their privacy, the information Google gleans by skimming through inboxes will not be used to improve the company's foundational AI models. "We didn't just bolt AI onto Gmail," says Blake Barnes, who leads the project for Google. "We built a secure privacy architecture, specifically for this moment." He emphasizes that users can turn off Gmail's new AI tools if they don't want them. At the same time Google announced its AI Inbox, the company made several previously subscriber-only Gemini features free for all Gmail users. These include the Help Me Write tool, which generates emails from a user prompt, as well as AI Overviews for email threads, which essentially posts a TL;DR summary at the top of long message threads. Subscribers to Google's Ultra and Pro plans, which start at $20 a month, get two additional new features in their Gmail inbox: first, an AI proofreading tool that suggests more polished grammar and sentence structure; and second, an AI Overviews tool that can search your whole inbox and create relevant summaries on a topic, rather than just summarizing a single email thread.
AI

Microsoft Turns Copilot Chats Into a Checkout Lane 41

Microsoft is embedding full e-commerce checkout directly into Copilot chats, letting users buy products without ever visiting a retailer's website. "If checkout happens inside AI conversations, retailers risk losing direct customer relationships -- while platforms like Microsoft gain leverage," reports Axios. From the report: Microsoft unveiled new agentic AI tools for retailers at the NRF 2026 retail conference, including Copilot Checkout, which lets shoppers complete purchases inside Copilot without being redirected to a retailer's website. The checkout feature is live in the U.S. with Shopify, PayPal, Stripe and Etsy integrations.

Copilot apps have more than 100 million monthly active users, spanning consumer and commercial audiences, according to the company. More than 800 million monthly active users interact with AI features across Microsoft products more broadly. Shopping journeys involving Copilot are 33% shorter than traditional search paths and see a 53% increase in purchases within 30 minutes of interaction, Microsoft says. When shopping intent is present, journeys involving Copilot are 194% more likely to result in a purchase than those without it.
AI

'The Downside To Using AI for All Those Boring Tasks at Work' (msn.com) 39

The promise of AI-powered workplace tools that sort emails, take meeting notes, and file expense reports is finally delivering meaningful productivity gains -- one software startup reported a 20% boost around mid-2025 -- but companies are discovering an unexpected tradeoff: employees are burning out from the relentless pace of high-level cognitive work.

Roger Kirkness, CEO of 14-person software startup Convictional, noticed that after AI took the scut work off his team's plates, their days became consumed by intensive thinking, and they were mentally exhausted and unproductive by Friday. The company transitioned to a four-day workweek; the same amount of work gets done, Kirkness says.

The underlying problem, according to Boston College economist and sociologist Juliet Schor, is that businesses tend to simply reallocate the time AI saves. Workers who once mentally downshifted for tasks like data entry are now expected to maintain intense focus through longer stretches of data analysis. "If you just make people work at a high-intensity pace with no breaks, you risk crowding out creativity," Schor says.
Television

TV Makers Are Taking AI Too Far (theverge.com) 53

TV manufacturers at CES 2026 in Las Vegas this week unveiled a wave of AI features that frequently consume significant screen space and take considerable time to deliver results -- all while global TV shipments declined 0.6% year over year in Q3, according to Omdia. Google demonstrated Veo generating video from a photo on a television, a process that took about two minutes to produce eight seconds of footage, The Verge writes in a column. Samsung presented a future where viewers ask their sets for sports predictions and recipes to share with kitchen displays. Hisense showed an AI agent that displays real-time stats for every soccer player on screen, a feature requiring so much space the company built a prototype 21:9 aspect ratio display to accommodate it.

Demos repeatedly showed video shrinking to make room for sports scores and information when viewers asked questions -- noticeable on 70-inch displays and likely worse on anything 50 inches or smaller. Amazon's Alexa Plus can jump to Prime Video scenes based on verbal descriptions. LG's sets switch homescreen recommendations based on voice recognition of individual family members.
IT

Tailwind CSS Lets Go of 75% of Engineers After 40% Traffic Drop From Google (seroundtable.com) 31

Adam Wathan, the creator of the popular CSS framework Tailwind CSS, has let go of 75% of his engineering team -- reducing it from four people to one -- because AI-generated search answers have decimated traffic to the project's documentation pages.

Traffic to Tailwind's documentation has fallen roughly 40% since early 2023 despite the framework being more popular than ever, Wathan wrote in a post. The documentation is the primary channel through which developers discover Tailwind's commercial products, and without that traffic the business has struggled to sustain itself; revenue has dropped close to 80%.

The smaller team also means Wathan cannot currently prioritize implementing llms.txt, a proposed standard that would make documentation more accessible to large language models. "Tailwind is growing faster than it ever has and is bigger than it ever has been, and our revenue is down close to 80%," he wrote in the forum post.
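For context, llms.txt (per the llmstxt.org proposal) is simply a markdown file served from a site's root, with a title, a one-line summary, and annotated link lists pointing LLMs at the pages worth ingesting. A hypothetical Tailwind version might look like this (link targets illustrative, not an actual Tailwind file):

```markdown
# Tailwind CSS

> A utility-first CSS framework for building custom designs directly in your markup.

## Docs

- [Installation](https://tailwindcss.com/docs/installation): getting Tailwind running in a project
- [Utility classes](https://tailwindcss.com/docs/styling-with-utility-classes): the core styling model
```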
The Almighty Buck

AI Chip Frenzy To Wallop DRAM Prices With 70% Hike (theregister.com) 92

Samsung Electronics and SK hynix are projected to raise server memory prices by up to 70% in early 2026, according to Korea Economic Daily. "Combined with 50 percent increases in 2025, this could nearly double prices by mid-2026," reports the Register. From the report: The two Korean giants, alongside US-based Micron, dominate global memory production. All three are reallocating advanced manufacturing capacity to high-margin server DRAM and HBM chips for AI infrastructure, squeezing supply for PCs and smartphones. Financial analysts have raised their earnings forecasts for the firms in response, as they look to benefit from the AI infrastructure boom that is driving up prices for everyone else. Taiwan-based market watcher TrendForce reports that conventional DRAM prices already jumped 55-60 percent in a single quarter.

Yet despite the focus on server chips, supply of these components continues to be strained too, with supplier inventories falling and shipment growth reliant on wafer output increases, according to TrendForce. As a result, it forecasts that server DRAM prices will jump by more than 60 percent in the first quarter of 2026. Prior to Christmas, analyst IDC noted the "unprecedented" memory chip shortage and warned this would have knock-on effects for both hardware makers and end users that may persist well into 2027.
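The "nearly double" arithmetic above is worth making explicit. Assuming the hikes compound multiplicatively (an assumption on our part; the increases land in different quarters and product segments), a quick check:

```python
# Compounding the reported DRAM price increases (assumed multiplicative).
base = 1.0
after_2025 = base * 1.5            # "50 percent increases in 2025"
nearly_double = after_2025 * 1.3   # a further ~30% realized hike roughly doubles prices
worst_case = after_2025 * 1.7      # the full 70% hike on top
print(round(nearly_double, 2), round(worst_case, 2))  # 1.95 2.55
```

On these assumptions, the full 70% hike compounded on 2025's increases would be a 2.55x multiple, well beyond doubling; "nearly double" corresponds to a somewhat smaller realized increase.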

The Courts

Google and Character.AI Agree To Settle Lawsuits Over Teen Suicides 36

Google and Character.AI have agreed to settle multiple lawsuits from families alleging the chatbot encouraged self-harm and suicide among teens. "The settlements would mark the first resolutions in the wave of lawsuits against tech companies whose AI chatbots encouraged teens to hurt or kill themselves," notes Axios. From the report: Families allege that Character.AI's chatbot encouraged their children to cut their arms, suggested murdering their parents, wrote sexually explicit messages and did not discourage suicide, per lawsuits and congressional testimony. "Parties have agreed to a mediated settlement in principle to resolve all claims between them in the above-referenced matter," one document filed in U.S. District Court for the Middle District of Florida reads.

The documents do not contain any specific monetary amounts for the settlements. Pricy settlements could deter companies from continuing to offer chatbot products to kids. But without new laws on the books, don't expect major changes across the industry.
Last October, Character.AI said it would bar people under 18 from using its chatbots, in a sweeping move to address concerns over child safety.
AI

OpenAI Launches ChatGPT Health, Encouraging Users To Connect Their Medical Records 38

OpenAI has unveiled ChatGPT Health, a sandboxed health-focused mode that lets users connect medical records and wellness apps for more personalized guidance. The company makes sure to note that ChatGPT Health is "not intended for diagnosis or treatment." The Verge reports: The company is encouraging users to connect their personal medical records and wellness apps, such as Apple Health, Peloton, MyFitnessPal, Weight Watchers, and Function, "to get more personalized, grounded responses to their questions." It suggests connecting medical records so that ChatGPT can analyze lab results, visit summaries, and clinical history; MyFitnessPal and Weight Watchers for food guidance; Apple Health for health and fitness data, including movement, sleep, and activity patterns; and Function for insights into lab tests.

On the medical records front, OpenAI says it's partnered with b.well, which will provide back-end integration for users to upload their medical records, since the company works with about 2.2 million providers. For now, ChatGPT Health requires users to sign up for a waitlist to request access, as it's starting with a beta group of early users, but the product will roll out gradually to all users regardless of subscription tier. [...]

In a blog post, OpenAI wrote that based on its "de-identified analysis of conversations," more than 230 million people around the world already ask ChatGPT questions related to health and wellness each week. OpenAI also said that over the past two years, it's worked with more than 260 physicians to provide feedback on model outputs more than 600,000 times over 30 areas of focus, to help shape the product's responses. "ChatGPT can help you understand recent test results, prepare for appointments with your doctor, get advice on how to approach your diet and workout routine, or understand the tradeoffs of different insurance options based on your healthcare patterns," OpenAI claims in the blog post.
Government

California Lawmaker Proposes a Four-Year Ban On AI Chatbots In Kids' Toys (techcrunch.com) 22

An anonymous reader quotes a report from TechCrunch: Senator Steve Padilla (D-CA) introduced a bill [dubbed SB 867] on Monday that would place a four-year ban on the sale and manufacture of toys with AI chatbot capabilities for kids under 18. The goal is to give safety regulators time to develop regulations to protect children from "dangerous AI interactions."

"Chatbots and other AI tools may become integral parts of our lives in the future, but the dangers they pose now require us to take bold action to protect our children," Senator Padilla said in a statement. "Our safety regulations around this kind of technology are in their infancy and will need to grow as exponentially as the capabilities of this technology do. Pausing the sale of these chatbot-integrated toys allows us time to craft the appropriate safety guidelines and framework for these toys to follow." [...] "Our children cannot be used as lab rats for Big Tech to experiment on," Padilla said.

Robotics

Samsung's Rolling Ballie Robot Indefinitely Shelved After Delays (msn.com) 8

Samsung Electronics has once again sidelined Ballie, a long-anticipated robot that was first announced six years ago but never released. Bloomberg News: The device -- designed to roll and roam throughout the home -- is completely absent from this week's CES, the biggest electronics trade show. And though Samsung said last year that Ballie was nearly ready for a retail release, the product is now unlikely to resurface soon.

In an emailed statement, Samsung referred to Ballie as an "active innovation platform" within the company, rather than a forthcoming consumer device. "After multiple years of real-world testing, it continues to inform how Samsung designs spatially aware, context-driven experiences, particularly in areas like smart home intelligence, ambient AI and privacy-by-design," a Samsung spokesperson said in the statement.
