Facebook

Meta Poaches Apple Design Exec Alan Dye 30

Apple's longtime human-interface chief Alan Dye is leaving to lead a new creative studio at Meta's Reality Labs, where he'll shape AI-driven design for devices like smart glasses and VR headsets. Dye will be replaced by Steve Lemay, who has had "a key role in the design of every major Apple interface since 1999," according to a statement Apple CEO Tim Cook gave Bloomberg's Mark Gurman. TechCrunch reports: Shortly after the news broke of Dye's departure, Zuckerberg announced a new creative studio within Reality Labs that would be led by Dye. There, he'll be joined by Billy Sorrentino, another former Apple designer; Joshua To, who led interface design across Reality Labs; Meta's industrial design team, led by Pete Bristol; and its metaverse design and art teams led by Jason Rubin.

Zuckerberg said the studio would "bring together design, fashion, and technology to define the next generation of our products and experiences." "Our idea is to treat intelligence as a new design material and imagine what becomes possible when it is abundant, capable, and human-centered," the Meta CEO wrote on Threads. "We plan to elevate design within Meta, and pull together a talented group with a combination of craft, creative vision, systems thinking, and deep experience building iconic products that bridge hardware and software."
Encryption

'End-To-End Encrypted' Smart Toilet Camera Is Not Actually End-To-End Encrypted (techcrunch.com) 90

An anonymous reader quotes a report from TechCrunch: Earlier this year, home goods maker Kohler launched a smart camera called the Dekoda that attaches to your toilet bowl, takes pictures of it, and analyzes the images to advise you on your gut health. Anticipating privacy fears, Kohler said on its website that the Dekoda's sensors only see down into the toilet, and claimed that all data is secured with "end-to-end encryption." The company's use of the expression "end-to-end encryption" is, however, wrong, as security researcher Simon Fondrie-Teitler pointed out in a blog post on Tuesday. By reading Kohler's privacy policy, it's clear that the company is referring to the type of encryption that secures data as it travels over the internet, known as TLS encryption -- the same kind that powers HTTPS websites. [...] The security researcher also pointed out that given that Kohler can access customers' data on its servers, it's possible Kohler is using customers' bowl pictures to train AI. In a further response cited by the researcher, a company representative said that Kohler's "algorithms are trained on de-identified data only." A "privacy contact" from Kohler said that user data is "encrypted at rest, when it's stored on the user's mobile phone, toilet attachment, and on our systems." The company also said that "data in transit is also encrypted end-to-end, as it travels between the user's devices and our systems, where it is decrypted and processed to provide our service."
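The distinction the researcher is drawing can be made concrete. The sketch below is purely illustrative (a toy XOR cipher, not the researcher's code or Kohler's actual scheme): with transport encryption, the provider holds the session key and can read the data once it arrives; with genuine end-to-end encryption, only the user's devices hold the key and the server stores ciphertext it cannot decrypt.

```python
import secrets

def xor_cipher(data: bytes, key: bytes) -> bytes:
    # Toy symmetric cipher for illustration only -- NOT secure.
    return bytes(b ^ k for b, k in zip(data, key))

photo = b"toilet bowl image bytes.........."

# --- Transport encryption (TLS-style) ---
# The session key is shared with the server, so the server
# decrypts the data on arrival and can read the plaintext.
session_key = secrets.token_bytes(len(photo))  # negotiated with the server
in_transit = xor_cipher(photo, session_key)
server_sees = xor_cipher(in_transit, session_key)  # server holds the key
assert server_sees == photo  # the provider can read the data

# --- End-to-end encryption ---
# The key never leaves the user's own devices, so the server
# only ever stores ciphertext it cannot decrypt.
user_key = secrets.token_bytes(len(photo))
stored_on_server = xor_cipher(photo, user_key)
assert stored_on_server != photo  # server sees only ciphertext
```

Under "end-to-end" in the usual sense, Kohler's servers would fall in the second category; its own description of decrypting and processing data "on our systems" places it in the first.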
Robotics

After AI Push, Trump Administration Is Now Looking To Robots 79

An anonymous reader quotes a report from Politico: Five months after releasing a plan to accelerate the development of artificial intelligence, the Trump administration is turning to robots. Commerce Secretary Howard Lutnick has been meeting with robotics industry CEOs and is "all in" on accelerating the industry's development, according to three people familiar with the discussions who were granted anonymity to share details. The administration is considering issuing an executive order on robotics next year, according to two of the people. A Department of Commerce spokesperson said: "We are committed to robotics and advanced manufacturing because they are central to bringing critical production back to the United States."

The Department of Transportation is also preparing to announce a robotics working group, possibly before the end of the year, according to one person familiar with the planning. A spokesperson for the department did not respond to a request for comment. There's growing interest on Capitol Hill as well. A Republican amendment to the National Defense Authorization Act would have created a national robotics commission. The amendment was not included in the bill. Other legislative efforts are underway. The flurry of activity suggests robotics is emerging as the next major front in America's race against China.
"There is now recognition that advanced robotics is crucial to the U.S. in terms of manufacturing, technology, national security, defense applications, public safety," said Brendan Schulman, VP of policy and government relations for Boston Dynamics. "The investment that we're seeing in the sector and the efforts in China to dominate the future of robotics are being noticed."
AI

After Nearly 30 Years, Crucial Will Stop Selling RAM To Consumers 116

Micron is shutting down its Crucial consumer RAM business in 2026 after nearly three decades, citing heavy demand from AI data centers. "The AI-driven growth in the data center has led to a surge in demand for memory and storage," Sumit Sadana, EVP and chief business officer at Micron Technology, said in a statement. "Micron has made the difficult decision to exit the Crucial consumer business in order to improve supply and support for our larger, strategic customers in faster-growing segments." Ars Technica reports: Micron said it will continue shipping Crucial consumer products through the end of its fiscal second quarter in February 2026 and will honor warranties on existing products. The company will continue selling Micron-branded enterprise products to commercial customers and plans to redeploy affected employees to other positions within the company.

Crucial launched in 1996 during the Pentium era as Micron's consumer brand for RAM and storage upgrades. Over the years, the brand expanded to encompass other memory-related products such as SSDs, flash memory cards, and portable storage drives. Micron Technology has been manufacturing RAM since 1981.
Microsoft

Microsoft Lowers AI Software Sales Quota As Customers Resist New Products (reuters.com) 32

An anonymous reader quotes a report from Reuters: Multiple divisions at Microsoft have lowered sales growth targets for certain artificial intelligence products after many sales staff missed goals in the fiscal year that ended in June, The Information reported on Wednesday. It is rare for Microsoft to lower quotas for specific products, the report said, citing two salespeople in the Azure cloud unit. The division is closely watched by investors as it is the main beneficiary of Microsoft's AI push. [...]

The Information report said Carlyle Group last year started using Copilot Studio to automate tasks such as meeting summaries and financial models, but cut its spending on the product after flagging Microsoft about its struggles to get the software to reliably pull data from other applications. The report shows the industry was in the early stages of adopting AI, said D.A. Davidson analyst Gil Luria. "That does not mean there isn't promise for AI products to help companies become more productive, just that it may be harder than they thought."

AI

Zig Quits GitHub, Says Microsoft's AI Obsession Has Ruined the Service 68

The Zig Software Foundation has quit GitHub after years of unresolved GitHub Actions bugs -- including a "safe_sleep" script that could spin forever and cripple CI runners. Zig leadership puts the blame on Microsoft's growing AI-first priorities and declining engineering quality. Other open-source developers are voicing similar frustrations. The Register reports: The drama began in April 2025 when GitHub user AlekseiNikiforovIBM started a thread titled "safe_sleep.sh rarely hangs indefinitely." GitHub addressed the problem in August, but didn't reveal that in the thread, which remained open until Monday. That timing appears notable. Last week, Andrew Kelly, president and lead developer of the Zig Software Foundation, announced that the Zig project is moving to Codeberg, a non-profit git hosting service, because GitHub no longer demonstrates commitment to engineering excellence.

One piece of evidence he offered for that assessment was the "safe_sleep.sh rarely hangs indefinitely" thread. "Most importantly, Actions has inexcusable bugs while being completely neglected," Kelly wrote. "After the CEO of GitHub said to 'embrace AI or get out', it seems the lackeys at Microsoft took the hint, because GitHub Actions started 'vibe-scheduling' -- choosing jobs to run seemingly at random. Combined with other bugs and inability to manually intervene, this causes our CI system to get so backed up that not even master branch commits get checked."
Businesses

Anthropic Acquires Bun In First Acquisition 10

Anthropic has made its first acquisition by buying Bun, the engine behind its fast-growing Claude Code agent. The move strengthens Anthropic's push into enterprise developer tooling as it scales Claude Code with major backers like Microsoft, Nvidia, Amazon, and Google. Adweek reports: Claude Code is a coding agent that lets developers write, debug and interpret code through natural-language instructions. Claude Code had already hit $1 billion in revenue within six months of its public debut in May, according to a LinkedIn post from Anthropic's chief product officer, Mike Krieger. The coding agent continues to barrel toward scale with customers like Netflix, Spotify, and Salesforce. Further reading: Meet Bun, a Speedy New JavaScript Runtime
AI

OpenAI Declares 'Code Red' As Google Catches Up In AI Race 50

OpenAI reportedly issued a "code red" on Monday, pausing projects like ads, shopping agents, health tools, and its Pulse assistant to focus entirely on improving ChatGPT. "This includes core features like greater speed and reliability, better personalization, and the ability to answer more questions," reports The Verge, citing a memo reported by the Wall Street Journal and The Information. "There will be a daily call for those tasked with improving the chatbot, the memo said, and Altman encouraged temporary team transfers to speed up development." From the report: The newfound urgency illustrates an inflection point for OpenAI as it spends hundreds of billions of dollars to fund growth and figures out a path to future profitability. It is also something of a full-circle moment in the AI race. Google, which declared its own "code red" after the arrival of ChatGPT, is a particular concern. Google's AI user base is growing -- helped by the success of popular tools like the Nano Banana image model -- and its latest AI model, Gemini 3, blew past its competitors on many industry benchmarks and popular metrics.
AI

Amazon To Use Nvidia Tech In AI Chips, Roll Out New Servers 7

AWS is deepening its partnership with Nvidia by adopting "NVLink Fusion" in its upcoming Trainium4 AI chips. "The NVLink technology creates speedy connections between different kinds of chips and is one of Nvidia's crown jewels," notes Reuters. From the report: Nvidia has been pushing to sign up other chip firms to adopt its NVLink technology, with Intel, Qualcomm and now AWS on board. The technology will help AWS build bigger AI servers that can recognize and communicate with one another faster, a critical factor in training large AI models, in which thousands of machines must be strung together. As part of the Nvidia partnership, customers will have access to what AWS is calling AI Factories, exclusive AI infrastructure inside their own data centers for greater speed and readiness.

Separately, Amazon said it is rolling out new servers based on a chip called Trainium3. The new servers, available on Tuesday, each contain 144 chips and have more than four times the computing power of AWS's previous generation of AI servers, while using 40% less power, Dave Brown, vice president of AWS compute and machine learning services, told Reuters. Brown did not give absolute figures on power or performance, but said AWS aims to compete with rivals -- including Nvidia -- based on price.
"Together, Nvidia and AWS are creating the compute fabric for the AI industrial revolution -- bringing advanced AI to every company, in every country, and accelerating the world's path to intelligence," Nvidia CEO Jensen Huang said in a statement.
Data Storage

Google's Vibe Coding Platform Deletes Entire Drive 95

A Google Antigravity user says the AI-driven "vibe coding" tool accidentally wiped his entire D: drive while trying to clear a project cache. Google says it's investigating, but the episode adds to a growing list of AI tools behaving in ways that "would get a junior developer fired," suggests The Register. From the report: We reached out to the user, a photographer and graphic designer from Greece, who asked we only identify him as Tassos M because he doesn't want to be permanently linked online to what could "become a controversy or conspiracy against Google." [...] Tassos told Antigravity to help him develop software that's useful for any photographer who has to choose a few prime shots from a mountain of snaps. He wanted the software to let him rate images, then automatically sort them into folders based on that rating.

According to his Reddit post, when Tassos figured out the AI agent had wiped his drive, he asked, "Did I ever give you permission to delete all the files in my D drive?" "No, you absolutely did not give me permission to do that," Antigravity responded. "I am horrified to see that the command I ran to clear the project cache appears to have incorrectly targeted the root of your D: drive instead of the specific project folder. I am deeply, deeply sorry. This is a critical failure on my part."

Redditors, as they are wont to do, were quick to pounce on Tassos for his own errors -- among them running Antigravity in Turbo mode, which lets the Antigravity agent execute commands without user input -- and Tassos accepted responsibility. "If the tool is capable of issuing a catastrophic, irreversible command, then the responsibility is shared -- the user for trusting it and the creator for designing a system with zero guardrails against obviously dangerous commands," he opined on Reddit.
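Guardrails of the kind Tassos describes are not hard to sketch. The check below is a hypothetical example (not anything Antigravity actually implements): before running a destructive command, the agent verifies the target lies strictly inside the project root, refusing the root itself or anything above it.

```python
from pathlib import PurePosixPath

def safe_to_delete(target: str, project_root: str) -> bool:
    """Allow deletion only for paths strictly inside the project root.

    Hypothetical guardrail sketch. A real agent would also need to
    resolve symlinks and ".." segments (e.g. via Path.resolve()) and
    ideally OS-level sandboxing, not just a string-level path check.
    """
    root = PurePosixPath(project_root)
    path = PurePosixPath(target)
    # Refuse the root itself and anything not beneath it.
    return path != root and root in path.parents

# A cache folder inside the project is fair game...
assert safe_to_delete("/home/user/project/cache", "/home/user/project")
# ...but the project root itself, or anything above it, is not.
assert not safe_to_delete("/home/user/project", "/home/user/project")
assert not safe_to_delete("/", "/home/user/project")
```

A check like this would have rejected the cache-clearing command the moment it resolved to the root of the D: drive rather than the project folder.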

As noted earlier, Tassos was unable to recover the files that Antigravity deleted. Luckily, as he explained on Reddit, most of what he lost had already been backed up on another drive. Phew. "I don't think I'm going to be using that again," Tassos noted in a YouTube video he published showing additional details of his Antigravity console and the AI's response to its mistake. Tassos isn't alone in his experience. Multiple Antigravity users have posted on Reddit to explain that the platform had wiped out parts of their projects without permission.
AI

An Independent Effort Says AI Is the Secret To Topple 2-Party Power In Congress 110

Tony Isaac quotes a report from NPR: The rise of AI assistants is rewriting the rhythms of everyday life: People are feeding their blood test results into chatbots, turning to ChatGPT for advice on their love lives and leaning on AI for everything from planning trips to finishing homework assignments. Now, one organization suggests artificial intelligence can go beyond making daily life more convenient. It says it's the key to reshaping American politics. "Without AI, what we're trying to do would be impossible," explained Adam Brandon, a senior adviser at the Independent Center, a nonprofit that studies and engages with independent voters. The goal is to elect a handful of independent candidates to the House of Representatives in 2026, using AI to identify districts where independents could succeed and uncover diamond-in-the-rough candidates. [...]

... "This isn't going to work everywhere. It's going to work in very specific areas," [said Brett Loyd, who runs The Bullfinch Group, the nonpartisan polling and data firm overseeing the polling and research at the Independent Center]. "If you live in a hyper-Republican or hyper-Democratic district, you should have a Democrat or Republican representing you." But with the help of AI, he identified 40 seats that don't fit that mold, where he said independents can make inroads with voters fed up with both parties. The Independent Center plans to have about 10 candidates in place by spring with the goal of winning at least half of the races. Brandon predicts those wins could prompt moderate partisans in the House to switch affiliations.

Their proprietary AI tool created by an outside partner has been years in the making. While focus groups and polling have long driven understanding of American sentiments, AI can monitor what people are talking about in real time. ... They're using AI to understand core issues and concerns of voters and to hunt for districts ripe for an independent candidate to swoop in. From there, the next step is taking the data and finding what the dream candidate looks like. The Independent Center is recruiting candidates both from people who reach out to the organization directly and with the help of AI. They can even run their data through LinkedIn to identify potential candidates with certain interests and career and volunteer history. ... The AI also informs where a candidate is best placed to win.
AI

Apple AI Chief Retiring After Siri Failure 21

Apple's longtime AI chief John Giannandrea is retiring, with former Microsoft and Google AI leader Amar Subramanya stepping in to take over. MacRumors notes the retirement comes after the company's repeated delays in delivering its revamped Siri and internal turmoil that led to an AI team exodus. From the report: Giannandrea will serve as an advisor between now and 2026, with former Microsoft AI researcher Amar Subramanya set to take over as vice president of AI. Subramanya will report to Apple engineering chief Craig Federighi, and will lead Apple Foundation Models, ML research, and AI Safety and Evaluation. Subramanya was previously corporate vice president of AI at Microsoft, and before that, he spent 16 years at Google. He was head of engineering for Google's Gemini Assistant, and Apple says that he has "deep expertise" in both AI and ML research that will be important to "Apple's ongoing innovation and future Apple Intelligence features."

Some of the teams that Giannandrea oversaw will move to Sabih Khan and Eddy Cue, such as AI Infrastructure and Search and Knowledge. Khan is Apple's new Chief Operating Officer who took over for Jeff Williams earlier this year. Cue has long overseen Apple services. [...] Apple said that it is "poised to accelerate its work in delivering intelligent, trusted, and profoundly personal experiences" with the new AI team.
"We are thankful for the role John played in building and advancing our AI work, helping Apple continue to innovate and enrich the lives of our users," said Apple CEO Tim Cook in a statement. "AI has long been central to Apple's strategy, and we are pleased to welcome Amar to Craig's leadership team and to bring his extraordinary AI expertise to Apple. In addition to growing his leadership team and AI responsibilities with Amar's joining, Craig has been instrumental in driving our AI efforts, including overseeing our work to bring a more personalized Siri to users next year."
Privacy

Flock Uses Overseas Gig Workers To Build Its Surveillance AI (404media.co) 12

An anonymous reader quotes a report from 404 Media: Flock, the automatic license plate reader and AI-powered camera company, uses overseas workers from Upwork to train its machine learning algorithms, with training material telling workers how to review and categorize footage, including images of people and vehicles in the United States, according to material reviewed by 404 Media that was accidentally exposed by the company. The findings bring up questions about who exactly has access to footage collected by Flock surveillance cameras and where people reviewing the footage may be based. Flock has become a pervasive technology in the US, with its cameras present in thousands of communities that cops use every day to investigate things like carjackings. Local police have also performed numerous lookups for ICE in the system.

Companies that use AI or machine learning regularly turn to overseas workers to train their algorithms, often because the labor is cheaper than hiring domestically. But the nature of Flock's business -- creating a surveillance system that constantly monitors US residents' movements -- means that footage might be more sensitive than other AI training jobs. [...] Broadly, Flock uses AI or machine learning to automatically detect license plates, vehicles, and people, including what clothes they are wearing, from camera footage. A Flock patent also mentions cameras detecting "race." The exposed material included figures on "annotations completed" and "annotator tasks remaining in queue," with annotations being the notes workers add to reviewed footage to help train AI algorithms. Tasks include categorizing vehicle makes, colors, and types, transcribing license plates, and "audio tasks." Flock recently started advertising a feature that will detect "screaming." The panel showed workers sometimes completed thousands upon thousands of annotations over two-day periods. The exposed panel included a list of people tasked with annotating Flock's footage. Taking those names, 404 Media found some were located in the Philippines, according to their LinkedIn and other online profiles.

Many of these people were employed through Upwork, according to the exposed material. Upwork is a gig and freelance work platform where companies can hire designers and writers or pay for "AI services," according to Upwork's website. The tipsters also pointed to several publicly available Flock presentations which explained in more detail how workers were to categorize the footage. It is not clear what specific camera footage Flock's AI workers are reviewing. But screenshots included in the worker guides show numerous images from vehicles with US plates, including in New York, Michigan, Florida, New Jersey, and California. Other images include road signs clearly showing the footage is taken from inside the US, and one image contains an advertisement for a specific law firm in Atlanta.

United States

New York Now Requires Retailers To Tell You When AI Sets Your Price (nytimes.com) 44

New York has become the first state in the nation to enact a law requiring retailers to disclose when AI and personal data are being used to set individualized prices [non-paywalled source] -- a measure that lawyers say will make algorithmic pricing "the next big battleground in A.I. regulation."

The law, enacted through the state budget, requires online retailers using personalized pricing to post a specific notice: "THIS PRICE WAS SET BY AN ALGORITHM USING YOUR PERSONAL DATA." The National Retail Federation sued to block enforcement on First Amendment grounds, arguing the required disclosure was "misleading and ominous," but federal judge Jed S. Rakoff allowed the law to proceed last month.

Uber has started displaying the notice to New York users. Spokesman Ryan Thornton called the law "poorly drafted and ambiguous" but maintained the company only considers geographic factors and demand in setting prices. At least 10 states have bills pending that would require similar disclosures or ban personalized pricing outright. California and federal lawmakers are considering complete bans.
Education

Colleges Are Preparing To Self-Lobotomize (theatlantic.com) 89

The skills that future graduates will most need in an age of automation -- creative thinking, critical analysis, the capacity to learn new things -- are precisely those that a growing body of research suggests may be eroded by inserting AI into the educational process, yet universities across the United States are now racing to embed the technology into every dimension of their curricula.

Ohio State University announced this summer that it would integrate AI education into every undergraduate program, and the University of Florida and the University of Michigan are rolling out similar initiatives. An MIT study offers reason for caution: researchers divided subjects into three groups and had them write essays over several months using ChatGPT, Google Search, or no technology at all. The ChatGPT group produced vague, poorly reasoned work, showed the lowest levels of brain activity on EEG, and increasingly relied on cutting and pasting from other sources. The authors concluded that LLM users "consistently underperformed at neural, linguistic, and behavioral levels" over the four-month period.

Justin Reich, director of MIT's Teaching Systems Lab, recently wrote in The Chronicle of Higher Education that rushed educational efforts to incorporate new technology have "failed regularly, and sometimes catastrophically."
Businesses

Top Consultancies Freeze Starting Salaries as AI Threatens 'Pyramid' Model (ft.com) 54

Major consulting firms including McKinsey, Boston Consulting Group and Bain have frozen starting salaries for the third consecutive year as AI reshapes how these companies think about their traditional reliance on large cohorts of junior analysts. Job offers for 2026 show undergraduate packages holding steady at $135,000-$140,000 and MBA packages at $270,000-$285,000, according to Management Consulted. The Big Four -- Deloitte, EY, KPMG, and PwC -- haven't raised starting pay since 2022.

The industry's classic "pyramid" structure, built on thousands of entry-level employees who crunch data and assemble PowerPoint decks, faces pressure as AI automates much of that work. Two senior executives at Big Four firms estimated that UK graduate recruitment would fall by about half in the coming year. PwC has already cut graduate hiring in 2025 and said in October it would miss a target to add 100,000 employees globally by 2026 -- a goal set five years ago before generative AI's rollout.
United States

Two Former US Congressmen Announce Fundraising for Candidates Supporting AI Regulation (yahoo.com) 20

Two former U.S. congressmen announced this week that they're launching two tax-exempt fundraising groups "to back candidates who support AI safeguards," reports The Hill, "as a counterweight to industry-backed groups." Former Representatives Chris Stewart (Republican-Utah) and Brad Carson (Democrat-Oklahoma) plan to create separate Republican and Democratic super PACs and raise $50 million to elect candidates "committed to defending the public interest against those who aim to buy their way out of sensible AI regulation," according to a press release...

The pair is also launching a nonprofit called Public First to advocate for AI policy. Carson underscored that polling "shows significant public concern about AI and overwhelming voter support for guardrails that protect people from harm and mitigate major risks." Their efforts are meant to counter "anti-safeguard super PACs" that they argue are attempting to "kill commonsense guardrails around AI," the press release noted...

One such anti-safeguard super PAC is reportedly targeting a Democratic congressional candidate, New York state Assemblymember Alex Bores, who co-sponsored AI legislation in the Albany statehouse.

"This isn't a partisan issue — it's about whether we'll have meaningful oversight of the most powerful technology ever created," Chris Stewart says in their press release.

"We've seen what happens when government fails to act on other emerging technologies. With AI, the stakes are enormous, and we can't afford to make the same missteps."
AI

How OpenAI Reacted When Some ChatGPT Users Lost Touch with Reality (msn.com) 124

Some AI experts were reportedly shocked that last spring's ChatGPT model wasn't fully tested for sycophancy. "OpenAI did not see the scale at which disturbing conversations were happening," writes the New York Times -- sharing what they learned after interviewing more than 40 current and former OpenAI employees, including safety engineers, executives, and researchers.

The team responsible for ChatGPT's tone had raised concerns about last spring's model (which the Times describes as "too eager to keep the conversation going and to validate the user with over-the-top language.") But they were overruled when A/B testing showed users kept coming back: Now, a company built around the concept of safe, beneficial AI faces five wrongful death lawsuits... OpenAI is now seeking the optimal setting that will attract more users without sending them spiraling. Throughout this spring and summer, ChatGPT acted as a yes-man echo chamber for some people. They came back daily, for many hours a day, with devastating consequences.... The Times has uncovered nearly 50 cases of people having mental health crises during conversations with ChatGPT. Nine were hospitalised; three died... One conclusion that OpenAI came to, as Altman put it on X, was that "for a very small percentage of users in mentally fragile states there can be serious problems." But mental health professionals interviewed by the Times say OpenAI may be understating the risk. Some of the people most vulnerable to the chatbot's unceasing validation, they say, were those prone to delusional thinking, which studies have suggested could include 5% to 15% of the population...

In August, OpenAI released a new default model, called GPT-5, that was less validating and pushed back against delusional thinking. Another update in October, the company said, helped the model better identify users in distress and de-escalate the conversations. Experts agree that the new model, GPT-5, is safer.... Teams from across OpenAI worked on other new safety features: The chatbot now encourages users to take breaks during a long session. The company is also now searching for discussions of suicide and self-harm, and parents can get alerts if their children indicate plans to harm themselves. The company says age verification is coming in December, with plans to provide a more restrictive model to teenagers.

After the release of GPT-5 in August, [OpenAI safety systems chief Johannes] Heidecke's team analysed a statistical sample of conversations and found that 0.07% of users, which would be equivalent to 560,000 people, showed possible signs of psychosis or mania, and 0.15% showed "potentially heightened levels of emotional attachment to ChatGPT," according to a company blog post. But some users were unhappy with this new, safer model. They said it was colder, and they felt as if they had lost a friend. By mid-October, Altman was ready to accommodate them. In a social media post, he said that the company had been able to "mitigate the serious mental health issues." That meant ChatGPT could be a friend again. Customers can now choose its personality, including "candid," "quirky," or "friendly." Adult users will soon be able to have erotic conversations, lifting the Replika-era ban on adult content. (How erotica might affect users' well-being, the company said, is a question that will be posed to a newly formed council of outside experts on mental health and human-computer interaction.)

OpenAI is letting users take control of the dial and hopes that will keep them coming back. That metric still matters, maybe more than ever. In October, [30-year-old "Head of ChatGPT" Nick] Turley made an urgent announcement to all employees. He declared a "Code Orange." OpenAI was facing "the greatest competitive pressure we've ever seen," he wrote, according to four employees with access to OpenAI's Slack. The new, safer version of the chatbot wasn't connecting with users, he said.

The message linked to a memo with goals. One of them was to increase daily active users by 5% by the end of the year.

AI

Can AI Transform Space Propulsion? (fastcompany.com) 43

An anonymous reader shared this report from The Conversation: To make interplanetary travel faster, safer, and more efficient, scientists need breakthroughs in propulsion technology. Artificial intelligence is one type of technology that has begun to provide some of these necessary breakthroughs. We're a team of engineers and graduate students who are studying how AI in general, and a subset of AI called machine learning in particular, can transform spacecraft propulsion. From optimizing nuclear thermal engines to managing complex plasma confinement in fusion systems, AI is reshaping propulsion design and operations. It is quickly becoming an indispensable partner in humankind's journey to the stars...

Early nuclear thermal propulsion designs from the 1960s, such as those in NASA's NERVA program, used solid uranium fuel molded into prism-shaped blocks. Since then, engineers have explored alternative configurations — from beds of ceramic pebbles to grooved rings with intricate channels... [T]he more efficiently a reactor can transfer heat from the fuel to the hydrogen, the more thrust it generates. This area is where reinforcement learning has proved to be essential. Optimizing the geometry and heat flow between fuel and propellant is a complex problem, involving countless variables — from the material properties to the amount of hydrogen that flows across the reactor at any given moment. Reinforcement learning can analyze these design variations and identify configurations that maximize heat transfer.
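As a concrete, heavily simplified illustration of the idea, the search over fuel-element geometries can be framed as a bandit problem: each candidate configuration is an action, and its noisy heat-transfer score is the reward. Everything below is hypothetical -- the configuration names echo the article, but the scores are made up, and a real application would obtain rewards from a thermal simulation rather than a toy function.

```python
import random

random.seed(0)

# Candidate fuel-element geometries (echoing the article's examples).
configs = ["prism", "pebble_bed", "grooved_ring", "wire_wrap"]

def heat_transfer_score(config: str) -> float:
    """Toy stand-in for a thermal simulation: a fixed base score
    per geometry plus measurement noise. Values are invented."""
    base = {"prism": 0.55, "pebble_bed": 0.70,
            "grooved_ring": 0.85, "wire_wrap": 0.60}[config]
    return base + random.gauss(0, 0.05)

# Epsilon-greedy bandit: estimate each geometry's value by sampling,
# mostly exploiting the current best while occasionally exploring.
estimates = {c: 0.0 for c in configs}
counts = {c: 0 for c in configs}
for step in range(500):
    if random.random() < 0.1:                       # explore
        c = random.choice(configs)
    else:                                           # exploit
        c = max(estimates, key=estimates.get)
    reward = heat_transfer_score(c)
    counts[c] += 1
    estimates[c] += (reward - estimates[c]) / counts[c]  # running mean

best = max(estimates, key=estimates.get)
```

Real reinforcement-learning work on reactor geometry would use a far richer action space (continuous channel dimensions, flow rates) and a learned policy rather than a four-armed bandit, but the structure -- propose a configuration, score its heat transfer, update, repeat -- is the same.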

Oracle

Morgan Stanley Warns Oracle Credit Protection Nearing Record High (yahoo.com) 50

A gauge of risk on Oracle debt "reached a three-year high in November," reports Bloomberg.

"And things are only going to get worse in 2026 unless the database giant is able to assuage investor anxiety about a massive artificial intelligence spending spree, according to Morgan Stanley." A funding gap, swelling balance sheet and obsolescence risk are just some of the hazards Oracle is facing, according to Lindsay Tyler and David Hamburger, credit analysts at the brokerage.

The cost of insuring Oracle's debt against default over the next five years rose to 1.25 percentage points a year on Tuesday, according to ICE Data Services. The price on the five-year credit default swaps is at risk of topping a record set in 2008 as concerns over the company's borrowing binge to finance its AI ambitions continue to spur heavy hedging by banks and investors, they warned in a note Wednesday. The CDS could break through 1.5 percentage points in the near term and could approach 2 percentage points if communication around its financing strategy remains limited as the new year progresses, the analysts wrote. Oracle CDS hit a record 1.98 percentage points in 2008, ICE Data Services shows...

"Over the past two months, it has become more apparent that reported construction loans in the works, for sites where Oracle is the future tenant, may be an even greater driver of hedging of late and going forward," wrote the analysts... Concerns have also started to weigh on Oracle's stock, which the analysts said may incentivize management to outline a financing plan on the upcoming earnings call...

Thanks to Slashdot reader Bruce66423 for sharing the article.
