Privacy

TikTok Is Now Collecting Even More Data About Its Users (wired.com)

An anonymous reader quotes a report from Wired: When TikTok users in the U.S. opened the app today, they were greeted with a pop-up asking them to agree to the social media platform's new terms of service and privacy policy before they could resume scrolling. These changes are part of TikTok's transition to new ownership. In order to continue operating in the U.S., TikTok was compelled by the U.S. government to transition from Chinese control to a new, American-majority corporate entity. Called TikTok USDS Joint Venture LLC, the new entity is made up of a group of investors that includes the software company Oracle. It's easy to tap "agree" and keep on scrolling through videos on TikTok, so users might not fully understand the extent of changes they are agreeing to with this pop-up.

Now that it's under U.S.-based ownership, TikTok potentially collects more detailed information about its users, including precise location data. Here are the three biggest changes to TikTok's privacy policy that users should know about. TikTok's change in location tracking is one of the most notable updates in this new privacy policy. Before this update, the app did not collect the precise, GPS-derived location data of U.S. users. Now, if you give TikTok permission to use your phone's location services, the app may collect granular information about your exact whereabouts. Similar kinds of precise location data are also tracked by other social media apps, like Instagram and X.

[...] Rather than an adjustment, TikTok's policy on AI interactions adds a new topic to the privacy policy document. Now, users' interactions with any of TikTok's AI tools explicitly fall under data that the service may collect and store. This includes any prompts as well as the AI-generated outputs. The metadata attached to your interactions with AI tools may also be automatically logged. [...] This change to TikTok's privacy policy may not be as immediately noticeable to users, but it will likely have an impact on the types of ads you see outside of TikTok. So, rather than just using your collected data to target you while using the app, TikTok may now further leverage that info to serve you more relevant ads wherever you go online. As part of this advertising change, TikTok also now explicitly mentions publishers as one kind of partner the platform works with to get new data.

Businesses

Toilet Maker Toto's Shares Get Unlikely Boost From AI Rush (yahoo.com)

An anonymous reader shares a report: Shares of Japanese toilet maker Toto gained the most in five years after booming memory demand fueled expectations of growth in its little-known chipmaking materials operations. The stock surged as much as 11%, its steepest rise since February 2021, after Goldman Sachs analysts said Toto's electrostatic chucks used in NAND chipmaking will likely benefit from an AI infrastructure buildout that's tightening supplies of both high-end and commodity memory.

[...] Known for its heated toilet seats, the maker of washlets has for decades been part of the semiconductor and display supply chain via its advanced ceramic parts and films. Its electrostatic chucks -- which it began mass producing in 1988 -- are used to hold silicon wafers in place during chipmaking while helping to control temperature and contamination, according to the company. The company's new domain business accounted for 42% of its total operating income in the fiscal year ended March 2025, Bloomberg-compiled data show.

Businesses

The Great Graduate Job Drought (ft.com)

Global hiring remains 20% below pre-pandemic levels and job switching has hit a 10-year low, according to a LinkedIn report, and new university graduates are bearing the brunt of a labor market that increasingly favors experienced candidates over fresh talent.

In the UK, the Institute of Student Employers found that graduate hiring fell 8% in the last academic year and employers now receive 140 applications for each vacancy, up from 86 per vacancy in 2022-23. US data from the New York Federal Reserve shows unemployment among recent college graduates aged 22-27 stands at 5.8% versus 4.1% for all workers.

Recruiter Reed had 180,000 graduate job postings in 2021 but only 55,000 in 2024. In a survey of Reed clients last year, 15% said they had reduced hiring because of AI. London mayor Sadiq Khan said the capital will be "at the sharpest edge" of AI-driven changes and that entry-level jobs will be first to go.
AI

When Two Years of Academic Work Vanished With a Single Click (nature.com)

Marcel Bucher, a professor of plant sciences at the University of Cologne in Germany, lost two years of carefully structured academic work in an instant when he temporarily disabled ChatGPT's "data consent" option in August to test whether the AI tool's functions would still work without providing OpenAI his data. All his chats were permanently deleted and his project folders emptied without any warning or undo option, he wrote in a post on Nature.

Bucher, a ChatGPT Plus subscriber paying $20 per month, had used the platform daily to draft grant applications, prepare teaching materials, revise publication drafts and create exams. He contacted OpenAI support, first receiving responses from an AI agent before a human employee confirmed the data was permanently lost and unrecoverable. OpenAI cited "privacy by design" as the reason, telling Nature it does provide a confirmation prompt before users permanently delete a chat but maintains no backups.

Bucher said he had saved partial copies of some materials, but the underlying prompts, iterations, and project folders -- what he describes as the intellectual scaffolding behind his finished work -- are gone forever.
AI

Anthropic's AI Keeps Passing Its Own Company's Job Interview (anthropic.com)

Anthropic has a problem that most companies would envy: its AI model keeps getting so good, the company wrote in a blog post, that it passes the company's own hiring test for performance engineers. The test, designed in late 2023 by optimization lead Tristan Hume, asks candidates to speed up code running on a simulated computer chip. Over 1,000 people have taken it, and dozens now work at Anthropic. But Claude Opus 4 outperformed most human applicants.

Hume redesigned the test, making it harder. Then Claude Opus 4.5 matched even the best human scores within the two-hour time limit. For his third attempt, Hume abandoned realistic problems entirely and switched to abstract puzzles using a strange, minimal programming language -- something weird enough that Claude struggles with it. Anthropic is now releasing the original test as an open challenge. Beat Claude's best score and ... they want to hear from you.
AI

AI Boosts Research Careers But Flattens Scientific Discovery (ieee.org)

Ancient Slashdot reader erice shares the findings from a recent study showing that while AI helped researchers publish more often and boosted their careers, the resulting papers were, on average, less useful. "You have this conflict between individual incentives and science as a whole," says James Evans, a sociologist at the University of Chicago who led the study. From a recent IEEE Spectrum article: To quantify the effect, Evans and collaborators from the Beijing National Research Center for Information Science and Technology trained a natural language processing model to identify AI-augmented research across six natural science disciplines. Their dataset included 41.3 million English-language papers published between 1980 and 2025 in biology, chemistry, physics, medicine, materials science, and geology. They excluded fields such as computer science and mathematics that focus on developing AI methods themselves. The researchers traced the careers of individual scientists, examined how their papers accumulated attention, and zoomed out to consider how entire fields clustered or dispersed intellectually over time. They compared roughly 311,000 papers that incorporated AI in some way -- through the use of neural networks or large language models, for example -- with millions of others that did not.

The results revealed a striking trade-off. Scientists who adopt AI gain productivity and visibility: On average, they publish three times as many papers, receive nearly five times as many citations, and become team leaders a year or two earlier than those who do not. But when those papers are mapped in a high-dimensional "knowledge space," AI-heavy research occupies a smaller intellectual footprint, clusters more tightly around popular, data-rich problems, and generates weaker networks of follow-on engagement between studies. The pattern held across decades of AI development, spanning early machine learning, the rise of deep learning, and the current wave of generative AI. "If anything," Evans notes, "it's intensifying." [...] Aside from recent publishing distortions, Evans's analysis suggests that AI is largely automating the most tractable parts of science rather than expanding its frontiers.

AI

South Korea Launches Landmark Laws To Regulate AI

An anonymous reader quotes a report from the Korea Herald: South Korea will begin enforcing its Artificial Intelligence Act on Thursday, becoming the first country to formally establish safety requirements for high-performance, or so-called frontier, AI systems -- a move that sets the country apart in the global regulatory landscape. According to the Ministry of Science and ICT, the new law is designed primarily to foster growth in the domestic AI sector, while also introducing baseline safeguards to address potential risks posed by increasingly powerful AI technologies. Officials described the inclusion of legal safety obligations for frontier AI as a world-first legislative step.

The act lays the groundwork for a national-level AI policy framework. It establishes a central decision-making body -- the Presidential Council on National Artificial Intelligence Strategy -- and creates a legal foundation for an AI Safety Institute that will oversee safety and trust-related assessments. The law also outlines wide-ranging support measures, including research and development, data infrastructure, talent training, startup assistance, and help with overseas expansion.

To reduce the initial burden on businesses, the government plans to implement a grace period of at least one year. During this time, it will not carry out fact-finding investigations or impose administrative sanctions. Instead, the focus will be on consultations and education. A dedicated AI Act support desk will help companies determine whether their systems fall within the law's scope and how to respond accordingly. Officials noted that the grace period may be extended depending on how international standards and market conditions evolve. The law applies to three areas only: high-impact AI, safety obligations for high-performance AI and transparency requirements for generative AI.

Enforcement under the Korean law is intentionally light. It does not impose criminal penalties. Instead, it prioritizes corrective orders for noncompliance, with fines -- capped at 30 million won ($20,300) -- issued only if those orders are ignored. This, the government says, reflects a compliance-oriented approach rather than a punitive one. Transparency obligations for generative AI largely align with those in the EU, but Korea applies them more narrowly. Content that could be mistaken for real, such as deepfake images, video or audio, must clearly disclose its AI-generated origin. For other types of AI-generated content, invisible labeling via metadata is allowed. Personal or noncommercial use of generative AI is excluded from regulation.

"This is not about boasting that we are the first in the world," said Kim Kyeong-man, deputy minister of the office of artificial intelligence policy at the ICT ministry. "We're approaching this from the most basic level of global consensus."

Korea's approach differs from the EU by defining "high-performance AI" using technical thresholds like cumulative training compute, rather than regulating based on how AI is used. As a result, Korea believes no current models meet the bar for regulation, while the EU is phasing in broader, use-based AI rules over several years.
AI

Intel Struggles To Meet AI Data Center Demand

Intel says it struggled to satisfy demand for its AI data-center CPUs while new PC chips squeeze margins. CEO Lip-Bu Tan framed the turnaround as supply-constrained, not demand-constrained, with manufacturing yields on its 18A process improving but still below targets. Reuters reports: The forecast underscores the difficulties faced by Intel in predicting global chip markets, where the company's current products are the result of decisions made years ago. The company, whose shares have risen 40% in the past month, recently launched a long-awaited laptop chip designed to reclaim its lead in personal computers just as a memory chip crunch is expected to depress sales across that industry.

Meanwhile, Intel executives said the company was caught off guard by surging demand for server central processors that accompany AI chips. Despite running its factories at capacity, Intel cannot keep up with demand for the chips, leaving profitable data center sales on the table while the new PC chip squeezes its margins.

"In the short term, I'm disappointed that we are not able to fully meet the demand in our markets," Chief Executive Officer Lip-Bu Tan told analysts on a conference call. The company forecast current-quarter revenue between $11.7 billion and $12.7 billion, compared with analysts' average estimate of $12.51 billion, according to data compiled by LSEG. It expects adjusted earnings per share to break even in the first quarter, compared with expectations of adjusted earnings of 5 cents per share.
EU

EU Parliament Calls For Detachment From US Tech Giants (heise.de)

The European Parliament is calling on the European Commission to reduce dependence on U.S. tech giants by prioritizing EU-based cloud, AI, and open-source infrastructure. The report frames "European Tech First," public procurement reform, and Public Money, Public Code as necessary self-defense against growing U.S. control over critical digital infrastructure. Heise reports: In terms of content, the report focuses on a strategic reorientation of public procurement and infrastructure. The compromise line adopted stipulates that member states may favor European tech providers in strategic sectors in order to systematically strengthen the Community's technological capacity. The Greens had called for even stricter rules, under which the use of products "Made in EU" would become the default and exceptions would have to be explicitly justified. They also pushed for a definition of cloud infrastructure that guarantees full EU jurisdiction, free of dependencies on third countries.

With the decision, the MEPs want to lay the foundation for a European digital public infrastructure based on open standards and interoperability. The principle of Public Money, Public Code is anchored as a strategic foundation for reducing dependence on individual providers: software developed for public administration with tax money should be made available to everyone under free licenses. To finance this, Parliament is counting on expanded public-private investment. A "European Sovereign Tech Fund" endowed with ten billion euros had been floated beforehand, for example, to build strategic infrastructure that the market does not provide on its own. The shadow rapporteur for the Greens, Alexandra Geese, sees Europe as ready to take control of its digital future with the vote: as long as European data is held by US providers subject to laws such as the Cloud Act, she argues, security in Europe is not guaranteed.

Microsoft

The Microsoft-OpenAI Files

Longtime Slashdot reader theodp writes: GeekWire takes a look at AI's defining alliance in The Microsoft-OpenAI Files, an epic story drawn from 200+ documents, many made public Friday in Elon Musk's ongoing suit accusing OpenAI and its CEO Sam Altman of abandoning the nonprofit mission (Microsoft is also a defendant). Musk, who was an OpenAI co-founder, is seeking up to $134 billion in damages. "Previously undisclosed emails, messages, slide decks, reports, and deposition transcripts reveal how Microsoft pursued, rebuffed, and backed OpenAI at various moments over the past decade, ultimately shaping the course of the lab that launched the generative AI era," reports GeekWire. "The latest round of documents, filed as exhibits in Musk's lawsuit, [...] show how Nadella and Microsoft's senior leadership team rally in a crisis, maneuver against rivals such as Google and Amazon, and talk about deals in private."

Even though Microsoft didn't have a seat on the OpenAI board, text messages between Microsoft CEO Satya Nadella and OpenAI CEO Sam Altman following Altman's firing as CEO in Nov. 2023 (news of which sent Microsoft's stock plummeting), revealed in the latest filings, show just how influential Microsoft was. A day after Altman's firing, Nadella sent Altman a detailed message from Brad Smith, Microsoft's president and top lawyer, explaining that Microsoft had created a new subsidiary called Microsoft RAI (Responsible Artificial Intelligence) Inc. from scratch -- legal work done, papers ready to file as soon as the WA Secretary of State opened Monday morning -- and was ready to capitalize and operationalize it to "support Sam in whatever way is needed," including absorbing the OpenAI team at a calculated cost of roughly $25 billion. (Altman's reply: "kk"). Just days later, as he planned his return as CEO to the now-reeling-from-Microsoft-punches nonprofit, Altman joined Microsoft's Nadella, Smith, and CTO Kevin Scott in a text messaging thread in which the four vetted prospective board members to replace those who had ousted Altman. Later that night, OpenAI announced Altman's return with the newly constituted board.

If you like stories with happy Microsoft endings: as part of an agreement clearing the way for OpenAI to restructure as a for-profit business, Microsoft in October received a 27% ownership stake in OpenAI worth approximately $135 billion and retains access to the AI startup's technology until 2032, including models that achieve AGI.
Education

Google Begins Offering Free SAT Practice Tests Powered By Gemini (arstechnica.com)

An anonymous reader quotes a report from Ars Technica: It's no secret that students worldwide use AI chatbots to do their homework and avoid learning things. On the flip side, students can also use AI as a tool to beef up their knowledge and plan for the future with flashcards or study guides. Google hopes its latest Gemini feature will help with the latter. The company has announced that Gemini can now create free SAT practice tests and coach students to help them get higher scores. As a standardized test, the content of the SAT follows a predictable pattern. So there's no need to use a lengthy, personalized prompt to get Gemini going. Just say something like, "I want to take a practice SAT test," and the chatbot will generate one complete with clickable buttons, graphs, and score analysis.

Of course, generative AI can go off the rails and provide incorrect information, which is a problem when you're trying to learn things. However, Google says it has worked with education firms like The Princeton Review to ensure the AI-generated tests resemble what students will see in the real deal. The interface for Gemini's practice tests includes scoring and the ability to review previous answers. If you are unclear on why a particular answer is right or wrong, the questions have an "Explain answer" button right at the bottom. After you finish the practice exam, the custom interface (which looks a bit like Gemini's Canvas coding tool) can help you follow up on areas that need improvement.

Google says support for the SAT is just the start, "with more tests coming in the future."
AI

eBay Bans Illicit Automated Shopping Amid Rapid Rise of AI Agents (arstechnica.com)

EBay has updated its User Agreement to explicitly ban third-party "buy for me" agents and AI chatbots from interacting with its platform without permission. From a report: On its face, a one-line terms of service update doesn't seem like major news, but what it implies is more significant: The change reflects the rapid emergence of what some are calling "agentic commerce," a new category of AI tools designed to browse, compare, and purchase products on behalf of users.

eBay's updated terms, which go into effect on February 20, 2026, specifically prohibit users from employing "buy-for-me agents, LLM-driven bots, or any end-to-end flow that attempts to place orders without human review" to access eBay's services without the site's permission. The previous version of the agreement contained a general prohibition on robots, spiders, scrapers, and automated data gathering tools but did not mention AI agents or LLMs by name.

Software

Workday CEO Calls Narrative That AI is Killing Software 'Overblown' (cnbc.com)

Workday CEO Carl Eschenbach on Thursday tried to ease worries that AI is destroying software business models. From a report: "It's an overblown narrative, and it's not true," he told CNBC's "Squawk Box" from the World Economic Forum in Davos, Switzerland, calling AI a tailwind and "absolutely not a headwind" for the company.

Software stocks have sold off in recent months on concerns that new AI tools will upend the sector and displace longstanding and recurring businesses that once fueled big profits. Workday shares lost 17% last year and have sunk another 15% since the start of 2026.

China

China Lagging in AI Is a 'Fairy Tale,' Mistral CEO Says (msn.com)

Claims that Chinese technology for AI lags the US are a "fairy tale," Arthur Mensch, the chief executive officer of Mistral, said. From a report: "China is not behind the West," Mensch said in an interview on Bloomberg Television at the World Economic Forum in Davos, Switzerland on Thursday. The capabilities of China's open-source technology are "probably stressing the CEOs in the US."

The remarks from the boss of one of Europe's leading AI companies diverge from other tech leaders at Davos, who reassured lawmakers and business chiefs that China is behind the cutting edge by months or years.

AI

'Stealing Isn't Innovation': Hundreds of Creatives Warn Against an AI Slop Future (theverge.com)

Around 800 artists, writers, actors, and musicians signed on to a new campaign against what they call "theft at a grand scale" by AI companies. From a report: The signatories of the campaign -- called "Stealing Isn't Innovation" -- include authors George Saunders and Jodi Picoult, actors Cate Blanchett and Scarlett Johansson, and musicians like the band R.E.M., Billy Corgan, and The Roots.

"Driven by fierce competition for leadership in the new GenAI technology, profit-hungry technology companies, including those among the richest in the world as well as private equity-backed ventures, have copied a massive amount of creative content online without authorization or payment to those who created it," a press release reads. "This illegal intellectual property grab fosters an information ecosystem dominated by misinformation, deepfakes, and a vapid artificial avalanche of low-quality materials ['AI slop'], risking AI model collapse and directly threatening America's AI superiority and international competitiveness."

Books

Nvidia Allegedly Sought 'High-Speed Access' To Pirated Book Library for AI Training (torrentfreak.com)

An expanded class-action lawsuit filed last Friday alleges that a member of Nvidia's data strategy team directly contacted Anna's Archive -- the sprawling shadow library hosting millions of pirated books -- to explore "including Anna's Archive in pre-training data for our LLMs."

Internal documents cited in the amended complaint show Nvidia sought information about "high-speed access" to the collection, which Anna's Archive charged tens of thousands of dollars for. According to the lawsuit, Anna's Archive warned Nvidia that its library was illegally acquired and maintained, then asked if the company had internal permission to proceed. The pirate library noted it had previously wasted time on other AI companies that couldn't secure approval. Nvidia management allegedly gave "the green light" within a week.

Anna's Archive promised access to roughly 500 terabytes of data, including millions of books normally only accessible through Internet Archive's controlled digital lending system. The lawsuit also alleges Nvidia downloaded books from LibGen, Sci-Hub, and Z-Library.
Software

'No Reasons To Own': Software Stocks Sink on Fear of New AI Tool (bloomberg.com)

The new year was supposed to bring opportunities for beaten-down software stocks. Instead, the group is off to its worst start in years. From a report: The release of a new artificial intelligence tool from startup Anthropic on Jan. 12 rekindled fears about disruption that weighed on software makers in 2025.

TurboTax owner Intuit tumbled 16% last week, its worst since 2022, while Adobe and Salesforce, which makes customer relationship management software, both sank more than 11%. All told, a group of software-as-a-service stocks tracked by Morgan Stanley is down 15% so far this year, following a drop of 11% in 2025. It's the worst start to a year since 2022, according to data compiled by Bloomberg.

While unproven, the tool represents just the type of capabilities that investors have been fearing, and reinforces bearish positions that are looking increasingly entrenched, according to Jordan Klein, a tech-sector specialist at Mizuho Securities. "Many buysiders see no reasons to own software no matter how cheap or beaten down the stocks get," Klein wrote in a Jan. 14 note to clients. "They assume zero catalysts for a re-rate exist right now," he said, referring to the potential for higher valuation multiples.

AI

Wikipedia's Guide to Spotting AI Is Now Being Used To Hide AI

Ars Technica's Benj Edwards reports: On Saturday, tech entrepreneur Siqi Chen released an open source plugin for Anthropic's Claude Code AI assistant that instructs the AI model to stop writing like an AI model. Called "Humanizer," the simple prompt plugin feeds Claude a list of 24 language and formatting patterns that Wikipedia editors have listed as chatbot giveaways. Chen published the plugin on GitHub, where it has picked up over 1,600 stars as of Monday. "It's really handy that Wikipedia went and collated a detailed list of 'signs of AI writing,'" Chen wrote on X. "So much so that you can just tell your LLM to... not do that."

The source material is a guide from WikiProject AI Cleanup, a group of Wikipedia editors who have been hunting AI-generated articles since late 2023. French Wikipedia editor Ilyas Lebleu founded the project. The volunteers have tagged over 500 articles for review and, in August 2025, published a formal list of the patterns they kept seeing.

Chen's tool is a "skill file" for Claude Code, Anthropic's terminal-based coding assistant: a Markdown-formatted file whose list of written instructions (you can see them here) is appended to the prompt fed into the large language model (LLM) that powers the assistant. Unlike a plain system prompt, skill information is formatted in a standardized way that Claude models are fine-tuned to interpret with greater precision. (Custom skills require a paid Claude subscription with code execution turned on.)
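Mechanically, a skill like Humanizer is just a pattern list expressed as instructions, and the same Wikipedia list could be applied in the other direction as a crude detector. A minimal illustrative sketch, where the patterns below are hypothetical stand-ins for the kinds of tells the cleanup guide describes rather than the actual list:

```python
import re

# Hypothetical giveaway patterns, loosely modeled on the kinds of tells
# the WikiProject AI Cleanup guide describes; NOT the actual list.
AI_TELLS = [
    (r"\bdelve\b", "overused verb 'delve'"),
    (r"\bplays a (vital|crucial) role\b", "stock phrase"),
    (r"\bIn conclusion,", "formulaic closer"),
    (r"\u2014", "em-dash-heavy punctuation"),
]

def flag_ai_tells(text: str) -> list[str]:
    """Return descriptions of every giveaway pattern matched in text."""
    hits = []
    for pattern, label in AI_TELLS:
        if re.search(pattern, text, flags=re.IGNORECASE):
            hits.append(label)
    return hits
```

Inverting this into a Humanizer-style skill is then just a matter of writing each pattern's description as a "do not do this" instruction in the skill's Markdown file.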

But as with all AI prompts, language models don't always perfectly follow skill files, so does the Humanizer actually work? In our limited testing, Chen's skill file made the AI agent's output sound less precise and more casual, but it could have some drawbacks: it won't improve factuality and might harm coding ability. [...] Even with its drawbacks, it's ironic that one of the web's most referenced rule sets for detecting AI-assisted writing may help some people subvert it.
AI

Apple Reportedly Replacing Siri Interface With Actual Chatbot Experience For iOS 27

According to Bloomberg's Mark Gurman, Apple is reportedly planning a major Siri overhaul in iOS 27 and macOS 27 where the current assistant interface will be replaced with a deeply integrated, ChatGPT-style chatbot experience. "Users will be able to summon the new service the same way they open Siri now, by speaking the 'Siri' command or holding down the side button on their iPhone or iPad," says Gurman. "More significantly, Siri will be integrated into all of the company's core apps, including ones for mail, music, podcasts, TV, Xcode programming software and photos. That will allow users to do much more with just their voice." 9to5Mac reports: The unannounced Siri overhaul will reportedly be revealed at WWDC in June as the flagship feature for iOS 27 and macOS 27. Its release is expected in September when Apple typically ships major software updates. While Apple plans to release an improved version of Siri and Apple Intelligence this spring, that version will use the existing Siri interface. The big difference is that Google's Gemini models will power the intelligence. With the bigger update planned for iOS 27, the iOS 26 upgrade to Siri and Apple Intelligence sounds more like the first step to a long overdue modernization.

Gurman reports that the major Siri overhaul will "allow users to search the web for information, create content, generate images, summarize information and analyze uploaded files" while using "personal data to complete tasks, being able to more easily locate specific files, songs, calendar events and text messages." People are already familiar with conversational interactions with AI, and Bloomberg says the bigger update to Siri will support both text and voice. Siri already uses these input methods, but there's no real continuity between sessions.
AI

Apple Developing AI Wearable Pin (9to5mac.com)

According to a report by The Information (paywalled), Apple is developing an AirTag-sized, camera-equipped AI wearable pin that could arrive as early as 2027.

"Apple's pin, which is a thin, flat, circular disc with an aluminum-and-glass shell, features two cameras -- a standard lens and a wide-angle lens -- on its front face, designed to capture photos and videos of the user's surroundings," reports The Information, citing people familiar with the device. "It also includes three microphones to pick up sounds in the area surrounding the person wearing it. It has a speaker, a physical button along one of its edges and a magnetic inductive charging interface on its back, similar to the one used on the Apple Watch..." 9to5Mac reports: The Information also notes that Apple is attempting to speed up development in hopes of competing with OpenAI's first wearable (slated to debut in 2026), and that it is not immediately clear whether this wearable would work in conjunction with other products, such as AirPods or Apple's reported upcoming smart glasses. Today's report also notes that this has been a challenging market for new companies, citing the recent failure of Humane's AI Pin as an example.
