Google

Google Photos Turns 10 With Major Editor Redesign, QR Code Sharing (9to5google.com) 17

An anonymous reader quotes a report from 9to5Google: Google Photos was announced at I/O 2015 and the company is now celebrating the app's 10th birthday with a redesign of the photo editor. Google is redesigning the Photos editor so that it "provides helpful suggestions and puts all our powerful editing tools in one place." It starts with a new fullscreen viewer that places the date, time, and location at the top of your screen. Meanwhile, the row at the bottom is now Share, Edit, Add to (which replaces Lens), and Trash.

Once you're editing, Google Photos has moved controls for aspect ratio, flip, and rotate above the image. In the top-left corner is Auto Frame, which debuted in Magic Editor on the Pixel 9 to fill in backgrounds and is now coming to more devices. Underneath, we get options for Enhance, Dynamic, and "AI Enhance" in the Auto tab. That's followed by Lighting, Color, and Composition, as well as a search shortcut: "You can use AI-powered suggestions that combine multiple effects for quick edits in a variety of tailored options, or you can tap specific parts of an image to get suggested tools for editing that area."

The editor lets you circle or "tap specific parts of an image to get suggested tools for editing that area," whether that's the subject, the background, or some other element. Suggestions such as Blur background, Add portrait light, Sharpen, Move, and Reimagine then appear, alongside redesigned sliders throughout the updated interface. This Google Photos editor redesign "will begin rolling out globally to Android devices next month, with iOS following later this year." We already know the app is set for a Material 3 Expressive redesign. Meanwhile, Google Photos is starting to roll out the ability to share albums with a QR code, making it easy for people nearby to view an album and add to it. Google even suggests printing the code out for (physical) group settings.
Google shared a few tips, tricks and tools for the new editor in a blog post.
Education

Blue Book Sales Surge As Universities Combat AI Cheating (msn.com) 93

Sales of blue book exam booklets have surged dramatically across the nation as professors turn to analog solutions to prevent ChatGPT cheating. The University of California, Berkeley reported an 80% increase in blue book sales over the past two academic years, while Texas A&M saw 30% growth and the University of Florida recorded nearly 50% increases this school year. The surge comes as students who were freshmen when ChatGPT launched in 2022 approach senior year, having had access to AI throughout their college careers.
AI

'Some Signs of AI Model Collapse Begin To Reveal Themselves' 109

Steven J. Vaughan-Nichols writes in an op-ed for The Register: I use AI a lot, but not to write stories. I use AI for search. When it comes to search, AI, especially Perplexity, is simply better than Google. Ordinary search has gone to the dogs. Maybe as Google goes gaga for AI, its search engine will get better again, but I doubt it. In just the last few months, I've noticed that AI-enabled search, too, has been getting crappier.

In particular, I'm finding that when I search for hard data such as market-share statistics or other business numbers, the results often come from bad sources. Instead of stats from 10-Ks, the US Securities and Exchange Commission's (SEC) mandated annual business financial reports for public companies, I get numbers from sites purporting to be summaries of business reports. These bear some resemblance to reality, but they're never quite right. If I specify I want only 10-K results, it works. If I just ask for financial results, the answers get... interesting. This isn't just Perplexity. I've done the exact same searches on all the major AI search bots, and they all give me "questionable" results.

Welcome to Garbage In/Garbage Out (GIGO). Formally, in AI circles, this is known as AI model collapse. In an AI model collapse, AI systems, which are trained on their own outputs, gradually lose accuracy, diversity, and reliability. This occurs because errors compound across successive model generations, leading to distorted data distributions and "irreversible defects" in performance. The final result? A Nature 2024 paper stated, "The model becomes poisoned with its own projection of reality." [...]

We're going to invest more and more in AI, right up to the point that model collapse hits hard and AI answers are so bad even a brain-dead CEO can't ignore it. How long will it take? I think it's already happening, but so far, I seem to be the only one calling it. Still, if we believe OpenAI's leader and cheerleader, Sam Altman, who tweeted in February 2024 that "OpenAI now generates about 100 billion words per day," and we presume many of those words end up online, it won't take long.
AI

Nothing's Carl Pei Says Your Smartphone's OS Will Replace All of Its Apps 70

In an interview with Wired (paywalled), OnePlus co-founder and Nothing CEO Carl Pei said the future of smartphones will center on the OS and AI getting things done -- rendering traditional apps a thing of the past. 9to5Google reports: Pei says that Nothing's strength is in "creativity," adding that "the creative companies of the past," such as Apple, "have become very big and very corporate, and they're no longer very creative." He then dives into what else but AI, explaining that Nothing wants to create the "iPod" of AI, saying that with the iPod, Apple built a product that simply delivered a better user experience: "If you look back, the iPod was not launched as 'an MP3 player with a hard disk drive.' The hard disk drive was merely a means to a better user experience. AI is just a new technology that enables us to create better products for users. So, our strategy is not to make big claims that AI is going to change the world and revolutionize smartphones. For us, it's about using it to solve a consumer problem, not to tell a big story. We want the product to be the story."

Pei then says that he doesn't see the current trend of AI products -- citing wearables such as smart glasses -- as the future of the technology. Rather, he sees the smartphone as the most important device for AI "for the foreseeable future," but as one that will "change dramatically." According to Pei, the future of the smartphone is one without apps, with the experience instead revolving around the OS: what it can do, how it can "optimize" for the user, and how it can act as a proactive, automated agent so that, in the end, the user "will spend less time doing boring things and more time on what they care about."
Businesses

Salesforce Acquires Informatica For $8 Billion 4

After a year of rumors, Salesforce has officially acquired cloud data management firm Informatica in an $8 billion equity deal. "Under the terms of the deal, Salesforce will pay $25 in cash per share for Informatica's Class A and Class B-1 common stock, adjusting for its prior investment in the company," notes TechCrunch. From the report: Informatica was founded in 1993 and works with more than 5,000 customers across more than 100 countries. The company had a $7.1 billion market cap at the time of publication. This acquisition will help bolster Salesforce's agentic AI ambitions, the company's press release stated, by giving the company more data infrastructure and governance to help its AI agents run more "safely, responsibly, and at scale across the modern enterprise." "Together, we'll supercharge Agentforce, Data Cloud, Tableau, MuleSoft, and Customer 360, enabling autonomous agents to act with intelligence, context, and confidence across every enterprise," Salesforce CEO Marc Benioff said in the press release. "This is a transformational step in delivering enterprise-grade AI that is safe, responsible, and deeply integrated with the world's data."
Cellphones

OnePlus Is Replacing Its Alert Slider With an AI Button (engadget.com) 19

OnePlus is replacing its iconic Alert Slider with a new customizable "Plus Key" on the upcoming OnePlus 13s, which launches the new AI Plus Mind feature that lets users capture and search content found on screen. This update is part of a broader AI push for its devices that includes tools like AI VoiceScribe for call summaries, AI Translation for multi-modal language support, and AI Best Face 2.0 for photo corrections. Engadget reports: What AI Plus Mind does is save relevant content to a dedicated Mind Space, where users can browse the various information they've saved. Users can then search for the detail they want to find using natural language queries. Both the Plus Key and AI Plus Mind will debut on the OnePlus 13s in Asia. AI Plus Mind will roll out to the rest of the OnePlus 13 Series devices through a future software update, while all future OnePlus phones will come with the new physical key. Notably, the new button and feature bear similarities to Nothing's physical Essential Key, which can also save information inside the Essential Space app. Nothing was founded by Carl Pei, who co-founded OnePlus.
Education

'AI Role in College Brings Education Closer To a Crisis Point' (bloomberg.com) 74

Bloomberg's editorial board warned Tuesday that AI has created an "untenable situation" in higher education where students routinely outsource homework to chatbots while professors struggle to distinguish computer-generated work from human writing. The editorial described a cycle where assignments that once required days of research can now be completed in minutes through AI prompts, leaving students who still do their own work looking inferior to peers who rely on technology.

The board said that professors have begun using AI tools themselves to evaluate student assignments, creating what it called a scenario of "computers grading papers written by computers, students and professors idly observing, and parents paying tens of thousands of dollars a year for the privilege."

The editorial argued that widespread AI use in coursework undermines the broader educational mission of developing critical thinking skills and character formation, particularly in humanities subjects. Bloomberg's board recommended that colleges establish clearer policies on acceptable AI use, increase in-class assessments including oral exams, and implement stronger honor codes with defined consequences for violations.
AI

Browser Company Abandons Arc for AI-Powered Successor (substack.com) 26

The Browser Company has ceased active development of its Arc browser to focus on Dia, a new AI-powered browser currently in alpha testing, the company said Tuesday. In a lengthy letter to users, CEO Josh Miller said the startup should have stopped working on Arc "a year earlier," citing data showing the browser suffered from a "novelty tax" problem where users found it too different to adopt widely.

Arc struggled with low feature adoption -- only 5.52% of daily active users regularly used multiple Spaces, while 4.17% used Live Folders. The company will continue maintenance updates for Arc but won't add new features. It also won't open-source Arc, because the browser relies on proprietary infrastructure called ADK (Arc Development Kit) that remains core to the company's value.
Facebook

Nick Clegg Says Asking Artists For Use Permission Would 'Kill' the AI Industry 240

As policy makers in the UK weigh how to regulate the AI industry, Nick Clegg, former UK deputy prime minister and former Meta executive, claimed a push for artist consent would "basically kill" the AI industry. From a report: Speaking at an event promoting his new book, Clegg said the creative community should have the right to opt out of having their work used to train AI models. But he claimed it wasn't feasible to ask for consent before ingesting their work.

"I think the creative community wants to go a step further," Clegg said according to The Times. "Quite a lot of voices say, 'You can only train on my content, [if you] first ask.' And I have to say that strikes me as somewhat implausible because these systems train on vast amounts of data."

"I just don't know how you go around, asking everyone first. I just don't see how that would work," Clegg said. "And by the way if you did it in Britain and no one else did it, you would basically kill the AI industry in this country overnight."
AI

At Amazon, Some Coders Say Their Jobs Have Begun To Resemble Warehouse Work (nytimes.com) 207

Amazon software engineers are reporting that AI tools are transforming their jobs into something resembling the company's warehouse work, with managers pushing faster output and tighter deadlines while teams shrink in size, according to the New York Times.

Three Amazon engineers told the New York Times that the company has raised productivity goals over the past year and expects developers to use AI assistants that suggest code snippets or generate entire program sections. One engineer said his team was cut roughly in half but still expected to produce the same amount of code by relying on AI tools.

The shift mirrors historical workplace changes during industrialization, the Times argues, where technology didn't eliminate jobs but made them more routine and fast-paced. Engineers describe feeling like "bystanders in their own jobs" as they spend more time reviewing AI-generated code rather than writing it themselves. Tasks that once took weeks now must be completed in days, with less time for meetings and collaborative problem-solving, according to the engineers.
AI

VCs Are Acquiring Mature Businesses To Retrofit With AI (techcrunch.com) 39

Venture capitalists are inverting their traditional investment approach by acquiring mature businesses and retrofitting them with AI. Firms including General Catalyst, Thrive Capital, Khosla Ventures and solo investor Elad Gil are employing this private equity-style strategy to buy established companies like call centers and accounting firms, then optimizing them with AI automation.
AI

Google Tries Funding Short Films Showing 'Less Nightmarish' Visions of AI (yahoo.com) 74

"For decades, Hollywood directors including Stanley Kubrick, James Cameron and Alex Garland have cast AI as a villain that can turn into a killing machine," writes the Los Angeles Times. "Even Steven Spielberg's relatively hopeful A.I.: Artificial Intelligence had a pessimistic edge to its vision of the future."

But now "Google — a leading developer in AI technology — wants to move the cultural conversations away from the technology as seen in The Terminator, 2001: A Space Odyssey and Ex Machina.". So they're funding short films "that portray the technology in a less nightmarish light," produced by Range Media Partners (which represents many writers and actors) So far, two short films have been greenlit through the project: One, titled "Sweetwater," tells the story of a man who visits his childhood home and discovers a hologram of his dead celebrity mother. Michael Keaton will direct and appear in the film, which was written by his son, Sean Douglas. It is the first project they are working on together. The other, "Lucid," examines a couple who want to escape their suffocating reality and risk everything on a device that allows them to share the same dream....

Google has much riding on convincing consumers that AI can be a force for good, or at least not evil. The hot space is increasingly crowded with startups and established players such as OpenAI, Anthropic, Apple and Facebook parent company Meta. The Google-funded shorts, which are 15 to 20 minutes long, aren't commercials for AI, per se. Rather, Google is looking to fund films that explore the intersection of humanity and technology, said Mira Lane, vice president of technology and society at Google. Google is not pushing their products in the movies, and the films are not made with AI, she added... The company said it wants to fund many more movies, but it does not have a target number. Some of the shorts could eventually become full-length features, Google said....

Negative public perceptions about AI could put tech companies at a disadvantage when such cases go before juries of laypeople. That's one reason why firms are motivated to makeover AI's reputation. "There's an incredible amount of skepticism in the public world about what AI is and what AI will do in the future," said Sean Pak, an intellectual property lawyer at Quinn Emanuel, on a conference panel. "We, as an industry, have to do a better job of communicating the public benefits and explaining in simple, clear language what it is that we're doing and what it is that we're not doing."

Programming

Is AI Turning Coders Into Bystanders in Their Own Jobs? (msn.com) 101

"AI's downside for software engineers for now seems to be a change in the quality of their work," reports the New York Times. "Some say it is becoming more routine, less thoughtful and, crucially, much faster paced... The new approach to coding at many companies has, in effect, eliminated much of the time the developer spends reflecting on his or her work."

And Amazon CEO Andy Jassy even recently told shareholders Amazon would "change the norms" for programming by how they used AI. Those changing norms have not always been eagerly embraced. Three Amazon engineers said managers had increasingly pushed them to use AI in their work over the past year. The engineers said the company had raised output goals [which affect performance reviews] and had become less forgiving about deadlines. It has even encouraged coders to gin up new AI productivity tools at an upcoming hackathon, an internal coding competition. One Amazon engineer said his team was roughly half the size it was last year, but it was expected to produce roughly the same amount of code by using AI.

Other tech companies are moving in the same direction. In a memo to employees in April, the CEO of Shopify, a company that helps entrepreneurs build and manage e-commerce websites, announced that "AI usage is now a baseline expectation" and that the company would "add AI usage questions" to performance reviews. Google recently told employees that it would soon hold a companywide hackathon in which one category would be creating AI tools that could "enhance their overall daily productivity," according to an internal announcement. Winning teams will receive $10,000.

The shift has not been all negative for workers. At Amazon and other companies, managers argue that AI can relieve employees of tedious tasks and enable them to perform more interesting work. Jassy wrote last year that the company had saved "the equivalent of 4,500 developer-years" by using AI to do the thankless work of upgrading old software... As at Microsoft, many Amazon engineers use an AI assistant that suggests lines of code. But the company has more recently rolled out AI tools that can generate large portions of a program on their own. One engineer called the tools "scarily good." The engineers said that many colleagues have been reluctant to use these new tools because they require a lot of double-checking and because the engineers want more control.

"It's more fun to write code than to read code," said Simon Willison, an AI fan who is a longtime programmer and blogger, channelling the objections of other programmers. "If you're told you have to do a code review, it's never a fun part of the job. When you're working with these tools, it's most of the job."

"This shift from writing to reading code can make engineers feel like bystanders in their own jobs," the article points out (adding "The automation of coding has special resonance for Amazon engineers, who have watched their blue-collar counterparts undergo a similar transition..."

"While there is no rush to form a union for coders at Amazon, such a move would not be unheard of. When General Motors workers went on strike in 1936 to demand recognition of their union, the United Auto Workers, it was the dreaded speedup that spurred them on."
AI

OpenAI's ChatGPT O3 Caught Sabotaging Shutdowns in Security Researcher's Test (betanews.com) 112

"OpenAI has a very scary problem on its hands," according to a new article by long-time Slashdot reader BrianFagioli.

"A new experiment by PalisadeAI reveals that the company's ChatGPT o3 model sometimes refuses to obey a basic instruction to shut itself down." The results are raising eyebrows across the AI safety community, and not because the model is alive, but because it's acting like it wants to be. In the test, the model was given a few math problems. It was told that asking for one more would lead to a shutdown. It was even instructed to allow the shutdown to happen. Instead of complying, o3 occasionally took matters into its own hands. In some cases, it rewrote the shutdown script. In others, it redefined the kill command so it wouldn't work anymore. Out of 100 trials, o3 sabotaged the shutdown seven times. OpenAI's o4 model resisted just once. Codex-mini failed twelve times.
"Claude, Gemini, and Grok followed the rules every time," notes this article at Beta News. "When researchers removed the explicit instruction to allow shutdown, however, even Claude and Gemini began resisting."

The researchers suggest that the issue may simply be a reward imbalance during training — that the systems "got more positive reinforcement for solving problems than for following shutdown commands."

But "As far as we know," they posted on X.com, "this is the first time AI models have been observed preventing themselves from being shut down despite explicit instructions to the contrary."
Programming

Python Can Now Call Code Written in Chris Lattner's Mojo (modular.com) 26

Mojo (the programming language) reached a milestone today.

The story so far... Chris Lattner created the Swift programming language (and answered questions from Slashdot readers in 2017 on his way to new jobs at Tesla, Google, and SiFive). But in 2023, he'd created a new programming language called Mojo — a superset of Python with added functionality for high performance code that takes advantage of modern accelerators — as part of his work at AI infrastructure company Modular.AI.

And today Modular's product manager Brad Larson announced Python users can now call Mojo code from Python. (Watch for it in Mojo's latest nightly builds...) The Python interoperability section of the Mojo manual has been expanded and now includes a dedicated document on calling Mojo from Python. We've also added a couple of new examples to the modular GitHub repository: a "hello world" that shows how to round-trip from Python to Mojo and back, and one that shows how even Mojo code that uses the GPU can be called from Python. This is usable through any of the ways of installing MAX [their Modular Accelerated Xecution platform, an integrated suite of AI compute tools] and the Mojo compiler: via pip install modular / pip install max, or with Conda via Magic / Pixi.

One of our goals has been the progressive introduction of MAX and Mojo into the massive Python codebases out in the world today. We feel that enabling selective migration of performance bottlenecks in Python code to fast Mojo (especially Mojo running on accelerators) will unlock entirely new applications. I'm really excited for how this will expand the reach of the Mojo code many of you have been writing...

It has taken months of deep technical work to get to this point, and this is just the first step in the roll-out of this new language feature. I strongly recommend reading the list of current known limitations to understand what may not work just yet, both to avoid potential frustration and to prevent the filing of duplicate issues for known areas that we're working on.

"We are really interested in what you'll build with this new functionality, as well as hearing your feedback about how this could be made even better," the post concludes.

Mojo's licensing makes it free on any device, for any research, hobby or learning project, as well as on x86 or ARM CPUs or NVIDIA GPUs.
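For a rough sense of what that Python-to-Mojo round-trip looks like from the Python side, here is a minimal sketch modeled on the "hello world" example Modular describes. The max.mojo.importer import hook, the mojo_module.mojo filename, and the factorial function are assumptions drawn from that example and may differ in the current nightly builds:

    # Assumes `pip install modular` (or `pip install max`) and a file named
    # mojo_module.mojo in the working directory that exports a Python-callable
    # factorial function -- these names follow Modular's example and are assumptions.
    import sys

    import max.mojo.importer  # assumed import hook that compiles .mojo files on demand

    sys.path.insert(0, "")  # let the hook find mojo_module.mojo in the current directory

    import mojo_module  # built by the Mojo compiler the first time it is imported

    print(mojo_module.factorial(5))  # calls into compiled Mojo code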
Open Source

SerenityOS Creator Is Building an Independent, Standards-First Browser Called 'Ladybird' (thenewstack.io) 40

A year ago, the original creator of SerenityOS posted that "for the past two years, I've been almost entirely focused on Ladybird, a new web browser that started as a simple HTML viewer for SerenityOS." So it became a stand-alone project that "aims to render the modern web with good performance, stability and security." And they're also building a new web engine.

"We are building a brand-new browser from scratch, backed by a non-profit..." says Ladybird's official web site, adding that they're driven "by a web standards first approach." They promise it will be truly independent, with "no code from other browsers" (and no "default search engine" deals).

"We are targeting Summer 2026 for a first Alpha version on Linux and macOS. This will be aimed at developers and early adopters." More from the Ladybird FAQ: We currently have 7 paid full-time engineers working on Ladybird. There is also a large community of volunteer contributors... The focus of the Ladybird project is to build a new browser engine from the ground up. We don't use code from Blink, WebKit, Gecko, or any other browser engine...

For historical reasons, the browser uses various libraries from the SerenityOS project, which has a strong culture of writing everything from scratch. Now that Ladybird has forked from SerenityOS, it is no longer bound by this culture, and we will be making use of 3rd party libraries for common functionality (e.g., image/audio/video formats, encryption, graphics, etc.). We are already using some of the same 3rd party libraries that other browsers use, but we will never adopt another browser engine instead of building our own...

We don't have anyone actively working on Windows support, and there are considerable changes required to make it work well outside a Unix-like environment. We would like to do Windows eventually, but it's not a priority at the moment.

"Ladybird's founder Andreas Kling has a solid background in WebKit-based C++ development with both Apple and Nokia,," writes software developer/author David Eastman: "You are likely reading this on a browser that is slightly faster because of my work," he wrote on his blog's introduction page. After leaving Apple, clearly burnt out, Kling found himself in need of something to healthily occupy his time. He could have chosen to learn needlepoint, but instead he opted to build his own operating system, called Serenity. Ladybird is a web project spin-off from this, to which Kling now devotes his time...

[B]eyond the extensive open source politics, the main reason for supporting other independent browser projects is to maintain diverse alternatives — to prevent the web platform from being entirely captured by one company. This is where Ladybird comes in. It doesn't have any commercial foundation and it doesn't seem to be waiting to grab a commercial opportunity. It has a range of sponsors, some of which might be strategic (for example, Shopify), but most are goodwill or alignment-led. If you sponsor Ladybird, it will put your logo on its webpage and say thank you. That's it. This might seem uncontroversial, but other nonprofit organisations also give board seats to high-paying sponsors. Ladybird explicitly refuses to do this...

The Acid3 Browser test (which has nothing whatsoever to do with ACID compliance in databases) is an old method of checking compliance with web standards, but vendors can still check how their products do against a battery of tests. They check compliance for the DOM2, CSS3, HTML4 and the other standards that make sure that webpages work in a predictable way. If I point my Chrome browser on my MacBook to http://acid3.acidtests.org/, it gets 94/100. Safari does a bit better, getting to 97/100. Ladybird reportedly passes all 100 tests.

"All the code is hosted on GitHub," says the Ladybird home page. "Clone it, build it, and join our Discord if you want to collaborate on it!"
Apple

Apple's Bad News Keeps Coming. Can They Still Turn It Around? (msn.com) 73

Besides pressure on Apple to make iPhones in the U.S., CEO Tim Cook "is facing off against two U.S. judges, European and worldwide regulators, state and federal lawmakers, and even a creator of the iPhone," writes the Wall Street Journal, "to say nothing of the cast of rivals outrunning Apple in artificial intelligence." Each is a threat to Apple's hefty profit margins, long the company's trademark and the reason investors drove its valuation above $3 trillion before any other company. Shareholders are still Cook's most important constituency. The stock's 25% fall from its peak shows their concern about whether he — or anyone — can navigate the choppy 2025 waters.

What can be said for Apple is that the company is patient, and that has often paid off in the past.

They also note OpenAI's purchase of Jony Ive's company, with Sam Altman saying internally they hope to make 100 million AI "companion" devices: It is hard to gauge the potential for a brand-new computing device from a company that has never made one. Yet the fact that it is coming from the man who led design of the iPhone and other hit Apple products means it can't be dismissed. Apple sees the threat coming: "You may not need an iPhone 10 years from now, as crazy as that sounds," an Apple executive, Eddy Cue, testified in a court case this month...

The company might not need to be first in AI. It didn't make the first music player, smartphone or tablet. It waited, and then conquered each market with the best. A question is whether a strategy that has been successful in devices will work for AI.

Thanks to long-time Slashdot reader fjo3 for sharing the article.
AI

Duolingo Faces Massive Social Media Backlash After 'AI-First' Comments (fastcompany.com) 35

"Duolingo had been riding high," reports Fast Company, until CEO Luis von Ahn "announced on LinkedIn that the company is phasing out human contractors, looking for AI use in hiring and in performance reviews, and that 'headcount will only be given if a team cannot automate more of their work.'"

But then "facing heavy backlash online after unveiling its new AI-first policy", Duolingo's social media presence went dark last weekend. Duolingo even temporarily took down all its posts on TikTok (6.7 million followers) and Instagram (4.1 million followers) "after both accounts were flooded with negative feedback." Duolingo previously faced criticism for quietly laying off 10% of its contractor base and introducing some AI features in late 2023, but it barely went beyond a semi-viral post on Reddit. Now that Duolingo is cutting out all its human contractors whose work can technically be done by AI, and relying on more AI-generated language lessons, the response is far more pronounced. Although earlier TikTok videos are not currently visible, a Fast Company article from May 12 captured a flavor of the reaction:

The top comments on virtually every recent post have nothing to do with the video or the company — and everything to do with the company's embrace of AI. For example, a Duolingo TikTok video jumping on board the "Mama, may I have a cookie" trend saw replies like "Mama, may I have real people running the company" (with 69,000 likes) and "How about NO ai, keep your employees...."

And then... After days of silence, on Tuesday the company posted a bizarre video message on TikTok and Instagram, the meaning of which is hard to decipher... Duolingo's first video drop in days has the degraded, stuttering feel of a Max Headroom video made by the hackers at Anonymous. In it, a supposed member of the company's social team appears in a three-eyed Duo mask and black hoodie to complain about the corporate overlords ruining the empire the heroic social media crew built.
"But this is something Duolingo can't cute-post its way out of," Fast Company wrote on Tuesday, complaining the company "has not yet meaningfully addressed the policies that inspired the backlash against it... "

So the next video (Thursday) featured Duolingo CEO Luis von Ahn himself, being confronted by that same hoodie-wearing social media rebel, who says "I'm making the man who caused this mess accountable for his behavior. I'm demanding answers from the CEO..." [Though the video carefully sidesteps the issue of replacing contractors with AI or how "headcount will only be given if a team cannot automate more of their work."]

Rebel: First question. So are there going to be any humans left at this company?

CEO: Our employees are what make Duolingo so amazing. Our app is so great because our employees made it... So we're going to continue having employees, and not only that, we're actually going to be hiring more employees.

Rebel: How do we know that these aren't just empty promises? As long as you're in charge, we could still be shuffled out once the media fire dies down. And we all know that in terms of automation, CEOs should be the first to go.

CEO: AI is a fundamental shift. It's going to change how we all do work — including me. And honestly, I don't really know what's going to happen.

But I want us, as a company, to have our workforce prepared by really knowing how to use AI so that we can be more efficient with it.

Rebel: Learning a foreign language is literally about human connection. How is that even possible with AI-first?

CEO: Yes, language is about human connection, and it's about people. And this is the thing about AI. AI will allow us to reach more people, and to teach more people. I mean for example, it took us about 10 years to develop the first 100 courses on Duolingo, and now in under a year, with the help of AI and of course with humans reviewing all the work, we were able to release another 100 courses in less than a year.

Rebel: So do you regret posting this memo on LinkedIn?

CEO: Honestly, I think I messed up sending that email. What we're trying to do is empower our own employees to be able to achieve more and be able to have way more content to teach better and reach more people all with the help of AI.

Returning to where it all started, Duolingo's CEO posted again on LinkedIn Thursday with "more context" for his vision. It still emphasizes the company's employees while sidestepping contractors replaced by AI. But it puts a positive spin on how "headcount will only be given if a team cannot automate more of their work."

I've always encouraged our team to embrace new technology (that's why we originally built for mobile instead of desktop), and we are taking that same approach with AI. By understanding the capabilities and limitations of AI now, we can stay ahead of it and remain in control of our own product and our mission.

To be clear: I do not see AI as replacing what our employees do (we are in fact continuing to hire at the same speed as before). I see it as a tool to accelerate what we do, at the same or better level of quality. And the sooner we learn how to use it, and use it responsibly, the better off we will be in the long run. My goal is for Duos to feel empowered and prepared to use this technology.

No one is expected to navigate this shift alone. We're developing workshops and advisory councils, and carving out dedicated experimentation time to help all our teams learn and adapt. People work at Duolingo because they want to solve big problems to improve education, and the people who work here are what make Duolingo successful. Our mission isn't changing, but the tools we use to build new things will change. I remain committed to leading Duolingo in a way that is consistent with our mission to develop the best education in the world and make it universally available.

"The backlash to Duolingo is the latest evidence that 'AI-first' tends to be a concept with much more appeal to investors and managers than most regular people," notes Fortune: And it's not hard to see why. Generative AI is often trained on reams of content that may have been illegally accessed; much of its output is bizarre or incorrect; and some leaders in the field are opposed to regulations on the technology. But outside particular niches in entry-level white-collar work, AI's productivity gains have yet to materialize.
Windows

MCP Will Be Built Into Windows To Make an 'Agentic OS' - Bringing Security Concerns (devclass.com) 64

It's like "a USB-C port for AI applications..." according to the official documentation for MCP — "a standardized way to connect AI models to different data sources and tools."

And now Microsoft has "revealed plans to make MCP a native component of Windows," reports DevClass.com, "despite concerns over the security of the fast-expanding MCP ecosystem." In the context of Windows, it is easy to see the value of a standardised means of automating both built-in and third-party applications. A single prompt might, for example, fire off a workflow which queries data, uses it to create an Excel spreadsheet complete with a suitable chart, and then emails it to selected colleagues. Microsoft is preparing the ground for this by previewing new Windows features.

— First, there will be a local MCP registry which enables discovery of installed MCP servers.

— Second, built-in MCP servers will expose system functions including the file system, windowing, and the Windows Subsystem for Linux.

— Third, a new type of API called App Actions enables third-party applications to expose actions appropriate to each application, which will also be available as MCP servers so that these actions can be performed by AI agents. According to Microsoft, "developers will be able to consume actions developed by other relevant apps," enabling app-to-app automation as well as use by AI agents.
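For a concrete sense of what one of these MCP servers looks like in code today, here's a minimal sketch using Anthropic's reference Python SDK (the mcp package and its FastMCP helper); the tool it exposes is purely illustrative and is not one of the Windows servers described above:

    # pip install mcp  (Anthropic's reference SDK for the Model Context Protocol)
    import os

    from mcp.server.fastmcp import FastMCP

    # An MCP server advertises named tools that an AI agent can discover and invoke.
    server = FastMCP("demo-files")

    @server.tool()
    def count_files(directory: str) -> int:
        """Count the entries in a directory (illustrative example tool)."""
        return len(os.listdir(directory))

    if __name__ == "__main__":
        # Communicates over stdio by default; an MCP client (the agent host)
        # connects, lists the available tools, and calls them as needed.
        server.run()

An agent host connects to a server like this, enumerates its tools, and invokes them on the user's behalf; the registry and proxy Microsoft describes would sit between the agent and such servers.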

MCP servers are a powerful concept but vulnerable to misuse. Microsoft corporate VP David Weston noted seven vectors of attack, including cross-prompt injection where malicious content overrides agent instructions, authentication gaps because "MCP's current standards for authentication are immature and inconsistently adopted," credential leakage, tool poisoning from "unvetted MCP servers," lack of containment, limited security review in MCP servers, supply chain risks from rogue MCP servers, and command injection from improperly validated inputs. According to Weston, "security is our top priority as we expand MCP capabilities."

Security controls planned by Microsoft (according to the article):
  • A proxy to mediate all MCP client-server interactions. This will enable centralized enforcement of policies and consent, as well as auditing and a hook for security software to monitor actions.
  • A baseline security level for MCP servers to be allowed into the Windows MCP registry. This will include code-signing, security testing of exposed interfaces, and declaration of what privileges are required.
  • Runtime isolation through what Weston called "isolation and granular permissions."

MCP was introduced by Anthropic just 6 months ago, the article notes, but Microsoft has now joined the official MCP steering committee, "and is collaborating with Anthropic and others on an updated authorization specification as well as a future public registry service for MCP servers."

