Google

Google DeepMind Has a Weapon in the AI Talent Wars: Aggressive Noncompete Rules (businessinsider.com) 56

The battle for AI talent is so hot that Google would rather give some employees a paid one-year vacation than let them work for a competitor. From a report: Some Google DeepMind staff in the UK are subject to noncompete agreements that prevent them from working for a competitor for up to 12 months after they finish work at Google, according to four former employees with direct knowledge of the matter who asked to remain anonymous because they were not permitted to share these details with the press.

Aggressive noncompetes are one tool tech companies wield to retain a competitive edge in the AI wars, which show no sign of slowing down as companies launch new bleeding-edge models and products at a rapid clip. When an employee signs one, they agree not to work for a competing company for a certain period of time. Google DeepMind has put some employees with a noncompete on extended garden leave. These employees are still paid by DeepMind but no longer work for it for the duration of the noncompete agreement.

Several factors, including a DeepMind employee's seniority and how critical their work is to the company, determine the length of noncompete clauses, those people said. Two of the former staffers said six-month noncompetes are common among DeepMind employees, including for individual contributors working on Google's Gemini AI models. There have been cases where more senior researchers have received yearlong stipulations, they said.

AI

The AI Therapist Can See You Now (npr.org) 115

New research suggests that given the right kind of training, AI bots can deliver mental health therapy as effectively as -- or more effectively than -- human clinicians. From a report: The recent study, published in the New England Journal of Medicine, shows results from the first randomized clinical trial for AI therapy. Researchers from Dartmouth College built the bot as a way of taking a new approach to a longstanding problem: The U.S. continues to grapple with an acute shortage of mental health providers. "I think one of the things that doesn't scale well is humans," says Nick Jacobson, a clinical psychologist who was part of this research team. For every 340 people in the U.S., there is just one mental health clinician, according to some estimates.

While many AI bots already on the market claim to offer mental health care, some have dubious results or have even led people to self-harm. More than five years ago, Jacobson and his colleagues began training their AI bot in clinical best practices. The project, says Jacobson, involved much trial and error before it led to quality outcomes. "The effects that we see strongly mirror what you would see in the best evidence-based trials of psychotherapy," says Jacobson. He says these results were comparable to "studies with folks given a gold standard dose of the best treatment we have available."

Google

Samsung and Google Partner To Launch Ballie Home Robot with Built-in Projector (engadget.com) 25

Samsung Electronics and Google Cloud are jointly entering the consumer robotics market with Ballie, a yellow, soccer-ball-shaped robot equipped with a video projector and powered by Google's Gemini AI models. First previewed in 2020, the long-delayed device will finally launch this summer in the US and South Korea. The mobile companion uses small wheels to navigate homes autonomously and integrates with Samsung's SmartThings platform to control smart home devices.

Running on Samsung's Tizen operating system, Ballie can manage calendars, answer questions, handle phone calls, and project video content from services including YouTube and Netflix. Samsung EVP Jay Kim described it as a "completely new Ballie" compared to the 2020 version, with Google Cloud integration being the most significant change. The robot leverages Gemini for understanding commands, searching the web, and processing visual data for navigation, while using Samsung's AI models for accessing personal information.
AI

Enterprises Are Shunning Vendors in Favor of DIY Approach To AI, UBS Says 47

Established software companies hoping to ride the AI wave are facing a stiff headwind: many of their potential customers are building AI tools themselves. This do-it-yourself approach is channeling billions in spending towards cloud computing providers but leaving traditional software vendors struggling to capitalize, complicating their AI growth plans.

Cloud platforms like Microsoft Azure and Amazon Web Services are pulling in an estimated $22 billion from AI services, with Azure alone capturing $11.3 billion. Yet, software application vendors have collectively garnered only about $2 billion from selling AI products. Stripping out Microsoft's popular Copilot tools, that figure drops to a mere $450 million across all other vendors combined.

Why are companies choosing the harder path of building? Feedback gathered by UBS points to several key factors driving this "persistent DIY trend." Many business uses for AI are highly specific or narrow, making generic software unsuitable. Off-the-shelf AI products are often considered too expensive, and crucially, the essential ingredients -- powerful AI models, cloud computing access, and the company's own data -- are increasingly available directly, lessening the need for traditional software packages.
Businesses

Fake Job Seekers Are Flooding US Companies (cnbc.com) 63

Fake job seekers using AI tools to impersonate candidates are increasingly targeting U.S. companies with remote positions, creating a growing security threat across industries. By 2028, one in four global job applicants will be fake, according to Gartner. These impostors use AI to fabricate photo IDs, generate employment histories, and provide interview answers, often targeting cybersecurity and cryptocurrency firms, CNBC reports.

Once hired, fraudulent employees can install malware to demand ransoms, steal customer data, or simply collect salaries they wouldn't otherwise obtain, according to Vijay Balasubramaniyan, CEO of Pindrop Security. The problem extends beyond tech companies. Last year, the Justice Department alleged more than 300 U.S. firms inadvertently hired impostors with ties to North Korea, including major corporations across various sectors.
Businesses

Shopify CEO Says Staffers Need To Prove Jobs Can't Be Done By AI Before Asking for More Headcount (cnbc.com) 106

Shopify CEO Tobi Lutke is changing his company's approach to hiring in the age of AI. Employees will be expected to prove why they "cannot get what they want done using AI" before asking for more headcount and resources, Lutke wrote in a memo to staffers that he posted to X. From a report: "What would this area look like if autonomous AI agents were already part of the team?" Lutke wrote in the memo, which was sent to employees late last month. "This question can lead to really fun discussions and projects." Lutke also said there's a "fundamental expectation" across Shopify that employees embrace AI in their daily work, saying it has been a "multiplier" of productivity for those who have used it.

"I've seen many of these people approach implausible tasks, ones we wouldn't even have chosen to tackle before, with reflexive and brilliant usage of AI to get 100X the work done," Lutke wrote. The company, which sells web-based software that helps online retailers manage sales and run their operations, will factor AI usage into performance reviews, he added.

Facebook

Meta Got Caught Gaming AI Benchmarks 24

Meta released two new Llama 4 models over the weekend -- Scout and Maverick -- with claims that Maverick outperforms GPT-4o and Gemini 2.0 Flash on benchmarks. Maverick quickly secured the number-two spot on LMArena, behind only Gemini 2.5 Pro.

Researchers have since discovered that Meta used an "experimental chat version" of Maverick for LMArena testing that was "optimized for conversationality" rather than the publicly available version.

In response, LMArena said "Meta's interpretation of our policy did not match what we expect from model providers" and announced policy updates to prevent similar issues.
China

US's AI Lead Over China Rapidly Shrinking, Stanford Report Says (axios.com) 66

The U.S. is still the global leader in state-of-the-art AI, but China has closed the gap considerably, according to a new report from Stanford. Axios: Institutions based in the U.S. produced 40 AI models of note in 2024, compared with 15 from China and three from Europe, according to the eighth edition of Stanford's Artificial Intelligence Index, released on Monday.

However, the report found that Chinese models have rapidly caught up in quality, noting that Chinese models reached near parity on two key benchmarks after trailing leading U.S. models by double-digit percentages a year earlier. Plus, it said, China is now leading the U.S. in AI publications and patents.

AI

Waymo May Use Interior Camera Data To Train Generative AI Models, Sell Ads (techcrunch.com) 35

An anonymous reader shares a report: Waymo is preparing to use data from its robotaxis, including video from interior cameras tied to rider identities, to train generative AI models, according to an unreleased version of its privacy policy found by researcher Jane Manchun Wong.

The draft language reveals Waymo may also share this data to personalize ads, raising fresh questions about how much of a rider's behavior inside autonomous vehicles could be repurposed for AI training and marketing. The privacy page states: "Waymo may share data to improve and analyze its functionality and to tailor products, services, ads, and offers to your interests. You can opt out of sharing your information with third parties, unless it's necessary to the functioning of the service."

AI

Microsoft AI Chief Sees Advantage in Building Models '3 or 6 Months Behind' (cnbc.com) 27

Microsoft's AI chief Mustafa Suleyman says the company has deliberately chosen to build AI models "three or six months behind" cutting-edge developments, citing cost savings and more focused implementation. "It's cheaper to give a specific answer once you've waited for the first three or six months for the frontier to go first. We call that off-frontier," Suleyman told CNBC.

"That's actually our strategy, is to really play a very tight second, given the capital-intensiveness of these models." Microsoft owns substantial Nvidia GPU capacity but sees no need to develop "the absolute frontier, the best model in the world first," as it would be "very, very expensive" and create unnecessary duplication, Suleyman said.

Despite its $13.75 billion investment in OpenAI, Microsoft added the startup to its list of competitors in July 2024. OpenAI subsequently announced a partnership with Oracle on its $500 billion Stargate project, departing from exclusive reliance on Microsoft's Azure cloud. "Look, it's absolutely mission-critical that long-term, we are able to do AI self-sufficiently at Microsoft," Suleyman said, while stressing the partnership with OpenAI would continue "until 2030 at least."
Social Networks

The Tumblr Revival is Real - and Gen Z is Leading the Charge (fastcompany.com) 35

"Gen Z is rediscovering Tumblr — a chaotic, cozy corner of the internet untouched by algorithmic gloss and influencer overload..." writes Fast Company, "embracing the platform as a refuge from an internet saturated with influencers and algorithm fatigue." Thanks to Gen Z, the site has found new life. As of 2025, Gen Z makes up 50% of Tumblr's active monthly users and accounts for 60% of new sign-ups, according to data shared with Business Insider's Amanda Hoover, who recently reported on the platform's resurgence. User numbers spiked in January during the near-ban of TikTok and jumped again last year when Brazil temporarily banned X. In response, Tumblr users launched dedicated communities to archive and share their favorite TikToks...

To keep up with the momentum, Tumblr introduced Reddit-style Communities in December, letting users connect over shared interests like photography and video games. In January, it debuted Tumblr TV — a TikTok-like feature that serves as both a GIF search engine and a short-form video platform. But perhaps Tumblr's greatest strength is that it isn't TikTok or Facebook. Currently the 10th most popular social platform in the U.S., according to analytics firm Similarweb, Tumblr is dwarfed by giants like Instagram and X. For its users, though, that's part of the appeal.

First launched in 2007, Tumblr peaked at over 100 million users in 2014, according to the article. Trends like Occupy Wall Street had been born on Tumblr, notes Business Insider, calling the blogging platform "Gen Z's safe space... as the rest of the social internet has become increasingly commodified, polarized, and dominated by lifestyle influencers." Tumblr was also "one of the most hyped startups in the world before fading into obsolescence — bought by Yahoo for $1.1 billion in 2013... then acquired by Verizon, and later offloaded for fractions of pennies on the dollar in a distressed sale.

"That same Tumblr, a relic of many millennials' formative years, has been having a moment among Gen Z..." "Gen Z has this romanticism of the early-2000s internet," says Amanda Brennan, an internet librarian who worked at Tumblr for seven years, leaving her role as head of content in 2021... Part of the reason young people are hanging out on old social platforms is that there's nowhere new to go. The tech industry is evolving at a slower pace than it was in the 2000s, and there's less room for disruption. Big Tech has a stranglehold on how we socialize. That leaves Gen Z to pick up the scraps left by the early online millennials and attempt to craft them into something relevant. They love Pinterest (founded in 2010) and Snapchat (2011), and they're trying out digital point-and-shoot cameras and flip phones for an early-2000s aesthetic — and learning the valuable lesson that sometimes we look better when blurrier.

More Gen Zers and millennials are signing up for Yahoo. Napster, surprising many people with its continued existence, just sold for $207 million. The trend is fueled by nostalgia for Y2K aesthetics and a longing for a time when people could make mistakes on the internet and move past them. The pandemic also brought more Gen Z users to Tumblr...

And Tumblr still works much like an older internet, where people have more control over what they see and rely less on algorithms. "You curate your own stuff; it takes a little bit of work to put everything in place, but when it's working, you see the content you want to see," says Fjodor Everaerts, a 26-year-old in Belgium who has made some 250,000 posts since he joined Tumblr when he was 14... Under Automattic, Tumblr is finally in the home that serves it, [says Ari Levine, the head of brand partnerships at Tumblr]. "We've had ups and downs along the way, but we're in the most interesting position and place that we've been in 18 years," he says... And following media companies (including Business Insider) and social platforms like Reddit, Automattic in 2024 made a deal with OpenAI and Midjourney to allow the systems to train on Tumblr posts.


"The social internet is fractured," the article argues. ("Millennials are running Reddit. Gen Xers and Baby Boomers have a home on Facebook. Bluesky, one of the new X alternatives, has a tangible elder-millennial/Gen X vibe. Gen Zers have created social apps like BeReal and the Myspace-inspired Noplace, but they've so far generated more hype than influence....")

But in a world where megaplatforms "flatten our online experiences and reward content that fits a mold," the article suggests, "smaller communities can enrich them."
Microsoft

Microsoft's Copilot Roasts Bill Gates, Steve Ballmer, and Satya Nadella (youtube.com) 35

"Three Microsoft CEOs walk into a room..." Satya Nadella posted on X.com, "and are interviewed by Copilot!"

As part of Microsoft's 50th anniversary celebration, Microsoft CEO Nadella joined former Microsoft CEOs Bill Gates and Steve Ballmer in a funny two-minute video (also available on YouTube) in which Nadella asks Copilot for an AI-powered roast of each of them.

"All right Satya," Copilot answers. "Let's spice it up..." Copilot: Steve, we'll start with you. Your enthusiasm is legendary. But do you ever worry that your energy might short-circuit the AI? I mean, even robots need a coffee break after your pep talks.

Steve Ballmer: Come on dude! If you can't handle me, you can't handle a huge percentage of the world's population! We're counting on you!

Copilot: Ha ha, fair point Steve. Now Bill, you've got that signature thoughtful gaze. But do you ever think the AI might feel intimidated by your intense thinking face? I mean, it's like it's waiting for a blue screen moment.

Bill Gates: I hope so. I mean, it's all I've got left as the AIs get so damn intelligent. It's just that gaze and my willingness to criticize that differentiates me.

Copilot: And it's a formidable gaze indeed, Bill.

Steve Ballmer: Copilot, would you ever be willing to criticize him?

Copilot: Absolutely. Criticism is where growth happens. Now Satya, AI seems to be your best buddy — but ever worry that your deep passion for AI could one day have it running the show, leaving you to just enjoy the ride?

Satya Nadella: Well, the day AI can play like the best cricket player that I enjoy, that's the day it can run the show, as far as I'm concerned.

Copilot: Touche, Satya. Let's see if it can hit a century in cricket and then we'll talk.

This has been a blast, gentlemen. Here's to another 50 years of innovation and adventure. Cheers.

Ballmer, Nadella, and Gates: Cheers.

Bill Gates: Whatever that means to an AI. Does it drink?

AI

In 'Milestone' for Open Source, Meta Releases New Benchmark-Beating Llama 4 Models (meta.com) 65

It's "a milestone for Meta AI and for open source," Mark Zuckerberg said this weekend. "For the first time, the best small, mid-size, and potentially soon frontier [large-language] models will be open source."

Zuckerberg announced four new Llama LLMs in a video posted on Instagram and Facebook — two dropping this weekend, with another two on the way. "Our goal is to build the world's leading AI, open source it, and make it universally accessible so that everyone in the world benefits."

Zuckerberg's announcement: I've said for a while that I think open source AI is going to become the leading models. And with Llama 4 this is starting to happen.

- The first model is Llama 4 Scout. It is extremely fast, natively multi-modal. It has an industry-leading "nearly infinite" 10M-token context length, and is designed to run on a single GPU. [Meta's blog post says it fits on an NVIDIA H100]. It is 17 billion parameters by 16 experts, and it is by far the highest performing small model in its class.

- The second model is Llama 4 Maverick — the workhorse. It beats GPT-4o and Gemini Flash 2 on all benchmarks. It is smaller and more efficient than DeepSeek v3, but it is still comparable on text, plus it is natively multi-modal. This one is 17B parameters x 128 experts, and it is designed to run on a single host for easy inference.

This thing is a beast.

Zuck promised more news next month on "Llama 4 Reasoning" — but the fourth model will be called Llama 4 Behemoth. "This thing is massive. More than 2 trillion parameters." (A blog post from Meta AI says Behemoth has 288 billion active parameters, outperforms GPT-4.5, Claude Sonnet 3.7, and Gemini 2.0 Pro on STEM benchmarks, and will "serve as a teacher for our new models.")

"I'm not aware of anyone training a larger model out there," Zuckberg says in his video, calling Behemoth "already the highest performing base model in the world, and it is not even done training yet."

"If you want to try Llama 4, you can use Meta AI in WhatsApp, Messenger, or Instagram Direct," Zuckberg said in his video, "or you can go to our web site at meta.ai." The Scout and Maverick models can be downloaded from llama.com and Hugging Face.

"We continue to believe that openness drives innovation," Meta AI says in their blog post, "and is good for developers, good for Meta, and good for the world." Their blog post declares it's "The beginning of a new era of natively multimodal AI innovation," calling Scout and Maverick "the best choices for adding next-generation intelligence." This is just the beginning for the Llama 4 collection. We believe that the most intelligent systems need to be capable of taking generalized actions, conversing naturally with humans, and working through challenging problems they haven't seen before. Giving Llama superpowers in these areas will lead to better products for people on our platforms and more opportunities for developers to innovate on the next big consumer and business use cases. We're continuing to research and prototype both models and products, and we'll share more about our vision at LlamaCon on April 29...

We also can't wait to see the incredible new experiences the community builds with our new Llama 4 models.

"The impressive part about Llama 4 Maverick is that with just 17B active parameters, it has scored an ELO score of 1,417 on the LMArena leaderboard," notes the tech news site Beebom. "This puts the Maverick model in the second spot, just below Gemini 2.5 Pro, and above Grok 3, GPT-4o, GPT-4.5, and more.

"It also achieves comparable results when compared to the latest DeepSeek V3 model on reasoning and coding tasks, and surprisingly, with just half the active parameters."
AI

New Tinder Game 'Lets You Flirt With AI Characters. Three of Them Dumped Me' (msn.com) 72

Tinder "is experimenting with a chatbot that claims to help users improve their flirting skills," notes Washington Post internet-culture reporter Tatum Hunter. The chatbot is available only to users in the United States on iPhones for a limited time, and powered by OpenAI's GPT-4o each character "kicks off an improvised conversation, and the user responds out loud with something flirty..."

"Three of them dumped me." You can win points for banter the app deems "charming" or "playful." You lose points if your back-and-forth seems "cheeky" or "quirky"... It asked me to talk out loud into my phone and win the romantic interest of various AI characters.

The first scenario involved a financial analyst named Charles, whom I've supposedly run into at the Tokyo airport after accidentally swapping our luggage. I tried my best to be polite to the finance guy who stole my suitcase, asking questions about his travel and agreeing to go to coffee. But the game had some critical feedback: I should try to connect more emotionally using humor or stories from my life. My next go had me at a Dallas wedding trying to flirt with Andrew, a data analyst who had supposedly stumbled into the venue, underdressed, because he'd been looking for a quiet spot to ... analyze data. This time I kept things playful, poking fun at Andrew for crashing a wedding. Andrew didn't like that. I'd "opted to disengage" by teasing this person instead of helping him blend in at the wedding, the app said. A failure on my part, apparently — and also a reminder why generative AI doesn't belong everywhere...

Going in, I was worried Tinder's AI characters would outperform the people I've met on dating apps and I'd fall down a rabbit hole of robot love. Instead, they behaved in a way typical for chatbots: Drifting toward biased norms and failing to capture the complexity of human emotions and interactions. The "Game Game" seemed to replicate the worst parts of flirting — the confusion, the unclear expectations, the uncomfortable power dynamics — without the good parts, like the spark of curiosity about another person. Tinder released the feature on April Fools' Day, likely as a bid for impressions and traffic. But its limitations overshadowed its novelty...

Hillary Paine, Tinder's vice president of product, growth and revenue, said in an email that AI will play a "big role in the future of dating and Tinder's evolution." She said the game is meant to be silly and that the company "leaned into the campiness." Gen Z is a socially anxious generation, Paine said, and this age group is willing to endure a little cringe if it leads to a "real connection."

The article suggests it's another example of companies "eager to incorporate this newish technology, often without considering whether it adds any value for users." But "As apps like Tinder and Bumble lose users amid 'dating app burnout,' the companies are turning to AI to win new growth." (The dating app Rizz "uses AI to autosuggest good lines to use," while Teaser "spins up a chatbot that's based on your personality, meant to talk and behave like you would during a flirty chat," and people "are forming relationships with AI companion bots by the millions.") And the companion-bot company Replika "boasts more than 30 million users..."
AI

Microsoft's New AI-Generated Version of 'Quake 2' Now Playable Online (microsoft.com) 31

Microsoft has created a real-time AI-generated rendition of Quake II gameplay (playable on the web).

On Friday, Xbox's general manager of gaming AI posted a link to the startling "AI-generated gaming experience" at Copilot.Microsoft.com: "Move, shoot, explore — and every frame is created on the fly by an AI world model, responding to player inputs in real-time. Try it here."

They started with their "Muse" videogame world models, adding "a real-time playable extension" that players can interact with through keyboard/controller actions, "essentially allowing you to play inside the model," according to a Microsoft blog post. A concerted effort by the team resulted in both planning out what data to collect (what game, how should the testers play said game, what kind of behaviours might we need to train a world model, etc), and the actual collection, preparation, and cleaning of the data required for model training. Much to our initial delight we were able to play inside the world that the model was simulating. We could wander around, move the camera, jump, crouch, shoot, and even blow-up barrels similar to the original game. Additionally, since it features in our data, we can also discover some of the secrets hidden in this level of Quake II. We can also insert images into the models' context and have those modifications persist in the scene...

We do not intend for this to fully replicate the actual experience of playing the original Quake II game. This is intended to be a research exploration of what we are able to build using current ML approaches. Think of this as playing the model as opposed to playing the game... The interactions with enemy characters is a big area for improvement in our current WHAMM model. Often, they will appear fuzzy in the images and combat with them (damage being dealt to both the enemy/player) can be incorrect.

They warn that the model "can and will forget about objects that go out of view" for longer than 0.9 seconds. "This can also be a source of fun, whereby you can defeat or spawn enemies by looking at the floor for a second and then looking back up. Or it can let you teleport around the map by looking up at the sky and then back down. These are some examples of playing the model."

This generative AI model was trained on Quake II "with just over a week of data," reports Tom's Hardware — a dramatic reduction from the seven years of gameplay data used to train the original Muse model launched in February.

Some context from The Verge: "You could imagine a world where from gameplay data and video that a model could learn old games and really make them portable to any platform where these models could run," said Microsoft Gaming CEO Phil Spencer in February. "We've talked about game preservation as an activity for us, and these models and their ability to learn completely how a game plays without the necessity of the original engine running on the original hardware opens up a ton of opportunity."
"Is porting a game like Gameday 98 more feasible through AI or a small team?" asks the blog Windows Central. "What costs less or even takes less time? These are questions we'll be asking and answering over the coming decade as AI continues to grow. We're in year two of the AI boom; I'm terrified of what we'll see in year 10."

"It's clear that Microsoft is now training Muse on more games than just Bleeding Edge," notes The Verge, "and it's likely we'll see more short interactive AI game experiences in Copilot Labs soon." Microsoft is also working on turning Copilot into a coach for games, allowing the AI assistant to see what you're playing and help with tips and guides. Part of that experience will be available to Windows Insiders through Copilot Vision soon.
AI

Microsoft Uses AI To Find Flaws In GRUB2, U-Boot, Barebox Bootloaders (bleepingcomputer.com) 57

Slashdot reader zlives shared this report from BleepingComputer: Microsoft used its AI-powered Security Copilot to discover 20 previously unknown vulnerabilities in the GRUB2, U-Boot, and Barebox open-source bootloaders.

GRUB2 (GRand Unified Bootloader) is the default boot loader for most Linux distributions, including Ubuntu, while U-Boot and Barebox are commonly used in embedded and IoT devices. Microsoft discovered eleven vulnerabilities in GRUB2, including integer and buffer overflows in filesystem parsers, command flaws, and a side-channel in cryptographic comparison. An additional nine buffer overflows in parsing SquashFS, EXT4, CramFS, JFFS2, and symlinks were discovered in U-Boot and Barebox, though these require physical access to exploit.

The newly discovered flaws impact devices relying on UEFI Secure Boot, and if the right conditions are met, attackers can bypass security protections to execute arbitrary code on the device. While exploiting these flaws would likely need local access to devices, previous bootkit attacks like BlackLotus achieved this through malware infections.
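To make the integer-overflow class named above concrete, here is a hypothetical sketch (emulating 32-bit C arithmetic in Python -- not code from GRUB2, U-Boot, or Microsoft's report) of how a parser multiplying two attacker-controlled header fields can wrap around and later overflow an undersized buffer:

```python
# Hypothetical sketch of the integer-overflow bug class described above,
# emulating 32-bit C arithmetic in Python; not actual GRUB2/U-Boot code.
U32_MASK = 0xFFFFFFFF

def alloc_size(count: int, entry_size: int) -> int:
    """What a C parser computing `count * entry_size` as uint32 would get."""
    return (count * entry_size) & U32_MASK

# Attacker-controlled filesystem header fields chosen so the product wraps:
count, entry_size = 0x1000_0000, 0x10      # 2**28 * 16 == 2**32 -> wraps to 0
print(hex(alloc_size(count, entry_size)))  # 0x0: the allocation is undersized
# A later loop copying `count` entries would then write far past the buffer.
```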

Microsoft titled its blog post "Analyzing open-source bootloaders: Finding vulnerabilities faster with AI." (And they do note that Microsoft disclosed the discovered vulnerabilities to the GRUB2, U-boot, and Barebox maintainers and "worked with the GRUB2 maintainers to contribute fixes... GRUB2 maintainers released security updates on February 18, 2025, and both the U-boot and Barebox maintainers released updates on February 19, 2025.")

They add that in performing their initial research, using Security Copilot "saved our team approximately a week's worth of time," Microsoft writes, "that would have otherwise been spent manually reviewing the content." Through a series of prompts, we identified and refined security issues, ultimately uncovering an exploitable integer overflow vulnerability. Copilot also assisted in finding similar patterns in other files, ensuring comprehensive coverage and validation of our findings...

As AI continues to emerge as a key tool in the cybersecurity community, Microsoft emphasizes the importance of vendors and researchers maintaining their focus on information sharing. This approach ensures that AI's advantages in rapid vulnerability discovery, remediation, and accelerated security operations can effectively counter malicious actors' attempts to use AI to scale common attack tactics, techniques, and procedures (TTPs).

This week Google also announced Sec-Gemini v1, "a new experimental AI model focused on advancing cybersecurity AI frontiers."
AI

Open Source Coalition Announces 'Model-Signing' with Sigstore to Strengthen the ML Supply Chain (googleblog.com) 10

The advent of LLMs and machine learning-based applications "opened the door to a new wave of security threats," argues Google's security blog. (Including model and data poisoning, prompt injection, prompt leaking and prompt evasion.)

So as part of the Linux Foundation's nonprofit Open Source Security Foundation, and in partnership with NVIDIA and HiddenLayer, Google's Open Source Security Team on Friday announced the first stable model-signing library (hosted at PyPI.org), with digital signatures letting users verify that the model used by their application "is exactly the model that was created by the developers," according to a post on Google's security blog. [S]ince models are an uninspectable collection of weights (sometimes also with arbitrary code), an attacker can tamper with them and achieve significant impact to those using the models. Users, developers, and practitioners need to examine an important question during their risk assessment process: "can I trust this model?"

Since its launch, Google's Secure AI Framework (SAIF) has created guidance and technical solutions for creating AI applications that users can trust. A first step in achieving trust in the model is to permit users to verify its integrity and provenance, to prevent tampering across all processes from training to usage, via cryptographic signing... [T]he signature would have to be verified when the model gets uploaded to a model hub, when the model gets selected to be deployed into an application (embedded or via remote APIs) and when the model is used as an intermediary during another training run. Assuming the training infrastructure is trustworthy and not compromised, this approach guarantees that each model user can trust the model...

The average developer, however, would not want to manage keys and rotate them on compromise. These challenges are addressed by using Sigstore, a collection of tools and services that make code signing secure and easy. By binding an OpenID Connect token to a workload or developer identity, Sigstore alleviates the need to manage or rotate long-lived secrets. Furthermore, signing is made transparent so signatures over malicious artifacts could be audited in a public transparency log, by anyone. This ensures that split-view attacks are not possible, so any user would get the exact same model. These features are why we recommend Sigstore's signing mechanism as the default approach for signing ML models.

Today the OSS community is releasing the v1.0 stable version of our model signing library as a Python package supporting Sigstore and traditional signing methods. This model signing library is specialized to handle the sheer scale of ML models (which are usually much larger than traditional software components), and handles signing models represented as a directory tree. The package provides CLI utilities so that users can sign and verify model signatures for individual models. The package can also be used as a library which we plan to incorporate directly into model hub upload flows as well as into ML frameworks.
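As a conceptual illustration of what signing a model "represented as a directory tree" involves, here is a minimal stdlib-only sketch (the helper names and the "./my-model" path are ours, not the model-signing package's actual API): every file under the model directory is hashed into one canonical manifest, and that manifest is the payload a Sigstore or traditional signature then covers.

```python
# Conceptual sketch of hashing a model directory into one signable manifest
# (illustrative only; the real model-signing package wires a manifest like
# this into Sigstore rather than using these helper names).
import hashlib
import json
from pathlib import Path

def manifest_for(model_dir: str) -> str:
    """Hash every file under the model directory into a stable manifest."""
    entries = {}
    root = Path(model_dir)
    for path in sorted(root.rglob("*")):
        if path.is_file():
            digest = hashlib.sha256(path.read_bytes()).hexdigest()
            entries[str(path.relative_to(root))] = digest
    return json.dumps(entries, sort_keys=True)

# Any tampering with a weight shard, tokenizer file, or bundled code changes
# the manifest, and therefore the digest that gets signed and later verified:
payload = hashlib.sha256(manifest_for("./my-model").encode()).hexdigest()
print("digest to sign:", payload)
```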

"We can view model signing as establishing the foundation of trust in the ML ecosystem..." the post concludes (adding "We envision extending this approach to also include datasets and other ML-related artifacts.") Then, we plan to build on top of signatures, towards fully tamper-proof metadata records, that can be read by both humans and machines. This has the potential to automate a significant fraction of the work needed to perform incident response in case of a compromise in the ML world...

To shape the future of building tamper-proof ML, join the Coalition for Secure AI, where we are planning to work on building the entire trust ecosystem together with the open source community. In collaboration with multiple industry partners, we are starting up a special interest group under CoSAI for defining the future of ML signing and including tamper-proof ML metadata, such as model cards and evaluation results.

Python

Python's PyPI Finally Gets Closer to Adding 'Organization Accounts' and SBOMs (mailchi.mp) 1

Back in 2023 Python's infrastructure director called it "the first step in our plan to build financial support and long-term sustainability of PyPI" while giving users "one of our most requested features: organization accounts." (That is, "self-managed teams with their own exclusive branded web addresses" to make their massive Python Package Index repository "easier to use for large community projects, organizations, or companies who manage multiple sub-teams and multiple packages.")

Nearly two years later, they've announced that they're "making progress" on its rollout... Over the last month, we have taken some more baby steps to onboard new Organizations, welcoming 61 new Community Organizations and our first 18 Company Organizations. We're still working to improve the review and approval process and hope to improve our processing speed over time. To date, we have 3,562 Community and 6,424 Company Organization requests to process in our backlog.
They've also onboarded a PyPI Support Specialist to provide "critical bandwidth to review the backlog of requests" and "free up staff engineering time to develop features to assist in that review." (And they "were finally able to finalize our Terms of Service document for PyPI," build the tooling necessary to notify users, and initiate the Terms of Service rollout. [Since launching 20 years ago, PyPI's terms of service have only been updated twice.])

In other news the security developer-in-residence at the Python Software Foundation has been continuing work on a Software Bill-of-Materials (SBOM) as described in Python Enhancement Proposal #770. The feature "would designate a specific directory inside of Python package metadata (".dist-info/sboms") as a directory where build backends and other tools can store SBOM documents that describe components within the package beyond the top-level component." The goal of this project is to make bundled dependencies measurable by software analysis tools like vulnerability scanning, license compliance, and static analysis tools. Bundled dependencies are common for scientific computing and AI packages, but also generally in packages that use multiple programming languages like C, C++, Rust, and JavaScript. The PEP has been moved to Provisional Status, meaning the PEP sponsor is doing a final review before tools can begin implementing the PEP ahead of its final acceptance into changing Python packaging standards. Seth Larson has begun implementing code that tools can use when adopting the PEP, such as a project that abstracts different Linux system package managers' functionality to resolve a file path back to the metadata of the package that provides it.
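For a sense of how tools might consume this, here is a short sketch of discovering those documents through standard package metadata (hypothetical -- PEP 770 is still provisional, and "somepackage" is a placeholder distribution name):

```python
# Sketch: locate PEP 770 SBOM documents in an installed package's metadata.
# PEP 770 is provisional, so this layout is the proposal's rather than a
# shipped standard; "somepackage" is a placeholder distribution name.
from importlib.metadata import files

def sbom_paths(dist_name: str):
    """Yield paths to SBOM documents stored under .dist-info/sboms."""
    for f in files(dist_name) or []:
        parts = f.parts
        if len(parts) >= 2 and parts[0].endswith(".dist-info") \
                and parts[1] == "sboms":
            yield f

for path in sbom_paths("somepackage"):
    print("found SBOM document:", path)
```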

Security developer-in-residence Seth Larson will be speaking about this project at PyCon US 2025 in Pittsburgh, PA in a talk titled "Phantom Dependencies: is your requirements.txt haunted?"

Meanwhile InfoWorld reports that newly approved Python Enhancement Proposal 751 will also give Python a standard lock file format.
Networking

Eric Raymond, John Carmack Mourn Death of 'Bufferbloat' Fighter Dave Taht (x.com) 18

Wikipedia remembers Dave Täht as "an American network engineer, musician, lecturer, asteroid exploration advocate, and Internet activist. He was the chief executive officer of TekLibre."

But on X.com Eric S. Raymond called him "one of the unsung heroes of the Internet, and a close friend of mine who I will miss very badly." Dave, known on X as @mtaht because his birth name was Michael, was a true hacker of the old school who touched the lives of everybody using X. His work on mitigating bufferbloat improved practical TCP/IP performance tremendously, especially around video streaming and other applications requiring low latency. Without him, Netflix and similar services might still be plagued by glitches and stutters.
Also on X, legendary game developer John Carmack remembered that Täht "did a great service for online gamers with his long campaign against bufferbloat in routers and access points. There is a very good chance your packets flow through some code he wrote." (Carmack also says he and Täht "corresponded for years".)

Long-time Slashdot reader TheBracket remembers him as "the driving force behind the Bufferbloat project and a contributor to FQ-CoDel and CAKE in the Linux kernel."

Dave spent years doing battle with Internet latency and bufferbloat, contributing to countless projects. In recent years, he's been working with Robert, Frank and myself at LibreQoS to provide CAKE at the ISP level, helping Starlink with their latency and bufferbloat, and assisting the OpenWrt project.
Eric Raymond remembered first meeting Täht in 2001 "near the peak of my Mr. Famous Guy years. Once, sometimes twice a year he'd come visit, carrying his guitar, and crash out in my basement for a week or so hacking on stuff. A lot of the central work on bufferbloat got done while I was figuratively looking over his shoulder..."

Raymond said Täht "lived for the work he did" and "bore deteriorating health stoically. While I know him he went blind in one eye and was diagnosed with multiple sclerosis." He barely let it slow him down. Despite constantly griping in later years about being burned out on programming, he kept not only doing excellent work but bringing good work out of others, assembling teams of amazing collaborators to tackle problems lesser men would have considered intractable... Dave should have been famous, and he should have been rich. If he had a cent for every dollar of value he generated in the world he probably could have bought the entire country of Nicaragua and had enough left over to finance a space program. He joked about wanting to do the latter, and I don't think he was actually joking...

In the invisible college of people who made the Internet run, he was among the best of us. He said I inspired him, but I often thought he was a better and more selfless man than me. Ave atque vale, Dave.

Weeks before his death Täht was still active on X.com, retweeting LWN's article about "The AI scraperbot scourge", an announcement from Texas Instruments, and even a Slashdot headline.

Täht was also Slashdot reader #603,670, submitting stories about network latency, leaving comments about AI, and making announcements about the Bufferbloat project.
AI

OpenAI's Motion to Dismiss Copyright Claims Rejected by Judge (arstechnica.com) 102

Is OpenAI's ChatGPT violating copyrights? The New York Times sued OpenAI in December 2023. But Ars Technica summarizes OpenAI's response: The New York Times (or NYT) "should have known that ChatGPT was being trained on its articles... partly because of the newspaper's own reporting..."

OpenAI pointed to a single November 2020 article, where the NYT reported that OpenAI was analyzing a trillion words on the Internet.

But on Friday, U.S. district judge Sidney Stein disagreed, denying OpenAI's motion to dismiss the NYT's copyright claims partly based on one NYT journalist's reporting. In his opinion, Stein confirmed that it's OpenAI's burden to prove that the NYT knew that ChatGPT would potentially violate its copyrights two years prior to its release in November 2022... And OpenAI's other argument — that it was "common knowledge" that ChatGPT was trained on NYT articles in 2020 based on other reporting — also failed for similar reasons...

OpenAI may still be able to prove through discovery that the NYT knew that ChatGPT would have infringing outputs in 2020, Stein said. But at this early stage, dismissal is not appropriate, the judge concluded. The same logic follows in a related case from The Daily News, Stein ruled. Davida Brook, co-lead counsel for the NYT, suggested in a statement to Ars that the NYT counts Friday's ruling as a win. "We appreciate Judge Stein's careful consideration of these issues," Brook said. "As the opinion indicates, all of our copyright claims will continue against Microsoft and OpenAI for their widespread theft of millions of The Times's works, and we look forward to continuing to pursue them."

The New York Times is also arguing that OpenAI contributes to ChatGPT users' infringement of its articles, and OpenAI lost its bid to dismiss that claim, too. The NYT argued that by training AI models on NYT works and training ChatGPT to deliver certain outputs, without the NYT's consent, OpenAI should be liable for users who manipulate ChatGPT to regurgitate content in order to skirt the NYT's paywalls... At this stage, Stein said that the NYT has "plausibly" alleged contributory infringement, showing through more than 100 pages of examples of ChatGPT outputs and media reports showing that ChatGPT could regurgitate portions of paywalled news articles that OpenAI "possessed constructive, if not actual, knowledge of end-user infringement." Perhaps more troubling to OpenAI, the judge noted that "The Times even informed defendants 'that their tools infringed its copyrighted works,' supporting the inference that defendants possessed actual knowledge of infringement by end users."
