Crime

South Korea Criminalizes Watching Or Possessing Sexually Explicit Deepfakes (reuters.com) 69

An anonymous reader quotes a report from Reuters: South Korean lawmakers on Thursday passed a bill that criminalizes possessing or watching sexually explicit deepfake images and videos, with penalties set to include prison terms and fines. There has been an outcry in South Korea over Telegram group chats where sexually explicit and illegal deepfakes were created and widely shared, prompting calls for tougher punishment. Anyone purchasing, saving or watching such material could face up to three years in jail or be fined up to 30 million won ($22,600), according to the bill.

Currently, making sexually explicit deepfakes with the intention of distributing them is punishable by five years in prison or a fine of 50 million won under the Sexual Violence Prevention and Victims Protection Act. When the new law takes effect, the maximum sentence for such crimes will also increase to seven years, regardless of intent to distribute. The bill will now need the approval of President Yoon Suk Yeol in order to be enacted. South Korean police have so far handled more than 800 deepfake sex crime cases this year, the Yonhap news agency reported on Thursday. That compares with 156 for all of 2021, when data was first collated. Most victims and perpetrators are teenagers, police say.

The Almighty Buck

Promises of 'Passive Income' On Amazon Led To Death Threats For Negative Online Review, FTC Says (cnbc.com) 78

"The Federal Trade Commission is cracking down on 'automation' companies that launch and manage online businesses on behalf of customers in exchange for an upfront investment," reports CNBC's Annie Palmer. "The latest case targets Ascend Ecom, which ran an e-commerce money-making scheme, primarily on Amazon." The FTC accuses the e-commerce company of defrauding consumers of at least $25 million through false claims, deceptive marketing practices, and attempts to suppress negative reviews. From the report: Jamaal Sanford received a disturbing email in May of last year. The message, whose sender claimed to be part of a "Russian shadow team," contained Sanford's home address, social security number and his daughter's college. It came with a very specific threat. The sender said Sanford, who lives in Springfield, Missouri, would only only be safe if he removed a negative online review. "Do not play tough guy," the email said. "You have nothing to gain by keeping the reviews and EVERYTHING to lose by not cooperating."

Months earlier, Sanford had left a scathing review for an e-commerce "automation" company called Ascend Ecom on the rating site Trustpilot. Ascend's purported business was the launching and managing of Amazon storefronts on behalf of clients, who would pay money for the service and the promise of earning thousands of dollars in "passive income." Sanford had invested $35,000 in such a scheme. He never recouped the money and is now in debt, according to a Federal Trade Commission lawsuit unsealed on Friday. His experience is a key piece of the FTC's suit, which accuses Ascend of breaking federal laws by making false claims related to earnings and business performance, and threatening or penalizing customers for posting honest reviews, among other violations. The FTC is seeking monetary relief for Ascend's customers and an order permanently barring Ascend from doing business.

Printer

HP Is Adding AI To Its Printers 140

An anonymous reader quotes a report from PCWorld, written by Michael Crider: The latest perpetrator of questionable AI branding? HP. The company is introducing "Print AI," what it calls the "industry's first intelligent print experience for home, office, and large format printing." What does that mean? It's essentially a new beta software driver package for some HP printers. According to the press release, it can deliver "Perfect Output" -- capital P capital O -- a branded tool that reformats the contents of a page in order to more ideally fit it onto physical paper.

Despite my skeptical tone, this is actually a pretty cool idea. "Perfect Output can detect unwanted content like ads and web text, printing only the desired text and images, saving time, paper, and ink." That's neat! If the web page you're printing doesn't offer a built-in print format, the software will make one for you. It'll also serve to better organize printed spreadsheets and images. But I don't see anything in this software that's actually AI -- or even machine learning, for that matter. This is applying the same tech (functionally, if not necessarily the same code) as the "reader mode" formatting we've seen in browsers for about a decade now. Take the text and images of a page, strip out everything else that's unnecessary, and present it as efficiently as possible. [...]
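For a sense of what that reader-mode-style cleanup involves, here is a minimal sketch in Python. It is not HP's implementation; BeautifulSoup and the particular tag list are assumptions about one common way to strip non-content elements before printing.

```python
# A minimal sketch of reader-mode-style cleanup before printing.
# Not HP's code: BeautifulSoup and the tag list below are assumptions
# about one common way to strip non-content elements from a page.
from bs4 import BeautifulSoup

def printable_text(html: str) -> str:
    soup = BeautifulSoup(html, "html.parser")
    # Drop elements that rarely belong on paper: scripts, styles,
    # navigation, sidebars, footers, headers, and embedded frames (often ads).
    for tag in soup(["script", "style", "nav", "aside", "footer", "header", "iframe", "form"]):
        tag.decompose()
    # Keep headings and paragraphs in document order.
    blocks = soup.find_all(["h1", "h2", "h3", "p"])
    lines = [b.get_text(" ", strip=True) for b in blocks]
    return "\n\n".join(line for line in lines if line)

if __name__ == "__main__":
    sample = ("<html><body><nav>Menu</nav><h1>Story</h1>"
              "<p>Body text.</p><aside>Sponsored</aside></body></html>")
    print(printable_text(sample))
```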

The press release does mention that support and formatting tasks can be accomplished with "simple conversational prompts," which at least might be leveraging some of the large language models that have become synonymous with AI as consumers understand it. But based on the description, it's more about selling you something than helping you. "Customers can choose to print or explore a curated list of partners that offer unique photo printing capabilities, gift certificates to be printed on the card, and so much more." Whoopee.
AI

Google's NotebookLM Can Help You Dive Deeper Into YouTube Videos 14

The Verge's Emma Roth reports: NotebookLM, Google's AI note-taking app, can now summarize and help you dig deeper into YouTube videos. The new capability works by analyzing the text in a YouTube video's transcript, including autogenerated ones. Once you add a YouTube link to NotebookLM, it will use AI to provide a brief summary of key topics discussed in the transcript. You can then click on these topics to get more detailed information as well as ask questions. (If you're struggling to come up with something to ask, NotebookLM will suggest some questions.)

After clicking on some of the topics, I found that NotebookLM backs up the information provided in its chat window with a citation that links you directly to the point in the transcript where it's mentioned. You can also create an Audio Overview based on the content, which is a podcast-style discussion hosted by AI. I found that the feature worked on most of the videos I tried, except for ones published within the past two days or so. [...] In addition to adding support for YouTube videos, Google announced that NotebookLM now supports audio recordings as well, allowing you to search transcribed conversations for certain information and create study guides.
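As a rough illustration of that transcript-plus-timestamps idea (and not how NotebookLM actually works), the sketch below pulls a video's captions and groups them into timestamped chunks that answers could cite back to. The third-party youtube-transcript-api package, its get_transcript helper, and the chunk size are assumptions.

```python
# Illustrative sketch only, not Google's implementation: fetch a video's
# captions with the third-party youtube-transcript-api package (its
# get_transcript helper is assumed here) and bucket them into timestamped
# chunks, so a language model's answers can "cite" a moment in the video.
from youtube_transcript_api import YouTubeTranscriptApi

def timestamped_chunks(video_id: str, window_s: float = 60.0):
    entries = YouTubeTranscriptApi.get_transcript(video_id)  # [{'text', 'start', 'duration'}, ...]
    chunks, current, bucket_start = [], [], None
    for e in entries:
        if bucket_start is None:
            bucket_start = e["start"]
        if e["start"] - bucket_start >= window_s:
            chunks.append((bucket_start, " ".join(current)))
            current, bucket_start = [], e["start"]
        current.append(e["text"])
    if current:
        chunks.append((bucket_start, " ".join(current)))
    return chunks

if __name__ == "__main__":
    video_id = "dQw4w9WgXcQ"  # any public video with captions
    for start, text in timestamped_chunks(video_id)[:3]:
        print(f"[{int(start) // 60:02d}:{int(start) % 60:02d}] {text[:80]}")
```

Each chunk could then be summarized by a language model, with the chunk's start time serving as the citation back into the video.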
Government

US Justice Department Probes Super Micro Computer (yahoo.com) 22

According to the Wall Street Journal, the U.S. Department of Justice is investigating Super Micro Computer after short-seller Hindenburg Research alleged "accounting manipulation" at the AI server maker. Super Micro's shares fell about 12% following the report. Reuters reports: The WSJ report, which cited people familiar with the matter, said the probe was at an early stage and that a prosecutor at a U.S. attorney's office recently contacted people who may be holding relevant information. The prosecutor has asked for information that appeared to be connected to a former employee who accused the company of accounting violations, the report added.

Super Micro had late last month delayed filing its annual report, citing a need to assess "its internal controls over financial reporting," a day after Hindenburg disclosed a short position and made claims of "accounting manipulation." The short-seller had cited a three-month investigation that included interviews with former senior employees of Super Micro and litigation records. Hindenburg's allegations included evidence of undisclosed related-party transactions and failure to abide by export controls, among other issues. The company had denied Hindenburg's claims.

The Courts

DoNotPay Has To Pay $193K For Falsely Touting Untested AI Lawyer, FTC Says (arstechnica.com) 30

An anonymous reader quotes a report from Ars Technica: Among the first AI companies that the Federal Trade Commission has exposed as deceiving consumers is DoNotPay -- which initially was advertised as "the world's first robot lawyer" with the ability to "sue anyone with the click of a button." On Wednesday, the FTC announced that it took action to stop DoNotPay from making bogus claims after learning that the AI startup conducted no testing "to determine whether its AI chatbot's output was equal to the level of a human lawyer." DoNotPay also did not "hire or retain any attorneys" to help verify AI outputs or validate DoNotPay's legal claims.

DoNotPay accepted no liability. But to settle the charges that DoNotPay violated the FTC Act, the AI startup agreed to pay $193,000, if the FTC's consent agreement is confirmed following a 30-day public comment period. Additionally, DoNotPay agreed to warn "consumers who subscribed to the service between 2021 and 2023" about the "limitations of law-related features on the service," the FTC said. Moving forward, DoNotPay would also be prohibited under the settlement from making baseless claims that any of its features can be substituted for any professional service.
"The complaint relates to the usage of a few hundred customers some years ago (out of millions of people), with services that have long been discontinued," DoNotPay's spokesperson said. The company "is pleased to have worked constructively with the FTC to settle this case and fully resolve these issues, without admitting liability."
Robotics

McDonald's Touchscreen Kiosks, Feared As Job Killers, Created More Jobs Instead (cnn.com) 204

An anonymous reader quotes a report from CNN: Some McDonald's franchisees -- which own and operate 95% of McDonald's in the United States -- are now rolling out kiosks that can take cash and accept change. But even in these locations, McDonald's is reassigning cashiers to other roles, including new "guest experience lead" jobs that help customers use the kiosks and assist with any issues. "In theory, kiosks should help save on labor, but in reality, restaurants have added complexity due to mobile ordering and delivery, and the labor saved from kiosks is often reallocated for these efforts," said RJ Hottovy, an analyst who covers the restaurant and retail industries at data analytics firm Placer.ai. Kiosks "have created a restaurant within a restaurant." And in some cases, kiosks have even been a flop. Bowling alley chain Bowlero added kiosks in lanes for customers to order food and drinks, but they went unused because staff and customers weren't fully trained on using them. "The unintended consequences have surprised a lot of people," Hottovy said.

Even some of the benefits of kiosks touted by chains -- they upsell customers by suggesting menu items and speed up orders -- don't always play out. A recent study from Temple University researchers found that, when a line forms behind customers using kiosks, they experience more stress when placing their orders and purchase less food. And some customers take longer tapping around on kiosks and paying than they do telling a cashier they'd like a burger and fries. Not to mention the kiosks can malfunction or break down. "If kiosks really improved speed of service, order accuracy, and upsell, they'd be rolled out more extensively across the industry than they are today," Hottovy said.

Kiosks have also been threatened as a fast-food industry response to higher minimum wage laws. [...] But the quick-service and fast-casual segments of the restaurant industry continue to grow. Staffing levels were nearly 150,000 jobs, or 3%, above pre-pandemic levels, according to the latest Labor Department data. Christopher Andrews, a sociologist at Drew University who studies the effects of technology on work, said the impacts of kiosks were similar to other self-service technology such as ATMs and self-checkout machines in supermarkets. Both technologies were predicted to cause job losses. "The introduction of ATMs did not result in massive technological unemployment for bank tellers," he said. "Instead, it freed them up from low-value tasks such as depositing and cashing checks to perform other tasks that created value."
Self-checkout has also not resulted in retail job losses, the report adds. "In some cases, self-checkout backfired for chains because self-checkout leads to higher merchandise losses from customer errors and more intentional shoplifting than when human cashiers are ringing up customers."
Businesses

As IBM Pushes For More Automation, Its AI Simply Not Up To the Job of Replacing Staff (theregister.com) 38

An anonymous reader shares a report: IBM's plan to replace thousands of roles with AI presently looks more like outsourcing jobs to India, at the expense of organizational competency. That view of Big Blue was offered to The Register after our report on the IT giant's latest layoffs, which resonated so strongly with several IBM employees that they contacted The Register with thoughts on the job cuts. Our sources have asked not to be identified to protect their ongoing relationships with Big Blue. Suffice to say they were or are employed as senior technologists in business units that span multiple locations and were privy to company communications: These are not views from the narrow entrance to a single cubicle. We're going to refer to three by the pseudonyms Alex, Blake, and Casey.

"I always make this joke about IBM," said Alex. "It is: 'IBM doesn't want people to work for them.' Every six months or so they are doing rounds of [Resource Actions -- IBM-speak for layoffs] or forcing folks into impossible moves, which result in separation." That's consistent with CEO Arvind Krishna's commitment last year to replace around 7,800 jobs with AI. But our sources say Krishna's plan is on shaky ground: IBM's AI isn't up to the job of replacing people, and some of the people who could fix that have been let go. Alex observed that over the past four years, IBM management has constantly pushed for automation and the use of AI. "With AI tools writing that code for us ... why pay for senior-level staff when you can promote a youngster who doesn't really know any better at a much lower price?" he said. "Plus, once you have a seasoned programmer write code that is by law the company's IP and it is fed into an AI library, it basically learns it and the author is no longer needed." But our sources tell us that scenario has yet to be realized inside IBM.

Businesses

OpenAI To Remove Non-Profit Control and Give Sam Altman Equity (reuters.com) 80

OpenAI is working on a plan to restructure its core business into a for-profit benefit corporation that will no longer be controlled by its non-profit board. "Chief executive Sam Altman will also receive equity for the first time in the for-profit company, which could be worth $150 billion after the restructuring as it also tries to remove the cap on returns for investors," reports Reuters. From the report: The OpenAI non-profit will continue to exist and own a minority stake in the for-profit company, the sources said. The move could also have implications for how the company manages AI risks in a new governance structure. [...] The details of the proposed corporate structure, first reported by Reuters, highlight significant governance changes happening behind the scenes at one of the most important AI companies. The plan is still being hashed out with lawyers and shareholders and the timeline for completing the restructuring remains uncertain, the sources said. "We remain focused on building AI that benefits everyone, and we're working with our board to ensure that we're best positioned to succeed in our mission. The non-profit is core to our mission and will continue to exist," an OpenAI spokesperson said.

Earlier today, OpenAI's chief technology officer Mira Murati announced her departure from the company. Her resignation follows the departures of founders Ilya Sutskever and John Schulman.

Further reading: OpenAI Pitched White House On Unprecedented Data Center Buildout
Technology

Ray-Ban Smart Glasses Updated With Real-Time AI Video, Reminders, and QR Code Scanning (techcrunch.com) 16

An anonymous reader quotes a report from TechCrunch: Meta CEO Mark Zuckerberg announced updates to the company's Ray-Ban Meta smart glasses at Meta Connect 2024 on Wednesday. [...] Meta says its smart glasses will soon have real-time AI video capabilities, meaning you can ask the Ray-Ban Meta glasses questions about what you're seeing in front of you, and Meta AI will verbally answer you in real time. Currently, the Ray-Ban Meta glasses can only take a picture and describe that to you or answer questions about it, but the video upgrade should make the experience more natural, in theory at least. These multimodal features are slated to come later this year. In a demo, users could ask Ray-Ban Meta questions about a meal they were cooking, or city scenes taking place in front of them. The real-time video capabilities mean that Meta's AI should be able to process live action and respond in an audible way. This is easier said than done, however, and we'll have to see how fast and seamless the feature is in practice. We've seen demonstrations of these real-time AI video capabilities from Google and OpenAI, but Meta would be the first to launch such features in a consumer product.

Zuckerberg also announced live language translation for Ray-Ban Meta. English-speaking users can talk to someone speaking French, Italian, or Spanish, and their Ray-Ban Meta glasses should be able to translate what the other person is saying into their language of choice. Meta says this feature is coming later this year and will include more languages later on. The Ray-Ban Meta glasses are getting reminders, which will allow people to ask Meta AI to remind them about things they look at through the smart glasses. In a demo, a user asked their Ray-Ban Meta glasses to remember a jacket they were looking at, so they could share the image with a friend later on. Meta announced that integrations with Amazon Music, Audible, and iHeart are coming to its smart glasses. This should make it easier for people to listen to music on their streaming service of choice using the glasses' built-in speakers. The Ray-Ban Meta glasses will also gain the ability to scan QR codes or phone numbers from the glasses. Users can ask the glasses to scan something, and the QR code will immediately open on the person's phone with no further action required.
Zuckerberg also unveiled the company's prototype AR glasses codenamed Orion, which feature a 70-degree field of view, Micro LED projectors, and silicon carbide lenses that beam graphics directly into the wearer's eyes.
AI

OpenAI CTO Mira Murati Is Leaving Firm 25

OpenAI's chief technology officer Mira Murati has announced her departure from the company, marking the latest high-profile exit from the Microsoft-backed AI firm. Murati, who briefly served as interim CEO during last year's leadership turmoil, cited a desire for personal exploration after six and a half years at OpenAI.

Her resignation follows the departures of founders Ilya Sutskever and John Schulman earlier this year. The startup, creator of ChatGPT, is currently in talks to raise over $6 billion at a $150 billion valuation, according to media reports.
Government

OpenAI Pitched White House On Unprecedented Data Center Buildout (yahoo.com) 38

An anonymous reader quotes a report from Bloomberg: OpenAI has pitched the Biden administration on the need for massive data centers that could each use as much power as entire cities, framing the unprecedented expansion as necessary to develop more advanced artificial intelligence models and compete with China. Following a recent meeting at the White House, which was attended by OpenAI Chief Executive Officer Sam Altman and other tech leaders, the startup shared a document with government officials outlining the economic and national security benefits of building 5-gigawatt data centers in various US states, based on an analysis the company conducted with outside experts. To put that in context, 5 gigawatts is roughly the equivalent of five nuclear reactors, or enough to power almost 3 million homes. OpenAI said investing in these facilities would result in tens of thousands of new jobs, boost the gross domestic product and ensure the US can maintain its lead in AI development, according to the document, which was viewed by Bloomberg News. To achieve that, however, the US needs policies that support greater data center capacity, the document said. "Whatever we're talking about is not only something that's never been done, but I don't believe it's feasible as an engineer, as somebody who grew up in this," said Joe Dominguez, CEO of Constellation Energy Corp. "It's certainly not possible under a timeframe that's going to address national security and timing."
Google

Google Paid $2.7 Billion To Bring Back an AI Genius Who Quit in Frustration (msn.com) 71

At a time when tech companies are paying eye-popping sums to hire the best minds in artificial intelligence, Google's deal to rehire Noam Shazeer has left others in the dust. From a report: A co-author of a seminal research paper that kicked off the AI boom, Shazeer quit Google in 2021 to start his own company after the search giant refused to release a chatbot he developed. When that startup, Character.AI, began to flounder, his old employer swooped in.

Google wrote Character a check for around $2.7 billion, according to people with knowledge of the deal. The official reason for the payment was to license Character's technology. But the deal included another component: Shazeer agreed to work for Google again. Within Google, Shazeer's return is widely viewed as the primary reason the company agreed to pay the multibillion-dollar licensing fee. The arrangement has thrust him into the middle of a debate in Silicon Valley about whether tech giants are overspending in the race to develop cutting-edge AI, which some believe will define the future of computing.

AI

Microsoft Claims Its New Tool Can Correct AI Hallucinations 50

An anonymous reader quotes a report from TechCrunch: Microsoft today revealed Correction, a service that attempts to automatically revise AI-generated text that's factually wrong. Correction first flags text that may be erroneous -- say, a summary of a company's quarterly earnings call that possibly has misattributed quotes -- then fact-checks it by comparing the text with a source of truth (e.g. uploaded transcripts). Correction, available as part of Microsoft's Azure AI Content Safety API (in preview for now), can be used with any text-generating AI model, including Meta's Llama and OpenAI's GPT-4o.

"Correction is powered by a new process of utilizing small language models and large language models to align outputs with grounding documents," a Microsoft spokesperson told TechCrunch. "We hope this new feature supports builders and users of generative AI in fields such as medicine, where application developers determine the accuracy of responses to be of significant importance."
Experts caution that this tool doesn't address the root cause of hallucinations. "Microsoft's solution is a pair of cross-referencing, copy-editor-esque meta models designed to highlight and rewrite hallucinations," reports TechCrunch. "A classifier model looks for possibly incorrect, fabricated, or irrelevant snippets of AI-generated text (hallucinations). If it detects hallucinations, the classifier ropes in a second model, a language model, that tries to correct for the hallucinations in accordance with specified 'grounding documents.'"
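A toy version of that detect-then-rewrite flow might look like the following. It assumes nothing about Microsoft's actual models: a crude word-overlap check stands in for the classifier, and a simple annotation step stands in for the corrector language model.

```python
# Toy sketch of a detect-then-correct pipeline in the spirit described
# above; the real service uses trained classifier and language models,
# not this word-overlap heuristic.
import re

def sentences(text: str):
    return [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]

def flag_ungrounded(draft: str, grounding: str, threshold: float = 0.5):
    """Flag sentences whose content words mostly don't appear in the grounding text."""
    ground_words = set(re.findall(r"[a-z']+", grounding.lower()))
    flagged = []
    for s in sentences(draft):
        words = [w for w in re.findall(r"[a-z']+", s.lower()) if len(w) > 3]
        if not words:
            continue
        overlap = sum(w in ground_words for w in words) / len(words)
        if overlap < threshold:
            flagged.append(s)
    return flagged

def correct(draft: str, grounding: str) -> str:
    """Stand-in for the second model: here we just annotate flagged sentences."""
    out = draft
    for s in flag_ungrounded(draft, grounding):
        out = out.replace(s, f"[NEEDS REVISION against source: {s}]")
    return out

grounding_doc = "Revenue for the quarter was $2.1 billion, up 4 percent year over year."
draft_summary = "Revenue for the quarter was $2.1 billion. The CEO resigned during the call."
print(correct(draft_summary, grounding_doc))
```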

Os Keyes, a PhD candidate at the University of Washington who studies the ethical impact of emerging tech, has doubts about this. "It might reduce some problems," they said, "But it's also going to generate new ones. After all, Correction's hallucination detection library is also presumably capable of hallucinating." Mike Cook, a research fellow at Queen Mary University specializing in AI, added that the tool threatens to compound the trust and explainability issues around AI. "Microsoft, like OpenAI and Google, have created this issue where models are being relied upon in scenarios where they are frequently wrong," he said. "What Microsoft is doing now is repeating the mistake at a higher level. Let's say this takes us from 90% safety to 99% safety -- the issue was never really in that 9%. It's always going to be in the 1% of mistakes we're not yet detecting."
Movies

James Cameron Joins Board of Stability AI In Coup For Tech Firm 23

An anonymous reader quotes a report from the Hollywood Reporter: In a major coup for the artificial intelligence company, Stability AI says that Avatar, Terminator and Titanic director James Cameron will join its board of directors. Stability AI is the firm that developed the Stable Diffusion text-to-image generative AI model, an image- and video-focused model that is among those being closely watched by many in Hollywood, particularly in the visual effects industry. In fact, Stability AI's CEO, Prem Akkaraju, is no stranger to the business, having previously served as the CEO of visual effects firm WETA Digital. Sean Parker, the former president of Facebook and founder of Napster, also recently joined the AI firm as executive chairman.

As a director, Cameron has long been eager to push the boundaries of what is technologically possible in filmmaking (anyone who has seen the Terminator franchise knows that he is also familiar with the pitfalls of technology run amok). He was among the earliest directors to embrace the potential of computer-generated visual effects, and he continued to use his films (most recently Avatar: The Way of Water) to move the entire field forward.
"I've spent my career seeking out emerging technologies that push the very boundaries of what's possible, all in the service of telling incredible stories," Cameron said in a statement. "I was at the forefront of CGI over three decades ago, and I've stayed on the cutting edge since. Now, the intersection of generative AI and CGI image creation is the next wave. The convergence of these two totally different engines of creation will unlock new ways for artists to tell stories in ways we could have never imagined. Stability AI is poised to lead this transformation. I'm delighted to collaborate with Sean, Prem, and the Stability AI team as they shape the future of all visual media."
AI

Human Reviewers Can't Keep Up With Police Bodycam Videos. AI Now Gets the Job 68

Tony Isaac shares a report from NPR: After a decade of explosive growth, body cameras are now standard-issue for most American police as they interact with the public. The vast majority of those millions of hours of video are never watched -- it's just not humanly possible. For academics who study the everyday actions of police, the videos are an ocean of untapped data. Some are now using 'large language model' AIs -- think ChatGPT -- to digest that information and produce new insights. [...] The research found the encounters were more likely to escalate when officers started the stop by giving orders, rather than reasons for the interaction. While academics are using AI on anonymized videos to understand larger processes, some police departments have started using it to help supervise individual officers -- and even rate their performance. An AI system mentioned in the report, called TRULEO, assesses police officers' behavior through automated transcriptions of body camera footage. It'll evaluate both positive and negative conduct during interactions, such as traffic stops, and provide feedback to officers. In addition to flagging issues like swearing or abusive language, the AI can also recognize instances of professionalism.
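The orders-versus-reasons finding hints at the kind of feature such transcript analyses extract. Below is a deliberately crude sketch of labeling an officer's opening line; the keyword lists are invented for illustration and are not the researchers' LLM-based method.

```python
# Crude illustration of one feature the bodycam studies look at: does the
# officer open with a reason for the stop or with a command? Real studies
# use large language models on full transcripts; these keyword lists are
# placeholders for illustration only.
ORDER_CUES = ("hands", "step out", "stop", "put", "don't move", "get out", "turn around")
REASON_CUES = ("the reason", "because", "pulled you over", "stopped you", "speed", "taillight")

def classify_opening(utterance: str) -> str:
    text = utterance.lower()
    if any(cue in text for cue in REASON_CUES):
        return "reason-first"
    if any(cue in text for cue in ORDER_CUES):
        return "order-first"
    return "other"

openings = [
    "The reason I pulled you over is your taillight is out.",
    "Step out of the car and put your hands on the hood.",
]
for o in openings:
    print(classify_opening(o), "-", o)
```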
AI

OpenAI Finally Brings Humanlike ChatGPT Advanced Voice Mode To US Plus, Team Users (venturebeat.com) 10

OpenAI is rolling out its advanced voice interface for ChatGPT to all Plus and Team subscribers in the U.S., the company said Tuesday. The feature, unveiled four months ago, lets users speak to the AI chatbot instead of typing. Five new voices join the lineup, expanding user options. OpenAI claims improved accent recognition and smoother conversations since initial testing. VentureBeat adds: OpenAI's foray into adding voices into ChatGPT has been controversial from the outset. In its May event announcing GPT-4o and the voice mode, people noticed similarities between one of the voices, Sky, and that of the actress Scarlett Johansson. It didn't help that OpenAI CEO Sam Altman posted the word "her" on social media, a reference to the movie where Johansson voiced an AI assistant. The controversy sparked concerns around AI developers mimicking voices of well-known individuals.
Google

Google To Update Street View Images Across Dozens of Countries, Deleted Blog Post Says (theverge.com) 29

Google is getting ready to show off updated Street View imagery in nearly 80 countries. The Verge: In a now-removed blog post seen by The Verge, Google announced that the new images are coming to countries like Australia, Brazil, Denmark, Japan, the Philippines, Rwanda, Serbia, South Africa, and more. Google is also bringing Street View to a handful of countries where it's never been available, including Bosnia, Namibia, Liechtenstein, and Paraguay. The company said its more portable Street View camera, which launched in 2022, will help offer images of "even more places in the future."

Google Maps and Google Earth are getting sharper satellite imagery as well, thanks to the company's cloud-removal AI tool that takes out clouds, shadows, haze, and mist. This should result in "brighter, more vibrant" images, according to Google.

AI

OpenAI CEO Sam Altman Anticipates Superintelligence In 'a Few Thousand Days' 174

In a rare blog post today, OpenAI CEO Sam Altman laid out his vision of the AI-powered future, which he refers to as "The Intelligence Age." Among the most notable claims, Altman said superintelligence might be achieved in "a few thousand days." VentureBeat reports: Specifically, Altman argues that "deep learning works," and can generalize across a range of domains and difficult problem sets based on its training data, allowing people to "solve hard problems," including "fixing the climate, establishing a space colony, and the discovery of all physics." As he puts it: "That's really it; humanity discovered an algorithm that could really, truly learn any distribution of data (or really, the underlying "rules" that produce any distribution of data). To a shocking degree of precision, the more compute and data available, the better it gets at helping people solve hard problems. I find that no matter how much time I spend thinking about this, I can never really internalize how consequential it is."

In a provocative statement that many AI industry participants and close observers have already seized upon in discussions on X, Altman also said that superintelligence -- AI that is "vastly smarter than humans," according to previous OpenAI statements -- may be achieved in "a few thousand days." "This may turn out to be the most consequential fact about all of history so far. It is possible that we will have superintelligence in a few thousand days (!); it may take longer, but I'm confident we'll get there." A thousand days is roughly 2.7 years, much sooner than the five-year timelines many experts give.
The Internet

Cloudflare's New Marketplace Will Let Websites Charge AI Bots For Scraping (techcrunch.com) 12

An anonymous reader quotes a report from TechCrunch: Cloudflare announced plans on Monday to launch a marketplace in the next year where website owners can sell AI model providers access to scrape their site's content. The marketplace is the final step of Cloudflare CEO Matthew Prince's larger plan to give publishers greater control over how and when AI bots scrape their websites. "If you don't compensate creators one way or another, then they stop creating, and that's the bit which has to get solved," said Prince in an interview with TechCrunch.

As the first step in its new plan, on Monday, Cloudflare launched free observability tools for customers, called AI Audit. Website owners will get a dashboard to view analytics on why, when, and how often AI models are crawling their sites for information. Cloudflare will also let customers block AI bots from their sites with the click of a button. Website owners can block all web scrapers using AI Audit, or let certain web scrapers through if they have deals or find their scraping beneficial. A demo of AI Audit shared with TechCrunch showed how website owners can use the tool, which is able to see where each scraper that visits your site comes from, and offers selective windows to see how many times scrapers from OpenAI, Meta, Amazon, and other AI model providers are visiting your site. [...]
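For site owners who want a rough, do-it-yourself version of that visibility, one option is to tally requests by crawler user agent in an ordinary access log. The sketch below assumes a combined-format log and an illustrative, non-authoritative list of AI-crawler user-agent strings; check each provider's published crawler documentation before relying on it.

```python
# Rough, do-it-yourself approximation of per-crawler visit counts from a
# standard combined-format access log. The user-agent substrings below are
# illustrative only; consult each provider's crawler documentation.
from collections import Counter

AI_CRAWLER_HINTS = {
    "GPTBot": "OpenAI",
    "CCBot": "Common Crawl",
    "ClaudeBot": "Anthropic",
    "Amazonbot": "Amazon",
    "Bytespider": "ByteDance",
    "meta-externalagent": "Meta",
}

def count_ai_crawlers(log_path: str) -> Counter:
    counts = Counter()
    with open(log_path, encoding="utf-8", errors="replace") as f:
        for line in f:
            parts = line.rstrip().rsplit('"', 2)  # user agent is the last quoted field
            if len(parts) < 3:
                continue
            ua = parts[1]
            for hint, provider in AI_CRAWLER_HINTS.items():
                if hint in ua:
                    counts[provider] += 1
    return counts

if __name__ == "__main__":
    for provider, n in count_ai_crawlers("access.log").most_common():
        print(f"{provider}: {n} requests")
```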
