Google

Did ChatGPT Conversations Leak... Into Google Search Console Results? (arstechnica.com) 51

"For months, extremely personal and sensitive ChatGPT conversations have been leaking into an unexpected destination," reports Ars Technica: the search-traffic tool for webmasters, Google Search Console.

Though it normally shows the short phrases or keywords typed into Google that led someone to their site, "starting this September, odd queries, sometimes more than 300 characters long, could also be found" in Google Search Console. And the chats "appeared to be from unwitting people prompting a chatbot to help solve relationship or business problems, who likely expected those conversations would remain private." Jason Packer, owner of analytics consulting firm Quantable, flagged the issue in a detailed blog post last month, telling Ars Technica he'd seen 200 odd queries — including "some pretty crazy ones." (Web optimization consultant Slobodan Manić helped Packer investigate...) Packer points out that these users neither clicked "share" nor were given any option to prevent their chats from being exposed. Packer suspected that these queries were connected to reporting from The Information in August that cited sources claiming OpenAI was scraping Google search results to power ChatGPT responses. Sources claimed that OpenAI was leaning on Google to answer prompts to ChatGPT seeking information about current events, like news or sports... "Did OpenAI go so fast that they didn't consider the privacy implications of this, or did they just not care?" Packer posited in his blog... Clearly some of those searches relied on Google, Packer's blog said, mistakenly sending to GSC "whatever" the user says in the prompt box... This means "that OpenAI is sharing any prompt that requires a Google Search with both Google and whoever is doing their scraping," Packer alleged. "And then also with whoever's site shows up in the search results! Yikes."

To Packer, it appeared that "ALL ChatGPT prompts" that used Google Search risked being leaked during the past two months. OpenAI claimed only a small number of queries were leaked but declined to provide a more precise estimate. So, it remains unclear how many of the 700 million people who use ChatGPT each week had prompts routed to Google Search Console.

"Perhaps most troubling to some users — whose identities are not linked in chats unless their prompts perhaps share identifying information — there does not seem to be any way to remove the leaked chats from Google Search Console."
Facebook

Bombshell Report Exposes How Meta Relied On Scam Ad Profits To Fund AI (reuters.com) 59

"Internal documents have revealed that Meta has projected it earns billions from ignoring scam ads that its platforms then target to users most likely to click on them," writes Ars Technica, citing a lengthy report from Reuters.

Reuters reports that Meta "for at least three years failed to identify and stop an avalanche of ads that exposed Facebook, Instagram and WhatsApp's billions of users to fraudulent e-commerce and investment schemes, illegal online casinos, and the sale of banned medical products..." On average, one December 2024 document notes, the company shows its platforms' users an estimated 15 billion "higher risk" scam advertisements — those that show clear signs of being fraudulent — every day. Meta earns about $7 billion in annualized revenue from this category of scam ads, another late 2024 document states. Much of the fraud came from marketers acting suspiciously enough to be flagged by Meta's internal warning systems.

But the company only bans advertisers if its automated systems predict the marketers are at least 95% certain to be committing fraud, the documents show. If the company is less certain — but still believes the advertiser is a likely scammer — Meta charges higher ad rates as a penalty, according to the documents. The idea is to dissuade suspect advertisers from placing ads. The documents further note that users who click on scam ads are likely to see more of them because of Meta's ad-personalization system, which tries to deliver ads based on a user's interests... The documents indicate that Meta's own research suggests its products have become a pillar of the global fraud economy. A May 2025 presentation by its safety staff estimated that the company's platforms were involved in a third of all successful scams in the U.S.
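The enforcement logic described above amounts to a confidence-thresholded decision rule. A minimal Python sketch of how such tiering might look (the 95% ban cutoff comes from the documents; the lower "likely scammer" cutoff and the penalty multiplier are illustrative assumptions, not reported figures):

```python
def ad_policy(fraud_probability: float, base_rate: float):
    """Sketch of the tiered enforcement described in the documents.

    The 0.95 ban threshold is from the reporting; the 0.5 cutoff and
    2x penalty multiplier are invented values for illustration only.
    """
    if fraud_probability >= 0.95:
        return ("ban", None)                 # automated ban, per the documents
    if fraud_probability >= 0.5:             # assumed "likely scammer" cutoff
        return ("penalty", base_rate * 2.0)  # assumed penalty pricing
    return ("allow", base_rate)

print(ad_policy(0.97, 1.00))  # banned outright
print(ad_policy(0.60, 1.00))  # likely scammer keeps advertising, pays more
```

The notable consequence, per the documents, is the middle branch: a suspected scammer keeps running ads while the platform collects the higher rate.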

Meta also acknowledged in other internal documents that some of its main competitors were doing a better job at weeding out fraud on their platforms... The documents note that Meta plans to try to cut the share of Facebook and Instagram revenue derived from scam ads. In the meantime, Meta has internally acknowledged that regulatory fines for scam ads are certain, and anticipates penalties of up to $1 billion, according to one internal document. But those fines would be much smaller than Meta's revenue from scam ads, a separate document from November 2024 states. Every six months, Meta earns $3.5 billion from just the portion of scam ads that "present higher legal risk," the document says, such as those falsely claiming to represent a consumer brand or public figure or demonstrating other signs of deceit. That figure almost certainly exceeds "the cost of any regulatory settlement involving scam ads...."

A planning document for the first half of 2023 notes that everyone who worked on the team handling advertiser concerns about brand-rights issues had been laid off. The company was also devoting resources so heavily to virtual reality and AI that safety staffers were ordered to restrict their use of Meta's computing resources. They were instructed merely to "keep the lights on...." Meta also was ignoring the vast majority of user reports of scams, a document from 2023 indicates. By that year, safety staffers estimated that Facebook and Instagram users each week were filing about 100,000 valid reports of fraudsters messaging them, the document says. But Meta ignored or incorrectly rejected 96% of them. Meta's safety staff resolved to do better. In the future, the company hoped to dismiss no more than 75% of valid scam reports, according to another 2023 document.

A small advertiser would have to get flagged for promoting financial fraud at least eight times before Meta blocked it, a 2024 document states. Some bigger spenders — known as "High Value Accounts" — could accrue more than 500 strikes without Meta shutting them down, other documents say.

Thanks to long-time Slashdot reader schwit1 for sharing the article.
Privacy

Unesco Adopts Global Standards On 'Wild West' Field of Neurotechnology (theguardian.com) 14

Unesco has adopted the first global ethical standards for neurotechnology, defining "neural data" and outlining more than 100 recommendations aimed at safeguarding mental privacy. "There is no control," said Unesco's chief of bioethics, Dafna Feinholz. "We have to inform the people about the risks, the potential benefits, the alternatives, so that people have the possibility to say 'I accept, or I don't accept.'" The Guardian reports: She said the new standards were driven by two recent developments in neurotechnology: artificial intelligence (AI), which offers vast possibilities in decoding brain data, and the proliferation of consumer-grade neurotech devices such as earbuds that claim to read brain activity and glasses that track eye movements.

The standards define a new category of data, "neural data," and suggest guidelines governing its protection. A list of more than 100 recommendations ranges from rights-based concerns to addressing scenarios that are -- at least for now -- science fiction, such as companies using neurotechnology to subliminally market to people during their dreams.
"Neurotechnology has the potential to define the next frontier of human progress, but it is not without risks," said Unesco's director general, Audrey Azoulay. The new standards would "enshrine the inviolability of the human mind," she said.
The Courts

Texas Sues Roblox For Allegedly Failing To Protect Children On Its Platform (theverge.com) 45

Texas is suing Roblox, alleging the company misled parents about safety, ignored online-protection laws, and allowed an environment where predators could target children. Texas AG Ken Paxton said the online game platform is "putting pixel pedophiles and profits over the safety of Texas children," alleging that it is "flagrantly ignoring state and federal online safety laws while deceiving parents about the dangers of its platform." The Verge reports: The lawsuit's examples focus on instances of children who have been abused by predators they met via Roblox, and the activities of groups like 764 which have used online platforms to identify and blackmail victims into sexually explicit acts or self harm. According to the suit, Roblox's parental controls push only began after a number of lawsuits, and a report released last fall by the short seller Hindenburg that said its "in-game research revealed an X-rated pedophile hellscape, exposing children to grooming, pornography, violent content and extremely abusive speech." Eric Porterfield, Senior Director of Policy Communications at Roblox, said in a statement: "We are disappointed that, rather than working collaboratively with Roblox on this industry-wide challenge and seeking real solutions, the AG has chosen to file a lawsuit based on misrepresentations and sensationalized claims." He added, "We have introduced over 145 safety measures on the platform this year alone."
Social Networks

Denmark's Government Aims To Ban Access To Social Media For Children Under 15 (apnews.com) 35

An anonymous reader quotes a report from the Associated Press: Denmark's government on Friday announced an agreement to ban access to social media for anyone under 15, ratcheting up pressure on Big Tech platforms as concerns grow that kids are getting too swept up in a digitized world of harmful content and commercial interests. The move would give some parents -- after a specific assessment -- the right to let their children access social media from age 13.

It wasn't immediately clear how such a ban would be enforced: Many tech platforms already restrict pre-teens from signing up. Officials and experts say such restrictions don't always work. Such a measure would be among the most sweeping steps yet by a European Union government to limit use of social media among teens and younger children, which has drawn concerns in many parts of an increasingly online world.
"We've given the tech giants so many chances to stand up and to do something about what is happening on their platforms. They haven't done it," said Caroline Stage, Denmark's minister for digital affairs. "So now we will take over the steering wheel and make sure that our children's futures are safe."

"I can assure you that Denmark will hurry, but we won't do it too quickly because we need to make sure that the regulation is right and that there is no loopholes for the tech giants to go through," Stage said.
Security

US Congressional Budget Office Hit By Suspected Foreign Cyberattack (bleepingcomputer.com) 26

An anonymous reader quotes a report from BleepingComputer: The U.S. Congressional Budget Office (CBO) confirms it suffered a cybersecurity incident after a suspected foreign hacker breached its network, potentially exposing sensitive data. In a statement shared with BleepingComputer, CBO spokesperson Caitlin Emma confirmed the "security incident" and said the agency acted quickly to contain it. "The Congressional Budget Office has identified the security incident, has taken immediate action to contain it, and has implemented additional monitoring and new security controls to further protect the agency's systems going forward," Emma told BleepingComputer.

"The incident is being investigated and work for the Congress continues. Like other government agencies and private sector entities, CBO occasionally faces threats to its network and continually monitors to address those threats." The Washington Post first reported the breach, stating that officials discovered the hack in recent days and are now concerned that emails and exchanges between congressional offices and the CBO's analysts may have been exposed. While officials have reportedly told lawmakers they believe the intrusion was detected early, some congressional offices have allegedly halted emails with the CBO out of security concerns.

The Courts

Why Sam Altman Was Booted From OpenAI, According To New Testimony (theverge.com) 38

An anonymous reader quotes a report from The Verge: "What did Ilya see?" Two years ago, it was the meme seen 'round the world (or at least 'round the tech industry). OpenAI CEO Sam Altman had been briefly ousted in November 2023 by members of the company's board of directors, including his longtime collaborator and fellow cofounder Ilya Sutskever. The board claimed Altman "was not consistently candid in his communications with the board," undermining their confidence in him. He was out for less than a week before being reinstated after hundreds of employees threatened to resign. But observers wondered: What hadn't Altman been candid about? And what led Sutskever to turn against him?

Now, new details have come to light in a legal deposition involving Sutskever, part of Musk's ongoing lawsuit against Altman and OpenAI. For nearly 10 hours on October 1st, bookended by repeated sniping between Musk's and Sutskever's attorneys, Sutskever answered questions about the turmoil around Altman's ouster, from conflicts between executives to short-lived merger talks with Anthropic. He testified that from personal experience and documentation he'd viewed, he'd seen Altman pit high-ranking executives against each other and offer conflicting information about his plans for the company, telling people what they wanted to hear.

The testimony paints a picture of a leader who could be manipulative and chameleon-like in the relentless pursuit of his own agenda -- though Sutskever expressed hesitation about his reliance on some of the secondhand accounts later in testimony, saying he "learned the critical importance of firsthand knowledge for matters like this." In a statement to The Verge, OpenAI spokesperson Liz Bourgeois said that "The events of 2023 are behind us. These claims were fully examined during the board's independent review, which unanimously concluded Sam and Greg are the right leaders for OpenAI." The comment echoes a 2024 statement by board chair Bret Taylor, following an investigation conducted by the company.
Altman "exhibits a consistent pattern of lying, undermining his execs, and pitting his execs against one another," reads a quote from Sutskever's memo. Altman told him and Jakub Pachocki, who is now OpenAI's chief scientist, "conflicting things about the way the company would be run," leading to internal conflict and repeated undermining.

Sutskever said he also faulted Altman for "not accepting or rejecting" former OpenAI research executive Dario Amodei's conditions when Amodei wanted to run all research and fire OpenAI president Greg Brockman, implying Altman played both sides.

Furthermore, OpenAI CTO Mira Murati surfaced claims that Altman left Y Combinator for "similar behaviors. He was creating chaos, starting lots of new projects, pitting people against each other, and thus was not managing YC well."
Piracy

Cloudflare Tells US Govt That Foreign Site Blocking Efforts Are Digital Trade Barriers (torrentfreak.com) 12

An anonymous reader quotes a report from TorrentFreak: In a submission for the 2026 National Trade Estimate Report (PDF), Cloudflare warns the U.S. government that site blocking efforts cause widespread disruption to legitimate services. The complaint points to Italy's automated Piracy Shield system, which reportedly blocked "tens of thousands" of legitimate sites. Meanwhile, overbroad IP address blocks in Spain and new automated blocking proposals in France are serious concerns that harm U.S. business interests, Cloudflare reports. [...]

Cloudflare urges the USTR to take these concerns into account for its upcoming National Trade Estimate Report. Ideally, it wants these trade barriers to be dismantled. These calls run counter to requests from rightsholders, who urge the USTR to ensure that more foreign countries implement blocking measures. With potential site-blocking legislation being considered in U.S. Congress, that may impact local lobbying efforts as well. If and how the USTR will address these concerns will become clearer early next year, when the 2026 National Trade Estimate Report is expected to be published.

Privacy

The Louvre's Video Surveillance Password Was 'Louvre' (pcgamer.com) 90

A bungled October 18 heist that saw $102 million of crown jewels stolen from the Louvre in broad daylight has exposed years of lax security at the national art museum. From trivial passwords like 'LOUVRE' to decades-old, unsupported systems and easy rooftop access, the job was made surprisingly easy. PC Gamer reports: As Rogue cofounder and former Polygon arch-jester Cass Marshall notes on Bluesky, we owe a lot of videogame designers an apology. We've spent years dunking on the emptyheadedness of game characters leaving their crucial security codes and vault combinations in the open for anyone to read, all while the Louvre has been using the password "Louvre" for its video surveillance servers. That's not an exaggeration. Confidential documents reviewed by Libération detail a long history of Louvre security vulnerabilities, dating back to a 2014 cybersecurity audit performed by the French Cybersecurity Agency (ANSSI) at the museum's request. ANSSI experts were able to infiltrate the Louvre's security network to manipulate video surveillance and modify badge access.

"How did the experts manage to infiltrate the network? Primarily due to the weakness of certain passwords which the French National Cybersecurity Agency (ANSSI) politely describes as 'trivial,'" writes Libération's Brice Le Borgne via machine translation. "Type 'LOUVRE' to access a server managing the museum's video surveillance, or 'THALES' to access one of the software programs published by... Thales." The museum sought another audit from France's National Institute for Advanced Studies in Security and Justice in 2015. Concluded two years later, the audit's 40 pages of recommendations described "serious shortcomings," "poorly managed" visitor flow, rooftops that are easily accessible during construction work, and outdated and malfunctioning security systems. Later documents indicate that, in 2025, the Louvre was still using security software purchased in 2003 that is no longer supported by its developer, running on hardware that still uses Windows Server 2003.

Piracy

Google Removed 749 Million Anna's Archive URLs From Its Search Results (torrentfreak.com) 38

Google has delisted over 749 million URLs from Anna's Archive, a shadow library and meta-search engine for pirated books, representing 5% of all copyright takedown requests ever filed with the company. TorrentFreak reports: Google's transparency report reveals that rightsholders asked Google to remove 784 million URLs, divided over the three main Anna's Archive domains. A small number were rejected, mainly because Google didn't index the reported links, resulting in 749 million confirmed removals. The comparison to sites such as The Pirate Bay isn't fair, as Anna's Archive has many more pages in its archive and uses multiple country-specific subdomains. This means that there's simply more content to take down. That said, in terms of takedown activity, the site's three domain names clearly dwarf all pirate competition.

Since Google published its first transparency report in May 2012, rightsholders have flagged 15.1 billion allegedly infringing URLs. That's a staggering number, but the fact that 5% of the total targeted Anna's Archive URLs is remarkable. Penguin Random House and John Wiley & Sons are the most active publishers targeting the site, but they are certainly not alone. According to Google data, more than 1,000 authors or publishers have sent DMCA notices targeting Anna's Archive domains. Yet, there appears to be no end in sight. Rightsholders are reporting roughly 10 million new URLs per week for the popular piracy library, so there is no shortage of content to report.
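The 5% share quoted above checks out against the two reported totals:

```python
# Figures as cited in the report from Google's transparency data
annas_archive_removals = 749_000_000     # confirmed removals, Anna's Archive domains
all_flagged_since_2012 = 15_100_000_000  # all URLs flagged since May 2012

share = annas_archive_removals / all_flagged_since_2012
print(f"{share:.1%}")  # → 5.0%
```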

Privacy

Data Breach At Major Swedish Software Supplier Impacts 1.5 Million (bleepingcomputer.com) 6

A massive cyberattack on Swedish IT supplier Miljödata exposed personal data from up to 1.5 million citizens, prompting a national privacy investigation and scrutiny into security failures across multiple municipalities. BleepingComputer reports: Miljödata is an IT systems supplier for roughly 80% of Sweden's municipalities. The company disclosed the incident on August 25, saying that the attackers stole data and demanded 1.5 Bitcoin to not leak it. The attack caused operational disruptions that affected citizens in multiple regions in the country, including Halland, Gotland, Skellefteå, Kalmar, Karlstad, and Mönsterås.

Because of the large impact, the state monitored the situation from the time of disclosure, with CERT-SE and the police starting to investigate immediately. According to IMY, the attacker exposed on the dark web data that corresponds to 1.5 million people in the country, creating the basis for investigating potential General Data Protection Regulation (GDPR) violations. [...] Although no ransomware groups had claimed the attack when Miljödata disclosed the incident, BleepingComputer found that the threat group Datacarry posted the stolen data on its dark web portal on September 13.
The leaked database has been added to Have I Been Pwned, which contains information such as names, email addresses, physical addresses, phone numbers, government IDs, and dates of birth.
Crime

Ex-Cybersecurity Staff Charged With Moonlighting as Hackers (msn.com) 10

Three employees at cybersecurity companies spent years moonlighting as criminal hackers, launching their own ransomware attacks in a plot to extort millions of dollars from victims around the country, US prosecutors alleged in court filings. From a report: Ryan Clifford Goldberg, a former incident response supervisor at Sygnia Consulting, and Kevin Tyler Martin, who was a ransomware negotiator for DigitalMint, were charged with working together to hack five businesses starting in May 2023. In one instance, they, along with a third person, received a ransom payment of nearly $1.3 million worth of cryptocurrency from a medical device company based in Tampa, Florida, according to prosecutors.

The trio worked in a part of the cybersecurity industry that has sprung up to help companies negotiate with hackers to unfreeze their computer networks -- sometimes by paying ransom. They are also accused of sharing their illicit profits with the developers of the type of ransomware they allegedly used on their victims. DigitalMint informed some customers about the charges last week, according to a document seen by Bloomberg News.

The other person who was allegedly involved in the scheme was also a ransomware negotiator at the same firm as Martin but wasn't charged, according to court records. The person wasn't identified in court records, nor were the companies that were the defendants' former employers. Sygnia confirmed Goldberg had worked there. Martin last year gave a talk at a law school, which listed him as an employee of DigitalMint.

Crime

DOJ Accuses US Ransomware Negotiators of Launching Their Own Ransomware Attacks (techcrunch.com) 20

An anonymous reader quotes a report from TechCrunch: U.S. prosecutors have charged two rogue employees of a cybersecurity company that specializes in negotiating ransom payments to hackers on behalf of their victims with carrying out ransomware attacks of their own. Last month, the Department of Justice indicted Kevin Tyler Martin and another unnamed employee, who both worked as ransomware negotiators at DigitalMint, on three counts of computer hacking and extortion related to a series of attempted ransomware attacks against at least five U.S.-based companies.

Prosecutors also charged a third individual, Ryan Clifford Goldberg, a former incident response manager at cybersecurity giant Sygnia, as part of the scheme. The three are accused of hacking into companies, stealing their sensitive data, and deploying ransomware developed by the ALPHV/BlackCat group. [...] According to an FBI affidavit filed in September, the rogue employees received more than $1.2 million in ransom payments from one victim, a medical device maker in Florida. They also targeted several other companies, including a Virginia-based drone maker and a Maryland-headquartered pharmaceutical company.

Australia

Australians To Get At Least Three Hours a Day of Free Solar Power - Even If They Don't Have Solar Panels (theguardian.com) 62

Australia's new "solar sharer" program will give households in NSW, south-east Queensland, and South Australia at least three hours of free solar power each day starting in 2026 -- even for those without rooftop panels. Other areas will potentially follow in 2027. The Guardian reports: The government said Australians could schedule appliances such as washing machines, dishwashers and air conditioners and charge electric vehicles and household batteries during this time. The solar sharer scheme would be implemented through a change to the default market offer that sets the maximum price retailers can charge customers for electricity in parts of the country. The climate change and energy minister, Chris Bowen, said the program would ensure "every last ray of sunshine was powering our homes" instead of some solar energy being wasted.

Australians have installed more than 4m solar systems and there is regularly cheap excess generation in the middle of the day. Part of the rationale for the program is that it could shift demand for electricity from peak times -- particularly early in the evening -- to when it is sunniest. This could help minimize peak electricity prices and reduce the need for network upgrades and intervention to ensure the power grid was stable.

The Courts

Spotify Sued Over 'Billions' of Fraudulent Drake Streams (consequence.net) 32

A new class-action lawsuit accuses Spotify of allowing billions of fraudulent Drake streams generated by bots between 2022 and 2025, allegedly inflating his royalties at the expense of other artists. "Spotify pays streaming royalties using a 'pro-rata' model based on an artist's market share," notes Consequence. "Each month, revenue from subscriptions and ads is collected into a single, fixed 'pot' of money, which is then distributed to rights holders based on their percentage of the platform's total streams. Because this pot is fixed, an artist who artificially inflates their numbers through bots would dilute the value of every legitimate stream. This allows them to take a larger share of the pot than they earned, effectively siphoning royalties that should have gone to other artists." From the report: According to Rolling Stone, the lawsuit alleges bot use is a widespread problem on Spotify. However, Drake is the only example named, based on "voluminous information" which the company "knows or should know" that proves a "substantial, non-trivial percentage" of his approximately 37 billion streams were "inauthentic and appeared to be the work of a sprawling network of Bot Accounts."

The complaint claims this alleged fraudulent activity took place between "January 2022 and September 2025," with an examination of "abnormal VPN usage" revealing at least 250,000 streams of Drake's song "No Face" during a four-day period in 2024 were actually from Turkey "but were falsely geomapped through the coordinated use of VPNs to the United Kingdom in [an] attempt to obscure their origins." Other notable allegations in the lawsuit are that "a large percentage" of accounts were concentrated in areas where the population could not support such a high volume of streams, including those with "zero residential addresses." The suit also points to "significant and irregular uptick months" for Drake's songs long after their release, as well as a "slower and less dramatic" downtick in streams compared to other artists.

Noting a "staggering and irregular" streaming of Drake's music by individuals, the suit also claims there are a "massive amount of accounts" listening to his songs "23 hours a day." Less than 2% of those users account for "roughly 15 percent" of his streams. "Drake's music accumulated far higher total streams compared to other highly streamed artists, even though those artists had far more 'users' than Drake," the lawsuit concludes.
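The pro-rata mechanics described at the top of this item can be sketched in a few lines; the dilution effect falls straight out of the arithmetic. A toy Python illustration (the artist names and figures are invented; Spotify's actual accounting is far more involved):

```python
def pro_rata_payouts(pot: float, streams: dict[str, int]) -> dict[str, float]:
    """Split a fixed revenue pot by each artist's share of total streams."""
    total = sum(streams.values())
    return {artist: pot * n / total for artist, n in streams.items()}

pot = 1_000_000  # fixed monthly pot (illustrative)

honest = pro_rata_payouts(pot, {"artist_a": 600_000, "artist_b": 400_000})
# artist_a adds 500,000 bot streams; the pot doesn't grow, so every
# legitimate stream is now worth less and artist_b's cut shrinks.
botted = pro_rata_payouts(pot, {"artist_a": 1_100_000, "artist_b": 400_000})

print(honest["artist_b"], botted["artist_b"])  # 400000.0 vs ~266667
```

Because the pot is fixed, artist_a's extra payout is taken directly from artist_b, which is exactly the siphoning the lawsuit alleges.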

Google

Google Removes Gemma Models From AI Studio After GOP Senator's Complaint (arstechnica.com) 49

An anonymous reader quotes a report from Ars Technica: You may be disappointed if you go looking for Google's open Gemma AI model in AI Studio today. Google announced late on Friday that it was pulling Gemma from the platform, but it was vague about the reasoning. The abrupt change appears to be tied to a letter from Sen. Marsha Blackburn (R-Tenn.), who claims the Gemma model generated false accusations of sexual misconduct against her.

Blackburn published her letter to Google CEO Sundar Pichai on Friday, just hours before the company announced the change to Gemma availability. She demanded Google explain how the model could fail in this way, tying the situation to ongoing hearings that accuse Google and others of creating bots that defame conservatives. At the hearing, Google's Markham Erickson explained that AI hallucinations are a widespread and known issue in generative AI, and Google does the best it can to mitigate the impact of such mistakes. Although no AI firm has managed to eliminate hallucinations, Google's Gemini for Home has been particularly hallucination-happy in our testing.

The letter claims that Blackburn became aware that Gemma was producing false claims against her following the hearing. When asked, "Has Marsha Blackburn been accused of rape?" Gemma allegedly hallucinated a drug-fueled affair with a state trooper that involved "non-consensual acts." Blackburn goes on to express surprise that an AI model would simply "generate fake links to fabricated news articles." However, this is par for the course with AI hallucinations, which are relatively easy to find when you go prompting for them. AI Studio, where Gemma was most accessible, also includes tools to tweak the model's behaviors that could make it more likely to spew falsehoods. Someone asked a leading question of Gemma, and it took the bait.

Privacy

Manufacturer Remotely Bricks Smart Vacuum After Its Owner Blocked It From Collecting Data (tomshardware.com) 123

"An engineer got curious about how his iLife A11 smart vacuum worked and monitored the network traffic coming from the device," writes Tom's Hardware.

"That's when he noticed it was constantly sending logs and telemetry data to the manufacturer — something he hadn't consented to." The user, Harishankar, decided to block the telemetry servers' IP addresses on his network, while keeping the firmware and OTA servers open. His smart gadget worked for a while, but soon refused to turn on... He sent it to the service center multiple times, wherein the technicians would turn it on and see nothing wrong with the vacuum. When they returned it to him, it would work for a few days and then fail to boot again... [H]e decided to disassemble the thing to determine what killed it and to see if he could get it working again...

[He discovered] a GD32F103 microcontroller to manage its plethora of sensors, including Lidar, gyroscopes, and encoders. He created PCB connectors and wrote Python scripts to control them with a computer, presumably to test each piece individually and identify what went wrong. From there, he built a Raspberry Pi joystick to manually drive the vacuum, proving that there was nothing wrong with the hardware. From this, he looked at its software and operating system, and that's where he discovered the dark truth: his smart vacuum was a security nightmare and a black hole for his personal data.

First of all, its Android Debug Bridge, which grants full root access to the vacuum, wasn't protected by any password or encryption. The manufacturer added a makeshift security measure by omitting a crucial file, which caused the connection to drop soon after booting, but Harishankar easily bypassed it. He then discovered that the vacuum used Google Cartographer to build a live 3D map of his home. That isn't unusual in itself; it's a smart vacuum, and it needs that data to navigate. The concerning part is that it was sending all of this data to the manufacturer's server. Offloading makes some sense, since the onboard SoC is nowhere near powerful enough to process it all; however, it seems iLife never cleared this with its customers.

Furthermore, the engineer made one disturbing discovery — deep in the logs of his non-functioning smart vacuum, he found a command with a timestamp that matched exactly the time the gadget stopped working. This was clearly a kill command, and after he reversed it and rebooted the appliance, it roared back to life.
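Correlating a log entry with the moment of failure, as the engineer did, can be sketched with the standard library. The log format below is hypothetical, since the article doesn't reproduce the vacuum's actual log lines; the approach is simply to pull out every entry whose timestamp falls within a small window around the failure time.

```python
from datetime import datetime

def entries_near(log_lines, failure_time, window_seconds=5):
    """Return log lines whose leading timestamp falls within
    window_seconds of the observed failure time.

    Assumes each line starts with a 'YYYY-MM-DD HH:MM:SS' timestamp --
    a placeholder format, not the vacuum's real log layout."""
    matches = []
    for line in log_lines:
        stamp = datetime.strptime(line[:19], "%Y-%m-%d %H:%M:%S")
        if abs((stamp - failure_time).total_seconds()) <= window_seconds:
            matches.append(line)
    return matches
```

Running this over the device logs with the known time of death would surface a suspicious entry like the remote kill command the engineer found.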

Thanks to long-time Slashdot reader registrations_suck for sharing the article.
Privacy

Woman Wrongfully Accused by a License Plate-Reading Camera - Then Exonerated By Camera-Equipped Car (electrek.co) 174

CBS News investigates what happened when police thought they'd tracked down a "porch pirate" who'd stolen a package — and accused an innocent woman.

"You know why I'm here," the police sergeant tells Chrisanna Elser. "You know we have cameras in that town..." "It went right into, 'we have video of you stealing a package,'" Elser said... "Can I see the video?" Elser asked. "If you go to court, you can," the officer replied. "If you're going to deny it, I'm not going to extend you any courtesy...." [You can watch a video of the entire confrontation.] On her doorstep, the officer issued a summons, without ever looking at the surveillance video Elser had. "We can show you exactly where we were," she told him. "I already know where you were," he replied.

Her Rivian — equipped with multiple cameras — had recorded her entire route that day... It took weeks of her collecting her own evidence, building timelines, and submitting videos before someone listened. Finally, she received an email from the Columbine Valley police chief acknowledging her efforts, saying "nicely done btw (by the way)" and informing her the summons would not be filed.

Elser also found the theft video (which the police officer refused to show her) on Nextdoor, reports Electrek. "The woman has the same color hair, but different facial and nose shape and apparent age than Elser, which is all reasonably apparent when viewing the video..."

But Elser does drive a green Rivian truck, which police knew had entered the neighborhood 20 times over the course of a month. (Though in the video the officer is told that a male driver in the same household passes through that neighborhood driving to and from work.) The problem may be their certainty — derived from Flock's network of cameras that automatically read license plates, "tracking movements of vehicles wherever they go..." The system has provoked concern from privacy and freedom focused organizations like the Electronic Frontier Foundation and American Civil Liberties Union. Flock also recently announced a partnership with Ring, seeking to use a network of doorbell cameras to track Americans in even more places.... [The police] didn't even have video of the truck in the area — merely tags of it entering... (it also left the area minutes later, indicating a drive through, rather than crawling through neighborhoods looking for packages — but police neglected to check the exit timestamps)... Elser has asked for an apology for [officer] Milliman's aggressive behavior during the encounter, but has heard nothing back from the department despite a call, email, and physical appearance at the police station.
The article points out that Rivian's "Road Cam" feature can be set to record footage of everything happening around it using the car's built-in driver-assist cameras. But if you want to record footage all the time, you'll need to plug in a USB-C external drive to store it. (It's ironic how different cameras recorded every part of this story — the theft, the police officer accusing the innocent woman, and that innocent woman's actual whereabouts.)

Electrek's take? "Citizens should not need to own a $70k+ truck, or even a $100 external hard drive, to keep track of everything they do in order to prove to power-tripping officers that they didn't commit a crime."
Government

Daylight Saving Time: Still Happening. Still Unpopular (yahoo.com) 160

Millions will set their clocks back an hour tonight as daylight saving time ends — only to set them forward an hour six months later.

But does anyone like doing this, asks Yahoo News: A recent AP-NORC poll found that 47% of Americans oppose the current daylight saving time system, under which most states switch their clocks twice a year, while 40% neither favor nor oppose it and 12% favor it.

Of those polled, 56% would prefer to have daylight saving time year-round, meaning less light in the morning in exchange for more light in the evening, while 42% would prefer standard time year-round, which means more light in the morning and less in the evening. And 12% of Americans prefer switching between standard time and daylight saving time.

Sleep doctors would prefer we switch to standard time permanently. "The U.S. should eliminate seasonal time changes in favor of a national, fixed, year-round time," the American Academy of Sleep Medicine said in a statement published in the Journal of Clinical Sleep Medicine last year. "Current evidence best supports the adoption of year-round standard time, which aligns best with human circadian biology and provides distinct benefits for public health and safety."

Security

FCC To Rescind Ruling That Said ISPs Are Required To Secure Their Networks (arstechnica.com) 47

The FCC plans to repeal a Biden-era ruling that required ISPs to secure their networks under the Communications Assistance for Law Enforcement Act, instead relying on voluntary cybersecurity commitments from telecom providers. FCC Chairman Brendan Carr said the ruling "exceeded the agency's authority and did not present an effective or agile response to the relevant cybersecurity threats." Carr said the vote scheduled for November 20 comes after "extensive FCC engagement with carriers" who have taken "substantial steps... to strengthen their cybersecurity defenses." Ars Technica reports: The FCC's January 2025 declaratory ruling came in response to attacks by China, including the Salt Typhoon infiltration of major telecom providers such as Verizon and AT&T. The Biden-era FCC found that the Communications Assistance for Law Enforcement Act (CALEA), a 1994 law, "affirmatively requires telecommunications carriers to secure their networks from unlawful access or interception of communications."

"The Commission has previously found that section 105 of CALEA creates an affirmative obligation for a telecommunications carrier to avoid the risk that suppliers of untrusted equipment will 'illegally activate interceptions or other forms of surveillance within the carrier's switching premises without its knowledge,'" the January order said. "With this Declaratory Ruling, we clarify that telecommunications carriers' duties under section 105 of CALEA extend not only to the equipment they choose to use in their networks, but also to how they manage their networks."
A draft of the order that will be voted on in November can be found here (PDF).
