Government

Pentagon Purchases a Device Allegedly Linked To Havana Syndrome (cnn.com) 72

"Since the United States reopened its embassy in Cuba in 2015, a number of personnel have reported a series of debilitating medical ailments which include dizziness, fatigue, problems with memory, and impaired vision," writes longtime Slashdot reader smooth wombat. "For ten years, these sudden and unexplained onsets have been studied with no conclusive evidence one way or the other. Now comes word that a device, purchased by the Pentagon, has been tested which may be linked to what is known as Havana Syndrome." From a report: A division of the Department of Homeland Security, Homeland Security Investigations, purchased the device for millions of dollars in the waning days of the Biden administration, using funding provided by the Defense Department, according to two of the sources. Officials paid âoeeight figuresâ for the device, these people said, declining to offer a more specific number. [...]

The device acquired by HSI produces pulsed radio waves, one of the sources said, which some officials and academics have speculated for years could be the cause of the incidents. Although the device is not entirely Russian in origin, it contains Russian components, this person added. Officials have long struggled to understand how a device powerful enough to cause the kind of damage some victims have reported could be made portable; that remains a core question, according to one of the sources briefed on the device. The device could fit in a backpack, this person said.

[...] One key concern now for some officials is that if the technology proves viable it may have proliferated, several of the sources said, meaning that more than one country could now have access to a device that may be capable of causing career-ending injuries to US officials.
Further reading: 'Havana Syndrome' Debate Rises Again in US Government
Government

Senate Passes a Bill That Would Let Nonconsensual Deepfake Victims Sue (theverge.com) 63

The U.S. Senate unanimously passed the Disrupt Explicit Forged Images and Non-Consensual Edits Act (DEFIANCE Act), giving victims of sexually explicit AI deepfakes the right to sue the individuals who created them. The Verge reports: The bill passed with unanimous consent -- meaning there was no roll-call vote, and no Senator objected to its passage on the floor Tuesday. It's meant to build on the work of the Take It Down Act, a law that criminalizes the distribution of nonconsensual intimate images (NCII) and requires social media platforms to promptly remove them. [...] Now the ball is again in the House leadership's court; if they decide to bring the bill to the floor, it will have to pass in order to reach the president's desk.
Power

Trump Says Microsoft To Make Changes To Curb Data Center Power Costs For Americans (cnbc.com) 42

An anonymous reader quotes a report from CNBC: President Donald Trump said in a social media post on Monday that Microsoft will announce changes to ensure that Americans won't see rising utility bills as the company builds more data centers to meet rising artificial intelligence demand. "I never want Americans to pay higher Electricity bills because of Data Centers," Trump wrote on Truth Social. "Therefore, my Administration is working with major American Technology Companies to secure their commitment to the American People, and we will have much to announce in the coming weeks."

[...] Trump congratulated Microsoft on its efforts to keep prices in check, suggesting that other companies will make similar commitments. "First up is Microsoft, who my team has been working with, and which will make major changes beginning this week to ensure that Americans don't 'pick up the tab' for their POWER consumption, in the form of paying higher Utility bills," Trump wrote on Monday. Utilities charged U.S. consumers 6% more for electricity in August from a year earlier, including in states with many data centers, CNBC reported in November.

Microsoft is paying close attention to the impact of its data centers on local residents. "I just want you to know we are doing everything we can, and I believe we're succeeding, in managing this issue well, so that you all don't have to pay more for electricity because of our presence," Brad Smith, the company's president and vice chair, said at a September town hall meeting in Wisconsin, where Microsoft is building an AI data center. While Microsoft is moving forward with some facilities, the company withdrew plans for a data center in Caledonia, Wisconsin, amid loud opposition to its efforts there. The project would have been located 20 miles away from a data center in the village of Mount Pleasant.

Government

EPA To Stop Considering Lives Saved By Limiting Air Pollution (nytimes.com) 145

An anonymous reader quotes a report from the New York Times: For decades, the Environmental Protection Agency has calculated the health benefits of reducing air pollution, using the cost estimates of avoided asthma attacks and premature deaths to justify clean-air rules. Not anymore. Under President Trump, the E.P.A. plans to stop tallying the health benefits of curbing two of the most widespread deadly air pollutants, fine particulate matter and ozone, when regulating industry, according to internal agency emails and documents reviewed by The New York Times.

It's a seismic shift that runs counter to the E.P.A.'s mission statement, which says the agency's core responsibility is to protect human health and the environment, environmental law experts said. The change could make it easier to repeal limits on these pollutants from coal-burning power plants, oil refineries, steel mills and other industrial facilities across the country, the emails and documents show. That would most likely lower costs for companies while resulting in dirtier air.
"The idea that E.P.A. would not consider the public health benefits of its regulations is anathema to the very mission of E.P.A.," said Richard Revesz, the faculty director of the Institute for Policy Integrity at New York University School of Law.

"If you're only considering the costs to industry and you're ignoring the benefits, then you can't justify any regulations that protect public health, which is the very reason that E.P.A. was set up."
Security

Fintech Firm Betterment Confirms Data Breach After Hackers Send Fake $10,000 Crypto Scam Messages (theverge.com) 3

An anonymous reader quotes a report from The Verge: Betterment, a financial app, sent a sketchy-looking notification on Friday asking users to send $10,000 to Bitcoin and Ethereum crypto wallets and promising to "triple your crypto," according to a thread on Reddit. The Betterment account says in an X thread that this was an "unauthorized message" that was sent via a "third-party system." TechCrunch has since confirmed that an undisclosed number of Betterment's customers have had their personal information accessed. "The company said customer names, email and postal addresses, phone numbers, and dates of birth were compromised in the attack," reports TechCrunch.

Betterment said it detected the attack on the same day and "immediately revoked the unauthorized access and launched a comprehensive investigation, which is ongoing." The fintech firm also said it has reached out to the customers targeted by the hackers and "advised them to disregard the message."

"Our ongoing investigation has continued to demonstrate that no customer accounts were accessed and that no passwords or other log-in credentials were compromised," Betterment wrote in the email.
The Courts

Supreme Court Takes Case That Could Strip FCC of Authority To Issue Fines (arstechnica.com) 49

An anonymous reader quotes a report from Ars Technica: The Supreme Court will hear a case that could invalidate the Federal Communications Commission's authority to issue fines against companies regulated by the FCC. AT&T, Verizon, and T-Mobile challenged the FCC's ability to punish them after the commission fined the carriers for selling customer location data without their users' consent. AT&T convinced the US Court of Appeals for the 5th Circuit to overturn its fine (PDF), while Verizon lost in the 2nd Circuit and T-Mobile lost in the District of Columbia Circuit. Verizon petitioned (PDF) the Supreme Court to reverse its loss, while the FCC and Justice Department petitioned (PDF) the court to overturn AT&T's victory in the 5th Circuit. The Supreme Court granted both petitions to hear the challenges and consolidated the cases in a list of orders (PDF) released Friday. Oral arguments will be held.

In 2024, the FCC fined the big three carriers a total of $196 million for location data sales revealed in 2018, saying the companies were punished "for illegally sharing access to customers' location information without consent and without taking reasonable measures to protect that information against unauthorized disclosure." The carriers challenged the fines in three appeals courts, arguing that the penalties violated their Seventh Amendment right to a jury trial. [...] While the Supreme Court is only taking up the AT&T and Verizon cases, the T-Mobile case would be affected by whatever ruling the Supreme Court issues. T-Mobile is seeking a rehearing in the District of Columbia Circuit, an effort that could be boosted or rendered moot by whatever the Supreme Court decides.

Government

More US States Are Preparing Age-Verification Laws for App Stores (politico.com) 57

Yes, a federal judge blocked Texas' attempt at an app store age-verification law. But this year Silicon Valley giants including Google and Apple "are expected to fight hard against similar legislation," reports Politico, "because of the vast legal liability it imposes on app stores and developers." In Texas, Utah and Louisiana, parent advocates have linked up with conservative "pro-family" groups to pass laws forcing mobile app stores to verify user ages and require parental sign-off. If those rules hold up in court, companies like Google and Apple, which run the two largest app stores, would face massive legal liability... California has taken a different approach, passing its own age-verification law last year that puts liability on device manufacturers instead of app stores. That model has been better received by the tech lobby, and is now competing with the app-based approach in states like Ohio. In Washington D.C., a GOP-led bill modeled on Texas' law is wending its way through Capitol Hill. And more states are expected to join the fray, including Michigan and South Carolina.

Joel Thayer, president of the conservative Digital Progress Institute and a key architect of the Texas law, said states are only accelerating their push. He explicitly linked the age-verification debate to AI, arguing it's "terrifying" to think companies could build new AI products by scraping data from children's apps. Thayer also pointed to the Trump administration's recent executive order aimed at curbing state regulation of AI, saying it has galvanized lawmakers. "We're gonna see more states pushing this stuff," Thayer said. "What really put fuel in the fire is the AI moratorium for states. I think states have been reinvigorated to fight back on this."

He told Politico that the issue will likely be decided by America's Supreme Court, which in June upheld Texas legislation requiring age verification for online content. Thayer said states need a ruling from America's highest court to "triangulate exactly what the eff is going on with the First Amendment in the tech world.

"They're going to have to resolve the question at some point."
Piracy

Italy Fines Cloudflare 14 Million Euros For Refusing To Filter Pirate Sites On Public 1.1.1.1 DNS (torrentfreak.com) 39

An anonymous reader quotes a report from TorrentFreak: Italy's communications regulator AGCOM imposed a record-breaking 14.2 million-euro fine on Cloudflare after the company failed to implement the required piracy blocking measures. Cloudflare argued that filtering its global 1.1.1.1 DNS resolver would be "impossible" without hurting overall performance. AGCOM disagreed, noting that Cloudflare is not necessarily a neutral intermediary either.

[...] "The measure, in addition to being one of the first financial penalties imposed in the copyright sector, is particularly significant given the role played by Cloudflare" AGCOM notes, adding that Cloudflare is linked to roughly 70% of the pirate sites targeted under its regime. In its detailed analysis, the regulator further highlighted that Cloudflare's cooperation is "essential" for the enforcement of Italian anti-piracy laws, as its services allow pirate sites to evade standard blocking measures.

Cloudflare has strongly contested the accusations throughout AGCOM's proceedings and previously criticized the Piracy Shield system for lacking transparency and due process. While the company did not immediately respond to our request for comment, it will almost certainly appeal the fine. This appeal may also draw the interest of other public DNS resolvers, such as Google and OpenDNS. AGCOM, meanwhile, says that it remains fully committed to enforcing the local piracy law. The regulator notes that since the Piracy Shield started in February 2024, 65,000 domain names and 14,000 IP addresses were blocked.

Businesses

Lawsuit Over OpenAI For-Profit Conversion Can Head To Trial, US Judge Says 32

Longtime Slashdot reader schwit1 shares a report from Reuters: Billionaire entrepreneur Elon Musk persuaded a judge on Wednesday to allow a jury trial on his allegations that ChatGPT maker OpenAI violated its founding mission in its high-profile restructuring to a for-profit entity. Musk was a cofounder of OpenAI in 2015 but left in 2018 and now runs an AI company that competes with it.

U.S. District Judge Yvonne Gonzalez Rogers in Oakland, California, said at a hearing that there was "plenty of evidence" suggesting OpenAI's leaders made assurances that its original nonprofit structure was going to be maintained. The judge said there were enough disputed facts to let a jury consider the claims at a trial scheduled for March, rather than decide the issues herself. She said she would issue a written order after the hearing that addresses OpenAI's bid to throw out the case.

[...] Musk contends he contributed about $38 million, roughly 60% of OpenAI's early funding, along with strategic guidance and credibility, based on assurances that the organization would remain a nonprofit dedicated to the public benefit. The lawsuit accuses OpenAI co-founders Sam Altman and Greg Brockman of plotting a for-profit switch to enrich themselves, culminating in multibillion-dollar deals with Microsoft and a recent restructuring.
OpenAI, Altman and Brockman have denied the claims, and they called Musk "a frustrated commercial competitor seeking to slow down a mission-driven market leader."

Microsoft is also a defendant and has urged the judge to toss Musk's lawsuit. A lawyer for Microsoft said there was no evidence that the company "aided and abetted" OpenAI.

OpenAI in a statement after the hearing said: "Mr Musk's lawsuit continues to be baseless and a part of his ongoing pattern of harassment, and we look forward to demonstrating this at trial."
Privacy

Illinois Health Department Exposed Over 700,000 Residents' Personal Data For Years (techcrunch.com) 14

Illinois Department of Human Services disclosed that a misconfigured internal mapping website exposed sensitive personal data for more than 700,000 Illinois residents for over four years, from April 2021 to September 2025. Officials say they can't confirm whether the publicly accessible data was ever viewed. TechCrunch reports: Officials said the exposed data included personal information on 672,616 individuals who are Medicaid and Medicare Savings Program recipients. The data included their addresses, case numbers, and demographic data -- but not individuals' names. The exposed data also included names, addresses, case statuses, and other information relating to 32,401 individuals in receipt of services from the department's Division of Rehabilitation Services.
Piracy

French Court Orders Google DNS to Block Pirate Sites, Dismisses 'Cloudflare-First' Defense (torrentfreak.com) 34

The Paris Judicial Court ordered Google to block additional pirate sports-streaming domains at the DNS level, rejecting Google's argument that enforcement should target upstream providers like Cloudflare first. "The blockade was requested by Canal+ and aims to stop pirate streams of Champions League games," notes TorrentFreak. From the report: Most recently, Google was compelled to take action following a complaint from French broadcaster Canal+ and its subsidiaries regarding Champions League piracy. Like previous blocking cases, the request is grounded in Article L. 333-10 of the French Sports Code, which enables rightsholders to seek court orders against any entity that can help to stop 'serious and repeated' sports piracy. After reviewing the evidence and hearing arguments from both sides, the Paris Court granted the blocking request, ordering Google to block nineteen domain names, including antenashop.site, daddylive3.com, livetv860.me, streamysport.org and vavoo.to.

The latest blocking order covers the entire 2025/2026 Champions League series, which ends on May 30, 2026. It's a dynamic order too, which means that if these sites switch to new domains, as verified by ARCOM, these have to be blocked as well. Google objected to the blocking request. Among other things, it argued that several domains were linked to Cloudflare's CDN. Therefore, suspending the sites on the CDN level would be more effective, as that would render them inaccessible. Based on the subsidiarity principle, Google argued that blocking measures should only be ordered if attempts to block the pirate sites through more direct means have failed.

The court dismissed these arguments, noting that intermediaries cannot dictate the enforcement strategy or blocking order. Intermediaries cannot require "prior steps" against other technical intermediaries, especially given the "irremediable" character of live sports piracy. The judge found the block proportional because Google remains free to choose the technical method, even if the result is mandated. Internet providers, search engines, CDNs, and DNS resolvers can all be required to block, irrespective of what other measures were taken previously. Google further argued that the blocking measures were disproportionate because they were complex, costly, easily bypassed, and had effects beyond the borders of France.

The Paris court rejected these claims. It argued that Google failed to demonstrate that implementing these blocking measures would result in "important costs" or technical impossibilities. Additionally, the court recognized that there would still be options for people to bypass these blocking measures. However, the blocks are a necessary step to "completely cease" the infringing activities.

Privacy

Samsung Hit with Restraining Order Over Smart TV Surveillance Tech in Texas (texasattorneygeneral.gov) 59

Texas Attorney General Ken Paxton has secured a temporary restraining order against Samsung, blocking the company from continuing to collect data through its smart TVs' Automated Content Recognition technology.

The ACR system captured screenshots of what users were watching every 500 milliseconds, according to the state's lawsuit, and did so without consumer knowledge or consent. The District Court found good cause to believe Samsung's actions violated the Texas Deceptive Trade Practices Act. The TRO prohibits Samsung and any parties working in concert with the company from using, selling, transferring, collecting, or sharing ACR data tied to Texas consumers.

Samsung is one of five major TV manufacturers the Texas Attorney General's office has sued over ACR deployment. Paxton previously secured a similar order against Hisense.
The Courts

Google and Character.AI Agree To Settle Lawsuits Over Teen Suicides 36

Google and Character.AI have agreed to settle multiple lawsuits from families alleging the chatbot encouraged self-harm and suicide among teens. "The settlements would mark the first resolutions in the wave of lawsuits against tech companies whose AI chatbots encouraged teens to hurt or kill themselves," notes Axios. From the report: Families allege that Character.AI's chatbot encouraged their children to cut their arms, suggested murdering their parents, wrote sexually explicit messages and did not discourage suicide, per lawsuits and congressional testimony. "Parties have agreed to a mediated settlement in principle to resolve all claims between them in the above-referenced matter," one document filed in U.S. District Court for the Middle District of Florida reads.

The documents do not contain any specific monetary amounts for the settlements. Pricey settlements could deter companies from continuing to offer chatbot products to kids. But without new laws on the books, don't expect major changes across the industry.
Last October, Character.AI said it would bar people under 18 from using its chatbots, in a sweeping move to address concerns over child safety.
Government

California Lawmaker Proposes a Four-Year Ban On AI Chatbots In Kids' Toys (techcrunch.com) 22

An anonymous reader quotes a report from TechCrunch: Senator Steve Padilla (D-CA) introduced a bill [dubbed SB 867] on Monday that would place a four-year ban on the sale and manufacture of toys with AI chatbot capabilities for kids under 18. The goal is to give safety regulators time to develop regulations to protect children from "dangerous AI interactions."

"Chatbots and other AI tools may become integral parts of our lives in the future, but the dangers they pose now require us to take bold action to protect our children," Senator Padilla said in a statement. "Our safety regulations around this kind of technology are in their infancy and will need to grow as exponentially as the capabilities of this technology do. Pausing the sale of these chatbot-integrated toys allows us time to craft the appropriate safety guidelines and framework for these toys to follow." [...] "Our children cannot be used as lab rats for Big Tech to experiment on," Padilla said.

Crime

Founder of Spyware Maker PcTattletale Pleads Guilty To Hacking, Advertising Surveillance Software (techcrunch.com) 3

An anonymous reader quotes a report from TechCrunch: The founder of a U.S.-based spyware company, whose surveillance products allowed customers to spy on the phones and computers of unsuspecting victims, pleaded guilty to federal charges linked to his long-running operation. pcTattletale founder Bryan Fleming entered a guilty plea in a San Diego federal court on Tuesday to charges of computer hacking, the sale and advertising of surveillance software for unlawful uses, and conspiracy.

The plea follows a multi-year investigation by agents with Homeland Security Investigations (HSI), a unit within U.S. Immigration and Customs Enforcement. HSI began investigating pcTattletale in mid-2021 as part of a wider probe into the industry of consumer-grade surveillance software, also known as "stalkerware."

This is the first successful U.S. federal prosecution of a stalkerware operator in more than a decade, following the 2014 indictment and subsequent guilty plea of the creator of a phone surveillance app called StealthGenie. Fleming's conviction could pave the way for further federal investigations and prosecutions against those operating spyware, but also those who simply advertise and sell covert surveillance software. HSI said that pcTattletale is one of several stalkerware websites under investigation.

Government

Congressman Introduces Legislation To Criminalize Insider Trading On Prediction Markets (axios.com) 55

Ritchie Torres has introduced a bill to ban government officials from using insider information to trade on political prediction markets like Polymarket. The bill was prompted by reports that traders on Polymarket made large profits betting on Nicolas Maduro's removal, raising suspicions that some wagers were placed using material non-public information. "While such insider trading in capital markets is already illegal and often prosecuted by the Justice Department and Securities and Exchange Commission, online prediction markets are far less regulated," notes Axios. From the report: Rep. Ritchie Torres' (D-N.Y.) three-page bill, a copy of which was obtained by Axios, is called the Public Integrity in Financial Prediction Markets Act of 2026. It would ban federal elected officials, political appointees and bureaucrats from making insider trades on prediction sites such as Polymarket. Specifically, the bill prohibits such government officials from trading based on information that is not publicly available and that "a reasonable investor would consider important in making an investment decision." [...] It's not clear if House Speaker Mike Johnson (R-La.) would put Torres' bill to a vote in the House or if President Trump would sign it. "We're looking at the specifics of the bill, but we already ban the activity it cites and are in support of means to prevent this type of activity," said Elisabeth Diana, a spokesperson for the prediction website Kalshi.

Diana added that the "activity from the past few days" did not occur on their platform.
AI

An AI-Generated NWS Map Invented Fake Towns In Idaho (washingtonpost.com) 42

National Weather Service pulled an AI-generated forecast graphic after it hallucinated fake town names in Idaho. "The blunder -- not the first of its kind to be posted by the NWS in the past year -- comes as the agency experiments with a wide range of AI uses, from advanced forecasting to graphic design," reports the Washington Post. "Experts worry that without properly trained officials, mistakes could erode trust in the agency and the technology." From the report: At first glance, there was nothing out of the ordinary about Saturday's wind forecast for Camas Prairie, Idaho. "Hold onto your hats!" said a social media post from the local weather office in Missoula, Montana. "Orangeotild" had a 10 percent chance of high winds, while just south, "Whata Bod" would be spared larger gusts. The problem? Neither of those places exist. Nor do a handful of the other spots marked on the National Weather Service's forecast graphic, riddled with spelling and geographical errors that the agency confirmed were linked to the use of generative AI.

NWS said AI is not commonly used for public-facing content, nor is its use prohibited. The agency said it is exploring ways to employ AI to inform the public and acknowledged mistakes have been made. "Recently, a local office used AI to create a base map to display forecast information, however the map inadvertently displayed illegible city names," said NWS spokeswoman Erica Grow Cei. "The map was quickly corrected and updated social media posts were distributed."

A post with the inaccurate map was deleted Monday, the same day The Washington Post contacted officials with questions about the image. Cei added that "NWS is exploring strategic ways to continue optimizing our service delivery for Americans, including the implementation of AI where it makes sense. NWS will continue to carefully evaluate results in cases where AI is implemented to ensure accuracy and efficiency, and will discontinue use in scenarios where AI is not effective." A Nov. 25 tweet out of the Rapid City, South Dakota, office also had misspelled locations and the Google Gemini logo in its forecast. NWS did not confirm whether the Rapid City image was made with generative AI.

Privacy

NYC Wegmans Is Storing Biometric Data On Shoppers' Eyes, Voices and Faces (gothamist.com) 26

schwit1 shares a report from Gothamist: Wegmans in New York City has begun collecting biometric data from anyone who enters its supermarkets, according to new signage posted at the chain's Manhattan and Brooklyn locations earlier this month. Anyone entering the store could have data on their faces, eyes and voices collected and stored by the Rochester-headquartered supermarket chain. The information is used to "protect the safety and security of our patrons and employees," according to the signage. The new scanning policy is an expansion of a 2024 pilot.

The chain had initially said that the scanning system was only for a small group of employees and promised to delete any biometric data it collected from shoppers during the pilot rollout. The new notice makes no such assurances. Wegmans representatives did not reply to questions about how the data would be stored, why it changed its policy or if it would share the data with law enforcement.

Advertising

Vietnam Bans Unskippable Ads (phunuonline.com.vn) 50

Vietnam will begin enforcing new online advertising rules in February 2026 that ban unskippable video ads longer than five seconds and require platforms to let users close ads with a single tap. "Furthermore, platforms must provide clear icons and instructions for users to report advertisements that violate the law, and allow them to opt out, turn off, or stop viewing inappropriate ads," reports a local news outlet (translated to English). "These reports must be received and processed promptly, and the results communicated to users as required." From the report: In cases where the entity posting the infringing advertisement cannot be identified or where specialized laws do not have specific regulations, the Ministry of Culture, Sports and Tourism is the focal agency to receive notifications and send requests to block or remove the advertisement to organizations and businesses providing online advertising services in Vietnam.

Advertisers, advertising service providers, and advertising transmission and distribution units are responsible for blocking and removing infringing advertisements within 24 hours of receiving a request from the competent authority. For advertisements that infringe on national security, the blocking and removal must be carried out immediately, and in any case no later than 24 hours.

In case of non-compliance, the Ministry of Culture, Sports and Tourism, in coordination with the Ministry of Public Security, will apply technical measures to block infringing advertisements and services and handle the matter according to the law. Telecommunications companies and Internet service providers must also implement technical measures to block access to infringing advertisements within 24 hours of receiving a request.

United States

The Nation's Strictest Privacy Law Goes Into Effect (arstechnica.com) 45

An anonymous reader quotes a report from Ars Technica: Californians are getting a new, supercharged way to stop data brokers from hoarding and selling their personal information, as a recently enacted law that's among the strictest in the nation took effect at the beginning of the year. [...] Two years ago, California's Delete Act took effect. It required data brokers to provide residents with a means to obtain a copy of all data pertaining to them and to demand that such information be deleted. Unfortunately, Consumer Watchdog found that only 1 percent of Californians exercised these rights in the first 12 months after the law went into effect. A chief reason: Residents were required to file a separate demand with each broker. With hundreds of companies selling data, the burden was too onerous for most residents to take on.

On January 1, a new law known as DROP (Delete Request and Opt-out Platform) took effect. DROP allows California residents to register a single demand for their data to be deleted and no longer collected in the future. CalPrivacy then forwards it to all brokers. Starting in August, brokers will have 45 days after receiving the notice to report the status of each deletion request. If any of the brokers' records match the information in the demand, all associated data -- including inferences -- must be deleted unless legal exemptions such as information provided during one-to-one interactions between the individual and the broker apply. To use DROP, individuals must first prove they're a California resident.
