United States

A Breakthrough Online Privacy Proposal Hits Congress (wired.com) 27

An anonymous reader quotes a report from Wired: Congress may be closer than ever to passing a comprehensive data privacy framework after key House and Senate committee leaders released a new proposal on Sunday. The bipartisan proposal, titled the American Privacy Rights Act, or APRA, would limit the types of consumer data that companies can collect, retain, and use, allowing solely what they'd need to operate their services. Users would also be allowed to opt out of targeted advertising, and have the ability to view, correct, delete, and download their data from online services. The proposal would also create a national registry of data brokers, and force those companies to allow users to opt out of having their data sold. [...] In an interview with The Spokesman Review on Sunday, [Cathy McMorris Rodgers, House Energy and Commerce Committee chair] claimed that the draft's language is stronger than any active laws, seemingly as an attempt to assuage the concerns of Democrats who have long fought attempts to preempt preexisting state-level protections. APRA does allow states to pass their own privacy laws related to civil rights and consumer protections, among other exceptions.

In the previous session of Congress, the leaders of the House Energy and Commerce Committee brokered a deal with Roger Wicker, the top Republican on the Senate Commerce Committee, on a bill that would preempt state laws with the exception of the California Consumer Privacy Act and the Biometric Information Privacy Act of Illinois. That measure, titled the American Data Privacy and Protection Act, also created a weaker private right of action than most Democrats were willing to support. Maria Cantwell, Senate Commerce Committee chair, refused to support the measure, instead circulating her own draft legislation. The ADPPA hasn't been reintroduced, but APRA was designed as a compromise. "I think we have threaded a very important needle here," Cantwell told The Spokesman Review. "We are preserving those standards that California and Illinois and Washington have."

APRA includes language from California's landmark privacy law allowing people to sue companies when they are harmed by a data breach. It also provides the Federal Trade Commission, state attorneys general, and private citizens the authority to sue companies when they violate the law. The categories of data that would be impacted by APRA include certain categories of "information that identifies or is linked or reasonably linkable to an individual or device," according to a Senate Commerce Committee summary of the legislation. Small businesses -- those with $40 million or less in annual revenue and limited data collection -- would be exempt under APRA, with enforcement focused on businesses with $250 million or more in yearly revenue. Governments and "entities working on behalf of governments" are excluded under the bill, as are the National Center for Missing and Exploited Children and, apart from certain cybersecurity provisions, "fraud-fighting" nonprofits. Frank Pallone, the top Democrat on the House Energy and Commerce Committee, called the draft "very strong" in a Sunday statement, but said he wanted to "strengthen" it with tighter child safety provisions.

Businesses

Insurers Are Spying on Your Home From the Sky (wsj.com) 104

Across the U.S., insurance companies are using aerial images of homes as a tool to ditch properties seen as higher risk [non-paywalled link]. From a report: Nearly every building in the country is being photographed, often without the owner's knowledge. Companies are deploying drones, manned airplanes and high-altitude balloons to take images of properties. No place is shielded: The industry-funded Geospatial Insurance Consortium has an airplane imagery program it says covers 99% of the U.S. population. The array of photos is being sorted by computer models to spy out underwriting no-nos, such as damaged roof shingles, yard debris, overhanging tree branches and undeclared swimming pools or trampolines. The red-flagged images are providing insurers with ammunition for nonrenewal notices nationwide.
Advertising

Mozilla Asks: Will Google's Privacy Sandbox Protect Advertisers (and Google) More than You? (mozilla.org) 56

On Mozilla's blog, engineer Martin Thomson explores Google's "Privacy Sandbox" initiative (which proposes sharing a subset of private user information — but without third-party cookies).

The blog post concludes that Google's Protected Audience "protects advertisers (and Google) more than it protects you." But it's not all bad — in theory: The idea behind Protected Audience is that it creates something like an alternative information dimension inside of your (Chrome) browser... Any website can push information into that dimension. While we normally avoid mixing data from multiple sites, those rules are changed to allow that. Sites can then process that data in order to select advertisements. However, no one can see into this dimension, except you. Sites can only open a window for you to peek into that dimension, but only to see the ads they chose...

Protected Audience might be flawed, but it demonstrates real potential. If this is possible, that might give people more of a say in how their data is used. Rather than just have someone spy on your every action then use that information as they like, you might be able to specify what they can and cannot do. The technology could guarantee that your choice is respected. Maybe advertising is not the first thing you would do with this newfound power, but maybe if the advertising industry is willing to fund investments in new technology that others could eventually use, that could be a good thing.

But here's some of the blog post's key criticisms:
  • "[E]ntities like Google who operate large sites, might rely less on information from other sites. Losing the information that comes from tracking people might affect them far less when they can use information they gather from their many services... [W]e have a company that dominates both the advertising and browser markets, proposing a change that comes with clear privacy benefits, but it will also further entrench its own dominance in the massively profitable online advertising market..."
  • "[T]he proposal fails to meet its own privacy goals. The technical privacy measures in Protected Audience fail to prevent sites from abusing the API to learn about what you did on other sites.... Google loosened privacy protections in a number of places to make it easier to use. Of course, by weakening protections, the current proposal provides no privacy. In other words, to help make Protected Audience easier to use, they made the design even leakier..."
  • "A lot of these leaks are temporary. Google has a plan and even a timeline for closing most of the holes that were added to make Protected Audience easier to use for advertisers. The problem is that there is no credible fix for some of the information leaks embedded in Protected Audience's architecture... In failing to achieve its own privacy goals, Protected Audience is not now — and maybe not ever — a good addition to the Web."

AI

In America, A Complex Patchwork of State AI Regulations Has Already Arrived (cio.com) 13

While the European Parliament passed a wide-ranging "AI Act" in March, "Leaders from Microsoft, Google, and OpenAI have all called for AI regulations in the U.S.," writes CIO magazine. Even the Chamber of Commerce, "often opposed to business regulation, has called on Congress to protect human rights and national security as AI use expands," according to the article, while the White House has released a blueprint for an AI bill of rights.

But even though the U.S. Congress hasn't passed AI legislation — 16 different U.S. states have, "and state legislatures have already introduced more than 400 AI bills across the U.S. this year, six times the number introduced in 2023." Many of the bills are targeted both at the developers of AI technologies and the organizations putting AI tools to use, says Goli Mahdavi, a lawyer with global law firm BCLP, which has established an AI working group. And with populous states such as California, New York, Texas, and Florida either passing or considering AI legislation, companies doing business across the US won't be able to avoid the regulations. Enterprises developing and using AI should be ready to answer questions about how their AI tools work, even when deploying automated tools as simple as spam filtering, Mahdavi says. "Those questions will come from consumers, and they will come from regulators," she adds. "There's obviously going to be heightened scrutiny here across the board."
There are sector-specific bills, and bills that demand transparency (of both development and output), according to the article. "The third category of AI bills covers broad AI bills, often focused on transparency, preventing bias, requiring impact assessment, providing for consumer opt-outs, and other issues."

One example the article notes is Senate Bill 1047, introduced in the California State Legislature in February, which "would require safety testing of AI products before they're released, and would require AI developers to prevent others from creating derivative models of their products that are used to cause critical harms."

Adrienne Fischer, a lawyer with Basecamp Legal, a Denver law firm monitoring state AI bills, tells CIO that many of the bills promote best practices in privacy and data security, but says the fragmented regulatory environment "underscores the call for national standards or laws to provide a coherent framework for AI usage."

Thanks to Slashdot reader snydeq for sharing the article.
Privacy

Four Baseball Teams Now Let Ticket-Holders Enter Using AI-Powered 'Facial Authentication' (sfgate.com) 42

"The San Francisco Giants are one of four teams in Major League Baseball this season offering fans a free shortcut through the gates into the ballpark," writes SFGate.

"The cost? Signing up for the league's 'facial authentication' software through its ticketing app." The Giants are using MLB's new Go-Ahead Entry program, which intends to cut down on wait times for fans entering games. The pitch is simple: Take a selfie through the MLB Ballpark app (which already has your tickets on it), upload the selfie and, once you're approved, breeze through the ticketing lines and into the ballpark. Fans will barely have to slow down at the entrance gate on their way to their seats...

The Philadelphia Phillies were MLB's test team for the technology in 2023. They're joined by the Giants, Nationals and Astros in 2024...

[Major League Baseball] says it won't be saving or storing pictures of faces in a database — and it clearly would really like you to not call this technology facial recognition. "This is not the type of facial recognition that's scanning a crowd and specifically looking for certain kinds of people," Karri Zaremba, a senior vice president at MLB, told ESPN. "It's facial authentication. ... That's the only way in which it's being utilized."

Privacy advocates "have pointed out that the creep of facial recognition technology may be something to be wary of," the article acknowledges. But it adds that using the technology is still completely optional.

SFGate also spoke to the San Francisco Giants' senior vice president of ticket sales, who gushed about the possibility of app users "walking into the ballpark without taking your phone out, or all four of us taking our phones out."
United States

Is The US About To Pass a Landmark Online Privacy Bill? (msn.com) 35

Leaders from two key committees in the U.S. Congress "are nearing an agreement on a national framework aimed at protecting Americans' personal data online," reports the Washington Post.

They call the move "a significant milestone that could put lawmakers closer than ever to passing legislation that has eluded them for decades, according to a person familiar with the matter, who spoke on the condition of anonymity to discuss the talks." The tentative deal is expected to broker a compromise between congressional Democrats and Republicans by preempting state data protection laws and creating a mechanism to let individuals sue companies that violate their privacy, the person said. Rep. Cathy McMorris Rodgers (R-Wash.) and Sen. Maria Cantwell (D-Wash.), the chairs of the House Energy and Commerce Committee and the Senate Commerce Committee, respectively, are expected to announce the deal next week...

Lawmakers have tried to pass a comprehensive federal privacy law for more than two decades, but negotiations in both chambers have repeatedly broken down amid partisan disputes over the scope of the protections. Those divides have created a vacuum that states have increasingly looked to fill, with more than a dozen passing their own privacy laws... [T]heir expected deal would mark the first time the heads of the two powerful commerce committees, which oversee a broad swath of internet policy, have come to terms on a major consumer privacy bill...

The federal government already has laws safeguarding people's health and financial data, in addition to protections for children's personal data, but there's no overarching standard to regulate the vast majority of the collection, use and sale of data that companies engage in online.

Facebook

Meta (Again) Denies Netflix Read Facebook Users' Private Messenger Messages (techcrunch.com) 28

TechCrunch reports this week that Meta "is denying that it gave Netflix access to users' private messages..." The claim references a court filing that emerged as part of the discovery process in a class-action lawsuit over data privacy practices between a group of consumers and Facebook's parent, Meta. The document alleges that Netflix and Facebook had a "special relationship" and that Facebook even cut spending on original programming for its Facebook Watch video service so as not to compete with Netflix, a large Facebook advertiser. It also says that Netflix had access to Meta's "Inbox API" that offered the streamer "programmatic access to Facebook's user's private message inboxes...."

Meta's communications director, Andy Stone, reposted the original X post on Tuesday with a statement disputing that Netflix had been given access to users' private messages. "Shockingly untrue," Stone wrote on X. "Meta didn't share people's private messages with Netflix. The agreement allowed people to message their friends on Facebook about what they were watching on Netflix, directly from the Netflix app. Such agreements are commonplace in the industry...."

Beyond Stone's X post, Meta has not provided further comment. However, The New York Times had previously reported in 2018 that Netflix and Spotify could read users' private messages, according to documents it had obtained. Meta denied those claims at the time via a blog post titled "Facts About Facebook's Messaging Partnerships," where it explained that Netflix and Spotify had access to APIs that allowed consumers to message friends about what they were listening to on Spotify or watching on Netflix directly from those companies' respective apps. This required the companies to have "write access" to compose messages to friends, "read access" to allow users to read messages back from friends, and "delete access," which meant if you deleted a message from the third-party app, it would also delete the message from Facebook.

"No third party was reading your private messages, or writing messages to your friends without your permission. Many news stories imply we were shipping over private messages to partners, which is not correct," the blog post stated. In any event, Messenger didn't implement default end-to-end encryption until December 2023, a practice that would have made these sorts of claims a non-starter, as it wouldn't have left room for doubt.
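The three access scopes described in that blog post (write, read, and delete) amount to a simple capability model. Here is a minimal, hypothetical sketch of how such scoped partner access could be modeled; the class name, scope names, and methods below are invented for illustration and are not Facebook's actual API:

```python
# Hypothetical sketch of a scoped messaging-partner permission model.
# All names here are illustrative, not Meta's real API.

class PartnerMessagingAPI:
    VALID_SCOPES = {"write", "read", "delete"}

    def __init__(self, partner, granted_scopes):
        self.partner = partner
        # Keep only recognized scopes that were actually granted.
        self.scopes = set(granted_scopes) & self.VALID_SCOPES

    def _require(self, scope):
        if scope not in self.scopes:
            raise PermissionError(f"{self.partner} lacks '{scope}' access")

    def send_message(self, user, friend, text):
        # "write access": compose a message on the user's behalf
        self._require("write")
        return {"from": user, "to": friend, "body": text}

    def read_replies(self, user, friend):
        # "read access": show replies back inside the partner app
        self._require("read")
        return []  # placeholder: this sketch has no message store

    def delete_message(self, message_id):
        # "delete access": deleting in the partner app also deletes on Facebook
        self._require("delete")
        return True


# A partner granted only write and read access cannot delete messages.
api = PartnerMessagingAPI("ExamplePartner", ["write", "read"])
msg = api.send_message("alice", "bob", "Watching a show right now")
print(msg["to"])  # bob
try:
    api.delete_message("m1")
except PermissionError:
    print("denied")  # delete scope was never granted
```

The point of the scoping, per Meta's 2018 blog post, is that each capability had to be granted explicitly and was tied to user-initiated actions in the partner app.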

Privacy

Academics Probe Apple's Privacy Settings and Get Lost and Confused (theregister.com) 24

Matthew Connatser reports via The Register: A study has concluded that Apple's privacy practices aren't particularly effective, because default apps on the iPhone and Mac have limited privacy settings and confusing configuration options. The research was conducted by Amel Bourdoucen and Janne Lindqvist of Aalto University in Finland. The pair noted that while many studies had examined privacy issues with third-party apps for Apple devices, very little literature investigates the issue in first-party apps -- like Safari and Siri. The aims of the study [PDF] were to investigate how much data Apple's own apps collect and where it's sent, and to see if users could figure out how to navigate the landscape of Apple's privacy settings.

The lengths to which Apple goes to secure its ecosystem -- as described in its Platform Security Guide [PDF] -- have earned it kudos from the information security world. Cupertino uses its hard-earned reputation as a selling point and as a bludgeon against Google. Bourdoucen and Lindqvist don't dispute Apple's technical prowess, but argue that it is undermined by confusing user interfaces. "Our work shows that users may disable default apps, only to discover later that the settings do not match their initial preference," the paper states. "Our results demonstrate users are not correctly able to configure the desired privacy settings of default apps. In addition, we discovered that some default app configurations can even reduce trust in family relationships."

The researchers criticize data collection by Apple apps like Safari and Siri, where that data is sent, how users can (and can't) disable that data tracking, and how Apple presents privacy options to users. The paper illustrates these issues in a discussion of Apple's Siri voice assistant. While users can ostensibly choose not to enable Siri in the initial setup on macOS-powered devices, it still collects data from other apps to provide suggestions. To fully disable Siri, Apple users must find privacy-related options across five different submenus in the Settings app. Apple's own documentation for how its privacy settings work isn't good either. It doesn't mention every privacy option, explain what is done with user data, or highlight whether settings are enabled or disabled. Also, it's written in legalese, which almost guarantees no normal user will ever read it. "We discovered that the features are not clearly documented," the paper concludes. "Specifically, we discovered that steps required to disable features of default apps are largely undocumented and the data handling practices are not completely disclosed."

Privacy

Commercial Bank of Ethiopia Names and Shames Customers Over Bank Glitch Money (bbc.com) 26

An Ethiopian bank has put up posters shaming customers it says have not returned money they gained during a technical glitch. From a report: Notices bearing their names and photos could be seen outside branches of the Commercial Bank of Ethiopia (CBE) on Friday. The bank has recovered almost three-quarters of the $14m it lost, its head said last week. He warned that those keeping money that is not theirs will be prosecuted. Last month, an hours-long glitch allowed customers at the CBE, Ethiopia's largest commercial bank, to withdraw or transfer more than they had in their accounts.
Cellphones

Feds Finally Decide To Do Something About Years-Old SS7 Spy Holes In Phone Networks 32

Jessica Lyons reports via The Register: The FCC appears to finally be stepping up efforts to secure decades-old flaws in American telephone networks that are allegedly being used by foreign governments and surveillance outfits to remotely spy on and monitor wireless devices. At issue are the Signaling System Number 7 (SS7) and Diameter protocols, which are used by fixed and mobile network operators to enable interconnection between networks. They are part of the glue that holds today's telecommunications together. According to the US watchdog and some lawmakers, both protocols include security weaknesses that leave folks vulnerable to unwanted snooping. SS7's problems have been known about for years and years, as far back as at least 2008, and we wrote about them in 2010 and 2014, for instance. Little has been done to address these exploitable shortcomings.

SS7, which was developed in the mid-1970s, can be potentially abused to track people's phones' locations; redirect calls and text messages so that info can be intercepted; and spy on users. The Diameter protocol was developed in the late-1990s and includes support for network access and IP mobility in local and roaming calls and messages. It does not, however, encrypt originating IP addresses during transport, which makes it easier for miscreants to carry out network spoofing attacks. "As coverage expands, and more networks and participants are introduced, the opportunity for a bad actor to exploit SS7 and Diameter has increased," according to the FCC [PDF].

On March 27 the commission asked telecommunications providers to weigh in and detail what they are doing to prevent SS7 and Diameter vulnerabilities from being misused to track consumers' locations. The FCC has also asked carriers to detail any exploits of the protocols since 2018. The regulator wants to know the date(s) of the incident(s), what happened, which vulnerabilities were exploited and with which techniques, where the location tracking occurred, and -- if known -- the attacker's identity. This time frame is significant because in 2018, the Communications Security, Reliability, and Interoperability Council (CSRIC), a federal advisory committee to the FCC, issued several security best practices to prevent network intrusions and unauthorized location tracking. Interested parties have until April 26 to submit comments, and then the FCC has a month to respond.
Google

Users Say Google's VPN App Breaks the Windows DNS Settings (arstechnica.com) 37

An anonymous reader shares a report: Google offers a VPN via its "Google One" monthly subscription plan, and while it debuted on phones, a desktop app has been available for Windows and Mac OS for over a year now. Since a lot of people pay for Google One for the cloud storage increase for their Google accounts, you might be tempted to try the VPN on a desktop, but Windows users testing out the app haven't seemed too happy lately. An open bug report on Google's GitHub for the project says the Windows app "breaks" the Windows DNS, and this has been ongoing since at least November.

A VPN would naturally route all your traffic through a secure tunnel, but you've still got to do DNS lookups somewhere. A lot of VPN services also come with a DNS service, and Google is no different. The problem is that Google's VPN app changes the Windows DNS settings of all network adapters to always use Google's DNS, whether the VPN is on or off. Even if you change them, Google's program will change them back. Most VPN apps don't work this way, and even Google's Mac VPN program doesn't work this way. The users in the thread (and the ones emailing us) expect the app, at minimum, to use the original Windows settings when the VPN is off. Since running a VPN is often about privacy and security, users want to be able to change the DNS away from Google even when the VPN is running.

Privacy

Missouri County Declares State of Emergency Amid Suspected Ransomware Attack (arstechnica.com) 41

An anonymous reader quotes a report from Ars Technica: Jackson County, Missouri, has declared a state of emergency and closed key offices indefinitely as it responds to what officials believe is a ransomware attack that has made some of its IT systems inoperable. "Jackson County has identified significant disruptions within its IT systems, potentially attributable to a ransomware attack," officials wrote Tuesday. "Early indications suggest operational inconsistencies across its digital infrastructure and certain systems have been rendered inoperative while others continue to function as normal."

The systems confirmed inoperable include tax and online property payments, issuance of marriage licenses, and inmate searches. In response, the Assessment, Collection and Recorder of Deeds offices at all county locations are closed until further notice. The closure occurred the same day that the county was holding a special election to vote on a proposed sales tax to fund a stadium for MLB's Kansas City Royals and the NFL's Kansas City Chiefs. Neither the Jackson County Board of Elections nor the Kansas City Board of Elections have been affected by the attack; both remain open.

The Jackson County website says there are 654,000 residents in the 607-square-mile county, which includes most of Kansas City, the biggest city in Missouri. The response to the attack and the investigation into it have just begun, but so far, officials said they had no evidence that data had been compromised. Jackson County Executive Frank White, Jr. has issued (PDF) an executive order declaring a state of emergency. The County has notified law enforcement and retained IT security contractors to help investigate and remediate the attack.
"The potential significant budgetary impact of this incident may require appropriations from the County's emergency fund and, if these funds are found to be insufficient, the enactment of additional budgetary adjustments or cuts," White wrote. "It is directed that all county staff are to take whatever steps are necessary to protect resident data, county assets, and continue essential services, thereby mitigating the impact of this potential ransomware attack."
Security

New XZ Backdoor Scanner Detects Implants In Any Linux Binary (bleepingcomputer.com) 33

Bill Toulas reports via BleepingComputer: Firmware security firm Binarly has released a free online scanner to detect Linux executables impacted by the XZ Utils supply chain attack, tracked as CVE-2024-3094. CVE-2024-3094 is a supply chain compromise in XZ Utils, a set of data compression tools and libraries used in many major Linux distributions. Late last month, Microsoft engineer Andres Freund discovered the backdoor in the latest version of the XZ Utils package while investigating unusually slow SSH logins on Debian Sid, a rolling release of the Linux distribution.

The backdoor was introduced by a pseudonymous contributor to XZ version 5.6.0, and remained present in 5.6.1. However, only a few Linux distributions and versions following a "bleeding edge" upgrading approach were impacted, with most using an earlier, safe library version. Following the discovery of the backdoor, a detection and remediation effort was started, with CISA proposing downgrading to XZ Utils 5.4.6 Stable and hunting for and reporting any malicious activity.
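Because only two releases shipped the malicious code, the most basic triage step is a version check. A minimal sketch (not an official tool) of that comparison, based on the affected versions named above:

```python
# Minimal triage sketch for CVE-2024-3094: only xz/liblzma 5.6.0 and
# 5.6.1 shipped the backdoor; earlier releases such as 5.4.6 (the
# version CISA recommended downgrading to) are considered safe.
AFFECTED_VERSIONS = {"5.6.0", "5.6.1"}

def is_backdoored_version(version: str) -> bool:
    """Return True if the given xz version string is a known-affected release."""
    return version.strip() in AFFECTED_VERSIONS

print(is_backdoored_version("5.6.1"))  # True
print(is_backdoored_version("5.4.6"))  # False
```

In practice you would feed this the version reported by `xz --version` on the host; note a version check alone cannot catch a backdoored library copied in from elsewhere, which is the gap Binarly's scanner targets.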

Binarly says the approach taken so far in the threat mitigation efforts relies on simple checks such as byte string matching, file hash blocklisting, and YARA rules, which could lead to false positives. This approach can trigger significant alert fatigue and doesn't help detect similar backdoors on other projects. To address this problem, Binarly developed a dedicated scanner that would work for the particular library and any file carrying the same backdoor. [...] Binarly's scanner increases detection as it scans for various supply chain points beyond just the XZ Utils project, and the results are of much higher confidence.
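The false-positive problem with plain byte-string matching is easy to demonstrate. In the sketch below the signature bytes are made up (not the real indicator of compromise); the point is that any file containing the pattern matches, including a benign document that merely quotes the signature:

```python
# Sketch of the naive byte-string matching the article says early
# detection efforts relied on. The signature is invented for
# illustration -- it is NOT the real CVE-2024-3094 indicator.
SIGNATURE = b"\x48\x8d\x7c\x24\x08"  # hypothetical byte pattern

def naive_scan(blob: bytes) -> bool:
    """Flag any blob that contains the signature bytes anywhere."""
    return SIGNATURE in blob

backdoored_binary = b"\x00" * 16 + SIGNATURE + b"\x00" * 16
benign_report = b"analyst notes quoting the IOC: " + SIGNATURE

print(naive_scan(backdoored_binary))  # True
print(naive_scan(benign_report))      # True -- a false positive
```

This is why Binarly argues for behavioral analysis of the binary itself rather than matching on byte strings, hashes, or YARA rules alone.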
Binarly has made a free API available to accommodate bulk scans, too.
Google

Google Pledges To Destroy Browsing Data To Settle 'Incognito' Lawsuit (wsj.com) 35

Google plans to destroy a trove of data that reflects millions of users' web-browsing histories, part of a settlement of a lawsuit that alleged the company tracked millions of users without their knowledge. WSJ: The class action, filed in 2020, accused Google of misleading users about how Chrome tracked the activity of anyone who used the private "Incognito" browsing option. The lawsuit alleged that Google's marketing and privacy disclosures didn't properly inform users of the kinds of data being collected, including details about which websites they viewed. The settlement details, filed Monday in San Francisco federal court, set out the actions the company will take to change its practices around private browsing. According to the court filing, Google has agreed to destroy billions of data points that the lawsuit alleges it improperly collected, to update disclosures about what it collects in private browsing and give users the option to disable third-party cookies in that setting.

The agreement doesn't include damages for individual users. But the settlement will allow individuals to file claims. Already the plaintiff attorneys have filed 50 in California state court. Attorney David Boies, who represents the consumers in the lawsuit, said the settlement requires Google to delete and remediate "in unprecedented scope and scale" the data it improperly collected. "This settlement is an historic step in requiring honesty and accountability from dominant technology companies," Boies said.

AT&T

AT&T Says Data From 73 Million Customers Has Leaked Onto the Dark Web (cnn.com) 21

Personal data from 73 million AT&T customers has leaked onto the dark web, reports CNN — both current and former customers.

AT&T has launched an investigation into the source of the data leak... In a news release Saturday morning, the telecommunications giant said the data was "released on the dark web approximately two weeks ago," and contains information such as account holders' Social Security numbers. ["The information varied by customer and account," AT&T said in a statement, " but may have included full name, email address, mailing address, phone number, social security number, date of birth, AT&T account number and passcode."]

"It is not yet known whether the data ... originated from AT&T or one of its vendors," the company added. "Currently, AT&T does not have evidence of unauthorized access to its systems resulting in exfiltration of the data set."

The data seems to have been from 2019 or earlier. The leak does not appear to contain financial information or specifics about call history, according to AT&T. The company said the leak shows approximately 7.6 million current account holders and 65.4 million former account holders were affected.

CNN says the first reports of the leak came two weeks ago from a social media account claiming to host "the largest collection of malware source code, samples, and papers." Reached for comment by CNN, AT&T had said at the time that "We have no indications of a compromise of our systems."

AT&T's web site now includes a special page with an FAQ — and the tagline that announces "We take cybersecurity very seriously..."

"It has come to our attention that a number of AT&T passcodes have been compromised..."

The page points out that AT&T has already reset the passcodes of "all 7.6 million impacted customers." It's only further down in the FAQ that they acknowledge that the breach "appears to be from 2019 or earlier, impacting approximately 7.6 million current AT&T account holders and 65.4 million former account holders." Our internal teams are working with external cybersecurity experts to analyze the situation... We encourage customers to remain vigilant by monitoring account activity and credit reports. You can set up free fraud alerts from nationwide credit bureaus — Equifax, Experian, and TransUnion. You can also request and review your free credit report at any time via Freecreditreport.com...

We will reach out by mail or email to individuals with compromised sensitive personal information and offer complimentary identity theft and credit monitoring services... If your information was impacted, you will be receiving an email or letter from us explaining the incident, what information was compromised, and what we are doing for you in response.

Government

Do Age Verification Laws Drag Us Back to the Dark Ages of the Internet? (404media.co) 159

404 Media claims to have identified "the fundamental flaw with the age verification bills and laws" that have already passed in eight state legislatures (with two more taking effect in July): "the delusional, unfounded belief that putting hurdles between people and pornography is going to actually prevent them from viewing porn."

They argue that age verification laws "drag us back to the dark ages of the internet." Slashdot reader samleecole shared this excerpt: What will happen, and is already happening, is that people — including minors — will go to unmoderated, actively harmful alternatives that don't require handing over a government-issued ID to see people have sex. Meanwhile, performers and companies that are trying to do the right thing will suffer....

The legislators passing these bills are doing so under the guise of protecting children, but what's actually happening is a widespread rewiring of the scaffolding of the internet. They ignore long-established legal precedent that has said for years that age verification is unconstitutional, eventually and inevitably reducing everything we see online without impossible privacy hurdles and compromises to that which is not "harmful to minors." The people who live in these states, including the minors the law is allegedly trying to protect, are worse off because of it. So is the rest of the internet.

Yet new legislation is advancing in Kentucky and Nebraska, while the state of Kansas just passed a law which even requires age-verification for viewing "acts of homosexuality," according to a report: Websites can be fined up to $10,000 for each instance a minor accesses their content, and parents are allowed to sue for damages of at least $50,000. This means that the state can "require age verification to access LGBTQ content," according to attorney Alejandra Caraballo, who said on Threads that "Kansas residents may soon need their state IDs" to access material that simply "depicts LGBTQ people."

One newspaper opinion piece argues there's an easier solution: don't buy your children a smartphone. Or we could purchase any of the various software packages that block social media and obscene content from their devices. Or we could allow them to use social media, but limit their screen time. Or we could educate them about the issues that social media causes and simply trust them to make good choices. All of these options would have been denied to us if we lived in a state that passed a strict age verification law. Not only do age verification laws reduce parental freedom, but they also create myriad privacy risks. Requiring platforms to collect government IDs and face scans opens the door to potential exploitation by hackers and enemy governments. The very information intended to protect children could end up in the wrong hands, compromising the privacy and security of millions of users...

Ultimately, age verification laws are a misguided attempt to address the complex issue of underage social media use. Instead of placing undue burdens on users and limiting parental liberty, lawmakers should look for alternative strategies that respect privacy rights while promoting online safety.

This week a trade association for the adult entertainment industry announced plans to petition America's Supreme Court to intervene.
Cellphones

America's DHS Is Expected to Stop Buying Access to Your Phone Movements (notus.org) 49

America's Department of Homeland Security "is expected to stop buying access to data showing the movement of phones," reports the U.S. news site NOTUS.

They call the purchases "a controversial practice that has allowed it to warrantlessly track hundreds of millions of people for years." Since 2018, agencies within the department — including Immigration and Customs Enforcement, U.S. Customs and Border Protection and the U.S. Secret Service — have been buying access to commercially available data that revealed the movement patterns of devices, many inside the United States. Commercially available phone data can be bought and searched without judicial oversight.

Three people familiar with the matter said the Department of Homeland Security isn't expected to buy access to more of this data, nor will the agency make any additional funding available to buy access to this data. The agency "paused" this practice after a 2023 DHS watchdog report [which had recommended they draw up better privacy controls and policies]. However, the department instead appears to be winding down the use of the data...

"The information that is available commercially would kind of knock your socks off," said former top CIA official Michael Morell on a podcast last year. "If we collected it using traditional intelligence methods, it would be top-secret sensitive. And you wouldn't put it in a database, you'd keep it in a safe...." DHS' internal watchdog opened an investigation after a bipartisan outcry from lawmakers and civil society groups about warrantless tracking...

"Meanwhile, U.S. spy agencies are fighting to preserve the same capability as part of the renewal of surveillance authorities," the article adds.

"A bipartisan coalition of lawmakers, led by Democratic Sen. Ron Wyden in the Senate and Republican Rep. Warren Davidson in the House, is pushing to ban U.S. government agencies from buying data on Americans."
Security

'Security Engineering' Author Ross Anderson, Cambridge Professor, Dies at Age 67 (therecord.media) 7

The Record reports: Ross Anderson, a professor of security engineering at the University of Cambridge who is widely recognized for his contributions to computing, passed away at home on Thursday, according to friends and colleagues who have been in touch with his family and the university.

Anderson, who also taught at Edinburgh University, was one of the most respected academic engineers and computer scientists of his generation. His research included machine learning, cryptographic protocols, hardware reverse engineering and breaking ciphers, among other topics. His public achievements include, but are by no means limited to, being awarded the British Computer Society's Lovelace Medal in 2015, and publishing several editions of the Security Engineering textbook.

Anderson's security research made headlines throughout his career, with his name appearing in over a dozen Slashdot stories...

My favorite story? UK Banks Attempt To Censor Academic Publication.

"Cambridge University has resisted the demands and has sent a response to the bankers explaining why they will keep the page online..."


Cloud

Cloud Server Host Vultr Rips User Data Ownership Clause From ToS After Web Outage (theregister.com) 28

Tobias Mann reports via The Register: Cloud server provider Vultr has rapidly revised its terms-of-service after netizens raised the alarm over broad clauses that demanded the "perpetual, irrevocable, royalty-free" rights to customer "content." The red tape was updated in January, as captured by the Internet Archive, and this month users were asked to agree to the changes by a pop-up that appeared when using their web-based Vultr control panel. That prompted folks to look through the terms, and there they found clauses granting the US outfit a "worldwide license ... to use, reproduce, process, adapt ... modify, prepare derivative works, publish, transmit, and distribute" user content.

It turned out these demands have been in place since before the January update; customers have only just noticed them now. Given Vultr hosts servers and storage in the cloud for its subscribers, some feared the biz was giving itself way too much ownership over their stuff, all in this age of AI training data being put up for sale by platforms. In response to online outcry, largely stemming from Reddit, Vultr in the past few hours rewrote its ToS to delete those asserted content rights. CEO J.J. Kardwell told The Register earlier today it's a case of standard legal boilerplate being taken out of context. The clauses were supposed to apply to customer forum posts, rather than private server content, and while, yes, the terms make more sense with that in mind, one might argue the legalese was overly broad in any case.

"We do not use user data," Kardwell stressed to us. "We never have, and we never will. We take privacy and security very seriously. It's at the core of what we do globally." [...] According to Kardwell, the content clauses are entirely separate to user data deployed in its cloud, and are more aimed at one's use of the Vultr website, emphasizing the last line of the relevant fine print: "... for purposes of providing the services to you." He also pointed out that the wording has been that way for some time, and added the prompt asking users to agree to an updated ToS was actually spurred by unrelated Microsoft licensing changes. In light of the controversy, Vultr vowed to remove the above section to "simplify and further clarify" its ToS, and has indeed done so. In a separate statement, the biz told The Register the removal will be followed by a full review and update to its terms of service.
"It's clearly causing confusion for some portion of users. We recognize that the average user doesn't have a law degree," Kardwell added. "We're very focused on being responsive to the community and the concerns people have and we believe the strongest thing we can do to demonstrate that there is no bad intent here is to remove it."
Government

Biden Orders Every US Agency To Appoint a Chief AI Officer 48

An anonymous reader quotes a report from Ars Technica: The White House has announced the "first government-wide policy (PDF) to mitigate risks of artificial intelligence (AI) and harness its benefits." To coordinate these efforts, every federal agency must appoint a chief AI officer with "significant expertise in AI." Some agencies have already appointed chief AI officers, but any agency that has not must appoint a senior official over the next 60 days. If an official already appointed as a chief AI officer does not have the necessary authority to coordinate AI use in the agency, they must be granted additional authority or else a new chief AI officer must be named.

Ideal candidates, the White House recommended, might include chief information officers, chief data officers, or chief technology officers, the Office of Management and Budget (OMB) policy said. As chief AI officers, appointees will serve as senior advisers on AI initiatives, monitoring and inventorying all agency uses of AI. They must conduct risk assessments to consider whether any AI uses are impacting "safety, security, civil rights, civil liberties, privacy, democratic values, human rights, equal opportunities, worker well-being, access to critical resources and services, agency trust and credibility, and market competition," OMB said. Perhaps most urgently, by December 1, the officers must correct all non-compliant AI uses in government, unless an extension of up to one year is granted.

The chief AI officers will seemingly enjoy a lot of power and oversight over how the government uses AI. It's up to the chief AI officers to develop a plan to comply with minimum safety standards and to work with chief financial and human resource officers to develop the necessary budgets and workforces to use AI to further each agency's mission and ensure "equitable outcomes," OMB said. [...] Among the chief AI officer's primary responsibilities is determining what AI uses might impact the safety or rights of US citizens. They'll do this by assessing AI impacts, conducting real-world tests, independently evaluating AI, regularly evaluating risks, properly training staff, providing additional human oversight where necessary, and giving public notice of any AI use that could have a "significant impact on rights or safety," OMB said. Chief AI officers will ultimately decide if any AI use is safety- or rights-impacting and must adhere to OMB's minimum standards for responsible AI use. Once a determination is made, the officers will "centrally track" the determinations, informing OMB of any major changes to "conditions or context in which the AI is used." The officers will also regularly convene "a new Chief AI Officer Council to coordinate" efforts and share innovations government-wide.
Chief AI officers must consult with the public and maintain options to opt out of "AI-enabled decisions," OMB said. However, these chief AI officers also have the power to waive opt-out options "if they can demonstrate that a human alternative would result in a service that is less fair (e.g., produces a disparate impact on protected classes) or if an opt-out would impose undue hardship on the agency."
