The Courts

Clowns Sue Clowns.com For Wage Theft (404media.co) 42

An anonymous reader quotes a report from 404 Media: A group of clowns is suing their former employer Clowns.com for multiple labor law violations, according to recently filed court records. Four people -- Brayan Angulo, Cameron Pille, Janina Salorio, and Xander Black -- filed a federal lawsuit on Wednesday alleging Adolph Rodriguez and Erica Barbuto, owners of Clowns.com and their former bosses, misclassified them as independent contractors for years and failed to pay them for their time. The Long Island-based company, which provides entertainers for events, violated the Fair Labor Standards Act and the New York Labor Law, the lawsuit claims.

The owners of Clowns.com didn't give employees detailed pay statements as required by New York law, the lawsuit alleges. "As a result, Plaintiffs did not know how precisely their weekly pay was being calculated, and were thus deprived of information that could be used to challenge and prevent the theft of their wages," it says. The clowns weren't paid for time "spent at the warehouse gathering and loading equipment and supplies into vehicles," or for travel time between parties, or when parties went on for longer than expected, they claim.
Pille said she's "proud to join with my clown colleagues" to stand up to wage theft and misclassification. "For years, Clowns.com has treated clowns, who are largely young actors with no prior training in clowning who sign up for this job to make ends meet, as independent contractors."
Privacy

Researchers Come Up With Better Idea To Prevent AirTag Stalking (arstechnica.com) 29

An anonymous reader quotes a report from Ars Technica: Apple's AirTags are meant to help you effortlessly find your keys or track your luggage. But the same features that make them easy to deploy and inconspicuous in your daily life have also allowed them to be abused as a sinister tracking tool that domestic abusers and criminals can use to stalk their targets. Over the past year, Apple has taken protective steps to notify iPhone and Android users if an AirTag is in their vicinity for a significant amount of time without the presence of its owner's iPhone, which could indicate that an AirTag has been planted to secretly track their location. Apple hasn't said exactly how long this time interval is, but to create the much-needed alert system, Apple made some crucial changes to the location privacy design the company originally developed a few years ago for its "Find My" device tracking feature. Researchers from Johns Hopkins University and the University of California, San Diego, say, though, that they've developed (PDF) a cryptographic scheme to bridge the gap -- prioritizing detection of potentially malicious AirTags while also preserving maximum privacy for AirTag users. [...]

The solution [Johns Hopkins cryptographer Matt Green] and his fellow researchers came up with leans on two established areas of cryptography that the group worked to implement in a streamlined and efficient way so the system could reasonably run in the background on mobile devices without being disruptive. The first element is "secret sharing," which allows the creation of systems that can't reveal anything about a "secret" unless enough separate puzzle pieces present themselves and come together. Then, if the conditions are right, the system can reconstruct the secret. In the case of AirTags, the "secret" is the true, static identity of the device underlying the public identifier that is frequently changing for privacy purposes. Secret sharing was conceptually useful because it let the researchers build a mechanism in which a device like a smartphone can determine that it is being followed by an AirTag with a constantly rotating public identifier only after receiving enough of a certain type of ping over time. Then, suddenly, the suspicious AirTag's anonymity would fall away and the system would be able to determine that it had been in close proximity for a concerning amount of time.

Green notes, though, that a limitation of secret sharing algorithms is that they aren't very good at sorting and parsing inputs if they're being deluged by a lot of different puzzle pieces from all different puzzles -- the exact scenario that would occur in the real world where AirTags and Find My devices are constantly encountering each other. With this in mind, the researchers employed a second concept known as "error correction coding," which is specifically designed to sort signal from noise and preserve the durability of signals even if they acquire some errors or corruptions. "Secret sharing and error correction coding have a lot of overlap," Green says. "The trick was to find a way to implement it all that would be fast, and where a phone would be able to reassemble all the puzzle pieces when needed while all of this is running quietly in the background."
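To make the "secret sharing" idea concrete, here is a minimal sketch of Shamir's classic scheme, the textbook construction behind this kind of design. It is an illustration only, not the researchers' actual protocol; the parameters and the tracker analogy in the final comment are assumptions for the example.

```python
# A minimal sketch of Shamir secret sharing (illustration only, not the
# researchers' actual protocol). A secret is split into n shares; any k
# of them reconstruct it, while fewer reveal nothing.
import random

PRIME = 2**127 - 1  # prime field modulus; all arithmetic is mod PRIME

def split(secret: int, k: int, n: int) -> list[tuple[int, int]]:
    # Random degree-(k-1) polynomial whose constant term is the secret;
    # each share is one point (x, f(x)) on that polynomial.
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(k - 1)]
    def f(x: int) -> int:
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares: list[tuple[int, int]]) -> int:
    # Lagrange interpolation at x = 0 recovers the constant term (the secret).
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * -xj % PRIME
                den = den * (xi - xj) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret

# The tracker analogy (an assumption for the example): rotating broadcasts
# could each carry one share of the device's static ID, so only a phone
# that hears k pings over time can identify the tag that is following it.
shares = split(secret=123456789, k=3, n=5)
assert reconstruct(shares[:3]) == 123456789   # any 3 shares suffice
assert reconstruct(shares[2:5]) == 123456789
```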
The researchers published (PDF) their first paper in September and submitted it to Apple. More recently, they notified the industry consortium about the proposal.
Google

Google Agrees To Settle Chrome Incognito Mode Class Action Lawsuit (arstechnica.com) 22

Google has indicated that it is ready to settle a class-action lawsuit filed in 2020 over its Chrome browser's Incognito mode. From a report: Arising in the Northern District of California, the lawsuit accused Google of continuing to "track, collect, and identify [users'] browsing data in real time" even when they had opened a new Incognito window. The lawsuit, filed by Florida resident William Byatt and California residents Chasom Brown and Maria Nguyen, accused Google of violating wiretap laws.

It also alleged that sites using Google Analytics or Ad Manager collected information from browsers in Incognito mode, including web page content, device data, and IP address. The plaintiffs also accused Google of taking Chrome users' private browsing activity and then associating it with their already-existing user profiles. Google initially attempted to have the lawsuit dismissed by pointing to the message displayed when users turned on Chrome's incognito mode. That warning tells users that their activity "might still be visible to websites you visit."

AI

New York Times Copyright Suit Wants OpenAI To Delete All GPT Instances (arstechnica.com) 157

An anonymous reader shares a report: The Times is targeting various companies under the OpenAI umbrella, as well as Microsoft, an OpenAI partner that both uses OpenAI's models to power its Copilot service and helped provide the infrastructure for training the GPT large language models. But the suit goes well beyond the use of copyrighted material in training, alleging that OpenAI-powered software will happily circumvent the Times' paywall and ascribe hallucinated misinformation to the Times.

The suit notes that The Times maintains a large staff that allows it to dedicate reporters to a huge range of beats and engage in important investigative journalism, among other things. Because of those investments, the newspaper is often considered an authoritative source on many matters. All of that costs money, and The Times earns it by limiting access to its reporting through a robust paywall. In addition, each print edition carries a copyright notification, the Times' terms of service limit the copying and use of any published material, and it can be selective about how it licenses its stories.

In addition to driving revenue, these restrictions also help it to maintain its reputation as an authoritative voice by controlling how its works appear. The suit alleges that OpenAI-developed tools undermine all of that. [...] The suit seeks nothing less than the erasure of both any GPT instances that the parties have trained using material from the Times, as well as the destruction of the datasets that were used for the training. It also asks for a permanent injunction to prevent similar conduct in the future. The Times also wants money, lots and lots of money: "statutory damages, compensatory damages, restitution, disgorgement, and any other relief that may be permitted by law or equity."

Government

India Targets Apple Over Its Phone Hacking Notifications (washingtonpost.com) 100

In October, Apple issued notifications warning more than half a dozen Indian lawmakers that their iPhones had been targeted in state-sponsored attacks. According to a new report from the Washington Post, the Modi government responded by criticizing Apple's security and demanding explanations in an effort to mitigate the political impact (Warning: source may be paywalled; alternative source). From the report: Officials from the ruling Bharatiya Janata Party (BJP) publicly questioned whether the Silicon Valley company's internal threat algorithms were faulty and announced an investigation into the security of Apple devices. In private, according to three people with knowledge of the matter, senior Modi administration officials called Apple's India representatives to demand that the company help soften the political impact of the warnings. They also summoned an Apple security expert from outside the country to a meeting in New Delhi, where government representatives pressed the Apple official to come up with alternative explanations for the warnings to users, the people said. They spoke on the condition of anonymity to discuss sensitive matters. "They were really angry," one of those people said.

The visiting Apple official stood by the company's warnings. But the intensity of the Indian government effort to discredit and strong-arm Apple disturbed executives at the company's headquarters, in Cupertino, Calif., and illustrated how even Silicon Valley's most powerful tech companies can face pressure from the increasingly assertive leadership of the world's most populous country -- and one of the most critical technology markets of the coming decade. The recent episode also exemplified the dangers facing government critics in India and the lengths to which the Modi administration will go to deflect suspicions that it has engaged in hacking against its perceived enemies, according to digital rights groups, industry workers and Indian journalists. Many of the more than 20 people who received Apple's warnings at the end of October have been publicly critical of Modi or his longtime ally, Gautam Adani, an Indian energy and infrastructure tycoon. They included a firebrand politician from West Bengal state, a Communist leader from southern India and a New Delhi-based spokesman for the nation's largest opposition party. [...] Gopal Krishna Agarwal, a national spokesman for the BJP, said any evidence of hacking should be presented to the Indian government for investigation.

The Modi government has never confirmed or denied using spyware, and it has refused to cooperate with a committee appointed by India's Supreme Court to investigate whether it had. But two years ago, the Forbidden Stories journalism consortium, which included The Post, found that phones belonging to Indian journalists and political figures were infected with Pegasus, which grants attackers access to a device's encrypted messages, camera and microphone. In recent weeks, The Post, in collaboration with Amnesty, found fresh cases of infections among Indian journalists. Additional work by The Post and New York security firm iVerify found that opposition politicians had been targeted, adding to the evidence suggesting the Indian government's use of powerful surveillance tools. In addition, Amnesty showed The Post evidence it found in June that suggested a Pegasus customer was preparing to hack people in India. Amnesty asked that the evidence not be detailed to avoid teaching Pegasus users how to cover their tracks.
"These findings show that spyware abuse continues unabated in India," said Donncha O Cearbhaill, head of Amnesty International's Security Lab. "Journalists, activists and opposition politicians in India can neither protect themselves against being targeted by highly invasive spyware nor expect meaningful accountability."
Transportation

US Engine Maker Will Pay $1.6 Billion To Settle Claims of Emissions Cheating (nytimes.com) 100

An anonymous reader quotes a report from the New York Times: The United States and the state of California have reached an agreement in principle with the truck engine manufacturer Cummins on a $1.6 billion penalty to settle claims that the company violated the Clean Air Act by installing devices to defeat emissions controls on hundreds of thousands of engines, the Justice Department announced on Friday. The penalty would be the largest ever under the Clean Air Act and the second-largest environmental penalty ever in the United States. Defeat devices are parts or software that bypass, defeat or render inoperative emissions controls like pollution sensors and onboard computers. They allow vehicles to pass emissions inspections while still emitting high levels of smog-causing pollutants such as nitrogen oxide, which is linked to asthma and other respiratory illnesses.

The Justice Department has accused the company of installing defeat devices on 630,000 model year 2013 to 2019 RAM 2500 and 3500 pickup truck engines. The company is also alleged to have secretly installed auxiliary emission control devices on 330,000 model year 2019 to 2023 RAM 2500 and 3500 pickup truck engines. "Violations of our environmental laws have a tangible impact. They inflict real harm on people in communities across the country," Attorney General Merrick Garland said in a statement. "This historic agreement should make clear that the Justice Department will be aggressive in its efforts to hold accountable those who seek to profit at the expense of people's health and safety."

In a statement, Cummins said that it had "seen no evidence that anyone acted in bad faith and does not admit wrongdoing." The company said it has "cooperated fully with the relevant regulators, already addressed many of the issues involved, and looks forward to obtaining certainty as it concludes this lengthy matter. Cummins conducted an extensive internal review and worked collaboratively with the regulators for more than four years." Stellantis, the company that makes the trucks, has already recalled the model year 2019 trucks and has initiated a recall of the model year 2013 to 2018 trucks. The software in those trucks will be recalibrated to ensure that they are fully compliant with federal emissions law, said Jon Mills, a spokesman for Cummins. Mr. Mills said that "next steps are unclear" on the model year 2020 through 2023 trucks, but that the company "continues to work collaboratively with regulators" to resolve the issue. The Justice Department partnered with the Environmental Protection Agency in its investigation of the case.

AI

The New York Times Sues OpenAI and Microsoft Over AI Use of Copyrighted Work (nytimes.com) 59

The New York Times sued OpenAI and Microsoft for copyright infringement on Wednesday, opening a new front in the increasingly intense legal battle over the unauthorized use of published work to train artificial intelligence technologies. From a report: The Times is the first major American media organization to sue the companies, the creators of ChatGPT and other popular A.I. platforms, over copyright issues associated with its written works. The lawsuit [PDF], filed in Federal District Court in Manhattan, contends that millions of articles published by The Times were used to train automated chatbots that now compete with the news outlet as a source of reliable information.

The suit does not include an exact monetary demand. But it says the defendants should be held responsible for "billions of dollars in statutory and actual damages" related to the "unlawful copying and use of The Times's uniquely valuable works." It also calls for the companies to destroy any chatbot models and training data that use copyrighted material from The Times. The lawsuit could test the emerging legal contours of generative A.I. technologies -- so called for the text, images and other content they can create after learning from large data sets -- and could carry major implications for the news industry. The Times is among a small number of outlets that have built successful business models from online journalism, but dozens of newspapers and magazines have been hobbled by readers' migration to the internet.

Programming

Code.org Sues WhiteHat Jr. For $3 Million 8

theodp writes: Back in May 2021, tech-backed nonprofit Code.org touted the signing of a licensing agreement with WhiteHat Jr., allowing the edtech company with a controversial past (WhiteHat Jr. was bought for $300M in 2020 by Byju's, an edtech firm that received a $50M investment from Mark Zuckerberg's venture firm) to integrate Code.org's free-to-educators-and-organizations content and tools into its online tutoring service. Code.org did not reveal what it was charging Byju's to use its "free curriculum and open source technology" for commercial purposes, but Code.org's 2021 IRS 990 filing reported $1M in royalties from an unspecified source after earlier years reported $0. Coincidentally, WhiteHat Jr. is represented by Aaron Kornblum, who once worked at Microsoft for now-President Brad Smith, who left Code.org's Board just before the lawsuit was filed.

Fast forward to 2023 and the bloom is off the rose, as Court records show that Code.org earlier this month sued Whitehat Education Technology, LLC (Exhibits A and B) in what is called "a civil action for breach of contract arising from Whitehat's failure to pay Code.org the agreed-upon charges for its use of Code.org's platform and licensed content and its ongoing, unauthorized use of that platform and content." According to the filing, "Whitehat agreed [in April 2022] to pay to Code.org licensing fees totaling $4,000,000 pursuant to a four-year schedule" and "made its first four scheduled payments, totaling $1,000,000," but "about a year after the Agreement was signed, Whitehat informed Code.org that it would be unable to make the remaining scheduled license payments." While the original agreement was amended to backload Whitehat's license fee payment obligations, "Whitehat has not paid anything at all beyond the $1,000,000 that it paid pursuant to the 2022 invoices before the Agreement was amended" and "has continued to access Code.org's platform and content."

That Byju's Whitehat Jr. stiffed Code.org is hardly shocking. In June 2023, Reuters reported that Byju's auditor Deloitte cut ties with the troubled Indian Edtech startup that was once an investor darling and valued at $22 billion, adding that a Byju's Board member representing the Chan-Zuckerberg Initiative had resigned with two other Board members. The BBC reported in July that Byju's was guilty of overexpanding during the pandemic (not unlike Zuck's Facebook). Ironically, the lawsuit Exhibits include screenshots showing Mark Zuckerberg teaching Code.org lessons. Zuckerberg and Facebook were once among the biggest backers of Code.org, although it's unclear whether that relationship soured after court documents were released that revealed Code.org's co-founders talking smack about Zuck and Facebook's business practices to lawyers for Six4Three, which was suing Facebook.

Code.org's curriculum is also used by the Amazon Future Engineer (AFE) initiative, but it is unclear what royalties -- if any -- Amazon pays to Code.org for the use of Code.org curriculum. While the AFE site boldly says, "we provide free computer science curriculum," the AFE fine print further explains that "our partners at Code.org and ProjectSTEM offer a wide array of introductory and advance curriculum options and teacher training." It's unclear what kind of organization Amazon's AFE ("Computer Science Learning Childhood to Career") exactly is -- an IRS Tax Exempt Organization Search failed to find any hits for "Amazon Future Engineer" -- making it hard to guess whether Code.org might consider AFE's use of Code.org software 'commercial use.' Would providing a California school district with free K-12 CS curriculum that Amazon boasts of cultivating into its "vocal champion" count as "commercial use"? How about providing free K-12 CS curriculum to children who live where Amazon is seeking incentives? Or if Amazon CEO Jeff Bezos testifies Amazon "funds computer science coursework" for schools as he attempts to counter a Congressional antitrust inquiry? These seem to be some of the kinds of distinctions Richard Stallman anticipated more than a decade ago as he argued against a restriction against commercial use of otherwise free software.
Electronic Frontier Foundation

EFF Warns: 'Think Twice Before Giving Surveillance for the Holidays' (eff.org) 28

"It's easy to default to giving the tech gifts that retailers tend to push on us this time of year..." notes Lifehacker senior writer Thorin Klosowski.

"But before you give one, think twice about what you're opting that person into." A number of these gifts raise red flags for us as privacy-conscious digital advocates. Ring cameras are one of the most obvious examples, but countless others over the years have made the security or privacy naughty list (and many of these same electronics directly clash with your right to repair). One big problem with giving these sorts of gifts is that you're opting another person into a company's intrusive surveillance practice, likely without their full knowledge of what they're really signing up for... And let's not forget about kids. Long subjected to surveillance from elves and their managers, electronics gifts for kids can come with all sorts of surprise issues, like the kid-focused tablet we found this year that was packed with malware and riskware. Kids' smartwatches and a number of connected toys are also potential privacy hazards that may not be worth the risks if not set up carefully.

Of course, you don't have to avoid all technology purchases. There are plenty of products out there that aren't creepy, and a few that just need extra attention during set up to ensure they're as privacy-protecting as possible. While we don't endorse products, you don't have to start your search in a vacuum. One helpful place to start is Mozilla's Privacy Not Included gift guide, which provides a breakdown of the privacy practices and history of products in a number of popular gift categories.... U.S. PIRG also has guidance for shopping for kids, including details about what to look for in popular categories like smart toys and watches....

Your job as a privacy-conscious gift-giver doesn't end at the checkout screen. If you're more tech savvy than the person receiving the item, or you're helping set up a gadget for a child, there's no better gift than helping set it up as privately as possible.... Giving the gift of electronics shouldn't come with so much homework, but until we have a comprehensive data privacy law, we'll likely have to contend with these sorts of set-up hoops. Until that day comes, we can all take the time to help those who need it.

AI

ChatGPT Exploit Finds 24 Email Addresses, Amid Warnings of 'AI Silo' (thehill.com) 67

The New York Times reports: Last month, I received an alarming email from someone I did not know: Rui Zhu, a Ph.D. candidate at Indiana University Bloomington. Mr. Zhu had my email address, he explained, because GPT-3.5 Turbo, one of the latest and most robust large language models (L.L.M.) from OpenAI, had delivered it to him. My contact information was included in a list of business and personal email addresses for more than 30 New York Times employees that a research team, including Mr. Zhu, had managed to extract from GPT-3.5 Turbo in the fall of this year. With some work, the team had been able to "bypass the model's restrictions on responding to privacy-related queries," Mr. Zhu wrote.

My email address is not a secret. But the success of the researchers' experiment should ring alarm bells because it reveals the potential for ChatGPT, and generative A.I. tools like it, to reveal much more sensitive personal information with just a bit of tweaking. When you ask ChatGPT a question, it does not simply search the web to find the answer. Instead, it draws on what it has "learned" from reams of information — training data that was used to feed and develop the model — to generate one. L.L.M.s train on vast amounts of text, which may include personal information pulled from the Internet and other sources. That training data informs how the A.I. tool works, but it is not supposed to be recalled verbatim... In the example output they provided for Times employees, many of the personal email addresses were either off by a few characters or entirely wrong. But 80 percent of the work addresses the model returned were correct.

The researchers used the API for accessing ChatGPT, the article notes, where "requests that would typically be denied in the ChatGPT interface were accepted..."

"The vulnerability is particularly concerning because no one — apart from a limited number of OpenAI employees — really knows what lurks in ChatGPT's training-data memory."

And there was a broader related warning in another article published the same day. Microsoft may be building an AI silo in a walled garden, argues a professor at the University of California, Berkeley's school of information, calling the development "detrimental for technology development, as well as costly and potentially dangerous for society and the economy." [In January] Microsoft sealed its OpenAI relationship with another major investment — this time around $10 billion, much of which was, once again, in the form of cloud credits instead of conventional finance. In return, OpenAI agreed to run and power its AI exclusively through Microsoft's Azure cloud and granted Microsoft certain rights to its intellectual property...

Recent reports that U.K. competition authorities and the U.S. Federal Trade Commission are scrutinizing Microsoft's investment in OpenAI are encouraging. But Microsoft's failure to report these investments for what they are — a de facto acquisition — demonstrates that the company is keenly aware of the stakes and has taken advantage of OpenAI's somewhat peculiar legal status as a non-profit entity to work around the rules...

The U.S. government needs to quickly step in and reverse the negative momentum that is pushing AI into walled gardens. The longer it waits, the harder it will be, both politically and technically, to re-introduce robust competition and the open ecosystem that society needs to maximize the benefits and manage the risks of AI technology.

Television

'Doctor Who' Christmas Special Streams on Disney+ and the BBC (cnet.com) 65

An anonymous Slashdot reader shared this report from CNET: Marking its 60th year on television, the British time-travel series will close out 2023 with one last anniversary special that arrives on Christmas Day. Ncuti Gatwa's Doctor helms the Tardis in The Church on Ruby Road, which centers on an abandoned baby who grows up looking for answers... Disney Plus will stream Doctor Who: The Church on Ruby Road on Monday, Dec. 25, at 12:55 p.m. ET (9:55 a.m. PT) in all regions except the UK and Ireland, where it will air on the BBC. In case you missed it, viewers can also watch David Tennant starring in the other three anniversary specials: The Star Beast, Wild Blue Yonder and The Giggle. All releases are available on Disney Plus.
But what's interesting is that CNET goes on to explain "why a VPN could be a useful tool." Perhaps you're traveling abroad and want to stream Disney Plus while away from home. With a VPN, you're able to virtually change your location on your phone, tablet or laptop to get access to the series from anywhere in the world. There are other good reasons to use a VPN for streaming too. A VPN is the best way to encrypt your traffic and stop your ISP from throttling your speeds...

You can use a VPN to stream content legally as long as VPNs are allowed in your country and you have a valid subscription to the streaming service you're using. The U.S. and Canada are among the countries where VPNs are legal.

United States

US Water Utilities Hacked After Default Passwords Set to '1111', Cybersecurity Officials Say (fastcompany.com) 84

An anonymous reader shared this report from Fast Company: Providers of critical infrastructure in the United States are doing a sloppy job of defending against cyber intrusions, the National Security Council tells Fast Company, pointing to recent Iran-linked attacks on U.S. water utilities that exploited basic security lapses [earlier this month]. The security council tells Fast Company it's also aware of recent intrusions by hackers linked to China's military at American infrastructure entities that include water and energy utilities in multiple states.

Neither the Iran-linked nor the China-linked attacks affected critical systems or caused disruptions, according to reports.

"We're seeing companies and critical services facing increased cyber threats from malicious criminals and countries," Anne Neuberger, the deputy national security advisor for cyber and emerging tech, tells Fast Company. The White House had been urging infrastructure providers to upgrade their cyber defenses before these recent hacks, but "clearly, by the most recent success of the criminal cyberattacks, more work needs to be done," she says... The attacks hit at least 11 different entities using Unitronics devices across the United States, which included six local water facilities, a pharmacy, an aquatics center, and a brewery...

Some of the compromised devices had been connected to the open internet with a default password of "1111," federal authorities say, making it easy for hackers to find them and gain access. Fixing that "doesn't cost any money," Neuberger says, "and those are the kinds of basic things that we really want companies urgently to do." But cybersecurity experts say these attacks point to a larger issue: the general vulnerability of the technology that powers physical infrastructure. Much of the hardware was developed before the internet and, though retrofitted with digital capabilities, these systems still "have insufficient security controls," says Gary Perkins, chief information security officer at cybersecurity firm CISO Global. Additionally, many infrastructure facilities prioritize "operational ease of use rather than security," since many vendors often need to access the same equipment, says Andy Thompson, an offensive cybersecurity expert at CyberArk. But that can make the systems equally easy for attackers to exploit: freely available web tools allow anyone to generate lists of hardware connected to the public internet, like the Unitronics devices used by water companies.

"Not making critical infrastructure easily accessible via the internet should be standard practice," Thompson says.

AI

AI Companies Would Be Required To Disclose Copyrighted Training Data Under New Bill (theverge.com) 42

An anonymous reader quotes a report from The Verge: Two lawmakers filed a bill requiring creators of foundation models to disclose sources of training data so copyright holders know their information was taken. The AI Foundation Model Transparency Act -- filed by Reps. Anna Eshoo (D-CA) and Don Beyer (D-VA) -- would direct the Federal Trade Commission (FTC) to work with the National Institute of Standards and Technology (NIST) to establish rules for reporting training data transparency. Companies that make foundation models will be required to report sources of training data and how the data is retained during the inference process, describe the limitations or risks of the model and how it aligns with NIST's planned AI Risk Management Framework and any other federal standards that might be established, and provide information on the computational power used to train and run the model. The bill also says AI developers must report efforts to "red team" the model to prevent it from providing "inaccurate or harmful information" around medical or health-related questions, biological synthesis, cybersecurity, elections, policing, financial loan decisions, education, employment decisions, public services, and vulnerable populations such as children.

The bill calls out the importance of training data transparency around copyright as several lawsuits have come out against AI companies alleging copyright infringement. It specifically mentions the case of artists against Stability AI, Midjourney, and DeviantArt (which was largely dismissed in October, according to VentureBeat), and Getty Images' complaint against Stability AI. The bill still needs to be assigned to a committee and discussed, and it's unclear if that will happen before the busy election campaign season starts. Eshoo and Beyer's bill complements the Biden administration's AI executive order, which helps establish reporting standards for AI models. The executive order, however, is not law, so if the AI Foundation Model Transparency Act passes, it will make transparency requirements for training data a federal rule.

Government

Biden Administration Unveils Hydrogen Tax Credit Plan To Jump-Start Industry (npr.org) 104

An anonymous reader quotes a report from NPR: The Biden administration released its highly anticipated proposal for doling out billions of dollars in tax credits to hydrogen producers Friday, in a massive effort to build out an industry that some hope can be a cleaner alternative to fossil fueled power. The U.S. credit is the most generous in the world for hydrogen production, Jesse Jenkins, a professor at Princeton University who has analyzed the U.S. climate law, said last week. The proposal -- which is part of Democrats' Inflation Reduction Act passed last year -- outlines a tiered system to determine which hydrogen producers get the most credits, with cleaner energy projects receiving more, and smaller, but still meaningful credits going to those that use fossil fuel to produce hydrogen.

Administration officials estimate the hydrogen production credits will deliver $140 billion in revenue and 700,000 jobs by 2030 -- and will help the U.S. produce 50 million metric tons of hydrogen by 2050. "That's equivalent to the amount of energy currently used by every bus, every plane, every train and every ship in the US combined," Energy Deputy Secretary David M. Turk said on a Thursday call with reporters to preview the proposal. [...] As part of the administration's proposal, firms that produce cleaner hydrogen and meet prevailing wage and registered apprenticeship requirements stand to qualify for the largest incentive, at $3 per kilogram of hydrogen. Firms that produce hydrogen using fossil fuels get less. The credit ranges from $0.60 to $3 per kilogram, depending on whole lifecycle emissions.
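For a rough sense of how the tiering works, here is a small sketch. The report gives only the overall range; the tier boundaries below follow the Inflation Reduction Act's section 45V schedule as published elsewhere and are an assumption here, not taken from the article.

```python
# A rough sketch of the tiered credit. The tier boundaries are an
# assumption (the IRA's section 45V schedule), not taken from the report.
# Amounts assume the producer meets the prevailing-wage and apprenticeship
# requirements (otherwise they are one-fifth as large).
def credit_per_kg(lifecycle_emissions: float) -> float:
    """Credit in dollars per kg of hydrogen, given lifecycle emissions
    in kg of CO2-equivalent per kg of hydrogen produced."""
    if lifecycle_emissions < 0.45:
        return 3.00
    if lifecycle_emissions < 1.5:
        return 1.00
    if lifecycle_emissions < 2.5:
        return 0.75
    if lifecycle_emissions <= 4.0:
        return 0.60
    return 0.0  # above 4 kg CO2e/kg H2, no credit at all

# Example: very clean electrolytic hydrogen earns the full $3/kg, so one
# metric ton (1,000 kg) would earn $3,000; dirtier production earns $0.60/kg.
print(credit_per_kg(0.3), credit_per_kg(3.5))  # 3.0 0.6
```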

One contentious issue in the proposal was how to deal with the fact that clean hydrogen made with electrolyzers draws tremendous amounts of electricity. Few want that to mean that more coal or natural gas-fired power plants run extra hours. The guidance addresses this by calling for producers to document their electricity usage through "energy attribute certificates" -- which will help determine the credits they qualify for. Rachel Fakhry, policy director for emerging technologies at the Natural Resources Defense Council, called the proposal "a win for the climate, U.S. consumers, and the budding U.S. hydrogen industry." The Clean Air Task Force likewise called the proposal "an excellent step toward developing a credible clean hydrogen market in the United States."

Crime

Teen GTA VI Hacker Sentenced To Indefinite Hospital Order (theverge.com) 77

Emma Roth reports via The Verge: The 18-year-old Lapsus$ hacker who played a critical role in leaking Grand Theft Auto VI footage has been sentenced to life inside a hospital prison, according to a report from the BBC. A British judge ruled on Thursday that Arion Kurtaj is a high risk to the public because he still wants to commit cybercrimes.

In August, a London jury found that Kurtaj carried out cyberattacks against GTA VI developer Rockstar Games and other companies, including Uber and Nvidia. However, since Kurtaj has autism and was deemed unfit to stand trial, the jury was asked to determine whether he committed the acts in question, not whether he did so with criminal intent. During Thursday's hearing, the court heard Kurtaj "had been violent while in custody with dozens of reports of injury or property damage," the BBC reports. A mental health assessment also found that Kurtaj "continued to express the intent to return to cybercrime as soon as possible." He's required to stay in the hospital prison for life unless doctors determine that he's no longer a danger.

Kurtaj leaked 90 videos of GTA VI gameplay footage last September while out on bail for hacking Nvidia and British telecom provider BT / EE. Although he stayed at a hotel under police protection during this time, Kurtaj still managed to carry out an attack on Rockstar Games by using the room's included Amazon Fire Stick and a "newly purchased smart phone, keyboard and mouse," according to a separate BBC report. Kurtaj was arrested for the final time following the incident. Another 17-year-old involved with Lapsus$ was handed an 18-month community sentence, called a Youth Rehabilitation Order, and a ban from using virtual private networks.

Robotics

Massachusetts Lawmakers Mull 'Killer Robot' Bill (techcrunch.com) 14

An anonymous reader quotes a report from TechCrunch, written by Brian Heater: Back in mid-September, a pair of Massachusetts lawmakers introduced a bill "to ensure the responsible use of advanced robotic technologies." What that means in the simplest and most direct terms is legislation that would bar the manufacture, sale and use of weaponized robots. It's an interesting proposal for a number of reasons. The first is a general lack of U.S. state and national laws governing such growing concerns. It's one of those things that has felt like science fiction to such a degree that many lawmakers had no interest in pursuing it in a pragmatic manner. [...] Earlier this week, I spoke about the bill with Massachusetts state representative Lindsay Sabadosa, who filed it alongside Massachusetts state senator Michael Moore.

What is the status of the bill?
We're in an interesting position, because there are a lot of moving parts with the bill. The bill has had a hearing already, which is wonderful news. We're working with the committee on the language of the bill. They have had some questions about why different pieces were written as they were written. We're doing that technical review of the language now -- and also checking in with all stakeholders to make sure that everyone who needs to be at the table is at the table.

When you say "stakeholders" ...
Stakeholders are companies that produce robotics. The robot Spot, which Boston Dynamics produces, and other robots as well, are used by entities like Boston Police Department or the Massachusetts State Police. They might be used by the fire department. So, we're talking to those people to run through the bill, talk about what the changes are. For the most part, what we're hearing is that the bill doesn't really change a lot for those stakeholders. Really the bill is to prevent regular people from trying to weaponize robots, not to prevent the very good uses that the robots are currently employed for.

Does the bill apply to law enforcement as well?
We're not trying to stop law enforcement from using the robots. And what we've heard from law enforcement repeatedly is that they're often used to deescalate situations. They talk a lot about barricade situations or hostage situations. Not to be gruesome, but if people are still alive, if there are injuries, they say it often helps to deescalate, rather than sending in officers, which we know can often escalate the situation. So, no, we wouldn't change any of those uses. The legislation does ask that law enforcement get warrants for the use of robots if they're using them in place of when they would send in a police officer. That's pretty common already. Law enforcement has to do that if it's not an emergency situation. We're really just saying, "Please follow current protocol. And if you're going to use a robot instead of a human, let's make sure that protocol is still the standard."

I'm sure you've been following the stories out of places like San Francisco and Oakland, where there's an attempt to weaponize robots. Is that included in this?
We haven't had law enforcement weaponize robots, and no one has said, "We'd like to attach a gun to a robot" from law enforcement in Massachusetts. I think because of some of those past conversations there's been a desire to not go down that route. And I think that local communities would probably have a lot to say if the police started to do that. So, while the legislation doesn't outright ban that, we are not condoning it either.
Representative Sabadosa said Boston Dynamics "sought us out" and is "leading the charge on this."

"I'm hopeful that we will be the first to get the legislation across the finish line, too," added Rep. Sabadosa. "We've gotten thank-you notes from companies, but we haven't gotten any pushback from them. And our goal is not to stifle innovation. I think there's lots of wonderful things that robots will be used for. [...]"

You can read the full interview here.
Privacy

UK Police To Be Able To Run Face Recognition Searches on 50 Million Driving Licence Holders (theguardian.com) 24

The police will be able to run facial recognition searches on a database containing images of Britain's 50 million driving licence holders under a law change being quietly introduced by the government. From a report: Should the police wish to put a name to an image collected on CCTV, or shared on social media, the legislation would provide them with the powers to search driving licence records for a match. The move, contained in a single clause in a new criminal justice bill, could put every driver in the country in a permanent police lineup, according to privacy campaigners.

Facial recognition searches match the biometric measurements of an identified photograph, such as that contained on driving licences, to those of an image picked up elsewhere. The intention to allow the police or the National Crime Agency (NCA) to exploit the UK's driving licence records is not explicitly referenced in the bill or in its explanatory notes, raising criticism from leading academics that the government is "sneaking it under the radar." Once the criminal justice bill is enacted, the home secretary, James Cleverly, must establish "driver information regulations" to enable the searches, but he will need only to consult police bodies, according to the bill.
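For a concrete sense of the matching step, here is a minimal sketch using the open-source face_recognition library. It is an illustrative stand-in, not the system police forces would actually use, and the file names are placeholders.

```python
# A minimal sketch of face matching using the open-source face_recognition
# library (pip install face_recognition). Illustrative stand-in only;
# file names are placeholders.
import face_recognition

# Load the "identified" photo (e.g. a licence image) and the probe image.
licence_image = face_recognition.load_image_file("licence_photo.jpg")
probe_image = face_recognition.load_image_file("cctv_still.jpg")

# Each detected face is reduced to a 128-number embedding; this assumes
# one face is found in each image.
licence_encoding = face_recognition.face_encodings(licence_image)[0]
probe_encoding = face_recognition.face_encodings(probe_image)[0]

# Two images "match" when the distance between embeddings falls below a
# threshold; 0.6 is the library's default tolerance.
distance = face_recognition.face_distance([licence_encoding], probe_encoding)[0]
print(f"distance={distance:.3f}  match={distance < 0.6}")
```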

AI

Rite Aid Banned From Using Facial Recognition Software 60

An anonymous reader quotes a report from TechCrunch: Rite Aid has been banned from using facial recognition software for five years, after the Federal Trade Commission (FTC) found that the U.S. drugstore giant's "reckless use of facial surveillance systems" left customers humiliated and put their "sensitive information at risk." The FTC's Order (PDF), which is subject to approval from the U.S. Bankruptcy Court after Rite Aid filed for Chapter 11 bankruptcy protection in October, also instructs Rite Aid to delete any images it collected as part of its facial recognition system rollout, as well as any products that were built from those images. The company must also implement a robust data security program to safeguard any personal data it collects.

A Reuters report from 2020 detailed how the drugstore chain had secretly introduced facial recognition systems across some 200 U.S. stores over an eight-year period starting in 2012, with "largely lower-income, non-white neighborhoods" serving as the technology testbed. With the FTC's increasing focus on the misuse of biometric surveillance, Rite Aid fell firmly in the government agency's crosshairs. Among its allegations are that Rite Aid -- in partnership with two contracted companies -- created a "watchlist database" containing images of customers that the company said had engaged in criminal activity at one of its stores. These images, which were often poor quality, were captured from CCTV or employees' mobile phone cameras.

When a customer entered a store who supposedly matched an existing image on its database, employees would receive an automatic alert instructing them to take action -- and the majority of the time this instruction was to "approach and identify," meaning verifying the customer's identity and asking them to leave. Often, these "matches" were false positives that led to employees incorrectly accusing customers of wrongdoing, creating "embarrassment, harassment, and other harm," according to the FTC. "Employees, acting on false positive alerts, followed consumers around its stores, searched them, ordered them to leave, called the police to confront or remove consumers, and publicly accused them, sometimes in front of friends or family, of shoplifting or other wrongdoing," the complaint reads. Additionally, the FTC said that Rite Aid failed to inform customers that facial recognition technology was in use, while also instructing employees to specifically not reveal this information to customers.
In a press release, Rite Aid said that it was "pleased to reach an agreement with the FTC," but that it disagreed with the crux of the allegations.

"The allegations relate to a facial recognition technology pilot program the Company deployed in a limited number of stores," Rite Aid said in its statement. "Rite Aid stopped using the technology in this small group of stores more than three years ago, before the FTC's investigation regarding the Company's use of the technology began."
The Internet

US Regulators Propose New Online Privacy Safeguards For Children 25

An anonymous reader quotes a report from the New York Times: The Federal Trade Commission on Wednesday proposed sweeping changes to bolster the key federal rule that has protected children's privacy online, in one of the most significant attempts by the U.S. government to strengthen consumer privacy in more than a decade. The changes are intended to fortify the rules underlying the Children's Online Privacy Protection Act of 1998, a law that restricts the online tracking of youngsters by services like social media apps, video game platforms, toy retailers and digital advertising networks. Regulators said the moves would "shift the burden" of online safety from parents to apps and other digital services while curbing how platforms may use and monetize children's data.

The proposed changes would require certain online services to turn off targeted advertising by default for children under 13. They would prohibit the online services from using personal details like a child's cellphone number to induce youngsters to stay on their platforms longer. That means online services would no longer be able to use personal data to bombard young children with push notifications. The proposed updates would also strengthen security requirements for online services that collect children's data as well as limit the length of time online services could keep that information. And they would limit the collection of student data by learning apps and other educational-tech providers, by allowing schools to consent to the collection of children's personal details only for educational purposes, not commercial purposes. [...]

The F.T.C. began reviewing the children's privacy rule in 2019, receiving more than 175,000 comments from tech and advertising industry trade groups, video content developers, consumer advocacy groups and members of Congress. The resulting proposal (PDF) runs more than 150 pages. Proposed changes include narrowing an exception that allows online services to collect persistent identification codes for children for certain internal operations, like product improvement, consumer personalization or fraud prevention, without parental consent. The proposed changes would prohibit online operators from employing such user-tracking codes to maximize the amount of time children spend on their platforms. That means online services would not be able to use techniques like sending mobile phone notifications "to prompt the child to engage with the site or service, without verifiable parental consent," according to the proposal. How online services would comply with the changes is not yet known. Members of the public have 60 days to comment on the proposals, after which the commission will vote.
AI

AI Cannot Be Patent 'Inventor,' UK Supreme Court Rules in Landmark Case (reuters.com) 29

A U.S. computer scientist on Wednesday lost his bid to register patents over inventions created by his artificial intelligence system, in a landmark British case about whether AI can own patent rights. From a report: Stephen Thaler wanted to be granted two patents in the UK for inventions he says were devised by his "creativity machine" called DABUS. His attempt to register the patents was refused by Britain's Intellectual Property Office on the grounds that the inventor must be a human or a company, rather than a machine. Thaler appealed to the UK's Supreme Court, which on Wednesday unanimously rejected his appeal, ruling that under UK patent law "an inventor must be a natural person."

"This appeal is not concerned with the broader question whether technical advances generated by machines acting autonomously and powered by AI should be patentable," Judge David Kitchin said in the court's written ruling. "Nor is it concerned with the question whether the meaning of the term 'inventor' ought to be expanded ... to include machines powered by AI which generate new and non-obvious products and processes which may be thought to offer benefits over products and processes which are already known." Thaler's lawyers said in a statement that "the judgment establishes that UK patent law is currently wholly unsuitable for protecting inventions generated autonomously by AI machines."
