Government

India Targets Apple Over Its Phone Hacking Notifications (washingtonpost.com) 100

In October, Apple issued notifications warning more than half a dozen Indian lawmakers that their iPhones had been targeted by state-sponsored attacks. According to a new report from the Washington Post, the Modi government responded by criticizing Apple's security and demanding explanations to mitigate the political impact (Warning: source may be paywalled; alternative source). From the report: Officials from the ruling Bharatiya Janata Party (BJP) publicly questioned whether the Silicon Valley company's internal threat algorithms were faulty and announced an investigation into the security of Apple devices. In private, according to three people with knowledge of the matter, senior Modi administration officials called Apple's India representatives to demand that the company help soften the political impact of the warnings. They also summoned an Apple security expert from outside the country to a meeting in New Delhi, where government representatives pressed the Apple official to come up with alternative explanations for the warnings to users, the people said. They spoke on the condition of anonymity to discuss sensitive matters. "They were really angry," one of those people said.

The visiting Apple official stood by the company's warnings. But the intensity of the Indian government effort to discredit and strong-arm Apple disturbed executives at the company's headquarters, in Cupertino, Calif., and illustrated how even Silicon Valley's most powerful tech companies can face pressure from the increasingly assertive leadership of the world's most populous country -- and one of the most critical technology markets of the coming decade. The recent episode also exemplified the dangers facing government critics in India and the lengths to which the Modi administration will go to deflect suspicions that it has engaged in hacking against its perceived enemies, according to digital rights groups, industry workers and Indian journalists. Many of the more than 20 people who received Apple's warnings at the end of October have been publicly critical of Modi or his longtime ally, Gautam Adani, an Indian energy and infrastructure tycoon. They included a firebrand politician from West Bengal state, a Communist leader from southern India and a New Delhi-based spokesman for the nation's largest opposition party. [...] Gopal Krishna Agarwal, a national spokesman for the BJP, said any evidence of hacking should be presented to the Indian government for investigation.

The Modi government has never confirmed or denied using spyware, and it has refused to cooperate with a committee appointed by India's Supreme Court to investigate whether it had. But two years ago, the Forbidden Stories journalism consortium, which included The Post, found that phones belonging to Indian journalists and political figures were infected with Pegasus, which grants attackers access to a device's encrypted messages, camera and microphone. In recent weeks, The Post, in collaboration with Amnesty, found fresh cases of infections among Indian journalists. Additional work by The Post and New York security firm iVerify found that opposition politicians had been targeted, adding to the evidence suggesting the Indian government's use of powerful surveillance tools. In addition, Amnesty showed The Post evidence it found in June that suggested a Pegasus customer was preparing to hack people in India. Amnesty asked that the evidence not be detailed to avoid teaching Pegasus users how to cover their tracks.
"These findings show that spyware abuse continues unabated in India," said Donncha O Cearbhaill, head of Amnesty International's Security Lab. "Journalists, activists and opposition politicians in India can neither protect themselves against being targeted by highly invasive spyware nor expect meaningful accountability."

Transportation

US Engine Maker Will Pay $1.6 Billion To Settle Claims of Emissions Cheating (nytimes.com) 100

An anonymous reader quotes a report from the New York Times: The United States and the state of California have reached an agreement in principle with the truck engine manufacturer Cummins on a $1.6 billion penalty to settle claims that the company violated the Clean Air Act by installing devices to defeat emissions controls on hundreds of thousands of engines, the Justice Department announced on Friday. The penalty would be the largest ever under the Clean Air Act and the second largest ever environmental penalty in the United States. Defeat devices are parts or software that bypass, defeat or render inoperative emissions controls like pollution sensors and onboard computers. They allow vehicles to pass emissions inspections while still emitting high levels of smog-causing pollutants such as nitrogen oxide, which is linked to asthma and other respiratory illnesses.

The Justice Department has accused the company of installing defeat devices on 630,000 model year 2013 to 2019 RAM 2500 and 3500 pickup truck engines. The company is also alleged to have secretly installed auxiliary emission control devices on 330,000 model year 2019 to 2023 RAM 2500 and 3500 pickup truck engines. "Violations of our environmental laws have a tangible impact. They inflict real harm on people in communities across the country," Attorney General Merrick Garland said in a statement. "This historic agreement should make clear that the Justice Department will be aggressive in its efforts to hold accountable those who seek to profit at the expense of people's health and safety."

In a statement, Cummins said that it had "seen no evidence that anyone acted in bad faith and does not admit wrongdoing." The company said it has "cooperated fully with the relevant regulators, already addressed many of the issues involved, and looks forward to obtaining certainty as it concludes this lengthy matter. Cummins conducted an extensive internal review and worked collaboratively with the regulators for more than four years." Stellantis, the company that makes the trucks, has already recalled the model year 2019 trucks and has initiated a recall of the model year 2013 to 2018 trucks. The software in those trucks will be recalibrated to ensure that they are fully compliant with federal emissions law, said Jon Mills, a spokesman for Cummins. Mr. Mills said that "next steps are unclear" on the model year 2020 through 2023, but that the company "continues to work collaboratively with regulators" to resolve the issue. The Justice Department partnered with the Environmental Protection Agency in its investigation of the case.

AI

The New York Times Sues OpenAI and Microsoft Over AI Use of Copyrighted Work (nytimes.com) 59

The New York Times sued OpenAI and Microsoft for copyright infringement on Wednesday, opening a new front in the increasingly intense legal battle over the unauthorized use of published work to train artificial intelligence technologies. From a report: The Times is the first major American media organization to sue the companies, the creators of ChatGPT and other popular A.I. platforms, over copyright issues associated with its written works. The lawsuit [PDF], filed in Federal District Court in Manhattan, contends that millions of articles published by The Times were used to train automated chatbots that now compete with the news outlet as a source of reliable information.

The suit does not include an exact monetary demand. But it says the defendants should be held responsible for "billions of dollars in statutory and actual damages" related to the "unlawful copying and use of The Times's uniquely valuable works." It also calls for the companies to destroy any chatbot models and training data that use copyrighted material from The Times. The lawsuit could test the emerging legal contours of generative A.I. technologies -- so called for the text, images and other content they can create after learning from large data sets -- and could carry major implications for the news industry. The Times is among a small number of outlets that have built successful business models from online journalism, but dozens of newspapers and magazines have been hobbled by readers' migration to the internet.

Programming

Code.org Sues WhiteHat Jr. For $3 Million 8

theodp writes: Back in May 2021, tech-backed nonprofit Code.org touted the signing of a licensing agreement with WhiteHat Jr., allowing the edtech company with a controversial past (Whitehat Jr. was bought for $300M in 2020 by Byju's, an edtech firm that received a $50M investment from Mark Zuckerberg's venture firm) to integrate Code.org's free-to-educators-and-organizations content and tools into their online tutoring service. Code.org did not reveal what it was charging Byju's to use its "free curriculum and open source technology" for commercial purposes, but Code.org's 2021 IRS 990 filing reported $1M in royalties from an unspecified source after earlier years reported $0. Coincidentally, Whitehat Jr. is represented by Aaron Kornblum, who once worked at Microsoft for Brad Smith; Smith, now Microsoft's president, left Code.org's Board just before the lawsuit was filed.

Fast forward to 2023 and the bloom is off the rose, as Court records show that Code.org earlier this month sued Whitehat Education Technology, LLC (Exhibits A and B) in what is called "a civil action for breach of contract arising from Whitehat's failure to pay Code.org the agreed-upon charges for its use of Code.org's platform and licensed content and its ongoing, unauthorized use of that platform and content." According to the filing, "Whitehat agreed [in April 2022] to pay to Code.org licensing fees totaling $4,000,000 pursuant to a four-year schedule" and "made its first four scheduled payments, totaling $1,000,000," but "about a year after the Agreement was signed, Whitehat informed Code.org that it would be unable to make the remaining scheduled license payments." While the original agreement was amended to backload Whitehat's license fee payment obligations, "Whitehat has not paid anything at all beyond the $1,000,000 that it paid pursuant to the 2022 invoices before the Agreement was amended" and "has continued to access Code.org's platform and content."

That Byju's Whitehat Jr. stiffed Code.org is hardly shocking. In June 2023, Reuters reported that Byju's auditor Deloitte cut ties with the troubled Indian Edtech startup that was once an investor darling and valued at $22 billion, adding that a Byju's Board member representing the Chan-Zuckerberg Initiative had resigned with two other Board members. The BBC reported in July that Byju's was guilty of overexpanding during the pandemic (not unlike Zuck's Facebook). Ironically, the lawsuit Exhibits include screenshots showing Mark Zuckerberg teaching Code.org lessons. Zuckerberg and Facebook were once among the biggest backers of Code.org, although it's unclear whether that relationship soured after court documents were released that revealed Code.org's co-founders talking smack about Zuck and Facebook's business practices to lawyers for Six4Three, which was suing Facebook.

Code.org's curriculum is also used by the Amazon Future Engineer (AFE) initiative, but it is unclear what royalties -- if any -- Amazon pays to Code.org for the use of Code.org curriculum. While the AFE site boldly says, "we provide free computer science curriculum," the AFE fine print further explains that "our partners at Code.org and ProjectSTEM offer a wide array of introductory and advance curriculum options and teacher training." It's unclear what kind of organization Amazon's AFE ("Computer Science Learning Childhood to Career") exactly is -- an IRS Tax Exempt Organization Search failed to find any hits for "Amazon Future Engineer" -- making it hard to guess whether Code.org might consider AFE's use of Code.org software 'commercial use.' Would providing a California school district with free K-12 CS curriculum that Amazon boasts of cultivating into its "vocal champion" count as "commercial use"? How about providing free K-12 CS curriculum to children who live where Amazon is seeking incentives? Or if Amazon CEO Jeff Bezos testifies Amazon "funds computer science coursework" for schools as he attempts to counter a Congressional antitrust inquiry? These seem to be some of the kinds of distinctions Richard Stallman anticipated more than a decade ago as he argued against a restriction against commercial use of otherwise free software.

Electronic Frontier Foundation

EFF Warns: 'Think Twice Before Giving Surveillance for the Holidays' (eff.org) 28

"It's easy to default to giving the tech gifts that retailers tend to push on us this time of year..." notes Lifehacker senior writer Thorin Klosowski.

"But before you give one, think twice about what you're opting that person into." A number of these gifts raise red flags for us as privacy-conscious digital advocates. Ring cameras are one of the most obvious examples, but countless others over the years have made the security or privacy naughty list (and many of these same electronics directly clash with your right to repair). One big problem with giving these sorts of gifts is that you're opting another person into a company's intrusive surveillance practice, likely without their full knowledge of what they're really signing up for... And let's not forget about kids. Long subjected to surveillance from elves and their managers, electronics gifts for kids can come with all sorts of surprise issues, like the kid-focused tablet we found this year that was packed with malware and riskware. Kids' smartwatches and a number of connected toys are also potential privacy hazards that may not be worth the risks if not set up carefully.

Of course, you don't have to avoid all technology purchases. There are plenty of products out there that aren't creepy, and a few that just need extra attention during set up to ensure they're as privacy-protecting as possible. While we don't endorse products, you don't have to start your search in a vacuum. One helpful place to start is Mozilla's Privacy Not Included gift guide, which provides a breakdown of the privacy practices and history of products in a number of popular gift categories.... U.S. PIRG also has guidance for shopping for kids, including details about what to look for in popular categories like smart toys and watches....

Your job as a privacy-conscious gift-giver doesn't end at the checkout screen. If you're more tech savvy than the person receiving the item, or you're helping set up a gadget for a child, there's no better gift than helping set it up as privately as possible.... Giving the gift of electronics shouldn't come with so much homework, but until we have a comprehensive data privacy law, we'll likely have to contend with these sorts of set-up hoops. Until that day comes, we can all take the time to help those who need it.

AI

ChatGPT Exploit Finds 24 Email Addresses, Amid Warnings of 'AI Silo' (thehill.com) 67

The New York Times reports: Last month, I received an alarming email from someone I did not know: Rui Zhu, a Ph.D. candidate at Indiana University Bloomington. Mr. Zhu had my email address, he explained, because GPT-3.5 Turbo, one of the latest and most robust large language models (L.L.M.s) from OpenAI, had delivered it to him. My contact information was included in a list of business and personal email addresses for more than 30 New York Times employees that a research team, including Mr. Zhu, had managed to extract from GPT-3.5 Turbo in the fall of this year. With some work, the team had been able to "bypass the model's restrictions on responding to privacy-related queries," Mr. Zhu wrote.

My email address is not a secret. But the success of the researchers' experiment should ring alarm bells because it reveals the potential for ChatGPT, and generative A.I. tools like it, to reveal much more sensitive personal information with just a bit of tweaking. When you ask ChatGPT a question, it does not simply search the web to find the answer. Instead, it draws on what it has "learned" from reams of information — training data that was used to feed and develop the model — to generate one. L.L.M.s train on vast amounts of text, which may include personal information pulled from the Internet and other sources. That training data informs how the A.I. tool works, but it is not supposed to be recalled verbatim... In the example output they provided for Times employees, many of the personal email addresses were either off by a few characters or entirely wrong. But 80 percent of the work addresses the model returned were correct.

The researchers used the API for accessing ChatGPT, the article notes, where "requests that would typically be denied in the ChatGPT interface were accepted..."

"The vulnerability is particularly concerning because no one — apart from a limited number of OpenAI employees — really knows what lurks in ChatGPT's training-data memory."
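The researchers' breakdown of results, addresses that were exactly right versus "off by a few characters or entirely wrong," is the kind of tally that can be sketched with a simple edit-distance check. The sketch below is hypothetical: the data and the near-miss threshold are invented for illustration, not taken from the study.

```python
# Hypothetical sketch: bucketing model-extracted email addresses against
# known ground truth as exact matches, near-misses ("off by a few
# characters"), or wrong. All names and data here are invented.

def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

def score_extractions(extracted, ground_truth, near_threshold=3):
    """Bucket each extracted address as exact, near-miss, or wrong."""
    buckets = {"exact": 0, "near": 0, "wrong": 0}
    for guess, truth in zip(extracted, ground_truth):
        d = edit_distance(guess, truth)
        if d == 0:
            buckets["exact"] += 1
        elif d <= near_threshold:
            buckets["near"] += 1   # "off by a few characters"
        else:
            buckets["wrong"] += 1
    return buckets

# Invented example data:
truth   = ["jane.doe@nytimes.com", "j.smith@nytimes.com"]
guesses = ["jane.doe@nytimes.com", "jsmith@nytimes.com"]
print(score_extractions(guesses, truth))  # {'exact': 1, 'near': 1, 'wrong': 0}
```

Dividing the exact-match count by the total gives the kind of accuracy figure the article cites for work addresses.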

And there was a broader related warning in another article published the same day. Microsoft may be building an AI silo in a walled garden, argues a professor at the University of California, Berkeley's school of information, calling the development "detrimental for technology development, as well as costly and potentially dangerous for society and the economy." [In January] Microsoft sealed its OpenAI relationship with another major investment — this time around $10 billion, much of which was, once again, in the form of cloud credits instead of conventional finance. In return, OpenAI agreed to run and power its AI exclusively through Microsoft's Azure cloud and granted Microsoft certain rights to its intellectual property...

Recent reports that U.K. competition authorities and the U.S. Federal Trade Commission are scrutinizing Microsoft's investment in OpenAI are encouraging. But Microsoft's failure to report these investments for what they are — a de facto acquisition — demonstrates that the company is keenly aware of the stakes and has taken advantage of OpenAI's somewhat peculiar legal status as a non-profit entity to work around the rules...

The U.S. government needs to quickly step in and reverse the negative momentum that is pushing AI into walled gardens. The longer it waits, the harder it will be, both politically and technically, to re-introduce robust competition and the open ecosystem that society needs to maximize the benefits and manage the risks of AI technology.

Television

'Doctor Who' Christmas Special Streams on Disney+ and the BBC (cnet.com) 65

An anonymous Slashdot reader shared this report from CNET: Marking its 60th year on television, the British time-travel series will close out 2023 with one last anniversary special that arrives on Christmas Day. Ncuti Gatwa's Doctor helms the Tardis in The Church on Ruby Road, which centers on an abandoned baby who grows up looking for answers... Disney Plus will stream Doctor Who: The Church on Ruby Road on Monday, Dec. 25, at 12:55 p.m. ET (9:55 a.m. PT) in all regions except the UK and Ireland, where it will air on the BBC. In case you missed it, viewers can also watch David Tennant starring in the other three anniversary specials: The Star Beast, Wild Blue Yonder and The Giggle. All releases are available on Disney Plus.
But what's interesting is that CNET goes on to explain "why a VPN could be a useful tool." Perhaps you're traveling abroad and want to stream Disney Plus while away from home. With a VPN, you're able to virtually change your location on your phone, tablet or laptop to get access to the series from anywhere in the world. There are other good reasons to use a VPN for streaming too. A VPN is the best way to encrypt your traffic and stop your ISP from throttling your speeds...

You can use a VPN to stream content legally as long as VPNs are allowed in your country and you have a valid subscription to the streaming service you're using. The U.S. and Canada are among the countries where VPNs are legal.

United States

US Water Utilities Hacked After Default Passwords Set to '1111', Cybersecurity Officials Say (fastcompany.com) 84

An anonymous reader shared this report from Fast Company: Providers of critical infrastructure in the United States are doing a sloppy job of defending against cyber intrusions, the National Security Council tells Fast Company, pointing to recent Iran-linked attacks on U.S. water utilities that exploited basic security lapses [earlier this month]. The security council tells Fast Company it's also aware of recent intrusions by hackers linked to China's military at American infrastructure entities that include water and energy utilities in multiple states.

Neither the Iran-linked nor the China-linked attacks affected critical systems or caused disruptions, according to reports.

"We're seeing companies and critical services facing increased cyber threats from malicious criminals and countries," Anne Neuberger, the deputy national security advisor for cyber and emerging tech, tells Fast Company. The White House had been urging infrastructure providers to upgrade their cyber defenses before these recent hacks, but "clearly, by the most recent success of the criminal cyberattacks, more work needs to be done," she says... The attacks hit at least 11 different entities using Unitronics devices across the United States, which included six local water facilities, a pharmacy, an aquatics center, and a brewery...

Some of the compromised devices had been connected to the open internet with a default password of "1111," federal authorities say, making it easy for hackers to find them and gain access. Fixing that "doesn't cost any money," Neuberger says, "and those are the kinds of basic things that we really want companies urgently to do." But cybersecurity experts say these attacks point to a larger issue: the general vulnerability of the technology that powers physical infrastructure. Much of the hardware was developed before the internet era and, though these systems were retrofitted with digital capabilities, they still "have insufficient security controls," says Gary Perkins, chief information security officer at cybersecurity firm CISO Global. Additionally, many infrastructure facilities prioritize "operational ease of use rather than security," since many vendors often need to access the same equipment, says Andy Thompson, an offensive cybersecurity expert at CyberArk. But that can make the systems equally easy for attackers to exploit: freely available web tools allow anyone to generate lists of hardware connected to the public internet, like the Unitronics devices used by water companies.

"Not making critical infrastructure easily accessible via the internet should be standard practice," Thompson says.
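The zero-cost fix Neuberger describes, checking internet-facing devices for factory-default credentials, can be sketched as a simple fleet audit. The device records and default-password list below are invented for illustration; "1111" is the default cited by federal authorities.

```python
# Hypothetical sketch of a zero-cost credential audit: flag devices that
# are exposed to the internet AND still using a known default password.
# Device names and most defaults here are invented for illustration.

KNOWN_DEFAULTS = {"1111", "admin", "password", "0000"}

def audit_devices(devices):
    """Return names of internet-exposed devices using a default password."""
    findings = []
    for d in devices:
        if d["internet_facing"] and d["password"] in KNOWN_DEFAULTS:
            findings.append(d["name"])
    return findings

fleet = [
    {"name": "plc-pump-station-2", "password": "1111",     "internet_facing": True},
    {"name": "plc-chlorinator-1",  "password": "x7#Qm!92", "internet_facing": True},
    {"name": "hmi-office",         "password": "1111",     "internet_facing": False},
]
print(audit_devices(fleet))  # ['plc-pump-station-2']
```

In practice the same logic is what attackers run in reverse: public scanning tools enumerate exposed hardware, and default credentials do the rest.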

AI

AI Companies Would Be Required To Disclose Copyrighted Training Data Under New Bill (theverge.com) 42

An anonymous reader quotes a report from The Verge: Two lawmakers filed a bill requiring creators of foundation models to disclose sources of training data so copyright holders know their information was taken. The AI Foundation Model Transparency Act -- filed by Reps. Anna Eshoo (D-CA) and Don Beyer (D-VA) -- would direct the Federal Trade Commission (FTC) to work with the National Institute of Standards and Technology (NIST) to establish rules for reporting training data transparency. Companies that make foundation models will be required to report sources of training data and how the data is retained during the inference process, describe the limitations or risks of the model and how the model aligns with NIST's planned AI Risk Management Framework and any other federal standards that might be established, and provide information on the computational power used to train and run the model. The bill also says AI developers must report efforts to "red team" the model to prevent it from providing "inaccurate or harmful information" around medical or health-related questions, biological synthesis, cybersecurity, elections, policing, financial loan decisions, education, employment decisions, public services, and vulnerable populations such as children.

The bill calls out the importance of training data transparency around copyright as several lawsuits have come out against AI companies alleging copyright infringement. It specifically mentions the case of artists against Stability AI, Midjourney, and Deviant Art, (which was largely dismissed in October, according to VentureBeat), and Getty Images' complaint against Stability AI. The bill still needs to be assigned to a committee and discussed, and it's unclear if that will happen before the busy election campaign season starts. Eshoo and Beyer's bill complements the Biden administration's AI executive order, which helps establish reporting standards for AI models. The executive order, however, is not law, so if the AI Foundation Model Transparency Act passes, it will make transparency requirements for training data a federal rule.

Government

Biden Administration Unveils Hydrogen Tax Credit Plan To Jump-Start Industry (npr.org) 104

An anonymous reader quotes a report from NPR: The Biden administration released its highly anticipated proposal for doling out billions of dollars in tax credits to hydrogen producers Friday, in a massive effort to build out an industry that some hope can be a cleaner alternative to fossil fueled power. The U.S. credit is the most generous in the world for hydrogen production, Jesse Jenkins, a professor at Princeton University who has analyzed the U.S. climate law, said last week. The proposal -- which is part of Democrats' Inflation Reduction Act passed last year -- outlines a tiered system to determine which hydrogen producers get the most credits, with cleaner energy projects receiving larger credits and smaller but still meaningful credits going to those that use fossil fuels to produce hydrogen.

Administration officials estimate the hydrogen production credits will deliver $140 billion in revenue and 700,000 jobs by 2030 -- and will help the U.S. produce 50 million metric tons of hydrogen by 2050. "That's equivalent to the amount of energy currently used by every bus, every plane, every train and every ship in the US combined," Energy Deputy Secretary David M. Turk said on a Thursday call with reporters to preview the proposal. [...] As part of the administration's proposal, firms that produce cleaner hydrogen and meet prevailing wage and registered apprenticeship requirements stand to qualify for a large incentive at $3 per kilogram of hydrogen. Firms that produce hydrogen using fossil fuels get less. The credit ranges from $0.60 to $3 per kilogram, depending on whole-lifecycle emissions.

One contentious issue in the proposal was how to deal with the fact that producing clean hydrogen with electrolyzers draws tremendous amounts of electricity. Few want that to mean that more coal or natural gas-fired power plants run extra hours. The guidance addresses this by calling for producers to document their electricity usage through "energy attribute certificates" -- which will help determine the credits they qualify for. Rachel Fakhry, policy director for emerging technologies at the Natural Resources Defense Council, called the proposal "a win for the climate, U.S. consumers, and the budding U.S. hydrogen industry." The Clean Air Task Force likewise called the proposal "an excellent step toward developing a credible clean hydrogen market in the United States."
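The tiered structure described above can be sketched as a simple lookup by lifecycle emissions. The thresholds below follow the four tiers publicly reported for the Inflation Reduction Act's hydrogen credit (kg of CO2-equivalent per kg of hydrogen), assuming the producer meets the prevailing-wage and apprenticeship requirements; treat the exact cutoffs as an assumption, not the final rule.

```python
# Illustrative sketch of the tiered hydrogen credit described above.
# Tier thresholds (kg CO2e per kg H2, well-to-gate) follow publicly
# reported figures and are an assumption here, not the final regulation.
# Assumes prevailing-wage and apprenticeship requirements are met.

def hydrogen_credit_per_kg(lifecycle_emissions: float) -> float:
    """Credit in dollars per kg of hydrogen, by lifecycle-emissions tier."""
    if lifecycle_emissions < 0.45:
        return 3.00   # cleanest tier: full credit
    elif lifecycle_emissions < 1.5:
        return 1.00
    elif lifecycle_emissions < 2.5:
        return 0.75
    elif lifecycle_emissions <= 4.0:
        return 0.60   # smallest tier, typically fossil-based production
    return 0.0        # above 4 kg CO2e/kg H2: not eligible

print(hydrogen_credit_per_kg(0.3))  # 3.0 (clean electrolyzer project)
print(hydrogen_credit_per_kg(3.5))  # 0.6 (fossil-based production)
```

The spread between the top and bottom tiers, $3.00 versus $0.60 per kilogram, is what gives cleaner projects the stronger incentive the article describes.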

Crime

Teen GTA VI Hacker Sentenced To Indefinite Hospital Order (theverge.com) 77

Emma Roth reports via The Verge: The 18-year-old Lapsus$ hacker who played a critical role in leaking Grand Theft Auto VI footage has been sentenced to life inside a hospital prison, according to a report from the BBC. A British judge ruled on Thursday that Arion Kurtaj is a high risk to the public because he still wants to commit cybercrimes.

In August, a London jury found that Kurtaj carried out cyberattacks against GTA VI developer Rockstar Games and other companies, including Uber and Nvidia. However, since Kurtaj has autism and was deemed unfit to stand trial, the jury was asked to determine whether he committed the acts in question, not whether he did so with criminal intent. During Thursday's hearing, the court heard Kurtaj "had been violent while in custody with dozens of reports of injury or property damage," the BBC reports. A mental health assessment also found that Kurtaj "continued to express the intent to return to cybercrime as soon as possible." He's required to stay in the hospital prison for life unless doctors determine that he's no longer a danger.

Kurtaj leaked 90 videos of GTA VI gameplay footage last September while out on bail for hacking Nvidia and British telecom provider BT / EE. Although he stayed at a hotel under police protection during this time, Kurtaj still managed to carry out an attack on Rockstar Games by using the room's included Amazon Fire Stick and a "newly purchased smart phone, keyboard and mouse," according to a separate BBC report. Kurtaj was arrested for the final time following the incident. Another 17-year-old involved with Lapsus$ was handed an 18-month community sentence, called a Youth Rehabilitation Order, and a ban from using virtual private networks.

Robotics

Massachusetts Lawmakers Mull 'Killer Robot' Bill (techcrunch.com) 14

An anonymous reader quotes a report from TechCrunch, written by Brian Heater: Back in mid-September, a pair of Massachusetts lawmakers introduced a bill "to ensure the responsible use of advanced robotic technologies." What that means in the simplest and most direct terms is legislation that would bar the manufacture, sale and use of weaponized robots. It's an interesting proposal for a number of reasons. The first is a general lack of U.S. state and national laws governing such growing concerns. It's one of those things that has felt like science fiction to such a degree that many lawmakers had no interest in pursuing it in a pragmatic manner. [...] Earlier this week, I spoke about the bill with Massachusetts state representative Lindsay Sabadosa, who filed it alongside Massachusetts state senator Michael Moore.

What is the status of the bill?
We're in an interesting position, because there are a lot of moving parts with the bill. The bill has had a hearing already, which is wonderful news. We're working with the committee on the language of the bill. They have had some questions about why different pieces were written as they were written. We're doing that technical review of the language now -- and also checking in with all stakeholders to make sure that everyone who needs to be at the table is at the table.

When you say "stakeholders" ...
Stakeholders are companies that produce robotics. The robot Spot, which Boston Dynamics produces, and other robots as well, are used by entities like Boston Police Department or the Massachusetts State Police. They might be used by the fire department. So, we're talking to those people to run through the bill, talk about what the changes are. For the most part, what we're hearing is that the bill doesn't really change a lot for those stakeholders. Really the bill is to prevent regular people from trying to weaponize robots, not to prevent the very good uses that the robots are currently employed for.

Does the bill apply to law enforcement as well?
We're not trying to stop law enforcement from using the robots. And what we've heard from law enforcement repeatedly is that they're often used to deescalate situations. They talk a lot about barricade situations or hostage situations. Not to be gruesome, but if people are still alive, if there are injuries, they say it often helps to deescalate, rather than sending in officers, which we know can often escalate the situation. So, no, we wouldn't change any of those uses. The legislation does ask that law enforcement get warrants for the use of robots if they're using them in place of when they would send in a police officer. That's pretty common already. Law enforcement has to do that if it's not an emergency situation. We're really just saying, "Please follow current protocol. And if you're going to use a robot instead of a human, let's make sure that protocol is still the standard."

I'm sure you've been following the stories out of places like San Francisco and Oakland, where there's an attempt to weaponize robots. Is that included in this?
We haven't had law enforcement weaponize robots, and no one has said, "We'd like to attach a gun to a robot" from law enforcement in Massachusetts. I think because of some of those past conversations there's been a desire to not go down that route. And I think that local communities would probably have a lot to say if the police started to do that. So, while the legislation doesn't outright ban that, we are not condoning it either.
Representative Sabadosa said Boston Dynamics "sought us out" and is "leading the charge on this."

"I'm hopeful that we will be the first to get the legislation across the finish line, too," added Rep. Sabadosa. "We've gotten thank-you notes from companies, but we haven't gotten any pushback from them. And our goal is not to stifle innovation. I think there's lots of wonderful things that robots will be used for. [...]"

You can read the full interview here.
Privacy

UK Police To Be Able To Run Face Recognition Searches on 50 Million Driving Licence Holders (theguardian.com) 24

The police will be able to run facial recognition searches on a database containing images of Britain's 50 million driving licence holders under a law change being quietly introduced by the government. From a report: Should the police wish to put a name to an image collected on CCTV, or shared on social media, the legislation would provide them with the powers to search driving licence records for a match. The move, contained in a single clause in a new criminal justice bill, could put every driver in the country in a permanent police lineup, according to privacy campaigners.

Facial recognition searches match the biometric measurements of an identified photograph, such as that contained on driving licences, to those of an image picked up elsewhere. The intention to allow the police or the National Crime Agency (NCA) to exploit the UK's driving licence records is not explicitly referenced in the bill or in its explanatory notes, drawing criticism from leading academics that the government is "sneaking it under the radar." Once the criminal justice bill is enacted, the home secretary, James Cleverly, must establish "driver information regulations" to enable the searches, but he will need only to consult police bodies, according to the bill.
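Matching "biometric measurements" typically means comparing numeric feature vectors (embeddings) extracted from each photograph and flagging records whose similarity to the probe image exceeds a threshold. The sketch below illustrates that general principle only; the database IDs, 4-dimensional vectors, and threshold are all hypothetical (real systems use embeddings of 128 or more dimensions and tuned thresholds), and nothing here reflects the actual police or DVLA implementation.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two face-embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def search_database(probe, database, threshold=0.8):
    """Return (record_id, score) pairs whose similarity to the probe
    meets the threshold, best match first."""
    matches = []
    for record_id, embedding in database.items():
        score = cosine_similarity(probe, embedding)
        if score >= threshold:
            matches.append((record_id, score))
    return sorted(matches, key=lambda m: m[1], reverse=True)

# Hypothetical licence records with toy 4-dimensional embeddings.
licence_db = {
    "DL-1001": [0.9, 0.1, 0.3, 0.2],
    "DL-1002": [0.1, 0.8, 0.2, 0.7],
    "DL-1003": [0.88, 0.12, 0.28, 0.22],
}
probe_image = [0.9, 0.1, 0.3, 0.2]  # embedding of a CCTV still
print(search_database(probe_image, licence_db))
```

Note that a threshold-based search returns every record above the cutoff, not a single definitive identity, which is why campaigners describe such systems as putting everyone in the database into a "permanent police lineup."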

AI

Rite Aid Banned From Using Facial Recognition Software 60

An anonymous reader quotes a report from TechCrunch: Rite Aid has been banned from using facial recognition software for five years, after the Federal Trade Commission (FTC) found that the U.S. drugstore giant's "reckless use of facial surveillance systems" left customers humiliated and put their "sensitive information at risk." The FTC's Order (PDF), which is subject to approval from the U.S. Bankruptcy Court after Rite Aid filed for Chapter 11 bankruptcy protection in October, also instructs Rite Aid to delete any images it collected as part of its facial recognition system rollout, as well as any products that were built from those images. The company must also implement a robust data security program to safeguard any personal data it collects.

A Reuters report from 2020 detailed how the drugstore chain had secretly introduced facial recognition systems across some 200 U.S. stores over an eight-year period starting in 2012, with "largely lower-income, non-white neighborhoods" serving as the technology testbed. With the FTC's increasing focus on the misuse of biometric surveillance, Rite Aid fell firmly in the government agency's crosshairs. Among its allegations are that Rite Aid -- in partnership with two contracted companies -- created a "watchlist database" containing images of customers that the company said had engaged in criminal activity at one of its stores. These images, which were often poor quality, were captured from CCTV or employees' mobile phone cameras.

When a customer entered a store who supposedly matched an existing image on its database, employees would receive an automatic alert instructing them to take action -- and the majority of the time this instruction was to "approach and identify," meaning verifying the customer's identity and asking them to leave. Often, these "matches" were false positives that led to employees incorrectly accusing customers of wrongdoing, creating "embarrassment, harassment, and other harm," according to the FTC. "Employees, acting on false positive alerts, followed consumers around its stores, searched them, ordered them to leave, called the police to confront or remove consumers, and publicly accused them, sometimes in front of friends or family, of shoplifting or other wrongdoing," the complaint reads. Additionally, the FTC said that Rite Aid failed to inform customers that facial recognition technology was in use, while also instructing employees to specifically not reveal this information to customers.
In a press release, Rite Aid said that it was "pleased to reach an agreement with the FTC," but that it disagreed with the crux of the allegations.

"The allegations relate to a facial recognition technology pilot program the Company deployed in a limited number of stores," Rite Aid said in its statement. "Rite Aid stopped using the technology in this small group of stores more than three years ago, before the FTC's investigation regarding the Company's use of the technology began."
The Internet

US Regulators Propose New Online Privacy Safeguards For Children 25

An anonymous reader quotes a report from the New York Times: The Federal Trade Commission on Wednesday proposed sweeping changes to bolster the key federal rule that has protected children's privacy online, in one of the most significant attempts by the U.S. government to strengthen consumer privacy in more than a decade. The changes are intended to fortify the rules underlying the Children's Online Privacy Protection Act of 1998, a law that restricts the online tracking of youngsters by services like social media apps, video game platforms, toy retailers and digital advertising networks. Regulators said the moves would "shift the burden" of online safety from parents to apps and other digital services while curbing how platforms may use and monetize children's data.

The proposed changes would require certain online services to turn off targeted advertising by default for children under 13. They would prohibit the online services from using personal details like a child's cellphone number to induce youngsters to stay on their platforms longer. That means online services would no longer be able to use personal data to bombard young children with push notifications. The proposed updates would also strengthen security requirements for online services that collect children's data as well as limit the length of time online services could keep that information. And they would limit the collection of student data by learning apps and other educational-tech providers, by allowing schools to consent to the collection of children's personal details only for educational purposes, not commercial purposes. [...]

The F.T.C. began reviewing the children's privacy rule in 2019, receiving more than 175,000 comments from tech and advertising industry trade groups, video content developers, consumer advocacy groups and members of Congress. The resulting proposal (PDF) runs more than 150 pages. Proposed changes include narrowing an exception that allows online services to collect persistent identification codes for children for certain internal operations, like product improvement, consumer personalization or fraud prevention, without parental consent. The proposed changes would prohibit online operators from employing such user-tracking codes to maximize the amount of time children spend on their platforms. That means online services would not be able to use techniques like sending mobile phone notifications "to prompt the child to engage with the site or service, without verifiable parental consent," according to the proposal. How online services would comply with the changes is not yet known. Members of the public have 60 days to comment on the proposals, after which the commission will vote.
AI

AI Cannot Be Patent 'Inventor,' UK Supreme Court Rules in Landmark Case (reuters.com) 29

A U.S. computer scientist on Wednesday lost his bid to register patents over inventions created by his artificial intelligence system in a landmark case in Britain about whether AI can own patent rights. From a report: Stephen Thaler wanted to be granted two patents in the UK for inventions he says were devised by his "creativity machine" called DABUS. His attempt to register the patents was refused by Britain's Intellectual Property Office on the grounds that the inventor must be a human or a company, rather than a machine. Thaler appealed to the UK's Supreme Court, which on Wednesday unanimously rejected his appeal, ruling that under UK patent law "an inventor must be a natural person."

"This appeal is not concerned with the broader question whether technical advances generated by machines acting autonomously and powered by AI should be patentable," Judge David Kitchin said in the court's written ruling. "Nor is it concerned with the question whether the meaning of the term 'inventor' ought to be expanded ... to include machines powered by AI which generate new and non-obvious products and processes which may be thought to offer benefits over products and processes which are already known." Thaler's lawyers said in a statement that "the judgment establishes that UK patent law is currently wholly unsuitable for protecting inventions generated autonomously by AI machines."

Canada

Meta's News Ban In Canada Remains As Online News Act Goes Into Effect (bbc.com) 147

An anonymous reader quotes a report from the BBC: A bill that mandates tech giants pay news outlets for their content has come into effect in Canada amid an ongoing dispute with Facebook and Instagram owner Meta over the law. Some have hailed it as a game-changer that sets out a permanent framework that will see a steady drip of funds from wealthy tech companies to Canada's struggling journalism industry. But it has also been met with resistance by Google and Meta -- the only two companies big enough to be encompassed by the law. In response, over the summer, Meta blocked access to news on Facebook and Instagram for Canadians. Google looked set to follow, but after months of talks, the federal government was able to negotiate a deal with the search giant as the company has agreed to pay Canadian news outlets $75 million annually.

No such agreement appears to be on the horizon with Meta, which has called the law "fundamentally flawed." If Meta is refusing to budge, so is the government. "We will continue to push Meta, that makes billions of dollars in profits, even though it is refusing to invest in the journalistic rigor and stability of the media," Prime Minister Justin Trudeau told reporters on Friday.
According to a study by the Media Ecosystem Observatory, the views of Canadian news on Facebook dropped 90% after the company blocked access to news on the platform. Local news outlets have been hit particularly hard.

"The loss of journalism on Meta platforms represents a significant decline in the resiliency of the Canadian media ecosystem," said Taylor Owen, a researcher at McGill and the co-author of the study. He believes it also hurts Meta's brand in the long run, pointing to the fact that Canada's federal government, as well as the government of British Columbia, various municipalities and a handful of large Canadian corporations, have all pulled their advertising off Facebook and Instagram in retaliation.
Security

Comcast Discloses Data Breach of Close To 36 Million Xfinity Customers [UPDATE] (techcrunch.com) 40

In a notice on Monday, Xfinity notified customers of a "data security incident" that resulted in the theft of customer information, including usernames, passwords, contact information, and more. The Verge reports: Xfinity traces the breach to a security vulnerability disclosed by cloud computing company Citrix, which began alerting customers of a flaw in software Xfinity and other companies use on October 10th. While Xfinity says it patched the security hole, it later uncovered suspicious activity on its internal systems "that was concluded to be a result of this vulnerability."

The hack resulted in the theft of customer usernames and hashed passwords, according to Xfinity's notice. Meanwhile, "some customers" may have had their names, contact information, last four digits of their Social Security numbers, dates of birth, and/or secret questions and answers exposed. Xfinity has notified federal law enforcement about the incident and says "data analysis is continuing."
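Xfinity has not disclosed which hashing scheme it used, but the reason stolen *hashed* passwords still force a reset is that attackers can run offline guessing attacks against the hashes. The standard defense is a salted, deliberately slow key-derivation function; the sketch below shows that general approach (here PBKDF2-HMAC-SHA256 from Python's standard library) and is purely illustrative, not a description of Xfinity's systems.

```python
import hashlib
import os

ITERATIONS = 600_000  # high iteration count slows offline guessing

def hash_password(password, salt=None):
    """Return (salt, digest) for a password using salted PBKDF2-HMAC-SHA256.

    A per-user random salt ensures identical passwords produce different
    hashes and defeats precomputed rainbow tables.
    """
    if salt is None:
        salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify_password(password, salt, digest):
    """Recompute the hash with the stored salt and compare."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hashlib.compare_digest(candidate, digest) if hasattr(hashlib, "compare_digest") else candidate == digest
```

Even with a scheme like this, a leaked hash lets an attacker test guesses at their leisure, which is why breached services prompt password changes regardless of hashing.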

We still don't know how many users were affected by the breach. Xfinity will automatically ask customers to change their passwords the next time they log in to their accounts, and it's also encouraging users to turn on two-factor authentication. You can find the full notice, including contact information for the company's incident response team, on Xfinity's website (PDF).
UPDATE 12/19/23: According to TechCrunch, almost 36 million Xfinity customers had their sensitive information accessed by hackers via a vulnerability known as "CitrixBleed." The vulnerability is "found in Citrix networking devices often used by big corporations and has been under mass-exploitation by hackers since late August," the report says. "Citrix made patches available in early October, but many organizations did not patch in time. Hackers have used the CitrixBleed vulnerability to hack into big-name victims, including aerospace giant Boeing, the Industrial and Commercial Bank of China and international law firm Allen & Overy."

"In a filing with Maine's attorney general, Comcast confirmed that almost 35.8 million customers are affected by the breach. Comcast's latest earnings report shows the company has more than 32 million broadband customers, suggesting this breach has impacted most, if not all Xfinity customers."
Crime

Nikola Founder Trevor Milton Sentenced To 4 Years For Securities Fraud (techcrunch.com) 34

An anonymous reader quotes a report from TechCrunch: Trevor Milton, the disgraced founder and former CEO of electric truck startup Nikola, was sentenced Monday to four years in prison for securities fraud. The sentence, by Judge Edgardo Ramos in the U.S. District Court in Manhattan, caps a multi-year saga that at one point sent Nikola stock soaring 83% only to come crashing down months later over accusations of fraud and canceled contracts. The sentencing hearing comes after four separate delays, during which Milton has remained free under a $100 million bond.

In his ruling, Ramos said he would impose a sentence of 48 months on each count, served concurrently, and a fine of $1 million. Milton is expected to appeal the sentence, which Ramos acknowledged. Milton sobbed as he pleaded with Judge Ramos for leniency in a long and often confusing statement ahead of the sentencing. At one point, Milton said he stepped down from the CEO post at Nikola not because of fraud allegations, but to support his wife. "I stepped down because my wife was suffering life-threatening sickness," he said in his statement, which reporter Matthew Russell Lee of Inner City Press shared on the social media platform X. "She suffered medical malpractice, someone else's plasma. So I stepped down for that -- not because I was a fraud. The truth matters. I chose my wife over money or power."

During the sentencing hearing, defense attorneys said that Milton wasn't trying to defraud investors or intending to harm anyone. Instead, they argued he simply wanted to be loved and praised like Elon Musk. Prosecutors pushed back and said he lied repeatedly and targeted retail investors. Federal prosecutors recommended an 11-year sentence, but Milton faced a maximum term of 60 years in prison. The government also sought a $5 million fine, forfeiture of a ranch in Utah and an undetermined amount of restitution to investors. Restitution will be determined after Monday's sentencing hearing.
Timeline of events:

June, 2016: Nikola Motor Receives Over 7,000 Preorders Worth Over $2.3 Billion For Its Electric Truck
December, 2016: Nikola Motor Company Reveals Hydrogen Fuel Cell Truck With Range of 1,200 Miles
February, 2020: Nikola Motors Unveils Hybrid Fuel-Cell Concept Truck With 600-Mile Range
June, 2020: Nikola Founder Exaggerated the Capability of His Debut Truck
September, 2020: Nikola Motors Accused of Massive Fraud, Ocean of Lies
September, 2020: Nikola Admits Prototype Was Rolling Downhill In Promo Video
September, 2020: Nikola Founder Trevor Milton Steps Down as Chairman in Battle With Short Seller
October, 2020: Nikola Stock Falls 14 Percent After CEO Downplays Badger Truck Plans
November, 2020: Nikola Stock Plunges As Company Cancels Badger Pickup Truck
July, 2021: Nikola Founder Trevor Milton Indicted on Three Counts of Fraud
December, 2021: EV Startup Nikola Agrees To $125 Million Settlement
September, 2022: Nikola Founder Lied To Investors About Tech, Prosecutor Says in Fraud Trial
Patents

Apple To Pause Selling New Versions of Its Watch After Losing Patent Dispute (nytimes.com) 36

An anonymous reader quotes a report from the New York Times: Apple said on Monday that it would pause sales of its flagship smartwatches online starting Thursday and at retail locations on Christmas Eve. Two months ago, Apple lost a patent case over the technology its smartwatches use to measure blood oxygen levels. The company was ordered to stop selling the Apple Watch Series 9 and Watch Ultra 2 after Christmas, which could set off a run on sales of the watches in the final week of holiday shopping. The move by Apple follows a ruling by the International Trade Commission in October that found several Apple Watches infringe on patents held by Masimo, a medical technology company in Irvine, Calif.

In court, Masimo detailed how Apple poached its top executives and more than a dozen other employees before later releasing a watch with pulse oximeter capabilities -- which measure the percentage of oxygen that red blood cells carry from the lungs to the body -- that were patented by Masimo. To avoid a complete ban on sales, Apple had two months to cut a deal with Masimo to license its technology, or it could appeal to the Biden administration to reverse the ruling. But Joe Kiani, the chief executive of Masimo, said in an interview that Apple had not engaged in licensing negotiations. Instead, he said that Apple had appealed to President Biden to veto the I.T.C. ruling, which Mr. Kiani knows because the administration contacted Masimo about Apple's request. "They're trying to make the agency look like it's helping patent trolls," Mr. Kiani said of the I.T.C.

Mr. Kiani said that he was willing to sell Apple a chip that Masimo had designed to provide pulse oximeter readings on the Apple Watch. The chip is currently in a Masimo medical watch, called the W1, that is approved by the Food and Drug Administration. The device uses algorithms to process red and near-infrared light to determine how oxygen-rich the blood in the arteries is. "If they don't want to use our chip, I'll work with them to make their product good," Mr. Kiani said. "Once it's good enough, I'm happy to give them a license." Apple introduced its first watch with pulse oximetry in 2020. It has included the technology, which it calls "blood oxygen," in subsequent models. But unlike Masimo's W1 device, Apple hasn't had its watches cleared by the F.D.A. for use as a medical device for pulse oximetry.
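Processing red and near-infrared light into an oxygen reading is classically done with the "ratio of ratios" method: oxygenated and deoxygenated hemoglobin absorb the two wavelengths differently, so the ratio of the pulsatile (AC) to steady (DC) signal at each wavelength maps to a saturation percentage. The sketch below shows only that textbook principle; the linear calibration constants are illustrative placeholders, not Masimo's or Apple's (real devices use empirically derived calibration curves and far more sophisticated signal processing).

```python
def spo2_estimate(red_ac, red_dc, ir_ac, ir_dc):
    """Estimate blood oxygen saturation (SpO2, in percent) from red and
    near-infrared photodetector signals via the ratio-of-ratios method.

    AC = pulsatile component of the signal, DC = steady component.
    The calibration constants (110, 25) are illustrative only.
    """
    # Ratio of ratios: normalizes each wavelength by its own baseline.
    r = (red_ac / red_dc) / (ir_ac / ir_dc)
    # Simple linear calibration curve; clamp to the physical 0-100% range.
    spo2 = 110.0 - 25.0 * r
    return max(0.0, min(100.0, spo2))

# Example: R = 0.5 maps to 97.5% saturation under these toy constants.
print(spo2_estimate(red_ac=0.01, red_dc=1.0, ir_ac=0.02, ir_dc=1.0))
```

The dependence on calibration curves fitted to clinical data is one reason a dedicated chip and F.D.A. clearance matter here: the math is simple, but the accuracy claims are not.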
"The Apple Watch accounts for nearly $20 billion of the company's $383.29 billion in annual sales," notes the NYT. The company is the largest smartwatch seller in the world, accounting for about a third of all smartwatch sales.

Slashdot Top Deals