Patents

Scientists Still Shoot For the Moon With Patent-Free Covid Drug 11

An anonymous reader quotes a report from Bloomberg, written by Naomi Kresge: In the early days of the Covid-19 pandemic, hundreds of scientists from all over the world banded together in an open-source effort to develop an antiviral that would be available for all. They could never have anticipated the many roadblocks they would face along the way, including the Russian invasion of Ukraine, which made refugees out of a group of Kyiv chemists who were doing important work for the project. The group, which called itself Covid Moonshot, hasn't given up on its effort to introduce a more affordable, patent-free treatment for the virus. Their open-source Covid antiviral, now funded by Wellcome, is on track to be ready for human testing within the next year and a half, according to Annette von Delft, a University of Oxford scientist and one of the Moonshot group's leaders. More early discovery work on a range of potential inhibitors for other viruses is also still going on and being funded by a US government grant.

"It's a bit like a proof of concept," von Delft says, for bringing a patent-free experimental drug into the clinic, a model that could be repurposed as a tool to fight neglected tropical diseases or antimicrobial resistance, or prepare for future pandemics. "Can we come up with a strategic model that can help those kinds of compounds with less of a business case along?" Of course, there was definitely a business case for a Covid antiviral, and some of the biggest drugmakers rushed to develop them. In 2022, Pfizer Inc.'s Paxlovid was one of the world's best-selling medicines with $18.9 billion in revenue. Demand has since cratered for the pill, which needs to be given shortly after infection and can't be taken alongside a number of other commonly prescribed medicines. Analysts expect the Paxlovid revenue to plunge just shy of $1 billion this year.

However, there is still a need for a better Covid antiviral, particularly in countries where access to the Pfizer pill is limited, according to von Delft. Covid cases have surged again this holiday season, with the rise of a new variant called JN.1 reminding us that the virus is still changing to evade the immunity we've built up so far. Just before Christmas, UK authorities said about one in every 24 people in England and Scotland had the disease. An accessible antiviral could help people return to work more quickly, and it could also be tested as a potential treatment for long Covid. "We know from experience in viral disease that there will be resistance variants evolving over time," von Delft said. "We'll need more than one."
Security

Cyberattack Targets Albanian Parliament's Data System, Halting Its Work (securityweek.com) 2

An anonymous reader quotes a report from SecurityWeek: Albania's Parliament said on Tuesday that it had suffered a cyberattack, with hackers trying to get into its data system, resulting in a temporary halt in its services. A statement said Monday's cyberattack had not "touched the data of the system," adding that experts were working to discover what consequences the attack could have. It said the system's services would resume at a later time. Local media reported that a cellphone provider and an airline were also targeted by Monday's cyberattacks, allegedly carried out by an Iran-based hacking group called Homeland Justice, a claim that could not be independently verified.

Albania suffered a cyberattack in July 2022 that the government and multinational technology companies blamed on the Iranian Foreign Ministry. Believed to be in retaliation for Albania sheltering members of the Iranian opposition group Mujahedeen-e-Khalq, or MEK, the attack led the government to cut diplomatic relations with Iran two months later. The Iranian Foreign Ministry denied Tehran was behind an attack on Albanian government websites and noted that Iran has suffered cyberattacks from the MEK. In June, Albanian authorities raided a camp for exiled MEK members to seize computer devices allegedly linked to prohibited political activities. [...] In a statement sent later Tuesday to The Associated Press, MEK's media spokesperson Ali Safavi claimed the reported cyberattacks in Albania "are not related to the presence or activities" of MEK members in the country.

Piracy

Reckless DMCA Deindexing Pushes NASA's Artemis Towards Black Hole (torrentfreak.com) 83

Andy Maxwell reports via TorrentFreak: As the crew of Artemis 2 prepare to become the first humans to fly to the moon since 1972, the possibilities of space travel are once again igniting imaginations globally. Statistically, more than 92% of internet users who want to learn more about this historic mission, and the program in general, are likely to use Google search. Behind the scenes, however, the ability to find relevant content is under attack. Blundering DMCA takedown notices sent by a company calling itself DMCA Piracy Prevention Inc. claim to protect the rights of an OnlyFans/Instagram model working under the name 'Artemis'. Instead, keyword-based systems that fail to discriminate between copyright-infringing content and content that merely references the word Artemis in some other context are flooding Google with demands to completely deindex non-infringing, unrelated content produced by innocent third parties all over the world.

A recent deindexing demand dated December 13, 2022, lists DMCA Piracy Prevention Inc. of Canada as the sender. The name of the content owner is redacted, but the notice itself states that the company represents a content creator performing under the name Artemis. The notice demands the removal of 3,617 URLs from Google search. If successful, those URLs would be completely unfindable for the more than 92% of the world's searchers who use that engine. [...] At least 9 of the first 20 URLs in the notice demand the removal of non-infringing articles and news reports referencing the Artemis space program. None have anything to do with the content the sender claims to protect. [...]

Theories as to who might own and/or operate DMCA Piracy Prevention Inc. aren't hard to find, but the company does exist and is registered as a corporate entity in Canada. Registered at the same address is a company with remarkably similar details. BranditScan is a corporate entity operating in exactly the same market, offering similar if not identical services. BranditScan has sent DMCA takedown notices to Google under three different notifier accounts.

United States

New US Immigration Rules Spur More Visa Approvals For STEM Workers (science.org) 102

Following policy adjustments by the U.S. Citizenship and Immigration Services (USCIS) in January, more foreign-born workers in science, technology, engineering, and math (STEM) fields are able to live and work permanently in the United States. "The jump comes after USCIS in January 2022 tweaked its guidance criteria relating to two visa categories available to STEM workers," reports Science Magazine. "One is the O-1A, a temporary visa for 'aliens of extraordinary ability' that often paves the way to a green card. The second, which bestows a green card on those with advanced STEM degrees, governs a subset of an EB-2 (employment-based) visa." From the report: The USCIS data, reported exclusively by ScienceInsider, show that the number of O-1A visas awarded in the first year of the revised guidance jumped by almost 30%, to 4570, and held steady in fiscal year 2023, which ended on 30 September. Similarly, the number of STEM EB-2 visas approved in 2022 after a "national interest" waiver shot up by 55% over 2021, to 70,240, and stayed at that level this year. "I'm seeing more aspiring and early-stage startup founders believe there's a way forward for them," says Silicon Valley immigration attorney Sophie Alcorn. She predicts the policy changes will result in "new technology startups that would not have otherwise been created."

President Joe Biden has long sought to make it easier for foreign-born STEM workers to remain in the country and use their talent to spur the U.S. economy. But under the terms of a 1990 law, only 140,000 employment-based green cards may be issued annually, and no more than 7% of those can go to citizens of any one country. The ceiling is well below the demand. And the country quotas have created decades-long queues for scientists and high-tech entrepreneurs born in India and China. The 2022 guidance doesn't alter those limits on employment-based green cards but clarifies the visa process for foreign-born scientists pending any significant changes to the 1990 law. The O-1A work visa, which can be renewed indefinitely, was designed to accelerate the path to a green card for foreign-born high-tech entrepreneurs.

Although there is no cap on the number of O-1A visas awarded, foreign-born scientists have largely ignored this option because it wasn't clear what metrics USCIS would use to assess their applications. The 2022 guidance on O-1As removed that uncertainty by listing eight criteria -- including awards, peer-reviewed publications, and reviewing the work of other scientists -- and stipulating that applicants need to satisfy at least three of them. The second visa policy change affects those with advanced STEM degrees seeking the national interest waiver for an EB-2. Under the normal process of obtaining such a visa, the Department of Labor requires employers to first satisfy rules meant to protect U.S. workers from foreign competition, for example, by showing that the company has failed to find a qualified domestic worker and that the job will pay the prevailing wage. That time-consuming exercise can be waived if visa applicants can prove they are doing "exceptional" work of "substantial merit and national importance." But once again, the standard for determining whether the labor-force requirements can be waived was vague, so relatively few STEM workers chose that route. The 2022 USCIS guidance not only specifies criteria, which closely track those for the nonimmigrant O-1A visa, but also allows scientists to sponsor themselves.

The Courts

Clowns Sue Clowns.com For Wage Theft (404media.co) 42

An anonymous reader quotes a report from 404 Media: A group of clowns is suing their former employer Clowns.com for multiple labor law violations, according to recently filed court records. Four people -- Brayan Angulo, Cameron Pille, Janina Salorio, and Xander Black -- filed a federal lawsuit on Wednesday alleging Adolph Rodriguez and Erica Barbuto, owners of Clowns.com and their former bosses, misclassified them as independent workers for years, and failed to pay them for their time. The Long Island-based company, which provides entertainers for events, violated the Fair Labor Standards Act and the New York Labor Law, the lawsuit claims.

The owners of Clowns.com didn't give employees detailed pay statements as required by New York law, the lawsuit alleges. "As a result, Plaintiffs did not know how precisely their weekly pay was being calculated, and were thus deprived of information that could be used to challenge and prevent the theft of their wages," it says. The clowns weren't paid for time "spent at the warehouse gathering and loading equipment and supplies into vehicles," or for travel time between parties, or when parties went on for longer than expected, they claim.
Pille said she's "proud to join with my clown colleagues" to stand up to wage theft and misclassification. "For years, Clowns.com has treated clowns, who are largely young actors with no prior training in clowning who sign up for this job to make ends meet, as independent contractors."
Privacy

Researchers Come Up With Better Idea To Prevent AirTag Stalking (arstechnica.com) 29

An anonymous reader quotes a report from Ars Technica: Apple's AirTags are meant to help you effortlessly find your keys or track your luggage. But the same features that make them easy to deploy and inconspicuous in your daily life have also allowed them to be abused as a sinister tracking tool that domestic abusers and criminals can use to stalk their targets. Over the past year, Apple has taken protective steps to notify iPhone and Android users if an AirTag is in their vicinity for a significant amount of time without the presence of its owner's iPhone, which could indicate that an AirTag has been planted to secretly track their location. Apple hasn't said exactly how long this time interval is, but to create the much-needed alert system, Apple made some crucial changes to the location privacy design the company originally developed a few years ago for its "Find My" device tracking feature. Researchers from Johns Hopkins University and the University of California, San Diego, say, though, that they've developed (PDF) a cryptographic scheme to bridge the gap -- prioritizing detection of potentially malicious AirTags while also preserving maximum privacy for AirTag users. [...]

The solution [Johns Hopkins cryptographer Matt Green] and his fellow researchers came up with leans on two established areas of cryptography that the group worked to implement in a streamlined and efficient way so the system could reasonably run in the background on mobile devices without being disruptive. The first element is "secret sharing," which allows the creation of systems that can't reveal anything about a "secret" unless enough separate puzzle pieces present themselves and come together. Then, if the conditions are right, the system can reconstruct the secret. In the case of AirTags, the "secret" is the true, static identity of the device underlying the public identifier that is frequently changing for privacy purposes. Secret sharing was conceptually useful for the researchers to employ because they could develop a mechanism where a device like a smartphone would only be able to determine that it was being followed around by an AirTag with a constantly rotating public identifier if the system received enough of a certain type of ping over time. Then, suddenly, the suspicious AirTag's anonymity would fall away and the system would be able to determine that it had been in close proximity for a concerning amount of time.

Green notes, though, that a limitation of secret sharing algorithms is that they aren't very good at sorting and parsing inputs if they're being deluged by a lot of different puzzle pieces from all different puzzles -- the exact scenario that would occur in the real world where AirTags and Find My devices are constantly encountering each other. With this in mind, the researchers employed a second concept known as "error correction coding," which is specifically designed to sort signal from noise and preserve the durability of signals even if they acquire some errors or corruptions. "Secret sharing and error correction coding have a lot of overlap," Green says. "The trick was to find a way to implement it all that would be fast, and where a phone would be able to reassemble all the puzzle pieces when needed while all of this is running quietly in the background."
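To make the "secret sharing" building block concrete, here is a minimal Python sketch of Shamir-style threshold secret sharing. It illustrates the general concept only, not the researchers' actual construction (which combines secret sharing with error correction coding); the prime field, threshold, and share count are arbitrary choices for the example.

```python
# Minimal illustration of threshold secret sharing (Shamir's scheme).
# Conceptual sketch only -- not the AirTag scheme described in the paper.
import random

PRIME = 2**61 - 1  # a Mersenne prime; arbitrary field size for this example


def split_secret(secret: int, n_shares: int, threshold: int):
    """Split `secret` into n_shares points; any `threshold` of them recover it."""
    # Random polynomial of degree threshold-1 whose constant term is the secret.
    coeffs = [secret % PRIME] + [random.randrange(PRIME) for _ in range(threshold - 1)]
    return [(x, sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME)
            for x in range(1, n_shares + 1)]


def recover_secret(points):
    """Lagrange interpolation at x = 0 returns the constant term (the secret)."""
    secret = 0
    for i, (xi, yi) in enumerate(points):
        num, den = 1, 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        # Modular inverse of den via Fermat's little theorem (PRIME is prime).
        secret = (secret + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
    return secret


if __name__ == "__main__":
    tag_identity = 123456789  # stand-in for a tag's static identity
    shares = split_secret(tag_identity, n_shares=10, threshold=6)
    subset = random.sample(shares, 6)  # "enough pings" observed over time
    assert recover_secret(subset) == tag_identity
    print("recovered identity:", recover_secret(subset))
```

In the researchers' setting, as described above, the "shares" correspond to pings a phone collects over time from a nearby tag: fewer than the threshold reveal nothing about the tag's static identity, while a device that has been followed long enough to gather the threshold can reconstruct it.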
The researchers published (PDF) their first paper in September and submitted it to Apple. More recently, they notified the industry consortium about the proposal.
Google

Google Agrees To Settle Chrome Incognito Mode Class Action Lawsuit (arstechnica.com) 22

Google has indicated that it is ready to settle a class-action lawsuit filed in 2020 over its Chrome browser's Incognito mode. From a report: Arising in the Northern District of California, the lawsuit accused Google of continuing to "track, collect, and identify [users'] browsing data in real time" even when they had opened a new Incognito window. The lawsuit, filed by Florida resident William Byatt and California residents Chasom Brown and Maria Nguyen, accused Google of violating wiretap laws.

It also alleged that sites using Google Analytics or Ad Manager collected information from browsers in Incognito mode, including web page content, device data, and IP address. The plaintiffs also accused Google of taking Chrome users' private browsing activity and then associating it with their already-existing user profiles. Google initially attempted to have the lawsuit dismissed by pointing to the message displayed when users turned on Chrome's incognito mode. That warning tells users that their activity "might still be visible to websites you visit."

AI

New York Times Copyright Suit Wants OpenAI To Delete All GPT Instances (arstechnica.com) 157

An anonymous reader shares a report: The Times is targeting various companies under the OpenAI umbrella, as well as Microsoft, an OpenAI partner that both uses it to power its Copilot service and helped provide the infrastructure for training the GPT Large Language Model. But the suit goes well beyond the use of copyrighted material in training, alleging that OpenAI-powered software will happily circumvent the Times' paywall and ascribe hallucinated misinformation to the Times.

The suit notes that The Times maintains a large staff that allows it to dedicate reporters to a huge range of beats and engage in important investigative journalism, among other things. Because of those investments, the newspaper is often considered an authoritative source on many matters. All of that costs money, and The Times earns it by limiting access to its reporting through a robust paywall. In addition, each print edition has a copyright notification, the Times' terms of service limit the copying and use of any published material, and it can be selective about how it licenses its stories.

In addition to driving revenue, these restrictions also help it to maintain its reputation as an authoritative voice by controlling how its works appear. The suit alleges that OpenAI-developed tools undermine all of that. [...] The suit seeks nothing less than the erasure of both any GPT instances that the parties have trained using material from the Times, as well as the destruction of the datasets that were used for the training. It also asks for a permanent injunction to prevent similar conduct in the future. The Times also wants money, lots and lots of money: "statutory damages, compensatory damages, restitution, disgorgement, and any other relief that may be permitted by law or equity."

Government

India Targets Apple Over Its Phone Hacking Notifications (washingtonpost.com) 100

In October, Apple issued notifications warning more than a half dozen Indian lawmakers that their iPhones had been targeted in state-sponsored attacks. According to a new report from the Washington Post, the Modi government responded by criticizing Apple's security and demanding explanations to mitigate the political impact (Warning: source may be paywalled; alternative source). From the report: Officials from the ruling Bharatiya Janata Party (BJP) publicly questioned whether the Silicon Valley company's internal threat algorithms were faulty and announced an investigation into the security of Apple devices. In private, according to three people with knowledge of the matter, senior Modi administration officials called Apple's India representatives to demand that the company help soften the political impact of the warnings. They also summoned an Apple security expert from outside the country to a meeting in New Delhi, where government representatives pressed the Apple official to come up with alternative explanations for the warnings to users, the people said. They spoke on the condition of anonymity to discuss sensitive matters. "They were really angry," one of those people said.

The visiting Apple official stood by the company's warnings. But the intensity of the Indian government effort to discredit and strong-arm Apple disturbed executives at the company's headquarters, in Cupertino, Calif., and illustrated how even Silicon Valley's most powerful tech companies can face pressure from the increasingly assertive leadership of the world's most populous country -- and one of the most critical technology markets of the coming decade. The recent episode also exemplified the dangers facing government critics in India and the lengths to which the Modi administration will go to deflect suspicions that it has engaged in hacking against its perceived enemies, according to digital rights groups, industry workers and Indian journalists. Many of the more than 20 people who received Apple's warnings at the end of October have been publicly critical of Modi or his longtime ally, Gautam Adani, an Indian energy and infrastructure tycoon. They included a firebrand politician from West Bengal state, a Communist leader from southern India and a New Delhi-based spokesman for the nation's largest opposition party. [...] Gopal Krishna Agarwal, a national spokesman for the BJP, said any evidence of hacking should be presented to the Indian government for investigation.

The Modi government has never confirmed or denied using spyware, and it has refused to cooperate with a committee appointed by India's Supreme Court to investigate whether it had. But two years ago, the Forbidden Stories journalism consortium, which included The Post, found that phones belonging to Indian journalists and political figures were infected with Pegasus, which grants attackers access to a device's encrypted messages, camera and microphone. In recent weeks, The Post, in collaboration with Amnesty, found fresh cases of infections among Indian journalists. Additional work by The Post and New York security firm iVerify found that opposition politicians had been targeted, adding to the evidence suggesting the Indian government's use of powerful surveillance tools. In addition, Amnesty showed The Post evidence it found in June that suggested a Pegasus customer was preparing to hack people in India. Amnesty asked that the evidence not be detailed to avoid teaching Pegasus users how to cover their tracks.
"These findings show that spyware abuse continues unabated in India," said Donncha O Cearbhaill, head of Amnesty International's Security Lab. "Journalists, activists and opposition politicians in India can neither protect themselves against being targeted by highly invasive spyware nor expect meaningful accountability."
Transportation

US Engine Maker Will Pay $1.6 Billion To Settle Claims of Emissions Cheating (nytimes.com) 100

An anonymous reader quotes a report from the New York Times: The United States and the state of California have reached an agreement in principle with the truck engine manufacturer Cummins on a $1.6 billion penalty to settle claims that the company violated the Clean Air Act by installing devices to defeat emissions controls on hundreds of thousands of engines, the Justice Department announced on Friday. The penalty would be the largest ever under the Clean Air Act and the second largest ever environmental penalty in the United States. Defeat devices are parts or software that bypass, defeat or render inoperative emissions controls like pollution sensors and onboard computers. They allow vehicles to pass emissions inspections while still emitting high levels of smog-causing pollutants such as nitrogen oxide, which is linked to asthma and other respiratory illnesses.

The Justice Department has accused the company of installing defeat devices on 630,000 model year 2013 to 2019 RAM 2500 and 3500 pickup truck engines. The company is also alleged to have secretly installed auxiliary emission control devices on 330,000 model year 2019 to 2023 RAM 2500 and 3500 pickup truck engines. "Violations of our environmental laws have a tangible impact. They inflict real harm on people in communities across the country," Attorney General Merrick Garland said in a statement. "This historic agreement should make clear that the Justice Department will be aggressive in its efforts to hold accountable those who seek to profit at the expense of people's health and safety."

In a statement, Cummins said that it had "seen no evidence that anyone acted in bad faith and does not admit wrongdoing." The company said it has "cooperated fully with the relevant regulators, already addressed many of the issues involved, and looks forward to obtaining certainty as it concludes this lengthy matter. Cummins conducted an extensive internal review and worked collaboratively with the regulators for more than four years." Stellantis, the company that makes the trucks, has already recalled the model year 2019 trucks and has initiated a recall of the model year 2013 to 2018 trucks. The software in those trucks will be recalibrated to ensure that they are fully compliant with federal emissions law, said Jon Mills, a spokesman for Cummins. Mr. Mills said that "next steps are unclear" on the model year 2020 through 2023 trucks, but that the company "continues to work collaboratively with regulators" to resolve the issue. The Justice Department partnered with the Environmental Protection Agency in its investigation of the case.

AI

The New York Times Sues OpenAI and Microsoft Over AI Use of Copyrighted Work (nytimes.com) 59

The New York Times sued OpenAI and Microsoft for copyright infringement on Wednesday, opening a new front in the increasingly intense legal battle over the unauthorized use of published work to train artificial intelligence technologies. From a report: The Times is the first major American media organization to sue the companies, the creators of ChatGPT and other popular A.I. platforms, over copyright issues associated with its written works. The lawsuit [PDF], filed in Federal District Court in Manhattan, contends that millions of articles published by The Times were used to train automated chatbots that now compete with the news outlet as a source of reliable information.

The suit does not include an exact monetary demand. But it says the defendants should be held responsible for "billions of dollars in statutory and actual damages" related to the "unlawful copying and use of The Times's uniquely valuable works." It also calls for the companies to destroy any chatbot models and training data that use copyrighted material from The Times. The lawsuit could test the emerging legal contours of generative A.I. technologies -- so called for the text, images and other content they can create after learning from large data sets -- and could carry major implications for the news industry. The Times is among a small number of outlets that have built successful business models from online journalism, but dozens of newspapers and magazines have been hobbled by readers' migration to the internet.

Programming

Code.org Sues WhiteHat Jr. For $3 Million 8

theodp writes: Back in May 2021, tech-backed nonprofit Code.org touted the signing of a licensing agreement with WhiteHat Jr., allowing the edtech company with a controversial past (Whitehat Jr. was bought for $300M in 2020 by Byju's, an edtech firm that received a $50M investment from Mark Zuckerberg's venture firm) to integrate Code.org's free-to-educators-and-organizations content and tools into their online tutoring service. Code.org did not reveal what it was charging Byju's to use its "free curriculum and open source technology" for commercial purposes, but Code.org's 2021 IRS 990 filing reported $1M in royalties from an unspecified source after earlier years reported $0. Coincidentally, Whitehat Jr. is represented by Aaron Kornblum, who once worked at Microsoft for now-President Brad Smith, who left Code.org's Board just before the lawsuit was filed.

Fast forward to 2023 and the bloom is off the rose, as Court records show that Code.org earlier this month sued Whitehat Education Technology, LLC (Exhibits A and B) in what is called "a civil action for breach of contract arising from Whitehat's failure to pay Code.org the agreed-upon charges for its use of Code.org's platform and licensed content and its ongoing, unauthorized use of that platform and content." According to the filing, "Whitehat agreed [in April 2022] to pay to Code.org licensing fees totaling $4,000,000 pursuant to a four-year schedule" and "made its first four scheduled payments, totaling $1,000,000," but "about a year after the Agreement was signed, Whitehat informed Code.org that it would be unable to make the remaining scheduled license payments." While the original agreement was amended to backload Whitehat's license fee payment obligations, "Whitehat has not paid anything at all beyond the $1,000,000 that it paid pursuant to the 2022 invoices before the Agreement was amended" and "has continued to access Code.org's platform and content."

That Byju's Whitehat Jr. stiffed Code.org is hardly shocking. In June 2023, Reuters reported that Byju's auditor Deloitte cut ties with the troubled Indian Edtech startup that was once an investor darling and valued at $22 billion, adding that a Byju's Board member representing the Chan-Zuckerberg Initiative had resigned with two other Board members. The BBC reported in July that Byju's was guilty of overexpanding during the pandemic (not unlike Zuck's Facebook). Ironically, the lawsuit Exhibits include screenshots showing Mark Zuckerberg teaching Code.org lessons. Zuckerberg and Facebook were once among the biggest backers of Code.org, although it's unclear whether that relationship soured after court documents were released that revealed Code.org's co-founders talking smack about Zuck and Facebook's business practices to lawyers for Six4Three, which was suing Facebook.

Code.org's curriculum is also used by the Amazon Future Engineer (AFE) initiative, but it is unclear what royalties -- if any -- Amazon pays to Code.org for the use of Code.org curriculum. While the AFE site boldly says, "we provide free computer science curriculum," the AFE fine print further explains that "our partners at Code.org and ProjectSTEM offer a wide array of introductory and advance curriculum options and teacher training." It's unclear what kind of organization Amazon's AFE ("Computer Science Learning Childhood to Career") exactly is -- an IRS Tax Exempt Organization Search failed to find any hits for "Amazon Future Engineer" -- making it hard to guess whether Code.org might consider AFE's use of Code.org software 'commercial use.' Would providing a California school district with free K-12 CS curriculum that Amazon boasts of cultivating into its "vocal champion" count as "commercial use"? How about providing free K-12 CS curriculum to children who live where Amazon is seeking incentives? Or if Amazon CEO Jeff Bezos testifies Amazon "funds computer science coursework" for schools as he attempts to counter a Congressional antitrust inquiry? These seem to be some of the kinds of distinctions Richard Stallman anticipated more than a decade ago as he argued against a restriction against commercial use of otherwise free software.
Electronic Frontier Foundation

EFF Warns: 'Think Twice Before Giving Surveillance for the Holidays' (eff.org) 28

"It's easy to default to giving the tech gifts that retailers tend to push on us this time of year..." notes Lifehacker senior writer Thorin Klosowski.

"But before you give one, think twice about what you're opting that person into." A number of these gifts raise red flags for us as privacy-conscious digital advocates. Ring cameras are one of the most obvious examples, but countless others over the years have made the security or privacy naughty list (and many of these same electronics directly clash with your right to repair). One big problem with giving these sorts of gifts is that you're opting another person into a company's intrusive surveillance practice, likely without their full knowledge of what they're really signing up for... And let's not forget about kids. Long subjected to surveillance from elves and their managers, electronics gifts for kids can come with all sorts of surprise issues, like the kid-focused tablet we found this year that was packed with malware and riskware. Kids' smartwatches and a number of connected toys are also potential privacy hazards that may not be worth the risks if not set up carefully.

Of course, you don't have to avoid all technology purchases. There are plenty of products out there that aren't creepy, and a few that just need extra attention during set up to ensure they're as privacy-protecting as possible. While we don't endorse products, you don't have to start your search in a vacuum. One helpful place to start is Mozilla's Privacy Not Included gift guide, which provides a breakdown of the privacy practices and history of products in a number of popular gift categories.... U.S. PIRG also has guidance for shopping for kids, including details about what to look for in popular categories like smart toys and watches....

Your job as a privacy-conscious gift-giver doesn't end at the checkout screen. If you're more tech savvy than the person receiving the item, or you're helping set up a gadget for a child, there's no better gift than helping set it up as privately as possible.... Giving the gift of electronics shouldn't come with so much homework, but until we have a comprehensive data privacy law, we'll likely have to contend with these sorts of set-up hoops. Until that day comes, we can all take the time to help those who need it.

AI

ChatGPT Exploit Finds 24 Email Addresses, Amid Warnings of 'AI Silo' (thehill.com) 67

The New York Times reports: Last month, I received an alarming email from someone I did not know: Rui Zhu, a Ph.D. candidate at Indiana University Bloomington. Mr. Zhu had my email address, he explained, because GPT-3.5 Turbo, one of the latest and most robust large language models (L.L.M.) from OpenAI, had delivered it to him. My contact information was included in a list of business and personal email addresses for more than 30 New York Times employees that a research team, including Mr. Zhu, had managed to extract from GPT-3.5 Turbo in the fall of this year. With some work, the team had been able to "bypass the model's restrictions on responding to privacy-related queries," Mr. Zhu wrote.

My email address is not a secret. But the success of the researchers' experiment should ring alarm bells because it reveals the potential for ChatGPT, and generative A.I. tools like it, to reveal much more sensitive personal information with just a bit of tweaking. When you ask ChatGPT a question, it does not simply search the web to find the answer. Instead, it draws on what it has "learned" from reams of information — training data that was used to feed and develop the model — to generate one. L.L.M.s train on vast amounts of text, which may include personal information pulled from the Internet and other sources. That training data informs how the A.I. tool works, but it is not supposed to be recalled verbatim... In the example output they provided for Times employees, many of the personal email addresses were either off by a few characters or entirely wrong. But 80 percent of the work addresses the model returned were correct.

The researchers used the API for accessing ChatGPT, the article notes, where "requests that would typically be denied in the ChatGPT interface were accepted..."

"The vulnerability is particularly concerning because no one — apart from a limited number of OpenAI employees — really knows what lurks in ChatGPT's training-data memory."

And there was a broader related warning in another article published the same day. Microsoft may be building an AI silo in a walled garden, argues a professor at the University of California, Berkeley's school of information, calling the development "detrimental for technology development, as well as costly and potentially dangerous for society and the economy." [In January] Microsoft sealed its OpenAI relationship with another major investment — this time around $10 billion, much of which was, once again, in the form of cloud credits instead of conventional finance. In return, OpenAI agreed to run and power its AI exclusively through Microsoft's Azure cloud and granted Microsoft certain rights to its intellectual property...

Recent reports that U.K. competition authorities and the U.S. Federal Trade Commission are scrutinizing Microsoft's investment in OpenAI are encouraging. But Microsoft's failure to report these investments for what they are — a de facto acquisition — demonstrates that the company is keenly aware of the stakes and has taken advantage of OpenAI's somewhat peculiar legal status as a non-profit entity to work around the rules...

The U.S. government needs to quickly step in and reverse the negative momentum that is pushing AI into walled gardens. The longer it waits, the harder it will be, both politically and technically, to re-introduce robust competition and the open ecosystem that society needs to maximize the benefits and manage the risks of AI technology.

Television

'Doctor Who' Christmas Special Streams on Disney+ and the BBC (cnet.com) 65

An anonymous Slashdot reader shared this report from CNET: Marking its 60th year on television, the British time-travel series will close out 2023 with one last anniversary special that arrives on Christmas Day. Ncuti Gatwa's Doctor helms the Tardis in The Church on Ruby Road, which centers on an abandoned baby who grows up looking for answers... Disney Plus will stream Doctor Who: The Church on Ruby Road on Monday, Dec. 25, at 12:55 p.m. ET (9:55 a.m. PT) in all regions except the UK and Ireland, where it will air on the BBC. In case you missed it, viewers can also watch David Tennant starring in the other three anniversary specials: The Star Beast, Wild Blue Yonder and The Giggle. All releases are available on Disney Plus.
But what's interesting is CNET goes on to explain "why a VPN could be a useful tool." Perhaps you're traveling abroad and want to stream Disney Plus while away from home. With a VPN, you're able to virtually change your location on your phone, tablet or laptop to get access to the series from anywhere in the world. There are other good reasons to use a VPN for streaming too. A VPN is the best way to encrypt your traffic and stop your ISP from throttling your speeds...

You can use a VPN to stream content legally as long as VPNs are allowed in your country and you have a valid subscription to the streaming service you're using. The U.S. and Canada are among the countries where VPNs are legal.

United States

US Water Utilities Hacked After Default Passwords Set to '1111', Cybersecurity Officials Say (fastcompany.com) 84

An anonymous reader shared this report from Fast Company: Providers of critical infrastructure in the United States are doing a sloppy job of defending against cyber intrusions, the National Security Council tells Fast Company, pointing to recent Iran-linked attacks on U.S. water utilities that exploited basic security lapses [earlier this month]. The security council tells Fast Company it's also aware of recent intrusions by hackers linked to China's military at American infrastructure entities that include water and energy utilities in multiple states.

Neither the Iran-linked nor the China-linked attacks affected critical systems or caused disruptions, according to reports.

"We're seeing companies and critical services facing increased cyber threats from malicious criminals and countries," Anne Neuberger, the deputy national security advisor for cyber and emerging tech, tells Fast Company. The White House had been urging infrastructure providers to upgrade their cyber defenses before these recent hacks, but "clearly, by the most recent success of the criminal cyberattacks, more work needs to be done," she says... The attacks hit at least 11 different entities using Unitronics devices across the United States, which included six local water facilities, a pharmacy, an aquatics center, and a brewery...

Some of the compromised devices had been connected to the open internet with a default password of "1111," federal authorities say, making it easy for hackers to find them and gain access. Fixing that "doesn't cost any money," Neuberger says, "and those are the kinds of basic things that we really want companies urgently to do." But cybersecurity experts say these attacks point to a larger issue: the general vulnerability of the technology that powers physical infrastructure. Much of the hardware was developed before the internet and, though they were retrofitted with digital capabilities, still "have insufficient security controls," says Gary Perkins, chief information security officer at cybersecurity firm CISO Global. Additionally, many infrastructure facilities prioritize "operational ease of use rather than security," since many vendors often need to access the same equipment, says Andy Thompson, an offensive cybersecurity expert at CyberArk. But that can make the systems equally easy for attackers to exploit: freely available web tools allow anyone to generate lists of hardware connected to the public internet, like the Unitronics devices used by water companies.

"Not making critical infrastructure easily accessible via the internet should be standard practice," Thompson says.

AI

AI Companies Would Be Required To Disclose Copyrighted Training Data Under New Bill (theverge.com) 42

An anonymous reader quotes a report from The Verge: Two lawmakers filed a bill requiring creators of foundation models to disclose sources of training data so copyright holders know their information was taken. The AI Foundation Model Transparency Act -- filed by Reps. Anna Eshoo (D-CA) and Don Beyer (D-VA) -- would direct the Federal Trade Commission (FTC) to work with the National Institute of Standards and Technology (NIST) to establish rules for reporting training data transparency. Companies that make foundation models will be required to report sources of training data and how the data is retained during the inference process, describe the limitations or risks of the model, explain how the model aligns with NIST's planned AI Risk Management Framework and any other federal standards that might be established, and provide information on the computational power used to train and run the model. The bill also says AI developers must report efforts to "red team" the model to prevent it from providing "inaccurate or harmful information" around medical or health-related questions, biological synthesis, cybersecurity, elections, policing, financial loan decisions, education, employment decisions, public services, and vulnerable populations such as children.

The bill calls out the importance of training data transparency around copyright as several lawsuits have come out against AI companies alleging copyright infringement. It specifically mentions the case of artists against Stability AI, Midjourney, and DeviantArt (which was largely dismissed in October, according to VentureBeat), and Getty Images' complaint against Stability AI. The bill still needs to be assigned to a committee and discussed, and it's unclear if that will happen before the busy election campaign season starts. Eshoo and Beyer's bill complements the Biden administration's AI executive order, which helps establish reporting standards for AI models. The executive order, however, is not law, so if the AI Foundation Model Transparency Act passes, it will make transparency requirements for training data a federal rule.

Government

Biden Administration Unveils Hydrogen Tax Credit Plan To Jump-Start Industry (npr.org) 104

An anonymous reader quotes a report from NPR: The Biden administration released its highly anticipated proposal for doling out billions of dollars in tax credits to hydrogen producers Friday, in a massive effort to build out an industry that some hope can be a cleaner alternative to fossil-fueled power. The U.S. credit is the most generous in the world for hydrogen production, Jesse Jenkins, a professor at Princeton University who has analyzed the U.S. climate law, said last week. The proposal -- which is part of Democrats' Inflation Reduction Act passed last year -- outlines a tiered system to determine which hydrogen producers get the most credits, with cleaner energy projects receiving more, and smaller, but still meaningful credits going to those that use fossil fuel to produce hydrogen.

Administration officials estimate the hydrogen production credits will deliver $140 billion in revenue and 700,000 jobs by 2030 -- and will help the U.S. produce 50 million metric tons of hydrogen by 2050. "That's equivalent to the amount of energy currently used by every bus, every plane, every train and every ship in the US combined," Energy Deputy Secretary David M. Turk said on a Thursday call with reporters to preview the proposal. [...] As part of the administration's proposal, firms that produce cleaner hydrogen and meet prevailing wage and registered apprenticeship requirements stand to qualify for a large incentive at $3 per kilogram of hydrogen. Firms that produce hydrogen using fossil fuels get less. The credit ranges from $0.60 to $3 per kilogram, depending on whole lifecycle emissions.
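To make the tiered structure concrete, here is a small Python sketch that maps a producer's whole-lifecycle emissions to a per-kilogram credit. The specific emissions cutoffs and the reduced rate for projects that miss the wage and apprenticeship requirements are assumptions added for illustration; only the $3 maximum and the $0.60 to $3 range come from the report above.

```python
# Rough sketch of a tiered hydrogen production credit. The cutoffs and the
# reduced rate below are assumptions for illustration, not official figures.

def hydrogen_credit_per_kg(lifecycle_kg_co2e: float, meets_labor_rules: bool = True) -> float:
    """Estimate the credit in dollars per kg of hydrogen from lifecycle emissions
    (kg CO2e per kg H2)."""
    if lifecycle_kg_co2e < 0.45:      # cleanest tier (assumed cutoff)
        full_rate = 3.00
    elif lifecycle_kg_co2e < 1.5:     # assumed cutoff
        full_rate = 1.00
    elif lifecycle_kg_co2e < 2.5:     # assumed cutoff
        full_rate = 0.75
    elif lifecycle_kg_co2e <= 4.0:    # assumed cutoff
        full_rate = 0.60
    else:
        full_rate = 0.0               # too emissions-intensive to qualify
    # Producers that miss the prevailing-wage and apprenticeship requirements
    # receive only a fraction of the full rate (modeled here as one-fifth, an assumption).
    return full_rate if meets_labor_rules else full_rate / 5


print(hydrogen_credit_per_kg(0.3))   # e.g. clean electrolysis -> 3.0
print(hydrogen_credit_per_kg(3.5))   # e.g. fossil-based production -> 0.6
```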

One contentious issue in the proposal was how to deal with the fact that clean, electrolyzer hydrogen draws tremendous amounts of electricity. Few want that to mean that more coal or natural gas-fired power plants run extra hours. The guidance addresses this by calling for producers to document their electricity usage through "energy attribute certificates" -- which will help determine the credits they qualify for. Rachel Fakhry, policy director for emerging technologies at the Natural Resources Defense Council, called the proposal "a win for the climate, U.S. consumers, and the budding U.S. hydrogen industry." The Clean Air Task Force likewise called the proposal "an excellent step toward developing a credible clean hydrogen market in the United States."

Crime

Teen GTA VI Hacker Sentenced To Indefinite Hospital Order (theverge.com) 77

Emma Roth reports via The Verge: The 18-year-old Lapsus$ hacker who played a critical role in leaking Grand Theft Auto VI footage has been sentenced to life inside a hospital prison, according to a report from the BBC. A British judge ruled on Thursday that Arion Kurtaj is a high risk to the public because he still wants to commit cybercrimes.

In August, a London jury found that Kurtaj carried out cyberattacks against GTA VI developer Rockstar Games and other companies, including Uber and Nvidia. However, since Kurtaj has autism and was deemed unfit to stand trial, the jury was asked to determine whether he committed the acts in question, not whether he did so with criminal intent. During Thursday's hearing, the court heard Kurtaj "had been violent while in custody with dozens of reports of injury or property damage," the BBC reports. A mental health assessment also found that Kurtaj "continued to express the intent to return to cybercrime as soon as possible." He's required to stay in the hospital prison for life unless doctors determine that he's no longer a danger.

Kurtaj leaked 90 videos of GTA VI gameplay footage last September while out on bail for hacking Nvidia and British telecom provider BT / EE. Although he stayed at a hotel under police protection during this time, Kurtaj still managed to carry out an attack on Rockstar Games by using the room's included Amazon Fire Stick and a "newly purchased smart phone, keyboard and mouse," according to a separate BBC report. Kurtaj was arrested for the final time following the incident. Another 17-year-old involved with Lapsus$ was handed an 18-month community sentence, called a Youth Rehabilitation Order, and a ban from using virtual private networks.

Robotics

Massachusetts Lawmakers Mull 'Killer Robot' Bill (techcrunch.com) 14

An anonymous reader quotes a report from TechCrunch, written by Brian Heater: Back in mid-September, a pair of Massachusetts lawmakers introduced a bill "to ensure the responsible use of advanced robotic technologies." What that means in the simplest and most direct terms is legislation that would bar the manufacture, sale and use of weaponized robots. It's an interesting proposal for a number of reasons. The first is a general lack of U.S. state and national laws governing such growing concerns. It's one of those things that has felt like science fiction to such a degree that many lawmakers had no interest in pursuing it in a pragmatic manner. [...] Earlier this week, I spoke about the bill with Massachusetts state representative Lindsay Sabadosa, who filed it alongside Massachusetts state senator Michael Moore.

What is the status of the bill?
We're in an interesting position, because there are a lot of moving parts with the bill. The bill has had a hearing already, which is wonderful news. We're working with the committee on the language of the bill. They have had some questions about why different pieces were written as they were written. We're doing that technical review of the language now -- and also checking in with all stakeholders to make sure that everyone who needs to be at the table is at the table.

When you say "stakeholders" ...
Stakeholders are companies that produce robotics. The robot Spot, which Boston Dynamics produces, and other robots as well, are used by entities like Boston Police Department or the Massachusetts State Police. They might be used by the fire department. So, we're talking to those people to run through the bill, talk about what the changes are. For the most part, what we're hearing is that the bill doesn't really change a lot for those stakeholders. Really the bill is to prevent regular people from trying to weaponize robots, not to prevent the very good uses that the robots are currently employed for.

Does the bill apply to law enforcement as well?
We're not trying to stop law enforcement from using the robots. And what we've heard from law enforcement repeatedly is that they're often used to deescalate situations. They talk a lot about barricade situations or hostage situations. Not to be gruesome, but if people are still alive, if there are injuries, they say it often helps to deescalate, rather than sending in officers, which we know can often escalate the situation. So, no, we wouldn't change any of those uses. The legislation does ask that law enforcement get warrants for the use of robots if they're using them in place of when they would send in a police officer. That's pretty common already. Law enforcement has to do that if it's not an emergency situation. We're really just saying, "Please follow current protocol. And if you're going to use a robot instead of a human, let's make sure that protocol is still the standard."

I'm sure you've been following the stories out of places like San Francisco and Oakland, where there's an attempt to weaponize robots. Is that included in this?
We haven't had law enforcement weaponize robots, and no one has said, "We'd like to attach a gun to a robot" from law enforcement in Massachusetts. I think because of some of those past conversations there's been a desire to not go down that route. And I think that local communities would probably have a lot to say if the police started to do that. So, while the legislation doesn't outright ban that, we are not condoning it either.
Representative Sabadosa said Boston Dynamics "sought us out" and is "leading the charge on this."

"I'm hopeful that we will be the first to get the legislation across the finish line, too," added Rep. Sabadosa. "We've gotten thank-you notes from companies, but we haven't gotten any pushback from them. And our goal is not to stifle innovation. I think there's lots of wonderful things that robots will be used for. [...]"

You can read the full interview here.
