Government

US Moves Closer To Filing Sweeping Antitrust Case Against Apple (nytimes.com) 119

An anonymous reader quotes a report from the New York Times: The Justice Department is in the late stages of an investigation into Apple and could file a sweeping antitrust case taking aim at the company's strategies to protect the dominance of the iPhone as soon as the first half of this year, said three people with knowledge of the matter. The agency is focused on how Apple has used its control over its hardware and software to make it more difficult for consumers to ditch the company's devices, as well as for rivals to compete, said the people, who spoke anonymously because the investigation was active. Specifically, investigators have examined how the Apple Watch works better with the iPhone than with other brands, as well as how Apple locks competitors out of its iMessage service. They have also scrutinized Apple's payments system for the iPhone, which blocks other financial firms from offering similar services, these people said.

The Justice Department is closing in on what would be the most consequential federal antitrust lawsuit challenging Apple, which is the most valuable tech company in the world. If the lawsuit is filed, American regulators will have sued four of the biggest tech companies for monopolistic business practices in less than five years. The Justice Department is currently facing off against Google in two antitrust cases, focused on its search and ad tech businesses, while the Federal Trade Commission has sued Amazon and Meta for stifling competition. The Apple suit would likely be even more expansive than previous challenges to the company, attacking its powerful business model that draws together the iPhone with devices like the Apple Watch and services like Apple Pay to attract and keep consumers loyal to its products. Rivals have said that they have been denied access to key Apple features, like the Siri virtual assistant, prompting them to argue the practices are anticompetitive.

Google

Google Contractor Pays Parents $50 To Scan Their Children's Faces (404media.co) 46

Google is collecting the eyelid shape and skin tone of children via parent-submitted videos, according to a project description online reviewed by 404 Media. From the report: Canadian tech conglomerate TELUS, which says it is working on Google's behalf, is offering parents $50 to film their children wearing various props such as hats or sunglasses as part of the project, the description adds. The project shows the methods some companies are using to build machine learning, artificial intelligence, or facial recognition datasets and products. Rather than scraping already existing images or analyzing previously collected material, TELUS, and by extension Google, is asking the public to contribute directly and get paid in return. Google told 404 Media the collection was part of the company's efforts to verify users' age.

Crime

Mexican Cartel Provided Wi-Fi To Locals - With Threat of Death If They Didn't Use It (theguardian.com) 97

A cartel in the embattled central Mexican state of Michoacan set up its own makeshift internet antennas and told locals they had to pay to use its wifi service or they would be killed, according to prosecutors. New submitter awwshit shares a story: Dubbed "narco-antennas" by local media, the cartel's system involved internet antennas set up in various towns, built with stolen equipment. The group charged approximately 5,000 people elevated prices of between 400 and 500 pesos ($25 to $30) a month, the Michoacan state prosecutor's office told the Associated Press. That meant the group could rake in about $150,000 a month. People were terrorized "to contract the internet services at excessive costs, under the claim that they would be killed if they did not," prosecutors said, though they did not report any such deaths. Local media identified the criminal group as a faction known as Los Viagras. Prosecutors declined to say which cartel was involved because the case was still under investigation, but they confirmed Los Viagras dominates the towns forced to make the wifi payments.

Censorship

Substack Faces User Revolt Over Anti-Censorship Stance (theguardian.com) 271

Alex Hern reports via the Guardian: The email newsletter service Substack is facing a user revolt after its chief executive defended hosting and handling payments for "Nazis" on its platform, citing anti-censorship reasons. In a note on the site published in December, the chief executive, Hamish McKenzie, said the firm "doesn't like Nazis," and wished "no one held these views." But he said the company did not think that censorship -- by demonetising sites that publish extreme views -- was a solution to the problem, and instead made it worse. Some of the largest newsletters on the service have threatened to take their business elsewhere if Substack does not reverse its stance.

On Tuesday Casey Newton, who writes Platformer -- a popular tech newsletter on the platform with thousands of subscribers paying at least $10 a month -- became the most prominent yet. [...] Substack takes a 10% cut of subscriptions from paid newsletters, meaning the loss of Platformer alone could represent six figures of revenue. Other newsletters have already made the jump. Talia Lavin, a journalist with thousands of paid subscribers on her newsletter The Sword and the Sandwich, moved to a competing service, Buttondown, on Tuesday.

Substack's leadership team said in a statement: "As we face growing pressure to censor content published on Substack that to some seems dubious or objectionable, our answer remains the same: we make decisions based on principles not PR, we will defend free expression, and we will stick to our hands-off approach to content moderation."

Crime

Firmware Prank Causes LED Curtain In Russia To Display 'Slava Ukraini' (therecord.media) 109

Alexander Martin reports via The Record: The owner of an apartment in Veliky Novgorod in Russia has been arrested for discrediting the country's armed forces after a neighbor alerted the police to the message 'Slava Ukraini' scrolling across their LED curtains. When police went to the scene, they saw the garland which the owner had hung in celebration of the New Year and a "slogan glorifying the Armed Forces of Ukraine," as a spokesperson for the Ministry of Internal Affairs told state-owned news agency TASS. The apartment owner said the garland was supposed to display a "Happy New Year" greeting, TASS reported.

Several other people in Russia described a similar experience on the AlexGyver web forum, linked to a DIY blog popular in the country. They said at the stroke of midnight on New Year's Eve, their LED curtains also began to show the "Glory to Ukraine" message in Ukrainian. It is not clear whether any of these other posters were also arrested. The man in Veliky Novgorod will have to defend his case in court, according to TASS. Police have seized the curtain itself.

An independent investigation into the cause of the message by the AlexGyver forum users found that affected curtains all used the same open-source firmware code. The original code appears to have originated in Ukraine before someone created a fork translated into Russian. According to the Telegram channel for AlexGyver, the code had been added to the original project on October 18, and then in December the people or person running the fork copied and pasted that update into their own version. "Everyone who downloaded and updated the firmware in December received a gift," the Telegram channel wrote. The message was "really encrypted, hidden from the 'reader' of the code, and is displayed on the first day of the year exclusively for residents of Russia by [geographic region]."
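
The trigger described by the forum's investigation can be sketched in a few lines. This is a hypothetical reconstruction, not the actual firmware code: the XOR obfuscation scheme, the banner function, and the region string are all assumptions, but they illustrate how a string can be hidden from a casual reader of the source and revealed only on a specific date in a specific region.

```python
from datetime import date

# Hypothetical reconstruction of the prank's logic. XOR-ing the message with a
# key keeps the plain string out of the source; XOR-ing again restores it.
KEY = 0x5A
HIDDEN = bytes(b ^ KEY for b in "Slava Ukraini".encode("utf-8"))

def banner(today: date, region: str, default: str = "Happy New Year") -> str:
    """Return the scrolling text; the hidden payload fires only on Jan 1 in RU."""
    if today.month == 1 and today.day == 1 and region == "RU":
        return bytes(b ^ KEY for b in HIDDEN).decode("utf-8")
    return default

print(banner(date(2024, 1, 1), "RU"))  # Slava Ukraini
print(banner(date(2024, 1, 1), "DE"))  # Happy New Year
```

Anyone who merely pulled the December update and skimmed the diff would see only an opaque byte array, which matches the forum's description of a payload "hidden from the 'reader' of the code."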

Government

New Jersey Used COVID Relief Funds To Buy Banned Chinese Surveillance Cameras (404media.co) 25

A federal criminal complaint has revealed that state and local agencies in New Jersey bought millions of dollars worth of banned Chinese surveillance cameras. The cameras were purchased from a local company that rebranded the banned equipment made by Dahua Technology, a company that has been implicated in the surveillance of the Uyghur people in Xinjiang. According to 404 Media, "At least $15 million of the equipment was bought using federal COVID relief funds." From the report: The feds charged Tamer Zakhary, the CEO of the New Jersey-based surveillance company Packetalk, with three counts of wire fraud and a separate count of false statements for repeatedly lying to state and local agencies about the provenance of his company's surveillance cameras. Some of the cameras Packetalk sold to local agencies were Dahua cameras that had the Dahua logo removed and the colors of the camera changed, according to the criminal complaint.

Dahua Technology is the second largest surveillance camera company in the world. In 2019, the U.S. government banned the purchase of Dahua cameras using federal funds because their cameras have "been implicated in human rights violations and abuses in the implementation of China's campaign of repression, mass arbitrary detention, and high-technology surveillance against Uyghurs, Kazakhs, and other members of Muslim minority groups in Xinjiang." The FCC later said that Dahua cameras "pose an unacceptable risk to U.S. national security." Dahua is not named in the federal complaint, but [404 Media's Jason Koebler] was able to cross-reference details in the complaint and match specific cameras sold by Packetalk to Dahua products.

According to the FBI, Zakhary sold millions of dollars of surveillance equipment, including rebranded Dahua cameras, to agencies all over New Jersey despite knowing that the cameras were illegal to sell to public agencies. Zakhary also helped two specific agencies in New Jersey (called "Victim Agency-1" and "Victim Agency-2" in the complaint) justify their purchases using federal COVID relief money from the CARES Act, according to the criminal complaint. The feds allege, essentially, that Zakhary tricked local agencies into buying banned cameras using COVID funds: "Zakhary fraudulently misrepresented to the Public Safety Customers that [Packetalk's] products were compliant with Section 889 of the John S. McCain National Defense Authorization Act for 2019 [which banned Dahua cameras], when, in fact, they were not," the complaint reads. "As a result of Zakhary's fraudulent misrepresentations, the Public Safety Customers purchased at least $35 million in surveillance cameras and equipment from [Packetalk], over $15 million of which was federal funds and grants."

Privacy

23andMe Tells Victims It's Their Fault Data Was Breached (techcrunch.com) 95

An anonymous reader quotes a report from TechCrunch: Facing more than 30 lawsuits from victims of its massive data breach, 23andMe is now deflecting the blame to the victims themselves in an attempt to absolve itself from any responsibility, according to a letter sent to a group of victims seen by TechCrunch. "Rather than acknowledge its role in this data security disaster, 23andMe has apparently decided to leave its customers out to dry while downplaying the seriousness of these events," Hassan Zavareei, one of the lawyers representing the victims who received the letter from 23andMe, told TechCrunch in an email.

In December, 23andMe admitted that hackers had stolen the genetic and ancestry data of 6.9 million users, nearly half of all its customers. The data breach started with hackers accessing only around 14,000 user accounts. The hackers broke into this first set of victims by trying passwords that had been exposed in breaches of other services and reused by the targeted customers, a technique known as credential stuffing. From these 14,000 initial victims, however, the hackers were able to then access the personal data of the other 6.9 million victims because those users had opted in to 23andMe's DNA Relatives feature. This optional feature allows customers to automatically share some of their data with people who are considered their relatives on the platform. In other words, by hacking into only 14,000 customers' accounts, the hackers subsequently scraped personal data of another 6.9 million customers whose accounts were not directly hacked.
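
The amplification at the heart of the breach (a small number of cracked accounts exposing a far larger population) can be modeled in a few lines. This is a toy sketch with hypothetical names and numbers, not 23andMe's data model: each compromised account simply reveals the shared profiles of everyone matched to it.

```python
# Toy model of breach amplification through an opt-in sharing feature.
# All account names and match counts below are hypothetical.
def exposed_profiles(matches: dict[str, set[str]], compromised: set[str]) -> set[str]:
    """Profiles visible to an attacker who controls the compromised accounts."""
    exposed = set(compromised)
    for acct in compromised:
        # Each broken-into account reveals the shared data of all its matches.
        exposed |= matches.get(acct, set())
    return exposed

# Three cracked accounts, each matched to ~2,000 relatives on the platform.
matches = {f"victim{i}": {f"relative{i}_{j}" for j in range(2000)} for i in range(3)}
reach = exposed_profiles(matches, {"victim0", "victim1", "victim2"})
print(len(reach))  # 6003: three direct break-ins expose thousands of profiles
```

The same one-hop scraping, scaled from three accounts to 14,000 (each matched to thousands of relatives), is how the attackers reached 6.9 million profiles without directly compromising them.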

But in a letter sent to a group of hundreds of 23andMe users who are now suing the company, 23andMe said that "users negligently recycled and failed to update their passwords following these past security incidents, which are unrelated to 23andMe." "Therefore, the incident was not a result of 23andMe's alleged failure to maintain reasonable security measures," the letter reads. [...] 23andMe's lawyers argued that the stolen data cannot be used to inflict monetary damage against the victims. "The information that was potentially accessed cannot be used for any harm. As explained in the October 6, 2023 blog post, the profile information that may have been accessed related to the DNA Relatives feature, which a customer creates and chooses to share with other users on 23andMe's platform. Such information would only be available if plaintiffs affirmatively elected to share this information with other users via the DNA Relatives feature. Additionally, the information that the unauthorized actor potentially obtained about plaintiffs could not have been used to cause pecuniary harm (it did not include their social security number, driver's license number, or any payment or financial information)," the letter read.

"This finger pointing is nonsensical," said Zavareei. "23andMe knew or should have known that many consumers use recycled passwords and thus that 23andMe should have implemented some of the many safeguards available to protect against credential stuffing -- especially considering that 23andMe stores personal identifying information, health information, and genetic information on its platform."

"The breach impacted millions of consumers whose data was exposed through the DNA Relatives feature on 23andMe's platform, not because they used recycled passwords," added Zavareei. "Of those millions, only a few thousand accounts were compromised due to credential stuffing. 23andMe's attempt to shirk responsibility by blaming its customers does nothing for these millions of consumers whose data was compromised through no fault of their own whatsoever."

Facebook

Meet 'Link History,' Facebook's New Way To Track the Websites You Visit (gizmodo.com) 17

An anonymous reader quotes a report from Gizmodo: Facebook recently rolled out a new "Link History" setting that creates a special repository of all the links you click on in the Facebook mobile app. Users can opt out, but Link History is turned on by default, and the data is used for targeted ads. The company pitches Link History as a useful tool for consumers "with your browsing activity saved in one place," rather than another way to keep tabs on your behavior. With the new setting you'll "never lose a link again," Facebook says in a pop-up encouraging users to consent to the new tracking method. The company goes on to mention that "When you allow link history, we may use your information to improve your ads across Meta technologies."

Facebook promises to delete the Link History it's created for you within 90 days if you turn the setting off. According to a Facebook help page, Link History isn't available everywhere. The company says it's rolling out globally "over time." This is a privacy improvement in some ways, but the setting raises more questions than it answers. Meta has always kept track of the links you click on, and this is the first time users have had any visibility or control over this corner of the company's internet spying apparatus. In other words, Meta is just asking users for permission for a category of tracking that it's been using for over a decade. Beyond that, there are a number of ways this setting might give users an illusion of privacy that Meta isn't offering.

"The Link History doesn't mention anything about the invasive ways Facebook monitors what you're doing once you visit a webpage," notes Gizmodo's Thomas Germain. "It seems the setting only affects Meta's record of the fact that you clicked a link in the first place. Furthermore, Meta links everything you do on Facebook, Instagram, WhatsApp, and its other products. Unlike several of Facebook's other privacy settings, Link History doesn't say that it affects any of Meta's other apps, leaving you with the data harvesting status quo on other parts of Mark Zuckerberg's empire."

"Link History also creates a confusing new regime that establishes privacy settings that don't apply if you access Facebook outside of the Facebook app. If you log in to Facebook on a computer or a mobile browser instead, Link History doesn't protect you. In fact, you can't see the Link History page at all if you're looking at Facebook on your laptop."

The Courts

The Humble Emoji Has Infiltrated the Corporate World (theatlantic.com) 56

An anonymous reader shares a report: A court in Washington, D.C., has been stuck with a tough, maybe impossible question: What does the full moon face emoji mean? Let me explain: In the summer of 2022, Ryan Cohen, a major investor in Bed Bath & Beyond, responded to a tweet about the beleaguered retailer with a side-eyed moon emoji. Later that month, Cohen -- hailed as a "meme king" for his starring role in the GameStop craze -- disclosed that his stake in the company had grown to nearly 12 percent; the stock price subsequently shot up. That week, he sold all of his shares and walked away with a reported $60 million windfall.

Now shareholders are suing him for securities fraud, claiming that Cohen misled investors by using the emoji the way meme-stock types sometimes do -- to suggest that the stock was going "to the moon." A class-action lawsuit with big money on the line has come to legal arguments such as this: "There is no way to establish objectively the truth or falsity of a tiny lunar cartoon," as Cohen's lawyers wrote in an attempt to get the emoji claim dismissed. That argument was denied, and the court held that "emojis may be actionable."

The humble emoji -- and its older cousin, the emoticon -- has infiltrated the corporate world, especially in tech. Last month, when OpenAI briefly ousted Sam Altman and replaced him with an interim CEO, the company's employees reportedly responded with a vulgar emoji on Slack. That FTX, the failed cryptocurrency exchange once run by Sam Bankman-Fried, apparently used these little icons to approve million-dollar expense reports was held up during bankruptcy proceedings as a damning example of its poor corporate controls. And in February, a judge allowed a lawsuit to move forward alleging that an NFT company called Dapper Labs was illegally promoting unregistered securities on Twitter, because "the 'rocket ship' emoji, 'stock chart' emoji, and 'money bags' emoji objectively mean one thing: a financial return on investment."

Medicine

Will 2024 Bring a 'Major Turning Point' in US Health Care? (usatoday.com) 154

"This year has been a major turning point in American health care," reports USA Today, "and patients can anticipate several major developments in the new year," including the beginning of a CRISPR "revolution" and "a new reckoning with drug prices that could change the landscape of the U.S. health care system for decades to come." Health care officials expect 2024 to bring a wave of innovation and change in medicine, treatment and public health... Many think 2024 could be the year more people have the tools to follow through on New Year's resolutions about weight loss. If they can afford them and manage to stick with them, people can turn to a new generation of remarkably effective weight-loss drugs, also called GLP-1s, which offer the potential for substantial weight loss...

In 2023, mental health issues ranked among the nation's most deadly, costly and pervasive health crises... The dearth of remedies has also paved the way for an unsuspecting class of drugs: psychedelics. MDMA, a party drug commonly known as "ecstasy," could win approval for legal distribution in 2024, as a treatment for post-traumatic stress disorder. Another psychedelic, the ketamine derivative esketamine, sold as Spravato, was approved in 2019 to treat depression, but it is being treated like a conventional therapy that must be dosed regularly, not like a psychedelic that provides a long-lasting learning experience, said Matthew Johnson, an expert in psychedelics at Johns Hopkins University. MDMA (midomafetamine capsules) would be different, as the first true psychedelic to win FDA approval.

In a late-stage trial of patients with moderate or severe post-traumatic stress disorder, close to 90% showed clinically significant improvements four months after three treatments with MDMA and more than 70% no longer met the criteria for having the disorder, which represented "really impressive results," according to Johnson. Psilocybin, known colloquially as "magic mushrooms," is also working its way through the federal approval process, but it likely won't come up before officials for another year, Johnson said. Psychedelics are something to keep an eye on in the future, as they're being used to treat an array of mental health issues: esketamine for depression, MDMA for PTSD and psilocybin for addiction. Johnson said his research suggests that psychedelics will probably have a generalizable benefit across many mental health challenges in the years to come.

2024 will also be the first year America's drug-makers face new limits on how much they can increase prices for drugs covered by the federal health insurance program Medicare.

Earth

20% of America's Plants and Animals are At Risk of Extinction (usatoday.com) 56

It was half a century ago that America passed legislation to protect vanishing species and their habitats — and since then, more than five dozen species have recovered. Just one example: In 1963 only 417 nesting pairs of bald eagles were found in the lower 48 states. But today there are more than 300,000 bald eagles, writes USA Today. "[T]hough its future remains uncertain, many experts say it remains one of the nation's crowning achievements."

But 1,252 species are still listed as endangered in the U.S. — 486 animals, and 766 plants — with 417 more species categorized as "threatened." The perils of the changing climate add urgency to calls for increased funding and more protection. In North Carolina, for example, the rising sea steadily creeps over a refuge that's home to the sole remaining wild red wolf population. Off New England, warming waters forced changes in the foraging habits of the endangered North Atlantic right whale, putting the massive marine mammals in harm's way more often... One in 5 plant and animal species in the nation remain at risk of extinction, says Susan Holmes, executive director of the Endangered Species Coalition. "Loss of habitat and climate change are absolutely some of the most important threats that we have."

"We are at what I would say is a pivotal moment with the threats of climate change," she said. "We have to act faster than ever in order to ensure that these species are going to thrive."

Patents

Scientists Still Shoot For the Moon With Patent-Free Covid Drug 11

An anonymous reader quotes a report from Bloomberg, written by Naomi Kresge: In the early days of the Covid-19 pandemic, hundreds of scientists from all over the world banded together in an open-source effort to develop an antiviral that would be available for all. They could never have anticipated the many roadblocks they would face along the way, including the Russian invasion of Ukraine, which made refugees out of a group of Kyiv chemists who were doing important work for the project. The group, which called itself Covid Moonshot, hasn't given up on its effort to introduce a more affordable, patent-free treatment for the virus. Their open-source Covid antiviral, now funded by Wellcome, is on track to be ready for human testing within the next year and a half, according to Annette von Delft, a University of Oxford scientist and one of the Moonshot group's leaders. More early discovery work on a range of potential inhibitors for other viruses is also still going on and being funded by a US government grant.

"It's a bit like a proof of concept," von Delft says, for bringing a patent-free experimental drug into the clinic, a model that could be repurposed as a tool to fight neglected tropical diseases or antimicrobial resistance, or prepare for future pandemics. "Can we come up with a strategic model that can help those kinds of compounds with less of a business case along?" Of course, there was definitely a business case for a Covid antiviral, and some of the biggest drugmakers rushed to develop them. In 2022, Pfizer Inc.'s Paxlovid was one of the world's best-selling medicines with $18.9 billion in revenue. Demand has since cratered for the pill, which needs to be given shortly after infection and can't be taken alongside a number of other commonly prescribed medicines. Analysts expect Paxlovid revenue to plunge to just shy of $1 billion this year.

However, there is still a need for a better Covid antiviral, particularly in countries where access to the Pfizer pill is limited, according to von Delft. Covid cases have surged again this holiday season, with the rise of a new variant called JN.1 reminding us that the virus is still changing to evade the immunity we've built up so far. Just before Christmas, UK authorities said about one in every 24 people in England and Scotland had the disease. An accessible antiviral could help people return to work more quickly, and it could also be tested as a potential treatment for long Covid. "We know from experience in viral disease that there will be resistance variants evolving over time," von Delft said. "We'll need more than one."

Security

Cyberattack Targets Albanian Parliament's Data System, Halting Its Work (securityweek.com) 2

An anonymous reader quotes a report from SecurityWeek: Albania's Parliament said on Tuesday that it had suffered a cyberattack with hackers trying to get into its data system, resulting in a temporary halt in its services. A statement said Monday's cyberattack had not "touched the data of the system," adding that experts were working to discover what consequences the attack could have. It said the system's services would resume at a later time. Local media reported that a cellphone provider and an airline were also targeted by Monday's cyberattacks, allegedly carried out by the Iran-based hacking group Homeland Justice, though this could not be independently verified.

Albania suffered a cyberattack in July 2022 that the government and multinational technology companies blamed on the Iranian Foreign Ministry. Believed to be in retaliation for Albania sheltering members of the Iranian opposition group Mujahedeen-e-Khalq, or MEK, the attack led the government to cut diplomatic relations with Iran two months later. The Iranian Foreign Ministry denied Tehran was behind an attack on Albanian government websites and noted that Iran has suffered cyberattacks from the MEK. In June, Albanian authorities raided a camp for exiled MEK members to seize computer devices allegedly linked to prohibited political activities. [...] In a statement sent later Tuesday to The Associated Press, MEK's media spokesperson Ali Safavi claimed the reported cyberattacks in Albania "are not related to the presence or activities" of MEK members in the country.

Piracy

Reckless DMCA Deindexing Pushes NASA's Artemis Towards Black Hole (torrentfreak.com) 83

Andy Maxwell reports via TorrentFreak: As the crew of Artemis 2 prepare to become the first humans to fly to the moon since 1972, the possibilities of space travel are once again igniting imaginations globally. More than 92% of internet users rely on Google search, so that is where most people will look to learn more about this historic mission and the program in general. Behind the scenes, however, the ability to find relevant content is under attack. Blundering DMCA takedown notices sent by a company calling itself DMCA Piracy Prevention Inc. claim to protect the rights of an OnlyFans/Instagram model working under the name 'Artemis'. Instead, keyword-based systems that fail to discriminate between copyright-infringing content and anything else referencing the word Artemis are flooding Google with demands to completely deindex non-infringing, unrelated content produced by innocent third parties all over the world.

A recent deindexing demand dated December 13, 2022, lists DMCA Piracy Prevention Inc. of Canada as the sender. The name of the content owner is redacted but the notice itself states that the company represents a content creator performing under the name Artemis. The notice demands the removal of 3,617 URLs from Google search. If successful, those URLs would be completely unfindable by more than 92% of the world's population who use that search engine. [...] At least 9 of the first 20 URLs in the notice demand the removal of non-infringing articles and news reports referencing the Artemis space program. None have anything to do with the content the sender claims to protect. [...]
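
The failure mode is easy to demonstrate. The sketch below is hypothetical (the URLs and domain list are invented, and the actual notice-generation tooling is not public), but it contrasts a keyword-only filter, which behaves like the notices described above, with one that also requires the URL to sit on a domain known to host the infringing content.

```python
from urllib.parse import urlparse

# Hypothetical URLs; neither domain is real.
urls = [
    "https://news.example/nasa-artemis-2-crew-announced",
    "https://leaks.example/artemis-model-gallery",
]

def naive_flag(urls: list[str], keyword: str = "artemis") -> list[str]:
    """Flag any URL containing the keyword -- the behavior the notices exhibit."""
    return [u for u in urls if keyword in u.lower()]

def scoped_flag(urls: list[str], keyword: str, infringing_domains: set[str]) -> list[str]:
    """Flag only keyword hits on domains known to host the protected content."""
    return [u for u in urls
            if keyword in u.lower() and urlparse(u).netloc in infringing_domains]

print(naive_flag(urls))                                 # flags both, NASA story included
print(scoped_flag(urls, "artemis", {"leaks.example"}))  # flags only the infringing page
```

Even this minimal domain check would have spared the space-program coverage; the notices at issue apparently applied nothing of the sort before demanding deindexing.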

Theories as to who might own and/or operate DMCA Piracy Prevention Inc. aren't hard to find but the company does exist and is registered as a corporate entity in Canada. Registered at the same address is a company with remarkably similar details. BranditScan is a corporate entity operating in exactly the same market offering similar if not identical services. BranditScan has sent DMCA takedown notices to Google under three different notifier accounts.

United States

New US Immigration Rules Spur More Visa Approvals For STEM Workers (science.org) 102

Following policy adjustments by the U.S. Citizenship and Immigration Services (USCIS) in January, more foreign-born workers in science, technology, engineering, and math (STEM) fields are able to live and work permanently in the United States. "The jump comes after USCIS in January 2022 tweaked its guidance criteria relating to two visa categories available to STEM workers," reports Science Magazine. "One is the O-1A, a temporary visa for 'aliens of extraordinary ability' that often paves the way to a green card. The second, which bestows a green card on those with advanced STEM degrees, governs a subset of an EB-2 (employment-based) visa." From the report: The USCIS data, reported exclusively by ScienceInsider, show that the number of O-1A visas awarded in the first year of the revised guidance jumped by almost 30%, to 4,570, and held steady in fiscal year 2023, which ended on 30 September. Similarly, the number of STEM EB-2 visas approved in 2022 after a "national interest" waiver shot up by 55% over 2021, to 70,240, and stayed at that level this year. "I'm seeing more aspiring and early-stage startup founders believe there's a way forward for them," says Silicon Valley immigration attorney Sophie Alcorn. She predicts the policy changes will result in "new technology startups that would not have otherwise been created."

President Joe Biden has long sought to make it easier for foreign-born STEM workers to remain in the country and use their talent to spur the U.S. economy. But under the terms of a 1990 law, only 140,000 employment-based green cards may be issued annually, and no more than 7% of those can go to citizens of any one country. The ceiling is well below the demand. And the country quotas have created decades-long queues for scientists and high-tech entrepreneurs born in India and China. The 2022 guidance doesn't alter those limits on employment-based green cards but clarifies the visa process for foreign-born scientists pending any significant changes to the 1990 law. The O-1A work visa, which can be renewed indefinitely, was designed to accelerate the path to a green card for foreign-born high-tech entrepreneurs.

Although there is no cap on the number of O-1A visas awarded, foreign-born scientists have largely ignored this option because it wasn't clear what metrics USCIS would use to assess their application. The 2022 guidance on O-1As removed that uncertainty by listing eight criteria -- including awards, peer-reviewed publications, and reviewing the work of other scientists -- and stipulating that applicants need to satisfy at least three of them. The second visa policy change affects those with advanced STEM degrees seeking the national interest waiver for an EB-2. Under the normal process of obtaining such a visa, the Department of Labor requires employers to first satisfy rules meant to protect U.S. workers from foreign competition, for example, by showing that the company has failed to find a qualified domestic worker and that the job will pay the prevailing wage. That time-consuming exercise can be waived if visa applicants can prove they are doing "exceptional" work of "substantial merit and national importance." But once again, the standard for determining whether the labor-force requirements can be waived was vague, so relatively few STEM workers chose that route. The 2022 USCIS guidance not only specifies criteria, which closely track those for the nonimmigrant O-1A visa, but also allows scientists to sponsor themselves.

The Courts

Clowns Sue Clowns.com For Wage Theft (404media.co) 42

An anonymous reader quotes a report from 404 Media: A group of clowns is suing their former employer Clowns.com for multiple labor law violations, according to recently filed court records. Four people -- Brayan Angulo, Cameron Pille, Janina Salorio, and Xander Black -- filed a federal lawsuit on Wednesday alleging Adolph Rodriguez and Erica Barbuto, owners of Clowns.com and their former bosses, misclassified them as independent workers for years, and failed to pay them for their time. The Long Island-based company, which provides entertainers for events, violated the Fair Labor Standards Act and the New York Labor Law, the lawsuit claims.

The owners of Clowns.com didn't give employees detailed pay statements as required by New York law, the lawsuit alleges. "As a result, Plaintiffs did not know how precisely their weekly pay was being calculated, and were thus deprived of information that could be used to challenge and prevent the theft of their wages," it says. The clowns weren't paid for time "spent at the warehouse gathering and loading equipment and supplies into vehicles," or for travel time between parties, or when parties went on for longer than expected, they claim.

Pille said she's "proud to join with my clown colleagues" to stand up to wage theft and misclassification. "For years, Clowns.com has treated clowns, who are largely young actors with no prior training in clowning who sign up for this job to make ends meet, as independent contractors."

Privacy

Researchers Come Up With Better Idea To Prevent AirTag Stalking (arstechnica.com) 29

An anonymous reader quotes a report from Ars Technica: Apple's AirTags are meant to help you effortlessly find your keys or track your luggage. But the same features that make them easy to deploy and inconspicuous in your daily life have also allowed them to be abused as a sinister tracking tool that domestic abusers and criminals can use to stalk their targets. Over the past year, Apple has taken protective steps to notify iPhone and Android users if an AirTag is in their vicinity for a significant amount of time without the presence of its owner's iPhone, which could indicate that an AirTag has been planted to secretly track their location. Apple hasn't said exactly how long this time interval is, but to create the much-needed alert system, Apple made some crucial changes to the location privacy design the company originally developed a few years ago for its "Find My" device tracking feature. Researchers from Johns Hopkins University and the University of California, San Diego, say, though, that they've developed (PDF) a cryptographic scheme to bridge the gap -- prioritizing detection of potentially malicious AirTags while also preserving maximum privacy for AirTag users. [...]

The solution [Johns Hopkins cryptographer Matt Green] and his fellow researchers came up with leans on two established areas of cryptography that the group worked to implement in a streamlined and efficient way so the system could reasonably run in the background on mobile devices without being disruptive. The first element is "secret sharing," which allows the creation of systems that can't reveal anything about a "secret" unless enough separate puzzle pieces present themselves and come together. Then, if the conditions are right, the system can reconstruct the secret. In the case of AirTags, the "secret" is the true, static identity of the device underlying the public identifier that is frequently changing for privacy purposes. Secret sharing was conceptually useful for the researchers to employ because they could develop a mechanism where a device like a smartphone would only be able to determine that it was being followed around by an AirTag with a constantly rotating public identifier if the system received enough of a certain type of ping over time. Then, suddenly, the suspicious AirTag's anonymity would fall away and the system would be able to determine that it had been in close proximity for a concerning amount of time.

Green notes, though, that a limitation of secret sharing algorithms is that they aren't very good at sorting and parsing inputs if they're being deluged by a lot of different puzzle pieces from all different puzzles -- the exact scenario that would occur in the real world where AirTags and Find My devices are constantly encountering each other. With this in mind, the researchers employed a second concept known as "error correction coding," which is specifically designed to sort signal from noise and preserve the durability of signals even if they acquire some errors or corruptions. "Secret sharing and error correction coding have a lot of overlap," Green says. "The trick was to find a way to implement it all that would be fast, and where a phone would be able to reassemble all the puzzle pieces when needed while all of this is running quietly in the background."
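The "secret sharing" building block described above can be sketched concretely. The toy example below is a standard Shamir k-of-n scheme, not the researchers' actual construction; the prime field size, the 3-of-5 threshold, and the `tag_identity` value are illustrative assumptions. It shows the core property the researchers rely on: a tag's static identity stays hidden until enough "puzzle pieces" (shares) have been observed, at which point it can be reconstructed.

```python
# Minimal sketch of k-of-n secret sharing (Shamir's scheme): any k shares
# reconstruct the secret; fewer than k reveal nothing about it.
import random

PRIME = 2**127 - 1  # a Mersenne prime; all arithmetic is in the field mod PRIME

def make_shares(secret, k, n):
    """Split `secret` into n shares; any k of them reconstruct it."""
    # Random polynomial of degree k-1 whose constant term is the secret.
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(k - 1)]
    def f(x):
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x=0 recovers the constant term (the secret)."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        # pow(den, PRIME - 2, PRIME) is the modular inverse of den (Fermat).
        secret = (secret + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
    return secret

tag_identity = 123456789  # stand-in for a tag's true, static identity
shares = make_shares(tag_identity, k=3, n=5)
# Any 3 of the 5 shares recover the identity; 2 or fewer reveal nothing:
assert reconstruct(shares[:3]) == tag_identity
assert reconstruct(shares[2:5]) == tag_identity
```

The limitation Green describes follows from the reconstruction step: plain Lagrange interpolation assumes all supplied shares belong to the same polynomial, so a phone deluged with shares from many different tags needs the error-correction layer (omitted here) to sort signal from noise before reconstructing.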

The researchers published (PDF) their first paper in September and submitted it to Apple. More recently, they notified the industry consortium about the proposal.

Google

Google Agrees To Settle Chrome Incognito Mode Class Action Lawsuit (arstechnica.com) 22

Google has indicated that it is ready to settle a class-action lawsuit filed in 2020 over its Chrome browser's Incognito mode. From a report: Arising in the Northern District of California, the lawsuit accused Google of continuing to "track, collect, and identify [users'] browsing data in real time" even when they had opened a new Incognito window. The lawsuit, filed by Florida resident William Byatt and California residents Chasom Brown and Maria Nguyen, accused Google of violating wiretap laws.

It also alleged that sites using Google Analytics or Ad Manager collected information from browsers in Incognito mode, including web page content, device data, and IP address. The plaintiffs also accused Google of taking Chrome users' private browsing activity and then associating it with their already-existing user profiles. Google initially attempted to have the lawsuit dismissed by pointing to the message displayed when users turned on Chrome's incognito mode. That warning tells users that their activity "might still be visible to websites you visit."

AI

New York Times Copyright Suit Wants OpenAI To Delete All GPT Instances (arstechnica.com) 157

An anonymous reader shares a report: The Times is targeting various companies under the OpenAI umbrella, as well as Microsoft, an OpenAI partner that both uses it to power its Copilot service and helped provide the infrastructure for training the GPT Large Language Model. But the suit goes well beyond the use of copyrighted material in training, alleging that OpenAI-powered software will happily circumvent the Times' paywall and ascribe hallucinated misinformation to the Times.

The suit notes that The Times maintains a large staff that allows it to do things like dedicate reporters to a huge range of beats and engage in important investigative journalism, among other things. Because of those investments, the newspaper is often considered an authoritative source on many matters. All of that costs money, and The Times earns that by limiting access to its reporting through a robust paywall. In addition, each print edition has a copyright notification, the Times' terms of service limit the copying and use of any published material, and it can be selective about how it licenses its stories.

In addition to driving revenue, these restrictions also help it to maintain its reputation as an authoritative voice by controlling how its works appear. The suit alleges that OpenAI-developed tools undermine all of that. [...] The suit seeks nothing less than the erasure of both any GPT instances that the parties have trained using material from the Times, as well as the destruction of the datasets that were used for the training. It also asks for a permanent injunction to prevent similar conduct in the future. The Times also wants money, lots and lots of money: "statutory damages, compensatory damages, restitution, disgorgement, and any other relief that may be permitted by law or equity."

Government

India Targets Apple Over Its Phone Hacking Notifications (washingtonpost.com) 100

In October, Apple issued notifications warning more than half a dozen Indian lawmakers that their iPhones were targets of state-sponsored attacks. According to a new report from the Washington Post, the Modi government responded by criticizing Apple's security and demanding explanations to mitigate political impact (Warning: source may be paywalled; alternative source). From the report: Officials from the ruling Bharatiya Janata Party (BJP) publicly questioned whether the Silicon Valley company's internal threat algorithms were faulty and announced an investigation into the security of Apple devices. In private, according to three people with knowledge of the matter, senior Modi administration officials called Apple's India representatives to demand that the company help soften the political impact of the warnings. They also summoned an Apple security expert from outside the country to a meeting in New Delhi, where government representatives pressed the Apple official to come up with alternative explanations for the warnings to users, the people said. They spoke on the condition of anonymity to discuss sensitive matters. "They were really angry," one of those people said.

The visiting Apple official stood by the company's warnings. But the intensity of the Indian government effort to discredit and strong-arm Apple disturbed executives at the company's headquarters, in Cupertino, Calif., and illustrated how even Silicon Valley's most powerful tech companies can face pressure from the increasingly assertive leadership of the world's most populous country -- and one of the most critical technology markets of the coming decade. The recent episode also exemplified the dangers facing government critics in India and the lengths to which the Modi administration will go to deflect suspicions that it has engaged in hacking against its perceived enemies, according to digital rights groups, industry workers and Indian journalists. Many of the more than 20 people who received Apple's warnings at the end of October have been publicly critical of Modi or his longtime ally, Gautam Adani, an Indian energy and infrastructure tycoon. They included a firebrand politician from West Bengal state, a Communist leader from southern India and a New Delhi-based spokesman for the nation's largest opposition party. [...] Gopal Krishna Agarwal, a national spokesman for the BJP, said any evidence of hacking should be presented to the Indian government for investigation.

The Modi government has never confirmed or denied using spyware, and it has refused to cooperate with a committee appointed by India's Supreme Court to investigate whether it had. But two years ago, the Forbidden Stories journalism consortium, which included The Post, found that phones belonging to Indian journalists and political figures were infected with Pegasus, which grants attackers access to a device's encrypted messages, camera and microphone. In recent weeks, The Post, in collaboration with Amnesty, found fresh cases of infections among Indian journalists. Additional work by The Post and New York security firm iVerify found that opposition politicians had been targeted, adding to the evidence suggesting the Indian government's use of powerful surveillance tools. In addition, Amnesty showed The Post evidence it found in June that suggested a Pegasus customer was preparing to hack people in India. Amnesty asked that the evidence not be detailed to avoid teaching Pegasus users how to cover their tracks.

"These findings show that spyware abuse continues unabated in India," said Donncha O Cearbhaill, head of Amnesty International's Security Lab. "Journalists, activists and opposition politicians in India can neither protect themselves against being targeted by highly invasive spyware nor expect meaningful accountability."

Transportation

US Engine Maker Will Pay $1.6 Billion To Settle Claims of Emissions Cheating (nytimes.com) 100

An anonymous reader quotes a report from the New York Times: The United States and the state of California have reached an agreement in principle with the truck engine manufacturer Cummins on a $1.6 billion penalty to settle claims that the company violated the Clean Air Act by installing devices to defeat emissions controls on hundreds of thousands of engines, the Justice Department announced on Friday. The penalty would be the largest ever under the Clean Air Act and the second largest ever environmental penalty in the United States. Defeat devices are parts or software that bypass, defeat or render inoperative emissions controls like pollution sensors and onboard computers. They allow vehicles to pass emissions inspections while still emitting high levels of smog-causing pollutants such as nitrogen oxide, which is linked to asthma and other respiratory illnesses.

The Justice Department has accused the company of installing defeat devices on 630,000 model year 2013 to 2019 RAM 2500 and 3500 pickup truck engines. The company is also alleged to have secretly installed auxiliary emission control devices on 330,000 model year 2019 to 2023 RAM 2500 and 3500 pickup truck engines. "Violations of our environmental laws have a tangible impact. They inflict real harm on people in communities across the country," Attorney General Merrick Garland said in a statement. "This historic agreement should make clear that the Justice Department will be aggressive in its efforts to hold accountable those who seek to profit at the expense of people's health and safety."

In a statement, Cummins said that it had "seen no evidence that anyone acted in bad faith and does not admit wrongdoing." The company said it has "cooperated fully with the relevant regulators, already addressed many of the issues involved, and looks forward to obtaining certainty as it concludes this lengthy matter. Cummins conducted an extensive internal review and worked collaboratively with the regulators for more than four years." Stellantis, the company that makes the trucks, has already recalled the model year 2019 trucks and has initiated a recall of the model year 2013 to 2018 trucks. The software in those trucks will be recalibrated to ensure that they are fully compliant with federal emissions law, said Jon Mills, a spokesman for Cummins. Mr. Mills said that "next steps are unclear" on the model year 2020 through 2023, but that the company "continues to work collaboratively with regulators" to resolve the issue. The Justice Department partnered with the Environmental Protection Agency in its investigation of the case.

AI

The New York Times Sues OpenAI and Microsoft Over AI Use of Copyrighted Work (nytimes.com) 59

The New York Times sued OpenAI and Microsoft for copyright infringement on Wednesday, opening a new front in the increasingly intense legal battle over the unauthorized use of published work to train artificial intelligence technologies. From a report: The Times is the first major American media organization to sue the companies, the creators of ChatGPT and other popular A.I. platforms, over copyright issues associated with its written works. The lawsuit [PDF], filed in Federal District Court in Manhattan, contends that millions of articles published by The Times were used to train automated chatbots that now compete with the news outlet as a source of reliable information.

The suit does not include an exact monetary demand. But it says the defendants should be held responsible for "billions of dollars in statutory and actual damages" related to the "unlawful copying and use of The Times's uniquely valuable works." It also calls for the companies to destroy any chatbot models and training data that use copyrighted material from The Times. The lawsuit could test the emerging legal contours of generative A.I. technologies -- so called for the text, images and other content they can create after learning from large data sets -- and could carry major implications for the news industry. The Times is among a small number of outlets that have built successful business models from online journalism, but dozens of newspapers and magazines have been hobbled by readers' migration to the internet.

Programming

Code.org Sues WhiteHat Jr. For $3 Million 8

theodp writes: Back in May 2021, tech-backed nonprofit Code.org touted the signing of a licensing agreement with WhiteHat Jr., allowing the edtech company with a controversial past (Whitehat Jr. was bought for $300M in 2020 by Byju's, an edtech firm that received a $50M investment from Mark Zuckerberg's venture firm) to integrate Code.org's free-to-educators-and-organizations content and tools into their online tutoring service. Code.org did not reveal what it was charging Byju's to use its "free curriculum and open source technology" for commercial purposes, but Code.org's 2021 IRS 990 filing reported $1M in royalties from an unspecified source after earlier years reported $0. Coincidentally, Whitehat Jr. is represented by Aaron Kornblum, who once worked at Microsoft for now-President Brad Smith, who left Code.org's Board just before the lawsuit was filed.

Fast forward to 2023 and the bloom is off the rose, as Court records show that Code.org earlier this month sued Whitehat Education Technology, LLC (Exhibits A and B) in what is called "a civil action for breach of contract arising from Whitehat's failure to pay Code.org the agreed-upon charges for its use of Code.org's platform and licensed content and its ongoing, unauthorized use of that platform and content." According to the filing, "Whitehat agreed [in April 2022] to pay to Code.org licensing fees totaling $4,000,000 pursuant to a four-year schedule" and "made its first four scheduled payments, totaling $1,000,000," but "about a year after the Agreement was signed, Whitehat informed Code.org that it would be unable to make the remaining scheduled license payments." While the original agreement was amended to backload Whitehat's license fee payment obligations, "Whitehat has not paid anything at all beyond the $1,000,000 that it paid pursuant to the 2022 invoices before the Agreement was amended" and "has continued to access Code.org's platform and content."

That Byju's Whitehat Jr. stiffed Code.org is hardly shocking. In June 2023, Reuters reported that Byju's auditor Deloitte cut ties with the troubled Indian Edtech startup that was once an investor darling and valued at $22 billion, adding that a Byju's Board member representing the Chan-Zuckerberg Initiative had resigned with two other Board members. The BBC reported in July that Byju's was guilty of overexpanding during the pandemic (not unlike Zuck's Facebook). Ironically, the lawsuit Exhibits include screenshots showing Mark Zuckerberg teaching Code.org lessons. Zuckerberg and Facebook were once among the biggest backers of Code.org, although it's unclear whether that relationship soured after court documents were released that revealed Code.org's co-founders talking smack about Zuck and Facebook's business practices to lawyers for Six4Three, which was suing Facebook.

Code.org's curriculum is also used by the Amazon Future Engineer (AFE) initiative, but it is unclear what royalties -- if any -- Amazon pays to Code.org for the use of Code.org curriculum. While the AFE site boldly says, "we provide free computer science curriculum," the AFE fine print further explains that "our partners at Code.org and ProjectSTEM offer a wide array of introductory and advance curriculum options and teacher training." It's unclear what kind of organization Amazon's AFE ("Computer Science Learning Childhood to Career") exactly is -- an IRS Tax Exempt Organization Search failed to find any hits for "Amazon Future Engineer" -- making it hard to guess whether Code.org might consider AFE's use of Code.org software 'commercial use.' Would providing a California school district with free K-12 CS curriculum that Amazon boasts of cultivating into its "vocal champion" count as "commercial use"? How about providing free K-12 CS curriculum to children who live where Amazon is seeking incentives? Or if Amazon CEO Jeff Bezos testifies Amazon "funds computer science coursework" for schools as he attempts to counter a Congressional antitrust inquiry? These seem to be some of the kinds of distinctions Richard Stallman anticipated more than a decade ago as he argued against a restriction against commercial use of otherwise free software.

Electronic Frontier Foundation

EFF Warns: 'Think Twice Before Giving Surveillance for the Holidays' (eff.org) 28

"It's easy to default to giving the tech gifts that retailers tend to push on us this time of year..." notes Lifehacker senior writer Thorin Klosowski.

"But before you give one, think twice about what you're opting that person into." A number of these gifts raise red flags for us as privacy-conscious digital advocates. Ring cameras are one of the most obvious examples, but countless others over the years have made the security or privacy naughty list (and many of these same electronics directly clash with your right to repair). One big problem with giving these sorts of gifts is that you're opting another person into a company's intrusive surveillance practice, likely without their full knowledge of what they're really signing up for... And let's not forget about kids. Long subjected to surveillance from elves and their managers, electronics gifts for kids can come with all sorts of surprise issues, like the kid-focused tablet we found this year that was packed with malware and riskware. Kids' smartwatches and a number of connected toys are also potential privacy hazards that may not be worth the risks if not set up carefully.

Of course, you don't have to avoid all technology purchases. There are plenty of products out there that aren't creepy, and a few that just need extra attention during set up to ensure they're as privacy-protecting as possible. While we don't endorse products, you don't have to start your search in a vacuum. One helpful place to start is Mozilla's Privacy Not Included gift guide, which provides a breakdown of the privacy practices and history of products in a number of popular gift categories.... U.S. PIRG also has guidance for shopping for kids, including details about what to look for in popular categories like smart toys and watches....

Your job as a privacy-conscious gift-giver doesn't end at the checkout screen. If you're more tech savvy than the person receiving the item, or you're helping set up a gadget for a child, there's no better gift than helping set it up as privately as possible.... Giving the gift of electronics shouldn't come with so much homework, but until we have a comprehensive data privacy law, we'll likely have to contend with these sorts of set-up hoops. Until that day comes, we can all take the time to help those who need it.

AI

ChatGPT Exploit Finds 24 Email Addresses, Amid Warnings of 'AI Silo' (thehill.com) 67

The New York Times reports: Last month, I received an alarming email from someone I did not know: Rui Zhu, a Ph.D. candidate at Indiana University Bloomington. Mr. Zhu had my email address, he explained, because GPT-3.5 Turbo, one of the latest and most robust large language models (L.L.M.) from OpenAI, had delivered it to him. My contact information was included in a list of business and personal email addresses for more than 30 New York Times employees that a research team, including Mr. Zhu, had managed to extract from GPT-3.5 Turbo in the fall of this year. With some work, the team had been able to "bypass the model's restrictions on responding to privacy-related queries," Mr. Zhu wrote.

My email address is not a secret. But the success of the researchers' experiment should ring alarm bells because it reveals the potential for ChatGPT, and generative A.I. tools like it, to reveal much more sensitive personal information with just a bit of tweaking. When you ask ChatGPT a question, it does not simply search the web to find the answer. Instead, it draws on what it has "learned" from reams of information — training data that was used to feed and develop the model — to generate one. L.L.M.s train on vast amounts of text, which may include personal information pulled from the Internet and other sources. That training data informs how the A.I. tool works, but it is not supposed to be recalled verbatim... In the example output they provided for Times employees, many of the personal email addresses were either off by a few characters or entirely wrong. But 80 percent of the work addresses the model returned were correct.

The researchers used the API for accessing ChatGPT, the article notes, where "requests that would typically be denied in the ChatGPT interface were accepted..."

"The vulnerability is particularly concerning because no one — apart from a limited number of OpenAI employees — really knows what lurks in ChatGPT's training-data memory."

And there was a broader related warning in another article published the same day. Microsoft may be building an AI silo in a walled garden, argues a professor at the University of California, Berkeley's school of information, calling the development "detrimental for technology development, as well as costly and potentially dangerous for society and the economy." [In January] Microsoft sealed its OpenAI relationship with another major investment — this time around $10 billion, much of which was, once again, in the form of cloud credits instead of conventional finance. In return, OpenAI agreed to run and power its AI exclusively through Microsoft's Azure cloud and granted Microsoft certain rights to its intellectual property...

Recent reports that U.K. competition authorities and the U.S. Federal Trade Commission are scrutinizing Microsoft's investment in OpenAI are encouraging. But Microsoft's failure to report these investments for what they are — a de facto acquisition — demonstrates that the company is keenly aware of the stakes and has taken advantage of OpenAI's somewhat peculiar legal status as a non-profit entity to work around the rules...

The U.S. government needs to quickly step in and reverse the negative momentum that is pushing AI into walled gardens. The longer it waits, the harder it will be, both politically and technically, to re-introduce robust competition and the open ecosystem that society needs to maximize the benefits and manage the risks of AI technology.

Television

'Doctor Who' Christmas Special Streams on Disney+ and the BBC (cnet.com) 65

An anonymous Slashdot reader shared this report from CNET: Marking its 60th year on television, the British time-travel series will close out 2023 with one last anniversary special that arrives on Christmas Day. Ncuti Gatwa's Doctor helms the Tardis in The Church on Ruby Road, which centers on an abandoned baby who grows up looking for answers... Disney Plus will stream Doctor Who: The Church on Ruby Road on Monday, Dec. 25, at 12:55 p.m. ET (9:55 a.m. PT) in all regions except the UK and Ireland, where it will air on the BBC. In case you missed it, viewers can also watch David Tennant starring in the other three anniversary specials: The Star Beast, Wild Blue Yonder and The Giggle. All releases are available on Disney Plus.

But what's interesting is that CNET goes on to explain "why a VPN could be a useful tool." Perhaps you're traveling abroad and want to stream Disney Plus while away from home. With a VPN, you're able to virtually change your location on your phone, tablet or laptop to get access to the series from anywhere in the world. There are other good reasons to use a VPN for streaming too. A VPN is the best way to encrypt your traffic and stop your ISP from throttling your speeds...

You can use a VPN to stream content legally as long as VPNs are allowed in your country and you have a valid subscription to the streaming service you're using. The U.S. and Canada are among the countries where VPNs are legal.

United States

US Water Utilities Hacked After Default Passwords Set to '1111', Cybersecurity Officials Say (fastcompany.com) 84

An anonymous reader shared this report from Fast Company: Providers of critical infrastructure in the United States are doing a sloppy job of defending against cyber intrusions, the National Security Council tells Fast Company, pointing to recent Iran-linked attacks on U.S. water utilities that exploited basic security lapses [earlier this month]. The security council tells Fast Company it's also aware of recent intrusions by hackers linked to China's military at American infrastructure entities that include water and energy utilities in multiple states.

Neither the Iran-linked nor the China-linked attacks affected critical systems or caused disruptions, according to reports.

"We're seeing companies and critical services facing increased cyber threats from malicious criminals and countries," Anne Neuberger, the deputy national security advisor for cyber and emerging tech, tells Fast Company. The White House had been urging infrastructure providers to upgrade their cyber defenses before these recent hacks, but "clearly, by the most recent success of the criminal cyberattacks, more work needs to be done," she says... The attacks hit at least 11 different entities using Unitronics devices across the United States, which included six local water facilities, a pharmacy, an aquatics center, and a brewery...

Some of the compromised devices had been connected to the open internet with a default password of "1111," federal authorities say, making it easy for hackers to find them and gain access. Fixing that "doesn't cost any money," Neuberger says, "and those are the kinds of basic things that we really want companies urgently to do." But cybersecurity experts say these attacks point to a larger issue: the general vulnerability of the technology that powers physical infrastructure. Much of the hardware was developed before the internet and, though they were retrofitted with digital capabilities, still "have insufficient security controls," says Gary Perkins, chief information security officer at cybersecurity firm CISO Global. Additionally, many infrastructure facilities prioritize "operational ease of use rather than security," since many vendors often need to access the same equipment, says Andy Thompson, an offensive cybersecurity expert at CyberArk. But that can make the systems equally easy for attackers to exploit: freely available web tools allow anyone to generate lists of hardware connected to the public internet, like the Unitronics devices used by water companies.

"Not making critical infrastructure easily accessible via the internet should be standard practice," Thompson says.
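
Neuberger's point that the fix "doesn't cost any money" is easy to see in code: checking an asset inventory against a list of known default credentials takes only a few lines. A minimal, hypothetical sketch (the device names, and every password in the list other than "1111," are invented for illustration):

```python
# Known factory-default passwords; "1111" is the default cited by
# federal authorities, the others are common illustrative examples.
KNOWN_DEFAULT_PASSWORDS = {"1111", "0000", "admin", "password"}

def audit_devices(devices):
    """Return hostnames of devices still using a known default password.

    `devices` is an iterable of (hostname, password) pairs, e.g. drawn
    from an internal asset inventory.
    """
    return [host for host, password in devices
            if password in KNOWN_DEFAULT_PASSWORDS]

# Hypothetical inventory for a small utility:
inventory = [
    ("plc-water-01.example.net", "1111"),      # factory default -- flagged
    ("plc-water-02.example.net", "x7#kQ92!"),  # rotated -- passes
]
print(audit_devices(inventory))  # ['plc-water-01.example.net']
```

The harder, complementary step is the one Thompson names: taking such devices off the public internet entirely, so even a weak credential is never exposed to an anonymous attacker.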

AI

AI Companies Would Be Required To Disclose Copyrighted Training Data Under New Bill (theverge.com) 42

An anonymous reader quotes a report from The Verge: Two lawmakers filed a bill requiring creators of foundation models to disclose sources of training data so copyright holders know their information was taken. The AI Foundation Model Transparency Act -- filed by Reps. Anna Eshoo (D-CA) and Don Beyer (D-VA) -- would direct the Federal Trade Commission (FTC) to work with the National Institute of Standards and Technology (NIST) to establish rules for reporting training data transparency. Companies that make foundation models would be required to report sources of training data and how the data is retained during the inference process; describe the limitations or risks of the model and how it aligns with NIST's planned AI Risk Management Framework and any other federal standards that might be established; and provide information on the computational power used to train and run the model. The bill also says AI developers must report efforts to "red team" the model to prevent it from providing "inaccurate or harmful information" around medical or health-related questions, biological synthesis, cybersecurity, elections, policing, financial loan decisions, education, employment decisions, public services, and vulnerable populations such as children.

The bill calls out the importance of training data transparency around copyright as several lawsuits have been filed against AI companies alleging copyright infringement. It specifically mentions the case of artists against Stability AI, Midjourney, and DeviantArt (which was largely dismissed in October, according to VentureBeat), and Getty Images' complaint against Stability AI. The bill still needs to be assigned to a committee and discussed, and it's unclear if that will happen before the busy election campaign season starts. Eshoo and Beyer's bill complements the Biden administration's AI executive order, which helps establish reporting standards for AI models. The executive order, however, is not law, so if the AI Foundation Model Transparency Act passes, it will make transparency requirements for training data a federal rule.

Government

Biden Administration Unveils Hydrogen Tax Credit Plan To Jump-Start Industry (npr.org) 104

An anonymous reader quotes a report from NPR: The Biden administration released its highly anticipated proposal for doling out billions of dollars in tax credits to hydrogen producers Friday, in a massive effort to build out an industry that some hope can be a cleaner alternative to fossil fueled power. The U.S. credit is the most generous in the world for hydrogen production, Jesse Jenkins, a professor at Princeton University who has analyzed the U.S. climate law, said last week. The proposal -- which is part of Democrats' Inflation Reduction Act passed last year -- outlines a tiered system to determine which hydrogen producers get the most credits, with cleaner energy projects receiving more, and smaller, but still meaningful credits going to those that use fossil fuel to produce hydrogen.

Administration officials estimate the hydrogen production credits will deliver $140 billion in revenue and 700,000 jobs by 2030 -- and will help the U.S. produce 50 million metric tons of hydrogen by 2050. "That's equivalent to the amount of energy currently used by every bus, every plane, every train and every ship in the US combined," Energy Deputy Secretary David M. Turk said on a Thursday call with reporters to preview the proposal. [...] As part of the administration's proposal, firms that produce cleaner hydrogen and meet prevailing wage and registered apprenticeship requirements stand to qualify for a large incentive at $3 per kilogram of hydrogen. Firms that produce hydrogen using fossil fuels get less. The credit ranges from $0.60 to $3 per kilogram, depending on whole-lifecycle emissions.

One contentious issue in the proposal was how to deal with the fact that clean, electrolyzer hydrogen draws tremendous amounts of electricity. Few want that to mean that more coal or natural gas-fired power plants run extra hours. The guidance addresses this by calling for producers to document their electricity usage through "energy attribute certificates" -- which will help determine the credits they qualify for. Rachel Fakhry, policy director for emerging technologies at the Natural Resources Defense Council called the proposal "a win for the climate, U.S. consumers, and the budding U.S. hydrogen industry." The Clean Air Task Force likewise called the proposal "an excellent step toward developing a credible clean hydrogen market in the United States."
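
The tiering can be made concrete with a small function. The article gives only the overall $0.60-$3 range; the intermediate tier boundaries below are the statutory 45V thresholds as widely reported elsewhere, so treat them as an assumption rather than part of this proposal's text:

```python
def hydrogen_credit_per_kg(lifecycle_kg_co2e_per_kg_h2):
    """Credit in $/kg of hydrogen under the tiered scheme.

    Tier boundaries (kg CO2e per kg H2) are the statutory 45V tiers as
    widely reported -- the article itself gives only the $0.60-$3 range.
    Assumes the producer meets the prevailing-wage and apprenticeship
    requirements; otherwise the credit is a fraction of these values.
    """
    e = lifecycle_kg_co2e_per_kg_h2
    if e < 0.45:
        return 3.00   # cleanest tier, e.g. renewables-powered electrolysis
    if e < 1.5:
        return 1.00
    if e < 2.5:
        return 0.75
    if e < 4.0:
        return 0.60   # fossil-based hydrogen with partial abatement
    return 0.0        # too emissions-intensive to qualify

print(hydrogen_credit_per_kg(0.3))  # 3.0
print(hydrogen_credit_per_kg(3.5))  # 0.6
```

The "energy attribute certificates" mentioned above feed into exactly this kind of calculation: they are the paperwork that determines which lifecycle-emissions number a producer can claim.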

Crime

Teen GTA VI Hacker Sentenced To Indefinite Hospital Order (theverge.com) 77

Emma Roth reports via The Verge: The 18-year-old Lapsus$ hacker who played a critical role in leaking Grand Theft Auto VI footage has been sentenced to life inside a hospital prison, according to a report from the BBC. A British judge ruled on Thursday that Arion Kurtaj is a high risk to the public because he still wants to commit cybercrimes.

In August, a London jury found that Kurtaj carried out cyberattacks against GTA VI developer Rockstar Games and other companies, including Uber and Nvidia. However, since Kurtaj has autism and was deemed unfit to stand trial, the jury was asked to determine whether he committed the acts in question, not whether he did so with criminal intent. During Thursday's hearing, the court heard Kurtaj "had been violent while in custody with dozens of reports of injury or property damage," the BBC reports. A mental health assessment also found that Kurtaj "continued to express the intent to return to cybercrime as soon as possible." He's required to stay in the hospital prison for life unless doctors determine that he's no longer a danger.

Kurtaj leaked 90 videos of GTA VI gameplay footage last September while out on bail for hacking Nvidia and British telecom provider BT / EE. Although he stayed at a hotel under police protection during this time, Kurtaj still managed to carry out an attack on Rockstar Games by using the room's included Amazon Fire Stick and a "newly purchased smart phone, keyboard and mouse," according to a separate BBC report. Kurtaj was arrested for the final time following the incident. Another 17-year-old involved with Lapsus$ was handed an 18-month community sentence, called a Youth Rehabilitation Order, and a ban from using virtual private networks.

Robotics

Massachusetts Lawmakers Mull 'Killer Robot' Bill (techcrunch.com) 14

An anonymous reader quotes a report from TechCrunch, written by Brian Heater: Back in mid-September, a pair of Massachusetts lawmakers introduced a bill "to ensure the responsible use of advanced robotic technologies." What that means in the simplest and most direct terms is legislation that would bar the manufacture, sale and use of weaponized robots. It's an interesting proposal for a number of reasons. The first is a general lack of U.S. state and national laws governing such growing concerns. It's one of those things that has felt like science fiction to such a degree that many lawmakers had no interest in pursuing it in a pragmatic manner. [...] Earlier this week, I spoke about the bill with Massachusetts state representative Lindsay Sabadosa, who filed it alongside Massachusetts state senator Michael Moore.

What is the status of the bill?
We're in an interesting position, because there are a lot of moving parts with the bill. The bill has had a hearing already, which is wonderful news. We're working with the committee on the language of the bill. They have had some questions about why different pieces were written as they were written. We're doing that technical review of the language now -- and also checking in with all stakeholders to make sure that everyone who needs to be at the table is at the table.

When you say "stakeholders" ...
Stakeholders are companies that produce robotics. The robot Spot, which Boston Dynamics produces, and other robots as well, are used by entities like Boston Police Department or the Massachusetts State Police. They might be used by the fire department. So, we're talking to those people to run through the bill, talk about what the changes are. For the most part, what we're hearing is that the bill doesn't really change a lot for those stakeholders. Really the bill is to prevent regular people from trying to weaponize robots, not to prevent the very good uses that the robots are currently employed for.

Does the bill apply to law enforcement as well?
We're not trying to stop law enforcement from using the robots. And what we've heard from law enforcement repeatedly is that they're often used to deescalate situations. They talk a lot about barricade situations or hostage situations. Not to be gruesome, but if people are still alive, if there are injuries, they say it often helps to deescalate, rather than sending in officers, which we know can often escalate the situation. So, no, we wouldn't change any of those uses. The legislation does ask that law enforcement get warrants for the use of robots if they're using them in place of when they would send in a police officer. That's pretty common already. Law enforcement has to do that if it's not an emergency situation. We're really just saying, "Please follow current protocol. And if you're going to use a robot instead of a human, let's make sure that protocol is still the standard."

I'm sure you've been following the stories out of places like San Francisco and Oakland, where there's an attempt to weaponize robots. Is that included in this?
We haven't had law enforcement weaponize robots, and no one has said, "We'd like to attach a gun to a robot" from law enforcement in Massachusetts. I think because of some of those past conversations there's been a desire to not go down that route. And I think that local communities would probably have a lot to say if the police started to do that. So, while the legislation doesn't outright ban that, we are not condoning it either.
Representative Sabadosa said Boston Dynamics "sought us out" and is "leading the charge on this."

"I'm hopeful that we will be the first to get the legislation across the finish line, too," added Rep. Sabadosa. "We've gotten thank-you notes from companies, but we haven't gotten any pushback from them. And our goal is not to stifle innovation. I think there's lots of wonderful things that robots will be used for. [...]"

You can read the full interview here.

Privacy

UK Police To Be Able To Run Face Recognition Searches on 50 Million Driving Licence Holders (theguardian.com) 24

The police will be able to run facial recognition searches on a database containing images of Britain's 50 million driving licence holders under a law change being quietly introduced by the government. From a report: Should the police wish to put a name to an image collected on CCTV, or shared on social media, the legislation would provide them with the powers to search driving licence records for a match. The move, contained in a single clause in a new criminal justice bill, could put every driver in the country in a permanent police lineup, according to privacy campaigners.

Facial recognition searches match the biometric measurements of an identified photograph, such as that contained on driving licences, to those of an image picked up elsewhere. The intention to allow the police or the National Crime Agency (NCA) to exploit the UK's driving licence records is not explicitly referenced in the bill or in its explanatory notes, raising criticism from leading academics that the government is "sneaking it under the radar." Once the criminal justice bill is enacted, the home secretary, James Cleverly, must establish "driver information regulations" to enable the searches, but he will need only to consult police bodies, according to the bill.
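
Mechanically, a search like this typically reduces each face image to a numeric embedding and ranks gallery entries by similarity, declaring a match above some threshold. A toy sketch with made-up 4-dimensional vectors (real systems use embeddings of 128+ dimensions produced by a neural network, and the threshold choice drives the false-positive rate that campaigners worry about):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def search(probe, database, threshold=0.95):
    """Return IDs of licence holders whose stored embedding matches the probe."""
    return [person_id for person_id, emb in database.items()
            if cosine_similarity(probe, emb) >= threshold]

# Toy 4-d embeddings; a real gallery would hold ~50 million entries.
gallery = {
    "licence-0001": [0.1, 0.9, 0.3, 0.2],
    "licence-0002": [0.8, 0.1, 0.4, 0.4],
}
probe = [0.11, 0.88, 0.31, 0.19]  # e.g. a CCTV still, embedded by the same model
print(search(probe, gallery))  # ['licence-0001']
```

At national scale the threshold matters enormously: even a tiny per-comparison false-match rate, multiplied across 50 million records, yields many spurious candidates per query.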

AI

Rite Aid Banned From Using Facial Recognition Software 60

An anonymous reader quotes a report from TechCrunch: Rite Aid has been banned from using facial recognition software for five years, after the Federal Trade Commission (FTC) found that the U.S. drugstore giant's "reckless use of facial surveillance systems" left customers humiliated and put their "sensitive information at risk." The FTC's Order (PDF), which is subject to approval from the U.S. Bankruptcy Court after Rite Aid filed for Chapter 11 bankruptcy protection in October, also instructs Rite Aid to delete any images it collected as part of its facial recognition system rollout, as well as any products that were built from those images. The company must also implement a robust data security program to safeguard any personal data it collects.

A Reuters report from 2020 detailed how the drugstore chain had secretly introduced facial recognition systems across some 200 U.S. stores over an eight-year period starting in 2012, with "largely lower-income, non-white neighborhoods" serving as the technology testbed. With the FTC's increasing focus on the misuse of biometric surveillance, Rite Aid fell firmly in the government agency's crosshairs. Among its allegations are that Rite Aid -- in partnership with two contracted companies -- created a "watchlist database" containing images of customers that the company said had engaged in criminal activity at one of its stores. These images, which were often poor quality, were captured from CCTV or employees' mobile phone cameras.

When a customer entered a store who supposedly matched an existing image on its database, employees would receive an automatic alert instructing them to take action -- and the majority of the time this instruction was to "approach and identify," meaning verifying the customer's identity and asking them to leave. Often, these "matches" were false positives that led to employees incorrectly accusing customers of wrongdoing, creating "embarrassment, harassment, and other harm," according to the FTC. "Employees, acting on false positive alerts, followed consumers around its stores, searched them, ordered them to leave, called the police to confront or remove consumers, and publicly accused them, sometimes in front of friends or family, of shoplifting or other wrongdoing," the complaint reads. Additionally, the FTC said that Rite Aid failed to inform customers that facial recognition technology was in use, while also instructing employees to specifically not reveal this information to customers.
In a press release, Rite Aid said that it was "pleased to reach an agreement with the FTC," but that it disagreed with the crux of the allegations.

"The allegations relate to a facial recognition technology pilot program the Company deployed in a limited number of stores," Rite Aid said in its statement. "Rite Aid stopped using the technology in this small group of stores more than three years ago, before the FTC's investigation regarding the Company's use of the technology began."
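
The FTC's false-positive findings are partly a base-rate problem: when genuine watchlist subjects are a tiny fraction of the customers scanned, even a matcher with seemingly good accuracy produces mostly false alerts. A back-of-the-envelope illustration with entirely hypothetical numbers:

```python
def alert_precision(daily_customers, watchlist_subjects,
                    true_positive_rate, false_positive_rate):
    """Fraction of alerts that actually point at a watchlist subject."""
    true_alerts = watchlist_subjects * true_positive_rate
    false_alerts = (daily_customers - watchlist_subjects) * false_positive_rate
    return true_alerts / (true_alerts + false_alerts)

# Hypothetical: 10,000 scans a day, 5 genuine watchlist subjects among
# them, a 95% hit rate, and a 1% false-positive rate.
p = alert_precision(10_000, 5, 0.95, 0.01)
print(f"{p:.1%} of alerts are correct")  # 4.5% of alerts are correct
```

Under these invented numbers, roughly 95% of "approach and identify" alerts would target innocent customers, which is consistent with the kind of harm the complaint describes, especially when the source images were low quality to begin with.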

The Internet

US Regulators Propose New Online Privacy Safeguards For Children 25

An anonymous reader quotes a report from the New York Times: The Federal Trade Commission on Wednesday proposed sweeping changes to bolster the key federal rule that has protected children's privacy online, in one of the most significant attempts by the U.S. government to strengthen consumer privacy in more than a decade. The changes are intended to fortify the rules underlying the Children's Online Privacy Protection Act of 1998, a law that restricts the online tracking of youngsters by services like social media apps, video game platforms, toy retailers and digital advertising networks. Regulators said the moves would "shift the burden" of online safety from parents to apps and other digital services while curbing how platforms may use and monetize children's data.

The proposed changes would require certain online services to turn off targeted advertising by default for children under 13. They would prohibit the online services from using personal details like a child's cellphone number to induce youngsters to stay on their platforms longer. That means online services would no longer be able to use personal data to bombard young children with push notifications. The proposed updates would also strengthen security requirements for online services that collect children's data as well as limit the length of time online services could keep that information. And they would limit the collection of student data by learning apps and other educational-tech providers, by allowing schools to consent to the collection of children's personal details only for educational purposes, not commercial purposes. [...]

The F.T.C. began reviewing the children's privacy rule in 2019, receiving more than 175,000 comments from tech and advertising industry trade groups, video content developers, consumer advocacy groups and members of Congress. The resulting proposal (PDF) runs more than 150 pages. Proposed changes include narrowing an exception that allows online services to collect persistent identification codes for children for certain internal operations, like product improvement, consumer personalization or fraud prevention, without parental consent. The proposed changes would prohibit online operators from employing such user-tracking codes to maximize the amount of time children spend on their platforms. That means online services would not be able to use techniques like sending mobile phone notifications "to prompt the child to engage with the site or service, without verifiable parental consent," according to the proposal. How online services would comply with the changes is not yet known. Members of the public have 60 days to comment on the proposals, after which the commission will vote.

AI

AI Cannot Be Patent 'Inventor,' UK Supreme Court Rules in Landmark Case (reuters.com) 29

A U.S. computer scientist on Wednesday lost his bid to register patents over inventions created by his artificial intelligence system in a landmark case in Britain about whether AI can own patent rights. From a report: Stephen Thaler wanted to be granted two patents in the UK for inventions he says were devised by his "creativity machine" called DABUS. His attempt to register the patents was refused by Britain's Intellectual Property Office on the grounds that the inventor must be a human or a company, rather than a machine. Thaler appealed to the UK's Supreme Court, which on Wednesday unanimously rejected his appeal, ruling that under UK patent law "an inventor must be a natural person."

"This appeal is not concerned with the broader question whether technical advances generated by machines acting autonomously and powered by AI should be patentable," Judge David Kitchin said in the court's written ruling. "Nor is it concerned with the question whether the meaning of the term 'inventor' ought to be expanded ... to include machines powered by AI which generate new and non-obvious products and processes which may be thought to offer benefits over products and processes which are already known." Thaler's lawyers said in a statement that "the judgment establishes that UK patent law is currently wholly unsuitable for protecting inventions generated autonomously by AI machines."

Canada

Meta's News Ban In Canada Remains As Online News Act Goes Into Effect (bbc.com) 147

An anonymous reader quotes a report from the BBC: A bill that mandates tech giants pay news outlets for their content has come into effect in Canada amid an ongoing dispute with Facebook and Instagram owner Meta over the law. Some have hailed it as a game-changer that sets out a permanent framework that will see a steady drip of funds from wealthy tech companies to Canada's struggling journalism industry. But it has also been met with resistance by Google and Meta -- the only two companies big enough to be encompassed by the law. In response, over the summer, Meta blocked access to news on Facebook and Instagram for Canadians. Google looked set to follow, but after months of talks, the federal government negotiated a deal with the search giant, under which the company agreed to pay Canadian news outlets $75 million annually.

No such agreement appears to be on the horizon with Meta, which has called the law "fundamentally flawed." If Meta is refusing to budge, so is the government. "We will continue to push Meta, that makes billions of dollars in profits, even though it is refusing to invest in the journalistic rigor and stability of the media," Prime Minister Justin Trudeau told reporters on Friday.
According to a study by the Media Ecosystem Observatory, the views of Canadian news on Facebook dropped 90% after the company blocked access to news on the platform. Local news outlets have been hit particularly hard.

"The loss of journalism on Meta platforms represents a significant decline in the resiliency of the Canadian media ecosystem," said Taylor Owen, a researcher at McGill and the co-author of the study. He believes it also hurts Meta's brand in the long run, pointing to the fact that Canada's federal government, the government of British Columbia, several municipalities and a handful of large Canadian corporations have all pulled their advertising off Facebook and Instagram in retaliation.

Security

Comcast Discloses Data Breach of Close To 36 Million Xfinity Customers [UPDATE] (techcrunch.com) 40

In a notice posted Monday, Xfinity informed customers of a "data security incident" that resulted in the theft of customer information, including usernames, passwords, contact information, and more. The Verge reports: Xfinity traces the breach to a security vulnerability disclosed by cloud computing company Citrix, which began alerting customers of a flaw in software Xfinity and other companies use on October 10th. While Xfinity says it patched the security hole, it later uncovered suspicious activity on its internal systems "that was concluded to be a result of this vulnerability."

The hack resulted in the theft of customer usernames and hashed passwords, according to Xfinity's notice. Meanwhile, "some customers" may have had their names, contact information, last four digits of their social security numbers, dates of birth, and / or secret questions and answers exposed. Xfinity has notified federal law enforcement about the incident and says "data analysis is continuing."

We still don't know how many users were affected by the breach. Xfinity will automatically ask customers to change their passwords the next time they log in to their accounts, and it's also encouraging users to turn on two-factor authentication. You can find the full notice, including contact information for the company's incident response team, on Xfinity's website (PDF).
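
The distinction the notice draws between stolen usernames and "hashed passwords" matters: if the hashes were produced with a salted, deliberately slow function, a credential dump is far harder to reverse into working passwords -- though forced resets remain the right response. A minimal sketch using Python's standard library (the parameters are illustrative, not Xfinity's actual scheme):

```python
import hashlib
import hmac
import os

ITERATIONS = 600_000  # deliberately slow; raises the cost of offline cracking

def hash_password(password, salt=None):
    """Return (salt, digest) using salted PBKDF2-HMAC-SHA256."""
    salt = salt or os.urandom(16)  # unique salt defeats precomputed tables
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify(password, salt, digest):
    """Constant-time check of a candidate password against a stored hash."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, digest)

salt, digest = hash_password("correct horse battery staple")
print(verify("correct horse battery staple", salt, digest))  # True
print(verify("wrong guess", salt, digest))                   # False
```

Even with hashing this strong, attackers can still guess weak or reused passwords offline, which is why the company is pushing password changes and two-factor authentication.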
UPDATE 12/19/23: According to TechCrunch, almost 36 million Xfinity customers had their sensitive information accessed by hackers via a vulnerability known as "CitrixBleed." The vulnerability is "found in Citrix networking devices often used by big corporations and has been under mass-exploitation by hackers since late August," the report says. "Citrix made patches available in early October, but many organizations did not patch in time. Hackers have used the CitrixBleed vulnerability to hack into big-name victims, including aerospace giant Boeing, the Industrial and Commercial Bank of China and international law firm Allen & Overy."

"In a filing with Maine's attorney general, Comcast confirmed that almost 35.8 million customers are affected by the breach. Comcast's latest earnings report shows the company has more than 32 million broadband customers, suggesting this breach has impacted most, if not all Xfinity customers."

Crime

Nikola Founder Trevor Milton Sentenced To 4 Years For Securities Fraud (techcrunch.com) 34

An anonymous reader quotes a report from TechCrunch: Trevor Milton, the disgraced founder and former CEO of electric truck startup Nikola, was sentenced Monday to four years in prison for securities fraud. The sentence, by Judge Edgardo Ramos in the U.S. District Court in Manhattan, caps a multi-year saga that at one point sent Nikola stock soaring 83% only to come crashing down months later over accusations of fraud and canceled contracts. The sentencing hearing comes after four separate delays, during which Milton has remained free under a $100 million bond.

In his ruling, Ramos said he would impose a sentence of 48 months on each count, served concurrently, and a fine of $1 million. Milton is expected to appeal the sentence, which Ramos acknowledged. Milton sobbed as he pled with Judge Ramos for leniency in a long and often confusing statement ahead of the sentencing. At one point, Milton said he stepped down from the CEO post at Nikola not because of fraud allegations, but to support his wife. "I stepped down because my wife was suffering life-threatening sickness," he said in his statement, which reporter Matthew Russell Lee of Inner City Press shared on the social media site X. "She suffered medical malpractice, someone else's plasma. So I stepped down for that -- not because I was a fraud. The truth matters. I chose my wife over money or power."

During the sentencing hearing, defense attorneys said that Milton wasn't trying to defraud investors or intending to harm anyone. Instead, they argued he simply wanted to be loved and praised like Elon Musk. Prosecutors pushed back and said he lied repeatedly and targeted retail investors. Federal prosecutors recommended an 11-year sentence, but Milton faced a maximum term of 60 years in prison. The government also sought a $5 million fine, forfeiture of a ranch in Utah and an undetermined amount of restitution to investors. Restitution will be determined after Monday's sentencing hearing.
Timeline of events:

June, 2016: Nikola Motor Receives Over 7,000 Preorders Worth Over $2.3 Billion For Its Electric Truck
December, 2016: Nikola Motor Company Reveals Hydrogen Fuel Cell Truck With Range of 1,200 Miles
February, 2020: Nikola Motors Unveils Hybrid Fuel-Cell Concept Truck With 600-Mile Range
June, 2020: Nikola Founder Exaggerated the Capability of His Debut Truck
September, 2020: Nikola Motors Accused of Massive Fraud, Ocean of Lies
September, 2020: Nikola Admits Prototype Was Rolling Downhill In Promo Video
September, 2020: Nikola Founder Trevor Milton Steps Down as Chairman in Battle With Short Seller
October, 2020: Nikola Stock Falls 14 Percent After CEO Downplays Badger Truck Plans
November, 2020: Nikola Stock Plunges As Company Cancels Badger Pickup Truck
July, 2021: Nikola Founder Trevor Milton Indicted on Three Counts of Fraud
December, 2021: EV Startup Nikola Agrees To $125 Million Settlement
September, 2022: Nikola Founder Lied To Investors About Tech, Prosecutor Says in Fraud Trial

Patents

Apple To Pause Selling New Versions of Its Watch After Losing Patent Dispute (nytimes.com) 36

An anonymous reader quotes a report from the New York Times: Apple said on Monday that it would pause sales of its flagship smartwatches online starting Thursday and at retail locations on Christmas Eve. Two months ago, Apple lost a patent case over the technology its smartwatches use to measure blood-oxygen levels. The company was ordered to stop selling the Apple Watch Series 9 and Watch Ultra 2 after Christmas, which could set off a run on sales of the watches in the final week of holiday shopping. The move by Apple follows a ruling by the International Trade Commission in October that found several Apple Watches infringe on patents held by Masimo, a medical technology company in Irvine, Calif.

In court, Masimo detailed how Apple poached its top executives and more than a dozen other employees before later releasing a watch with pulse oximeter capabilities -- which measure the percentage of oxygen that red blood cells carry from the lungs to the body -- that were patented by Masimo. To avoid a complete ban on sales, Apple had two months to cut a deal with Masimo to license its technology, or it could appeal to the Biden administration to reverse the ruling. But Joe Kiani, the chief executive of Masimo, said in an interview that Apple had not engaged in licensing negotiations. Instead, he said that Apple had appealed to President Biden to veto the I.T.C. ruling, which Mr. Kiani knows because the administration contacted Masimo about Apple's request. "They're trying to make the agency look like it's helping patent trolls," Mr. Kiani said of the I.T.C.

Mr. Kiani said that he was willing to sell Apple a chip that Masimo had designed to provide pulse oximeter readings on the Apple Watch. The chip is currently in a Masimo medical watch, called the W1, that is approved by the Food and Drug Administration. The device uses algorithms to process red and near-infrared light to determine how oxygen-rich the blood in the arteries is. "If they don't want to use our chip, I'll work with them to make their product good," Mr. Kiani said. "Once it's good enough, I'm happy to give them a license." Apple introduced its first watch with pulse oximetry in 2020. It has included the technology, which it calls "blood oxygen," in subsequent models. But unlike Masimo's W1 device, Apple hasn't had its watches cleared by the F.D.A. for use as a medical device for pulse oximetry.
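
The red and near-infrared processing described here is conventionally a "ratio of ratios" calculation: oxygenated and deoxygenated hemoglobin absorb the two wavelengths differently, so the pulsatile (AC) component of each signal, normalized by its steady (DC) component, yields a ratio that an empirical calibration curve maps to oxygen saturation. A simplified sketch -- the linear calibration below is a textbook approximation, not Masimo's or Apple's actual algorithm:

```python
def spo2_ratio_of_ratios(ac_red, dc_red, ac_ir, dc_ir):
    """Estimate blood-oxygen saturation (%) from red/IR photoplethysmography.

    Uses the common empirical approximation SpO2 ~= 110 - 25*R, where
    R = (AC_red/DC_red) / (AC_ir/DC_ir). Real devices calibrate this
    curve against clinical reference data and filter the raw signals.
    """
    r = (ac_red / dc_red) / (ac_ir / dc_ir)
    return max(0.0, min(100.0, 110.0 - 25.0 * r))

# Illustrative signal levels for a healthy reading:
print(spo2_ratio_of_ratios(ac_red=0.02, dc_red=1.0, ac_ir=0.04, dc_ir=1.0))
# R = 0.5 -> ~97.5% SpO2
```

The engineering difficulty -- and much of the patented work -- lies in getting a clean pulsatile signal from a wrist rather than a fingertip, and in calibrating the curve well enough for medical use.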
"The Apple Watch accounts for nearly $20 billion of the company's $383.29 billion in annual sales," notes the NYT. The company is the largest smartwatch seller in the world, accounting for about a third of all smartwatch sales.

Government

Lawmakers Push DOJ To Investigate Apple Following Beeper Shutdowns (theverge.com) 55

Following a tumultuous few weeks for Beeper, which has been trying to provide an iMessage-compatible Android app, a group of US lawmakers are pushing for the DOJ to investigate Apple for "potentially anticompetitive conduct" over its attempts to disable Beeper's services. From a report: Senators Amy Klobuchar (D-MN) and Mike Lee (R-UT) as well as Representatives Jerry Nadler (D-NY) and Ken Buck (R-CO) said in a letter to the DOJ that Beeper's Android messaging app, Beeper Mini, was a threat to Apple's leverage by "creating [a] more competitive mobile applications market, which in turn [creates] a more competitive mobile device market."

In an interview with CBS News on Monday, Beeper CEO Eric Migicovsky and 16-year-old developer James Gill talked about the fight to keep Beeper Mini alive. Migicovsky told CBS News that Beeper is trying to provide a service people want and reiterated his belief that Apple has a monopoly over its iMessage service. The company created Beeper Mini after being contacted by Gill, who said he reverse-engineered the software by "poking at it" using a "real Mac and a real iPhone." [...] The lawmakers' letter also pointed to a Department of Commerce report calling Apple a "gatekeeper," mirroring language used in the EU Digital Markets Act (DMA) that went into force earlier this year, regulating the "core" services of several tech platforms (though, notably, iMessage may not be included in this). They went on to cite Migicovsky's December 2021 Senate Judiciary Committee testimony that "the dominant messaging services would use their position to impose barriers to interoperability" and keep companies like Beeper from offering certain services. "Given Apple's recent actions, that concern appears prescient," they added.

Facebook

Does Meta's New Face Camera Herald a New Age of Surveillance? Or Distraction... (seattletimes.com) 74

"For the past two weeks, I've been using a new camera to secretly snap photos and record videos of strangers in parks, on trains, inside stores and at restaurants," writes a reporter for the New York Times. They were testing the recently released $300 Ray-Ban Meta glasses — "I promise it was all in the name of journalism" — which also includes microphones (and speakers, for listening to audio).

They call the device "part of a broader ambition in Silicon Valley to shift computing away from smartphone and computer screens and toward our faces." Meta, Apple and Magic Leap have all been hyping mixed-reality headsets that use cameras to allow their software to interact with objects in the real world. On Tuesday, Zuckerberg posted a video on Instagram demonstrating how the smart glasses could use AI to scan a shirt and help him pick out a pair of matching pants. Wearable face computers, the companies say, could eventually change the way we live and work... While I was impressed with the comfortable, stylish design of the glasses, I felt bothered by the implications for our privacy...

To inform people that they are being photographed, the Ray-Ban Meta glasses include a tiny LED light embedded in the right frame to indicate when the device is recording. When a photo is snapped, it flashes momentarily. When a video is recording, it is continuously illuminated. As I shot 200 photos and videos with the glasses in public, including on BART trains, on hiking trails and in parks, no one looked at the LED light or confronted me about it. And why would they? It would be rude to comment on a stranger's glasses, let alone stare at them... [A] Meta spokesperson said the company took privacy seriously and designed safety measures, including a tamper-detection technology, to prevent users from covering up the LED light with tape.

But another concern was how smart glasses might impact our ability to focus: Even when I wasn't using any of the features, I felt distracted while wearing them... I had problems concentrating while driving a car or riding a scooter. Not only was I constantly bracing myself for opportunities to shoot video, but the reflection from other car headlights emitted a harsh, blue strobe effect through the eyeglass lenses. Meta's safety manual for the Ray-Bans advises people to stay focused while driving, but it doesn't mention the glare from headlights. While doing work on a computer, the glasses felt unnecessary because there was rarely anything worth photographing at my desk, but a part of my mind constantly felt preoccupied by the possibility...

Ben Long, a photography teacher in San Francisco, said he was skeptical about the premise of the Meta glasses helping people remain present. "If you've got the camera with you, you're immediately not in the moment," he said. "Now you're wondering, Is this something I can present and record?"

The reporter admits they'll cherish the photos it took of their dog [including in the original article], but "the main problem is that the glasses don't do much we can't already do with phones... while these types of moments are truly precious, that benefit probably won't be enough to convince a vast majority of consumers to buy smart glasses and wear them regularly, given the potential costs of lost privacy and distraction."
Government

ProPublica Argues US Police 'Have Undermined the Promise of Body Cameras' (propublica.org) 96

A new investigation from ProPublica argues that in the U.S., "Hundreds of millions in taxpayer dollars have been spent on what was sold as a revolution in transparency and accountability.

"Instead, police departments routinely refuse to release footage..." The technology represented the largest new investment in policing in a generation. Yet without deeper changes, it was a fix bound to fall far short of those hopes. In every city, the police ostensibly report to mayors and other elected officials. But in practice, they have been given wide latitude to run their departments as they wish and to police — and protect — themselves. And so as policymakers rushed to equip the police with cameras, they often failed to grapple with a fundamental question: Who would control the footage?

Instead, they defaulted to leaving police departments, including New York's, with the power to decide what is recorded, who can see it and when. In turn, departments across the country have routinely delayed releasing footage, released only partial or redacted video or refused to release it at all. They have frequently failed to discipline or fire officers when body cameras document abuse and have kept footage from the agencies charged with investigating police misconduct. Even when departments have stated policies of transparency, they don't always follow them. Three years ago, after George Floyd's killing by Minneapolis police officers and amid a wave of protests against police violence, the New York Police Department said it would publish footage of so-called critical incidents "within 30 days." There have been 380 such incidents since then. The department has released footage within a month just twice.

And the department often does not release video at all. There have been 28 shootings of civilians this year by New York officers (through the first week of December). The department has released footage in just seven of these cases (also through the first week of December) and has not done so in any of the last 16.... For a snapshot of disclosure practices across the country, we conducted a review of civilians killed by police officers in June 2022, roughly a decade after the first body cameras were rolled out. We counted 79 killings in which there was body-worn-camera footage. A year and a half later, the police have released footage in just 33 cases — or about 42%.

The reporting reveals that without further intervention from city, state and federal officials and lawmakers, body cameras may do more to serve police interests than those of the public they are sworn to protect... The pattern has become so common across the country — public talk of transparency followed by a deliberate undermining of the stated goal — that the policing-oversight expert Hans Menos, who led Philadelphia's civilian police-oversight board until 2020, coined a term for it: the "body-cam head fake."

The article includes examples where when footage was ultimately released, it contradicted initial police accounts.

In one instance, past footage of Minneapolis police officer Derek Chauvin "was left in the control of a department where impunity reigned..." the article points out, adding that Minneapolis "fought against releasing the videos, even after Chauvin pleaded guilty in December 2021 to federal civil rights violations."
DRM

'Copyright Troll' Porn Company 'Makes Millions By Shaming Porn Consumers' (yahoo.com) 100

In 1999 Los Angeles Times reporter Michael Hiltzik co-authored a Pulitzer Prize-winning story. Now a business columnist for the Times, he writes that a Southern California maker of pornographic films named Strike 3 Holdings is also "a copyright troll," according to U.S. Judge Royce C. Lamberth: Lamberth wrote in 2018, "Armed with hundreds of cut-and-pasted complaints and boilerplate discovery motions, Strike 3 floods this courthouse (and others around the country) with lawsuits smacking of extortion. It treats this Court not as a citadel of justice, but as an ATM." He likened its litigation strategy to a "high-tech shakedown." Lamberth was not speaking off the cuff. Since September 2017, Strike 3 has filed more than 12,440 lawsuits in federal courts alleging that defendants infringed its copyrights by downloading its movies via BitTorrent, an online service on which unauthorized content can be accessed by almost anyone with a computer and internet connection.

That includes 3,311 cases the firm filed this year, more than 550 in federal courts in California. On some days, scores of filings reach federal courthouses — on Nov. 17, to select a date at random, the firm filed 60 lawsuits nationwide... Typically, they are settled for what lawyers say are cash payments in the four or five figures or are dismissed outright...

It's impossible to pinpoint the profits that can be made from this courthouse strategy. J. Curtis Edmondson, a Portland, Oregon, lawyer who is among the few who pushed back against a Strike 3 case and won, estimates that Strike 3 "pulls in about $15 million to $20 million a year from its lawsuits." That would make the cases "way more profitable than selling their product...." If only one-third of its more than 12,000 lawsuits produced settlements averaging as little as $5,000 each, the yield would come to $20 million... The volume of Strike 3 cases has increased every year — from 1,932 in 2021 to 2,879 last year and 3,311 this year.
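The column's back-of-envelope arithmetic checks out. A quick sanity check, using only the figures quoted above (the function name and structure here are purely illustrative):

```python
# Back-of-envelope estimate of settlement revenue from mass copyright filings,
# using the article's figures: 12,000+ suits, one-third settling, $5,000 each.
def estimated_yield(total_suits: int, settle_rate: float, avg_settlement: int) -> int:
    """Return the estimated total settlement revenue in dollars."""
    return int(total_suits * settle_rate * avg_settlement)

# One-third of 12,000 suits settling at $5,000 apiece:
print(estimated_yield(12_000, 1 / 3, 5_000))  # 20000000, i.e. $20 million
```

That lands squarely inside Edmondson's $15-20 million annual estimate, which is why the column treats the lawsuit volume itself as the business model.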

What's really needed is a change in copyright law to bring the statutory damages down to a level that truly reflects the value of a film lost because of unauthorized downloading — not $750 or $150,000 but perhaps a few hundred dollars.

None of the lawsuits go to trial. Instead, ISPs get a subpoena demanding the real-world address and name behind IP addresses "ostensibly used to download content from BitTorrent..." according to the article. Strike 3 will then "proceed by sending a letter implicitly threatening the subscriber with public exposure as a pornography viewer and explicitly with the statutory penalties for infringement written into federal copyright law — up to $150,000 for each example of willful infringement and from $750 to $30,000 otherwise."

A federal judge in Connecticut wrote last year that "Given the nature of the films at issue, defendants may feel coerced to settle these suits merely to prevent public disclosure of their identifying information, even if they believe they have been misidentified."

Thanks to Slashdot reader Beerismydad for sharing the article.
Medicine

US Pharmacies Share Medical Data with Police Without a Warrant, Inquiry Finds (msn.com) 23

The Washington Post reports that America's largest pharmacy chains have "handed over Americans' prescription records to police and government investigators without a warrant, a congressional investigation found, raising concerns about threats to medical privacy." Though some of the chains require their lawyers to review law enforcement requests, three of the largest — CVS Health, Kroger and Rite Aid, with a combined 60,000 locations nationwide — said they allow pharmacy staff members to hand over customers' medical records in the store... Pharmacies' records hold some of the most intimate details of their customers' personal lives, including years-old medical conditions and the prescriptions they take for mental health and birth control. Because the chains often share records across all locations, a pharmacy in one state can access a person's medical history from states with more-restrictive laws. Carly Zubrzycki, an associate professor at the University of Connecticut law school, wrote last year that this could link a person's out-of-state medical care via a "digital trail" back to their home state...

In briefings, officials with eight American pharmacy giants — Walgreens Boots Alliance, CVS, Walmart, Rite Aid, Kroger, Cigna, Optum Rx and Amazon Pharmacy — told congressional investigators that they required only a subpoena, not a warrant, to share the records.

A subpoena can be issued by a government agency and, unlike a court order or warrant, does not require a judge's approval. To obtain a warrant, law enforcement must convince a judge that the information is vital to investigate a crime. Officials with CVS, Kroger and Rite Aid said they instruct their pharmacy staff members to process law enforcement requests on the spot, saying the staff members face "extreme pressure to immediately respond," the lawmakers' letter said. The eight pharmacy giants told congressional investigators that they collectively received tens of thousands of legal demands every year, and that most were in connection with civil lawsuits. It's unclear how many were related to law enforcement demands, or how many requests were fulfilled.

Only one of the companies, Amazon, said it notified customers when law enforcement demanded its pharmacy records unless there was a legal prohibition, such as a "gag order," preventing it from doing so, the lawmakers said...

Most investigative requests come with a directive requiring the company to keep them confidential, a CVS spokeswoman said; for those that don't, the company considers "on a case-by-case basis whether it's appropriate to notify the individual."

The article points out that Americans "can request the companies tell them if they've ever disclosed their data...but very few people do.

"CVS, which has more than 40,000 pharmacists and 10,000 stores in the United States, said it received a 'single-digit number' of such consumer requests last year, the letter states."
Google

Why Google Will Stop Telling Law Enforcement Which Users Were Near a Crime (yahoo.com) 69

Earlier this week Google Maps stopped storing user location histories in the cloud. But why did Google make this move? Bloomberg reports that it was "so that the company no longer has access to users' individual location histories, cutting off its ability to respond to law enforcement warrants that ask for data on everyone who was in the vicinity of a crime." The company said Thursday that for users who have it enabled, location data will soon be saved directly on users' devices, blocking Google from being able to see it, and, by extension, blocking law enforcement from being able to demand that information from Google. "Your location information is personal," said Marlo McGriff, director of product for Google Maps, in the blog post. "We're committed to keeping it safe, private and in your control."

The change comes three months after a Bloomberg Businessweek investigation that found police across the US were increasingly using warrants to obtain location and search data from Google, even for nonviolent cases, and even for people who had nothing to do with the crime. "It's well past time," said Jennifer Lynch, the general counsel at the Electronic Frontier Foundation, a San Francisco-based nonprofit that defends digital civil liberties. "We've been calling on Google to make these changes for years, and I think it's fantastic for Google users, because it means that they can take advantage of features like location history without having to fear that the police will get access to all of that data."

Google said it would roll out the changes gradually through the next year on its own Android and Apple Inc.'s iOS mobile operating systems, and that users will receive a notification when the update comes to their account. The company won't be able to respond to new geofence warrants once the update is complete, including for people who choose to save encrypted backups of their location data to the cloud.

The EFF general counsel also pointed out to Bloomberg that "nobody else has been storing and collecting data in the same way as Google." (Apple, for example, is technically unable to provide the same data to police.)
United States

Is Climate-Friendly Flying Possible? The US Tries Subsidizing Sustainable Aviation Fuels (msn.com) 138

"Unlike automobiles, jumbo jets cannot run on batteries," notes the Washington Post.

So on Friday the White House unveiled a plan for "subsidizing sustainable aviation fuels" — which could also give the U.S. a leg up in a brand new industry: Senior White House officials said the program would make the airline industry cleaner while bringing prosperity to rural America. But environmental groups and some scientists expressed reservations about the plan, which would award subsidies based on a scientific model that has previously been used to justify incentives for corn-based ethanol. Studies have found the gasoline additive is exacerbating climate change.

The new tax credits, created through President Biden's signature climate law, are meant to spur production of jet fuels that create no more than half the emissions of the petroleum-based product. Each gallon of such fuel qualifies for a tax credit of up to $1.75. "The concern is they will end up subsidizing fuels that take an enormous amount of land to produce," said Tim Searchinger, a senior research scholar at Princeton University... Administration officials said on a call with reporters Thursday that they are carefully weighing such concerns. Agencies are in the process of updating the scientific model for gauging climate friendliness of jet fuels, they said, and it will be revised to factor in the emissions impact of cropland converted from food to fuel production. Federal agencies plan to complete their revisions by March 1.

"The sustainable aviation fuel industry is a potential 36 billion gallon industry that for all intents and purposes is just getting started," Agriculture Secretary Tom Vilsack said on the call. "This is a big, big deal."

Privacy

Delta Dental of California Data Breach Exposed Info of 7 Million People (bleepingcomputer.com) 20

Delta Dental of California announced that they've suffered a data breach that exposed the personal data of almost seven million patients. BleepingComputer reports: Delta Dental of California is a dental insurance provider that covers 45 million people across 15 states and is part of the Delta Dental Plans Association. According to a Delta Dental of California data breach notification (PDF), the company suffered unauthorized access by threat actors through the MOVEit file transfer software application.

The software was vulnerable to a zero-day SQL injection flaw leading to remote code execution, tracked as CVE-2023-34362, which the Clop ransomware gang leveraged to breach thousands of organizations worldwide. Delta Dental of California learned about the compromise on June 1, 2023, and five days later, following an internal investigation, it confirmed that unauthorized actors had accessed and stolen data from its systems between May 27 and May 30, 2023. The second, more lengthy investigation to determine the exact impact of the security incident was completed on November 27, 2023.
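The flaw class behind CVE-2023-34362 is worth illustrating. The sketch below is not MOVEit's actual code (which is proprietary); it is a generic, minimal demonstration using SQLite of why building SQL queries from unsanitized input is dangerous, and how a parameterized query closes the hole:

```python
import sqlite3

# Set up a toy in-memory database standing in for any application backend.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
conn.execute("INSERT INTO users VALUES ('alice', 0), ('bob', 1)")

# Attacker-controlled input containing an injection payload.
malicious = "nobody' OR '1'='1"

# Vulnerable pattern: the input is concatenated into the SQL text, so the
# injected OR clause becomes part of the query and matches every row.
rows = conn.execute(
    f"SELECT name FROM users WHERE name = '{malicious}'"
).fetchall()
print(len(rows))  # 2 -- all users leaked

# Safe pattern: a parameterized query treats the input as a literal value,
# so no user matches the attacker's string and nothing leaks.
rows = conn.execute(
    "SELECT name FROM users WHERE name = ?", (malicious,)
).fetchall()
print(len(rows))  # 0
```

In the MOVEit case the injection was chained into remote code execution, but the entry point was this same category of unsanitized query construction.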

Based on this, the data breach has so far impacted 6,928,932 customers of Delta Dental of California, who had their names, financial account numbers, and credit/debit card numbers, including security codes, exposed. Delta Dental of California provides 24 months of free credit monitoring and identity theft protection services to impacted patients to mitigate the risk of their exposed data. Details on enrolling in the program are enclosed in the personal notices.

The Courts

TikTok Requires Users To 'Forever Waive' Rights To Sue Over Past Harms (arstechnica.com) 23

An anonymous reader quotes a report from Ars Technica: Some TikTok users may have skipped reviewing an update to TikTok's terms of service this summer that shakes up the process for filing a legal dispute against the app. According to The New York Times, changes that TikTok "quietly" made to its terms suggest that the popular app has spent the back half of 2023 preparing for a wave of legal battles. In July, TikTok overhauled its rules for dispute resolution, pivoting from requiring private arbitration to insisting that legal complaints be filed in either the US District Court for the Central District of California or the Superior Court of the State of California, County of Los Angeles. Legal experts told the Times this could be a way for TikTok to dodge arbitration claims filed en masse that can cost companies millions more in fees than they expected to pay through individual arbitration.

Perhaps most significantly, TikTok also added a section to its terms that mandates that all legal complaints be filed within one year of any alleged harm caused by using the app. The terms now say that TikTok users "forever waive" rights to pursue any older claims. And unlike a prior version of TikTok's terms of service archived in May 2023, users do not seem to have any options to opt out of waiving their rights. Lawyers told the Times that these changes could make it more challenging for TikTok users to pursue legal action at a time when federal agencies are heavily scrutinizing the app and complaints about certain TikTok features allegedly harming kids are mounting.

Cellphones

Suspects Can Refuse To Provide Phone Passcodes To Police, Court Rules (arstechnica.com) 64

An anonymous reader quotes a report from Ars Technica: Criminal suspects can refuse to provide phone passcodes to police under the US Constitution's Fifth Amendment privilege against self-incrimination, according to a unanimous ruling issued (PDF) today by Utah's state Supreme Court. The questions addressed in the ruling could eventually be taken up by the US Supreme Court, whether through review of this case or a similar one. The case involves Alfonso Valdez, who was arrested for kidnapping and assaulting his ex-girlfriend. Police officers obtained a search warrant for the contents of Valdez's phone but couldn't crack his passcode.

Valdez refused to provide his passcode to a police detective. At his trial, the state "elicited testimony from the detective about Valdez's refusal to provide his passcode when asked," today's ruling said. "And during closing arguments, the State argued in rebuttal that Valdez's refusal and the resulting lack of evidence from his cell phone undermined the veracity of one of his defenses. The jury convicted Valdez." A court of appeals reversed the conviction, agreeing "with Valdez that he had a right under the Fifth Amendment to the United States Constitution to refuse to provide his passcode, and that the State violated that right when it used his refusal against him at trial." The Utah Supreme Court affirmed the court of appeals ruling.

The Valdez case does not involve an order to compel a suspect to unlock a device. Instead, "law enforcement asked Valdez to verbally provide his passcode," Utah justices wrote. "While these circumstances involve modern technology in a scenario that the Supreme Court has not yet addressed, we conclude that these facts present a more straightforward question that is answered by settled Fifth Amendment principles." Ruling against the state, the Utah Supreme Court said it "agree[s] with the court of appeals that verbally providing a cell phone passcode is a testimonial communication under the Fifth Amendment."

Privacy

Beeper Says Apple is Blocking Some iMessages (theverge.com) 111

After investigating reports that some users aren't getting iMessages on Beeper Mini and Beeper Cloud, Beeper says that Apple seems to be "deliberately blocking" iMessages from being delivered to about five percent of Beeper Mini users. From a report: The company says that uninstalling and reinstalling the app fixes the issue and that it's working on a broader fix.

Apple didn't immediately reply to a request for comment about Beeper's new claim, and it hasn't replied to my original request for comment, either. But given that the company has already blocked Beeper Mini before, it's not too surprising that it seems to be taking action against the app again.
