United Kingdom

Britain Covered Up Tainted Blood Scandal That Killed Thousands, Report Finds (upi.com) 78

UPI reports that the British government covered up "a multi-decade tainted blood scandal, leading to thousands of related deaths, a report published Monday found." Britain's National Health Service allowed blood tainted with HIV and hepatitis to be used on patients without their knowledge, leading to 3,000 deaths and more than 30,000 infections, according to the 2,527-page final report by Justice Brian Langstaff, a former judge on the High Court of England and Wales. Langstaff oversaw a five-year investigation into the use of tainted blood and blood products in Britain's healthcare system between 1970 and 1991. The report blames multiple administrations over the time period for knowingly exposing victims to unacceptable risks...

In several cases, health officials lied about the risks to patients... The NHS also gave patients false reassurances, an attempt to "save face," failing victims "not once but repeatedly...." The situation could "largely, though not entirely, have been avoided," Langstaff found...

The British government on Monday began operating a support phone line for people and their families affected by the tainted blood scandal.

The article notes that Langstaff described the coverup as "subtle" but "pervasive" and "chilling in its implications...

"To save face and to save expense, there has been a hiding of much of the truth."

Thanks to long-time Slashdot reader schwit1 for sharing the article.
Facebook

Meta, Activision Sued By Parents of Children Killed in Last Year's School Shooting (msn.com) 152

Exactly two years after the fatal shooting of 19 elementary school students in Texas, their parents filed a lawsuit against the publisher of the video game Call of Duty, against Meta, and against the manufacturer of the AR-15-style weapon used in the attack, Daniel Defense.

The Washington Post says the lawsuits "may be the first of their kind to connect aggressive firearms marketing tactics on social media and gaming platforms to the actions of a mass shooter." The complaints contend the three companies are responsible for "grooming" a generation of "socially vulnerable" young men radicalized to live out violent video game fantasies in the real world with easily accessible weapons of war...

Several states, including California and Hawaii, have passed consumer safety laws specific to the sale and marketing of firearms that would open the industry to more civil liability. Texas is not one of them. But that is just one prong of the three-pronged legal push by Uvalde families. The lawsuit against Activision and Meta, which is being filed in California, accuses the tech companies of knowingly promoting dangerous weapons to millions of vulnerable young people, particularly young men who are "insecure about their masculinity, often bullied, eager to show strength and assert dominance."

"To put a finer point on it: Defendants are chewing up alienated teenage boys and spitting out mass shooters," the lawsuit states...

The lawsuit alleges that Meta, which owns Instagram, easily allows gun manufacturers like Daniel Defense to circumvent its ban on paid firearm advertisements to reach scores of young people. Under Meta's rules, gunmakers are not allowed to buy advertisements promoting the sale of or use of weapons, ammunition or explosives. But gunmakers are free to post promotional material about weapons from their own account pages on Facebook and Instagram — a freedom the lawsuit alleges Daniel Defense often exploited.

According to the complaint, the Robb school shooter downloaded a version of "Call of Duty: Modern Warfare" in November 2021 that featured on the opening title page the DDM4V7 model rifle [shooter Salvador] Ramos would later purchase. Drawing from the shooter's social media accounts, Koskoff [the families' attorney] argued he was being bombarded with explicit marketing and combat imagery from the company on Instagram... The complaint cites Meta's practice, first reported by The Washington Post in 2022, of giving gun sellers wide latitude to knowingly break its rules against selling firearms on its websites. The company has allowed buyers and sellers to violate the rule 10 times before they are kicked off, The Post reported.

The article adds that the lawsuit against Meta "echoes some of the complaints by dozens of state attorneys general and school districts that have accused the tech giant of using manipulative practices to hook... while exposing them to harmful content." It also includes a few excerpts from the text of the lawsuit.
  • It argues that both Meta and Activision "knowingly exposed the Shooter to the weapon, conditioned him to see it as the solution to his problems, and trained him to use it."
  • The lawsuit also compares their practices to another ad campaign accused of marketing harmful products to children: cigarettes. "Over the last 15 years, two of America's largest technology companies — Defendants Activision and Meta — have partnered with the firearms industry in a scheme that makes the Joe Camel campaign look laughably harmless, even quaint."

Meta and Daniel Defense didn't respond to the reporters' requests for comment, but the article does quote a statement from Activision expressing sympathy for the communities and families impacted by the "horrendous and heartbreaking" shooting.

Activision also added that "Millions of people around the world enjoy video games without turning to horrific acts."


AI

FTC Chair: AI Models Could Violate Antitrust Laws (thehill.com) 42

An anonymous reader quotes a report from The Hill: Federal Trade Commission (FTC) Chair Lina Khan said Wednesday that companies that train their artificial intelligence (AI) models on data from news websites, artists' creations or people's personal information could be in violation of antitrust laws. At The Wall Street Journal's "Future of Everything Festival," Khan said the FTC is examining ways in which major companies' data scraping could hinder competition or potentially violate people's privacy rights. "The FTC Act prohibits unfair methods of competition and unfair or deceptive acts or practices," Khan said at the event. "So, you can imagine, if somebody's content or information is being scraped that they have produced, and then is being used in ways to compete with them and to dislodge them from the market and divert businesses, in some cases, that could be an unfair method of competition."

Khan said concern also lies in companies using people's data without their knowledge or consent, which can also raise legal concerns. "We've also seen a lot of concern about deception, about unfairness, if firms are making one set of representations when you're signing up to use them, but then are secretly or quietly using the data you're feeding them -- be it your personal data, be it, if you're a business, your proprietary data, your competitively significant data -- if they're then using that to feed their models, to compete with you, to abuse your privacy, that can also raise legal concerns," she said.

Khan also recognized people's concerns about companies retroactively changing their terms of service to let them use customers' content, including personal photos or family videos, to feed into their AI models. "I think that's where people feel a sense of violation, that that's not really what they signed up for and oftentimes, they feel that they don't have recourse," Khan said. "Some of these services are essential for navigating day to day life," she continued, "and so, if the choice -- 'choice' -- you're being presented with is: sign off on not just being endlessly surveilled, but all of that data being fed into these models, or forego using these services entirely, I think that's a really tough spot to put people in." Khan said she thinks many government agencies have an important role to play as AI continues to develop, saying, "I think in Washington, there's increasingly a recognition that we can't, as a government, just be totally hands off and stand out of the way."
EU

UK Law Will Let Regulators Fine Big Tech Without Court Approval (theverge.com) 34

Emma Roth reports via The Verge: The UK could subject big tech companies to hefty fines if they don't comply with new rules meant to promote competition in digital markets. On Thursday, lawmakers passed the Digital Markets, Competition and Consumer Bill (DMCC) through Parliament, which will let regulators enforce rules without the help of the courts. The DMCC also addresses consumer protection issues by banning fake reviews, forcing companies to be more transparent about their subscription contracts, regulating secondary ticket sales, and getting rid of hidden fees. It will also force certain companies to report mergers to the UK's Competition and Markets Authority (CMA). The European Union enacted a similar law, called the Digital Markets Act (DMA).

Only the companies the CMA designates as having Strategic Market Status (SMS) have to comply. These SMS companies are described as having "substantial and entrenched market power" and "a position of strategic significance" in the UK. They must have a global revenue of more than £25 billion or UK revenue of more than £1 billion. The law will also give the CMA the authority to determine whether a company has broken a law, require compliance, and issue a fine -- all without going through the court system. The CMA can fine companies up to 10 percent of the total value of a business's global revenue for violating the new rules.

Encryption

Signal Slams Telegram's Security (techcrunch.com) 33

Messaging app Signal's president Meredith Whittaker criticized rival Telegram's security on Friday, saying Telegram founder Pavel Durov is "full of s---" in his claims about Signal. "Telegram is a social media platform, it's not encrypted, it's the least secure of messaging and social media services out there," Whittaker told TechCrunch in an interview. The comments come amid a war of words between Whittaker, Durov and Twitter owner Elon Musk over the security of their respective platforms. Whittaker said Durov's amplification of claims questioning Signal's security was "incredibly reckless" and "actually harms real people."

"Play your games, but don't take them into my court," Whittaker said, accusing Durov of prioritizing being "followed by a professional photographer" over getting facts right about Signal's encryption. Signal uses end-to-end encryption by default, while Telegram only offers it for "secret chats." Whittaker said many in Ukraine and Russia use Signal for "actual serious communications" while relying on Telegram's less-secure social media features. She said the "jury is in" on the platforms' comparative security and that Signal's open source code allows experts to validate its privacy claims, which have the trust of the security community.
The Courts

Political Consultant Behind Fake Biden Robocalls Faces $6 Million Fine, Criminal Charges (apnews.com) 49

Political consultant Steven Kramer faces a $6 million fine and over two dozen criminal charges for using AI-generated robocalls mimicking President Joe Biden's voice to mislead New Hampshire voters ahead of the presidential primary. The Associated Press reports: The Federal Communications Commission said the fine it proposed Thursday for Steven Kramer is its first involving generative AI technology. The company accused of transmitting the calls, Lingo Telecom, faces a $2 million fine, though in both cases the parties could settle or further negotiate, the FCC said. Kramer has admitted orchestrating a message that was sent to thousands of voters two days before the first-in-the-nation primary on Jan. 23. The message played an AI-generated voice similar to the Democratic president's that used his phrase "What a bunch of malarkey" and falsely suggested that voting in the primary would preclude voters from casting ballots in November.

Kramer is facing 13 felony charges alleging he violated a New Hampshire law against attempting to deter someone from voting using misleading information. He also faces 13 misdemeanor charges accusing him of falsely representing himself as a candidate by his own conduct or that of another person. The charges were filed in four counties and will be prosecuted by the state attorney general's office. Attorney General John Formella said New Hampshire was committed to ensuring that its elections "remain free from unlawful interference."

Kramer, who owns a firm that specializes in get-out-the-vote projects, did not respond to an email seeking comment Thursday. He told The Associated Press in February that he wasn't trying to influence the outcome of the election but rather wanted to send a wake-up call about the potential dangers of artificial intelligence when he paid a New Orleans magician $150 to create the recording. "Maybe I'm a villain today, but I think in the end we get a better country and better democracy because of what I've done, deliberately," Kramer said in February.

The Almighty Buck

IRS Extends Free File Tax Program Through 2029 (cnbc.com) 21

The IRS has extended the Free File program through 2029, "continuing its partnership with a coalition of private tax software companies that allow most Americans to file federal taxes for free," reports CNBC. From the report: This season, Free File processed 2.9 million returns through May 11, a 7.3% increase compared to the same period last year, according to the IRS. "Free File has been an important partner with the IRS for more than two decades and helped tens of millions of taxpayers," Ken Corbin, chief of IRS taxpayer services, said in a statement Wednesday. "This extension will continue that relationship into the future."

"This multi-year agreement will also provide certainty for private-sector partners to help with their future Free File planning," Corbin added. IRS Free File remains open through the Oct. 15 federal tax extension deadline. You can use Free File for 2023 returns with an adjusted gross income of $79,000 or less, which is up from $73,000 in 2022. Fillable Forms are also still available for all income levels.

IT

Leaked Contract Shows Samsung Forces Repair Shop To Snitch On Customers (404media.co) 34

Slashdot reader samleecole shares a report about the contract the South Korean firm Samsung requires repair shops to sign: In exchange for selling them repair parts, Samsung requires independent repair shops to give Samsung the name, contact information, phone identifier, and customer complaint details of everyone who gets their phone repaired at these shops, according to a contract obtained by 404 Media. Stunningly, it also requires these nominally independent shops to "immediately disassemble" any phones that customers have brought them that have been previously repaired with aftermarket or third-party parts and to "immediately notify" Samsung that the customer has used third-party parts.

"Company shall immediately disassemble all products that are created or assembled out of, comprised of, or that contain any Service Parts not purchased from Samsung," a section of the agreement reads. "And shall immediately notify Samsung in writing of the details and circumstances of any unauthorized use or misappropriation of any Service Part for any purpose other than pursuant to this Agreement. Samsung may terminate this Agreement if these terms are violated."

Businesses

iFixit is Breaking Up With Samsung (theverge.com) 13

iFixit and Samsung are parting ways. Two years after they teamed up on one of the first direct-to-consumer phone repair programs, iFixit CEO and co-founder Kyle Wiens tells The Verge the two companies have failed to renegotiate a contract -- and says Samsung is to blame. From a report: "Samsung does not seem interested in enabling repair at scale," Wiens tells me, even though similar deals are going well with Google, Motorola, and HMD. He believes dropping Samsung shouldn't actually affect iFixit customers all that much. Instead of being Samsung's partner on genuine parts and approved repair manuals, iFixit will simply go it alone, the same way it's always done with Apple's iPhones. While Wiens wouldn't say who technically broke up with whom, he says price is the biggest reason the Samsung deal isn't working: Samsung's parts are priced so high, and its phones remain so difficult to repair, that customers just aren't buying.
United States

US Sues To Break Up Ticketmaster Owner, Live Nation (nytimes.com) 60

The Justice Department on Thursday said it was suing Live Nation Entertainment, the concert giant that owns Ticketmaster, asking a court to break up the company over claims it illegally maintained a monopoly in the live entertainment industry. From a report: In the lawsuit, which is joined by 29 states and the District of Columbia, the government accuses Live Nation of dominating the industry by locking venues into exclusive ticketing contracts, pressuring artists to use its services and threatening its rivals with financial retribution. Those tactics, the government argues, have resulted in higher ticket prices for consumers and have stifled innovation and competition throughout the industry.

"It is time to break up Live Nation-Ticketmaster," Merrick Garland, the attorney general, said in a statement announcing the suit, which is being filed in the U.S. District Court for the Southern District of New York. The lawsuit is a direct challenge to the business of Live Nation, a colossus of the entertainment industry and a force in the lives of musicians and fans alike. The case, filed 14 years after the government approved Live Nation's merger with Ticketmaster, has the potential to transform the multibillion-dollar concert industry. Live Nation's scale and reach far exceed those of any competitor, encompassing concert promotion, ticketing, artist management and the operation of hundreds of venues and festivals around the world.

Mozilla

Mozilla Says It's Concerned About Windows Recall (theregister.com) 67

Microsoft's Windows Recall feature is attracting controversy before even venturing out of preview. From a report: The principle is simple. Windows takes a snapshot of a user's active screen every few seconds and dumps it to disk. The user can then scroll through the snapshots and, when something is selected, the user is given options to interact with the content.
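The snapshot-and-browse loop described above can be sketched in a few lines. This is a minimal illustration only: `capture_screen()` is a hypothetical stand-in (Recall's actual capture, indexing, and encryption internals are not public), and the injectable `sleep` parameter exists purely so the loop can be exercised without waiting.

```python
import pathlib
import time


def capture_screen() -> bytes:
    """Hypothetical stand-in for a platform screenshot API.

    A real implementation would grab the framebuffer; here we
    just return placeholder bytes so the sketch is runnable.
    """
    return b"fake-png-bytes"


def snapshot_loop(outdir: str, interval: float = 5.0,
                  iterations: int = 3, sleep=time.sleep) -> list:
    """Periodically dump screen snapshots to disk, Recall-style.

    Each iteration writes one timestamp-ordered snapshot file that a
    viewer could later scroll through.
    """
    out = pathlib.Path(outdir)
    out.mkdir(parents=True, exist_ok=True)
    written = []
    for i in range(iterations):
        path = out / f"snapshot_{i:06d}.png"
        path.write_bytes(capture_screen())
        written.append(path)
        sleep(interval)
    return written
```

A production system along these lines would also OCR and index each snapshot for search, and (as the article notes Recall does) encrypt the stored data at rest.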

Mozilla's Chief Product Officer, Steve Teixeira, told The Register: "Mozilla is concerned about Windows Recall. From a browser perspective, some data should be saved, and some shouldn't. Recall stores not just browser history, but also data that users type into the browser with only very coarse control over what gets stored. While the data is stored in encrypted format, this stored data represents a new vector of attack for cybercriminals and a new privacy worry for shared computers.

"Microsoft is also once again playing gatekeeper and picking which browsers get to win and lose on Windows -- favoring, of course, Microsoft Edge. Microsoft's Edge allows users to block specific websites and private browsing activity from being seen by Recall. Other Chromium-based browsers can filter out private browsing activity but lose the ability to block sensitive websites (such as financial sites) from Recall. "Right now, there's no documentation on how a non-Chromium based, third-party browser, such as Firefox, can protect user privacy from Recall. Microsoft did not engage our cooperation on Recall, but we would have loved for that to be the case, which would have enabled us to partner on giving users true agency over their privacy, regardless of the browser they choose."

Encryption

Undisclosed WhatsApp Vulnerability Lets Governments See Who You Message (theintercept.com) 38

WhatsApp's security team warned that despite the app's encryption, users are vulnerable to government surveillance through traffic analysis, according to an internal threat assessment obtained by The Intercept. The document suggests that governments can monitor when and where encrypted communications occur, potentially allowing powerful inferences about who is conversing with whom. The report adds: Even though the contents of WhatsApp communications are unreadable, the assessment shows how governments can use their access to internet infrastructure to monitor when and where encrypted communications are occurring, like observing a mail carrier ferrying a sealed envelope. This view into national internet traffic is enough to make powerful inferences about which individuals are conversing with each other, even if the subjects of their conversations remain a mystery. "Even assuming WhatsApp's encryption is unbreakable," the assessment reads, "ongoing 'collect and correlate' attacks would still break our intended privacy model."

The WhatsApp threat assessment does not describe specific instances in which it knows this method has been deployed by state actors. But it cites extensive reporting by the New York Times and Amnesty International showing how countries around the world spy on dissident encrypted chat app usage, including WhatsApp, using the very same techniques. As war has grown increasingly computerized, metadata -- information about the who, when, and where of conversations -- has come to hold immense value to intelligence, military, and police agencies around the world. "We kill people based on metadata," former National Security Agency chief Michael Hayden once infamously quipped.
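To illustrate the "collect and correlate" idea in the assessment: an observer who sees only which user generated traffic at what time -- never the message contents -- can flag pairs of users whose traffic bursts repeatedly coincide. The sketch below is purely illustrative (invented users and timestamps, a naive pairwise comparison), not a description of any real surveillance tooling:

```python
from collections import Counter
from itertools import combinations


def correlate_bursts(events, window=2.0):
    """Count how often two users' traffic bursts fall within `window`
    seconds of each other; frequent co-occurrence suggests (but does
    not prove) that the two are conversing."""
    pairs = Counter()
    for (u1, t1), (u2, t2) in combinations(events, 2):
        if u1 != u2 and abs(t1 - t2) <= window:
            pairs[frozenset((u1, u2))] += 1
    return pairs


# Invented observations: (user, seconds since start of capture).
events = [("alice", 0.0), ("bob", 0.4),
          ("alice", 10.0), ("bob", 10.3),
          ("carol", 55.0),
          ("alice", 20.1), ("bob", 20.2)]

top_pair, count = correlate_bursts(events).most_common(1)[0]
# alice and bob's traffic coincides three times; carol's never does.
```

Real-world attacks would be statistical rather than this naive counting, but the principle is the same: timing metadata alone can link conversation partners even when the payload stays encrypted.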
Meta said "WhatsApp has no backdoors and we have no evidence of vulnerabilities in how WhatsApp works." Though the assessment describes the "vulnerabilities" as "ongoing," and specifically mentions WhatsApp 17 times, a Meta spokesperson said the document is "not a reflection of a vulnerability in WhatsApp," only "theoretical," and not unique to WhatsApp.
Android

Google Brings Back Group Speaker Controls After Sonos Lawsuit Win (arstechnica.com) 16

Android Authority's Mishaal Rahman reports that the group speaker volume controls feature is back in Android 15 Beta 2. "Google intentionally disabled this functionality on Pixel phones back in late 2021 due to a legal dispute with Sonos," reports Rahman. "In late 2023, Google announced it would bring back several features they had to remove, following a judge's overturning of a jury verdict that was in favor of Sonos." From the report: When you create a speaker group consisting of one or more Assistant-enabled devices in the Google Home app, you're able to cast audio to that group from your phone using a Cast-enabled app. For example, let's say I make a speaker group named "Nest Hubs" that consists of my bedroom Nest Hub and my living room Nest Hub. If I open the YouTube Music app, start playing a song, and then tap the cast icon, I can select "Nest Hubs" to start playback on both my Nest Hubs simultaneously.

If I keep the YouTube Music app open, I can control the volume of my speaker group by pressing the volume keys on my phone. This functionality is available no matter what device I use. However, if I open another app while YouTube Music is casting, whether I'm able to still control the volume of my speaker group using my phone's volume keys depends on what phone I'm using and what software version it's running. If I'm using a Pixel phone that's running a software version before Android 15 Beta 2, then I'm unable to control the volume of my speaker group unless I re-open the YouTube Music app. If I'm using a phone from any other manufacturer, then I won't have any issues controlling the volume of my speaker group.

The reason for this weird discrepancy is that Google intentionally blocked Pixel devices from being able to control the volume of Google Home speaker groups while casting. Google did this out of an abundance of caution while they were fighting a legal dispute. [...] With the release of last week's Android 15 Beta 2, we can confirm that Google finally restored this functionality.

AI

DOJ Makes Its First Known Arrest For AI-Generated CSAM (engadget.com) 98

In what's believed to be the first case of its kind, the U.S. Department of Justice arrested a Wisconsin man last week for generating and distributing AI-generated child sexual abuse material (CSAM). Even if no children were used to create the material, the DOJ "looks to establish a judicial precedent that exploitative materials are still illegal," reports Engadget. From the report: The DOJ says 42-year-old software engineer Steven Anderegg of Holmen, WI, used a fork of the open-source AI image generator Stable Diffusion to make the images, which he then used to try to lure an underage boy into sexual situations. The latter will likely play a central role in the eventual trial for the four counts of "producing, distributing, and possessing obscene visual depictions of minors engaged in sexually explicit conduct and transferring obscene material to a minor under the age of 16." The government says Anderegg's images showed "nude or partially clothed minors lasciviously displaying or touching their genitals or engaging in sexual intercourse with men." The DOJ claims he used specific prompts, including negative prompts (extra guidance for the AI model, telling it what not to produce) to spur the generator into making the CSAM.

Cloud-based image generators like Midjourney and DALL-E 3 have safeguards against this type of activity, but Ars Technica reports that Anderegg allegedly used Stable Diffusion 1.5, a variant with fewer boundaries. Stability AI told the publication that fork was produced by Runway ML. According to the DOJ, Anderegg communicated online with the 15-year-old boy, describing how he used the AI model to create the images. The agency says the accused sent the teen direct messages on Instagram, including several AI images of "minors lasciviously displaying their genitals." To its credit, Instagram reported the images to the National Center for Missing and Exploited Children (NCMEC), which alerted law enforcement. Anderegg could face five to 70 years in prison if convicted on all four counts. He's currently in federal custody before a hearing scheduled for May 22.

EU

EU Sets Benchmark For Rest of the World With Landmark AI Laws (reuters.com) 28

An anonymous reader quotes a report from Reuters: Europe's landmark rules on artificial intelligence will enter into force next month after EU countries endorsed on Tuesday a political deal reached in December, setting a potential global benchmark for a technology used in business and everyday life. The European Union's AI Act is more comprehensive than the United States' light-touch, voluntary-compliance approach, while China's approach aims to maintain social stability and state control. The vote by EU countries came two months after EU lawmakers backed the AI legislation drafted by the European Commission in 2021 after making a number of key changes. [...]

The AI Act imposes strict transparency obligations on high-risk AI systems while such requirements for general-purpose AI models will be lighter. It restricts governments' use of real-time biometric surveillance in public spaces to cases of certain crimes, prevention of terrorist attacks and searches for people suspected of the most serious crimes. The new legislation will have an impact beyond the 27-country bloc, said Patrick van Eecke at law firm Cooley. "The Act will have global reach. Companies outside the EU who use EU customer data in their AI platforms will need to comply. Other countries and regions are likely to use the AI Act as a blueprint, just as they did with the GDPR," he said, referring to EU privacy rules.

While the new legislation will apply in 2026, bans on the use of artificial intelligence in social scoring, predictive policing and untargeted scraping of facial images from the internet or CCTV footage will kick in six months after the new regulation enters into force. Obligations for general purpose AI models will apply after 12 months, and rules for AI systems embedded into regulated products after 36 months. Fines for violations range from 7.5 million euros ($8.2 million) or 1.5% of turnover to 35 million euros or 7% of global turnover, depending on the type of violation.
