The Courts

Political Consultant Behind Fake Biden Robocalls Faces $6 Million Fine, Criminal Charges (apnews.com)

Political consultant Steven Kramer faces a $6 million fine and over two dozen criminal charges for using AI-generated robocalls mimicking President Joe Biden's voice to mislead New Hampshire voters ahead of the presidential primary. The Associated Press reports: The Federal Communications Commission said the fine it proposed Thursday for Steven Kramer is its first involving generative AI technology. The company accused of transmitting the calls, Lingo Telecom, faces a $2 million fine, though in both cases the parties could settle or further negotiate, the FCC said. Kramer has admitted orchestrating a message that was sent to thousands of voters two days before the first-in-the-nation primary on Jan. 23. The message played an AI-generated voice similar to the Democratic president's that used his phrase "What a bunch of malarkey" and falsely suggested that voting in the primary would preclude voters from casting ballots in November.

Kramer is facing 13 felony charges alleging he violated a New Hampshire law against attempting to deter someone from voting using misleading information. He also faces 13 misdemeanor charges accusing him of falsely representing himself as a candidate by his own conduct or that of another person. The charges were filed in four counties and will be prosecuted by the state attorney general's office. Attorney General John Formella said New Hampshire was committed to ensuring that its elections "remain free from unlawful interference."

Kramer, who owns a firm that specializes in get-out-the-vote projects, did not respond to an email seeking comment Thursday. He told The Associated Press in February that he wasn't trying to influence the outcome of the election but rather wanted to send a wake-up call about the potential dangers of artificial intelligence when he paid a New Orleans magician $150 to create the recording. "Maybe I'm a villain today, but I think in the end we get a better country and better democracy because of what I've done, deliberately," Kramer said in February.

The Almighty Buck

IRS Extends Free File Tax Program Through 2029 (cnbc.com)

The IRS has extended the Free File program through 2029, "continuing its partnership with a coalition of private tax software companies that allow most Americans to file federal taxes for free," reports CNBC. From the report: This season, Free File processed 2.9 million returns through May 11, a 7.3% increase compared to the same period last year, according to the IRS. "Free File has been an important partner with the IRS for more than two decades and helped tens of millions of taxpayers," Ken Corbin, chief of IRS taxpayer services, said in a statement Wednesday. "This extension will continue that relationship into the future."

"This multi-year agreement will also provide certainty for private-sector partners to help with their future Free File planning," Corbin added. IRS Free File remains open through the Oct. 15 federal tax extension deadline. You can use Free File for 2023 returns with an adjusted gross income of $79,000 or less, which is up from $73,000 in 2022. Fillable Forms are also still available for all income levels.

IT

Leaked Contract Shows Samsung Forces Repair Shop To Snitch On Customers (404media.co)

samleecole shares a report about the contract Samsung requires repair shops to sign: In exchange for selling them repair parts, Samsung requires independent repair shops to give Samsung the name, contact information, phone identifier, and customer complaint details of everyone who gets their phone repaired at these shops, according to a contract obtained by 404 Media. Stunningly, it also requires these nominally independent shops to "immediately disassemble" any phones that customers have brought them that have been previously repaired with aftermarket or third-party parts and to "immediately notify" Samsung that the customer has used third-party parts.

"Company shall immediately disassemble all products that are created or assembled out of, comprised of, or that contain any Service Parts not purchased from Samsung," a section of the agreement reads. "And shall immediately notify Samsung in writing of the details and circumstances of any unauthorized use or misappropriation of any Service Part for any purpose other than pursuant to this Agreement. Samsung may terminate this Agreement if these terms are violated."

Businesses

iFixit is Breaking Up With Samsung (theverge.com)

iFixit and Samsung are parting ways. Two years after they teamed up on one of the first direct-to-consumer phone repair programs, iFixit CEO and co-founder Kyle Wiens tells The Verge the two companies have failed to renegotiate a contract -- and says Samsung is to blame. From a report: "Samsung does not seem interested in enabling repair at scale," Wiens tells me, even though similar deals are going well with Google, Motorola, and HMD. He believes dropping Samsung shouldn't actually affect iFixit customers all that much. Instead of being Samsung's partner on genuine parts and approved repair manuals, iFixit will simply go it alone, the same way it's always done with Apple's iPhones. While Wiens wouldn't say who technically broke up with whom, he says price is the biggest reason the Samsung deal isn't working: Samsung's parts are priced so high, and its phones remain so difficult to repair, that customers just aren't buying.

United States

US Sues To Break Up Ticketmaster Owner, Live Nation (nytimes.com)

The Justice Department on Thursday said it was suing Live Nation Entertainment, the concert giant that owns Ticketmaster, asking a court to break up the company over claims it illegally maintained a monopoly in the live entertainment industry. From a report: In the lawsuit, which is joined by 29 states and the District of Columbia, the government accuses Live Nation of dominating the industry by locking venues into exclusive ticketing contracts, pressuring artists to use its services and threatening its rivals with financial retribution. Those tactics, the government argues, have resulted in higher ticket prices for consumers and have stifled innovation and competition throughout the industry.

"It is time to break up Live Nation-Ticketmaster," Merrick Garland, the attorney general, said in a statement announcing the suit, which is being filed in the U.S. District Court for the Southern District of New York. The lawsuit is a direct challenge to the business of Live Nation, a colossus of the entertainment industry and a force in the lives of musicians and fans alike. The case, filed 14 years after the government approved Live Nation's merger with Ticketmaster, has the potential to transform the multibillion-dollar concert industry. Live Nation's scale and reach far exceed those of any competitor, encompassing concert promotion, ticketing, artist management and the operation of hundreds of venues and festivals around the world.

Mozilla

Mozilla Says It's Concerned About Windows Recall (theregister.com)

Microsoft's Windows Recall feature is attracting controversy before even venturing out of preview. From a report: The principle is simple. Windows takes a snapshot of a user's active screen every few seconds and dumps it to disk. The user can then scroll through the snapshots and, when something is selected, the user is given options to interact with the content.
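
As a rough illustration of the mechanism described, the capture side of such a feature amounts to a timed snapshot loop. The sketch below is a hypothetical Python rendering, not Microsoft's implementation: `take_snapshot` is a stub standing in for a real platform screen-capture API, and the file-naming scheme is invented for the example.

```python
import time
from datetime import datetime, timezone
from pathlib import Path

def take_snapshot() -> bytes:
    """Stand-in for a real screen-capture call; returns dummy image bytes."""
    return b"fake-screenshot-bytes"

def recall_loop(store: Path, interval_s: float, iterations: int) -> list[Path]:
    """Capture the screen every `interval_s` seconds and dump each
    snapshot to disk under a timestamped name, building the on-disk
    timeline a user could later scroll back through."""
    store.mkdir(parents=True, exist_ok=True)
    saved = []
    for _ in range(iterations):
        stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%S.%f")
        path = store / f"snapshot-{stamp}.png"
        path.write_bytes(take_snapshot())
        saved.append(path)
        time.sleep(interval_s)
    return saved
```

Scrolling back through the timeline is then just a matter of listing the directory in name order, which is exactly why a locally stored snapshot archive is the attack surface critics are worried about.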

Mozilla's Chief Product Officer, Steve Teixeira, told The Register: "Mozilla is concerned about Windows Recall. From a browser perspective, some data should be saved, and some shouldn't. Recall stores not just browser history, but also data that users type into the browser with only very coarse control over what gets stored. While the data is stored in encrypted format, this stored data represents a new vector of attack for cybercriminals and a new privacy worry for shared computers.

"Microsoft is also once again playing gatekeeper and picking which browsers get to win and lose on Windows -- favoring, of course, Microsoft Edge. Microsoft's Edge allows users to block specific websites and private browsing activity from being seen by Recall. Other Chromium-based browsers can filter out private browsing activity but lose the ability to block sensitive websites (such as financial sites) from Recall. "Right now, there's no documentation on how a non-Chromium based, third-party browser, such as Firefox, can protect user privacy from Recall. Microsoft did not engage our cooperation on Recall, but we would have loved for that to be the case, which would have enabled us to partner on giving users true agency over their privacy, regardless of the browser they choose."

Encryption

Undisclosed WhatsApp Vulnerability Lets Governments See Who You Message (theintercept.com)

WhatsApp's security team warned that despite the app's encryption, users are vulnerable to government surveillance through traffic analysis, according to an internal threat assessment obtained by The Intercept. The document suggests that governments can monitor when and where encrypted communications occur, potentially allowing powerful inferences about who is conversing with whom. The report adds: Even though the contents of WhatsApp communications are unreadable, the assessment shows how governments can use their access to internet infrastructure to monitor when and where encrypted communications are occurring, like observing a mail carrier ferrying a sealed envelope. This view into national internet traffic is enough to make powerful inferences about which individuals are conversing with each other, even if the subjects of their conversations remain a mystery. "Even assuming WhatsApp's encryption is unbreakable," the assessment reads, "ongoing 'collect and correlate' attacks would still break our intended privacy model."

The WhatsApp threat assessment does not describe specific instances in which it knows this method has been deployed by state actors. But it cites extensive reporting by the New York Times and Amnesty International showing how countries around the world spy on dissident encrypted chat app usage, including WhatsApp, using the very same techniques. As war has grown increasingly computerized, metadata -- information about the who, when, and where of conversations -- has come to hold immense value to intelligence, military, and police agencies around the world. "We kill people based on metadata," former National Security Agency chief Michael Hayden once infamously quipped.
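
The "collect and correlate" attack the assessment describes needs no decryption at all: an observer with a view of network traffic only has to notice that activity on one monitored link is consistently followed, moments later, by activity on another. The toy sketch below illustrates the idea; the timestamps, window size, and scoring are invented for the example and are not taken from the assessment.

```python
def correlation_score(sends: list[float], receives: list[float],
                      window: float = 2.0) -> float:
    """Fraction of one endpoint's transmission timestamps that are
    followed within `window` seconds by activity on another endpoint's
    link. A high score suggests the two endpoints are conversing, even
    though the packet contents themselves remain unreadable."""
    if not sends:
        return 0.0
    matched = sum(
        1 for s in sends
        if any(0 <= r - s <= window for r in receives)
    )
    return matched / len(sends)

# Timestamps (in seconds) observed on two monitored links.
alice_sends = [10.0, 42.5, 99.1, 130.4]
bob_activity = [10.4, 43.0, 99.9, 131.0, 250.0]
carol_activity = [5.0, 60.0, 200.0]

print(correlation_score(alice_sends, bob_activity))    # 1.0 -- likely conversing
print(correlation_score(alice_sends, carol_activity))  # 0.0 -- unrelated
```

Real traffic-analysis tooling works against noisy, high-volume data and uses far more sophisticated statistics, but the underlying inference is the same: metadata alone reveals who is talking to whom.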

Meta said "WhatsApp has no backdoors and we have no evidence of vulnerabilities in how WhatsApp works." Though the assessment describes the "vulnerabilities" as "ongoing," and specifically mentions WhatsApp 17 times, a Meta spokesperson said the document is "not a reflection of a vulnerability in WhatsApp," only "theoretical," and not unique to WhatsApp.

Android

Google Brings Back Group Speaker Controls After Sonos Lawsuit Win (arstechnica.com)

Android Authority's Mishaal Rahman reports that the group speaker volume controls feature is back in Android 15 Beta 2. "Google intentionally disabled this functionality on Pixel phones back in late 2021 due to a legal dispute with Sonos," reports Rahman. "In late 2023, Google announced it would bring back several features they had to remove, following a judge's overturning of a jury verdict that was in favor of Sonos." From the report: When you create a speaker group consisting of one or more Assistant-enabled devices in the Google Home app, you're able to cast audio to that group from your phone using a Cast-enabled app. For example, let's say I make a speaker group named "Nest Hubs" that consists of my bedroom Nest Hub and my living room Nest Hub. If I open the YouTube Music app, start playing a song, and then tap the cast icon, I can select "Nest Hubs" to start playback on both my Nest Hubs simultaneously.

If I keep the YouTube Music app open, I can control the volume of my speaker group by pressing the volume keys on my phone. This functionality is available no matter what device I use. However, if I open another app while YouTube Music is casting, whether I'm able to still control the volume of my speaker group using my phone's volume keys depends on what phone I'm using and what software version it's running. If I'm using a Pixel phone that's running a software version before Android 15 Beta 2, then I'm unable to control the volume of my speaker group unless I re-open the YouTube Music app. If I'm using a phone from any other manufacturer, then I won't have any issues controlling the volume of my speaker group.

The reason for this weird discrepancy is that Google intentionally blocked Pixel devices from being able to control the volume of Google Home speaker groups while casting. Google did this out of an abundance of caution while they were fighting a legal dispute. [...] With the release of last week's Android 15 Beta 2, we can confirm that Google finally restored this functionality.

AI

DOJ Makes Its First Known Arrest For AI-Generated CSAM (engadget.com)

In what's believed to be the first case of its kind, the U.S. Department of Justice arrested a Wisconsin man last week for generating and distributing AI-generated child sexual abuse material (CSAM). Even if no children were used to create the material, the DOJ "looks to establish a judicial precedent that exploitative materials are still illegal," reports Engadget. From the report: The DOJ says 42-year-old software engineer Steven Anderegg of Holmen, WI, used a fork of the open-source AI image generator Stable Diffusion to make the images, which he then used to try to lure an underage boy into sexual situations. The latter will likely play a central role in the eventual trial for the four counts of "producing, distributing, and possessing obscene visual depictions of minors engaged in sexually explicit conduct and transferring obscene material to a minor under the age of 16." The government says Anderegg's images showed "nude or partially clothed minors lasciviously displaying or touching their genitals or engaging in sexual intercourse with men." The DOJ claims he used specific prompts, including negative prompts (extra guidance for the AI model, telling it what not to produce) to spur the generator into making the CSAM.

Cloud-based image generators like Midjourney and DALL-E 3 have safeguards against this type of activity, but Ars Technica reports that Anderegg allegedly used Stable Diffusion 1.5, a variant with fewer boundaries. Stability AI told the publication that fork was produced by Runway ML. According to the DOJ, Anderegg communicated online with the 15-year-old boy, describing how he used the AI model to create the images. The agency says the accused sent the teen direct messages on Instagram, including several AI images of "minors lasciviously displaying their genitals." To its credit, Instagram reported the images to the National Center for Missing and Exploited Children (NCMEC), which alerted law enforcement. Anderegg could face five to 70 years in prison if convicted on all four counts. He's currently in federal custody before a hearing scheduled for May 22.

EU

EU Sets Benchmark For Rest of the World With Landmark AI Laws (reuters.com)

An anonymous reader quotes a report from Reuters: Europe's landmark rules on artificial intelligence will enter into force next month after EU countries endorsed on Tuesday a political deal reached in December, setting a potential global benchmark for a technology used in business and everyday life. The European Union's AI Act is more comprehensive than the United States' light-touch voluntary compliance approach while China's approach aims to maintain social stability and state control. The vote by EU countries came two months after EU lawmakers backed the AI legislation drafted by the European Commission in 2021 after making a number of key changes. [...]

The AI Act imposes strict transparency obligations on high-risk AI systems while such requirements for general-purpose AI models will be lighter. It restricts governments' use of real-time biometric surveillance in public spaces to cases of certain crimes, prevention of terrorist attacks and searches for people suspected of the most serious crimes. The new legislation will have an impact beyond the 27-country bloc, said Patrick van Eecke at law firm Cooley. "The Act will have global reach. Companies outside the EU who use EU customer data in their AI platforms will need to comply. Other countries and regions are likely to use the AI Act as a blueprint, just as they did with the GDPR," he said, referring to EU privacy rules.

While the new legislation will apply in 2026, bans on the use of artificial intelligence in social scoring, predictive policing and untargeted scraping of facial images from the internet or CCTV footage will kick in six months after the new regulation enters into force. Obligations for general-purpose AI models will apply after 12 months, and rules for AI systems embedded into regulated products in 36 months. Fines for violations range from 7.5 million euros ($8.2 million) or 1.5% of turnover to 35 million euros or 7% of global turnover, depending on the type of violation.

The Courts

Apple Says US Antitrust Lawsuit Should Be Dismissed

Apple said on Tuesday it plans to ask a U.S. judge to dismiss a lawsuit filed by the Justice Department and 15 states in March that alleged the iPhone maker monopolized the smartphone market, hurt smaller rivals and drove up prices. From a report: In a letter to U.S. District Judge Julien X. Neals in New Jersey, Apple said "far from being a monopolist, Apple faces fierce competition from well-established rivals, and the complaint fails to allege that Apple has the ability to charge supra-competitive prices or restrict output in the alleged smartphone markets." In the letter to the judge, Apple said the DOJ relies on a new "theory of antitrust liability that no court has recognized."

The government is expected to respond to the Apple letter within seven days. The court requires parties to submit such letters in hopes of expediting cases before they advance to a potentially more robust and expensive motion to dismiss. The Justice Department alleges that Apple uses its market power to get more money from consumers, developers, content creators, artists, publishers, small businesses and merchants. The civil lawsuit accuses Apple of maintaining an illegal monopoly on smartphones by imposing contractual restrictions on, and withholding critical access from, developers.

Google

Google Cuts Mystery Check To US In Bid To Sidestep Jury Trial (reuters.com)

An anonymous reader quotes a report from Reuters: Alphabet's Google has preemptively paid damages to the U.S. government, an unusual move aimed at avoiding a jury trial in the Justice Department's antitrust lawsuit over its digital advertising business. Google disclosed (PDF) the payment, but not the amount, in a court filing last week that said the case should be heard and decided by a judge directly. Without a monetary damages claim, Google argued, the government has no right to a jury trial. The Justice Department, which has not said if it will accept the payment, declined to comment on the filing. Google asserted that its check, which it said covered its alleged overcharges for online ads, allows it to sidestep a jury trial whether or not the government takes it.

The Justice Department filed the case last year with Virginia and other states, alleging Google was stifling competition for advertising technology. The government has said Google should be forced to sell its ad manager suite. Google, which has denied the allegations, said in a statement that the Justice Department "manufactured a damages claim at the last minute in an attempt to secure a jury trial." Without disclosing the size of its payment, Google said that after months of discovery, the Justice Department could only point to estimated damages of less than $1 million. The company said the government has said the case is "highly technical" and "outside the everyday knowledge of most prospective jurors."

Privacy

Police Found Ways to Use Facial Recognition Tech After Their Cities Banned It (yahoo.com)

An anonymous reader shared this report from the Washington Post: As cities and states push to restrict the use of facial recognition technologies, some police departments have quietly found a way to keep using the controversial tools: asking for help from other law enforcement agencies that still have access. Officers in Austin and San Francisco — two of the largest cities where police are banned from using the technology — have repeatedly asked police in neighboring towns to run photos of criminal suspects through their facial recognition programs, according to a Washington Post review of police documents...

Austin police officers received the results of at least 13 face searches from a neighboring police department since the city's 2020 ban — and appeared to get hits on some of them, according to documents obtained by The Post through public records requests and sources who shared them on the condition of anonymity. "That's him! Thank you very much," one Austin police officer wrote in response to an array of photos sent to him by an officer in Leander, Tex., who ran a facial recognition search, documents show. The man displayed in the pictures, John Curry Jr., was later charged with aggravated assault for allegedly charging toward someone with a knife, and is currently in jail awaiting trial. Curry's attorney declined to comment.

"Police officers' efforts to skirt these bans have not been previously reported and highlight the challenge of reining in police use of facial recognition," the article concludes.

It also points out that the technology "has played a role in the wrongful arrests of at least seven innocent Americans," according to the lawsuits they filed after charges against them were dismissed.

Crime

What Happened After a Reporter Tracked Down The Identity Thief Who Stole $5,000 (msn.com)

"$5,000 in cash had been withdrawn from my checking account — but not by me," writes journalist Linda Matchan in the Boston Globe. A police station manager reviewed footage from the bank — which was 200 miles away — and deduced that "someone had actually come into the bank and spoken to a teller, presented a driver's license, and then correctly answered some authentication questions to validate the account..." "You're pitting a teller against a national crime syndicate with massive resources behind them," says Paul Benda, executive vice president for risk, fraud, and cybersecurity at the American Bankers Association. "They're very well-funded, well-resourced criminal gangs doing this at an industrial scale."
The reporter writes that "For the past two years, I've worked to determine exactly who and what lay behind this crime..." [N]ow I had something new to worry about: Fraudsters apparently had a driver's license with my name on it... "Forget the fake IDs adolescents used to get into bars," says Georgia State's David Maimon, who is also head of fraud insights at SentiLink, a company that works with institutions across the United States to support and solve their fraud and risk issues. "Nowadays fraudsters are using sophisticated software and capable printers to create virtually impossible-to-detect fake IDs." They're able to create synthetic identities, combining legitimate personal information, such as a name and date of birth, with a nine-digit number that either looks like a Social Security number or is a real, stolen one. That ID can then be used to open financial accounts, apply for a bank or car loan, or for some other dodgy purpose that could devastate their victims' financial lives.

And there's a complex supply chain underpinning it all — "a whole industry on the dark web," says Eva Velasquez, president and CEO of the Identity Theft Resource Center, a nonprofit that helps victims undo the damage wrought by identity crime. It starts with the suppliers, Maimon told me — "the people who steal IDs, bring them into the market, and manufacture them. There's the producers who take the ID and fake driver's licenses and build the facade to make it look like they own the identity — trying to create credit reports for the synthetic identities, for example, or printing fake utility bills." Then there are the distributors who sell them in the dark corners of the web or the street or through text messaging apps, and finally the customers who use them and come from all walks of life. "We're seeing females and males and people with families and a lot of adolescents, because social media plays a very important role in introducing them to this world," says Maimon, whose team does surveillance of criminals' activities and interactions on the dark web. "In this ecosystem, folks disclose everything they do."

The reporter writes that "It's horrifying to discover, as I have recently, that someone has set up a tech company that might not even be real, listing my home as its principal address."

Two and a half months after the theft, the stolen $5,000 was back in the reporter's bank account — but it wasn't until a year later that the thief was identified. "The security video had been shared with New York's Capital Region Crime Analysis Center, where analysts have access to facial recognition technology, and was run through a database of booking photos. A possible match resulted.... She was already in custody elsewhere in New York... Evidently, Deborah was being sought by law enforcement in at least three New York counties. [All three cases involved bank-related identity fraud.]"

Deborah was finally charged with two separate felonies: grand larceny in the third degree for stealing property over $3,000, and identity theft. But Deborah missed her next two court dates, and disappeared. "She never came back to court, and now there were warrants for her arrest out of two separate courts."

After speaking to police officials, the reporter concludes "There was a good chance she was only doing the grunt work for someone else, maybe even a domestic or foreign-organized crime syndicate, and then suffering all the consequences."

The UK minister of state for security even says that "in some places people are literally captured and used as unwilling operators for fraudsters."

The Courts

Amazon Defends Its Use of Signal Messages in Court (geekwire.com)

America's Federal Trade Commission and 17 states filed an antitrust suit against Amazon in September. This week Amazon responded in court about its usage of Signal's "disappearing messages" feature.

Long-time Slashdot reader theodp shares GeekWire's report: At a company known for putting its most important ideas and strategies into comprehensive six-page memos, quick messages between executives aren't the place for meaningful business discussions. That's one of the points made by Amazon in its response Monday to the Federal Trade Commission's allegations about executives' use of the Signal encrypted communications app, known for its "disappearing messages" feature. "For these individuals, just like other short-form messaging, Signal was not a means to send 'structured, narrative text'; it was a way to get someone's attention or have quick exchanges on sensitive topics like public relations or human resources," the company says as part of its response, filed Monday in U.S. District Court in Seattle. Of course, for regulators investigating the company's business practices, these offhanded private comments between Amazon executives could be more revealing than carefully crafted memos meant for wider internal distribution. But in its filing this week, Amazon says there is no evidence that relevant messages have been lost, or that Signal was used to conceal communications that would have been responsive to the FTC's discovery requests. The company says "the equally logical explanation — made more compelling by the available evidence — is that such messages never existed."

In an April 25 motion, the FTC argued that the absence of Signal messages from Amazon discussing substantive business issues relevant to the case was a strong indication that such messages had disappeared. "Amazon executives deleted many Signal messages during Plaintiffs' pre-Complaint investigation, and Amazon did not instruct its employees to preserve Signal messages until over fifteen months after Amazon knew that Plaintiffs' investigation was underway," the FTC wrote in its motion. "It is highly likely that relevant information has been destroyed as a result of Amazon's actions and inactions...."

Amazon's filing quotes the company's founder, Jeff Bezos, saying in a deposition in the case that "[t]o discuss anything in text messaging or Signal messaging or anything like that of any substance would be akin to business malpractice. It's just too short of a messaging format...." The company's filing traces the initial use of Signal by executives back to the suspected hacking of Bezos' phone in 2018, which prompted the Amazon founder to seek ways to send messages more securely.

Crime

Deep Fake Scams Growing in Global Frequency and Sophistication, Victim Warns (cnn.com)

In an elaborate scam in January, "a finance worker was duped into attending a video call with people he believed were the chief financial officer and other members of staff," remembers CNN. But Hong Kong police later said that all of them turned out to be deepfake re-creations that duped the employee into transferring $25 million. According to police, the worker had initially suspected he had received a phishing email from the company's UK office, as it specified the need for a secret transaction to be carried out. However, the worker put aside his doubts after the video call because other people in attendance had looked and sounded just like colleagues he recognized.
Now the targeted company has been revealed: a major engineering consulting firm with 18,500 employees across 34 offices. A spokesperson for London-based Arup told CNN on Friday that it notified Hong Kong police in January about the fraud incident, and confirmed that fake voices and images were used. "Unfortunately, we can't go into details at this stage as the incident is still the subject of an ongoing investigation. However, we can confirm that fake voices and images were used," the spokesperson said in an emailed statement. "Our financial stability and business operations were not affected and none of our internal systems were compromised," the person added...

Authorities around the world are growing increasingly concerned about the sophistication of deepfake technology and the nefarious uses it can be put to. In an internal memo seen by CNN, Arup's East Asia regional chairman, Michael Kwok, said the "frequency and sophistication of these attacks are rapidly increasing globally, and we all have a duty to stay informed and alert about how to spot different techniques used by scammers."

The company's global CIO emailed CNN this statement. "Like many other businesses around the globe, our operations are subject to regular attacks, including invoice fraud, phishing scams, WhatsApp voice spoofing, and deepfakes.

"What we have seen is that the number and sophistication of these attacks has been rising sharply in recent months."

Slashdot reader st33ld13hl adds that in a world of Deep Fakes, insurance company USAA is now asking its customers to authenticate with voice. (More information here.)

Thanks to Slashdot reader quonset for sharing the news.
Earth

America Takes Its Biggest Step Yet to End Coal Mining (msn.com)

The Washington Post reports that America took "one of its biggest steps yet to keep fossil fuels in the ground," announcing Thursday that it will end new coal leasing in the Powder River Basin, "which produces nearly half the coal in the United States...

"It could prevent billions of tons of coal from being extracted from more than 13 million acres across Montana and Wyoming, with major implications for U.S. climate goals." A significant share of the nation's fossil fuels come from federal lands and waters. The extraction and combustion of these fuels accounted for nearly a quarter of U.S. carbon dioxide emissions between 2005 and 2014, according to a study by the U.S. Geological Survey. In a final environmental impact statement released Thursday, Interior's Bureau of Land Management found that continued coal leasing in the Powder River Basin would harm the climate and public health. The bureau determined that no future coal leasing should happen in the basin, and it estimated that coal mining in the Wyoming portion of the region would end by 2041.

Last year, the Powder River Basin generated 251.9 million tons of coal, accounting for nearly 44 percent of all coal produced in the United States. Under the bureau's determination, the 14 active coal mines in the Powder River Basin can continue operating on lands they have leased, but they cannot expand onto other public lands in the region... "This means that billions of tons of coal won't be burned, compared to business as usual," said Shiloh Hernandez, a senior attorney at the environmental law firm Earthjustice. "It's good news, and it's really the only defensible decision the BLM could have made, given the current climate crisis...."

The United States is moving away from coal, which has struggled to compete economically with cheaper gas and renewable energy. U.S. coal output tumbled 36 percent from 2015 to 2023, according to the Energy Information Administration. The Sierra Club's Beyond Coal campaign estimates that 382 coal-fired power plants have closed or announced plans to retire, with 148 remaining. In addition, the Environmental Protection Agency finalized an ambitious set of rules in April aimed at slashing air pollution, water pollution and planet-warming emissions spewing from the nation's power plants. One of the most significant rules will push all existing coal plants by 2039 to either close or capture 90 percent of their carbon dioxide emissions at the smokestack.

"The nation's electricity generation needs are being met increasingly by wind, solar and natural gas," said Tom Sanzillo, director of financial analysis at the Institute for Energy Economics and Financial Analysis, an energy think tank. "The nation doesn't need any increase in the amount of coal under lease out of the Powder River Basin."

Government

Are AI-Generated Search Results Still Protected by Section 230? (msn.com) 62

Starting this week, millions will see AI-generated answers in Google's search results by default. But the announcement Tuesday at Google's annual developer conference suggests a future that's "not without its risks, both to users and to Google itself," argues the Washington Post: For years, Google has been shielded from liability for linking users to bad, harmful or illegal information by Section 230 of the Communications Decency Act. But legal experts say that shield probably won't apply when its AI answers search questions directly. "As we all know, generative AIs hallucinate," said James Grimmelmann, professor of digital and information law at Cornell Law School and Cornell Tech. "So when Google uses a generative AI to summarize what webpages say, and the AI gets it wrong, Google is now the source of the harmful information," rather than just the distributor of it...

Adam Thierer, senior fellow at the nonprofit free-market think tank R Street, worries that innovation could be throttled if Congress doesn't extend Section 230 to cover AI tools. "As AI is integrated into more consumer-facing products, the ambiguity about liability will haunt developers and investors," he predicted. "It is particularly problematic for small AI firms and open-source AI developers, who could be decimated as frivolous legal claims accumulate." But John Bergmayer, legal director for the digital rights nonprofit Public Knowledge, said there are real concerns that AI answers could spell doom for many of the publishers and creators that rely on search traffic to survive — and which AI, in turn, relies on for credible information. From that standpoint, he said, a liability regime that incentivizes search engines to continue sending users to third-party websites might be "a really good outcome."

Meanwhile, some lawmakers are looking to ditch Section 230 altogether. [Last] Sunday, the top Democrat and Republican on the House Energy and Commerce Committee released a draft of a bill that would sunset the statute within 18 months, giving Congress time to craft a new liability framework in its place. In a Wall Street Journal op-ed, Reps. Cathy McMorris Rodgers (R-Wash.) and Frank Pallone Jr. (D-N.J.) argued that the law, which helped pave the way for social media and the modern internet, has "outlived its usefulness."

The tech industry trade group NetChoice [which includes Google, Meta, X, and Amazon] fired back on Monday that scrapping Section 230 would "decimate small tech" and "discourage free speech online."

The digital law professor points out Google has traditionally escaped legal liability by attributing its answers to specific sources — but it's not just Google that has to worry about the issue. The article notes that Microsoft's Bing search engine also supplies AI-generated answers (from Microsoft's Copilot). "And Meta recently replaced the search bar in Facebook, Instagram and WhatsApp with its own AI chatbot."

The article also notes that several U.S. Congressional committees are considering "a bevy" of AI bills...
Transportation

Eight Automakers Grilled by US Lawmakers Over Sharing of Connected Car Data With Police (autoblog.com) 35

An anonymous reader shared this report from Automotive News: Automotive News recently reported that eight automakers sent vehicle location data to police without a court order or warrant. The eight companies told senators that they provide police with data when subpoenaed, a practice that drew sharp responses from several officials.

BMW, Kia, Mazda, Mercedes-Benz, Nissan, Subaru, Toyota, and Volkswagen presented their responses to lawmakers. Senators Ron Wyden from Oregon and Ed Markey from Massachusetts penned a letter to the Federal Trade Commission, urging investigative action. "Automakers have not only kept consumers in the dark regarding their actual practices, but multiple companies misled consumers for over a decade by failing to honor the industry's own voluntary privacy principles," they wrote.

Ten years ago, all of those companies agreed to the Consumer Privacy Protection Principles, a voluntary code that said automakers would only provide data with a warrant or order issued by a court. Subpoenas, on the other hand, only require approval from law enforcement. Though it wasn't part of the eight automakers' response, General Motors has a class-action suit on its hands, claiming that it shared data with LexisNexis Risk Solutions, a company that provides insurers with information to set rates.

The article notes that the lawmakers praised Honda, Ford, GM, Tesla, and Stellantis for requiring warrants, "except in the case of emergencies or with customer consent."
The Courts

The Delta Emulator Is Changing Its Logo After Adobe Threatened It (theverge.com) 56

After Adobe threatened legal action, the Delta Emulator said it'll abandon its current logo for a different, yet-to-be-revealed mark. The issue centers around Delta's stylized letter "D", which the digital media giant says is too similar to its stylized letter "A". The Verge reports: On May 7th, Adobe's lawyers reached out to Delta with a firm but kindly written request to go find a different icon, an email that didn't contain an explicit threat or even use the word infringement -- it merely suggested that Delta might "not wish to confuse consumers or otherwise violate Adobe's rights or the law." But Adobe didn't wait for a reply. On May 8th, one day later, Testut got another email from Apple that suggested his app might be at risk because Adobe had reached out to allege Delta was infringing its intellectual property rights.

"We responded to both Apple and Adobe explaining our icon was a stylized Greek letter delta -- not an A -- but that we would update the Delta logo anyway to avoid confusion," Testut tells us. The icon you're seeing on the App Store now is just a temporary one, he says, as the team is still working on a new logo. "Both the App Store and AltStore versions have been updated with this temporary icon, but the plan is to update them to the final updated logo with Delta 1.6 once it's finished."
