AI

OpenAI Must Defend ChatGPT Fabrications After Failing To Defeat Libel Suit 65

An anonymous reader quotes a report from Ars Technica: OpenAI may finally have to answer for ChatGPT's "hallucinations" in court after a Georgia judge recently ruled against the tech company's motion to dismiss a radio host's defamation suit (PDF). OpenAI had argued that ChatGPT's output cannot be considered libel, partly because that output does not constitute a "publication," a key element of a defamation claim. In its motion to dismiss, OpenAI also argued that Georgia radio host Mark Walters could not prove that the company acted with actual malice, that anyone believed the allegedly libelous statements were true, or that he was harmed by the alleged publication.

It's too early to say whether Judge Tracie Cason found OpenAI's arguments persuasive. In her order denying OpenAI's motion to dismiss, which MediaPost shared here, Cason did not specify how she arrived at her decision, saying only that she had "carefully" considered the arguments and applicable laws. A court filing (PDF) from John Monroe, Walters' attorney, opposing the motion to dismiss last year may hold some clues as to how Cason reached her decision. Monroe had argued that OpenAI improperly moved to dismiss the lawsuit by arguing facts that have yet to be proven in court. If OpenAI intended the court to rule on those arguments, Monroe suggested, a motion for summary judgment would have been the proper step at this stage in the proceedings, not a motion to dismiss.

Had OpenAI gone that route, though, Walters would have had an opportunity to present additional evidence. To survive a motion to dismiss, all Walters had to do was show that his complaint was reasonably supported by facts, Monroe argued. Having failed to convince the court that Walters had no case, OpenAI will now likely see its legal theories regarding liability for ChatGPT's "hallucinations" face their first test in court. "We are pleased the court denied the motion to dismiss so that the parties will have an opportunity to explore, and obtain a decision on, the merits of the case," Monroe told Ars.
"Walters sued OpenAI after a journalist, Fred Riehl, warned him that in response to a query, ChatGPT had fabricated an entire lawsuit," notes Ars. "Generating an entire complaint with an erroneous case number, ChatGPT falsely claimed that Walters had been accused of defrauding and embezzling funds from the Second Amendment Foundation."

"With the lawsuit moving forward, curious chatbot users everywhere may finally get the answer to a question that has been unclear since ChatGPT quickly became the fastest-growing consumer application of all time after its launch in November 2022: Will ChatGPT's hallucinations be allowed to ruin lives?"
Chrome

Chrome Updates Incognito Warning To Admit Google Tracks Users In 'Private' Mode (arstechnica.com) 40

An anonymous reader quotes a report from Ars Technica: Google is updating the warning on Chrome's Incognito mode to make it clear that Google and websites run by other companies can still collect your data in the web browser's semi-private mode. The change is being made as Google prepares to settle a class-action lawsuit that accuses the firm of privacy violations related to Chrome's Incognito mode. The expanded warning was recently added to Chrome Canary, a nightly build for developers. The warning appears to directly address one of the lawsuit's complaints, that the Incognito mode's warning doesn't make it clear that Google collects data from users of the private mode.

Many tech-savvy people already know that while private modes in web browsers prevent some data from being stored on your device, they don't prevent tracking by websites or Internet service providers. But many other people may not understand exactly what Incognito mode does, so the more specific warning could help educate users. The new warning seen in Chrome Canary when you open an incognito window says: "You've gone Incognito. Others who use this device won't see your activity, so you can browse more privately. This won't change how data is collected by websites you visit and the services they use, including Google." The wording could be interpreted to refer to Google websites and third-party websites, including third-party websites that rely on Google ad services. The new warning was not yet in the developer, beta, and stable branches of Chrome as of today. It also wasn't in Chromium. The change to Canary was previously reported by MSPowerUser.

Incognito mode in the stable version of Chrome still says: "You've gone Incognito. Now you can browse privately, and other people who use this device won't see your activity." Among other changes, the Canary warning replaces "browse privately" with "browse more privately." The stable and Canary warnings both say that your browsing activity might still be visible to "websites you visit," "your employer or school," or "your Internet service provider." But only the Canary warning currently includes the caveat that Incognito mode "won't change how data is collected by websites you visit and the services they use, including Google." The old and new warnings both say that Incognito mode prevents Chrome from saving your browsing history, cookies and site data, and information entered in forms, but that "downloads, bookmarks and reading list items will be saved." Both warnings link to this page, which provides more detail on Incognito mode.

The Courts

Supreme Court Rejects Apple-Epic Games Legal Battle (reuters.com) 52

The U.S. Supreme Court on Tuesday declined to hear a challenge by Apple to a lower court's decision requiring changes to certain rules in its lucrative App Store, as the justices shunned the lengthy legal battle between the iPhone maker and Epic Games, maker of the popular video game "Fortnite." Reuters: The justices also turned away Epic's appeal of the lower court's ruling that Apple's App Store policies limiting how software is distributed and paid for do not violate federal antitrust laws. The justices gave no reasons for their decision to deny the appeals. In a series of posts on X, Epic CEO Tim Sweeney wrote: The Supreme Court denied both sides' appeals of the Epic v. Apple antitrust case. The court battle to open iOS to competing stores and payments is lost in the United States. A sad outcome for all developers. Now the District Court's injunction against Apple's anti-steering rule is in effect, and developers can include in their apps "buttons, external links, or other calls to action that direct customers to purchasing mechanisms, in addition to IAP."

As of today, developers can begin exercising their court-established right to tell US customers about better prices on the web. These awful Apple-mandated confusion screens are over and done forever. The fight goes on. Regulators are taking action and policymakers around the world are passing new laws to end Apple's illegal and anticompetitive app store practices. The European Union's Digital Markets Act goes into effect March 7.

Piracy

Reddit Must Share IP Addresses of Piracy-Discussing Users, Film Studios Say 36

For the third time in under a year, film studios are pressing Reddit to reveal users allegedly discussing piracy, despite two prior failed attempts. Studios including Voltage Holdings and Screen Media have filed fresh motions to compel Reddit to comply with a subpoena seeking IP addresses and logs of six Redditors, claiming the information is needed for copyright suits against internet provider Frontier Communications.

The same federal judge previously denied the studios' bid to unmask Reddit users, citing First Amendment protections. However, the studios now argue IP addresses fall outside privacy rights. Reddit maintains the new subpoena fails to meet the bar for identifying anonymous online speakers.
EU

Python Software Foundation Says EU's 'Cyber Resilience Act' Includes Wins for Open Source (blogspot.com) 18

Last April the Python Software Foundation warned that Europe's proposed Cyber Resilience Act jeopardized their organization and "the health of the open-source software community" with overly broad policies that "will unintentionally harm the users they are intended to protect."

They'd worried that the Python Software Foundation could incur financial liabilities just for hosting Python and its PyPI package repository due to the proposed law's attempts to penalize cybersecurity lapses all the way upstream. But a new blog post this week cites some improvements: We asked for increased clarity, specifically:

"Language that specifically exempts public software repositories that are offered as a public good for the purpose of facilitating collaboration would make things much clearer. We'd also like to see our community, especially the hobbyists, individuals and other under-resourced entities who host packages on free public repositories like PyPI be exempt."


The good news is that the CRA text changed a lot between the time the open source community — including the PSF — started expressing our concerns and the Act's final text, which was cemented on December 1st. That text introduces the idea of an "open source steward."

"'open-source software steward' means any legal person, other than a manufacturer, which has the purpose or objective to systematically provide support on a sustained basis for the development of specific products with digital elements qualifying as free and open-source software that are intended for commercial activities, and ensures the viability of those products;" (p. 76)


[...] So are we totally done paying attention to European legislation? Ah, while it would be nice for the Python community to be able to cross a few things off our to-do list, that's not quite how it works. Firstly, the concept of an "open source steward" is a brand new idea in European law. So, we will be monitoring the conversation as this new concept is implemented or interacts with other bits of European law to make sure that the understanding continues to reflect the intent and the realities of open source development. Secondly, there are some other pieces of legislation in the works that may also impact the Python ecosystem so we will be watching the Product Liability Directive and keeping up with the discussion around standard-essential patents to make sure that the effects on Python and open source development are intentional (and hopefully benevolent, or at least benign.)

AI

What Laws Will We Need to Regulate AI? (mindmatters.ai) 86

johnnyb (Slashdot reader #4,816) is a senior software R&D engineer who shares his proposed framework for "what AI legislation should cover, what policy goals it should aim to achieve, and what we should be wary of along the way." Some excerpts:

Protect Content Consumers from AI
The government should legislate technical and visual markers for AI-generated content, and the FTC should ensure that consumers always know whether or not there is a human taking responsibility for the content. This could be done by creating special content markings which communicate to users that content is AI-generated... This will enable Google to do things such as allow users to not include AI content when searching. It will enable users to detect which parts of their content are AI-generated and apply the appropriate level of skepticism. And future AI language models can also use these tags to know not to consume AI-generated content...

Ensure Companies are Clear on Who's Taking Responsibility
It's fine for a software product to produce a result that the software company views as advisory only, but it has to be clearly marked as such. Additionally, if one company includes the software built by another company, all companies need to be clear as to which outputs are derived from identifiable algorithms and which outputs are the result of AI. If the company supplying the component is not willing to stand behind the AI results that are produced, then that needs to be made clear.

Clarify Copyright Rules on Content Used in Models

Note that nothing here limits the technological development of Artificial Intelligence... The goal of these proposals is to give clarity to all involved as to what the expectations and responsibilities of each party are.

OpenAI's Sam Altman has also been pondering this, but on a much larger scale. In a (pre-ouster) interview with Bill Gates, Altman pondered what happens at the next level.

That is, what happens "If we are right, and this technology goes as far as we think it's going to go, it will impact society, geopolitical balance of power, so many things..." [F]or these, still hypothetical, but future extraordinarily powerful systems — not like GPT-4, but something with 100,000 or a million times the compute power of that, we have been socialized in the idea of a global regulatory body that looks at those super-powerful systems, because they do have such global impact. One model we talk about is something like the IAEA. For nuclear energy, we decided the same thing. This needs a global agency of some sort, because of the potential for global impact. I think that could make sense...

I think if it comes across as asking for a slowdown, that will be really hard. If it instead says, "Do what you want, but any compute cluster above a certain extremely high-power threshold" — and given the cost here, we're talking maybe five in the world, something like that — any cluster like that has to submit to the equivalent of international weapons inspectors. The model there has to be made available for safety audit, pass some tests during training, and before deployment. That feels possible to me. I wasn't that sure before, but I did a big trip around the world this year, and talked to heads of state in many of the countries that would need to participate in this, and there was almost universal support for it.

The Courts

Despite 16-Year Glitch, UK Law Still Considers Computers 'Reliable' By Default (theguardian.com) 96

Long-time Slashdot reader Geoffrey.landis writes: Hundreds of British postal workers wrongly convicted of theft due to faulty accounting software could have their convictions reversed, according to a story from the BBC. Between 1999 and 2015, the Post Office prosecuted 700 sub-postmasters and sub-postmistresses — an average of one a week — based on information from a computer system called Horizon, after faulty software wrongly made it look like money was missing. Some 283 more cases were brought by other bodies including the Crown Prosecution Service.
2024 began with a four-part dramatization of the scandal airing on British television, and the BBC reporting today that its reporters originally investigating the story confronted "lobbying, misinformation and outright lies."

Yet the Guardian notes that to this day in English and Welsh law, computers are still assumed to be "reliable" unless and until proven otherwise. But critics of this approach say this reverses the burden of proof normally applied in criminal cases. Stephen Mason, a barrister and expert on electronic evidence, said: "It says, for the person who's saying 'there's something wrong with this computer', that they have to prove it. Even if it's the person accusing them who has the information...."

He and colleagues had been expressing alarm about the presumption as far back as 2009. "My view is that the Post Office would never have got anywhere near as far as it did if this presumption wasn't in place," Mason said... [W]hen post office operators were accused of having stolen money, the hallucinatory evidence of the Horizon system was deemed sufficient proof. Without any evidence to the contrary, the defendants could not force the system to be tested in court and their loss was all but guaranteed.

The influence of English common law internationally means that the presumption of reliability is widespread. Mason cites cases from New Zealand, Singapore and the U.S. that upheld the standard and just one notable case where the opposite happened... The rise of AI systems made it even more pressing to reassess the law, said Noah Waisberg, the co-founder and CEO of the legal AI platform Zuva.

Thanks to Slashdot reader Bruce66423 for sharing the article.
Earth

America Cracks Down on Methane Emissions from Oil and Gas Facilities (msn.com) 36

Friday America's Environmental Protection Agency "proposed steep new fees on methane emissions from oil and gas facilities," reports the Washington Post, "escalating a crackdown on the fossil fuel industry's planet-warming pollution."

Methane does not linger in the atmosphere as long as carbon dioxide, but it is far more effective at trapping heat — roughly 80 times more potent in its first decade. It is responsible for roughly a third of global warming today, and the oil and gas industry accounts for about 14 percent of the world's annual methane emissions, according to estimates from the International Energy Agency. Other large methane sources include livestock, landfills and coal mines.
So America's new Methane Emissions Reduction Program "levies a fee on wasteful methane emissions from large oil and gas facilities," according to the article: The fee starts at $900 per metric ton of emissions in 2024, increasing to $1,200 in 2025 and $1,500 in 2026 and thereafter. The EPA proposal lays out how the fee will be implemented, including how the charge will be calculated...
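The published schedule is simple enough to express directly. Here is a minimal Python sketch of the per-ton fee by year, using only the figures quoted above; the function names and the zero-fee treatment of pre-2024 years are illustrative assumptions, not details from the EPA proposal:

```python
def methane_fee_per_ton(year: int) -> int:
    """Fee in dollars per metric ton of wasteful methane emissions,
    per the schedule in the Methane Emissions Reduction Program."""
    if year < 2024:
        return 0       # program's fee begins in 2024 (assumed zero before)
    if year == 2024:
        return 900     # $900 per metric ton in 2024
    if year == 2025:
        return 1_200   # rises to $1,200 in 2025
    return 1_500       # $1,500 in 2026 and thereafter


def total_charge(tons: float, year: int) -> float:
    """Total fee for a facility emitting `tons` metric tons in `year`."""
    return tons * methane_fee_per_ton(year)


# e.g. a facility with 100 metric tons of covered emissions in 2024
print(total_charge(100, 2024))  # 90000
```

Note this ignores the exemptions mentioned below (facilities complying with the EPA's final methane standards owe no fee) and any details of how covered emissions are measured.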

At the U.N. Climate Change Conference in Dubai in December, EPA Administrator Michael Regan announced final standards to limit methane emissions from U.S. oil and gas operations. Fossil fuel companies that comply with these standards will be exempt from the new fee... Fred Krupp, president of the Environmental Defense Fund, said the fee will encourage fossil fuel firms to deploy innovative technologies that detect methane leaks. Such cutting-edge technologies range from ground-based sensors to satellites in space. "Proven solutions to cut oil and gas methane and to avoid the fee are being used by leading companies in states across the country," Krupp said in a statement...

In addition to methane, the EPA proposal could slash emissions of hazardous air pollutants, including smog-forming volatile organic compounds and cancer-causing benzene [according to an EPA official].

The federal government also gave America's fossil fuel companies nearly $1 billion to help them comply with the methane regulation, according to the article.

The article also includes this statement from an executive at the American Petroleum Institute, the top lobbying arm of the U.S. oil and gas industry, complaining that the fines create a "regime" that would "stifle innovation," and urging Congress to repeal it.
Censorship

Removal of Netflix Film Shows Advancing Power of India's Hindu Right Wing (nytimes.com) 110

An anonymous reader quotes a report from the New York Times: The trailer for "Annapoorani: The Goddess of Food" promised a sunny if melodramatic story of uplift in a south Indian temple town. A priest's daughter enters a cooking tournament, but social obstacles complicate her inevitable rise to the top. Annapoorani's father, a Brahmin sitting at the top of Hindu society's caste ladder, doesn't want her to cook meat, a taboo in their lineage. There is even the hint of a Hindu-Muslim romantic subplot. On Thursday, two weeks after the movie premiered, Netflix abruptly pulled it from its platform. An activist, Ramesh Solanki, a self-described "very proud Hindu Indian nationalist," had filed a police complaint arguing that the film was "intentionally released to hurt Hindu sentiments." He said it mocked Hinduism by "depicting our gods consuming nonvegetarian food."

The production studio quickly responded with an abject letter to a right-wing group linked to the government of Prime Minister Narendra Modi, apologizing for having "hurt the religious sentiments of the Hindus and Brahmins community." The movie was soon removed from Netflix both in India and around the world, demonstrating the newfound power of Hindu nationalists to affect how Indian society is depicted on the screen. Nilesh Krishnaa, the movie's writer and director, tried to anticipate the possibility of offending some of his fellow Indians. Food, Brahminical customs and especially Hindu-Muslim relations are all part of a third rail that has grown more powerfully electrified during Mr. Modi's decade in power. But, Mr. Krishnaa told an Indian newspaper in November, "if there was something disturbing communal harmony in the film, the censor board would not have allowed it."

With "Annapoorani," Netflix appears to have in effect done the censoring itself even when the censor board did not. In other cases, Netflix now seems to be working with the board unofficially, though streaming services in India do not fall under the regulations that govern traditional Indian cinema. For years, Netflix ran unredacted versions of Indian films that had sensitive parts removed for their theatrical releases -- including political messages that contradicted the government's line. Since last year, though, the streaming versions of movies from India match the versions that were censored locally, no matter where in the world they are viewed. [...] Nikhil Pahwa, a co-founder of the Internet Freedom Foundation, thinks the streaming companies are ready to capitulate: "They're unlikely to push back against any kind of bullying or censorship, even though there is no law in India" to force them.

Privacy

Apple Knew AirDrop Users Could Be Identified and Tracked as Early as 2019 (cnn.com) 27

Security researchers warned Apple as early as 2019 about vulnerabilities in its AirDrop wireless sharing function that Chinese authorities claim they recently used to track down users of the feature, the researchers told CNN, in a case that experts say has sweeping implications for global privacy. From a report: The Chinese government's actions targeting a tool that Apple customers around the world use to share photos and documents -- and Apple's apparent inaction to address the flaws -- revive longstanding concerns by US lawmakers and privacy advocates about Apple's relationship with China and about authoritarian regimes' ability to twist US tech products to their own ends.

AirDrop lets Apple users who are near each other share files using a proprietary mix of Bluetooth and other wireless connectivity without having to connect to the internet. The sharing feature has been used by pro-democracy activists in Hong Kong and the Chinese government has cracked down on the feature in response. A Chinese tech firm, Beijing-based Wangshendongjian Technology, was able to compromise AirDrop to identify users on the Beijing subway accused of sharing "inappropriate information," judicial authorities in Beijing said this week. Although Chinese officials portrayed the exploit as an effective law enforcement technique, internet freedom advocates are urging Apple to address the issue quickly and publicly.

Power

White House Unveils $623 Million In Funding To Boost EV Charging Points (theguardian.com) 101

An anonymous reader quotes a report from The Guardian: Joe Biden's administration has unveiled $623 million in funding to boost the number of electric vehicle charging points in the U.S., amid concerns that the transition to zero-carbon transportation isn't keeping pace with goals to tackle the climate crisis. The funding will be distributed in grants for dozens of programs across 22 states, such as EV chargers for apartment blocks in New Jersey, rapid chargers in Oregon and hydrogen fuel chargers for freight trucks in Texas. In all, it's expected the money, drawn from the bipartisan infrastructure law, will add 7,500 chargers to the US total.

There are about 170,000 electric vehicle chargers in the U.S., a huge leap from a network that was barely visible prior to Biden taking office, and the White House has set a goal for 500,000 chargers to help support the shift away from gasoline and diesel cars. "The U.S. is taking the lead globally on electric vehicles," said Ali Zaidi, a climate adviser to Biden who said the US is on a trajectory to "meet and exceed" the administration's charger goal. "We will continue to see this buildout over the coming years and decades until we've achieved a fully net zero transportation sector," he added.
On Thursday, the House approved legislation to undo a Biden administration rule meant to facilitate the proliferation of EV charging stations. "S. J. Res. 38 from Sen. Marco Rubio (R-Fla.), would scrap a Federal Highway Administration waiver from domestic sourcing requirements for EV chargers funded by the 2021 bipartisan infrastructure law. It already passed the Senate 50-48," reports Politico.

"A waiver undercuts domestic investments and risks empowering foreign nations," said Rep. Sam Graves (R-Mo.), chair of the Transportation and Infrastructure Committee, during House debate Thursday. "If the administration is going to continue to push for a massive transition to EVs, it should ensure and comply with Buy America requirements." The White House promised to veto the measure, saying it was so poorly worded it would backfire and actually result in fewer new American-made charging stations.
The Courts

eBay To Pay $3 Million Penalty For Employees Sending Live Cockroaches, Fetal Pig To Bloggers (cbsnews.com) 43

E-commerce giant eBay agreed to pay a $3 million penalty for the harassment and stalking of a Massachusetts couple by several of its employees. "The couple, Ina and David Steiner, had been subjected to threats and bizarre deliveries, including live spiders, cockroaches, a funeral wreath and a bloody pig mask in August 2019," reports CBS News. From the report: Thursday's fine comes after several eBay employees ran a harassment and intimidation campaign against the Steiners, who publish a news website focusing on players in the e-commerce industry. "eBay engaged in absolutely horrific, criminal conduct. The company's employees and contractors involved in this campaign put the victims through pure hell, in a petrifying campaign aimed at silencing their reporting and protecting the eBay brand," said acting U.S. Attorney Joshua Levy. "We left no stone unturned in our mission to hold accountable every individual who turned the victims' world upside-down through a never-ending nightmare of menacing and criminal acts."

The Justice Department criminally charged eBay with two counts of stalking through interstate travel, two counts of stalking through electronic communications services, one count of witness tampering and one count of obstruction of justice. The company agreed to pay $3 million as part of a deferred prosecution agreement. Under the agreement, eBay will be required to retain an independent corporate compliance monitor for three years, officials said, to "ensure that eBay's senior leadership sets a tone that makes compliance with the law paramount, implements safeguards to prevent future criminal activity, and makes clear to every eBay employee that the idea of terrorizing innocent people and obstructing investigations will not be tolerated," Levy said.

Former U.S. Attorney Andrew Lelling said the plan to target the Steiners, which he described as a "campaign of terror," was hatched in April 2019 at eBay. Devin Wenig, eBay's CEO at the time, shared a link to a post Ina Steiner had written about his annual pay. The company's chief communications officer, Steve Wymer, responded: "We are going to crush this lady." About a month later, Wenig texted: "Take her down." Prosecutors said Wymer later texted eBay security director Jim Baugh. "I want to see ashes. As long as it takes. Whatever it takes," Wymer wrote. Investigators said Baugh set up a meeting with security staff and dispatched a team to Boston, about 20 miles from where the Steiners live. "Senior executives at eBay were frustrated with the newsletter's tone and content, and with the comments posted beneath the newsletter's articles," the Department of Justice wrote in its Thursday announcement.
Two former eBay security executives were sentenced to prison over the incident.
Google

Google Formally Endorses Right To Repair, Will Lobby To Pass Repair Laws (404media.co) 47

Google formally endorsed the concept of right to repair Thursday and is set to testify in favor of a strong right to repair bill in Oregon later Thursday, a massive step forward for the right to repair movement. 404 Media: "Google believes that users should have more control over repair -- including access to the same documentation, parts and tools that original equipment manufacturer (OEM) repair channels have -- which is often referred to as 'Right to Repair,'" Google's Steven Nickel wrote in a white paper published Thursday.

Crucially, Google specifically says that regulators should ban "parts pairing," which is a tactic used by Apple, John Deere, and other major manufacturers to artificially restrict which repair parts can be used with a given device: "Policies should constrain OEMs from imposing unfair anti-repair practices. For example, parts-pairing, the practice of using software barriers to obstruct consumers and independent repair shops from replacing components, or other restrictive impediments to repair should be discouraged," the white paper says.

Bitcoin

Englishman Who Posed As HyperVerse CEO Says Sorry To Investors Who Lost Millions (theguardian.com) 23

Stephen Harrison, an Englishman living in Thailand who posed as chief executive Steven Reece Lewis for the launch of the HyperVerse crypto scheme, told Guardian Australia that he was paid to play the role of chief executive but denies having "pocketed" any of the money lost. He says he received 180,000 Thai baht (about $7,500) over nine months and a free suit, adding that he was "shocked" to learn the company had presented him as having fake credentials to promote the scheme. From the report: He said he felt sorry for those who had lost money in relation to the scheme -- which he said he had no role in -- an amount Chainalysis estimates at US$1.3 billion in 2022 alone. "I am sorry for these people," he said. "Because they believed some idea with me at the forefront and believed in what I said, and God knows what these people have lost. And I do feel bad about this. I do feel deeply sorry for these people, I really do. You know, it's horrible for them. I just hope that there is some resolution. I know it's hard to get the money back off these people or whatever, but I just hope there can be some justice served in all of this where they can get to the bottom of this." He said he wanted to make clear he had "certainly not pocketed" any of the money lost by investors.

Harrison, who at the time was a freelance television presenter engaged in unpaid football commentary, said he had been approached and offered the HyperVerse work by a friend of a friend. He said he was new to the industry and had been open to picking up more work and experience as a corporate "presenter." "I was told I was acting out a role to represent the business and many people do this," Harrison said. He said he trusted his agent and accepted that. After reading through the scripts he said he was initially suspicious about the company he was hired to represent because he was unfamiliar with the crypto industry, but said he had been reassured by his agent that the company was legitimate. He said he had also done some of his own online research into the organization and found articles about the Australian blockchain entrepreneur and HyperTech chairman Sam Lee. "I went away and I actually looked at the company because I was concerned that it could be a scam," Harrison said. "So I looked online a bit and everything seemed OK, so I rolled with it."
The HyperVerse crypto scheme was promoted by Lee and his business partner Ryan Xu, both of whom were founders of the collapsed Australian bitcoin company Blockchain Global. "Blockchain Global owes creditors $58 million and its liquidator has referred Xu and Lee to the Australian Securities and Investments Commission for alleged possible breaches of the Corporations Act," reports The Guardian. "Asic has said it does not intend to take action at this time."

Rodney Burton, known as "Bitcoin Rodney," was arrested and charged in the U.S. on Monday for his alleged role in promoting the HyperVerse crypto scheme. The IRS alleges Burton was "part of a network that made 'fraudulent' presentations claiming high returns for investors based on crypto-mining operations that did not exist," reports The Guardian.
Piracy

Piracy Is Surging Again Because Streaming Execs Ignored The Lessons Of The Past (techdirt.com) 259

Karl Bode, reporting for TechDirt: Back in 2019 we noted how the streaming sector risked driving consumers back to piracy if they didn't heed the lessons of the past. We explored how the rush to raise rates, nickel-and-dime users, implement arbitrary restrictions, and force users toward hunting and pecking their way through a confusing platter of exclusives and availability windows risked driving befuddled users back to piracy. And lo and behold, that's exactly what's happening.

After several decades of kicking and screaming, studio and music execs somewhere around 2010 finally realized they needed to offer users affordable access to easy-to-use online content resources. They finally realized they needed to compete with piracy and focus on consumer satisfaction whether they liked the concept or not. And unsurprisingly, once they learned that lesson, piracy began to dramatically decrease. That was until 2021, when piracy rates began to climb slowly upward again in the U.S. and EU. As the Daily Beast notes, users have grown increasingly frustrated at having to hunt and peck through a universe of different, often terrible streaming services just to find a single film or television program.

As every last broadcaster, cable company, broadband provider, and tech company got into streaming they began to lock down "must watch" content behind an ever-shifting number of exclusivity silos, across an ocean of sometimes substandard "me too" services. Initially competition worked, but as the market saturated and the most powerful companies started to silo content, those benefits have been muted. Now users have to hunt and peck between Disney+, Netflix, Starz, Max, Apple+, Acorn, Paramount+, Hulu, Peacock, Amazon Prime, and countless other services in the hopes that a service has the rights to a particular film or program. When you already pay for five different services, you're not keen to sign up to fucking Starz just to watch a single 90s film. And availability is constantly shifting, confusing things further.

China

AirDrop 'Cracked' By Chinese Authorities To Identify Senders (macrumors.com) 25

According to Bloomberg, Apple's AirDrop feature has been cracked by a Chinese state-backed institution to identify senders who share "undesirable content." MacRumors reports: AirDrop is Apple's ad-hoc service that lets users discover nearby Macs and iOS devices and securely transfer files between them over Wi-Fi and Bluetooth. Users can send and receive photos, videos, documents, contacts, passwords and anything else that can be transferred from a Share Sheet. Apple advertises the protocol as secure because the wireless connection uses Transport Layer Security (TLS) encryption, but the Beijing Municipal Bureau of Justice (BMBJ) says it has devised a way to bypass the protocol's encryption and reveal identifying information.

According to the BMBJ's website, iPhone device logs were analyzed to create a "rainbow table" which allowed investigators to convert hidden hash values into the original text and correlate the phone numbers and email accounts of AirDrop content senders. The "technological breakthrough" has successfully helped the public security authorities identify a number of criminal suspects who used the AirDrop function to spread illegal content, the BMBJ added. "It improves the efficiency and accuracy of case-solving and prevents the spread of inappropriate remarks as well as potential bad influences," the bureau added.
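This kind of attack works because the space of possible phone numbers is small enough to enumerate. Here is a minimal Python sketch of the idea, assuming (as the 2021 German research described) that identifiers are exchanged as unsalted SHA-256 hashes; the number range is illustrative, and a real rainbow table would use hash chains to compress storage rather than a plain dictionary:

```python
import hashlib

def sha256_hex(s: str) -> str:
    """Hash an identifier the way an unsalted SHA-256 exchange would."""
    return hashlib.sha256(s.encode()).hexdigest()

# Precompute a lookup table over a (small, hypothetical) number range.
candidate_numbers = [f"+861380000{n:04d}" for n in range(10000)]
table = {sha256_hex(num): num for num in candidate_numbers}

# A hash captured from device logs can then be inverted by a simple lookup.
observed = sha256_hex("+8613800001234")  # stand-in for a captured hash
print(table.get(observed))  # recovers the original phone number
```

The key point is that hashing only protects identifiers drawn from a large, unpredictable space; salting or a private set intersection protocol would defeat this precomputation.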

It is not known if the security flaw in the AirDrop protocol has been exploited by a government agency before now, but it is not the first time a flaw has been discovered. In April 2021, German researchers found that the mutual authentication mechanism that confirms both the receiver and sender are on each other's address book could be used to expose private information. According to the researchers, Apple was informed of the flaw in May of 2019, but did not fix it.

Bitcoin

SEC Claims Account Was 'Compromised' After Announcing False Bitcoin ETF Approval (cnbc.com) 48

With the approval of new rule change applications, the SEC is now allowing bitcoin ETFs to be traded in the United States.
UPDATE: The SEC said that the announcement about bitcoin ETFs on social media was incorrect, and that its X account was compromised. "The SEC's @SECGov X/Twitter account has been compromised. The unauthorized tweet regarding bitcoin ETFs was not made by the SEC or its staff," an SEC spokesperson told CNBC.

"The SEC has not approved the listing and trading of spot bitcoin exchange-traded products," said SEC Chair Gary Gensler in a post on X. From the original CNBC article: The decision will likely lead to the conversion of the Grayscale Bitcoin Trust, which holds about $29 billion of the cryptocurrency, into an ETF, as well as the launch of competing funds from mainstream issuers like BlackRock's iShares. The approval could prove to be a landmark event in the adoption of cryptocurrency by mainstream finance, as the ETF structure gives institutions and financial advisors a familiar and regulated way to buy exposure to bitcoin.

The SEC has for years opposed a so-called spot bitcoin fund, with several firms filing and then withdrawing applications for ETFs in the past. SEC Chair Gary Gensler has been an outspoken critic of crypto during his tenure. However, the regulator appeared to change course on the ETF question in 2023, possibly due in part to an August loss to Grayscale in court which criticized the SEC for blocking bitcoin ETFs while allowing funds that track bitcoin futures.

United States

FTC Bans X-Mode From Selling Phone Location Data (techcrunch.com) 10

The U.S. Federal Trade Commission has banned the data broker X-Mode Social from sharing or selling users' sensitive location data, the federal regulator said Tuesday. From a report: The first-of-its-kind settlement prohibits X-Mode, now known as Outlogic, from sharing and selling users' sensitive information to others. The settlement will also require the data broker to delete or destroy all the location data it previously collected, along with any products produced from this data, unless the company obtains consumer consent or ensures the data has been de-identified. X-Mode buys and sells access to the location data collected from ordinary phone apps. While just one of many organizations in the multibillion-dollar data broker industry, X-Mode faced scrutiny for selling access to the commercial location data of Americans' past movements to the U.S. government and military contractors. Soon after, Apple and Google told developers to remove X-Mode from their apps or face a ban from the app stores.
The Courts

Judges in England and Wales Given Cautious Approval To Use AI in Writing Legal Opinions (apnews.com) 23

Press2ToContinue writes: England's 1,000-year-old legal system -- still steeped in traditions that include wearing wigs and robes -- has taken a cautious step into the future by giving judges permission to use artificial intelligence to help produce rulings. The Courts and Tribunals Judiciary last month said AI could help write opinions but stressed it shouldn't be used for research or legal analyses because the technology can fabricate information and provide misleading, inaccurate and biased information.

"Judges do not need to shun the careful use of AI," said Master of the Rolls Geoffrey Vos, the second-highest ranking judge in England and Wales. "But they must ensure that they protect confidence and take full personal responsibility for everything they produce." At a time when scholars and legal experts are pondering a future when AI could replace lawyers, help select jurors or even decide cases, the approach spelled out Dec. 11 by the judiciary is restrained. But for a profession slow to embrace technological change, it's a proactive step as government and industry -- and society in general -- react to a rapidly advancing technology alternately portrayed as a panacea and a menace.

Google

Google Faces Multibillion-Dollar US Patent Trial Over AI Tech (reuters.com) 27

Alphabet's Google is set to go before a federal jury in Boston on Tuesday in a trial over accusations that processors it uses to power AI technology in key products infringe a computer scientist's patents. From a report: Singular Computing, founded by Massachusetts-based computer scientist Joseph Bates, claims Google copied his technology and used it to support AI features in Google Search, Gmail, Google Translate and other Google services. A Google court filing said that Singular has requested up to $7 billion in monetary damages, which would be more than double the largest-ever patent infringement award in U.S. history.

Google spokesperson Jose Castaneda called Singular's patents "dubious" and said that Google developed its processors "independently over many years." "We look forward to setting the record straight in court," Castaneda said.
