Crime

Retailers Can't Keep Scammers Away From Their Favorite Payment Form: Gift Cards (axios.com) 96

Retailers are struggling to rein in the proliferation of scammers tricking Americans into buying thousands of dollars' worth of gift cards. From a report: The Federal Trade Commission estimates that Americans lost at least $217 million to gift card scams last year. That number is likely higher, given many victims are too embarrassed to report to law enforcement. Cracking down on gift card scams was a hot topic this week at the National Retail Federation's (NRF) cybersecurity conference in Long Beach, California.

Some gift card scams start with texts from people pretending to be tech support, your boss, the government or a wrong number. Eventually, those conversations lead to someone asking the victim to buy gift cards on their behalf and send the barcode number to them via text. Others involve criminals in physical locations, tampering with a gift card to access the barcode information and then stealing the funds without taking the actual card. Each scam targets vulnerable populations: elderly, less tech-savvy people; those who are lonely and work from home; and even young kids, experts say.

Privacy

Bangladeshi Police Agents Accused of Selling Citizens' Personal Information on Telegram (techcrunch.com) 5

An anonymous reader shares a report: Two senior officials working for anti-terror police in Bangladesh allegedly collected and sold classified and personal information of citizens to criminals on Telegram, TechCrunch has learned. The data allegedly sold included national identity details of citizens, cell phone call records and other "classified secret information," according to a letter signed by a senior Bangladeshi intelligence official, seen by TechCrunch.

The letter, dated April 28, was written by Brigadier General Mohammad Baker, who serves as a director of Bangladesh's National Telecommunications Monitoring Center, or NTMC, the country's electronic eavesdropping agency. Baker confirmed the legitimacy of the letter and its contents in an interview with TechCrunch. "Departmental investigation is ongoing for both the cases," Baker said in an online chat, adding that the Bangladeshi Ministry of Home Affairs ordered the affected police organizations to take "necessary action against those officers." The letter, which was originally written in Bengali and addressed to the senior secretary of the Ministry of Home Affairs Public Security Division, alleges the two police agents accessed and passed "extremely sensitive information" of private citizens on Telegram in exchange for money.

AI

Artists Are Deleting Instagram For New App Cara In Protest of Meta AI Scraping (fastcompany.com) 21

Some artists are jumping ship for the anti-AI portfolio app Cara after Meta began using Instagram content to train its AI models. Fast Company explains: The portfolio app bills itself as a platform that protects artists' images from being used to train AI, allowing AI content to be posted only if it's clearly labeled. Based on the number of new users the Cara app has garnered over the past few days, there seems to be a need. Between May 31 and June 2, Cara's user base tripled from less than 100,000 to more than 300,000 profiles, skyrocketing to the top of the app store. [...] Cara is a social networking app for creatives, in which users can post images of their artwork, memes, or just their own text-based musings. It shares similarities with major social platforms like X (formerly Twitter) and Instagram on a few fronts. Users can access Cara through a mobile app or on a browser. Both options are free to use. The UI itself is like an arts-centric combination of X and Instagram. In fact, some UI elements seem like they were pulled directly from other social media sites. (It's not the most innovative approach, but it is strategic: as a new app, any barriers to potential adoption need to be low.)

Cara doesn't train any AI models on its content, nor does it allow third parties to do so. According to Cara's FAQ page, the app aims to protect its users from AI scraping by automatically implementing "NoAI" tags on all of its posts. The website says these tags "are intended to tell AI scrapers not to scrape from Cara." Ultimately, they appear to be HTML metadata tags that politely ask bad actors not to get up to any funny business, and it's pretty unlikely that they hold any actual legal weight. Cara admits as much, too, warning its users that the tags aren't a "fully comprehensive solution and won't completely prevent dedicated scrapers." With that in mind, Cara assesses the "NoAI" tagging system as "a necessary first step in building a space that is actually welcoming to artists -- one that respects them as creators and doesn't opt their work into unethical AI scraping without their consent."
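
For a concrete sense of what such opt-out tags look like and how a cooperative scraper might honor them, here is a minimal Python sketch. It assumes the commonly used "noai"/"noimageai" robots directives; Cara's exact markup isn't specified in the article, so treat the tag names as placeholders.

    # Minimal sketch of "NoAI"-style opt-out tags and a compliant scraper check.
    # The directive names ("noai", "noimageai") are a common convention and an
    # assumption here -- Cara's actual markup may differ.
    from html.parser import HTMLParser

    PAGE = """
    <html><head>
      <meta name="robots" content="noai, noimageai">
      <meta name="description" content="Artist portfolio page">
    </head><body><img src="artwork.png"></body></html>
    """

    class NoAIDetector(HTMLParser):
        def __init__(self):
            super().__init__()
            self.opted_out = False

        def handle_starttag(self, tag, attrs):
            if tag != "meta":
                return
            attrs = dict(attrs)
            if (attrs.get("name") or "").lower() == "robots":
                directives = {d.strip().lower() for d in (attrs.get("content") or "").split(",")}
                if {"noai", "noimageai"} & directives:
                    self.opted_out = True

    detector = NoAIDetector()
    detector.feed(PAGE)
    print("Skip this page for AI training:", detector.opted_out)  # True

As the article notes, nothing compels a scraper to run a check like this, which is why Cara itself calls the tags a first step rather than a complete defense.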

In December, Cara launched another tool called Cara Glaze to defend its artists' work against scrapers. (Users can only use it a select number of times.) Glaze, developed by the SAND Lab at the University of Chicago, makes it much more difficult for AI models to accurately understand and mimic an artist's personal style. The tool works by learning how AI bots perceive artwork, and then making a set of minimal changes that are invisible to the human eye but confusing to the AI model. The AI bot then has trouble "translating" the art style and generates warped recreations. In the future, Cara also plans to implement Nightshade, another University of Chicago tool that helps protect artwork against AI scrapers. Nightshade "poisons" AI training data by adding invisible pixels to artwork that can cause AI software to completely misunderstand the image. Beyond establishing shields against data mining, Cara also uses a third-party service to detect and moderate any AI artwork that's posted to the site. Non-human artwork is forbidden, unless it's been properly labeled by the poster.
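
Glaze's real algorithm is considerably more sophisticated, but the core idea -- perturbations that are tiny in pixel terms yet disruptive to a model -- can be illustrated with a classic gradient-sign attack. Everything in the sketch below is an assumption for illustration only: the "style classifier" is an untrained toy network, the epsilon budget is arbitrary, and this is not how Glaze itself computes its cloaks.

    # Toy illustration of an adversarial "cloak": nudge pixels within a small
    # budget (epsilon) in the direction that most increases a model's loss.
    # Generic FGSM-style sketch, NOT Glaze's actual algorithm; the model below
    # is an untrained placeholder standing in for a style classifier.
    import torch
    import torch.nn as nn

    model = nn.Sequential(                       # placeholder "style classifier"
        nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 10),
    )
    loss_fn = nn.CrossEntropyLoss()

    artwork = torch.rand(1, 3, 64, 64, requires_grad=True)  # stand-in image
    true_style = torch.tensor([3])                           # stand-in style label

    loss = loss_fn(model(artwork), true_style)
    loss.backward()

    epsilon = 2.0 / 255                                      # "invisible" pixel budget
    cloaked = (artwork + epsilon * artwork.grad.sign()).clamp(0, 1).detach()
    print("max per-pixel change:", (cloaked - artwork).abs().max().item())

Glaze and Nightshade search for perturbations of this general kind against real feature extractors, which is what lets the changes stay imperceptible to people while skewing what a model learns.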

Google

Google To Start Permanently Deleting Users' Location History (theguardian.com) 51

Google will delete everything it knows about users' previously visited locations, the company has said, a year after it committed to reducing the amount of personal data it stores about users. From a report: The company's "timeline" feature -- previously known as Location History -- will still work for those who choose to use it, letting them scroll back through potentially decades of travel history to check where they were at a specific time. But all the data required to make the feature work will be saved locally, to their own phones or tablets, with none of it being stored on the company's servers.

In an email sent by the company to Maps users, seen by the Guardian, Google said they have until 1 December to save all their old journeys before the data is deleted forever. Users will still be able to back up their data if they're worried about losing it or want to sync it across devices, but that will no longer happen by default. The company is also reducing the default amount of time that location history is stored for. Now, it will begin to delete past locations after just three months, down from a previous default of a year and a half. In a blogpost announcing the changes, Google didn't cite a specific reason for the updates, beyond suggesting that users may want to delete information from their location history if they are "planning a surprise birthday party."

The Courts

Court Rules $17 Billion UK Advertising Lawsuit Against Google Can Go Ahead (reuters.com) 18

An anonymous reader quotes a report from Reuters: Google parent Alphabet must face a lawsuit worth up to $17.4 billion for allegedly abusing its dominance in the online advertising market, London's Competition Appeal Tribunal (CAT) ruled on Wednesday. The lawsuit, which seeks damages on behalf of publishers of websites and apps based in the United Kingdom, is the latest case to focus on the search giant's business practices. Ad Tech Collective Action is bringing the claim on behalf of publishers who say they have suffered losses due to Google's allegedly anti-competitive behavior.

Google last month urged the CAT to block the case, which it argued was incoherent. The company "strongly rejects the underlying allegations", its lawyers said in court documents. The CAT said in a written ruling that it would certify the case to proceed towards a trial, which is unlikely to take place before the end of 2025. The tribunal also emphasized that the test for certifying a case under the UK's collective proceedings regime -- which is roughly equivalent to the United States' class action regime -- is relatively low.
"Google works constructively with publishers across the UK and Europe," Google legal director Oliver Bethell said in a statement. Bethell added: "This lawsuit is speculative and opportunistic. We'll oppose it vigorously and on the facts."
Social Networks

Israel Reportedly Uses Fake Social Media Accounts To Influence US Lawmakers On Gaza War (nytimes.com) 146

An anonymous reader quotes a report from the New York Times: Israel organized and paid for an influence campaign last year targeting U.S. lawmakers and the American public with pro-Israel messaging, as it aimed to foster support for its actions in the war in Gaza, according to officials involved in the effort and documents related to the operation. The covert campaign was commissioned by Israel's Ministry of Diaspora Affairs, a government body that connects Jews around the world with the State of Israel, four Israeli officials said. The ministry allocated about $2 million to the operation and hired Stoic, a political marketing firm in Tel Aviv, to carry it out, according to the officials and the documents. The campaign began in October and remains active on the platform X. At its peak, it used hundreds of fake accounts that posed as real Americans on X, Facebook and Instagram to post pro-Israel comments. The accounts focused on U.S. lawmakers, particularly ones who are Black and Democrats, such as Representative Hakeem Jeffries, the House minority leader from New York, and Senator Raphael Warnock of Georgia, with posts urging them to continue funding Israel's military.

ChatGPT, the artificial intelligence-powered chatbot, was used to generate many of the posts. The campaign also created three fake English-language news sites featuring pro-Israel articles. The Israeli government's connection to the influence operation, which The New York Times verified with four current and former members of the Ministry of Diaspora Affairs and documents about the campaign, has not previously been reported. FakeReporter, an Israeli misinformation watchdog, identified the effort in March. Last week, Meta, which owns Facebook and Instagram, and OpenAI, which makes ChatGPT, said they had also found and disrupted the operation. The secretive campaign signals the lengths Israel was willing to go to sway American opinion on the war in Gaza.

The Courts

Jury Finds Boeing Stole Technology From Electric Airplane Startup Zunum 46

A federal court jury in Seattle has ruled against Boeing in a lawsuit brought by failed electric airplane startup Zunum and awarded $81 million in damages -- which the judge has the option to triple. From a report: Zunum alleged that Boeing, while ostensibly investing seed money to get the startup off the ground, stole Zunum's technology and actively undermined its attempts to build a business. It accused Boeing of "a targeted and coordinated campaign" to gain access to its "business plan, market and technological analysis, and other trade secrets and proprietary information," then using that to develop its own hybrid-electric plane design.

Zunum also accused Boeing of sabotaging its efforts to attract funding from aerospace suppliers Safran and United Technologies. The jury found that Boeing had misappropriated Zunum's trade secrets and breached its contract with the startup. It also found that Boeing's actions were "willful and malicious," which opens the door for the judge to award triple damages plus legal costs in a case that has already been running for more than four years.
Privacy

Hacker Tool Extracts All the Data Collected By Windows' New Recall AI 145

An anonymous reader quotes a report from Wired: When Microsoft CEO Satya Nadella revealed the new Windows AI tool that can answer questions about your web browsing and laptop use, he said one of the "magical" things about it was that the data doesn't leave your laptop; the Windows Recall system takes screenshots of your activity every five seconds and saves them on the device. But security experts say that data may not stay there for long. Two weeks ahead of Recall's launch on new Copilot+ PCs on June 18, security researchers have demonstrated how preview versions of the tool store the screenshots in an unencrypted database. The researchers say the data could easily be hoovered up by an attacker. And now, in a warning about how Recall could be abused by criminal hackers, Alex Hagenah, a cybersecurity strategist and ethical hacker, has released a demo tool that can automatically extract and display everything Recall records on a laptop.

Dubbed TotalRecall -- yes, after the 1990 sci-fi film -- the tool can pull all the information that Recall saves into its main database on a Windows laptop. "The database is unencrypted. It's all plain text," Hagenah says. Since Microsoft revealed Recall in mid-May, security researchers have repeatedly compared it to spyware or stalkerware that can track everything you do on your device. "It's a Trojan 2.0 really, built in," Hagenah says, adding that he built TotalRecall -- which he's releasing on GitHub -- in order to show what is possible and to encourage Microsoft to make changes before Recall fully launches. [...] TotalRecall, Hagenah says, can automatically work out where the Recall database is on a laptop and then make a copy of the file, parsing all the data as it does so. While Microsoft's new Copilot+ PCs aren't out yet, it's possible to use Recall by emulating a version of the devices. "It does everything automatically," he says. The system can set a date range for extracting the data -- for instance, pulling information from only one specific week or day. Pulling one day of screenshots from Recall, which stores its information in an SQLite database, took two seconds at most, Hagenah says.
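
To show how little work the parsing step involves once the file has been copied, here is a minimal sqlite3 sketch with a date-range filter. The file name, table, and column names below are invented for illustration (the real Recall schema may differ), and the sketch assumes a copy of the database is already sitting next to the script; Hagenah's actual tool is the one published on GitHub.

    # Minimal sketch: query a copied, unencrypted Recall-style SQLite database
    # for one day of captures. The path and schema are assumptions for illustration.
    import sqlite3

    DB_COPY = "recall_copy.db"   # hypothetical local copy of the database file

    conn = sqlite3.connect(DB_COPY)
    conn.row_factory = sqlite3.Row

    # Hypothetical table of captured window text, filtered to a single day.
    rows = conn.execute(
        """
        SELECT captured_at, app_name, window_title, extracted_text
        FROM captures
        WHERE captured_at BETWEEN ? AND ?
        ORDER BY captured_at
        """,
        ("2024-06-01 00:00:00", "2024-06-01 23:59:59"),
    ).fetchall()

    for row in rows:
        print(row["captured_at"], row["app_name"], row["window_title"])

    conn.close()

Because the preview builds store this as an ordinary, unencrypted file on disk, anything that can read the user's profile directory can read everything Recall has recorded.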

Included in what the database captures are screenshots of whatever is on your desktop -- a potential gold mine for criminal hackers or domestic abusers who may physically access their victim's device. Images include captures of messages sent on encrypted messaging apps Signal and WhatsApp, and remain in the captures regardless of whether disappearing messages are turned on in the apps. There are records of websites visited and every bit of text displayed on the PC. Once TotalRecall has been deployed, it will generate a summary about the data; it is also possible to search for specific terms in the database. Hagenah says an attacker could get a huge amount of information about their target, including insights into their emails, personal conversations, and any sensitive information that's captured by Recall. Hagenah's work builds on findings from cybersecurity researcher Kevin Beaumont, who has detailed how much information Recall captures and how easy it can be to extract it.
Piracy

Nintendo Hits 127 Switch Piracy Tutorial Repos After 'Cracking' URL Encryption (torrentfreak.com) 28

An anonymous reader quotes a report from TorrentFreak: A popular GitHub repo and over 120 forks containing Switch emulation tutorials have been targeted by Nintendo. While most forks are now disabled, the main repository has managed to survive after being given the opportunity to put things right. Whether Nintendo appreciated the irony is unclear, but it appears that the use of encoding as a protection measure to obfuscate links was no match for the video game company's circumvention skills. [...] The Switch Emulators Guide was presented in the context of piracy, something made clear by a note on the main page of the original repo which stated that the tutorial was made, in part, for use on the /r/NewYuzuPiracy subreddit. Since the actions of Yuzu and its eventual demise are part of the unwritten framework for similar takedowns, that sets the tone (although not the legal basis) in favor of takedown.

When asked to provide a description and URL pointing to the copyrighted content allegedly infringed by the repos, Nintendo states that the works are the "Nintendo Switch firmware" and various games protected by technological protection measures (TPM) which prevent users from unlawfully copying and playing pirated games. The notice states the repos "provide access" to keys that enable circumvention of its technical measures. "The reported repositories offer and provide access to unauthorized copies of cryptographic keys that are used to circumvent Nintendo's Technological Measures and infringe Nintendo's intellectual property rights. Specifically, the reported repositories provide to users unauthorized copies of cryptographic keys (prod.keys) extracted from the Nintendo Switch firmware," Nintendo writes.

"The prod.keys allow users to bypass Nintendo's Technological Measures for digital games; specifically, prod.keys allow users to decrypt and play Nintendo Switch games in unauthorized ways. Distribution of keys without the copyright owner's authorization is a violation of Section 1201 of the DMCA." Nintendo further notes that unauthorized distribution of prod.keys "facilitates copyright infringement by permitting users to play pirated versions of Nintendo's copyright-protected game software on systems without the Nintendo Technological Measures or systems on which Nintendo's Technological Measures have been disabled." Since the prod.keys are extracted from the Nintendo Switch firmware, which is also protected by copyright, distribution amounts to "infringement of Nintendo Switch firmware itself."

Given that the repo's stated purpose was to provide information on how to circumvent Nintendo's technical protection measures, it's fairly ironic that it appears to have used technical measures itself to hinder detection. "The reported repositories attempt to evade detection of their illegal activities by providing access to prod.keys and unauthorized copies of Nintendo's firmware and video games via encoded links that direct users to third-party websites to download the infringing content," Nintendo explains in its notice. "The repositories provide strings of letters and numbers and then instruct users to 'use [private] to decode the lines of strings given here to get an actual link.' The decoded links take users to sites where they can access the prod.keys and unauthorized copies of Nintendo's copyright-protected material." The image below shows the encoded links (partially redacted) that allegedly link to the content in question on third-party sites. To hide their nature, regular URLs are encoded using Base64, a binary-to-text encoding scheme that transforms them into a sequence of characters. Those characters can be decoded to reveal the original URL using online tools.
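
Base64 is an encoding rather than encryption, so "cracking" such a link takes one line of standard-library code, which may explain why the obfuscation was no obstacle for Nintendo. A small sketch, using a made-up string rather than anything from the actual repos:

    # Base64 is reversible by design; decoding an obfuscated link is trivial.
    # The encoded string below is a made-up example, not one from the repos.
    import base64

    encoded = "aHR0cHM6Ly9leGFtcGxlLmNvbS9zb21lL3BhdGg="
    decoded = base64.b64decode(encoded).decode("utf-8")
    print(decoded)  # https://example.com/some/path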

Google

Google Contractor Used Admin Access To Leak Info From Private Nintendo YouTube Video (404media.co) 12

A Google contractor used admin privileges to access private information from Nintendo's YouTube account about an upcoming Yoshi game in 2017, which later made its way to Reddit before Nintendo announced the game, according to a copy of an internal Google database detailing potential privacy and security incidents obtained by 404 Media. From the report: The news provides more clarity on how exactly a Redditor, who teased news of the new Yoshi game, which was later released as Yoshi's Crafted World in 2019, originally obtained their information. A screenshot in the Reddit post shows a URL that starts with www.admin.youtube.com, which is a Google corporate login page. "Google employee deliberately leaked private Nintendo information," the entry in the database reads. The database obtained by 404 Media includes privacy and security issues that Google's own employees reported internally.
The Almighty Buck

Online Streaming Services In Canada Must Hand Over 5% of Domestic Revenues (www.cbc.ca) 94

An anonymous reader quotes a report from CBC News: Online streaming services operating in Canada will be required to contribute five percent of their Canadian revenues to support the domestic broadcasting system, the country's telecoms regulator said on Tuesday. The money will be used to boost funding for local and Indigenous broadcasting, officials from the Canadian Radio-television and Telecommunications Commission (CRTC) said in a briefing. "Today's decision will help ensure that online streaming services make meaningful contributions to Canadian and Indigenous content," wrote CRTC chief executive and chair Vicky Eatrides in a statement.

The measure was introduced under the auspices of a law passed last year designed to make sure that companies like Netflix make a more significant contribution to Canadian culture. The government says the legislation will ensure that online streaming services promote Canadian music and stories, and support Canadian jobs. Funding will also be directed to French-language content and content created by official language minority communities, as well as content created by equity-deserving groups and Canadians of diverse backgrounds. The release also said that online streaming services will "have some flexibility" to send their revenues to support Canadian television directly. [...]

The measure, which will start in the 2024-2025 broadcasting year, would raise roughly $200 million annually, CRTC officials said. It will only apply to services that are not already affiliated with Canadian broadcasters. The Canadian Media Producers Association (CMPA) was among 20 screen organizations from around the world that signed a statement in January asking governments to impose stronger regulations on streaming companies operating in local markets. One of the demands was a measure that would force companies profiting from their presence in those markets to contribute financially to the creation of new local content.
"We are disappointed by today's decision and concerned by the negative impact it will have on Canadian consumers. We are assessing the decision in full, but this onerous and inflexible financial levy will be harmful to consumer choice," a spokesperson for Prime Video wrote to CBC News in a statement.
Security

Crooks Threaten To Leak 3 Billion Personal Records 'Stolen From Background Firm' (theregister.com) 67

An anonymous reader quotes a report from The Register: Billions of records detailing people's personal information may soon be dumped online after being allegedly obtained from a Florida firm that handles background checks and other requests for folks' private info. A criminal gang that goes by the handle USDoD put the database up for sale for $3.5 million on an underworld forum in April, and rather incredibly claimed the trove included 2.9 billion records on all US, Canadian, and British citizens. It's believed one or more miscreants using the handle SXUL were responsible for the alleged exfiltration and passed the data on to USDoD, which is acting as a broker. The pilfered information is said to include individuals' full names, addresses, and address history going back at least three decades, social security numbers, and people's parents, siblings, and relatives, some of whom have been dead for nearly 20 years. According to USDoD, this info was not scraped from public sources, though there may be duplicate entries for people in the database.

Fast forward to this month, and the infosec watchers at VX-Underground say they've not only been able to view the database and verify that at least some of its contents are real and accurate, but that USDoD plans to leak the trove. Judging by VX-Underground's assessment, the 277.1GB file contains nearly three billion records on people who've at least lived in the United States -- so US citizens as well as, say, Canadians and Brits. This info was allegedly stolen or otherwise obtained from National Public Data, a small information broker based in Coral Springs that offers API lookups to other companies for things like background checks. There is a small silver lining, according to the VX team: "The database DOES NOT contain information from individuals who use data opt-out services. Every person who used some sort of data opt-out service was not present." So, we guess this is a good lesson in opting out.

Piracy

Napster Sparked a File-Sharing Revolution 25 Years Ago (torrentfreak.com) 49

TorrentFreak's Ernesto Van der Sar recalls the rise and fall of Napster, the file-sharing empire that kickstarted a global piracy frenzy 25 years ago. Here's an excerpt from his report: At the end of the nineties, technology and the Internet were a playground for young engineers and 'hackers'. Some of them regularly gathered in the w00w00 IRC chatroom on the EFnet network. This tech think-tank had many notable members, including WhatsApp founder Jan Koum and Shawn Fanning, who logged on with the nickname Napster. In 1998, 17-year-old Fanning shared an idea with the group. 'Napster' wanted to create a network of computers that could share files with each other. More specifically, a central music database that everyone in the world could access.

This idea never left the mind of the young developer. Fanning stopped going to school and, flanked by his friend Sean Parker, devoted the following months to making his vision a reality. That moment came on June 1, 1999, when the first public version of Napster was released online. Soon after, the software went viral. Napster was quickly embraced by millions of users, who saw the software as something magical. It was a gateway for musical exploration, one that dwarfed even the largest record stores in town. And all for free. It sounds mundane today, but some equated it to pure technological sorcery. For many top players in the music industry, Napster's sorcery was pure witchcraft. At the time, manufacturing CDs with high profit margins felt like printing money and Napster's appearance threatened to ruin the party. [...]

At the start of 2001, Napster's user base reached a peak of more than 26.4 million worldwide. Yet, despite huge growth and backing from investors, the small file-sharing empire couldn't overcome the legal challenges. The RIAA lawsuit resulted in an injunction from the Ninth Circuit Court, which ordered the network to shut down. This happened during July 2001, little more than two years after Napster launched. By September that year, the case had been settled for millions of dollars. While the Napster craze was over, file-sharing had mesmerized the masses and the genie was out of the bottle. Grokster, KaZaa, Morpheus, LimeWire, and many others popped up and provided sharing alternatives, for as long as they lasted. Meanwhile, BitTorrent was also knocking on the door.
"Napster paved the way for Apple's iTunes store, to serve the demand that was clearly there," notes Ernesto. "This music streaming landscape was largely pioneered by a Napster 'fan' from Sweden, Daniel Ek."

"Like many others, Ek was fascinated by the 'all you can play' experience offered by file-sharing software, and that planted the seeds for the music streaming startup Spotify, where he still serves as CEO today. In fact, Spotify itself used file-sharing technology under the hood to ensure swift playback."
The Courts

Samsung Sues Oura Preemptively To Block Smart Ring Patent Claims (theverge.com) 26

An anonymous reader shares a report: Samsung isn't waiting around for Oura to file any patent claims over its forthcoming smart ring. Instead, it's preemptively filed its own suit against Oura, seeking a "declaratory judgment" that states the Galaxy Ring doesn't infringe on five Oura patents. The suit alleges that Oura has a pattern of filing patent suits against competitors based on "features common to virtually all smart rings." In particular, the suit references sensors, electronics, batteries, and scores based on metrics gathered from sensors. The case lists instances in which Oura sued rivals like Ultrahuman, Circular, and RingConn, sometimes before they even entered the US market. For those reasons, Samsung says in the suit that it anticipates being the target of an Oura suit.
Google

Google Leak Reveals Thousands of Privacy Incidents (404media.co) 20

Google has accidentally collected children's voice data, leaked the trips and home addresses of carpool users, and made YouTube recommendations based on users' deleted watch history, among thousands of other employee-reported privacy incidents, according to a copy of an internal Google database which tracks six years' worth of potential privacy and security issues obtained by 404 Media. From the report: Individually the incidents, most of which have not been previously publicly reported, may each impact only a relatively small number of people, or were fixed quickly. Taken as a whole, though, the internal database shows how one of the most powerful and important companies in the world manages, and often mismanages, a staggering amount of personal, sensitive data on people's lives.

The data obtained by 404 Media includes privacy and security issues that Google's own employees reported internally. These include issues with Google's own products or data collection practices; vulnerabilities in third party vendors that Google uses; or mistakes made by Google staff, contractors, or other people that have impacted Google systems or data. The incidents include everything from a single errant email containing some PII, through to substantial leaks of data, right up to impending raids on Google offices. When reporting an incident, employees give the incident a priority rating, P0 being the highest, P1 being a step below that. The database contains thousands of reports over the course of six years, from 2013 to 2018. In one 2016 case, a Google employee reported that Google Street View's systems were transcribing and storing license plate numbers from photos. They explained that Google uses an algorithm to detect text in Street View imagery.

Government

Did the US Government Ignore a Chance to Make TikTok Safer? (yahoo.com) 59

"To save itself, TikTok in 2022 offered the U.S. government an extraordinary deal," reports the Washington Post. The video app, owned by a Chinese company, said it would let federal officials pick its U.S. operation's board of directors, would give the government veto power over each new hire and would pay an American company that contracts with the Defense Department to monitor its source code, according to a copy of the company's proposal. It even offered to give federal officials a kill switch that would shut the app down in the United States if they felt it remained a threat.

The Biden administration, however, went its own way. Officials declined the proposal, forfeiting potential influence over one of the world's most popular apps in favor of a blunter option: a forced-sale law signed last month by President Biden that could lead to TikTok's nationwide ban. The government has never publicly explained why it rejected TikTok's proposal, opting instead for a potentially protracted constitutional battle that many expect to end up before the Supreme Court... But the extent to which the United States evaluated or disregarded TikTok's proposal, known as Project Texas, is likely to be a core point of dispute in court, where TikTok and its owner, ByteDance, are challenging the sale-or-ban law as an "unconstitutional assertion of power."

The episode raises questions over whether the government, when presented with a way to address its concerns, chose instead to back an effort that would see the company sold to an American buyer, even though some of the issues officials have warned about — the opaque influence of its recommendation algorithm, the privacy of user data — probably would still be unresolved under new ownership...

A senior Biden administration official said in a statement that the administration "determined more than a year ago that the solution proposed by the parties at the time would be insufficient to address the serious national security risks presented. While we have consistently engaged with the company about our concerns and potential solutions, it became clear that divestment from its foreign ownership was and remains necessary."

"Since federal officials announced an investigation into TikTok in 2019, the app's user base has doubled to more than 170 million U.S. accounts," according to the article.

It also includes this assessment from Anupam Chander, a Georgetown University law professor who researches international tech policy. "The government had a complete absence of faith in [its] ability to regulate technology platforms, because there might be some vulnerability that might exist somewhere down the line."
United Kingdom

How Facial Recognition Tech Is Being Used In London By Shops - and Police (bbc.co.uk) 98

"Within less than a minute, I'm approached by a store worker who comes up to me and says, 'You're a thief, you need to leave the store'."

That's a quote from the BBC by a wrongly accused customer who was flagged by a facial-recognition system called Facewatch. "She says after her bag was searched she was led out of the shop, and told she was banned from all stores using the technology."

Facewatch later wrote to her and acknowledged it had made an error — but declined to comment on the incident in the BBC's report: [Facewatch] did say its technology helped to prevent crime and protect frontline workers. Home Bargains, too, declined to comment. It's not just retailers who are turning to the technology... [I]n east London, we joined the police as they positioned a modified white van on the high street. Cameras attached to its roof captured thousands of images of people's faces. If they matched people on a police watchlist, officers would speak to them and potentially arrest them...

On the day we were filming, the Metropolitan Police said they made six arrests with the assistance of the tech... The BBC spoke to several people approached by the police who confirmed that they had been correctly identified by the system — 192 arrests have been made so far this year as a result of it.

Lindsey Chiswick, director of intelligence for the Met, told the BBC that "It takes less than a second for the technology to create a biometric image of a person's face, assess it against the bespoke watchlist and automatically delete it when there is no match."

"That is the correct and acceptable way to do it," writes long-time Slashdot reader Baron_Yam, "without infringing unnecessarily on the freedoms of the average citizen. Just tell me they have appropriate rules, effective oversight, and a penalty system with teeth to catch and punish the inevitable violators."

But one critic of the tech complains to the BBC that everyone scanned automatically joins "a digital police line-up," while the article adds that others "liken the process to a supermarket checkout — where your face becomes a bar code." And "The error count is much higher once someone is actually flagged. One in 40 alerts so far this year has been a false positive..."
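
The process Chiswick describes -- build a biometric representation of a face, compare it against a bespoke watchlist, delete it on no match -- maps onto a standard embedding-comparison pipeline, and the false positives mentioned above come down to where the similarity threshold is set. The sketch below illustrates the flow with placeholder data; the embedding function, threshold, and watchlist are assumptions, not the Met's actual system.

    # Sketch of the "match against a watchlist, discard if no match" flow.
    # Embeddings, threshold, and watchlist are random placeholders -- not the
    # Met's actual models or data.
    import numpy as np

    rng = np.random.default_rng(0)

    def embed_face(image: np.ndarray) -> np.ndarray:
        """Placeholder for a real face-embedding network; returns a unit vector."""
        v = rng.normal(size=128)
        return v / np.linalg.norm(v)

    watchlist = {f"person_{i}": rng.normal(size=128) for i in range(5)}
    watchlist = {k: v / np.linalg.norm(v) for k, v in watchlist.items()}

    MATCH_THRESHOLD = 0.6              # assumed similarity cut-off

    def process_frame(image: np.ndarray):
        probe = embed_face(image)
        best_name, best_score = max(
            ((name, float(probe @ ref)) for name, ref in watchlist.items()),
            key=lambda pair: pair[1],
        )
        if best_score >= MATCH_THRESHOLD:
            return best_name, best_score   # hand off to an officer for a manual check
        del probe                          # no match: discard the biometric, keep nothing
        return None, best_score

    print(process_frame(np.zeros((64, 64, 3))))

In a deployed system the thresholds, retention rules, and oversight are the hard part; the comparison itself is little more than the dot product above, which is why it runs in under a second.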

Thanks to Slashdot reader Bruce66423 for sharing the article.
AI

Apple's AI Plans Include 'Black Box' For Cloud Data (appleinsider.com) 14

How will Apple protect user data while their requests are being processed by AI in applications like Siri?

Long-time Slashdot reader AmiMoJo shared this report from Apple Insider: According to sources of The Information [four different former Apple employees who worked on the project], Apple intends to process data from AI applications inside a virtual black box.

The concept, known internally as "Apple Chips in Data Centers," would involve only Apple's hardware being used to perform AI processing in the cloud. The idea is that Apple will control both the hardware and software on its servers, enabling it to design more secure systems. While on-device AI processing is highly private, the initiative could make cloud processing for Apple customers similarly secure... By taking control of how data is processed in the cloud, Apple would find it easier to implement safeguards that make a breach much harder to pull off.

Furthermore, the black box approach would prevent Apple itself from being able to see the data. As a byproduct, it would also be difficult for Apple to hand over any personal data in response to government or law enforcement data requests.

Processed data from the servers would be stored in Apple's "Secure Enclave" (where the iPhone stores biometric data, encryption keys and passwords), according to the article.

"Doing so means the data can't be seen by other elements of the system, nor Apple itself."
Crime

How an Apple AirTag Helped Police Recover 15,000 Stolen Power Tools (msn.com) 89

An anonymous reader shared this report from the Washington Post: Twice before, this Virginia carpenter had awoken in the predawn to start his work day only to find one of his vans broken into. Tools he depends on for a living had been stolen, and there was little hope of retrieving them. Determined to shut down thieves, he said, he bought a bunch of Apple AirTags and hid the locator devices in some of his larger tools that hadn't been pilfered. Next time, he figured, he would track them.

It worked.

On Jan. 22, after a third break-in and theft, the carpenter said, he drove around D.C.'s Maryland suburbs for hours, following an intermittent blip on his iPhone, until he arrived at a storage facility in Howard County. He called police, who got a search warrant, and what they found in the locker was far more than just one contractor's nail guns and miter saws.

The storage unit, stuffed with purloined power tools, led detectives to similar caches in other places in the next four months — 12 locations in all, 11 of them in Howard County — and the recovery of about 15,000 saws, drills, sanders, grinders, generators, batteries, air compressors and other portable (meaning easily stealable) construction equipment worth an estimated $3 million to $5 million, authorities said.

Some were stolen as long ago as 2014, a police spokesperson told the Washington Post, coming from "hundreds if not thousands" of victims...
The Almighty Buck

FCC Ends Affordable Internet Program Due To Lack of Funds (cnn.com) 68

The Affordable Connectivity Program (ACP), which provided monthly internet bill credits for low-income Americans, will officially end on June 1 due to a lack of additional funding from Congress. This termination threatens nearly 60 million Americans with increased financial hardship, as the program's lapse leaves them without the subsidies that made internet access affordable. CNN reports: The 2.5-year-old ACP provided eligible low-income Americans with a monthly credit off their internet bills, worth up to $30 per month and as much as $75 per month for households on tribal lands. The pandemic-era program was a hit with members of both political parties and served tens of millions of seniors, veterans and rural and urban Americans alike. Program participants received only partial benefits in May ahead of the ACP's expected collapse. [...]

On Friday, Biden reiterated his calls for Congress to pass legislation extending the ACP. He also announced a series of voluntary commitments by a handful of internet providers to offer -- or continue offering -- their own proprietary low-income internet plans. The list includes AT&T, Comcast, Cox, Charter's Spectrum and Verizon, among others. Those providers will continue to offer qualifying ACP households a broadband plan for $30 or less, the White House said, and together the companies are expected to cover roughly 10 million of the 23 million households relying on the ACP.
"The Affordable Connectivity Program filled an important gap that provider low-income programs, state and local affordability programs, and the Lifeline program cannot fully address," said FCC Chairwoman Jessica Rosenworcel in a statement, referring to the name of another, similar FCC program that subsidizes wireless and home internet service. "The Commission is available to provide any assistance Congress may need to support funding the ACP in the future and stands ready to resume the program if additional funding is provided."
