×
Crime

Decades-Old Missing Person Mystery Solved After Relative Uploads DNA To GEDMatch (npr.org) 30

In 1970 an Oregon man discovered a body with "clear signs of foul play".

NPR reports that "The identity of the young woman remained a mystery — until Thursday." State authorities identified the woman as Sandra Young, a teenager from Portland who went missing between 1968 and 1969. Her identity was discovered through advanced DNA technology, which has helped solve stubborn cold cases in recent years. The case's breakthrough came last year in January, when a person uploaded their DNA to the genealogy database GEDMatch and the tool immediately determined that the DNA donor was a distant family member of Young....

From there, a genetic genealogist working with local law enforcement helped track down other possible relatives and encouraged them to provide their DNA. That work eventually led to Young's sister and other family members, who confirmed that Young went missing around the same time.

Thanks to Slashdot reader Tony Isaac for sharing the news.
Transportation

Road-Embedded Sensors to Find Street Parking Tested in Taiwanese City (taiwannews.com.tw) 17

Taiwan doesn't have parking meters, writes long-time Slashdot reader Badlands, "but rather roving armies of maids on electric scooters that cruise their area with their smartphone and take a pic of your license plate and timestamp it, leaving a receipt under your windshield wipers."

But now one city will try "smart parking" services — which will also help drivers find vacant parking spots, according to Taiwan News: The service will utilize 3,471 geomagnetic sensors installed along 122 stretches of roadway in Banqiao, Yonghe, Zhonghe and Xindian Districts, according to a press release. The sensors will be linked to a publicly available online database to indicate where open parking spaces are available.

The "New Taipei Street Parking Inquiry Service" will be accessible through a main website run by the Department of Transportation. The service is also linked to two smartphone applications... Payments can be made automatically by linking one's app profile to their smartphone's telecommunications provider... For drivers that use spaces without linking their phone and vehicle to the smart network, cameras located along the street where the sensors are installed will allow the city to identify and bill drivers via mail, based on their vehicle's registration information.

AI

Researchers Create AI Worms That Can Spread From One System to Another (arstechnica.com) 46

Long-time Slashdot reader Greymane shared this article from Wired: [I]n a demonstration of the risks of connected, autonomous AI ecosystems, a group of researchers has created one of what they claim are the first generative AI worms — which can spread from one system to another, potentially stealing data or deploying malware in the process. "It basically means that now you have the ability to conduct or to perform a new kind of cyberattack that hasn't been seen before," says Ben Nassi, a Cornell Tech researcher behind the research. Nassi, along with fellow researchers Stav Cohen and Ron Bitton, created the worm, dubbed Morris II, as a nod to the original Morris computer worm that caused chaos across the Internet in 1988. In a research paper and website shared exclusively with WIRED, the researchers show how the AI worm can attack a generative AI email assistant to steal data from emails and send spam messages — breaking some security protections in ChatGPT and Gemini in the process...in test environments [and not against a publicly available email assistant]...

To create the generative AI worm, the researchers turned to a so-called "adversarial self-replicating prompt." This is a prompt that triggers the generative AI model to output, in its response, another prompt, the researchers say. In short, the AI system is told to produce a set of further instructions in its replies... To show how the worm can work, the researchers created an email system that could send and receive messages using generative AI, plugging into ChatGPT, Gemini, and open source LLM, LLaVA. They then found two ways to exploit the system — by using a text-based self-replicating prompt and by embedding a self-replicating prompt within an image file.

In one instance, the researchers, acting as attackers, wrote an email including the adversarial text prompt, which "poisons" the database of an email assistant using retrieval-augmented generation (RAG), a way for LLMs to pull in extra data from outside its system. When the email is retrieved by the RAG, in response to a user query, and is sent to GPT-4 or Gemini Pro to create an answer, it "jailbreaks the GenAI service" and ultimately steals data from the emails, Nassi says. "The generated response containing the sensitive user data later infects new hosts when it is used to reply to an email sent to a new client and then stored in the database of the new client," Nassi says. In the second method, the researchers say, an image with a malicious prompt embedded makes the email assistant forward the message on to others. "By encoding the self-replicating prompt into the image, any kind of image containing spam, abuse material, or even propaganda can be forwarded further to new clients after the initial email has been sent," Nassi says.

In a video demonstrating the research, the email system can be seen forwarding a message multiple times. The researchers also say they could extract data from emails. "It can be names, it can be telephone numbers, credit card numbers, SSN, anything that is considered confidential," Nassi says.

The researchers reported their findings to Google and OpenAI, according to the article, with OpenAI confirming "They appear to have found a way to exploit prompt-injection type vulnerabilities by relying on user input that hasn't been checked or filtered." OpenAI says they're now working to make their systems "more resilient."

Google declined to comment on the research.
Databases

A Leaky Database Spilled 2FA Codes For the World's Tech Giants (techcrunch.com) 11

An anonymous reader quotes a report from TechCrunch: A technology company that routes millions of SMS text messages across the world has secured an exposed database that was spilling one-time security codes that may have granted users' access to their Facebook, Google and TikTok accounts. The Asian technology and internet company YX International manufactures cellular networking equipment and provides SMS text message routing services. SMS routing helps to get time-critical text messages to their proper destination across various regional cell networks and providers, such as a user receiving an SMS security code or link for logging in to online services. YX International claims to send 5 million SMS text messages daily. But the technology company left one of its internal databases exposed to the internet without a password, allowing anyone to access the sensitive data inside using only a web browser, just with knowledge of the database's public IP address.

Anurag Sen, a good-faith security researcher and expert in discovering sensitive but inadvertently exposed datasets leaking to the internet, found the database. Sen said it was not apparent who the database belonged to, nor who to report the leak to, so Sen shared details of the exposed database with TechCrunch to help identify its owner and report the security lapse. Sen told TechCrunch that the exposed database included the contents of text messages sent to users, including one-time passcodes and password reset links for some of the world's largest tech and online companies, including Facebook and WhatsApp, Google, TikTok, and others. The database had monthly logs dating back to July 2023 and was growing in size by the minute. In the exposed database, TechCrunch found sets of internal email addresses and corresponding passwords associated with YX International, and alerted the company to the spilling database. The database went offline a short time later.

Programming

Stack Overflow To Charge LLM Developers For Access To Its Coding Content (theregister.com) 32

Stack Overflow has launched an API that will require all AI models trained on its coding question-and-answer content to attribute sources linking back to its posts. And it will cost money to use the site's content. From a report: "All products based on models that consume public Stack Overflow data are required to provide attribution back to the highest relevance posts that influenced the summary given by the model," it confirmed in a statement. The Overflow API is designed to act as a knowledge database to help developers build more accurate and helpful code-generation models. Google announced it was using the service to access relevant information from Stack Overflow via the API and integrate the data with its latest Gemini models, and for its cloud storage console.
Privacy

Vending Machine Error Reveals Secret Face Image Database of College Students (arstechnica.com) 100

Ashley Belanger reports via Ars Technica: Canada-based University of Waterloo is racing to remove M&M-branded smart vending machines from campus after outraged students discovered the machines were covertly collecting facial-recognition data without their consent. The scandal started when a student using the alias SquidKid47 posted an image on Reddit showing a campus vending machine error message, "Invenda.Vending.FacialRecognitionApp.exe," displayed after the machine failed to launch a facial recognition application that nobody expected to be part of the process of using a vending machine. "Hey, so why do the stupid M&M machines have facial recognition?" SquidKid47 pondered. The Reddit post sparked an investigation from a fourth-year student named River Stanley, who was writing for a university publication called MathNEWS. [...]

MathNEWS' investigation tracked down responses from companies responsible for smart vending machines on the University of Waterloo's campus. Adaria Vending Services told MathNEWS that "what's most important to understand is that the machines do not take or store any photos or images, and an individual person cannot be identified using the technology in the machines. The technology acts as a motion sensor that detects faces, so the machine knows when to activate the purchasing interface -- never taking or storing images of customers." According to Adaria and Invenda, students shouldn't worry about data privacy because the vending machines are "fully compliant" with the world's toughest data privacy law, the European Union's General Data Protection Regulation (GDPR). "These machines are fully GDPR compliant and are in use in many facilities across North America," Adaria's statement said. "At the University of Waterloo, Adaria manages last mile fulfillment services -- we handle restocking and logistics for the snack vending machines. Adaria does not collect any data about its users and does not have any access to identify users of these M&M vending machines." [...]

But University of Waterloo students like Stanley now question Invenda's "commitment to transparency" in North American markets, especially since the company is seemingly openly violating Canadian privacy law, Stanley told CTV News. On Reddit, while some students joked that SquidKid47's face "crashed" the machine, others asked if "any pre-law students wanna start up a class-action lawsuit?" One commenter summed up students' frustration by typing in all caps, "I HATE THESE MACHINES! I HATE THESE MACHINES! I HATE THESE MACHINES!"

Books

Darwin Online Has Virtually Reassembled the Naturalist's Personal Library 24

Jennifer Ouellette reports via Ars Technica: Famed naturalist Charles Darwin amassed an impressive personal library over the course of his life, much of which was preserved and cataloged upon his death in 1882. But many other items were lost, including more ephemeral items like unbound volumes, pamphlets, journals, clippings, and so forth, often only vaguely referenced in Darwin's own records. For the last 18 years, the Darwin Online project has painstakingly scoured all manner of archival records to reassemble a complete catalog of Darwin's personal library virtually. The project released its complete 300-page online catalog -- consisting of 7,400 titles across 13,000 volumes, with links to electronic copies of the works -- to mark Darwin's 215th birthday on February 12.

"This unprecedentedly detailed view of Darwin's complete library allows one to appreciate more than ever that he was not an isolated figure working alone but an expert of his time building on the sophisticated science and studies and other knowledge of thousands of people," project leader John van Wyhe of the National University of Singapore said. "Indeed, the size and range of works in the library makes manifest the extraordinary extent of Darwin's research into the work of others."
Privacy

Vietnam To Collect Biometrics For New ID Cards (theregister.com) 33

Starting in July, the Vietnamese government will begin collecting biometric information from its citizens when issuing new identification cards. The Register reports: Prime minister Pham Minh Chinh instructed the nation's Ministry of Public Security to collect the data in the form of iris scans, voice samples and actual DNA, in accordance with amendments to Vietnam's Law on Citizen Identification. The ID cards are issued to anyone over the age of 14 in Vietnam, and are optional for citizens between the ages of 6 and 14, according to a government news report. Amendments to the Law on Citizen Identification that allow collection of biometrics passed on November 27 of last year.

The law allows recording of blood type among the DNA-related information that will be contained in a national database to be shared across agencies "to perform their functions and tasks." The ministry will work with other parts of the government to integrate the identification system into the national database. [...] Vietnam's future identity cards will incorporate the functions of health insurance cards, social insurance books, driver's licenses, birth certificates, and marriage certificates, as defined by the amendment.

As for how the information will be collected, the amendments state: "Biometric information on DNA and voice is collected when voluntarily provided by the people or the agency conducting criminal proceedings or the agency managing the person to whom administrative measures are applied in the process of settling the case according to their functions and duties whether to solicit assessment or collect biometric information on DNA, people's voices are shared with identity management agencies for updating and adjusting to the identity database."

EU

EU Opens Formal Investigation Into TikTok Over Possible Online Content Breaches (reuters.com) 18

An anonymous reader quotes a report from Reuters: The European Union will investigate whether ByteDance's TikTok breached online content rules aimed at protecting children and ensuring transparent advertising, an official said on Monday, putting the social media platform at risk of a hefty fine. EU industry chief Thierry Breton said he took the decision after analyzing the short video app's risk assessment report and its replies to requests for information, confirming a Reuters story. "Today we open an investigation into TikTok over suspected breach of transparency & obligations to protect minors: addictive design & screen time limits, rabbit hole effect, age verification, default privacy settings," Breton said on X.

The European Union's Digital Services Act (DSA), which applies to all online platforms since Feb. 17, requires in particular very large online platforms and search engines to do more to tackle illegal online content and risks to public security. TikTok's owner, China-based ByteDance, could face fines of up to 6% of its global turnover if TikTok is found guilty of breaching DSA rules. TikTok said it would continue to work with experts and the industry to keep young people on its platform safe and that it looked forward to explaining this work in detail to the European Commission.

The European Commission said the investigation will focus on the design of TikTok's system, including algorithmic systems which may stimulate behavioral addictions and/or create so-called 'rabbit hole effects'. It will also probe whether TikTok has put in place appropriate and proportionate measures to ensure a high level of privacy, safety and security for minors. As well as the issue of protecting minors, the Commission is looking at whether TikTok provides a reliable database on advertisements on its platform so that researchers can scrutinize potential online risks.

Programming

How Rust Improves the Security of Its Ecosystem (rust-lang.org) 45

This week the non-profit Rust Foundation announced the release of a report on what their Security Initiative accomplished in the last six months of 2023. "There is already so much to show for this initiative," says the foundation's executive director, "from several new open source security projects to several completed and publicly available security threat models."

From the executive summary: When the user base of any programming language grows, it becomes more attractive to malicious actors. As any programming language ecosystem expands with more libraries, packages, and frameworks, the surface area for attacks increases. Rust is no different. As the steward of the Rust programming language, the Rust Foundation has a responsibility to provide a range of resources to the growing Rust community. This responsibility means we must work with the Rust Project to help empower contributors to participate in a secure and scalable manner, eliminate security burdens for Rust maintainers, and educate the public about security within the Rust ecosystem...

Recent Achievements of the Security Initiative Include:

- Completing and releasing Rust Infrastructure and Crates Ecosystem threat models

- Further developing Rust Foundation open source security project Painter [for building a graph database of dependencies/invocations between crates] and releasing new security project, Typomania [a toolbox to check for typosquatting in package registries].

- Utilizing new tools and best practices to identify and address malicious crates.

- Helping reduce technical debt within the Rust Project, producing/contributing to security-focused documentation, and elevating security priorities for discussion within the Rust Project.

... and more!

Over the Coming Months, Security Initiative Engineers Will Primarily Focus On:

- Completing all four Rust security threat models and taking action to address encompassed threats

- Standing up additional infrastructure to support redundancy, backups, and mirroring of critical Rust assets

- Collaborating with the Rust Project on the design and potential implementation of signing and PKI solutions for crates.io to achieve security parity with other popular ecosystems

- Continuing to create and further develop tools to support Rust ecosystem, including the crates.io admin functionality, Painter, Typomania, and Sandpit

United States

No 'GPT' Trademark For OpenAI (techcrunch.com) 22

The U.S. Patent and Trademark Office has denied OpenAI's attempt to trademark "GPT," ruling that the term is "merely descriptive" and therefore unable to be registered. From a report: [...] The name, according to the USPTO, doesn't meet the standards to register for a trademark and the protections a "TM" after the name affords. (Incidentally, they refused once back in October, and this is a "FINAL" in all caps denial of the application.) As the denial document puts it: "Registration is refused because the applied-for mark merely describes a feature, function, or characteristic of applicant's goods and services."

OpenAI argued that it had popularized the term GPT, which stands in this case for "generative pre-trained transformer," describing the nature of the machine learning model. It's generative because it produces new (ish) material, pre-trained in that it is a large model trained centrally on a proprietary database, and transformer is the name of a particular method of building AIs (discovered by Google researchers in 2017) that allows for much larger models to be trained. But the patent office pointed out that GPT was already in use in numerous other contexts and by other companies in related ones.

Privacy

'World's Biggest Casino' App Exposed Customers' Personal Data (techcrunch.com) 10

An anonymous reader shares a report: The startup that develops the phone app for casino resort giant WinStar has secured an exposed database that was spilling customers' private information to the open web. Oklahoma-based WinStar bills itself as the "world's biggest casino" by square footage. The casino and hotel resort also offers an app, My WinStar, in which guests can access self-service options during their hotel stay, their rewards points and loyalty benefits, and casino winnings.

The app is developed by a Nevada software startup called Dexiga. The startup left one of its logging databases on the internet without a password, allowing anyone with knowledge of its public IP address to access the WinStar customer data stored within using only their web browser. Dexiga took the database offline after TechCrunch alerted the company to the security lapse. Anurag Sen, a good-faith security researcher who has a knack for discovering inadvertently exposed sensitive data on the internet, found the database containing personal information, but it was initially unclear who the database belonged to. Sen said the personal data included full names, phone numbers, email addresses and home addresses. Sen shared details of the exposed database with TechCrunch to help identify its owner and disclose the security lapse.

Medicine

AI Cannot Be Used To Deny Health Care Coverage, Feds Clarify To Insurers 81

An anonymous reader quotes a report from Ars Technica: Health insurance companies cannot use algorithms or artificial intelligence to determine care or deny coverage to members on Medicare Advantage plans, the Centers for Medicare & Medicaid Services (CMS) clarified in a memo (PDF) sent to all Medicare Advantage insurers. The memo -- formatted like an FAQ on Medicare Advantage (MA) plan rules -- comes just months after patients filed lawsuits claiming that UnitedHealth and Humana have been using a deeply flawed, AI-powered tool to deny care to elderly patients on MA plans. The lawsuits, which seek class-action status, center on the same AI tool, called nH Predict, used by both insurers and developed by NaviHealth, a UnitedHealth subsidiary.

According to the lawsuits, nH Predict produces draconian estimates for how long a patient will need post-acute care in facilities like skilled nursing homes and rehabilitation centers after an acute injury, illness, or event, like a fall or a stroke. And NaviHealth employees face discipline for deviating from the estimates, even though they often don't match prescribing physicians' recommendations or Medicare coverage rules. For instance, while MA plans typically provide up to 100 days of covered care in a nursing home after a three-day hospital stay, using nH Predict, patients on UnitedHealth's MA plan rarely stay in nursing homes for more than 14 days before receiving payment denials, the lawsuits allege.

It's unclear how nH Predict works exactly, but it reportedly uses a database of 6 million patients to develop its predictions. Still, according to people familiar with the software, it only accounts for a small set of patient factors, not a full look at a patient's individual circumstances. This is a clear no-no, according to the CMS's memo. For coverage decisions, insurers must "base the decision on the individual patient's circumstances, so an algorithm that determines coverage based on a larger data set instead of the individual patient's medical history, the physician's recommendations, or clinical notes would not be compliant," the CMS wrote.
"In all, the CMS finds that AI tools can be used by insurers when evaluating coverage -- but really only as a check to make sure the insurer is following the rules," reports Ars. "An 'algorithm or software tool should only be used to ensure fidelity,' with coverage criteria, the CMS wrote. And, because 'publicly posted coverage criteria are static and unchanging, artificial intelligence cannot be used to shift the coverage criteria over time' or apply hidden coverage criteria."

The CMS also warned insurers to ensure that any AI tool or algorithm used "is not perpetuating or exacerbating existing bias, or introducing new biases." It ended its notice by telling insurers that it is increasing its audit activities and "will be monitoring closely whether MA plans are utilizing and applying internal coverage criteria that are not found in Medicare laws."
Biotech

Struggling 23andMe Is Exploring Splitting the DNA Company In Two (seekingalpha.com) 35

In a recent interview with Bloomberg, 23andMe CEO Anne Wojcicki said the company was considering splitting its consumer and therapeutics businesses in an effort to boost its stock price and maintain its listing. From a report: Bloomberg noted that the stock has lost more than 90% of its value since the company went public in 2021 through a SPAC merger. Bloomberg said the therapeutics business could be an attractive asset for a larger healthcare company. The business currently has two drug candidates in clinical trials and just received approval to commence testing for a third. The company's DNA database of more than 14 million customers is one of the world's largest. Currently, 23andMe derives most of its revenue from its consumer business, Bloomberg added.

23andMe's stock has been trading below $1 since December, hurt by revelations that hackers were able to access the personal information of around 50% of its subscribers. The company is also facing mounting litigation over the incident. The company is expected to release its Q4 earnings report after market close on Wednesday.

AI

Police Departments Are Turning To AI To Sift Through Unreviewed Body-Cam Footage (propublica.org) 40

An anonymous reader quotes a report from ProPublica: Over the last decade, police departments across the U.S. have spent millions of dollars equipping their officers with body-worn cameras that record what happens as they go about their work. Everything from traffic stops to welfare checks to responses to active shooters is now documented on video. The cameras were pitched by national and local law enforcement authorities as a tool for building public trust between police and their communities in the wake of police killings of civilians like Michael Brown, an 18 year old black teenager killed in Ferguson, Missouri in 2014. Video has the potential not only to get to the truth when someone is injured or killed by police, but also to allow systematic reviews of officer behavior to prevent deaths by flagging troublesome officers for supervisors or helping identify real-world examples of effective and destructive behaviors to use for training. But a series of ProPublica stories has shown that a decade on, those promises of transparency and accountability have not been realized.

One challenge: The sheer amount of video captured using body-worn cameras means few agencies have the resources to fully examine it. Most of what is recorded is simply stored away, never seen by anyone. Axon, the nation's largest provider of police cameras and of cloud storage for the video they capture, has a database of footage that has grown from around 6 terabytes in 2016 to more than 100 petabytes today. That's enough to hold more than 5,000 years of high definition video, or 25 million copies of last year's blockbuster movie "Barbie." "In any community, body-worn camera footage is the largest source of data on police-community interactions. Almost nothing is done with it," said Jonathan Wender, a former police officer who heads Polis Solutions, one of a growing group of companies and researchers offering analytic tools powered by artificial intelligence to help tackle that data problem.

The Paterson, New Jersey, police department has made such an analytic tool a major part of its plan to overhaul its force. In March 2023, the state's attorney general took over the department after police shot and killed Najee Seabrooks, a community activist experiencing a mental health crisis who had called 911 for help. The killing sparked protests and calls for a federal investigation of the department. The attorney general appointed Isa Abbassi, formerly the New York Police Department's chief of strategic initiatives, to develop a plan for how to win back public trust. "Changes in Paterson are led through the use of technology," Abbassi said at a press conference announcing his reform plan in September, "Perhaps one of the most exciting technology announcements today is a real game changer when it comes to police accountability and professionalism." The department, Abassi said, had contracted with Truleo, a Chicago-based software company that examines audio from bodycam videos to identify problematic officers and patterns of behavior.

For around $50,000 a year, Truleo's software allows supervisors to select from a set of specific behaviors to flag, such as when officers interrupt civilians, use profanity, use force or mute their cameras. The flags are based on data Truleo has collected on which officer behaviors result in violent escalation. Among the conclusions from Truleo's research: Officers need to explain what they are doing. "There are certain officers who don't introduce themselves, they interrupt people, and they don't give explanations. They just do a lot of command, command, command, command, command," said Anthony Tassone, Truleo's co-founder. "That officer's headed down the wrong path." For Paterson police, Truleo allows the department to "review 100% of body worn camera footage to identify risky behaviors and increase professionalism," according to its strategic overhaul plan. The software, the department said in its plan, will detect events like uses of force, pursuits, frisks and non-compliance incidents and allow supervisors to screen for both "professional and unprofessional officer language."
There are around 30 police departments currently use Truleo, according to the company.

Christopher J. Schneider, a professor at Canada's Brandon University who studies the impact of emerging technology on social perceptions of police, is skeptical the AI tools will fix the problems in policing because the findings might be kept from the public just like many internal investigations. "Because it's confidential," he said, "the public are not going to know which officers are bad or have been disciplined or not been disciplined."
Security

Mistakenly Published Password Exposes Mercedes-Benz Source Code (techcrunch.com) 29

An anonymous reader quotes a report from TechCrunch: Mercedes-Benz accidentally exposed a trove of internal data after leaving a private key online that gave "unrestricted access" to the company's source code, according to the security research firm that discovered it. Shubham Mittal, co-founder and chief technology officer of RedHunt Labs, alerted TechCrunch to the exposure and asked for help in disclosing to the car maker. The London-based cybersecurity company said it discovered a Mercedes employee's authentication token in a public GitHub repository during a routine internet scan in January. According to Mittal, this token -- an alternative to using a password for authenticating to GitHub -- could grant anyone full access to Mercedes's GitHub Enterprise Server, thus allowing the download of the company's private source code repositories.

"The GitHub token gave 'unrestricted' and 'unmonitored' access to the entire source code hosted at the internal GitHub Enterprise Server," Mittal explained in a report shared by TechCrunch. "The repositories include a large amount of intellectual property connection strings, cloud access keys, blueprints, design documents, [single sign-on] passwords, API Keys, and other critical internal information." Mittal provided TechCrunch with evidence that the exposed repositories contained Microsoft Azure and Amazon Web Services (AWS) keys, a Postgres database, and Mercedes source code. It's not known if any customer data was contained within the repositories. It's not known if anyone else besides Mittal discovered the exposed key, which was published in late-September 2023.
A Mercedes spokesperson confirmed that the company "revoked the respective API token and removed the public repository immediately."

"We can confirm that internal source code was published on a public GitHub repository by human error. The security of our organization, products, and services is one of our top priorities. We will continue to analyze this case according to our normal processes. Depending on this, we implement remedial measures."
Crime

IT Consultant Fined For Daring To Expose Shoddy Security (theregister.com) 102

Thomas Claburn reports via The Register: A security researcher in Germany has been fined $3,300 for finding and reporting an e-commerce database vulnerability that was exposing almost 700,000 customer records. Back in June 2021, according to our pals at Heise, an contractor identified elsewhere as Hendrik H. was troubleshooting software for a customer of IT services firm Modern Solution GmbH. He discovered that the Modern Solution code made an MySQL connection to a MariaDB database server operated by the vendor. It turned out the password to access that remote server was stored in plain text in the program file MSConnect.exe, and opening it in a simple text editor would reveal the unencrypted hardcoded credential.

With that easy-to-find password in hand, anyone could log into the remote server and access data belonging to not just that one customer of Modern Solution, but data belonging to all of the vendor's clients stored on that database server. That info is said to have included personal details of those customers' own customers. And we're told that Modern Solution's program files were available for free from the web, so truly anyone could inspect the executables in a text editor for plain-text hardcoded database passwords. The contractor's findings were discussed in a June 23, 2021 report by Mark Steier, who writes about e-commerce. That same day Modern Solution issued a statement [PDF] -- translated from German -- summarizing the incident [...]. The statement indicates that sensitive data about Modern Solution customers was exposed: last names, first names, email addresses, telephone numbers, bank details, passwords, and conversation and call histories. But it claims that only a limited amount of data -- names and addresses -- about shoppers who made purchases from these retail clients was exposed. Steier contends that's incorrect and alleged that Modern Solution downplayed the seriousness of the exposed data, which he said included extensive customer data from the online stores operated by Modern Solution's clients.

In September 2021 police in Germany seized the IT consultant's computers following a complaint from Modern Solution that claimed he could only have obtained the password through insider knowledge â" he worked previously for a related firm -- and the biz claimed he was a competitor. Hendrik H. was charged with unlawful data access under Section 202a of Germany's Criminal Code, based on the rule that examining data protected by a password can be classified as a crime under the Euro nation's cybersecurity law. In June, 2023, a Julich District Court in western Germany sided with the IT consultant because the Modern Solution software was insufficiently protected. But the Aachen regional court directed the district court to hear the complaint. Now, the district court has reversed its initial decision. On January 17, a Julich District Court fined Hendrik H. and directed him to pay court costs.

Classic Games (Games)

Billy Mitchell and Twin Galaxies Settle Lawsuits On Donkey Kong World Records (nme.com) 64

"What happens when a loser who needs to win faces a winner who refuses to lose?"

That was the tagline for the iconic 2007 documentary The King of Kong: A Fistful of Quarters, chronicling a middle-school teacher's attempts to take the Donkey Kong record from reigning world champion Billy Mitchell. "Billy Mitchell always has a plan," says Billy Mitchell in the movie (who is also shown answering his phone, "World Record Headquarters. Can I help you?") By 1985, 30-year-old Mitchell was already listed in the "Guinness Book of World Records" for having the world's highest scores for Pac-Man, Ms. Pac-Man, Donkey Kong, Donkey Kong, Jr., Centipede, and Burger Time.

But then, NME reports... In 2018, a number of Mitchell's Donkey Kong high-scores were called into question by a fellow gamer, who supplied a string of evidence on the Twin Galaxies forums suggesting Mitchell had used an emulator to break the records, rather than the official, unmodified hardware that's typically required to keep things fair. [Twin Galaxies is Guiness World Records' official source for videogame scores.] Following "an independent investigation," Mitchell's hi-scores were removed from video game database Twin Galaxies as well as the Guinness Book Of Records, though the latter reversed the decision in 2020. Forensic analysts also accused him of cheating in 2022 but Mitchell has fought the accusations ever since.
This week, 58-year-old Billy Mitchell posted an announcement on X. "Twin Galaxies has reinstated all of my world records from my videogame career... I am relieved and satisfied to reach this resolution after an almost six-year ordeal and look forward to pursuing my unfinished business elsewhere. Never Surrender, Billy Mitchell."

X then wrote below the announcement, "Readers added context they thought people might want to know... Twin Galaxies has only reinstated Michell's scores on an archived leaderboard, where rules were different prior to TG being acquired in 2014. His score remains removed from the current leaderboard where he continues to be ineligible by today's rules."

The statement from Twin Galaxies says they'd originally believed they'd seen "a demonstrated impossibility of original, unmodified Donkey Kong arcade hardware" in a recording of one of Billy's games. As punishment they'd then invalidated every record he'd ever set in his life.

But now an engineer (qualified as an expert in federal courts) says aging components in the game board could've produced the same visual artifacts seen in the videotape of the disputed game. Consistent with Twin Galaxies' dedication to the meticulous documentation and preservation of video game score history, Twin Galaxies shall heretofore reinstate all of Mr. Mitchell's scores as part of the official historical database on Twin Galaxies' website. Additionally, upon closing of the matter, Twin Galaxies shall permanently archive and remove from online display the dispute thread... as well as all related statements and articles.
NME adds: Twin Galaxies' lawyer David Tashroudian told Ars Technica that the company had all its "ducks in a row" for a legal battle with Mitchell but "there were going to be an inordinate amount of costs involved, and both parties were facing a lot of uncertainty at trial, and they wanted to get the matter settled on their own terms."
And the New York Times points out that while Billy scored 1,062,800 in that long-ago game, "The vigorous long-running and sometimes bitter dispute was over marks that have long since been surpassed. The current record, as reported by Twin Galaxies, belongs to Robbie Lakeman. It's 1,272,800."

Thanks to long-time Slashdot reader UnknowingFool for sharing the news.
Privacy

Have I Been Pwned Adds 71 Million Emails From Naz.API Stolen Account List (bleepingcomputer.com) 17

An anonymous reader quotes a report from BleepingComputer: Have I Been Pwned has added almost 71 million email addresses associated with stolen accounts in the Naz.API dataset to its data breach notification service. The Naz.API dataset is a massive collection of 1 billion credentials compiled using credential stuffing lists and data stolen by information-stealing malware. Credential stuffing lists are collections of login name and password pairs stolen from previous data breaches that are used to breach accounts on other sites.

Information-stealing malware attempts to steal a wide variety of data from an infected computer, including credentials saved in browsers, VPN clients, and FTP clients. This type of malware also attempts to steal SSH keys, credit cards, cookies, browsing history, and cryptocurrency wallets. The stolen data is collected in text files and images, which are stored in archives called "logs." These logs are then uploaded to a remote server to be collected later by the attacker. Regardless of how the credentials are stolen, they are then used to breach accounts owned by the victim, sold to other threat actors on cybercrime marketplaces, or released for free on hacker forums to gain reputation amongst the hacking community.

The Naz.API is a dataset allegedly containing over 1 billion lines of stolen credentials compiled from credential stuffing lists and from information-stealing malware logs. It should be noted that while the Naz.API dataset name includes the word "Naz," it is not related to network attached storage (NAS) devices. This dataset has been floating around the data breach community for quite a while but rose to notoriety after it was used to fuel an open-source intelligence (OSINT) platform called illicit.services. This service allows visitors to search a database of stolen information, including names, phone numbers, email addresses, and other personal data. The service shut down in July 2023 out of concerns it was being used for Doxxing and SIM-swapping attacks. However, the operator enabled the service again in September. Illicit.services use data from various sources, but one of its largest sources of data came from the Naz.API dataset, which was shared privately among a small number of people. Each line in the Naz.API data consists of a login URL, its login name, and an associated password stolen from a person's device, as shown [here].
"Here's the back story: this week I was contacted by a well-known tech company that had received a bug bounty submission based on a credential stuffing list posted to a popular hacking forum," explained Troy Hunt, the creator of Have I Been Pwned, in blog post. "Whilst this post dates back almost 4 months, it hadn't come across my radar until now and inevitably, also hadn't been sent to the aforementioned tech company."

"They took it seriously enough to take appropriate action against their (very sizeable) user base which gave me enough cause to investigate it further than your average cred stuffing list."

To check if your credentials are in the Naz.API dataset, you can visit Have I Been Pwned.
The Internet

Brace Yourself, IPv6 is Coming (supabase.com) 320

Paul Copplestone, co-founder of Supabase, writing in a blog post: On February 1st 2024, AWS will start charging for IPv4 addresses. This will cost $0.005 per hour -- around $4 month. A more accurate title for this post would be "Brace yourself, IPv4 is leaving," because I can't imagine many companies will pay to keep using the IPv4 address. While $4 is relatively small for an individual, my hypothesis is that AWS is a foundational layer to many infrastructure companies, like Supabase -- we offer a full EC2 instance for every Postgres database, so this would add millions to our AWS bill.

Infrastructure companies on AWS have a few choices:
1. Pass on the cost to the customer.
2. Provide a workaround (for example, a proxy).
3. Only offer IPv6 and hope the world will catch up.

Slashdot Top Deals