Medicine

COVID-19 Hospital Data Is a Hot Mess After Feds Take Control (arstechnica.com) 174

slack_justyb shares a report from Ars Technica: As COVID-19 hospitalizations in the US approach the highest levels seen in the pandemic so far, national efforts to track patients and hospital resources remain in shambles after the federal government abruptly seized control of data collection earlier this month. Watchdogs and public health experts were immediately aghast at the switch to the HHS database, fearing the data would be manipulated for political reasons or hidden from public view altogether. However, the real threat so far has been the administrative chaos. The switch took effect July 15, giving hospitals and states just days to adjust to the new data collection and submission process.

As such, hospitals have been struggling with the new data reporting, which involves reporting more types of data than the CDC's previous system required. Generally, the data includes stats on admissions, discharges, beds and ventilators in use and in reserve, as well as information on patients. For some hospitals, that data has to be harvested from various sources, such as electronic medical records, lab reports, pharmacy data, and administrative sources. Some larger hospital systems have been working to write new scripts to automate the new data mining, while others are relying on staff to compile the data manually into Excel spreadsheets, which can take multiple hours each day, according to a report by Healthcare IT News. The task has been particularly onerous for small, rural hospitals and for hospitals already strained by a crush of COVID-19 patients.
"It seems the obvious result of going from a system that is well tested to something new and alien to everyone is happening exactly as everyone who has ever done these kinds of conversions predicted," adds Slashdot reader slack_justyb.
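The report says larger hospital systems have been writing scripts to automate the new reporting. A hypothetical sketch of that kind of aggregation job is below; the source-system names and stat fields are invented for illustration and are not the actual HHS schema.

```python
# Illustrative sketch: pull daily counts from several hospital source
# systems and merge them into one submission record. All source names
# and field names here are hypothetical, not the real HHS fields.

def build_daily_report(sources):
    """Merge per-source stat dicts into a single report, summing counts."""
    report = {}
    for source_name, stats in sources.items():
        for field, count in stats.items():
            report[field] = report.get(field, 0) + count
    return report

if __name__ == "__main__":
    sources = {
        "emr": {"covid_admissions": 12, "covid_discharges": 9},
        "bed_management": {"beds_in_use": 180, "beds_in_reserve": 20},
        "respiratory": {"ventilators_in_use": 14, "ventilators_in_reserve": 6},
    }
    print(build_daily_report(sources))
```

Even a small script like this replaces hours of manual spreadsheet work, which is why the burden falls hardest on hospitals without engineering staff.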
Security

Hackers Stole GitHub and GitLab OAuth Tokens From Git Analytics Firm Waydev (zdnet.com) 28

Waydev, an analytics platform used by software companies, has disclosed a security breach that occurred earlier this month. From a report: The company says that hackers broke into its platform and stole GitHub and GitLab OAuth tokens from its internal database. Waydev, a San Francisco-based company, runs a platform that can be used to track software engineers' work output by analyzing Git-based codebases. To do this, Waydev runs a special app listed on the GitHub and GitLab app stores. When users install the app, Waydev receives an OAuth token that it can use to access its customers' GitHub or GitLab projects. Waydev stores this token in its database and uses it on a daily basis to generate analytical reports for its customers. Waydev CEO and co-founder Alex Circei told ZDNet today in a phone call that hackers used a blind SQL injection vulnerability to gain access to its database, from where they stole GitHub and GitLab OAuth tokens. The hackers then used some of these tokens to pivot to other companies' codebases and gain access to their source code projects.
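The entry point was reportedly a blind SQL injection. As a sketch of the general defense (not Waydev's actual code; the table and column names below are invented), compare string-spliced SQL with a bound parameter:

```python
# Minimal SQL-injection demonstration using an in-memory SQLite database.
# The schema is hypothetical, standing in for a table of OAuth tokens.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE oauth_tokens (customer TEXT, token TEXT)")
conn.execute("INSERT INTO oauth_tokens VALUES ('acme', 'gh_secret')")

def get_token_unsafe(customer):
    # VULNERABLE: user input is spliced into the SQL text, so a crafted
    # value like "x' OR '1'='1" rewrites the query's logic.
    query = "SELECT token FROM oauth_tokens WHERE customer = '%s'" % customer
    return conn.execute(query).fetchall()

def get_token_safe(customer):
    # SAFE: the driver binds the value as a literal; it cannot alter the SQL.
    query = "SELECT token FROM oauth_tokens WHERE customer = ?"
    return conn.execute(query, (customer,)).fetchall()

# The injected input dumps every row via the unsafe path...
assert get_token_unsafe("x' OR '1'='1") == [("gh_secret",)]
# ...but matches nothing when bound as a parameter.
assert get_token_safe("x' OR '1'='1") == []
```

A blind injection exploits the same flaw without seeing output directly, inferring data from response timing or behavior, so parameterized queries close off both variants.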
Science

NIST Study Finds That Masks Defeat Most Facial Recognition Algorithms (venturebeat.com) 46

In a report published today by the National Institute of Standards and Technology (NIST), a physical sciences laboratory and non-regulatory agency of the U.S. Department of Commerce, researchers attempted to evaluate the performance of facial recognition algorithms on faces partially covered by protective masks. They report that even the best of the 89 commercial facial recognition algorithms they tested had error rates between 5% and 50% in matching digitally applied masks with photos of the same person without a mask. From a report: "With the arrival of the pandemic, we need to understand how face recognition technology deals with masked faces," Mei Ngan, a NIST computer scientist and a coauthor of the report, said in a statement. "We have begun by focusing on how an algorithm developed before the pandemic might be affected by subjects wearing face masks. Later this summer, we plan to test the accuracy of algorithms that were intentionally developed with masked faces in mind."

The study -- part of a series from NIST's Face Recognition Vendor Test (FRVT) program conducted in collaboration with the Department of Homeland Security's Science and Technology Directorate, the Office of Biometric Identity Management, and Customs and Border Protection -- explored how well each of the algorithms was able to perform "one-to-one" matching, where a photo is compared with a different photo of the same person. (NIST notes this sort of technique is often used in smartphone unlocking and passport identity verification systems.) The team applied the algorithms to a set of about 6 million photos used in previous FRVT studies, but they didn't test "one-to-many" matching, which is used to determine whether a person in a photo matches any in a database of known images. Because real-world masks differ, the researchers came up with nine mask variants to test, which included differences in shape, color, and nose coverage.
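The error rates NIST reports for one-to-one matching are, in essence, false non-match rates: the fraction of genuine same-person comparisons rejected at a decision threshold. A toy computation with invented similarity scores:

```python
# Sketch of the false non-match rate (FNMR) metric behind the reported
# 5%-50% figures. The scores and threshold below are made up.

def false_non_match_rate(genuine_scores, threshold):
    """Fraction of genuine same-person comparisons scoring below threshold."""
    misses = sum(1 for s in genuine_scores if s < threshold)
    return misses / len(genuine_scores)

# Ten hypothetical masked-vs-unmasked comparisons of the same person.
scores = [0.91, 0.88, 0.42, 0.77, 0.95, 0.39, 0.81, 0.72, 0.55, 0.93]
print(false_non_match_rate(scores, threshold=0.6))  # 0.3
```

Masks tend to depress genuine scores, so at a fixed threshold more legitimate matches fall below it, which is exactly the degradation the study measures.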

Databases

'Meow' Attack Has Now Wiped Nearly 4,000 Databases (arstechnica.com) 54

On Thursday long-time Slashdot reader PuceBaboon wrote: Ars Technica is reporting a new attack on unprotected databases which, to date, has deleted all content from over 1,000 ElasticSearch and MongoDB databases across the 'net, leaving the calling-card "meow" in its place.

Most people are likely to find this a lot less amusing than a kitty video, so if you have a database instance on a cloud machine, now would be a good time to verify that it is password protected by something other than the default install password...

From the article: The attack first came to the attention of researcher Bob Diachenko on Tuesday, when he discovered a database that stored user details of the UFO VPN had been destroyed. UFO VPN had already been in the news that day because the world-readable database exposed a wealth of sensitive user information... Besides amounting to a serious privacy breach, the database was at odds with the Hong Kong-based UFO's promise to keep no logs. The VPN provider responded by moving the database to a different location but once again failed to secure it properly. Shortly after, the Meow attack wiped it out.
"Attacks have continued and are getting closer to 4,000," reports Bleeping Computer. "A new search on Saturday using Shodan shows that more than 3,800 databases have entry names matching a 'meow' attack. More than 97% of them are Elastic and MongoDB."
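The common thread in these wipes is databases reachable without credentials. As a hedged sketch (it only classifies a hypothetical HTTP response; it does not probe any host), an Elasticsearch node that answers an unauthenticated request to its root endpoint with cluster metadata is open to exactly this kind of attack:

```python
# Triage helper: classify an Elasticsearch root-endpoint response.
# A 200 with cluster metadata and no auth means anyone, including a
# "meow" bot, can read or wipe the indices; a 401 means authentication
# is at least enabled. Response values here are illustrative.

def classify_exposure(status_code, body):
    if status_code == 200 and "cluster_name" in body:
        return "EXPOSED: no auth required"
    if status_code == 401:
        return "auth required"
    return "inconclusive"

print(classify_exposure(200, {"cluster_name": "prod", "version": {}}))
print(classify_exposure(401, {}))
```

Search engines like Shodan find these instances the same way, which is how researchers arrived at the count of affected databases.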
Privacy

New York Bans Use of Facial Recognition In Schools Statewide (venturebeat.com) 29

The New York legislature today passed a moratorium banning the use of facial recognition and other forms of biometric identification in schools until 2022. VentureBeat reports: The bill, which has yet to be signed by Governor Andrew Cuomo, appears to be the first in the nation to explicitly regulate the use of the technologies in schools and comes in response to the planned launch of facial recognition by the Lockport City School District. In January, Lockport Schools became one of the only U.S. school districts to adopt facial recognition in all of its K-12 buildings, which serve about 5,000 students. Proponents argued the $1.4 million system could keep students safe by enforcing watchlists and sending alerts when it detected someone dangerous (or otherwise unwanted). But critics said it could be used to surveil students and build a database of sensitive information about people's faces, which the school district then might struggle to keep secure.

While Lockport Schools' privacy policy states the watchlist wouldn't include students and the database would only cover non-students deemed a threat, including sex offenders or those banned by court order, the district's superintendent ultimately oversaw which individuals were added to the system. And it was reported earlier this month that the school board's president, John Linderman, couldn't guarantee that student photos would never be included in the system for disciplinary reasons.
"This is especially important as schools across the state begin to acknowledge the experiences of Black and Brown students being policed in schools and funneled into the school-to-prison pipeline," said Stefanie Coyle, Deputy Director of the Education Policy Center at the New York Civil Liberties Union. "Facial recognition is notoriously inaccurate especially when it comes to identifying women and people of color. For children, whose appearances change rapidly as they grow, biometric technologies' accuracy is even more questionable. False positives, where the wrong student is identified, can result in traumatic interactions with law enforcement, loss of class time, disciplinary action, and potentially a criminal record."
Privacy

Security Breach Exposes More Than One Million DNA Profiles On Major Genealogy Database (buzzfeednews.com) 28

An anonymous reader quotes a report from BuzzFeed News: On July 19, genealogy enthusiasts who use the website GEDmatch to upload their DNA information and find relatives to fill in their family trees got an unpleasant surprise. Suddenly, more than a million DNA profiles that had been hidden from cops using the site to find partial matches to crime scene DNA were available for police to search. The news has undermined efforts by Verogen, the forensic genetics company that purchased GEDmatch last December, to convince users that it would protect their privacy while pursuing a business based on using genetic genealogy to help solve violent crimes.

A second alarm came on July 21, when MyHeritage, a genealogy website based in Israel, announced that some of its users had been subjected to a phishing attack to obtain their log-in details for the site -- apparently targeting email addresses obtained in the attack on GEDmatch just two days before. In a statement emailed to BuzzFeed News and posted on Facebook, Verogen explained that the sudden unmasking of GEDmatch profiles that were supposed to be hidden from law enforcement was "orchestrated through a sophisticated attack on one of our servers via an existing user account." "As a result of this breach, all user permissions were reset, making all profiles visible to all users. This was the case for approximately 3 hours," the statement said. "During this time, users who did not opt in for law enforcement matching were available for law enforcement matching and, conversely, all law enforcement profiles were made visible to GEDmatch users." It's unclear whether any unauthorized profiles were searched by law enforcement.
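Verogen's statement says the reset made all profiles visible to all users. A small illustration, with entirely hypothetical field names, of why the default chosen for a permissions reset matters: a fail-open reset reproduces exactly this failure, while a fail-closed reset errs toward privacy.

```python
from dataclasses import dataclass

@dataclass
class Profile:
    name: str
    # Fail-closed default: absent an explicit opt-in, a profile should
    # not be searchable by law enforcement.
    law_enforcement_matching: bool = False

def reset_permissions_fail_open(profiles):
    # The failure mode described in the statement: resetting every
    # permission to a permissive value exposes all profiles at once.
    for p in profiles:
        p.law_enforcement_matching = True

def reset_permissions_fail_closed(profiles):
    # Safer: a reset falls back to the restrictive default.
    for p in profiles:
        p.law_enforcement_matching = False

users = [Profile("a"), Profile("b", law_enforcement_matching=True)]
reset_permissions_fail_open(users)
print([p.law_enforcement_matching for p in users])  # [True, True]
```

This is only a sketch of the general principle; GEDmatch's actual permission model and how the attacker triggered the reset have not been disclosed.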

Crime

Surveillance Software Scanning File-Sharing Networks Led To 12,000 Arrests (nbcnews.com) 106

Mr. Cooper was a retired high school history teacher who was using the kind of peer-to-peer networks where, as NBC News puts it, "the lack of corporate oversight creates the illusion of safety for people sharing illegal images."
Police were led to Cooper's door by a forensic tool called Child Protection System, which scans file-sharing networks and chatrooms to find computers that are downloading photos and videos depicting the sexual abuse of prepubescent children. The software, developed by the Child Rescue Coalition, a Florida-based nonprofit, can help establish the probable cause needed to get a search warrant... Cooper is one of more than 12,000 people arrested in cases flagged by the Child Protection System software over the past 10 years, according to the Child Rescue Coalition... The Child Protection System, which lets officers search by country, state, city or county, displays a ranked list of the internet addresses downloading the most problematic files...

The Child Protection System "has had a bigger effect for us than any tool anyone has ever created. It's been huge," said Dennis Nicewander, assistant state attorney in Broward County, Florida, who has used the software to prosecute about 200 cases over the last decade. "They have made it so automated and simple that the guys are just sitting there waiting to be arrested." The Child Rescue Coalition gives its technology for free to law enforcement agencies, and it is used by about 8,500 investigators in all 50 states. It's used in 95 other countries, including Canada, the U.K. and Brazil. Since 2010, the nonprofit has trained about 12,000 law enforcement investigators globally. Now, the Child Rescue Coalition is seeking partnerships with consumer-focused online platforms, including Facebook, school districts and a babysitter booking site, to determine whether people who are downloading illegal images are also trying to make contact with or work with minors...

The tool has a growing database of more than a million hashed images and videos, which it uses to find computers that have downloaded them. The software is able to track IP addresses — which are shared by people connected to the same Wi-Fi network — as well as individual devices. The system can follow devices even if the owners move or use virtual private networks, or VPNs, to mask the IP addresses, according to the Child Rescue Coalition.... Before getting a warrant, police typically subpoena the internet service provider to find out who holds the account and whether anyone at the address has a criminal history, has children or has access to children through work.
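At its core, matching against a database of known hashes is a set-membership test on file digests. A minimal sketch of that idea, using toy byte strings rather than any real data:

```python
# Hash file contents and test membership in a known-hash set. The
# "known" entries here are computed from harmless placeholder bytes.
import hashlib

def sha256_hex(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

known_hashes = {sha256_hex(b"known-file-1"), sha256_hex(b"known-file-2")}

def is_flagged(file_bytes: bytes) -> bool:
    return sha256_hex(file_bytes) in known_hashes

print(is_flagged(b"known-file-1"))   # True
print(is_flagged(b"harmless-file"))  # False
```

Hash matching only identifies files already catalogued, which is one reason critics argue such tools still need oversight and independent testing.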

A lawyer who specializes in digital rights tells NBC that these tools need more oversight and testing. "There's a danger that the visceral awfulness of the child abuse blinds us to the civil liberties concerns. Tools like this hand a great deal of power and discretion to the government. There need to be really strong checks and safeguards."
Security

VPN With 'Strict No-Logs Policy' Exposed Millions of User Log Files (betanews.com) 86

New submitter kimmmos shares a report from BetaNews: An unprotected database belonging to the VPN service UFO VPN was exposed online for more than two weeks. Contained within the database were more than 20 million logs including user passwords stored in plain text. Users of both UFO VPN's free and paid services are affected by the data breach, which was discovered by the security research team at Comparitech. Despite the Hong Kong-based VPN provider claiming to have a "strict no-logs policy" and that any data collected is anonymized, Comparitech says that "based on the contents of the database, users' information does not appear to be anonymous at all." A total of 894GB of data was exposed, and the API access records and user logs included: Account passwords in plain text; VPN session secrets and tokens; IP addresses of both user devices and the VPN servers they connected to; Connection timestamps; Geo-tags; Device and OS characteristics; and URLs that appear to be domains from which advertisements are injected into free users' web browsers. Comparitech notes that this runs counter to UFO VPN's privacy policy.
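Storing passwords in plain text is the most basic of the failures here. A minimal sketch of the standard alternative, using Python's standard library: store only a salted, slow hash, so that even a fully leaked database does not reveal the passwords themselves.

```python
# Salted, iterated password hashing with PBKDF2 from the standard
# library. Parameters (SHA-256, 200k iterations) are illustrative.
import hashlib
import hmac
import os

def hash_password(password: str, salt: bytes = None):
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    # Constant-time comparison avoids leaking match progress via timing.
    return hmac.compare_digest(candidate, digest)

salt, digest = hash_password("hunter2")
print(verify_password("hunter2", salt, digest))  # True
print(verify_password("wrong", salt, digest))    # False
```

The per-user random salt defeats precomputed rainbow tables, and the high iteration count makes brute-forcing each individual hash expensive.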
Earth

The Entire World's Carbon Emissions Will Finally Be Trackable In Real Time (vox.com) 46

An anonymous reader quotes a report from Vox: There's an old truism in the business world: what gets measured gets managed. One of the challenges in managing the greenhouse gas emissions warming the atmosphere is that they aren't measured very well. The ultimate solution to this problem -- the killer app, as it were -- would be real-time tracking of all global greenhouse gases, verified by objective third parties, and available for free to the public. Now, a new alliance of climate research groups called the Climate TRACE (Tracking Real-Time Atmospheric Carbon Emissions) Coalition has launched an effort to make the vision a reality, and they're aiming to have it ready for COP26, the climate meetings in Glasgow, Scotland, in November 2021 (postponed from November 2020). If they pull it off, it could completely change the tenor and direction of international climate talks. It could also make it easier for the hundreds of companies, cities, counties, and states that have made ambitious climate commitments to reliably track their progress.

In addition to [Al Gore, who had been looking for more reliable ways to track emissions] and WattTime, [which intends to create a public database that will track carbon emissions from all the world's large power plants using AI], the coalition now contains:

- Carbon Tracker uses machine learning and satellite data to predict the utilization of every power plant in the world;
- Earthrise Alliance aggregates and organizes publicly available environmental data into a format meaningful to journalists and researchers;
- CarbonPlan uses satellite data to track changes in aboveground biomass (especially forests) and the associated carbon emissions, down to a spatial resolution of 300 meters;
- Hudson Carbon uses satellite data to track changes in agricultural cover, cropping, and tilling, down to the level of the individual field, and compares that data against ground-level sensors;
- OceanMind uses onboard sensors to track the global movement of ships in real time and combines that with engine specs to extrapolate carbon emissions;
- Rocky Mountain Institute combines multiple sources of data to quantify methane emissions from oil and gas infrastructure;
- Hypervine uses spectroscopic imagery to track vehicle usage and blasting at quarries;
- Blue Sky Analytics uses near-infrared and shortwave infrared imagery from satellites to track fires.

The coalition will also be gathering data from a variety of other sources, from power grid data to fuel sales, sensor networks, and drones. Gore acknowledges that "this is a work in progress," but says the coalition is aiming big: "everything that can be known about where greenhouse gas emissions are coming from will be known, in near-real time."
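OceanMind's method, combining ship movement with engine specs to extrapolate CO2, reduces to simple arithmetic once fuel burn is estimated. A toy version with made-up ship numbers; the 3.1 tonnes of CO2 per tonne of heavy fuel oil is a commonly cited approximate emission factor, used here purely for illustration.

```python
# Toy emissions extrapolation: fuel burn scales with engine load and
# hours underway; CO2 scales with fuel burned. All inputs are invented.
HFO_CO2_PER_TONNE_FUEL = 3.1  # tonnes CO2 per tonne of fuel (approximate)

def voyage_co2(fuel_tonnes_per_hour_at_full_load, load_fraction, hours):
    fuel_burned = fuel_tonnes_per_hour_at_full_load * load_fraction * hours
    return fuel_burned * HFO_CO2_PER_TONNE_FUEL

# A hypothetical container ship: 5 t/h at full load, 70% load, 24 hours.
print(round(voyage_co2(5.0, 0.7, 24), 1))  # 260.4
```

The hard part in practice is not this arithmetic but inferring load and fuel type from satellite tracks and vessel registries, which is where the coalition's machine learning comes in.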

Government

White House Reportedly Orders Hospitals To Bypass CDC During COVID-19 Data Collection 189

The Trump administration is now ordering hospitals to send coronavirus patient data to a database in Washington, DC as part of a new initiative that may bypass the Centers for Disease Control and Prevention (CDC), according to a report from The New York Times published on Tuesday. The Verge reports: As outlined in a document (PDF) posted to the website of the Department of Health and Human Services (HHS), hospitals are being ordered to send data directly to the administration, effective tomorrow, a move that has alarmed some within the CDC, according to The Times. The database that will collect and store the information is referred to in the document as HHS Protect, which was built in part by data mining and predictive analytics firm Palantir. The Silicon Valley company is known most for its controversial contract work with the US military and other clandestine government agencies as well as for being co-founded and initially funded by Trump ally Peter Thiel.

"A unique link will be sent to the hospital points of contact. This will direct the [point of care] to a hospital-specific secure form that can then be used to enter the necessary information. After completing the fields, click submit and confirm that the form has been successfully captured," reads the HHS instructions. "A confirmation email will be sent to you from the HHS Protect System. This method replaces the emailing of individual spreadsheets previously requested." While the White House's official reasoning is that this plan will help make data collection on the spread of COVID-19 more centralized and efficient, some current and former public health officials fear the bypassing of the CDC may be an effort to politicize the findings and cut experts out of the loop with regard to federal messaging and guidelines, The Times reports.
The Internet

MIT Removes Huge Dataset That Teaches AI Systems To Use Racist, Misogynistic Slurs (theregister.com) 62

An anonymous reader quotes a report from The Register: MIT has taken offline its highly cited dataset that trained AI systems to potentially describe people using racist, misogynistic, and other problematic terms. The database was removed this week after The Register alerted the American super-college. MIT also urged researchers and developers to stop using the training library, and to delete any copies. "We sincerely apologize," a professor told us. The training set, built by the university, has been used to teach machine-learning models to automatically identify and list the people and objects depicted in still images. For example, if you show one of these systems a photo of a park, it might tell you about the children, adults, pets, picnic spreads, grass, and trees present in the snap. Thanks to MIT's cavalier approach when assembling its training set, though, these systems may also label women as whores or bitches, and Black and Asian people with derogatory language. The database also contained close-up pictures of female genitalia labeled with the C-word. Applications, websites, and other products relying on neural networks trained using MIT's dataset may therefore end up using these terms when analyzing photographs and camera footage.

The problematic training library in question is 80 Million Tiny Images, which was created in 2008 to help produce advanced object-detection techniques. It is, essentially, a huge collection of photos with labels describing what's in the pics, all of which can be fed into neural networks to teach them to associate patterns in photos with the descriptive labels. So when a trained neural network is shown a bike, it can accurately predict a bike is present in the snap. It's called Tiny Images because the pictures in the library are small enough for computer-vision algorithms of the late 2000s and early 2010s to digest. Today, the Tiny Images dataset is used to benchmark computer-vision algorithms along with the better-known ImageNet training collection. Unlike ImageNet, though, no one, until now, has scrutinized Tiny Images for problematic content.
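One remediation short of withdrawing a dataset entirely is to filter its label vocabulary against a blocklist before training. A minimal sketch, with neutral placeholder terms standing in for the offensive labels:

```python
# Drop (image_id, label) pairs whose label appears on a blocklist.
# The blocklist terms and sample data below are placeholders.
BLOCKLIST = {"badword1", "badword2"}

def clean_dataset(samples):
    """Return only the samples whose label is not blocklisted."""
    return [(img, label) for img, label in samples
            if label.lower() not in BLOCKLIST]

data = [("img1", "bicycle"), ("img2", "badword1"), ("img3", "tree")]
print(clean_dataset(data))  # [('img1', 'bicycle'), ('img3', 'tree')]
```

MIT chose withdrawal instead, in part because Tiny Images' labels were scraped automatically and the images are too small (32x32 pixels, per the dataset's documentation) to audit the pictures themselves reliably.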

Businesses

AWS Launches 'Amazon Honeycode', a No-Code App Building Service (zdnet.com) 43

"Amazon Web Services on Wednesday launched Amazon Honeycode, a fully-managed service that enables companies to build mobile and web applications without any programming," reports ZDNet: Customers can use the service to build apps that leverage an AWS-built database, such as a simple task-tracking application or a more complex project management app to manage multiple workflows. "Customers have told us that the need for custom applications far outstrips the capacity of developers to create them," AWS VP Larry Augustin said in a statement.

Low-code and no-code tools have been growing in popularity in recent years, enabling people with little or no coding experience to be able to build the applications they need. Other major cloud companies like Salesforce offer low-code app builders. With IT teams stretched thin during the COVID-19 pandemic, low-code tools can prove particularly useful.

Customers "can get started by selecting a pre-built template, where the data model, business logic, and applications are pre-defined and ready-to-use..." Amazon explains in a press release. "Or, they can import data into a blank workbook, use the familiar spreadsheet interface to define the data model, and design the application screens with objects like lists, buttons, and input fields.

"Builders can also add automations to their applications to drive notifications, reminders, approvals, and other actions based on conditions. Once the application is built, customers simply click a button to share it with team members."
Databases

Appeals Court Says California's IMDb-Targeting 'Ageism' Law Is Unconstitutional (techdirt.com) 140

The state of California has lost again in its attempt to punish IMDb for ageism perpetrated by movie studios who seem to refuse to cast actresses above a certain age in choice roles. Techdirt reports: The law passed by the California legislature does one thing: prevents IMDb (and other sites, theoretically) from publishing facts about actors: namely, their ages. This stupid law was ushered into existence by none other than the Screen Actors Guild, capitalizing on a (failed) lawsuit brought against the website by an actress who claimed the publication of her real age cost her millions in Hollywood paychecks. These beneficiaries of the First Amendment decided there was just too much First Amendment in California. To protect actors from studio execs, SAG decided to go after a third-party site respected for its collection of factual information about movies, actors, and everything else film-related.

The federal court handling IMDb's lawsuit against the state made quick work of the state's arguments in favor of very selective censorship. In only six pages, the court destroyed the rationale offered by the government's finest legal minds. [...] Even if the law had somehow survived a First Amendment challenge, it still wouldn't have prevented studios from engaging in discriminatory hiring practices. If this was really the state's concern, it would have stepped up its regulation of the entertainment industry rather than targeting a single site that was unsuccessfully sued by an actress, who speculated that IMDb's publication of her age was the reason she wasn't landing the roles she wanted.

Privacy

IRS Used Cellphone Location Data To Try To Find Suspects (wsj.com) 24

The Internal Revenue Service attempted to identify and track potential criminal suspects by purchasing access to a commercial database that records the locations of millions of American cellphones. The Wall Street Journal reports: The IRS Criminal Investigation unit, or IRS CI, had a subscription to access the data in 2017 and 2018, and the way it used the data was revealed last week in a briefing by IRS CI officials to Sen. Ron Wyden's (D., Ore.) office. The briefing was described to The Wall Street Journal by an aide to the senator. IRS CI officials told Mr. Wyden's office that their lawyers had given verbal approval for the use of the database, which is sold by a Virginia-based government contractor called Venntel Inc. Venntel obtains anonymized location data from the marketing industry and resells it to governments. IRS CI added that it let its Venntel subscription lapse after it failed to locate any targets of interest during the year it paid for the service, according to Mr. Wyden's aide.

Justin Cole, a spokesman for IRS CI, said it entered into a "limited contract with Venntel to test their services against the law enforcement requirements of our agency." IRS CI pursues the most serious and flagrant violations of tax law, and it said it used the Venntel database in "significant money-laundering, cyber, drug and organized-crime cases." "The tool provided information as to where a phone with an anonymized identifier (created by Venntel) is located at different times," Mr. Cole said. "For example, if we know that a suspicious ATM deposit was made at a specific time and at a specific location, and we have one or more other data points for the same scheme, we can cross reference the data from each event to see if one or more devices were present at multiple transactions. This would then allow us to identify the device used by a potential suspect and attempt to follow that particular movement."

IRS CI "attempted to use Venntel data to look for location records for mobile devices that were consistently present during multiple financial transactions related to an alleged crime," Mr. Cole said. He said that the tool could be used to track an individual criminal suspect once one was identified but said that it didn't do so because the tool produced no leads.
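The cross-referencing Mr. Cole describes amounts to a set intersection: find the anonymized device identifiers present near every one of several known transactions. A toy sketch, with invented event names and device IDs:

```python
# Intersect the sets of device IDs observed near each known event.
# All event names and identifiers below are made up for illustration.

def devices_at_all_events(events):
    """events: dict mapping event name -> set of device IDs seen nearby."""
    sets = list(events.values())
    common = set(sets[0])
    for s in sets[1:]:
        common &= s
    return common

events = {
    "atm_deposit_jan3": {"dev_a", "dev_b", "dev_c"},
    "atm_deposit_jan9": {"dev_b", "dev_d"},
    "atm_deposit_feb1": {"dev_b", "dev_c", "dev_e"},
}
print(devices_at_all_events(events))  # {'dev_b'}
```

Each additional event sharply narrows the candidate set, which is why investigators want "one or more other data points for the same scheme" before the technique yields a usable lead.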

Oracle

Oracle's BlueKai Tracks You Across the Web. That Data Spilled Online (techcrunch.com) 20

From a report: Have you ever wondered why online ads appear for things that you were just thinking about? There's no big conspiracy. Ad tech can be creepily accurate. Tech giant Oracle is one of a few companies in Silicon Valley that has near-perfected the art of tracking people across the internet. The company has spent a decade and billions of dollars buying startups to build its very own panopticon of users' web browsing data. One of those startups, BlueKai, which Oracle bought for a little over $400 million in 2014, is barely known outside marketing circles, but it amassed one of the largest banks of web tracking data outside of the federal government. BlueKai uses website cookies and other tracking tech to follow you around the web. By knowing which websites you visit and which emails you open, marketers can use this vast amount of tracking data to infer as much about you as possible -- your income, education, political views, and interests to name a few -- in order to target you with ads that should match your apparent tastes. If you click, the advertisers make money.

But for a time, that web tracking data was spilling out onto the open internet because a server was left unsecured and without a password, exposing billions of records for anyone to find. Security researcher Anurag Sen found the database and reported his finding to Oracle through an intermediary -- Roi Carthy, chief executive at cybersecurity firm Hudson Rock and former TechCrunch reporter.

Medicine

A Medical Device Maker Threatens iFixit Over Ventilator Repair Project (vice.com) 69

STERIS Corporation, a company that makes sterilization and other medical equipment, sent a letter to iFixit claiming that iFixit's online database of repair manuals for ventilators and medical equipment violates STERIS's copyrights. Motherboard reports: "It has come to my attention that you have been reproducing certain installation and maintenance manuals relating to our products, documentation which is protected by copyright law," the letter said. The letter then went on to tell [Kyle Wiens, CEO of iFixit] to remove all Steris copyrighted material from the iFixit website within 10 days of the letter. As Motherboard reported in March, major manufacturers of medical devices have long made it difficult for their devices to be repaired through third party repair professionals. Manufacturers have often lobbied against right to repair legislation and many medical devices are controlled by artificial "software locks" that allow only those with authorization to make modifications.

"I'm disappointed that Steris is resorting to legal threats to stop hospitals from having access to information about how to maintain critical sterilization equipment during a pandemic," Wiens told Motherboard in an email. "No manufacturer should be stopping hospitals from repairing their equipment," Wiens said. "The best way to ensure patient safety is to make sure that equipment is being maintained regularly using the manufacturer's recommended procedures. The only way to do that is if hospitals have up-to-date manuals." With regard to the letter sent by Steris, Wiens said iFixit has not removed any material from its website. "We explained to Steris that what we did is a lawful and protected fair use under the U.S. Copyright act," Wiens said.
"iFixit is protected by Section 512 of the Digital Millennium Copyright Act, which allows online platforms to host content contributed by users provided they comply with the Act's requirements, which iFixit does," a letter to Steris from the Electronic Frontier Foundation on behalf of iFixit said.
China

China Is Collecting DNA From Tens of Millions of Men and Boys, Using US Equipment (nytimes.com) 67

The police in China are collecting blood samples from men and boys from across the country to build a genetic map of its roughly 700 million males, giving the authorities a powerful new tool for their emerging high-tech surveillance state. From a report: They have swept across the country since late 2017 to collect enough samples to build a vast DNA database, according to a new study published on Wednesday by the Australian Strategic Policy Institute, a research organization, based on documents also reviewed by The New York Times. With this database, the authorities would be able to track down a man's male relatives using only that man's blood, saliva or other genetic material. An American company, Thermo Fisher, is helping: The Massachusetts company has sold testing kits to the Chinese police tailored to their specifications. American lawmakers have criticized Thermo Fisher for selling equipment to the Chinese authorities, but the company has defended its business.

The project is a major escalation of China's efforts to use genetics to control its people, which had been focused on tracking ethnic minorities and other, more targeted groups. It would add to a growing, sophisticated surveillance net that the police are deploying across the country, one that increasingly includes advanced cameras, facial recognition systems and artificial intelligence. The police say they need the database to catch criminals and that donors consent to handing over their DNA. Some officials within China, as well as human rights groups outside its borders, warn that a national DNA database could invade privacy and tempt officials to punish the relatives of dissidents and activists. Rights activists argue that the collection is being done without consent because citizens living in an authoritarian state have virtually no right to refuse.

Privacy

How Accurate Were Ray Kurzweil's Predictions for 2019? (lesswrong.com) 70

In 1999, Ray Kurzweil made predictions about what the world would be like 20 years in the future. Last month the community blog LessWrong took a look at how accurate Kurzweil's predictions turned out to be: This was a follow-up to a previous assessment of his predictions about 2009, which showed a mixed bag, roughly evenly divided between right and wrong, which I'd found pretty good for 10-year predictions... For the 2019 predictions, I divided them into 105 separate statements, did a call for volunteers [and] got 46 volunteers with valid email addresses, of which 34 returned their predictions... Of the 34 assessors, 24 went the whole hog and did all 105 predictions; on average, 91 predictions were assessed by each person, a total of 3078 individual assessments...

Kurzweil's predictions for 2019 were considerably worse than those for 2009, with more than half strongly wrong.

The assessors ultimately categorized just 12% of Kurzweil's predictions as true, with another 12% declared "weakly true," while another 10% were classed as "cannot decide." But 52% were declared "false" -- with another 15% also called "weakly false."

Among Kurzweil's false predictions for the year 2019:
  • "Phone" calls routinely include high-resolution three-dimensional images projected through the direct-eye displays and auditory lenses... Thus a person can be fooled as to whether or not another person is physically present or is being projected through electronic communication.
  • The all-enveloping tactile environment is now widely available and fully convincing.

"As you can see, Kurzweil suffered a lot from his VR predictions," explains the LessWrong blogpost. "This seems a perennial thing: Hollywood is always convinced that mass 3D is just around the corner; technologists are convinced that VR is imminent."

But the blog post also thanks Kurzweil, "who, unlike most prognosticators, had the guts and the courtesy to write down his predictions and give them a date. I strongly suspect that most people's 1999 predictions about 2019 would have been a lot worse."

And they also took special note of Kurzweil's two most accurate predictions. First, "The existence of the human underclass continues as an issue." And second:

"People attempt to protect their privacy with near-unbreakable encryption technologies, but privacy continues to be a major political and social issue with each individual's practically every move stored in a database somewhere."


Programming

GitHub, Android, Python, Go: More Software Adopts Race-Neutral Terminology (zdnet.com) 413

"The terms 'allowlist' and 'blocklist' describe their purpose, while the other words use metaphors to describe their purpose," reads a change description on the source code for Android -- from over a year ago. 9to5Mac calls it "a shortened version of Google's (internal-only) explanation" for terminology changes which are now becoming more widespread.

And on Thursday GitHub's CEO said the company was also "already working on" renaming the default branch of code repositories from "master" to a more neutral term like "main," reports ZDNet: GitHub lending its backing to this movement effectively ensures the term will be removed across millions of projects, and legitimizes the effort to clean up software terminology that started this month.

But, in reality, these efforts started years ago, in 2014, when the Drupal project first moved to replace "master/slave" terminology with "primary/replica." Drupal's move was followed by the Python programming language, Chromium (the open source browser project at the base of Chrome), Microsoft's Roslyn .NET compiler, and the PostgreSQL and Redis database systems... The PHPUnit library and the Curl file download utility have stated their intention to replace blacklist/whitelist with neutral alternatives. Similarly, the OpenZFS file storage manager has also replaced the master/slave terms it used for describing relations between storage environments with suitable replacements. Gabriel Csapo, a software engineer at LinkedIn, said on Twitter this week that he's also in the process of filing requests to update many of Microsoft's internal libraries.
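For a project undertaking one of these renames, the first step is usually an audit of where the old terms still appear. A minimal, self-contained sketch of such an audit with `grep` (the file names and contents here are illustrative assumptions, not taken from any of the projects named above):

```shell
# Create a throwaway tree containing one old-style and one new-style name.
tmp=$(mktemp -d)
printf 'hosts_whitelist = []\n' > "$tmp/settings.py"
printf 'hosts_allowlist = []\n' > "$tmp/new_settings.py"

# -r: recurse, -l: list matching files only, -i: case-insensitive,
# -E: extended regex. Lists files that still use the old terms.
grep -rliE 'blacklist|whitelist' "$tmp"
```

Only the file containing `hosts_whitelist` is reported; real projects typically scope such a search to source and documentation paths and review each hit by hand, since some occurrences (e.g., in third-party API names) cannot be changed unilaterally.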

A recent change description for the Go programming language says: "There's been plenty of discussion on the usage of these terms in tech. I'm not trying to have yet another debate. It's clear that there are people who are hurt by them and who are made to feel unwelcome by their use due not to technical reasons but to their historical and social context. That's simply enough reason to replace them.

"Anyway, allowlist and blocklist are more self-explanatory than whitelist and blacklist, so this change has negative cost."

That change was merged on June 9th -- but 9to5Mac reports it's just one of many places these changes are happening. "The Chrome team is beginning to eliminate even subtle forms of racism by moving away from terms like 'blacklist' and 'whitelist.' Google's Android team is now implementing a similar effort to replace the words 'blacklist' and 'whitelist.'" And ZDNet reports more open source projects are working on changing the name of their default Git repo from "master" to alternatives like main, default, primary, root, or another, including the OpenSSL encryption software library, automation software Ansible, Microsoft's PowerShell scripting language, the P5.js JavaScript library, and many others.
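The local half of the "master"-to-"main" rename these projects are adopting is a single branch rename; a minimal sketch in a throwaway repository (hosted services like GitHub or GitLab additionally require switching the default branch in the project's settings, and an existing remote would need something like `git push -u origin main` followed by deletion of the old branch):

```shell
# Create a disposable repository with one commit so a branch exists.
repo=$(mktemp -d)
cd "$repo"
git init -q
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m 'initial commit'

# Force-rename the current branch to "main" (works whether the
# repository's original default was "master" or already "main").
git branch -M main
git branch --show-current   # prints: main
```

The `-M` (force) form is the one GitHub's own setup instructions use, since it succeeds regardless of what the branch was previously called.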
