AT&T

Hacker Selling Private Data Allegedly From 70 Million AT&T Customers (restoreprivacy.com) 12

An anonymous reader quotes a report from Restore Privacy: A well-known threat actor with a long list of previous breaches is selling private data that was allegedly collected from 70 million AT&T customers. We analyzed the data and found it to include social security numbers, dates of birth, and other private information. The hacker is asking $1 million for the entire database (a direct sale) and has provided RestorePrivacy with exclusive information for this report. The threat actor goes by the name of ShinyHunters and was also behind previous exploits affecting Microsoft, Tokopedia, Pixlr, Mashable, Minted, and more. The hacker posted the leak on an underground hacking forum earlier today, along with a sample of the data that we analyzed. AT&T initially denied the breach in a statement to RestorePrivacy. The hacker responded by saying, "they will keep denying until I leak everything." "Based on our investigation yesterday, the information that appeared in an internet chat room does not appear to have come from our systems," AT&T said in a statement. When pressed harder and asked specifically if there was no AT&T breach, the company said: "Based on our investigation, no, we don't believe this was a breach of AT&T systems."

"Given this information did not come from us, we can't speculate on where it came from or whether it is valid," they added. The hacker says they're willing to reach "an agreement" with AT&T to remove the data from sale.

The possible breach of AT&T follows a T-Mobile hack from earlier this week, which exposed the records of 40 million former and prospective customers.
Apple

We Built a CSAM System Like Apple's - the Tech Is Dangerous (washingtonpost.com) 186

An anonymous reader writes: Earlier this month, Apple unveiled a system that would scan iPhone and iPad photos for child sexual abuse material (CSAM). The announcement sparked a civil liberties firestorm, and Apple's own employees have been expressing alarm. The company insists reservations about the system are rooted in "misunderstandings." We disagree.

We wrote the only peer-reviewed publication on how to build a system like Apple's -- and we concluded the technology was dangerous. We're not concerned because we misunderstand how Apple's system works. The problem is, we understand exactly how it works.

Our research project began two years ago, as an experimental system to identify CSAM in end-to-end-encrypted online services. As security researchers, we know the value of end-to-end encryption, which protects data from third-party access. But we're also horrified that CSAM is proliferating on encrypted platforms. And we worry online services are reluctant to use encryption without additional tools to combat CSAM.

We sought to explore a possible middle ground, where online services could identify harmful content while otherwise preserving end-to-end encryption. The concept was straightforward: If someone shared material that matched a database of known harmful content, the service would be alerted. If a person shared innocent content, the service would learn nothing. People couldn't read the database or learn whether content matched, since that information could reveal law enforcement methods and help criminals evade detection.
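The matching scheme described above can be sketched in a deliberately simplified form. This toy version uses exact SHA-256 hashes and a database visible to the code, both illustrative assumptions: the actual designs use perceptual matching and cryptographic private set membership, so that clients cannot read the database and the service learns nothing about non-matching content.

```python
import hashlib

# Illustrative stand-in for a database of known harmful content.
# In the real designs this database is hidden from users, and matching
# uses perceptual hashes rather than exact SHA-256 digests.
known_hashes = {hashlib.sha256(b"known-bad-content").hexdigest()}

def check_upload(content: bytes) -> bool:
    """True -> the service is alerted; False -> the service learns nothing."""
    return hashlib.sha256(content).hexdigest() in known_hashes

assert check_upload(b"known-bad-content") is True
assert check_upload(b"family vacation photo") is False
```

Note that nothing in the sketch ties `known_hashes` to any particular category of content, which is exactly the repurposing concern the authors go on to describe.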

But we encountered a glaring problem.

Our system could be easily repurposed for surveillance and censorship. The design wasn't restricted to a specific category of content; a service could simply swap in any content-matching database, and the person using that service would be none the wiser.
About the authors of this report: Jonathan Mayer is an assistant professor of computer science and public affairs at Princeton University. He previously served as technology counsel to then-Sen. Kamala D. Harris and as chief technologist of the Federal Communications Commission Enforcement Bureau. Anunay Kulshrestha is a graduate researcher at the Princeton University Center for Information Technology Policy and a PhD candidate in the department of computer science.
Privacy

'Apple's Device Surveillance Plan Is a Threat To User Privacy -- And Press Freedom' (freedom.press) 213

The Freedom of the Press Foundation is calling Apple's plan to scan photos on user devices to detect known child sexual abuse material (CSAM) a "dangerous precedent" that "could be misused when Apple and its partners come under outside pressure from governments or other powerful actors." They join the EFF, whistleblower Edward Snowden, and many other privacy and human rights advocates in condemning the move. Advocacy Director Parker Higgins writes: Very broadly speaking, the privacy invasions come from situations where "false positives" are generated -- that is to say, an image or a device or a user is flagged even though there are no sexual abuse images present. These kinds of false positives could happen if the matching database has been tampered with or expanded to include images that do not depict child abuse, or if an adversary could trick Apple's algorithm into erroneously matching an existing image. (Apple, for its part, has said that an accidental false positive -- where an innocent image is flagged as child abuse material for no reason -- is extremely unlikely, which is probably true.) The false positive problem most directly touches on press freedom issues when considering that first category, with adversaries that can change the contents of the database that Apple devices are checking files against. An organization that could add leaked copies of its internal records, for example, could find devices that held that data -- including, potentially, whistleblowers and journalists who worked on a given story. This could also reveal the extent of a leak if it is not yet known. A government that could include images critical of its policies or officials could find dissidents who are exchanging those files.
[...]
Journalists, in particular, have increasingly relied on the strong privacy protections that Apple has provided even when other large tech companies have not. Apple famously refused to redesign its software to open the phone of an alleged terrorist -- not because they wanted to shield the content on a criminal's phone, but because they worried about the precedent it would set for other people who rely on Apple's technology for protection. How is this situation any different? No backdoor for law enforcement will be safe enough to keep bad actors from continuing to push it open just a little bit further. The privacy risks from this system are too extreme to tolerate. Apple may have had noble intentions with this announced system, but good intentions are not enough to save a plan that is rotten at its core.

Privacy

Afghans Scramble To Delete Digital History, Evade Biometrics (reuters.com) 203

Thousands of Afghans struggling to ensure the physical safety of their families after the Taliban took control of the country have an additional worry: that biometric databases and their own digital history can be used to track and target them. From a report: U.N. Secretary-General Antonio Guterres has warned of "chilling" curbs on human rights and violations against women and girls, and Amnesty International on Monday said thousands of Afghans - including academics, journalists and activists - were "at serious risk of Taliban reprisals." After years of a push to digitise databases in the country, and introduce digital identity cards and biometrics for voting, activists warn these technologies can be used to target and attack vulnerable groups. "We understand that the Taliban is now likely to have access to various biometric databases and equipment in Afghanistan," the Human Rights First group wrote on Twitter on Monday.

"This technology is likely to include access to a database with fingerprints and iris scans, and include facial recognition technology," the group added. The U.S.-based advocacy group quickly published a Farsi-language version of its guide on how to delete digital history - that it had produced last year for activists in Hong Kong - and also put together a manual on how to evade biometrics. Tips to bypass facial recognition include looking down, wearing things to obscure facial features, or applying many layers of makeup, the guide said, although fingerprint and iris scans were difficult to bypass.

Security

Secret Terrorist Watchlist With 2 Million Records Exposed Online (bleepingcomputer.com) 87

A secret terrorist watchlist with 1.9 million records, including classified "no-fly" records, was exposed on the internet. The list was left accessible on an Elasticsearch cluster that had no password on it. BleepingComputer reports: In July this year, Security Discovery researcher Bob Diachenko came across a plethora of JSON records in an exposed Elasticsearch cluster that piqued his interest. The 1.9 million-strong recordset contained sensitive information on people, including their names, country citizenship, gender, date of birth, passport details, and no-fly status. The exposed server was indexed by search engines Censys and ZoomEye, indicating Diachenko may not have been the only person to come across the list.
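"No password on it" is worth unpacking: an unsecured Elasticsearch cluster answers plain HTTP requests, so anyone who finds its address can run arbitrary searches. The sketch below only constructs such a request without sending it; the address is a documentation-range placeholder, and you should never probe systems you are not authorized to test.

```python
import json
from urllib.request import Request

# Hypothetical address of an exposed cluster (203.0.113.0/24 is a
# reserved documentation range; no real host lives here).
host = "http://203.0.113.10:9200"

# A standard Elasticsearch search body; with no password configured,
# the cluster would accept this from any anonymous client.
query = {"query": {"match_all": {}}, "size": 10}

req = Request(
    url=f"{host}/_search",
    data=json.dumps(query).encode(),
    headers={"Content-Type": "application/json"},
    method="POST",
)

# Note what is missing: no Authorization header, no API key, no client
# certificate. The only "secret" protecting the data is the address,
# which scanners like Censys and ZoomEye index automatically.
assert "Authorization" not in req.headers
assert req.get_method() == "POST"
```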

The researcher discovered the exposed database on July 19, interestingly on a server with a Bahrain IP address rather than a US one, and reported the data leak to the U.S. Department of Homeland Security (DHS) that same day. "I discovered the exposed data on the same day and reported it to the DHS." "The exposed server was taken down about three weeks later, on August 9, 2021." "It's not clear why it took so long, and I don't know for sure whether any unauthorized parties accessed it," writes Diachenko in his report. The researcher considers this data leak serious, given that watchlists can include people who are suspected of illicit activity but not necessarily charged with any crime. "In the wrong hands, this list could be used to oppress, harass, or persecute people on the list and their families." "It could cause any number of personal and professional problems for innocent people whose names are included in the list," says the researcher.

Electronic Frontier Foundation

Edward Snowden and EFF Slam Apple's Plans To Scan Messages and iCloud Images (macrumors.com) 55

Apple's plans to scan users' iCloud Photos library against a database of child sexual abuse material (CSAM) to look for matches, and to scan children's messages for explicit content, have come under fire from privacy whistleblower Edward Snowden and the Electronic Frontier Foundation (EFF). MacRumors reports: In a series of tweets, the prominent privacy campaigner and whistleblower Edward Snowden highlighted concerns that Apple is rolling out a form of "mass surveillance to the entire world" and setting a precedent that could allow the company to scan for any other arbitrary content in the future. Snowden also noted that Apple has historically been an industry leader in terms of digital privacy, and even refused to unlock an iPhone owned by Syed Farook, one of the shooters in the December 2015 attacks in San Bernardino, California, despite being ordered to do so by the FBI and a federal judge. Apple opposed the order, noting that it would set a "dangerous precedent."

The EFF, an eminent international non-profit digital rights group, has issued an extensive condemnation of Apple's move to scan users' iCloud libraries and messages, saying that it is extremely "disappointed" that a "champion of end-to-end encryption" is undertaking a "shocking about-face for users who have relied on the company's leadership in privacy and security." The EFF highlighted how various governments around the world have passed laws that demand surveillance and censorship of content on various platforms, including messaging apps, and that Apple's move to scan messages and "iCloud Photos" could be legally required to encompass additional materials or easily be widened. "Make no mistake: this is a decrease in privacy for all "iCloud Photos" users, not an improvement," the EFF cautioned.

Transportation

Infrastructure Bill Could Enable Government To Track Drivers' Travel Data (theintercept.com) 238

Presto Vivace shares a report from The Intercept: The Senate's $1.2 trillion bipartisan infrastructure bill proposes a national test program that would allow the government to collect drivers' data in order to charge them per-mile travel fees. The new revenue would help finance the Highway Trust Fund, which currently depends mostly on fuel taxes to support roads and mass transit across the country. Under the proposal, the government would collect information about the miles that drivers travel from smartphone apps, other on-board devices, automakers, insurance companies, gas stations, or other means. For now, the initiative would only be a test effort -- the government would solicit volunteers who drive commercial and passenger vehicles -- but the idea still raises concerns about the government tracking people's private data.

The bill would establish an advisory board to guide the program that would include officials representing state transportation departments and the trucking industry as well as data security and consumer privacy experts. As the four-year pilot initiative goes on, the Transportation and Treasury departments would also have to keep Congress informed of how they maintain volunteers' privacy and how the per-mile fee idea could affect low-income drivers. Still, [Sean Vitka, policy counsel at Demand Progress] said the concept could put Americans' private data at risk. "We already know the government is unable to keep data like this secure, which is another reason why the government maintaining a giant database of travel information about people in the United States is a bad idea."
"If you think this is a bad idea, NOW would be a good time to let your Senators and representative know," says Slashdot reader Presto Vivace.
Security

Hackers Shut Down System For Booking COVID-19 Shots in Italy's Lazio Region (reuters.com) 33

Hackers have attacked and shut down the IT systems of the company that manages COVID-19 vaccination appointments for the Lazio region surrounding Rome, the regional government said on Sunday. From a report: "A powerful hacker attack on the region's CED (database) is under way," the region said in a Facebook posting. It said all systems had been deactivated, including those of the region's health portal and vaccination network, and warned the inoculation programme could suffer a delay. "It is a very powerful hacker attack, very serious... everything is out. The whole regional CED is under attack," Lazio region's health manager Alessio D'Amato said.


Programming

Are Python Libraries Riddled With Security Holes? (techradar.com) 68

"Almost half of the packages in the official Python Package Index (PyPI) repository have at least one security issue," reports TechRadar, citing a new analysis by Finnish researchers, which even found five packages with more than a thousand issues each... The researchers used static analysis to uncover the security issues in the open source packages, which they reason end up tainting software that use them. In total the research scanned through 197,000 packages and found more than 749,000 security issues in all... Explaining their methodology the researchers note that despite the inherent limitations of static analysis, they still found at least one security issue in about 46% of the packages in the repository. The paper reveals that of the issues identified, the maximum (442,373) are of low severity, while 227,426 are moderate severity issues. However, 11% of the flagged PyPI packages have 80,065 high severity issues.
The Register supplies some context: Other surveys of this sort have come to similar conclusions about software package ecosystems. Last September, a group of IEEE researchers analyzed 6,673 actively used Node.js apps and found about 68 per cent depended on at least one vulnerable package... The situation is similar with package registries like Maven (for Java), NuGet (for .NET), RubyGems (for Ruby), CPAN (for Perl), and CRAN (for R). In a phone interview, Ee W. Durbin III, director of infrastructure at the Python Software Foundation, told The Register, "Things like this tend not to be very surprising. One of the most overlooked or misunderstood parts of PyPI as a service is that it's intended to be freely accessible, freely available, and freely usable. Because of that we don't make any guarantees about the things that are available there..."

Durbin welcomed the work of the Finnish researchers because it makes people more aware of issues that are common among open package management systems and because it benefits the overall health of the Python community. "It's not something we ignore but it's also not something we historically have had the resources to take on," said Durbin. That may be less of an issue going forward. According to Durbin, there's been significantly more interest over the past year in supply chain security and what companies can do to improve the situation. For the Python community, that's translated into an effort to create a package vulnerability reporting API and the Python Advisory Database, a community-run repository of PyPI security advisories that's linked to the Google-spearheaded Open Vulnerability Database.

China

Early Virus Sequences 'Mysteriously' Deleted Have Been Not-So-Mysteriously Undeleted (nytimes.com) 128

"A batch of early coronavirus data that went missing for a year has emerged from hiding," reports the New York Times. (Jesse Bloom, a virologist at the Fred Hutchinson Cancer Center in Seattle, had found copies of 13 of the deleted sequences on Google Cloud.)

Though their deletion raised some suspicions, "An odd explanation has emerged, stemming from an editorial oversight by a scientific journal," reports the Times. "And the sequences have been uploaded into a different database, overseen by the Chinese government."

The Times also notes that the researchers had already posted their early findings online in March 2020: That month, they also uploaded the sequences to an online database called the Sequence Read Archive, which is maintained by the National Institutes of Health, and submitted a paper describing their results to a scientific journal called Small. The paper was published in June 2020... [A] spokeswoman for the N.I.H. said that the authors of the study had requested in June 2020 that the sequences be withdrawn from the database. The authors informed the agency that the sequences were being updated and would be added to a different database... On July 5, more than a year after the researchers withdrew the sequences from the Sequence Read Archive and two weeks after Dr. Bloom's report was published online, the sequences were quietly uploaded to a database maintained by China National Center for Bioinformation by Ben Hu, a researcher at Wuhan University and a co-author of the Small paper.

On July 21, the disappearance of the sequences was brought up during a news conference in Beijing... According to a translation of the news conference by a journalist at the state-controlled Xinhua News Agency, the vice minister of China's National Health Commission, Dr. Zeng Yixin, said that the trouble arose when editors at Small deleted a paragraph in which the scientists described the sequences in the Sequence Read Archive. "Therefore, the researchers thought it was no longer necessary to store the data in the N.C.B.I. database," Dr. Zeng said, referring to the Sequence Read Archive, which is run by the N.I.H.

An editor at Small, which specializes in science at the micro and nano scale and is based in Germany, confirmed his account. "The data availability statement was mistakenly deleted," the editor, Plamena Dogandzhiyski, wrote in an email. "We will issue a correction very shortly, which will clarify the error and include a link to the depository where the data is now hosted." The journal posted a formal correction to that effect on Thursday.

While the researchers' first report had described their sequences as coming from patients "early in the epidemic," thus provoking intense curiosity, the sequences were, as promised, updated to include a more specific date after they were published in the database, according to the Times. "They were taken from Renmin Hospital of Wuhan University on January 30 — almost two months after the earliest reports of Covid-19 in China."
Privacy

Estonia Says a Hacker Downloaded 286,000 ID Photos From Government Database (therecord.media) 11

Estonian officials said that last week they arrested a local suspect who used a vulnerability to gain access to a government database and download government ID photos of 286,438 Estonians. From a report: The attack took place earlier this month, and the suspect was arrested on July 23, Estonian police said in a press conference yesterday, July 28. The identity of the attacker was not disclosed, and he was only identified as a Tallinn-based male. Officials said the suspect discovered a vulnerability in a database managed by the Information System Authority (RIA), the Estonian government agency which manages the country's IT systems.
Facebook

Facebook, Twitter and Other Tech Giants To Target Attacker Manifestos, Far-right Militias in Database (reuters.com) 197

A counterterrorism organization formed by some of the biggest U.S. tech companies including Facebook and Microsoft is significantly expanding the types of extremist content shared between firms in a key database, aiming to crack down on material from white supremacists and far-right militias, the group told Reuters. From the report: Until now, the Global Internet Forum to Counter Terrorism's (GIFCT) database has focused on videos and images from terrorist groups on a United Nations list and so has largely consisted of content from Islamist extremist organizations such as Islamic State, al Qaeda and the Taliban. Over the next few months, the group will add attacker manifestos -- often shared by sympathizers after white supremacist violence -- and other publications and links flagged by U.N. initiative Tech Against Terrorism. It will use lists from intelligence-sharing group Five Eyes, adding URLs and PDFs from more groups, including the Proud Boys, the Three Percenters and neo-Nazis. The firms, which include Twitter and Alphabet's YouTube, share "hashes," unique numerical representations of original pieces of content that have been removed from their services. Other platforms use these to identify the same content on their own sites in order to review or remove it.
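The hash-sharing mechanism in the excerpt can be sketched as follows. The sample content is hypothetical, and SHA-256 stands in for the perceptual hashes (such as PDQ for images) that real hash-sharing deployments typically use so that resized or re-encoded copies still match.

```python
import hashlib

def fingerprint(content: bytes) -> str:
    """A hash is shared between firms instead of the content itself."""
    return hashlib.sha256(content).hexdigest()

# Platform A removes a file and contributes only its fingerprint.
shared_database = {fingerprint(b"removed-extremist-manifesto")}

# Platform B compares its own uploads against the shared fingerprints;
# the original file never has to change hands, and each platform still
# decides for itself whether to review or remove a match.
def flag_for_review(upload: bytes) -> bool:
    return fingerprint(upload) in shared_database

assert flag_for_review(b"removed-extremist-manifesto") is True
assert flag_for_review(b"unrelated upload") is False
```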
United Kingdom

Hole Blasted In Guntrader: UK Firearms Sales Website's CRM Database Breached, 111K Users' Info Spilled Online (theregister.com) 63

Criminals have hacked into a Gumtree-style website used for buying and selling firearms, making off with a 111,000-entry database containing partial information from a CRM product used by gun shops across the UK. The Register reports: The Guntrader breach earlier this week saw the theft of a SQL database powering both the Guntrader.uk buy-and-sell website and its electronic gun shop register product, comprising about 111,000 users and dating between 2016 and 17 July this year. The database contains names, mobile phone numbers, email addresses, user geolocation data, and more including bcrypt-hashed passwords. It is a severe breach of privacy not only for Guntrader but for its users: members of the UK's licensed firearms community. Guntrader spokesman Simon Baseley told The Register that Guntrader.uk had emailed all the users affected by the breach on July 21 and issued a further update yesterday.

Guntrader is roughly similar to Gumtree: users post ads along with their contact details on the website so potential purchasers can get in touch. Gun shops (known in the UK as "registered firearms dealers" or RFDs) can also use Guntrader's integrated gun register product, which is advertised as offering "end-to-end encryption" and "daily backups", making it (so Guntrader claims) "the most safe and secure gun register system on today's market." [British firearms laws say every transfer of a firearm (sale, drop-off for repair, gift, loan, and so on) must be recorded, with the vast majority of these also being mandatory to report to the police when they happen...]

The categories of data in the stolen database are: Latitude and longitude data; First name and last name; Police force that issued an RFD's certificate; Phone numbers; Fax numbers; bcrypt-hashed passwords; Postcode; Postal addresses; and Users' IP addresses. Logs of payments were also included, with Coalfire's Barratt explaining that while no credit card numbers were included, something that looks like a SHA-256 hashed string was included in the payment data tables. Other payment information was limited to prices for rifles and shotguns advertised through the site.
The Register recommends you check if your data is included in the hack by visiting Have I Been Pwned. If you are affected and you used the same password on Guntrader that you used on other websites, you should change it as soon as possible.
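Have I Been Pwned's email lookup is a simple web form, but its companion password-check API is worth understanding if you follow the password advice above: it uses k-anonymity, so neither your password nor its full hash ever leaves your machine. A sketch of the client-side computation (only the URL of the network call is shown; no request is made here):

```python
import hashlib

def hibp_range_split(password: str) -> tuple[str, str]:
    """Split the uppercase SHA-1 hex digest into the 5-character prefix
    that is sent to the API and the 35-character suffix kept locally."""
    digest = hashlib.sha1(password.encode()).hexdigest().upper()
    return digest[:5], digest[5:]

prefix, suffix = hibp_range_split("password")

# The client would GET https://api.pwnedpasswords.com/range/<prefix> and
# scan the returned "<suffix>:<count>" lines for its own suffix; the
# server only ever sees the 5-character prefix, which matches a large
# bucket of unrelated hashes.
assert (prefix, len(suffix)) == ("5BAA6", 35)
```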
Google

Google Turns AlphaFold Loose On the Entire Human Genome (arstechnica.com) 20

An anonymous reader quotes a report from Ars Technica: Just one week after Google's DeepMind AI group finally described its biology efforts in detail, the company is releasing a paper that explains how it analyzed nearly every protein encoded in the human genome and predicted its likely three-dimensional structure -- a structure that can be critical for understanding disease and designing treatments. In the very near future, all of these structures will be released under a Creative Commons license via the European Bioinformatics Institute, which already hosts a major database of protein structures. In a press conference associated with the paper's release, DeepMind's Demis Hassabis made clear that the company isn't stopping there. In addition to the work described in the paper, the company will release structural predictions for the genomes of 20 major research organisms, from yeast to fruit flies to mice. In total, the database launch will include roughly 350,000 protein structures.
[...]
At some point in the near future (possibly by the time you read this), all this data will be available on a dedicated website hosted by the European Bioinformatics Institute, a European Union-funded organization that describes itself in part as follows: "We make the world's public biological data freely available to the scientific community via a range of services and tools." The AlphaFold data will be no exception; once the above link is live, anyone can use it to download information on the human protein of their choice. Or, as mentioned above, the mouse, yeast, or fruit fly version. The 20 organisms that will see their data released are also just a start. DeepMind's Demis Hassabis said that over the next few months, the team will target every gene sequence available in DNA databases. By the time this work is done, over 100 million proteins should have predicted structures. Hassabis wrapped up his part of the announcement by saying, "We think this is the most significant contribution AI has made to science to date." It would be difficult to argue otherwise.
Further reading: Google details its protein-folding software, academics offer an alternative (Ars Technica)
Bug

MITRE Updates List of Top 25 Most Dangerous Software Bugs (bleepingcomputer.com) 16

An anonymous reader quotes a report from BleepingComputer: MITRE has shared this year's top 25 list of most common and dangerous weaknesses plaguing software throughout the previous two years. MITRE developed the top 25 list using Common Vulnerabilities and Exposures (CVE) data from 2019 and 2020 obtained from the National Vulnerability Database (NVD) (roughly 27,000 CVEs). "A scoring formula is used to calculate a ranked order of weaknesses that combines the frequency that a CWE is the root cause of a vulnerability with the projected severity of its exploitation," MITRE explained. "This approach provides an objective look at what vulnerabilities are currently seen in the real world, creates a foundation of analytical rigor built on publicly reported vulnerabilities instead of subjective surveys and opinions, and makes the process easily repeatable."
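MITRE's description of the scoring formula can be turned into a small sketch. The min-max normalisation and the frequency-times-severity product follow MITRE's published methodology, but the sample counts and average CVSS scores below are invented for illustration.

```python
def cwe_scores(stats: dict[str, tuple[int, float]]) -> dict[str, float]:
    """stats maps a CWE id -> (vulnerability count, average CVSS severity).

    Each factor is min-max normalised across all CWEs and the two are
    multiplied, so a weakness must be BOTH frequent and severe to rank.
    """
    counts = [c for c, _ in stats.values()]
    sevs = [s for _, s in stats.values()]

    def norm(x, lo, hi):
        return (x - lo) / (hi - lo) if hi > lo else 0.0

    return {
        cwe: round(norm(c, min(counts), max(counts))
                   * norm(s, min(sevs), max(sevs)) * 100, 2)
        for cwe, (c, s) in stats.items()
    }

# Illustrative numbers only, not real NVD data. Note the side effect of
# the product: the most frequent CWE scores 0.0 here because it is also
# the least severe in this sample.
sample = {"CWE-787": (2000, 8.5), "CWE-79": (3000, 5.8), "CWE-20": (1000, 7.0)}
scores = cwe_scores(sample)
assert scores["CWE-787"] == 50.0
assert scores["CWE-79"] == 0.0
```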

MITRE's 2021 top 25 bugs are dangerous because they are usually easy to discover, have a high impact, and are prevalent in software released during the last two years. They can also be abused by attackers to potentially take complete control of vulnerable systems, steal targets' sensitive data, or trigger a denial-of-service (DoS) following successful exploitation. The list [here] provides insight to the community at large into the most critical and current software security weaknesses.

AI

AI Firm DeepMind Puts Database of the Building Blocks of Life Online (theguardian.com) 19

Last year the artificial intelligence group DeepMind cracked a mystery that has flummoxed scientists for decades: stripping bare the structure of proteins, the building blocks of life. Now, having amassed a database of nearly all human protein structures, the company is making the resource available online free for researchers to use. From a report: The key to understanding our basic biological machinery is its architecture. The chains of amino acids that comprise proteins twist and turn to make the most confounding of 3D shapes. It is this elaborate form that explains protein function, from enzymes that are crucial to metabolism to antibodies that fight infectious attacks. Despite years of onerous and expensive lab work that began in the 1950s, scientists have only decoded the structure of a fraction of human proteins.

DeepMind's AI program, AlphaFold, has predicted the structure of nearly all 20,000 proteins expressed by humans. In an independent benchmark test that compared predictions to known structures, the system was able to predict the shape of a protein to a good standard 95% of the time. DeepMind, which has partnered with the European Molecular Biology Laboratory's European Bioinformatics Institute (EMBL-EBI), hopes the database will help researchers to analyse how life works at an atomic scale by unpacking the apparatus that drives some diseases, make strides in the field of personalised medicine, create more nutritious crops and develop "green enzymes" that can break down plastic.

Privacy

Man Behind LinkedIn Scraping Said He Grabbed 700 Million Profiles 'For Fun' (9to5mac.com) 27

The man behind last month's scraping of LinkedIn data, which exposed the location, phone numbers, and inferred salaries of 700 million users, says that he did it "for fun" -- though he is also selling the data. 9to5Mac reports: BBC News spoke with the man who took the data, under the name Tom Liner: "How would you feel if all your information was catalogued by a hacker and put into a monster spreadsheet with millions of entries, to be sold online to the highest paying cyber-criminal? That's what a hacker calling himself Tom Liner did last month 'for fun' when he compiled a database of 700 million LinkedIn users from all over the world, which he is selling for around $5,000 [...]. In the case of Mr Liner, his latest exploit was announced at 08:57 BST in a post on a notorious hacking forum [...] 'Hi, I have 700 million 2021 LinkedIn records,' he wrote. Included in the post was a link to a sample of a million records and an invite for other hackers to contact him privately and make him offers for his database."

Liner says he was also behind the scraping of 533 million Facebook profiles back in April (you can check whether your data was grabbed): "Tom told me he created the 700 million LinkedIn database using 'almost the exact same technique' that he used to create the Facebook list. He said: 'It took me several months to do. It was very complex. I had to hack the API of LinkedIn. If you do too many requests for user data in one time then the system will permanently ban you.'"

Databases

The Case Against SQL (scattered-thoughts.net) 297

Long-time Slashdot reader RoccamOccam shares "an interesting take on SQL and its issues" from Jamie Brandon (who describes himself as an independent researcher who's built database engines, query planners, compilers, developer tools and interfaces).

Its title? "Against SQL." The relational model is great... But SQL is the only widely-used implementation of the relational model, and it is: Inexpressive, Incompressible, Non-porous. This isn't just a matter of some constant programmer overhead, like SQL queries taking 20% longer to write. The fact that these issues exist in our dominant model for accessing data has dramatic downstream effects for the entire industry:

- Complexity is a massive drag on quality and innovation in runtime and tooling
- The need for an application layer with hand-written coordination between database and client renders useless most of the best features of relational databases

The core message that I want people to take away is that there is potentially a huge amount of value to be unlocked by replacing SQL, and more generally in rethinking where and how we draw the lines between databases, query languages and programming languages...

I'd like to finish with this quote from Michael Stonebraker, one of the most prominent figures in the history of relational databases:

"My biggest complaint about System R is that the team never stopped to clean up SQL... All the annoying features of the language have endured to this day. SQL will be the COBOL of 2020..."

It's been interesting to follow the discussion on Twitter, where the post's author tweeted screenshots of actual SQL code to illustrate various shortcomings. But he also notes that "The SQL spec (part 2 = 1732 pages) is more than twice the length of the Javascript 2021 spec (879 pages), almost matches the C++ 2020 spec (1853 pages) and contains 411 occurrences of 'implementation-defined', occurrences which include type inference and error propagation."

His Twitter feed also includes a supportive retweet from Rust creator Graydon Hoare, and one from a Tetrane developer who says "The Rust of SQL remains to be invented. I would like to see it come."
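One concrete flavor of the "incompressible" complaint is that standard SQL gives no way to bind a scalar expression to a name within the same query level, so the expression must be repeated verbatim. A small illustrative example of our own (not one of the post's screenshots), run through Python's built-in sqlite3 module:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (price REAL, qty INTEGER)")
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [(10.0, 3), (5.0, 1), (2.0, 10)])

# The expression price * qty must be spelled out twice: standard SQL
# offers no way to name it once and reuse it in the WHERE clause.
repeated = conn.execute(
    "SELECT price * qty FROM orders WHERE price * qty > 10"
).fetchall()

# The usual workaround wraps the query in a whole subquery (here, a
# CTE) just to introduce the name "total".
via_cte = conn.execute(
    "WITH t AS (SELECT price * qty AS total FROM orders) "
    "SELECT total FROM t WHERE total > 10"
).fetchall()

print(sorted(repeated))  # [(20.0,), (30.0,)] -- same rows either way
```

The toy case is harmless; the post's argument is that once expressions grow to dozens of terms, this forced duplication and nesting compounds across every layer of a real query.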

Government

EPA Approved Toxic Chemicals For Fracking a Decade Ago, New Files Show (nytimes.com) 137

An anonymous reader quotes a report from The New York Times: For much of the past decade, oil companies engaged in drilling and fracking have been allowed to pump into the ground chemicals that, over time, can break down into toxic substances known as PFAS -- a class of long-lasting compounds known to pose a threat to people and wildlife -- according to internal documents from the Environmental Protection Agency. The E.P.A. in 2011 approved the use of these chemicals, used to ease the flow of oil from the ground, despite the agency's own grave concerns about their toxicity, according to the documents, which were reviewed by The New York Times. The E.P.A.'s approval of the three chemicals wasn't previously publicly known. The records, obtained under the Freedom of Information Act by a nonprofit group, Physicians for Social Responsibility, are among the first public indications that PFAS, long-lasting compounds also known as "forever chemicals," may be present in the fluids used during drilling and hydraulic fracturing, or fracking.

In a consent order issued for the three chemicals on Oct. 26, 2011, E.P.A. scientists pointed to preliminary evidence that, under some conditions, the chemicals could "degrade in the environment" into substances akin to PFOA, a kind of PFAS chemical, and could "persist in the environment" and "be toxic to people, wild mammals, and birds." The E.P.A. scientists recommended additional testing. Those tests were not mandatory and there is no indication that they were carried out. "The E.P.A. identified serious health risks associated with chemicals proposed for use in oil and gas extraction, and yet allowed those chemicals to be used commercially with very lax regulation," said Dusty Horwitt, researcher at Physicians for Social Responsibility. [...] There is no public data that details where the E.P.A.-approved chemicals have been used. But the FracFocus database, which tracks chemicals used in fracking, shows that about 120 companies used PFAS -- or chemicals that can break down into PFAS; the most common of which was "nonionic fluorosurfactant" and various misspellings -- in more than 1,000 wells between 2012 and 2020 in Texas, Arkansas, Louisiana, Oklahoma, New Mexico, and Wyoming. Because not all states require companies to report chemicals to the database, the number of wells could be higher. Nine of those wells were in Carter County, Okla., within the boundaries of Chickasaw Nation. "This isn't something I was aware of," said Tony Choate, a Chickasaw Nation spokesman. [...] The findings underscore how, for decades, the nation's laws governing various chemicals have allowed thousands of substances to go into commercial use with relatively little testing. The E.P.A.'s assessment was carried out under the 1976 Toxic Substances Control Act, which authorizes the agency to review and regulate new chemicals before they are manufactured or distributed.
"[T]he Toxic Substances Control Act grandfathered in thousands of chemicals already in commercial use, including many PFAS chemicals," the report says. "In 2016, Congress strengthened the law, bolstering the E.P.A.'s authority to order health testing, among other measures. The Government Accountability Office, the watchdog arm of Congress, still identifies the Toxic Substances Control Act as a program with one of the highest risks of abuse and mismanagement." According to a recent report from the Intercept, "the E.P.A. office in charge of reviewing toxic chemicals tampered with the assessments of dozens of chemicals to make them appear safer."

Republicans

Hackers Scrape 90,000 GETTR User Emails, Surprising No One (vice.com) 75

Just days after its launch, hackers have already found a way to take advantage of GETTR's buggy API to get the username, email address, and location of thousands of users. Motherboard reports: Hackers were able to scrape the email addresses and other data of more than 90,000 GETTR users. On Tuesday, a user of a notorious hacking forum posted a database that they claimed was a scrape of all users of GETTR, the new social media platform launched last week by Trump's former spokesman Jason Miller, who pitched it as an alternative to "cancel culture." The data seen by Motherboard includes email addresses, usernames, status, and location. One of the people whose email is in the database confirmed to Motherboard that they are indeed registered to GETTR. Motherboard also verified the database by attempting to create an account with three email addresses that appear in the database. When doing that, the site displayed the message: "The email is taken," suggesting it's already registered. It's unclear if the database contains the usernames and email addresses of all users on the site. Alon Gal, the co-founder and CTO of cybersecurity firm Hudson Rock, found the forum post with the database. "When threat actors are able to extract sensitive information due to neglectful API implementations, the consequence is equivalent to a data breach and should be handled accordingly by the firm and to be examined by regulators," he told Motherboard in an online chat.
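The email check Motherboard used works because the signup endpoint answers differently for registered and unregistered addresses, a classic account-enumeration leak. A hedged sketch of the leaky pattern versus the enumeration-resistant fix (hypothetical functions, not GETTR's actual API):

```python
REGISTERED = {"alice@example.com"}  # stand-in for the site's user store

def leaky_signup(email):
    # Mirrors the behavior described above: the response alone
    # reveals whether the address already has an account.
    return "The email is taken" if email in REGISTERED else "Account created"

def uniform_signup(email):
    # Enumeration-resistant variant: identical response either way;
    # the real outcome is delivered out of band (e.g. by email).
    return "Check your inbox to continue"

# An outsider can distinguish accounts through the first endpoint
# but not the second.
assert leaky_signup("alice@example.com") != leaky_signup("new@example.com")
assert uniform_signup("alice@example.com") == uniform_signup("new@example.com")
```

The uniform response costs some usability, which is why many sites accept the leak; the point of Gal's comment is that when the same endpoint can be hit in bulk with no throttling, the leak scales into a dataset indistinguishable from a breach.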
