AI

Google, YouTube and Venmo Send Cease-and-Desist Letters To Facial Recognition App That Helps Law Enforcement (cbsnews.com) 54

Google, YouTube and Venmo have sent cease-and-desist letters to Clearview AI, a facial recognition app that scrapes images from websites and social media platforms, CBS News has learned. The tech companies join Twitter, which sent a similar letter in January, in trying to block the app from taking pictures from their platforms. From the report: Clearview AI can identify a person by comparing their picture to its database of three billion images from the internet, and the results are 99.6% accurate, CEO Hoan Ton-That told CBS News correspondent Errol Barnett. The app is only available to law enforcement to be used to identify criminals, Ton-That said. "You have to remember that this is only used for investigations after the fact. This is not a 24/7 surveillance system," he said. But YouTube, which is owned by Google, as well as Venmo and Twitter say the company is violating their policies. [...] In addition to demanding that Clearview AI stop scraping its content, Twitter demanded that the app delete all data already collected from the platform, according to an excerpt of the cease-and-desist letter given to CBS News. Update: LinkedIn is joining the party.
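The report doesn't say how Clearview's matching works internally, but identification against a large photo database is typically done by comparing face embeddings (fixed-length vectors produced by a neural network) and picking the nearest neighbor. A minimal sketch with invented four-dimensional embeddings; real systems use hundreds of dimensions and approximate nearest-neighbor search:

```python
from math import sqrt

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = sqrt(sum(a * a for a in u))
    norm_v = sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def best_match(query, database):
    """Return (index, similarity) of the database embedding most
    similar to the query embedding."""
    sims = [cosine(query, emb) for emb in database]
    idx = max(range(len(sims)), key=sims.__getitem__)
    return idx, sims[idx]

# Invented embeddings standing in for neural-network output.
database = [[0.9, 0.1, 0.0, 0.1],
            [0.0, 1.0, 0.2, 0.0],
            [0.1, 0.0, 0.9, 0.3]]
query = [0.85, 0.15, 0.05, 0.1]
idx, sim = best_match(query, database)
```

A headline figure like "99.6% accurate" depends entirely on the similarity threshold and the test set used, which is one reason such claims are hard to audit from outside.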
Businesses

Instacart Employees in One Chicago Store Have Just Voted To Join a Union (engadget.com) 47

"Gig economy workers may have won an important, if conditional, battle in their push for better conditions," reports Engadget: Instacart employees in the Chicago suburb of Skokie have voted to unionize through their local branch of United Food and Commercial Workers, giving them more collective bargaining power than they had before.

The move only covers 15 staffers who operate at the Mariano's grocery store, but it's the first time Instacart employees have unionized in the U.S. and could affect issues like turnover rates, work pacing and mysterious employee rating algorithms. In a statement, Instacart said it "will honor" the unionization vote pending certification of the results, and that it intended to negotiate in "good faith" on a collective bargaining agreement. The company added that it "respect[s] our employees' rights to explore unionization."

Motherboard reports that prior to the vote Instacart had "enlisted high-level managers to visit the Mariano's grocery store where the unionizing workers pick and pack groceries for delivery. The managers distributed anti-union literature warning employees that a union would drain paychecks and 'exercise a great deal of control' over workers."

Motherboard also cites stats from the "Collective Actions in Tech" database showing there were 100 organizing actions in just the last year by workers at Google, Amazon, Facebook, and Microsoft -- and notes that this month will also see the results of a vote by Kickstarter employees on whether to unionize.
Privacy

Breach at Indian Airline SpiceJet Affects 1.2 Million Passengers (techcrunch.com) 13

SpiceJet, one of India's largest privately owned airlines, suffered a data breach involving the details of more than a million of its passengers, a security researcher told TechCrunch. From the report: The security researcher, who described their actions as "ethical hacking" but whom we are not naming as they likely fell afoul of U.S. computer hacking laws, gained access to one of SpiceJet's systems by brute-forcing the system's easily guessable password. An unencrypted database backup file on that system contained private information of more than 1.2 million passengers of the budget carrier last month, TechCrunch has learned. Each record included the passenger's name, phone number, email address and date of birth, the researcher told TechCrunch. Some of these passengers were state officials, they said. The database included a rolling month's worth of flight information and details of each passenger, they said, adding that they believe that the database was easily accessible for anyone who knew where to look.
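The entry point here was a brute-forced, "easily guessable" password. Defenses usually start with rejecting any password that appears on, or is a trivial variant of an entry in, public breached-password wordlists, since those are exactly what brute-force tools try first. A minimal sketch; the tiny wordlist below stands in for real lists like rockyou.txt:

```python
# A tiny stand-in for a real breached-password wordlist.
COMMON_PASSWORDS = {
    "123456", "password", "qwerty", "admin",
    "letmein", "welcome1", "iloveyou",
}

def is_guessable(password: str) -> bool:
    """True if the password would fall to a basic dictionary attack:
    it is on the wordlist, or a trivial variation of an entry."""
    p = password.lower()
    if p in COMMON_PASSWORDS:
        return True
    # Strip trailing digits to catch variants like "password2020".
    stripped = p.rstrip("0123456789")
    return stripped in COMMON_PASSWORDS
```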
Security

New Web Service Can Notify Companies When Their Employees Get Phished (zdnet.com) 18

Starting today, companies across the world have a new free web service at their disposal that will automatically send out email notifications if one of their employees gets phished. From a report: The service is named "I Got Phished" and is managed by Abuse.ch, a non-profit organization known for its malware and cyber-crime tracking operations. Just like all other Abuse.ch services, I Got Phished will be free to use. Any company can sign up via the I Got Phished website. Signing up only takes a few seconds. Subscribing to email notifications is done on a domain-name basis, and companies don't have to expose a list of their employee email addresses to a third-party service. Once a company's security staff has subscribed to the service, I Got Phished will check its internal database for email addresses belonging to the company's domain. This database contains logs from phishing operations, including the email addresses of phishing victims.
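The domain-level design described above, where companies register a domain and the service matches victim addresses against it without ever seeing an employee list, can be sketched like this (the names and structure are illustrative, not Abuse.ch's actual code):

```python
from collections import defaultdict

# Domains whose security teams have subscribed, with a contact address.
SUBSCRIPTIONS = {
    "example.com": "secteam@example.com",
    "corp.example.org": "soc@corp.example.org",
}

def route_notifications(phished_addresses):
    """Group phished addresses by domain and return a mapping of
    {subscriber contact: [victim addresses]} for subscribed domains only."""
    notify = defaultdict(list)
    for addr in phished_addresses:
        domain = addr.rsplit("@", 1)[-1].lower()
        contact = SUBSCRIPTIONS.get(domain)
        if contact:
            notify[contact].append(addr)
    return dict(notify)

victims = ["alice@example.com", "bob@other.net", "carol@EXAMPLE.com"]
```

Addresses at unsubscribed domains (like bob@other.net above) are simply dropped, so the service never has to reveal who else is in its phishing-victim logs.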
Privacy

Clearview AI Is Struggling To Address Complaints As Its Legal Issues Mount (buzzfeednews.com) 19

An anonymous reader quotes a report from BuzzFeed News: Clearview AI, the facial recognition company that claims to have amassed a database of more than 3 billion photos scraped from Facebook, YouTube, and millions of other websites, is scrambling to deal with calls for bans from advocacy groups and legal threats. These troubles come after news reports exposed its questionable data practices and misleading statements about working with law enforcement. Following stories published in the New York Times and BuzzFeed News, the Manhattan-based startup received cease-and-desist letters from Twitter and the New Jersey attorney general. It was also sued in Illinois in a case seeking class-action status.

Despite its legal woes, Clearview continues to contradict itself, according to documents obtained by BuzzFeed News that are inconsistent with what the company has told the public. In one example, the company, whose code of conduct states that law enforcement should only use its software for criminal investigations, encouraged officers to use it on their friends and family members. In the aftermath of revelations about its technology, Clearview has tried to clean up its image by posting informational webpages, creating a blog, and trotting out surrogates for media interviews, including one in which an investor claimed Clearview was working with "over a thousand independent law enforcement agencies." Previously, Clearview had stated that the number was around 600. Clearview has also tried to allay concerns that its technology could be abused or used outside the scope of police investigations. In a code of conduct that the company published on its site earlier this month, it said its users should "only use the Services for law enforcement or security purposes that are authorized by their employer and conducted pursuant to their employment." It bolstered that idea with a blog post on Jan. 23, which stated, "While many people have advised us that a public version would be more profitable, we have rejected the idea."
"Clearview exists to help law enforcement agencies solve the toughest cases, and our technology comes with strict guidelines and safeguards to ensure investigators use it for its intended purpose only," the post stated.

But in a November email, a company representative encouraged a police officer to use the software on himself and his acquaintances. "Have you tried taking a selfie with Clearview yet?" the email read. "It's the best way to quickly see the power of Clearview in real time. Try your friends or family. Or a celebrity like Joe Montana or George Clooney. Your Clearview account has unlimited searches. So feel free to run wild with your searches."
Privacy

Government Privacy Watchdog Under Pressure To Recommend Facial Recognition Ban (thehill.com) 31

An anonymous reader quotes a report from The Hill: The Privacy and Civil Liberties Oversight Board (PCLOB), an independent agency, is coming under increasing pressure to recommend the federal government stop using facial recognition. Forty groups, led by the Electronic Privacy Information Center, sent a letter Monday to the agency calling for the suspension of facial recognition systems "pending further review." "The rapid and unregulated deployment of facial recognition poses a direct threat to 'the precious liberties that are vital to our way of life,'" the advocacy groups wrote.

The PCLOB "has a unique responsibility, set out in statute, to assess technologies and policies that impact the privacy of Americans after 9-11 and to make recommendations to the President and executive branch," they wrote. The agency, created in 2004, advises the administration on privacy issues. The letter cited a recent New York Times report about Clearview AI, a company which claims to have a database of more than 3 billion photos and is reportedly collaborating with hundreds of police departments. It also mentioned a study by the National Institute of Standards and Technology, part of the Commerce Department, which found that the majority of facial recognition systems have "demographic differentials" that can worsen their accuracy based on a person's age, gender or race.

Earth

Albatrosses Outfitted With GPS Trackers Detect Illegal Fishing Vessels (smithsonianmag.com) 71

schwit1 shares a report from the Smithsonian: Capable of following fishing boats into remote regions out of reach of monitoring machines like ships, aircraft and even certain satellites, these feathered crimefighters could offer a convenient and cost-effective way to keep tabs on foul play at sea -- and may even help gather crucial conservation data along the way. [...] On top of their stamina and moxie, albatrosses also have a certain fondness for fish-toting vessels, says study author Samantha Patrick, a marine biologist at the University of Liverpool. To the birds, the fishing gear attached to these boats is basically a smorgasbord of snacks -- and albatrosses can spot the ships from almost 20 miles away.

To test the birds' patrolling potential, the researchers stomped into the marshy nesting grounds of wandering albatrosses (Diomedea exulans) and Amsterdam albatrosses (Diomedea amsterdamensis) roosting on Crozet, Kerguelen and Amsterdam, three remote island locales in the southern Indian Ocean. After selecting 169 individuals of different ages, the team taped or glued transceivers, each weighing just two ounces, to the birds' backs and bid them adieu. Over the course of six months, the team's army of albatrosses surveyed over 20 million square miles of sea. Whenever the birds came within three or so miles of a boat, their trackers logged its coordinates, then beamed them via satellite to an online database that officials could access and cross-check with automatic identification system (AIS) data. Of the 353 fishing vessels detected, a whopping 28 percent had their AIS switched off. The number of covert ships was especially high in international waters, where about 37 percent of vessels operated AIS-free. [...] Because the birds and their transceivers detected only radar, no identifying information was logged. The task of verifying a boat's legal status still falls to officials, who must then decide whether to take action, Patrick explains. But in mapping potential hotspots of illegal fishing, the birds set off a chain reaction that could help bring perpetrators to justice.
The results of the tracking method were published in the journal PNAS.
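The cross-checking step the article describes, comparing bird-logged radar positions against AIS broadcasts, amounts to asking whether any AIS-reporting vessel was near each detection. A simplified sketch with invented coordinates; real matching would also window by time and account for position uncertainty:

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points, in kilometers."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * asin(sqrt(a))

def flag_dark_vessels(detections, ais_positions, radius_km=5.0):
    """Return detections with no AIS-reporting vessel within radius_km:
    candidates for boats operating with their AIS switched off."""
    dark = []
    for det in detections:
        if not any(haversine_km(det[0], det[1], a[0], a[1]) <= radius_km
                   for a in ais_positions):
            dark.append(det)
    return dark

detections = [(-46.43, 51.87), (-49.35, 70.22)]  # bird-logged radar hits
ais = [(-46.44, 51.88)]                          # vessels broadcasting AIS
```

As the article notes, a flagged position is only a lead: verifying a boat's legal status still falls to officials.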
Government

Maryland Bill Would Outlaw Ransomware, Keep Researchers From Reporting Bugs (arstechnica.com) 85

A proposed law introduced in Maryland's state senate last week would criminalize the possession of ransomware, along with a range of other computer crimes. However, Katie Moussouris, CEO of Luta Security, warns that the current bill "would prohibit vulnerability disclosure unless the specific systems or data accessed by the helpful security researcher were explicitly authorized ahead of time and would prohibit public disclosure if the reports were ignored." Ars Technica reports: The bill, Senate Bill 3, covers a lot of ground already covered by U.S. Federal law. But it classifies the mere possession of ransomware as a misdemeanor punishable by up to 10 years of imprisonment and a fine of up to $10,000. The bill also states (in all capital letters in the draft) that "THIS PARAGRAPH DOES NOT APPLY TO THE USE OF RANSOMWARE FOR RESEARCH PURPOSES."

Additionally, the bill would outlaw unauthorized intentional access or attempts to access "all or part of a computer network, computer control language, computer, computer software, computer system, computer service, or computer database; or copy, attempt to copy, possess, or attempt to possess the contents of all or part of a computer database accessed." It also would criminalize under Maryland law any act intended to "cause the malfunction or interrupt the operation of all or any part" of a network, the computers on it, or their software and data, or "possess, identify, or attempt to identify a valid access code; or publicize or distribute a valid access code to an unauthorized person." There are no research exclusions in the bill for these provisions.
"While access or attempted access would be a misdemeanor (punishable by a fine of $1,000, three years of imprisonment, or both), breaching databases would be a felony if damages were determined to be greater than $10,000 -- punishable by a sentence of up to 10 years, a fine of $10,000, or both," the report adds. "The punishments go up if systems belonging to the state government, electric and gas utilities, or public utilities are involved, with up to 10 years of imprisonment and a $25,000 fine if more than $50,000 in damage is done."
Twitter

Twitter Tells Facial Recognition Trailblazer To Stop Using Site's Photos (nytimes.com) 45

Kashmir Hill, reporting for The New York Times: A mysterious company that has licensed its powerful facial recognition technology to hundreds of law enforcement agencies is facing attacks from Capitol Hill and from at least one Silicon Valley giant. Twitter sent a letter this week to the small start-up company, Clearview AI, demanding that it stop taking photos and any other data from the social media website "for any reason" and delete any data that it previously collected, a Twitter spokeswoman said. The cease-and-desist letter, sent on Tuesday, accused Clearview of violating Twitter's policies.

The New York Times reported last week that Clearview had amassed a database of more than three billion photos from social media sites -- including Facebook, YouTube, Twitter and Venmo -- and elsewhere on the internet. The vast database powers an app that can match people to their online photos and link back to the sites the images came from. The app is used by more than 600 law enforcement agencies, ranging from local police departments to the F.B.I. and the Department of Homeland Security. Law enforcement officials told The Times that the app had helped them identify suspects in many criminal cases.
It's unclear what social media sites can do to force Clearview to remove images from its database. "In the past, companies have sued websites that scrape information, accusing them of violating the Computer Fraud and Abuse Act, an anti-hacking law," notes the NYT. "But in September, a federal appeals court in California ruled against LinkedIn in such a case, establishing a precedent that the scraping of public data most likely doesn't violate the law."
Microsoft

Microsoft Discloses Security Breach of Customer Support Database Containing 250 Million Records (zdnet.com) 32

An anonymous reader quotes a report from ZDNet: Microsoft disclosed today a security breach that took place in December 2019. In a blog post today, the OS maker said that an internal customer support database that was storing anonymized user analytics was accidentally exposed online without proper protections between December 5 and December 31. The database was spotted and reported to Microsoft by Bob Diachenko, a security researcher with Security Discovery.

The leaky customer support database consisted of a cluster of five Elasticsearch servers, a technology used to simplify search operations, Diachenko told ZDNet today. All five servers stored the same data, appearing to be mirrors of each other. Diachenko said Microsoft secured the exposed database on the same day he reported the issue to the OS maker, despite it being New Year's Eve. The servers contained roughly 250 million entries, with information such as email addresses, IP addresses, and support case details. Microsoft said that most of the records didn't contain any personal user information.
"Microsoft blamed the accidental server exposure on misconfigured Azure security rules it deployed on December 5, which it has since fixed," adds ZDNet.

Microsoft went on to list several changes to prevent this sort of thing from happening again, such as "auditing the established network security rules for internal resources" and "adding additional alerting to service teams when security rule misconfigurations are detected."
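Exposures like this one are typically found by checking whether an Elasticsearch HTTP endpoint answers requests without authentication. The sketch below uses the real `/_cluster/health` API path on Elasticsearch's default port 9200; splitting a pure response classifier from the network probe is this sketch's own design, and probes should only ever be pointed at servers you are authorized to test:

```python
import json
from urllib.request import urlopen
from urllib.error import URLError, HTTPError

def classify_response(status, body):
    """Interpret the reply from GET /_cluster/health.
    A 200 with cluster-status JSON means the node answers without auth."""
    if status == 200:
        health = json.loads(body)
        return "exposed: cluster status %s" % health.get("status", "unknown")
    if status in (401, 403):
        return "authentication required"
    return "unexpected response (%d)" % status

def probe(host, port=9200, timeout=5):
    """Check one host. Only use against infrastructure you own."""
    try:
        url = "http://%s:%d/_cluster/health" % (host, port)
        with urlopen(url, timeout=timeout) as r:
            return classify_response(r.status, r.read())
    except HTTPError as e:
        return classify_response(e.code, b"")
    except URLError:
        return "unreachable"
```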
AI

IBM's Debating AI Just Got a Lot Closer To Being a Useful Tool (technologyreview.com) 24

We make decisions by weighing pros and cons. Artificial intelligence has the potential to help us with that by sifting through ever-increasing mounds of data. But to be truly useful, it needs to reason more like a human. An artificial intelligence technique known as argument mining could help. From a report: IBM has just taken a big step in that direction. The company's Project Debater team has spent several years developing an AI that can build arguments. Last year IBM demonstrated its work-in-progress technology in a live debate against a world-champion human debater, the equivalent of Watson's Jeopardy! showdown. Such stunts are fun, and they provide a proof of concept. Now IBM is turning its toy into a genuinely useful tool. The version of Project Debater used in the live debates included the seeds of the latest system, such as the capability to search hundreds of millions of news articles. But in the months since, the team has extensively tweaked the neural networks it uses, improving the quality of the evidence the system can unearth. One important addition is BERT, a neural network Google built for natural-language processing, which can answer queries. The work will be presented at the Association for the Advancement of Artificial Intelligence conference in New York next month.

To train their AI, lead researcher Noam Slonim and his colleagues at IBM Research in Haifa, Israel, drew on 400 million documents taken from the LexisNexis database of newspaper and journal articles. This gave them some 10 billion sentences, a natural-language corpus around 50 times larger than Wikipedia. They paired this vast evidence pool with claims about several hundred different topics, such as "Blood donation should be mandatory" or "We should abandon Valentine's Day." They then asked crowd workers on the Figure Eight platform to label sentences according to whether or not they provided evidence for or against particular claims. The labeled data was fed to a supervised learning algorithm.
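The labeling pipeline described above, crowd workers marking sentences as evidence for or against a claim and those labels feeding a supervised learner, can be illustrated with a toy bag-of-words classifier. IBM's actual system uses BERT-scale neural networks over billions of sentences; the invented sentences and the naive Bayes model below show only the shape of the approach:

```python
import math
from collections import Counter, defaultdict

def tokenize(s):
    return s.lower().split()

class EvidenceClassifier:
    """Tiny multinomial naive Bayes over bag-of-words features,
    standing in for the supervised learner fed with crowd labels."""

    def fit(self, sentences, labels):
        self.word_counts = defaultdict(Counter)
        self.label_counts = Counter(labels)
        for sent, lab in zip(sentences, labels):
            self.word_counts[lab].update(tokenize(sent))
        self.vocab = {w for c in self.word_counts.values() for w in c}
        return self

    def predict(self, sentence):
        total_examples = sum(self.label_counts.values())
        scores = {}
        for lab, count in self.label_counts.items():
            score = math.log(count / total_examples)
            total_words = sum(self.word_counts[lab].values())
            for w in tokenize(sentence):
                # Laplace smoothing handles unseen words.
                score += math.log((self.word_counts[lab][w] + 1)
                                  / (total_words + len(self.vocab)))
            scores[lab] = score
        return max(scores, key=scores.get)

# Invented crowd labels: is the sentence evidence "for" or "against"
# the claim "Blood donation should be mandatory"?
sentences = ["donors report improved health and wellbeing",
             "blood supplies save lives in emergencies",
             "mandatory donation violates bodily autonomy",
             "forced medical procedures cause lasting distress"]
labels = ["for", "for", "against", "against"]
clf = EvidenceClassifier().fit(sentences, labels)
```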

Privacy

Bruce Schneier: Banning Facial Recognition Isn't Enough (nytimes.com) 90

Bruce Schneier, writing at New York Times: Communities across the United States are starting to ban facial recognition technologies. In May of last year, San Francisco banned facial recognition; the neighboring city of Oakland soon followed, as did Somerville and Brookline in Massachusetts (a statewide ban may follow). In December, San Diego suspended a facial recognition program in advance of a new statewide law declaring it illegal. Forty major music festivals pledged not to use the technology, and activists are calling for a nationwide ban. Many Democratic presidential candidates support at least a partial ban on the technology. These efforts are well intentioned, but facial recognition bans are the wrong way to fight against modern surveillance. Focusing on one particular identification method misconstrues the nature of the surveillance society we're in the process of building. Ubiquitous mass surveillance is increasingly the norm. In countries like China, a surveillance infrastructure is being built by the government for social control. In countries like the United States, it's being built by corporations in order to influence our buying behavior, and is incidentally used by the government.

In all cases, modern mass surveillance has three broad components: identification, correlation and discrimination. Let's take them in turn. Facial recognition is a technology that can be used to identify people without their knowledge or consent. It relies on the prevalence of cameras, which are becoming both more powerful and smaller, and machine learning technologies that can match the output of these cameras with images from a database of existing photos. But that's just one identification technology among many. People can be identified at a distance by their heartbeat or by their gait, using a laser-based system. Cameras are so good that they can read fingerprints and iris patterns from meters away. And even without any of these technologies, we can always be identified because our smartphones broadcast unique numbers called MAC addresses. Other things identify us as well: our phone numbers, our credit card numbers, the license plates on our cars. China, for example, uses multiple identification technologies to support its surveillance state.
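One caveat on the smartphone example: precisely because MAC addresses are such convenient identifiers, iOS and Android now randomize the Wi-Fi MAC used when scanning for networks. Randomized addresses are marked by the locally-administered bit defined in IEEE 802 (the second-least-significant bit of the first octet), so they can be told apart from burned-in hardware addresses:

```python
def is_locally_administered(mac: str) -> bool:
    """True if the MAC's locally-administered bit is set, which is how
    randomized (non-burned-in) addresses are marked under IEEE 802."""
    first_octet = int(mac.split(":")[0], 16)
    return bool(first_octet & 0x02)

# An address like da:a1:19:... (bit set) is typically randomized;
# a burned-in vendor address like 3c:22:fb:... has the bit clear.
```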

Government

Facial Recognition Database With 3 Billion Scraped Images 'Might End Privacy as We Know It' (muckrock.com) 86

One police detective bragged that photos "could be covertly taken with a telephoto lens" then input into Clearview AI's database of more than three billion scraped images to immediately identify suspects.

Long-time Slashdot reader v3rgEz writes: For the past year, government transparency non-profits MuckRock and Open the Government have been digging into how local police departments around the country use facial recognition. The New York Times reports on their latest discovery: that Clearview, a Peter Thiel-backed startup, has scraped Facebook, Venmo, and dozens of other social media sites to create a massive, unregulated tool for law enforcement to track where you were, who you were with, and more, all with just a photo.

Read the Clearview docs yourself and file a request in your town to see if your police department is using it.

The Times describes Clearview as "the secretive company that might end privacy as we know it," with one of the company's early investors telling the newspaper that because information technology keeps getting more powerful, he's concluded that "there's never going to be privacy."

He also expresses his belief that technology can't be banned, then acknowledges "Sure, that might lead to a dystopian future or something, but you can't ban it."
Medicine

98.6 Degrees Fahrenheit Isn't the Average Anymore (smithsonianmag.com) 148

schwit1 shares a report from The Wall Street Journal: Nearly 150 years ago, [German physician Carl Reinhold August Wunderlich] analyzed a million temperatures from 25,000 patients and concluded that normal human-body temperature is 98.6 degrees Fahrenheit. In a new study, researchers from Stanford University argue that Wunderlich's number was correct at the time but is no longer accurate because the human body has changed. Today, they say, the average normal human-body temperature is closer to 97.5 degrees Fahrenheit (Warning: source paywalled; alternative source).

To test their hypothesis that today's normal body temperature is lower than in the past, Stanford's Dr. Parsonnet and her research partners analyzed 677,423 temperatures collected from 189,338 individuals over a span of 157 years. The readings were recorded in the pension records of Civil War veterans from the start of the war through 1940; in the National Health and Nutrition Examination Survey I conducted by the U.S. Centers for Disease Control and Prevention from 1971 through 1974; and in the Stanford Translational Research Integrated Database Environment from 2007 through 2017. Overall, temperatures of the Civil War veterans were higher than measurements taken in the 1970s, and, in turn, those measurements were higher than those collected in the 2000s.
The study has been published in the journal eLife.
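The core comparison in the study, average temperature by measurement era, reduces to grouping readings by source and comparing means. A toy sketch with invented readings; the actual analysis also adjusted for factors such as age, weight, and time of day:

```python
from statistics import mean

# Invented readings (degrees F) standing in for the three data sources.
readings = {
    "Civil War veterans (1862-1940)": [98.9, 98.7, 99.0, 98.5],
    "NHANES I (1971-1974)":           [98.4, 98.2, 98.6, 98.3],
    "Stanford STRIDE (2007-2017)":    [97.6, 97.4, 97.7, 97.5],
}

# Mean temperature per era, matching the study's observed downward trend.
averages = {era: mean(temps) for era, temps in readings.items()}
```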
Security

Researchers Find Serious Flaws In WordPress Plugins Used On 400K Sites (arstechnica.com) 11

An anonymous reader quotes a report from Ars Technica: Serious vulnerabilities have recently come to light in three WordPress plugins that have been installed on a combined 400,000 websites, researchers said. InfiniteWP, WP Time Capsule, and WP Database Reset are all affected. The highest-impact flaw is an authentication bypass vulnerability in the InfiniteWP Client, a plugin installed on more than 300,000 websites. It allows administrators to manage multiple websites from a single server. The flaw lets anyone log in to an administrative account with no credentials at all. From there, attackers can delete contents, add new accounts, and carry out a wide range of other malicious tasks.

The critical flaw in WP Time Capsule also leads to an authentication bypass that allows unauthenticated attackers to log in as an administrator. WP Time Capsule, which runs on about 20,000 sites, is designed to make backing up website data easier. By including a string in a POST request, attackers can obtain a list of all administrative accounts and automatically log in to the first one. The bug has been fixed in version 1.21.16. Sites running earlier versions should update right away. Web security firm WebARX has more details.

The last vulnerable plugin is WP Database Reset, which is installed on about 80,000 sites. One flaw allows any unauthenticated person to reset any table in the database to its original WordPress state. The bug is caused by reset functions that aren't secured by the standard capability checks or security nonces. Exploits can result in the complete loss of data or a site reset to the default WordPress settings. A second security flaw in WP Database Reset causes a privilege-escalation vulnerability that allows any authenticated user -- even those with minimal system rights -- to gain administrative rights and lock out all other users. All site administrators using this plugin should update to version 3.15, which patches both vulnerabilities. Wordfence has more details about both flaws here.
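The two safeguards the vulnerable plugin skipped, capability checks and nonces, are WordPress's standard guards (`current_user_can()` and `wp_verify_nonce()` in PHP). The pattern itself, sketched here in Python with an HMAC-based nonce and invented names:

```python
import hashlib
import hmac

SECRET_KEY = b"server-side secret"  # placeholder for a real secret

def make_nonce(user_id: str, action: str) -> str:
    """Tie a short-lived token to a user and an action, like a WP nonce."""
    msg = ("%s:%s" % (user_id, action)).encode()
    return hmac.new(SECRET_KEY, msg, hashlib.sha256).hexdigest()[:16]

def handle_reset(user, action, nonce, do_reset):
    """Run do_reset() only if the caller holds the admin capability AND
    presented a valid nonce -- the two checks the plugin skipped."""
    if "manage_options" not in user["capabilities"]:
        return "forbidden: insufficient capability"
    expected = make_nonce(user["id"], action)
    if not hmac.compare_digest(expected, nonce):
        return "forbidden: bad nonce"
    do_reset()
    return "ok"

admin = {"id": "u1", "capabilities": {"manage_options"}}
subscriber = {"id": "u2", "capabilities": {"read"}}
```

Without the capability check, any logged-in user could trigger the reset; without the nonce, an attacker could trick an admin's browser into doing it for them (CSRF).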

Oracle

Oracle Ties Previous All-Time Patch High With January 2020 Updates (threatpost.com) 9

"Not sure if this is good news (Oracle is very busy patching their stuff) or bad news (Oracle is very busy patching their stuff) but this quarterly cycle they tied their all-time high number of vulnerability fixes released," writes Slashdot reader bobthesungeek76036. "And they are urging folks to not drag their feet in deploying these patches." Threatpost reports: The software giant patched 300+ bugs in its quarterly update. Oracle has patched 334 vulnerabilities across all of its product families in its January 2020 quarterly Critical Patch Update (CPU). Out of these, 43 are critical/severe flaws carrying CVSS scores of 9.1 and above. The CPU ties for Oracle's previous all-time high for number of patches issued, in July 2019, which overtook its previous record of 308 in July 2017. The company said in a pre-release announcement that some of the vulnerabilities affect multiple products. "Due to the threat posed by a successful attack, Oracle strongly recommends that customers apply Critical Patch Update patches as soon as possible," it added.

"Some of these vulnerabilities were remotely exploitable, not requiring any login data; therefore posing an extremely high risk of exposure," said Boris Cipot, senior security engineer at Synopsys, speaking to Threatpost. "Additionally, there were database, system-level, Java and virtualization patches within the scope of this update. These are all critical elements within a company's infrastructure, and for this reason the update should be considered mandatory. At the same time, organizations need to take into account the impact that this update could have on their systems, scheduling downtime accordingly."

AI

The Military Is Building Long-Range Facial Recognition That Works In the Dark (medium.com) 21

According to contracts posted on a federal spending database, the U.S. military is working to develop facial recognition technology that reads the pattern of heat being emitted by faces in order to identify specific people. OneZero reports: Now, the military wants to develop a facial recognition system that analyzes infrared images to identify individuals. The Army Research Lab has previously publicized research in this area, but these contracts, which started at the end of September 2019 and run until 2021, indicate the technology is now being actively developed for use in the field. "Sensors should be demonstrable in environments such as targets seen through automotive windshield glass, targets that are backlit, and targets that are obscured due to light weather (e.g., fog)," the Department of Defense indicated when requesting proposals.

The DoD is calling for the technology to be incorporated into a device that is small enough to be carried by an individual. The device should be able to operate from a distance of 10 to 500 meters and match individuals against a watchlist. According to the details of the request, the Defense Forensics and Biometrics Agency is directly overseeing work on the technology. Two companies, Cyan Systems, Inc. and Polaris Sensor Technologies, are working on it on behalf of the DFBA.

The Military

The Military Is Building Long-Range Facial Recognition That Works in the Dark (medium.com) 60

An anonymous reader shares a report: The U.S. military is spending more than $4.5 million to develop facial recognition technology that reads the pattern of heat being emitted by faces in order to identify specific people. The technology would work in the dark and across long distances, according to contracts posted on a federal spending database. Facial recognition is already employed by the military, which uses the technology to identify individuals on the battlefield. But existing facial recognition technology typically relies on images generated by standard cameras, such as those found in iPhones or CCTV networks.

Databases

'Top Programming Skills' List Shows Employers Want SQL (dice.com) 108

Former Slashdot contributor Nick Kolakowski is now a senior editor at Dice Insights, where he's just published a list of the top programming skills employers were looking for during the last 30 days.
If you're a software developer on the hunt for a new gig (or you're merely curious about what programming skills employers are looking for these days), one thing is clear: employers really, really, really want technologists who know how to build, maintain, and scale everything database- (and data-) related.

We've come to that conclusion after analyzing data about programming skills from Burning Glass, which collects and organizes millions of job postings from across the country.

The biggest takeaway? "When it comes to programming skills, employers are hungriest for SQL." Here's their ranking of the most in-demand skills:
  1. SQL
  2. Java
  3. "Software development"
  4. "Software engineering"
  5. Python
  6. JavaScript
  7. Linux
  8. Oracle
  9. C#
  10. Git

The list actually includes the top 18 programming skills, but besides languages and frameworks like C++ and .NET, it also includes more generalized skills like "Agile development," "debugging," and "Unix."

But Nick concludes that "As a developer, if you've mastered database and data-analytics skills, that makes you insanely valuable to a whole range of companies out there."


Medicine

23andMe Licenses Antibody It Developed From its Genetic Database To Spanish Firm Almirall (bloomberg.com) 30

23andMe has licensed an antibody it developed from its genetic database to treat inflammatory diseases to Spanish drugmaker Almirall SA. "The deal, announced by Almirall in a filing with Spanish regulators on Thursday, marks the first time that 23andMe has licensed a drug compound that it has developed itself," reports Bloomberg. From the report: Leveraging its genetic data to develop drugs has become an increasingly important part of 23andMe's business. More than 10 million customers have taken its DNA tests, and that trove of data can help illuminate new drug targets to treat disease. Previously, the company had made a deal to share its data and collaborate on drug development with U.K. drugmaker GlaxoSmithKline Plc, which took a $300 million stake in the company in 2018. But this is the first time it has licensed a compound it has developed in-house.

The compound is what's known as a bispecific monoclonal antibody, a class of large-molecule drugs. It is designed to block signals from the IL-36 family of cytokines, which is associated with many autoimmune and inflammatory conditions, such as lupus and Crohn's disease. 23andMe was most interested in the antibody's effectiveness in treating severe forms of psoriasis. The company put the drug compound through animal testing, but it will still need to undergo clinical trials in humans.
