Australia

Australia Formally Censors Christchurch Attack Videos (theguardian.com) 318

"Australian internet service providers have been ordered to block eight websites hosting video of the Christchurch terrorist attacks," according to the Guardian.

Slashdot reader aberglas shares their report: In March, shortly after the Christchurch massacre, Australian telecommunications companies and internet providers began proactively blocking websites hosting the video of the Christchurch shooter murdering more than 50 people, or the shooter's manifesto. A total of 43 websites, based on a list provided by Vodafone New Zealand, were blocked, cutting off access from Australia for anyone not using virtual private networks (VPNs) or other workarounds. The government praised the internet providers, despite the action being in a legally grey area.

To avoid legal complications, the prime minister, Scott Morrison, asked the e-safety commissioner and the internet providers to develop a protocol under which the e-safety commissioner could order the providers to block access to the offending sites. The order issued on Sunday covers just eight websites, after several others stopped hosting the material or, like 8chan, ceased operating. Under the order, the e-safety commissioner will be responsible for monitoring the sites; if they remove the material, they can be unblocked. The blocks will be reviewed every six months.

"The remaining rogue websites need only to remove the illegal content to have the block against them lifted," the e-safety commissioner, Julie Inman Grant, said.

Security

Hong Kong Protesters Using Mesh Messaging App China Can't Block: Usage Up 3685% (forbes.com) 57

An anonymous reader quotes Forbes: How do you communicate when the government censors the internet? With a peer-to-peer mesh broadcasting network that doesn't use the internet.

That's exactly what Hong Kong pro-democracy protesters are doing now, thanks to San Francisco startup Bridgefy's Bluetooth-based messaging app. The protesters can communicate with each other — and the public — using no persistent managed network...

While you can chat privately with contacts, you can also broadcast to anyone within range, even if they are not a contact.
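Bridgefy has not published its protocol, but the broadcast behaviour described above is typically implemented as controlled flooding: each node relays any message it hasn't already seen, with a TTL capping how far it spreads. A minimal sketch of that general technique follows; the names and structure are illustrative, not Bridgefy's actual code.

```python
import uuid

class MeshNode:
    """Toy model of flood-based mesh broadcasting (not Bridgefy's protocol)."""

    def __init__(self, name):
        self.name = name
        self.peers = []    # nodes currently in radio range
        self.seen = set()  # message IDs already relayed, to stop loops
        self.inbox = []

    def connect(self, other):
        self.peers.append(other)
        other.peers.append(self)

    def broadcast(self, text, ttl=5):
        self._relay({"id": uuid.uuid4().hex, "text": text, "ttl": ttl})

    def _relay(self, msg):
        if msg["id"] in self.seen or msg["ttl"] <= 0:
            return  # drop duplicates and expired messages
        self.seen.add(msg["id"])
        self.inbox.append(msg["text"])
        for peer in self.peers:  # re-broadcast with a decremented TTL
            peer._relay({**msg, "ttl": msg["ttl"] - 1})
```

Because every node relays, a message can hop from phone to phone far beyond the sender's own Bluetooth range, which is what makes the scheme hard to block centrally.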

That's clearly an ideal scenario for protesters who are trying to reach people but cannot use traditional SMS texting, email, or the undisputed uber-app of China: WeChat. All of them are monitored by the state.

On Wednesday another Forbes article confirmed with Bridgefy that its app uses end-to-end RSA encryption -- though an associate professor at the Johns Hopkins Information Security Institute warns in the same article that the Chinese government could demand that telecom providers hand over a list of all users running the app and where they're located.
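For readers unfamiliar with RSA, the end-to-end property rests on the fact that only the holder of the private exponent can reverse the public-key operation. A textbook sketch with deliberately tiny primes, purely to illustrate the math -- wholly insecure, and unrelated to Bridgefy's actual implementation (the three-argument pow for the modular inverse requires Python 3.8+):

```python
from math import gcd

# Textbook RSA with tiny primes -- illustration only, never use in practice.
p, q = 61, 53
n = p * q                # public modulus: 3233
phi = (p - 1) * (q - 1)  # 3120
e = 17                   # public exponent, must be coprime with phi
assert gcd(e, phi) == 1
d = pow(e, -1, phi)      # private exponent: e*d = 1 (mod phi)

def encrypt(m, pub=(e, n)):
    return pow(m, pub[0], pub[1])

def decrypt(c, priv=(d, n)):
    return pow(c, priv[0], priv[1])
```

Anyone can encrypt with the public pair (e, n), but decrypting requires d, which stays on the recipient's device; that asymmetry is what "end-to-end" relies on.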

Forbes also notes that "police could sign up to Bridgefy and, at the very least, cause confusion by flooding the network with fake broadcasts" -- or even use the app to spread privacy-compromising malware. "But if they're willing to accept the risk, Bridgefy could remain a useful tool for communicating and organizing in extreme situations."
Youtube

YouTube Removes 17,000 Channels For Hate Speech (hollywoodreporter.com) 409

An anonymous reader quotes a report from The Hollywood Reporter: YouTube says it has removed more than 17,000 channels for hate speech, representing a spike in takedowns since its new hate speech policy went into effect in June. The Google-owned company calls the June update -- in which YouTube said it would specifically prohibit videos that glorify Nazi ideology or deny documented violent events like the Holocaust -- a "fundamental shift in our policies" that resulted in the takedown of more than 100,000 individual videos during the second quarter of the year. The number of comments removed during the same period doubled to over 500 million, in part due to the new hate speech policy. YouTube said that the 30,000 videos it had removed in the last month represented 3 percent of the views that knitting videos generated during the same period. YouTube says the videos removed represented a five-times increase compared with the previous three months. Still, in early August the ADL's Center on Extremism reported finding "a significant number of channels" that continue to spread anti-Semitic and white supremacist content.
Social Networks

Facebook Says it May Remove Like Counts (techcrunch.com) 80

Facebook could soon start hiding the Like counter on News Feed posts to protect users from envy and dissuade them from self-censorship. From a report: Instagram is already testing this in 7 countries including Canada and Brazil, showing a post's audience just a few names of mutual friends who've Liked it instead of the total number. The idea is to prevent users from destructively comparing themselves to others and potentially fleeing if their posts don't get as many Likes. It could also stop users from deleting posts they think aren't getting enough Likes or not sharing in the first place. Reverse engineering master Jane Manchun Wong spotted Facebook prototyping the hidden Like counts in its Android app. When we asked Facebook, the company confirmed to TechCrunch that it's considering testing removal of Like counts. However it's not live for users yet.
Censorship

China Intercepts WeChat Texts From US and Abroad, Researcher Says (npr.org) 27

China is intercepting texts from WeChat users living outside of the country, mostly from the U.S., Taiwan, South Korea, and Australia. NPR reports: The popular Chinese messaging app WeChat is Zhou Fengsuo's most reliable communication link to China. That's because he hasn't been back in over two decades. Zhou, a human rights activist, had been a university student in 1989, when the pro-democracy protests broke out in Beijing's Tiananmen Square. After a year in jail and another in political reeducation, he moved to the United States in 1995. But WeChat often malfunctions. Zhou began noticing in January that his chat groups could not read his messages. "I realized this because I was expecting some feedback [on a post] but there was no feedback," Zhou tells NPR from his home in New Jersey.

As Chinese technology companies expand their footprint outside China, they are also sweeping up vast amounts of data from foreign users. Now, analysts say they know where the missing messages are: Every day, millions of WeChat conversations held inside and outside China are flagged, collected and stored in a database connected to public security agencies in China, according to a Dutch Internet researcher. Zhou is not the only one experiencing recent issues. NPR spoke to three other U.S. citizens who have been blocked from sending messages in WeChat groups or had their accounts frozen earlier this year, despite registering with U.S. phone numbers. This March, [Victor Gevers, co-founder of the nonprofit GDI Foundation, an open-source data security collective] found a Chinese database storing more than 1 billion WeChat conversations, including more than 3.7 billion messages, and tweeted out his findings. Each message had been tagged with a GPS location, and many included users' national identification numbers. Most of the messages were sent inside China, but more than 19 million of them had been sent from people outside the country, mostly from the U.S., Taiwan, South Korea and Australia.

Google

Google Doesn't Want Staff Debating Politics at Work Anymore (bloomberg.com) 301

Google posted new internal rules that discourage employees from debating politics, a shift away from the internet giant's famously open culture. From a report: The new "community guidelines" tell employees not to have "disruptive" conversations and warn workers that they'll be held responsible for whatever they say at the office. The company is also building a tool to let employees flag problematic posts and creating a team of moderators to monitor conversations, a Google spokeswoman said. "While sharing information and ideas with colleagues helps build community, disrupting the workday to have a raging debate over politics or the latest news story does not," the new policy states. "Our primary responsibility is to do the work we've each been hired to do." Google has long encouraged employees to question each other and push back against managers when they think they're making the wrong decision. Google's founders point to the open culture as instrumental to the success they've had revolutionizing the tech landscape over the last two decades.
Privacy

Degrading Tor Network Performance Only Costs a Few Thousand Dollars Per Month (zdnet.com) 16

Threat actors or nation-states looking into degrading the performance of the Tor anonymity network can do it on the cheap, for only a few thousand US dollars per month, new academic research has revealed. An anonymous reader writes: According to researchers from Georgetown University and the US Naval Research Laboratory, threat actors can use tools as banal as public DDoS stressers (booters) to slow down Tor network download speeds or hinder access to Tor's censorship circumvention capabilities. Academics said that while an attack against the entire Tor network would require immense DDoS resources (512.73 Gbit/s) and would cost around $7.2 million per month, there are far simpler and more targeted means for degrading Tor performance for all users. In research presented this week at the USENIX security conference, the research team showed the feasibility and effects of three types of carefully targeted "bandwidth DoS [denial of service] attacks" that can wreak havoc on Tor and its users. Researchers argue that while these attacks don't shut down or clog the Tor network entirely, they can be used to dissuade or drive users away from Tor due to prolonged poor performance, which can be an effective strategy in the long run.
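The paper's headline figures imply a rough unit price for rented booter bandwidth. Assuming cost scales roughly linearly with bandwidth (an assumption made here for illustration; the paper prices its targeted attacks individually), a back-of-envelope calculation shows why a narrowly targeted attack lands in the few-thousand-dollar range:

```python
# Figures from the paper's whole-network estimate.
full_network_rate = 512.73     # Gbit/s needed to saturate all Tor relays
full_network_cost = 7_200_000  # USD per month

# Implied unit price: roughly $14,000 per Gbit/s per month.
cost_per_gbit = full_network_cost / full_network_rate

# A targeted flood (e.g. against Tor's bridges or directory authorities)
# needs only a small fraction of that bandwidth. The 0.3 Gbit/s figure
# below is a hypothetical value, not one taken from the paper.
targeted_rate = 0.3
targeted_cost = targeted_rate * cost_per_gbit  # a few thousand dollars/month
```

The point of the arithmetic is the asymmetry: saturating the whole network is a multi-million-dollar undertaking, but degrading a chokepoint costs three orders of magnitude less.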
AI

The Algorithms That Detect Hate Speech Online Are Biased Against Black People (vox.com) 328

An anonymous reader shares a report: Platforms like Facebook, YouTube, and Twitter are banking on developing artificial intelligence technology to help stop the spread of hateful speech on their networks. The idea is that complex algorithms that use natural language processing will flag racist or violent speech faster and better than human beings possibly can. Doing this effectively is more urgent than ever in light of recent mass shootings and violence linked to hate speech online. But two new studies show that AI trained to identify hate speech may actually end up amplifying racial bias. In one study [PDF], researchers found that leading AI models for processing hate speech were one-and-a-half times more likely to flag tweets as offensive or hateful when they were written by African Americans, and 2.2 times more likely to flag tweets written in African American English (which is commonly spoken by black people in the US). Another study [PDF] found similar widespread evidence of racial bias against black speech in five widely used academic data sets for studying hate speech that totaled around 155,800 Twitter posts.
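The disparities the studies report are ratios of flag rates between dialect groups. A minimal sketch of how such a disparity is computed, using invented counts rather than the studies' data:

```python
# Hypothetical moderation decisions as (dialect, flagged) pairs.
# The counts are invented for illustration -- not the studies' data.
decisions = (
    [("aae", True)] * 46 + [("aae", False)] * 54 +   # African American English
    [("sae", True)] * 21 + [("sae", False)] * 79     # Standard American English
)

def flag_rate(dialect):
    """Fraction of a dialect group's posts flagged as offensive."""
    group = [flagged for d, flagged in decisions if d == dialect]
    return sum(group) / len(group)

# Ratio above 1.0 means the model flags one group's speech more often.
disparity = flag_rate("aae") / flag_rate("sae")
```

Audits of this kind only surface the disparity; explaining it requires looking at the training labels, which is where both studies locate the context problem described below.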

This is in large part because what is considered offensive depends on social context. Terms that are slurs when used in some settings -- like the "n-word" or "queer" -- may not be in others. But algorithms -- and the content moderators who grade the test data that teaches these algorithms how to do their job -- don't usually know the context of the comments they're reviewing. Both papers, presented at a recent prestigious annual conference for computational linguistics, show how natural language processing AI -- which is often proposed as a tool to objectively identify offensive language -- can amplify the same biases that human beings have. They also show how the test data that feed these algorithms have baked-in bias from the start.

China

Huawei Technicians Helped African Governments Spy on Political Opponents (wsj.com) 34

phalse phace writes: A WSJ investigation appears to have uncovered multiple instances where the African governments of Uganda and Zambia, with the help of Huawei technicians, used Huawei's communications equipment to spy on and censor political opponents and their own citizens. From the report: Huawei Technologies dominates African markets, where it has sold security tools that governments use for digital surveillance and censorship. But Huawei employees have provided other services, not disclosed publicly. Technicians from the Chinese powerhouse have, in at least two cases, personally helped African governments spy on their political opponents, including intercepting their encrypted communications and social media, and using cell data to track their whereabouts, according to senior security officials working directly with the Huawei employees in these countries.

It should be noted that while the findings "show how Huawei employees have used the company's technology and other companies' products to support the domestic spying of those governments," the investigation didn't turn up evidence of spying by or on behalf of Beijing in Africa. Nor did it find that Huawei executives in China knew of, directed or approved the activities described. It also didn't find that there was something particular about the technology in Huawei's network that made such activities possible. Details of the operations, however, offer evidence that Huawei employees played a direct role in government efforts to intercept the private communications of opponents.

The Internet

Should Some Sites Be Liable For The Content They Host? (nytimes.com) 265

America's lawmakers are scrutinizing the blanket protections in Section 230 of the Communications Decency Act, which lets online companies moderate their own sites without incurring legal liability for everything they host.

schwit1 shared this article from the New York Times: Last month, Senator Ted Cruz, Republican of Texas, said in a hearing about Google and censorship that the law was "a subsidy, a perk" for big tech that may need to be reconsidered. In an April interview, Speaker Nancy Pelosi of California called Section 230 a "gift" to tech companies "that could be removed."

"There is definitely more attention being paid to Section 230 than at any time in its history," said Jeff Kosseff, a cybersecurity law professor at the United States Naval Academy and the author of a book about the law, The Twenty-Six Words That Created the Internet .... Mr. Wyden, now a senator [and a co-author of the original bill], said the law had been written to provide "a sword and a shield" for internet companies. The shield is the liability protection for user content, but the sword was meant to allow companies to keep out "offensive materials." However, he said firms had not done enough to keep "slime" off their sites. In an interview with The New York Times, Mr. Wyden said he had recently told tech workers at a conference on content moderation that if "you don't use the sword, there are going to be people coming for your shield."

There is also a concern that the law's immunity is too sweeping. Websites trading in revenge pornography, hate speech or personal information to harass people online receive the same immunity as sites like Wikipedia. "It gives immunity to people who do not earn it and are not worthy of it," said Danielle Keats Citron, a law professor at Boston University who has written extensively about the statute. The first blow came last year with the signing of a law that creates an exception in Section 230 for websites that knowingly assist, facilitate or support sex trafficking. Critics of the new law said it opened the door to create other exceptions and would ultimately render Section 230 meaningless.

The article notes that while lawmakers from both parties are challenging the protections, "they disagree on why," with Republicans complaining that the law has only protected some free speech while still leaving conservative voices open to censorship on major platforms.

The Times also notes that when Wyden co-authored the original bill in 1996, Google didn't exist yet, and Mark Zuckerberg was 11 years old.
Facebook

White House Proposal Would Have FCC and FTC Police Alleged Social Media Censorship (cnn.com) 140

A draft executive order from the White House could put the Federal Communications Commission in charge of shaping how Facebook, Twitter and other large tech companies curate what appears on their websites, CNN reported Friday, citing multiple people familiar with the matter. From the report: The draft order, a summary of which was obtained by CNN, calls for the FCC to develop new regulations clarifying how and when the law protects social media websites when they decide to remove or suppress content on their platforms. Although still in its early stages and subject to change, the Trump administration's draft order also calls for the Federal Trade Commission to take those new policies into account when it investigates or files lawsuits against misbehaving companies. If put into effect, the order would reflect a significant escalation by President Trump in his frequent attacks against social media companies over an alleged but unproven systemic bias against conservatives by technology platforms. And it could lead to a significant reinterpretation of a law that, its authors have insisted, was meant to give tech companies broad freedom to handle content as they see fit.
Communications

Turkey Moves To Oversee All Online Content, Raises Concerns Over Censorship (reuters.com) 71

stikves writes: Turkey has granted its radio and television watchdog sweeping oversight over all online content, including streaming platforms like Netflix and online news outlets, in a move that raised concerns over possible censorship. The move was initially approved by Turkey's parliament in March last year, with support from President Tayyip Erdogan's ruling AK Party and its nationalist ally. The regulation, published in Turkey's Official Gazette on Thursday, mandates that all online content providers obtain broadcasting licenses from RTUK, which will then supervise the content put out by the providers. Aside from streaming giant Netflix, other platforms like local streaming websites PuhuTV and BluTV, which in recent years have produced popular shows, will be subject to supervision and potential fines or loss of their license. In addition to subscription services like Netflix, free online news outlets which rely on advertising for their revenues will also be subject to the same measures.
The Internet

Cloudflare Terminates 8chan (cloudflare.com) 940

"We just sent notice that we are terminating 8chan as a customer effective at midnight tonight Pacific Time," writes Cloudflare CEO Matthew Prince.

"The rationale is simple: they have proven themselves to be lawless and that lawlessness has caused multiple tragic deaths. Even if 8chan may not have violated the letter of the law in refusing to moderate their hate-filled community, they have created an environment that revels in violating its spirit." We do not take this decision lightly. Cloudflare is a network provider. In pursuit of our goal of helping build a better internet, we've considered it important to provide our security services broadly to make sure as many users as possible are secure, and thereby making cyberattacks less attractive -- regardless of the content of those websites. Many of our customers run platforms of their own on top of our network. If our policies are more conservative than theirs it effectively undercuts their ability to run their services and set their own policies. We reluctantly tolerate content that we find reprehensible, but we draw the line at platforms that have demonstrated they directly inspire tragic events and are lawless by design. 8chan has crossed that line. It will therefore no longer be allowed to use our services.

Unfortunately, we have seen this situation before and so we have a good sense of what will play out. Almost exactly two years ago we made the determination to kick another disgusting site off Cloudflare's network: the Daily Stormer. That caused a brief interruption in the site's operations but they quickly came back online using a Cloudflare competitor. That competitor at the time promoted as a feature the fact that they didn't respond to legal process. Today, the Daily Stormer is still available and still disgusting. They have bragged that they have more readers than ever. They are no longer Cloudflare's problem, but they remain the Internet's problem.

I have little doubt we'll see the same happen with 8chan.

Prince adds that since terminating the Daily Stormer they've been "engaging" with law enforcement and civil society organizations to "try and find solutions," which include "cooperating around monitoring potential hate sites on our network and notifying law enforcement when there was content that contained an indication of potential violence." Earlier today Prince had used this argument in defense of Cloudflare's hosting of 8chan, telling the Guardian "There are lots of competitors to Cloudflare that are not nearly as law abiding as we have always been." He added in today's blog post that "We believe this is our responsibility and, given Cloudflare's scale and reach, we are hopeful we will continue to make progress toward solving the deeper problem."

"We continue to feel incredibly uncomfortable about playing the role of content arbiter and do not plan to exercise it often.... Cloudflare is not a government. While we've been successful as a company, that does not give us the political legitimacy to make determinations on what content is good and bad. Nor should it. Questions around content are real societal issues that need politically legitimate solutions..."

"What's hard is defining the policy that we can enforce transparently and consistently going forward. We, and other technology companies like us that enable the great parts of the Internet, have an obligation to help propose solutions to deal with the parts we're not proud of. That's our obligation and we're committed to it."
Encryption

Did Facebook End The Encryption Debate? (forbes.com) 163

Forbes contributor Kalev Leetaru argues that "the encryption debate is already over -- Facebook ended it earlier this year." The ability of encryption to shield a user's communications rests upon the assumption that the sender and recipient's devices are themselves secure, with the encrypted channel the only weak point... [But] Facebook announced earlier this year preliminary results from its efforts to move a global mass surveillance infrastructure directly onto users' devices where it can bypass the protections of end-to-end encryption. In Facebook's vision, the actual end-to-end encryption client itself such as WhatsApp will include embedded content moderation and blacklist filtering algorithms. These algorithms will be continually updated from a central cloud service, but will run locally on the user's device, scanning each cleartext message before it is sent and each encrypted message after it is decrypted. The company even noted that when it detects violations it will need to quietly stream a copy of the formerly encrypted content back to its central servers to analyze further, even if the user objects, acting as a true wiretapping service...
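The architecture the article describes -- match cleartext against a cloud-updated blacklist on the device, before it is ever encrypted -- can be sketched generically. Everything below is hypothetical (the terms, the hashing scheme, the function names); it is not Facebook or WhatsApp code:

```python
import hashlib

# Hypothetical on-device blacklist, pushed from a central cloud service.
# Storing hashes rather than the terms themselves keeps the list opaque
# to anyone inspecting the client.
BLACKLIST_HASHES = {
    hashlib.sha256(term.encode()).hexdigest()
    for term in ("banned-phrase-1", "banned-phrase-2")
}

def screen_before_send(plaintext: str) -> bool:
    """Return True if the message may proceed to encryption and sending."""
    for token in plaintext.lower().split():
        if hashlib.sha256(token.encode()).hexdigest() in BLACKLIST_HASHES:
            # In the article's scenario the client would also stream a copy
            # of the cleartext back to central servers at this point.
            return False
    return True
```

The key point of the sketch is that the check happens while the message is still plaintext, so the end-to-end encryption that follows protects nothing from the client itself.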

If Facebook's model succeeds, it will only be a matter of time before device manufacturers and mobile operating system developers embed similar tools directly into devices themselves, making them impossible to escape... Governments would soon use lawful court orders to require companies to build in custom filters of content they are concerned about and automatically notify them of violations, including sending a copy of the offending content. Rather than grappling with how to defeat encryption, governments will simply be able to harness social media companies to perform their mass surveillance for them, sending them real-time alerts and copies of the decrypted content.

Putting this all together, the sad reality of the encryption debate is that after 30 years it is finally over: dead at the hands of Facebook. If the company's new on-device content moderation succeeds it will usher in the end of consumer end-to-end encryption and create a framework for governments to outsource their mass surveillance directly to social media companies, completely bypassing encryption.

In the end, encryption's days are numbered and the world has Facebook to thank.


UPDATE: 8/2/2019 Will Cathcart, WhatsApp's vice president of product management, took to the internet with this forceful response. "We haven't added a backdoor to WhatsApp. To be crystal clear, we have not done this, have zero plans to do so, and if we ever did, it would be quite obvious and detectable that we had done it. We understand the serious concerns this type of approach would raise, which is why we are opposed to it."
Electronic Frontier Foundation

EFF Argues For 'Empowerment, Not Censorship' Online (eff.org) 62

An activism director and a legislative analyst at the EFF have co-authored an essay arguing that the key to children's safety online "is user empowerment, not censorship," reporting on a recent hearing by the U.S. Senate's Judiciary Committee: While children do face problems online, some committee members seemed bent on using those problems as an excuse to censor the Internet and undermine the legal protections for free expression that we all rely on, including kids. Don't censor users; empower them to choose... [W]hen lawmakers give online platforms the impossible task of ensuring that every post meets a certain standard, those companies have little choice but to over-censor.

During the hearing, Stephen Balkam of the Family Online Safety Institute provided an astute counterpoint to the calls for a more highly filtered Internet, calling to move the discussion "from protection to empowerment." In other words, tech companies ought to give users more control over their online experience rather than forcing all of their users into an increasingly sanitized web. We agree.

It's foolish to think that one set of standards would be appropriate for all children, let alone all Internet users. But today, social media companies frequently make censorship decisions that affect everyone. Instead, companies should empower users to make their own decisions about what they see online by letting them calibrate and customize the content filtering methods those companies use. Furthermore, tech and media companies shouldn't abuse copyright and other laws to prevent third parties from offering customization options to people who want them.

The essay also argues that Congress "should closely examine companies whose business models rely on collecting, using, and selling children's personal information..."

"We've highlighted numerous examples of students effectively being forced to share data with Google through the free or low-cost cloud services and Chromebooks it provides to cash-strapped schools. We filed a complaint with the FTC in 2015 asking it to investigate Google's student data practices, but the agency never responded."
Youtube

YouTube Executive Says the Video Service Doesn't Drive Its Users Down the Rabbit Hole (bbc.com) 124

YouTube has defended its video recommendation algorithms, amid suggestions that the technology serves up increasingly extreme videos. On Thursday, a BBC report explored how YouTube had helped the Flat Earth conspiracy theory spread. But the company's new managing director for the UK, Ben McOwen Wilson, said YouTube "does the opposite of taking you down the rabbit hole". From a report: He told the BBC that YouTube worked to dispel misinformation and conspiracies, but warned that some types of government regulation could start to look like censorship. YouTube, as well as other internet giants such as Facebook and Twitter, has some big decisions to make. All must decide where they draw the line between freedom of expression, hateful content and misinformation. And the government is watching. It has published a White Paper laying out its plans to regulate online platforms. In his first interview since starting his new role, Mr McOwen Wilson spoke about the company's algorithms, its approach to hate speech and what it expects from the UK government's "online harms" legislation. [...] YouTube has never explained exactly how its algorithms work. Critics say the platform offers up increasingly sensationalist and conspiratorial videos. Mr McOwen Wilson disagrees. "It's what's great about YouTube. It is what brings you from one small area and actually expands your horizon and does the opposite of taking you down the rabbit hole," he says.
Google

Google's Project Dragonfly 'Terminated' In China (bbc.com) 41

An executive at Google said the company's plan to launch a censored search engine in China has been "terminated." The project was reportedly put on hold last year but rumors that it remained active persisted. From a report: "We have terminated Project Dragonfly," Google executive Karan Bhatia told the U.S. Senate Judiciary Committee. Buzzfeed, which reported the new comments, said it was the first public confirmation that Dragonfly had ended. A spokesman for Google later confirmed to the site that Google currently had no plans to launch search in China and that no work was being done to that end.
China

How America's Tech Giants Are Helping Build China's Surveillance State (theintercept.com) 147

"An American organization founded by tech giants Google and IBM is working with a company that is helping China's authoritarian government conduct mass surveillance against its citizens," the Intercept reports.

The OpenPower Foundation -- a nonprofit led by Google and IBM executives that aims to "drive innovation" -- has set up a collaboration between IBM, Chinese company Semptian, and U.S. chip manufacturer Xilinx. Together, they have worked to advance a breed of microprocessors that enable computers to analyze vast amounts of data more efficiently. Shenzhen-based Semptian is using the devices to enhance the capabilities of internet surveillance and censorship technology it provides to human rights-abusing security agencies in China, according to sources and documents. A company employee said that its technology is being used to covertly monitor the internet activity of 200 million people...

Semptian presents itself publicly as a "big data" analysis company that works with internet providers and educational institutes. However, a substantial portion of the Chinese firm's business is in fact generated through a front company named iNext, which sells the internet surveillance and censorship tools to governments. iNext operates out of the same offices in China as Semptian, with both companies on the eighth floor of a tower in Shenzhen's busy Nanshan District. Semptian and iNext also share the same 200 employees and the same founder, Chen Longsen. [The company's] Aegis equipment has been placed within China's phone and internet networks, enabling the country's government to secretly collect people's email records, phone calls, text messages, cellphone locations, and web browsing histories, according to two sources familiar with Semptian's work.

Promotional documents obtained from the company promise "location information for everyone in the country." One company representative even told the Intercept they were processing "thousands of terabits per second," and -- not knowing they were talking to a reporter -- forwarded a 16-minute video detailing their technology. "If a government operative enters a person's cellphone number, Aegis can show where the device has been over a given period of time: the last three days, the last week, the last month, or longer," the Intercept reports.

Joss Wright, a senior research fellow at the University of Oxford's Internet Institute, told the Intercept that "by any meaningful definition, this is a vast surveillance effort."

Read what the U.S. companies had to say about their involvement with Chinese surveillance technology.
Facebook

In 'Bold Experiment', Facebook Creates Independent 'Oversight Board' For Content Decisions (siliconvalley.com) 112

Facebook is being applauded for a new "bold experiment" in content decision-making by tech journalist Larry Magid, a founding member (for the last 10 years) of what he describes as "the less powerful Facebook Safety Advisory Board, which is composed of safety experts mostly representing nonprofit organizations in several countries....

"We are not empowered to overrule Facebook's management." Facebook is a company, not a government, but its user base is bigger than the population of any country in the world and the decisions made by its staff affect people in some of the same ways as decisions made by legislatures and courts in many countries. Nowhere is this more evident than in the way Facebook regulates speech. What it allows and forbids affects people's ability to communicate, but also impacts their safety, privacy, security and human rights... [W]hen it comes to some decisions, even Zuckerberg realizes that the stakes are too high for one person or one company to hold all the cards, and that's one of the reasons Facebook is in the process of putting together an Oversight Board for Content Decisions.

That board, which will be made up of a diverse group of about 40 people from around the world, will be like what The Verge called a "Supreme Court for content moderation." The board, according to Facebook, will serve as an "independent authority outside of Facebook," and have the power to "reverse Facebook's decisions when necessary...." This is an extraordinary and mostly unprecedented undertaking from a private company which recognizes the potential impact of its decisions. If the board operates as planned, it will have the ability to overrule Zuckerberg himself on matters of what content is and isn't allowed on the service... If Facebook does a good job in creating a board which is both representative and independent, and if it faithfully abides by its decisions even when they are in conflict with what executives like Zuckerberg want, it will be at least a partial shift in the nature of corporate governance, creating a body that is controlled neither by the corporation itself nor by the governments in the countries where the corporation operates.

At the end of the day, local law in each jurisdiction will trump any decisions by this board and -- I suppose -- Facebook could change its mind and fail to implement one or more of the board's decisions, but if we take the company at its word, that isn't supposed to happen... Although Facebook is not completely rewriting the rules of corporate governance, it is making a bold move that changes the way some of its most important decisions will be made by empowering people who represent those affected by the company who -- without such a board -- would have no power over how the company operates. It is, to an extent, taking on powers held by governments as well as powers held by stockholders and board members. It's a bold experiment.

Social Networks

Facebook Downgrades Posts That Promote Miracle Cures (venturebeat.com) 87

Facebook said on Tuesday that it's downgrading content that makes dubious health claims, including posts that try to sell or promote "miracle cures." From a report: Big technology platforms have faced growing criticism over the spread of fake or misleading content. Reports emerged last year that Facebook had been featuring homemade cancer "cures" more prominently than genuine information from renowned organizations, such as cancer research charities. And a few months back, a separate report found that YouTube videos were promoting bleach as a cure for autism. Facebook also recently said it would crack down on anti-vaccine content. The fight against digital misinformation is ongoing, and it isn't limited to spurious health cures. "In order to help people get accurate health information and the support they need, it's imperative that we minimize health content that is sensational or misleading," Facebook product manager Travis Yeh wrote in a blog post.
