Facebook Social Networks United States Your Rights Online Technology

Mark Zuckerberg Suggests How To Tweak Tech's Liability Shield (axios.com) 52

Facebook CEO Mark Zuckerberg will tell lawmakers his plan for "thoughtful reform" of a key tech liability shield rests on requiring best practices for treating illegal content online. From a report: Tech giants are starting to embrace changes to the foundational law that shields platforms from liability for content users post, as lawmakers from both parties threaten it. In written testimony ahead of Thursday's House hearing with the Google, Twitter and Facebook CEOs, Zuckerberg suggested making Section 230 protections for certain types of unlawful content conditional on platforms' ability to meet best practices for fighting the spread of that content. "Instead of being granted immunity, platforms should be required to demonstrate that they have systems in place for identifying unlawful content and removing it," Zuckerberg wrote in the testimony. "Platforms should not be held liable if a particular piece of content evades its detection -- that would be impractical for platforms with billions of posts per day -- but they should be required to have adequate systems in place to address unlawful content." The detection system would be proportionate to platform size, with practices defined by a third party. The best practices would not include "unrelated issues" like encryption or privacy changes, he notes. He also suggested Congress bring more transparency and oversight to how companies make and enforce rules about content that is harmful but still legal.
  • Fantastic (Score:5, Insightful)

    by fuzzyfuzzyfungus ( 1223518 ) on Wednesday March 24, 2021 @09:51AM (#61192618) Journal
    A tech CEO whose platform has a notably undistinguished moderation record promotes a plan that very specifically doesn't punish any failures, but makes it legally mandatory to have a content-detection system large enough to tick the compliance box.

    I can't imagine why Facebook would be interested in a proposal that is, quite explicitly, toothless in the face of any specific moderation failure, but requires possession of a moderation system that smaller potential competitors will either be unable to afford or will only get access to by tying themselves to the ecosystem of one of the incumbents.

    The only surprise here is that he couldn't be bothered to be less visibly self-serving.
    • Re:Fantastic (Score:5, Insightful)

      by Kisai ( 213879 ) on Wednesday March 24, 2021 @10:08AM (#61192684)

      This is the same tech CEO who literally made Facebook to data-mine women, and who saw no problem with his plan even years down the road, when that tech was used for facial recognition.

      No, Zuck is not on the right path here.

      Here's a better, albeit paraphrased, idea, with one important detail:

      "Platforms that profit off of user-generated content must:
      - Use data from an independent, non-profit, non-governmental organization (NGO) to flag problematic content, e.g. Consumer Reports or the BBB -- or, as far as misinformation and thought-crime go, Snopes.
      - Use automated systems to detect posting patterns, especially mentally damaging information (e.g. violent, shocking, and traumatic events) and information that appears to phish, misinform, and/or divulge private information about an individual that the individual doesn't want released (a minimal sketch of what such pattern detection could mean follows below).
      - And most importantly: hire enough human staff to deal with conflicting moderation flags."

      Like 95% of all social media is fluff and nonsense, but 95% of the stuff trolls, bots, and advertisers put out is garbage by quantity, not quality.
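
      Purely as an illustration of what "automated systems to detect posting patterns" could mean in practice, here is a minimal sketch in Python. Every name and threshold is invented for the example; this is not any platform's actual system.

      # Hypothetical sketch: flag high-rate, near-duplicate posting (a classic
      # bot/spam signature) and route anything flagged to human review.
      from collections import defaultdict, deque
      from dataclasses import dataclass, field

      @dataclass
      class PatternDetector:
          window_seconds: int = 60          # look-back window per account (invented)
          max_posts_per_window: int = 10    # plausible-human rate ceiling (invented)
          max_duplicates: int = 3           # identical posts tolerated per window (invented)
          _history: dict = field(default_factory=lambda: defaultdict(deque))

          def observe(self, user_id: str, timestamp: float, text: str) -> list[str]:
              """Record one post; return any pattern flags it triggers."""
              posts = self._history[user_id]
              posts.append((timestamp, text))
              # Discard posts that have aged out of the window.
              while posts and timestamp - posts[0][0] > self.window_seconds:
                  posts.popleft()
              flags = []
              if len(posts) > self.max_posts_per_window:
                  flags.append("rate")       # posting faster than a human plausibly would
              if sum(1 for _, t in posts if t == text) > self.max_duplicates:
                  flags.append("duplicate")  # copy-pasting the same content repeatedly
              return flags

      detector = PatternDetector()
      for i in range(12):
          flags = detector.observe("suspect-account", timestamp=i * 2.0, text="BUY NOW!!!")
          if flags:
              print(f"post {i}: queue for a human moderator, flags={flags}")

      Note that the automation here only flags; per the most important detail above, conflicting or borderline flags still land in front of a human.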

    • Says, "Please, Mr. Fox, don't throw me in that bush."

      • Sure, take Zuck's suggestion as level 1. But let's add to it. The standard he wants here is to be compared against the industry's average rate of naughty-post reduction. Presumably, if you are within 3 sigma of that, he wants a pass.

        But that's the wrong point of comparison, for two reasons.
        First, it's not how many naughty-posts occur that matters, but how many people see them. So bigger platforms that have more eyeballs on posts need to be held to that standard, not th

    • Kabuki methods (not Taguchi). Like the banks will self-regulate.
    • by fermion ( 181285 )
      He is in a place where he can maximize his advantage by utilizing the desire of congress to push their values and agenda on the online world.

      Unfortunately, he is not going to succeed, as any reform is going to involve the government making subjective decisions about the content value of the platforms. It is like the vagrancy laws of the mid 20th century. Some were objective: you had to have money in your pocket, or be employed. Some, however, were subjective: if a cop thought you looked funny he cou

  • by MachineShedFred ( 621896 ) on Wednesday March 24, 2021 @09:53AM (#61192624) Journal

    Maybe we should ask the fox for his input on security design for the hen house too.

  • Sure, all of the Tech social media sites would go bye-bye, but where's the problem?

    • by Jason Levine ( 196982 ) on Wednesday March 24, 2021 @10:34AM (#61192772) Homepage

      The problem is that this would impact the entire Internet, not just social media sites. Before Section 230, the rule was set by the Prodigy and CompuServe rulings. At the time, they were two of the biggest ISPs around and they were both sued (separately) for unwanted/indecent content on their platforms. CompuServe advertised their Internet offering as unfiltered and the court ruled that CompuServe couldn't be held responsible for what someone else posted. Prodigy, however, billed their Internet as "family friendly" and tried to filter out "bad content." A piece of content slipped by their filters, though, and they were ruled liable.

      Section 230 was written because the precedents set would mean that any site or service that accepted user-generated content would be liable for that content if they did any filtering at all. To use Slashdot as an example: if I posted a defamatory comment about a person on Slashdot, then under Section 230 Slashdot wouldn't be liable. Under the Prodigy/CompuServe precedent, however, they would be if they did any filtering at all -- and that includes removing spam/scam posts, death threats, etc. So Slashdot (and every other site/service) would have to choose between no filtering at all -- letting their service turn into a wasteland of spam, scams, and the worst the Internet has to offer -- or filtering their content and risking a lawsuit over any piece of content that evaded their filters.

      An Internet without Section 230 would either be an Internet overrun by spam/scam/etc or an Internet without any user generated content at all (basically a glorified "online TV" service).

      • by sjames ( 1099 )

        Ironically, that would mean that any site that didn't close down discussion would become 4chan. The worst of the worst that has people wanting to "do something" about "the problem" in the first place.

    • by Comboman ( 895500 ) on Wednesday March 24, 2021 @10:38AM (#61192786)

      The problem is that ALL websites that have ANY kind of user-generated content would also go "bye-bye". Any news site that allows comments? Gone. Any discussion group or forum? Gone.

      • Don't forget anywhere with public source code repositories. GitHub, SourceForge, even Savannah would end up making groups liable for unmoderated code submitted to those platforms.

    • Because then slashdot, wikipedia, and literally every website with a comments section would go, too.

      Companies that provide free hosting? Gone.

      Companies that provide for-pay hosting? Probably gone too, unless you're part of a team with enough manpower for a legal department.

      What's left? AmazonSpace, where you can choose from over 200 emojis and nearly 1,000 different status updates! And maybe if we're lucky, Netflix, minus all the objectionable movies, and minus any form of user reviews of course.

      To paraphra

      • "Because then slashdot, wikipedia, and literally every website with a comments section would go, too."

        I'm not a proponent of completely yanking 230 away, but it's worth the time to think about what a post-230 world could be. I think most platforms that offer comments as a side business would need to turn them off. I'm OK with that. Sure, I browse YouTube comments from time to time to see reactions, but if they were gone... I'm fine with that. Same with rando comments on news sites and the like. aka 'nothi

        • Companies already comply with court orders to turn usernames into IP addresses. This has nothing whatsoever to do with Section 230.

          Your entire post amounts to "people commit crimes in houses where we can't see them from the street, so we need the ability to arrest construction firms when crimes are committed in buildings they've built. I may not like it, but that's the reality of the situation". No, you just get the landlords to hand you a copy of the keys once you've established probable cause - you know, t

        • by sjames ( 1099 )

          Real identity wouldn't save you without section 230. The site owner remains the publisher of the objectionable content and so stays on the hook.

        • by sjames ( 1099 )

          Sorry to double post, but you do realise the videos on Youtube are also user generated content and would also have to go, right? All that would be left would be a home page with their logo. They couldn't even say "so long and thanks for all the fish" because they could get a copyright takedown for that.

    • Sure, all of the Tech social media sites would go bye-bye, but where's the problem?

      Yeah, that's fun and funny to say, but all you're doing is reverting to commerce being run by major corporate gatekeepers. I just bought growing gear from Bootstrap Farmer, for example. They sell really good seed-starting gear, and they rely heavily on social media for advertising. The same applies to Peak Design (who I have issues with, but that's a different story), many of my favorite woodworking tool makers, and even my favorite water bottle company.

      Without social media and especially Instagram, th

    • by sjames ( 1099 )

      How ironic that you have chosen to express your opinion on a site that would go away.

  • Weed out startups (Score:4, Insightful)

    by omnichad ( 1198475 ) on Wednesday March 24, 2021 @10:06AM (#61192674) Homepage

    If you want to be in this space, you already have to be big enough to afford the best in automated systems. Great rule if you want to boost the entrenched players and keep everyone else out. Close the door behind you.

    • I guess you missed this part:

      The detection system would be proportionate to platform size, with practices defined by a third party.

      • by omnichad ( 1198475 ) on Wednesday March 24, 2021 @10:47AM (#61192838) Homepage

        Sounds like they want to recoup their investment by selling these services on a pricing scale proportionate to platform size.

      • How does that make the 'solution' less of a barrier to entry? Any point of compliance is a barrier to entry, and barriers to entry help existing companies like FB. Did FB face these barriers to entry? No? Why should a new company now have these requirements? Because Mark said so?

    • by Jason Levine ( 196982 ) on Wednesday March 24, 2021 @10:41AM (#61192806) Homepage

      I was very interested when he said "proportional to platform size" for this very reason. I run a small forum. We could definitely be held liable for violations if Section 230 were repealed. A spam post that evaded moderation could lead to a lawsuit that, even if we won, would bankrupt us. However, we're also too small and have too tiny a budget to invest in automated content detection systems. We don't have millions of dollars to make sure that any user generated content is fine to post. Our budget is closer to five hundred dollars. (It's a very small forum.)

      Any "practices defined by a third party" would need to take the small players into account. If the third party is stocked with big companies, it could require expensive content moderation tools that only the big companies can afford. This would force the smaller sites to decide whether to shut down or run the risk of getting sued. Either way, a third party of this composition might ensure that the smaller sites (especially startups that might not have the funding to take on the big fish yet, but are growing fast) might be cleared out so that the bigger companies don't face competition.

  • by SumDog ( 466607 ) on Wednesday March 24, 2021 @10:18AM (#61192728) Homepage Journal

    Zuckerberg is the last person to trust to make any good-faith statements about Section 230 reform. Facebook and Twitter are publishers at this point; they are steering the narrative. Meaningful 230 reform would involve the following:

    1. Stop editorializing content. No fact checks, no little warning labels. You can moderate/delete content, but if content is up, it should be presented the same!
    2. Stop moderating private messages. (Facebook and Twitter block certain private messages. Could you imagine if AT&T or Verizon did this with SMS?)
    3. The default timeline should always be reverse chronological. You can click a button to get an algorithmic feed, but the default view goes back to the way it was in the 2000s (see the sketch after this comment).

    I did a whole article elaborating a lot of these points:

    https://battlepenguin.com/poli... [battlepenguin.com]
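
    To make point 3 concrete, here's a minimal sketch (not any platform's real API; all names and data are made up) of the difference between a reverse-chronological default and an opt-in algorithmic ranking:

    from dataclasses import dataclass

    @dataclass
    class Post:
        author: str
        text: str
        posted_at: float     # Unix timestamp
        engagement: int      # likes/shares; only the opt-in ranking looks at this

    def default_feed(posts: list[Post]) -> list[Post]:
        """Reverse chronological: newest first, nothing else considered."""
        return sorted(posts, key=lambda p: p.posted_at, reverse=True)

    def opt_in_ranked_feed(posts: list[Post]) -> list[Post]:
        """A stand-in 'algorithmic' feed the user must click a button to get."""
        return sorted(posts, key=lambda p: p.engagement, reverse=True)

    posts = [
        Post("alice", "old but viral", posted_at=100.0, engagement=9000),
        Post("bob", "fresh update", posted_at=200.0, engagement=3),
    ]
    assert default_feed(posts)[0].author == "bob"          # newest wins by default
    assert opt_in_ranked_feed(posts)[0].author == "alice"  # engagement wins only on opt-in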

  • PROPOSED PROBLEM: Companies that dominate in areas with strong network effects are protected from legal consequences by virtue of being considered "neutral, non-moderating carriers", and then heavy-handedly moderate content that doesn't appeal to them, with users who feel they are - or actually are - unfairly removed from these dominant platforms left with no recourse.

    PROPOSED SOLUTION: Force companies to heavy-handedly moderate content...(!?)

    • by Comboman ( 895500 ) on Wednesday March 24, 2021 @11:00AM (#61192904)

      Free speech has consequences. There may not be "legal consequences" (except for copyright infringement, libel, etc), but there are "market consequences" to unpopular speech. If companies refuse to moderate unpopular speech, they may lose advertisers, customers, etc. If users post unpopular speech, they may lose access to a free internet service with no recourse. That's not "cancel culture", that's the free market in action. While the constitution guarantees you the right to say whatever you want, the world doesn't owe you a platform for your rantings or protection from the consequences of your speech.

  • by RitchCraft ( 6454710 ) on Wednesday March 24, 2021 @10:44AM (#61192814)
    I just don't trust anything this little weasel has to say. He's proven time and time again that he can't be trusted or taken at his word.
  • by groobly ( 6155920 ) on Wednesday March 24, 2021 @10:47AM (#61192836)

    It's easy, says Zuck: make everyone liable except me.

  • Hmmmm. A guy known for abusive practices, has suggestions on how to avoid liabilities for such abusive practices...

    WHAT COULD POSSIBLY GO WRONG?

  • by BAReFO0t ( 6240524 ) on Wednesday March 24, 2021 @11:05AM (#61192922)

    The problem is that you want both.

    You want to be free from liability, like a common carrier.
    But you want to declare "community guidelines" (corporate laws), like you're the boss.

    Pick one. Or die. As a "business". Frankly, I'd prefer you'd take the latter, and get back to your own dimension. :P

  • by smooth wombat ( 796938 ) on Wednesday March 24, 2021 @12:00PM (#61193164) Journal
    The detection system would be proportionate to platform size, with practices defined by a third party.

    In the movie 40 Days and 40 Nights, Josh Hartnett's love interest, played by Shannyn Sossamon, works at a company called Cyber Nanny whose job it is to find porn and block it.

    What Zuck is suggesting sounds an awful lot like what went on back in the day. And we saw how well that worked out.
  • Automated moderation has ruined hosted and streaming video content (Twitch, YouTube, etc. copyright moderation), so let's apply that to text, too, and make it the law!
  • While there's all sorts of misinformation about the "platform vs. publisher" false dichotomy, nothing Zuck said here is novel.

    The liability protections have never been blanket protections that allow a "platform" to ignore reports of unlawful harassment, threats, IP violations, etc. Likewise, 230 is already clear that "platforms" are entirely within their rights to "make and enforce rules about content that is harmful but still legal" however they see fit without forfeiting the liability protections for user

  • Facebook would be nothing without all the posts and comments its users make. Users should be both compensated and held responsible for the content they post.

  • We will know now to do the exact opposite of what you suggest, no matter what it is.
  • This is exactly like asking a child what you should do to punish him. If he is bright, as Zuckerberg undoubtedly is, he will propose ice cream as the punishment. In this instance, Zuckerberg has not let us down. He proposes a punishment that provides no punishment at all. Why do we not just eliminate all of the liability protection these nosy Parkers have amassed for themselves? They are a cancer on the country and their misbehavior is making them rich beyond imagination through exploiting the data abo

  • the entire board goes to jail where a normal person would.
