New Algorithm Bill Could Force Facebook To Change How the News Feed Works (theverge.com) 97

A new bipartisan bill, introduced on Wednesday, could mark Congress' first step toward addressing algorithmic amplification of harmful content. The Social Media NUDGE Act, authored by Sens. Amy Klobuchar (D-MN) and Cynthia Lummis (R-WY), would direct the National Science Foundation and the National Academies of Sciences, Engineering, and Medicine to study "content neutral" ways to add friction to content-sharing online. From a report: The bill instructs researchers to identify a number of ways to slow down the spread of harmful content and misinformation, whether through asking users to read an article before sharing it (as Twitter has done) or other measures. The Federal Trade Commission would then codify the recommendations and mandate that social media platforms like Facebook and Twitter put them into practice. "For too long, tech companies have said 'Trust us, we've got this,'" Klobuchar said in a statement on Thursday. "But we know that social media platforms have repeatedly put profits over people, with algorithms pushing dangerous content that hooks users and spreads misinformation."
  • It will not be allowed by the courts.
    • Re: (Score:3, Insightful)

      Exactly this. No matter how you feel about Twitter/Facebook/et al. it's not in any way the government's job to decide "misinformation" or "bad content". The very concept of this is fucking ludicrous.
      • Re: (Score:2, Insightful)

        by Powercntrl ( 458442 )

        and it's bipartisan because both major parties hate the idea that throwing money at political ads doesn't buy nearly as much influence as it used to. Social media, flawed as it may be, allows people to have a voice back in the face of big corporate spending.

        Look at any of the Facebook ads Disney has posted for their new Star Wars hotel as an example. Almost all the comments are people making jokes and complaining about how overpriced the experience is. That's regular people who have had their speech elevated...

        • Re: (Score:2, Informative)

          > Social media, flawed as it may be, allows people to have a voice back in the face of big corporate spending.

          Until you get deplatformed / cancelled for having an unpopular opinion or even citing someone else's and you get grouped in with them. Worse, an honest mistake was made? Too fucking bad. You have ZERO chances of getting that decision reversed unless you find someone popular to speak up for you.

          • Until you get deplatformed / cancelled for having an unpopular opinion

            That falls under the whole "flawed as it may be" umbrella. You may or may not be happy to see the existing social media giants taken down a notch by government regulation, but the effects won't stop at their virtual doors. This regulation will inevitably extend beyond Facebook/Twitter, and when you start your own website because you've been deplatformed, cue the government with "your algorithms better not be spreading any misinformation, citizen!" See the problem?

          • > Until you get deplatformed / cancelled for having an unpopular opinion

            Is this something that happens on a regular basis?
            And what unpopular opinions get you deplatformed/cancelled? Be specific.

          • by sabri ( 584428 )

            Until you get deplatformed / cancelled for having an unpopular opinion

            Freedom of speech is not freedom from consequences.

            Freedom of speech means that the government has no business preventing you from having and voicing a personal opinion about any matter.

            Freedom of speech also means that people disagreeing with your opinion have the right to voice theirs.

            If you voice your opinion and find that the majority disagrees with you, that's your own problem.

            It's your right to be an asshole, and everyone else's right to avoid you like the plague. That's also freedom of speech.

            • "Unpopular opinion" can mean something as benign as questioning the usefulness of masks for schoolchildren, who are literally 700 times less vulnerable to Covid (measured in infection fatality rate) than someone in their mid-60's.

              You are also missing the fact that Big Tech didn't begin censoring until Congress leaned on them to do so -- a clear First Amendment problem.

          • It's worse than that. On Reddit, many subreddit moderators have created bots that automatically ban you if you join, or post even a single comment in, any other subreddit that questions the legacy media narrative on Covid.

        • by Rob Y. ( 110975 )

          The problem isn't so much what content is there on Facebook - it's the way Facebook decides what to show you. You'd think the whole point of social media would be to have your friends suggesting content - and a minuscule fraction of what's in people's feeds is just that. But most of it is paid content - much of which is not flagged as such - calculated to keep you on the site looking at more paid content - also not flagged as such.

          And in the most egregious case, posts will be labeled "likes". As in y...

      • Worse than that, it sounds like they're basically trying to regulate mathematics and logic.

      • by dbialac ( 320955 )
        Yep. How many studies would be "misinformation" because they were unpopular within parts of the scientific community, even if eventually proven true?
      • by jaffey ( 2516362 )
        I agree with your sentiment; however, this bill is about stopping FB from censoring some posts and promoting others versus giving users a 'clear view' of everything being posted. So this bill is decidedly anti-censorship. Of course, gov't wouldn't have to do this if you didn't have evil cunts like Zuck running around doing whatever they please no matter the consequences to the country or society. But we do, and there's a bunch of them, and something should have been done years ago.
    • by mmell ( 832646 )

      I desperately hope it will not be allowed by SCOTUS

      FTFY

    • This is the government trying to regulate speech.

      Actually, what they are regulating is businesses in the social media sector.

      It will not be allowed by the courts.

      With the highly partisan SCOTUS, you might be right.

    • by whitroth ( 9367 )

      Wrong. It's the government stepping in to regulate corporate control of speech, based on clickbait and ad revenue.

  • by Powercntrl ( 458442 ) on Thursday February 10, 2022 @02:09PM (#62256215) Homepage

    Congress shall make no law respecting an establishment of religion, or prohibiting the free exercise thereof; or abridging the freedom of speech, or of the press; or the right of the people peaceably to assemble, and to petition the Government for a redress of grievances.

    I think what we really need is a bill requiring our leaders in congress to read The Constitution, before introducing a bill.

    • Well, we've already had Biden and Psaki (sp?) speaking from the podium asking social media sites to take down "xyz" information, labeling it as misinformation, or harmful, etc.

      They're on tape asking this publicly... so they're used to crossing that line verbally already, and now they're trying to codify it.

      • by pchasco ( 651819 ) on Thursday February 10, 2022 @03:18PM (#62256543)
        False statements of fact are not constitutionally protected speech, according to SCOTUS rulings.
        • by phantomfive ( 622387 ) on Thursday February 10, 2022 @07:26PM (#62257319) Journal

          False statements are constitutionally protected speech, according to SCOTUS rulings. See: https://case-law.vlex.com/vid/... [vlex.com]

          It has to be false and specifically harmful to particular people (not some people in general) for it to be unprotected. Furthermore, any such limits must be the least restrictive way possible (that is, if there is a way to reach the goal without limiting speech, then you can't limit speech).

          In short, lies are usually protected speech, which makes sense since basically every movie, book, or TV show is/contains false statements.

          • More importantly, it is *precisely* the freedom to lie (that is, make statements that are considered untrue by others) that defines free speech. Otherwise, if you think only truthful statements are protected free speech - well, someone has to decide the truthfulness of them; we don't have a magical, infallible crystal-ball oracle. Of course the government will be all too happy to bear this "burden".

            So, what in theory is "truth only and only truth is protected free speech" has no other choice at all, but...
        • A very large fraction of what government has been calling "misinformation" has simply been presenting legitimate scientific evidence that goes against the desired narrative.

      • by Anonymous Coward

        To be fair, asking is only asking. And especially to Americans (it's one of the few things every American from the far right to the far left can high-five each other about, in unity), any time a president asks you to do something, the default answer is always "go fuck yourself," so I think asking doesn't cross any line at all.

    • by Anonymous Coward

      This *should* pass constitutional muster. The law doesn't prohibit speech. You'll still be able to post what you want, within the T's & C's of the platform. Don't like Trump, Biden, the local dog catcher - fine, post that. What they are asking is to see if there are ways to slow dissemination just enough for opposing viewpoints to surface, tamping down possible panics and firestorms.

      The issue isn't that folks don't have the ability to seek out alternatives and/or the truth. They fail to do so because...

      • Can't yell Fire in a crowded space (unless there's a fire).

        It is the act of inciting a riot that is illegal, not the speech itself. Yelling "Vaccines cause autism!" in a crowded theater is perfectly legal, although you will probably get ejected by the staff, as most theaters are private property.

        Can't put nudity on the exposed cover of magazines.

        Obscene speech isn't remotely the same thing as misinformation. Some of the obscenity laws have actually been reversed in recent years, as they're more a relic of puritanical influence on US laws than a proper unbiased interpretation of the 1A.

        FCC regulates what's shown on broadcast airwaves (both the pictures and language).

        The airwaves are a finite resource...

      • by shanen ( 462549 )

        Arrived here looking to see who was feeding the trolls. AC is less than no one. Why would anyone talk to such a coward?

    • There's no law that requires our representatives in Congress to have a damned clue about anything. It's our responsibility as citizens to vote the idiots out when they crop up.
    • by pchasco ( 651819 )
      Maybe, but probably not. The courts have repeatedly upheld limits to free speech. Perhaps you are the one who needs to do more than read a few sentences of the constitution and then pretend you are a constitutional scholar? Even the most cursory education on the subject would have informed you that incitement, libel, violation of copyright and intellectual property, counterfeit currency, threatening the president of the United States, and many, many more types of speech and expression have all been deemed unprotected...
      • The problem is that misinformation is a very nebulous concept. For example, if I tell you "you could save money by buying an electric car", that could be misinformation if you presently don't own or need a car. If I say "cryptocurrency is a good investment", that may or may not be misinformation depending on how the crypto market performs at the time.

        Laws need to be very clear in their wording and implementation.

        • The problem is that misinformation is a very nebulous concept. For example, if I tell you "you could save money by buying an electric car", that could be misinformation if you presently don't own or need a car. If I say "cryptocurrency is a good investment", that may or may not be misinformation depending on how the crypto market performs at the time.

          Laws need to be very clear in their wording and implementation.

          How about this "misinformation"?

          https://www.independent.co.uk/... [independent.co.uk]

          The article is another in a long line of literal "fake news". Gorsuch and Sotomayor issued a joint statement:

          https://thehill.com/regulation... [thehill.com]

          I personally find this "misinformation" to be far more damaging than whatever people are sharing on facebook. This is a fake story disguised as a news story and was specifically crafted to make a certain justice look bad. Yet, oddly, nobody ever brings this stuff up when talking about "misinformation".

          • The UK has more restrictive laws regarding speech than the USA, and that article hasn't been taken down. Fake news is part of the price we pay for allowing the free exchange of information. Much like the imperfections inherent to democracy itself, we tolerate the flaws because the alternative way of doing things is worse. [cfr.org]

            • The UK has more restrictive laws regarding speech than the USA, and that article hasn't been taken down. Fake news is part of the price we pay for allowing the free exchange of information. Much like the imperfections inherent to democracy itself, we tolerate the flaws because the alternative way of doing things is worse. [cfr.org]

              I totally agree, I'm just pointing out that people who are getting vapors over "misinformation" seem to be 100% concerned about "right-wing misinformation" and ignore literal fake news and such.

              See the Canadian trucker thing for another example. When it started, the loonies were claiming that there were only maybe 50 trucks. Hahahaha. It's nothing, just a couple of chumps, nothing to see here. When that narrative obviously failed, they switched to "it's a bunch of fascists!" Yes, fascists who want [che

        • by pchasco ( 651819 )
          "Laws need to be very clear in their wording and implementation"

          That's a fine goal, but it's not practical or achievable. The constitution is written by humans, for humans, and interpreted by humans. Even justices who fancy themselves "textualists" or "originalists" often find ways to bend the words of the constitution to their own ideology when presented the opportunity. There will never be any rubric or set of tests that a court can apply to some speech that will perfectly categorize it as constitutionally...
    • by Ichijo ( 607641 )
      Repealing Section 230 [wikipedia.org] and allowing users to sue websites for misinformation they publish might be the easiest solution.
      • Repealing Section 230 [wikipedia.org] and allowing users to sue websites for misinformation they publish might be the easiest solution.

        That will just make it so the only people who are willing to run discussion-related websites are the folks rich enough to afford a team of cover-your-ass lawyers. It would have an absolutely chilling effect on free speech on the internet.

        • by Ichijo ( 607641 )

          There's no getting away from the fact that if you harm another person through misinformation, one of you has to pay the price. Right now, the person who is harmed pays the price and has no recourse, and the person who harmed them has no incentive to avoid harming someone else in the future. So Section 230 facilitates injustice.

          You wouldn't need to be, as you wrote, "rich enough to afford a team of cover-your-ass lawyers." If you believe that, then you're also a victim of misinformation. No, all you would need...

          • You've basically described an internet that works the way traditional broadcasting has worked. As we've already seen, that only allows those with sufficient funds to have a platform. It may improve the signal to noise ratio on the internet, but at the cost of silencing a lot of voices.

          • Incorrect.

            Section 230 protects the website on which you are posting your misinformation from liability for what you post.

            Section 230 does not protect you from liability for what you post. If your post results in harm, you can face a lawsuit, or even criminal charges.

            • by Ichijo ( 607641 )

              Section 230 does not protect you from liability for what you post.

              Incorrect. Section 230 protects the website from legal action by the victim to reveal the identity of the poster. This helps protect you from liability for what you post.

              • Incorrect. Section 230 protects the website from legal action by the victim to reveal the identity of the poster. This helps protect you from liability for what you post.

                No, it does not. 47 USC 230 provides a shield from liability to the website, nothing more.

                Cite the part of the law that makes you think that it does anything else.

          • "There's no getting away from the fact that if you harm another person through misinformation, one of you has to pay the price."

            And when it's governmental authorities peddling misinformation? Like the CDC director claiming that masks are 80% effective at stopping Covid, in contradiction of the scientific evidence, an amazingly high effectiveness which would make them more effective than the Johnson & Johnson vaccine.

    • Nothing about this restricts speech/assembly/press/religion or anything else in the first amendment. What this does is regulate how social media businesses promote content. This is regulating business, not restricting speech.

      It may be a fine line, but them's the breaks.

      • What this does is regulate how social media businesses promote content.

        The 1A also prohibits compelled speech [mtsu.edu]. It's pretty obvious telling social media companies what content they can and cannot promote falls under compelled speech.

    • I think what we really need is a bill requiring our leaders in congress to read The Constitution, before introducing a bill.

      Stretch goal. Personally I'd settle for them reading the bill they are introducing.

    • I think what we really need is a bill requiring our leaders in congress to read The Constitution, before introducing a bill.

      I think what we REALLY REALLY need is to stop calling the elected in Washington "leaders". They ARE NOT leaders. Not even close. They are representatives and need to be reminded of that by their constituents.

      I don't want to be led, I want to be represented.

    • by stikves ( 127823 )

      The question is: who is the arbiter of truth?

      Can social media companies classify news and blog posts as "true", "false", and "questionable"? Can they do it with accuracy?

      If they could, we could scrap all peer-reviewed journals and have those algorithms take over publishing.

  • by fahrbot-bot ( 874524 ) on Thursday February 10, 2022 @02:13PM (#62256223)

    But we know that social media platforms have repeatedly put profits over people, with algorithms pushing dangerous content that hooks users and spreads misinformation.

    Some "News" companies too ... and, of course, Veridian Dynamics [fandom.com]:

    "'Money before people.' That's the company motto - engraved right there on the lobby floor. It just looks more heroic in Latin."
    -- Veronica Palmer [wikipedia.org] (Portia de Rossi), “Racial Sensitivity” [fandom.com], Better Off Ted [wikipedia.org]: (s1, e4)

  • Money in fear. (Score:4, Insightful)

    by jellomizer ( 103300 ) on Thursday February 10, 2022 @02:22PM (#62256253)

    Fear is our most primal emotion. So primal that it usually shuts off a good amount of our rational thinking. Normally this is a good survival instinct: pondering a lion and deciding whether it wants to eat you or is merely curious is a good way to die, while running away, hiding, or attacking on sight keeps you alive.
    However, we have found a way to manufacture "safe" fear. If you do this, then you are going to hell; those other guys want to take away something you have; they are trying to take away your right to do something you probably wouldn't do anyway, but dag-nab-it, they are taking away my rights.
    With clever wording and a one-sided view of any statement, anything can be twisted into something that creates fear. Once someone fears a thing, they pay attention to it, and any logical counterargument or balance to that idea falls on deaf ears, because fearing something means not being open to considering it safe. You become fixated on the danger and gravitate toward whatever promises protection: donating to the political party that vows to stop the fearful thing, or buying a tool, product, or service from the good guys who are on your side, whose power will protect you when the fear becomes a real problem.

    While this was a problem before social media, before Western culture, even before culture itself, social media is an excellent tool for amping up fear: it bypasses moderating groups and geographic boundaries, can find the people most susceptible to a given type of fear and target them, and can do all of this cheaply.

    • ... a one sided view of any statement ...

      No, they mix half-truths to create a dishonest conclusion. Tucker Carlson said something like "97% of wind turbines froze and that's why Texas had no power". Wrong, Texas lost power because 83% of gas turbines froze. But blaming a half-truth tells people the facts are 'wrong'. The dialog creates an idea, usually "I'm smart and you're stupid", and facts become unimportant (i.e., post-truth).

      ... whos[e] power will be on your side when this fear becomes a real problem.

      No, the rich, or the fear-monger, rarely promise to fight by your side. Their goal is making you believe you're a victim...

      • I was trying to be politically agnostic. There are lies, and there are crooked news stations and sources. However, most of the time they report what is technically truthful, just framed in a way that misleads you.

        Now, the likes of Tucker Carlson isn't news but commentary; it is the musing of a madman. Even other commentary by actual experts is not news, just expert opinion on the events of the time, and a station can often find experts of various stripes to get across the point it wants.

        I was turned off...

  • by Huitzil ( 7782388 ) on Thursday February 10, 2022 @02:23PM (#62256255)
    A few years ago (2019-2020) I built an app for a side project while I was taking an artificial intelligence course. This app would download a bunch of news articles from all over the world and use a collaborative filtering algorithm to optimize a newsfeed in real time.

    The algo was super simple: it looked at the articles that were getting clicks when posted at random in a Twitter feed, with hashtags automatically pinpointed by the algorithm as descriptive of the article (mostly nouns from the title).

    I let this thing run in the wild for a few weeks. I was surprised by how biased the algorithm became simply by listening to what people were clicking on on Twitter. Two topics bubbled up to the top: anything that mentioned Elon Musk (or Tesla), and anything that mentioned a certain Republican candidate, would just get amplified like there's no tomorrow. You could not weigh down those specific topics even if you wanted to -- the ratios of interest were on the order of hundreds to one.

    So here's what I think this showed: you can create a three-line algorithm that only shows popular content, and misinformed or controversial content will still bubble up to the top.
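
    For illustration, here's a minimal sketch of that kind of popularity-only ranking in Python (the function name and data shapes are assumptions for this sketch, not the actual side project's code):

        # Hypothetical popularity-only feed ranker: each article is assumed
        # to be a dict carrying a running click count. Nothing here inspects
        # accuracy, which is exactly why emotional content floats to the top.
        def rank_feed(articles):
            return sorted(articles, key=lambda a: a["clicks"], reverse=True)

        feed = rank_feed([
            {"title": "Measured policy analysis", "clicks": 12},
            {"title": "Outrage-bait headline", "clicks": 4800},
        ])
        print([a["title"] for a in feed])  # the outrage bait ranks first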

    Because we have a generation of folks clicking on highly emotional content and confusing information with entertainment. The two realms now go hand in hand.

    The solution? It's not JUST the algo that should be reviewed - maybe it starts a bit earlier, like hiring really good teachers when kids are learning to understand the difference between objective reality and amplified entertainment.
    • maybe it starts a bit earlier, like hiring really good teachers when kids are learning to understand the difference between objective reality and amplified entertainment.

      Maybe Congress and you are being unrealistic. What makes you think users want social media to be for serious news and "objective reality" rather than "amplified entertainment"?

      Oh, no! People want the wrong thing! Quick, Congress! Do something to make people want the right thing!

      • Their euphemistic language was sufficiently vague that you're able to twist it. The "entertainment" in question is in fact psy-ops disinformation designed to support the most banal of villainous purposes, the accretion of power and money for certain people who already have power and money.

      • To clarify: I believe the problem is that there are lots of people using social media that cannot differentiate between the two (news or entertainment), but also the fact that pressure to optimize news businesses has resulted in the proliferation of infotainment in substitution of objective journalism. Do I think congress should step in? No. Do I think we need to work as a society to educate our kids to understand what is true and what is not? Yes.
    • by pchasco ( 651819 )
      Of course, media literacy and civics are #1.

      Also, simply declaring that any algorithmically recommended content is considered as content PUBLISHED by the company, and shall be held to all the same laws and regulations as any publisher of information. If the information is found to be libelous or otherwise not constitutionally protected, then the company may be liable for it.

      After that, allow users to opt out of any and all content generated by users not immediately connected to the primary user, or...
    • Because we have a generation of folks clicking on highly emotional content and confuse information with entertainment. The two realms are now hand in hand.

      This is the problem: companies are mixing politics with entertainment. This wasn't a problem until prominent partisan Republicans decided to do away with the FCC fairness doctrine [wikipedia.org]. Since then, partisanship became mainstream and exploded into hyper-partisanship.

      The solution is obvious but Republicans prefer the current situation as they have struck down every measure to claw back some sanity.

    • the ratios of interest were magnitudes of 100s to 1

      Sounds like the work of bots; they are set up to sabotage the algorithms.

  • by CelticNinja2k22 ( 9266567 ) on Thursday February 10, 2022 @02:24PM (#62256263)
    How many times has Facebook in particular faced scandal? Some have been about the algorithms, others about their security practices. Our information needs to be safe; we can't allow a corporate entity to manipulate us through its proprietary algorithm. Remember: if you aren't paying for the service, you are the product being sold.
  • by Kwirl ( 877607 ) <kwirlkarphys@gmail.com> on Thursday February 10, 2022 @02:27PM (#62256269)
    It doesn't take rocket science to realize that constantly showing people what they want to see is going to burn those thoughts into their heads. Didn't they use tactics like that to brainwash prisoners of war or something? I don't know, but it seems like an easy solution is to make sure anyone who obsesses over something is forced to at least see opposing viewpoints in their feed.
    • i dont know, but it seems like an easy solution is to make sure anyone who obsesses over something be forced to at least see opposing viewpoints in their feed

      Yeah, next we'll make a television that forces you to watch shows you don't like, a streaming service that plays bands you'd rather not hear, and a self-driving car that takes you to the wrong places.

      Call me crazy, but I don't think making things more user-hostile by force of law is a solution.

      • by Kwirl ( 877607 )
        if an algorithm puts a video that affirms your beliefs and then one that contradicts your beliefs in front of you, how would either of those be 'force of law'?
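
        For illustration, a minimal Python sketch of such an interleaving recommender (purely hypothetical; the list names and contents are invented):

            # Hypothetical "opposing viewpoints" interleaver: alternate items
            # from an affirming list and a contradicting list, keeping any
            # leftovers from the longer list.
            from itertools import chain, zip_longest

            def interleave(affirming, contradicting):
                pairs = zip_longest(affirming, contradicting)
                return [v for v in chain.from_iterable(pairs) if v is not None]

            print(interleave(["pro1", "pro2", "pro3"], ["con1"]))
            # -> ['pro1', 'con1', 'pro2', 'pro3']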
        • The "force of law" problem is having a law saying the algorithm on social networks must behave in such a manner. I don't want posts from /r/Candidate_2 showing up in my Reddit feed because I subscribed to /r/Candidate_1.

  • That's the real problem. Democracy, although the best of the worst, permits laws to be proposed, and even be made, by morons who were elected by morons.
    • That's the real problem. Democracy, although the best of the worst, permits laws to be proposed, and even be made, by morons who were elected by morons.

      Even worse... many bills are written by lobbyists.

  • Whenever we see the term 'misinformation' thrown about, we should just think of it as information that goes against what the government wants you to think.

  • Most of the recommendations are just going to be twists on the Rooney Rule for the NFL...

    If a site says "hey read this before posting", we all just scroll to the bottom and click "Ok" without reading. Likely most of the other measures applied to either posters or readers would be treated the same way.

    It's going to take internal company changes to stop showing something outrageous just to keep someone engaged on the site a little longer...

  • by matthewcharles2006 ( 960827 ) on Thursday February 10, 2022 @03:00PM (#62256445)
    Like it or not, the First Amendment has never been interpreted to give companies carte blanche to distribute whatever they want wherever they want with guarantees the government won't regulate them. When things like this come up, people always say "read the Constitution!", as if the US government has not played a heavy role in regulating media since the beginning. Maybe you personally interpret the Constitution differently, but until you get a seat on the Supreme Court, it doesn't really matter.
    • by pchasco ( 651819 )
      Absolutely. The courts have repeatedly ruled that all sorts of speech and expression are not constitutionally protected: incitement to commit a crime or suicide, certain categories of false statements of fact, obscenity, child pornography, artwork that could be used as counterfeit currency, speech that violates copyright law; the list goes on.

      My view is that any social media company that amplifies these sorts of speech should be held just as liable as the author.
  • Don't block the right, enforce the responsibility.

    Don't block speech, make sources accountable for their choices. Dox everyone. No anonymity. Fully traceable accountability.

    Don't block freedom to not wear masks or not get vaccinated, we need laws allowing victims to sue the sources of the spread of disease. Full accountability.

    • Free speech only for those who can afford to lawyer up. If you don't like what someone is saying, keep 'em buried in frivolous lawsuits until they go away.

  • If I were someone that posted on facebook, I'd make this and post it:

    I don't always go to facebook, but when I do, I go there like:

    https://www.facebook.com/?sk=h... [facebook.com]

  • I think the internet is not so different from TV in that outrageous/extreme content draws attention. The internet is just better set up to adjust to what draws attention. If there really is a workable solution here, it needs to apply to all media in the same way. As much as they might want to, you can't tell people what to think, and attempting to do so will backfire.

  • The courts have long since determined what limits there are on constitutionally protected free speech in the US. This just appears to be an attempt to have the government piggyback on what tech began to censor three years ago in order to silence anything the Uniparty doesn't like.
  • by RyoShin ( 610051 ) <tukaro@g m a il.com> on Thursday February 10, 2022 @03:57PM (#62256679) Homepage Journal

    Facebook is a festering pile of shit (other platforms like Twitter slightly less; and I say this as a regular FB user), but trying to establish an "algorithm" that social media platforms are forced to use will, at best, provide a placebo ("Okay, we did something, so everything is fine now!") and at worst actually amplify the problem.

    There are a few problems with social media as they exist, but the main one is that the very popular ones rely on ad revenue, and thus engaged users, and thus have an incentive to increase engagement. Even after tweaks, most algs don't have a good way to weigh the truthiness of a post (and I wouldn't expect them ever to) and look mostly at comments/clicks/reactions/shares. The content that tends to get the most of that is the type of content that enrages, either in opposition against the post or in support of whatever bullshit it says.
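
    As a rough illustration of that engagement-only scoring, here is a Python sketch (the weights and signal names are invented for this sketch; no platform publishes its real ones):

        # Hypothetical engagement score: a weighted sum over the signals named
        # above. Note that no term measures whether the post is true.
        WEIGHTS = {"comments": 4.0, "shares": 3.0, "reactions": 1.5, "clicks": 1.0}

        def engagement_score(post):
            return sum(w * post.get(signal, 0) for signal, w in WEIGHTS.items())

        posts = [
            {"id": "measured take", "comments": 3, "shares": 1, "reactions": 20, "clicks": 90},
            {"id": "enraging post", "comments": 400, "shares": 250, "reactions": 900, "clicks": 3000},
        ]
        posts.sort(key=engagement_score, reverse=True)
        print([p["id"] for p in posts])  # the enraging post ranks first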

    Outside of outlawing social media platforms entirely, I don't think there are many legal options to counter this. The only two possibilities I can think of are:
    1) Require platforms within the social media space to operate in a decentralized nature, such that if I use a Facebook competitor like MeWe I can still stay connected with Facebook friends without them also having to go to MeWe (or they could choose to go to a third platform and we'd still stay connected)
    2) Force platforms to not show content to a user unless the user explicitly requests to be shown it (e.g. follows a page, joins a group) or explicitly goes to an "Explore" type of section on the platform (and even then, the alg-based content stays within Explore)

    Not a fan of either of these, though. I would like to see #1 done, but I can't see the government doing it in a sufficient way. (If the NSF or w/e creates and maintains a standard that anyone could take and implement, maybe that would be okay, but not if it's being shoved onto platforms.) If done successfully, then users could easily pack up and go to a different platform without losing their connections if their current platform is pushing bullshit. If someone does that now, they'll only be able to maintain connections they've been able to establish outside the platform. (I would leave FB if it didn't mean losing 60% of my connections; I have alt connections with the other 40%, but split over a dozen platforms.)

    #2 does have some benefits:
    1) Users must take an extra step to be shown whatever the platform's shit alg wants to shove in their face, so if they're normally shown "adverse" content it's because they specifically sought it out and/or have a connection that regularly shares it (and thus no algorithm-meddling would counteract that)
    2) The specification means no risk of favoritism or algorithm-meddling by the government: the only qualification to something showing up in a user's standard feed is "did the user ask to be regularly shown this"
    3) The legal language should be fairly simple, and doable in a way that doesn't require a ton of jargon

    The engagement-driven problem doesn't completely evaporate with #2, but the problem should diminish appreciably.

    P.S. "NUDGE" being capitalized suggests it's an acronym, especially as all horrible law ideas typically employ a backronym in the Act name, but the article doesn't spell it out and I can't find what it might stand for on a brief search

    P.P.S. If social media platforms openly hosted "adult entertainment" content, the engagement from that could completely drown out the conspiracy/frothing/etc. that tends to bubble to the top currently. Just sayin'...

    • by shanen ( 462549 )

      Interesting and substantive comment. No wonder it vanishes into the intellectual vacuum that mostly occupies the space formerly occupied by Slashdot. (Yeah, that was one of my weak jokes, but I rarely get a funny mod.)

      However I largely disagree with you. I even think the Constitution could be used to fix the problems. The key would be honest application of the Bill of Rights to make OUR personal information our own and under OUR control. One aspect would be strongly encouraging data portability (though I fa...

      • by RyoShin ( 610051 )

        I strongly support limiting what data companies can surreptitiously collect and use, too, though I don't think the Constitution or Bill of Rights play too strongly into it. And, though there's a lot of overlap, taking care of that problem is not guaranteed to fix the engagement-driven problem.

        Anyway, the issue goes well beyond ads, and even if your "Why?" button were attached to all algorithm-assigned content, it's reactionary and requires explicit action by the user to opt out rather than being a limitation...

        • by shanen ( 462549 )

          I largely agree with your data selection, though I have reservations about your analysis. I see the biggest problem as this: many, probably most, people just don't want to make the effort to be free. Even if the data were available (perhaps via a strong "Why?" button), and even if they had options besides the abusive services, most people just wouldn't want to be bothered. That's why I think the philosophic focus should be on optimizing the number of choices available at each decision point. Too few is no...

  • Possibly the only group I'd trust LESS than facebook is politicians.

    There's no freaking way any good will come of this, even if it somehow doesn't get struck down as unconstitutional.

    At best, this is another stupid 'feel-good' law that will do nothing. At worst, it's outright censorship and government manipulation of information/speech.

  • Does this mean that /. submitters will need to read the article first?

  • Bipartisan simply means that they received the bill from the same lobbyists along with a campaign contribution.

    There is no misinformation, there is data, stories, words, lies, and facts that some people are offended by. There is also industry-generated advertising that serves as news articles, i.e., "people who live in X can eliminate their mortgage." Those need to be banned or flagged as advertising.

    We've become so butt hurt over people expressing their views that now we want to start crawling into our li...

  • Companies should partner with several fact-checking organizations. They should provide areas where fact-checking ratings and links can appear. When there is no fact-checking rating yet, any user could click one of those areas and pay - yes, pay - the fact-checker to check the posting. The social media company can tack on an extra percentage for themselves on each fact-check. The social media company can also allow the entity paying for the fact-check to sponsor it, and have their username displayed, for...

  • The algorithm needs daylight. That's the source of the problem.