New Algorithm Bill Could Force Facebook To Change How the News Feed Works (theverge.com) 97
A new bipartisan bill, introduced on Wednesday, could mark Congress' first step toward addressing algorithmic amplification of harmful content. The Social Media NUDGE Act, authored by Sens. Amy Klobuchar (D-MN) and Cynthia Lummis (R-WY), would direct the National Science Foundation and the National Academies of Sciences, Engineering, and Medicine to study "content neutral" ways to add friction to content-sharing online. From a report: The bill instructs researchers to identify a number of ways to slow down the spread of harmful content and misinformation, whether through asking users to read an article before sharing it (as Twitter has done) or other measures. The Federal Trade Commission would then codify the recommendations and mandate that social media platforms like Facebook and Twitter put them into practice. "For too long, tech companies have said 'Trust us, we've got this,'" Klobuchar said in a statement on Thursday. "But we know that social media platforms have repeatedly put profits over people, with algorithms pushing dangerous content that hooks users and spreads misinformation."
This is the government trying to regulate speech. (Score:2)
Re: (Score:3, Insightful)
Re: (Score:2, Insightful)
and it's bipartisan because both major parties hate the idea that throwing money at political ads doesn't buy nearly as much influence as it used to. Social media, flawed as it may be, allows people to have a voice back in the face of big corporate spending.
Look at any of the Facebook ads Disney has posted for their new Star Wars hotel as an example. Almost all the comments are people making jokes and complaining about how overpriced the experience is. That's regular people who have had their speech elev
Re: (Score:2, Informative)
> Social media, flawed as it may be, allows people to have a voice back in the face of big corporate spending.
Until you get deplatformed / cancelled for having an unpopular opinion or even citing someone else's and you get grouped in with them. Worse, an honest mistake was made? Too fucking bad. You have ZERO chances of getting that decision reversed unless you find someone popular to speak up for you.
Re: (Score:1)
Until you get deplatformed / cancelled for having an unpopular opinion
That falls under the whole "flawed as it may be" umbrella. You may or may not be happy to see the existing social media giants taken down a notch by government regulation, but the effects won't stop at their virtual doors. This regulation will inevitably extend beyond Facebook/Twitter, and when you start your own website because you've been deplatformed, cue the government with "your algorithms better not be spreading any misinformation, citizen!" See the problem?
Re: (Score:2)
> Until you get deplatformed / cancelled for having an unpopular opinion
Is this something that happens on a regular basis?
And what unpopular opinions get you deplatformed/cancelled? Be specific.
Re: (Score:2)
Until you get deplatformed / cancelled for having an unpopular opinion
Freedom of speech is not freedom from consequences.
Freedom of speech means that the government has no business preventing you from having and voicing a personal opinion about any matter.
Freedom of speech also means that people disagreeing with your opinion have the right to voice theirs.
If you voice your opinion and find that the majority disagrees with you, that's your own problem.
It's your right to be an asshole, and everyone else's right to avoid you like the plague. That's also freedom of speech.
Re: (Score:2)
"Unpopular opinion" can mean something as benign as questioning the usefulness of masks for schoolchildren, who are literally 700 times less vulnerable to Covid (measured in infection fatality rate) than someone in their mid-60's.
You are also missing the fact that Big Tech didn't begin censoring until Congress leaned on them to do so -- a clear First Amendment problem.
Re: (Score:2)
It's worse than that. On Reddit, many subreddit moderators have created bots that automatically ban you if you join or post even a single comment in any other subreddit that questions the legacy media narrative on Covid.
Re: (Score:2)
The problem isn't so much what content is there on Facebook - it's the way Facebook decides what to show you. You'd think the whole point of social media would be to have your friends suggesting content - and a minuscule fraction of what's in peoples' feeds is just that. But most of it is paid content - much of which is not flagged as such - calculated to keep you on the site looking at more paid content - also not flagged as such.
And in the most egregious case, posts will be labeled " likes ". As in y
Re: (Score:2)
Worse than that, it sounds like they're trying to basically regulate mathematics and logic.
Re: (Score:2)
Re: (Score:1)
Re: (Score:2)
I wonder who gets to decide what is "harmful" and what is "misinformation"?
Re: (Score:2)
I wonder who gets to decide what is "harmful" and what is "misinformation"?
The party currently in power. Which ultimately means that both parties get to take turns abusing it.
Re: (Score:2)
Wrong again. (Score:2)
This is the government trying to regulate speech.
Actually, what they are regulating is businesses in the social media sector.
It will not be allowed by the courts.
With the highly partisan SCOTUS, you might be right.
Re: (Score:1)
Actually, what they are regulating is businesses in the social media sector.
By regulating their speech.
Re: (Score:2)
Let me guess, you're against regulating false advertising too? It's basically the same thing.
Re: (Score:2)
Wrong. It's the government stepping in to regulate corporate control of speech, based on clickbait and ad revenue.
Unconstitutional IMHO (Score:4, Insightful)
Congress shall make no law respecting an establishment of religion, or prohibiting the free exercise thereof; or abridging the freedom of speech, or of the press; or the right of the people peaceably to assemble, and to petition the Government for a redress of grievances.
I think what we really need is a bill requiring our leaders in congress to read The Constitution before introducing a bill.
Re: (Score:1)
They're on tape asking this publicly....so, they're used to crossing that line verbally already, now they're trying to codify it.
Re:Unconstitutional IMHO (Score:4, Informative)
False statements are constitutionally protected speech, according to SCOTUS rulings. See: https://case-law.vlex.com/vid/... [vlex.com]
It has to be false and specifically harmful to particular people (not some people in general) for it to be unprotected. Furthermore, any such limits must be the least restrictive way possible (that is, if there is a way to reach the goal without limiting speech, then you can't limit speech).
In short, lies are usually protected speech, which makes sense since basically every movie, book, or TV show is/contains false statements.
Re: (Score:2)
So, what in theory is "truth only and only truth is protected free speech" has no other choice at all, but
Re: (Score:2)
A very large fraction of what government has been calling "misinformation" has simply been presenting legitimate scientific evidence that goes against the desired narrative.
Re: (Score:1)
To be fair, asking is only asking. And especially to Americans (it's one of the few things every American from the far right to the far left can high-five each other about, in unity), any time a president asks you to do something, the default answer is always "go fuck yourself," so I think asking doesn't cross any line at all.
Re: (Score:1)
This *should* pass constitutional muster. The law doesn't prohibit speech. You'll still be able to post what you want, within the T's & C's of the platform. Don't like Trump, Biden, the local dog catcher - fine, post that. What they are asking for is research into whether there are ways to slow dissemination just enough that opposing viewpoints can surface and possible panics and firestorms get tamped down.
The issue isn't that folks don't have the ability to seek out alternatives and/or the truth. They fail to do so be
Re: (Score:1)
Can't yell Fire in a crowded space (unless there's a fire).
It is the act of inciting a riot that is illegal, not the speech itself. Yelling "Vaccines cause autism!" in a crowded theater is perfectly legal, although you probably will get ejected by the staff, as most theaters are private property.
Can't put nudity on the exposed cover of magazines.
Obscene speech isn't remotely the same thing as misinformation. Some of the obscenity laws have actually been reversed in recent years, as they're more a relic of puritanical influence on US laws than a proper unbiased interpretation of the 1A.
FCC regulates what's shown on broadcast airwaves (both the pictures and language).
The airwaves are a finite resour
Re: (Score:2)
Arrived here looking to see who was feeding the trolls. AC is less than no one. Why would anyone talk to such a coward?
Re: (Score:2)
Re: (Score:3)
Re: (Score:1)
The problem is that misinformation is a very nebulous concept. For example, if I tell you "you could save money by buying an electric car", that could be misinformation if you presently don't own or need a car. If I say "cryptocurrency is a good investment", that may or may not be misinformation depending on how the crypto market performs at the time.
Laws need to be very clear in their wording and implementation.
Re: (Score:2)
The problem is that misinformation is a very nebulous concept. For example, if I tell you "you could save money by buying an electric car", that could be misinformation if you presently don't own or need a car. If I say "cryptocurrency is a good investment", that may or may not be misinformation depending on how the crypto market performs at the time.
Laws need to be very clear in their wording and implementation.
How about this "misinformation"?
https://www.independent.co.uk/... [independent.co.uk]
The article is another in a long line of literal "fake news". Gorsuch and Sotomayor issued a joint statement:
https://thehill.com/regulation... [thehill.com]
I personally find this "misinformation" to be far more damaging than whatever people are sharing on facebook. This is a fake story disguised as a news story and was specifically crafted to make a certain justice look bad. Yet, oddly, nobody ever brings this stuff up when talking about "misinformation"
Re: (Score:1)
The UK has more restrictive laws regarding speech than the USA, and that article hasn't been taken down. Fake news is part of the price we pay for allowing the free exchange of information. Much like the imperfections inherent to democracy itself, we tolerate the flaws because the alternative way of doing things is worse. [cfr.org]
Re: (Score:2)
The UK has more restrictive laws regarding speech than the USA, and that article hasn't been taken down. Fake news is part of the price we pay for allowing the free exchange of information. Much like the imperfections inherent to democracy itself, we tolerate the flaws because the alternative way of doing things is worse. [cfr.org]
I totally agree, I'm just pointing out that people who are getting vapors over "misinformation" seem to be 100% concerned about "right-wing misinformation" and ignore literal fake news and such.
See the Canadian trucker thing for another example. When it started, the loonies were claiming that there were only maybe 50 trucks. Hahahaha. It's nothing, just a couple of chumps, nothing to see here. When that narrative obviously failed, they switched to "it's a bunch of fascists!" Yes, fascists who want [che
Re: (Score:2)
That's a fine goal, but it's not practical or achievable. The constitution is written by humans, for humans, and interpreted by humans. Even justices who fancy themselves "textualists" or "originalists" often find ways to bend the words of the constitution to their own ideology when presented the opportunity. There will never be any rubric or set of tests that a court can apply to some speech that will perfectly categorize as constitutionall
Re: (Score:2)
Re: (Score:1)
Repealing Section 230 [wikipedia.org] and allowing users to sue websites for misinformation they publish might be the easiest solution.
That will just make it so the only people who are willing to run discussion-related websites are the folks rich enough to afford a team of cover-your-ass lawyers. It would have an absolutely chilling effect on free speech on the internet.
Re: (Score:2)
There's no getting away from the fact that if you harm another person through misinformation, one of you has to pay the price. Right now, the person who is harmed pays the price and has no recourse, and the person who harmed them has no incentive to avoid harming someone else in the future. So Section 230 facilitates injustice.
You wouldn't need to be, as you wrote, "rich enough to afford a team of cover-your-ass lawyers." If you believe that, then you're also a victim of misinformation. No, all you would ne
Re: (Score:1)
You've basically described an internet that works the way traditional broadcasting has worked. As we've already seen, that only allows those with sufficient funds to have a platform. It may improve the signal to noise ratio on the internet, but at the cost of silencing a lot of voices.
Re: (Score:3)
Incorrect.
Section 230 protects the website on which you are posting your misinformation from liability for what you post.
Section 230 does not protect you from liability for what you post. If your post results in harm, you can face a lawsuit, or even criminal charges.
Re: (Score:2)
Incorrect. Section 230 protects the website from legal action by the victim to reveal the identity of the poster. This helps protect you from liability for what you post.
Re: (Score:2)
Incorrect. Section 230 protects the website from legal action by the victim to reveal the identity of the poster. This helps protect you from liability for what you post.
No, it does not. 47 USC 230 provides a shield from liability to the website, nothing more.
Cite the part of the law that makes you think that it does anything else
Re: (Score:2)
"There's no getting away from the fact that if you harm another person through misinformation, one of you has to pay the price."
And when it's governmental authorities peddling misinformation? Like the CDC director claiming that masks are 80% effective at stopping Covid, in contradiction of the scientific evidence, an amazingly high effectiveness which would make them more effective than the Johnson & Johnson vaccine.
You misunderstand. (Score:2)
Nothing about this restricts speech/assembly/press/religion or anything else in the first amendment. What this does is regulate how social media businesses promote content. This is regulating business, not restricting speech.
It may be a fine line but thems the breaks.
Re: (Score:1)
What this does is regulate how social media businesses promote content.
The 1A also prohibits compelled speech [mtsu.edu]. It's pretty obvious telling social media companies what content they can and cannot promote falls under compelled speech.
Re: (Score:2)
No more so than regulating false advertising. I mean, that's basically what this is, false advertising.
Re: (Score:2)
I think what we really need is a bill requiring our leaders in congress to read The Constitution before introducing a bill.
Stretch goal. Personally I'd settle for them reading the bill they are introducing.
Re: (Score:1)
I think what we really need is a bill requiring our leaders in congress to read The Constitution before introducing a bill.
I think what we REALLY REALLY need is to stop calling the elected in Washington "leaders". They ARE NOT leaders. Not even close. They are representatives and need to be reminded of that by their constituents.
I don't want to be led, I want to be represented.
Re: (Score:2)
The question is: who is the arbiter of truth?
Can social media companies classify news and blog posts as "true", "false", and "questionable"? Can they do it with accuracy?
If they could, we should stop all peer reviewed journals, and have those algorithms take over publishing.
Money before people. (Score:3)
But we know that social media platforms have repeatedly put profits over people, with algorithms pushing dangerous content that hooks users and spreads misinformation.
Some "News" companies too ... and, of course, Veridian Dynamics [fandom.com]:
"‘Money before people.' That’s the company motto - engraved right there on the lobby floor. It just looks more heroic in Latin."
-- Veronica Palmer [wikipedia.org] (Portia de Rossi), “Racial Sensitivity” [fandom.com], Better Off Ted [wikipedia.org]: (s1, e4)
Money in fear. (Score:4, Insightful)
Fear is our most primal emotion, so primal that it usually shuts down a good amount of our rational thinking. Normally that's a good survival instinct: stopping to think about a lion and determine whether it wants to eat you or is merely curious is a good way to die, whereas running away, hiding, or attacking the moment you see it keeps you alive.
However, we have found a way to manufacture "safe" fear. If you do this, you are going to hell; those other guys want to take away something you have; they are trying to take away your right to do something you probably wouldn't do anyway, but dag-nab-it, they are taking away my rights.
With clever wording and a one-sided view of any statement, almost anything can be twisted into something that creates fear. Once someone fears a thing they pay attention to it, and any logical counter-statements fall on deaf ears, because fear keeps you from ever considering the thing safe. Instead you fixate on the danger and gravitate toward whatever promises protection from it: donating to the political party that will stop the fearful thing, or buying the tool, product, or service sold by the good guys whose power will be on your side when the fear becomes a real problem.
This has been a problem since before social media, before Western culture, even before culture itself. But social media is an excellent tool for amping up fear: it bypasses moderating groups and geographic boundaries, can find the people most susceptible to a given fear and target them, and can do it cheaply.
Re: (Score:2)
No, they mix half-truths to create a dishonest conclusion. Tucker Carlson said something like "97% of wind turbines froze and that's why Texas had no power". Wrong, Texas lost power because 83% of gas turbines froze. But blaming a half-truth tells people the facts are 'wrong'. The dialog creates an idea, usually "I'm smart and you're stupid", and facts become unimportant (ie. post-truth).
No, the rich, or the fear-monger rarely promise to fight by your side. Their goal is making you believe you're a vic
Re: (Score:2)
I was trying to be politically agnostic. There are lies and crooked news stations and sources. However most of the time they report what is technically truthful but just in a way to mislead you.
Now, the likes of Tucker Carlson isn't news but commentary, the musings of a madman. Even commentary by actual experts isn't news, just expert opinion on the events of the time, and a station can often find experts of various stripes to push the point it wants to come across.
I was turned off
There is only one way to stop misinformation (Score:5, Insightful)
The algo was super simple: it looked at which articles were getting clicks when posted at random to a Twitter feed, with hashtags the algorithm automatically pinpointed as descriptive of the article (mostly nouns from the title).
I let this thing run in the wild for a few weeks. I was surprised by how biased the algorithm would become simply by listening to what people were clicking on in Twitter. Two topics bubbled up to the top: anything that mentioned Elon Musk (or Tesla) and anything that mentioned a certain Republican candidate would just get amplified like there's no tomorrow. You could not weigh down those specific topics even if you wanted to -- the ratios of interest were magnitudes of 100s to 1.
So here's what I think this showed... you can create a three-line algorithm that only shows popular content, and misinformation or controversial content will still bubble up to the top.
Because we have a generation of folks who click on highly emotional content and confuse information with entertainment. The two realms now go hand in hand.
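A popularity-only ranker of the shape the poster describes really can fit in about three lines. This is a hypothetical sketch, not the poster's actual code: the `record_click` and `feed` names and the `Counter` bookkeeping are invented for illustration. The point is structural -- nothing in it knows whether an article is true, only whether it gets clicked.

```python
# Hypothetical sketch of a popularity-only ranker (NOT the poster's code).
# The only signal is the raw click count, so whatever is already popular
# gets shown more and compounds; accuracy appears nowhere in the formula.
from collections import Counter

clicks = Counter()  # article_id -> observed click count

def record_click(article_id):
    clicks[article_id] += 1

def feed(candidate_ids, k=10):
    # The "three-liner": rank candidates by click count, keep the top k.
    return sorted(candidate_ids, key=lambda a: clicks[a], reverse=True)[:k]
```

Feed the output back in (show the top-ranked items, which then collect still more clicks) and the 100s-to-1 skew the poster observed falls out naturally, because there is no term anywhere that could down-weight a topic.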
The solution? It's not JUST the algo that should be reviewed - maybe it starts a bit earlier, like hiring really good teachers when kids are learning to understand the difference between objective reality and amplified entertainment.
Re: (Score:3)
maybe it starts a bit earlier, like hiring really good teachers when kids are learning to understand the difference between objective reality and amplified entertainment.
Maybe Congress and you are being unrealistic. What makes you think users want social media to be for serious news and "objective reality" rather than "amplified entertainment"?
Oh, no! People want the wrong thing! Quick, Congress! Do something to make people want the right thing!
Re: (Score:2)
Their euphemistic language was sufficiently vague that you're able to twist it. The "entertainment" in question is in fact psy-ops disinformation designed to support the most banal of villainous purposes, the accretion of power and money for certain people who already have power and money.
Re: There is only one way to stop misinformation (Score:2)
Re: (Score:2)
Also, simply declaring that any algorithmically recommended content is considered as content PUBLISHED by the company, and shall be held to all the same laws and regulations as any publisher of information. If the information is found to be libelous or otherwise not constitutionally protected, then the company may be liable for it.
After that, allow users to opt-out of any and all content not generated by users not immediately connected to the primary user, or
Re: (Score:2)
Because we have a generation of folks clicking on highly emotional content and confuse information with entertainment. The two realms are now hand in hand.
This is the problem: companies are mixing politics with entertainment. This wasn't a problem until prominent partisan Republicans decided to do away with the FCC fairness doctrine [wikipedia.org]. Since then, partisanship became mainstream and exploded into hyper-partisanship.
The solution is obvious but Republicans prefer the current situation as they have struck down every measure to claw back some sanity.
Re: (Score:1)
the ratios of interest were magnitudes of 100s to 1
Sounds like the work of bots; they are set up to sabotage the algorithms.
Social media needed to be regulated for a while (Score:3, Insightful)
or as reddit would put it, 'sort by controversial' (Score:4, Interesting)
Re: (Score:2)
I don't know, but it seems like an easy solution is to make sure anyone who obsesses over something is forced to at least see opposing viewpoints in their feed.
Yeah, next we'll make a television that forces you to watch shows you don't like, a streaming service that plays bands you'd rather not hear, and a self-driving car that takes you to the wrong places.
Call me crazy, but I don't think making things more user-hostile by force of law is a solution.
Re: (Score:2)
Re: (Score:1)
The "force of law" problem is having a law saying the algorithm on social networks must behave in such a manner. I don't want posts from /r/Candidate_2 showing up in my Reddit feed because I subscribed to /r/Candidate_1.
Who voted for these clowns? (Score:2)
Re: (Score:2)
That's the real problem. Democracy, although the best of the worst, permits laws to be proposed, and even be made, by morons who were elected by morons.
Even worse... many bills are written by lobbyists.
Mis-information??? (Score:2)
Whenever we see the term 'misinformation' thrown about, we should just think of it as information that goes against what the government wants you to think.
The Rooney Rule (Score:2)
Most of the recommendations are just going to be twists on the Rooney Rule for the NFL...
If a site says "hey read this before posting", we all just scroll to the bottom and click "Ok" without reading. Likely most of the other measures applied to either posters or readers would be treated the same way.
It's going to take internal company changes to stop showing something outrageous just to keep someone engaged on the site a little longer...
Yes, they can regulate "speech" (Score:3)
Re: (Score:3)
My view is that any social media company that amplifies these sorts of speech should be held just as liable as the author.
Wrong direction. Need accountability. (Score:1)
Don't block the right, enforce the responsibility.
Don't block speech, make sources accountable for their choices. Dox everyone. No anonymity. Fully traceable accountability.
Don't block freedom to not wear masks or not get vaccinated, we need laws allowing victims to sue the sources of the spread of disease. Full accountability.
Re: (Score:1)
Free speech only for those who can afford to lawyer up. If you don't like what someone is saying, keep 'em buried in frivolous lawsuits until they go away.
World's Most Interesting Man Meme (Score:2)
If I were someone that posted on facebook, I'd make this and post it:
I don't always go to facebook, but when I do, I go there like:
https://www.facebook.com/?sk=h... [facebook.com]
TV News (Score:2)
I think the internet is not so much different than TV in that outrageous/extreme content draws attention. The internet is just better set up to adjust to what draws attention. If there really is a workable solution here it needs to apply to all media in the same way. As much as they might want to, you can't tell people what to think, and attempting to do so will backfire.
Define harmful. (Score:1)
Gross and a Pre-failure (Score:3)
Facebook is a festering pile of shit (other platforms like Twitter slightly less; and I say this as a regular FB user), but trying to establish an "algorithm" that social media platforms are forced to use will, at best, provide a placebo ("Okay, we did something, so everything is fine now!") and at worst actually amplify the problem.
There are a few problems with social media as they exist, but the main one is that the very popular ones rely on ad revenue, and thus engaged users, and thus have an incentive to increase engagement. Even after tweaks, most algs don't have a good way to weigh the truthiness of a post (and I wouldn't expect them ever to) and look mostly at comments/clicks/reactions/shares. The content that tends to get the most of that is the type of content that enrages, either in opposition against the post or in support of whatever bullshit it says.
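To make that incentive concrete, here is a hypothetical illustration of engagement scoring. The weights and numbers are invented (this is nobody's real ranking formula), but the shape matches the point above: high-effort signals like comments and shares dominate quiet reads, and no term measures truth.

```python
# Invented weights for illustration only -- not Facebook's or any
# platform's real formula. High-friction engagement (comments, shares)
# counts for far more than a passive click, and accuracy appears nowhere.
def engagement_score(post):
    return (1.0 * post["clicks"]
            + 2.0 * post["reactions"]
            + 4.0 * post["comments"]
            + 8.0 * post["shares"])

calm_post = {"clicks": 500, "reactions": 40, "comments": 5, "shares": 2}
rage_post = {"clicks": 300, "reactions": 90, "comments": 120, "shares": 60}

# The enraging post, though read less, outranks the calmer one.
ranked = sorted([("calm", calm_post), ("rage", rage_post)],
                key=lambda p: engagement_score(p[1]), reverse=True)
```

Under any weighting of this shape, content that provokes replies and shares -- whether in support or in opposition -- wins the feed slot, which is exactly the enragement dynamic described above.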
Outside of outlawing social media platforms entirely, I don't think there are many legal options to counter this. The only two possibilities I can think of are:
1) Require platforms within the social media space to operate in a decentralized nature, such that if I use a Facebook competitor like MeWe I can still stay connected with Facebook friends without them also having to go to MeWe (or they could choose to go to a third platform and we'd still stay connected)
2) Force platforms to not show content to a user unless the user explicitly requests to be shown it (e.g. follows a page, joins a group) or explicitly goes to an "Explore" type of section on the platform (and even then, the alg-based content stays within Explore)
Not a fan of either of these, though. I would like to see #1 done, but I can't see the government doing it in a sufficient way. (If the NSF or w/e creates and maintains a standard that anyone could take and implement, maybe that would be okay, but not if it's being shoved onto platforms.) If done successfully, then users could easily pack up and go to a different platform without losing their connections if their current platform is pushing bullshit. If someone does that now, they'll only be able to maintain connections they've been able to establish outside the platform. (I would leave FB if it didn't mean losing 60% of my connections; I have alt connections with the other 40%, but split over a dozen platforms.)
#2 does have some benefits:
1) Users must take an extra step to be shown whatever the platform's shit alg wants to shove in their face, so if they're normally shown "adverse" content it's because they specifically sought it out and/or have a connection that regularly shares it (and thus no algorithm-meddling would counteract that)
2) The specification means no risk of favoritism or algorithm-meddling by the government: the only qualification to something showing up in a user's standard feed is "did the user ask to be regularly shown this"
3) The legal language should be fairly simple, and doable in a way that doesn't require a ton of jargon
The engagement-driven problem doesn't completely evaporate with #2, but the problem should diminish appreciably.
P.S. "NUDGE" being capitalized suggests it's an acronym, especially as all horrible law ideas typically employ a backronym in the Act name, but the article doesn't spell it out and I can't find what it might stand for on a brief search
P.P.S. If social media platforms openly hosted "adult entertainment" content, the engagement from that could completely drown out the conspiracy/frothing/etc. that tends to bubble to the top currently. Just sayin'...
Re: (Score:2)
Interesting and substantive comment. No wonder it vanishes into the intellectual vacuum that mostly occupies the space formerly occupied by Slashdot. (Yeah, that was one of my weak jokes, but I rarely get a funny mod.)
However I largely disagree with you. I even think the Constitution could be used to fix the problems. The key would be honest application of the Bill of Rights to make OUR personal information our own and under OUR control. One aspect would be strongly encouraging data portability (though I fa
Re: (Score:2)
I strongly support limiting what data companies can surreptitiously collect and use, too, though I don't think the Constitution or Bill of Rights play too strongly into it. And, though there's a lot of overlap, taking care of that problem is not guaranteed to fix the engagement-driven problem.
Anyway, the issue goes well beyond ads, and even if your "Why?" button was attached to all algorithm-assigned content it's reactionary and requires explicit action by the user to opt-out rather than being a limitation
Re: (Score:2)
I largely agree with your data selection, though I have reservations about your analysis. I see the biggest problem is that many, probably most, people just don't want to make the effort to be free. Even if the data was available (perhaps via a strong "Why?" button) and even if they had options to using the abusive services, most people just wouldn't want to be bothered. That's why I think the philosophic focus should be around optimizing the number of choices available at each decision point. Too few is no
Trust them instead? (Score:2)
Possibly the only group I'd trust LESS than facebook is politicians.
There's no freaking way any good will come of this, even if it somehow doesn't get struck down as unconstitutional.
At best, this is another stupid 'feel-good' law that will do nothing. At worst, it's outright censorship and government manipulation of information/speech.
Does this mean (Score:1)
Does this mean that /. submitters will need to read the article first?
two idiots get together and push an industry bill (Score:2)
Bipartisan simply means that they received the bill from the same lobbyists along with a campaign contribution.
There is no misinformation, there is data, stories, words, lies, and facts that some people are offended by. There is also industry-generated advertising that serves as news articles, i.e., "people who live in X can eliminate their mortgage." Those need to be banned or flagged as advertising.
We've become too butt hurt over people expressing their views that now we want to start crawling into our li
Social media should get ahead of this (Score:2)
Companies should partner with several fact-checking organizations. They should provide areas where fact-checking ratings and links can appear. When there is no fact-checking rating yet, any user could click one of those areas and pay - yes, pay - the fact-checker to check the posting. The social media company can tack on an extra percentage for themselves on each fact-check. The social media company can also allow the entity paying for the fact-check to sponsor it, and have their username displayed, for
Force Facebook to open-source its algorithm (Score:2)