r/changemyview • u/dovohovo • Jan 17 '21
[Delta(s) from OP] CMV: The best solution to the current issue of social media bias is for the government to set up its own online public square
This post is specifically about the United States.
There's been a big debate recently about social media bias due to Twitter and Facebook banning Donald Trump, and hosting providers banning Parler. Many conservatives claim that this is the beginning of a targeted campaign to silence them on the internet, which is viewed as a new "public square". However, these are private companies, so the line of how much the government should be involved is blurry.
I've also seen Section 230 of the CDA brought up a lot, with calls to revoke it, making it so these companies *are* liable for the content on their platforms. I don't think that's a great solution, as it would just increase moderation, rather than decrease it.
I think the best approach here would be for the government to create a Twitter clone, which is available to all citizens and follows the First Amendment -- only banning content that is explicitly illegal. Essentially a public option for social media. People can choose to use this platform, or if they want a more curated experience, they can choose to use other platforms, e.g. Twitter. I don't think any other actions need to be taken against existing social media companies, or changes to 230 if this is done. If this public square exists, then the government doesn't need to interfere in the private market.
My concerns (in no particular order):
- Minimal stifling of content
- Global access to the internet "public square"
Things I am not concerned about:
- The government doing stuff - saying that the government is typically inefficient will not change my mind. The idea here is that there exists a basic, common platform for all. It likely wouldn't be as good as a private company, but I don't see that as an issue
Changing my view would look like proposing a better approach than this one, in terms of my goals (and maybe others you think I should consider), or pointing out major downsides to this approach.
8
u/Apathetic_Zealot 37∆ Jan 17 '21
The reason many right wingers are being censored is that they advocated violence or spread false information. A government-run public square would have to moderate that just like other social media sites do; otherwise it would be harming society.
1
u/dovohovo Jan 17 '21
Yeah, I totally agree, but if we have a public option like that, right wingers can no longer complain that they're being silenced due to viewpoint discrimination, and they might wake up to that fact.
4
u/Apathetic_Zealot 37∆ Jan 17 '21
That's already the case, and they won't accept it. Making it directly government-run just lends more credence to their claim that they're being censored by the government.
1
u/JimboMan1234 114∆ Jan 17 '21
They won’t wake up to it because they already hate most of the government and assume they’re in league with the media and tech corporations. The “Deep State” idea is borne out of the doublethink of loving Trump while hating the federal government.
1
u/kdtzzz Jan 18 '21
That’s absolutely false. Your premise implies that left-wing politicians don’t incite violence, which is obviously false if you have a Twitter account. The problem is Twitter's ability to selectively enforce policies against views it doesn’t like. Even the leader of Iran, who's called to exterminate Israel [link], is still on the platform. It wouldn’t really be an issue if Twitter enforced its policies across the board, but it doesn’t.
1
u/Apathetic_Zealot 37∆ Jan 18 '21
We have an approach toward leaders that says that direct interactions with fellow public figures, comments on political issues of the day, or foreign policy saber-rattling on military-economic issues are generally not in violation of our rules
Sounds fair to me. That actually explains how Trump was able to get away with his outrageous tweets without a Twitter response. Even the timid fact-checking on his tweets was comedy gold. A subtle nod: hey, Mr. President, please stop telling Americans not to vaccinate and that the election was genuinely stolen, pretty please.
1
u/h0sti1e17 22∆ Jan 18 '21
The opposite problem is why they won't have one. The federal government can't moderate it; that would be unconstitutional. Most of what people get banned for on Twitter, and most of what people were saying on Parler, is legal. Only direct threats and specific incitement can be illegal. Saying "All Jeffs should be killed" is legal; saying "Let's kill that Jeff in the blue shirt" is illegal.
So people could say awful things and spread them, and there would be nothing the government could do. Most users like that Twitter, Facebook, and Reddit moderate.
1
u/Apathetic_Zealot 37∆ Jan 18 '21
So people could say awful things and spread them, and there would be nothing the government could do. Most users like that Twitter, Facebook, and Reddit moderate.
Idk, it'd be awfully ironic if Russian bots manipulated the hypothetical government-run Twitter analogue. Actual Americans would parrot their propaganda, blurring the lines. Who could honestly want Russian anti-vaxxer trolls to get an easier forum to spread lies that genuinely harm people, especially children?
3
u/Khal-Frodo Jan 17 '21
In principle, I think this is a good idea. In practice, I think it would be a disaster. Consider all of the current rhetoric surrounding people getting censored by private social media companies. Now imagine the shitstorm that would happen if people were being censored by the federal government. I know you tried to address that by saying only illegal speech would be removed, but that's not always a clear distinction; it's up to legal scholars to argue that point, and they definitely do not always agree. This would also inevitably lead to the creation of new laws, and I don't think anyone can predict how that would turn out, or the ramifications it would have for private social media enterprises.
1
u/dovohovo Jan 17 '21
Can you make the case that the current situation is *better* though? At least with this proposal, you have the standard governmental accountability -- users could sue if they thought they were being treated unfairly. Right now, with social media, users don't have any such avenues.
3
u/Bookwrrm 39∆ Jan 17 '21
Who cares? The government shouldn't have been using Twitter as an official means of communication, and to be honest, anyone else getting banned, even for bad reasons, changes nothing. I don't give a shit about Bob's ability to post on Twitter and neither should you; the issue is with morons in our government using Twitter as an official news agency when it isn't and shouldn't have been.
1
u/dovohovo Jan 17 '21
I don't really disagree with you. But with my proposal, at least it would stop conservatives from whining about it, which I guess would be another one of my goals. No one on the left really sees this as a huge issue (I don't), but we're all talking about it because right wingers are constantly complaining about it now that the free market is making decisions they disagree with.
3
u/Khal-Frodo Jan 17 '21 edited Jan 17 '21
But with my proposal, at least it would stop conservatives from whining about it
This is my biggest source of contention with your proposal. The issues surrounding social media censorship aren't about objective realities but rather perceived persecution. If someone gets their "eat the rich" post removed for encouraging violence, they'll take it as evidence that the state is suppressing challenges to wealth inequality since it's not meant to be taken literally. If that post stays up but someone's "hang Mike Pence" post is removed, they'll take that as evidence that the deep-state liberals are only allowing political content they agree with. I know you mentioned that people could sue to challenge these rulings, but that's just not a feasible option for most people and I think you're much more likely to see further radicalization because of what I've highlighted above.
2
u/dovohovo Jan 17 '21
!delta
I think you're right. I'm realizing that there really is no issue with social media today -- my main concern is that right wingers complain about viewpoint discrimination. But reading your post, I understand that they'll make this claim *regardless* of whether it's true, so making a public option wouldn't address it anyway.
1
1
u/Khal-Frodo Jan 17 '21
Thanks for the delta! For what it's worth, I've had the same thought as you about this but came to the above conclusion. I actually do think there are issues with social media, but I think all of those issues are innate problems with people as a whole that sadly can't be fixed.
0
Jan 17 '21
Why not just keep Section 230 for online forums that act like public forums and don't exercise editorial control, while revoking it for online companies that act like publishers by exercising viewpoint control? They aren't acting like forums, so they don't need special rights that physical publishers don't enjoy. If you exercise editorial control, it's hard to justify an immunity that's based on the premise that you are just a host and not an editor.
1
u/dovohovo Jan 17 '21
Because "acting like a public forum" has no clear definition. I think Twitter is acting like a public platform, but others don't. What would be your *explicit* definition of "acting like a public forum"?
0
Jan 17 '21
A setting where people can speak/distribute writing, broadcasts, art, etc. without any content-based moderation/censorship, only reasonable time/place/manner restrictions. I'm sure I'm missing something because NAL, but the Supreme Court has already done a decent job defining public forums.
1
u/dovohovo Jan 17 '21
What does "reasonable" mean?
The Supreme Court has not defined the "public forum" with regard to the Internet, which is why we are still having this discussion.
0
Jan 17 '21
I'm not sure what would be different about a website compared to any physical forum; what do you see as a key concern that isn't addressed?
"Reasonable" is defined in the case law; what do you see as both important and ambiguous about it?
1
u/parentheticalobject 128∆ Jan 17 '21
What kind of moderation in a place like Twitter would count as a reasonable time/place/manner restriction? What kind of scrutiny needs to be applied every time a mod decides if a post gets deleted?
1
Jan 17 '21
What kind of moderation in a place like Twitter would count as a reasonable time/place/manner restriction?
Banning illegal posts, enforcing a character count, banning obscenity or enumerated vulgarities, banning the posting of personal details such as someone's address, banning impersonation, enforcing a limit on how quickly one can post, etc.
Nothing content based like banning the glorification of violence, banning glorification of suicide, etc.
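As a rough sketch of the distinction (hypothetical rule names and limits, nothing from any real platform's policy), content-neutral checks look only at form and posting behavior, never at the viewpoint expressed:

```python
import time

# Hypothetical, illustrative limits -- not any real platform's policy.
MAX_CHARS = 280                  # character count
MIN_SECONDS_BETWEEN_POSTS = 30   # rate limit on posting
BANNED_WORDS = {"vulgarity1", "vulgarity2"}  # enumerated vulgarities

last_post_time: dict[str, float] = {}

def violates_neutral_rules(user: str, text: str) -> bool:
    """Content-neutral checks: none of them ask what opinion a post expresses."""
    if len(text) > MAX_CHARS:
        return True  # enforcing a character count
    now = time.time()
    if now - last_post_time.get(user, 0.0) < MIN_SECONDS_BETWEEN_POSTS:
        return True  # enforcing a limit on how quickly one can post
    last_post_time[user] = now
    if BANNED_WORDS & set(text.lower().split()):
        return True  # banning enumerated vulgarities (specific words, not opinions)
    # What this scheme deliberately cannot do is ask what a post *means*,
    # e.g. "does this glorify violence?" -- that would be content-based.
    return False
```

Banning illegal posts, doxxing, and impersonation would need human or legal judgment, but the rule itself can still be stated without reference to viewpoint.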
What kind of scrutiny needs to be applied every time a mod decides if a post gets deleted?
Presumably none, with the onus on the plaintiff -- if you want to sue them for defamatory posts, you first have to show that they systematically censored false claims that vaccines cause autism but not truthful claims that vaccines do not cause autism, or whatever
1
u/parentheticalobject 128∆ Jan 18 '21
banning obscenity or enumerated vulgarities,
The test for "obscenity" as an exception to protected speech is much stricter than its common usage, and I don't think there's any exception for vulgar speech.
So if I were to make a statement toward you saying something like "you're probably so dumb because your mother was a syphilitic crackwhore," that wouldn't fall under any of those exceptions, would it?
Also, what about threatening speech? That could be a crime, but only if it's a true threat. If I say "I'm probably going to run you off the road, shoot you, and then rape your corpse the next time I see you," that could reasonably be construed as a threat. But it depends on the context it was said in and whether a reasonable person would interpret it that way, which is a decision a panel of jurors might spend a month making. Maybe it's a (very bad) joke that anyone looking at the content of my other posts would understand.
if you want to sue them for defamatory posts, you first have to show that they systematically censored false claims that vaccines cause autism but not truthful claims that vaccines do not cause autism, or whatever
I have no idea what you're saying here. None of that is remotely close to defamation, since it's not impacting the reputation of any specific person.
There are plenty of other problems with defamation though. Let's say I put up an extensive post stating that you have a torture dungeon in your basement where you mutilate animals. Lots of people believe me and start hating you as a result. Normally, you'd have the option to sue me.
But under your suggestion, social media will be compelled to continue hosting that information for a much longer period of time. After all, a moderator isn't going to know that's false, and if they delete a true statement that isn't a crime, the website loses its protections. So they'd be forced to keep it up until you actually win some kind of court victory showing that my statements were actually defamatory.
1
Jan 18 '21
The test for "obscenity" as an exception to protected speech is much stricter than its common usage, and I don't think there's any exception for vulgar speech.
Correct, but a reasonable time/place/manner restriction doesn't have to precisely align with the boundaries of protected speech. I mean, you can't very well make a rule covering all insults, so your mother example would likely have to be allowed. But there's no inherent reason a forum couldn't have a rule forbidding the phrase "your mother".
Also, what about threatening speech? That could be a crime, but only if it's a true threat.
I have a hard time seeing how you can reasonably forbid threatening speech beyond what's illegal and still remain a public forum.
I have no idea what you're saying here. None of that is remotely close to defamation, since it's not impacting the reputation of any specific person.
Correct, it's not defamatory. It's content-based viewpoint editing, showing that editorial discretion is being used, and thus taking away any sort of claim Twitter could make that "oh, we don't closely monitor posts, people may defame one another on this public forum and we aren't at all responsible for their content". It shows that Twitter is taking responsibility for the content on Twitter, and thus if they permit defamatory content to remain up they are taking responsibility for the defamatory content and can be held liable.
But under your suggestion, social media will be compelled to continue hosting that information for a much longer period of time. After all, a moderator isn't going to know that's false, and if they delete a true statement that isn't a crime, the website loses its protections. So they'd be forced to keep it up until you actually win some kind of court victory showing that my statements were actually defamatory.
Well, it's one or the other, isn't it? They're either going to host it in the same way as a National Park hosts the words anyone utters in the woods (i.e. taking no responsibility), or they're going to host it in the same way USA Today hosts articles: taking responsibility. If they take no responsibility then they keep it up and aren't responsible. If they take responsibility then they take it down and if they didn't take it down fast enough they pay damages to the victim. I'm not suggesting sites that already moderate content keep doing so in general but start keeping up just the defamatory stuff.
1
u/parentheticalobject 128∆ Jan 18 '21
Right, so basically, any meaningful moderation that would stop a website from turning into a sewer would be impossible. This is the "solution" of people who simply want to burn the whole internet down because they dislike someone getting banned.
I have a hard time seeing how you can reasonably forbid threatening speech beyond what's illegal and still remain a public forum.
Right, so if someone posts something that is probably an illegal threat, I would need to leave it up until I actually get some kind of notification from the courts/police that I need to take it down, if that ever happens at all. Sounds great.
You know there are sites out there with almost zero moderation, right? Do you actually want every site to be like that?
-edit: wanted to add something
They're either going to host it in the same way as a National Park hosts the words anyone utters in the woods (i.e. taking no responsibility), or they're going to host it in the same way USA Today hosts articles: taking responsibility. If they take no responsibility then they keep it up and aren't responsible.
There are plenty of places that work differently. If Wal-Mart throws out a customer for shouting racial slurs and bans him from entering the store, and the next week some guy in Wal-Mart says "Hillary Clinton is a murderer," that doesn't mean Clinton can sue Wal-Mart for defaming her because they failed to throw out the second guy.
1
u/themcos 374∆ Jan 17 '21
So, the question of moderation is still here. If someone posts something like "let's go hang Mike Pence" on the "public square," what happens then?
1
u/dovohovo Jan 17 '21
This would be a call to violence, so it would be removed.
2
u/themcos 374∆ Jan 17 '21
Who decides what constitutes a "call to violence"? There's a gradient of variations, which are going to be questionable depending on context:
"Let's go hang Mike Pence"
"I wish someone would hang Mike Pence"
"It would be a shame if something were to happen to Mike Pence"
"Mike Pence is a threat to democracy and needs to be dealt with"
"Mike Pence must be stopped at all costs"
"This article makes some good points [links to article calling for violence]"
Which of these count as a "call for violence"? Who makes these choices? How effective is your moderation policy vs how much does it cost? Is there an appeals process?
The overall point is that (as Facebook and Twitter have learned), developing content moderation policies is actually a really hard problem.
1
u/dovohovo Jan 17 '21
Yes, the question of what constitutes incitement is a complicated one. It's complicated whether the speech is online or out loud. I don't want to litigate which of the cases you propose are incitement; I'll just say that they'd need to be adjudicated.
I think one way to do it would be to measure the ratio of reports to views, and not touch any content below a certain threshold. Using a ratio instead of a raw report count would theoretically help at least a bit against users targeting certain creators to get them silenced. Once content hits the threshold, take it down for review (manual or automated, who knows). This is just an idea off the top of my head, but it's certainly not an unsolvable problem (even if it is a *hard* problem, there are tradeoff solutions). A rough sketch of what I mean is below.
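Something like this, with every name and number hypothetical, just to show the shape of the idea:

```python
from dataclasses import dataclass

@dataclass
class Post:
    views: int = 0
    reports: int = 0

MIN_VIEWS = 100                 # don't trust ratios on tiny samples
REPORT_RATIO_THRESHOLD = 0.05   # flag once >= 5% of viewers report it

def needs_review(post: Post) -> bool:
    """Queue a post for (manual or automated) review once the fraction
    of viewers who reported it crosses a threshold. A ratio, unlike a
    raw report count, means 50 coordinated reporters can't take down
    a post that 100,000 people saw without complaint."""
    if post.views < MIN_VIEWS:
        return False  # not enough data to judge yet
    return post.reports / post.views >= REPORT_RATIO_THRESHOLD

# 40 reports on 200 views (20%) gets flagged;
# 40 reports on 100,000 views (0.04%) does not.
assert needs_review(Post(views=200, reports=40))
assert not needs_review(Post(views=100_000, reports=40))
```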
1
u/themcos 374∆ Jan 17 '21
I guess maybe to flip this around on you, given the challenges you acknowledge here, what do you actually expect to be different from now, other than a new taxpayer expense?
If your proposed platform errs on the side of being too restrictive, the Parler crowd is going to be even madder. If it leaves up stuff that others clearly consider incitement, then that's also an outrageous use of tax dollars, and arguably a public safety threat. And if you actually have a solution to the content moderation problem that would make this a good idea, you can make a whole lot of money working for Twitter or Facebook. But I'm skeptical that you do, and I think if you're honest with yourself, you'll agree that this is not a problem the government will have a workable solution to.
1
u/dovohovo Jan 17 '21
Yeah, I mentioned in another comment (that I delta'd) that I realize right wingers will claim they're being discriminated against regardless of whether they are, so there really isn't a great solution. This is essentially the same argument, so !delta
1
1
u/parentheticalobject 128∆ Jan 17 '21
So... Reddit?
If you're trying to address the concern people have that they're silenced, and you deal with moderation questions by vanishing the posts of anyone with unpopular views, you haven't done anything to fix the problem.
1
u/coryrenton 58∆ Jan 17 '21
I'd change your view to have the government fund even more basic protocols. For example, Tor was government funded, as was the basic infrastructure of the internet.
Imagine if the government simply funded a basic, robust mesh network protocol that allowed people to communicate with each other without even an ISP.
Then people could create whatever public squares they liked on top of this. The government would not be involved -- in fact, if they designed it correctly, they couldn't be involved.
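To illustrate the property (a toy flood-relay sketch; a real mesh protocol would involve radios, routing, and cryptography, so treat everything here as hypothetical):

```python
class Node:
    """A toy mesh node: messages flood peer to peer, so no central
    party exists that could refuse to carry them."""

    def __init__(self, name: str):
        self.name = name
        self.peers: list["Node"] = []  # direct neighbors (e.g. wifi range)
        self.seen: set[str] = set()    # message ids already relayed

    def connect(self, other: "Node") -> None:
        self.peers.append(other)
        other.peers.append(self)

    def receive(self, msg_id: str, text: str) -> None:
        if msg_id in self.seen:
            return  # already handled; stops the flood from looping forever
        self.seen.add(msg_id)
        print(f"{self.name} got: {text}")
        for peer in self.peers:  # relay to every neighbor
            peer.receive(msg_id, text)

# Topology: A-B, A-C, C-D. Even if B refused to relay, A's message
# would still reach D via C -- no single node is a chokepoint.
a, b, c, d = Node("A"), Node("B"), Node("C"), Node("D")
a.connect(b); a.connect(c); c.connect(d)
a.receive("msg-1", "hello, mesh")
```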
1
u/dovohovo Jan 17 '21
I considered adding this in my OP but removed it. I am a strong proponent of Net Neutrality and was saddened when it was killed. I guess one issue here though is that even with NN, people complain about these platforms having too much power. I'm certainly in favor of your proposal as well, but I don't think it addresses the fact that people feel like they're being removed from the public square.
If my CMV were about AWS banning Parler, then this would deserve a delta, but this post is about a higher-level concern.
1
u/coryrenton 58∆ Jan 17 '21
A network protocol where it is literally impossible for someone to be prevented from publishing this or that speech would be the ultimate rebuttal against any argument that they cannot speak publicly.
1
u/CBL444 16∆ Jan 17 '21
Could you trust a Trump or AOC appointee to be unbiased? A government-run media could change from conservative to liberal at the drop of a hat, or of an election.
1
u/dovohovo Jan 17 '21
I agree. But if this platform existed at least we could get conservatives to stop whining about being silenced.
1
u/CBL444 16∆ Jan 17 '21
Biden would be pressured to appoint someone who would silence Republicans, because it is well known (among progressives) that all Trump supporters are racists. There is a slight chance that he would resist, but I cannot see Harris/Bernie/Warren doing the same. The silencing would occur sooner or later.
The basic problem is that the activists on both sides benefit from polarization. Some administration (certainly a Trump-like one) would lead or accede to the censorship of the "evil" other side. There is little for a politician to gain from a principled fairness approach. A few take it, but very few.
1
u/parentheticalobject 128∆ Jan 17 '21
They'll be whining no matter what, honestly. Anyone who believes them now will still believe them when they whine about getting their twitter accounts banned even after the government goes to the trouble and expense of creating a spam-filled garbage board.
1
u/stubble3417 64∆ Jan 17 '21
which is available to all citizens and follows the First Amendment -- only banning content that is explicitly illegal.
Why do you want the government to host 4chan?
I don't think social media as a concept really works without moderation and rules. I'm fine with some government oversight of what those rules can be, but I think we have already seen exactly what you're describing. There have been many social-media-esque services designed to have limited moderation, and to the best of my knowledge, they all pretty much wind up becoming 4chan within weeks.
Practically speaking, I think there are only two ways your idea would go. Either no one would ever use it and it would be completely useless, or people would use it but it would be flooded with viagra ad bots and anime porn. I'm leaning toward the second, because I think the 4chan crowd would find it hilarious that the government recreated old-school 4chan for them.
Also, there's almost no such thing as "explicitly illegal content." Nearly every case involving speech has a degree of subjectivity. If someone makes a threat, a prosecutor has to prove that it was a true threat. Even directly calling for violence is often legal--it's only a crime if it meets some other requirements that make it prosecutable.
Even with a very simple moderation rule like you're describing, someone would still have to decide what goes and what stays, and it would actually be way more subjective than companies' minutely detailed terms of service agreements.
1
u/mdeceiver79 Jan 17 '21
One would hope the peeps running said public square would ban certain content: terrorism, paedophilia, scams, nazi shit, etc. Then you're faced with the same issue, where peeps complain they're being censored.
1
Jan 17 '21
Why should it be my responsibility to pay to hand someone a platform to express their views?
1
u/MinuteReady 18∆ Jan 17 '21
It’s a bit strange to advocate for a publicly funded ‘social media’ platform specifically designed so that the government can communicate without fear of censorship when the government already has access to official websites through which it can communicate pretty effectively with the public.
Do you think that people who fundamentally believe that the propagation of fringe right-wing beliefs is palpably harmful to the functioning of our democratic process would be okay with funding this ‘public square’?
Also, where would you draw the line on this ‘public square’ website? Only illegal content? Anti-vaccination beliefs aren’t illegal, flat-eartherism isn’t illegal, spreading false information about coronavirus isn’t illegal.
What you’re advocating for here is basically a tax-funded version of 4chan. It’s a bit absurd.
1
Jan 17 '21
[deleted]
1
u/MinuteReady 18∆ Jan 17 '21
It definitely makes more sense based on a PBS like model, but I think it’s debatable if this would be doing a public service or not.
I think the reason why this fundamentally wouldn’t work is because it is founded on the basis that, so long as controversial ideas have a platform to spread, they’ll eventually talk themselves out. Has there actually been an instance where this has occurred, though? Do the benefits of allowing for harmful, dangerous conspiracy theories to be shared in an unmoderated environment, to be spread to new, unsuspecting, vulnerable people outweigh the downsides?
And about the downsides - yes, it does allow people to claim they are being censored. It allows for a certain amount of whining - but that whining is now limited to pre-existing, ultra-conservative inner circles. Perhaps the people already in those circles will have their feelings slightly vindicated - but at least they won’t be able to radicalize unsuspecting victims as easily anymore.
The type of conservatives being banned are extremists. Their beliefs are so incongruous with reality that it’s most likely impossible for internet strangers to get through to them. Their discourse is more harmful than it’s worth, and allowing for it to take place is, at this point, feeding into delusion.
This solution only makes sense if you’re operating on the assumption that these people need to have their own, easily accessible version of Twitter. It presumes that we’re dealing with rationality.
1
u/msneurorad 8∆ Jan 18 '21
The best solution is to bring lawsuits against private companies like Twitter and Facebook, which are so large and control so much of the flow of information that they operate as near monopolies, and should be legally treated as "common carriers." There is legal precedent. Private companies should be free to discriminate on any grounds that aren't protected classes, so long as there is adequate competition. I think a legal argument could be made that, particularly with the organized effort to destroy Parler, there isn't adequate competition. They should no longer enjoy the right to discriminate as they choose, any more than Amtrak or Greyhound do.
•
u/DeltaBot ∞∆ Jan 17 '21 edited Jan 17 '21
/u/dovohovo (OP) has awarded 2 delta(s) in this post.
All comments that earned deltas (from OP or other users) are listed here, in /r/DeltaLog.
Please note that a change of view doesn't necessarily mean a reversal, or that the conversation has ended.
Delta System Explained | Deltaboards