
That's not how things are likely to work. The financial industry censors disreputable sites because, effectively, disreputable sites are too expensive to support (e.g., high chargeback rates). The big social media sites today aren't going to say "gee, golly, we have to completely abandon moderation to deal with liability now"; no, they're going to ramp up their moderation hard, because reputational risk is still a thing (and they still have liabilities in countries outside the US that they are going to be very concerned about!).

And when you have sites like YouTube that retain a strict moderation policy, there is going to be absolutely zero incentive for them to cede ground to whatever new niche sites crop up and try to play the "we can't do moderation" card, exactly as happens today.



If youtube (and every similar website) were forced to abandon content moderation (or cease being "you"tube), that wouldn't substantially increase their chargeback rate.

> ramp up their moderation hard

Not good enough. Every claim about a person in every video posted to youtube would need to be fact checked prior to being visible on youtube. You are underestimating the degree to which this kills youtube qua youtube. This is substantially more true for facebook, or for comment sections.

> still have liabilities in countries that aren't the US

Already, most sites that are required to censor content for a certain locale block it only for IPs from that locale. I doubt that would change.


> Every claim about a person in every video posted to youtube would need to be fact checked prior to being visible on youtube.

No, it doesn't. Defamation requires some form of mens rea with respect to the statement being made (negligence or actual malice, depending on who it's about), so liability for defamation in a no-§230 world largely means "will take it down after someone complains," since there's a pretty solid defense that not verifying statements before upload doesn't constitute even negligence, much less actual malice.

The liability rules in a no-§230 world aren't entirely clear, since we're basing everything on just two court cases that existed pre-§230 (Cubby v. CompuServe and Stratton Oakmont v. Prodigy), but the social media companies are going to be extremely willing to throw down millions of dollars in legal fees to persuade judges to evolve liability rules that make their moderation policies feasible.

And this comes back to the main point again: no moderation just doesn't scale, not even in channels of merely a hundred people, let alone hundreds of millions. It is naïve to think that companies are going to suddenly decide that "no moderation" is less risky than "tailor both our moderation policies and the law to find some middle ground that is actually feasible."



