
> I think the ultimate problem is that social media is not unbiased — it curates what people are shown.

This is literally the purpose of Section 230. It's Section 230 of the Communications Decency Act. The purpose was to change the law so platforms could moderate content without incurring liability, because under the prior case law, doing any moderation made you liable for whatever users posted, and you don't want a world where removing/downranking spam or pornography or trolling causes you to get sued for unrelated things you didn't remove.



The CDA was about making it clearly criminal to send obscene content to minors via the internet. Section 230 was intended to clarify the common carrier role of ISPs and similar providers of third party content. It does have a subsection to clarify that attempting to remove objectionable content doesn't remove your common carrier protections, but I don't believe that was a response to pre-CDA status quo.


> The CDA was about making it clearly criminal to send obscene content to minors via the internet.

Basically true.

> Section 230 was intended to clarify the common carrier role of ISPs and similar providers of third party content.

No, it wasn't, and you can tell that because there is literally not a single word to that effect in Section 230. It was to enable information service providers to exercise editorial control over user-submitted content without acquiring publisher-style liability, because the alternative, given the liability decisions occurring at the time and the way providers were reacting to them, was that any site using user-sourced content at scale would, to mitigate legal risk, be completely unmoderated, which was the opposite of the vision the authors of Section 230 and the broader CDA had for the internet. There are no "common carrier" obligations or protections in Section 230. The terms of the protection are the opposite of common carrier, and while there are limitations on the protections, there are no common-carrier-like obligations attached to them.



> The CDA was about making it clearly criminal to send obscene content to minors via the internet.

That part of the law was unconstitutional and pretty quickly got struck down, but it still goes to the same point that the intent of Congress was for sites to remove stuff and not be "common carriers" that leave everything up.

> Section 230 was intended to clarify the common carrier role of ISPs and similar providers of third party content. It does have a subsection to clarify that attempting to remove objectionable content doesn't remove your common carrier protections, but I don't believe that was a response to pre-CDA status quo.

If you can forgive Masnick's chronic irateness, he does a decent job of explaining the situation:

https://www.techdirt.com/2024/08/29/third-circuits-section-2...


Yeah, but they're not just removing spam and porn. They're picking out things that make them money even when it harms people. That was never in the spirit of the law.


Yes, it is. Section 230 doesn't replace the 1A, and deciding what you want to show or not show is classic 1A activity.


It's also classic commercial activity. Because 230 exists, we are able to have many intentionally different social networks and web tools. If there were no moderation -- for example, if you couldn't delete porn from LinkedIn -- all social networks would be the same. Likely there would only be one large one. If all moderation were pushed to the client side, it might seem like we could retain what we have, but it seems very possible we would lose the diverse ecosystem we have online and end up with something like Walmart.

This would be the worst outcome of a rollback of 230.


> The purpose was to change the law so platforms could moderate content

What part of deliberately showing political content to people algorithmically expected to agree with it, constitutes "moderation"?

What part of deliberately showing political content to people algorithmically expected to disagree with it, constitutes "moderation"?

What part of deliberately suppressing or promoting political content based on the opinions of those in charge of the platform, constitutes "moderation"?

What part of suppressing "misinformation" on the basis of what's said in "reliable sources" (rather than any independent investigation - but really the point would still stand), constitutes "moderation"?

What part of favouring content from already popular content creators because it brings in more ad revenue, constitutes "moderation"?

What part of algorithmically associating content with ads for specific products or services, constitutes "moderation"?


Prosaically, all of your examples are moderation. And as a private space that a user must choose to access, I'd argue that's great.


There is (or should be, in any case) a difference between moderation and recommendation.


There is no difference. Both are editorial choices and protected 1A activity.


> What part of deliberately showing political content to people algorithmically expected to agree with it, constitutes "moderation"?

Well, maybe it's just me, but only showing political content that doesn't include "kill all the (insert minority here)", and expecting users to not object to that standard, is a pretty typical aspect of moderation for discussion sites.

> What part of deliberately suppressing or promoting political content based on the opinions of those in charge of the platform, constitutes "moderation"?

Again, deliberately suppressing support for literal and obvious fascism, based on the opinions of those in charge of the platform, is a kind of moderation so typical that it's noteworthy when it doesn't happen (e.g. Stormfront).

> What part of suppressing "misinformation" on the basis of what's said in "reliable sources" (rather than any independent investigation - but really the point would still stand), constitutes "moderation"?

Literally all of Wikipedia, where the whole point of the reliable sources policy is that the people running it don't have to be experts to have a decently objective standard for what can be published.



