
What ever happened to all that talk about Section 230 protections for platforms? It used to get a ton of discussion in the past, did something change?


> No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.

That's what section 230 says. The content in question here is not provided by "another information content provider", it is provided by X itself.


I guess people draw a distinction between a platform generating illegal content and merely hosting illegal content uploaded by users.


Section 230 is not a magical "get out of jail free" card that you can use to absolve your tech platform of any responsibility to its users. Removing posts and banning users is obviously not a workable solution for a technology that can abuse individuals very quickly.


My point is more that a lot of people were talking about removing Section 230 protections, which I think is implicitly what X is saying absolves them of responsibility for Grok-generated CSAM.

Removing Section 230 was a big discussion point for the current ruling party in the US, when they didn't have so much power. Now that they do have power, why has that discussion stopped? I'd be very interested in knowing what changed.


Ah, I misinterpreted - apologies. The current ruling party is not a monolith. The tech faction has been more or less getting its way at the expense of the traditionalist natcon faction. The former has no interest in removing Section 230 protections, while a few in the latter camp say they do.

But beyond the legality or obvious immorality, this is a huge long-term mistake for X. 1 in 3 users of X are women - that fraction will get smaller and smaller. The total userbase will also get smaller and smaller, and the platform will become a degenerate hellhole like 4chan.


Section 230 only covers user-generated content. I imagine this gets dicey considering Grok is platform-owned and generating the content.


It will be interesting to see how this plays out.

When do we cross the line of culpability with tool-assisted content? If I have a typo in my prompt and the result is illegal content, am I responsible for an honest mistake or should the tool have refused to generate illegal content in the first place?

Do we need to treat genAI like a handgun that is always loaded?


And in good faith.

Knowingly allowing it is not in good faith.


I don't care about American law. Sharing fake porn of real children is illegal in my country, where X offers its services.


Something must have changed: there's a whole lot less concern about censorship and government intervention in social media, despite many "just the facts" reports of exactly such interventions going on.

I'm at a loss to explain it, given media's well known liberal bias.


How curious that your comment was downvoted! It seems completely appropriate and in line with the discussion.

I think it's time to revisit these discussions and in fact remove Section 230. X is claiming that the Grok CSAM is "user generated content", but why should X have any protection to begin with, whether a human user uploads the content directly or uses Grok to generate and distribute it publicly?

The section 230 discussion must return, IMHO. These platforms are out of control.


Even ignoring that Grok is generating the content, not users, I think you can still hold to Section 230 protections while thinking that companies should take more significant moderation actions with regards to issues like this.

For example, if someone posted CSAM on HN and Dang deleted it, I think that it would be wrong to go after HN for hosting the content temporarily. But if HN hosted a service that actively facilitated, trivialized, and generated CSAM on behalf of users, with no or virtually no attempt to prevent that, then I think that mere deletion after the fact would be insufficient.

But again, you can just use "Grok is generating the content" to differentiate if that doesn't compel you.


Should Adobe be held accountable if someone creates CSAM using their software? They could put image recognition into it that would block it, but they don't.

Look at what happens when you put an image of money into Photoshop: they detect it and block it.


I don't know. Does it matter what I think about that? Let's say I answer "yes, they should". Then what? Or what if I say "no, I see a difference". Then what?

Who cares about Adobe? I'm talking about Grok. I can consistently say "I believe platforms should moderate content in accordance with Section 230" while also saying "And I think that the moderation of content with regards to CSAM, for major platforms with XYZ capabilities should be stricter".

The answer to "what about Adobe?" is then either that it falls into one of those two categories, in which case you have your answer, or it doesn't, in which case it isn't relevant to what I've said.


Logical fallacy.

But to answer your point, no, for two reasons:

1) You need to bring your own source material to create it. You can't press a button that says "make child porn".

2) It's not reasonable to expect that someone could make CSAM in Photoshop. More importantly, the user is the one hosting the software, not Adobe.


>You can't press a button that says "make child porn"

Where is this button in Grok? As the user, you have to explicitly write out a very obviously bad request. Nobody is going to accidentally get CSAM content without making a conscious choice about a prompt that's pretty clearly targeting it.


Is it reasonable (in the legal sense, i.e. anyone can do it) that someone with little effort could create CSAM using Photoshop?

No: it takes training and a lot of time and effort. With Grok, you say "hey, make a sexy version of [picture of this minor]" and it'll do it. That takes no training, and it's not a high bar to stop people from doing it.

The non-CSAM example is this: in the USA, it's illegal to make anything that looks like a US dollar bill, which is why photocopiers have blocks on them to stop you from copying one.

You can get around that as a private citizen, but it's still illegal. A company knowingly making a photocopier that lets you photocopy dollar bills is in for a bad time.



