
I'm probably misunderstanding the implications but, IIUC, as it is, HN is moderated by dang (and others?) but still falls under 230, meaning HN is not responsible for what other users post here.

With this ruling, HN is suddenly responsible for all posts here specifically because of the moderation. So they have 2 options.

(1) Stop the moderation so they can be safe under 230. Result: HN turns into 4chan.

(2) Enforce moderation to a much higher degree by, say, requiring non-anonymous accounts and a TOS that makes each poster responsible for their own content, and/or manually approving every comment.

I'm not even sure how you'd run a website with user content if you wanted to moderate that content and still avoid being liable for illegal content.



> With this ruling, HN is suddenly responsible for all posts here specifically because of the moderation.

I think this is a mistaken understanding of the ruling. In this case, TikTok decided, with no other context, to make a personalized recommendation to a user who visited their recommendation page. On HN, your front page is not different from my front page. (Indeed, there is no personalized recommendation page on HN, as far as I'm aware.)


> The Court held that a platform’s algorithm that reflects “editorial judgments” about “compiling the third-party speech it wants in the way it wants” is the platform’s own “expressive product” and is therefore protected by the First Amendment.

I don't see how this is about personalization. HN has an algorithm that shows what it wants in the way it wants.


So, yes, the TikTok FYP is different from a forum with moderation.

But the basis of this ruling is basically "well the Moody case says that curation/moderation/suggestion/whatever is First Amendment protected speech, therefore that's your speech and not somebody else's and so 230 doesn't apply and you can be liable for it." That rationale extends to basically any form of moderation or selection, personalized or not, and would blow a big hole in 230's protections.

Given generalized anti-Big-Tech sentiment on both ends of the political spectrum, I could see something that claimed to carve out just algorithmic personalization/suggestion from protection meeting with success, either out of the courts or Congress, but it really doesn't match the current law.


"well the Moody case says that curation/moderation/suggestion/whatever is First Amendment protected speech, therefore that's your speech and not somebody else's and so 230 doesn't apply and you can be liable for it."

I see a lot of people saying this is a bad decision because it will have consequences they don't like, but the logic of the decision seems pretty damn airtight as you describe it. If the recommendation systems and moderation policies are the company's speech, then the company can be liable when the company "says", by way of their algorithmic "speech", to children that they should engage in some reckless activity likely to cause their death.


It's worth noting that personalisation isn't moderation. An app like TikTok needs both.

Personalisation simply matches users with the content the algorithm thinks they want to see. Moderation (which is typically also an algorithm) tries to remove harmful content from the platform altogether.

The ruling isn't saying that Section 230 doesn't apply because TikTok moderated. It's saying Section 230 doesn't apply because TikTok personalised, allegedly knew about the harmful content and allegedly didn't take enough action to moderate this harmful content.


>Personalisation simply matches users with the content the algorithm thinks they want to see.

These algorithms aren't matching you with what you want to see; they're trying to maximize your engagement. Or: it's what the operator wants you to see, so you'll use the site more and generate more data or revenue. It's a fine but extremely important distinction.


What the operator wants you to see also gets into the area of manipulation, hence 230 shouldn't apply: by building algorithms based on manipulation or paid-for boosting, companies move from impartial, unknowing deliverers of harmful content into committed distributors of it.


"harmful content" is such a joke word. if a piece of text or media could harm people the military would have weaponized it long ago. Even monty python made a satire of such a "harmful content" scenario: https://www.youtube.com/watch?v=Qklvh5Cp_Bs Big tech's hyperbole of the word is even more severe than Monty Pythons absurdist satire. Sadly it's not a joke.


Let me introduce you to the concept of psychological warfare. . .


Doesn't seem to have anything to do with personalization to me, either. It's about "editorial judgement," and an algorithm isn't necessarily a get out of jail free card unless the algorithm is completely transparent and user-adjustable.

I even think it would count if the only moderation you did on your Lionel model train site was to make sure that most of the conversation was about Lionel model trains, and that they were treated in a positive (or at least neutral) manner. That degree of moderation, for that purpose, would make you liable if you left illegal or tortious content up; i.e., if you moderate, you're a moderator, and your first duty is legal.

If you're just a dumb pipe, however, you're a dumb pipe and get section 230.

I wonder how this works with recommendation algorithms, though, seeing as they're also trade secrets, even when they're not dark and predatory (advertising-related). If one has a recommendation algorithm that makes better song recommendations, say, you don't want to have to share it. Would it be something you'd have to privately reveal to a government agency (like having to reveal the composition of your fracking fluid to the EPA, for example), which would judge whether or not it was "editorial"?

[edit: that being said, it would probably be very hard to break the law with a song recommendation algorithm. But I'm sure you could run afoul of some financial law still on the books about payola, etc.]


> That degree of moderation, for that purpose, would make you liable if you left illegal or tortious content up i.e. if you moderate, you're a moderator, and your first duty is legal.

I'm not sure that's quite it. As I read the article and think about its application to Tiktok, the problem was more that "the algorithm" was engaged in active and allegedly expressive promotion of the unsafe material. If a site like HN just doesn't remove bad content, then the residual promotion is not exactly Hacker News's expression, but rather its users'.

The situation might change if a liability-causing article were itself given 'second chance' promotion or another editorial thumb on the scale, but I certainly hope that such editorial management is done with enough care to practically avoid that case.


Specifically, NetChoice argued that personalized feeds based on user data were protected as the platforms' own First Amendment speech. This went to the Supreme Court, and the Court agreed. Now precedent is set by the highest court that those feeds are an "expressive product". It doesn't make sense, but that's how the law works: by trying to define as best as possible the things in gray areas.

And they probably didn't think through how this particular argument could affect other areas of their business.


It absolutely makes sense. What NetChoice held was that the curation aspect of algorithmic feeds makes the weighting approach equivalent to the speech of the platforms and therefore when courts evaluated challenges to government imposed regulation, they had to perform standard First Amendment analysis to determine if the contested regulation passed muster.

Importantly, this does not mean that before the Third Circuit decision platforms could just curate any which way they want and government couldn't regulate at all -- the mandatory removal regime around CSAM content is a great example of government regulating speech and forcing platforms to comply.

The Third Circuit decision, in a nutshell, is telling the platforms that they can't have their cake and eat it too. If they want to claim that their algorithmic feeds are speech that is protected from most government regulation, they can't simultaneously claim that these same algorithmic feeds are mere passive vessels for the speech of third parties. If that were the case, then their algorithms would enjoy no 1A protection from government regulation. (The content itself would still have 1A protection based on the rights of the creators, but the curation/ranking/privileging aspect would not).


I misunderstood the Supreme Court ruling as hinging on per-user personalization of algorithms, and thought it made a distinction between editorial decisions shown to everyone vs. individual users. I thought that part didn't make sense. I see now it's really the Third Circuit ruling that interpreted the user-customization part as editorial decisions, without excluding non-per-user algorithms.


Yeah, I agree.

This ruling is a natural consequence of the NetChoice ruling. Social media companies can't have it both ways.

> If that were the case, then their algorithms would enjoy no 1A protection from government regulation.

Well, the companies can still probably claim some 1st Amendment protections for their recommendation algorithms (for example, a law banning algorithmic political bias would be unconstitutional). All this ruling does is strip away the safe harbour protections, which weren't derived from the 1A in the first place.


> law banning algorithmic political bias would be unconstitutional

Would it? The TV channels of old were heavily regulated well past 1st amendment limits.


Only because they were using public airwaves.

Cable was never regulated like that. The medium actually mattered in this case


Cellphones use public airwaves too


From the article:

> TikTok, Inc., via its algorithm, recommended and promoted videos posted by third parties to ten-year-old Nylah Anderson on her uniquely curated “For You Page.”


That's the difference between the case and a monolithic electronic bulletin board like HN. HN follows an old-school BB model very close to the models that existed when Section 230 was written.

Winding up in the same place as the defendant would require making a unique, dynamic, individualized BB for each user tailored to them based on pervasive online surveillance and the platform's own editorial "secret sauce."


The HN team explicitly and manually manages the front page of HN, so I think it's completely unarguable that they would be held liable under this ruling if the front page contained links to articles that caused harm. They manually promote certain posts that they find particularly good, even if they didn't get a lot of votes, so this is even more direct than what TikTok did in this case.


It is absolutely still arguable in court, since this ruling interpreted the Supreme Court ruling as pertaining to "a tailored compilation of videos for a user’s FYP based on a variety of factors, including the user’s age and other demographics, online interactions, and other metadata,"

In other words, the Supreme Court decision mentions editorial decisions, but no court case has yet established whether that means editorial decisions in the HN front-page sense (as in mods make some choices but it's not personalized). Common sense may say mods making decisions are editorial decisions, but it's a gray area until a court case makes it clear. Precedent is the most important thing when interpreting law, and the only precedent we have is that it pertains to personalized feeds.


The decision specifically mentions algorithmic recommendation as being speech; ergo, the recommendation itself is the responsibility of the platform.

Where is the algorithmic recommendation that differs per user on HN?


where does it say that it matters if it differs per user?


> Because TikTok’s “algorithm curates and recommends a tailored compilation of videos for a user’s FYP based on a variety of factors, including the user’s age and other demographics, online interactions, and other metadata,” it becomes TikTok’s own speech.


HN is _not_ a monolithic bulletin board -- the messages on a BBS were never (AFAIK) sorted by 'popularity' and users didn't generally have the power to demote or flag posts.

Although HN's algorithm depends (mostly) on user input for how it presents posts, it still favours some over others and still runs afoul here. You would need a literal "most recent" chronological view, and HN doesn't have that for comments. It probably should anyway!

@dang We need the option to view comments chronologically, please


Writing @dang is a no-op. He'll respond if he sees the mention, but there's no alert sent to him. Email hn@ycombinator.com if you want to get his attention.

That said, the feature you requested is already implemented but you have to know it is there. Dang mentioned it in a recent comment that I bookmarked: https://news.ycombinator.com/item?id=41230703

To see comments on this story sorted newest-first, change the link to

https://news.ycombinator.com/latest?id=41391868

instead of

https://news.ycombinator.com/item?id=41391868


> HN is _not_ a monolithic bulletin board -- the messages on a BBS were never (AFAIK) sorted by 'popularity' and users didn't generally have the power to demote or flag posts.

I don't think the feature was that unknown. Per Wikipedia, the CDA passed in 1996 and Slashdot was created in 1997, and I doubt the latter's moderation/voting system was that unique.


> We need the option to view comments chronologically

You might like this then: https://hckrnews.com/


Key words are "editorial" and "secret sauce". Platforms should not be liable for dangerous content which slips through the cracks, but certainly should be when their user-personalized algorithms mess up. Can't have your cake and eat it too.


Dangerous content slipping through the cracks and the algorithms messing up is the same thing. There is no way for content to "slip through the cracks" other than via the algorithm.


You can view the content via direct links or search; recommendation algorithms aren't the only way to view it.

If you host child porn that gets shared via direct links, that is bad even if nobody can see it, but it is much, much worse if you start recommending it to people as well.


Everything is related. Search results are usually generated based on recommendations, and direct links usually influence recommendations, or include recommendations as related content.

It's rarely if ever going to be the case that there is some distinct unit of code called "the algorithm" that can be separated and considered legally distinct from the rest of the codebase.


It’d be interesting to know what constitutes an “algorithm”. Does a message board sorting by “most recent” count as one?


> algorithm that reflects “editorial judgments”

I don't think timestamps are, in any way, construed as editorial judgement. They are a content-agnostic attribute.


On HN, timestamps are adjusted when posts are given a second-chance boost. While the boost is done automatically, candidates are chosen manually.


What about filtering spam? Or showing the local weather / news headlines?


Moderating content is explicitly protected by the text of Section 230(c)(2)(a):

"(2)Civil liability No provider or user of an interactive computer service shall be held liable on account of— (A)any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected; or"

Algorithmic ranking, curation, and promotion are not.


Or ordering posts by up votes/down votes, or some combination of that with the age of the post.
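A ranking like that is a pure function of votes and age: every visitor sees the same ordering, with no per-user input involved. As a minimal sketch in Python, here is the widely cited (unofficial) HN-style formula; the gravity constant and the formula itself are assumptions for illustration, not HN's actual code:

```python
# Content-agnostic, non-personalized ranking: score depends only on
# votes and age, so every visitor sees the same front page.
# Formula modeled on the commonly cited (unofficial) HN ranking:
#   score = (points - 1) / (age_in_hours + 2) ** gravity
# The gravity constant 1.8 is an assumption for illustration.

def rank_score(points: int, age_hours: float, gravity: float = 1.8) -> float:
    """Higher score floats a post toward the top of the page."""
    return (points - 1) / (age_hours + 2) ** gravity

def front_page(posts):
    """Sort posts (dicts with 'points' and 'age_hours') by score, descending."""
    return sorted(
        posts,
        key=lambda p: rank_score(p["points"], p["age_hours"]),
        reverse=True,
    )
```

Nothing in this function looks at who is asking; newer posts simply decay more slowly than older ones with the same vote count.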


The text of the Third Circuit decision explicitly distinguishes algorithms that respond to user input -- such as surfacing content that was previously searched for, favorited, or followed -- from those that don't. Allowing users to filter content by time, upvotes, number of replies, etc. would be fine.

The FYP algorithm that's contested in the case surfaced the video to the minor without her searching for that topic, following any specific content creator, or positively interacting (liking/favoriting/upvoting) with previous instances of said content. It was fed to her based on a combination of what TikTok knew about her demographic information, what was trending on the platform, and TikTok's editorial secret sauce. TikTok's algorithm made an active decision to surface this content to her, despite knowing that other children had died from similar challenge videos, they promoted it and should be liable for that promotion.


But something like Reddit would be held liable for showing posts, then. Because you get shown different results depending on the subreddits you subscribe to, your browsing patterns, what you've upvoted in the past, and more. Pretty much any recommendation engine is a no-go if this ruling becomes precedent.


TBH, Reddit really shouldn't have 230 protection anyways.

You can't be licensing user content to AI as it's not yours. You also can't be undeleting posts people make (otherwise it's really reddit's posts and not theirs).

When you start treating user data as your own, it should become your own, and that erodes 230.


> You also can't be undeleting posts people make

undeleting is bad enough, but they've edited the content of users' comments too.


> You can't be licensing user content to AI as it's not yours.

It is theirs. Users agreed to grant Reddit a license to use the content when they accepted the terms of service.


It belongs to reddit, the user handed over the content willingly.


In which case, Reddit is the author, publisher and distributor, and cannot claim any protection from libel or defamation suits


From my reading, if the site only shows you content based on your selections, then it wouldn't be liable. For example, if someone else with the exact same selections gets the same results, then that's not the platform deciding what to show.

If it does any customization based on what it knows about you, or what it tries to sell you because you are you, then it would be liable.

Yep, recommendation engines would have to be very carefully tuned, or you'd risk becoming liable. Recommending only curated content would be a way to protect yourself, but that costs money that companies don't have to pay today. It would be doable.
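To make the distinction concrete, here's a hypothetical sketch. `selection_driven_feed` is a pure function of explicit user choices, so two users with identical selections see identical feeds; `personalized_feed` mixes in inferred profile data, which is the kind of per-user weighting at issue. All names and the data shapes are invented for illustration:

```python
# Hypothetical sketch of the distinction drawn above. "Selections" are
# explicit user choices (e.g. followed topics); "profile" is data the
# platform inferred about the user (demographics, watch time, etc.).

def selection_driven_feed(posts, followed_topics):
    """Pure function of explicit selections: two users who follow the
    same topics get the identical feed, in the same order."""
    return [p for p in posts if p["topic"] in followed_topics]

def personalized_feed(posts, followed_topics, inferred_profile):
    """Mixes in platform-inferred data: two users with identical
    selections can now see different feeds, ordered by what the
    platform guesses each one will engage with."""
    feed = [p for p in posts if p["topic"] in followed_topics]
    feed.sort(key=lambda p: inferred_profile.get(p["topic"], 0.0), reverse=True)
    return feed
```

The first function's output is fully determined by what the user asked for; the second's depends on what the platform decided about the user, which is where the "editorial" argument bites.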


> For example, if someone else with the exact same selections gets the same results, then that's not their platform deciding what to show.

This could very well be true for TikTok. Of course "selection" would include liked videos, how long you spend watching each video, and how many videos you have posted

And on the flip side a button that brings you to a random video would supply different content to users regardless of "selections".


It could be difficult to draw the line. I assume TikTok’s suggestions are deterministic enough that an identical user would see the same things - it’s just incredibly unlikely to be identical at the level of granularity that TikTok is able to measure due to the type of content and types of interactions the platform has.


And time.

An account otherwise identical made two days later is going to interact with a different stream. Technically deterministic but in practice no two end up ever being exactly alike, (despite similar people having similar channels.)

The "answer" will turn back into tv channels. Have communities curate playlists of videos, and then anyone can go watch the playlist at any time. Reinvent broadcast tv / the subreddit.


>Pretty much any recommendation engine is a no-go if this ruling becomes precedent.

That kind of sounds... great? The only instance where I genuinely like having a recommendation engine around is music streaming. Like, yeah, sometimes it does recommend great stuff. But anywhere else? No thank you.


If one were to subscribe to such a distinction between algorithmic ranking and algorithmic suggestions, I would liken it, with a broad paintbrush, to:

Ranking: A group of people share a ouija board, and together make selections.

Suggestion: A singular entity clips together media to create a new narrative, akin to a ransom note.

If the sum of the collection of content is more than its parts, if it differs in strength but not kind, or is self-reinforcing, it's really hard to distinguish where the algorithm ends and the voters begin.


Per the court of appeals, TikTok is not in trouble for showing a blackout challenge video. TikTok is in trouble for not censoring them after knowing they were causing harm.

> "What does all this mean for Anderson’s claims? Well, § 230(c)(1)’s preemption of traditional publisher liability precludes Anderson from holding TikTok liable for the Blackout Challenge videos’ mere presence on TikTok’s platform. A conclusion Anderson’s counsel all but concedes. But § 230(c)(1) does not preempt distributor liability, so Anderson’s claims seeking to hold TikTok liable for continuing to host the Blackout Challenge videos knowing they were causing the death of children can proceed."

As in, dang would be liable if, say, somebody started a blackout challenge post on HN and he didn't start censoring such posts once news reports of programmers dying broke out.

https://fingfx.thomsonreuters.com/gfx/legaldocs/mopaqabzypa/...


The credulity of kids, believing and being easily influenced by what they see online, had a big role in this ruling; disregarding that is a huge disservice to a productive discussion.


Does TikTok have to know that "as a category, blackout videos are bad", or that "this specific video is bad"?

Does TikTok have to preempt this category of videos in the future, or simply respond promptly when notified such a video is posted to their system?


Are you asking about the law, or are you asking our opinion?

Do you think it's reasonable for social media to send videos to people without considering how harmful they are?

Do you even think it's reasonable for a search engine to respond to a specific request for this information?


Personally, I wouldn't want search engines censoring results for things explicitly searched for, but I'd still expect that social media should be responsible for harmful content they push onto users who never asked for it in the first place. Push vs Pull is an important distinction that should be considered.


That IS the distinction at play here.


Did some hands come out of the screen, pull a rope out then choke someone? Platforms shouldn’t be held responsible when 1 out of a million users wins a Darwin award.


I think it's a very different conversation when you're talking about social media sites pushing content they know is harmful onto people who they know are literal children.


What constitutes "censoring all of them"?


Trying to define "all" is an impossibility, but it's irrelevant in the context of this particular judgment: TikTok took no action whatsoever, so the definition of "all" never comes into play. See also, for example: https://news.ycombinator.com/item?id=41393921

In general, judges will be ultimately responsible for evaluating whether "any", "sufficient", "appropriate", etc. actions were taken in each future case judgement they make. As with all things legalese, it's impossible to define with certainty a specific degree of action that is the uniform boundary of acceptable; but, as evident here, "none" is no longer permissible in that set.

(I am not your lawyer, this is not legal advice.)


Any good-faith attempt at censoring would have served as a reasonable defense even if they technically didn't censor 100% of them, such as blocking videos with the word "blackout" in the title or manually approving such videos, but they did nothing instead.


> TikTok is in trouble for not censoring them after knowing they were causing harm.

This has interesting higher-order effects on free speech. Let's apply the same ruling to vaccine misinformation, or the ability to organize protests on social media (which opponents will probably call riots if there are any injuries)


Uh yeah, the court of appeals has reached an interesting decision.

But I mean what do you expect from a group of judges that themselves have written they're moving away from precedent?


I don't doubt the same court relishes the thought of deciding what "harm" is on a case-by-case basis. The continued politicization of the courts will not end well for a society that nominally believes in the rule of law. Some quarters have been agitating for removing §230 safe harbor protections (or repealing it entirely), and the courts have delivered.


The personalized aspect wasn't emphasized at all in the ruling. It was the curation. I don't think TikTok would have avoided liability by simply sharing the video with everyone.


> On HN, your front page is not different from my front page.

It’s still curated, and not entirely automatically. Does it make a difference whether it’s curated individually or not?


"I think this is a mistaken understanding of the ruling."

I think that is quite generous. I think it is a deliberate reinterpretation of what the order says. The order states that 230(c)(1) provides immunity for removing harmful content after being made aware of it, i.e., moderation.


Section 230 hasn't changed or been revoked or anything, so, from what I understand, manual moderation is perfectly fine, as long as that is what it is: moderation. What the ruling says is that "recommended" content and personalised "for you" pages are themselves speech by the platform, rather than moderation, and are therefore not under the purview of Section 230.

For HN, Dang's efforts at keeping civility don't interfere with Section 230. The part relevant to this ruling is whatever system takes recency and upvotes, and ranks the front page posts and comments within each post.


Under Judge Matey's interpretation of Section 230, I don't even think option 1 would remain on the table. He includes every act except mere "hosting" as part of publisher liability.


I feel like the end result of path #1 is that your site just becomes overrun with spam and scams. See also: mail, telephones.


No, that's not the end result.

It would be perfectly legal for a platform to choose to allow a user to decide on their own to filter out spam.

Maybe a user could sign up for such an algorithm, but if they choose to whitelist certain accounts, that would also be allowed.

Problem solved.


Exactly. Moderation is not a problem as long as you can opt out of it, for both reading and writing.


If I were to start posting defamatory material about you on various internet forums, how would you opt out of that?


Same as if you were to post it on notice boards, I would opt to not give a fuck.


Yeah, no moderation leads to spam, scams, rampant hate, and CSAM. I spent all of an hour on Voat when it was in its heyday, and it was mostly literal Nazis calling for the extermination of undesirables. The normies just stayed on moderated Reddit.


> stayed on moderated Reddit

... being manipulated by the algorithm (per this judge's decision).


Voat wasn't exactly a single place, any more than Reddit is.


Were there non-KKK/Nazi/QAnon subvoats (or whatever they call them)? The one time I visited the site, every single post on the front page was alt-right nonsense.


Yes. There were a ton of them for various categories of sex drawings, mostly in the style common in Japanese comics and cartoons.


There was a whole lot of stuff that very judgemental people just never got to see, blinded by rage at wrongthink, people who would gladly sweep all the others away to get some victory over something that will still just be somewhere else.

I enjoyed the communities around farming, homesteading, homeschooling of kids, classic literature, etc... but oh no, someone said some naughty words? Let's shut it down.


Yeah, I think my comment implied I'll cry and turn away if I see bad words. It was more like: I'm personally not a Nazi and don't want to read their stuff, and it appeared to be a Nazi site. Why wouldn't farming, homesteading, homeschooling, etc. just use Reddit?


It was the people who were chased out of other websites that drove much of their traffic so it's no surprise that their content got the front page. It's a shame that they scared so many other people away and downvoted other perspectives because it made diversity difficult.


Not sure about the downvotes on this comment; but what parent says has precedent in Cubby Inc. vs Compuserve Inc.[1] and this is one of the reasons Section 230 came about to be in the first place.

HN is also heavily moderated, with moderators actively trying to promote thoughtful comments over other, less thoughtful or incendiary contributions by downranking them (which is entirely separate from flagging or voting; and unlike what people like to believe, this place relies more on moderator actions than on voting patterns to maintain its vibe). I couldn't possibly see this working with the removal of Section 230.

[1] https://en.wikipedia.org/wiki/Cubby,_Inc._v._CompuServe_Inc.


If I upvote something illegal, my liability was the same before, during, and after 230 exists, right?


Theoretically, your liability is the same because the First Amendment is what absolves you of liability for someone else's speech. Section 230 provides an avenue for early dismissal in such a case if you get sued; without Section 230, you'll risk having to fight the lawsuit on the merits, which will require spending more time (more fees).


I'd probably like the upvote itself to be considered "speech". The practical effect of upvoting is to endorse, together with the site's moderators and algorithm-curators, the comment being shown to a wider audience.

Along those lines, an upvote, i.e. an endorsement, would be protected, up to any point where it violated one of the free speech exceptions, e.g. incitement.


4chan is actually moderated too.


> Result, HN turns to 4chan.

As if that were something bad. 4chan has /g/ and it's absolutely awesome.


'Nuff said. Underneath the everlasting political cesspool from /pol/ and its... _specific_ atmosphere, it's still one of the best places to visit for tech-based discussion.


2) Require confirmation you are a real person (check ID) and attach accounts per person. The commercial Internet has to follow the laws they're currently ignoring and the non-commercial Internet can do what they choose (because of being untraceable).


4chan is moderated, and the moderation is different on each board, with the only real global rule being "no illegal stuff". In addition, the site does curate the content it shows you using an algorithm, even though it is a very basic one (the thread with the most recent reply goes to the top of the page, and threads older than X are removed automatically).

For example, the QAnon conspiracy nuts got moderated out of /pol/ for arguing in bad faith / just being too crazy to have any kind of conversation with, and they fled to another board (8chan, later 8kun) that has even less moderation.


> 4chan is moderated

Yep, 4chan isn't bad because "people I disagree with can talk there", it's bad because the interface is awful and they can't attract enough advertisers to meet their hosting demands.


Nah. HN is not the same as these others.

TikTok. Facebook. Twitter. YouTube.

All of these have their algorithms specifically curated to try to keep you angry. YouTube outright ignores your blocks every couple of months, and no matter how many people dropping n-bombs you report and block, it never-endingly pushes more and more.

These companies know that their algorithms are harmful and push them anyway. They absolutely should have liability for what their algorithms push.


There's moderation to manage disruption to a service. There's editorial control to manage the actual content on a service.

HN engages in the former but not the latter. The big three engage in the latter.


HN engages in the latter. For example, user votes are weighted based on their alignment with the moderation team's view of good content.


I don't understand your explanation. Do you mean just voting itself? That's not controlled or managed by HN. That's just more "user generated content." That posts get hidden or flagged due to thresholding is non-discriminatory and not _individually_ controlled by the staff here.

Or.. are you suggesting there's more to how this works? Is dang watching votes and then making decisions based on those votes?

"Editorial control" is more of a term of art and has a narrower definition than you're allowing for.


The HN moderation team makes a lot of editorial choices, which is what gives HN its specific character. For example, highly politically charged posts are manually moderated and kept off the main page regardless of votes, with limited exceptions entirely up to the judgement of the editors. For example, content about the wars in Ukraine and Israel is not allowed on the mainpage except on rare occasions. dang has talked a lot about the reasoning behind this.

The same applies to comments on HN. Comments are not moderated based purely on legal or certain general "good manners" grounds, they are moderated to keep a certain kind of discourse level. For example, shallow jokes or meme comments are not generally allowed on HN. Comments that start discussing controversial topics, even if civil, are also discouraged when they are not on-topic.

Overall, HN is very much curated in the direction of a newspaper's "letters to the editor" section, rather than being algorithmic and hands-off like the Facebook wall or TikTok feed. So there is no doubt whatsoever, I believe, that HN would be considered responsible for user content (and is, in fact, already pretty good at policing it in my experience, at least on the front page).


> The HN moderation team makes a lot of editorial choices, which is what gives HN its specific character. For example, highly politically charged posts are manually moderated and kept off the main page regardless of votes, with limited exceptions entirely up to the judgement of the editors. For example, content about the wars in Ukraine and Israel is not allowed on the mainpage except on rare occasions. dang has talked a lot about the reasoning behind this.

This is meaningfully different in kind from only excluding posts that reflect certain perspectives on such a conflict. Maintaining topicality is not imposing a bias.


> This is meaningfully different in kind from only excluding posts that reflect certain perspectives on such a conflict. Maintaining topicality is not imposing a bias.

Maintaining topicality is literally a bias. Excluding posts that reflect certain perspectives is censorship.


There are things like "second chance," where the editorial team can re-up posts they feel didn't get a fair shake the first time around, and sometimes if a post gets too "hot" they will cool it down. All of this is understandable, but it unfortunately does mean they are actively moderating content and thus are responsible for all of it.


Dang has been open about voting being only one part of how HN works, and about the fact that manual moderator intervention does occur. They will downweight the votes of "problem" accounts, manually adjust the order of the front page, and do whatever they feel necessary to maintain a high signal-to-noise ratio.
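HN's exact weighting isn't public, but the general shape — a time-decayed score on top of per-account vote weights, with room for a manual penalty — can be sketched roughly. All constants, field names, and the penalty mechanism here are assumptions, not HN's actual implementation:

```python
def ranking_score(points, age_hours, penalty=1.0, gravity=1.8):
    """Classic time-decay ranking (the published HN formula has a
    similar shape); `penalty` models a manual moderator adjustment
    (1.0 = untouched, < 1.0 = pushed down the front page)."""
    return penalty * (points - 1) / (age_hours + 2) ** gravity


def weighted_points(votes, weight_for):
    """Sum votes, downweighting distrusted accounts. `weight_for`
    maps a user id to a multiplier (1.0 = full weight)."""
    return sum(weight_for(user) for user in votes)


votes = ["alice", "bob", "mallory"]
weights = {"mallory": 0.1}  # hypothetical "problem" account
pts = weighted_points(votes, lambda u: weights.get(u, 1.0))
print(round(pts, 1))  # → 2.1
```

The relevant point for the 230 argument: once `penalty` and `weight_for` exist, a human is making per-item and per-account judgments, not just applying a uniform formula.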


Every time you see a comment marked as [dead] that means a moderator deleted it. There is no auto-deletion resulting from downvotes.

Even mentioning certain topics, such as Israel's invasion of Palestine, even when the mention is on-topic and not disruptive, as in this comment you are reading, is practically a death sentence for a comment. Not because of votes, but because of the moderators. Downvotes may prioritize which comments go in front of moderators (we don't know) but moderators make the final decision; comments that are downvoted but not removed merely stick around in a light grey colour.

By enabling showdead in your user preferences and using the site for a while, especially reading controversial threads, you can get a feel for what kinds of comments are deleted by moderators exercising their judgment. It is clear that most moderation is about editorial control and not simply the removal of disruption.

This comment may be dead by the time you read it, due to the previous mention of Palestine - hi to users with showdead enabled. Its parent will probably merely be downvoted, because it's wrong but doesn't contain anything that would irk the mods.


Comments that are marked [dead] without the [flagged] indicator are like that because the user that posted the comment has been banned. For green (new) accounts this can be due to automatic filters that threw up false positives for new accounts. For old accounts this shows that the account (not the individual comment) has been banned by moderators. Users who have been banned can email hn@ycombinator.com pledging to follow the rules in the future and they'll be granted another chance. Even if a user remains banned, you can unhide a good [dead] comment by clicking on its timestamp and clicking "vouch."

Comments are marked [flagged] [dead] when ordinary users have clicked on the timestamp and selected "flag." So user downvotes cannot kill a comment, but flagging by ordinary non-moderator users can kill it.


Freedom of speech, not freedom of reach for their personal curation preferences, or narrative shaping driven by confirmation bias and survivorship bias. Tech is in the business of putting people on scales to increase some signals and decrease others, based upon some hokey story of academic and free-market genius.

The pro-science crowd (which includes me, fwiw) seems incapable of providing a proof that any given scientist is that important. The same old social-politics norms inflate some and deflate others, and we take our survival as confirmation that we're special. One's education is vacuous prestige, given that physics applies equally; oh, you did the math! Yeah, I just tell the computer to do it. Oh, you memorized the circumlocutions and dialectic of some long-dead physicist. Outstanding.

There's a lot of ego-driven, banal, classist nonsense in tech and science. At the end of the day we're just meat suits with the same general human condition.


(1) 4chin is too dumb to use HN, and there's no image posting, so I doubt they'd even be interested in raiding us. (2) I've never seen anything illegal here; I'm sure it happens, but it gets dealt with quickly enough that it's not really ever going to be a problem if things continue as they have been.

They may lose 230 protection, sure, but that's probably not really a problem here. For Facebook et al, it's going to be an issue, no doubt. I suppose they could drop their algos and bring back chronological feeds, but my guess is that wouldn't be profitable, given that ad-tech and content feeds are one and the same at this point.

I'd also assume that "curation" is the sticking point here, if a platform can claim that they do not curate content, they probably keep 230 protection.


>4chin is too dumb to use HN

I don't frequent 4cuck, I use soyjak.party, which I guess from your perspective is even worse, but there are plenty of smart people on the 'cuck thoughbeit, like the gemmy /lit/ schizo. I think you would feel right at home in /sci/.


Certain boards most definitely raid various HN threads.

Specifically, every political or science thread that makes it is raided by 4chan. 4chan also regularly pushes anti-science and anti-education agenda threads to the top here, along with posts from various alt-right figures on occasion.


search: site:4chan.org news.ycombinator.com

Seems pretty sparse to me, and from a casual perusal, I haven't seen any actual calls to raid anything here; it's more of a reference point where articles/posts have come up and people talk about them.

Remember, not everyone you disagree with comes from 4chan. Some of them probably work with you, you might even be friends with them, and they're perfectly serviceable people with lives, hopes, and dreams, same as yours; they simply think differently than you.


lol dude. Nobody said that 4chan links are posted to HN, just that 4chan definitely raids HN.

4chan is very well known for brigading. It is also well known that using 4chan, as well as a number of other places such as Discord, to post links for brigades is an extremely common thing the alt-right does to try to raise the "validity" of their statements.

I also did not claim that only these opinions come from 4chan. Nice strawman bro.

Also, my friends do not believe these things. I do not make a habit of being friends with people that believe in genociding others purely because of sexual orientation or identity.


Go ahead and type that search query into google and see what happens.

Also, the alt-right is a giant threat, if you categorize everyone to the right of you as alt-right, which seems to be the standard definition.

That's not how I've chosen to live, and I find that it's peaceful to choose something more reasonable. The body politic is cancer on the individual, and on the list of things that are important in life, it's not truly important. With enough introspection you'll find that the tendency to latch onto politics, or anything politics-adjacent, comes from an overall lack of agency over the other aspects of life you truly care about. It's a vicious cycle. You have a finite amount of mental energy, and the more you spend on worthless things, the less you have to spend on things that matter, which leads to you latching further on to the worthless things, and having even less to spend on things that matter.

It's a race to the bottom that has only losers. If you're looking for genocide, that's the genocide of the modern mind, and you're one foot in the grave already. You can choose to step out now and probably be ok, but it's going to be uncomfortable to do so.

That's all not to say there aren't horrid, problem-causing individuals out in the world, there certainly are, it's just that the less you fixate on them, the more you realize that they're such an extreme minority that you feel silly fixating on them in the first place. That goes for anyone that anyone deems 'horrid and problem-causing' mind you, not just whatever idea you have of that class of person.


> Go ahead and type that search query into google and see what happens.

What are you expecting it to show? That site removes all content after a matter of days.


Indeed. But 4chan has archives, for instance https://archived.moe/


These people win elections and make news cycles. They are not an “ignorable, small minority”.

For the record, ensuring that those who wish to genocide LGBT+ people are not the majority voice on the internet is absolutely not “a worthless matter”, not by any stretch. I would definitely rather not have to do this, but then, the people who dedicate their lives to trolling and hate are extremely active.



