
It seems like the extortionists can keep sending claims until they stumble across a YouTube moderator who guesses wrong and clicks the "this claim was legit" button. Even if 90% of the moderators would get it right, eventually your video is going to be down for good. Even humans make mistakes sometimes, so human-in-the-loop isn't a perfect solution.
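To put rough numbers on that (purely illustrative assumptions: each claim gets an independent human review, and a reviewer wrongly upholds a bogus claim 10% of the time):

    # Back-of-the-envelope: chance a video survives repeated bogus claims,
    # assuming each claim is reviewed independently and a reviewer wrongly
    # upholds a bogus claim 10% of the time (illustrative numbers only).
    error_rate = 0.10
    for claims in (1, 5, 10, 20):
        survival = (1 - error_rate) ** claims
        print(f"{claims:2d} claims -> still up with probability {survival:.0%}")

After about ten claims the video is more likely down than not, even though every individual reviewer is right 90% of the time.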

The thing that really doesn't make sense to me is that when a moderator marks a claim as invalid, the automod system doesn't switch over to requiring a moderator to review any further claims against that video before taking it down (a rough sketch of what that could look like is below). Ideally you'd want that for all videos, but presumably that would anger the media cartels that dictated the requirements for the system and just want a way to do mass takedowns that doesn't cost them lawyer hours or carry any risk of consequences for them.
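Something along these lines (a hypothetical sketch; the names and flow are made up, and this is not how YouTube's system is actually structured):

    # Hypothetical sketch: once a human has rejected a claim against a video,
    # later claims against that video go to manual review instead of
    # triggering an automatic takedown. All names are made up for illustration.
    flagged_for_manual_review = set()  # video IDs with a previously rejected claim

    def handle_claim(video_id, claim, queue_for_human, take_down):
        if video_id in flagged_for_manual_review:
            queue_for_human(video_id, claim)  # a human decides before anything is removed
        else:
            take_down(video_id, claim)        # status quo: act first, review on appeal

    def handle_moderator_verdict(video_id, claim_was_valid):
        if not claim_was_valid:
            flagged_for_manual_review.add(video_id)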



> It seems like the extortionists can keep sending claims until they stumble across a YouTube moderator who guesses wrong and clicks the "this claim was legit" button.

This is exactly what happens, at least with Facebook. It's not that pranksters or trolls can flag your post/account/page and have it automatically blocked or banned; it's that eventually one of the human reviewers will make a mistake. I've seen it happen more than once, and the only way to appeal was through back channels (i.e., via a friend working at Facebook).


If Google created systems to defend against a single actor spamming their reporting system, it would be an admission that they're aware that they aren't capable of identifying and filtering that out automatically.

"We don't pay for fraud traffic, we don't bill our advertisers for fraud traffic, and youtube analytics represent real users not bots" is pretty much Google's whole business model.

They can survive being caught being mistaken, but it's much worse for them to be caught lying.

So they have to pretend to believe that report spam represents large numbers of real humans filing mistaken or dishonest reports, which they then have to manually review.

This doesn't have anything to do with the media cartels though -- they have access to ContentID and can take down YT videos without filing reports or involving Google employees at all. Media companies wouldn't care if YT removed the report feature entirely.

If you're looking for a corporation to blame for this, it's the advertisers: they don't want to see their ads next to brand-unsafe videos. YT has outsourced the hard work of actually watching and classifying videos for appropriateness to the report system. They assume that once anything that attracts a lot of reports has been taken down, whatever is left will be anodyne enough to be brand-safe.


It would be pretty interesting if some moderators (who are probably in low-wage countries) don't just "guess wrong" but are part of the scheme and uphold some claims on purpose.


I feel like this is too deep into conspiracy territory. There are so many videos and moderators that, of course, there are some assholes in there. But the chance that this one specific video ends up in the task queue of the fraudulent moderator just seems too low for that to be a feasible strategy.
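For a rough sense of scale (made-up numbers: one colluding reviewer out of 10,000, claims assigned uniformly at random):

    # Rough odds that a given video's claim ever lands on the one colluding
    # moderator, assuming claims are assigned uniformly at random among
    # 10,000 reviewers (made-up numbers, for scale only).
    moderators = 10_000
    colluders = 1
    for attempts in (1, 10, 50):
        p_hit = 1 - (1 - colluders / moderators) ** attempts
        print(f"{attempts:2d} resubmissions -> {p_hit:.2%} chance of hitting the colluder")

Even 50 resubmissions give only about a 0.5% chance of reaching the accomplice, whereas an honest reviewer with a 10% error rate gets hit within a handful of attempts.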


Agree. This will have to be fixed at the legislative level.



