> "I also see a lot of people complaining that YouTube/Twitter/etc aren't doing enough to take down false/immoral/illegal content quick enough"
I don't want these companies touching false/immoral takedowns. Who are they to decide? Sorry, I don't want these people acting as the moral police. Illegal stuff, like revenge porn, makes sense.
> "YouTube is doing it's best to blend automation (fast but inaccurate) with human curation (more thoughtful but slower), and sometimes it gets it wrong"
I feel like they usually get it wrong; they're closer to a blind man swinging a chainsaw than a surgeon with a knife. HIRE SOME PEOPLE. They have huge margins; it would take very little for these companies to hire some people to review content.
> " feel like most of the time I see posts like this, the situation is resolved favorably and relatively quickly."
Tell that to the people who have been shadow banned or demonetized. Most of the time there never is a resolution; their channel and all the hard work they put into it is ruined forever.
> HIRE SOME PEOPLE. They have huge margins; it would take very little for these companies to hire some people to review content.
Agreed. That's something that's frequently said about Google and Facebook: that their volume is so large, they couldn't possibly do a better job with customer service. Given that they each make billions in profit every quarter, I find it hard to believe that improving customer service is unfeasible or too expensive.
They should hire more humans to do a proper job and provide actual customer service, using their excessive profits. The AI/automation stuff is just cost cutting, and it's lowering the quality of service.
No one's saying not to have algorithms involved at all. Humans don't have to review all of the content, just the stuff the algorithms are unsure about, as well as takedown/review requests.
The latter is more important, I think: there is no reasonable way for a bad actor to set out to spam the REVIEWING of takedown requests. You could even make spamming takedowns more likely to trigger human review. So you can target a person on YouTube, you can try to do things to harass them, but it's not really feasible to try and attack the entire YouTube 'takedown review' process just to protect your attack on an individual.
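To make that routing idea concrete, here's a toy sketch (purely hypothetical: the function name and every threshold are made up and don't reflect anything YouTube actually does):

    # Hypothetical triage rule: escalate to humans when the classifier is unsure,
    # when a takedown/appeal arrives, or when takedowns against one target look like spam.
    # Every threshold here is invented for illustration.
    def needs_human_review(classifier_confidence: float,
                           is_takedown_or_appeal: bool,
                           takedowns_against_target_24h: int) -> bool:
        if is_takedown_or_appeal:
            return True                        # every takedown/appeal request gets a human
        if takedowns_against_target_24h >= 5:
            return True                        # a burst of takedowns smells like targeting
        return classifier_confidence < 0.9     # low-confidence automated calls escalate

The point is just that the expensive human time only gets spent on the small, adversarial slice of traffic.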
How could this type of blackmail happen if someone were already reviewing the videos manually? Automated takedowns are fundamentally at odds with free speech.
YT is legally required to accept all takedowns and takedown counter-notices to reduce liability. Not taking something down in response to a DMCA takedown request would put YT at risk of a lawsuit if the submitter does own the copyright.
Content ID, on the other hand, is a problem of their own making.
A sufficiently poorly done job is indistinguishable from nothing being done at all.
It's possible that it's impossible to do any better than they currently are, but as we don't have access to all of the data that they do, we cannot make that determination.
10 billion is a lot. And that's 10 billion that is not making them any more money.
From Google's perspective, none of these blackmail or customer-support issues is an actual problem, because you, the customer, are gonna use Google's services regardless. The business case for paying support personnel is non-existent. Even if you're a paying customer, the overhead of hiring more personnel is not an effective use of capital; it's better spent on R&D and other scalable, revenue-generating options.
The reality is that watching all the videos all the way through is an extreme straw man of the actual amount of work they would need to do to have competent moderation, and yet even that straw man is within their budget!
Even watching every video that hits 1,000 views would be huge overkill, and that alone would cut the budget by a factor of several thousand.
A more relevant metric might be how many channels are subjected to community guideline or copyright strikes per day. Hiring humans to individually examine each of those cases is probably far more feasible than hiring humans to individually examine every single video.
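To put rough numbers on that (every figure below is an assumption for the sake of illustration, not a real YouTube number):

    # Back-of-envelope: staff needed to human-review only flagged/striked content.
    # All inputs are made-up assumptions, not actual YouTube figures.
    hours_uploaded_per_day = 720_000     # assumed upload volume
    share_needing_review   = 0.01        # assume ~1% gets flagged or striked
    review_speed_multiple  = 4           # assume reviewers skim/sample at ~4x realtime
    reviewer_hours_per_day = 6           # productive review hours per person per day
    loaded_cost_per_year   = 100_000     # assumed fully loaded cost per reviewer, USD

    review_hours_per_day = hours_uploaded_per_day * share_needing_review / review_speed_multiple
    reviewers_needed     = review_hours_per_day / reviewer_hours_per_day
    annual_cost          = reviewers_needed * loaded_cost_per_year
    print(f"~{reviewers_needed:,.0f} reviewers, ~${annual_cost / 1e6:,.0f}M/year")

With those made-up inputs it lands in the hundreds of reviewers and tens of millions of dollars a year, which is the kind of figure the "it's within their budget" argument is pointing at.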
They are the ones who own the systems and services that make YouTube a thing?
Your problem is with the ownership model. Good luck.
So long as more people provide a market to have their agency bought for an ever lower price... you’ll have that.
To their models we're replaceable cogs; so long as the correct stats are up, wtf do they care?
Don't bother rethinking how we compute or anything. It's not like people weren't predicting just this as the cloud took off (oh wait, they were! Google it!).
> HIRE SOME PEOPLE. They have huge margins; it would take very little for these companies to hire some people to review content.
How many people? YouTube is probably the biggest nexus of complaints, but apps on Play and the Chrome extension store are also up there.
Consider what it would cost to manually vet all apps on Play against even the most uncontroversial malware and data-abuse policy. Malicious authors are clever and apps are big. It'd take a trained reverse engineer days to truly vet a single app. How many millions of apps are on the store? How many different versions of those apps are there? You are looking at an army of people being paid six figures to figure out whether apps are doing nasty things; a rough sketch of that math is below.
Now consider how pissed people get that things like domestic-abuse apps (often hiding as child-monitoring apps) exist on the store. Or plenty of other objectionable crap. If you want to tackle any of this (you don't, but lots of people do), then you've got a whole set of complex contextual problems for enforcement.
It would take a lot more than "very little" to review content.
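Here's that math as a crude sketch (the inputs are just the parent comment's framing turned into placeholder numbers, not real Play Store data):

    # Back-of-envelope: cost of deep manual vetting of every Play Store app.
    # Every input is a placeholder assumption, not a real figure.
    apps              = 2_000_000   # "millions of apps"
    versions_per_app  = 3           # assume a few versions each need a fresh look
    days_per_deep_vet = 2           # "days to truly vet a single app"
    workdays_per_year = 220
    cost_per_engineer = 150_000     # "six figures", fully loaded, USD

    reviewer_years = apps * versions_per_app * days_per_deep_vet / workdays_per_year
    annual_cost    = reviewer_years * cost_per_engineer
    print(f"~{reviewer_years:,.0f} reviewer-years, ~${annual_cost / 1e9:.0f}B/year")

That's clearly infeasible, which is why a reach threshold like the one in the reply below changes the picture: most apps never get meaningful installs, so deep review of only the popular ones is a much smaller job.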
You can do it for every app with a hundred thousand installs, though. And they could manually review support tickets from YouTube partner channels of similar size.
They can't vet everything, but the bare minimum of what they should be vetting is a lot closer to Apple's levels than their current levels.
> "I also see a lot of people complaining that YouTube/Twitter/etc aren't doing enough to take down false/immoral/illegal content quick enough"
I dont want these companies touching false/immoral take downs. who are they to decide? sorry, I dont want these people in charge of the moral police. illegal stuff, like revenge porn makes sense.
> "YouTube is doing it's best to blend automation (fast but inaccurate) with human curation (more thoughtful but slower), and sometimes it gets it wrong"
I feel like they usually get it wrong, they are closer to being a blind man swinging a chain saw than a surgeon with a knife. HIRE SOME PEOPLE. they have huge margins, it would take very little for these companies to hire some people to review content.
> " feel like most of the time I see posts like this, the situation is resolved favorably and relatively quickly."
Tell that to people who have been shadow banned or demonized. most of the time, there never is resolution, their channel and all the hard work they have put together is ruined forever.