Google is hiring thousands of new moderators after facing a widespread backlash for allowing videos of child abuse and other violent and offensive content to flourish on YouTube.
Google, the owner of YouTube, announced on Monday that next year it would expand its total workforce to more than 10,000 people responsible for reviewing content that could violate its policies. The news from YouTube chief executive Susan Wojcicki followed a steady stream of negative press about the site's role in spreading harassing videos, misinformation, hate speech and content harmful to children.
Wojcicki said that, in addition to increasing the number of human moderators, YouTube is continuing to develop advanced machine learning technology to automatically flag problematic content for removal. The company said its new efforts to protect children from dangerous and abusive content and to block hate speech on the site were modeled after its ongoing work to combat violent extremist content.
"Human reviewers remain essential both to removing content and to training machine learning systems, because human judgment is critical to making contextualized decisions about content," the CEO wrote in a blog post, adding that moderators had manually reviewed nearly 2 million videos for violent extremist content since June, helping train machine learning systems to identify similar footage in the future.
In recent weeks, YouTube has used machine learning technology to help human moderators find and shut down hundreds of accounts and remove hundreds of thousands of comments, according to Wojcicki.
YouTube faced increased scrutiny last month in the wake of reports that it allowed violent content to slip past the YouTube Kids filter, which is supposed to block any content not appropriate for young users. Some parents recently discovered that YouTube Kids allowed children to watch videos of familiar characters in violent or lewd scenarios, along with nursery rhymes mixed with disturbing images, according to the New York Times.
Other reports uncovered "verified" channels featuring child exploitation videos, including viral footage of screaming children being mock-tortured and webcams of young girls in revealing clothing.
YouTube has also repeatedly provoked outrage over its role in perpetuating misinformation and harassment videos in the wake of mass shootings and other national tragedies. The Guardian found that survivors and relatives of victims of numerous shootings have been subjected to a wide range of online abuse and threats, some linked to popular conspiracy theories promoted on YouTube.
Some parents of people killed in high-profile shootings have spent countless hours trying to report abusive videos about their deceased children and have repeatedly urged Google to hire more moderators and better enforce its policies. It is unclear, however, how the expansion of moderators announced on Monday will affect this type of content, as YouTube said its focus was on hate speech and child safety.
Although recent scandals have illustrated the current limits of algorithms for detecting and removing violating content, Wojcicki made clear that YouTube would continue to rely heavily on machine learning, a necessity given the scale of the problem.
YouTube said machine learning was helping its human moderators remove nearly five times as many videos as before, and that 98% of the videos removed for violent extremism are now flagged by algorithms. Wojcicki said advances in the technology allowed the site to remove nearly 70% of violent extremist content within eight hours of upload.
The statement added that YouTube was reforming its advertising policies, saying it would apply stricter criteria, conduct more manual curation and expand its team of ad reviewers. Last month, several high-profile brands suspended their YouTube and Google advertising after reports revealed that their ads were placed alongside videos full of sexually explicit and exploitative comments about children.
In March, several companies also pulled their YouTube ads after learning they were appearing alongside videos containing hate speech and extremist content.