YouTube has spent years operating like many other technology platforms: hear no evil, see no evil. That is no longer defensible.
In recent weeks, the online video giant owned by Google has been pushed to the forefront of the broader discussion about the role technology platforms play in regulating what is posted on their networks, and whom it reaches.
Facebook, Twitter and others have faced similar challenges, but none seems as daunting as YouTube's. The platform relies on an automated system to place ads against the videos that are uploaded, and it does not have good answers to the many questions being asked about its ability to effectively moderate that system.
What is clear, however, is that YouTube is evolving. A YouTube spokesperson said that the company's community guidelines have always been evolving, but with recent stories detailing the various types of questionable content on the platform, the guidelines seem to be changing almost daily.
Some of the most striking revelations have revolved around content aimed at children, even in the supposedly safe YouTube Kids app. Mashable journalist Brian Koerber delved into the world of disturbing videos featuring children's characters, like one in which Elsa, Spider-Man and other cartoon characters shoot people with automatic weapons. The Times of London reported on big-brand advertising running alongside videos of scantily clad children, accompanied by unmoderated comments that were sexual in nature.
Even more recently, BuzzFeed showed that the problem had extended to YouTube's autocomplete function. When users typed "how to have," the search tool suggested queries such as "how to have s*x with your children." This apparently happened because of a troll campaign to game the system.
Then there's the fake news. YouTube's role in hosting and spreading disinformation and propaganda has received less attention than Facebook's and Twitter's, but the platform is certainly in that mix. YouTube plays a central role in far-right media, as evidenced by an analysis by Jonathan Albright, research director of the Tow Center for Digital Journalism. Albright also found plenty of auto-generated fake news on YouTube. The company has begun taking steps to address this, including removing Russia's RT from its preferred advertising program.
RT has been widely regarded as an arm of the Russian government, tasked with pushing the country's perspective. YouTube itself has been shown to have helped build RT's channel into one of the largest news-related operations on the platform.
And then there are the extremists. YouTube was criticized for years for its willingness to host videos of extremist Muslim clerics trying to recruit people to jihadist movements. Then something changed: YouTube removed tens of thousands of videos from Anwar al-Awlaki, a well-known English-speaking cleric who was killed in a drone strike six years ago.
I'm glad they're taking these steps, but I'm also so confused about how child abuse was allowed to thrive on the platform for so long, to audiences of millions of people? Especially while political videos were demonetized in a matter of minutes ☠️ https://t.co/RBrGdCKPYD
– Laci Green (@gogreen18) November 26, 2017
The bottom line here is that none of these problems is terribly new: YouTube has been dealing with problematic content on its platform since its early days. Just what the company is supposed to do, and what it is capable of doing, has been in debate for just as long. Sometimes it has been proactive, as when it created Content ID to address the many hours of movie and TV content that violated copyright protections. Even that has not been without problems, since video creators have taken issue with how the system fails to account for fair use.
But more broadly, YouTube's position has been similar to that of other platforms, including Facebook and Twitter: its role was not to police what was uploaded unless it was so clearly over the line as to be illegal. Even then, YouTube is shielded from punishment for illegal content by the Communications Decency Act, which broadly excuses digital platforms from liability for what users upload, provided the platforms do not review the content before publication.
That meant a lot of questionable content, and communities that formed around it. John Herrman at The New York Times has covered this extensively, in particular how alt-right elements have coalesced on YouTube. Meanwhile, YouTube was busy incubating its growing stable of "creators," usually lifestyle, gaming or entertainment personalities who made the platform a destination for young people.
Whatever YouTube knew about the extremely questionable content on its platform, it seems clear the company did not seriously consider addressing it. As YouTube, Facebook, Twitter, Reddit and a variety of other platforms became global phenomena, a narrative emerged that having a certain amount of particularly nasty stuff on your platform was simply table stakes. Most platforms adopted some form of free-speech defense.
Things began to change about two years ago, and surprisingly, Reddit was the first to show signs of it. Once a bastion of anything-goes moderation, Reddit was home to some of the vilest things on the internet, and efforts to wall that content off were not working. Then, one day, Reddit changed: it banned five subreddits, and it has banned many more since.
Many other platforms have followed suit in their own ways. Twitter has had a similar evolution, recently establishing new rules that allow it to ban users affiliated with hate groups. Facebook has been slower, though it has set some new rules prohibiting the use of its advertising system to spread propaganda.
YouTube, however, has been the slowest, and it's easy to understand why. YouTube's business is placing ads alongside videos. More videos generally mean more ads and more money. Putting serious limits on what can be uploaded, or what can be monetized with ads, runs contrary to the platform's business. It's also hard to do, since moderating every YouTube video is extremely challenging.
This is the beating heart of YouTube's weirdness: it's delivering TV-style advertising $$$ with absolutely none of TV's responsibility or accountability https://t.co/On8XhnU43y
– Tom Gara (@tomgara) November 22, 2017
YouTube is leaning hard on artificial intelligence, machine learning in particular, to help solve this problem. The company's efforts to find and remove extremist videos have reportedly had some success, providing a way to surface content quickly and put it in front of human moderators for a final decision, a YouTube spokesperson said.
Those systems, however, require a lot of training and are far from being able to discern why a video of a children's character killing people would be disturbing, leaving the company dependent on users flagging videos and on human moderators. Meanwhile, the automated rules it has put in place routinely frustrate legitimate video creators, who cannot monetize their videos because they accidentally ran afoul of YouTube's system.
All this leaves YouTube in a difficult spot, perhaps an even harder one than Facebook, Twitter, Reddit and the rest. The content that threatens YouTube is so essential to its business that it is not obvious how the platform can adequately address these problems without alienating a large portion of the people who feed it video.
This is one of the main reasons YouTube has been introducing original content and a television service: there is no good solution to this existential problem. YouTube's current business requires it to host as much content as possible while making peace with the bad stuff that fuels public anger. The TV bundles and original digital content it is investing in do not have that problem, and they are monetized through subscriptions instead of ads. (YouTube does not run ads on YouTube Red; in its TV package, viewers can expect to see commercials on individual channels, as with cable.)
That takes out the You and leaves the Tube. And maybe that's where this company should go.
Editor's note: This piece has been updated to clarify the statements of a YouTube spokesperson.