YouTube is by far the world’s largest video sharing site—people upload hundreds of hours of video to the service every minute. Sifting through all those videos to provide recommendations has proven to be a minefield for the company, which has dealt with clickbait, repeat recommendations, and more. Now, YouTube says it’s working on improving recommendations to weed out content that doesn’t technically violate its policies but is still offensive or irresponsible.
YouTube’s community guidelines ban things like hate speech, graphic violence, nudity, and spam. However, just because a video is allowed on YouTube doesn’t mean the site should recommend it. According to the company’s latest blog post, it’s targeting this so-called “borderline” content going forward. That includes, for example, “videos promoting a phony miracle cure for a serious illness, claiming the earth is flat, or making blatantly false claims about historic events like 9/11.” YouTube says it plans to clean up recommendations with the help of machine learning and real human evaluators.
Borderline content makes up less than 1% of what you can find on YouTube, and these videos won’t be removed by the new filters. You can still find wild conspiracy theories if you go hunting for them, but people browsing videos will hopefully see less of this garbage in their recommendations. The changes will start small in the US, but YouTube plans to roll the new recommendations out globally as its systems get smarter.