What a media platform wants least is a reputation for spreading misinformation, and YouTube is no exception. Neal Mohan, YouTube's chief product officer, has broken down how the world's largest video hosting platform tackles misinformation, a problem that requires both artificial and human intelligence.
False information is one of YouTube's biggest challenges, and the company obviously dislikes being accused of spreading it, especially when misinformation surfaces in the recommendations section. To address this, YouTube relies on three main methods.
The first is detecting and labeling new misinformation before it goes viral. This is a big challenge because when a new conspiracy theory emerges, YouTube cannot detect it automatically: it simply doesn't have enough reference material yet.
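To see why novel narratives slip through, here is a toy sketch, not YouTube's actual system: if detection relies on similarity to previously seen claims, a brand-new claim scores low against every reference and goes unflagged. The claim list, threshold, and similarity measure below are all illustrative assumptions.

```python
# Toy illustration (NOT YouTube's real pipeline): flagging a claim by its
# word-overlap similarity to a list of previously identified false claims.

def jaccard(a: str, b: str) -> float:
    """Word-level Jaccard similarity between two claims (0.0 to 1.0)."""
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

# Hypothetical reference list of already-debunked claims.
KNOWN_FALSE_CLAIMS = [
    "the moon landing was staged in a studio",
    "vaccines contain tracking microchips",
]

def looks_like_known_misinformation(claim: str, threshold: float = 0.5) -> bool:
    """Flag a claim only if it closely resembles a known false claim."""
    return any(jaccard(claim, known) >= threshold for known in KNOWN_FALSE_CLAIMS)

# A rephrasing of a known claim is caught:
print(looks_like_known_misinformation("the moon landing was staged"))  # True
# A genuinely new narrative has no reference to match, so it passes:
print(looks_like_known_misinformation("birds are secretly government drones"))  # False
```

The second call is exactly the failure mode described above: without reference material for a new theory, a similarity-based detector stays silent, which is why early detection requires human reviewers alongside automation.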
The second method addresses the sharing of YouTube content on other platforms, where YouTube cannot moderate videos directly. Instead, it can disable the Share button on a flagged video, making it harder to spread. But this, again, demands even greater precision in detecting false information.
The third is using interstitials: warning screens that tell the viewer the video they are about to watch may contain untrue information. Google already uses this approach for age-restricted videos, so why not apply it to misleading ones? The problem, again, lies in detecting such videos early enough, but YouTube is ready to accept the challenge.
Do you feel you can recognize misinformation when you see it? Some theories and fake facts are quite convincing, so an expert opinion is always a welcome addition. Have you ever believed the hype? Tell us your stories in the comments and spare others the same mistakes.