The YouTube comment section has always had a bit of a Wild West reputation. Unless content creators take the time to manage their communities, the comment section can quickly turn into a toxic mess.
The problem: paedophiles have been using YouTube’s systems to exploit children
Paedophiles have been using YouTube comments to sexualise children.
Commenters can add timestamps to a video to draw attention to a specific moment, such as a line of dialogue or a scene they want to discuss. Other viewers can click the hyperlinked timestamp and jump straight to that part of the video.
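To make the mechanic concrete, here's a minimal sketch of how a timestamp typed into a comment can become a deep link into the video. It relies on YouTube's public `t=` URL parameter (an offset in seconds); the comment text and video ID are hypothetical placeholders, and this is not YouTube's actual implementation.

```python
import re

# Matches timestamps like "1:23" or "1:02:45" in comment text.
TIMESTAMP = re.compile(r"\b(?:(\d+):)?(\d{1,2}):(\d{2})\b")

def to_seconds(match: re.Match) -> int:
    """Convert a matched h:mm:ss or m:ss timestamp into whole seconds."""
    hours, minutes, seconds = (int(g) if g else 0 for g in match.groups())
    return hours * 3600 + minutes * 60 + seconds

def link_timestamps(comment: str, video_id: str) -> list[str]:
    """Build a deep link into the video for each timestamp in a comment."""
    return [
        f"https://www.youtube.com/watch?v={video_id}&t={to_seconds(m)}s"
        for m in TIMESTAMP.finditer(comment)
    ]

# Hypothetical comment and video ID, for illustration only.
print(link_timestamps("The bit at 1:23 is worth a look", "VIDEO_ID"))
# -> ['https://www.youtube.com/watch?v=VIDEO_ID&t=83s']
```

The feature is genuinely useful, which is exactly why it's hard to remove: the same link that points viewers to a funny moment can point predators to a child.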
Paedophiles are using this function to highlight parts (or freeze frames) of innocent videos where children are caught in suggestive poses or can be seen wearing revealing clothing. They may then go on to discuss the child in graphic detail or leave predatory comments.
YouTube’s recommendation system then acts as a way for paedophiles to discover other videos with similar content, featuring children or teens they can victimise in the same way.
For example, a parent may have decided to vlog about their family life and post videos of shopping with their kids, the family on vacation, or the kids and dogs playing in their pool. Paedophiles then scan these videos for content they can sexualise. And because they have watched, liked and commented on these videos, the recommendation algorithm serves them similar ones, and the problem spreads.
When The Verge tried to replicate the problem, it found that “it took six clicks or less to find videos with predatory comments in the comment section.”
The response: YouTube acts after advertisers start to leave the platform
Disney, Nestlé and Epic Games (the makers of Fortnite) quickly pulled their advertising from YouTube after the revelations.
YouTube responded with a strong statement, calling the reported behaviour “abhorrent” and promising to do more to catch abuse more quickly.
Around a week later, YouTube published a blog post addressing the changes it was making, which included:
- Removing comments from (and the ability to comment on) videos that featured children
- Allowing a few content creators to keep their comment sections, as long as they moderated the comments on their videos
- Launching a new “classifier” that would identify and remove “predatory comments” (a sketch of the general approach follows this list)
- Taking down channels that endangered children “in any way”
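YouTube hasn't published details of its classifier, so for context, here's a minimal sketch of the general approach such systems take: train a model on comments that moderators have already labelled, then route high-scoring new comments for removal or review. The training examples, model choice and threshold below are all hypothetical illustrations (using scikit-learn), not YouTube's system.

```python
# Illustrative only: the labelled comments are hypothetical placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Comments paired with a moderator's verdict (1 = violates policy, 0 = fine).
# A production system would need vast numbers of carefully labelled examples.
comments = [
    "great video, loved the editing",
    "what camera do you use?",
    "example of a policy-violating comment",
    "another example of a policy-violating comment",
]
labels = [0, 0, 1, 1]

# TF-IDF turns each comment into word-frequency features; logistic
# regression then learns which features predict a violation.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(comments, labels)

def triage(comment: str, threshold: float = 0.5) -> str:
    """Score a new comment and route it accordingly."""
    score = model.predict_proba([comment])[0][1]
    return "send to human review" if score > threshold else "allow"

print(triage("nice vlog, subscribed!"))
```

The hard part isn't the model; it's that predatory comments on innocent videos are often innocuous in isolation, which is why automated flagging still needs human moderators behind it.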
This comes after YouTube revamped its strike system to make the consequences of violating its community guidelines clearer to YouTubers.
A technology solution to a societal problem?
Switching off comments isn’t a long-term solution to the problem. As YouTube states, many content creators value their comment sections as ways to continually engage with their viewers.
But YouTube comments aren’t the problem per se. Neither is the ability to timestamp a moment in a video. YouTube’s recommendation algorithm isn’t really the issue either.
The problem is that all these features can be manipulated by a minority for malicious purposes, and for all its technological innovation, YouTube doesn’t yet have an effective way to deal with that.
We’ve talked before about how the platforms have a duty to protect children and young people and to deal with abuse, something we’re passionate about at The Social Element. This episode highlights that there’s also a reputational risk for brands, who are unwittingly funding the content that is being used in this way. They rely on YouTube to ensure that its content (including the comments) is properly moderated. This is something the Conscious Advertising Network is addressing by encouraging brands to put pressure on the platforms to clean up.
If brands are withdrawing their advertising support, it can only be a matter of time before YouTube comes up with a technology solution to the problem.