We need to address harmful content on social media – but blocking networks isn’t the way to do it


Matt Hancock, the UK Health Secretary, has said that Parliament could consider blocking social networks if they can’t stop children accessing content that promotes self-harm and suicide.
Self-harm content, pro-ana sites and posts promoting suicidal ideation are problems that the internet and social media have struggled with for as long as they’ve existed.
It’s a problem that’s growing as our exposure to online content increases through the social apps on our phones. There are more ways than ever for young people to reach harmful content that is no longer confined to the darker corners of the internet, and in some cases it has tragic, devastating consequences.
Social networks are taking action, but it’s not perfect
I’ve worked for many years in child safety online, and this is an issue I care about passionately. I believe that social networks are doing the right thing in using technology, algorithms and human moderators to filter and take down as much of this content as they can. But they need to do more. The sheer volume of content makes it an uphill battle, and the networks still rely heavily on users to flag harmful or abusive content so that it can be taken down, rather than pre-moderating and preventing it from being posted at all.
The process will never be perfect, but of course, that doesn’t mean the networks shouldn’t try harder. Networks do have a duty of care, and that duty lies in continuing to improve the technology behind filtering and monitoring.
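To make that hybrid approach concrete, here is a minimal sketch of how such a pipeline might be structured, with automated scoring removing the clearest cases and borderline scores or user flags routed to a human review queue. Everything in it, from the stand-in scoring function to the thresholds, is an illustrative assumption rather than any network’s actual system.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Post:
    post_id: int
    text: str
    user_flags: int = 0  # how many users have reported this post

def risk_score(post: Post) -> float:
    """Stand-in scorer. A real network would use trained models
    combining text, image and behavioural signals."""
    risky_terms = {"example-risky-term"}  # placeholder vocabulary
    return 1.0 if set(post.text.lower().split()) & risky_terms else 0.0

def moderate(posts: List[Post],
             auto_remove_at: float = 0.9,  # assumed thresholds
             review_at: float = 0.5,
             flags_needed: int = 3) -> Tuple[List[Post], List[Post]]:
    """Split posts into automatic removals and a human review queue."""
    removed, review_queue = [], []
    for post in posts:
        score = risk_score(post)
        if score >= auto_remove_at:
            removed.append(post)       # confident enough to act automatically
        elif score >= review_at or post.user_flags >= flags_needed:
            review_queue.append(post)  # uncertain or user-flagged: a person decides
    return removed, review_queue
```

The point of the sketch is the routing rather than the scoring: proactive model scores and reactive user flags feed the same human review queue, which is roughly the hybrid approach the networks describe.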
Threatening to block access to social networks is a short-term, alarmist approach, in my view. It’s not a solution to the real problem, and it could cut off all the positive things we see from social media.
Many people depend on social media. It helps people with disabilities and illnesses stay in touch with others when they can’t leave the house. When we experience mental health challenges, social media can help us find support from people with similar experiences. Studies have found that using social media has helped teens find a community to belong to and helped them to feel included and confident.
Blocking social networks would cut people off from vital support groups, often the only network of support some users feel they have.
So what can we do?
Social networks can invest more in identifying harmful content
Of course, more can (and should) be done to make social networks safer. No child should be exposed to content that contributes to damaging their mental health.
The technology that networks are using for marketing purposes could be the solution to this problem. They are investing in image recognition and tagging technology that could help identify harmful content.
Researchers have developed algorithms that can not only detect whether someone is depressed, but can predict depression from their social media posts alone. Facebook can predict when people will get into relationships, how long a relationship may last and – slightly disturbingly – whether or not a user is in love. The same technology could also spot negative behaviour.
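As a rough illustration of the text side of this, the sketch below shows the kind of simple classifier this research builds on, using scikit-learn. The four training posts and their labels are invented placeholders; real systems are trained on large, clinically labelled datasets and combine many more signals than words alone.

```python
# A minimal sketch of text classification for risk signals.
# The training data is a placeholder, not a real dataset.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "I can't see the point in carrying on anymore",  # label 1: concerning
    "had a great day at the beach with friends",     # label 0: benign
    "nobody would notice if I was gone",             # label 1: concerning
    "excited for the new term to start",             # label 0: benign
]
labels = [1, 0, 1, 0]

# Turn posts into word/bigram features, then fit a linear classifier.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)

# The model outputs a probability rather than a verdict, so borderline
# posts can be routed to a human reviewer instead of acted on blindly.
print(model.predict_proba(["I don't want to be here anymore"])[0][1])
```

That final probability matters: it is what lets a network choose between automatic action, human review and doing nothing, which is exactly where the debate begins.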
Despite the volume of content it has to deal with, Facebook can identify some suicide threats and has called the police to intervene. This has caused debate: sometimes a post is identified too late, and there are concerns over how the network identifies and classifies the posts it targets, and whether intervention could unintentionally cause a user significant problems.
Social networks can do more to signpost charities and help groups, but of course, it’s then up to the user to contact these services.
But we should be incentivising social networks to invest more in these technologies, which could spot a problem that, if kept private, may go undetected.
It takes a village
Several countries have introduced legislation to tackle harmful online content, but Ofcom says that regulation of social networks would need to address concerns over freedom of expression.
Of course, no one wants content online that promotes suicide, self-harm or eating disorders. But it’s easy to lay all the responsibility at the door of the social networks, and I don’t think it’s that simple.
It takes a village to raise a child. We need genuine collaboration between governments, the industry, brands, charities, schools and parents to tackle this issue.
We need proper funding for teenage support services. We need to support the people who post the content, not just silence them when they are asking for help.
We need to ask why some see posting this content as acceptable. Why do popular YouTubers, like Logan Paul, feel that suicide is a good topic to exploit, for example?
It’s a societal problem, and resolving it will be a complicated and prolonged process. I’m hopeful, though, that discussing these issues openly will be the start of a real drive to address the source of the problem, not just the symptom.
